Feb 24 2020
 

Every so often, I tune into a video on some form of virtualisation which perpetuates the myth that ‘virtual cores’ allocated to a virtual machine are equivalent to physical cores on the host. In other words, if you create a virtual machine with two cores, those are two cores that the rest of the host cannot use.

Preposterous.

Conceptually at least, a core is a queue runner that takes a task from a queue, runs that task for a while, and then sticks it back on the queue. Except for specialised workloads, those cores are very often (even mostly) idle.

To the host machine, tasks scheduled to run on a virtual core are just tasks to be performed waiting in the queue; ignoring practicality, there is no reason why there should not be more virtual cores in a virtual machine than there are in the host machine.
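As a concrete sketch of that queue-runner idea, here is a toy scheduler in Python (my own illustration, not how any real hypervisor or kernel is written): a handful of worker loops play the part of cores, each pulling a task off a shared run queue, running it for one time slice, and putting it back. Note that nothing ties the number of workers to `os.cpu_count()` – we can happily start four times as many “cores” as the host has.

```python
import os
import queue
import threading

run_queue = queue.Queue()

# Ten toy tasks, each needing five time slices of "work".
for name in range(10):
    run_queue.put({"name": name, "slices_left": 5})

def core():
    """A 'core' is just a loop: take a task, run a slice, requeue it."""
    while True:
        try:
            task = run_queue.get_nowait()
        except queue.Empty:
            return  # nothing runnable: this core sits idle
        task["slices_left"] -= 1  # run the task for one time slice
        if task["slices_left"] > 0:
            run_queue.put(task)   # not finished: back on the queue

# Start far more "virtual cores" than the host physically has.
cores = [threading.Thread(target=core) for _ in range(4 * (os.cpu_count() or 1))]
for c in cores:
    c.start()
for c in cores:
    c.join()
print("all tasks finished")
```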

If you take a look at the configuration of my virtual Windows machine in VirtualBox, you see :-

  1. I’ve allocated 8 virtual cores to this machine. I rarely use this machine (although it is usually running), but it takes very few resources to run idle cores.
  2. VirtualBox arbitrarily limits the number of cores I can allocate to the virtual machine to the number of threads my processor has; it also shows a warning once you pass the number of cores my processor has, but doesn’t stop me allocating virtual cores in the “red” zone.

Qemu on the other hand has no such qualms about launching a virtual machine with 64 cores – well in excess of what my physical processor has.
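For instance, something along these lines (a sketch – it assumes qemu-system-x86_64 is installed, and disk.img is a hypothetical guest disk image) boots a guest with 64 virtual cores regardless of how many the host has:

```python
import os
import subprocess

print(f"host reports {os.cpu_count()} logical CPUs")

# Launch a guest with 64 virtual cores; QEMU raises no objection.
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",        # use hardware virtualisation if available
    "-smp", "64",         # 64 virtual cores
    "-m", "4096",         # 4 GiB of guest memory
    "-hda", "disk.img",   # hypothetical guest disk image
])
```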

Of course you have to be sensible, but creating a virtual machine with four cores does not make four cores unavailable to your host machine. If a virtual machine is idle, it won’t be running much (no machine is ever completely idle) on your real cores.

Apr 19 2010
 

One of the irritating things about reading or listening to people go on about CPUs or processors is how inaccurate they can be. In particular, the complexity of modern processors allows for multiple “virtual processors”, which many people seem to think are equivalent to each other. Not so! Some are and some are not.

In the old days you would have a socket on the motherboard of a computer into which you would fit a rectangular or square thing with lots of sharp legs on the underside (the chip) which was the processor. And yes I’m totally ignoring the period before single-chip processors when a chip might contain only a small part of a processor! One socket, one processor, one core (although you rarely if ever heard that), and one thread.

Although multi-threaded processors came before multiple cores, we will look at the latter first.

One of the disadvantages of single-processor computers was that, for servers, they frequently did not have enough processor power. The solution was obvious: add more sockets so you could have more than one processor, although making that solution work was very difficult. Once multiprocessor servers came into use, their cost slowly fell until they started being used at the high end of workstations, where it became obvious that a multiprocessor machine was helpful in getting a single user’s work done, even though it was a rare piece of software that was written to take advantage of multiple processors.

At the same time, single-core processors were becoming faster and hotter, and it slowly became obvious that the old way of making computers faster was simply not feasible over the long term. Those who looked into the future could see that if things continued as they were going, computers would rapidly become too hot to run easily. There was an almost collective decision that putting more than one processor onto a single chip was the way to make future computers “faster”, although there remains the problem of making software utilise those multiple cores properly.

Today you are most likely to encounter a multi-core chip going into that socket in your computer. This is more or less the same as the old multi-socket workstations and servers. Each “core” on a multi-core chip is roughly equivalent to an old single-core processor chip. If you have two cores inside your computer, your operating system will see (and hopefully use) each as a separate processor.

Now we come to threads, and this is where it becomes even trickier. Inside a single-core processor there are a number of different units used to run your software, many of which are idle at any given moment. Each piece of software is made up of millions of instructions, and the processor runs a single instruction at a time. When a processor runs a single instruction, it has to go through a number of different stages, each of which uses different units. At any time during the execution of an instruction, some of the units will be idle.

A variety of different strategies were tried to utilise these idle units, but the easiest to understand was one of the more complex to implement. This was to make a single-core processor pretend to be a multi-core processor and run more than one (usually two) pieces of software in what became known as “threads”. However, whilst a simplistic piece of software may identify these threads as “virtual CPUs”, they are not quite the same – a processor with two threads will be slower than a processor with two cores (and no threads).
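On Linux you can see the distinction for yourself: each thread shows up as its own “processor” entry in /proc/cpuinfo, but counting unique (physical id, core id) pairs reveals how many real cores sit underneath. A rough sketch (Linux on x86 only):

```python
# Count logical processors versus physical cores from /proc/cpuinfo.
logical = 0
cores = set()
physical_id = None
with open("/proc/cpuinfo") as f:
    for line in f:
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":
            logical += 1
        elif key == "physical id":   # which socket this entry belongs to
            physical_id = value
        elif key == "core id":       # which core within that socket
            cores.add((physical_id, value))

print(f"logical processors (threads): {logical}")
print(f"physical cores:               {len(cores)}")
```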

The “problem” with threads is that when two pieces of software attempt to run on the same processor, they will each try to grab a selection of units to use. Which units they need changes over time of course, but there is still a strong possibility that the two threads will both try to grab the same unit – and one will have to wait.
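A toy simulation makes the effect visible. This is entirely made up (three named units, instructions picked at random) and is not a model of any real processor: with shared units one thread regularly stalls, while two private sets of units finish in the minimum number of cycles.

```python
import random

UNITS = ["alu", "load_store", "fpu"]

def run(threads=2, shared_units=True, instructions=1000, seed=42):
    rng = random.Random(seed)
    # Each thread is a list of instructions, each needing one unit.
    work = [[rng.choice(UNITS) for _ in range(instructions)]
            for _ in range(threads)]
    done = [0] * threads
    cycles = 0
    while any(d < instructions for d in done):
        cycles += 1
        busy = set()
        for t in range(threads):
            if done[t] >= instructions:
                continue
            unit = work[t][done[t]]
            if shared_units and unit in busy:
                continue          # unit already claimed this cycle: stall
            busy.add(unit)
            done[t] += 1          # instruction issued this cycle
    return cycles

print("two threads sharing one core's units:", run(shared_units=True), "cycles")
print("two full cores, private units:       ", run(shared_units=False), "cycles")
```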

In many cases the performance gap between threads and cores does not make a noticeable difference. Almost all software spends far more time waiting for things to happen (for a bit of a file to come off a disk drive, for a user to press a key, etc.) than actually doing anything. However there are some software workloads that are significantly affected by the minor performance hit of threads – sufficiently so that it is even possible to improve performance by turning threads off!
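On modern Linux kernels (far newer than anything around when multi-threaded processors first appeared) that switch is exposed under sysfs; a quick sketch to check whether threads (SMT) are enabled:

```python
# Modern Linux kernels expose SMT state under sysfs; as root you can
# disable it entirely with: echo off > /sys/devices/system/cpu/smt/control
with open("/sys/devices/system/cpu/smt/active") as f:
    print("SMT (threads) enabled:", f.read().strip() == "1")
```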

This is of course an overly simplistic look at the issue, but it may well be enough to convince some that threads and cores are not equivalent. A processor with 8 cores, each of which can run 4 threads, is not equivalent to a processor with 32 cores. More sophisticated operating systems may well schedule software so that unused cores are preferred over running software in a thread on a core that is already busy.