Feb 24 2020
 

Every so often, I tune into a video on some form of virtualisation which perpetuates the myth that the ‘virtual cores’ you allocate to a virtual machine are equivalent to the physical cores of the host. In other words, if you create a virtual machine with two cores, that is supposedly two cores that the rest of the host cannot use.

Preposterous.

Conceptually at least, a core is a queue runner that takes a task off a queue, runs it for a while, and then sticks it back on the queue. Except for specialised workloads, those cores are very often (even mostly) idle.

To the host machine, tasks scheduled to run on a virtual core are just more tasks waiting in the queue to be performed; ignoring practicality, there is no reason why a virtual machine should not have more virtual cores than the host has physical ones.

If you take a look at the configuration of my virtual Windows machine in VirtualBox :-

You see :-

  1. I’ve allocated 8 virtual cores to this machine. I rarely use it (although it is usually running), but idle cores take very few resources to run (the command-line equivalent of this allocation is sketched after this list).
  2. VirtualBox arbitrarily limits the number of cores I can allocate to a virtual machine to the number of threads my processor has; it also shows a warning once you go past the number of physical cores my processor has, but doesn’t stop me allocating virtual cores in the “red” zone.
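
Incidentally the same allocation can be made (and checked) from the command line; a sketch only, assuming a virtual machine registered under the made-up name “W10” and powered off at the time :-

$ VBoxManage modifyvm "W10" --cpus 8
$ VBoxManage showvminfo "W10" | grep -i "number of cpus"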

Qemu, on the other hand, has no such qualms about launching a virtual machine with 64 cores – well in excess of what my physical processor has.
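
To illustrate, something along these lines starts quite happily with 64 virtual cores (a sketch only – the disk image name is made up and the options are pared down to a minimum) :-

$ qemu-system-x86_64 -enable-kvm -smp 64 -m 4096 -hda example.qcow2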

Of course you have to be sensible, but creating a virtual machine with 4 cores does not make four cores unavailable to your host machine. If a virtual machine is idle, it won’t be running much on your real cores (no machine is ever completely idle).
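
You can see this for yourself on the host; a quick sketch, assuming a running Qemu/KVM guest and the sysstat package for pidstat (adjust the process name to suit) :-

$ pidstat -p $(pgrep -d, -f qemu-system-x86_64) 5 1

An idle guest typically shows up as a small fraction of one host core, however many virtual cores it has been given.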

Apr 10 2019
 

So earlier today, I had a need to mount a disk image from a virtual machine on the host, and discovered a “new” method before remembering I’d made notes on this in the past. So I’m recording the details in the probably vain hope that I’ll remember this post in the future.

The first thing to do is to add an option to include partition support in the relevant kernel module, which I’ve done by adding a line to /etc/modprobe.d/etc-modules-parameters.conf :-

options nbd max_part=63
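
One caveat: the file is only consulted when the module is loaded, so if nbd happens to be loaded already the old setting sticks until the module is unloaded :-

# modprobe -r nbd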

The next step is to load the module :-

# modprobe nbd
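
It does no harm to check that the parameter has taken effect; module parameters generally show up under /sys :-

# cat /sys/module/nbd/parameters/max_part

This should print 63 if the option was picked up.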

The next step is to use a Qemu tool to connect the disk image to a network block device :-

# qemu-nbd -r -c /dev/nbd0 /home/mike/lib/virtual-machine-disks/W10.vdi
# ls /dev/nbd0*
/dev/nbd0  /dev/nbd0p1  /dev/nbd0p2  /dev/nbd0p3
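
If it isn’t obvious which partition is which, lsblk and blkid against the network block device give a pretty good hint (sizes, and filesystem types and labels, respectively) :-

# lsblk /dev/nbd0
# blkid /dev/nbd0p*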

And next mount the relevant partition :-

# mount -o ro /dev/nbd0p2 /mnt

All done! Except for un-mounting it and finally disconnecting the network block device :-

# umount /mnt
# ls /dev/nbd0*
/dev/nbd0  /dev/nbd0p1  /dev/nbd0p2  /dev/nbd0p3
# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
# ls /dev/nbd0*        
/dev/nbd0

The trickiest part is the qemu-nbd command (so not very tricky at all).

The “-r” option specifies that the disk image should be connected read-only, which seems sensible when you’re working with a disk image that “belongs” to another machine. Obviously if you need to write to the disk image then you should drop the “-r” (but consider cloning it or taking a snapshot first).
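
Since the image here is a VirtualBox disk, a snapshot is only one command away; a sketch, run as the user that owns the machine and assuming it is registered under the made-up name “W10” :-

$ VBoxManage snapshot "W10" take before-fiddling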

The “-c” option connects the disk image to a specific device and the “-d” option disconnects the specific device.
