Feb 24 2020

Every so often, I tune into a video on some form of virtualisation which perpetuates the myth that the ‘virtual cores’ you allocate to a virtual machine are equivalent to the physical cores that the host has. In other words, if you create a virtual machine with two cores, that is two cores that the rest of the host cannot use.

Preposterous.

Conceptually at least, a core is a queue runner that takes a task off a queue, runs that task for a while, and then sticks it back on the queue. Except for specialised workloads, those cores are very often (even mostly) idle.

To the host machine, tasks scheduled to run on a virtual core are just tasks waiting in the queue to be performed; ignoring practicality, there is no reason why there should not be more virtual cores in a virtual machine than there are physical cores in the host machine.

If you take a look at the configuration of my virtual Windows machine in VirtualBox (specifically the processor settings), you see :-

  1. I’ve allocated 8 virtual cores to this machine. I rarely use this machine (although it is usually running), but idle cores take very few resources to run (the command-line equivalent of this allocation is sketched after this list).
  2. VirtualBox arbitrarily limits the number of cores I can allocate to the virtual machine to the number of threads my processor has; it also warns at the number of cores my processor has, but doesn’t stop me from allocating virtual cores in the “red” zone.
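
Incidentally, that allocation can also be made with VBoxManage rather than the GUI; a quick sketch (the VM name here matches the one used later in these notes, and the machine needs to be powered off for the change to take effect) :-

✓ mike@pica» VBoxManage modifyvm "Windows" --cpus 8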

Qemu, on the other hand, has no such qualms about launching a virtual machine with 64 cores – well in excess of what my physical processor has.
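
For instance, something along these lines will quite happily start a guest with 64 virtual cores on a host with far fewer physical ones (the memory size and disk image name are just illustrative placeholders) :-

✓ mike@pica» qemu-system-x86_64 -enable-kvm -m 4096 -smp 64 -drive file=disk.qcow2,if=virtio

The host scheduler simply sees more runnable tasks when the guest is busy; when the guest is idle, those virtual cores cost next to nothing.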

Of course you have to be sensible, but creating a virtual machine with 4 cores does not make four cores unavailable to your host machine. If a virtual machine is idle, it won’t be running much on your real cores (no machine is ever completely idle).

Apr 10 2019

So earlier today, I had a need to mount a disk image from a virtual machine on the host, and discovered a “new” method before remembering I’d made notes on this in the past. So I’m recording the details in the probably vain hope that I’ll remember this post in the future.

The first thing to do is to add an option to include partition support in the relevant kernel module, which I’ve done by adding a line to /etc/modprobe.d/etc-modules-parameters.conf :-

options nbd max_part=63
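
As an aside, if you only need this once, the parameter can also be passed directly when loading the module rather than via the configuration file :-

# modprobe nbd max_part=63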

The next step is to load the module :-

# modprobe nbd
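
If you want to check that the parameter has taken effect, the loaded module exposes its parameters under sysfs; the value should match whatever was put in the modprobe configuration :-

# cat /sys/module/nbd/parameters/max_part
63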

The next step is to use a Qemu tool to connect a disk image to a network block device :-

# qemu-nbd -r -c /dev/nbd0 /home/mike/lib/virtual-machine-disks/W10.vdi
# ls /dev/nbd0*
/dev/nbd0  /dev/nbd0p1  /dev/nbd0p2  /dev/nbd0p3
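
If you are not sure which of those partitions is the one you want, something like lsblk will show their sizes and filesystem types, which is usually enough to pick the right one :-

# lsblk -f /dev/nbd0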

And next mount the relevant partition :-

# mount -o ro /dev/nbd0p2 /mnt

All done! Except for unmounting it and finally disconnecting the network block device :-

# umount /mnt
# ls /dev/nbd0*
/dev/nbd0  /dev/nbd0p1  /dev/nbd0p2  /dev/nbd0p3
# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
# ls /dev/nbd0*        
/dev/nbd0

The trickiest part is the qemu-nbd command (so not very tricky at all).

The “-r” option specifies that the disk image should be connected read-only, which seems to be sensible when you’re working with a disk image that “belongs” to another machine. Obviously if you need to write to the disk image then you should drop the “-r” (but consider cloning or taking a snapshot).
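
If you do need write access but want to keep the original image untouched, one option is to put a qcow2 overlay on top of it and connect that instead; a sketch (the overlay filename is made up, and bear in mind the overlay becomes stale if the virtual machine writes to the original afterwards) :-

# qemu-img create -f qcow2 -b /home/mike/lib/virtual-machine-disks/W10.vdi -F vdi W10-overlay.qcow2
# qemu-nbd -c /dev/nbd0 W10-overlay.qcow2

Writes then land in the overlay file rather than the original disk image.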

The “-c” option connects the disk image to a specific device and the “-d” option disconnects the specific device.

Apr 30 2017

Despite how long I have been running Windows in virtual machines (as far back as VMware Workstation 1.0), I have never gotten around to looking at the virtio network interface – except for naïvely turning it on once, finding it didn’t work, and turning it off – so I decided to have a look at it. I was prompted to do this by a suggestion that emulating the NIC hardware, as opposed to simply using a virtual communications channel to the host, would hurt network performance. Good job I chose a long weekend, because I ran into a few issues :-

  • Getting appropriate test tools took a while because most of the tools I know of are very old; I ended up using iperf2 on both the main Linux host and the Windows 10 guest (within the “Windows Subsystem for Linux”).
  • The “stable” virtio (also called “NetKVM”) drivers didn’t work. Specifically, they could send packets but not receive them (judging from the DORA conversation, which was more of a DODO). I installed the “latest” drivers from https://fedoraproject.org/wiki/Windows_Virtio_Drivers. Note to late readers: this was as of 2017-04-30; different versions may offer different results.
  • Upgrading my ancient Debian Jessie kernel to 4.9 on the off-chance it was a kernel bug turned into a bit of an exercise, what with ZFS disappearing after the upgrade; sorting out the package dependencies to get it re-installed was “interesting” (for small values of “interesting”, of course). No data loss though.

I ran two tests :-

  1. A reliability test: sudo nping --tcp -p 445 --count 200 --data-len 1280 ${ip of windows guest}, to judge how reliable the network connection was.
  2. A throughput test with iperf: sudo iperf -s -p 50001 on the Linux host, and sudo iperf -p 50001 -c ${ip of Linux host} on the Windows guest (from within the Ubuntu-based environment).
Device                                              nping result   iperf result
Windows guest (virtual Intel PRO/1000 MT Desktop)   1 lost         416 Mbits/sec
Windows guest (virtio)                              0 lost         164 Mbits/sec
CuBox running ARM Linux                             n/a            425 Mbits/sec

Which is not the result I was expecting. And yes I did repeat the tests a number of times (I’ve cheated and chosen the best numbers for the above table), and no I did not confuse which NIC was configured at the time of the tests nor did I get the tests mixed up. And to those who claim that the use of the Ubuntu environment screwed things up, that appears not to be the case – I repeated the test with a Windows compiled version of iperf with much the same results.

So it seems that, despite common sense indicating that NIC “hardware” custom-designed for a virtual environment should perform better than an emulation of real hardware, the actual result in this case was the other way around; the exception being the nping result, which showed the loss of a single packet with the emulated hardware NIC.

May 10 2015

Whilst messing around with malware, memory dumps, and memory forensics, it is kind of handy to be able to use VirtualBox. Particularly when that is your virtual machine "weapon of choice".

According to the documentation, Volatility can read core dumps from VirtualBox. Once you realise that you need to specify a “profile” to read the result, this is quite simple :-

✓ mike@pica» VBoxManage list vms | grep Windows
"Windows" {9cefc95e-eaf2-4052-b466-cb665c73a36a}
✓ mike@pica» VBoxManage debugvm "Windows" dumpguestcore --filename ~/windows.elf
✓ mike@pica» ls -l ~/windows.elf
-rw------- 1 mike mike 2.1G May 10 14:11 /home/mike/windows.elf

If you specify the right profile option, then Volatility can make use of this :-

✓ mike@pica» volatility -f ~/windows.elf --profile=Win7SP1x86 cmdline          
Volatility Foundation Volatility Framework 2.4
************************************************************************
System pid:      4
************************************************************************
smss.exe pid:    260
Command line : \SystemRoot\System32\smss.exe
{Long list of processes removed}
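
If you do not know which profile matches the guest in the first place, the imageinfo plugin will suggest some candidates to try (it can take a while to run against a large dump) :-

✓ mike@pica» volatility -f ~/windows.elf imageinfo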

All fairly obvious really, but if you do not specify the profile, volatility will present you with an error indicating that it does not understand the format of the memory dump, which is a bit confusing :-

✓ mike@pica» volatility -f ~/windows.elf cmdline                     
Volatility Foundation Volatility Framework 2.4
No suitable address space mapping found
Tried to open image as:
{Long list of memory image formats}

At least to someone as thick as me! Yes it took me ages to get this figured out.