Oct 10 2020
 

One of the big names in the open-source world – Eric Raymond – has declared that Windows will soon be effectively a Linux distribution. That seems like a ridiculous notion, except that technically it might make a lot of sense.

How?

It seems impossible for Microsoft to replace Windows with Linux, but actually it could be done. Windows itself consists of a bunch of software applications which call the Windows APIs, which in turn make calls to the legacy NT kernel. If all that software is written cleanly (it won’t be, but bear with me), it should be possible to modify the Linux kernel and/or the Windows API layer so that Windows software runs natively.

Impossible? Nope – it has already been done to a certain extent – Wine and Proton allow a considerable amount of Windows software (and games!) to run under Linux.
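
As a rough illustration of the same idea in practice, this is all it takes to run a Windows binary under Wine on a typical Linux box – the package name and the executable here are placeholder assumptions and will vary :-

  # Install Wine (Debian/Ubuntu-style package name assumed)
  sudo apt install wine

  # Wine translates the program's Win32 API calls into Linux system calls,
  # so the .exe runs without a Windows kernel underneath it
  wine ./SomeWindowsProgram.exe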

Why?

So it’s not impossible, but surely it is a lot of work. So why?

Microsoft has a bit of a problem – they don’t make a huge amount of money selling the Windows operating system, and maintaining it is hugely expensive. All those security fixes, all those bug fixes, and all those new features they want to introduce.

Now most of this is done to the “userland” rather than the kernel itself, but the kernel does still need to be maintained. But what if you could use the Linux kernel and get some level of maintenance supplied by those not employed by Microsoft?

Would that save Microsoft money? It seems quite possible, and you can bet someone in Microsoft has estimated whether it would or not.

Will It Happen?

There are those who point to certain actions by Microsoft – the Windows Subsystem for Linux, the Edge browser for Linux, the rumour of an Office build under Linux, etc. – as indicators that Microsoft is planning this.

I think they’re wrong, to the extent that those actions don’t say whether Microsoft is planning to make Windows a Linux distribution or not. There are plenty of reasons why Microsoft is releasing Linux software, not least because they will almost certainly have developers who believe that porting software is a good way of finding bugs.

The real answer is that the only people who know are inside Microsoft.

The Join
Apr 30 2017
 

Despite how long I have been running Windows in virtual machines (as far back as VMware Workstation 1.0), I have never gotten around to looking at the virtio network interface – except for naïvely turning it on once, finding it didn’t work, and turning it off – so I decided to have a look at it. I was prompted to do this by a suggestion that emulating NIC hardware, as opposed to simply using a virtual communications channel to the host, would hurt network performance. Good job I chose a long weekend, because I ran into a few issues :-

  • Getting appropriate test tools took a while because most of the tools I know of are very old; I ended up using iperf2 on both the Linux main host and the Windows 10 guest (within the “Windows Subsystem for Linux”).
  • The “stable” virtio (also called “NetKVM”) drivers didn’t work. Specifically, they could send packets but not receive them – judging from the DHCP exchange, what should have been a DORA (Discover, Offer, Request, Acknowledge) was more of a DODO, with the guest never seeing the Offer. I installed the “latest” drivers from https://fedoraproject.org/wiki/Windows_Virtio_Drivers. Note to late readers: this was as of 2017-04-30; different versions may offer different results.
  • Upgrading my ancient Debian Jessie kernel to 4.9 on the off-chance it was a kernel bug turned into a bit of an exercise, what with ZFS disappearing after the upgrade; sorting out the package dependencies to get it re-installed was “interesting” (for small values of “interesting”, of course). No data loss though.
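
For reference, the two guest configurations compared below differ only in which NIC model the hypervisor presents. A rough sketch of the distinction using plain QEMU/KVM follows – the image name, memory size and netdev settings are illustrative assumptions, and a libvirt-managed guest expresses the same choice through the <model> element of its domain XML :-

  # Emulated NIC: the guest sees an Intel PRO/1000 and uses Intel's own driver
  qemu-system-x86_64 -enable-kvm -m 4096 -drive file=windows10.img \
      -netdev user,id=net0 -device e1000,netdev=net0

  # Paravirtualised NIC: the guest needs the virtio (NetKVM) driver instead
  qemu-system-x86_64 -enable-kvm -m 4096 -drive file=windows10.img \
      -netdev user,id=net0 -device virtio-net-pci,netdev=net0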

I ran two tests :-

  1. sudo nping --tcp -p 445 --count 200 --data-len 1280 ${ip of windows guest} – to judge how reliable the network connection was.
  2. An iperf throughput test, run at both ends :-
     • On the Linux host: sudo iperf -s -p 50001
     • On the Windows guest (from within the Ubuntu-based environment): sudo iperf -p 50001 -c ${ip of Linux host}

Device                                              nping result   iperf result
Windows guest (virtual Intel Pro 1000 MT Desktop)   1 lost         416 Mbits/sec
Windows guest (virtio)                              0 lost         164 Mbits/sec
CuBox running ARM Linux                             n/a            425 Mbits/sec

Which is not the result I was expecting. And yes, I did repeat the tests a number of times (I’ve cheated and chosen the best numbers for the above table), and no, I did not confuse which NIC was configured at the time of the tests, nor did I get the tests mixed up. And to those who claim that the use of the Ubuntu environment screwed things up, that appears not to be the case – I repeated the test with a Windows-compiled version of iperf with much the same results.
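
For anyone wanting to repeat the runs the same way, a minimal sketch of the client side follows – the host address and run count are illustrative assumptions, and iperf here is iperf2 :-

  # Run the iperf2 client ten times against the host; the table above
  # quotes the best of the runs
  for i in $(seq 1 10); do
      iperf -c 192.0.2.1 -p 50001 -t 10
  done | tee iperf-runs.log

  # Pull the bandwidth figures back out afterwards
  grep 'Mbits/sec' iperf-runs.log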

So it seems that, despite common sense suggesting that NIC “hardware” custom-designed for a virtual environment should perform better than an emulation of real hardware, the actual result in this case was the other way around – except for the nping result, which shows the loss of a single packet with the emulated NIC.