Apr 30 2017
 

Despite how long I have been running Windows in virtual machines (as far back as VMware Workstation 1.0), I have never gotten around to looking at the virtio network interface – except for naïvely turning it on once, finding it didn’t work, and turning it off – so I decided to have a look at it. I was prompted to do this by a suggestion that emulating the NIC hardware, as opposed to simply using a virtual communications channel to the host, would hurt network performance. Good job I chose a long weekend, because I ran into a few issues :-

  • Getting appropriate test tools took a while because most of the tools I know of are very old; I ended up using iperf2 on both the main Linux host and the Windows 10 guest (within the “Windows Subsystem for Linux” Ubuntu-based environment).
  • The “stable” virtio (also called “NetKVM”) drivers didn’t work. Specifically they could send packets but not receive them (judging from the DHCP conversation, the DORA was more of a DODO). I installed the “latest” drivers from https://fedoraproject.org/wiki/Windows_Virtio_Drivers. Note to late readers: this was as of 2017-04-30; different versions may offer different results.
  • Upgrading my ancient Debian Jessie kernel to 4.9 on the off-chance it was a kernel bug turned into a bit of an exercise, what with ZFS disappearing after the upgrade; sorting out the package dependencies to get it re-installed was “interesting” (for small values of “interesting”, of course). No data loss though.

I ran two tests :-

  1. sudo nping --tcp -p 445 --count 200 --data-length 1280 ${ip of windows guest} – to judge how reliable the network connection was.
  2. An iperf throughput test – on the Linux host: sudo iperf -s -p 50001; and on the Windows guest (from within the Ubuntu-based environment): sudo iperf -p 50001 -c ${ip of Linux host}
Device                                              nping result   iperf result
Windows guest (virtual Intel Pro 1000 MT Desktop)   1 lost         416 Mbits/sec
Windows guest (virtio)                              0 lost         164 Mbits/sec
CuBox running ARM Linux                             n/a            425 Mbits/sec

Which is not the result I was expecting. And yes, I did repeat the tests a number of times (I’ve cheated and chosen the best numbers for the above table); no, I did not confuse which NIC was configured at the time of the tests, nor did I get the tests mixed up. And to those who claim that the use of the Ubuntu environment screwed things up, that appears not to be the case – I repeated the test with a Windows-compiled version of iperf with much the same results.

So it seems that despite common sense indicating that NIC “hardware” custom-designed for a virtual environment should perform better than an emulation of real hardware, the actual result in this case was the other way around – except for the nping result, which shows the loss of a single packet with the emulated hardware NIC.

Oct 03 2015
 

One thing that has always puzzled me about Linux Containers is why it is apparently necessary to configure the network address in two places – the container configuration, and the operating system configuration. The short answer is that it isn't.

If you configure network addresses statically within the container configuration :-

» grep net /var/lib/lxc/mango/config 
# networking
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.0.0.35/16
lxc.network.ipv4.gateway = 10.0.0.1
lxc.network.ipv6 =         2001:0db8:ca2c:dead:0000:0000:0000:000a/64
lxc.network.ipv6.gateway = 2001:0db8:ca2c:dead:0000:0000:0000:0001

Then the configuration within the container's operating system can simply be :-

» cat /var/lib/lxc/mango/rootfs/etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
iface eth0 inet6 manual

And that works fine.

May 22 2015
 

So on my upgrade from Wheezy to Jessie, I found myself (amongst other issues) looking at a graphical interface where the mouse worked fine, but no mouse pointer was visible. After trying a few other things, it turned out that :-

gsettings set org.gnome.settings-daemon.plugins.cursor active false

Did the trick.

Of course that tip came from somewhere else, but as it worked for me, it’s worth making a note of.

May 02 2015
 

I have recently been upgrading my Linux containers from Debian wheezy to jessie, and each time have encountered a problem preventing the container from booting. Or rather as it turns out, preventing the equivalent of init from starting any daemons. Which is systemd of course.

Now this is not some addition to the Great Systemd Debate (although my contribution to that debate may well arrive someday), but a simple fix, or at this stage a workaround (to use the dreaded ITIL phrase).

The fix is to re-install the traditional System V init package, replacing the new systemd package. This can be done during the upgrade by running the following at the end of the usual process :-

apt-get install sysvinit-core

Of course you will probably be reading this after you have encountered the problem. There are probably many ways of dealing with the situation after you have tried rebooting and encountered this issue, but my choice is to run the following commands from what I tend to call the "global container" :-

chroot ${container root filesystem}
apt-get install sysvinit-core

As mentioned before, this is not a fix. And indeed the problem may be my own fault – perhaps it doesn't help having the "global container" still running wheezy. Perhaps there are some instructions in the Debian upgrade manual that detail some extra step you should run. And of course by switching back to System V init, we are missing out on all of the systemd fun.

Dec 22 2014
 

This is a series of working notes on the Yubikey, which is an interesting device used to supplement passwords to make two-factor authentication easier. It is essentially a hardware security token that pretends to your computer to be a keyboard and enters a one-time password that can be used to verify your identity – much like a password, but much more secure.

Well perhaps "easier" only if someone does all the configuration for you, although I am inclined to look a bit deeper into such things for my own amusement. My own key is a Yubikey NEO, but much of what follows also applies to the other Yubikey models.

Observations

This is the spot for observations on using the Yubikey over time.

  1. For some reason the Yubikey doesn't always "light up" on my workstation at work. It works fine at home – the green light always turns on ready for a key press – but at work it often seems to flicker and stay out. Not sure what causes this, but it always seems to be persistent when you really need to use it! 

Configuration

… is to some extent unnecessary, but under Linux there are three bits of software that can be installed to configure additional features of the Yubikey :-

  1. The library: https://developers.yubico.com/libykneomgr/
  2. The command-line tool: https://developers.yubico.com/yubikey-personalization/
  3. The GUI: https://developers.yubico.com/yubikey-personalization-gui/

All three build easily from the instructions given. Just make sure to remember to copy the udev rules from yubikey-personalization to /etc/udev/rules.d/ and run udevadm trigger to enable them. This will make sure you can access your yubikey as a console user, so you don't have to become root.
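As a rough sketch of that step (run from the yubikey-personalization source directory; the rules filenames vary between releases, so check what is actually shipped) :-

cp 69-yubikey.rules 70-yubikey.rules /etc/udev/rules.d/
udevadm control --reload-rules
udevadm trigger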

Enabling Linux Authentication

This was all done with a Linux container (LXC), so it could be relatively easily thrown away and restarted. The first step was to install the relevant PAM module :-

# apt-get install libpam-yubico

This pulls in a ton of other required packages.

The next step is to grab the unchanging part of your Yubikey token. This is the first 12 characters of what you get when you activate it. Whilst you have it to hand, now would be a good time to create the mapping file – /etc/yubikey-mappings :-

# Yubikey ID mappings
# Format:
#       user-id:yubikey-id:yubikey-id:...
# (But usually only one)
user-id:ccccccsomeid
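If you would rather not count characters by hand, a quick shell sketch (bash) does it – press the Yubikey at the prompt and the first 12 characters of the OTP are the unchanging public ID :-

read -r otp
echo "${otp:0:12}"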

The next step is to add a little something to one of the pam files. For testing (assuming you have console access), the relevant file might be /etc/pam.d/sshd, but once you have things working, /etc/pam.d/common-auth might be a better choice. Right at the top of the file add :-

auth       sufficient   pam_yubico.so debug id=16 authfile=/etc/yubikey-mappings
#       Added for Yubikey authentication.

Because these things always have problems when you first try them, it makes sense to set up the debugging log :-

touch /var/run/pam-debug.log
chmod a+w /var/run/pam-debug.log

At this point, assuming everything works as expected :-

  1. You will be able to authenticate over ssh using either your Yubikey or your password.
  2. This assumes your server is able to communicate with the Yubi Cloud.

There are further improvements to be made … and we'll get to those shortly.

But That's Not Two-Factor Authentication!

Indeed not, so we'll fix that right now.

Firstly remove the line we previously added to /etc/pam.d/sshd; because of the way that Debian configures pam, it is less disruptive (i.e. fewer changes) to make the change to /etc/pam.d/common-auth :-

auth       requisite     pam_yubico.so id=16 debug authfile=/etc/yubikey-mappings
#       Yubikey configuration added.
auth    [success=1 default=ignore]      pam_unix.so nullok_secure use_first_pass

But before restarting sshd (you have been doing that haven't you?), you will need to add a Yubikey ID to /etc/yubikey-mappings for the root user.

At this point, you will only be able to authenticate if you enter your username, followed by both your Unix password and a press of your Yubikey at the password prompt. Entering both at the same prompt is a little weird, especially when you consider that there are no indications anywhere that Yubikey authentication is required.

But we can fix that. First of all, one small change to common-auth – remove the use_first_pass phrase.
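With use_first_pass gone, the relevant lines in /etc/pam.d/common-auth look like :-

auth       requisite     pam_yubico.so id=16 debug authfile=/etc/yubikey-mappings
#       Yubikey configuration added.
auth    [success=1 default=ignore]      pam_unix.so nullok_secure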

Next edit the file /etc/ssh/sshd_config, find the ChallengeResponseAuthentication setting and set it to "yes" :-

ChallengeResponseAuthentication yes

And after a quick reboot, the login process works in a sensible way :-

» ssh chagers
Yubikey for `mike': (Press YubiKey)
Password: (Enter Unix password)
Linux chagers 3.14-0.bpo.1-amd64 #1 SMP Debian 3.14.12-1~bpo70+1 (2014-07-13) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Dec 31 15:37:05 2014
...
Aug 02 2014
 

One of the questions I always ask myself when setting up a resilient server, is just how well will it cope with a disk failure? Ultimately you cannot answer that without trying it out.

But as practice (and to determine whether it mostly works), it’s perfectly sensible to try it out on a virtual machine.

Debian Installation

If you are looking for full instructions on installing Debian, this is not the place to look. I configured the virtual machine with 2GBytes of memory, an LsiLogic SAS controller with two attached disks each of 64GBytes.

The installation process was much as per normal (I unselected “Desktop” to save time), but the storage was somewhat different :-

  • Manual partitioning method
  • Create an empty partition on both disks
  • Select Software RAID
  • Create an MD device
  • RAID1
  • And put both disks into the RAID
  • Configure LVM
  • Create a Volume Group (“sys”)
  • Select md0 for the volume group device
  • Create logical volumes (boot: 512MB, root: 16GB, var: 8GB, home: 512M (it’s a server))
  • In the partitioning manager select each Logical Volume in turn and specify the file system parameters.

You will notice that no swap was created – this was a mistake that I’m in the unfortunate habit of making! However for a test, it wasn’t a problem and with LVM it is possible to create swap after the installation.
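For what it's worth, a minimal sketch of adding swap afterwards (assuming the “sys” volume group from above and a 2GB size picked out of thin air) :-

lvcreate -L 2G -n swap sys
mkswap /dev/sys/swap
swapon /dev/sys/swap

Plus the usual /etc/fstab entry to make it permanent.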

Post Installation

After the server has booted, it is possible to check the second hard disk for the presence of grub in the MBR (dd if=/dev/sdb of=/var/tmp/sdb.boot bs=1M count=1, and then run strings on the result). It turns out that nothing is installed in the MBR of the second disk by default. Which would make booting in a degraded environment an interesting challenge (i.e. you’ll have to find a rescue CD and boot off the relevant hard disk).
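Spelled out, the check looks something like this (the grep for “grub” is just a convenient way of spotting the boot loader's strings) :-

dd if=/dev/sdb of=/var/tmp/sdb.boot bs=1M count=1
strings /var/tmp/sdb.boot | grep -i grub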

However this can be fixed by installing grub onto the second hard disk: grub-install /dev/sdb

Testing Resilience

But what happens when you lose a disk? Now is the time to test. Shut down the virtual machine and remove the second hard disk – leaving the first hard disk in place does not provide a full test.

If your first attempt at booting afterwards results in a failure to acquire a grub menu, then either you have failed to run grub-install as detailed above (guess what mistake I made?), or your BIOS settings don’t permit the computer to boot off anything other than the first hard disk.

However, in my second attempt, the server booted normally with the addition of a few messages that indicate that there is just one disk making up the mirrored pair.

Summary

  1. Yes, you can put /boot onto an LVM file system that sits on mirrored disks. That hasn’t always been the case.
  2. It is still necessary to run grub-install to put Grub onto the MBR of the second hard disk.
  3. It works.
Jul 29 2013
 

… which is of course massive overkill. But fun. It should increase the raw bandwidth available between the two machines from 1Gbps to 20Gbps (with one link) and 40Gbps with both links bonded.

It was a bit of a surprise to me when I looked around at prices of second-hand kit to realise that InfiniBand was so much cheaper to acquire than Fibre Channel; the kit I acquired cost less than £100 all in, whereas FC kit would be in the region of £1,000, and InfiniBand is generally quicker. There is of course 16Gb FC and 10Gb InfiniBand, but that is hardly comparing like with like.

So what is this overkill for? Networking of course. I’ve acquired two HP InfiniBand dual link cards which means I can connect my workstation to my server :-

[Diagram: InfiniBand Network]

Using dual links is of course overkill on top of overkill, but given that these cards have dual links, why not use them? And it does give a couple of experiments to try later. To prepare in advance, the following network addresses will be used :-

Server   Link Number   IPv4 Address   IPv6 Address
A        1             10.255.0.1     AAISP:d00d::1
A        2             10.255.1.1     AAISP:d00f::1
B        1             10.255.0.254   AAISP:d00d::2
B        2             10.255.1.254   AAISP:d00f::2

Yes I have cheated for the IPv6 addresses! The first step is to configure each “server” … one is running Debian Linux, and the other is running FreeBSD.

Configuring Linux

This was subject to much delay whilst I believed that I had a problem with the InfiniBand card, but putting the card into a new desktop machine caused it to spring back to life. Either some sort of incompatibility with my old desktop (which was quite old), or some sort of problem with the BIOS settings.

Inserting the card should load the core module (mlx4_core) automatically, and spit out messages similar to the following :-

[    3.678189] mlx4_core 0000:07:00.0: irq 108 for MSI/MSI-X
[    3.678195] mlx4_core 0000:07:00.0: irq 109 for MSI/MSI-X
[    3.678199] mlx4_core 0000:07:00.0: irq 110 for MSI/MSI-X
[    3.678204] mlx4_core 0000:07:00.0: irq 111 for MSI/MSI-X
[    3.678208] mlx4_core 0000:07:00.0: irq 112 for MSI/MSI-X
[    3.678212] mlx4_core 0000:07:00.0: irq 113 for MSI/MSI-X
[    3.678216] mlx4_core 0000:07:00.0: irq 114 for MSI/MSI-X
[    3.678220] mlx4_core 0000:07:00.0: irq 115 for MSI/MSI-X
[    3.678223] mlx4_core 0000:07:00.0: irq 116 for MSI/MSI-X
[    3.678228] mlx4_core 0000:07:00.0: irq 117 for MSI/MSI-X
[    3.678232] mlx4_core 0000:07:00.0: irq 118 for MSI/MSI-X
[    3.678236] mlx4_core 0000:07:00.0: irq 119 for MSI/MSI-X
[    3.678239] mlx4_core 0000:07:00.0: irq 120 for MSI/MSI-X
[    3.678243] mlx4_core 0000:07:00.0: irq 121 for MSI/MSI-X
[    3.678247] mlx4_core 0000:07:00.0: irq 122 for MSI/MSI-X
[    3.678250] mlx4_core 0000:07:00.0: irq 123 for MSI/MSI-X
[    3.678254] mlx4_core 0000:07:00.0: irq 124 for MSI/MSI-X
[    3.678259] mlx4_core 0000:07:00.0: irq 125 for MSI/MSI-X
[    3.678263] mlx4_core 0000:07:00.0: irq 126 for MSI/MSI-X
[    3.678267] mlx4_core 0000:07:00.0: irq 127 for MSI/MSI-X
[    3.678271] mlx4_core 0000:07:00.0: irq 128 for MSI/MSI-X
[    3.678275] mlx4_core 0000:07:00.0: irq 129 for MSI/MSI-X

This is just the core driver; at this point additional modules are needed to do anything useful. You can manually load the modules with modprobe but sooner or later it is better to make sure they’re loaded automatically by adding their names to /etc/modules. The modules you want to load are :-

  1. mlx4_ib
  2. ib_umad
  3. ib_uverbs
  4. ib_ipoib
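In other words, /etc/modules ends up containing something like :-

# IP-over-InfiniBand modules
mlx4_ib
ib_umad
ib_uverbs
ib_ipoib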

This is a minimal set necessary for networking (“IP”) rather than additional features such as SCSI; it's generally better to start with a minimal set of features. At this point, it is a good idea to reboot to verify that things are getting closer. After a reboot, you should have one or more new network interfaces listed by ifconfig :-

ib0       Link encap:UNSPEC  HWaddr 80-00-00-48-FE-80-00-00-00-00-00-00-00-00-00-00  
          UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:256 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ib1       Link encap:UNSPEC  HWaddr 80-00-00-49-FE-80-00-00-00-00-00-00-00-00-00-00  
          UP BROADCAST RUNNING MULTICAST  MTU:2044  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:256 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Despite the appearance, we still have quite a way to go yet. The next step is to install some additional packages: ibutils, infiniband-diags, and opensm. The last package is for a subnet manager which is unnecessary if you have an InfiniBand switch (but I don’t). The first step is to get opensm up and running. Edit /etc/default/opensm and change the PORTS variable to “ALL” (unless you want to restrict the managed ports, and make things more complicated). And start opensm: /etc/init.d/opensm start; update-rc.d opensm defaults.
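Spelled out, the relevant line in /etc/default/opensm ends up as something like PORTS="ALL" (quoting per whatever the packaged file already uses), and the daemon is then started and enabled with :-

/etc/init.d/opensm start
update-rc.d opensm defaults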

At this point, you can configure the network addresses by editing /etc/network/interfaces. If you need help doing this, then you’re in the tech pool beyond your depth! Without something at the other end, these interfaces won’t work (obviously), so it’s time to start work on the other end …

Configuring FreeBSD

See: https://wiki.freebsd.org/InfiniBand

I hadn’t had cause to build a custom kernel before, so the very first task was to use subversion to check out a copy of the FreeBSD source code :-

svn co svn://svn0.us-east.FreeBSD.org/base/stable/9 /usr/src

Updating will of course require just: cd /usr/src && svn update. Once installed, create a symlink from /sys to /usr/src/sys if the link does not already exist: ln -s /usr/src/sys /sys

Go to the kernel configuration directory (/usr/src/sys/amd64/conf), copy the GENERIC configuration file to a new file, and edit the new file to add in certain options :-

# Infiniband stuff (locally added)
options         OFED
options         IPOIB_CM
device          ipoib
device          mlx4ib

Again, this is a minimal set that will not offer full functionality … but should be enough to get IP networking up and running. The next step is to build and install the kernel :-

make buildkernel KERNCONF=${NAME-OF-YOUR-CONFIG}; make installkernel KERNCONF=${NAME-OF-YOUR-CONFIG}

The next step is to build the “world”  :-

  1. Edit /etc/src.conf and add WITH_OFED=yes to that file.
  2. Change to /usr/src and run: make buildworld
  3. Finalise with make installworld

As it happens I had to build the user-land first, as the kernel compilation needed a new user-land feature.

After a reboot, the new network interface(s) should show up as ib0 upwards. And these can be configured with an address in exactly the same way as any other network interface.
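For example, a minimal sketch of the /etc/rc.conf entries on the FreeBSD box (server B above; the /24 netmask is my assumption) :-

ifconfig_ib0="inet 10.255.0.254 netmask 255.255.255.0"
ifconfig_ib1="inet 10.255.1.254 netmask 255.255.255.0"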

Testing The Network

A tip for making sure that the interfaces you think are connected together really are: configure one of the machines, send a broadcast ping to the relevant network address of each interface in turn, and run tcpdump on the other machine to verify that the packets coming down the wire match what you expect.
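As a minimal sketch from the Linux side (assuming a /24 on each link, so the broadcast addresses are 10.255.0.255 and 10.255.1.255) :-

# on the configured machine
ping -b 10.255.0.255
# on the other machine (repeat for ib1)
tcpdump -n -i ib0 icmp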

Below the level of IP, it is possible to run an InfiniBand ping to verify connectivity. First you need a GUID on “the server”, which can be obtained by running ibstat and looking for the “Port GUID”, which will be something like “0x0002c90200273985”. Next run ibping -S on the server.

Now on the other machine (“the client”), run ibping :-

# ibping -G 0x0002c90200273985
Pong from polio.inside.zonky.org (Lid 3): time 0.242 ms
Pong from polio.inside.zonky.org (Lid 3): time 0.153 ms
Pong from polio.inside.zonky.org (Lid 3): time 0.160 ms

The next step is to run an IP ping to one of the hosts. If that works, it is time to start looking at something that will do a reasonable attempt at a speed test.

This can be done in a variety of different ways, but I chose to use nttcp which is widely available. On one of the hosts, run nttcp -i to act as the “partner” (or server). On the sending server, run nttcp -T ${ip-address-to-test} which will give output something like :-

# nttcp -T 10.0.0.26
     Bytes  Real s   CPU s Real-MBit/s  CPU-MBit/s   Calls  Real-C/s   CPU-C/s
l  8388608    0.70    0.01     95.7975   5592.4053    2048   2923.51  170666.7
1  8388608    0.71    0.04     94.0667   1878.6950    5444   7630.87  152403.3

According to the documentation, the second line should begin with ‘r’, but for a simple speed test we can simply average the numbers in the “Real-MBit/s” column – here roughly (95.8 + 94.1) / 2 ≈ 95 Mbit/s – to get an approximate speed. Oddly my gigabit ethernet seems to have mysteriously degraded to 100Mbps! At least it makes the InfiniBand speed slightly more impressive :-

# nttcp -T 10.255.0.2
     Bytes  Real s   CPU s Real-MBit/s  CPU-MBit/s   Calls  Real-C/s   CPU-C/s
l  8388608    0.03    0.00   2521.9415  16777.2160    2048  76963.55  512000.0
1  8388608    0.03    0.03   2206.6574   2568.6620    4032 132579.25  154329.0

Before getting into a panic over what appears to be a pretty poor result – averaging the two “Real-MBit/s” figures above gives roughly (2522 + 2207) / 2 ≈ 2.4 Gbit/s – it is worth bearing in mind that IP over InfiniBand isn’t especially efficient, and InfiniBand seems to suffer from marketing exaggeration. From what I understand, DDR’s 20Gbps signalling rate becomes 16Gbps, which in turn becomes 8.5Gbps when looking at the output of ibstatus (not ibstat) – why the halving here is a bit of a mystery, but that may become apparent later.

There has also been a hint that FreeBSD is due for a significant improvement in InfiniBand performance sometime after the release of 9.2.

As a late addition, it would appear that running OpenSM (the subnet manager) on both hosts means that when one or other is rebooting, the other can take over the duties of the subnet manager. To enable on FreeBSD, simply add opensm_enable="YES" to the file /etc/rc.conf and reboot.

Oct 17 2012
 

I have recently become interested in the amount of entropy available in Linux and decided to spend some time poking around on my Debian workstation. Specifically looking to increase the amount of entropy available to improve the speed of random number generation. There are a variety of different ways of accomplishing this including hardware devices (some of which cost rather too much for a simple experiment).

Eh?

Linux has a device (/dev/random) which makes random numbers available to software packages that really need access to a high-quality source of randomness. Any decently written cryptographic software will use /dev/random (and not /dev/urandom, which does not generate “proper” high-quality random numbers) to implement encryption.

Using poor quality random numbers can potentially result in encryption not being secure. Or perhaps more realistically: because Linux waits until there is sufficient entropy available before releasing numbers through /dev/random, software reading from that device may be subject to random stalling. Not necessarily long enough to cause a major problem, but perhaps enough to have an effect on performance.

Especially for a server in a virtualised environment!

Adding Entropy The Software Way (haveged)

HAVEGED is a way of using processor flutter to add entropy to the Linux /dev/random device. It can be installed relatively easily with :-

apt-get install haveged
/etc/init.d/haveged start

As soon as this was running, the amount of entropy available (cat /proc/sys/kernel/random/entropy_avail) jumped from several hundred to close to 4,000.

Now does this increased entropy have an effect on performance? Copying a CD-sized ISO image file using ssh :-

Configuration      Time (seconds)
Default entropy    29.496
With HAVEGED       28.636

A 2% improvement in performance is hardly dramatic, but every little bit helps, and it may well have a more dramatic effect on a server which regularly exhausts its entropy.

Checking The Randomness

But hang on … more important than performance is the randomness of the numbers generated. And you cannot mess with the generation of random numbers without checking the results. The first part of checking the randomness is making sure you have the right tools installed :-

apt-get install rng-tools

Once installed you can test the current set of random numbers :-

dd if=/dev/random bs=1k count=32768 iflag=fullblock | rngtest

This produces a whole bunch of output, but the key bits of output are the “FIPS 140-2 failures” and “FIPS 140-2 successes”; if you have too many failures something is wrong. For the record my failure rate is 0.05% with haveged running (without: tests ongoing).

Links

… to more information.

