Jan 30 2012

Looking through my archives, I realised that my ancient guide to the Internet was no longer online. Whilst really only of historical interest, it seemed a shame that it was not available for the terminally bored, or those who wish to investigate what the Internet was like during the 1990s.

An online version is available here with a PDF version also available.

Jan 21 2012

One of the things I miss from Solaris is Solaris Containers – zones – which are extremely useful for isolating lightweight services in their own “virtual machines”. This blog entry is about LXC – Linux Containers. There are of course other methods of accomplishing much the same thing with Linux, but LXC has the advantage that the necessary kernel extensions are included by default.

Or in other words, it isn’t necessary to compile a custom kernel, which has advantages in certain environments.

There are of course some disadvantages too – LXC isn’t quite as mature as some of the alternatives, and there are a few missing features. But it works well enough.

What?

Operating system level virtualisation, or what I prefer to call lightweight virtualisation, is a method by which you can run multiple virtual servers on a single physical (or virtual!) machine. Like full virtualisation supplied by products such as ESX, Hyper-V, etc., lightweight virtualisation allows you to run multiple servers on a single instance of server hardware.

Operating system level virtualisation is not quite the same as full virtualisation, where you get a complete virtual machine with a VGA display, a keyboard, a mouse, etc. Instead you trick the first program that starts on a normal Unix (or Linux) system – /sbin/init – into believing that it is running on a machine by itself, when it is in fact running inside a specially created environment. That is, if you want a full virtual operating system; it is also possible to set up an environment so that a container simply starts a single application.

In some ways this is similar to the ancient chroot mechanism which was often recommended for securely installing applications such as BIND which were prone to attack. But it has been improved with greater isolation from the operating system running on the hardware itself.

Note that I said /sbin/init – these containers do not run their own kernel. Merely their own user processes.

Why?

So why are these containers useful? Especially in a world where virtualisation is ubiquitous and can even be free (VirtualBox, and various KVM solutions). Well, of course they are useful, or they wouldn’t exist – the equivalent of BSD Jails has been introduced for every remaining Unix-based system as well as Linux.

First of all, containers provide a perfectly viable virtualisation mechanism if all you require is a number of Linux machines. Indeed it is possible to use this kind of virtualisation on already virtualised machines – for example on a cloud-based virtual server (as you might get from Amazon) – which could potentially save you money.

Secondly, containers provide lightweight virtualisation in that there is little to no overhead in using them. There is no virtualised CPU, no I/O virtualisation, etc. In most “heavyweight” virtualisation solutions the overhead of virtualisation is negligible except for I/O where there is often a considerable performance hit for disk-based applications.

Next, if carefully set up, it is possible to reduce the incremental cost of each server installation by using containers rather than full virtual machines. Every additional server you run has a cost associated with maintaining it in terms of money and time; a variety of different mechanisms can reduce this incremental cost, but there is still a cost there. Containers can be another means of reducing this incremental cost by making it easier to manage the individual servers.

As an example, it is possible to update which DNS servers each container uses by simply copying the /etc/resolv.conf file to each container :-

for container in $(lxc-ls | sort | uniq)
do
  cp /etc/resolv.conf /srv/lxc/${container}/rootfs/etc/resolv.conf
done

It’s also very handy for testing – create a container on a test server to mess around with some component or other to find out how it should be installed, and then throw away the container. This avoids “corrupting” a test server with the results of repeated experiments over time.

Of course there are also reasons why you should not use them.

First of all, it is another virtualisation technology to learn. Not a difficult one, but there is still more to learn.

In addition, it does not provide complete isolation – if you need to reboot the physical server, you will have to reboot all of the containers. So it is probably not a good technology for multiple servers that need to stay up forever – although the only real way of arranging that is to use a load balancer in front of multiple servers (even clustering sometimes requires an outage).

There is also the fact that this is not entirely a mature technology. That is not to say it isn’t stable, but more that the tools are not quite polished as yet.

Finally there are hints that containers do not provide complete isolation – someone with root access on a container might be able to escape from the container. Thus it is probably not a good solution to provide isolation for security reasons.

How?

The following instructions assume the use of Debian, although most modern distributions should be perfectly fine too – I’ve also done this with SLES, and seen instructions for ArchLinux. You can also mix and match distributions – SLES as the master operating system, and Debian for the distribution in the containers. That works perfectly fine.

To see if your currently running kernel supports the relevant extensions, see if the cgroups filesystem is available :-

# grep cgroup /proc/filesystems
nodev	cgroup

If the grep command doesn’t return the output as shown, you will need to upgrade and/or build your own kernel. Which is a step beyond where I’m going.

Initial Setup

Before installing your first container, you need to set up your server to support LXC. This is all pretty simple – the most complicated part is to set up your server to use a bridged adapter as its network interface, which we will tackle first.

To start with, install the bridge-utils package :-

# apt-get install bridge-utils

Next edit your /etc/network/interfaces file. Leave the section dealing with the loopback interface (lo) alone and comment out anything relating to eth0. Once that’s done, add something to set up the bridge interface (br0) with :-

auto br0
iface br0 inet dhcp
   bridge_ports eth0
   bridge_fd 0

That sets up a bridged interface configured with an address supplied by a DHCP server – not perhaps the best idea for a production server, but perfectly fine for a test server. At least if you have a DHCP server on the network! If you need to configure a static address, copy the relevant parts from your commented out section – the address, netmask, and gateway keywords can be added below bridge_fd.
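As an illustration, a static configuration might look like the following – the address, netmask, and gateway values here are made-up placeholders, so substitute the values from your commented-out eth0 section :-

```
auto br0
iface br0 inet static
   bridge_ports eth0
   bridge_fd 0
   address 192.168.1.10
   netmask 255.255.255.0
   gateway 192.168.1.1
```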

At this point it is worth rebooting the server to check that the new network interface is working fine. You could perform all of the steps in this initial setup section and do a reboot just at the end, but I prefer to reboot after each step when I’m trying something new. It’s easier to find the horrible mistake when you do it step by step.

Assuming the reboot went fine, the next step is to automatically mount the cgroup filesystem. Add the following to the end of the /etc/fstab file :-

cgroup        /cgroup        cgroup        defaults    0    0

Make the /cgroup directory, and try to mount the filesystem :-

mkdir /cgroup
mount /cgroup

At this point you should be able to see a whole bunch of files in the /cgroup directory, but it won’t show up if you try a df. You should also reboot at this point to make sure that the filesystem automatically mounts when the system boots.

The final stage is to install the LXC runtime package :-

apt-get install lxc debootstrap

Note that the LXC package is available for many distributions, and the debootstrap package is a shell script runnable under most distributions given the presence of the right set of tools.

Now we are ready to start creating containers. Well, almost.

Creating Containers

When creating containers, it is usual to use a helper script to do the donkey work of setting things up. The LXC runtime package includes a number of such helper scripts. These scripts are very useful, but it is worth pointing out that they may require some local customisation – for instance, the template script I used sets the root password to root; whilst it does suggest that this should be changed when the container is built, it is also very sensible to change this initial password in the script to something at least half-reasonable.

And the longer and more extensively you use containers, the more local customisations you are likely to want. For instance, I tend to use a bind mount to ensure that the /site filesystem is mounted under each container, so I can be sure that my local scripts and configuration files are easily available on each and every container. So :-

  cd /usr/lib/lxc/templates
  cp lxc-debian lxc-local
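As an example of such a customisation, the bind mount can be arranged with a line like the following in a container’s configuration file – the /site path is of course specific to my setup, and “first” is just an example container name :-

```
lxc.mount.entry=/site /var/lib/lxc/first/rootfs/site none bind 0 0
```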

When not using Debian, it is possible that the template scripts are installed in the main $PATH. In which case you may choose to remove them, or move them somewhere else to avoid their use in preference to your own versions.

It is also worth creating a template configuration file for the lxc-create script :-

cat > /etc/lxc/lxc.conf
lxc.network.type=veth
lxc.network.link=br0
lxc.network.flags=up
^D

To create your first container :-

  mkdir -p /var/lib/lxc/first
  lvcreate --name root-first --size=768M /dev/debian
  mkfs -t ext3 /dev/debian/root-first
  {Edit /etc/fstab to mount the new filesystem at /var/lib/lxc/first}
  mount /var/lib/lxc/first
  lxc-create -n first -t local -f /etc/lxc/lxc.conf

This goes through the process of “bootstrapping” Debian into a directory on your current system and setting up a configuration for your LXC container. Once complete, you are ready to start. But first, you will notice that I created a separate filesystem for the container’s root filesystem – self-evidently necessary if you want to avoid the possibility of a badly behaved container filling a shared filesystem and bringing down other containers.
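For completeness, the /etc/fstab edit referred to in that sequence might look something like this, assuming the volume group and mount point used above :-

```
/dev/debian/root-first   /var/lib/lxc/first   ext3   defaults   0   0
```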

There are of course other things you can do at this stage before starting the container for the first time – for instance, editing /srv/lxc/first/rootfs/etc/network/interfaces to enter a static address may be particularly useful.

Starting The Container

Once you have created a container, you will probably want to start it :-

lxc-start --daemon --name=first

You can start a container without the “daemon” option, but this means you are immediately connected to the console, which can be difficult to escape from. To connect to the container’s console, try lxc-console, which should result in something like :-

# lxc-console --name=first

Type <Ctrl+a q> to exit the console
--hit return here--
Debian GNU/Linux 5.0 first tty1

first login:

At this point you can login as root, and “fix” your container as you would do with an ordinary server. Install the software you need, make any changes you want, etc.

But there is one very noticeable oddity – rebooting a container does not seem to work properly. It seems to get stuck at the point where a normal server would reset the machine to boot again. Undoubtedly this will be fixed at some point … it’s possible that the fix is relatively simple anyway.

But for now the sequence :-

(in the container) shutdown -h now
(in the server) lxc-stop --name ${name-of-container}
(in the server) lxc-start --name ${name-of-container}

Will have to do.
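Until that is fixed, the workaround can be wrapped up in a small helper function on the server – this is just a sketch (the function name is my own invention), using the lxc-stop and lxc-start commands shown above; setting DRYRUN=1 prints the commands instead of running them :-

```shell
#!/bin/sh
# restart_container: work around the broken in-container reboot by
# stopping and then starting the named container from the server.
# Set DRYRUN=1 to print the commands rather than execute them.
restart_container() {
    name="$1"
    for cmd in "lxc-stop --name ${name}" "lxc-start --daemon --name ${name}"
    do
        if [ -n "${DRYRUN}" ]; then
            echo "${cmd}"
        else
            ${cmd}
        fi
    done
}
```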

Jan 19 2012

Took me a long time to get around to processing these, but they are ready now …

Who Eats Who?

This beastie was real close, but they’re perfectly harmless!

The Swan

The Claw

I have no idea what this really is, and I’m pretty sure I don’t want to know … to me it’s the claw.

Traffic Cone

Someone had a fun night out.

Surfaces

Jan 11 2012

According to the BBC it has been announced that the current curriculum for computer training (ICT) in schools is to be torn up and replaced. And curiously enough the new curriculum is to include programming to a certain extent – as people have been urging for decades.

The first programming language intended for use by children was the Logo programming language first developed in 1967. So it is not as if this is a new idea.

To many of us, the most interesting aspect of computers is not that they allow us to use applications such as word processors, web browsers, and the like – all very useful tools that I would not want to give up – but that they can be controlled by programming. This could be as low-level as writing a device driver in C, or could be using some application macro language to automate a tedious task.

It is perhaps an over simplification to say so, but to a certain extent programming is that last bit – automating tedious tasks. Computers are good at tedious tasks; humans are not. We should be “teaching” computers to perform tedious tasks for us, and that is called programming.

Programming can of course get rather tricky, particularly at lower levels, but it can also be quite easy with an interactive language giving more or less immediate results. For instance, the old BASIC :-

10 for i = 1 to 80
20 print "Hello"
30 next i

Can be quickly typed in, and running it gives an immediate result – the computer “says” hello to you. A simple example that can be typed in quickly, modified to give a more personal result … or enhanced to give different and slightly more interesting results. The immediacy is important to hook people in and interest them in programming.

And programming is not just useful for those who want to become programmers. Someone who has been introduced to programming may well be better able to :-

  1. Specify to an IT department what they need, or the error they’re encountering. This will save time and money.
  2. Appreciate what is and what is not possible.
  3. Automate computing tasks themselves – not quite programming, but very similar.

Jan 03 2012

As someone who spends quite a bit of time with a viewfinder stuck to his eyeball, and has used cameras ranging from an ancient Canon 1DS, through various compact cameras (including Micro 4/3 cameras) to my latest camera – a Leica M8, it is hardly surprising that I have some strong opinions on cameras. Here are just a few …

Camera user interfaces are too complex

In some ways there are too many buttons doing too much – it is all too likely to result in accidental changes whilst shooting. Which is the last thing that you want! The important thing when shooting is just that – not fiddling with the settings. Anything that gets in the way of the most difficult part of making images – composing that image – is a compromise on what a camera should be.

Whilst there are many settings that can be changed, it is rare that someone wants to change, say, the ISO setting during shooting. Or the colour balance – as someone who always shoots raw, I have no use whatsoever for the ability to change the colour balance on the camera. Or the shutter speed when you are not shooting in “Manual” (which rather few people do).

Of course getting a bunch of photographers to agree on what controls live on the camera to allow immediate settings changes is more or less impossible without increasing the number of controls to the current level of confusion. Different kinds of photography call for different settings to be adjusted; if I’m shooting landscapes, I don’t want any kind of autofocus interfering (although automatically adjusting to the hyperfocal distance would be handy) and when I’m shooting people I may want to fiddle with the ISO setting (even accepting a bit of noise to avoid motion blur).

Some controls can not only live on the lens, but live there perfectly naturally (as someone who uses old-fashioned manual lenses, I may be prejudiced here) – the manual focus control, and the aperture control. No need for controls for these under the thumb!

The Leica wins here with only a small number of controls without going into the menus – it is perfectly possible for any photographer to pick up a Leica and immediately start using it. Of course the downside of the Leica is that it isn’t as flexible as many modern cameras. And that is something else that is important – cameras need to be as flexible as possible.

That would seem to be a conflicting requirement, but can quite easily be catered for by allowing settings to be changed through the menus on the big LCD panel that appears on practically all cameras. And assigning those settings to a set of user-settings which can be quickly selected using a dial on the camera – perhaps that dial that already selects from different scene settings on existing cameras.

The key here is to allow the photographer to change predefined settings on that dial so those who want full control can have that. Indeed it would be handy if the camera were supplied with something to stick over the existing dial to give numbers instead of pre-defined scenes.

Time to rethink the viewfinder

By which I mean not that large LCD screen on the back of the camera. Whilst that’s pretty nifty for inside shots where keeping the camera as steady as possible (whilst hand-holding) is not that vital, it fails miserably outside. Too many times the screen is so washed out by sunlight that making sure the composition is right – never mind the focus – is impossible.

When pretty much the only choice was some kind of optical viewfinder – either through the lens as in an SLR or a separate viewfinder as in TLR or rangefinder cameras – it was quite impossible to do much with the location of the viewfinder. Whilst electronic viewfinders have their limitations, it is possible at last to do interesting things with the location.

So why are all electronic viewfinders nailed to the top of the camera, as an optical viewfinder needs to be? If you are lucky you will get something that rotates from horizontal to vertical – which is very useful, but does not go quite far enough.

The camera (or rather the lens) needs to be positioned to get the shot – perhaps down on the floor for an “interesting” angle. And the viewfinder belongs where the photographer’s eye is. Which is not usually on the floor – and remember that some of us are old enough that we find “interesting” angles difficult to get into.

This could be done quite easily with an EVF attached to glasses, with an HDMI cable (or wireless connection) to the camera. And whilst we’re talking about EVFs, they should be connected to the camera using an open-standard interface, so that an EVF can be moved from camera to camera.

Open-source The Firmware

Whatever you do with the firmware of a camera, there are those who will not be totally happy with the result. The advantage of an open-source firmware is that anyone who is unsatisfied can hire someone to make modifications to the firmware.

And that may result in the incorporation of features that a camera manufacturer may later realise are a really good idea. So why not?