Apr 27 2008

This entry is about upgrading a machine running Ubuntu 7.10 to Ubuntu 8.04, which is only just out. But not in the standard way, which would be quite boring.

I have at least two computers running Ubuntu, both configured in a fairly complex way and both fairly important (in the sense that I really cannot afford to attempt an upgrade and end up with a broken system). Whilst Ubuntu frequently upgrades without a hitch, it can occasionally choke; this seems more common with more complex installations.

Why not keep an old copy of the install around to revert to? Well, with LVM it is perfectly possible. Ignoring what happens underneath, I have an LVM volume group called “internal” (actually I don’t, but I would if I were to re-install) which has :-

  • var – 4Gbytes to be mounted as /var
  • root – 8Gbytes to be mounted as /
  • home – “enough” to be mounted as /home

Note I do not believe in allocating all available disk space when a storage management system like LVM is available; I do a great deal of storage management work and the biggest mistake anyone can make is assuming that they know the storage requirements of a system throughout its whole lifetime. This applies in spades to a desktop machine. Without some free space, the suggested upgrade mechanism won’t work.

Now with modern hard disks, we are likely to have more than enough storage to allocate. For instance on this machine right now I have 138Gbytes of free storage (mirrored). And that is on a two-year-old machine; a newer machine would have larger disks. Easily enough storage to have two or more “copies” of different versions of Ubuntu around.
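Before creating anything new it is worth checking how much space the volume group actually has unallocated; a minimal sketch, assuming the volume group is called “internal” as above :-

vgs internal
# the VFree column shows how much space is left unallocated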

It would be nice if Ubuntu could do much of the work for us, but for now it’s pretty much a manual process. As an aside, the Ubuntu developers should probably think about using LVM in the default installer to assist in the development of this kind of feature.

The first stage is to create new logical volumes and build filesystems on them. I chose to name the logical volumes after the operating system version they would be running …

# create logical volumes for the new root and /var ...
lvcreate -n 804root --size=8G /dev/internal
lvcreate -n 804var --size=4G /dev/internal
# ... and build filesystems on them
mkfs -t xfs /dev/internal/804root
mkfs -t xfs /dev/internal/804var

Now the key here is not to look at the current usage of your /var filesystem and decide you can get away with something much smaller … or the upgrade process will refuse to start for lack of space to download packages into. Quibbling over 1-2Gbytes is rarely worth it; you can always grow the filesystem later (note that XFS filesystems can be grown but not shrunk, so reducing one later would mean recreating it).
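If the new /var does later turn out to be too small, growing it is easy; a minimal sketch, assuming the volume names above and that you have booted the new environment (so that 804var is mounted as /var) :-

lvextend -L +2G /dev/internal/804var
# XFS filesystems can be grown while mounted (but never shrunk)
xfs_growfs /var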

The next stage is to copy the relevant filesystems across. At this point you should have as little as possible running, so do this from a text terminal after shutting down GDM …

# stop X so that as little as possible is changing underneath us
/etc/init.d/gdm stop
# install star if you don't have it already
apt-get install star
mount /dev/internal/804var /mnt
# -xdev stops star descending into other mounted filesystems
star -v -xdev -acl -copy /var/* /mnt
umount /mnt
mount /dev/internal/804root /mnt
star -v -xdev -acl -copy / /mnt

This stage will take some time to complete. You will want to do a quick check of the new / and /var to ensure they look roughly like the originals (I always seem to end up with the equivalent of /var/var when I do something like this). Notice that the new root filesystem is deliberately left mounted … you need to edit /mnt/etc/fstab to alter which devices are mounted as / and /var.
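After editing, the relevant lines of /mnt/etc/fstab should look something like this (illustrative only; the mount options are assumptions, so keep whatever your existing fstab uses) :-

/dev/internal/804root  /     xfs  defaults  0  1
/dev/internal/804var   /var  xfs  defaults  0  2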

The next stage is a bit tricky because I didn’t do it “right”, so I will be suggesting something that I didn’t try myself. The task is to modify /boot/grub/menu.lst in such a way as to result in two separate menu entries that will boot either the old operating environment or the new operating environment.

I would suggest that you :-

  1. Create an entry outside of the “DEBIAN AUTOMAGIC KERNELS LIST” that essentially replicates one of the existing entries. This entry should be left pointing at the old root filesystem, so do not modify it to boot off the new one.
  2. Modify all of the entries in the “DEBIAN AUTOMAGIC KERNELS LIST” (it makes sense when you review the menu.lst file) to alter the “root=” kernel parameter to point to the new root filesystem. This is not the “root (hd0,0)” part, but the kernel parameter “root”. It will specify the old root filesystem logical volume (something like “root=/dev/internal/root”) and you want to change this to “root=/dev/internal/804root”. A sketch of the result follows this list.
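Something like the following is what I have in mind … a sketch only, since the kernel version and paths are assumptions based on a typical 7.10 install with a separate /boot partition, so copy your own entries rather than these :-

# outside the automagic list ... left booting the old 7.10 environment
title Ubuntu 7.10 (old root)
root (hd0,0)
kernel /vmlinuz-2.6.22-14-generic root=/dev/internal/root ro
initrd /initrd.img-2.6.22-14-generic

# inside the automagic list ... "root=" changed to the new logical volume
title Ubuntu, kernel 2.6.22-14-generic
root (hd0,0)
kernel /vmlinuz-2.6.22-14-generic root=/dev/internal/804root ro
initrd /initrd.img-2.6.22-14-generic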

At this point you should probably reboot to check that both environments work. Just make sure you have a recent rescue CD knocking around before you do.

After you have done the checking you can boot the new environment and use ‘update-manager’ to upgrade the new environment to Ubuntu 8.04. This will probably work (it worked fine for me).

Undoubtedly the next time I try this, I will figure out how to make it work better, but it is good enough to provide a “fallback” option in case an upgrade goes badly. For instance, until last week running VMware Server under the 8.04 beta was pretty tricky, and if that were still the case I would have to revert to 7.10.

Aug 25 2007

If you’re hoping to read about Linux finally getting ZFS (except as a FUSE module) then you are going to be disappointed … this is merely a rant about the foolishness shown by the open-source world. It seems that the reason we won’t see ZFS in the Linux kernel is not because of technical issues but because of licensing issues … the two open-source licenses (GPL and CDDL) are allegedly incompatible!

Now some may wonder why ZFS is so great given that most of the features are available in other storage/filesystem solutions. Well, as an old Unix systems administrator, I have seen many different storage and filesystem solutions over time … Veritas, Solaris Volume Manager, the AIX logical volume manager, Linux software RAID, Linux LVM, …, and none come as close to perfection as ZFS. In particular ZFS is insanely simple to manage, and those who have never managed a server with hundreds of disks may not appreciate just how desirable this simplicity is.

Let’s take a relatively common example from Linux; we have two disks and no RAID controller, so it makes sense to use Linux software RAID to create a virtual disk that is a mirror of the two physical disks. Not a difficult task. Now we want to split that disk up into separate virtual disks to put filesystems on; we don’t know how large the different filesystems will become, so we need some facility to grow and shrink those virtual disks. So we use LVM and make that software RAID virtual disk into an LVM “physical volume”, add the “physical volume” to a volume group, and finally create “logical volumes” for each filesystem we want. Then of course we need to put a filesystem on each “logical volume”. None of these steps are particularly difficult, but there are 5 separate steps, and the separate software components are isolated from each other … which imposes some limitations.
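Spelled out, those five steps look something like this (a sketch only … the device names and sizes are assumptions) :-

# 1. mirror the two disks with software RAID
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# 2. and 3. turn it into a physical volume and add it to a volume group
pvcreate /dev/md0
vgcreate internal /dev/md0
# 4. carve out a logical volume for each filesystem
lvcreate -n home --size=20G internal
# 5. and finally build the filesystem
mkfs -t xfs /dev/internal/home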

Now imagine doing the same thing with ZFS … we create a storage pool consisting of two mirrored physical disks with a single command. This storage pool is automatically mounted as a filesystem ready for immediate use. If we need separate filesystems, we can create each with a single command. Now we come to the advantages … filesystem ‘snapshots’ are almost instantaneous and do not consume additional disk space until changes are made to the original filesystem, at which point the increase in size is directly proportional to the changes made. Each ZFS filesystem shares the storage pool, with its size being totally dynamic (by default), so that you do not have a set size reserved for each filesystem … essentially the free space in the pool is available to every filesystem.
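For comparison, the ZFS equivalent (again a sketch … the pool name and the Solaris-style device names are assumptions) :-

# one command: a mirrored pool, automatically mounted at /pool
zpool create pool mirror c0t0d0 c0t1d0
# separate filesystems, all sharing the pool's free space
zfs create pool/home
# a near-instantaneous snapshot, consuming no space until /pool/home changes
zfs snapshot pool/home@today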

So what is the reason for not having ZFS under Linux? It is open-source, so it is technically possible to add it to the Linux kernel. It has already been added to the FreeBSD kernel (in “-CURRENT”) and will shortly be added to the released version of OSX. Allegedly the license is incompatible: the ZFS code from Sun is licensed under the CDDL and the Linux kernel is licensed under the GPL. I’m not sure how they are incompatible because frankly I have better things to do with my time than read license small-print and try to determine the effects.

But Linux (reluctantly, admittedly) allows binary kernel modules to be loaded into the kernel, and the license on those certainly isn’t the GPL! So why is it not possible to allow GPLed code and CDDLed code to co-exist peacefully? After all, it seems that if ZFS were compiled as a kernel module and released as a binary blob, it could then be used … which is insane!

The suspicion I have is that there is a certain amount of “not invented here” going on.