Nov 20, 2010

For some time now, I have been contemplating switching Linux distributions on my main workstation from Ubuntu to something a little less … user-friendly? Or perhaps that should be a little more Unix-geek-friendly. The distribution I chose was ArchLinux, for a variety of reasons. If you come across this blog entry looking for a solution to a problem, it may be worth reading all the way through – this is long, and searches may “hit” on something further down.

First of all, let me point out that there is really nothing wrong with Ubuntu for most users. It is a genuinely useful distribution, well suited to the kind of user who has never compiled their own kernel. Nothing wrong with that, but Ubuntu seems to be gradually becoming a little trickier for those of us who prefer to customise our desktop environment with something like Enlightenment – it is really intended for those who want the Ubuntu way.

Nothing wrong with that either, and I intend to keep running Ubuntu on my netbook. However, I wanted a little more control over my main workstation, and with an SSD to install as a new boot device, it seemed like a good time to try out ArchLinux – especially as I could reboot into Ubuntu if things looked bad. As it happens, I haven’t needed to! This blog entry is a place to record my notes on getting ArchLinux to do the things I want; it is going to get quite long, and will grow over time.

The Install

I downloaded the core install image rather than the net install image – not for any good reason as I have done test installs from the net install image and it works well. After installing the SSD into my workstation (stuck to the bottom of the case with duct tape – I should really get a 2.5->3.5″ disk tray), I changed the boot order of the disks in my BIOS to boot from the SSD first. This was perhaps not the best idea as it made things a little trickier later, but it’s workable if you are prepared to juggle disk names (both Linux ones and BIOS/Grub ones).

First for the boring bit :-

  1. Booted off the install CD
  2. Selected CD as source
  3. Set Europe/London as timezone
  4. Set hwclock as UTC
  5. Prep hard drives-
    1. Manually configure hard drives
    2. Partition /dev/sdc (the SSD – identified by the fact it was empty)
    3. Created a 256MB partition /dev/sdc1 (for /boot)
    4. Created partition with the rest of the space /dev/sdc2 as LVM
    5. Manually configure block devices
      1. By device name
      2. Created /boot on /dev/sdc1 as ext2
      3. /dev/sdc2 becomes Volume Group
      4. / as XFS (16G)
      5. /var as ReiserFS (4G)
      6. swap (4G) – Although I have a tendency to forget this one!
      7. /opt as XFS (4G)
      8. /tmp as ReiserFS (4G) – perhaps a bit too big.
  6. Select Packages
    1. Select Base + Development.
    2. Pick random additions that look like they might be useful (note that it may be necessary to pick all of the various mkinitcpio variations, as I did on my later attempts).
  7. Install Packages
  8. Configure System
    1. Select ‘vi’ as editor
    2. Made the following changes to rc.conf (collected into a sketch after this list)
      1. USELVM=yes
      2. HOSTNAME=scrofula
      3. eth0="eth0 10.0.0.18 netmask 255.255.0.0 broadcast 10.0.255.255"
      4. gateway="default gw 10.0.0.254"
      5. ROUTES=(gateway)
    3. Made the following changes to mkinitcpio.conf
      1. BINARIES="/sbin/lvm". This shouldn’t be necessary, but at one point I ended up with a miniroot shell which was unable to mount the root filesystem, and with no LVM binary present I couldn’t see what was wrong! This error could be related to the raid problems detailed below, but adding this won’t cause any harm.
      2. HOOKS="base udev autodetect scsi sata lvm2 filesystems". Note that “raid” is suggested as necessary for software RAID; that turns out to be incorrect, as discovered later. (Although I needed software RAID to mount my /home, I left that for later after putting raid in here gave errors.)
    4. Made the following changes to resolv.conf
      1. search inside.zonky.org
      2. nameserver 10.0.0.12
    5. Made the following changes to mirrorlist
      1. Select something from “Great Britain”.
    6. Set root password.
    7. Done
  9. Install Bootloader
    1. Grub
    2. Installed to /dev/sdc! This is because although the SSD is the third by address, it is also the first boot device in the BIOS.
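
Pulled together, the rc.conf changes from the list above look roughly like this – a sketch reconstructed from my notes, so treat it as approximate (the INTERFACES line is standard rc.conf boilerplate that I have assumed rather than recorded):

# /etc/rc.conf (fragment)
USELVM="yes"
HOSTNAME="scrofula"

# static networking
eth0="eth0 10.0.0.18 netmask 255.255.0.0 broadcast 10.0.255.255"
INTERFACES=(eth0)
gateway="default gw 10.0.0.254"
ROUTES=(gateway)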

This didn’t work the first time around. Firstly, grub wasn’t set up properly: it wanted to boot the next stage from (hd2,0), which would be one of the hard disks rather than the SSD, since at this point the BIOS is still in charge (more or less). This was easily fixed on a temporary basis by editing the boot setting at the menu, and more permanently by editing /boot/grub/menu.lst.
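
For what it’s worth, the permanent fix is a one-line change: since the BIOS boots the SSD first, GRUB sees it as hd0 regardless of what Linux calls it. A sketch of the resulting menu.lst entry (the kernel line is covered further down; the initrd name assumes the stock kernel26 image):

# /boot/grub/menu.lst - (hd0,0) is the SSD's /boot as far as GRUB is concerned
title Arch Linux
root (hd0,0)
kernel /vmlinuz26 root=/dev/mapper/ssd-root ro
initrd /kernel26.img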

Secondly, the first couple of times around I found myself in what I term the “miniroot shell” – the shell you get when the freshly installed Linux fails to mount the root filesystem at boot. The only hints I had were that a) it couldn’t mount the root filesystem, and b) the binary /bin/lvm was not present. On the third or fourth attempt (my notes aren’t sufficiently accurate) I managed to get past this stage by excluding the raid “hook” and including the /bin/lvm binary in the mkinitcpio configuration file.

It would seem that at some point ArchLinux changed the “hook” name from raid to dmraid, and some instructions out there still refer to it as “raid”. My fault for not checking against enough sources! But there would be no harm in the ArchLinux people accepting both names – probably just a case of setting up a hard link somewhere!
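
Incidentally, if you want to check which hook names your installed mkinitcpio actually understands – rather than trusting possibly stale documentation, as I did – the hooks are just scripts on disk, so listing them is enough (the path is right for the ArchLinux of this era; newer systems moved it under /usr/lib):

ls /lib/initcpio/install/    # one file per available hook - dmraid is there, plain "raid" is not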

Post-Installation

With a distribution such as ArchLinux, the easy part is the installation; things get a bit trickier with the post-installation configuration. This is simply because, to allow you to do things your way, ArchLinux leaves things unconfigured and lets you get on with it. In other words, this lack of default configuration is a feature and not a bug!

The first thing to do after a core install (and probably a net install too) is to perform a full update :-

pacman -Suy

The “pacman” tool is of course the ArchLinux package management tool. This operation sits somewhere between a normal Ubuntu package upgrade and a full Ubuntu distribution upgrade. ArchLinux does not have distribution versions in the way Ubuntu does – the installation media is undoubtedly refreshed from time to time, but once the system is actually installed, the command above will both apply necessary fixes and upgrade packages as new versions come out.

This can lead to some surprises from time to time of course, but there is also never quite the same level of shock that comes with a distribution upgrade.

In any case, I needed to run the command twice as pacman itself needed an upgrade.
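
The usual advice when pacman itself is among the updates is to upgrade it on its own first and then do the full upgrade, which in practice means something like :-

pacman -Sy pacman    # refresh the package databases and upgrade pacman itself
pacman -Su           # then upgrade everything else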

After doing that, I set CONSOLEFONT in /etc/rc.conf to “sun12x22.psfu” to improve the appearance of the console, although there are a couple of fonts based on that one which may well be a better choice. Later I used the “consolefont” hook to set the console font at an earlier stage during the boot process – which is neater; however, you should then specify the font without the file extension – “sun12x22” – and of course add “consolefont” to the HOOKS variable in /etc/mkinitcpio.conf.
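
For the record, the consolefont configuration ends up in two places. A sketch – my understanding is that the hook picks the font name up from rc.conf, and the fonts themselves live under /usr/share/kbd/consolefonts :-

# /etc/rc.conf - with the consolefont hook, no file extension
CONSOLEFONT="sun12x22"

# /etc/mkinitcpio.conf
HOOKS="base udev autodetect scsi sata lvm2 filesystems consolefont"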

I also edited /boot/grub/menu.lst to change the line that specifies which kernel to load and its options :-

kernel /vmlinuz26 root=/dev/mapper/ssd-root ro vga=775

Specifically, I added “vga=775” to the end of that line. This makes the appearance of the console not quite so overwhelming on a 30″ monitor!

I also added “dmraid” to the HOOKS variable in /etc/mkinitcpio.conf, although further reading hints that the right hook for Linux software RAID is actually “mdadm”. Run mkinitcpio -p kernel26 afterwards to rebuild the initramfs.
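
So the HOOKS line ends up something like the below – placing dmraid before lvm2 is my assumption (assemble the RAID before looking for volume groups on it) – and the rebuild is essential, or the next boot will quietly use the old image :-

# /etc/mkinitcpio.conf
HOOKS="base udev autodetect scsi sata dmraid lvm2 filesystems consolefont"

# rebuild the initramfs for the stock kernel
mkinitcpio -p kernel26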

Rebooted to verify that things were still working, that CONSOLEFONT was OK, and that the old volume group (“sys”) was visible.
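
The LVM tools make that last check simple enough; something along these lines, assuming the old volume group really is called “sys” :-

vgs        # should list both the new ssd volume group and the old sys one
lvs sys    # and the logical volumes within sys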

Aug 25, 2007

If you’re hoping to read about Linux finally getting ZFS (except as a FUSE module) then you are going to be disappointed … this is merely a rant about the foolishness shown by the open-source world. It seems that the reason we won’t see ZFS in the Linux kernel is not because of technical issues but because of licensing issues … the two open-source licenses (GPL and CDDL) are allegedly incompatible!

Now some may wonder why ZFS is so great given that most of its features are available in other storage/filesystem solutions. Well, as an old Unix systems administrator, I have seen many different storage and filesystem solutions over time … Veritas, Solaris Volume Manager, the AIX logical volume manager, Linux software RAID, Linux LVM, … and none come as close to perfection as ZFS. In particular, ZFS is insanely simple to manage, and those who have never managed a server with hundreds of disks may not appreciate just how desirable this simplicity is.

Let’s take a relatively common example from Linux; we have two disks and no RAID controller, so it makes sense to use Linux software RAID to create a virtual disk that is a mirror of the two physical disks. Not a difficult task. Now we want to split that disk up into separate virtual disks to put filesystems on; we don’t know how large the different filesystems will become, so we need some facility to grow and shrink those virtual disks. So we use LVM: make that software RAID virtual disk into an LVM “physical volume”, add the “physical volume” to a volume group, and finally create “logical volumes” for each filesystem we want. Then of course we need to put a filesystem on each “logical volume”. None of these steps is particularly difficult, but there are five separate steps, and the separate software components are isolated from each other … which imposes some limitations.
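
To make that concrete, the five steps come out something like this – a sketch with made-up device and volume names :-

# 1. mirror the two disks with Linux software RAID
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# 2. turn the RAID device into an LVM "physical volume"
pvcreate /dev/md0

# 3. add the physical volume to a volume group
vgcreate data /dev/md0

# 4. create a logical volume for each filesystem, guessing at sizes
lvcreate -L 20G -n home data

# 5. and finally put a filesystem on each logical volume
mkfs -t xfs /dev/data/home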

Now imagine doing the same thing with ZFS … we create a storage pool consisting of two mirrored physical disks with a single command. This storage pool is automatically mounted as a filesystem ready for immediate use. If we need separate filesystems, we can create each with a single command. Now we come to the advantages … filesystem ‘snapshots’ are almost instantaneous and do not consume additional disk space until changes are made to the original filesystem, at which point the increase in size is directly proportional to the changes made. Each ZFS filesystem shares the storage pool, with the size being totally dynamic (by default), so that you do not have a set size reserved for each filesystem … essentially the free space in the pool is available to all filesystems.
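
Compare the equivalent under ZFS – again a sketch, with Solaris-style disk names :-

# one command: a mirrored pool, mounted at /tank and ready for immediate use
zpool create tank mirror c0t0d0 c0t1d0

# separate filesystems are one command each - no sizes to decide on
zfs create tank/home

# snapshots are near-instantaneous and initially consume no space
zfs snapshot tank/home@before-upgrade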

So what is the reason for not having ZFS under Linux? It is open-source, so it is technically possible to add it to the Linux kernel; it has already been added to the FreeBSD kernel (in “-CURRENT”) and will shortly be added to the released version of OSX. The answer is allegedly the license: the ZFS code from Sun is licensed under the CDDL and the Linux kernel is licensed under the GPL. I’m not sure how they are incompatible, because frankly I have better things to do with my time than read license small-print and try to determine the effects.

But Linux (reluctantly, admittedly) allows binary kernel modules to be loaded into the kernel, and the license on those certainly isn’t the GPL! So why is it not possible to allow GPLed code and CDDLed code to co-exist peacefully? After all, it seems that if ZFS were compiled as a kernel module and released as a binary blob, it could then be used … which is insane!

The suspicion I have is that there is a certain amount of “not invented here” going on.