Feb 25 2009

Traditionally I have always mounted just the filesystems I needed in single user mode whilst tinkering in Solaris. Turns out this is a dumb method for ZFS filesystems.

What happens is that the zfs mount command will create any directories necessary to mount the filesystem. Those freshly created directories can later stop other ZFS filesystems from mounting once the tinkering is finished, because ZFS refuses to mount a filesystem on top of a non-empty directory. This could be an argument for not creating hierarchies of filesystems, but that’s rather extreme.

The better solution is to mount all the ZFS filesystems in one go with :-

zfs mount -a
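
To illustrate the trap (the pool and filesystem names are invented for the example): mount only a child filesystem by hand and ZFS will quietly create its mountpoint, parent directories and all; when the parent filesystem later tries to mount it finds a non-empty directory and refuses :-

# in single user mode, mounting just the one filesystem of interest
zfs mount rpool/export/home      # creates /export and /export/home as needed

# later, when the rest of the hierarchy is mounted
zfs mount rpool/export
cannot mount '/export': directory is not empty
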
Feb 08 2009

I was reading a comment about the df command (in relation to reserved filesystem space) and realised that the clueless newbie was right; it is odd that df does not mention reserved space. Of course it would also be wrong for df to lie about the matter too. I then realised that df is long overdue for a bit of refreshing. If you look at the typical output of the df command, you will find it inconveniently cluttered :-

Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/datavg-810root
                       12G  7.8G  4.3G  65% /
tmpfs                 2.0G     0  2.0G   0% /lib/init/rw
varrun                2.0G  416K  2.0G   1% /var/run
varlock               2.0G     0  2.0G   0% /var/lock
udev                  2.0G  3.1M  2.0G   1% /dev
tmpfs                 2.0G  344K  2.0G   1% /dev/shm
lrm                   2.0G  2.4M  2.0G   1% /lib/modules/2.6.27-7-generic/volatile
/dev/sdb1             130M   36M   88M  29% /boot
/dev/mapper/datavg-opt
                      2.0G  776M  1.3G  39% /opt
/dev/mapper/datavg-810var
                      5.0G  1.4G  3.7G  28% /var
/dev/mapper/datavg-home
                      256G  116G  141G  46% /home
/dev/mapper/datavg-vmachines
                       96G   62G   35G  64% /vmachines
/dev/mapper/datavg-bragspool
                      256G  6.2G  250G   3% /var/spool/brag
/dev/mapper/datavg-herpesbackup
                       16G  4.6G   12G  29% /var/herpes
/dev/sda1             463G  147G  293G  34% /mdata
/dev/scd0             2.4G  2.4G     0 100% /media/CIVCOMPLETEEU
/dev/mapper/datavg-cdimages
                       32G  1.9G   31G   6% /cdimages
/dev/mapper/datavg-ontapsim
                       16G  498M   16G   4% /sim

Part of the problem is that df does not do quite what it claims to do … report free space on the mounted filesystems. It also gives a small amount of additional information about the relevant filesystems … particularly the device each filesystem lives on. This “helps” to make the output more cluttered than it needs to be. It is possible that there are those who will argue that the device is the filesystem rather than where it is mounted; they are arguably right, but when you use df you are either looking for the places in the Unix file hierarchy that have less space than is comfortable, or for the places with enough space to put that big file you are about to download.
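
As an aside, the reserved space that prompted the original comment is real enough, just invisible in df; on an ext2/ext3 filesystem it can be dug out with tune2fs (the device name here is only an example) :-

tune2fs -l /dev/sdb1 | grep -i 'reserved block count'

Those reserved blocks (5% by default, kept back for root) are the reason Used plus Avail never quite adds up to Size.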

Next, the command itself has an obscure name chosen to make it easier to type on a slow typewriter-like terminal (those who are below a certain age will not realise that we used to communicate with Unix machines using a terminal that was more like a printer than the screens we use today). It might be better named fsspace with an alias of diskspace for those who want to concentrate on what worries them rather than on what worries the machine.
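
Nothing stops anyone giving it a friendlier name today, of course; a pair of shell aliases is enough :-

alias fsspace='df -h'
alias diskspace='df -h'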

Next, why not take advantage of certain features that have crept almost silently into the command line over the last few decades ? Why not adjust the output to the width of the terminal window (look for the $COLUMNS environment variable), spacing things out or even adding more information when you have enough space?
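
A sketch of how a reworked command might pick up the width (the variable names are invented for the illustration) :-

# use the terminal width if the shell exports COLUMNS; otherwise ask tput,
# and fall back to the traditional 80 columns
width=${COLUMNS:-$(tput cols 2>/dev/null || echo 80)}
if [ "$width" -ge 120 ]; then
    layout=wide      # room for extra columns of information
else
    layout=narrow
fi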

Finally, if you dig around the df command a little you will encounter something peculiar called “inodes”. Now I know what an inode is, and I dare say quite a few people reading this will know too, but if you do not, knowing how many inodes there are is not very useful information. It is relatively rare (these days) for a filesystem to run out of inodes, so this information has a low priority, and why not use a term more understandable than “inodes” ?
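
The digging in question (with the GNU df shown above, at least) is the -i option, which swaps the space columns for inode columns :-

df -i /home        # reports Inodes, IUsed, IFree and IUse% instead of space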

Changing a term is something to be avoided in most circumstances, which is why we still have “inode” where even the originator of the term has to guess that the “i” means “index”. But I would suggest that something like “fileslots” or perhaps “fslots” would be rather more self-explanatory.

We now have the basic specification of something that should look like :-

% diskspace
Filesystem                            Size  %Used %fslots  Avail
/                                      12G    70%      3%   3.6G
/lib/init/rw                          2.0G     0%      0%   2.0G
/var/run                              2.0G     0%      0%   2.0G
/var/lock                             2.0G     0%      0%   2.0G
/dev                                  2.0G     0%      1%   2.0G
/dev/shm                              2.0G     0%      0%   2.0G
/lib/modules/2.6.27-7-generic/vola+   2.0G     0%      0%   2.0G
/boot                                 130M    29%      0%    88M
/opt                                  2.0G    40%      1%   1.2G
/var                                  5.0G    18%      0%   4.1G
/home                                 256G    52%      0%   124G
/cdimages                              32G    65%      0%    12G
/mdata                                463G    36%      1%   280G

This could be improved in some ways – for instance it would be helpful to skip over certain of the filesystems that are not strictly speaking backed by disk. However it is beginning to be useful.

Or it would be, if the code existed. Fortunately it does.
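
That code is not reproduced here, but a rough sketch of how such a tool could be knocked together from df itself (not the actual implementation, and assuming a reasonably modern df that understands -h, -i and -P) might look like this :-

#!/bin/sh
# diskspace - an illustrative sketch only, not the real implementation.
# -P keeps each filesystem on one line; -h gives human-readable sizes;
# -i swaps the space figures for inode (here "fslots") figures.
{
  df -hP | awk 'NR > 1 { print "SPACE", $6, $2, $5, $4 }'
  df -iP | awk 'NR > 1 { print "FSLOT", $6, $5 }'
} | awk '
  $1 == "SPACE" { size[$2] = $3; used[$2] = $4; avail[$2] = $5; order[++n] = $2 }
  $1 == "FSLOT" { fslots[$2] = $3 }
  END {
    printf "%-35s %6s %6s %8s %7s\n", "Filesystem", "Size", "%Used", "%fslots", "Avail"
    for (i = 1; i <= n; i++) {
      m = order[i]
      printf "%-35s %6s %6s %8s %7s\n", m, size[m], used[m], fslots[m], avail[m]
    }
  }'

Truncating over-long mount points (the trailing “+” in the example above), skipping pseudo-filesystems and adapting to $COLUMNS are left as the obvious refinements.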

Jan 14 2009

But not writing them down is dumber.

Supposedly we should not write down passwords, but who can remember hundreds of passwords ? In the distant past, when the advice not to write down passwords was first given, most users would have had just a few passwords.

Gradually things became more IT-orientated, and users started complaining about the number of passwords they had to remember.

And we made things simpler for them by coming up with single sign-on mechanisms. Which was the wrong thing to do. Yes it makes things easier, but now a single compromised password will open up many different systems.

And of course we have the web, with zillions of web sites each insisting it is important enough to deserve its own account. More passwords to “remember”.

Trying to tell people not to write passwords down is in the end going to reduce security. Firstly, users will use the same password in many places so that they have fewer passwords to remember, and secondly they will write those passwords down anyway, just not safely. Why not let them do it right ?

So how can passwords be written down securely ? Well, the first possibility is to use a secure password store so that passwords are held in an encrypted form. The second is to write them down using a consistent system to encode the passwords in some way (for example adding 1 to every digit, and moving each letter down 1) and splitting the usernames and passwords into separate lists.
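
Taking one reading of that scheme (digits shifted up by one, letters moved along by one, wrapping at the end), tr can do the whole thing, and the reverse mapping undoes it; this is purely illustrative :-

# shift digits up one and letters along one (9 wraps to 0, z wraps to a)
encode() { printf '%s\n' "$1" | tr '0-9a-zA-Z' '1-90b-zaB-ZA'; }
decode() { printf '%s\n' "$1" | tr '1-90b-zaB-ZA' '0-9a-zA-Z'; }

encode 'Winter2009'     # prints Xjoufs3110 - which is what gets written down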

And of course encourage them to use different passwords in different places so that if one becomes compromised they will only have one site broken into.

But is it time to move on from passwords ?

We (as users) do not really want to enter passwords to use things. The login screen is an interruption in the flow of activities. We need something that will allow a distant server to establish our identity without a login screen. Preferably using something similar to Kerberos.

This will probably require an initial authentication process. Again the use of passwords should be avoided (except for critical services such as banking). Why not use some form of biometrics ?

Jan 08 2009

If you think that you can use a ZFS volume as a Solaris LVM metadevice database, you would be wrong. Whilst it works initially, the LVM subsystem is initialised before ZFS at boot, so it cannot find the databases. Whilst this may seem a perverse configuration, at least one administrator has tried it – that administrator being me!
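
For the record, the configuration amounted to something along these lines (the pool and volume names are invented; do not copy this) :-

zfs create -V 100m mypool/metadb            # a small ZFS volume
metadb -a -f /dev/zvol/dsk/mypool/metadb    # used as a metadevice state database replica

Both commands succeed happily; the trouble only shows up at the next boot, when the LVM subsystem starts before the pool is imported and finds no replicas.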

Dec 15 2008

I have recently (in the last few days) picked up a Sony eBook reader, but I have also been reading ebooks for quite a while on various mobile phones. As an avid book reader, I have the classic problem of where to keep all my books. Books take up space, and sooner or later you realise that they take up an inconveniently large amount of space.

Sometimes I think that eBooks are the solution and sometimes I think they’re not quite there yet. The Sony reader has a few rough edges; in particular the irritating screen refresh (I don’t mind it being slow, but the flicker as it redraws is irritating) and the page turn buttons being slightly awkward.

But the price of the ebooks themselves is somewhat ridiculous, particularly for a DRM-protected format, which means there is no guarantee that you will be able to read them on future devices … I have books several times older than myself, and I somehow doubt that “LRX” format books will be readable in a hundred years. For those who aren’t aware, prices for LRX books seem to run between about £6 and £15 (and probably more).

Of course authors and publishers deserve a fair return on their investment in producing a book, but is pricing ebooks at roughly the cost of a physical book sensible ? I am thinking of replacing some 750 books with ebook equivalents, which at those prices amounts to around £5,000 for something I already own!

No thanks.

And after all, not producing physical books and then shipping them around ought to be a huge cost saving, so why isn’t that saving being passed on ? It comes across as the classic ripoff to most consumers.

Ebooks should be much cheaper than physical books, which would also have the advantage of bringing the cost down to a level where people are more likely to make impulse buys. This would probably increase sales to the point where the price cutting would have a negligible effect on profitability.

Why not give free copies of ebooks away to those who purchase a physical book ? This would also popularise the ebook method. If I had “coupons” from all the physical books I had purchased this year, I would probably have bought an eReader much sooner.