Aug 28 2021

For a while now, my workstation has been spewing out this error in rather large volumes :-

Aug 27 00:00:07 pica multipathd[1686]: pktcdvd0: unusable path (wild) - checker failed

(about 18,000 per day)

The multipath daemon handles block devices (disks) with multiple connections, dynamically switching between paths when errors occur. Not the sort of thing that you usually find on a workstation (or indeed most servers); it appears that I only have it installed because I started with the server install of Ubuntu.
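
You can see what (if anything) multipathd is actually managing with the following; on a machine with no multipathed disks it prints nothing :-

multipath -ll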

It wasn’t causing any harm but it was annoying that it was spamming syslog log files, so I took a look at fixing it. Turns out it is rather easy. Just edit /etc/multipath.conf and add a “blacklist” section :-

blacklist {
       devnode "^pktcdvd0"
}

The parameter to “devnode” is a regular expression but in this case we can get away with a “^” (meaning beginning of string) followed by the name of the device.
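
To confirm the daemon has picked up the blacklist, you can dump its running configuration (the grep is just to narrow the output down) :-

multipathd show config | grep -A 3 '^blacklist'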

At this point, you could restart the daemon :-

systemctl restart multipathd.service

This shouldn’t cause any problems on most machines without multiple paths; and it probably won’t be a problem for servers which do have multiple paths. But in the latter case, I’d test it first or just go for a full reboot.
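
After the restart, watching the log for a minute or two will confirm whether the spam has stopped :-

journalctl -f -u multipathd.service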

(Photo: Morning Lighthouse)
Oct 10 2020

One of the big names in the opensource world – Eric Raymond – has declared that Windows will soon be effectively a Linux distribution. Which seems like a ridiculous notion; except technically it might make a lot of sense.

How?

It seems impossible for Microsoft to replace Windows with Linux, but actually it could be done. Windows itself consists of a bunch of software applications which call Windows “APIs” which in turn make calls to the legacy NT kernel. If all that software is written cleanly (it won’t be, but bear with me), it should be possible to make modifications to both (or either) the Linux kernel and the Windows APIs to allow Windows software to run natively.

Impossible? Nope – it has already been done to a certain extent – Wine and Proton allow a considerable amount of Windows software (and games!) to run under Linux.

Why?

So it’s not impossible, but surely it is a lot of work. So why?

Microsoft has a bit of a problem – they don’t make a huge amount of money selling the Windows operating system, and maintaining it is hugely expensive. All those security fixes, all those bug fixes, and all those new features they want to introduce.

Now most of this is done to the “userland” rather than the kernel itself, but the kernel does still need to be maintained. But what if you could use the Linux kernel and get some level of maintenance supplied by those not employed by Microsoft?

Would that save Microsoft money? It seems quite possible, and you can bet someone in Microsoft has estimated whether it would or not.

Will It Happen?

There are those who point to certain actions by Microsoft – the Windows Subsystem for Linux, the Edge browser for Linux, the rumour of an Office build for Linux, etc. – as indicators that Microsoft is planning this.

I think they’re wrong, to the extent that those actions don’t say whether Microsoft is planning to make Windows a Linux distribution or not. There are plenty of reasons why Microsoft releases Linux software, not least because they almost certainly have developers who believe that porting software is a good way of finding bugs.

The real answer is that the only people who know are inside Microsoft.

(Photo: The Join)
Jun 22 2020

Unfortunately, the serial communication program I tend to use (kermit) appears not to have been updated in quite a while. Which is in some ways reasonable (it’s a very old program and probably does not need much work) and understandable (the main developer is no longer employed to work on it), but somewhat frustrating when it no longer compiles.

To get it to work on my latest system :-

  1. Download the cku302.tar.gz source code and unpack.
  2. Try the first compile with make linux KFLAGS=-DNOARROWKEYS (losing the arrow keys is unfortunate but not fatal unless you’re in command mode far too long).
  3. If that doesn’t compile, with zillions of undefined references to curses-sounding functions (printw, stdscr, wmove, etc.), scroll up to the top of the errors to find the final command that links all the objects into a binary. Paste that command and add a “-lncurses” :-
$ gcc  -o wermit \
      ckcmai.o ckclib.o ckutio.o ckufio.o \
      ckcfns.o ckcfn2.o ckcfn3.o ckuxla.o \
      ckcpro.o ckucmd.o ckuus2.o ckuus3.o \
      ckuus4.o ckuus5.o ckuus6.o ckuus7.o \
      ckuusx.o ckuusy.o ckuusr.o ckucns.o \
      ckudia.o ckuscr.o ckcnet.o ckusig.o \
      ckctel.o ckcuni.o ckupty.o ckcftp.o \
      ckuath.o ck_crp.o ck_ssl.o -lutil -lresolv -lcrypt -lncurses -lm

The final output of “wermit” just needs to be stripped, moved to a proper location, and renamed :-

$ strip wermit
$ sudo mv wermit /opt/bin/kermit

And there it seems to work fine.
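
For instance, a quick smoke test against a serial device (assuming a USB serial adapter at /dev/ttyUSB0; substitute whatever device you actually have) :-

$ /opt/bin/kermit -l /dev/ttyUSB0 -b 115200 -c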

Of course this is not a proper fix, and we are missing a lot of features, but it is at least working. And it saves me from having to struggle with minicom, screen, or cu.

May 17 2020

There are two aspects to ZFS that I will be covering here – checksums and error-correcting memory. The first is a feature of ZFS itself; the second is a feature of the hardware that you are running and some claim that it is required for ZFS.

Checksums

By default ZFS keeps checksums of the blocks of data that it writes, to later verify that a block hasn’t been subject to silent corruption. If it detects corruption, it can use whatever redundancy is available (mirrors, RAID-Z, or extra copies) to correct it, or it can simply report the problem.

If you have only one disk and don’t ask to keep multiple copies of each block, then checksums will do little more than protect the most important metadata and tell you when things go wrong.
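
Asking for multiple copies is a single property change; a minimal example (the dataset and pool names here are just illustrative, and copies only applies to data written after it is set) :-

zfs set copies=2 rpool/home
# re-read every block and verify (and where possible repair) it against its checksum
zpool scrub rpool
zpool status rpool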

All that checksum calculation does make file operations slightly slower but frankly without benchmarks you are unlikely to notice. And it gives extra protection to your data.

For those who do not believe that silent data corruption exists, take a look at the relevant Wikipedia page. Everyone who has old enough files has come across occasional weird corruption in them, and whilst there are many possible causes, silent data corruption is certainly one of them.

Personally I feel like a probably unnoticeable loss of performance is more than balanced by greater data resilience.

Error-Correcting Memory

(Henceforth “ECC”)

I’m an enthusiast for ECC memory – my main workstation has a ton of it, and I’ve insisted on ECC memory for years. I’ve seen errors being corrected (although that was back when I was running an SGI Indigo2). Reliability is everything.

However there are those who will claim you cannot run ZFS without ECC memory. Or that ZFS without ECC is more dangerous than any other file system format without ECC.

Not really.

Part of the problem is that those with the most experience of ZFS are salty old Unix veterans who are justifiably contemptuous of server hardware that lacks ECC memory (that includes me). We would no sooner run a serious file server on hardware that lacks ECC memory than rely on disk ‘reliability’ and not mirror or RAID those fallible pieces of spinning rust.

ZFS will run fine without ECC memory.

But will it make it worse?

It’s exceptionally unlikely – there are arguable examples of exceptionally esoteric failure conditions that may make things worse (the “scrub of death”) but I side with those who feel that such situations are not likely to occur in the real world.

And as always, why isn’t your data backed up anyway?

Apr 26 2020

Experimenting with Ubuntu’s “new” (relatively so) ZFS installation option is all very well, but encryption is not optional for a laptop that is taken around the place.

Perhaps I should have spent more time poking around the installer looking for an encryption option, but enabling encryption post-install isn’t so difficult.

The first step is to create an encrypted filesystem – encryption only works on newly created filesystems and cannot be turned on later :-

zfs create -o encryption=on \
  -o keyformat=passphrase \
  rpool/USERDATA/ehome

You will be asked for the passphrase as it is created. Forgetting this is extremely inadvisable!
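
You can confirm the new filesystem really is encrypted by querying its properties :-

zfs get encryption,keyformat,keystatus rpool/USERDATA/ehome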

Once created, reboot to check that :-

  1. You get prompted for the passphrase (as of Ubuntu 20.04 you do).
  2. That the encrypted filesystem gets mounted automatically (likewise).
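
If either of those fails, the key can be loaded and the filesystem mounted by hand with the standard ZFS commands :-

zfs load-key rpool/USERDATA/ehome
(you will be prompted for the passphrase)
zfs mount rpool/USERDATA/ehome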

At this point you should be able to create the filesystems for the relevant home directories :-

zfs create rpool/USERDATA/ehome/root
# copy the existing home directory into the new encrypted filesystem
cd /root
rsync -arv . /ehome/root
cd /
# each "zfs set" will complain because the directory is not empty,
# but the important bit (setting the property) still happens
zfs set mountpoint=/root rpool/USERDATA/ehome/root
zfs set mountpoint=none rpool/USERDATA/root_xyzzy

Repeat this for each user on the system, and reboot. Check that you can log in and that your files are present.

This leaves the old unencrypted home directories around (they can be removed with zfs destroy -r rpool/USERDATA/root_xyzzy). It is possible that this re-arrangement of how home directories work will break some of Ubuntu’s features – such as scheduled snapshots of home directories (which is why the destroy command needs the “-r” flag, to remove any snapshots too).

But it’s getting there.
