For the benefit of those tuning in late, or to refresh the memories of those coming back to this years later, Dominic Cummings (Boris Johnson’s chief political advisor) has been caught breaching lockdown restrictions whilst ill with the coronavirus.
Now there are those who believe he did nothing wrong – they’re idiots – but it is no longer a question of what he did or didn’t do, whether it was against the regulations themselves, or whether it was against the spirit of the regulations. Or even whether he put other people at risk.
Although it is worth pointing out that as his wife was already sick with coronavirus he was not supposed to leave the house for any reason.
No, now it is the response that is more significant.
If Dom and his buddy Boris had responded sensibly – admitted that it was wrong, and Dom had resigned – that would be fine. Or at least no worse than we expect from the Tories.
But to claim that he did nothing wrong – when to most of us it has all the appearance of one rule for us and quite another for the Tory toffs – just inflames the situation.
Dumb move by a political advisor!
There are two aspects to ZFS that I will be covering here – checksums and error-correcting memory. The first is a feature of ZFS itself; the second is a feature of the hardware that you are running and some claim that it is required for ZFS.
Checksums
By default ZFS keeps checksums of the blocks of data that it writes, to later verify that a data block hasn’t been subject to silent corruption. If it detects corruption, it can use whatever redundancy is available (if any) to repair it, or it can simply report that there is a problem.
If you have only one disk and don’t ask to keep multiple copies of each block, then checksums will do little more than protect the most important metadata and tell you when things go wrong.
All that checksum calculation does make file operations slightly slower but frankly without benchmarks you are unlikely to notice. And it gives extra protection to your data.
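As a sketch of how this looks in practice – the pool and dataset names here are hypothetical, so adjust to suit your system :-

```shell
# Check which checksum algorithm a dataset is using (fletcher4 is the default)
zfs get checksum tank/data

# On a single disk, ask ZFS to keep two copies of every block so that
# a corrupt block can be repaired from its duplicate
zfs set copies=2 tank/data

# Read and verify every block in the pool, repairing where possible
zpool scrub tank

# Any detected corruption shows up in the CKSUM column here
zpool status -v tank
```

Note that copies=2 only applies to blocks written after the property is set, and it halves your usable space for that dataset.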
For those who do not believe that silent data corruption exists, take a look at the relevant Wikipedia page. Anyone with files old enough has come across occasional weird corruption in them, and whilst there are many possible causes, silent data corruption is certainly one of them.
Personally I feel like a probably unnoticeable loss of performance is more than balanced by greater data resilience.
Error-Correcting Memory
(Henceforth “ECC”)
I’m an enthusiast for ECC memory – my main workstation has a ton of it, and I’ve insisted on ECC memory for years. I’ve seen errors being corrected (although that was back when I was running an SGI Indigo2). Reliability is everything.
However there are those who will claim you cannot run ZFS without ECC memory. Or that ZFS without ECC is more dangerous than any other file system format without ECC.
Not really.
Part of the problem is that those with the most experience of ZFS are salty old Unix veterans who are justifiably contemptuous of server hardware that lacks ECC memory (that includes me). We would no sooner consider running a serious file server on hardware that lacks ECC memory than rely on disk ‘reliability’ and not mirror or RAID those fallible pieces of spinning rust.
ZFS will run fine without ECC memory.
But will the lack of ECC make ZFS worse?
It’s exceptionally unlikely – there are arguable examples of exceptionally esoteric failure conditions that may make things worse (the “scrub of death”) but I side with those who feel that such situations are not likely to occur in the real world.
And as always, why isn’t your data backed up anyway?
Experimenting with Ubuntu’s “new” (relatively so) ZFS installation option is all very well, but encryption is not optional for a laptop that is taken around the place.
Perhaps I should have spent more time poking around the installer to find the option, but enabling encryption post-install isn’t so difficult.
The first step is to create an encrypted filesystem – encryption only works on newly created filesystems and cannot be turned on later :-
zfs create -o encryption=on \
    -o keyformat=passphrase \
    rpool/USERDATA/ehome
You will be asked for the passphrase as it is created. Forgetting this is extremely inadvisable!
Once created, reboot to check that :-
- You get prompted for the passphrase (as of Ubuntu 20.04 you do).
- That the encrypted filesystem gets mounted automatically (likewise).
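If the automatic prompt or mount ever fails, the key can be loaded and the filesystems mounted by hand – a sketch, assuming the filesystem created above :-

```shell
# Prompt for the passphrase and load the key for ehome and any children
zfs load-key -r rpool/USERDATA/ehome

# Mount every filesystem whose key is now loaded
zfs mount -a
```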
At this point you should be able to create the filesystems for the relevant home directories :-
zfs create rpool/USERDATA/ehome/root
cd /root
rsync -arv . /ehome/root
cd /
zfs set mountpoint=/root rpool/USERDATA/ehome/root
(An error will result because something is already mounted at /root, but the important bit – setting the property – is done)
zfs set mountpoint=none rpool/USERDATA/root_xyzzy
(A similar error)
Repeat this for each user on the system, then reboot and check that you can log in and that your files are present.
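The per-user steps can be sketched as a loop – the usernames here are hypothetical, and the “_xyzzy” suffix on the old datasets is a per-system random string, so substitute your own :-

```shell
# For each user: create an encrypted home, copy the data across,
# then swap the mountpoints over (root's home is /root, not /home/root).
for user in alice bob; do
    zfs create rpool/USERDATA/ehome/${user}
    rsync -a /home/${user}/ /ehome/${user}/
    # Both mountpoint changes may complain that something is already
    # mounted there; as with /root above, the property still gets set.
    zfs set mountpoint=/home/${user} rpool/USERDATA/ehome/${user}
    zfs set mountpoint=none rpool/USERDATA/${user}_xyzzy
done
```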
This leaves the old unencrypted home directories around (which can be removed with zfs destroy -r rpool/USERDATA/root_xyzzy). It is possible that this re-arrangement of how home directories work will break some of Ubuntu’s features – such as scheduled snapshots of home directories (which is why the destroy command needs the “-r” flag).
But it’s getting there.