Jun 22 2020

I have a problem with serial ports (usually “virtual” ones or USB←→serial port dongles) – I have too many of them, and I usually end up picking the wrong one. And accidentally connecting a terminal emulator to a TrueRNG serial port (a hardware random number generator) gets very messy very quickly.

So I was searching around, semi-idly wondering if I could somehow build a device name to USB name mapping that I could stuff into rofi (or dmenu), and I discovered the /dev/serial/by-id/ directory, which did 99% of the work for me.

So yes, I can invoke kermit and up will pop a menu allowing me to select which serial port to connect to :-

ls /dev/serial/by-id |\
  rofi -dmenu -l 20 -p "Pick a serial device" -font "mono 20"

That is the core of it, but to make it functional I need to embed it into a command line argument to kermit :-

alias kermit='kermit -C "set line /dev/serial/by-id/$(ls /dev/serial/by-id | rofi -dmenu -l 20 -p "Pick a serial device" -font "mono 20"),set carrier-watch off"'

Which is admittedly a bit of a mouthful!
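
If the alias becomes too unwieldy, the same thing reads rather better as a shell function; this is a minimal sketch (the name “pick-serial” is my own invention) :-

function pick-serial {
  local dev
  # let rofi present the stable by-id names; bail out if nothing was picked
  dev=$(ls /dev/serial/by-id | rofi -dmenu -l 20 -p "Pick a serial device" -font "mono 20") || return
  kermit -C "set line /dev/serial/by-id/$dev,set carrier-watch off"
}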

But it is so useful if you have two or three USB to serial adapters plugged in, plus a switch’s console port and a Linux widget that provides a serial console.

Jun 22 2020

Unfortunately, the serial communication program I tend to use (kermit) appears not to have been updated in quite a while. Which is in some ways reasonable (it’s a very old program and probably does not need much work) and understandable (the main developer is no longer employed to work on it), but is somewhat frustrating when it no longer compiles.

To get it to work on my latest system :-

  1. Download the cku302.tar.gz source code and unpack it.
  2. Try the first compile with make linux KFLAGS=-DNOARROWKEYS (losing the arrow keys is unfortunate but not fatal unless you’re in command mode far too long).
  3. If that fails at the link stage with zillions of undefined references to curses-sounding functions (printw, stdscr, wmove, etc.), scroll up to the top of the errors to find the final command that links all the objects into the final binary. Paste that command and add a “-lncurses” :-
$ gcc  -o wermit \
      ckcmai.o ckclib.o ckutio.o ckufio.o \
      ckcfns.o ckcfn2.o ckcfn3.o ckuxla.o \
      ckcpro.o ckucmd.o ckuus2.o ckuus3.o \
      ckuus4.o ckuus5.o ckuus6.o ckuus7.o \
      ckuusx.o ckuusy.o ckuusr.o ckucns.o \
      ckudia.o ckuscr.o ckcnet.o ckusig.o \
      ckctel.o ckcuni.o ckupty.o ckcftp.o \
      ckuath.o ck_crp.o ck_ssl.o -lutil -lresolv -lcrypt -lncurses -lm

The final output of “wermit” just needs to be stripped, moved to a proper location, and renamed :-

$ strip wermit
$ sudo mv wermit /opt/bin/kermit

And there it seems to work fine.
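
For reference, the whole dance condensed into one run – a sketch, assuming the tarball has been unpacked into its own directory (and using a wildcard in place of the explicit object list above) :-

cd cku302                              # or wherever the source was unpacked
make linux KFLAGS=-DNOARROWKEYS        # expect this to fail at the link step
gcc -o wermit *.o -lutil -lresolv -lcrypt -lncurses -lm
strip wermit
sudo mv wermit /opt/bin/kermit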

Of course this is not a proper fix, and we are missing a lot of features, but it is at least working. And it saves me from having to struggle with minicom, screen, or cu.

Jun 21 2020

If you are just running Ubuntu with ZFS without poking into the details, you may not be aware of the scrubber running. For background information, and for the benefit of those who prefer to go their own way, this is all about that little scrubber.

A pool scrub operation is where the kernel runs through all of the data in a pool, checking it and making any necessary repairs. Whilst ZFS does check the integrity of data (using checksums) when performing reads, that only covers blocks that actually get read; a regular scrub checks everything and repairs issues before they are encountered.

It need only be run weekly for larger systems or monthly for normal systems (it’s a pretty arbitrary borderline), and can be started manually with :-

# zpool scrub pool0

(With “pool0” being the name of the pool to scrub)
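
Ubuntu’s zfsutils-linux package arranges a monthly scrub via cron, but for those going their own way, something like the following works (a sketch – the file name and schedule are my own choices, and “pool0” needs changing to suit) :-

# /etc/cron.d/zfs-scrub - scrub pool0 at 02:00 on the first Sunday of the month
# (cron ORs day-of-month with day-of-week, hence the date test; "%" has to be
# escaped in crontab entries)
0 2 1-7 * * root [ "$(date +\%u)" = 7 ] && /sbin/zpool scrub pool0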

Whilst a scrub is going on in the background, the only effect on the system is that disk accesses to that pool will be slightly slower than normal. Usually not enough to notice unless you are benchmarking!

When a scrub is in progress, the output of zpool status pool0 will show the current state and how long the scrub is expected to take to complete. Once finished, the status will look like :-

# zpool status | grep scan:
  scan: scrub repaired 0B in 0 days 09:19:27 with 0 errors on Sun Jun 21 10:36:28 2020
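
To keep an eye on a scrub as it runs, something as simple as watch will do :-

# watch -n 60 zpool status pool0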

Jun 21 2020

There has been a bit of a rush of YouTube videos describing how to hide the terminal when running certain commands; when using the terminal as a file browser you may want to open a media file, and whilst that file is showing you may wish to hide the terminal – particularly on a small laptop screen.

Just for fun, I decided to take one of the recipes and adapt it for my working environment, which revolves around zsh when I am in a terminal window.

To start with, I need the xdo tool, which can be installed with :-

$ sudo apt install xdo

Once that is installed, I modified a shell script from https://github.com/salman-abedin/devour/blob/master/ into a zsh function. I created a function file within my FPATH (~/lib/zsh/functions/devour) :-

function devour {
  local id
  # remember the current terminal window, hide it, run the command, and
  # unhide the terminal once the command exits
  id=$(xdo id)
  xdo hide
  "$@"
  xdo show "$id"
}

Note that I have deliberately not redirected stdout and stderr away.

I then added the following to my .zshrc :-

autoload devour
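
For autoload to find the function, the directory containing it has to appear in zsh’s fpath; if it is not already there, something like this belongs in .zshrc before the autoload :-

fpath=(~/lib/zsh/functions $fpath)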

The next step is to create aliases for the commands you want wrapped in this way :-

alias mpv="devour mpv"
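
The same trick works for any full-screen viewer; for example (assuming you use these particular programs) :-

alias zathura="devour zathura"
alias sxiv="devour sxiv"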

I am not sure “devour” is the right word to use for this action, but I have yet to think of a better one.

May 17 2020

There are two aspects to ZFS that I will be covering here – checksums and error-correcting memory. The first is a feature of ZFS itself; the second is a feature of the hardware you are running on, and some claim that it is required for ZFS.

Checksums

By default ZFS keeps checksums of the blocks of data that it writes, to later verify that a data block hasn’t been subject to silent corruption. If it detects corruption, it can use any available resilience (mirrors, RAID-Z, or extra copies) to correct the corruption, or it can at least indicate there’s a problem.

If you have only one disk and don’t ask to keep multiple copies of each block, then checksums will do little more than protect the most important metadata and tell you when things go wrong.
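
Those multiple copies can be requested on a per-dataset basis; a sketch (the dataset name is made up) :-

# store two copies of every block in this dataset (roughly doubling its space usage)
$ sudo zfs set copies=2 pool0/home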

All that checksum calculation does make file operations slightly slower, but frankly without benchmarks you are unlikely to notice. And it gives extra protection to your data.

For those who do not believe that silent data corruption exists, take a look at the relevant Wikipedia page. Everyone who has old enough files has come across occasional weird corruption in them, and whilst there are many possible causes, silent data corruption is certainly one of them.

Personally I feel like a probably unnoticeable loss of performance is more than balanced by greater data resilience.

Error-Correcting Memory

(Henceforth “ECC”)

I’m an enthusiast for ECC memory – my main workstation has a ton of it, and I’ve insisted on ECC memory for years. I’ve seen errors being corrected (although that was back when I was running an SGI Indigo2). Reliability is everything.

However there are those who will claim you cannot run ZFS without ECC memory. Or that ZFS without ECC is more dangerous than any other file system without ECC.

Not really.

Part of the problem is that those with the most experience of ZFS are salty old Unix veterans who are justifiably contemptuous of server hardware that lacks ECC memory (that includes me). We would no sooner consider running a serious file server on hardware that lacks ECC memory than rely on disk ‘reliability’ and not mirror or RAID those fallible pieces of spinning rust.

ZFS will run fine without ECC memory.

But will running it without ECC make things worse?

It’s exceptionally unlikely – there are arguable examples of exceptionally esoteric failure conditions that may make things worse (the “scrub of death”), but I side with those who feel that such situations are not likely to occur in the real world.

And as always, why isn’t your data backed up anyway?