Mike Meredith

Jul 05 2020
 

This is inspired by a tweet claiming that something was an example of mediæval blood libel; I remembered it being earlier than that, looked it up, and found the relevant tweet had disappeared off the bottom of the page.

So this blog posting.

The earliest reference to the blood libel in the relevant Wikipedia article is an accusation that Jews sacrificed Greeks in the Temple in Jerusalem in the BCE era.

Calling this “blood libel” is mildly controversial – there are those who prefer to stick to a very specific definition which specifies christians accusing Jews. Whilst I’m fine with a definition this specific for a specific instance (“The Blood Libel”), being too pedantic prevents discussions about instances (real or theoretical) of similar accusations by other groups against other groups (“a blood libel”).

In addition, if you prohibit the use of the phrase “a blood libel” for any accusation that doesn’t closely match a specific definition, it makes discussing generic blood libel accusations somewhat tricky – be too quick to dismiss the fictional accusation that atheists use the blood of neo-pagan children to make “holy bread” as not being “The Blood Libel” and you risk implying that it’s not that bad.

Whilst the Jews are a popular target for the evil ones who like emphasising “us versus them” (frequently as a means of bolstering their own power), they are not the only target – from a Eurocentric perspective, those other targets include the Romani, blacks, the Irish, Asians, and immigrants of any kind.

This general demonisation of “them” isn’t any kind of blood libel of course, but it is possible that non-Jewish blood libel accusations have been made against “them” ever since religion became a source of power for priests – well before history (the written record) began, and well before the Jewish people called themselves Jews.

The blood libel is specifically a false accusation that Jews sacrifice the children of christians (or Greeks in the earliest examples) to use their blood to make a “holy bread”. Ignoring the ethnic groups, the key elements of a blood libel are :-

  1. The accusation is false (or it wouldn’t be a libel).
  2. The accusation involves blood consumption – blood has been important symbolically since forever.
  3. The religious aspect – consuming the blood is a religious act. Well, religion has been the curse blighting humanity ever since it began.

There is nothing in there that is dependent on the identity of the perpetrator or the target group (because I’ve removed it), but doesn’t it cover the essentials of a blood libel?

Being cynical about human nature, I’m pretty sure that blood libels have been around ever since religion could be used to divide us and them. Which is a good deal further back than the last 2,000 years.

None of this is meant to undermine the seriousness of the blood libel against Jews.

No Fun At The Fair

Jun 27 2020
 

So Apple has announced that it is replacing Intel processors with ARM processors in its Mac machines. And as a result we’re going to be plagued with awful puns endlessly until we get bored of the discussion. Sorry about that!

This is hardly unexpected – Apple has been using ARM-based processors in its iThingies for years now, and this is not the first time they have changed processor architectures for the Mac. Apple started with the Motorola 68000, switched to the PowerPC architecture (a joint Apple/IBM/Motorola effort), and then switched to Intel processors.

So they have a history of changing processor architectures, and know how to do it. We remember the problems, but it is actually quite an accomplishment to take a macOS binary compiled for the Power architecture and run it on an Intel processor. It is analogous to taking a monolingual Spanish speaker, providing them with a smartphone-based translator, and dropping them into an English city.

So running Intel-binary macOS applications on an ARM-based system will usually work. There’ll be corner cases that do not, of course, but these are likely to be relatively rare.

But what about performance? On a theoretical level, emulating a different processor architecture is always going to be slower, but in practice you probably won’t notice.

First of all, most macOS applications consist of a relatively small wrapper around Apple-provided libraries of code (although that “wrapper” is the important bit). For example, the user interface of any application is going to be mostly Apple code provided by the base operating system – so the user interface is going to feel as snappy as any native ARM macOS application.

Secondly, Apple knows that the performance of macOS applications originally compiled for Intel is important and has Rosetta 2 to “translate” applications into instructions for the ARM processors. This will probably work better than the doom-sayers expect, but it will never be as fast as natively compiled code.

But it will be good enough, especially as most major applications will be compiled natively for ARM relatively quickly.

But what about another aspect of performance – are ARM processors fast enough compared with Intel processors? Well, the world’s fastest supercomputer (the ARM-based Fugaku) runs on ARM processors, although Intel fanboys will quite rightly point out that a supercomputer is a special case and that a single Intel core will outperform a single ARM core.

Except that, outside of games and specialised applications that have not been optimised for parallel processing, more cores beat faster single cores.

And a single ARM core will beat a single Intel core if the latter is thermally throttled. And thermals have been holding back the performance of Apple laptops for quite a while now.

Lastly, Apple knows that ARM processors are slower than Intel processors in single-core performance and is likely pushing ARM and themselves to solve this. It isn’t rocket science (if anything it’s thermals), and both have likely been working on this problem in the background for a while.

Most of us don’t really need ultimate processor speed; for most tasks merely the appearance of speed is sufficient – web pages loading snappily, videos playing silkily, etc.

And most of these tasks can be performed fine with a relatively modest modern processor and/or can be accelerated with specialised “co-processors”. For example, Apple’s Mac Pro has an optional accelerator card that offloads video encoding and makes it much faster than it would otherwise be.

Ultimately, if you happen to run some heavy-processing application (you will know if you do) whose performance is critical to your work, benchmark it. And keep benchmarking it if the ARM-based performance isn’t all that good to start with.

Apple has shown a “slide” implying that their “Apple silicon” processors will contain not just the ordinary processor cores but also specialised accelerators to improve performance.

Jun 22 2020
 

I have a problem with serial ports (usually “virtual ones” or USB←→serial port dongles) – I have too many of them, and I usually end up with the wrong one. And selecting a TrueRNG serial port and connecting a terminal emulator to it gets very messy very quickly.

So I was searching around, semi-idly wondering if I could somehow build a mapping from device name to USB name that I could stuff into rofi (or dmenu), and I discovered the /dev/serial/by-id/ directory, which did 99% of the work for me.
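
For the curious, that directory is just a collection of stable symlinks, named after the USB device, pointing at the real ttyUSB/ttyACM device nodes – something like this (the device names here are purely illustrative) :-

$ ls -l /dev/serial/by-id/
lrwxrwxrwx 1 root root 13 Jun 22 09:14 usb-FTDI_FT232R_USB_UART_A600XXXX-if00-port0 -> ../../ttyUSB0
lrwxrwxrwx 1 root root 13 Jun 22 09:14 usb-TrueRNG_XXXX-if00 -> ../../ttyACM0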

So yes, I can invoke kermit and up will pop a menu allowing me to select which serial port to connect to :-

ls /dev/serial/by-id |\
  rofi -dmenu -l 20 -p "Pick a serial device" -font "mono 20"

That is the core of it, but to make it functional I need to embed it into a command line argument to kermit :-

alias kermit='kermit -C "set line /dev/serial/by-id/$(ls /dev/serial/by-id | rofi -dmenu -l 20 -p "Pick a serial device" -font "mono 20"),set carrier-watch off"'

Which is admittedly a bit of a mouthful!

But so useful if you have two or three USB to serial adapters plugged in, plus a switch’s console port and a Linux widget that provides a serial console.
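
If the alias gets too unwieldy, the same thing can be written as a shell function – a minimal sketch (the function name is my own invention, and it declines to start kermit if nothing is picked) :-

# Pick a serial device via rofi, then hand it to kermit.
kermit-pick() {
  local dev
  dev=$(ls /dev/serial/by-id | rofi -dmenu -l 20 -p "Pick a serial device" -font "mono 20")
  [ -z "$dev" ] && return 1  # nothing chosen - don't start kermit
  kermit -C "set line /dev/serial/by-id/$dev,set carrier-watch off"
}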

Jun 22 2020
 

Unfortunately, the serial communication program I tend to use (kermit) appears not to have been updated in quite a while. Which is in some ways reasonable (it’s a very old program and probably does not need much work), understandable (the main developer is no longer employed to make it work), but somewhat frustrating when it no longer compiles.

To get it to work on my latest system :-

  1. Download the cku302.tar.gz source code and unpack it.
  2. Try the first compile with make linux KFLAGS=-DNOARROWKEYS (losing the arrow keys is unfortunate but not fatal unless you’re in command mode far too long).
  3. If the compile fails with zillions of undefined references to curses-sounding functions (printw, stdscr, wmove, etc.), scroll up to the top of the errors, where the final command that links all the objects into the final binary can be found. Paste that command and add a “-lncurses” :-
$ gcc  -o wermit \
      ckcmai.o ckclib.o ckutio.o ckufio.o \
      ckcfns.o ckcfn2.o ckcfn3.o ckuxla.o \
      ckcpro.o ckucmd.o ckuus2.o ckuus3.o \
      ckuus4.o ckuus5.o ckuus6.o ckuus7.o \
      ckuusx.o ckuusy.o ckuusr.o ckucns.o \
      ckudia.o ckuscr.o ckcnet.o ckusig.o \
      ckctel.o ckcuni.o ckupty.o ckcftp.o \
      ckuath.o ck_crp.o ck_ssl.o -lutil -lresolv -lcrypt -lncurses -lm

The resulting “wermit” binary just needs to be stripped, moved to a proper location, and renamed :-

$ strip wermit
$ sudo mv wermit /opt/bin/kermit

And there it seems to work fine.
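
As a quick sanity check, the same -C mechanism used in the alias earlier can run a command and leave (“show version” being a standard kermit command) :-

$ kermit -C "show version,exit"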

Of course this is not a proper fix, and we are missing a lot of features but it is at least working. And saves me from having to struggle with minicom, screen, or cu.

Jun 21 2020
 

If you are just running Ubuntu with ZFS without poking into the details, you may not be aware of the scrubber running. For background information, and for the benefit of those who prefer to go their own way, this is all about that little scrubber.

A pool scrub operation is where the kernel runs through all of the data in a pool, checking it and making any necessary repairs. Whilst ZFS does check the integrity of data (using checksums) when performing reads, a regular scrub also covers data that is rarely read, repairing any issues in advance.

It need only be run weekly for larger systems or monthly for normal systems (it’s a pretty arbitrary border line). And can be started manually with :-

# zpool scrub pool0

(Where “pool0” is the name of the pool to scrub)
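
If you would rather schedule it than remember to run it, a simple cron entry does the job. A minimal sketch for /etc/cron.d (the time, date, and pool name are merely examples – and note that Ubuntu’s ZFS packaging ships its own scrub job, so check you are not doubling up) :-

# Scrub pool0 at 02:00 on the first day of every month.
0 2 1 * * root /sbin/zpool scrub pool0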

Whilst a scrub is going on in the background, the only effect on the system is that disk accesses to that pool will be slightly slower than normal. Usually not enough to notice unless you are benchmarking!

When a scrub is in progress, the output of zpool status pool0 will show the current state and how long it is expected to take to complete. Once finished, the status will look like :-

# zpool status | grep scan:
  scan: scrub repaired 0B in 0 days 09:19:27 with 0 errors on Sun Jun 21 10:36:28 2020