Sep 10 2018
 

If you have not heard, Valve has added a compatibility layer to Steam which allows a limited number of Windows games to run under Linux. The “compatibility layer” is in fact a fork of WINE called Proton.

Peered at from 500 metres away, Proton allows Windows software to run (or not infrequently crash and burn) by translating the Win32 API into Linux APIs, and translating the variety of graphics APIs into Vulkan. That is a really difficult thing to do.

I have taken a very quick look at the new Steam client (and “Proton” is no longer part of a beta release of the Steam client – it’s in the standard client). It works perfectly adequately, although you will have variable experiences running Windows software.

For some reason this news has captured the imagination of a number of ’tubers who are more gamers than Linux users, which has led to some misunderstanding :-

  1. This is not Linux gaming; it is Windows gaming under Linux. If you have a bad experience with Steam under Linux, you are not experiencing a bad time with Linux gaming. Linux gaming involves native Linux software, and yes there is some out there.
  2. Problems with Steam could well be down to the Proton compatibility layer – unsupported API calls, or buggy usage of the Win32 API that relies on Windows behaving in a certain way for undefined parameters.
  3. In addition, problems with Steam could be due to the hardware you are running; take a game that works perfectly fine with an Nvidia card – it may behave problematically with an AMD card, or even a different Nvidia card. Or the other way around.

The important thing to remember when looking at videos about Steam is that the person looking at Steam may not be the most experienced Linux user out there. That is not necessarily bad – the whole purpose of Steam is to be able to run games easily without a whole lot of Linux experience.

But they may not properly understand what is going on – for example, the first thing I would do as a professional game-orientated ’tuber would be to try out a selection of games with an Nvidia card, and then repeat with an AMD card – just to see if things work better, worse, or at least differently.

And again, this is not about Linux gaming but about allowing easy access to old Windows titles that someone may have bought in the past. 

Sep 08 2018
 

Having used Linux for well over 20 years (yes it is that old), and Unix before that, I’m often puzzled by how scary some people seem to find Linux. Why should it be scary? It’s just a computer – you’re the human in charge of it.

Yes There Are Gooeys

(graphical user interfaces – GUIs – gooeys)

Yes, there is plenty of software with a graphical user interface – I use plenty on a daily basis, including a standard web browser, an email client, a password manager, and an office package.

On a slightly less frequent basis there are many more that I use. Indeed, provided that you accept the use of alternatives, you can find Linux software to do just about anything.

But Don’t Ignore The Command-Line

Yes, Linux has a command-line, and for those of us familiar with it, it can be very powerful. And there is no harm in learning how to use the command-line just to the point where you can follow instructions on how to “get something done” there.

Because if I have a fix for some niggle that you are having, it is easier and less error-prone to pass instructions for a command-line incantation than instructions for a gooey (and yes I have done both).


Sep 06 2018
 

I recently put together a (mostly) new PC and had occasion to look at what PC cases are like these days. In the end I kept my existing case, but spent enough time looking to have formed certain opinions.

And they suck.

They are all about glass windows to let the silly lights show through, but how about some useful features?

  • Tool-less case panels? Or at least the top panel (to access the expansion cards).
  • Built-in cable runs so things (fans, SATA drives, etc) can be plugged in next to where they are installed.
  • On the subject of fans, servers often have easily removable fan trays; fans are mounted to a plastic frame which in turn slots into position together with power and control signals. A doddle to clean, which would be handy for a desktop workstation.
  • A front panel display to show fault messages during startup – firmware fault codes (some motherboards have a two-digit display but they’re optional and usually not visible when the case is closed). Post-boot it could be used for other things. If it breaks the clean lines of the case, put it behind a sliding panel or something.
  • Handles. And wheels. 

There are probably a whole bunch more that could usefully be considered, and some of these are inherited from cases known to me (the old Mac Pro case is a good place to start from).

Aug 09 2018
 

Well that was a weird error; I recently discovered that ntpd had mysteriously stopped working; specifically it was not able to resolve NTP “pool” names :-

ntpd: error resolving pool europe.pool.ntp.org: Name or service not known (-2)

After some time spent blundering around down dead ends with the help of an appropriate search engine, I ended up resorting to strace. This is a tool most commonly used by developers but can be surprisingly useful for diagnosing system problems too.

As long as you can look past all the inscrutable output!

The strace tool runs a command and records every system call that the command makes, together with the results. And of course most commands make zillions of system calls, so you’re likely to end up with a huge output file.

To generate the output file, I ran the modern equivalent of ntpdate (ntpd -d) which tries to do the same thing using the actual NTP daemon. Usefully in this case because the command starts, configures itself (which is where the error occurs), and then exits (unlike the normal dæmon). It is important to redirect the output to have a file to trawl through later :-

strace ntpd -d > /var/tmp/ntpd.strace 2>&1
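
As an aside, strace can be told to record only file-related system calls, which keeps the output much smaller – a sketch; the exact filter syntax varies a little between strace versions :-

strace -e trace=file ntpd -d > /var/tmp/ntpd.strace 2>&1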

Once the output was generated, it was necessary to trawl through it to look for clues. The first thing was to search for “europe” (as I use europe.pool.ntp.org as one of my NTP servers). The first occurrence was the error claiming that the name didn’t exist :-

write(2, "error resolving pool europe.pool"..., 73error resolving pool europe.pool.ntp.org: Name or service not known (-2)

Which was somewhat odd because you would expect the string “europe” to occur within an unsuccessful attempt to resolve the name. Yet it appears as though the error occurs without any attempt to resolve the name!

As a bit of a guess I searched for “resolv.conf” which revealed :-

stat("/etc/resolv.conf", {st_mode=S_IFREG|0644, st_size=362, ...}) = 0
openat(AT_FDCWD, "/etc/resolv.conf", O_RDONLY|O_CLOEXEC) = -1 EACCES (Permission denied)

Apparently ntpd is unable to open the file due to a permissions problem!

Looking at my /etc/resolv.conf revealed an oddity dating back to when I tried configuring it as a symbolic link to a file on a separate file system – the file itself was a symbolic link to /etc/resolv.conf.file.
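
The quickest way to spot that sort of arrangement is to list the file itself (the output here is illustrative) :-

$ ls -l /etc/resolv.conf
lrwxrwxrwx 1 root root 21 Aug  9 09:00 /etc/resolv.conf -> /etc/resolv.conf.file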

For some reason ntpd didn’t like the symbolic link, which is a bit odd but changing it to an ordinary file fixed the problem.

Jul 30 2018
 

Alternatively, why does Windows use drive letters? Because if you are coming from an old Unix background, drive letters are just as weird as their absence is to someone coming from a Windows background.

I mean, why is Windows installed on drive C? What ever happened to drives A and B?

Technically Linux does have the equivalent of drive letters but they are rarely used directly (unless you’re weird like I am). For example I currently have an SD card plugged into my desktop system, and it has the path /dev/disk/by-label/EOS_DIGITAL (or /dev/sdo1).

Historically, Unix (which is loosely the predecessor of Linux) ran on large minicomputers where system administrators would decide what disks were “mounted” where.  The Linux equivalent of drive C is effectively “/” (root), and you can attach (or “mount”) disks at any point underneath that – for example /home.
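
For example (the device name here is made up), attaching the first partition of a second disk at /home is a single command :-

mount /dev/sdb1 /home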

This allowed people to use an old Unix machine without worrying where the disks were, and allowed system administrators to add and remove disks as and where they were needed. These days we are all system administrators as well as users – that little voice you hear from time to time saying things like “When would be a good time to update the operating system?” and “I must clean up those temporary files all over the place” is your inner system administrator speaking up.

And if you don’t hear that inner voice, cultivate it!

With device paths, Linux has the opportunity to create sensible friendly names for disks, but a historical accident has resulted in almost every kind of disk being identified as a SCSI disk – SATA disks (normal hard disks), SAS disks (server hard disks), Fibre Channel disks (SAN hard disks), and even USB storage devices all use SCSI commands.

So nearly all Linux disks are identified as /dev/sd followed by a letter (a “drive letter” – we can’t get away from them) and a number indicating the partition. Fortunately there is also the relatively new /dev/disk directory hierarchy, which has friendlier names for disk devices. If you are getting into low-level disk management, learn these directories; in particular, if you are looking into enterprise disk management, look at WWNs (each disk has a unique “World Wide Name”).
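
A quick way to see those friendlier names on your own system (not every directory appears unless you have matching devices) :-

ls -l /dev/disk/by-id /dev/disk/by-label /dev/disk/by-uuid /dev/disk/by-path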

Now back to Windows. Windows is the descendant of DOS, which goes back to the time when PCs might not have had hard disks and by default would have booted off a floppy disk in drive A, with a data disk in drive B. Later PCs came with hard disks, which used drive C on the assumption that you would have one or two floppy drives.

Windows has been updated over the years and there is a great deal of sophistication under the surface, but it does act a bit conservatively when it comes to drive letters – A and B are by default reserved for floppy drives even though I haven’t seen one of those on an ordinary system for years. You can use A and B for other purposes such as mapping network drives – A makes a good letter for a NAS drive.
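
For example, mapping a NAS share to A is a one-liner at the Windows command prompt (the server and share names here are made up) :-

net use A: \\nas\share /persistent:yes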

If we get away from the terminology of “drive letters” and “device paths” and instead refer to them as “storage device names”, both Linux and Windows have “storage device names” but Linux prefers to hide that level of detail.

Personally I prefer the Linux way, but whatever floats your boat.

Jul 24 2018
 

As someone who has spent far too much time dealing with the Domain Name System, I get kind of miffed when people insist on creating names that conflict with the DNS ordering. You see the DNS naming works from right-to-left (the wrong way around if you’re reading this in English).

Take the name for this site – really.zonky.org – which is admittedly a rather quirky name. The most significant part of the name is at the right (org – and yes I’m ignoring the really significant and invisible “dot”). The next most significant part (zonky) specifies what organisation has registered the site (me), and the least significant part (really) points to one service at that organisation.

So when people ask for names that break that ordering it is ever so slightly irritating – for example if you have a service called mail.zonky.org and wanted a test service you might request mail-test.zonky.org, which breaks the ordering of things. As an alternative, test.mail.zonky.org doesn’t break the naming, looks a bit nicer, and is ultimately more flexible.

Let us look at a slightly more complex example; let’s assume that we have a domain called db.zonky.org and want to register a service name for each database. We could register names such as db-addresses.zonky.org and db-orders.zonky.org, or we could register them instead as addresses.db.zonky.org and orders.db.zonky.org. In the latter case, I can very quickly write a firewall rule that allows access to *.db.zonky.org (whereas db-*.zonky.org would not work).
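
As a sketch of what that looks like, the hierarchical names are just ordinary records within the parent zone – no extra delegation is required (the record data here is illustrative) :-

; fragment of a zone file for zonky.org
addresses.db    IN A    192.0.2.10
orders.db       IN A    192.0.2.11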

Ultimately, suggest names in DNS naming order unless you can justify why that ordering is not suitable.

 

Jun 12 2018
 

This posting is about using the command-line ssh tool for relatively securely copying stuff around, and logging into devices. Many of the tips contained within are things I have had to pry out of the manual page for my own use and these notes are a way of keeping the information around without relying on my brain.

#1: It Comes With Windows

If you are running the latest version of Windows 10, you get the command-line versions of ssh and scp without dropping into the Linux shell.
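
For example, from a PowerShell prompt (the hostname here is made up) :-

PS C:\Users\mike> ssh mike@server.example.org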

Of course you have been able to install ssh clients for Windows for years or even decades, but having it available by default is a big win. Particularly for Windows machines you don’t tweak with your favourite applications.

#2: Public/Private Key Authentication

This is the first part of increasing security by permitting only key authentication, so that password brute-force attacks become impossible. With the assistance of an ssh agent (not covered here) or a passphrase-less key pair (not advisable), it is no longer necessary to enter a password.

Of course getting into this sort of thing can be very confusing especially as most instructions tend to get into far too much detail on the cryptography involved. To keep it simple, I shall avoid going on about the cryptography, and concentrate on how to get it to work.

The most important thing to remember about key authentication is that there are two keys – the private key (which should be kept as secure as possible on the client machine) and the public key (which is copied to the devices you want to connect to).

So to get started, you first need to generate a key pair, which can be done with ssh-keygen; this has lots of options, but at this point you can ignore them. After you enter the command, you can simply hit return at all the prompts to generate a key pair :-

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/mike/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/mike/.ssh/id_rsa.
Your public key has been saved in /home/mike/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:REMOVED mike@Michelin
The key's randomart image is:
+---[RSA 2048]----+
|=*+o ..  .B*..=o |
|o+++.  . =+o. o+.|
|.BE.+   + .  =  .|
|o+=& . . .    o  |
|. o +   S    .   |
|     .           |
|    SS           |
|              .  |
|     --          |
+----[SHA256]-----+

Of course this is not ideal because there is no passphrase, but to get started with, that’s fine. You can ignore most of this output (except for the first item in the following list), but just in case :-

  1. The key pair is saved in the files ~/.ssh/id_rsa (the private key) and ~/.ssh/id_rsa.pub (the public key). The permissions are usually generated properly, but just to be safe you may want to reset them anyway: chmod 0700 ~/.ssh; chmod 0400 ~/.ssh/id_rsa
  2. The key fingerprint can be used to check that when you are connecting that the keys haven’t changed unexpectedly.
  3. Alternatively (and a slightly more reasonable check) you can check the fingerprint using the “randomart”.

Of course on its own, the key pair doesn’t do much good. You have to copy the public key into place on the machine you wish to authenticate to :-

$ ssh username@server mkdir .ssh
$ cat ~/.ssh/id_rsa.pub | ssh username@server cat ">>" .ssh/authorized_keys

Note the quotes around the “>>”; these are significant because you do not want the local machine’s shell to interpret them – they need to be interpreted by the remote machine’s shell. Normally I would simply “scp” the file into place, but appending to a supposedly non-existent file is safer – just in case it does exist and does contain public keys that are currently in use.
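
Alternatively the ssh-copy-id utility, which ships with OpenSSH, automates much the same thing :-

$ ssh-copy-id -i ~/.ssh/id_rsa.pub username@server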

There are a whole bunch of options to ssh-keygen, but the two most important ones are :-

  1. The -t option which is used to specify the key type to generate (dsa, rsa, ecdsa, and ed25519). This is mostly unnecessary, but some older and limited devices do not understand certain key types. And as time goes on, more key types will be declared “insecure”. So you may sometimes find the need to generate more secure keys. The simplest (but not very efficient) process for dealing with such situations is to generate a key for each key type and try each one in turn.
  2. The -f option which is used to specify the output filenames – the private key is saved under the name ‘filename’ and the public key under the name ‘filename.pub’ (see the example below).
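
For example, to generate an ed25519 key pair under its own filename (a minimal sketch) :-

$ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519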

#3: SSH Configuration File and Usernames

There are a ton of things that can be done with the ssh configuration file, but for this section I’ll stick with setting the username used to log in to specific hosts – not because this is the most interesting thing that can be done, but because it is quite useful.

The configuration file can be found (if it has been created) at ~/.ssh/config (with a system-wide version at /etc/ssh/ssh_config). Within that file, you can set global preferences, or host specific preferences :-

User fred

Host router
  User admin
Host dns*
  User fxb
Host ds-* web-*
  User baileyf
Host *
  User fred

The first line (User fred) instructs ssh to use the username ‘fred’ when no username is specified – ssh 192.168.77.98 effectively becomes ssh fred@192.168.77.98.

If you specify a username within a Host section, that username is used for any host that matches the specification following the Host keyword. In the first case (“Host router”) the username “admin” will be used for any host called “router” but not “router.some.domain”.

In the case of the second clause, a wildcard is used, which is very useful for specifying a range of hosts – the example can match “dns01”, “dns01.some.domain”, or even “dns02”. In fact the first Host section is an example of what you should not do – a single hostname without a wildcard will only activate if the hostname is specified exactly as given. Put a wildcard in there, and it will work whether you use the short hostname or the fully qualified domain name.

You can also have more than one host specification – as in the “ds-* web-*” list.

And lastly you can (if you choose) use the Host declaration to specify a set of default values – in much the same way that configuration settings in the global context specify default values. Use whatever method you choose.

#4: Cryptographic Incompatibility

I have commented elsewhere on this, but basically the ssh developers have chosen to disable weak encryption by default. Personally I would prefer that ssh throw up huge warnings about weak cryptography, but what is done is done.

If you need to connect to something with weak cryptography, there are three potential ‘fixes’ to allow connections. Each of these is a keyword to add to a specific host section, followed by a specification of what ‘algorithm’ to add.

In each case, a connection attempt will give an indication of what is wrong together with an indication of what algorithm to include :-

» ssh admin@${someswitch}
Unable to negotiate with ${ip} port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

In this case, we can see that it is the KexAlgorithms we need to adjust and the algorithm we need to add is “diffie-hellman-group1-sha1” :-

Host someswitch*
  KexAlgorithms +diffie-hellman-group1-sha1

This can be repeated for Ciphers and (rarely) MACs.

#5: X11 and Port Forwarding

Run X11 gooey programs over an ssh connection? Of course .. why not?

This can be enabled on a host-by-host basis (it is off by default because it can be insecure) using the configuration file :-

Host pica*
  ForwardX11 yes
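
For a one-off connection, the -X option on the command line does the same thing :-

$ ssh -X pica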

This is just a special case of port forwarding where a network port is connected (via the ssh session) to a remote network port. Port forwarding can be very useful – for example to access an internal web site temporarily that isn’t (and probably shouldn’t be) exposed with a hole through the firewall.

Of course this can be done with a VPN, but ssh may be simpler :-

Host pica*
  LocalForward 8000 localhost:8000

When the connection is made, a local port is opened (tcp/8000) and connected to tcp/8000 on the machine you are logging into.
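
The command-line equivalent, for a one-off connection :-

$ ssh -L 8000:localhost:8000 pica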
 

Jun 05 2018
 

As the subject says, this blog has been offline for just over a week because of a hardware failure. Just when I wanted to moan about all the GDPR hissy fits that people are throwing.

Noticed some websites are blocking you because of the GDPR?

That’s the hissy fit. Seems that some international web site operators who previously assumed that GDPR didn’t apply to them, are suddenly realising that it does. Which is an indication that they have been impersonating an ostrich for a couple of years now.

Smaller businesses get a free pass on that one, but any reasonably sized company should have been aware of GDPR by now. It was enacted and then deliberately put on hold for two years to give people time to start complying with it. Anyone involved in the security business has been hearing “GDPR” for over two years now.

So there are those who claim they’ve not heard of it, and are now panicking, trying to catch up, making a mountain out of a molehill, and claiming that it’s a dumb law. Technically it isn’t a national law at all but an EU regulation, which applies directly in every member state.

Anyway onto some of the biggest arguments against the GDPR …

The Whois Question

This is a great example of what happens when you ignore a situation and then panic.

When you register a domain (such as zonky.org) or a netblock (a set of IP addresses), you are expected to provide contact details for the individual(s) involved in the registration process – to allow for billing, and contact to be made in the event of operational issues.

Storing that information is perfectly reasonable.

Publishing that information is perfectly reasonable given informed consent.

Ideally the domain registration would offer a choice to the registrant – public listing of personal details, public listing of role contact information, or public listing of indirect contacts (i.e. keeping the contact details private).

There is a German court decision saying that it isn’t necessary to collect contact information when registering a domain; all I can say is that the German court obviously didn’t have the full facts.

GDPR’s “Right To Be Forgotten”

One of the misconceptions is that the “right to be forgotten” is an absolute human right; for a start it’s not a human right, but a right under the law. And it is not absolute; the text of the GDPR includes numerous exceptions to the right to be forgotten, such as :-

  • A legal or regulatory obligation to keep the personal information.
  • An overriding public interest.
  • Ongoing legitimate business processes still require that personal information.

The key is that if you are an ethical business (in particular don’t plan to sell personal information and/or keep spamming people) then the right to be forgotten isn’t anything to worry about.

GDPR: The Fines

The strange thing is that there is doubt over the level of fines that can be levied under the GDPR, which is remarkable as the language is quite clear – the lower level of breach can attract a fine of up to either €10 million or 2% of annual turnover.

Or to put it another way, for the lower level of breach, the maximum fine is whichever is greater – €10 million or 2% of annual turnover. The maximum.

Do you know how often the ICO has imposed the maximum level of fine under existing legislation? Never.

The Jurisdiction Issue

Now here there are some legitimate grounds for grievance; after all, whenever the US starts imposing its laws outside of the US, people outside the US start jumping up and down. And yes, the EU does expect non-EU companies to obey the GDPR if they store data on EU citizens.

In practice, the EU isn’t going to try going after small companies outside the EU; particularly not small companies that are just ordinary business and not engaged in Cambridge Analytica type business.

The other way of looking at the global reach of the GDPR is whether it would be a good idea for there to be a world-wide law in relation to the protection of personal information. The Internet means that world-wide laws are necessary in this area, or those abusing personal information will merely move to the jurisdiction with the weakest protection of personal information.


May 04 2018
 

I had the pleasure of upgrading a server today which involved fixing a number of little niggles; one of which was that connecting to switches suddenly stopped working :-

✗ msm@${server}» ssh admin@${someswitch}
Unable to negotiate with ${ip} port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

This was relatively easily fixed :-

✗ msm@${server}» ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 admin@${someswitch}
Password: 

Of course doing this command-by-command is a little tedious, so a more permanent solution is to re-enable all the supported key exchange algorithms. The relevant algorithms can be listed with ssh -Q kex, and they can be added to the server-wide client configuration in /etc/ssh/ssh_config :-

Host *
    KexAlgorithms ${comma-separated-list}

But Why?

According to the OpenSSH developers, the latest versions of ssh refuse to use certain key exchange algorithms (and other cryptographic ‘functions’).

Their intention is perfectly reasonable – by default the software refuses to use known weak crypto. I’m fully behind the idea of discouraging the use of weak crypto.

But the effect of disabling weak crypto in the client is unfortunate – all of a sudden people are unable to connect to certain devices. The developers suggest that the best way of fixing the problem is to upgrade the server so that it supports strong cryptography.

I fully agree, but there are problems with that :-

  1. Some of the devices may very well be unsupported with no means to upgrade the ssh dæmon. Now in an ideal world, these devices wouldn’t be on the network, but in the real world there are such devices on the network.
  2. Some devices may not be capable of being upgraded because of processor or memory limitations. Network switches are notorious for having slow processors and tiny amounts of memory, and it is entirely possible that such a device would not be capable of running more exotic and modern crypto. Similarly lights out management processors are often severely limited.
  3. Even if a device is capable of being upgraded, there are the standard problems – the vendor may be slow at releasing updates, change control gets in the way, and lastly resourcing may be an issue – upgrading several hundred switches manually with just one or two people doing it is not going to be a quick job.

Lastly, whilst security is important, breaking things just to make a point is a little extreme. Whilst it is possible to fix the problem, it is something that isn’t immediately obvious to someone who doesn’t routinely configure ssh. And someone, somewhere has had this breakage occur just before they really need to fiddle with a switch Right Now.

There is a far better option available – leave the weak crypto enabled, but warn noisily about its use :-

WARNING!!!!! (2 second delay)
WARNING!!!!! (2 second delay)

The device you are connecting to only supports known weak crypto which means this connection
is subject to interception by an attacker.

You should look at upgrading the device as soon as possible.

Telling people what is wrong noisily and continuing to work is far better than simply breaking with a rather terse message.


 

Apr 01 2018
 

This is a continuation of an earlier post regarding ECC memory under Linux, describing how I added a little widget to display the current ECC memory status. Because I don’t really know Lua, most of the work is carried out by a shell script that is run via cron on a frequent basis.

The shell script simply runs edac-util to obtain the number of correctable errors and uncorrectable errors, and formats the numbers in a way suitable for setting the text of a widget :-

#!/bin/zsh
#
# Use edac-util to report some numbers to display ...

correctables=$(edac-util --report=ce | awk '{print $NF}')
uncorrectables=$(edac-util --report=ue | awk '{print $NF}')

c="chartreuse"
if [[ "$correctables" != "0" ]]
then 
  c="orange"
fi
if [[ "$uncorrectables" != "0" ]]
then
  c="red"
fi

echo "ECC: $correctables/$uncorrectables "

This is run with a crontab entry :-

*/7 * * * * /site/scripts/gen-ecc-wtext > /home/mike/lib/awesome/widget-texts/ecc-status

Once the file is being generated, the Awesome configuration can take effect :-

-- The following function does what it says and is used in a number of dumb widgets
-- to gather strings from shell scripts
function readfiletostring (filename)
  local file = io.open(filename, "r")
  local s = file:read()   -- read the single line the script writes
  file:close()
  return s
end

eccstatus = wibox.widget.textbox()
eccstatus:set_markup(readfiletostring(homedir .. "/lib/awesome/widget-texts/ecc-status"))
eccstatustimer = timer({ timeout = 60 })
eccstatustimer:connect_signal("timeout",
  function()
      eccstatus:set_markup(readfiletostring(homedir .. "/lib/awesome/widget-texts/ecc-status"))
  end
)
eccstatustimer:start()
...
layout = wibox.layout.fixed.horizontal, ... eccstatus, ...

There are plenty of ways this could be improved – there’s nothing here that really requires a separate shell script – but this works, which is good enough for now.
