May 04 2018
 

I had the pleasure of upgrading a server today which involved fixing a number of little niggles; one of which was that connecting to switches suddenly stopped working :-

✗ msm@${server}» ssh admin@${someswitch}
Unable to negotiate with ${ip} port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

This was relatively easily fixed :-

✗ msm@${server}» ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 admin@${someswitch}
Password: 

Of course doing this command-by-command is a little tedious, so a more permanent solution is to re-enable all the supported key exchange algorithms. The relevant algorithms can be listed with ssh -Q kex, and they can be set in the system-wide client configuration in /etc/ssh/ssh_config :-

Host *
    KexAlgorithms ${comma-separated-list}
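
Rather than re-enabling weak algorithms for every host, a better approach is to scope the override to just the hosts that need it. A minimal sketch (the host pattern here is of course illustrative) :-

Host oldswitch*.example.org
    KexAlgorithms +diffie-hellman-group1-sha1

The “+” prefix appends to the default list rather than replacing it, so connections to modern hosts still negotiate the strong algorithms.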

But Why?

According to the OpenSSH developers, the latest versions of ssh refuse to use certain key exchange algorithms (and other cryptographic ‘functions’).

Their intention is perfectly reasonable – by default the software refuses to use known weak crypto. I’m fully behind the idea of discouraging the use of weak crypto.

But the effect of disabling weak crypto in the client is unfortunate – all of a sudden people are unable to connect to certain devices. The developers suggest that the best way of fixing the problem is to upgrade the server so that it supports strong cryptography.

I fully agree, but there are problems with that :-

  1. Some of the devices may very well be unsupported with no means to upgrade the ssh dæmon. Now in an ideal world, these devices wouldn’t be on the network, but in the real world there are such devices on the network.
  2. Some devices may not be capable of being upgraded because of processor or memory limitations. Network switches are notorious for having slow processors and tiny amounts of memory, and it is entirely possible that such a device would not be capable of running more exotic and modern crypto. Similarly, lights-out management processors are often severely limited.
  3. Even if a device is capable of being upgraded, there are the standard problems – the vendor may be slow at releasing updates, change control gets in the way, and lastly resourcing may be an issue – upgrading several hundred switches manually with just one or two people doing it is not going to be a quick job.

Lastly, whilst security is important, breaking things just to make a point is a little extreme. The problem can be fixed, but the fix isn’t immediately obvious to someone who doesn’t routinely configure ssh. And someone, somewhere has had this breakage occur just before they really needed to fiddle with a switch Right Now.

There is a far better option available – leave the weak crypto enabled, but warn noisily about its use :-

WARNING!!!!! (2 second delay)
WARNING!!!!! (2 second delay)

The device you are connecting to only supports known weak crypto which means this connection
is subject to interception by an attacker.

You should look at upgrading the device as soon as possible.

Telling people what is wrong noisily and continuing to work is far better than simply breaking with a rather terse message.

[Image: Foggy Reflection]

 

Apr 01 2018
 

This is a continuation of an earlier post regarding ECC memory under Linux, and describes how I added a little widget to display the current ECC memory status. Because I don’t really know Lua, most of the work is carried out by a shell script that is run via cron on a frequent basis.

The shell script simply runs edac-util to obtain the number of correctable errors and uncorrectable errors, and formats the numbers (together with a colour) in a way suitable for setting the markup of a widget :-

#!/bin/zsh
#
# Use edac-util to report some numbers to display ...

correctables=$(edac-util --report=ce | awk '{print $NF}')
uncorrectables=$(edac-util --report=ue | awk '{print $NF}')

# Pick a colour: green when clean, orange if there have been
# correctable errors, red if anything uncorrectable has occurred.
c="chartreuse"
if [[ "$correctables" != "0" ]]
then
  c="orange"
fi
if [[ "$uncorrectables" != "0" ]]
then
  c="red"
fi

# Emit Pango markup so the colour actually gets used by the widget
echo "<span foreground=\"$c\">ECC: $correctables/$uncorrectables</span>"
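
With no errors logged, the script produces something like the following (the colour changing as errors accumulate) :-

<span foreground="chartreuse">ECC: 0/0</span>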

This is run with a crontab entry :-

*/7 * * * * /site/scripts/gen-ecc-wtext > /home/mike/lib/awesome/widget-texts/ecc-status

Once the file is being generated, the Awesome configuration can take effect :-

-- The following function does what it says and is used in a number of dumb widgets
-- to gather strings from shell scripts
function readfiletostring (filename)
  local file = io.open(filename, "r")
  local s = file:read("*line")
  file:close()
  return s
end

eccstatus = wibox.widget.textbox()
eccstatus:set_markup(readfiletostring(homedir .. "/lib/awesome/widget-texts/ecc-status"))
eccstatustimer = timer({ timeout = 60 })
eccstatustimer:connect_signal("timeout",
  function()
      eccstatus:set_markup(readfiletostring(homedir .. "/lib/awesome/widget-texts/ecc-status"))
  end
)
eccstatustimer:start()
...
layout = wibox.layout.fixed.horizontal, ... eccstatus, ...

There are plenty of ways this could be improved – there’s nothing here that really requires a separate shell script, for instance – but this works, which is good enough for now.

Mar 29 2018
 

For some reason when I look at RADIUS packet captures using Wireshark, the attribute Operator_Name is instead interpreted as Multi-Link-Flag (an integer rather than a string). I’m not sure why this is, but it is much more useful to me to be able to see Operator_Name properly – and, for example, to filter on it.

It turns out this is easy to “fix” (if it is a fix) :-

  1. Find the file radius/dictionary.usr (mine was /usr/share/wireshark/radius/dictionary.usr)
  2. Edit that file, and comment out the three lines containing “Multi-Link-Flag”, which in my case appeared as :-
    ATTRIBUTE Multi-Link-Flag 126 integer
    VALUE Multi-Link-Flag True 1
    VALUE Multi-Link-Flag False 0
  3. Save the modified file (the commented-out result is shown below).
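
Once commented out, those three lines simply look like :-

#ATTRIBUTE Multi-Link-Flag 126 integer
#VALUE Multi-Link-Flag True 1
#VALUE Multi-Link-Flag False 0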

After a restart, Wireshark now decodes the attribute as Operator_Name.

It is possible that later versions of Wireshark have fixed this, or not – the underlying problem may well be down to whoever assigned RADIUS attribute codes, as attribute 126 appears to be claimed both by the old USR Multi-Link-Flag attribute and by Operator-Name!

Mar 25 2018
 

It seems likely that the company Cambridge Analytica paid Facebook for access to data and, using that access, downloaded as much data as possible for nefarious purposes. Nobody should be that surprised at this.

Facebook does not host an enormously expensive social network just because it is fun; it does it to make money. It probably does this primarily through advertising, but selling access to social network data is always going to take place.

And from time to time, scandals involving companies like Cambridge Analytica are going to take place. At which point Facebook will protest, saying that it didn’t realise the associated firm was doing such naughty things. And once the story drops out of the news, Facebook will carry on leaking data.

As the saying goes: “If you are not paying for it, you are the product.”

In the end, the only solution to something like this, is to produce some kind of peer-to-peer application that is as easy to use as Facebook, uses strong end-to-end encryption, and keeps our data private to those people and groups we choose to share it with.

[Image: The Hole]

Mar 09 2018
 

One of the things that annoys me about pagers such as less, more, most, etc. is that they are dumb in the sense that they cannot detect the format of the text file they are displaying. For example, all of a sudden I find myself reading lots of markdown-formatted files, and I find myself using most to display them – never remembering that it is mdv I want.

As it happens, when I invoke a pager at the shell prompt, I typically use an alias (page or pg) to invoke a preferred pager, and by extending this functionality into a function I can start to approach what I want :-

function extension {
  # Strip everything up to (and including) the last dot, leaving the extension
  printf "%s\n" ${argv/*\./}
}

function page {
  if [[ -z $argv ]]
  then
    $PAGER
  else
    case $(extension $argv) in
      "md")
        mdv -A $argv | $PAGER
        ;;
      "man")
        groff -m mandoc -Tutf8 $argv | $PAGER
        ;;
      *)
        $PAGER $argv
        ;;
    esac
  fi
}

Of course there are undoubtedly umpteen errors in that, and probably better ways to do it too. And it won’t work properly on its own ($PAGER hasn’t been set).
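For example, a minimal setup might look like this (the choice of pager, and the file names, are illustrative) :-

export PAGER=most
alias pg=page

pg README.md     # markdown rendered via mdv
pg intro.man     # man-formatted source rendered via groff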
But it’s the start of something I can use to display all sorts of text files in a terminal window without having to remember all those commands. But as for ‘intelligent’, nope it’s not that – just a bit smarter than the average pager.

Feb 08 2018
 

Some time ago, I wrote about using new (for the time) partition tables to create a memory stick with 100 partitions, each with a mountable file system on it. And I decided the time was right to have another look to see if things have improved … or degraded. After all, things have moved on, and everything has been updated.

I also improved the creation script slightly :-

#!/bin/zsh

disk=/dev/sdb

# Label the disk GPT, then create 99 partitions of 99MB each
# (parted treats bare numbers as megabytes), putting a FAT
# filesystem labelled DOOM<n> on each one.
parted $disk mklabel gpt
for x in {1..99}
do
  echo Partition: $x
  parted -s $disk mkpart FAT $(($x * 100)) $(($x * 100 + 99))
  sleep 0.2
  mkfs -t vfat -n DOOM${x} ${disk}${x}
  sleep 0.2
done

And I used a zsh-ism – so shoot me.
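
Once it has finished, the result can be sanity-checked with the usual tools :-

parted /dev/sdb print
lsblk /dev/sdb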

The script ran fairly well, but :-

  1. The load average shot up through the roof as copies of systemd-udevd started, worked, and closed.
  2. Strangely the links in /dev/disk/by-label (and presumably elsewhere) kept disappearing and re-appearing. As if on each partition change to the disk, all of the disk’s devices were removed and re-created. This is probably not dangerous, but harmful to performance.
  3. Given that I used sleep within my script, it is hard to criticise performance, but it did seem slow. However this is not an area worth optimising for.
  4. Unlike last time, Linux did not refuse to create any file systems.

Now onto trying to stick the memory stick of doom into various systems…

Ubuntu 17.10

This was of course the machine I ran the script on initially.

This did not go so well, with the machine initially freezing momentarily (although it is a cheap and nasty laptop), apparently silently refusing to mount half the file systems, and “Files” (or Nautilus) getting wedged at 100% processor usage.

After some 10 minutes, Nautilus was still stuck with no signs of making any progress.

After I lost patience and restarted “Files”, it came up okay showing the mounted file systems and showing the file systems it had failed to mount. On one occasion the additional file systems were shown as unmounted (and could be mounted) and on another they were shown as mounted (even though they weren’t).

So “Files” gets a thumbs down for getting stuck, and whatever was doing the mounting gets a thumbs down for trying and failing (silently) to mount all the file systems.

This is definitely a serious degradation from the previous try, although probably GNOME-specific rather than Linux-specific. Especially as I later mounted all the file systems from the command-line on a different system without an issue.

Windows 10

Windows 10 became unusually sluggish, although it may have been in the mysterious “we’ll run Windows update at the most inconvenient time possible” mode. It did attempt to mount the file systems, and failed miserably – it mounted partitions until it ran out of drive letters and silently ignored the rest.

Which is just about understandable, as there aren’t 100 drive letters. However :-

  1. Where was the message saying “There are 100 partitions in this silly USB stick. You can see the first 22; additional ones can be mounted within folders if there is important data on them.”?
  2. Why is Windows still limiting itself with single letter device names? Okay it is what we’re used to, but when you run out of drive letters, start using the file system label – “DOOM99:”. Hell, I’d like all my removable disks treated that way under Windows.

As for the whole “ran out of drive letters, so don’t bother with the rest”, how many people are aware that drives can be mounted (as Unix does) in directories?

macOS 10.13 (OSX)

Oddly enough (but perhaps sensibly), macOS refused to have anything to do with the memory stick. Indeed it popped up a dialog suggesting initialising the disk, which is perhaps not particularly sensible with a disk that could contain data.

The “Disk Utility” happily showed the disk – making the window inconveniently wide in the process – and indicated all 99 partitions.

At the Terminal prompt, it was apparent that the operating system had created device files for each of the partitions, but for some reason wouldn’t mount them.

Summary

Inserting a “stick of doom” with 100 partitions on it into any machine is still a risky thing to do. It’s also a dumb thing to do, but something operating system developers should be doing.

Linux (or rather GNOME) performs significantly worse this time around than previously, and my suspicions are that systemd is to blame.

But however bad Linux does, none of the operating systems actually do sensible things with the “stick of doom”. macOS arguably comes closest with refusing to have anything to do with the disk, but it also encourages you to reformat the disk without saying that it could be erasing data.

Ideally, a gooey would pop up a window listing the file system labels and ask you which you want to mount. That’s not even a bad idea for a more sensibly set up memory stick.

[Image: Pebble On Steel]

Feb 02 2018
 

On occasion, I have run into issues where mounting a filesystem from /etc/fstab fails on a reboot because it depends on something else happening first. The easiest example to recall is when mounting a conventional filesystem constructed from a ZPool block device – the block device isn’t ready until ZFS has finished starting, which often occurs after the filesystem mounts are attempted.

The fix is dead simple; just add the option “_netdev” to the options field in /etc/fstab and the problem is sorted :-

/dev/zvol/pool1/vol-splunk      /opt/splunk     ext2    noatime,_netdev         0 2
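
On a system using systemd, an arguably cleaner alternative (a sketch – the exact unit to wait for may vary between distributions) is to declare the dependency explicitly :-

/dev/zvol/pool1/vol-splunk      /opt/splunk     ext2    noatime,x-systemd.requires=zfs.target   0 2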

Yes the reason I am using a block device is that Splunk doesn’t support being installed on a ZFS filesystem.

Jan 29 2018
 

I recently dived into the rabbit hole of educational computers and came across a site which made a big song and dance about how Python is a great deal more complicated than BASIC. Well that is perhaps arguably correct, but the comparison they made was grossly unfair :-

#!/usr/bin/env python
#-*- coding: UTF-8 -*-

from random import randint
from time import sleep
import sys

string = "Hello World!"
while True:
  attr = str(randint(30,48))
  out = "\x1b[%sm%s\x1b[0m" % (attr, string)
  sys.stdout.write(out)
  sys.stdout.flush()

  sleep(1)

Now for the criticisms :-

  1. The first line (“#!/usr/bin/env …”) is nothing to do with Python; and in fact a BASIC program should also include this if it wants to run in the same way as a Python program under Linux. The “#!” is in fact a directive to the Linux kernel to tell it what script to pass the rest of the file through.
  2. The second line (“# -*-…”) also has nothing to do with Python; it is a directive to an editor to tell it to use the UTF-8 character set. Why doesn’t the basic equivalent also include this?
  3. Now onto the Python itself … first of all there are a whole bunch of imports which are done in the verbose way just so that you can call sleep rather than time.sleep; I generally prefer the latter (which would result in import time rather than from time import sleep). But yes, in Python you have to import lots of stuff to get anything done, and it would be helpful for quick and dirty scripts if you could just import lots to get a fair amount of ordinary stuff loaded.
  4. The rest of the code is … um … obviously designed to make Python look bad and glossing over the fact that Python runs in the Linux runtime environment whereas the BASIC equivalent does not – it has a BASIC runtime environment.

That last point is worth going into more detail on – the BASIC code was written for a BASIC runtime environment, and one method of sending output to the screen. Linux has many ways of writing to the screen, and the chosen method above is perhaps historically the worst (it only works for devices that understand the escape sequences; there is a curses library for doing this properly).

So is Python unsuited to a quick and easy learning environment? A quick hackers language? As it is, perhaps not, but that is not quite what Python is designed to be. And with a suitable set of modules, Python could be suitable :-

import lots

while True:
  screen.ink(random.choice(inkcolours))
  screen.paper(random.choice(papercolours))
  screen.print("Hello World!")

  time.sleep(1)

(That’s entirely hypothetical of course as there is no “screen” module)

I’m not qualified to judge whether BASIC or Python are better languages for beginners – I’ve been programming for around 35 years, and the BASIC I remember was very primitive. But at least when you compare the two languages, make the comparison a fair one.

Jan 04 2018
 

Well, there’s another big and bad security vulnerability; actually there are three. These are known as Meltdown and Spectre (Spectre comes in two variants). There are all sorts of bits of information and misinformation out there at the moment, and this posting will be no different.

In short, nobody but those involved in the vulnerability research or implementing work-arounds within the well-known operating systems really knows these vulnerabilities well enough to say anything about them with complete accuracy.

The problem is that both vulnerabilities are exceptionally technical and require detailed knowledge of technicalities that most people are not familiar with. Even people who work in the IT industry.

Having said that I’m not likely to be 100% accurate, let’s dive in …

What Is Vulnerable?

For Meltdown, every modern Intel processor is vulnerable; in fact the only processors from Intel that are not vulnerable are only likely to be encountered in retro-computing. Processors from AMD and ARM are probably not vulnerable, although it is possible to configure at least one AMD processor in such a way that it becomes vulnerable.

It appears that more processors are likely to be vulnerable to the Spectre vulnerabilities. Exactly what is vulnerable is a bit of work to assess, and people are concentrating on the Meltdown vulnerability as it is more serious (although Spectre is itself serious enough to qualify for a catchy code name).

What Is The Fix?

Replace the processor. But wait until fixed ones have been produced.

However there is a work-around for the Meltdown vulnerability, which is an operating system patch (to fix the operating system) and a firmware patch (to fix the UEFI environment). All of the patches “fix” the problem by removing kernel memory from the user memory map, which stops user processes exploiting Meltdown to read kernel memory.
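
On Linux the work-around goes by the name of kernel page-table isolation (KPTI). On a sufficiently recent kernel you can check whether it is active with something like :-

grep . /sys/devices/system/cpu/vulnerabilities/*
dmesg | grep -i isolation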

Unfortunately there is a performance hit with this fix; every time you call the operating system (actually the kernel) to perform something, the memory map needs to be loaded with the kernel maps and re-loaded with the old map when the routine exits.

This “costs” between 5% and 30% when performing system calls. With very modern processors the performance hit will be closer to 5%, and with older processors closer to 30%.

Having said that, this only happens when calling the operating system kernel, and many applications may very well make relatively few system calls, in which case the performance hit will be barely noticeable. Nobody is entirely sure what the performance hit will be for real world use, but the best guesses say that most desktop applications will be fine with occasional exceptions (and the web browser is likely to be one); the big performance hit will be on the server.

How Serious Are They?

Meltdown is very serious not only because it allows a user process to read privileged data, but because it allows an attacker to effectively remove a standard attack mitigation (such as kernel address space layout randomisation) which makes many older-style attacks impracticable. Essentially it makes older-style attacks practicable again.

Although Spectre is still serious, it may be less so than Meltdown because an attacker needs to be able to control some data that the victim process uses to indulge in some speculative execution. In the case of browsers (for example) this is relatively easy, but in general it is not so easy.

It is also easier to fix and/or protect against on an individual application basis – expect browser patches shortly.

Some Technicalities

Within this section I will attempt to explain some of the technical aspects of the vulnerabilities. By all means skip to the summary if you wish.

The Processor?

Normally security vulnerabilities are found within software – the operating system, or a ‘layered product’ – something installed on top of the operating system such as an application, a helper application, or a run-time environment.

Less often we hear of vulnerabilities that involve hardware in some sense – requiring firmware updates to either the system itself, graphics cards, or network cards.

Similar to firmware updates, it is possible for microcode updates to fix problems with the processor’s instructions.

Unfortunately these vulnerabilities are not found within the processor instructions, but in the way that the processor executes those instructions. And no microcode update can fix this problem (although it is possible to weaken the side-channel attack by making the cache instructions execute in a fixed time).

Essentially the processor hardware needs to be re-designed and new processors released to fix this problem – you need a new processor. The patches for Meltdown and Spectre – both the ones available today, and those available in the future – are strictly speaking workarounds.

The Kernel and Address Space

Meltdown specifically targets the kernel and the kernel’s memory. But what is the kernel?

It is quite a common term in the Linux community, but every single mainstream operating system has the same split between kernel mode and user mode. Kernel mode has privileged access to the hardware, whereas user mode is prevented from accessing the hardware and indeed the memory of any other user process running. It would be easy to think of this as the operating system and user applications, but that would be technically incorrect.

Whilst the kernel is the operating system, plenty of software that runs in user mode is also part of the operating system. But the over-simplification will do because it contains a useful element of the truth.

Amongst other things the kernel address space contains many secrets that user mode software should not have access to. So why is the kernel mode address space overlaid upon the user mode address space?

One of the jobs that the kernel does when it starts a user mode process is give to that process a virtual view of the processor’s memory that entirely fills the processor’s memory addressing capability – even if that is more memory than the machine contains. The reasons for this can be ignored for the moment.

If real memory is allocated to a user process, it can be seen and used by that process and no other.

For performance reasons, the kernel includes its own memory within each user process (but protected). It isn’t necessary, but re-programming the memory management unit to map the kernel memory for each system call is slower than not doing so. And after all, memory protection should stop user processes reading kernel memory directly.

That is of course unless memory protection is broken …

Speculative Execution

Computer memory is much slower than modern processors, which is why we have cache memory – indeed multiple levels of cache memory. To improve performance processors have long been doing things that come under the umbrella of ‘speculative execution’.

If for example we have the following sample of pseudo-code :-

load variable A from memory location A-in-memory
if A is zero
then
  do one thing
else
  do another
endif

Because memory is so slow, a processor running this code could stop whilst it is waiting for the memory location to be read. This is how processors of old worked, and is often how processor execution is taught – the next step is where things start getting really weird.

However it could also execute the code assuming that A will be zero (or not, or even both), so it has the results ready for once the memory has been read. Now there are some obvious limitations to this – the processor can't turn your screen green assuming that A is zero, but it can sometimes get some useful work done.

The problem (with both Meltdown and Spectre) is that speculative execution seems to bypass the various forms of memory protection. Whilst the speculative results are discarded once the memory is properly read and the memory protection kicks in, there is a side-channel attack that allows some of the details of the speculative results to be sniffed by an attacker. Essentially, speculative reads pull data into the processor cache, and by carefully timing memory accesses afterwards an attacker can work out which cache lines were touched – and hence infer data they should not be able to see.

 

Summary

  1. Don't panic! These attacks are not currently in use and because of the complexity it will take some time for the attacks to appear in the wild.
  2. Intel processors are vulnerable to Meltdown, and will need a patch to apply a work-around. Apply the patch as soon as it comes out even if it hurts performance.
  3. The performance hit is likely to be significant only on a small set of applications, and in general only significant on a macro-scale – if you run as many servers as Google, you will have to buy more servers soon.
  4. Things are a little more vague with Spectre, but it seems likely that individual applications will need to be patched to remove their vulnerability. Expect more patches.

[Image: Tunnel To The Old Town]

 

 

Dec 10 2017
 

If you take a look at a modern keyboard, there will be more than a passing resemblance to the IBM PC/AT keyboard of 1984. The differences are relatively minor – the keyboard may have shrunk slightly in terms of the non-functional bezel, there may be some additional media keys (typically above the number pad), and the overall construction will probably have been made a lot cheaper (the PC/AT was an expensive system and the keyboard was expensive too).

 

(The pictured mainframe keyboard is not a PC/AT keyboard but does have a half-reasonable number of keys)

But very little about the keyboard layout has changed. Oh there are variants such as the ten-key-less keyboard where the number pad has been removed, or even more extreme 60% keyboards which do away with the navigation keys as well, but overall the layout is still pretty much the same.

The very first thing to say is that ergonomically, keyboards are too wide, which forces you to move your mouse further out than is comfortable. This is where the age of the PC/AT keyboard shows; at the time it was designed, mice and gooey interfaces were a rarity and everyone’s hands were nailed to the keyboard. This is the reason why the ten-key-less keyboards exist, and from experience of using both of them, and a modular keyboard with the number pad on the left, I can say that a narrower keyboard is more comfortable when taking the mouse into consideration.

But I like big keyboards (as you can tell from the picture), or more specifically I like keyboards with plenty of keys. A keyboard can have plenty of keys without being wide if it is deep. Changing keyboard layouts is contentious, but as someone who has used a wildly different set of keyboards I can say it is perfectly possible to get used to different layouts when those different layouts involve changing the non-touch-typing keys.

That is not to say that changing the touch-typing keys should not be considered; for one thing the staggered layout of the old QWERTY keyboard does make things tricky so orthogonal layouts should be considered.

Now onto some specifics …

Relabelling

In some cases, keys have been labelled the way they are just because that is always the way it has been done. Which is a damn silly reason especially when the name is not only inscrutable but wrong.

For example, Backspace is by description (and historically) a key that should move the cursor back one space to allow typewritten text to be overwritten – you could get an umlaut over an ‘A’ by typing A, Backspace, “ which would get you a very rough approximation of ä. Which is not what the key on our modern keyboard does – it rubs out a mistake, and some old keyboards labelled it properly as Rubout. I have also moved it to just above the Enter key which is traditional on Unix-layout keyboards which is not a bad idea more generally – it is still in a prominent position, and by reducing its size slightly we have room for an additional key in the main section of the keyboard.

The PrtScn key is one of those inscrutable keys; nobody who wasn’t around in the early days knows what it did. Pressing it would send the text contents of the screen to a printer. There are two reasons why we should relabel it Screen Copy – firstly that is what it does (it copies the screen contents to the clipboard), and secondly it gives people who don’t know what PrtScn does a fighting chance of discovering a useful feature.

In a similar way, it would be helpful to add Next Field to the Tab key as a description of one of its more useful functions. You can hear my teeth grinding every time someone takes their hands off the keyboard, uses the mouse to click in the next field, and then types again when one simple press of the Tab key will do all that for them. Of course the original use is still there and used within word-processors.

Finally, the Esc key has been moved to its traditional position, and I have added to its label what is effectively its most common usage – Cancel.

The right Alt key is often configured as an AltGr key to allow it to be used in combination with other keys to generate characters not found on the keyboard – such as æ, þ, or œ (all of which should be used in English but rarely are because they are so difficult to type).

I have not been able to resist relabelling the Win keys to Super keys, which is what they are configured for in Linux (and used for much the same purpose).

Moving/Shrinking Keys

Why do both Shift keys have to be so big? It is well understood that inserting an extra key between Z and the left shift is unpopular because you have to stretch further for the Shift, but keeping it in position and adding a new key to the left (here a small Caps Lock) would work.

And on the subject of Caps Lock, why give such a prominent position, right next to A, to such a rarely used function? EXCEPT FOR THOSE WHO INSIST ON SHOUTING! Of course, moving the Caps Lock key somewhere else may just lead to less shouting. And it allows a very common request amongst those who use it a lot – moving the Control key back to its traditional position.

Some “New” Keys

Where is the Help key? We all know that F1 almost always functions as a help key, but why not have a dedicated Help key when the keyboard standard allows for it?

And in these days of increased concern over security, why don’t we add a Lock Screen button? Whilst it may not seem that important at home, in a corporate environment it should be mandatory, and it is not a bad idea in a home environment either.

The Cut, Copy, and Paste keys do the equivalent of Control-X, -C, -V, which might seem unnecessary but not everyone knows the keyboard shortcuts. Besides which, in edge cases the control key shortcuts are used for other purposes.

Most of the media control keys in the top right are pretty much standard if labelled differently. I have merged the up/down keys – so rather than use two keys to control the volume, you use one key (unshifted is down and shifted is up); I have “added” Bright ± and Contrast ± which are commonly found on laptop keyboards as Function sequences, but why shouldn’t they have their own dedicated keys and appear on desktop keyboards too?

The smiley key (😀) is a feature stolen from smartphones – an easy way to pick and select emoticons. I envision it popping up a dialog box to allow the arrow keys to move onto the preferred emoticon and Enter used to insert that symbol.

The Compose key is copied from old keyboards and allows you to enter certain symbols by using keyboard sequences – for example Compose, ", a results in “ä” – and there are many possible sequences. It is a quick and easy way to type certain symbols.

And Find is also an obvious key to add – to search for things.

The Blank Keys

Also I have added a whole row of blank keys which would ideally be populated with re-legend-able keycaps (a clear plastic top which can be removed to insert a tiny scrap of paper with your preferred label). And they should be able to be programmed for whatever the owner of the keyboard wants.

Because many people have their own ideas on what should be on a keyboard.

Indeed with a proper keyboard controller (such as one from the keyboard enthusiasts‘ arena) any key could be programmed to send whatever you want.

Removing Keys

Don’t.

However much you believe a particular key is unused, there is probably some population of some type of computer user that uses that key more than you would believe possible. For example, I rarely use Scroll Lock (enough that I often use it as a custom key to control VirtualBox), but it is often used with Excel.

And I have seen suggestions that the grave/tilde (` and ~) should be removed because nobody uses it; well I use it a hell of a lot.
