Dec 08 2018
 

I recently bought a second-hand camera – but this is not specific to photography (although it is perhaps particularly relevant to it). The seller threw in an old SD card, which was nice of them (although unnecessary for me).

After doing the photo thing with the new-to-me camera, and having carefully replaced the SD card, it occurred to me that I could test a file recovery tool to see if there were any previously shot photos on the card.

I fired off photorec and came back 30 minutes later – not because it is particularly slow, but because I have spent far too much time watching the equivalent of a progress bar and would rather get on and do something useful.
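For what it is worth, the invocation is simple; the following is a sketch rather than my exact command (the device name /dev/sdb and the output directory are assumptions for illustration, and photorec still asks a few interactive questions about the partition and filesystem type) :-

sudo photorec /d ~/recovered /dev/sdb

The recovered files end up in numbered directories derived from the destination you give it.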

By the time I came back, it had recovered in excess of 1,000 images and videos. It turned out to be probably the most boring collection of photos you can imagine – an ordinary collection of family photos (not your own) would be interesting in comparison.

I won’t be including any of those recovered photos here because that would be unprofessional and potentially embarrassing to the camera seller (although they would most likely never find out). 

But you can easily imagine how such a recovery could be potentially embarrassing; even distressing. We usually choose whether a photo should be made public or not.

So how do you protect such things from happening? Is it sufficient to format a card in camera?

No it isn’t. Tools such as photorec are designed to recover images from cards where the images have been deleted or where the card has been formatted. Surprisingly enough, formatting a card does not overwrite all of the data blocks on a storage device; it merely replaces the data structures that allow an operating system to find files with a new, blank structure.
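You can see this for yourself without a recovery tool at all; the following is a sketch (assuming the freshly formatted card appears as /dev/sdb) that simply looks for readable text in the raw blocks of the device :-

sudo strings -n 10 /dev/sdb | head -20

On a card that has merely been formatted, this will usually turn up file names, EXIF fields, and other fragments of the supposedly deleted files.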

So what are the solutions to keep your private photos to yourself?

It should be emphasised that this is advice intended to protect you from personal embarrassment; if there are legal or risk-to-life issues involved, seek professional advice.

The first rather obvious solution is to never give away or sell old cards; if you want to dispose of the cards, destroy them. It is not as if you could recover much by selling them – who wants a 5-year-old 512Mbyte SD card?

If you do want to let others use your old cards, then use a special utility to destroy the contents completely; optionally (but nice for the recipient) you can then format the cards afterwards.

If you are using Windows (or macOS, although the following Linux recipe can be adapted), then you will need a tool such as SafeWiper. There are those who claim that the Windows format can do the job, but I wouldn’t trust it – the “quick format” option is the default, which definitely doesn’t erase the data from the disk, and I have not personally checked that a “slow format” really removes the data beyond recovery with normal tools.

Whatever method you choose, check, double-check, and triple-check that the device you are erasing really is the device you intend to erase.

The first step under Linux is to identify the block device path to erase. You may well find that your SD card is automatically mounted when you plug it in, so running df from the command-line will give you a device path (/dev/sdb1 in this case).
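Something along these lines will do (a sketch; your mount point and device will of course differ) :-

df -h | grep media

Which should show the /dev/sdb1 device mounted somewhere under /media.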

But to double-check, run lsblk :-

✓ mike@Michelin» lsblk -o NAME,FSTYPE,MOUNTPOINT,VENDOR,MODEL,SIZE | grep -v loop 
NAME                    FSTYPE      MOUNTPOINT                      VENDOR   MODEL              SIZE
sda                                                                 ATA      SAMSUNG MZNTY128 119.2G
├─sda1                  vfat        /boot/efi                                                   512M
├─sda2                  ext4        /boot                                                       732M
└─sda3                  crypto_LUKS                                                             118G
  └─sda3_crypt          LVM2_member                                                             118G
    ├─ubuntu--vg-root   ext4        /                                                         114.1G
    └─ubuntu--vg-swap_1 swap        [SWAP]                                                      3.9G
sdb                                                                 Generic  USB  SD Reader     3.8G
└─sdb1                  vfat        /media/mike/disk                                            3.8G

Note how we have “USB SD Reader” alongside /dev/sdb, that its size is just 3.8Gbytes, and that its single partition is mounted under /media/mike. So we have three confirmations that this is the device we want to erase.

To erase it, we first unmount it, run an hdparm command to erase it, and then erase it a second time with dd :-

✓ mike@Michelin» umount /dev/sdb1
✓ mike@Michelin» sudo hdparm --security-erase NULL /dev/sdb
security_password: ""

/dev/sdb:
 Issuing SECURITY_ERASE command, password="", user=user
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
✓ mike@Michelin» sudo dd if=/dev/zero of=/dev/sdb bs=64M

Whilst we’re waiting for the “dd” command to finish writing zeros all over the SD card, why are we erasing this twice? 

We’re using hdparm to ask the device’s own firmware to perform an ATA secure erase, which (when the device actually supports it) should wipe everything, including blocks that ordinary writes cannot reach.

And I then suggest using the old slow method of “dd” as well because there is nothing wrong with being cautious in this area.
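If you want to be doubly sure once dd has finished, you can verify the result; this is a sketch (again assuming the card is /dev/sdb) that compares the entire card against a stream of zeros :-

sudo cmp /dev/zero /dev/sdb

If the card has been completely zeroed, cmp will simply report reaching end-of-file on /dev/sdb; any report of differing bytes means some blocks were not overwritten.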

Misty Trees
Jun 05 2018
 

As the subject says, this blog has been offline for just over a week because of a hardware failure. Just when I wanted to moan about all the GDPR hissy fits that people are throwing.

Noticed some websites are blocking you because of the GDPR?

That’s the hissy fit. It seems that some international web site operators who previously assumed that GDPR didn’t apply to them are suddenly realising that it does – which is an indication that they have been impersonating an ostrich for a couple of years now.

Smaller businesses get a free pass on that one, but any reasonably sized company should have been aware of GDPR by now. It was put in place and then deliberately put on hold for two years to give organisations time to start complying with it. Anyone involved in the security business has been hearing “GDPR” for over two years now.

So there are those who claim they’ve not heard of it, and are now panicking and trying to catch up, making a mountain out of a molehill, and claiming that it’s a dumb law. Technically it isn’t a law as such but an EU regulation, which applies directly in member states.

Anyway onto some of the biggest arguments against the GDPR …

The Whois Question

This is a great example of what happens when you ignore a situation and then panic.

When you register a domain (such as zonky.org) or a netblock (a set of IP addresses), you are expected to provide contact details for the individual(s) involved in the registration process – to allow for billing, and contact to be made in the event of operational issues.

Storing that information is perfectly reasonable.

Publishing that information is perfectly reasonable given informed consent.

Ideally the domain registration would offer a choice to the registrant – public listing of personal details, public listing of role contact information, or public listing of indirect contacts (i.e. keeping the contact details private).

There is a German court case decision saying that it isn’t necessary to have contact information for registering a domain; all I can say is that the German court obviously didn’t have the full facts.

GDPR’s “Right To Be Forgotten”

One of the misconceptions is that the “right to be forgotten” is an absolute human right; for a start it’s not a human right, but a right under the law. And it is not absolute; the text of the GDPR includes numerous exceptions to the right to be forgotten, such as :-

  • A legal or regulatory obligation to keep the personal information.
  • An overriding public interest.
  • Ongoing legitimate business processes still require that personal information.

The key is that if you are an ethical business (in particular don’t plan to sell personal information and/or keep spamming people) then the right to be forgotten isn’t anything to worry about.

GDPR: The Fines

The strange thing is that there is doubt over the level of fines that can be levied under the GDPR, which is remarkable as the language is quite clear – the lower level of breach can attract a fine of up to either €10 million or 2% of annual turnover.

Or to put it another way, for the lower level of breach, the maximum fine is the greater of €10 million or 2% of annual turnover. The maximum.

Do you know how often the ICO has imposed the maximum level of fine under existing legislation? Never.

The Jurisdiction Issue

Now here there are some legitimate grounds for grievance; after all, whenever the US starts imposing its laws outside of the US, people outside the US start jumping up and down. And yes, the EU does expect non-EU companies to obey the GDPR regulation if they store data on EU citizens.

In practice, the EU isn’t going to try going after small companies outside the EU; particularly not small companies that are just ordinary businesses and not engaged in Cambridge Analytica-type business.

The other way of looking at the global reach of the GDPR is whether it would be a good idea for there to be a world-wide law in relation to the protection of personal information. The Internet means that world-wide laws are necessary in this area, or those abusing personal information will merely move to the jurisdiction with the weakest protection of personal information.

Rusty Handrail

May 04 2018
 

I had the pleasure of upgrading a server today, which involved fixing a number of little niggles; one of which was that connecting to switches suddenly stopped working :-

✗ msm@${server}» ssh admin@${someswitch}
Unable to negotiate with ${ip} port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

This was relatively easily fixed :-

✗ msm@${server}» ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 admin@${someswitch}
Password: 

Of course doing this command-by-command is a little tedious, so a more permanent solution is to re-enable all the supported key exchange algorithms. The relevant algorithms can be listed with ssh -Q kex, and they can then be specified in the system-wide client configuration in /etc/ssh/ssh_config :-

Host *
    KexAlgorithms ${comma-separated-list}
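A less drastic alternative (a sketch – the hostname here is made up, and the “+” prefix needs a reasonably recent OpenSSH client) is to re-enable just the one legacy algorithm, and only for the hosts that need it :-

Host oldswitch.example.org
    KexAlgorithms +diffie-hellman-group1-sha1

That way connections to modern servers keep the strong defaults, and only the crusty old switch gets offered the weak key exchange.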

But Why?

According to the OpenSSH developers, the latest versions of ssh refuse to use certain key exchange algorithms (and other cryptographic ‘functions’).

Their intention is perfectly reasonable – by default the software refuses to use known weak crypto. I’m fully behind the idea of discouraging the use of weak crypto.

But the effect of disabling weak crypto in the client is unfortunate – all of a sudden people are unable to connect to certain devices. The developers suggest that the best way of fixing the problem is to upgrade the server so that it supports strong cryptography.

I fully agree, but there are problems with that :-

  1. Some of the devices may very well be unsupported with no means to upgrade the ssh dæmon. Now in an ideal world, these devices wouldn’t be on the network, but in the real world there are such devices on the network.
  2. Some devices may not be capable of being upgraded because of processor or memory limitations. Network switches are notorious for having slow processors and tiny amounts of memory, and it is entirely possible that such a device would not be capable of running more exotic and modern crypto. Similarly lights out management processors are often severely limited.
  3. Even if a device is capable of being upgraded, there are the standard problems – the vendor may be slow at releasing updates, change control gets in the way, and lastly resourcing may be an issue – upgrading several hundred switches manually with just one or two people doing it is not going to be a quick job.

Lastly, whilst security is important, breaking things just to make a point is a little extreme. Whilst it is possible to fix the problem, it is something that isn’t immediately obvious to someone who doesn’t routinely configure ssh. And someone, somewhere has had this breakage occur just before they really need to fiddle with a switch Right Now.

There is a far better option available – leave the weak crypto enabled, but warn noisily about its use :-

WARNING!!!!! (2 second delay)
WARNING!!!!! (2 second delay)

The device you are connecting to only supports known weak crypto which means this connection
is subject to interception by an attacker.

You should look at upgrading the device as soon as possible.

Telling people what is wrong noisily and continuing to work is far better than simply breaking with a rather terse message.

Foggy Reflection

 

Mar 25 2018
 

It seems likely that the company Cambridge Analytica paid Facebook for access to data and, using its access, downloaded as much data as possible for nefarious purposes. Nobody should be that surprised at this.

Facebook does not host an enormously expensive social network just because it is fun; it does it to make money. It probably does this primarily through advertising, but selling access to social network data is always going to take place.

And from time to time, scandals involving companies like Cambridge Analytica are going to take place. At which point Facebook will protest, saying that it didn’t realise that the associated firm was doing such naughty things. And once the story drops out of the news, Facebook will carry on leaking data.

As the saying goes: “If you are not paying for it, you are the product.”

In the end, the only solution to something like this, is to produce some kind of peer-to-peer application that is as easy to use as Facebook, uses strong end-to-end encryption, and keeps our data private to those people and groups we choose to share it with.

The Hole

Jan 04 2018
 

Well, there’s another big and bad security vulnerability; actually there are three. These are known as Meltdown and Spectre (Spectre comes in two variants). There are all sorts of bits of information and misinformation out there at the moment, and this posting will be no different.

In short, nobody but those involved in the vulnerability research or implementing work-arounds within the well-known operating systems really knows these vulnerabilities well enough to say anything about them with complete accuracy.

The problem is that both vulnerabilities are exceptionally technical and require detailed knowledge of technicalities that most people are not familiar with. Even people who work in the IT industry.

Having said that I’m not likely to be 100% accurate, let’s dive in …

What Is Vulnerable?

For Meltdown, every modern Intel processor is vulnerable; in fact the only processors from Intel that are not vulnerable are only likely to be encountered in retro-computing. Processors from AMD and ARM are probably not vulnerable, although it is possible to configure at least one AMD processor in such a way that it becomes vulnerable.

It appears that more processors are likely to be vulnerable to the Spectre vulnerabilities. Exactly what is vulnerable is a bit of work to assess, and people are concentrating on the Meltdown vulnerability as it is more serious (although Spectre is itself serious enough to qualify for a catchy code name).
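If you are running a reasonably recent Linux kernel, you can ask it for its opinion of your own processor; a sketch (these sysfs files only exist on kernels new enough to know about the vulnerabilities) :-

grep . /sys/devices/system/cpu/vulnerabilities/*

Each file reports either “Not affected”, “Vulnerable”, or the mitigation in use.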

What Is The Fix?

Replace the processor. But wait until fixed ones have been produced.

However there is a work-around for the Meltdown vulnerability, consisting of an operating system patch (to fix the kernel) and a firmware patch (to fix the UEFI environment). All of the patches “fix” the problem by removing kernel memory from the user memory map, which stops user processes exploiting Meltdown to read kernel memory.

Unfortunately there is a performance hit with this fix; every time you call the operating system (actually the kernel) to perform something, the memory map needs to be loaded with the kernel maps and re-loaded with the old map when the routine exits.

This “costs” between 5% and 30% when performing system calls. With very modern processors the performance hit will be towards the 5% end, and with older processors towards the 30% end.

Having said that, this only happens when calling the operating system kernel, and many applications may very well make relatively few kernel calls, in which case the performance hit will be barely noticeable. Nobody is entirely sure what the performance hit will be for real world use, but the best guesses say that most desktop applications will be fine with occasional exceptions (and the web browser is likely to be one); the big performance hit will be on the server.
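One crude way of seeing the system call overhead for yourself is to time something that does almost nothing but system calls; this is a sketch, and very much a worst case – real applications do not spend all their time making one-byte reads and writes :-

time dd if=/dev/zero of=/dev/null bs=1 count=1000000

With a block size of 1, every byte copied costs a read() and a write() system call, so comparing the timing before and after the patch exaggerates the per-call cost nicely.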

How Serious Are They?

Meltdown is very serious not only because it allows a user process to read privileged data, but because it allows an attacker to effectively remove a standard attack mitigation which makes many older-style attacks impracticable. Essentially it makes older-style attacks practicable again.

Although Spectre is still serious, it may be less so than Meltdown because an attacker needs to be able to control some data that the victim process uses to indulge in some speculative execution. In the case of browsers (for example) this is relatively easy, but in general it is not so easy.

It is also easier to fix and/or protect against on an individual application basis – expect browser patches shortly.

Some Technicalities

Within this section I will attempt to explain some of the technical aspects of the vulnerabilities. By all means skip to the summary if you wish.

The Processor?

Normally security vulnerabilities are found within software – the operating system, or a ‘layered product’ – something installed on top of the operating system such as an application, a helper application, or a run-time environment.

Less often we hear of vulnerabilities that involve hardware in some sense – requiring firmware updates to either the system itself, graphics cards, or network cards.

Similar to firmware updates, it is possible for microcode updates to fix problems with the processor’s instructions.

Unfortunately these vulnerabilities are not found within the processor instructions, but in the way that the processor executes those instructions. And no microcode update can fix this problem (although it is possible to weaken the side-channel attack by making the cache instructions execute in a fixed time).

Essentially the processor hardware needs to be re-designed and new processors released to fix this problem – you need a new processor. The patches for Meltdown and Spectre – both the ones available today, and those available in the future – are strictly speaking workarounds.

The Kernel and Address Space

Meltdown specifically targets the kernel and the kernel’s memory. But what is the kernel?

It is a quite common term in the Linux community, but every single mainstream operating system has the same split between kernel mode and user mode. Kernel mode has privileged access to the hardware, whereas user mode is prevented from accessing the hardware and indeed the memory of any other user process running. It would be easy to think of this as the operating system and user applications, but that would be technically incorrect.

Whilst the kernel is the operating system, plenty of software that runs in user mode is also part of the operating system. But the over-simplification will do because it contains a useful element of the truth.

Amongst other things the kernel address space contains many secrets that user mode software should not have access to. So why is the kernel mode address space overlaid upon the user mode address space?

One of the jobs that the kernel does when it starts a user mode process is to give that process a virtual view of the processor’s memory that entirely fills the processor’s memory addressing capability – even if that is more memory than the machine contains. The reasons for this can be ignored for the moment.

If real memory is allocated to a user process, it can be seen and used by that process and no other.

For performance reasons, the kernel includes its own memory within each user process (but protected). It isn’t strictly necessary, but re-programming the memory management unit to map the kernel memory for each system call is slower than not doing so. And after all, memory protection should stop user processes reading kernel memory directly.

That is of course unless memory protection is broken …

Speculative Execution

Computer memory is much slower than modern processors which is why we have cache memory – indeed multiple levels of cache memory. To improve performance processors have long been doing things that come under the umbrella of ‘speculative execution’.

If for example we have the following sample of pseudo-code :-

load variable A from memory location A-in-memory
if A is zero
then
do one thing
else
do another
endif

Because memory is so slow, a processor running this code could stop whilst it is waiting for the memory location to be read. This is how processors of old worked, and is often how processor execution is taught – the next step is where it starts getting really weird.

However it could also execute the code assuming that A will be zero (or not zero, or even both), so that it has the results ready once the memory has actually been read. Now there are some obvious limitations to this – the processor can’t turn your screen green on the assumption that A is zero, but it can sometimes get some useful work done.

The problem (with both Meltdown and Spectre) is that speculative execution seems to bypass the various forms of memory protection. Now whilst the speculative results are ignored once the memory is properly read, and the memory protection kicks in, there is a side-channel attack that allows some of the details of the speculative results to be sniffed by an attacker.

 

Summary

  1. Don't panic! These attacks are not currently in use and because of the complexity it will take some time for the attacks to appear in the wild.
  2. Intel processors are vulnerable to Meltdown, and will need a patch to apply a work-around. Apply the patch as soon as it comes out even if it hurts performance.
  3. The performance hit is likely to be significant only on a small set of applications, and in general only significant on a macro-scale - if you run as many servers as Google, you will have to buy more servers soon.
  4. Things are a little more vague with Spectre, but it seems likely that individual applications will need to be patched to remove their vulnerability. Expect more patches.

Tunnel To The Old Town