Dec 10 2012
 

Today it was announced that the NHS would be mapping the DNA of cancer patients (with their consent), to be stored and used by researchers. On the surface this seems a perfectly sensible thing to do.

Of course there are those who are concerned about the privacy implications of the data being stored. Which is fair enough – any large collection of data like this raises privacy issues, and there are genuine fears that the data may be made available to private companies with no interest in health research.

Amusingly, one of the comments was that the data would be made anonymous by removing any personal data from the data made available to researchers. Amusing because the most personal data of all – and the ultimate means of identifying individuals – is the DNA sequence itself; nothing can be more fundamental in identifying an individual than their unique DNA sequence.

On a more serious note, it is effectively impossible to make this kind of data completely anonymous. To be of any use, the database needs to include more than just the DNA sequence – disease(s), treatments used, outcomes, and so on. Whilst this may not be enough to identify every individual taking part, it may well be enough to identify individuals with rarer combinations of disease and circumstances.

Nov 24 2012
 

NTP is one of those strange services that are vital to the operation of an organisation’s network; if the servers around the network get their time in a muddle, all sorts of strange things can start happening. Besides which, most people expect their computers to be able to tell the right time.

But it is often one of the unloved services. After all, no user is going to ask about the health of the NTP service. And if you are a senior manager involved in IT, do you know who manages your NTP infrastructure? If so, have you ever asked them to explain the design of that infrastructure? If not, you may be in for a nasty surprise – your network’s NTP infrastructure may rely on whatever servers could be scavenged, with the minimum investment of time.

Of course, NTP is pretty reliable and in most circumstances extremely resilient. NTP has built-in safeguards against confused time servers sending wildly inappropriate time adjustments, and even in the event of a total NTP failure, servers should be able to keep reasonable time for at least a while. Even with a minimal investment, an NTP infrastructure can often run merrily in the background for years without an issue.
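
To give a flavour of those safeguards, here is a much-simplified sketch (in Python) of the sanity checks an NTP daemon applies before accepting an adjustment. The thresholds are the commonly quoted ntpd defaults (a 128 ms step threshold and a 1000 s panic threshold), but treat the figures and the logic as an illustration rather than a description of the real code, which is considerably more involved:

    # A much-simplified model of an NTP daemon's sanity checks; the real logic
    # (clock filtering, selection, clustering) is far more involved.
    STEP_THRESHOLD = 0.128     # seconds - larger offsets are stepped rather than slewed
    PANIC_THRESHOLD = 1000.0   # seconds - larger offsets make the daemon give up (unless told otherwise)

    def classify_offset(offset_seconds):
        magnitude = abs(offset_seconds)
        if magnitude > PANIC_THRESHOLD:
            return "panic: refuse the adjustment and exit"
        if magnitude > STEP_THRESHOLD:
            return "step: jump the clock in one go"
        return "slew: adjust the clock gradually"

    # A source claiming the year 2000 in 2012 is roughly twelve years out - far beyond
    # the panic threshold, so such an adjustment should normally be refused outright.
    print(classify_offset(12 * 365 * 86400))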

Not that it is a good idea to ignore NTP for years. It is far better to spend a little time and money each year to keep things fresh – perhaps a small server, and a day’s time.

That was quite a long, rambling introduction to the NTP “glitch” that I learned about this week, but it perhaps goes some way to explaining why such a glitch occurred.

A number of organisations reported that their networks had started reporting a time way back in the year 2000. It turns out that :-

  • The US Naval Observatory (USNO) had a server that for 51 minutes reported the year as 2000 rather than 2012.
  • A number of organisations with an insufficient number of clock sources (i.e. just the erroneous USNO one) attempted to synchronise to the year 2000, causing their NTP daemons to stop (see the sketch below).
  • Some “clever” servers noticed that NTP had stopped, and restarted it. Because most default NTP startup scripts set the clock on startup, these servers were suddenly sent back in time to the year 2000.

And so a cascade of relatively minor issues became a major one.
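
The second step in that chain is the crucial one: with only a single clock source there is nothing to outvote a falseticker. The following toy sketch (not ntpd’s actual selection algorithm, which uses a much cleverer intersection scheme) illustrates the general idea:

    # Toy illustration: with a single source, a falseticker wins by default; with
    # several sources, a lone bad clock can be outvoted. Real NTP source selection
    # is far more sophisticated than taking a median.

    def plausible_time(candidate_times):
        """Return the median of the candidate times, or None if there is no quorum."""
        if len(candidate_times) < 3:
            return None  # not enough sources to outvote a single bad one
        ordered = sorted(candidate_times)
        return ordered[len(ordered) // 2]

    good = 1353700000   # roughly November 2012, as a Unix timestamp
    bad = 975000000     # roughly the year 2000

    print(plausible_time([bad]))                  # None - a single source gives nothing to check against
    print(plausible_time([good, good + 1, bad]))  # the two good sources outvote the bad one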

Reading around, the recommendations to prevent this sort of thing from happening again are :-

  1. Use an appropriate number of time sources for your main NTP servers; various suggestions have been made ranging from 5 (probably too few) to 8 (perhaps about right) to 20 (possibly overkill).
  2. Have an appropriate number of main NTP servers for your servers (and other equipment) to synchronise their time with. Anything less than 3 is inadequate; more than 4 is recommended.
  3. Prevent your main NTP servers from setting their time when NTP is restarted, and monitor the time on each server regularly (a monitoring sketch follows this list).
  4. And a personal recommendation: Restart all your NTP daemons regularly – perhaps daily – to get them to check with the DNS for any updated NTP server names.
  5. And as suggested above, regularly review your NTP infrastructure.
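
On the monitoring point, something as simple as the following Python 3 sketch, run from cron, would have flagged an upstream server claiming to be in the year 2000. It sends a bare SNTP query directly to each server; the server names are placeholders for your own NTP servers, and the thresholds are arbitrary:

    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

    def query_sntp(server, timeout=2.0):
        """Send a minimal SNTP request and return the server's time as a Unix timestamp."""
        packet = b'\x1b' + 47 * b'\0'  # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(512)
        seconds = struct.unpack('!I', data[40:44])[0]  # transmit timestamp, seconds field
        return seconds - NTP_EPOCH_OFFSET

    # Placeholder names - substitute your own main NTP servers.
    for server in ('ntp1.example.org', 'ntp2.example.org', 'ntp3.example.org'):
        try:
            remote = query_sntp(server)
            offset = remote - time.time()
            year = time.gmtime(remote).tm_year
            status = 'OK' if abs(offset) < 1.0 and year >= 2012 else 'SUSPECT'
            print('%s: year=%d offset=%+.3fs %s' % (server, year, offset, status))
        except OSError as exc:
            print('%s: unreachable (%s)' % (server, exc))

Anything fancier – graphing offsets, proper alerting – can be layered on top, but even a crude check like this turns a silent failure into a noisy one.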

Nov 24 2012
 

As could be expected, when there are yet again moves to pass the job of Internet governance into the hands of the ITU, there is a huge wave of objections from the Americans; some of whom object more from a reflex anti-UN position (or a wish to see the US remain “in control” of the Internet) than from any considered position.

What is perhaps more surprising is the EU’s objections to the ITU taking control.

What Is Internet Governance?

In a very real sense, there is no such thing as the Internet; there are merely a large number of different networks that agree to use the Internet standards – protocol numbers, network addresses, names, etc. With the exception of names this is all pretty invisible to ordinary users of the Internet; at least when it works.

There is nothing to stop different networks from changing the Internet standards, or coming up with their own networking standards. Except of course that a network’s customers might very well object if they suddenly can’t reach Google because of different standards. Historically there has been a migration towards Internet standards rather than away from them.

In a very real sense, this is governance by consent. At least by the network operators.

It may be worthwhile to list those things that the current Internet Governance doesn’t do :-

  • It does not control network traffic flows or peering arrangements. Such control is exercised by individual networks and/or governments.
  • It does not control the content of the Internet. Not only is censorship not part of the current governance mission; it isn’t even within its power. Any current censorship is exercised by the individual networks and/or governments.
  • It does not control access, pricing, or any other form of network control. Your access to the Internet is controlled by your ISP and any laws enacted by your government.

There is probably a long, long list of other things that the current Internet Governance does not do. To a very great extent, the current governance is about technical governance.

What’s So Bad About The Status Quo?

“The Internet” is currently governed by ICANN (the “Internet Corporation for Assigned Names and Numbers”), which is a US-based (and controlled) non-profit corporation. Whilst there are plenty of people who complain about ICANN and how it performs its work, the key metric of how well it has performed is that just one of its areas of responsibility – the control of the top-level domains in the DNS – has resulted in any alternatives.

And those alternatives are really not very successful; as someone who runs an institutional DNS infrastructure, I would be under pressure to support alternative roots if they were successful enough to interest normal people. No such requests have reached me.

So you could very well argue that technically ICANN has done a perfectly reasonable job.

But politically, it is a far more difficult situation. ICANN is a US-based corporation whose authority over the Internet standards is effectively granted to it by the US Department of Commerce. This grates with anyone who is not a US citizen – which is now by far the majority of the Internet population.

Historically the Internet is a US invention (although the historical details are quite a bit more complex than that; it is widely acknowledged that the packet switching nature of the ARPAnet was inspired by work done by a British computer scientist), so it is not unreasonable that Internet governance started as a US organisation.

But in the long term, if it remains so, it will be undemocratic and tyrannical; whilst the US is a democracy, it is only US citizens who can hold their government to account with a vote. The rest of us have no say in how the US government supervises ICANN, which is an untenable situation.

What About The ITU?

The key to any change in how Internet governance is managed is to make as few changes as possible. If we accept that ICANN has managed the technical governance reasonably well, there is no overriding reason to take that away from it. If we accept that control of ICANN has to be passed to an international body, then what about the ITU?

Many people object to the idea of the ITU being in charge for a variety of reasons, but probably the biggest reason of all is that it is a UN body and certain people start frothing at the mouth at the mere mention of the UN.

But if you look at the history of the ITU, you will see that despite the bureaucratic nature of the organisation (which predates the UN by a considerable number of years), it has managed to maintain international telecommunications through two world wars. A not inconsiderable achievement, even if it succeeded because it had to.

Time For A Compromise

International agreement is all about making all parties equally satisfied … or at the very least equally dissatisfied, with a solution that comes as close as possible to giving everyone what they want. A seemingly impossible task.

But despite spending nowhere near enough time studying the issues, one solution does occur to me. Hand over the authority by which ICANN operates to the ITU, with the proviso that any changes to the mandate of ICANN (in particular giving it additional authority) should be subject to oversight by the UN as a whole; and of course subject to UN Security Council vetoes.

Of course this is not a decision that should be made hastily; given that the main issue at stake is “political” rather than technical, there is no reason why the decision to do something has to be made quickly. But it does need to be made within 10 years.

Nov 19 2012
 

Over the years, whenever I’ve run into problems getting SSH key authentication to work, there has always been a certain lack of information (partially because much of it is held in the server logs, which aren’t always accessible). This post runs through some of the issues I’ve encountered.

  1. The file server-to-login-to:~user/.ssh/authorized_keys has the key in it, but the key has been split across multiple lines (as can happen when it is pasted in). Simply join the lines together, removing any extra spaces added by the editor, and it should work.
  2. Naming the file server-to-login-to:~user/.ssh/authorized_keys incorrectly – my fingers seem to prefer authorised_hosts – whilst authorised may be the correct spelling on this side of the Atlantic, the code expects the Americanised spelling (and the file holds keys, not hosts). Although you can set AuthorizedKeysFile to a space-separated list of files, it’s usually best to assume it hasn’t been done.
  3. Getting confused over public/private keys. Not that I’m ever going to admit to being so dumb as to put the private key into the authorized_keys file, but it’s worth reminding myself that the private key belongs on the workstation I’m connecting from, and the public key goes into authorized_keys on the server.
  4. Trying to log in to a server where key authentication has been disabled (why would anyone do this?). Check PubkeyAuthentication in /etc/ssh/sshd_config.
  5. Not one of my mistakes (I’m on the side that disables root logins), but logging in directly as root is often turned off.
  6. The permissions on the server-to-login-to:~user/.ssh directory and the file server-to-login-to:~user/.ssh/authorized_keys need to be very restricted. Basically no permissions for anyone other than the owner.

I am sure there are plenty of other possible mistakes, but running through this checklist seems to work for me.
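
For the permissions and pasted-key problems, a rough check script saves some squinting. The sketch below is only for the server side, assuming OpenSSH defaults and the usual file locations; it deliberately says nothing about the sshd_config items on the list, which need a look by eye:

    import os
    import stat

    ssh_dir = os.path.expanduser('~/.ssh')
    auth_keys = os.path.join(ssh_dir, 'authorized_keys')

    def check_mode(path, allowed_bits):
        """Warn if the mode grants anything beyond the allowed permission bits."""
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & ~allowed_bits:
            print('%s: mode %s is too permissive' % (path, oct(mode)))
        else:
            print('%s: mode %s looks fine' % (path, oct(mode)))

    check_mode(ssh_dir, 0o700)    # directory: owner only
    check_mode(auth_keys, 0o600)  # file: owner read/write only

    # Each key should occupy a single line: optional options, key type, base64 blob, comment.
    with open(auth_keys) as f:
        for number, line in enumerate(f, start=1):
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            if not any(field.startswith(('ssh-', 'ecdsa-')) for field in line.split()):
                print('line %d: no recognisable key type - possibly a key pasted across several lines' % number)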

Nov 03 2012
 

Previously I ranted about how Apple had “complied” with a UK court order by criticising the decision made by the UK courts and implying they had got it wrong. Now Apple have been dragged into court again to explain their lack of compliance, and have been ordered to remove their previous statement and replace it with another whose wording has been dictated by the court.

Apple, in a mind-blowing exhibition of stupidity, tried to claim that whilst it would take just 24 hours to take down their previous statement, it would take up to 14 days to put up a replacement. For “technical reasons”.

Now as it happens, in addition to writing drivel on this website (where the only delay “technical reasons” might impose would be due to an infrastructure failure or upgrade, though “personal reasons” might well impose a 14-day delay), I have been involved with more “corporate” websites where content management systems can indeed impose “technical reasons” for a delay in updating a website. But not 14 days! More like a few hours, or at most 24.

And if a content management system does impose a long delay in publishing website updates, it is always possible to bypass the CMS to publish emergency updates. Even if it is necessary to “break” the CMS to do so.

It may very well be that an internal approval process within Apple’s CMS normally requires 14 days for an update to be published. In which case the supposed 14-day delay is down to “business reasons” rather than “technical reasons”.

Of course there is also another possibility. Given that Apple have recently launched new products, they may be very reluctant to put anything on their home page (which the revised court order now requires) that distracts from those products. You do have to wonder if this mysterious delay for “technical reasons” is in fact so that nobody gets distracted from the pretty pictures of Apple’s new products.

That would be very, very silly of them.

The court evidently did not think much of Apple’s excuse for why they could not put up a replacement statement promptly, and has given them 48 hours to comply. So either Apple complies within 48 hours – demonstrating that they lied in court – or comes up with detailed technical reasons why they cannot – which will demonstrate that they are surprisingly incompetent when it comes to technical matters.

Neither alternative is comfortable for Apple executives, but this position is all their fault.