Dec 01 2012
 

I have probably ranted about this all before, but as nothing has really improved it is worth trying again … not that I am expecting anyone to pay attention here of course! The rant here is about the myth of home delivery.

When I shop online, I have three different delivery addresses to choose from, none of which is likely to result in a delivery to my home address. Of course one of those three addresses is my home address, and sometimes choosing it can result in finding a parcel outside my front door when I get home, but more commonly it results in a little card telling me to walk into the central post office to collect the parcel.

If I pay money for home delivery I expect the delivery to be made to my home address when I am at home. Delivery companies seem to live in some mythical world of the past where they assume everyone has someone standing by at their home address during working hours. Trying a delivery to my home address during working hours is a waste of time, and leaving a card in my letterbox does not count as a delivery.

Perhaps those who end up having to collect parcels from depots should start demanding their money back for any delivery charges.

Compare, if you will, with supermarket deliveries, or even fast food deliveries. Without paying any extra, you get a delivery at a time of your own choosing, or even same-day delivery! You can pay extra for “guaranteed” next-day delivery and not get a service that good.

And why do we put up with the shop’s choice of delivery agent? I pay for the delivery; I should get to choose who provides that delivery service. The more you think about it, the more it seems like a bloody cheek for shops to insist on their choice of delivery agent.

Nov 24 2012
 

NTP is one of those strange services that are so vital to the operation of an organisation’s network; if the servers around the network get their time in a muddle, all sorts of strange things can start happening. Besides which most people expect their computers to be able to tell the right time.

But it is often one of the unloved services. After all, no user is going to ask about the health of the NTP service. And if you are a senior manager involved in IT, do you know who manages your NTP infrastructure? If so, have you ever asked them to explain the design of that infrastructure? If not, you may be in for a nasty surprise – your network’s NTP infrastructure may rely on whatever servers could be scavenged, with the minimum investment of time.

Of course, NTP is pretty reliable and in most circumstances extremely resilient. NTP has built-in safeguards against confused time servers sending wildly inappropriate time adjustments, and even in the event of a total NTP failure, servers should be able to keep reasonable time for at least a while. Even with a minimal investment, an NTP infrastructure can often run merrily in the background for years without an issue.

Not that it is a good idea to ignore NTP for years. It is better by far to spend a little time and money on a yearly basis to keep things fresh – perhaps a little server, and a day’s time each year.

That was quite a long and rambling introduction to the NTP “glitch” that I learned about this week, but perhaps it goes some way to explaining why such a glitch occurred.

A number of organisations reported that their network had started reporting a time way back in the year 2000. It turns out that :-

  • The USNO (the US Naval Observatory) had a server that for 51 minutes reported the year as 2000 rather than 2012.
  • A number of organisations with an insufficient number of clock sources (i.e. just the erroneous USNO one) attempted to synchronise to the year 2000, causing the NTP daemon to stop.
  • Some “clever” servers noticed that NTP had stopped, and restarted it. Because most default NTP startup scripts set the clock on startup, these servers were suddenly sent back in time to the year 2000.

And a cascade of relatively minor issues becomes a major issue.

Reading around, the recommendations to prevent this sort of thing from happening are :-

  1. Use an appropriate number of time sources for your main NTP servers; various suggestions have been made ranging from 5 (probably too few) to 8 (perhaps about right) to 20 (possibly overkill).
  2. Have an appropriate number of main NTP servers for your servers (and other equipment) to synchronise their time with. Anything less than 3 is inadequate; more than 4 is recommended.
  3. Prevent your main NTP servers from setting their time when NTP is restarted, and monitor the time on each server regularly (see the sketch after this list).
  4. And a personal recommendation: Restart all your NTP daemons regularly – perhaps daily – to get them to check with the DNS for any updated NTP server names.
  5. And as suggested above, regularly review your NTP infrastructure.
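
On the monitoring side (recommendation 3 above), even a very crude external check of your NTP servers would catch this sort of glitch long before anything tried to synchronise to it. The sketch below is a minimal example, assuming Python with the third-party ntplib module installed; the server names are placeholders for your own NTP servers, and the threshold is plucked out of the air.

    #!/usr/bin/env python
    # Crude sanity check of a set of NTP servers: query each one and complain
    # if it is unreachable or if its idea of the time is wildly adrift from
    # the local clock. The server names below are placeholders.
    import sys
    import ntplib

    NTP_SERVERS = [
        "ntp0.example.org",
        "ntp1.example.org",
        "ntp2.example.org",
    ]
    MAX_OFFSET = 5.0    # seconds of disagreement tolerated before complaining

    def main():
        client = ntplib.NTPClient()
        problems = 0
        for server in NTP_SERVERS:
            try:
                response = client.request(server, version=3, timeout=5)
            except Exception as error:
                print("WARNING: %s unreachable (%s)" % (server, error))
                problems += 1
                continue
            # response.offset is the estimated difference (in seconds)
            # between the server's clock and the local clock.
            if abs(response.offset) > MAX_OFFSET:
                print("WARNING: %s is adrift by %.1f seconds"
                      % (server, response.offset))
                problems += 1
            else:
                print("OK: %s offset %.3fs stratum %d"
                      % (server, response.offset, response.stratum))
        return 1 if problems else 0

    if __name__ == "__main__":
        sys.exit(main())

Run regularly from cron on a machine that is not itself one of your NTP servers, something like this would have flagged a server suddenly claiming it was the year 2000 (an offset of several hundred million seconds) as soon as it happened.
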
Nov 24 2012
 

As could be expected, when there are yet again moves made to pass the job of Internet Governance into the hands of the ITU, there is a huge wave of objections from the Americans; some of whom are objecting more from a reflex anti-UN position (or a wish to see the US remain “in control” of the Internet) than from any more considered objection.

What is perhaps more surprising is the EU’s objections to the ITU taking control.

What Is Internet Governance?

In a very real sense, there is no such thing as the Internet; there are merely a large number of different networks that agree to use the Internet standards – protocol numbers, network addresses, names, etc. With the exception of names this is all pretty invisible to ordinary users of the Internet; at least when it works.

There is nothing to stop different networks from changing the Internet standards, or coming up with their own networking standards. Except of course that a network’s customers might very well object if they suddenly can’t reach Google because of different standards. Historically there has been a migration towards Internet standards rather than away from them.

In a very real sense, this is governance by consent. At least by the network operators.

It may be worthwhile to list those things that the current Internet Governance doesn’t do :-

  • It does not control network traffic flows or peering arrangements. Such control is exercised by individual networks and/or governments.
  • It does not control the content of the Internet. Not only is censorship not part of the current governance mission; it isn’t even within its power. Any current censorship is exercised by the individual networks and/or governments.
  • It does not control access, pricing, or any other form of network control. Your access to the Internet is controlled by your ISP and any laws enacted by your government.

There is probably a long, long list of other things that the current Internet Governance does not do. To a very great extent, the current governance is about technical governance.

What’s So Bad About The Status Quo?

“The Internet” is currently governed by ICANN (the “Internet Corporation for Assigned Names and Numbers”), which is a US-based (and controlled) non-profit corporation. Whilst there are plenty of people who complain about ICANN and how it performs its work, the key metric of how well it has performed is that just one of its areas of responsibility – the control of the top-level domains in the DNS – has resulted in any alternatives.

And those alternatives are really not very successful; as someone who runs an institutional DNS infrastructure, I would be under pressure to support alternative roots if they were successful enough to interest normal people. No such requests have reached me.

So you could very well argue that technically ICANN has done a perfectly reasonable job.

But politically, it is a far more difficult situation. ICANN is a US-based corporation whose authority over the Internet standards is effectively granted to it by the US Department of Commerce. This grates with anyone who is not a US citizen, and such people now make up by far the majority of the Internet population.

Historically the Internet is a US invention (although the historical details are quite a bit more complex than that; it is widely acknowledged that the packet switching nature of the ARPAnet was inspired by work done by a British computer scientist), so it is not unreasonable that Internet governance started as a US organisation.

But in the long term, if it remains so, it will be undemocratic and tyrannical; whilst the US has a democratic government, it is only US citizens who can hold that government to account with a vote. The rest of us have no say in how the US government supervises ICANN, which is an untenable situation.

What About The ITU?

The key to any change in how Internet governance is managed is to make as few changes as possible. If we accept that ICANN has managed the technical governance reasonably well, there is no overriding reason to take that away from it. If we accept that control of ICANN has to be passed to an international body, then what about the ITU?

Many people object to the idea of the ITU being in charge for a variety of reasons, but probably the biggest reason of all is that it is a UN body and certain people start frothing at the mouth at the mere mention of the UN.

But if you look at the history of the ITU, you will see that despite the bureaucratic nature of the organisation (which predates the UN by a considerable number of years), it has managed to maintain international telecommunications through two world wars. A not inconsiderable achievement, even if it succeeded because it had to succeed.

Time For A Compromise

International agreement is all about making all parties equally satisfied … or at the very least equally dissatisfied, with a solution that comes as close as possible to giving everyone what they want. A seemingly impossible task.

But despite spending nowhere near enough time studying the issues, one solution does occur to me. Hand over the authority by which ICANN operates to the ITU, with the proviso that any changes to the mandate of ICANN (in particular giving it additional authority) should be subject to oversight by the UN as a whole; and of course subject to UN Security Council vetoes.

Of course this is not a decision that should be made hastily; given that the main issue at stake is “political” rather than technical, there is no reason why the decision to do something has to be made quickly. But it does need to be made within 10 years.

Nov 19 2012
 

Over the years, whenever I’ve run into problems getting SSH key authentication to work, there’s always been the problem of a certain lack of information (partially because much of the information is held within the server logs, which aren’t always accessible). This post runs through some of the issues I’ve encountered.

  1. The file server-to-login-to:~user/.ssh/authorized_keys has the key in it, but the key has been split across multiple lines (as commonly happens when it is pasted in). Simply join the lines back together, removing any extra spaces added by the editor, and it should work.
  2. Naming the file server-to-login-to:~user/.ssh/authorized_keys incorrectly – my fingers seem to prefer authorised_hosts – whilst authorised may be the correct spelling, the code expects the Americanised spelling (and keys, not hosts). Although you can set AuthorizedKeysFile to a space-separated list of files, it’s usually best to assume that hasn’t been done.
  3. Getting confused over public/private keys. Not that I’m ever going to admit to being as dumb as to put the private key into the authorized_keys file, but it’s worth reminding myself that the private key belongs on the workstation I’m trying to connect from.
  4. Trying to log in to a server where key authentication has been disabled (why would anyone do this?). Check PubkeyAuthentication in /etc/ssh/sshd_config.
  5. Not one of my mistakes (I’m on the side of those who disable root logins), but logging in directly as root is often turned off; check PermitRootLogin in /etc/ssh/sshd_config.
  6. The permissions on the server-to-login-to:~user/.ssh directory and the file server-to-login-to:~user/.ssh/authorized_keys need to be very restricted. Basically no permissions for anyone other than the owner.

I am sure there are plenty of other possible mistakes, but running through this checklist seems to work for me.
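
A couple of the items above lend themselves to a quick script. The following is a rough sketch, assuming Python, run as the user on the server you are trying to log in to; it checks the permissions on ~/.ssh and authorized_keys (item 6) and flags anything in authorized_keys that does not look like a single-line public key (item 1). The recognised key types and the paths are assumptions; adjust to taste.

    #!/usr/bin/env python
    # Quick check of ~/.ssh for two common key-authentication mistakes:
    # over-generous permissions, and keys that have been split across lines.
    # Run as the user you are trying to log in as.
    import os
    import stat
    import sys

    SSH_DIR = os.path.expanduser("~/.ssh")
    AUTH_KEYS = os.path.join(SSH_DIR, "authorized_keys")

    # Key types this sketch recognises; extend as required.
    KEY_TYPES = ("ssh-rsa", "ssh-dss", "ssh-ed25519", "ecdsa-sha2")

    def check_mode(path, wanted):
        """Complain if anyone other than the owner has access to path."""
        mode = stat.S_IMODE(os.stat(path).st_mode)
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            print("WARNING: %s has mode %o; try chmod %o" % (path, mode, wanted))

    def check_keys(path):
        """Complain about lines that do not look like a one-line public key."""
        with open(path) as keyfile:
            for number, line in enumerate(keyfile, 1):
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                if not any(keytype in line for keytype in KEY_TYPES):
                    print("WARNING: %s line %d does not look like a public key"
                          " (pasted across several lines?)" % (path, number))

    def main():
        if not os.path.isdir(SSH_DIR):
            print("No %s directory at all" % SSH_DIR)
            return 1
        check_mode(SSH_DIR, 0o700)
        if os.path.isfile(AUTH_KEYS):
            check_mode(AUTH_KEYS, 0o600)
            check_keys(AUTH_KEYS)
        else:
            print("No %s file" % AUTH_KEYS)
        return 0

    if __name__ == "__main__":
        sys.exit(main())

It cannot, of course, check the server-side configuration items; PubkeyAuthentication and PermitRootLogin still mean a look at /etc/ssh/sshd_config.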

Nov 16 2012
 

Way back in the 15th and 16th centuries there was an outbreak of mass hysteria where, in many instances, the mere accusation of a crime could very well result in finding yourself tied to a stake with a bonfire burning around your feet. The crime? Well, it is arguably the case that the victims tended to be inconvenient women – women of power, individuality, or just a trifle too odd for a misogynist. Ignoring the so-called crime itself, there is a great deal of similarity between the hysteria surrounding those ancient witchcraft panics and the modern-day paedophilia panics.

Although paedophilia is a real and serious crime – in fact, because paedophilia is such a serious crime – we need to be very careful about accusations of paedophilia. An accusation is enough to do irreparable damage to a person’s reputation, career, marriage, or even life. Which sounds a reasonable enough start at a punishment for a paedophile, but an accusation doesn’t mean someone is guilty. Again and again (although it is interesting how one of those stories has been inflated over the years), those who take the law into their own hands have been shown to make mistakes.

And last week the combination of old media (Newsnight) and new media managed to “name and shame” a totally innocent party: Lord McAlpine. His supposed victim has since indicated that he was mistaken about the identity of his abuser, and that it was not Lord McAlpine. Newsnight managed to “leak” enough information for other parties (the “new media” bloggers) to figure out the name.

No matter how serious the crime, an alleged perpetrator is entitled to present a defence; indeed under British justice an accuser has to demonstrate beyond reasonable doubt that the perpetrator is guilty. And “trial by twitter” is certainly not a fair system of justice.

Of course none of this means we should be taking accusations by the victims any less seriously. Such a victim may well misidentify the perpetrator for all sorts of possible reasons, but that does not mean the crime has not taken place. An accusation needs to be properly investigated to identify the real perpetrator(s), and done in such a way that any potential perpetrators who have been shown to be innocent do not suffer in any way.

Misidentifying an attacker may sound like the kind of thing that is pretty unlikely, but it is hardly impossible. As an example, within the city where I live there used to be someone who looked enough like me for a significant number of people to walk up to me and have a long conversation without realising they were talking to the wrong person.