Dec 01 2012

So Leveson has finally released his report on press regulation, and as quick as a flash the Tories and the chief Tory (David Cameron – the Prime Minister) have announced that they will have nothing to do with it. They prefer some form of self-regulation; in other words a toothless organisation which the press routinely ignores or sticks a middle finger up to (i.e. a modified version of the old Press Complaints Commission).

Nothing could demonstrate more clearly that the Tories will bend over backwards to support any kind of business (including the demonstrably corrupt), and ignore the needs of the public. Without reading the report, it is still possible to determine that the recommendations are sensible merely by looking at who opposes it – the Tories, and the press themselves. Just about every other politician is right behind Leveson.

The big trouble with self-regulation (at least of the press) is that it has been tried again and again, and it has failed every time. As Leveson himself reports, we have had 7 inquiries into press standards over the last 70 years. The press regularly acts “as if its own code, which it wrote, simply did not exist”.

Or in other words, when the press barons say that they will behave now, we know they are lying: they have promised as much before, and have yet to live up to their fine words.

Interestingly, the Tories seem most concerned about Leveson because they believe that the government and parliament should have no say in the regulation of the press – demonstrating that their reading comprehension is perhaps at the level of a 10-year-old.

As Leveson himself says :-

Not a single witness has proposed that the Government or Parliament should themselves be involved in the regulation of the press. I have not contemplated and do not make any such proposal.

Personally I am not opposed to self-regulation in general; at least until that self-regulation has been demonstrated to be useless. But in practically every case where an industry or professional group has regulated itself, it has failed to do so properly. We have trusted the press to regulate itself and ultimately it has failed to do so.

Statutory regulation of the press is very definitely something to be wary of – let the politicians have a say in how the press is run and we would never have heard of the MPs’ expenses scandal! But this is not what Leveson is suggesting; he is suggesting that an independent body regulates the press with statutory authority.

Frankly if a regulatory authority wants to punish a rogue editor – perhaps with a thousand lashes of the cat – it needs statutory authority or the rogue editor is likely to raise the finger and walk out.

Finally, and the main reason for this post: it is time to give the Tories a bloody nose by telling them that we want the Leveson recommendations implemented. Visit the petition site and tell them so!

 

Dec 01 2012

I have probably ranted about all this before, but as nothing has really improved it is worth trying again … not that I am expecting anyone to pay attention, of course! The rant here is about the myth of home delivery.

When I shop online, I have three different delivery addresses to choose from, none of which is likely to result in a delivery to my home address. Of course one of those three addresses is my home address, and choosing it can sometimes result in finding a parcel outside my front door when I get home, but more commonly it results in a little card telling me to walk into the central post office to collect the parcel.

If I pay money for home delivery I expect the delivery to be made to my home address when I am at home. Delivery companies seem to live in some mythical world of the past where they assume everyone has someone standing by at their home address during working hours. Trying a delivery to my home address during working hours is a waste of time, and leaving a card in my letterbox does not count as a delivery.

Perhaps those who end up having to collect parcels from depots should start demanding their money back for any delivery charges.

Compare if you will with supermarket deliveries, or even fast food deliveries. Without paying any extra, you either get a delivery at the time of your own choosing, or even a same day delivery! You can pay extra for “guaranteed” next day delivery and not get a service that good.

And why do we put up with the shop’s choice of delivery agent? I pay for the delivery; I should get to choose who provides that delivery service. The more you think about it, the more it seems like a bloody cheek for shops to insist on their choice of delivery agent.

Nov 24 2012

NTP is one of those strange services that are so vital to the operation of an organisation’s network; if the servers around the network get their time in a muddle, all sorts of strange things can start happening. Besides which, most people expect their computers to be able to tell the right time.

But often it is one of the unloved services. After all, no user is going to ask about the health of the NTP service. And if you are a senior manager involved in IT, do you know who manages your NTP infrastructure? If so, have you ever asked them to explain its design? If not, you may be in for a nasty surprise – your network’s NTP infrastructure may rely on whatever servers could be scavenged, with the minimum investment of time.

Of course, NTP is pretty reliable and in most circumstances extremely resilient. NTP has built-in safeguards against confused time servers sending wildly inappropriate time adjustments, and even in the event of a total NTP failure, servers should be able to keep reasonable time for at least a while. Even with a minimum of investment, an NTP infrastructure can often run merrily in the background for years without an issue.

Not that it is a good idea to ignore NTP for years. It is better by far to spend a little time and money on a yearly basis to keep things fresh – perhaps a little server, and a day’s time each year.

That was quite a long rambling introduction to the NTP “glitch” that I learned about this week, but perhaps goes some way to explaining why such a glitch occurred.

A number of organisations reported that their network had started reporting a time way back in the year 2000. It turns out that :-

  • The US Naval Observatory (USNO) had a server that for 51 minutes reported the year as 2000 rather than 2012.
  • A number of organisations with an insufficient number of clock sources (i.e. just the erroneous USNO one) attempted to synchronise to the year 2000 causing the NTP daemon to stop.
  • Some “clever” servers noticed that NTP had stopped, and restarted it. Because most default NTP startup scripts set the clock on startup, these servers were suddenly sent back in time to the year 2000.

And so a cascade of relatively minor issues becomes a major issue.
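The daemon stopping is down to NTP’s own sanity checking: by default, ntpd would rather give up than apply a step larger than its “panic threshold” of 1000 seconds. The logic amounts to something like the sketch below – check_offset is my own name for illustration, and the real ntpd is considerably more sophisticated:

```shell
# NTP timestamps count seconds from 1900; Unix time counts from 1970.
NTP_UNIX_OFFSET=2208988800
PANIC_THRESHOLD=1000    # ntpd's default panic threshold, in seconds

check_offset() {
  local ntp_ts=$1                            # seconds since 1900, as sent by a server
  local unix_ts=$(( ntp_ts - NTP_UNIX_OFFSET ))
  local now=$(date +%s)
  local offset=$(( unix_ts - now ))
  [ "$offset" -lt 0 ] && offset=$(( -offset ))
  if [ "$offset" -gt "$PANIC_THRESHOLD" ]; then
    # A year-2000 timestamp lands here: rather than step the clock
    # back a decade, the daemon refuses and exits.
    echo "PANIC: offset ${offset}s exceeds threshold; refusing to step"
    return 1
  fi
  echo "offset ${offset}s: acceptable"
}
```

With several healthy sources the erroneous one would simply be outvoted; with only the bad source available, the panic exit is all that stands between you and the year 2000 – until a “clever” restart script defeats it.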

Reading around, the recommendations to prevent this sort of thing from happening are :-

  1. Use an appropriate number of time sources for your main NTP servers; various suggestions have been made ranging from 5 (probably too few) to 8 (perhaps about right) to 20 (possibly overkill).
  2. Have an appropriate number of main NTP servers for your servers (and other equipment) to synchronise their time with. Anything less than 3 is inadequate; more than 4 is recommended.
  3. Prevent your main NTP servers from setting their time when NTP is restarted and monitor the time on each server regularly.
  4. And a personal recommendation: Restart all your NTP daemons regularly – perhaps daily – to get them to check with the DNS for any updated NTP server names.
  5. And as suggested above, regularly review your NTP infrastructure.
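A minimal ntp.conf along those lines might look like the following sketch – the server names are placeholders for your own choices, and the comments flag which recommendation each part addresses:

```
# Recommendation 1: enough upstream sources that one insane clock is outvoted.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
server 3.pool.ntp.org iburst
server ntp.example.org iburst
server time.example.net iburst

# Guard against the single-bad-source case: require several agreeing
# survivors before the clock is adjusted at all.
tos minsane 3

# Recommendation 3: do not start ntpd with the -g flag, which permits one
# arbitrarily large initial adjustment on startup -- the very mechanism
# that sent those restarted servers back to the year 2000.
```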
Nov 24 2012

As could be expected, when there are yet again moves to pass the job of Internet Governance into the hands of the ITU, there is a huge wave of objections from the Americans; some of whom are objecting more from a reflex anti-UN position (or a wish to see the US remain “in control” of the Internet) than from any considered position.

What is perhaps more surprising is the EU’s objections to the ITU taking control.

What Is Internet Governance?

In a very real sense, there is no such thing as the Internet; there are merely a large number of different networks that agree to use the Internet standards – protocol numbers, network addresses, names, etc. With the exception of names this is all pretty invisible to ordinary users of the Internet; at least when it works.

There is nothing to stop different networks from changing the Internet standards, or coming up with their own networking standards. Except of course that a network’s customers might very well object if they suddenly can’t reach Google because of different standards. Historically there has been a migration towards Internet standards rather than away from them.

In a very real sense, this is governance by consent. At least by the network operators.

It may be worthwhile to list those things that the current Internet Governance doesn’t do :-

  • It does not control network traffic flows or peering arrangements. Such control is exercised by individual networks and/or governments.
  • It does not control the content of the Internet. Not only is censorship not part of the current governance mission; it isn’t even within its power. Any current censorship is exercised by individual networks and/or governments.
  • It does not control access, pricing, or any other form of network control. Your access to the Internet is controlled by your ISP and any laws enacted by your government.

There is probably a long, long list of other things that the current Internet Governance does not do. To a very great extent, the current governance is about technical governance.

What’s So Bad About The Status Quo?

“The Internet” is currently governed by ICANN (the “Internet Corporation for Assigned Names and Numbers”), a US-based (and controlled) non-profit corporation. Whilst there are plenty who complain about ICANN and how it performs its work, the key measure of how well it has performed is that just one of its areas of responsibility – control of the top-level domains in the DNS – has resulted in any alternatives.

And those alternatives are really not very successful; as someone who runs an institutional DNS infrastructure, I would be under pressure to support alternative roots if they were successful enough to interest normal people. No such requests have reached me.

So you could very well argue that technically ICANN has done a perfectly reasonable job.

But politically, it is a far more difficult situation. ICANN is a US-based corporation whose authority over the Internet standards is effectively granted to it by the US Department of Commerce. This grates with anyone who is not a US citizen – and non-US citizens now make up by far the majority of the Internet population.

Historically the Internet is a US invention (although the historical details are quite a bit more complex than that; it is widely acknowledged that the packet switching nature of the ARPAnet was inspired by work done by a British computer scientist), so it is not unreasonable that Internet governance started as a US organisation.

But in the long term, if it remains so, it will be undemocratic and tyrannical; whilst the US has a democratic government, it is only US citizens who can hold that government to account with a vote. The rest of us have no say in how the US government supervises ICANN, which is an untenable situation.

What About The ITU?

The key to any change in how Internet governance is managed is to make as few changes as possible. If we accept that ICANN has managed the technical governance reasonably well, there is no overriding reason to take that away from it. And if we accept that control of ICANN has to pass to an international body, then what about the ITU?

Many people object to the idea of the ITU being in charge for a variety of reasons, but probably the biggest reason of all is that it is a UN body and certain people start frothing at the mouth at the mere mention of the UN.

But if you look at the history of the ITU, you will see that despite the bureaucratic nature of the organisation (which predates the UN by a considerable number of years), it has managed to maintain international telecommunications through two world wars. A not inconsiderable achievement, even if it succeeded because it had to succeed.

Time For A Compromise

International agreement is all about making all parties equally satisfied … or at the very least equally dissatisfied, with a solution that comes as close as possible to giving everyone what they want. A seemingly impossible task.

But despite spending nowhere near enough time studying the issues, one solution does occur to me. Hand over the authority by which ICANN operates to the ITU, with the proviso that any changes to the mandate of ICANN (in particular giving it additional authority) should be subject to oversight by the UN as a whole; and of course subject to UN Security Council vetoes.

Of course this is not a decision that should be made hastily; given that the main issue at stake is “political” rather than technical, there is no reason why the decision to do something has to be made quickly. But it does need to be made within 10 years.

Nov 19 2012

Over the years, whenever I’ve run into problems getting SSH key authentication to work, there’s always been the problem of a certain lack of information (partially because much of the information is held within the server logs which aren’t always accessible). This post is running through some of the issues I’ve encountered.

  1. The file server-to-login-to:~user/.ssh/authorized_keys has the key in it, but the values are stored on multiple lines – as tends to happen when the contents are pasted in. Simply join the lines back together, removing any extra spaces added by the editor, and it should work.
  2. Naming the file server-to-login-to:~user/.ssh/authorized_keys incorrectly – my fingers seem to prefer authorised_hosts – whilst authorised may be the correct spelling, the code expects the Americanised spelling (and it is keys, not hosts). Although AuthorizedKeysFile can be set to a space separated list of files, it’s usually best to assume that it hasn’t been.
  3. Getting confused over public/private keys. Not that I’m ever going to admit to being as dumb as to put the private key into the authorized_keys file, but it’s worth reminding myself that the private key belongs on the workstation I’m trying to connect from.
  4. Trying to login to a server where key authentication has been disabled (why would anyone do this?). Check PubkeyAuthentication in /etc/ssh/sshd_config.
  5. Not one of my mistakes (I’m on the side of those who disable root logins), but logging in directly as root is often turned off.
  6. The permissions on the server-to-login-to:~user/.ssh directory and the file server-to-login-to:~user/.ssh/authorized_keys need to be very restricted. Basically no permissions for anyone other than the owner.

I am sure there are plenty of other possible mistakes, but running through this checklist seems to work for me.
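Items 1 and 6 in particular lend themselves to a quick scripted check. The sketch below uses my own helper names (fix_perms, count_keys) and runs against a scratch directory rather than a real account – point the helpers at a genuine home directory when actually debugging:

```shell
# Item 6: sshd silently ignores authorized_keys when the directory or
# file is writable by anyone other than the owner.
fix_perms() {
  chmod 700 "$1/.ssh"
  chmod 600 "$1/.ssh/authorized_keys"
}

# Item 1: every key must be a single line; counting lines that start
# with a key type is a quick way to spot a key split by pasting.
count_keys() {
  grep -c -E '^(ssh|ecdsa)-' "$1/.ssh/authorized_keys"
}

# Demonstrate against a scratch directory (the key material is fake).
home=$(mktemp -d)
mkdir -p "$home/.ssh"
printf 'ssh-ed25519 AAAAC3NzaC1lZDI1... me@laptop\n' > "$home/.ssh/authorized_keys"
fix_perms "$home"
count_keys "$home"
```

And when all else fails, ssh -v from the client side will usually say which keys were offered and why each was refused.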