Dec 13 2012

I have been thinking a fair amount about Information Security recently; probably because I am in the middle of a SANS course which is rather more interesting than most IT courses I have been on. As I was walking in this morning, I was pondering how I would explain what I do to a distant ancestor. Not exactly the easiest of tasks given that what we do involves what would seem to be magic to someone from the distant past.

But an analogy did occur to me: what we do is somewhat similar to the militias that used to protect walled towns and cities in the medieval era, particularly during periods when central authority was somewhat lacking, such as England’s “Anarchy”.

In the distant past (and in some cases, not so distant past), towns could be at risk of being sacked by brigands for profit or for some “military” purpose. Those living in towns were understandably keen to avoid this possibility, and in many cases would arrange for protection by hiring soldiers; the defences would often include city walls, a militia (paid or voluntary), etc.

Which is somewhat similar to what we do – we’re the soldiers hired to protect the “town” (a company or some kind of institute), and we build town walls (firewalls), and other defences. Obviously it is easy to take the analogy too far – we don’t get to fire crossbows at our attackers. But neither is it completely inaccurate, or indeed uninteresting.

Today we expect our central governments to arrange physical protection for us – we don’t expect to need to organise a militia to protect our cities; neither do we expect to be held up at gunpoint and made to turn over our valuables. Yes, there are exceptions, but they are sufficiently unusual that they are greeted with astonishment. And yes, some companies with especially high value assets do arrange for additional protection over and above what is usually provided by the state.

But when you compare physical security with information security, it becomes apparent that we are still in the medieval era when it comes to information security. States are only just beginning to look at “cyberwarfare” and offer little other than advice to individuals or organisations looking for protection; it is common to hear that the police are simply not interested in looking at an incident unless the losses run to £1 million or more.

If someone suffers financial harm through a phishing attack, our standard response is to blame them for being “stupid”. Whilst most phishing attacks do involve someone doing something stupid, it seems odd to blame the victim – who would blame the victim of a mugging?

Similarly, when an organisation has attackers break in and steal a whole bunch of database files which in turn contain tons of clear text passwords, or hashed passwords, we blame the victim. How could they be so stupid as to not protect that data? After all, it costs more to be careful.

So perhaps I could explain what I do as being an old warrior who has settled down in a town and runs the local militia.

Now if you’ll excuse me, it’s time for bed – time to hang up the crossbow and take off this horrible chain mail.

Dec 10 2012

Today it was announced that the NHS would be mapping the DNA of cancer patients (with their consent) to be stored and used by researchers. Which on the surface seems to be a perfectly sensible thing to do.

Of course there are those who are concerned with the privacy issue of the data being stored. Which is fair enough – any large storage of data like this is subject to privacy issues, and there are genuine fears that the data may be made available to private companies with no interest in health research.

Amusingly, one of the comments was that the data would be made anonymous by removing any personal data from the data made available to researchers. Amusing because the most personal data, and the ultimate means of identifying an individual, is the DNA sequence itself; nothing can be more fundamental in identifying an individual than their unique DNA sequence.

On a more serious note, it is effectively impossible to make this kind of data completely anonymous. To be of any use the data in this database needs to include more data than just the DNA sequence – such as disease(s), treatments used, outcomes, etc. Whilst this may not be useful in identifying every individual taking part, it may well be enough to identify individuals with rarer combinations of disease and circumstances.

Nov 24 2012

NTP is one of those strange services that are so vital to the operation of an organisation’s network; if the servers around the network get their time in a muddle, all sorts of strange things can start happening. Besides which most people expect their computers to be able to tell the right time.

But often it is one of the unloved services. After all, no user is going to ask about the health of the NTP service. And if you are a senior manager involved in IT, do you know who manages your NTP infrastructure? If so, have you ever asked them to explain the design of the NTP infrastructure? If not, you may be in for a nasty surprise – your network’s NTP infrastructure may rely on whatever servers could be scavenged, put together with the minimum investment of time.

Of course, NTP is pretty reliable and in most circumstances extremely resilient. NTP has built-in safeguards against confused time servers sending wildly inappropriate time adjustments, and even in the event of a total NTP failure, servers should be able to keep reasonable time for at least a while. Even with a minimum of investment, an NTP infrastructure can often run merrily in the background for years without an issue.

Not that it is a good idea to ignore NTP for years. It is better by far to spend a little time and money on a yearly basis to keep things fresh – perhaps a little server, and a day’s time each year.

That was quite a long rambling introduction to the NTP “glitch” that I learned about this week, but perhaps goes some way to explaining why such a glitch occurred.

A number of organisations reported that their network had started reporting a time way back in the year 2000. It turns out that :-

  • The USNO (US Naval Observatory) had a server that for 51 minutes reported the year as 2000 rather than 2012.
  • A number of organisations with an insufficient number of clock sources (i.e. just the erroneous USNO one) attempted to synchronise to the year 2000 causing the NTP daemon to stop.
  • Some “clever” servers noticed that NTP had stopped, and restarted it. Because most default NTP startup scripts set the clock on startup, these servers were suddenly sent back in time to the year 2000.

And so a cascade of relatively minor issues becomes a major issue.
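
One way to spot the first link in that sort of chain is to check how many usable time sources each of your NTP servers can actually see; if the answer is one, a single confused upstream clock is all it takes. A minimal sketch using the standard ntpq tool (the hostname is purely illustrative) :-

  # List the peers this server knows about; lines starting with '*' or '+'
  # are the sources actually being used for synchronisation.
  ntpq -pn ntp1.example.org

  # Count the usable sources; anything less than 3 deserves attention.
  ntpq -pn ntp1.example.org | grep -c '^[*+]'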

Reading around, the recommendations to prevent this sort of thing from happening are :-

  1. Use an appropriate number of time sources for your main NTP servers; various suggestions have been made ranging from 5 (probably too few) to 8 (perhaps about right) to 20 (possibly overkill).
  2. Have an appropriate number of main NTP servers for your servers (and other equipment) to synchronise their time with. Anything less than 3 is inadequate; more than 4 is recommended.
  3. Prevent your main NTP servers from setting their time when NTP is restarted, and monitor the time on each server regularly (see the configuration sketch below).
  4. And a personal recommendation: Restart all your NTP daemons regularly – perhaps daily – to get them to check with the DNS for any updated NTP server names.
  5. And as suggested above, regularly review your NTP infrastructure.
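
To make those recommendations a little more concrete, the following is a minimal sketch of what the configuration on a main NTP server might look like. It assumes the reference ntpd implementation; the server names are illustrative rather than a recommendation, and the startup details vary between distributions :-

  # /etc/ntp.conf (sketch)
  # Plenty of independent sources, so a single confused clock gets outvoted.
  server 0.uk.pool.ntp.org iburst
  server 1.uk.pool.ntp.org iburst
  server 2.uk.pool.ntp.org iburst
  server 3.uk.pool.ntp.org iburst
  server ntp.example.ac.uk iburst

  # Require agreement from a sensible number of sources before believing them.
  tos minsane 3

  # Remember the local clock drift between restarts.
  driftfile /var/lib/ntp/ntp.drift

  # In the startup configuration (distribution dependent – often /etc/default/ntp
  # or /etc/sysconfig/ntpd): avoid the -g option, which allows ntpd to make an
  # arbitrarily large step to the clock when it starts.

A daily cron job that restarts ntpd and logs the output of ntpq -pn goes most of the way towards recommendations 3 and 4 for very little effort.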

Nov 24 2012

As could be expected, when there are yet again moves to pass the job of Internet Governance into the hands of the ITU, there is a huge wave of objections from the Americans; some of whom are objecting more from a reflex anti-UN position (or a wish to see the US remain “in control” of the Internet) than from a more considered position.

What is perhaps more surprising is the EU’s objections to the ITU taking control.

What Is Internet Governance?

In a very real sense, there is no such thing as the Internet; there are merely a large number of different networks that agree to use the Internet standards – protocol numbers, network addresses, names, etc. With the exception of names this is all pretty invisible to ordinary users of the Internet; at least when it works.

There is nothing to stop different networks from changing the Internet standards, or coming up with their own networking standards. Except of course that a network’s customers might very well object if they suddenly can’t reach Google because of different standards. Historically there has been a migration towards Internet standards rather than away from them.

In a very real sense, this is governance by consent. At least by the network operators.

It may be worthwhile to list those things that the current Internet Governance doesn’t do :-

  • It does not control network traffic flows or peering arrangements. Such control is exercised by individual networks and/or governments.
  • It does not control the content of the Internet. Not only is censorship not part of the current governance mission; it isn’t even within its power. Any current censorship is exercised by the individual networks and/or governments.
  • It does not control access, pricing, or any other form of network control. Your access to the Internet is controlled by your ISP and any laws enacted by your government.

There is probably a long, long list of other things that the current Internet Governance does not do. To a very great extent, the current governance is about technical governance.

What’s So Bad About The Status Quo?

“The Internet” is currently governed by ICANN (the “Internet Corporation for Assigned Names and Numbers”), which is a US-based (and controlled) non-profit corporation. Whilst there are plenty of people who complain about ICANN and how it performs its work, the key metric of how well it has performed is that just one of its areas of responsibility – the control of the top-level domains in the DNS – has resulted in any alternatives.

And those alternatives are really not very successful; as someone who runs an institutional DNS infrastructure, I would be under pressure to support alternative roots if they were successful enough to interest normal people. No such requests have reached me.

So you could very well argue that technically ICANN has done a perfectly reasonable job.

But politically, it is a far more difficult situation. ICANN is a US-based corporation whose authority over the Internet standards is effectively granted to it by the US Department of Commerce. This grates with anyone who is not a US citizen; non-US citizens now make up by far the majority of the Internet population.

Historically the Internet is a US invention (although the historical details are quite a bit more complex than that; it is widely acknowledged that the packet switching nature of the ARPAnet was inspired by work done by a British computer scientist), so it is not unreasonable that Internet governance started as a US organisation.

But in the long term, if it remains so, it will be undemocratic and tyrannical; whilst the US has a democratic government, it is only US citizens who can hold that government to account with a vote. The rest of us have no say in how the US government supervises ICANN, which is an untenable situation.

What About The ITU?

The key to any change in how Internet governance is managed is to make as few changes as possible. If we accept that ICANN has managed reasonably well at the technical governance, there is no overriding reason to take that away from them. If we accept that control of ICANN has to be passed to an international body, then what about the ITU?

Many people object to the idea of the ITU being in charge for a variety of reasons, but probably the biggest reason of all is that it is a UN body and certain people start frothing at the mouth at the mere mention of the UN.

But if you look at the history of the ITU, you will see that despite the bureaucratic nature of the organisation (which predates the UN by a considerable number of years), it has managed to maintain international telecommunications through two world wars. A not inconsiderable achievement, even if it succeeded because it had to succeed.

Time For A Compromise

International agreement is all about making all parties equally satisfied … or at the very least equally dissatisfied, with a solution that comes as close as possible to giving everyone what they want. A seemingly impossible task.

But despite spending nowhere near enough time studying the issues, one solution does occur to me. Hand over the authority by which ICANN operates to the ITU, with the proviso that any changes to the mandate of ICANN (in particular giving it additional authority) should be subject to oversight by the UN as a whole; and of course subject to UN Security Council vetoes.

Of course this is not a decision that should be made hastily; given that the main issue at stake is “political” rather than technical, there is no reason why the decision to do something has to be made quickly. But it does need to be made within 10 years.

Nov 19 2012

Over the years, whenever I’ve run into problems getting SSH key authentication to work, there’s always been the problem of a certain lack of information (partially because much of the information is held within the server logs which aren’t always accessible). This post is running through some of the issues I’ve encountered.

  1. The file server-to-login-to:~user/.ssh/authorized_keys has the key in it, but the values are stored across multiple lines (as can happen when the contents are pasted in). Simply join the lines together, removing any extra spaces added by the editor, and it should work.
  2. Naming the file server-to-login-to:~user/.ssh/authorized_keys incorrectly – my fingers seem to prefer authorised_hosts, which is wrong on both counts: the code expects the Americanised spelling, and the file is named after keys rather than hosts. Although you can set AuthorizedKeysFile to a space separated list of files, it’s usually best to assume it hasn’t been done.
  3. Getting confused over public/private keys. Not that I’m ever going to admit to being as dumb as to put the private key into the authorized_keys file, but it’s worth reminding myself that the private key belongs on the workstation I’m trying to connect from.
  4. Trying to login to a server where key authentication has been disabled (why would anyone do this?). Check PubkeyAuthentication in /etc/ssh/sshd_config.
  5. Not one of my mistakes (I’m on the side that disables root logins), but logging in directly as root is often turned off.
  6. The permissions on the server-to-login-to:~user/.ssh directory and the file server-to-login-to:~user/.ssh/authorized_keys need to be very restricted. Basically no permissions for anyone other than the owner (see the sketch after this list).
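
For the mechanical parts of that checklist, the following is a minimal sketch of the usual fixes, assuming a standard OpenSSH setup (the file locations are the defaults) :-

  # On the server: tighten the permissions (item 6) ...
  chmod 700 ~/.ssh
  chmod 600 ~/.ssh/authorized_keys

  # ... and append the *public* key (copied over by whatever means) as a single line.
  cat id_rsa.pub >> ~/.ssh/authorized_keys

  # Check that key authentication is actually enabled (item 4).
  grep -i pubkeyauthentication /etc/ssh/sshd_config

  # From the workstation: ask the client to explain what it is doing, which goes
  # some way towards making up for not being able to read the server logs.
  ssh -v user@server-to-login-to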

I am sure there are plenty of other possible mistakes, but running through this checklist seems to work for me.