Mar 24 2013

The above links to an interesting browser which allows zooming and selection of different data sets; it’s worth a look if you’re into that sort of thing. It is rather surprising, though, that it doesn’t like IPv6 addresses!

The most controversial thing about this map of the Internet, gathered during 2012, is that it was produced with the aid of a botnet; in other words, the researcher stole the resources they needed. That is obviously wrong – no matter how good the cause – but now that it has been done, there is no reason not to look at the results (whilst wrong, this isn’t really evil).

The first interesting discovery here is that this anonymous researcher managed to write a simple virus that would load the Internet scanner onto many devices with default passwords set – admin accounts with “admin” as the password, root accounts with “root” as the password, etc. You would have thought that such insecure devices would have been driven off the Internet by now, but it turns out not to be the case – there are at least 420,000 of them!

You could even argue that the owners of such machines are asking to have their devices controlled by anyone who cares to try. Perhaps a little extreme, but certainly some people think so, or this Internet survey wouldn’t exist.

But now the results. If you look at the default settings in the browser above, you will encounter large swathes of black squares where apparently nothing is in use. The trouble is that whilst an IP address that is pingable, or has ports open, is certainly “in use”, an IP address that is merely registered in the DNS may or may not be in use; and an unregistered IP address that does not appear to do anything may very well still be in use.
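To make “pingable, or has ports open” concrete, here is a minimal sketch – not the researcher’s code, just an illustration – that checks whether a single address accepts a TCP connection on a common port. The address and port are placeholders, and a real census would also send ICMP echo requests, which generally requires raw sockets and root privileges.

```python
#!/usr/bin/env python3
"""Minimal sketch: is this address visibly "in use"?

The address and port below are placeholders; a fuller survey would
also try ICMP ping and a much wider range of ports.
"""
import socket


def tcp_port_open(address: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to address:port succeeds."""
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # 192.0.2.0/24 is reserved for documentation, so this is harmless.
    if tcp_port_open("192.0.2.1", 80):
        print("Responds on port 80 - visibly in use")
    else:
        print("No response - but it may still be in use")
```

Note the asymmetry: a successful connection proves the address is in use, but a failure proves nothing – which is exactly the problem with those black squares.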

Essentially the whole exercise hasn’t really said much about how much of the Internet address space is in use, although that is not to say that the results are not useful.

One special point to make is that many of the large black squares that appear unused are allocated to organisations that may very well want proper IP addresses that are not connected to the global Internet. That is not wrong in any way – before the widespread adoption of NAT, it was common and indeed recommended that organisations obtain public IP addresses before they were connected to the Internet, to avoid duplicate network addresses appearing. And an organisation that legitimately obtained an old “class A” has no obligation to return the “unused” network addresses to the unallocated pool. Even if they did, it would not make a big difference; we would still run out of addresses.

The answer to the shortage of IPv4 addresses is IPv6.
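To put the difference in scale into perspective, here is a back-of-envelope comparison of the two address spaces – simple arithmetic, nothing more:

```python
#!/usr/bin/env python3
"""Back-of-envelope comparison of IPv4 and IPv6 address-space sizes."""
ipv4 = 2 ** 32    # 32-bit addresses
ipv6 = 2 ** 128   # 128-bit addresses

print(f"IPv4 addresses:   {ipv4:,}")        # about 4.3 billion
print(f"IPv6 addresses:   {ipv6:.3e}")      # about 3.4e38
print(f"IPv6 /64 subnets: {2 ** 64:,}")     # one /64 per LAN, 2^64 of them
```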

 

Jan 13 2013

Perhaps.

But it is a lot more complex than the mainstream press would have you believe. The story above is effectively about researchers using a specialised search engine to find what is effectively the login banner of SCADA systems – that is, the systems that control utilities such as sewage plants, power systems, and the like. What is not so widely publicised is that the same researchers warned about these insecurities as far back as 2010, so the latest warning by the US government is a bit lackadaisical.

On the other hand, the discovery of what are effectively login banners is just that – login banners. Whilst exposing them is pretty poor practice, it does not necessarily mean that the bad guys can get into the relevant systems. Attaching critical systems directly to the Internet is something that really should not be done, but is often done because :-

  • It has probably long been the practice to attach such systems in such a way that work can be carried out from home. In the past, it would have been via a dial-up modem. Making such systems available on the Internet makes the insecurity more visible, although dial-up modems themselves are not necessarily secure.
  • Attaching the systems directly to the Internet is the kind of laziness that comes from a desire for convenience. Only services that everyone on the Internet can legitimately make use of should be directly on the Internet; “work from home” services should be reached via some sort of gateway, such as a VPN system, but that requires more work (a minimal illustration follows this list).
  • On occasions, such systems are connected directly to the Internet in an emergency for convenience – such as getting a vendor to look at some problem. And of course once connected, it tends to stay connected. Amazingly enough, it often seems that the customer needs to jump through hoops for the convenience of a vendor rather than the other way around.
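By way of illustration of the gateway point in the second bullet, here is a minimal sketch of the difference in firewall terms. The addresses are placeholders, port 502 is the usual Modbus/TCP port, and this is a sketch of the idea rather than a complete ruleset:

```
# The lazy approach: the control system's port is open to the world
# (no rule at all - which is exactly the problem).

# The gateway approach: only clients on the VPN subnet may reach it.
# 10.8.0.0/24 is a placeholder for your VPN client address range.
iptables -A INPUT -p tcp --dport 502 -s 10.8.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 502 -j DROP
```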

Of course gateway systems themselves can be vulnerable, especially given the problems we have with weak passwords.

Earlier I mentioned that just because a SCADA system can be reached from the Internet does not mean a bad guy can break into it to cause damage. Well, that is true enough, but most experts think that SCADA systems are riddled with security issues, including default passwords left unchanged, etc. Perhaps as poor as the Internet was back in the early 1990s.

It is a strange thing, but it seems that vendors who sell us stuff do not seem to pay much attention to security until bad guys start attacking them and exposing their vulnerabilities.

So we have a situation where SCADA systems are directly connected to the Internet, and many of those SCADA systems are vulnerable in some way. Does this mean that bad guys are going to break in and destroy the utilities?

Well, perhaps. But on previous occasions, the bad guys have broken in just to look around. As someone remarked to me recently, the bad guys are busy making money, and unless they see a way to make money from insecure SCADA systems they will leave them alone. Of course there is always the issue of cyber-terrorism, where the bad guys are less interested in money and more interested in making a point of some kind or another.

But should you worry about the security of SCADA systems? Probably not. After all, why worry about something you have no power over? Should I worry about the security of SCADA systems? Definitely (as you may have guessed, my work involves security). Anyone in the information security business should be looking at their own SCADA systems and wondering whether they are properly protected.

Dec 13 2012

I have been thinking a fair amount about Information Security recently; probably because I am in the middle of a SANS course which is rather more interesting than most IT courses I have been on. As I was walking in this morning, I was pondering how I would explain what I do to a distant ancestor. Not exactly the easiest of tasks given that what we do involves what would seem to be magic to someone from the distant past.

But an analogy did occur to me: What we do is somewhat similar to the militias that used to protect walled towns and cities in the medieval era; particularly during periods of the medieval era when central authority was somewhat lacking. Such as England’s “Anarchy”.

In the distant past (and in some cases, the not so distant past), towns could be at risk of being sacked by brigands for profit or for some “military” purpose. Those living in towns were understandably alarmed by this possibility, and in many cases would arrange for protection by hiring soldiers; the defences would often include city walls, a militia (paid or voluntary), etc.

Which is somewhat similar to what we do – we’re the soldiers hired to protect the “town” (a company or institution of some kind), and we build town walls (firewalls) and other defences. Obviously it is easy to take the analogy too far – we don’t get to fire crossbows at our attackers. But neither is it completely inaccurate, or indeed uninteresting.

Today we expect our central governments to arrange physical protection for us – we don’t expect to need to organise a militia to protect our cities; neither do we expect to be held up at gunpoint and made to turn over our valuables. Yes, there are exceptions, but they are sufficiently unusual that they are greeted with astonishment. And yes, some companies with especially high-value assets do arrange for additional protection over and above what is usually provided by the state.

But when you compare physical security with information security, it becomes apparent that we are still in the medieval era when it comes to information security. States are only just beginning to look at “cyberwarfare”, and offer little other than advice to individuals or organisations looking for protection; it is common to hear that the police are simply not interested in looking at an incident unless the losses involved exceed £1 million.

If someone suffers financial harm through a phishing attack, our standard response is to blame them for being “stupid”. Whilst most phishing attacks do involve someone doing something stupid, it seems odd to blame the victim – who would blame the victim of a mugging?

Similarly, when attackers break into an organisation and steal a whole bunch of database files which in turn contain tons of clear-text passwords, or poorly hashed passwords, we blame the victim. How could they be so stupid as to not protect that data? After all, being careful costs more.
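As a concrete illustration of “protecting that data”, here is a minimal sketch of salted, deliberately slow password hashing using only Python’s standard library. The iteration count is illustrative rather than a recommendation, and none of this is drawn from the incidents alluded to above.

```python
#!/usr/bin/env python3
"""Minimal sketch: store passwords as salted, slow hashes,
never as clear text. Parameters are illustrative only.
"""
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow; tune for your hardware


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) suitable for storage."""
    salt = os.urandom(16)  # a fresh random salt per password
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key


def verify_password(password: str, salt: bytes, key: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)


salt, key = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("password123", salt, key)
```

The point being that even if the database files are stolen, each password then has to be attacked individually and slowly, rather than being read straight off.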

So perhaps I could explain what I do as being an old warrior who has settled down in a town and runs the local militia.

Now if you’ll excuse me, it’s time for bed – time to hang up the crossbow and take off this horrible chain mail.

Dec 10 2012

Today it was announced that the NHS would be mapping the DNA of cancer patients (with their consent), with the results to be stored and used by researchers. Which on the surface seems to be a perfectly sensible thing to do.

Of course there are those who are concerned with the privacy issues of the data being stored. Which is fair enough – any large store of data like this is subject to privacy issues, and there are genuine fears that the data may be made available to private companies with no interest in health research.

Amusingly, one of the comments was that the data would be made anonymous by removing any personal data from the data made available to researchers. Amusing because the most personal data, and the ultimate means of identifying individuals, is the DNA sequence itself – nothing can be more fundamental in identifying an individual than their unique DNA sequence.

On a more serious note, it is effectively impossible to make this kind of data completely anonymous. To be of any use the data in this database needs to include more data than just the DNA sequence – such as disease(s), treatments used, outcomes, etc. Whilst this may not be useful in identifying every individual taking part, it may well be enough to identify individuals with rarer combinations of disease and circumstances.
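As a toy illustration of that point, here is a minimal sketch using entirely invented records; real genomic datasets carry far more attributes, which only makes the narrowing down easier.

```python
#!/usr/bin/env python3
"""Toy sketch of re-identification. The records are invented;
no real dataset is involved.
"""

# "Anonymised" research records: no names, but quasi-identifiers remain.
records = [
    {"disease": "lung cancer", "age_band": "60-69", "region": "London"},
    {"disease": "lung cancer", "age_band": "60-69", "region": "London"},
    {"disease": "rare sarcoma", "age_band": "20-29", "region": "Orkney"},
]

# An attacker who knows one or two facts from elsewhere (a news story,
# a conversation, a social media post) simply filters on them.
matches = [r for r in records
           if r["disease"] == "rare sarcoma" and r["region"] == "Orkney"]

print(f"{len(matches)} matching record(s)")
if len(matches) == 1:
    print("Unique match - this 'anonymous' record is re-identified.")
```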

Nov 24 2012

NTP is one of those strange services that are so vital to the operation of an organisation’s network; if the servers around the network get their time in a muddle, all sorts of strange things can start happening. Besides which most people expect their computers to be able to tell the right time.

But often it is one of the unloved services. After all, no user is going to ask about the health of the NTP service. And if you are a senior manager involved in IT, do you know who manages your NTP infrastructure? If so, have you ever asked them to explain the design of the NTP infrastructure? If not, you may be in for a nasty surprise – your network’s NTP infrastructure may rely on whatever servers could be scavenged, with the minimum investment of time.

Of course, NTP is pretty reliable and in most circumstances extremely resilient. NTP has built-in safeguards against confused time servers sending wildly inappropriate time adjustments, and even in the event of a total NTP failure, servers should be able to keep reasonable time for at least a while. Even with a minimum of investment, an NTP infrastructure can often run merrily in the background for years without an issue.

Not that it is a good idea to ignore NTP for years. It is better by far to spend a little time and money on a yearly basis to keep things fresh – perhaps a little server, and a day’s time each year.

That was quite a long, rambling introduction to the NTP “glitch” that I learned about this week, but it perhaps goes some way to explaining why such a glitch occurred.

A number of organisations reported that their networks had started reporting a time way back in the year 2000. It turns out that :-

  • The USNO (US Naval Observatory) had a server that for 51 minutes reported the year as 2000 rather than 2012.
  • A number of organisations with an insufficient number of clock sources (i.e. just the erroneous USNO one) attempted to synchronise to the year 2000 causing the NTP daemon to stop.
  • Some “clever” servers noticed that NTP had stopped, and restarted it. Because most default NTP startup scripts set the clock on startup, these servers were suddenly sent back in time to the year 2000.

And a cascade of relatively minor issues becomes a major issue.

Reading around, the recommendations to prevent this sort of thing from happening are :-

  1. Use an appropriate number of time sources for your main NTP servers; various suggestions have been made ranging from 5 (probably too few) to 8 (perhaps about right) to 20 (possibly overkill).
  2. Have an appropriate number of main NTP servers for your servers (and other equipment) to synchronise their time with. Fewer than 3 is inadequate; 4 or more is recommended.
  3. Prevent your main NTP servers from setting their clocks when NTP is restarted, and monitor the time on each server regularly (see the sketches after this list).
  4. And a personal recommendation: restart all your NTP daemons regularly – perhaps daily – so that they check the DNS for any updated NTP server names.
  5. And as suggested above, regularly review your NTP infrastructure.
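To make recommendations 2 and 3 a little more concrete, here is a minimal sketch of the relevant fragment of an ntp.conf. The hostnames are placeholders, and the note about the -g flag describes common ntpd packaging rather than anything specific to the incident above.

```
# Fragment of /etc/ntp.conf for an internal "main" NTP server.
# The upstream hostnames are placeholders - use your own sources,
# and enough of them that a single bad clock can be out-voted.
server ntp-a.example.com iburst
server ntp-b.example.com iburst
server ntp-c.example.com iburst
server ntp-d.example.com iburst
server ntp-e.example.com iburst

# ntpd refuses to step the clock by more than the "panic" threshold
# (1000 seconds by default) unless it is started with -g, which many
# distribution startup scripts do. Removing -g from the daemon's
# startup options is one way to stop a freshly restarted ntpd from
# leaping back twelve years.
```

And for the monitoring half of recommendation 3, a minimal sketch using the third-party ntplib package; the hostnames and threshold are again placeholders:

```python
#!/usr/bin/env python3
"""Minimal sketch: warn when a server's clock offset looks wrong.
Hostnames and the threshold are placeholders.
"""
import ntplib

SERVERS = ["ntp-a.example.com", "ntp-b.example.com"]
MAX_OFFSET = 0.5  # seconds; tune to taste

client = ntplib.NTPClient()
for host in SERVERS:
    try:
        response = client.request(host, version=3, timeout=5)
    except (ntplib.NTPException, OSError) as exc:
        print(f"{host}: query failed ({exc})")
        continue
    # response.offset is the estimated difference between the server's
    # clock and the local clock, in seconds.
    if abs(response.offset) > MAX_OFFSET:
        print(f"{host}: offset {response.offset:+.3f}s exceeds threshold")
    else:
        print(f"{host}: ok ({response.offset:+.3f}s)")
```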