Feb 22 2014
 

Having had a wee bit of fun at work dealing with an NTP DDoS attack, I feel it is long past time to tackle the root cause of the problem – the ISPs who have neglected to implement ingress/egress filtering despite it being considered best practice for well over 15 years. Yes, longer than most of us have been connected to the Internet.

It is easy to point at the operators of NTP services that allow their servers to be used as attack amplifiers. And yes, these insecure NTP servers should be fixed, but given the widespread deployment of NTP in everything, it could take up to a decade for a fix to be universally deployed.

And what then? Before NTP became widely used for amplification distributed denial of service attacks, DNS was commonly exploited. And after NTP is cleaned up? Or even before? There are other services which can be exploited in the same way.

But the way that amplification attacks are carried out involves two “vulnerabilities”. In addition to the vulnerable service, the attacker forges the packets they send to the vulnerable service so that the replies go back to the victim. Essentially they trick the Internet into thinking that the victim has asked a question – millions of times.

Forging the source address contained within packets is relatively easy to do; it has been known about for a very long time, and the counter-measure has been known for nearly as long. To put it simply, all an ISP has to do is refuse to let packets leave its network(s) with a source address that does not belong to it. Yet many ISPs – the so-called “bad” ISPs – do not implement this essential bit of basic security. The excuse that implementing such filters would be impossible with their current routers simply doesn’t wash – routers that will do this easily have been on the market for many years.
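To illustrate just how simple the check is, here is a minimal sketch in Python of the decision a border router makes; the customer prefixes are made up for the example, and real routers do this in hardware with access lists or unicast reverse path forwarding rather than in software :-

# A sketch of the egress check: only forward a packet if its source address
# belongs to one of the prefixes assigned to this network.
import ipaddress

# Hypothetical prefixes assigned to this ISP's customers (documentation ranges).
OUR_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def should_forward(source_address):
    src = ipaddress.ip_address(source_address)
    return any(src in prefix for prefix in OUR_PREFIXES)

print(should_forward("203.0.113.9"))   # True - a legitimate customer address
print(should_forward("192.0.2.77"))    # False - forged, drop it at the edge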

It is laziness pure and simple.

These bad ISPs need to be discovered, named, and shamed.

Jul 23 2013
 

Sign me up for the pervs’ list … I won’t trust a politician to come up with a sensible method of censorship, and neither should you.

Ignoring the civil liberties thing for a moment: politicians with a censorship weapon will tend to overuse it, to the eventual detriment of legitimate debate.

How is Cameron’s censorship thing supposed to work? It appears nobody has a clear idea. Probably not even Cameron himself.

It seems to be two separate measures :-

  1. Completely block “extreme” porn: child abuse images, and “rape porn”. Oddly enough, he also claimed that “50 Shades of Grey” would not be banned, although there are those who categorise it as rape porn. Interestingly, this is nothing new, as child abuse images have been (ineffectively) blocked for years.
  2. An “optional” mechanism for blocking some other mysterious category of porn – the “family filter” mechanism.

Now it all sounds quite reasonable, but let’s take a look at the first measure. Blocking child abuse images sounds like a great idea … and indeed it is something that is already done by the Internet Watch Foundation. Whilst their work is undoubtedly valuable – at the very least it prevents accidental exposure to child abuse images – it probably doesn’t stop anyone who is serious about obtaining access to such porn. There are just too many ways around even a country-wide block.

Onto the second measure.

This means that anyone with an Internet connection has to decide when signing up whether they want to be “family friendly” or if they want to be added to the government’s list of perverts … or possibly the ISP’s list of perverts. Of course, how quickly do you think that list will be extracted and leaked? I’m sure the gutter press is salivating at the thought of getting hold of those lists to see what famous people opt to get all the porn; the same gutter press that won’t be blocked despite publishing pictures that some might say meet the criteria for being classified as porn (see Page 3).

And who decides what gets onto the “naughty list” of stuff that you have to sign up as a perv to see? What is the betting that there will be lots of mistakes?

As we already block access to “adult sites” by default on mobile networks, I have encountered this problem myself. Not in the way you might imagine: whilst away on a course, I used an “app” to locate hostelries around my location. On clicking the link to take me to a local pub’s web site to see a few more details, I was blocked. The interesting thing here is that the app had no problem telling me where the pub was, but the pub’s web site was blocked. Two different standards, for some reason?

And there are plenty of other examples of misclassification, such as Facebook’s long-running problem with blocking access to breast feeding information, hospitals having to remove censorship products so that surgeons could get to breast cancer information sites, etc. I happen to work in a field where sales critters are desperate to sell censorship products, and I’m aware that many places that do install such products have the endless fun of re-classifying sites.

And finally, given this is all for the sake of the children, who doubts that children will come up with ways to get around the “family filter” anyway? It is almost impossible to completely censor Internet access without extreme measures such as pulling the entire country off the Internet – even China with its Great Firewall is unable to completely censor Internet activity. Solutions such as proxies, VPN access, and Tor all mean that censorship can never be made totally effective. If you are thinking that this is all too technical for children, you are sorely mistaken … for a start, it only takes a few children to figure this stuff out, as they will share their knowledge.

This is not to say that a censorship mechanism that you control is not a sensible idea. You can select what to censor – prevent the children getting access to information about the Flying Spaghetti Monster, block access to other religious sites, etc. And such a mechanism has to be network-wide to prevent someone plugging in an uncensored device; something like the OpenDNS FamilyShield (although I have never used it, I believe from independent reports that it is a good product). Of course even DNS blocking can be worked around, but doing so takes a reasonable amount of effort.

Mar 28 2013
 

This article is short on references because I haven’t gotten around to filling them in … they will come

The fuss in the mainstream media about the distributed denial of service (DDoS for short) attack against Spamhaus goes to show that journalists need to buy more drinks for geeks, and the right geeks. It is nowhere near as bad as described, although the DDoS attack was real enough and definitely caused “damage” :-

  1. New York Times: http://www.nytimes.com/2013/03/27/technology/internet/online-dispute-becomes-internet-snarling-attack.html?pagewanted=all&_r=0
  2. Daily Mail:  http://www.dailymail.co.uk/news/article-2300810/CyberBunker-revealed-Secretive-fanatic-worst-cyber-attack.html

This article is not intended to be totally technically accurate in every detail; it is intended to describe the incident in enough detail and with enough accuracy that it can be understood without übergeek status.

So What Happened?

Spamhaus are experiencing an ongoing distributed denial of service attack that started on the 20th March. The initial attack very quickly overwhelmed their 10Gbps (that’s about 1,000 times faster than your Internet connection) link to the Internet. Whilst this disrupted the Spamhaus web site, and various back office services, the main service that Spamhaus provides kept running (as it is distributed).

The very clued-up geeks at Spamhaus, who have had plenty of experience of being under attack, very quickly contacted CloudFlare, which started hosting their web sites and other back office services at numerous data centres around the globe. Their services rapidly started returning to life – it isn’t the sort of thing that can be done instantly, and probably took a lot of late nights.

However the attacks escalated and reached levels of at least 300Gbps (that’s about 30,000 times faster than your Internet connection), or about 13Gbps of traffic for each of CloudFlare’s 23 data centres. That’s a lot, and could be responsible for Internet slowdowns …

The Internet Is Slow. Is It The DDoS?

Well, perhaps. We all have a very understandable tendency to blame known events for problems we’re having. Is the Internet slow? It must be that DDoS. But it is not necessarily so.

And if the whole Internet seemed slow for you, it is quite possible that you were unknowingly taking part in the attack, because the attack relied on infected PCs together with other stuff described below.

It is also possible that some parts of the Internet were overwhelmed by the DDoS. Reports have indicated that Internet services plugged in alongside the CloudFlare data centres (or in them) were suffering somewhat because of the extraordinary levels of traffic. However, this is the Internet and there is always lots of stuff going on that may cause slower performance than normal in various corners of the ‘net.

Was This The Biggest DDoS Attack?

Possibly. The figure of 300Gbps (and it was probably larger than that – the 300Gbps figure was measured through just one Tier-1 ISP) probably qualifies as the largest publicly known DDoS.

However DDoS attacks are not always made public; there could well have been larger attacks that were not made public.

Various responses have indicated that the attack was not as serious as described by others :-

  1. http://cluepon.net/ras/gizmodo
  2. http://gizmodo.com/5992652/that-internet-war-apocalypse-is-a-lie

It may be that these commentators are mistaken to the extent that they didn’t see a problem; it may be that European and Asian networks were more prone to a slow-down than elsewhere.

What Is A Distributed Denial Of Service Attack?

If you were an attacker, you could try sending network traffic as fast as your PC could handle to the target of your attack. However the amount of traffic you could send would be very limited – you can’t send more than the speed of your Internet connection. Say 10Mbps … a lot less than most large services use for their own Internet connections.

To make an attack more effective, you will want to have lots of people send traffic as quickly as they can. And the easy way to do that is to infect PCs with some sort of malware, and use your control of those infected PCs to send out that denial of service traffic. At which point it becomes a distributed denial of service attack, because the attack traffic is distributed around the Internet.

And if you can find some way of amplifying your attack traffic so that say 10Mbps of traffic becomes 1Gbps of traffic, you make your attack much more effective.
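The arithmetic is simple enough to show directly; the packet sizes below are purely illustrative rather than figures measured from any real attack :-

# Illustrative sizes: a small DNS query versus a much larger answer.
query_bytes = 64          # a typical small DNS query (illustrative)
response_bytes = 3000     # a large answer, e.g. to an "ANY" query (illustrative)

amplification = response_bytes / query_bytes
print(f"Amplification factor: about {amplification:.0f}x")

# So 10 Mbit/s of forged queries could, in theory, become roughly this much
# traffic aimed at the victim:
attacker_mbps = 10
print(f"{attacker_mbps} Mbit/s of queries -> about {attacker_mbps * amplification:.0f} Mbit/s at the victim")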

So How Was This Done?

The details of what went on become pretty hairy very quickly, but very simply :-

  1. The attacker takes control of a large number of infected PCs to form a “robot army” that sends out network traffic under the attacker’s control.
  2. The attacker instructs their robot army to send out DNS requests as quickly as possible with the source address forged as the victim’s address.
  3. The negligent ISP allows those packets out by not applying source filtering.
  4. The network traffic reaches any number of misconfigured DNS servers that answer with a larger reply sent to the victim’s address.

DNS?

This is short for the domain name system and is a service that turns names into numbers (amongst other things). You type in a name such as www.google.com and the DNS server your PC is configured to talk to turns that name into an Internet address such as 203.0.113.63 or possibly 2001:db8:0:1234:0:5678:9:12. Your PC then makes a network connection to that numeric address in the background, and fetches a web page, a music stream or some other content you want.

Without the DNS we would all have to rely on numeric addresses to make connections – a lot tougher!
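For the curious, the name-to-number step is a one-liner in most programming languages. Here is a small Python illustration using whatever resolver the system is configured with (the name queried is just an example) :-

# Ask the configured DNS server to turn a name into addresses.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("www.google.com", 80, type=socket.SOCK_STREAM):
    print(sockaddr[0])   # one IPv4 or IPv6 address per answer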

There’s another factor here, in that DNS is an amplifying service – you ask for a name such as www.google.com, and the answer is a whole lot longer than just the numeric address you “need”, as it can (and often does) contain a number of network addresses together with associated information :-

% dig www.google.com  

; <<>> DiG 9.8.4-rpz2+rl005.12-P1 <<>> www.google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 4

;; QUESTION SECTION:
;www.google.com.			IN	A

;; ANSWER SECTION:
www.google.com.		61	IN	A	74.125.138.104
www.google.com.		61	IN	A	74.125.138.106
www.google.com.		61	IN	A	74.125.138.99
www.google.com.		61	IN	A	74.125.138.147
www.google.com.		61	IN	A	74.125.138.105
www.google.com.		61	IN	A	74.125.138.103

;; AUTHORITY SECTION:
google.com.		126160	IN	NS	ns3.google.com.
google.com.		126160	IN	NS	ns2.google.com.
google.com.		126160	IN	NS	ns4.google.com.
google.com.		126160	IN	NS	ns1.google.com.

;; ADDITIONAL SECTION:
ns1.google.com.		126160	IN	A	216.239.32.10
ns2.google.com.		126160	IN	A	216.239.34.10
ns3.google.com.		126160	IN	A	216.239.36.10
ns4.google.com.		126160	IN	A	216.239.38.10

;; Query time: 1 msec
;; SERVER: 10.0.0.26#53(10.0.0.26)
;; WHEN: Sat Mar 30 09:52:59 2013
;; MSG SIZE  rcvd: 264

If you are talking to a misconfigured DNS server, it could answer even when it should not. Normally DNS servers are configured to answer just for those they are intended to provide answers to – your ISP’s DNS servers will answer your questions, and not mine. However if they are misconfigured, they will answer anybody’s questions and will function as a DDoS amplifier.
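If you happen to have the dnspython library to hand, a rough-and-ready test for an open resolver looks something like the sketch below; the server address is a placeholder, and a real survey would need to be rather more careful :-

# Does this server recursively answer questions from strangers?
import dns.flags
import dns.message
import dns.query

def looks_like_open_resolver(server_ip):
    # Ask the server to resolve a name it is not authoritative for.
    query = dns.message.make_query("www.google.com", "A")
    try:
        response = dns.query.udp(query, server_ip, timeout=3)
    except Exception:
        return False   # no answer at all, so not usable as an amplifier
    # Recursion available plus actual answers means anyone can use it,
    # including an attacker forging source addresses.
    return bool(response.flags & dns.flags.RA) and len(response.answer) > 0

print(looks_like_open_resolver("192.0.2.53"))   # placeholder address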

This does not include public DNS servers such as OpenDNS, or Google’s public DNS servers – they are specially configured to avoid acting as DDoS amplifiers – probably by imposing a rate limit so that they stop answering if you ask too many questions.
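Nobody outside those companies knows exactly how they do it, but a crude per-client rate limit of the sort being guessed at here might look something like this; the window and threshold are plucked out of the air :-

# Stop answering a client that asks too many questions too quickly.
import time
from collections import defaultdict

WINDOW_SECONDS = 1.0
MAX_QUERIES_PER_WINDOW = 20          # illustrative threshold

_recent_queries = defaultdict(list)  # client address -> recent query times

def allow_query(client_address):
    now = time.monotonic()
    recent = [t for t in _recent_queries[client_address] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _recent_queries[client_address] = recent
    return len(recent) <= MAX_QUERIES_PER_WINDOW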

Source Filtering?

When you click on a link in your web browser, your browser sends out a network packet containing the request (“GET /webpage”), and that network packet contains the destination of the web server – so your request reaches it, and your own address – so the web server knows where to send the answer! Your own address (in these circumstances) is known as the source address.

With appropriate software, you can forge your source address so that replies to your request go back to a different place. Without that, only the very simplest DDoS attacks would work.

Of course, it has been best practice to block forged source addresses since, well, not long after the beginning of the Internet. This is known as source filtering. An Internet router is capable of deciding that packets coming in from wire A should not have an address assigned to wire B, and so should be dropped on the floor.

An Internet router that doesn’t do that is poorly configured.

So How Can This Be Stopped?

The answer is that we have known how to stop this sort of attack for at least a decade. And indeed the best Internet citizens have done so for years.

The trouble lies with those on the Internet who are not necessarily the best Internet citizens. Of the big three remedies, two are probably being neglected because managers at ISPs do not see the business benefit of applying them. And there isn’t a business benefit – it is a social responsibility.

The three remedies are :-

  1. The average Internet user needs to take action to prevent their PC from getting infected. Get anti-virus protection, and an Internet firewall. If the PC acts weird, get it looked at. And if the Mac acts weird, get it looked at too (yes they do get infected).
  2. ISPs should apply BCP38 (which dates back to 2000) which specifies source filtering.
  3. ISPs running DNS servers should ensure that their DNS servers are properly configured to only answer queries for legitimate clients.

And if you happen to know a senior manager at an ISP, ask them about BCP38 and if they’re doing it – source filtering is probably the most important action here.

But Who Is Responsible?

It is easy to get distracted by the problems caused by those leaving poorly configured routers and insecure PCs lying around on the Internet. Whilst their owners are responsible for effectively leaving tools around that attackers can use (and all too often do use), they are not directly responsible for the attack.

The attacker is.

But who were they?

The fairly credible rumours are that the attackers were either Cyberbunker or Stophaus.com, as part of a campaign against the actions of Spamhaus. The various criminals behind the flood of spam targeting your mailbox with all sorts of rubbish have long complained about the actions of Spamhaus, as they try to prevent spam arriving. And Cyberbunker is an ISP dedicated to providing hosting to services that may get shut down elsewhere – they deal with anyone except paedophiles and terrorists, which leaves a whole world of swamp dwellers that you would really rather not know about. And spammers.

Who Are Spamhaus?

Spamhaus are subject to a great deal of black propaganda – including accusations of blackmail, extortion, censorship, and probably kicking cats too. The reason? They help identify spammers, so that ISPs can choose to block spam.

Spammers are somewhat irritated by this – their business model relies on polluting your mailbox so that the 1% (or so) of people who do respond to spam is a large enough number that they can carry on making money. And they get irritated very quickly if someone tries to interfere with their “right” to use your inbox to make money.

Mail server operators have long been blocking spammers using a whole variety of methods, and some of the best collaborated on producing lists of addresses of spammers that others could use. These evolved into DNS-based RBLs (real-time blackhole lists), and one of the most respected groups of volunteers became known as Spamhaus.

You may be thinking that you still get plenty of spam, so they cannot be doing too great a job. But :-

  1. You may be with an ISP that chooses not to use Spamhaus.
  2. You don’t see the spam that gets blocked. Even if you see dozens of spam messages a day, you may be seeing only 5% of the spam that was sent your way.

It is telling that amongst those in the know, those who deal with spam and Internet abuse in general, there is practically nobody who thinks of Spamhaus as anything other than the good guys.

 

Oct 16 2012
 

Sometimes I get surprised by how many people do not fully understand how URLs work … or more specifically how they are decomposed and what each part means. And not just people who have no real reason to understand them, but people in IT. As a DNS administrator (amongst other things) I get some surprising requests – surprising to me at least – to which I have to explain that much as I would like to help, accomplishing the impossible is a task somewhat above my pay grade.

With any luck (so probably not then), this little post may go some way towards explaining URLs and what can and cannot be accomplished with the dark arts of the domain name system.

To start with, URLs can be thought of as web addresses. Not the kind you find painted on the sides of vans (www.plumbers-are-us.com) but what they turn into in the location bar with an honest web browser when you visit a site. Such as http://www.plumbers-are-us.com/. Although I note that my own browser is less than honest!

But just to make things a little more interesting, I will make that example URL a little more interesting: http://www.plumbers-are-us.com:8080/directory/portsmouth.html.

And now to the dissection. The first part of that URL above is the http bit … to be precise, that which appears before the ‘://’ (apologies if you have been deceived by Microsoft, but a ‘/’ is a forwards slash and a ‘\’ is a backwards slash, although those formal graphemologists who write the standards prefer to call a slash a solidus). This part of the URL is the scheme.

The scheme defines what protocol should be used to fetch a page with. You should be familiar with http and https, as these are conventionally used to fetch web pages … with the latter involving SSL encryption of course. There are of course other schemes less well known :-

ftp – File Transfer Protocol, a pre-web method for transferring files.
gopher – Gopher, an earlier competitor to the Web.
mailto – Used to compose a mail message to an address.

In fact that is just a tiny sneak peek at the full list, which contains a number of things even I have never heard of. But the usual scheme is either http or https (at least for now), so we can skip over the scheme part.

The next part (between the ‘//’ and the next ‘/’) contains two items of information :-

  1. The “hostname” where the web server can be found.
  2. The “port” to attach to on that web server.

The “port” is relatively uninteresting. If the server where the URL is served from is configured properly, there is no need to specify a port number, as any browser is capable of realising that the default port number for http is 80 (computers are good with numbers after all) and 443 for https. Unfortunately, whilst there is (arguably) no real excuse for running web servers on non-standard ports these days, some people insist on doing the Wrong Thing; quite often through archaic knowledge picked up during the 1990s which would be best recycled.

The “hostname” part is where it starts to get interesting. This is turned into an IP address by your browser, so it can go off across the Internet and have a polite conversation with a web server at the other end to ask nicely for a copy of the web page you have asked for. You can just put an IP address in there, but the expectation is that sometimes URLs may be typed in, and isn’t really.zonky.org slightly more memorable than 2001:8b0:640c:dead::d00d?

But wait! It gets more interesting: the DNS allows you to point more than one name at a server, so mine can be reached with several different URLs such as http://zonky.org and http://really.zonky.org plus a few others. These in fact show different web pages, by using so-called virtual servers (which have nothing to do with virtual machines).
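For the curious, the trick behind virtual servers is that the browser tells the web server which name it used, in the HTTP Host header; the server picks which site to show based on that. A little Python sketch using my own names from above, assuming the server is up and answering plain http on port 80 :-

# Fetch the front page twice from the same server, asking for different names.
import http.client

for name in ("zonky.org", "really.zonky.org"):
    conn = http.client.HTTPConnection("zonky.org", 80, timeout=5)
    conn.request("GET", "/", headers={"Host": name})
    response = conn.getresponse()
    print(name, response.status, response.reason)   # different sites, same server
    conn.close()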

So the DNS can be used to change a boring server name such as server0032.facilities.north.some.organisation into a more meaningful name such as internet.some.organisation, but it can only pull tricks with the “hostname” part. Any messing with any other part of the URL including the bit after the slash is the job of something else; usually the web server itself, although that can sometimes require additional support.

The last part of the URL comes after the first single slash – in our example the “/directory/portsmouth.html” part – which is best called the pathname, as it provides a path to the page within the web server to fetch. In a very simplistic way, web servers can be thought of as file servers which require you to tell them which file to request; just like working with the command line on a Linux machine or even a Windows machine.
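Python’s standard library will happily do the dissection for us, which makes a neat summary of the above; the URL is the made-up example from earlier :-

# Pull the example URL apart into its component pieces.
from urllib.parse import urlsplit

parts = urlsplit("http://www.plumbers-are-us.com:8080/directory/portsmouth.html")

print(parts.scheme)    # http - the protocol used to fetch the page
print(parts.hostname)  # www.plumbers-are-us.com - what the DNS turns into an address
print(parts.port)      # 8080 - a browser would assume 80 for http if this were missing
print(parts.path)      # /directory/portsmouth.html - the pathname the web server is asked for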

BTW: I’m not really that scary – I haven’t bitten anyone’s head off for ages … well, a couple of weeks at least!

Dec 08 2010
 

If anyone has been following the news closely over the last few days, they will be aware of the attempt that the Swedish authorities are making to extradite Julian Assange to face an assortment of sex charges including rape. Even by itself, there is enough suspicion about the timing of this, given the previous history of the charges, to cause any neutral observer to wonder just what is going on here.

For those who have not dug into the details, the charges were first investigated in August 2010 and then dropped, before being re-opened. All the while Julian Assange was either in Sweden, or willing to talk to the prosecutor although not prepared to travel to Sweden at his own expense. The escalation to a request for extradition was unfortunately timed, happening at the same time as the latest WikiLeaks (linking to a mirror as the main site is mysteriously down) publications.

By itself it is just about enough to cause a sensible person to say to themselves … “I wonder … Nah!”, but there are other things happening to WikiLeaks.

WikiLeaks appears to be under a continual distributed denial of service attack where many computers are used to send traffic to the WikiLeaks servers. There are two sets of servers involved in hosting the WikiLeaks sites – the actual web servers themselves, and the DNS servers hosting the name.

In the case of the web servers, the servers were first moved to the Amazon cloud service in the middle of a denial of service attack – so Amazon can hardly complain about this as it was known about at the time. Yet after less than a week, the site was booted off the Amazon cloud without a public explanation. The suspicion is that political pressure was brought to bear, especially given that one of the earliest statements about the issue came from a certain Joseph Lieberman – a US Senator.

WikiLeaks then went to a French hosting company – OVH – who have stated that they will honour their contract. Presumably providing that the French courts do not insist that they terminate the contract, which is possible given that the case is under review.

Separately to this, the WikiLeaks domain (or “name”) has itself been under attack. Large scale distributed denial of service attacks took place against the EveryDNS infrastructure servers that provide the name wikileaks.org, and every other name hosted by the same infrastructure. EveryDNS took the step of terminating their domain hosting. As of now, the domain wikileaks.org is not available via the DNS servers I run, indicating that either they have not found another hosting company for the name, or their alternative arrangements are under a sufficiently serious attack to be unavailable.

Those are the technical attacks.

In addition, a number of financial companies have frozen WikiLeaks accounts preventing funds from being used, or donations being made – PayPal (who admit that their decision was influenced by the US Government) and Mastercard amongst them.

Add all the attacks together and you start to think that there is some kind of conspiracy behind all this – perhaps the US government is waging cyberwar against WikiLeaks. It is almost certain that they have this capability and there are indications that they are annoyed enough with WikiLeaks to do this.

However it is still more probable that this is a combination of :-

  1. Annoyed US (and possibly other) “hackers” making denial of service attacks against the WikiLeaks infrastructure and the associated infrastructure.
  2. Various commercial organisations deciding that it is too much hassle to “help” WikiLeaks and deciding to terminate their contracts.

Probably the harshest criticism should be directed at PayPal, who have just said in a TV interview that they received advice from the US State Department that the WikiLeaks site was probably illegal under US law. Well, the opinion of a government in a free society should not be enough to condemn an organisation, and the directors of PayPal could deservedly be called chickenshit arse-lickers for their actions.

Perhaps you do not believe that WikiLeaks is in the right here. I’m not entirely sure myself – leaking US diplomatic cables is one thing, but perhaps publishing a list of potential targets the US government feels are critical to its security was a step too far. But there is a bigger issue here than “merely” WikiLeaks itself. We are seeing a situation where a website that has not been condemned for their actions in any court of law has been pushed around and to some extent off the Internet by the actions of a few – both people engaged in illegal activities (denial of service attacks) and people making commercial decisions (terminating contracts).

Imagine, if you will, that this website is something controversial in a country that is considered a pariah by most of the world – Iran perhaps; perhaps they publish allegations with evidence of widespread government crimes and corruption. Iran and supporters of Iran undertake to destroy that website with “cyberwarfare”. Wouldn’t we want that website to be protected in some way? Perhaps you are thinking that Iran doesn’t have the resources to undertake such an attack; well, think again. Many of the largest botnets capable of carrying out widespread denial of service attacks are under the control of organised criminals (spammers) who have fewer resources than any government – it takes little more than a spotty teenager in a basement to control tens of thousands of compromised machines and target whatever they like.

In such a situation, it would seem to make sense to provide a hosting service of last resort. Presumably a volunteer effort, as it would have to be immune to commercial interests, and presumably massively parallel to ensure that there are many servers providing service, so that a distributed denial of service attack would fail to hit everywhere.

Lastly, the US reaction to WikiLeaks seems to me to be a little over the top. And I am not talking about the lunatic fringe who are likely to jump up and down screaming at the slightest criticism of the US, but about more respected figures. Some of the reactions come close to echoing events such as the fatwā against Salman Rushdie way back in the 1980s.

For example :-

  • Jeffrey T Kuhner wrote in an editorial in the Washington Times that Julian Assange should be treated “the same way as other high-value terrorist targets” and be assassinated.
  • Gordon Liddy has suggested that Julian Assange should be added to a “kill list” of terrorists to be assassinated without trial.
  • Mitch McConnell has called Julian Assange a “high-tech terrorist”.
  • Newt Gingrich has stated “and Julian Assange is engaged in terrorism. He should be treated as an enemy combatant.”. Well it would be a start to treat any terrorist as an enemy combatant (the US doesn’t as enemy combatants have rights).

Calling for the assassination of Julian Assange is no better than a radical Islamist calling for the assassination of Salman Rushdie – we’re supposed to be better than the knuckle-dragging fundamentalists frothing at the mouth. It seems that some in the US aren’t. A reminder to those people – we supposedly live in countries where the rule of law is supposed to be followed, and nobody has tried and convicted Julian Assange of anything in relation to WikiLeaks.

As for calling Julian Assange a terrorist, that is blatantly ridiculous. However annoyed you may be with him, none of his actions equate to driving a truck packed with explosives into a crowded shop entrance, or hijacking a plane and flying it into a large city killing thousands. Even if any information published by WikiLeaks has led to the death of anybody (and nobody has managed to demonstrate this – merely raised ill-founded concerns about the possibility), the responsibility for those deaths belongs to those carrying out the killings and not WikiLeaks. At most (in such circumstances), WikiLeaks might be guilty of incitement to murder – and in a much less obvious way than those calling for the head of Julian Assange to be delivered to them on a platter.

The US is beginning to look like the fool in all of this – their information security is a joke, and their reaction to their inability to keep secrets is to shoot the messenger in a way that makes them look no better than those rogue regimes they complain so much about.