May 02 2012

One of the cool things about “the cloud” is that there are numerous different companies all offering cloud-based storage of one kind or another. You can even get quite a bit of storage for free, and different services offer different features – Dropbox, for instance, which my phone is configured to automatically upload photos to. And there are plenty of other options out there :-

  • Box
  • Google Drive (of course you may already be using Google Docs, in which case you essentially have storage attached to that).
  • SkyDrive (although for some mysterious reason, Microsoft doesn’t supply a Linux client)
  • iCloud
  • Wuala
  • SpiderOak
  • Ubuntu One – which despite the name, isn’t just for Ubuntu!
  • And as a note to myself, there’s also SparkleShare, which is essentially a Dropbox-style client that talks to your own servers.
Undoubtedly there are a whole ton more, but I think I’ve gotten the “big names” covered. The best strategy is of course to find the one whose client works with all the platforms you use (phone, PC, laptop, etc.), comes with the most free storage, and charges the least for additional storage (in decreasing order of importance). Of course in the real world, you are likely to end up with more than one – simply because it’s tempting to look at the next “new thing”, or because you want more cheap storage, or simply because other people insist you use service X.

Now if you use multiple cloud-storage solutions, you have a bit of a problem – different clients offering different functionality, different amounts of storage available, and remembering what you put on which “cloud-disk”. Plus of course there is the interesting problem of security – different providers offer different levels of privacy and operate in different jurisdictions where different laws apply.

Different Clients

Different clients work in different ways with different features. For instance, for a Linux user :-

  1. The Dropbox client seems to work pretty well, but it doesn’t appear in a list of filesystems (i.e. when you type df) so you can’t instantly see how much space is still available, etc. At least not in the standard way.
  2. Box(.net) lacks a Linux client, so you have to hack something together (see the sketch below). Perfectly possible for more geeky users, but even for us there is the danger that a hackish solution may suddenly stop working mysteriously – or rather, that it is more likely to.
  3. Ubuntu One doesn’t seem to work via a filesystem interface at all.
  4. And that seems to be the same with SpiderOak.
It may be different for Windows users (I’m too lazy to check – if anyone wants to submit details, please go ahead), but I doubt it.
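
As an example of the sort of hack needed for Box, it can be mounted as a WebDAV filesystem using davfs2. This is only a rough sketch – it assumes Box still offers a WebDAV endpoint, and the URL below is from memory so may well have changed :-

sudo apt-get install davfs2
sudo mkdir /media/box
sudo mount -t davfs https://dav.box.com/dav /media/box
df -h /media/box     # unlike the Dropbox client, this one does show up in df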

Whilst cloud storage providers may offer additional features to differentiate their product, they are all essentially the same as a removable hard disk, USB memory stick, or some other kind of removable storage. Whilst the additional features are very welcome, why should we have to learn a new way of managing storage just because it is out there in the cloud ?

Privacy

There is a great deal of paranoia about storing private data in the cloud, with the assumption that creepy organisations such as Google will do something nasty with the data. Well maybe, but Google being that interested in one individual’s data is a little unlikely. Of course just because the cryptogeeks are a little paranoid does not mean they are completely wrong – there are privacy issues involved.

Firstly, Google could be looking at your data to determine things about you that would be of interest to advertisers – to present targeted adverts at you. Which at best can be a little weird.

Next we like to believe that the laws of our country will protect us from someone picking through our personal data. That someone could be the company supplying the storage, or it could be the government in the country where the storage is hosted. That would probably be fine if the storage was restricted to one location where we could be sure that the government protected us, but where is the storage located?

Much of the time the storage is located in foreign jurisdictions where there is no guarantee that any kind of privacy will be respected – especially if a foreign government takes an interest in your data. Don’t forget the laws of, say, the USA are not designed to protect citizens of any EU country (or vice versa). There are of course agreements such as the EU Safe Harbour agreement, but it is possible that it does not offer as much protection as assumed – it is not really intended for private individuals choosing to put their own personal data into foreign jurisdictions.

Probably most of us do not have to worry about this (although we can choose to), but some do have to be cautious. Some of us deal with personal data about third parties – sometimes very personal data – and need to consider whether storing such data in the cloud is being appropriately responsible about data privacy. For example, a contractor who stores information about their clients should be taking steps to ensure that data is not accidentally leaked (or hacked and published).

The easy answer to this problem is to assume that cloud storage is not safe for sensitive personal data – and fortunately there is a simple solution that still allows the cloud to be used. Use encryption such as TrueCrypt, so that even if the cloud provider leaks your data, it is still encrypted with keys that the provider does not hold.
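
TrueCrypt keeps everything in a fixed-size container file which can sit inside the synced folder. As an alternative sketch, something like encfs encrypts file-by-file, which tends to play a little nicer with sync clients – assuming encfs is packaged for your distribution, and with purely illustrative paths :-

sudo apt-get install encfs
# Ciphertext lives inside the synced Dropbox folder; the cleartext view
# appears under ~/Private (encfs offers to create both on the first run)
encfs ~/Dropbox/encrypted ~/Private
# ... work on files under ~/Private; only encrypted data ever reaches the cloud
fusermount -u ~/Private     # unmount when finished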

Store It Twice!

There have been occasions where storage providers have removed access to storage either permanently or temporarily – such as the Megaupload site. Whilst it is perhaps unlikely, it is possible for a cloud service provider to disappear and for the customers to lose their data – even if the cloud provider claims that there is some protection against this sort of thing happening. But it could happen, so if you store data in the cloud, it is sensible to ensure that you have copies of that data elsewhere.
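
Keeping that extra copy can be as simple as an occasional rsync of the synced folder to somewhere else entirely – the paths here are just examples :-

rsync -a ~/Dropbox/ /backup/dropbox-copy/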

 

Mar 17 2012

This is at least partially an appeal for information – if anyone knows of a web application scanner that does what I describe here, please let me know!

All the web application scanners I have come across so far seem to only try “online” scanning where the work is done by connecting to a web server using the same method as someone with a web browser would use. Or in other words the scanning tools replicate what an attacker might do. Hardly the wrong thing to do – it is probably the best method given that so much can only be determined by going through the web server.

In addition, there are also tools to scan the source code of web applications that you have written yourself. These pick out bits of the application that could do with a closer look. Fair enough for a web developer, but I’m after something a bit different.

What I want is a tool that, when given the directory containing the website, will go through it looking for weaknesses like the following :-

  1. Look for problems with the permissions – such as directories and files writeable by the web server owner.
  2. Look for common applications and components – such as WordPress – and identify them, and indicate whether they’re out of date or not.
  3. Look for signs of exploits – PHP ‘shells’ and the like.
  4. Look for content that isn’t linked to as an indication that it shouldn’t be present.

Of course most people could think of a few more things to add to that list! It would be a handy additional source of information when it comes to securing a website.
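
As a starting point on the first three items, even something as crude as the following would do – the document root, the web server user ("www-data" here), and the search patterns are all assumptions to be adjusted :-

DOCROOT=/var/www/example.org

# 1. Files and directories owned by (and writeable by) the web server user
find "$DOCROOT" -user www-data -perm -u+w -ls

# 2. A crude check for one common application (and its version file)
grep -r "wp_version" "$DOCROOT" --include=version.php 2>/dev/null

# 3. Strings that often turn up in PHP 'shells'
grep -rlE "base64_decode|eval\(" "$DOCROOT" --include='*.php'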

Mar 07 2012

So tonight Apple launched their new iPad, to undoubted mass hysteria from the Apple fans. But is it interesting?

Well of course it is – whatever the specifications, it is going to sell in huge numbers and have quite a big influence on the IT landscape. But ignoring that, what has changed ? And is it all good ?

The big change is the use of a high-density screen – 2048×1536 in a 9.7″ screen. The use of a high-density screen might seem excessive given that each individual pixel is getting towards being too small to see. But it does make the overall effect better – text (when scaled appropriately) becomes clearer, etc. After all, one of the reasons that reading on paper is easier on the eye is that the greater density makes things clearer.

Software that does not scale the display is going to look a bit odd – after all this screen is very roughly the equivalent of an old 1280×1024 screen (commonly a 20″ screen) in 9.7″. But I dare say Apple has a trick up its sleeve to deal with that.

But it is a bit odd that this is still not a wide-screen format screen – most other slate makers use the wide-screen format so films can scale up to the full size of the screen. But Apple wants black bars! Or letter-boxing if you insist – although as a film fan I hate that.

With any luck the new iPad’s screen resolution should trickle into other products – whilst I’m not keen enough on the iPad to go out and get one, I do want to see a high-density screen on my desktop at some point. And why not? Screens on the desktop have not just been stuck at the same resolution for a decade now, but have actually been decreasing in resolution – before HD TV became popular, 1920×1200 was a popular resolution on flat screens; now it is 1920×1080. Except if you have very deep pockets (although even that monitor does not have the density of the new iPad).

But what else ? Well, except for the new screen, it’s all a bit “Meh” … nothing shines out as a dramatic improvement.

For instance, it has a new processor. But it is only dual-core when some Android slates are getting penta-cores – usually advertised as quad-core, but many are using a processor with four high-speed cores and a single slower (and low power consumption) core.

And the rest of it looks pretty much the same as the old iPad – no memory slot for adding additional media, a proprietary dock connector rather than micro-USB so you have to make sure you have the right cable with you. And so on.

And I still find it odd that the camera pointing towards the face is of a lower quality than the camera facing out – doesn’t the front-facing camera get used more for video conferencing than the other ?

Mar 07 2012

When I discovered that yet again a certain ISP had blocked my ISP’s smarthost (grr … hotmail), I needed to come up with something in my server’s Exim configuration to automatically route mail via an alternative path. For various reasons I wanted only specific domains to be routed this way (I run this other server, and it is kind of handy to have an independent mail server that isn’t dependent on it).

This is a slightly unusual setup for Exim.

I started off with setting up a couple of authenticators so that once everything else worked, Exim could actually log in :-

# Client-side CRAM-MD5 authentication (replace USERNAME and PASSWORD)
myloginMD5:
  driver = cram_md5
  public_name = CRAM-MD5
  client_name = USERNAME
  client_secret = PASSWORD

# Client-side PLAIN authentication; the ^ characters are turned into NULs
myloginPLAIN:
  driver = plaintext
  public_name = PLAIN
  client_send = ^USERNAME^PASSWORD

At this point, you have a secret in your configuration file, so protect it! There also seems to be no obvious way to use particular authenticators with particular servers … not to say that this is impossible (it’s hard to find something to do with mail that is impossible with Exim), but I didn’t see a method to do it.
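
On a Debian-ish install, protecting the configuration file amounts to making sure only root and the Exim user can read it – the path and the group below are Debian assumptions, so adjust for your own layout :-

chown root:Debian-exim /etc/exim4/exim4.conf
chmod 640 /etc/exim4/exim4.conf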

The next step is to run through your test procedure when making changes. Mine was :-

  1. Reconfigure Exim by sending it a HUP signal.
  2. Check the paniclog file to make sure it is still running.
  3. Run through a manual submission of a mail through the SMTP interface.
  4. Check the main log file to see it worked as expected.

And if you need help running through that test procedure, this would probably be a good time to read up a good deal more about Exim – you really should not be doing this until you understand a little more. For what it is worth, my run-through looks roughly like the sketch below.
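
The paths here are Debian-style, and the test address is obviously just an example :-

kill -HUP $(cat /var/run/exim4/exim.pid)             # 1. re-read the configuration
tail /var/log/exim4/paniclog                         # 2. should show nothing new
swaks --server localhost --to someone@example.org    # 3. submit a test mail (or do it by hand with telnet)
tail /var/log/exim4/mainlog                          # 4. check it went the way you expected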

You don’t really need two authenticators here – you just need one authenticator that matches a mechanism offered by the SMTP servers you plan to authenticate to.
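
If you are not sure which mechanisms a given smarthost offers, just ask it – the EHLO response advertises them (the host name here is hypothetical) :-

openssl s_client -starttls smtp -connect smtp.example.net:587 -quiet
# ... then type "ehlo test.example.org" and look for a line such as
# "250-AUTH PLAIN LOGIN CRAM-MD5" in the reply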

The next step is to modify the SMTP driver. Search for the string “driver = smtp”, and change it to look like :-

remote_smtp:
  driver = smtp
  hosts_require_auth = LIST-OF-HOSTS
  hosts_require_tls = LIST-OF-HOSTS

What we are doing here is using the normal driver with two extra options that come into play for the list of hosts (colon separated of course) – one that requires that authentication be used, and another that requires that TLS be used.
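
So with two (hypothetical) smarthosts – the same hosts that will appear in the router’s route_list below – LIST-OF-HOSTS is just a colon-separated list :-

  hosts_require_auth = server1 : server2
  hosts_require_tls = server1 : server2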

The next step of course is to run through the test procedure again.

The final step is to create a new “smarthost” router that applies for a specified list of domains :-

smarthostplusauth:
  # Route mail for the listed domains through an authenticated SMTP server
  driver = manualroute
  domains = LIST-OF-DOMAINS
  transport = remote_smtp
  route_list = * "server1::587 : server2::587"

This of course applies only to emails that match your list of domains. If it gets used, the mail is routed through either “server1” or “server2” on port 587. I used two servers here so that Exim would happily deal with one server being unresponsive, but you might prefer to use a single server.

And of course it’s time to run through the test procedure again.
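
One extra check that is worth adding at this stage is Exim’s address testing mode, which shows which router and transport a given address would use (the address is obviously hypothetical, and on Debian the binary is called exim4) :-

exim4 -bt someone@one-of-your-listed-domains.example
# should report the smarthostplusauth router and the remote_smtp transport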

 

Feb 14 2012

This morning I caught an item about how so-called “Internet Trolls” are forcing some famous people to close down their Twitter accounts because of offensive posts in reply to anything they post. Before getting to the main point of this post, let’s get one thing cleared up to begin with.

Trolls on the Internet aren’t those who post offensive messages. Sure they’re irritating, but they are disruptive more than offensive. That’s not to say that trolls cannot also be offensive, but most are not.

This is yet another example of the media getting some clueless reporter to write up a story about “new technology” (it ain’t new any more) without checking their basic facts with someone who has half a clue – even checking with Wikipedia would quickly tell someone what the definition of an Internet Troll is (hint: that funny coloured word at the beginning of the second paragraph takes you to the definition).

Us old-timers call those who use offensive language inappropriately “offensive little gits” which probably is not cute and cuddly enough for the media to like. Perhaps we should call them goblins (it’s all in the wrong order, but Gits, Offensive, B(onus), Little, INternet, S(omething)) just to keep the media happy.

Now onto the main point … this story was quite right about the fact that there is a problem with people being deliberately offensive on the Internet, and it is not restricted to just famous people. There are plenty of examples of ordinary people facing all sorts of offensive messages (I was going to dig up an example I know of, but it’s buried too deep).

Now us old timers remember a simpler age where people posting offensive messages would be dealt with quite simply. First the offended person would complain to the organisation (often a University) “hosting” the network address used by the offensive person. Next, the person at that organisation in charge of such things would find the relevant user, and apply the clue stick as hard and as often as seemed appropriate.

Up to and including throwing goblins off the Internet. Of course we also kept an eye out for vexatious complaints – there are some people who will complain about the most ridiculous things.

This was mostly lost when the ISPs started dominating the provisioning of the Internet to most people (although it survives in a few dusty old corners) because it “costs too much” for the ISPs to police their users. But there is no reason why it couldn’t be brought back.

And with careful management it should work quite well – of course some care would have to be taken as regards political activists posting on the Internet. The aim here is not to censor genuine political criticism or discussion, but to apply the clue stick as hard and as often as necessary to the Internet goblins.