Nov 17 2011
 

I have an Android phone that automatically uploads photos to Google; you have an iPhone that automatically uploads photos to Apple’s iCloud service. We both want to send photos to a Facebook gallery for some friends.

To solve this problem, we either have to copy photos manually from Google to Facebook, or make use of some special application to do the work for us. But isn’t this the wrong solution to the problem?

If the different proprietary clouds used an open standard for uploading photos, it would be possible to automatically upload to Google from an iPhone, upload to Apple’s iCloud from an Android phone, or … to some new competitor. Or even, for those of us who prefer to do our own thing, to our own servers.

As someone who mixes and matches things, I have “islands of data” in different clouds – some photos are uploaded to Facebook (when I can be bothered), some are in Googleland, and some (the ones I regard as the better ones) are uploaded to my own server. And that is just photos; there are also contacts, notes, documents, drawings, etc. None of this can be easily moved from one island to another – sure, I could move it manually, but why would I want to do that? Computers, after all, are supposed to be good at automation.

This is all down to the convenience of the cloud providers of course – Google makes it easy to use their services and hard to use others because it’s in their interests to do so, and Apple is similarly inclined to keep you imprisoned in their “perfumed prison”. And so on.

But it is all our data, and they should make it easy to move it around. Not only would this be useful for us; less obviously, it would actually benefit the cloud providers too. After all, if I find it tricky moving from one online photo gallery “cloud” to another, I’m less inclined to do so.

Making it easier to move cloud data from one provider to another not only means it is easier for a customer to “escape” one proprietary cloud, but it is also easier for a customer of another cloud to move in. And it would not necessarily be that difficult to do – just produce a standardised API that works across multiple different cloud providers, and let the application developers loose.
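To make that concrete, here is a purely hypothetical sketch of what such a standard could look like from a script’s point of view – the provider hostnames, the /v1/photos path, the form field names and the TOKEN variable are all invented for illustration; the point is simply that the same request would work against any provider :-

# Hypothetical: the same upload request pointed at three different providers.
# None of these endpoints exists – they stand in for a common, open photo API.
for provider in photos.google.example icloud.apple.example graph.facebook.example; do
    curl -X POST "https://${provider}/v1/photos" \
        -H "Authorization: Bearer ${TOKEN}" \
        -F "image=@holiday.jpg" \
        -F "album=Friends"
done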

To a certain extent this is possible right now – for example, Facebook has an API and Twitter has an API, and it is possible to write code that sends status updates to both places. But the equivalent for updating a Google Plus status does not seem to be available, and a single tool that combines status updates just isn’t there as yet – I have a simple script which sits on top of two other tools (and very nicely pops up a window or a text input box, or takes the status on the command line). With a standardised API, that sort of code would be much easier to write.
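For what it’s worth, the overall shape of such a script is roughly this – a minimal sketch only, where post-to-twitter and post-to-facebook are hypothetical stand-ins for whatever real command-line tools or API wrappers you have available, and zenity is used for the pop-up only if it happens to be installed :-

#!/bin/sh
# Minimal sketch of a combined status updater.
# "post-to-twitter" and "post-to-facebook" are hypothetical wrappers around
# whatever real tools you use; replace them with your own.
if [ $# -gt 0 ]; then
    status="$*"                                    # status given on the command line
elif [ -n "$DISPLAY" ] && command -v zenity >/dev/null 2>&1; then
    status=$(zenity --entry --title "Status" --text "What are you up to?")
else
    printf 'Status: '                              # fall back to a plain prompt
    read -r status
fi
[ -z "$status" ] && exit 1                         # nothing to post, nothing to do
post-to-twitter "$status"
post-to-facebook "$status"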

 

Oct 31 2011
 

According to a couple of articles on The Register, some manufacturers are getting close to releasing ARM-based servers. The interesting thing is that the latest announcement includes details of a 64-bit version of the ARM processor, which according to some people is a precondition for using ARM in a server.

It is not really true of course, although a 64-bit ARM will make a small number of tasks possible. It is easy to forget that 32-bit servers (of which there are still quite a few older ones around) did a pretty reasonable job whilst they were in service – there is very little that a 64-bit server can do that a 32-bit server cannot. As a concrete example, a rather elderly SPARC-based server I have access to has 8Gbytes of memory and runs a 64-bit version of Solaris (it’s hard to find a 32-bit version of Solaris for SPARC), yet of the 170 processes it is running, not one occupies more than 256Mbytes of memory.
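For the curious, something along these lines is how to check – a sketch, as the output columns vary a little between systems (on Solaris, prstat -s size gives a similar live view) :-

# List processes by virtual size, largest first (sizes in Kbytes)
ps -eo vsz,rss,args | sort -rn | head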

The more important development is the introduction of virtualisation support.

The thing is that people – especially those used to the x86 world – tend to over-emphasise the importance of 64-bits. It is important because some applications do require more than 4Gbytes of memory – in particular applications such as large Oracle (or other DBMS) installations. But the overwhelming majority of applications actually suffer a performance penalty if re-compiled to run as 64-bit applications.

The simple fact is that if an application is perfectly happy to run as a 32-bit application with a “limited” memory space and smaller pointers and integers, it can run faster because there is less data flying around. And indeed, as pointed out in the comments section of the article above, running 64-bit can also use ever so slightly more electricity.
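A trivial way to see the “less data” point is to compile the same program both ways and compare pointer sizes – a sketch, assuming a gcc toolchain with 32-bit (multilib) support installed :-

# The same source built 32-bit and 64-bit reports different pointer sizes
cat > ptrsize.c <<'EOF'
#include <stdio.h>
int main(void) { printf("%zu\n", sizeof(void *)); return 0; }
EOF
gcc -m32 ptrsize.c -o ptrsize32 && ./ptrsize32   # prints 4
gcc -m64 ptrsize.c -o ptrsize64 && ./ptrsize64   # prints 8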

What is overlooked amongst those whose thinking is dominated by the x86 world is that the x86-64 architecture offers two benefits over the old x86 world – a 64-bit architecture and an improved architecture with many more CPU registers. This allows 64-bit applications in the x86 world to perform better than their 32-bit counterparts even if the applications wouldn’t normally benefit from running on a 64-bit architecture.

If the people producing operating systems for the new ARM-based servers have any sense, they will quietly create a 64-bit operating system that can transparently run many applications in 32-bit mode. This is not exactly a new thing – it is what Solaris has done on 64-bit SPARC-based machines for a decade. It allows those applications that don’t require 64-bit to gain the performance benefit of running 32-bit, whilst allowing those applications that do require 64-bit to run perfectly well.
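On Solaris you can see that mix directly – a sketch, and the exact paths vary between SPARC and x86 boxes :-

isainfo -v                      # lists the 64-bit and 32-bit instruction sets the kernel supports
isainfo -b                      # word size of the running kernel
file /usr/bin/sparcv9/* | head  # the handful of 64-bit binaries, sitting alongside the 32-bit majority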

There is no real downside to running a dual word-size operating system, except a minor amount of added complexity for those developers working at the C-language level.

Oct 20 2011
 

So there I was, installing a Linux distribution on my new laptop. Got to the end of the installation when it refused to install grub in the master boot record. Opted to try another partition, and rebooted. At which point the infamous error “Error: the symbol ‘grub_xputs’ not found” was shown with a “grub rescue” prompt.

At which point I had a laptop that wouldn’t boot of course.

To cut a long story short, because it’s only the fix I’m interested in recording for posterity, I sorted this out by booting off an emergency USB stick (unetbootin is a good tool for writing one … if you have a working system). Once booted, I set up an environment in which chroot would work properly. This is basically where you start a shell whose root directory is a directory under the normal root directory, which allows commands to be run almost as if the non-bootable system were booted :-

mount /dev/sda5 /mnt # Mount the root filesystem of the unbootable system under /mnt
mount /dev/sda1 /mnt/boot # And the /boot filesystem
mount -o bind /proc /mnt/proc # Make the running kernel's /proc visible inside the chroot
mount -o bind /dev /mnt/dev # ... along with the device nodes
mount -o bind /sys /mnt/sys # ... and /sys
chroot /mnt # Start a shell rooted at /mnt

Once that is done, there are quite a few things that can be done to repair a broken system, but I just needed to re-install grub to the MBR of /dev/sda :-

grub-install /dev/sda

Once that was done, everything booted fine.

Of course all that comes with the experience of a lot of time spent with Linux. Those who have not been using it since the 1990s will not be as lucky, but there are a few key points here :-

  1. Don’t panic. Just because it won’t boot doesn’t mean everything is lost.
  2. Write down the error message exactly as it appears on screen. A small mistake here can make searching for the error almost impossible.
  3. Get a rescue USB stick. Ideally before you break a system, but afterwards is usually possible even if you don’t have another working system – you have friends, or there are ways to write a USB stick at work (see the sketch after this list).
  4. Search the Internet for the problem. You may have to spend quite a while reading other people’s problems that may or may not relate to your problem. You may have to improve your search methodology. Putting the error message in quotes is usually a good method.
  5. And if you find a solution to your problem online, check the date of the solution. Something that worked 5 years ago may not be the best solution today. And that applies to this page just as much as any other.
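On the rescue stick point, if unetbootin isn’t to hand then plain dd will usually do the job – a sketch, assuming the downloaded image is called rescue.iso and the stick appears as /dev/sdX (double-check that device name, because dd will cheerfully overwrite the wrong disk) :-

dmesg | tail                        # identify which device name the stick was given
dd if=rescue.iso of=/dev/sdX bs=4M  # write the image to the whole stick, not to a partition
sync                                # make sure everything is flushed before pulling the stick out
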
Oh! And to those who would jump up and down screaming that this wouldn’t happen with Windows or OSX, please grow up. Such problems occur with any operating system – and I’ve seen them.

Oct 06 2011
 

Today was the day we learned that Steve Jobs died. This is of course massive news within the technology industry as Steve Jobs has been such an important player in the industry since the beginning of the personal computer revolution (long before the iPod and all the other iThingies). As with everyone who dies, my sympathy goes out to anyone who knew him.

The reaction has been … interesting. Amongst the other compliments, he has been called a great innovator, which to those who observe the industry closely seems a touch inaccurate. There are plenty of things that Steve Jobs was – he was a great businessman who not only built up Apple in the first place, but returned to rescue it from obscurity (and quite possibly saved it).

He had the ability to take innovations and introduce them to the mass market – he could somehow lead his engineers into producing usable mass-market products. But, without meaning to criticise, he was not as much of an innovator as he is sometimes made out to be.

Looking through the history of the products he brought to the mass-market …

Apple I & Apple II

Neither of these was truly original. The Apple I was one of the first personal computers available fully assembled, but it was not the first. The basic concept of the personal computer released as a product can be traced back to the IBM 5100 (1975) or the HP 9830 (1972). These may have been a lot more expensive, but they were probably more successful than the Apple I, which sold only about 200 units.

The Apple II was a good deal more successful – probably the closest thing to a dominant personal computer before the original IBM PC took off – but it was no more truly original. Amongst the hordes of similar personal computers around at the time there was, for instance, the broadly similar Commodore PET (which was admittedly somewhat less expandable).

And the least said about the Apple III, the better!

The Macintosh

Most people assume that the Macintosh was the first computer with a graphical user interface, but it was not even the first from Apple themselves! They brought out the somewhat less successful (and very expensive) Lisa first. The first GUI computer was the Xerox Alto, first built in 1973 – before Apple even existed! Admittedly this was never a commercial product, but Xerox did eventually launch a commercial workstation based on this early experiment – the Xerox Star, in 1981. That is still three years before the Macintosh.

The Macintosh did however bring the graphical user interface to a mass audience even if the first Macintosh computers were more than a little constrained by lack of memory (128Kbytes anyone?).

The iPod

After a few successful years with the Macintosh (and having ditched Steve Jobs in 1985), Apple started to go downhill – until Steve Jobs returned and helped to turn the company around with the launch of Macintoshes that were better designed in terms of styling. He also did something interesting on his return – although he was probably right to kill it off, he ended the Newton product line, which although not really recognised as such at the time was actually Apple’s first slate computer (it was marketed as a PDA, but with a much bigger screen than most PDAs).

But the next big thing was the launch of the music player that nearly everyone has tried at one time or another – the iPod. Again, to disappoint the reflexive Apple fans, this was not a massive innovation from Apple – there were portable digital music players launched before it, such as the player (with a somewhat limited capacity of 3.5 minutes!) envisaged by Kane Kramer way back in 1979 (and patented in the UK in 1981). Apple even hired him when they were facing patent litigation over the iPod.

Altogether there were five different music players launched in the market before Apple took a hand. But of course Apple made it easy enough for the man in the street to use.

The iPhone

The iPhone was an interesting product – a “smartphone” (it might have been more accurate to call it a featurephone) that on a pure feature comparison was weaker than the competition in almost every way – a less capable data network (no 3G), and many hardware features missing that were present on other smartphones (GPS, proper Bluetooth support, a slot for memory expansion, etc.). It couldn’t even load additional apps – Steve Jobs tried telling everyone that apps should be on the Internet and not installed on the phone!

It did do two things better than the competition though. Firstly, the CPU was powerful enough to run a smartphone properly – the pre-iPhone smartphones I used were positively anaemic in performance due to weak CPUs. Secondly, the iPhone made using a smartphone simple. And that was the real reason the iPhone took off – anyone could use it.

The iPad

And yet again Steve Jobs did it – he took a product that was pretty much universally unpopular, or at most popular only in certain vertical markets, and pushed it out to the mass market in a way that everyone could enjoy. Again, very little in the way of innovation, but a great product (with some odd weaknesses until the iPad 2).

Oct 04 2011
 

So it has been announced at last: the iPhone 4S, which is more or less an iPhone 4 with some fiddling – a faster processor, an improved antenna, and a software update that gives it a feature Android has had for a while, namely voice control.

Undoubtedly it will all be done in a very slick way – that is the Apple way – but is it enough?

Well, it all depends on what you mean by “enough”. It will undoubtedly sell to the Apple fans who worship anything Apple produces whatever the merits, but will it sell enough to keep Apple’s current level of influence in the mobile smartphone sector? After all, Steve Jobs has now gone, and everyone is wondering how the new Apple will maintain its leadership in the smartphone and slate market.

Well, the iPhone 4S is nice, but so is my iPhone 4. It is hardly a major improvement though – yes it’s faster; probably a lot faster. And the antenna improvement will please those who managed to tickle the antenna problem on the iPhone 4 (I could only do so by going through ridiculous contortions).

It’s a perfectly reasonable mid-life facelift, but it’s a touch late for a mid-life facelift, although admittedly a bit early for a whole new phone. Oh! Sure, Apple will claim that the internals are completely different, but it’s still an improved iPhone 4 rather than an iPhone 5. Perhaps unreasonably, Apple’s problem here is that the iPhone 4S looks a little boring, and in a post-Jobs era they need to convince people that they are still able to release exciting products. This isn’t it.

The big problem I see from my personal perspective is that there is no option for an iPhone with a big screen (and no I don’t mean an iPad!). If you look at the oodles of choice you can find in the Android phone market, you will find examples of premium smartphones with larger screens than the iPhone. Such as the Samsung Galaxy S II with a 4.3″ screen, and that is not even the largest smartphone screen you can find (although it may well be the best).

Sure not everyone wants a large screen on their smartphone, but I do and Android gives me that choice. And plenty of other choices – 3D screens, physical keyboards, etc. And no being chained up in Apple’s walled garden!

So yes, sorry Apple but it’s a bit of a yawn event. Try again with a proper iPhone 5 with a large (for a smartphone) screen.