Apr 23 2010

If you get yourself one of Apple’s iThingies (an iPhone, iPad, or iPod Touch), you are officially restricted to installing software onto it from the selection in Apple’s App Store. This is hardly news, and neither is the fact that geeky types do not like it – which is why the iThingies have been “jailbroken” to allow the addition of unauthorised software.

At this point I would like to point out that I am not an Apple hater – I own an iPhone 3G and intend to upgrade to an iPhone 4G (when it comes out). I also use a Macbook Pro as my work laptop. I like Apple products. But Apple gets and deserves some criticism …

Much of the criticism of Apple’s software model for the iThingies has revolved around the continual censorship of the applications allowed into the App Store. This is fair enough, and indeed Apple has made itself a laughing stock with inconsistently applied standards, with applications rejected for breaching conditions not applied to other applications. In addition, even Apple’s published standards can become more restrictive over time, leading to situations where you can find it impossible to restore an application that you have paid for!

But despite these disadvantages, the App Store method of software distribution does on the surface offer something genuinely advantageous to the average consumer. The applications in the App Store have been verified by Apple as being appropriate for use – reducing the malware problem considerably. One of the regulations is (approximately) that applications should not be capable of interpreting code, which reduces if not eliminates the damage a compromised application can cause.

But a single source of applications is limiting and potentially dangerous. Indeed it can even be considered to be a restriction on trade as Apple is the gatekeeper (and insists on a rather large toll) for any developer who wants to develop for the iThingies. Perhaps ordinary consumers do not care about this especially when you consider that many applications have a very reasonable cost.

But it is still of some concern. The restrictions make experimentation more difficult.

But perhaps more seriously, it prevents tinkering by ordinary consumers. This can be an advantage, but it is also a significant disadvantage: the very people who developed the iThingies would have tinkered with consumer devices as children on their way to becoming developers. By restricting tinkering by children, we shrink the pool of people who go on to become the techies of the future.

The obvious counter to this is the existence of other devices that are far more open – even devices equivalent to Apple’s iThingies, such as the various Google Android devices. But if Apple’s App store model is successful enough (and it certainly seems to be heading that way), we could find ourselves with the same model being extended not only to competitors to Apple’s iThingies, but to more general purpose computing devices – netbooks, laptops, desktops, or even servers.

We could end up in a situation where the only devices you can buy are devices that can only run software sanctioned by the vendor. A dangerous possibility.

Apr 19 2010

One of the irritating things about reading or listening to people go on about CPUs or processors is how inaccurate they can be. In particular the complexity of modern processors allows for multiple “virtual processors” which many people seem to think are equivalent to each other. Not so! Some are and some are not.

In the old days you would have a socket on the motherboard of a computer into which you would fit a rectangular or square thing with lots of sharp legs on the underside (the chip) which was the processor. And yes I’m totally ignoring the period before single-chip processors when a chip might contain only a small part of a processor! One socket, one processor, one core (although you rarely if ever heard that), and one thread.

Although multi-threaded processors came before multiple cores, we will look at the latter first.

One of the disadvantages of single processor computers was that for servers, they frequently did not have enough processor power. The solution was obvious – add more sockets so you could have more than one processor – although making the solution work was very difficult. Once multiprocessor servers came into use, their cost slowly came down until they started being used at the high end of workstations, where it became obvious that a multiprocessor machine for a single user was helpful in getting work done, even though it was a rare piece of software that was written to take advantage of multiple processors.

At the same time, single core processors were becoming faster and hotter, and it slowly became obvious that the old way of making computers faster was simply not feasible over the long term. Those who looked into the future could see that if things continued as they were going, computers would rapidly become too hot to run easily. There was an almost collective decision that putting more than one processor onto a single chip was the way to make future computers “faster”, although there remains the problem of making software utilise those multiple cores properly.

Today you are most likely to encounter a multi-core chip going into that socket in your computer. This is more or less the same as the old multi-socket workstations and servers. Each “core” on a multi-core chip is roughly equivalent to an old single-core processor chip. If you have two cores inside your computer, your operating system will see (and hopefully use) each as a separate processor.

Now we come to threads, and this is where it becomes even trickier. Inside a single-core processor there are a number of different units used to run your software, and these units are often idle even while software is running. Each piece of software is made up of millions of instructions, and the processor runs a single instruction at a time. When a processor runs a single instruction, it has to go through a number of different stages, each of which uses different units. At any time during the execution of an instruction, some of the units will be idle.

A variety of different strategies were tried to utilise these idle units, but the easiest to understand was one of the more complex to implement. This was to make a single-core processor pretend to be a multi-core processor and run more than one (usually two) pieces of software in what became known as “threads”. However whilst a simplistic piece of software may identify these threads as “virtual CPUs”, they are not quite the same – a processor with two threads will be slower than a processor with two cores (and no threads).

The “problem” with threads is that when two pieces of software attempt to run on the same processor, they will each try to grab a selection of units to use. The units each one needs change over time of course, but there is still a strong possibility that the two threads will both try to grab the same unit – and one will have to stall.
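To make the contention concrete, here is a toy sketch of it in Python. The unit names and the one-unit-per-instruction model are invented purely for illustration – real processors are vastly more complicated – but it shows why two threads on one core finish later than two threads on two cores:

```python
# Toy model: an instruction stream is a list of the execution unit
# ("alu", "fpu", "load", ...) each instruction needs for one cycle.

def two_core_cycles(stream_a, stream_b):
    """Cycles when each thread has a whole core to itself."""
    return max(len(stream_a), len(stream_b))

def smt_cycles(stream_a, stream_b):
    """Cycles for two threads sharing one core's execution units."""
    a, b, cycles = 0, 0, 0
    while a < len(stream_a) or b < len(stream_b):
        cycles += 1
        want_a = stream_a[a] if a < len(stream_a) else None
        want_b = stream_b[b] if b < len(stream_b) else None
        if want_a is not None:
            a += 1                  # thread A always wins its unit
        if want_b is not None and want_b != want_a:
            b += 1                  # thread B runs unless A grabbed the same unit
        # if both wanted the same unit, thread B stalls this cycle
    return cycles

a = ["alu", "alu", "load", "alu"]
b = ["alu", "fpu", "alu", "load"]
print(two_core_cycles(a, b))   # 4 cycles on two separate cores
print(smt_cycles(a, b))        # 6 cycles sharing one core's units
```

The two streams clash over the “alu” unit in the first two cycles, so the shared-core run takes six cycles where two real cores would take four.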

In many cases this performance difference between threads and cores does not make a noticeable difference. Almost all software spends far more time waiting for things to happen (for a bit of a file to come off a disk drive, for a user to press a key, etc.) than actually doing anything. However there are some software workloads that are significantly affected by the minor performance hit of threads – sufficient that it is even possible to improve performance by turning off threads!

This is of course an overly simplistic look at the issue, but it may well be enough to convince some that threads and cores are not equivalent. A processor with 8 cores, each of which can run 4 threads, is not equivalent to a processor with 32 cores. More sophisticated operating systems may well schedule software so that unused cores are preferred over running in a second thread on a core that is already busy.
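On Linux you can see the socket/core/thread distinction for yourself in /proc/cpuinfo. The sketch below parses a sample of that format for a hypothetical chip with one socket, two cores, and two threads per core – on a real machine you would feed it the contents of the file itself:

```python
# Count sockets, cores and hardware threads from /proc/cpuinfo-style
# data. SAMPLE is a made-up 1-socket, 2-core, hyperthreaded chip; on a
# real Linux box use open("/proc/cpuinfo").read() instead.
SAMPLE = """\
processor\t: 0
physical id\t: 0
core id\t: 0

processor\t: 1
physical id\t: 0
core id\t: 1

processor\t: 2
physical id\t: 0
core id\t: 0

processor\t: 3
physical id\t: 0
core id\t: 1
"""

def count_cpus(cpuinfo):
    threads, cores, sockets = set(), set(), set()
    current = {}
    for line in cpuinfo.splitlines() + [""]:   # trailing "" flushes the last entry
        if not line.strip():                   # blank line ends one processor entry
            if current:
                threads.add(current["processor"])
                sockets.add(current["physical id"])
                cores.add((current["physical id"], current["core id"]))
            current = {}
            continue
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    return len(sockets), len(cores), len(threads)

print(count_cpus(SAMPLE))   # (1, 2, 4): 1 socket, 2 cores, 4 threads
```

Two “processors” sharing the same physical id and core id are threads on one core – exactly the virtual CPUs that a simplistic tool would miscount as four full processors.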

Apr 01 2010

… is a good clone of SMIT.

SMIT, for those who have not been exposed to AIX, is a system administration tool that allows you to perform tasks through a graphical (or text-based) interface. Just like any other tool really, but the killer feature is that once you have built up a task in SMIT, such as extending a logical volume by a certain amount, you can then ask it what the command-line equivalent is.

Now all you point and drool fans out there are probably thinking “So what?”. Well perhaps this feature is not for you, but it does allow those who work at the command line to find out what command is necessary to perform a certain task. Once you know the command to perform a task you can :-

  • Use it to set up a cron job to run the task at a particular time. No need to stay up late to perform a task in a “maintenance window” after midnight.
  • Run that command on a whole rack full of servers using a tool like pssh. Much easier than repeating the same steps on a dozen computers one by one – aren’t these computers supposed to automate tedious jobs for us ?
  • Put that command into a script to run on certain events. For instance you could monitor the available space on all your filesystems and grow a filesystem when the available space drops below a certain trigger point. You might also want to automatically order a new hard disk when the “volume group” runs out of space 🙂
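As a sketch of that last idea, here is the trigger-point decision in Python. The 10% trigger is an arbitrary example, and the actual grow command would be whatever SMIT (or its hypothetical Linux clone) reported for your platform:

```python
import shutil

# Decide whether a filesystem has dropped below a free-space trigger
# point. The grow step itself is deliberately left out - it is whatever
# command your SMIT-equivalent told you (e.g. lvextend plus a filesystem
# resize on Linux).
def needs_growing(total_bytes, free_bytes, trigger=0.10):
    """True when free space falls below `trigger` (a fraction of total)."""
    return free_bytes < total_bytes * trigger

def check(path, trigger=0.10):
    """Apply the same test to a live filesystem."""
    usage = shutil.disk_usage(path)
    return needs_growing(usage.total, usage.free, trigger)

# A 100GB filesystem with 5GB free is below a 10% trigger:
print(needs_growing(100 * 2**30, 5 * 2**30))    # True
print(needs_growing(100 * 2**30, 20 * 2**30))   # False
```

A cron job calling `check("/home")` and running the grow command when it returns True is the whole monitor.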

Linux is currently undergoing a process of fragmentation where different distributions are operated in different ways, much like the fragmentation the old commercial Unix variants went through in the 1990s. A good clone of SMIT would go a long way towards allowing different distributions to go their own way in system administration commands, whilst still allowing system administrators to use a standard tool to manage any Linux distribution.

Apr 01 2010

Apologies to those arriving here looking for information relating to U***tu and the use of this ExpressCard SSD. There is nothing relating to it here – Google has taken you on a wrong turn.

So after a false start with the wrong product I end up with a Wintec Filemate SolidGo 48GB ExpressCard 34 Ultra SSD (which is specifically a PCI-based ExpressCard rather than one of the USB-based ones, which tend to be a lot slower). The specs on this thing claim 115MB/s read and 65MB/s write, which compares to my hard disk with tested scores of 80MB/s read and 78MB/s write – so a lot quicker for reads and marginally slower for writes.

How does this translate into how quickly the Macbook operates ?

Well, after quickly duplicating my “OSXBOOT” partition onto the new “disk” (“SSDBOOT”) using Carbon Copy Cloner, I can run a few benchmarks :-

Test              Result for SSD   Result for Spinning Metal
Menu -> Login     31s              27s
Word startup      5s               16s
du of MacPorts    34s              109s

Well, apart from the slightly surprising result that the time taken to get from the rEFIt menu to the login screen is actually quicker for the spinning metal disk, the SSD is approximately 3.2 times quicker on the remaining tests! Certainly a worthwhile performance boost … and presumably a suitably chosen SATA SSD would be quicker again.

Mar 23 2010

I am in two minds about the need for multitasking on the iPhone. I can see that it would be useful for applications such as the music streamers from LastFM or Spotify (personally I prefer LastFM), but having multiple GUI programs running on a machine as small (in terms of hardware resources) as the iPhone could be problematic.

It could also make the iPhone less stable.

But there is a demand for running lightweight background tasks in a way that carries only a small risk of interfering with the currently running GUI application.

It would be easy to allow too – just allow the iPhone application to fork a helper daemon with some means of controlling it. After all, under that pretty skin the iPhone is just a computer running OS X, as anyone who has jailbroken one has probably found out.
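Under any Unix the fork-a-helper model is only a few lines. Here is a minimal sketch in Python – the language and the pipe used as the “means of controlling it” are my choices for illustration, not anything Apple provides, and a real daemon would also detach itself with setsid and friends:

```python
import os

# Minimal sketch of the fork-a-helper model: the "application" forks a
# background helper and keeps a pipe to it as the control channel.
def spawn_helper():
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:                              # child: the helper daemon
        os.close(read_fd)
        os.write(write_fd, b"streaming...")   # pretend to do background work
        os._exit(0)
    os.close(write_fd)                        # parent: the GUI application
    return pid, read_fd

pid, fd = spawn_helper()
message = os.read(fd, 1024)                   # hear back from the helper
os.close(fd)
os.waitpid(pid, 0)                            # reap the helper when done
print(message.decode())
```

The GUI application carries on responding to the user while the helper streams away in the background, which is exactly the lightweight-background-task case made above.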