Jul 02 2011

One of the many obsessions going around the IT industry at the moment is the possibility of low-energy ARM-based servers. ARM-based processors are currently very popular in the smartphone and slate markets because they consume much less power than Intel-based processors. What is less commonly realised is that ARM-based processors have also long been used in general purpose desktop computers.

ARM processors were originally designed by a British home computer company called Acorn as a replacement for the 6502 processor in their immensely successful BBC Micro. The machines built around them were collectively known as the Acorn Archimedes, and were probably the most powerful home computers of their era, before the crash of the home computer market and the eventual dominance of the IBM PC compatibles.

And of course a general purpose computer running a well-designed operating system is just a short step away from being a capable server.

So it is certainly possible for someone to release a server based around the ARM processor, and for it to be useful as a server. Whether it is successful enough to carve itself a respectable niche in the server market as a whole is pretty much down to the vagaries of the market.

Some of the criticisms I have seen of the possibility of ARM servers:

But ARM Cores Are Just So Slow

Actually they’re not. Sure, they are slower than the big-ticket Xeons from Intel, but they are quite possibly fast enough. Except for specialist jobs, modern servers are rarely starved of CPU; in fact that is one of the reasons why virtualisation is so popular – it lets us make use of all that wasted CPU resource. Modern servers are more typically constrained (especially when running many virtual servers) by I/O and memory.

And the smaller size of the ARM core allows for a much larger number of cores than in x86-based servers. For most modern server loads (with virtual machines), many slower cores are just as good as fewer but faster ones.
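That trade-off can be sketched with some back-of-the-envelope arithmetic. The core counts and per-core speeds below are invented purely for illustration, not benchmarks:

```python
def aggregate_throughput(cores: int, per_core_rate: float) -> float:
    """Rough throughput for an embarrassingly parallel load, where
    independent requests or VMs can be spread across all cores."""
    return cores * per_core_rate

# Hypothetical figures: a few fast x86 cores vs many slower ARM cores.
fast_x86 = aggregate_throughput(cores=4, per_core_rate=1.0)   # 4.0 units
many_arm = aggregate_throughput(cores=16, per_core_rate=0.3)  # 4.8 units
print(many_arm > fast_x86)  # True: the many-core design wins on aggregate
```

The caveat, of course, is that a single-threaded job only ever sees the per-core rate, which is why this argument applies to virtualised and many-request server loads rather than to every workload.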

In the case of I/O, the ARM processor is just as capable as an Intel processor, because it isn’t the processor that implements the links to the outside world (that is a bit simplistic, but correct in this context). In the case of memory, ARM has an apparent problem in that it is currently a 32-bit architecture, which means a single process can only address up to 4Gbytes of memory.

Now that does not mean an ARM server is limited to 4Gbytes of memory … the memory capacity of an ARM server is determined by the capabilities of the memory management unit. I am not aware of any ARM MMUs with greater than 32-bit addressing capability, but one could relatively easily be added to an ARM core.
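The distinction here, between the virtual address space each process sees and the physical memory the machine as a whole can hold, is just a matter of address widths. As an illustrative figure, ARM’s Large Physical Address Extension (LPAE) widens physical addresses to 40 bits while processes keep 32-bit virtual addresses:

```python
def addressable_bytes(address_bits: int) -> int:
    """Bytes reachable with a flat address of the given width."""
    return 2 ** address_bits

GIB = 2 ** 30

per_process = addressable_bytes(32) // GIB    # 32-bit virtual addresses
whole_machine = addressable_bytes(40) // GIB  # 40-bit physical addresses
print(per_process)    # 4 GiB per process
print(whole_machine)  # 1024 GiB of physical memory the MMU could map
```

Each individual process is still capped at 4 GiB, but the operating system can place different processes (or different virtual machines) in different parts of a much larger physical memory.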

Of course that is not quite as good as a 64-bit ARM core, but that is coming. And except for certain server applications, 64-bit is overrated outside of the x86 world – Solaris on SPARC is still delivered with many binaries being 32-bit, because changing to 64-bit does not give any significant advantages.
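As an aside, it is easy to check whether the environment you are running in is itself a 32-bit or 64-bit build; here is one way using only the Python standard library:

```python
import struct

# The size of a C pointer ("P") in the running build reveals its bitness:
# 4 bytes on a 32-bit build, 8 bytes on a 64-bit one.
pointer_bits = 8 * struct.calcsize("P")
print(f"this is a {pointer_bits}-bit build")
```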

But It Is Incompatible With x86 Software

Yes. And?

This is a clear indication that someone has not been around long enough to remember earlier server landscapes, when servers were based on VAX, Alpha, SPARC, Power, Itanium, and other processor architectures. The key point to remember is that servers are not desktops; they usually run very different software, whether the server is running Windows, Linux, or some variety of Unix.

There are server applications where x86 binary compatibility is required – usually applications provided by incompetent third party vendors. But most jobs that servers do are done by the included software, although in the case of Linux and Unix the range of “included” software is rather wider than with Windows. Indeed, for every third party application that requires an x86 processor, there are probably at least half a dozen other server jobs that do not require x86 servers – DNS, DHCP, directory services, file servers, print servers, etc.

If you buy an ARM-based server, it will come with an operating system capable of running many server tasks, and it can be used to offload work from more expensive x86 hardware (either in terms of upfront cost, or in terms of ongoing power costs). Or indeed, it will be sufficient to provision thin clients to the point where they can use the cloud.