Jun 27 2020

So Apple has announced that it is replacing Intel processors with ARM processors in its Mac machines. And as a result we’re going to be endlessly plagued with awful puns until we get bored of the discussion. Sorry about that!

This is hardly unexpected – Apple has been using ARM-based processors in its iThingies for years now, and this is not the first time it has changed processor architecture for the Mac. Apple started with the Motorola 68000, switched to the Motorola/IBM PowerPC architecture, and then switched again to Intel processors.

So Apple has a history of changing processor architectures and knows how to do it. We remember the problems, but it is actually quite an accomplishment to take a macOS binary compiled for the PowerPC architecture and run it on an Intel processor – analogous to taking a monolingual Spanish speaker, handing them a smartphone-based translator, and dropping them into an English city.

So running Intel-binary macOS applications on an ARM-based system will usually work. There will be corner cases that do not, of course, but these are likely to be relatively rare.

But what about performance? On a theoretical level, emulating a different processor architecture is always going to be slower, but in practice you probably won’t notice.

First of all, most macOS applications consist of a relatively small wrapper around Apple-provided libraries of code (although that “wrapper” is the important bit). For example, the user interface of any application is going to be mostly Apple code provided by the base operating system – so the user interface is going to feel as snappy as any native ARM macOS application.

Secondly, Apple knows that the performance of macOS applications originally compiled for Intel is important and has Rosetta 2 to “translate” applications into instructions for the ARM processors. This will probably work better than the doom-sayers expect, but it will never be as fast as natively compiled code.
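As a toy illustration of why translating once beats decoding forever – and this is just a sketch in Python with a made-up one-instruction “foreign” ISA, nothing to do with how Rosetta 2 actually works internally – compare interpreting every instruction each time it runs with translating the program up front and then running the result :-

import time

# A made-up "foreign" program: 1,000 copies of a single instruction.
program = [("add", "r0", 1)] * 1000

def interpret(program):
    # Emulation: decode every instruction, every single time it executes.
    regs = {"r0": 0}
    for op, reg, val in program:
        if op == "add":
            regs[reg] += val
    return regs["r0"]

def translate(program):
    # Translation: decode once, generate equivalent "native" (here,
    # Python) code, and return it as a directly callable function.
    lines = ["def run():", "    r0 = 0"]
    for op, reg, val in program:
        if op == "add":
            lines.append(f"    {reg} += {val}")
    lines.append("    return r0")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["run"]

t = time.perf_counter()
for _ in range(1000):
    interpret(program)
print(f"interpreted: {time.perf_counter() - t:.3f}s")

run = translate(program)   # pay the translation cost once...
t = time.perf_counter()
for _ in range(1000):
    run()                  # ...then every subsequent execution is cheap
print(f"translated:  {time.perf_counter() - t:.3f}s")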

But it will be good enough, especially as most major applications will be ported to run natively on ARM relatively quickly.

But there is another aspect of performance – are the ARM processors fast enough compared with the Intel processors? Well, the world’s fastest supercomputer (Fugaku) runs on ARM processors, although Intel fanboys will quite rightly point out that a supercomputer is a special case and that a single Intel core will outperform a single ARM core.

Except that, with the exception of games and specialised applications that have not been optimised for parallel processing, more cores beat a faster single core.
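You can see that effect on any multi-core machine. Here is a crude sketch in Python (the workload and chunk sizes are made up) that runs the same CPU-bound job serially and then spread across every core :-

import time
from multiprocessing import Pool, cpu_count

def busy(n):
    # Deliberately CPU-bound: a dumb sum of squares.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 8   # eight equal pieces of work

    t = time.perf_counter()
    serial = [busy(n) for n in chunks]
    print(f"1 core:  {time.perf_counter() - t:.2f}s")

    t = time.perf_counter()
    with Pool(cpu_count()) as pool:
        parallel = pool.map(busy, chunks)
    print(f"{cpu_count()} cores: {time.perf_counter() - t:.2f}s")

    assert serial == parallel   # same answers, different wall-clock time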

And a single ARM core will beat a single Intel core if the latter is thermally throttled. And thermals have been holding back the performance of Apple laptops for quite a while now.

Lastly, Apple knows that ARM processors are slower than Intel processors in single-core performance and is likely pushing ARM and itself to solve this. It isn’t rocket science (if anything it’s thermals), and both have likely been working on this problem in the background for a while.

Most of us don’t really need ultimate processor speed; for most tasks merely the appearance of speed is sufficient – web pages loading snappily, videos playing silkily, etc.

Ultimately, if you happen to run some heavy-processing application (you will know if you do) whose performance is critical to your work, benchmark it. And if the ARM-based performance isn’t all that good to start with, keep benchmarking it as Rosetta 2 and the native ports improve.
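Something as simple as the following sketch will do – the command here is a stand-in, so substitute whatever heavy-processing job actually matters to you :-

import subprocess
import time

# Placeholder command: replace with the command line of your own heavy job.
COMMAND = ["sleep", "1"]
RUNS = 5

timings = []
for _ in range(RUNS):
    start = time.perf_counter()
    subprocess.run(COMMAND, check=True, capture_output=True)
    timings.append(time.perf_counter() - start)

print(f"best of {RUNS} runs: {min(timings):.2f}s")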

And most such heavy-processing tasks can be performed fine with a relatively modest modern processor, and/or can be accelerated with specialised “co-processors”. For example, Apple’s Mac Pro has an optional accelerator card that offloads video decoding and makes it much faster than it would otherwise be.
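That card needs applications written for it, but the same offloading principle is visible on any recent Mac by comparing a software video encoder against the hardware (VideoToolbox) one. A sketch, assuming ffmpeg is installed with VideoToolbox support and that “input.mov” is a clip of your own :-

import subprocess
import time

def encode(codec, outfile):
    # Encode "input.mov" (a placeholder clip) with the given codec.
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mov", "-c:v", codec, outfile],
        check=True, capture_output=True,
    )
    return time.perf_counter() - start

print(f"libx264 (software):           {encode('libx264', 'sw.mp4'):.1f}s")
print(f"h264_videotoolbox (hardware): {encode('h264_videotoolbox', 'hw.mp4'):.1f}s")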

Apple has a “slide” implying that their “Apple silicon” processors will contain not just the ordinary processor cores but also specialised accelerators to improve performance.

Apr 01 2010

Apologies to those arriving here looking for information relating to U***tu and the use of this ExpressCard SSD. There is nothing relating to it here – Google has taken you on a wrong turn.

So after a false start with the wrong product, I end up with a Wintec Filemate SolidGo 48GB ExpressCard 34 Ultra SSD (which is specifically a PCIe-based ExpressCard rather than a USB-based one – the latter tend to be a lot slower). The specs on this thing claim 115MB/s read and 65MB/s write, which compares to my hard disk with tested scores of 80MB/s read and 78MB/s write – so a lot quicker for reads and marginally slower for writes.
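For anyone wanting to reproduce that sort of measurement, a crude sequential-throughput test looks something like this – a Python sketch, where the path is a placeholder for wherever the disk under test is mounted (and note the read figure can be inflated by the operating system’s cache) :-

import os
import time

PATH = "/Volumes/SSDBOOT/throughput.tmp"   # placeholder path
SIZE_MB = 512
block = os.urandom(1024 * 1024)            # 1MB of random data

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())    # make sure the data really hit the disk
print(f"write: {SIZE_MB / (time.perf_counter() - start):.0f} MB/s")

start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
print(f"read:  {SIZE_MB / (time.perf_counter() - start):.0f} MB/s")

os.remove(PATH)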

How does this translate into how quickly the MacBook operates?

Well, after quickly duplicating my “OSXBOOT” partition onto the new “disk” (“SSDBOOT”) using Carbon Copy Cloner, I can run a few benchmarks :-

Test             Result for SSD    Result for Spinning Metal
Menu -> Login    31s               27s
Word startup     5s                16s
du of MacPorts   34s               109s

Well, apart from the slightly surprising result that the time taken to get from the rEFIt menu to the login screen is actually quicker for the spinning metal disk, the SSD is approximately 3.2 times quicker! Certainly a worthwhile performance boost … and presumably a suitably chosen SATA SSD would be quicker again.

Jan 07 2007

I recently replaced an elderly SGI Octane2 workstation, which had 2 CPUs (400MHz MIPS-based), 1.5Gbytes of memory, and 3 elderly SCSI disks, with a nice new Sun Ultra 40 … 2 AMD Opteron 248s, 2Gbytes of memory, and 2 mirrored SATA drives. It is interesting to compare the difference between an old-fashioned workstation originally designed in the middle-to-late 1990s and a 21st-century PC. Not that I’m going to produce hard numbers from useful benchmarks … that is just too much work, and in some ways it is the feel of the differences that is important.

Of course this is not really a fair comparison. Whilst the SGI Octane is now very elderly and, due to SGI managerial incompetence, has not kept pace with PC performance as it should have done, it is after all a machine that originally cost 10-20 times as much as the PC I am comparing it to. In car terms, I’m comparing a 20-year-old Mercedes with a new and cheap Ford. I should point out that much of the software I am using is very much the same on both machines … the Enlightenment window manager, Sylpheed Claws as the mail client, Firefox as the browser, LyX as the word processor, and a text terminal for much of the remainder.

The PC is considerably quicker than the SGI of course. The graphical user interface is a good deal snappier, and most of the applications offer very welcome improvements in performance. With the exception of GIMP, however, none of this performance increase is really essential; my old SGI ran pretty much everything my PC does, fast enough to get the job done. GIMP performance is the reason I upgraded, and here the difference is quite dramatic … filters that previously required patience now run almost instantly; when you are repeatedly trying things out in GIMP on quite large images, this performance increase makes some things feasible that simply were not before.

There is one area where the SGI does offer some advantage over the PC; something I was expecting. The PC’s disks are overall somewhat faster than the disks in the SGI (and of course I don’t have to pay extra to mirror my disks!), but the SGI tends to work more smoothly under high load. I’ve noticed before with ‘low end’ disks in PCs that if you start to drive them very hard, the computer will sometimes stutter. Essentially the SGI was slower, but smoother under high disk load than the PC.

If it was not for the need to run GIMP extensively (and the appeal of more standard add-on hardware like USB hard disks), there is no reason why I could not have continued with the SGI. The tendency we have in the computing arena of replacing computers every few years is not a healthy one.