Jan 04 2018
 

Well, there’s another big and bad security vulnerability; actually there are three. These are known as Meltdown and Spectre (Spectre itself comes in two variants). There are all sorts of bits of information and misinformation out there at the moment, and this posting will be no different.

In short, nobody but those involved in the vulnerability research, or in implementing work-arounds within the well-known operating systems, really knows these vulnerabilities well enough to say anything about them with complete accuracy.

The problem is that both vulnerabilities are exceptionally technical and require detailed knowledge of technicalities that most people, even people who work in the IT industry, are not familiar with.

Having said that I’m not likely to be 100% accurate, let’s dive in …

What Is Vulnerable?

For Meltdown, every modern Intel processor is vulnerable; in fact the only Intel processors that are not vulnerable are ones you are likely to encounter only in retro-computing. Processors from AMD and ARM are probably not vulnerable, although it is possible to configure at least one AMD processor in such a way that it becomes vulnerable.

It appears that more processors are likely to be vulnerable to the Spectre vulnerabilities. Exactly what is vulnerable is a bit of work to assess, and people are concentrating on the Meltdown vulnerability as it is more serious (although Spectre is itself serious enough to qualify for a catchy code name).

What Is The Fix?

Replace the processor. But wait until fixed ones have been produced.

However there is a work-around for the Meltdown vulnerability, which is an operating system patch (to fix the operating system) and a firmware patch (to fix the UEFI environment). All of the patches “fix” the problem by removing kernel memory from the user memory map, which stops user processes exploiting Meltdown to read kernel memory.

Unfortunately there is a performance hit with this fix; every time you call the operating system (actually the kernel) to perform something, the memory map has to be switched to include the kernel mappings, and switched back to the old map when the call returns.

This “costs” between 5% and 30% when performing system calls. Very modern processors will tend towards the 5% end of that range, and older processors towards the 30% end.

Having said that, this only happens when calling the operating system kernel, and many applications may very well make relatively few kernel calls, in which case the performance hit will be barely noticeable. Nobody is entirely sure what the performance hit will be for real world use, but the best guesses say that most desktop applications will be fine with occasional exceptions (and the web browser is likely to be one); the big performance hit will be on servers.
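
To get a feel for where that cost actually lands, the rough sketch below (assuming Linux and a C compiler; syscall(SYS_getpid) is used so the call genuinely enters the kernel rather than being answered from a cached value) times a large number of trivial system calls. Running it before and after the patch gives a crude measure of the extra per-call overhead.

#include <stdio.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const long iterations = 1000000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iterations; i++)
        syscall(SYS_getpid);           /* one cheap round trip into the kernel */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9
                      + (end.tv_nsec - start.tv_nsec);
    printf("average system call: %.0f ns\n", elapsed_ns / iterations);
    return 0;
}

Each iteration is exactly the kernel entry and exit path that the work-around makes more expensive, which is why syscall-heavy workloads (databases, busy web servers and the like) feel it more than a typical desktop application.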

How Serious Are They?

Meltdown is very serious not only because it allows a user process to read privileged data, but because it allows an attacker to effectively remove a standard attack mitigation which makes many older-style attacks impracticable. Essentially it makes older-style attacks practicable again.

Although Spectre is still serious, it may be less so than Meltdown because an attacker needs to be able to control some data that the victim process uses to indulge in some speculative execution. In the case of browsers (for example) this is relatively easy, but in general it is not so easy.

It is also easier to fix and/or protect against on an individual application basis – expect browser patches shortly.

Some Technicalities

Within this section I will attempt to explain some of the technical aspects of the vulnerabilities. By all means skip to the summary if you wish.

The Processor?

Normally security vulnerabilities are found within software – the operating system, or a ‘layered product’ – something installed on top of the operating system such as an application, a helper application, or a run-time environment.

Less often we hear of vulnerabilities that involve hardware in some sense – requiring firmware updates to either the system itself, graphics cards, or network cards.

Similar to firmware updates, it is possible for microcode updates to fix problems with the processor’s instructions.

Unfortunately these vulnerabilities are not found within the processor instructions, but in the way that the processor executes those instructions. And no microcode update can fix this problem (although it is possible to weaken the side-channel attack by making the cache instructions execute in a fixed time).

Essentially the processor hardware needs to be re-designed and new processors released to fix this problem – you need a new processor. The patches for Meltdown and Spectre – both the ones available today, and those available in the future – are strictly speaking workarounds.

The Kernel and Address Space

Meltdown specifically targets the kernel and the kernel’s memory. But what is the kernel?

It is a quite common term in the Linux community, but every mainstream operating system has the same split between kernel mode and user mode. Kernel mode has privileged access to the hardware, whereas user mode is prevented from accessing the hardware directly, and indeed from accessing the memory of any other running user process. It would be easy to think of this as the operating system and user applications, but that would be technically incorrect.

Whilst the kernel is the operating system, plenty of software that runs in user mode is also part of the operating system. But the over-simplification will do because it contains a useful element of the truth.

Amongst other things the kernel address space contains many secrets that user mode software should not have access to. So why is the kernel mode address space overlaid upon the user mode address space?

One of the jobs that the kernel does when it starts a user mode process is to give that process a virtual view of the processor’s memory that entirely fills the processor’s memory addressing capability – even if that is more memory than the machine contains. The reasons for this can be ignored for the moment.

If real memory is allocated to a user process, it can be seen and used by that process and no other.

For performance reasons, the kernel includes its own memory within each user process (but protected). It isn’t strictly necessary, but re-programming the memory management unit to map the kernel memory for every system call is slower than not doing so. And after all, memory protection should stop user processes reading kernel memory directly.

That is of course unless memory protection is broken …
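
As a concrete illustration of what that protection normally does, the little sketch below (assuming Linux on x86-64 with gcc; the kernel address used is merely a representative one from the kernel half of the address space, not anything secret) tries to read a kernel address from user mode. No data comes back; the processor faults and the process receives SIGSEGV, which the program catches so it can report what happened. Catching the fault in this way is, incidentally, one of the tricks Meltdown proof-of-concept code uses to survive its deliberately illegal reads.

#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>

static sigjmp_buf recover;

static void on_segv(int sig)
{
    (void)sig;
    siglongjmp(recover, 1);            /* jump back to main() instead of dying */
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    /* A representative address in the kernel half of the x86-64 address space. */
    volatile uint8_t *kernel_addr = (volatile uint8_t *)0xffffffff81000000UL;

    if (sigsetjmp(recover, 1) == 0) {
        uint8_t value = *kernel_addr;  /* user mode may not read this: it faults */
        printf("read %u - this should never be printed\n", value);
    } else {
        printf("SIGSEGV: kernel memory is not readable from user mode\n");
    }
    return 0;
}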

Speculative Execution

Computer memory is much slower than modern processors which is why we have cache memory – indeed multiple levels of cache memory. To improve performance processors have long been doing things that come under the umbrella of ‘speculative execution’.

If for example we have the following sample of pseudo-code :-

load variable A from memory location A-in-memory
if A is zero
then
    do one thing
else
    do another
endif

Because memory is so slow, a processor running this code could stall whilst it is waiting for the memory location to be read. This is how processors of old worked, and it is often how processor execution is taught – the next step is where things start getting really weird.

However it could also execute the code assuming that A will be zero (or not, or even both), so that it has the results ready for when the memory has been read. Now there are some obvious limitations to this – the processor can't turn your screen green assuming that A is zero, but it can sometimes get some useful work done.

The problem (with both Meltdown and Spectre) is that speculative execution seems to bypass the various forms of memory protection. Whilst the speculative results are discarded once the memory has properly been read and the memory protection kicks in, there is a side-channel attack that allows some of the details of the speculative results to be sniffed by an attacker.
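
The side channel itself is nothing more exotic than timing memory reads. The sketch below (assuming an x86-64 processor and gcc or clang, and using the rdtscp and clflush instructions) shows the measurement at the heart of it: reading a cache line that has recently been touched is dramatically quicker than reading one that was flushed out of the cache. An attack arranges for the speculatively executed code to touch one cache line or another depending on a secret value, and then uses exactly this sort of timing to work out which line was touched.

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint8_t probe[4096];

/* Time a single read of *addr, in processor cycles. */
static uint64_t time_read(volatile uint8_t *addr)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    _mm_clflush(probe);                /* make sure the line starts uncached  */
    _mm_mfence();                      /* and that the flush has completed    */

    uint64_t cold = time_read(probe);  /* first read: comes from main memory  */
    uint64_t warm = time_read(probe);  /* second read: served from the cache  */

    printf("uncached read: %llu cycles, cached read: %llu cycles\n",
           (unsigned long long)cold, (unsigned long long)warm);
    return 0;
}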

 

Summary

  1. Don't panic! These attacks are not currently in use and because of the complexity it will take some time for the attacks to appear in the wild.
  2. Intel processors are vulnerable to Meltdown, and will need a patch to apply a work-around. Apply the patch as soon as it comes out even if it hurts performance.
  3. The performance hit is likely to be significant only on a small set of applications, and in general only significant on a macro-scale - if you run as many servers as Google, you will have to buy more servers soon.
  4. Things are a little more vague with Spectre, but it seems likely that individual applications will need to be patched to remove their vulnerability. Expect more patches.

[Image: Tunnel To The Old Town]

 

 

Feb 08 2012
 

If you read certain articles on the web you might be under the impression that Apple has had a secret project to port OSX to the ARM-based architecture with the intention of producing a cut down (although not necessarily very cut down) Macbook Air running on the ARM architecture.

Which is preposterous.

Firstly this secret project to port OSX was merely bringing up the ‘lower half’ of OSX (Darwin) on a particular variety of ARM-processor. The end result ? Probably something more or less equivalent to a “login” prompt on an old multi-user Unix system with no GUI. That is not to underestimate the accomplishment of the student involved – in many ways that would be a good 75% of the work involved.

But a few key facts here :-

  1. This is not the first port of Darwin (or even OSX) to the ARM-based architecture. Pick up your iThingie … that’s got an ARM inside, and whilst we all call the operating system it runs iOS, it is really OSX with a different skin on. Sure there are some differences and limitations, but they are merely skin deep – at the lowest level they’re both Darwin.
  2. If there’s a secret project to run OSX on an ARM-based laptop of some kind, this ain’t it. Take a closer look at the processor used in this experiment. It’s an ARM processor less capable than that in the very first iPhone. You won’t see it in any new laptops. If this secret experiment had any real product behind it, it would be more likely to be an intelligent embedded device – a really clever fridge or something (and no, not a TV).
  3. If there was a real product behind this, it seems pretty unlikely that Apple would choose a student on work experience to do the work. After all such a student might just spill the beans on a secret project given enough green folding stuff as incentive.

What is probably the case here is that Apple came up with this project for the student as a way of testing whether he was worth considering as a full employee – after all it is a better way of testing a potential employee than asking them to make the tea! And they have no intention of using the result as a product.

What they will do however is use the student’s observations to feed back into the OSX team – what problems did he encounter that might qualify as bugs ? Etc.

In reality, Apple probably already has OSX running on ARM based machines in their labs. It’s an obvious thing to try out given that all their iThingies are ARM based, and it is not an enormous amount of extra work to finish off what is already in place to get something that looks and runs like OSX. After all, Apple did ages ago admit that early versions of OSX did run on x86-based processors when their product line was all PowerPC based, and keeping OSX portable across architectures is something they probably want to keep as a possibility.

Will Apple launch an ARM-based Macbook Air ? Not anytime soon. Whilst the value of a 64-bit architecture is over-rated, it would seem unlikely that Apple will ever again launch a 32-bit based “real” computer. But with 64-bit based ARMs arriving in a year or two, who knows ?

Oct 31 2011
 

According to a couple of articles on The Register, a couple of manufacturers are getting close to releasing ARM-based servers. The interesting thing is that the latest announcement includes details of a 64-bit version of the ARM processor, which according to some people is a precondition for using the ARM in a server.

It is not really true of course, but a 64-bit ARM will make a small number of tasks possible. It is easy to forget that 32-bit servers (of which there are still quite a few older ones around) did a pretty reasonable job whilst they were in service – there is very little that a 64-bit server can do that a 32-bit server cannot. As a concrete example, a rather elderly SPARC-based server I have access to has 8Gbytes of memory available and is running a 64-bit version of Solaris (it’s hard to find a 32-bit version of Solaris for SPARC), yet of the 170 processes it is running, none occupies more than 256Mbytes of memory; indeed the total size of each process is also no more than 256Mbytes.

The more important development is the introduction of virtualisation support.

The thing is that people – especially those used to the x86 world – tend to over-emphasise the importance of 64-bits. It is important for some applications that do require more than 4Gbytes of memory – in particular applications such as large Oracle (or other DBMS) installations. But the overwhelming majority of applications actually suffer a performance penalty if re-compiled to run as 64-bit applications.

The simple fact is that if an application is perfectly happy to run as a 32-bit application with a “limited” memory space and smaller integers, it can run faster because there is less data flying around. And indeed, as pointed out in the comments section of the article above, the 64-bit version can also use ever so slightly more electricity.

What is overlooked amongst those whose thinking is dominated by the x86 world is that the x86-64 architecture offers two benefits over the old x86 world – a 64-bit architecture and an improved architecture with many more CPU registers. This allows 64-bit applications in the x86 world to perform better than their 32-bit counterparts even if the applications wouldn’t normally benefit from running on a 64-bit architecture.

If the people producing operating systems for the new ARM-based servers have any sense, they will quietly create a 64-bit operating system that can transparently run many applications in 32-bit mode. Not exactly a new thing as this is what Solaris has done on 64-bit SPARC based machines for a decade. This will allow those applications that don’t require 64-bit, to gain the performance benefit of running 32-bit, whilst allowing those applications that require 64-bit to run perfectly well.
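
To make the difference concrete, here is a trivial sketch (assuming gcc or clang on a Unix-like system with multilib support, so it can be built with both -m32 and -m64). The 32-bit build reports 4-byte pointers and longs; the 64-bit (LP64) build reports 8 bytes for both, and that extra width is exactly the extra data flying around that a 32-bit build avoids.

#include <stdio.h>

int main(void)
{
    /* Built with -m32 these are all 4 bytes; built with -m64 (LP64)
       pointers and longs grow to 8 bytes while int stays at 4. */
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    printf("sizeof(long)   = %zu bytes\n", sizeof(long));
    printf("sizeof(int)    = %zu bytes\n", sizeof(int));
    return 0;
}

A dual word size operating system of the kind described above simply runs either build transparently, so each application can be compiled whichever way suits it best.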

There is no real downside in running a dual word sized operating system except a minor amount of added complexity for those developers working at the C-language level.

Jul 02 2011
 

One of the many obsessions in the IT industry going around at the moment is the possibility of low-energy ARM-based servers. ARM-based processors are currently very popular in the smartphone and slate markets because they eat much less energy than Intel-based processors. What is less commonly realised is that ARM-based processors have also long been used in general purpose desktop computers.

ARM processors were originally designed and built by a home computer company called Acorn as a replacement for the 6502 processor in their immensely successful BBC Micro. The replacement micros were collectively known as the Acorn Archimedes and were probably the most powerful home computer before the crash of the home computer market, and the eventual dominance of the IBM PC compatibles.

And of course a general purpose computer running a well-designed operating system is just a short step away from being a capable server.

So of course it is possible for someone to release a server based around the ARM processor and for it to be useful as a server. Whether it is successful enough to carve itself a respectable niche in the server market as a whole is pretty much down to the vagaries of the market.

Some of the criticisms I have seen around the possibilities for ARM servers :-

But ARM Cores Are Just So Slow

Actually they’re not. Sure they are slower than the big ticket Xeons from Intel, but they are quite possibly fast enough. Except for specialist jobs, modern servers are rarely starved of CPU; in fact that is one of the reasons why virtualisation is so popular – we can make use of all that wasted CPU resource. Modern servers are more typically constrained (especially when running many virtual servers) by I/O and memory.

And the smaller size of the ARM core allows for a much larger number of cores than x86-based servers. And for most modern server loads (with virtual machines), many cores is just as good as fewer but faster cores.

In the case of I/O, the ARM processor is just as capable as an Intel processor because it isn’t the processor that implements links to the outside world (that is a bit simplistic, but correct in this context). In the case of memory, ARM has an apparent problem in that it is currently a 32-bit architecture which means a single process can only address up to 4Gbytes of memory.

Now that does not mean an ARM server is limited to 4Gbytes of memory … the capacity of an ARM server in terms of memory is determined by the capabilities of the memory management unit. I am not aware of any ARM MMUs that have a greater than 32-bit addressing capability, but one could relatively easily be added to an ARM core.

Of course that is not quite as good as a 64-bit ARM core, but that is coming. And except for a certain number of server applications, 64-bit is over rated outside of the x86 world – Solaris on SPARC is still delivered with many binaries being 32-bit because changing to 64-bit does not give any significant advantages.

But It Is Incompatible With x86 Software

Yes. And ?

This is a clear indication that someone has not been around long enough to remember earlier server landscapes when servers were based on VAX, Alpha, SPARC, Power, Itanium, and more different processor architectures. The key point to remember is that servers are not desktops; they usually run very different software whether the server is running Windows, Linux, or some variety of Unix.

There are server applications where x86 binary compatibility is required – usually applications provided by incompetent third party vendors. But most jobs that servers do are done by the included software, although in the case of Linux and Unix, the range of “included” software is somewhat wider than with Windows. Indeed for every third party application that requires an x86 processor, there are probably at least half a dozen other server jobs that do not require x86 servers – DNS, DHCP, directory services, file servers, print servers, etc.

If you buy an ARM-based server, it will come with an operating system capable of running many server tasks which can be used to offload server tasks from more expensive x86 hardware (either in terms of the upfront cost, or in terms of the ongoing power costs). Or indeed, will be sufficient to provision thin clients to the point where they can use the cloud.

 
