Jan 04 2018
 

Well, there’s another big and bad security vulnerability; actually there are three. These are known as Meltdown and Spectre (two different Spectres). There are all sorts of bits of information and misinformation out there at the moment and this posting will be no different.

In short, nobody but those involved in the vulnerability research, or those implementing work-arounds within the well-known operating systems, really knows these vulnerabilities well enough to say anything about them with complete accuracy.

The problem is that both vulnerabilities are exceptionally technical and require detailed knowledge of technicalities that most people – even people who work in the IT industry – are not familiar with.

Having said that I’m not likely to be 100% accurate, let’s dive in …

What Is Vulnerable?

For Meltdown, every modern Intel processor is vulnerable; in fact the only Intel processors that are not vulnerable are likely to be encountered only in retro-computing. Processors from AMD and ARM are probably not vulnerable, although it is possible to configure at least one AMD processor in such a way that it becomes vulnerable.

It appears that more processors are likely to be vulnerable to the Spectre vulnerabilities. Exactly what is vulnerable is a bit of work to assess, and people are concentrating on the Meltdown vulnerability as it is more serious (although Spectre is itself serious enough to qualify for a catchy code name).

What Is The Fix?

Replace the processor. But wait until fixed ones have been produced.

However there is a work-around for the Meltdown vulnerability, in the form of an operating system patch and a firmware patch (to fix the UEFI environment). All of the patches “fix” the problem by removing kernel memory from the user memory map, which stops user processes exploiting Meltdown to read kernel memory.

Unfortunately there is a performance hit with this fix; every time you call the operating system (actually the kernel) to perform something, the memory map needs to be loaded with the kernel maps and re-loaded with the old map when the routine exits.

This “costs” somewhere between 5% and 30% when performing system calls. Very modern processors (with features that reduce the cost of switching memory maps) will be towards the 5% end of the range, and older processors towards the 30% end.

Having said that, this only happens when calling the operating system kernel, and many applications may very well make relatively few kernel calls, in which case the performance hit will be barely noticeable. Nobody is entirely sure what the performance hit will be for real-world use, but the best guesses say that most desktop applications will be fine with occasional exceptions (the web browser is likely to be one); the big performance hit will be on the server.
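As a rough illustration of where the cost lands, here is a minimal Python sketch of my own (purely illustrative – it measures the user/kernel crossing in general, not the Meltdown patch itself) comparing a loop that stays in user mode with one that makes a kernel call on every iteration:

```python
import os
import time

def time_loop(fn, iterations=100_000):
    """Time `iterations` calls of fn and return the elapsed seconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

# A loop that never leaves user mode ...
user_only = time_loop(lambda: 1 + 1)

# ... versus one that (usually) enters the kernel on every iteration;
# this user/kernel crossing is exactly what the Meltdown work-around
# makes more expensive, as the memory map must be swapped each way.
syscall_heavy = time_loop(os.getpid)

print(f"user-mode loop: {user_only:.4f}s")
print(f"syscall loop:   {syscall_heavy:.4f}s")
```

On a patched kernel only the second figure grows, which is why syscall-heavy workloads (servers, databases) feel the work-around most and purely computational code barely notices.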

How Serious Are They?

Meltdown is very serious not only because it allows a user process to read privileged data, but because it allows an attacker to effectively remove a standard attack mitigation that makes many older-style attacks impracticable. Essentially it makes older-style attacks practicable again.

Although Spectre is still serious, it may be less so than Meltdown because an attacker needs to be able to control some data that the victim process uses to indulge in some speculative execution. In the case of browsers (for example) this is relatively easy, but in general it is not so easy.

It is also easier to fix and/or protect against on an individual application basis – expect browser patches shortly.

Some Technicalities

Within this section I will attempt to explain some of the technical aspects of the vulnerabilities. By all means skip to the summary if you wish.

The Processor?

Normally security vulnerabilities are found within software – the operating system, or a ‘layered product’ – something installed on top of the operating system such as an application, a helper application, or a run-time environment.

Less often we hear of vulnerabilities that involve hardware in some sense – requiring firmware updates to either the system itself, graphics cards, or network cards.

Similar to firmware updates, it is possible for microcode updates to fix problems with the processor’s instructions.

Unfortunately these vulnerabilities are not found within the processor instructions, but in the way that the processor executes those instructions. And no microcode update can fix this problem (although it is possible to weaken the side-channel attack by making the cache instructions execute in a fixed time).

Essentially the processor hardware needs to be re-designed and new processors released to fix this problem – you need a new processor. The patches for Meltdown and Spectre – both the ones available today, and those available in the future – are strictly speaking workarounds.

The Kernel and Address Space

Meltdown specifically targets the kernel and the kernel’s memory. But what is the kernel?

It is a quite common term in the Linux community, but every single mainstream operating system has the same split between kernel mode and user mode. Kernel mode has privileged access to the hardware, whereas user mode is prevented from accessing the hardware and indeed the memory of any other user process running. It would be easy to think of this as the operating system and user applications, but that would be technically incorrect.

Whilst the kernel is the operating system, plenty of software that runs in user mode is also part of the operating system. But the over-simplification will do because it contains a useful element of the truth.

Amongst other things the kernel address space contains many secrets that user mode software should not have access to. So why is the kernel mode address space overlaid upon the user mode address space?

One of the jobs that the kernel does when it starts a user mode process is to give that process a virtual view of the processor’s memory that entirely fills the processor’s memory addressing capability – even if that is more memory than the machine contains. The reasons for this can be ignored for the moment.

If real memory is allocated to a user process, it can be seen and used by that process and no other.

For performance reasons, the kernel includes its own memory within each user process (but protected). It isn’t strictly necessary, but re-programming the memory management unit to map the kernel memory for each system call is slower than not doing so. And after all, memory protection should stop user processes reading kernel memory directly.

That is of course unless memory protection is broken …
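The address-space mechanics above can be sketched with a toy, single-level page table (real MMUs use multi-level tables, hardware TLBs and rather more protection bits; the page size is the x86 norm but the page and frame numbers here are invented for illustration):

```python
PAGE_SIZE = 4096  # 4 KiB pages, as on x86

# Toy page table: virtual page number -> (physical frame number, user-accessible?)
page_table = {
    0: (42, True),    # a user page
    1: (43, True),    # another user page
    512: (7, False),  # a kernel page mapped into the process, but protected
}

def translate(virtual_addr, user_mode=True):
    """Translate a virtual address, enforcing the user/kernel protection bit."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError("page fault: unmapped address")
    frame, user_ok = page_table[page]
    if user_mode and not user_ok:
        raise PermissionError("protection fault: kernel page")
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1004)))  # user page 1, offset 4 -> somewhere in frame 43
# translate(512 * PAGE_SIZE)   # would raise PermissionError in user mode
```

The Meltdown work-around mentioned earlier effectively removes the kernel entry (page 512 here) from the user-visible table altogether, so even an attack that bypasses the protection check finds nothing mapped.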

Speculative Execution

Computer memory is much slower than modern processors which is why we have cache memory – indeed multiple levels of cache memory. To improve performance processors have long been doing things that come under the umbrella of ‘speculative execution’.

If for example we have the following sample of pseudo-code :-

load variable A from memory location A-in-memory
if A is zero
then
  do one thing
else
  do another
endif

Because memory is so slow, a processor running this code could stall whilst it is waiting for the memory location to be read. This is how processors of old worked, and is often how processor execution is taught – the next step is where things start getting really weird.

However it could also execute the code assuming that A will be zero (or not, or even both), so that it has the results ready for when the memory has finally been read. Now there are some obvious limitations to this – the processor can’t turn your screen green assuming that A is zero – but it can sometimes get some useful work done.

The problem (with both Meltdown and Spectre) is that speculative execution seems to bypass the various forms of memory protection. Now whilst the speculative results are ignored once the memory is properly read, and the memory protection kicks in, there is a side-channel attack that allows some of the details of the speculative results to be sniffed by an attacker.
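The shape of that side-channel attack can be shown with a toy model. This is emphatically not a working exploit – Python cannot observe real cache timings, so the “cache” here is an explicit simulation of my own invention. The point is only the mechanism: the speculative access leaves a footprint in the cache, and probing which cache line is “fast” afterwards reveals the secret value:

```python
SECRET = 0x2A  # the byte the attacker should not be able to read

cache = set()  # toy cache: the set of probe-array indexes currently cached

def speculative_read():
    """Model the processor speculatively reading the secret and using it
    to index a probe array.  The access is later squashed and its result
    discarded, but the cache footprint (the index touched) survives."""
    cache.add(SECRET)

def probe():
    """The attacker times access to every possible index; the 'fast'
    (cached) one reveals the secret byte."""
    for i in range(256):
        if i in cache:  # in reality: measured access time below a threshold
            return i
    return None

speculative_read()
print(hex(probe()))  # recovers 0x2a without ever 'reading' SECRET directly
```

The real attacks differ enormously in the details (flushing the cache first, choosing probe-array strides, statistical filtering of the timings), but this is the skeleton common to both Meltdown and Spectre.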

 

Summary

  1. Don't panic! These attacks are not currently in use and because of the complexity it will take some time for the attacks to appear in the wild.
  2. Intel processors are vulnerable to Meltdown, and will need a patch to apply a work-around. Apply the patch as soon as it comes out even if it hurts performance.
  3. The performance hit is likely to be significant only on a small set of applications, and in general only significant on a macro-scale - if you run as many servers as Google, you will have to buy more servers soon.
  4. Things are a little more vague with Spectre, but it seems likely that individual applications will need to be patched to remove their vulnerability. Expect more patches.

Tunnel To The Old Town

 

 

Dec 10 2017
 

If you take a look at a modern keyboard, there will be more than a passing resemblance to the IBM PC/AT keyboard of 1984. The differences are relatively minor – the keyboard may have shrunk slightly in terms of the non-functional bezel, there may be some additional media keys (typically above the number pad), and the overall construction will probably have been made a lot cheaper (the PC/AT was an expensive system and the keyboard was expensive too).

 

(The pictured mainframe keyboard is not a PC/AT keyboard but does have a half-reasonable number of keys)

But very little about the keyboard layout has changed. Oh there are variants such as the ten-key-less keyboard where the number pad has been removed, or even more extreme 60% keyboards which do away with the navigation keys as well, but overall the layout is still pretty much the same.

The very first thing to say is that ergonomically, keyboards are too wide, which causes you to move your mouse too far out to use comfortably. This is where the age of the PC/AT keyboard shows; at the time it was designed, mice and gooey interfaces were a rarity and everyone’s hands were nailed to the keyboard. This is the reason why the ten-key-less keyboards exist, and from experience of using both – ten-key-less keyboards, and a modular keyboard with the number pad on the left – I can say that a narrower keyboard is more comfortable once the mouse is taken into consideration.

But I like big keyboards (as you can tell from the picture), or more specifically I like keyboards with plenty of keys. A keyboard can have plenty of keys without being wide if it is deep. Changing keyboard layouts is contentious, but as someone who has used a wildly different set of keyboards I can say it is perfectly possible to get used to different layouts when those different layouts involve changing the non-touch-typing keys.

That is not to say that changing the touch-typing keys should not be considered; for one thing the staggered layout of the old QWERTY keyboard does make things tricky so orthogonal layouts should be considered.

Now onto some specifics …

Relabelling

In some cases, keys have been labelled the way they are just because that is always the way it has been done. Which is a damn silly reason especially when the name is not only inscrutable but wrong.

For example, Backspace is by description (and historically) a key that should move the cursor back one space to allow typewritten text to be overwritten – you could get an umlaut over an ‘A’ by typing A, Backspace, “ which would get you a very rough approximation of ä. Which is not what the key on our modern keyboard does – it rubs out a mistake, and some old keyboards labelled it properly as Rubout. I have also moved it to just above the Enter key which is traditional on Unix-layout keyboards which is not a bad idea more generally – it is still in a prominent position, and by reducing its size slightly we have room for an additional key in the main section of the keyboard.

The PrtScn key is one of those inscrutable keys whose function is a mystery to anyone who wasn’t around in the early days: pressing it would send the text contents of the screen to a printer. There are two reasons why we should relabel it Screen Copy – firstly that is what it does (it copies the screen contents to the clipboard), and secondly it gives people who don’t know what PrtScn does a fighting chance of discovering a useful feature.

In a similar way, it would be helpful to add Next Field to the Tab key as a description of one of its more useful functions. You can hear my teeth grinding every time someone takes their hands off the keyboard, uses the mouse to click in the next field, and then types again when one simple press of the Tab key will do all that for them. Of course the original use is still there and used within word-processors.

Finally, the Esc key has been moved to its traditional position, and I have added what is effectively its most common usage – Cancel.

The right Alt key is often configured as an AltGr key to allow it to be used in combination with other keys to generate characters not found on the keyboard – such as æ, þ, or œ (all of which should be used in English but rarely are because they are so difficult to type).

I have not been able to resist relabelling the Win keys to Super keys, which is what they are configured for in Linux (and used for much the same purpose).

Moving/Shrinking Keys

Why do both Shift keys have to be so big? It is well understood that inserting an extra key between Z and the left shift is unpopular because you have to stretch further for the Shift, but keeping it in position and adding a new key to the left (here a small Caps Lock) would work.

And on the subject of Caps Lock, why give such a prominent key next to A to such a rarely used function? EXCEPT FOR THOSE WHO INSIST ON SHOUTING! Of course, moving the Caps Lock key somewhere else may just lead to less shouting. And it allows a very common request amongst those who use it a lot – moving the Control key back to its traditional position.

Some “New” Keys

Where is the Help key? We all know that F1 almost always functions as a help key, but why not have a dedicated Help key when the keyboard standard allows for it?

And in these days of increased concern over security, why don’t we add a Lock Screen button? Whilst it may not seem that important at home, in a corporate environment it should be mandatory, and it is not a bad idea in a home environment either.

The Cut, Copy, and Paste keys do the equivalent of Control-X, -C, -V, which might seem unnecessary but not everyone knows the keyboard shortcuts. Besides which, in edge cases the control key shortcuts are used for other purposes.

Most of the media control keys in the top right are pretty much standard if labelled differently. I have merged the up/down keys – so rather than use two keys to control the volume, you use one key (unshifted is down and shifted is up); I have “added” Bright ± and Contrast ± which are commonly found on laptop keyboards as Function sequences, but why shouldn’t they have their own dedicated keys and appear on desktop keyboards too?

The smiley key (😀) is a feature stolen from smartphones – an easy way to pick and select emoticons. I envision it popping up a dialog box to allow the arrow keys to move onto the preferred emoticon and Enter used to insert that symbol.

The Compose key is copied from old keyboards and allows you to enter certain symbols by using keyboard sequences – for example Compose, ", A results in “ä” – and there are many possible sequences. It is a quick and easy way to type certain symbols.
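Under the hood a compose sequence amounts to a simple table lookup. This Python sketch uses a handful of sequences matching the common X11 defaults (real compose tables are far larger and vary between systems):

```python
# A few standard X11 compose sequences:
# (first key after Compose, second key) -> resulting character
COMPOSE = {
    ('"', "a"): "ä",
    ('"', "o"): "ö",
    ("'", "e"): "é",
    ("a", "e"): "æ",
    ("o", "e"): "œ",
    ("t", "h"): "þ",
}

def compose(first, second):
    """Resolve a Compose sequence; fall back to the bare keys if unknown."""
    return COMPOSE.get((first, second), first + second)

print(compose('"', "a"))  # ä
print(compose("o", "e"))  # œ
```

This is also why the key is so cheap to support in software: no new hardware is needed, just a lookup table keyed on the two (or more) keystrokes following Compose.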

And Find is also an obvious key to add – to search for things.

The Blank Keys

Also I have added a whole row of blank keys which would ideally be populated with re-legend-able keycaps (a clear plastic top which can be removed to insert a tiny scrap of paper with your preferred label). And they should be able to be programmed for whatever the owner of the keyboard wants.

Because many people have their own ideas on what should be on a keyboard.

Indeed with a proper keyboard controller (such as one from the keyboard enthusiasts’ arena) any key could be programmed to send whatever you want.

Removing Keys

Don’t.

However much you believe a particular key is unused, there is probably some population of some type of computer user that uses that key more than you would believe possible. For example, I rarely use Scroll Lock (enough that I often use it as a custom key to control VirtualBox), but it is often used with Excel.

And I have seen suggestions that the grave/tilde (` and ~) should be removed because nobody uses it; well I use it a hell of a lot.

Nov 29 2017
 

If you have not already heard about it, Apple made a mindbogglingly stupid mistake with the latest release of macOS (previously known as OSX), leaving their users open to an incredibly easy exploit that would give anyone full control over any Apple machine in their hands. Or in some cases, remotely.

The externally visible effect of the vulnerability is that a standard Unix account (root) that was supposed to be disabled was left with a blank password. Apple uses a very common Unix security mechanism that means the root account is unnecessary as an ordinary account (i.e. nobody logs in as root), although the account has to exist so that legitimate privilege escalation works.

As an alternative, Apple uses sudo (and graphical equivalents) so that members of a certain group can run commands as root. Nothing wrong with that.

To keep things safe, Apple disabled the root account and because the account was disabled, left the password blank.

It turns out that the vulnerability was caused by a bug in Apple’s authentication system which resulted in blank passwords being reset and the account enabled. But it is more complicated than that; Apple made a number of mistakes :-

  1. The bug in the authentication system. No software is bug-free, but bugs are still mistakes; and precisely because no software is bug-free, it makes sense to take extra precautions to stop bugs causing a cascade of problems.
  2. The root password should have been set to a random value to prevent access if the account was accidentally enabled.
  3. Apple’s test suite – which hopefully they use to verify that new releases don’t contain previously identified bugs – should also have checked for this vulnerability.

The precise details matter less than the principle: defence in depth.
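The second mistake in the list above – leaving a disabled account’s password blank – is cheap to avoid. A minimal Python sketch of the precaution (the length and alphabet here are arbitrary choices of mine, not anything Apple-specific):

```python
import secrets
import string

def random_lockout_password(length=32):
    """Generate a random password for a disabled account, so that an
    accidental re-enable does not leave the account wide open."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Nobody records this password; it exists only so the field is never blank.
print(random_lockout_password())
```

The password is generated, set, and thrown away: even if a bug re-enables the account, an attacker still faces a strong random credential rather than an empty one.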

Hemisphere and Curves

Nov 25 2017
 

The scariest predictions of robotics and artificial intelligence reveal a desolate future where almost everyone is unemployed because machines can do it better and faster than people. That will not happen, simply because the economy would break down if it did – if people are unemployed, they are too poor to be efficient consumers.

Of course the most rabid Tories will try to cling to the outdated economic model of capitalism beyond the point of sanity so they will try to bring a great deal of pain.

To give you a flavour of what artificial intelligence might bring, they are talking about machines replacing lawyers, solicitors, and barristers; which is not all bad. Legal fees are high enough that most people cannot bring civil suits beyond a point where only the simplest decisions can be made. Imagine a future where a civil suit can be automatically handled by machines battling it out at all levels from the County Court all the way up to the European Court in minutes, and at a cost that almost anyone can afford.

Of course if you work in the legal system, you might well disagree!

The most obvious way of dealing with a future where nearly everyone is ‘unemployed’ but still needs to be an efficient consumer is to use the basic income idea where everyone gets a reasonable income. The most immediate reaction to this is of course the belief that it is too expensive. Except that some basic maths shows that it is possible: the UK population today is around 65 million, and the UK economy is worth £2 trillion; a simple division shows that we could give everyone £30,000 per year.
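The basic maths is simple enough to check (the figures are the rough ones quoted above, not precise statistics):

```python
gdp = 2_000_000_000_000   # UK economy, roughly £2 trillion
population = 65_000_000   # UK population, roughly 65 million

per_head = gdp / population
print(f"£{per_head:,.0f} per person per year")  # a little under £31,000
```

Which of course is the entire economic output divided equally, with nothing left over for anything else – hence the caveats that follow.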

Of course that would mean a few less amenities – the NHS, defence spending, etc. So in reality the basic income would be a great deal lower than this, but it is broadly feasible given some rather radical changes.

Does everyone deserve a basic income like this? No, of course not. But this is not about what the worst people in our society deserve, but making sure they function as efficient consumers. And as a bonus, by ensuring everyone has a basic income, you can be sure that nobody slips through the net.

This does not mean the end of jobs and industry, but it will radically change it. Imagine for instance that you do not get a salary, but a share of the profits – instantly the cost of labour is removed allowing a company to compete with low labour cost countries. But if that share is too low, people are likely to sit at home.

And of course work will have to be made worthwhile without (or at least minimising) the annoyances we find at work today. Get in the way of what people work to do, and they will disappear in the direction of somewhere else.

Essentially this is almost returning to pure capitalism – companies are free to get rid of workers at whim, and workers are free to leave at any time. That has always been one of the biggest problems with capitalism – workers are not free to leave work with many things keeping them at a potentially abusive work-place.

Those with more than half a brain will realise that housing costs are a big issue here; and a solution needs to be found or all of the above will only apply to those who get their housing costs for free (i.e. almost nobody). Any potential solution comes in two halves – what to do about those with mortgages and what to do with those who rent.

In the former case, the government can simply pick up mortgage payments when the house ‘owner’ cannot afford them. In return, the government gets a proportionate share of the freehold, so when the house is sold, they get their share back.

For those who rent, the government can also pick up the rent payments for those who cannot afford those payments, and can decide what a reasonable rent is. Plus no landlord can kick out a resident for non-payment.

The Bench

Nov 02 2017
 

Autocorrect can be annoying when it happens to you, or amusing if it happens to someone else. But one thing that appears when you look at amusing autocorrects on the Internet is that you often find someone saying “it’s the phone” or “the phone is doing it”.

No it isn’t. It’s your fault.

Way back in the mists of time when we didn’t have smartphones and keyboards were big clunky mechanical things (some of us still use them), one of the first bits of IT security advice I ever gave was to read through the emails you are about to send. Whatever means you use to compose a message, there is a chance of making a mistake, so what ends up in the message you composed may not be what you intended to write.

As a bonus, you get a second chance to review your message to check for “thinkos” (like typos but where your brain comes out with something you didn’t intend).

If you choose to send messages (of whatever kind) without checking they say what you intended, you are responsible for the mistakes.

The Bench

Sep 20 2017
 

By default, the Awesome window manager sets up 9 tags and uses a rather clever method for setting keyboard shortcuts for those tags.

And that is also one of the irritations of using Awesome, because I have gotten into the habit of using more virtual screens (“tags”) than this. After first increasing the number in a rather dumb way, I have come up with an improved method that can be used to replace the existing code in the Awesome rc.lua file :-

local taglist = { "1", "2", "3", "4", "5", "6", "7", "8", "9", "0", "-", "=" }
-- The list of tags that I use.
…
 awful.tag( taglist, s, awful.layout.layouts[1])
…
for i = 1, #taglist do
  globalkeys = awful.util.table.join(globalkeys,
        awful.key({ modkey }, taglist[i],
                  function ()
                      local screen = awful.screen.focused()
                      local tag = screen.tags[i]
                      if tag then
                          tag:view_only()
                      end
                  end,
                  {description = "view tag", group = "tag"}),
        awful.key({ modkey, "Control" }, taglist[i],
                  function ()
                      local screen = awful.screen.focused()
                      local tag = screen.tags[i]
                      if tag then
                          awful.tag.viewtoggle(tag)
                      end
                  end,
                  {description = "toggle tag", group = "tag"}),
        awful.key({ modkey, "Shift" }, taglist[i],
                  function ()
                      if client.focus then
                          local tag = client.focus.screen.tags[i]
                          if tag then
                              client.focus:move_to_tag(tag)
                          end
                      end
                  end,
                  {description = "move focused client to tag", group = "tag"}),
        awful.key({ modkey, "Control", "Shift" }, taglist[i],
                  function ()
                      if client.focus then
                          local tag = client.focus.screen.tags[i]
                          if tag then
                              client.focus:toggle_tag(tag)
                          end
                      end
                  end,
                  {description = "toggle focused client on tag", group = "tag"})
    )
end

That’s three different parts of the code to change – a list of tags to use at the top of the file, a replacement somewhere in the middle, and a large chunk replacing existing code at the end of the keyboard configuration. I don’t claim this is better than the standard way, but it is handy for me.

The Window

Sep 16 2017
 

My Facebook news feed came up with a post with an image embedded within it, listing objections to owning a smartphone :-

Now I’m not in the business of telling someone they should own a smartphone, but taking some of the objections in turn …

Firstly if you are letting your smartphone boss you around and letting it overwhelm you, you’re using it wrong. You decide when to use your smartphone as a communications tool; most of those messages and emails that your phone is constantly pinging and burbling to you about can wait until it is convenient for you to answer.

Do any of your friends get annoyed when you don’t respond to their messages within seconds? Tell them to grow up and get a life.

To give you an idea of how I use my smartphone, here’s a typical day :-

  1. The phone is charging downstairs in the front room where it has been since the evening. If it is ringing, bleeping, throbbing, burbling madly, I won’t know until I’ve finished getting up.
  2. If I am curious about the reaction to some photos I posted the previous night I might pick it up and take a quick look at the notifications, or I might not.
  3. As I head out the door for work, I’ll pick it up and put it straight into my pocket. On the way into work I might hear phone calls, or I might not.
  4. I may, as I approach work, pull out the phone and take a quick look at the agenda screen (particularly if I recall an early meeting).
  5. If I remember, I’ll switch the phone to silent before I sit down to work. If not, and the notifications get annoying, I’ll remember then.
  6. If I get a phonecall whilst I’m working, I’ll pull out the phone, check who is calling, and slide to red (to reject the phonecall) if I don’t recognise the caller.
  7. When I take a break from work, and I’m not chatting to anyone, I’ll pull out the phone and have a quick look at Facebook, home email, etc.
  8. When I head home from work, the phone stays in my pocket. I’ll check the phone on getting home to see if I missed anything.

You might be wondering why I have a smartphone given I use it so little. Well first of all I do use it more than is implied here – particularly whilst travelling (having train timetables and maps in your pocket is really handy).

In terms of ethical production, not all smartphones are the same. There are even places which score phones based on the ethics of their production; there is even a smartphone whose whole purpose in existence is to be an ethically produced phone – the Fairphone.

So giving up your smartphone is the lazy way of ensuring you have an ethically produced phone that you don’t get bossed around by. No harm in being lazy here of course!

Sep 09 2017
 

I recently switched from Ubuntu to Fedora Core for a variety of reasons :-

  • For a later version of fwupd as I had some vulnerable wireless mice to update.
  • To have a look at what Wayland was like (mostly invisible although oddball Window Managers still only talk to X).
  • To have a look at what it’s like after all these years; RedHat was one of the early distributions I ran.

All is reasonable except for one thing. The software updates.

What is this obsession with restarting to perform software updates? Is the relevant developer a refugee from Windows?

Now don’t get me wrong; a restart is the most effective simple way to ensure that outdated versions are not in use, but restarting every time you perform an update seems excessive.

  • If you need to update the kernel for security reasons, a restart is reasonable if you don’t have “live upgrades”, but Fedora Core comes with a kernel that has that feature.
  • If you have a security update to a long-running process (such as Wayland or X), then you need to restart that process. In some cases you can restart a long-running process without notice; in others you will have to be disruptive, or ask someone to quit the long-running process.
  • If it isn’t a security update, you can simply wait until the user restarts the process.
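The second case above need not be guesswork: on Linux, a process still using a library that has since been replaced on disk shows up in its /proc/&lt;pid&gt;/maps with a "(deleted)" marker, which is roughly how tools such as needs-restarting decide what to restart. A minimal sketch of that check, run here against a sample maps dump (invented for illustration) since the real files need a live system:

```python
def libraries_needing_restart(maps_text):
    """Return the mapped files marked '(deleted)' in a /proc/<pid>/maps dump --
    i.e. libraries replaced on disk after the process loaded them."""
    deleted = set()
    for line in maps_text.splitlines():
        if line.endswith("(deleted)"):
            # The pathname is everything from the first '/' onward.
            path = line[line.index("/"):].rsplit(" (deleted)", 1)[0]
            deleted.add(path)
    return deleted

sample = """\
7f1c2e000000-7f1c2e1c0000 r-xp 00000000 fd:00 1234 /usr/lib64/libssl.so.1.1 (deleted)
7f1c2e400000-7f1c2e420000 r--p 00000000 fd:00 5678 /usr/lib64/libz.so.1
"""
print(libraries_needing_restart(sample))  # {'/usr/lib64/libssl.so.1.1'}
```

An updater armed with this information can restart just the affected processes, rather than rebooting the whole machine for every update.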

Overall, the update process need not be as disruptive as Fedora Core makes it. It is of course not the end of the world to force a reboot, but it is hardly a very graceful process and some (including me) will find it annoying enough to avoid Fedora Core.

Post Interference

Aug 27 2017
 

Every so often, somebody (or organisation) proclaims that this year is the year of Linux on the desktop. Given the number of times this has occurred, you would have thought that the Cassandras of the Linux world would stop trying to predict it. In fact I am not entirely sure what it is supposed to be – everyone using Linux on the desktop, or just some? And if it is just some people, how many?

It is essentially nonsense – if you use Linux on the desktop, every year is the year of Linux on the desktop; and if you do not, it isn’t.

Assuming you are someone who has more than two brain cells to rub together and are prepared to do some learning, it is perfectly possible to run Linux on the desktop. You can do pretty much everything with Linux that you can do with Windows. In fact the one area where Linux is traditionally weak – upgrading the firmware of third party devices (such as media players, or wireless mice) – is beginning to change with the LVFS and fwupd.

To give an example, I was recently upgrading some Logitech wireless mice to eliminate a serious security flaw, and I tried with Windows, OSX, and finally Linux. Both the Windows and OSX methods failed, whereas the Linux method just worked.

In fact even if the Windows method had worked, it would have been a lot more complex. I had to download the Logitech software (admittedly this step would probably be unnecessary if I was used to using the wireless mouse under Windows), know that a firmware upgrade was necessary, download the firmware upgrade, and finally load it into the upgrade tool.

Under Linux? Assuming I had been using some gooey tool like GNOME Software, it would have notified me that an upgrade was available and after a request would have upgraded it for me. I (of course) chose to do it the geeky way from the command-line, but even so running :-

# fwupdmgr refresh
# fwupdmgr update

… is a great deal simpler than the Windows way. And that is before you consider that with Windows, you need to download a firmware update tool for every device whereas the Linux way it is just one tool.

Of course in practice, the Linux method only works for a handful of devices – of the innumerable Linux machines I run only one has available updates for the desktop computer’s firmware (the Dell at work), and of the peripheral (or not so peripheral) devices only a tiny handful can be upgraded today.

But it is not inconceivable that in the not too distant future, the sensible way to upgrade the firmware of various devices will be to install Linux, and let it do it for you. Particularly if device manufacturers realise that by adopting Linux as the firmware upgrade delivery method, they can save time and effort.

“But I know Windows” – actually you know Windows 7, or Windows XP, or Windows 10; each of which is very different from each other. And whilst Linux has even more variability at first glance, there is actually more commonality between different versions of Linux. Or in other words, the effort of learning Linux in the first place is rewarded by less of a need to completely re-educate yourself every time you upgrade.

This is not intended as encouragement for you to switch to Linux (although if you are involved in IT you should at least be familiar with Linux), but intended as a criticism of the concept of a year of the Linux desktop. It isn’t useful, and what is worse it leads to the false impression of failure – if everyone is not using Linux on the desktop, then Linux has failed.

Linux on the desktop has not failed because I use it on the desktop.

Aug 26 2017
 

No. The title is just click-bait (which won’t accomplish much).

AMD Ryzen was interesting because it restored AMD’s competitiveness as compared to Intel for the non-enthusiast processor for desktops and laptops. Whereas AMD’s Epyc was interesting because it restored AMD’s competitiveness in the data centre. Both are good things because Intel has been rather slow at improving their processor over the last few years – enough that people are taking a serious look at a non-compatible architecture (the ARM which is found in your smartphone) in the data centre.

Threadripper itself is of interest to a relatively small number of people – those after a workstation-class processor to handle highly threaded workloads. That market was previously catered to by the Xeon processor, so although Threadripper looks expensive, it is in fact pretty cheap in comparison to Xeon processors. So ‘scientific’ workstations should become cheaper.

And the significant I/O advantage (64 PCIe lanes as opposed to a maximum of 44 for the X299 platform) would be useful for certain jobs. Such as medium-sized storage servers with lots of NVMe caching, or graphics-heavy display servers (room-sized virtual reality?).

But for gamers? Not so much. Almost no games use lots of threads (although it would be useful to change this), so the extra power of Threadripper will only get used by the other things that gamers do. Perhaps game streaming, and/or using the spare power to run virtualised storage servers.

