Oct 10 2020

One of the big names in the open-source world – Eric Raymond – has declared that Windows will soon be effectively a Linux distribution. That seems like a ridiculous notion, except that technically it might make a lot of sense.

How?

It seems impossible for Microsoft to replace Windows with Linux, but actually it could be done. Windows itself consists of a bunch of software applications that call the Windows APIs, which in turn make calls to the legacy NT kernel. If all that software were written cleanly (it won’t be, but bear with me), it should be possible to modify the Linux kernel, the Windows API layer, or both, to allow Windows software to run natively.

Impossible? Nope – it has already been done to a certain extent – Wine and Proton allow a considerable amount of Windows software (and games!) to run under Linux.

Why?

So it’s not impossible, but surely it is a lot of work. So why?

Microsoft has a bit of a problem – they don’t make a huge amount of money selling the Windows operating system, and maintaining it is hugely expensive. All those security fixes, all those bug fixes, and all those new features they want to introduce.

Now most of this is done to the “userland” rather than the kernel itself, but the kernel does still need to be maintained. But what if you could use the Linux kernel and get some level of maintenance supplied by those not employed by Microsoft?

Would that save Microsoft money? It seems quite possible, and you can bet someone in Microsoft has estimated whether it would or not.

Will It Happen?

There are those who point to certain actions by Microsoft – the Windows Subsystem for Linux, the Edge browser for Linux, the rumour of an Office build for Linux, etc. – as indicators that Microsoft is planning this.

I think they’re wrong, to the extent that those actions don’t say whether Microsoft is planning to make Windows a Linux distribution or not. There are plenty of reasons why Microsoft might release Linux software, not least that they almost certainly have developers who believe that porting software is a good way of finding bugs.

The real answer is that the only people who know are inside Microsoft.

The Join
Sep 24 2020

I like screenshots (the graphical kind) – I make them all the time for documentary purposes. But there is one kind of screenshot that makes me boil :-

Screenshots taken for fault finding which turn textual information into graphical information. How on earth am I supposed to be able to do anything with any of those IPv6 addresses without typing them in and risking making a typo?

And I’m a good typist – I’ve been known to freak people out by carrying on typing when they arrive at my desk whilst talking to them. But certain kinds of information – such as network addresses (whether MAC, IPv4, or IPv6) – are tricky to get right and a simple off-by-one error can dramatically influence the diagnostic results.

Now don’t get me wrong – I’m not expecting the average person to stop using screenshots when reporting faults. Hell, it’s better than nothing!

But there are IT support staff who don’t do textual cut&paste!

Aug 13 2020

Working from home (henceforth “WfH”) has cropped up in my Twitter feed lately and this is my “response” to some of the issues raised.

Now don’t get me wrong – there are all sorts of issues related to WfH – some people can’t, some people don’t like it, companies are getting offices for “free”, and some companies don’t realise that they need to provide equipment, or that health and safety requirements apply to the home worker too.

And probably a whole lot more.

But some of the complaints seem to be coming from people who have never even looked at WfH advice, or who have ignored that advice.

If your work life and your home life seem to be merging, do something about it. Clearly distinguish between work time and home time with a “going to work ritual” and a “coming home ritual”. It doesn’t matter what they are as long as they clearly mark the start and end of the working day.

For example, I always take a morning walk to start the working day, and make a ceremonial cup of coffee at the end (I don’t usually drink coffee whilst working or I end up fizzing).

Find yourself slogging away at the computer non-stop? Well don’t do that then. You’re supposed to take a break away from the computer regularly anyway, so do so. Get up and wander around a bit – make a coffee, look out the front window to see if it’s raining, check the postbox, do some stretching, etc.

Stuck in non-stop meetings? Call a comfort break every hour then – even if you don’t need a pee. Do you really care if your co-workers think you have a weak bladder? Especially when they’re more likely to think you’re a hero for giving them an excuse for a comfort break.

Missing out on the social life of the office? Set up social meetings then – perhaps for lunchtimes when you can eat your meals “together”.

Lastly, ergonomics. That laptop you took back home with you in the spring isn’t the right equipment for a long-term workstation. Get yourself a decent desk, chair, monitor, external keyboard and mouse. That sounds expensive, and yes your employer should (at the very least) be helping out, but it needn’t be that expensive.

Into The Water; Stillness and Motion
Jul 11 2020

So I am currently messing around with a tiling window manager on my laptop – I prefer tiling window managers in general (I use Awesome on my main desktops). These are (in general) not “desktop environments” but just manage windows (and sometimes a “status bar”).

As it happens the window manager I’m messing with doesn’t come as part of a distribution package with a pre-prepared file for GDM3 to use. So I created a ~/.xsession file – something that has worked since display managers first arrived.
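For reference, a ~/.xsession file is just a shell script that the display manager runs at login. A minimal sketch – assuming the Awesome window manager I mentioned above; substitute whatever you use :-

```shell
#!/bin/sh
# ~/.xsession -- run by the display manager when this session is chosen.
# Set up the session environment, then exec the window manager so it
# becomes the session's controlling process.
[ -f "$HOME/.Xresources" ] && xrdb -merge "$HOME/.Xresources"
exec awesome
```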

Didn’t work.

Turns out that I need to “hack” GDM3 to make a long-standard bit of functionality functional again. As an aside (and especially to the GNOME people), all you had to do to keep this functional was detect whether someone had a ~/.xsession file and then offer that up as a menu option. Not that difficult to do, and even if it isn’t your preferred way of doing things, it’s a nice thing to do for us old-timers.

Anyway, to restore this functionality all it took was to create a file in /usr/share/xsessions/ called xsession.desktop with the following contents :-

[Desktop Entry]
Name=XSession
Comment=This session uses the custom xsession file
Exec=/etc/X11/Xsession
Type=Application
DesktopNames=GNOME-Flashback;GNOME;
X-Ubuntu-Gettext-Domain=gnome-flashback

Dead simple.

And yes I stole this and adapted it myself – I’m putting this up here so that I know where to look when I need it again.

Jun 27 2020

So Apple has announced that it is replacing Intel processors with ARM processors in its Mac machines. And as a result we’re going to be plagued with awful puns endlessly until we get bored of the discussion. Sorry about that!

This is hardly unexpected – Apple has been using ARM-based processors in its iThingies for years now, and this is not the first time they have changed processor architectures for the Mac. Apple started with the Motorola 68000, switched to the PowerPC architecture (developed with IBM and Motorola), and then switched to Intel processors.

So they have a history of changing processor architectures, and know how to do it. We remember the problems, but it is actually quite an accomplishment to take a macOS binary compiled for the Power architecture and run it on an Intel processor. It is analogous to taking a monolingual Spanish speaker, providing them with a smartphone-based translator, and dropping them into an English city.

So running Intel-binary macOS applications on an ARM-based system will usually work. There’ll be corner cases that do not, of course, but these are likely to be relatively rare.

But what about performance? On a theoretical level, emulating a different processor architecture is always going to be slower, but in practice you probably won’t notice.

First of all, most macOS applications consist of a relatively small wrapper around Apple-provided code libraries (although that “wrapper” is the important bit). For example, the user interface of any application is going to be mostly Apple code provided by the base operating system – so the user interface is going to feel as snappy as any native ARM macOS application.

Secondly, Apple knows that the performance of macOS applications originally compiled for Intel is important and has Rosetta 2 to “translate” applications into instructions for the ARM processors. This will probably work better than the doom-sayers expect, but it will never be as fast as natively compiled code.

But it will be good enough, especially as most major applications will be compiled natively for ARM relatively quickly.

But there is another aspect of performance – are ARM processors fast enough compared with Intel processors? Well, the world’s fastest supercomputer runs on ARM processors, although Intel fanboys will quite rightly point out that a supercomputer is a special case and that a single Intel core will outperform a single ARM core.

Except that, with the exception of games and specialised applications that have not been optimised for parallel processing, more cores beat faster single cores.
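That is easy to demonstrate for embarrassingly parallel work. A rough sketch using gzip and xargs -P – the file sizes and job count are arbitrary, and how big the gap is depends on how many cores you actually have :-

```shell
#!/bin/sh
# Compress four files one at a time, then four at once with xargs -P,
# timing each approach with GNU date's nanosecond counter.
tmp=$(mktemp -d)
for n in 1 2 3 4; do
    head -c 2000000 /dev/urandom > "$tmp/file$n"
done

start=$(date +%s%N)
for n in 1 2 3 4; do gzip -k "$tmp/file$n"; done
echo "serial:   $(( ($(date +%s%N) - start) / 1000000 )) ms"

rm -f "$tmp"/*.gz
start=$(date +%s%N)
printf '%s\n' "$tmp"/file1 "$tmp"/file2 "$tmp"/file3 "$tmp"/file4 \
    | xargs -P 4 -n 1 gzip -k
echo "parallel: $(( ($(date +%s%N) - start) / 1000000 )) ms"

rm -rf "$tmp"
```

On a single-core machine the two times will be much the same; give it four cores and the parallel run pulls well ahead.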

And a single ARM core will beat a single Intel core if the latter is thermally throttled. And thermals have been holding back the performance of Apple laptops for quite a while now.

Lastly, Apple knows that ARM processors are slower than Intel processors in single core performance and is likely pushing ARM and themselves to solve this. It isn’t rocket science (if anything it’s thermals), and both have likely been working on this problem in the background for a while.

Most of us don’t really need ultimate processor speed; for most tasks merely the appearance of speed is sufficient – web pages loading snappily, videos playing silkily, etc.

Ultimately if you happen to run some heavy-processing application (you will know if you do) whose performance is critical to your work, benchmark it. And keep benchmarking it if the ARM-based performance isn’t all that good to start with.
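One low-tech way to do that benchmarking is a repeat-and-average timing loop. A sketch for a POSIX-ish shell with GNU date – “workload” here is a placeholder for whatever command actually matters to you :-

```shell
#!/bin/sh
# Crude wall-clock benchmark: run the workload five times and report the
# average. Swap the body of workload() for the real command you care about.
workload() {
    # placeholder: some CPU-bound busywork
    i=0
    while [ "$i" -lt 100000 ]; do i=$((i + 1)); done
}

start=$(date +%s%N)          # nanoseconds since the epoch (GNU date)
for run in 1 2 3 4 5; do
    workload
done
end=$(date +%s%N)

echo "average ms per run: $(( (end - start) / 5 / 1000000 ))"
```

Run it before and after any native ARM build of the application appears, and you have actual numbers rather than forum folklore.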

And most everyday tasks can be performed fine with a relatively modest modern processor and/or can be accelerated with specialised “co-processors”. For example, Apple’s Mac Pro has an optional accelerator card that offloads video encoding and makes it much faster than it would otherwise be.

Apple has a “slide” :-

That implies that their “Apple silicon” processors will contain not just the ordinary processor cores but also specialised accelerators to improve performance.
