Sep 09 2017
 

I recently switched from Ubuntu to Fedora Core for a variety of reasons :-

  • For a later version of fwupd as I had some vulnerable wireless mice to update.
  • To have a look at what Wayland was like (mostly invisible although oddball Window Managers still only talk to X).
  • To have a look at what it’s like after all these years; RedHat was one of the early distributions I ran.

All is reasonable except for one thing. The software updates.

What is this obsession with restarting to perform software updates? Is the relevant developer a refugee from Windows?

Now don’t get me wrong; a restart is the most effective simple way to ensure that outdated versions are not in use, but restarting every time you perform an update seems excessive.

  • If you need to update the kernel for security reasons, a restart is reasonable if you don’t have “live upgrades” but Fedora Core comes with a kernel that has that feature.
  • If you have a security update to a long-running process (such as Wayland or X), then you need to restart that process. In some cases you can restart a long-running process without notice; in others you will have to be disruptive, or ask someone to quit the long-running process.
  • If it isn’t a security update, you can simply wait until the user restarts the process.

Overall, the update process need not be as disruptive as Fedora Core makes it. It is of course not the end of the world to force a reboot, but it is hardly a very graceful process and some (including me) will find it annoying enough to avoid Fedora Core.
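As a very rough sketch of what a less disruptive routine could look like (assuming the needs-restarting plugin from dnf-plugins-core is installed; the exact packaging of these plugins varies between Fedora releases), something along these lines from the command-line :-

# dnf upgrade --refresh
# dnf needs-restarting
# dnf needs-restarting -r

… where the first command applies the updates in place, the second lists the running processes still using old libraries, and the last reports whether a full reboot is genuinely required. Only the cases in the list above should then call for anything more drastic.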

(Photo: Post Interference)

Sep 07 2017
 

I have been hard at work fixing all of the broken Photography posts on this blog – specifically fixing all the broken images. Go back far enough, and you may well come across photos you have not seen before.

As a bonus, I have also uploaded all the images to Eyeem where you can see all the images on one page without any annoying words.

(Photo: Walking The Beach)

Sep 07 2017
 

Well of course it is.

To give a bit of context, this came up in reaction to an article on Hollywood picking a director for a Star Wars film, and the possibility of the chosen director being someone other than a white male. Of course the comments kept bouncing back and forth between declaring the comment above to be racist and sexist, and claiming that it wasn’t.

Highlighting that Hollywood seems to have an exclusive club of candidates to direct big budget films, a club which excludes anyone who isn’t white and male, is perfectly reasonable. Or at the very least, pointing out that it turns a blind eye (as far as “industry recognition” like the Oscars goes) to female directors when they do get to direct (and there are plenty of talented female film directors). In fact there are plenty of talented non-white film directors too.

Which is a bit of a surprise – you would expect the famously liberal Hollywood to be gender and ethnic background blind when it came to picking talent. You might have assumed (as I did) that the career path for film directors favours rich white dudes – perhaps with “internships” (slavery for rich youngsters) amongst other things.

So it would appear that Hollywood is actually being sexist and racist in selecting film directors for major films. And it needs to fix this.

In other words the sentiment of the statement was anti-racist and anti-sexist.

But the way that comment was expressed was racist.

Any time you say something like “must choose ${ethnic group}” or “must not pick ${gender}” you are being racist and sexist. Even if it is in a good cause.

It is better to find another way of saying the same thing: “It would be a surprise to see Hollywood select a director from any background rather than from its usual pool of directors, a pool that gives the impression that Hollywood is racist and sexist.”

Apart from anything else, the comments following such an article might be a bit more interesting.

(Photo: Contemplating The Sea)

Aug 27 2017
 

Every so often, somebody (or organisation) proclaims that this year is the year of Linux on the desktop. Given the number of times this has occurred, you would have thought that the Cassandras of the Linux world would stop trying to predict it. In fact I am not entirely sure what it is supposed to be – everyone using Linux on the desktop, or just some? And if it is just some people, how many?

It is essentially nonsense – if you use Linux on the desktop, every year is the year of Linux on the desktop; and if you do not, it isn’t.

Assuming you are someone who has more than two brain cells to rub together and are prepared to do some learning, it is perfectly possible to run Linux on the desktop. You can do pretty much everything with Linux that you can do with Windows. In fact the one area where Linux is traditionally weak – upgrading the firmware of third-party devices (such as media players and wireless mice) – is beginning to change with the LVFS and fwupd.

To give an example, I was recently upgrading some Logitech wireless mice to eliminate a serious security flaw, and I tried with Windows, OSX, and finally Linux. Both the Windows and OSX methods failed, whereas the Linux method just worked.

In fact even if the Windows method had worked, it would have been a lot more complex. I had to download the Logitech software (admittedly this step would probably be unnecessary if I was used to using the wireless mouse under Windows), know that a firmware upgrade was necessary, download the firmware upgrade, and finally load it into the upgrade tool.

Under Linux? Assuming I had been using some gooey tool like GNOME Software, it would have notified me that an upgrade was available and after a request would have upgraded it for me. I (of course) chose to do it the geeky way from the command-line, but even so running :-

# fwupdmgr refresh
# fwupdmgr update

… is a great deal simpler than the Windows way. And that is before you consider that with Windows you need to download a firmware update tool for every device, whereas under Linux it is just the one tool.

Of course in practice, the Linux method only works for a handful of devices – of the innumerable Linux machines I run, only one (the Dell at work) has firmware updates available for the desktop computer itself, and of the peripheral (or not so peripheral) devices only a tiny handful can be upgraded today.
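If you are curious whether anything you own is covered, it is easy enough to check (again assuming a reasonably recent fwupd) :-

# fwupdmgr get-devices
# fwupdmgr get-updates

… where the first lists the devices fwupd knows how to talk to, and the second shows any firmware updates waiting for them.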

But it is not inconceivable that in the not too distant future, the sensible way to upgrade the firmware of various devices will be to install Linux, and let it do it for you. Particularly if device manufacturers realise that by adopting Linux as the firmware upgrade delivery method, they can save time and effort.

“But I know Windows” – actually you know Windows 7, or Windows XP, or Windows 10; each of which is very different from the others. And whilst Linux has even more variability at first glance, there is actually more commonality between different versions of Linux. Or in other words, the effort of learning Linux in the first place is rewarded by less of a need to completely re-educate yourself every time you upgrade.

This is not intended as encouragement for you to switch to Linux (although if you are involved in IT you should at least be familiar with Linux), but intended as a criticism of the concept of a year of the Linux desktop. It isn’t useful, and what is worse it leads to the false impression of failure – if everyone is not using Linux on the desktop, then Linux has failed.

Linux on the desktop has not failed because I use it on the desktop.

Aug 26 2017
 

No. The title is just click-bait (which won’t accomplish much).

AMD Ryzen was interesting because it restored AMD’s competitiveness with Intel in non-enthusiast processors for desktops and laptops. Whereas AMD’s Epyc was interesting because it restored AMD’s competitiveness in the data centre. Both are good things because Intel has been rather slow at improving its processors over the last few years – enough that people are taking a serious look at a non-compatible architecture (ARM, as found in your smartphone) in the data centre.

Threadripper itself is of interest to a relatively small number of people – those after a workstation-class processor to handle highly threaded workloads; a market that was previously catered to by the Xeon processor. So although Threadripper looks expensive, it is in fact pretty cheap in comparison to Xeon processors, and ‘scientific’ workstations should become cheaper.

And the significant advantage they have with I/O (64 PCIe lanes as opposed to a maximum of 44 for the X299 platform) would be useful for certain jobs. Such as medium-sized storage servers with lots of NVMe caching, or graphics-heavy display servers (room-sized virtual reality?).

But for gamers? Not so much. Almost no games use lots of threads (although it would be useful to change this), so the extra power of Threadripper will only get used by other things that gamers do. Perhaps game streaming and/or using the unused power to run virtualised storage servers.

Aug 19 2017
 

The simplistic recitation of what happened in Charlottesville last Friday was that a bunch of fascists organised a protest against the removal of a town statue of Robert E Lee and a counter-protest was organised by anti-fascists. The fascists had a perfect right to peacefully protest (although given their ideology, cringing in their basements in shame would be more appropriate), and the counter-protesters were almost inevitably present – arguably also with a right to be there (peacefully).

The protests turned violent, and on Saturday a fascist drove a car into a crowd of counter-protesters, killing one and injuring 19.

Who was to blame? Well before I add my opinion to the pile of opinions out there, let’s take a look at some of the others that have come out since the attack :-

  1. Trump initially sought to blame “all sides”, then went back on his word, and then rolled it forward again. Such decisiveness. But blaming “all sides”? So in other words, the victims of terrorism are to blame as well as the terrorists? You could be generous, and assume that he intended to blame all sides for the general violence, but not to call the attack on anti-fascists terrorism was unforgivable.
  2. Early on, some fascists even tried to claim that the terrorist attack was perpetrated by anti-fascists to blacken the name of fascism. Unfortunately I cannot find a source for this, although I recall it being mentioned (perhaps an entry on the Stormfront site which is currently unavailable to unregistered users). This was a fore-runner of the next part of the “blame game”.
  3. “But BLM/Antifa are terrorists too”. Victim-blaming; even if it were true (I’ll come back to that), the only terrorist attack at Charlottesville was perpetrated by a fascist with anti-fascists as the target. Besides which, the majority of the counter protesters were not members of BLM or Antifa; students, church groups, local residents, hell anyone with half a sense of decency could have been there opposing the fascists.
  4. The deceptive use of the “Alt-Left” label. There is no equivalent of the alt-right on the left; the left have a pretty consistent attitude towards racism. Using the “Alt-Left” label implies that the counter-protesters were members of the lunatic fringe of the left. For a start, whatever you think of the old hard-left (communists and the like), they certainly aren’t new or “alt” in any way. And secondly, many of the counter-protesters were certainly not part of the far left; hell, there were probably right-wingers amongst the counter-protesters (and I’ve got a low opinion of the mainstream right).

Variations on number 3 above have been common enough online that I have seen them multiple times in my Facebook feed (and elsewhere). Let me emphasise something I mentioned earlier – two wrongs don’t make a right, and there was no BLM/Antifa terrorism at Charlottesville.

Now onto my opinion about who was to blame.

As mentioned before, the only terrorist attack at Charlottesville was carried out by a neo-fascist, and the terrorist attack was the only reason why Charlottesville made a big news story. The counter-protesters were not involved in terrorism.

Now onto the violence. Determining blame here is tricky for several reasons :-

  1. You cannot tell from media reports who was to blame for crowd violence; in particular video footage can be very deceptive, especially once it is cut to “sex it up” for the news. When some bozo starts windmilling punches at the fascists, how do we know that he wasn’t hit by a stone thrown by the fascists just before? That could easily not be shown in any video footage. When police forces ask for everyone’s mobile phone video and pictures after a terrorist incident they do so for a reason – they want to see things from as many perspectives as possible.
  2. Reacting with violence to extreme provocation is wrong, but those going out of their way to provoke things are not entirely blameless. Having been on anti-fascist protests myself, I can say that fascists can be extremely intimidating and provoking.

Having said that, there is a school of thought that says that giving a fascist a good kicking is a job well done. Having recently seen a film of what racism seems to inevitably lead to, it is hard to condemn such an attitude :-

Watch that film, and dare say that Nazis deserve the protection of the law. At the very least, punching a Nazi is no crime (whatever the law may say).

I have previously used the generic term “fascist” to describe the protesters at Charlottesville, but in reality there was an alphabet soup of right-wing extremists – the KKK, white supremacists, neo-Nazis, and every other bunch of thugs that are collectively known as “alt-right”. Yes, I said thug. If you scratch the surface of any low-level fascist, you will find a young man who is into violence. What passes for their idiotic ideology is little more than an excuse to justify violence against certain groups.

If you look at listed terrorist attacks in the USA by ideology, 15 attacks have been by left-wing extremists since 1901; 51 have been by right-wing extremists (which excludes lynchings which would bring the figure up into thousands). So which group is the most violent?

Aug 13 2017
 

It wouldn’t surprise me if I have ranted about this before, but I just don’t understand how people decide that some animals are food, and others are “cute” and shouldn’t be harmed. In the latter case, there are all sorts of stories on Facebook (and presumably similar places elsewhere) about cruelty to “cute” animals.

Yet most of us ignore the cruelty to food animals, and indeed wild animals. Admittedly most of that cruelty happens behind closed doors with only the occasional peek behind the curtain.

But what really determines whether one species is looked upon as food and another is looked upon as a pet? It cannot be as simple as being cute is the deciding factor, or those of us seen as ugly would also be considered to be a food source.

You could argue that pet animals were formerly work animals of one kind or another, and that certainly applies to dogs and horses, but there are plenty of pet animals it doesn’t apply to – cats (admittedly cats were sometimes tolerated as pest control animals), hamsters, birds, tortoises, reptiles, etc. So that isn’t a good argument.

It is possible to argue that some animals – in particular dogs and horses – have a special place because our partnership with the animal is inherently linked to our survival. But even that doesn’t work – both horses and dogs are eaten all over the world (including Europe).

I have hunted the Internet for possible reasons why we should not eat pets, and whilst there are plenty of pages out there trying to rationalise why we should not, there is nothing that really makes sense. So it might as well be that pets are cute and food animals are not.

Essentially we have a non-rational position on eating pets, which is fine. But the rational position is to eat any animal you like the taste of, or to eat none.

Aug 09 2017
 

It is a bit of an exaggeration to proclaim the death of Youtube, but given the recent changes in how advertising revenue is shared out amongst content creators it is entirely possible. At least in the long term.

For those who have not been made aware, Google has changed how advertising revenue is shared out to content creators, which has resulted in many creators losing income; sometimes significant amounts. The intention appears to be to pay advertising revenue to those content creators that advertisers like, which sounds fair enough. But the unintended consequences :-

  1. New content creators will be discouraged because their advertising revenue is likely to be so low as to make it seem impossible to make money with youtube.
  2. Existing content creators who are not ridiculously popular will also be discouraged, and are likely to look for alternatives to youtube that will maintain their income.
  3. Content creators will be encouraged to make middle-of-the-road content that nobody finds offensive, advertisers like, and is popular with the overwhelming majority; in other words just like ordinary TV. Essentially this discourages the kind of content that makes youtube interesting (or at least not as boring as broadcast TV).

Now would be a great time for a competitor to jump in and encourage content creators to jump ship, with a revenue payout mechanism that rewards creative content producers – the small ones and the innovative ones. Yes, this will mean the larger content creators lose out, but perhaps they can afford to.
