May 14 2011

This is a note for my own future sanity (for when I start using IPv6 and want this enabled again), even though this information is widely available around the network. If you do not know why you would want to turn off IPv6 when you are almost certainly not using it, you probably want to do it anyway.

Hint: You may have a globally reachable IPv6 address on your machine that bypasses your firewall. And if that doesn’t worry you, it should!

Anyway, to turn it off run regedit in your preferred manner, and create the following DWORD value :-

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\tcpip6\Parameters\DisabledComponents

Set the value to FF (in hexadecimal). And reboot your machine.
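
If you would rather script it than click through regedit, the same change can be made with a few lines of Python using the standard winreg module. This is just a sketch of the approach – run it from an elevated (Administrator) prompt, and reboot as before :-

import winreg

# Create (or open) the key and set the DisabledComponents DWORD
# described above. Requires Administrator rights; reboot afterwards.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\tcpip6\Parameters"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "DisabledComponents", 0, winreg.REG_DWORD, 0xFF)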

So far this has worked with :-

  1. Windows Server 2008R2
  2. Windows 7 (Ultimate)
May 05 2011

For my own future reference …

Today I encountered an interesting little issue where I could not kill a running process with SIGABRT to make it produce a core dump, because the process had a core dump size limit of 0. Try as I might, I could not find a way to change that process’s core dump limit.

Turns out there is another way of tackling the problem, which is to use gdb to generate a core image :-

gdb
(gdb) attach PID
(gdb) gcore /var/tmp/core.PID

There is of course the gcore shell script wrapper for this, but that may not work if the working directory of the process no longer exists.
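
If this comes up often enough to script, gdb can also be driven non-interactively. Here is a minimal Python wrapper sketch – the batch-mode gdb options are real, but the wrapper itself is just an illustration :-

import subprocess
import sys

def dump_core(pid: int, path: str) -> None:
    # Attach to the process in batch mode, write a core image, and detach.
    # Passing an absolute path avoids the missing-working-directory problem
    # that trips up the gcore wrapper script.
    subprocess.run(
        ["gdb", "--batch", "-p", str(pid), "-ex", f"gcore {path}"],
        check=True,
    )

if __name__ == "__main__":
    pid = int(sys.argv[1])
    dump_core(pid, f"/var/tmp/core.{pid}")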

Apr 15 2011

I recently read some of the papers linked to from Andrew Cormack’s blog entry on the legal dangers of cloud computing, which made for interesting reading. And caused me to do some thinking. Whilst the legal aspects of cloud computing are complex and need to be examined (it would make things a great deal easier if there were an “Internet Nation” with its own laws), one of the dangers most obvious to me is an old danger to corporate computing with a cloud computing twist.

The old danger itself is what happens when non-IT specialists set up their own servers. Such servers are rarely physically secured properly (allowing data to be stolen), are often poorly backed up, and are sometimes even set up on old retired desktop machines. The dangers are obvious, although those who set them up are rarely aware that installing a server is only a tiny part of the work involved in maintaining a service.

Cloud computing offers similar dangers. An organisation that signs up to a cloud-based service is almost certainly going to get a suitable contract that covers many possible concerns, but an individual within that organisation may sign up to a cloud service under the default terms of service aimed at the consumer. Some of the dangers are :-

  1. If that individual makes use of their cloud service in a way that is important to the organisation, how do those responsible for IT services assess the risk when they are not even aware that it is being used?
  2. Does that cloud service offer a service level agreement sufficient to protect the organisation? Most consumer-grade cloud services can withdraw the service or change its terms without notice at any time. They also rarely commit to protecting any data held in the cloud, or offer any guarantees of availability. Or confidentiality.
  3. A consumer using a cloud service is protected to some extent by consumer law. An individual within an organisation using a cloud service for their work may well not be protected at all. Organisations are usually protected by contract law – when a contract exists!

Apr 14 2011

This is one of those things that I was under the impression was widely understood (at least amongst a certain specialist population of IT people), but apparently not. As anyone who has ever paid extra for a static IP address knows, a network block has some notional monetary value. To give you an idea of how much, a quick search shows that a certain ISP (it doesn’t matter which one) charges $2.50 per month for a static IP address.

That scales up to a value of roughly $640 per month for a /24 network block, $164,000 for a /16 network block, and $42 million for a /8 network block. These values are of course wildly unrealistic given that network blocks can’t be sold (or at least not usually, although I do know people who have sold them). But let’s assume they do have a monetary value – after all, with the exhaustion of IPv4 addresses it is not impossible that network blocks could be traded.
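
To check the arithmetic, a quick Python sketch using the $2.50 per-address price quoted above (purely illustrative – a real market would price blocks on more than a per-address basis) :-

PRICE_PER_IP = 2.50  # dollars per month, from the ISP price quoted above

for prefix in (24, 16, 8):
    addresses = 2 ** (32 - prefix)  # total addresses in the block
    value = addresses * PRICE_PER_IP
    print(f"/{prefix}: {addresses:>8} addresses -> ${value:,.0f} per month")

Which prints $640 for the /24, $163,840 for the /16, and $41,943,040 for the /8 – the rounded figures above.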

Physical objects are subject to depreciation to represent their declining value to the organisation – a 10 year old server may eventually be of interest to a museum, but an organisation is likely to realise that it makes more sense to replace it.

Network blocks are also subject to depreciation, although it is not time-dependent; it depends on what use is made of the block. If we assume that network block A has been assigned to a bunch of unrepentant scamming spammers, what is likely to happen? Well, as spam floods out of their networks and servers, network administrators and system administrators will start to block addresses within network block A.

Some of the blocklists are collectively run, but some are run by individual organisations. In the latter case you cannot ensure that the entries will ever be removed. As a network block gradually acquires more and more entries in numerous blocklists around the world, it becomes of less use to those who want to use it. It decreases in value.
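
Checking whether an address has acquired such an entry is easy enough to do yourself. Here is a hedged Python sketch querying one well-known DNS blocklist (the zone name is real, but check the list’s usage policy before querying it in bulk) :-

import socket

def dnsbl_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    # DNSBLs are queried by reversing the IPv4 octets and appending the
    # zone name; any A record in the reply means the address is listed.
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

# 127.0.0.2 is the conventional always-listed test address.
print(dnsbl_listed("127.0.0.2"))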

Similarly, when a network block (let’s call it “B”) is used for a collection of workstations run by users whose interest does not extend to keeping their machines secure, it will become populated with machines infected with various forms of malware. As such, it is also subject to being cast into the blocklists of the world. In most cases the users will not notice, but if that network block ever gets reallocated to servers, those servers will suffer problems caused by the historical entries in blocklists.

So each malware infection a machine is subject to has a cost associated with it – it has decreased the value of the network address it uses by a tiny amount. Over time and with enough long-lived malware infections, it is possible that a network block will have a much lower value than an unused network block.

Mar 28 2011

Today I hear the Apple iPhone has been bitten by yet another bug causing alarms to go off at the wrong time. This is hardly the first time that Apple has had a problem with its iOS Clock application. And every time, Apple rushes out a “fix” that supposedly stops the problem.

It’s now blatantly obvious that Apple is rushing out “workarounds” and not spending any time on proper fixes here. I mean come on guys, a software clock is hardly rocket science. You shouldn’t be having multiple related problems like this.

What is almost certainly happening here, is that Apple management are accepting quick fixes from the engineers, but ignoring their requests to spend more resources on properly fixing the application. Odd as it may seem, the Clock application was probably originally written by one of Apple’s least experienced engineers – it is the kind of application farmed off to the new guy who has just arrived from University.

Now that is usually fine – Notes works well enough – but in some cases you end up with an application that is riddled with inexplicable bugs. And Clock’s time-related bugs are inexplicable in the sense that Clock should be using APIs for this that are ancient and robust in the extreme. This sort of problem is commonly found in the kind of code that is overly complex, inscrutable, and makes far too little use of APIs.
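
To illustrate the kind of bug that robust time APIs prevent – this is a hedged Python sketch, emphatically not Apple’s code, and the timezone and dates are just an example – consider an alarm set for 7am the day before a daylight-saving transition :-

from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/New_York")
alarm = datetime(2011, 3, 12, 7, 0, tzinfo=tz)  # day before US DST starts

# Naive approach: "tomorrow" is 24 hours of absolute time later.
naive = datetime.fromtimestamp(alarm.timestamp() + 24 * 3600, tz)

# Robust approach: calendar-aware arithmetic on the local wall-clock time.
robust = (alarm.replace(tzinfo=None) + timedelta(days=1)).replace(tzinfo=tz)

print(naive)   # 2011-03-13 08:00 -- the alarm fires an hour late
print(robust)  # 2011-03-13 07:00 -- the same wall-clock time as intended

The naive version is exactly the sort of hand-rolled date arithmetic that already exists, done properly, in the system libraries.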

What Apple’s engineers have probably done is ask for time to ‘refactor’ the code. What this means is basically :-

  1. Ripping out code that re-implements functions already provided by a library somewhere. Novice programmers often write code for a function that has already been written, and the programmers behind the library version usually have a greater incentive to get their code right.
  2. Ripping out and replacing the worst of the inscrutable code.
  3. Shuffling around and improving the documentation.

Unfortunately when an engineer mentions the word “refactor”, poor managers think “unproductive” (or in the worst case don’t understand and don’t ask). You wouldn’t have thought that Apple was riddled with poor managers in charge of their software engineers, but perhaps they are. This is a really bad sign for Apple (and Apple customers) – all of their products rely on good software engineering, and if they can’t get a Clock application right, you have to wonder how soon the rest of their code will collapse around our ears.

Apple – it’s time to do something serious. The Clock is ticking …

(sorry)