Returning to the VW story: the thing about the cars that passed the emissions tests by cheating is that they did exactly what they were supposed to do.

Presumably, some engineers were given clear parameters for what they needed to achieve: diesel engines that met certain regulations (to please regulators, so that the cars could actually be sold), hit certain numbers (to please the marketers who sell the cars to people who care about environmental figures), and met certain performance benchmarks (to please the potential owners who test drove the cars).

I'm guessing that the engineers realised that the problems they were out to solve weren't really the same problem, and that through software they could set up conditions to reflect each one. I can imagine how, from an engineer's point of view, given those particular problems to solve, you could consider it an elegant solution. (Particularly if you have the kind of mindset – which that kind of problem solving would require – that compartmentalises the ethical responsibility for the consequences of your work into a different compartment of the VW corporation. It's the kind of thing that can make great television.)

I wonder what the implications are from a regulatory point of view, though. It seems clear that the tests were faulty – if a car can behave differently in the tests from how it behaves on the road, then the tests aren't doing their job. Except that the nature of the tests has changed. It used to be about measuring an object – an object does what it does, and regulators performed physical measurements. Objects don't lie. Now the objects have behaviour – they do exactly what they are programmed to do. So now it's about testing what they are programmed to do.

Marcelo Rinesi of the IEET says:

Things now have software in them, and software encodes game-theoretical strategies as well as it encodes any other form of applied mathematics, and the temptation to teach products to lie strategically will be as impossible to resist for companies in the near future as it has been to VW, steep as their punishment seems to be. As it has always happened (and always will) in the area of financial fraud, they’ll just find ways to do it better.

So, does that mean that the way the measurements are taken needs to be improved? (Reflecting the reality of 21st century cars, bringing the tests to more real-world conditions.) Or does it mean that it isn't just the cars themselves that need to be measured, but that the software itself needs to be subject to testing?

This is a terrifying concept.

For anyone who has never been involved with software, that probably seems like a pretty innocuous statement; sure, just test the software. Wire it up to a monitor, get some geeks to have a look, make sure that there isn't anything like:

IF conditions = "testing" THEN
    LET FuelMix = "Clean";
ELSE
    LET FuelMix = "Dirty";
END IF

Obviously, that isn't even close to what you would be looking for in the real world.
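A real defeat device wouldn't label itself so conveniently. Reports on the VW case described code that inferred test conditions indirectly from sensor patterns – for instance, drive wheels turning while the steering wheel stays dead centre, which is normal on a dynamometer and rare on a real road. Here is a hypothetical Python sketch (the function names, thresholds, and signals are my own illustration, not VW's actual logic) of why such code is so hard to spot:

```python
def looks_like_dyno_test(speed_kmh, steering_angle_deg, elapsed_s):
    """Hypothetical heuristic: on a dynamometer the drive wheels spin
    while the steering wheel stays centred for minutes at a time -- a
    pattern that almost never occurs during real driving."""
    wheels_turning = speed_kmh > 10
    steering_idle = abs(steering_angle_deg) < 1.0
    sustained = elapsed_s > 60
    return wheels_turning and steering_idle and sustained

def choose_fuel_mix(speed_kmh, steering_angle_deg, elapsed_s):
    # Nothing here mentions "testing" or "cheating" -- the incriminating
    # branch is buried inside an innocent-looking sensor heuristic.
    if looks_like_dyno_test(speed_kmh, steering_angle_deg, elapsed_s):
        return "clean"        # full emissions controls engaged
    return "performance"      # controls relaxed for road driving
```

To an auditor reading the source, the first function could just as plausibly be a legitimate calibration or safety check; the deception only exists in the intent behind it.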

For one thing, anyone who knows anything about writing software knows that testing software is half of the challenge of making it work the way you want it to in the first place. Writing software that doesn't do what you don't want it to do is hard enough when you are actually writing it – dealing with software that deliberately does something it isn't supposed to do, and then hides it in the code, is the kind of thing that would be terrifying to have to find – even if you knew for a fact that it existed in the first place. (And that's before factoring in a fairly reasonable dose of paranoia – does the latest iPhone software update just happen to be slowing down what used to be a fast handset because of the cool new features, or is it to make the owner want to upgrade to the latest and even faster model?)

The other thing that you will know, if you know the software industry, is that many organisations will fight tooth and nail to stop anyone from being able to look at their code in the first place. Often this is for valid reasons (for example, software might incorporate third-party code and be restricted by a licensing deal that protects that code from being accessible to others who could steal it – costing the original developers future sales).

A Wired article from earlier this year explains how the software embedded in cars and tractors remains the property of the manufacturer – not the owner of the hardware that runs it – and the lengths (technical and legal) that those manufacturers will go to in order to prevent anyone seeing that software. Even when Microsoft was doing its best to argue in court that Windows wasn't breaking any US or European legislation, it still took years before it allowed government agencies to see the source code – and that was at a time when businesses would literally live or die depending on how Microsoft implemented APIs that it wasn't even really making public.1 In other words, even if the silly pseudocode above were in any way representative of what was hiding in the code of car computers, regulators probably wouldn't be able to see it anyway.

The days when, if you wanted to know how something worked, you could find out by carefully pulling it apart, are over. Even if it were possible to examine a microchip to discover what functions it processes, one of the consequences of the 'computerisation of everything'2 is the growing role that software plays in the basic workings of what used to be seen as physical things – add in an internet connection and have the software running on a remote server (where it can be updated, revised and refreshed at any time with no notice given) and you have an utterly opaque – and almost impossible to properly regulate – scenario.

  1. Wikipedia's "Criticism of Microsoft" page has plenty of information about the kind of activities that were going on at the time.

  2. Google "internet of things" for an idea of how far this is expected to spread.