Can a programming bug kill you? It can in a connected vehicle
When it comes to connected vehicles, the numbers are not on our side. The average car is made up of some 30,000 parts, and connected vehicles need many of those parts to communicate with each other and with outside agents – and for that, they need millions of lines of code and terabytes of data.
Numbers of that size can be downright dangerous. The more code, the more bugs – and the more bugs, the more things that can go wrong, from errors that cause a program to malfunction, to hackers who exploit those errors to hijack a program. In a connected vehicle, the term “hijack” can be taken literally – with hackers exploiting programming errors to take over a vehicle’s steering or braking function.
That such a situation could develop is almost inevitable given the numbers. While the bug count in a program varies according to many different factors, the accepted industry average is between 15 and 50 bugs per 1,000 lines of code. Additionally, the more complex the code, the more likely it is to contain bugs. Each component in a connected vehicle could have millions of lines of code, and as all the parts have to work together to make the vehicle work, there is even more code to connect all the parts of the vehicle into a single unit. What is the likelihood of a bug cropping up under those circumstances? Pretty high, it would seem.
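A back-of-envelope calculation makes the point concrete. The sketch below applies the 15–50 bugs per 1,000 lines figure to a hypothetical codebase; the 100-million-line total is an illustrative assumption about a modern vehicle, not a measured number:

```python
# Back-of-envelope estimate of latent defects, using the industry range
# of 15-50 bugs per 1,000 lines of code (KLOC).

def estimated_bugs(lines_of_code: int, bugs_per_kloc: float) -> int:
    """Expected number of latent defects for a codebase of a given size."""
    return round(lines_of_code / 1_000 * bugs_per_kloc)

# Assumed total lines of code across all of a vehicle's components:
vehicle_loc = 100_000_000

low = estimated_bugs(vehicle_loc, 15)   # optimistic end of the range
high = estimated_bugs(vehicle_loc, 50)  # pessimistic end of the range

print(f"Estimated latent defects: {low:,} to {high:,}")
# Estimated latent defects: 1,500,000 to 5,000,000
```

Even at the optimistic end, that is over a million latent defects – and only a handful need to be exploitable to matter.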
If hackers are able to invade a database due to a programming bug, they may end up stealing sensitive data, such as credit card information. While that’s certainly undesirable, in the final analysis, it’s only money. But, if hackers take advantage of a bug in the code of a component of a connected vehicle, lives could be at risk depending on the exploit they carry out.
Of course, one would assume that the code would have been tested thoroughly before being installed on a vehicle that will be on the road – but then there are the heisenbugs that may show up, especially in situations that were not tested in the lab. NASA, for example, thoroughly tested its systems before dispatching rockets and probes into space – and yet, software bugs that were not anticipated have cost the agency hundreds of millions of dollars and extensive program delays over the years.
What if nothing untoward happens – if a connected vehicle makes it from point A to point B without issue? Does that mean we are on safe ground, that there are no bugs in the code that hackers could take advantage of? Maybe, or maybe not; the fact that something hasn’t happened yet doesn’t mean it won’t.
If there is an incident – if a connected vehicle traveling at sixty miles per hour crashes on the highway – it could be the doing of hackers, but it could also be the result of a dozen other things, including communications between the vehicle and a hacked server, a rogue program that was installed in a part built into the vehicle, and so on. It may even be a zero-day attack – something that no security system will be cognizant of. The point is, we don’t know if something is going to go wrong – and that is knowledge we would do well to acquire, as it could very well be a matter of life and death.
That exploitable bugs will crop up on occasion should be assumed. We can either hope and pray hackers don’t discover those bugs, or we can try to do something to protect ourselves. For the latter, we may want to consider fighting bugs not only by looking for them and correcting them, but also by examining their effects.
If, for example, a vehicle is supposed to turn left, and we see that the left turn is not executing properly, something has gone wrong somewhere – and the vehicle’s controllers (the driver in the case of a connected vehicle, the controlling agent in the case of an autonomous vehicle) can be alerted immediately that they need to stop the vehicle. With the anomaly caught and safety ensured, the underlying issue can be tracked down and fixed. For the people in the vehicle, though, the root cause is less relevant; what concerns them is getting to their destination safely, and anomaly detection can go a long way toward assuring that.
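The left-turn scenario can be sketched in a few lines: compare the command a controller issued with the behavior actually observed, and raise an alert on any mismatch, regardless of whether the cause is a bug, a hack, or a hardware fault. This is a minimal illustration; the function names and the tolerance value are assumptions, not any particular vendor’s implementation:

```python
# Minimal sketch of behavioral anomaly detection: flag a deviation
# between the commanded steering angle and the angle actually observed.
# The 2-degree tolerance is an illustrative assumption.

def detect_anomaly(commanded_angle: float,
                   observed_angle: float,
                   tolerance_deg: float = 2.0) -> bool:
    """Return True if observed behavior deviates from the command."""
    return abs(commanded_angle - observed_angle) > tolerance_deg

# A commanded left turn of -30 degrees that the wheels never execute:
if detect_anomaly(commanded_angle=-30.0, observed_angle=-1.5):
    print("ALERT: vehicle not executing commanded turn -- stop safely")
```

Note that the check says nothing about *why* the turn failed – it only establishes *that* it failed, which is exactly what matters for keeping the occupants safe in the moment.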
The main issue that manufacturers, programmers, and everyone else needs to be concerned with – safety – becomes the priority when we examine vehicle behavior, as opposed to trying to track down the source of a problem.
All of us strive for perfection, but attaining that perfection for connected vehicles is almost impossible given how complicated the systems are. If we want the people who ride in connected and autonomous vehicles to get to their destination safely, we need a better way of protecting them.
This post was also published on SC Magazine.