To start off with a story: in 1962, NASA launched the Mariner 1 mission. The purpose of the 80-million-dollar project (approximately 630 million in today’s currency) was to conduct a fly-by of Venus to gather information. However, the probe never left Earth; shortly after launch, the rocket veered off course and was remotely detonated by NASA to prevent it from smashing back down on some unfortunate earthly landing site. The problem? The accidental omission of a single hyphen in the guidance system’s source code was found to be the culprit[1]. While thankfully there were no injuries, I cannot even begin to imagine the embarrassment of the programmer.

We all know by now the basics of negligence in tort law: a person is held only to the standard of the “reasonable person”, and is not expected to be superhuman in ability or foresight. Yet every one of us has departed from that standard at some point in our lives. We’ve all been negligent in some way; it was only good fortune that nothing came of it. A person who is texting while driving and accidentally runs a stop sign is guilty only of distracted driving and running the stop sign. However, if by sheer bad luck there happened to be a pedestrian crossing at that exact moment, the same person is guilty of so much more, even though both drivers were equally negligent. The fact that the only difference between the two scenarios is luck does not excuse the second driver; bad luck is no defence to an action in negligence.

However, I see that as future technology emerges, there is the potential for a serious collision between tort law and technological advancement. While it will probably happen in several different sectors, the best and most obvious example is that of self-driving vehicles. The established rules of the road are such that, if an accident occurs, in most cases someone will be at least partially at fault for deviating from the rules: someone was speeding, or not paying attention, or didn’t check their blind spot before merging. This isn’t to say that no-fault collisions don’t exist. A deer suddenly and unexpectedly jumping onto a highway can result in massive human and property damage, yet nobody is at fault.

I would also argue that there is an inherent emotional dimension to tort law, one that is heightened in vehicle accidents. Very often you have a plaintiff, wholly innocent of any wrongdoing, who has suffered a great tragedy. Death, brain damage, paralysis; if you’re ever having a bad day, a brief search of recent tort cases involving car accidents will put your problems into perspective. Emotion cries out for these victims to be compensated for the unfairness of life. While overall injury rates have consistently declined since 1994, in 2014 Canada suffered approximately 150,000 vehicle-related injuries, of which about 9,600 were “serious injuries” and 1,843 were fatal[2]. That equates to approximately 5 deaths and 26 serious injuries per day, every day. Each of those deaths was an individual.
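A quick back-of-envelope check of those per-day figures, using only the 2014 totals cited above:

```python
# Back-of-envelope check of the per-day figures, using the 2014
# Canadian totals cited in the text.
fatalities = 1843
serious_injuries = 9600

deaths_per_day = fatalities / 365
serious_per_day = serious_injuries / 365

print(round(deaths_per_day, 1))   # ~5.0 deaths per day
print(round(serious_per_day, 1))  # ~26.3 serious injuries per day
```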

However, as self-driving vehicles emerge, tort law may have to be tempered lest it have a systemic chilling effect on beneficial technological advancement. It is not yet known precisely how much safer self-driving vehicles will be compared to human drivers, but there can be no doubt that as the technology advances, AI-controlled vehicles will be much safer. They don’t get tired, or angry, or distracted. They are permanently vigilant and aware. There can also be no doubt, however, that accidents will continue to occur. Coding the artificial intelligence that drives these vehicles is enormously complex, and there will be coding errors. It isn’t a question of if, but of when. Most likely there is someone alive today, going about their daily routine, totally unaware that they will one day hold the title of first human killed by a self-driving vehicle’s coding error.

I would imagine the inevitable lawsuit will go something like this: the next of kin will hire a specialist to review the computer error. The error will be traced to the chunk of source code that caused the problem. It could be the accidental omission of a single hyphen, as in the Mariner 1 accident, but I would imagine a much more likely scenario is a very particular set of variables causing the AI to generate an unexpected and unintended reaction.
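To make that concrete, here is a purely hypothetical sketch (none of this is real autonomous-vehicle code, and the function names and numbers are invented for illustration) of how a one-character slip can behave correctly on almost every input and fail only under one particular combination of variables:

```python
# Hypothetical illustration: a one-character error that surfaces only
# under one exact combination of inputs. Invented for this example.

def braking_distance(speed_mps: float, deceleration: float = 6.0) -> float:
    """Metres needed to stop from speed_mps at a constant deceleration."""
    return speed_mps ** 2 / (2 * deceleration)

def should_brake(speed_mps: float, gap_m: float) -> bool:
    # Intended rule: brake whenever the gap is no larger than the stopping
    # distance. The '<' below should be '<=' -- a one-character error that
    # only matters when the gap EXACTLY equals the stopping distance.
    return gap_m < braking_distance(speed_mps)

# On almost every input the flawed rule behaves as intended...
print(should_brake(30.0, 50.0))   # True: gap far too small, so it brakes
print(should_brake(10.0, 50.0))   # False: gap comfortably large

# ...but at the exact boundary it silently says "don't brake".
print(braking_distance(30.0))     # 75.0 metres
print(should_brake(30.0, 75.0))   # False: the edge case testing missed
```

Every ordinary test drive and simulation would pass; only the one boundary case, perhaps reached years later on a real road, reveals the flaw.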

The problem, as I see it, is the danger of a judge viewing the source code error in isolation and deferring to the expert, who will no doubt say that it was an obvious overlooked error. If only the manufacturer had run more simulations, or had hired more people to review the code, then our poor victim would be alive. The coding was negligent; the designer must pay out millions for the error.

The chilling effect here comes from the fact that only a very few entities have the resources to create this technology, so there will probably be only a few different systems. Even if a single AI program is responsible for decreasing overall deaths by, say, 90%, that still means 10% of the deaths will continue to happen. By Canada’s 2014 statistics, that would mean about 184 deaths would still occur, and no doubt at least some of those deaths would be caused wholly or in part by imperfect code. So, while overall vehicle-related fatalities may drop dramatically, which everyone would agree is a universal good, the few remaining fatalities could result in huge lawsuits against a single AI developer. Even if the accident was primarily caused by a human, such as a negligent jaywalker stepping in front of a self-driving vehicle that swerves and crashes into the vehicle next to it, there will be a natural pull towards including the AI developer as a defendant, if only because they are the only potential defendant with deep enough pockets to properly compensate the victim.
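The 184 figure is simple arithmetic on the 2014 fatality total cited earlier, under the hypothetical 90% reduction:

```python
# Arithmetic behind the "about 184 deaths" figure, assuming the 2014
# Canadian total of 1,843 fatalities and a hypothetical 90% reduction.
fatalities_2014 = 1843
hypothetical_reduction = 0.90

remaining = fatalities_2014 * (1 - hypothetical_reduction)
print(round(remaining))  # 184
```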

However, I am not arguing for blanket protection of these companies against negligent programming, or that general tort principles ought to be altered. My argument is confined to the method by which judges should assess negligence in source code. A single chunk of code, viewed in total isolation, can look like negligence. With 20/20 hindsight, it can seem obvious how a very particular set of variables could cause a program to crash or react in an unintended manner, but it only becomes obvious after the crash has occurred.

I think there is a tendency to expect that computers ought to be flawless. Considering their vast capacity for calculation and their adherence to pure logic, it can seem entirely reasonable to expect perfection. If you need any further convincing on this point, just think of the last time you got frustrated with your computer. Never mind that we rarely (if ever) stop to consider just how magical computers are, and how much easier and richer they make our lives. Instead of being grateful during the 99% of the time they work properly and forgiving during the 1% when they don’t, we are indifferent during the 99% and angry and upset during the 1%.

Nobody intends to write bad code, but it happens. A system with hundreds of thousands of lines of flawless code can be brought down by the omission of a single hyphen. The overall point of this blog is that I am afraid a standard of the “reasonable robot” will develop, under which society unjustly demands perfection. But it must be remembered that behind each robot, behind each governing program, is a human programmer. Just as a reasonable person in tort need not have the “wisdom of Solomon”[3] to avoid liability, the same standard should apply to the reasonable programmer.

A flawed piece of source code should never be viewed in isolation. When dealing with the inevitable negligent-programming cases to come, defective code should be viewed in the context of the overall program and the risk-management methods employed during development. Errors are inevitable, and it is simply not possible to demand a flawless program. Another danger I see here is the unfortunate influence emotion has on tort cases. Most likely, the first negligent-programming cases will feature a wholly innocent plaintiff, either killed or very seriously (and permanently) injured, suing a mammoth and well-insured corporation. These initial cases will bring with them a strong emotional plea for the victim to be compensated, especially in the face of a single line of code that contains an obvious error.

The temptation will be to point to the error and say “guilty”, if only so that the victim can at least be compensated for all that has been taken from them. But doing so would set a precedent of demanding robotic perfection, which completely forgets that robots are programmed by humans, and tort law does not expect any human, even a highly skilled professional, to be flawless. The question is whether the flaw was truly the product of negligence, and that inquiry should take the judge away from the single error and onto the procedures in place that allowed it to occur. Even a massive flaw may not be the product of negligence if the AI developer put reasonable safeguards in place against such an occurrence.

However, accepting what I said above means accepting that a coding error can produce death or serious injury while still disallowing compensation in tort. I hope I’m not going to be one of those future victims. But we must keep in mind that, currently, we risk death or serious injury every time we go anywhere near a road, and that appears to have become an accepted part of our lives. As robots come to take a more active role in our lives, we must not let their artificial nature distract us from the fact that behind it all are human engineers who, as discussed above, are not held to a standard of perfection.

That being said, I still think the standard ought to be high. The serious risk of injury or death in AI programming demands nothing less, and if an AI developer produces a flawed product through inadequate testing, or by unreasonably rushing the product to market, then I have no objection to injured victims getting their just compensation, even if it means bankrupting the developer. But we must resist the temptation to demand perfection. Perhaps it will come one day, but any new and untested technology has teething problems in its early development stages that only become obvious with hindsight. If we are going to get past that stage and on to the land of perfection, we are going to have to tolerate some amount of tragedy in the short term.



[3] Stewart v Pettie, [1995] 1 SCR 131, at para 50.