Decisions, Deaths, and Self-Driving Cars…

At some point in the very near future, an accident will take place involving a self-driving car, in which –– with clear purpose, though with no intended malice –– the vehicle’s on-board processors, software, and networked guidance will decide to kill someone.

Or kill an entire carload of people.

Or wipe out a few pedestrians making their way across a crosswalk.

And it will make this decision based upon logic, circumstances, the laws of physics, and an incredibly complex set of algorithms.

Please understand, this is not a question of “if.”

This is a question of “when.”

And when this accident takes place –– when an autonomous vehicle makes the logical decision to kill its own passengers, or a pedestrian, or someone in an oncoming car –– the repercussions of this accident (and the lawsuits that follow) will completely reshape the laws of this nation as they apply to personal injury, liability, and responsibility.

At the same time, the bedrock principles of cause and intent will soon share equal weight in the courtroom with examinations of process, purpose, and potential. More specifically: what underlying decision-making process is used by an autonomous vehicle when it purposely saves a life (or lives) while extinguishing others, and what would have been the future potential of the life (or lives) extinguished?

And all of this hinges on questions of liability and responsibility while, at the same time, blurring the distinctions between legal liability and moral responsibility.

Who (or what) bears the blame for deaths that occur when a self-driving car makes the logic-driven decision to kill someone?

Will car manufacturers shoulder most of the blame?

What about the software development teams who wrote the vehicle’s autonomous guidance code?

What about the in-house ethics teams who –– as a committee –– create decision trees outlining which deaths are morally acceptable, which are totally unacceptable, and which occupy an amorphous gray area?
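To make the idea of such a decision tree concrete, here is a deliberately oversimplified, purely hypothetical sketch. Every field, category, and rule in it is invented for illustration; no manufacturer has published its actual logic, and a real system would weigh far more factors than these.

```python
# Hypothetical sketch: a toy "ethics decision tree" that sorts possible
# crash outcomes into the three buckets described above. The scenario
# fields and rules are invented for illustration only; they do not
# reflect any real manufacturer's decision logic.

from dataclasses import dataclass


@dataclass
class Outcome:
    passengers_harmed: int   # occupants of the autonomous vehicle
    pedestrians_harmed: int  # people outside the vehicle
    avoidable: bool          # could braking/steering have prevented all harm?


def classify(outcome: Outcome) -> str:
    """Return 'acceptable', 'unacceptable', or 'gray area' for one outcome."""
    if outcome.avoidable and (outcome.passengers_harmed or outcome.pedestrians_harmed):
        # Any harm the car could have avoided entirely is ruled out.
        return "unacceptable"
    if outcome.passengers_harmed == 0 and outcome.pedestrians_harmed == 0:
        return "acceptable"
    # Unavoidable harm: the committee's hardest calls live here.
    return "gray area"


if __name__ == "__main__":
    scenarios = [
        Outcome(passengers_harmed=0, pedestrians_harmed=0, avoidable=False),
        Outcome(passengers_harmed=2, pedestrians_harmed=0, avoidable=False),
        Outcome(passengers_harmed=0, pedestrians_harmed=1, avoidable=True),
    ]
    for s in scenarios:
        print(s, "->", classify(s))
```

Even a trivial sketch like this makes the legal problem obvious: someone, sitting in a conference room, wrote the rule that decides whose harm lands in which bucket.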

And what about hardware? Will the original manufacturers of the various LIDAR and camera systems that give self-driving cars their “vision” also be brought in on the inevitable lawsuits?

Lastly, what about the actual owner of the car? What responsibility will they bear (provided they weren’t actually killed by their own car) when these logic-, software-, and algorithm-driven deaths take place?

Given the pace of technological change –– and the acceptance of more and more semi-autonomous (and soon, fully autonomous) vehicles on our roadways –– these questions of liability and responsibility will soon be asked in a court of law.

Again, it’s just a matter of time before an autonomous vehicle makes the decision to kill someone.

Is our legal system ready to deal with the repercussions of the car’s decision?
