
Decisions, Deaths, and Self-Driving Cars…

At some point in the very near future, an accident will take place with a self-driving car, where –– with clear purpose, though with no intended malice –– the on-board processors, software, and networked guidance of the vehicle will decide to kill someone.

Or kill an entire carload of people.

Or wipe out a few pedestrians making their way across a crosswalk.

And it will make this decision based upon logic, circumstances, the laws of physics, and an incredibly complex set of algorithms.

Please understand, this is not a question of “if.”

This is a question of “when.”

And when this accident takes place –– when an autonomous vehicle makes the logical decision to kill its own passengers, or a pedestrian, or someone in an oncoming car –– the repercussions of this accident (and the lawsuits that follow) will completely reshape the laws of this nation as they apply to personal injury, liability, and responsibility.

At the same time, the bedrock principles of cause and intent will soon share equal weight in the courtroom with examinations of process, purpose, and potential. More specifically: what underlying decision-making process is used by an autonomous vehicle when it purposely saves a life (or lives) while also extinguishing others, and what would be the future potential of the life (or lives) extinguished.

And all of this hinges on questions of liability and responsibility while, at the same time, fuzzying up the distinctions between legal liability and moral responsibility.

Who (or what) bears the blame for deaths that occur when a self-driving car makes the logic-driven decision to kill someone?

Will car manufacturers shoulder most of the blame?

What about the software development teams who put together the autonomous car guidance coding?

What about the in-house ethics teams who –– as a committee –– create decision trees, outlining which deaths are morally acceptable, which deaths are totally unacceptable, and what sort of deaths occupy an amorphous gray area? (A toy sketch of what such a tree might boil down to in code follows these questions.)

And what about hardware? Will the original manufacturers of the various LIDAR and camera systems, providing self-driving cars with “vision,” also be brought in on the inevitable lawsuits?

Lastly, what about the actual owner of the car? What responsibility will they bear (provided they weren’t actually killed by their own car) when logic/software/algorithm-driven deaths take place?
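To make the ethics-committee question above a bit more concrete, here is a minimal, purely hypothetical sketch (in Python) of the kind of “least bad outcome” rule that such a decision tree might ultimately reduce to in software. Every class name, field, and scoring rule below is invented for illustration; no manufacturer has published its actual decision logic, and a real system would weigh far more factors than this.

```python
# Purely hypothetical illustration: not any manufacturer's actual logic.
# A toy version of the "which outcome is least bad?" rule that an
# ethics committee's decision tree might ultimately be reduced to in code.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Outcome:
    """One possible result of an unavoidable-collision maneuver."""
    description: str
    expected_fatalities: int
    expected_injuries: int
    harms_own_passengers: bool  # does this maneuver endanger the car's occupants?


def harm_score(outcome: Outcome) -> Tuple[int, int, bool]:
    """Rank outcomes: fewer expected fatalities first, then fewer injuries.

    The final tie-breaker (whether the car's own passengers are harmed) is
    exactly the sort of committee-made value judgment the post describes,
    and exactly the sort of choice a court may one day scrutinize.
    """
    return (
        outcome.expected_fatalities,
        outcome.expected_injuries,
        outcome.harms_own_passengers,  # False sorts before True, so ties favor protecting the car's own occupants
    )


def choose_maneuver(outcomes: List[Outcome]) -> Outcome:
    """Return the 'least bad' outcome under this toy scoring rule."""
    return min(outcomes, key=harm_score)


if __name__ == "__main__":
    options = [
        Outcome("swerve into barrier", 1, 0, harms_own_passengers=True),
        Outcome("brake hard in lane", 0, 2, harms_own_passengers=False),
        Outcome("swerve toward crosswalk", 2, 0, harms_own_passengers=False),
    ]
    print(choose_maneuver(options).description)  # prints: brake hard in lane
```

The details are made up, but the underlying point is not: somewhere in the software, human beings have to write down rules like these, and every threshold and tie-breaker in that code represents a deliberate human judgment that the lawsuits described above will put under a microscope.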

Given the pace of technological change –– and the acceptance of more and more semi-autonomous (and soon, fully autonomous) vehicles on our roadways –– these questions of liability and responsibility will soon be asked in a court of law.

Again, it’s just a matter of time before an autonomous vehicle makes the decision to kill someone.

Is our legal system ready to deal with the repercussions of the car’s decision?


Change is coming and –– brace yourself –– it’s coming fast

It might seem more than a bit obvious to say this (and forgive me if it comes across as sounding a bit clichéd), but here in the 21st Century we are living through a period of profound social, financial, and technological transitions, and it’s all too easy to underestimate just how much change is taking place in the world –– not because the change is gradual, but because the change is fast, fluid, and next-to-impossible to keep up with.

One major technological shift that will bring about –– and IS bringing about –– foundation-shaking changes to our world is the technology of self-driving cars (aka: “autonomous vehicles”).

Current estimates put the number of cars in the United States at roughly 250 million.

250 million cars, driven by humans.

250 million cars that kill nearly 35,000 people per year.

250 million cars that injure more than 5,000,000 people per year.

And all of this is about to change.

The first wave of this change is taking place in States like California, Nevada, Florida, and Michigan (along with the District of Columbia), where –– as of February 2017 –– self-driving cars have been granted the legal right to operate on roadways, streets, and highways.

And if current trendlines hold, we’ll see that legality extend to all 50 States within the next two to three years.

At the same time, as we see more and more autonomous vehicles make their way from the engineering labs to the dealerships and then out onto the highways, we’ll soon see a dramatic drop in the number of automobile related deaths and injuries taking place each year.

By their very design, utilizing full 360-degree views of their surroundings –– along with their networked and shared knowledge of all roadways and obstacles –– self-driving cars will be markedly safer than vehicles driven by humans.

But vehicle-related accidents and deaths will continue to take place.

Yes, there will be a markedly reduced number of accidents taking place, but accidents will still occur.

And in situations where an autonomous vehicle injures or kills its passengers –– or a pedestrian, or passengers in another car –– where does the blame lie?

Is the manufacturer of the self-driving vehicle liable in any sort of way?

Or perhaps the engineers who developed the software allowing a car to be self-driving?

Or the owner of the autonomous vehicle –– do they shoulder any liability?

The simple fact is this: as self-driving vehicles become increasingly common on our streets, roads, and highways, the legal profession (along with federal and state legislatures) will itself go through a period of transition AND a period of confusion, with much of that confusion centered on the question of ultimate liability.

Right now (in the early part of 2017), if you were to call up five different attorneys and ask for their opinions on liability issues related to autonomous vehicles, you would receive five completely different answers as to who (or what) should bear responsibility in the case of an accident.

The point of all this –– and there really is a point to this whole post –– is that it’s important for lawyers, lawmakers, software developers, and car manufacturers to keep themselves up-to-date on all that is taking place in the world of autonomous technology and –– AND –– it’s even more important that lawyers, lawmakers, software developers, and car manufacturers stay in open communication with each other.

With all the changes taking place in our world, it’s far too easy to hunker down and confine oneself to a bubble of one’s own making.

Just as self-driving cars will have open and shared knowledge of their surroundings (in other words, what one car knows, all cars know), there needs to be an open and shared dialogue taking place amongst all the various parties involved with the development of self-driving vehicles AND the various parties who regulate, legislate, and litigate them.

The future will be here far sooner than expected.
