Driverless cars won’t play chicken

Written by babulous | Published 2017/10/12
Tech Story Tags: self-driving-cars | ethics | ai | human-behavior | autonomous-cars


I have been mulling over an ethical issue facing driverless cars for some time now. So when a nephew opened a discussion about it on social media, I decided to put in my two bits.

Basically, the issue is that a driverless car may need to make a split-second decision (say, when the brakes fail). It may have to choose between running over a group of pedestrians on a zebra crossing, or crashing into a wall and killing the car’s owner. Losing many lives vs sacrificing one. It’s an interesting dilemma for driverless tech, and reminds me of the ‘chicken or egg’ question.

Morally, the car should choose the second option and sacrifice the one life. But then why would anyone buy a driverless car that will not prioritise their life? That would be the end of the driverless car, which would be a tragedy indeed.

Driverless tech has immense potential to make our roads a lot safer by taking human error out of the equation. So it would be really sad to kill off such a valuable technology over an ethical dilemma.

In fact, I feel this is less an ethical issue and more a case of too much analysis causing paralysis. If we instead program the car to do what humans would do in such situations, the dilemma will resolve itself.

Take the above ‘brake failure’ scenario. Human drivers don’t frame it as a choice between killing the pedestrians or themselves. They focus on factors like the car’s speed and controllability, the environment outside the car, avoiding the pedestrians, and the possibility of slowing the car with an angled side collision (as against a head-on collision, which they would have little chance of surviving). Basically, the driver’s own survival is factored into the decision. As long as that is done, it’s acceptable if the driver sometimes doesn’t survive, as that is what often happens in real life.

I believe driverless tech can also be programmed to make decisions based on such ‘grey’ factors, rather than a ‘black and white’ choice of ‘kill the pedestrians’ or ‘kill the driver.’ Yes, there will be a learning curve with driverless tech, and other issues will crop up. Sooner or later, someone will figure out how to hack a driverless car and take control of it. But I think the positive of fewer accidents on the road far outweighs the risks, which are part of the price of progress.
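To make the idea concrete, here is a toy sketch of what ‘grey factor’ decision-making could look like: instead of a binary pedestrians-vs-driver choice, each candidate manoeuvre gets a harm score that weighs everyone involved, occupant included, and the car picks the least bad option. Every name, factor, and number below is made up purely for illustration; real systems would use far richer models.

```python
# Toy 'grey factor' decision sketch. All values are invented for
# illustration; this is not how any real autonomous vehicle decides.

from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    pedestrian_risk: float  # estimated chance of harming pedestrians (0..1)
    occupant_risk: float    # estimated chance of harming the occupant (0..1)
    controllability: float  # how likely the car can execute this cleanly (0..1)

def expected_harm(m: Manoeuvre) -> float:
    # Weigh both risks together and penalise manoeuvres the car is
    # unlikely to pull off: an uncontrollable swerve is risky for everyone.
    return (m.pedestrian_risk + m.occupant_risk) / m.controllability

def choose(manoeuvres):
    # Pick the option with the lowest overall expected harm.
    return min(manoeuvres, key=expected_harm)

options = [
    Manoeuvre("plough straight through crossing", 0.9, 0.1, 0.95),
    Manoeuvre("head-on into wall", 0.0, 0.9, 0.9),
    Manoeuvre("angled side collision to shed speed", 0.1, 0.3, 0.8),
]

print(choose(options).name)  # → angled side collision to shed speed
```

Note that the occupant’s survival is just one weighted factor among several, exactly as it is for a human driver, so neither the binary ‘kill the pedestrians’ nor ‘kill the driver’ outcome is hard-coded.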

As the wise man said, “I don’t care if chicken or egg came first, as long as I get to eat both.”


Published by HackerNoon on 2017/10/12