Are self-driving cars worth the risk?

In society, we may be asked to give up some freedom in order to gain a greater good for the group, like safety.

Do self-driving cars provide enough of a benefit? Are early-stage mistakes in technology tolerable in order to make our roads safer? Or should we have zero tolerance for technology that can result in miscalculations and accidents?

How do driverless cars work?

Simply put: Our eyes, ears and brain take in our surroundings, we make decisions based on our observations, and we apply those decisions to the very complicated task of driving. A driverless vehicle takes input from cameras and sensors, then crunches all that data in a computer that controls the vehicle.
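To make that pipeline a little more concrete, here is a minimal, purely illustrative Python sketch of a sense-decide-act loop. The sensor values, thresholds, and function names are invented for the example and do not come from any real self-driving system.

```python
# A toy sketch of the sense -> decide -> act loop described above.
# The readings and rules are made up for illustration; real systems fuse
# many sensors and use far more sophisticated planning.

def sense():
    """Pretend sensor reading: distance to the nearest obstacle (meters)
    and how far the car sits from the lane center (meters)."""
    return {"obstacle_distance_m": 12.0, "lane_offset_m": -0.3}

def decide(world):
    """Turn observations into a driving decision, roughly the role our
    eyes, ears, and brain play for a human driver."""
    brake = 1.0 if world["obstacle_distance_m"] < 15.0 else 0.0
    steer = -world["lane_offset_m"]  # steer back toward the lane center
    return {"brake": brake, "steer": steer}

def act(decision):
    """Stand-in for the computer commanding the vehicle's controls."""
    print(f"brake={decision['brake']:.1f}, steer={decision['steer']:+.2f}")

# The car repeats this loop many times per second.
world = sense()
act(decide(world))
```

Real vehicles run a loop like this dozens of times per second and fuse many more inputs, but the division of labor (perceive, decide, act) is the same.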

Self-driving technology will make our roads safer.

the problem with cars is the drivers

Dr. Jeff Schneider is the lead engineer for machine learning at Uber’s Advanced Technologies Group. Watch his amazing TED talk here.

Schneider tells us:

Over 30,000 deaths each year are due to traffic accidents. Drivers are the cause of those accidents 94% of the time, and the biggest causes among them are driver recognition error followed by driver decision error, both of which are interpretation issues.

the role of speed in car accidents

The driver decides how fast to go, yet the National Transportation Safety Board tells us that speeding not only increases the likelihood of being in a crash, it also increases the severity of injuries.

And, according to a CCC Information Services Crash Course report, fatality rates rise 8% on interstates and freeways for every 5 mph increase in maximum state speed limits, and 4% on other roads.
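To see what those percentages imply, here is a rough back-of-the-envelope calculation in Python. Treating the per-5-mph increases as compounding multiplicatively is an assumption made only for this illustration, not something the report states.

```python
# Rough arithmetic on the CCC figure: an 8% rise in interstate fatality rates
# per 5 mph of maximum speed limit. Multiplicative compounding is assumed
# here purely for illustration.

baseline_rate = 1.0          # normalized fatality rate at the current limit
increase_per_5mph = 0.08     # 8% per 5 mph on interstates and freeways
limit_change_mph = 10        # e.g., raising the limit from 65 to 75 mph

steps = limit_change_mph / 5
new_rate = baseline_rate * (1 + increase_per_5mph) ** steps
print(f"Implied fatality-rate increase: {(new_rate - 1) * 100:.1f}%")
# Prints roughly 16.6% for a 10 mph increase under this assumption.
```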

independence for the elderly and disabled

Self-driving technology could help people who are unable to drive themselves, such as the elderly or disabled.

There’s too much risk associated with this technology and we shouldn’t allow it on our roads.

cybersecurity

Could someone steal your car by hacking into the computer? Could someone simply control the car through hacking?

accidents have happened

In addition to a pedestrian fatality in Arizona, a 40-year-old man was killed in Florida while riding in a Tesla Model S with Autopilot engaged. The car approached an intersection and, without slowing, drove straight under a left-turning truck. It then veered off the road, hitting a fence and a pole before coming to a stop.

In Arizona, an Uber-owned self-driving SUV collided with another vehicle at an intersection, flipping the SUV onto its side. No one was seriously injured, and the Uber vehicle wasn't at fault. The company resumed using its self-driving vehicles within three days.

human interactions are too complicated for computers

Humans are confusing. Approach someone waiting on the sidewalk when the vehicle has the right-of-way and you might get a little go-ahead wave, an I'll-wait-for-you signal that it's safe to cross. Approach someone else waiting on the sidewalk, this time someone who is texting or talking, and you might choose to slowly inch through the intersection. Can a robot tell the difference?

How about giving a wide berth, even moving into the oncoming lane on a back country road, when the person has a dog on a leash or a baby in a stroller? Can a robot understand that safety etiquette?

liability

Who is responsible for accidents? The car manufacturer or the owner?

ethics of choosing who gets hurt

the trolley dilemma

Decades ago, philosophers came up with this ethical question to illustrate different schools of thought in moral philosophy:

You're standing by a trolley track and an out-of-control trolley is about to run over five people. But, luckily, there's a lever that controls a switch that could redirect the trolley onto another track. The trouble is, on this other track one person will get hit and die.

What do you do?

Nothing, and five people get killed. Or, pull the lever, and one person is killed.

the trolley dilemma for autonomous vehicles

A self-driving car is rolling down the highway and is forced to choose between going right and killing someone, going left and killing a different person, or going straight and killing its passenger. Who should the vehicle save? If the likelihood of hitting someone on the right and left is absolutely equal, does the car prioritize saving an old person? A child? How about a pregnant woman or a homeless person or a minister in a robe?

In a driverless car, a programmer decides ahead of time who lives and who dies. In other words, a value system gets coded into the car.
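To make that point concrete, here is a deliberately simplified, hypothetical sketch of what such a value system could look like in code. The categories, weights, and probabilities are invented for illustration and do not reflect any real vehicle's software.

```python
# A hypothetical, deliberately uncomfortable sketch of a "value system"
# coded into a car. Every number below is an invented example.

HARM_WEIGHTS = {          # someone, somewhere, has to choose these numbers
    "passenger": 1.0,
    "pedestrian_adult": 1.0,
    "pedestrian_child": 1.0,
}

def expected_harm(option):
    """Score one maneuver by summing weighted harm to everyone affected."""
    return sum(HARM_WEIGHTS[person] * prob
               for person, prob in option["risks"].items())

options = [
    {"name": "swerve right", "risks": {"pedestrian_adult": 0.9}},
    {"name": "swerve left",  "risks": {"pedestrian_child": 0.9}},
    {"name": "go straight",  "risks": {"passenger": 0.9}},
]

# With equal weights the three options score the same; change any one weight
# and the car "decides" differently. That change is the value judgment.
best = min(options, key=expected_harm)
print("chosen maneuver:", best["name"])
```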

Who decides on the value system?
