Potential Problems with Driverless Cars?


Saw a post on Twitter that sparked my interest about driverless cars and their problems, and here we are! Now, to preface: I am not an expert, and I'm not even sure anything I say here hasn't been said before. But I think I might have some interesting problems that may arise. I'm sure many people who are infinitely smarter than me are working on these types of projects, so they may already have a solution or at least be working on one. If there is one, please let me know; I would love to hear about it.

Also, for the record, I love the idea of driverless cars. I have a belief/hunch that driving will be seen as the smoking of our generation; that kids will ask their parents, "You drove around in those death machines? Didn't you know how dangerous they were?"

In the US alone, around 30,000 people die each year (source) due to car crashes. Fatalities per 100,000 of the population have also averaged over 10 per year since 1994. Driverless cars have the potential to seriously reduce these numbers.

Although I think those reductions will come at the cost of some highly publicised deaths resulting from ethical-dilemma situations decided by AI and the programmers who designed it.

AUTONOMOUS DRIVING LEVELS

Just a quick rundown for those who don't know, or maybe a refresher for those who do. Autonomous cars have a number of levels of autonomy. Level 0 is the current situation, where you, the driver, control it all: steering, brakes and gas. Level 5 is full autonomy, where the system matches how a human driver would behave, handling every situation. This post operates on the assumption of Level 5 autonomy.

DRIVERLESS CAR PROBLEMS?

THE TROLLEY PROBLEM

You might know about the trolley problem from the meme. Basically, a trolley is approaching five people tied down on the tracks. A person is standing next to a lever, and if they decide to, they can pull it so the trolley is diverted to a different set of tracks, where only one person is tied down.

So, in theory, the person with the lever can either allow the trolley to kill five people or choose to divert it and kill only one.

It's interesting to note that in a Michigan State study of 147 subjects, 133 pulled the lever, killing the one to save the other five.

Why is this relevant? Is killing an innocent bystander for the sole purpose of saving others okay? Either way, AI cars will have to be programmed to make these kinds of decisions, which means these problems deserve discussion so we can better understand why we make them.


APPLYING THE TROLLEY PROBLEM

Ok, so now the fun begins. Let's make some quick substitutions. The trolley is now the autonomous car. The human and the lever are now the AI responsible for driving the car. The single person lying on the tracks is you, the sole occupant of the driverless car.

Do you feel comfortable being in a situation where your life may be sacrificed, through no wrongdoing of your own, to save the lives of others? Especially by the choice of a programmed machine?

Maybe you would feel more comfortable if you at least knew what the AI software values in those split-second decisions? Maybe the driverless software will value the life of the occupant above all else. Or maybe it will value the greatest good for the greatest number of people. Hopefully I'll be able to illuminate some problems from both sides of the coin.

JAYWALKING

I saw on Dave Perrel's blog the thought that driverless cars will have to be irrational and a little bit angry to prevent misuse. The case being that if the cars were to stop reliably whenever a pedestrian steps in front of them, jaywalkers would be king, and madness would ensue.

Think about it: if you knew a driverless car would stop for you, why would you bother with zebra crossings and crosswalks? Instead of injuring or potentially killing the arrogant jaywalker, the driverless car would stop suddenly, causing great discomfort to the occupants. So, to prevent this, driverless cars may have to learn to be angry, and occasionally, maliciously fail to stop in time and strike the pedestrian on the shins.


Lance Eliot, in his article, describes it well: "Humans at times like to show that they are as smart or even smarter than automation. What can be more ego boosting than to outwit a seemingly supreme AI system that's driving a car?"

LET’S TAKE IT UP A NOTCH

Ok, so suppose they can stop in time. But what if they can also choose to swerve out of the way to avoid pedestrians? What if the pedestrian steps out too late and the car makes a split-second decision to value the one life of the occupant over the one pedestrian?

But what if there's a group of bored teenagers who know they can game the system? On a busy highway, they know that if all of them (let's say four) jump out in front in time, the driverless car will make a split-second decision to choose four teenage lives over the one older occupant.

MIT has an interesting game where you can judge potential situations that an AI car may face. Afterwards you'll get a breakdown of how you answered (mine, for the curious). The game gives a good perspective on some of the types of split-second decisions the software will have to make. As you will see, however, the choices are not always so black and white, and they will require you to make some assumptions about what you value.

WHY IS THIS AN AI PROBLEM AND NOT A HUMAN PROBLEM?

Because in the examples above, if a human were in that position, even if somehow they managed to kill everyone involved, it's a reaction. There's an element of panic, of stress, and no matter how bad the outcome, it can be understood that in a millisecond reaction anything is possible and bad outcomes are understandable. With a self-driving car, however, there is deliberate decision-making and forethought programmed in. A human would act purely on instinct, or just not react at all, and whatever, that's okay.

The fact that the driverless car is able to make a choice means we have to program it to make a choice. Therein lies the problem: which decision is most optimal and ethical? Whether the decision programmed is to value the occupant's life above all else or the greatest good for the greatest number, it is still a choice.
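To make that concrete, here's a deliberately over-simplified sketch of what those two policies might look like. This is purely my own illustration, not any real vendor's logic: the policy names are invented, and no real system would reduce the decision to a simple head count.

```python
# A toy sketch of the two policies discussed above. The policy names
# and the head-count logic are assumptions for illustration only.

from typing import Literal

Policy = Literal["occupant_first", "utilitarian"]

def choose_action(policy: Policy, occupants: int, pedestrians: int) -> str:
    """Return 'swerve' (endanger the occupants) or 'continue' (endanger the pedestrians)."""
    if policy == "occupant_first":
        # The occupant's life outranks everything: never put them in danger.
        return "continue"
    # Utilitarian: minimise the total expected loss of life.
    return "swerve" if pedestrians > occupants else "continue"

print(choose_action("occupant_first", occupants=1, pedestrians=5))  # continue
print(choose_action("utilitarian", occupants=1, pedestrians=5))     # swerve
```

Either branch is a choice someone wrote down in advance, which is exactly the problem.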


MAKING IT A SPORT

People (and I'm certainly including myself here) love to game a system. Finding an exploit would be the joy of many around the world. What would be better than outsmarting a car, capturing it on video and uploading it to show off to the world? So how could we stop humans exploiting driverless cars? Potentially, driverless cars may be restricted to tunnels or overpasses in an AI-exclusive ecosystem, free from outside human interference and exploitation.

KANGAROO CHAOS

This was too good to leave out. Volvo's detection system is able to detect moose comfortably, but kangaroos are proving tougher. Apparently, kangaroos mid-hop appear further away to the car. Because the system uses the ground as a reference point, it becomes confused as it struggles to determine just how close the animal is.
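Here's a toy version of the geometry (my own sketch, not Volvo's actual algorithm, and all the numbers are made up): if the system assumes every animal is touching the ground, the ray through an airborne animal's lowest point lands on the ground beyond the animal, so the reported distance overshoots.

```python
# Toy geometry showing why a mid-hop kangaroo "appears further away"
# to a system that assumes animals touch the ground. Camera height,
# distance and hop height are assumed values, not real sensor specs.

def reported_distance(camera_height: float, true_distance: float,
                      hop_height: float) -> float:
    """Where the ray through the animal's lowest point meets the ground plane.

    By similar triangles, the ray from a camera at height h through a point
    at height a and horizontal distance d hits the ground at d * h / (h - a),
    which overshoots the true distance d whenever the animal is airborne (a > 0).
    """
    return true_distance * camera_height / (camera_height - hop_height)

h, d = 1.5, 20.0  # camera 1.5 m up, kangaroo 20 m away (assumed values)
print(reported_distance(h, d, 0.0))  # on the ground: 20.0 m, correct
print(reported_distance(h, d, 0.5))  # 0.5 m into a hop: 30.0 m, too far
```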

This is a decent problem in Australia, as kangaroo and wallaby collisions account for up to 90% of all animal collisions here. I can imagine it's only a tiny hiccup though; I just included it because I found it interesting.

SHARING THE ROAD WITH HUMAN DRIVERS

What if driverless cars still share the road with human drivers in traditional, current-day cars? Let's assume here that driverless cars are designed to minimise and avoid accidents in order to protect the occupant. What's to stop a human driver (assuming they are skilled and in control) from flooring the accelerator non-stop and driving down the centre of streets to get to their destination quicker? They would do this knowing full well that driverless cars will move to the side or swerve off the road to protect their occupants. How would you stop them abusing the system while causing harm to the minimum number of people? Will the AI have to be unpredictable and show a bit of 'anger' to deter people from abusing the system?

POSSIBILITY OF HACKING?

Well, cars can already be stolen and used for nefarious means, so it would be delusional to assume this won't continue. But where hacking a car differs from stealing one in person is the ability to do it remotely. And where hacking a car differs from, say, hacking a personal computer is the ability to inflict physical damage. Combine these two factors and it's a pretty naughty little recipe. Will it be commonplace? Probably not. Do I think there will be at least one major breach causing mass damage and maybe death? Well, of course I hope not, but as a betting man I would wager yes.

WHO SHOULD BE PENALIZED?

Patrick Lin, in his TED-Ed video, describes a situation where a car is travelling down a highway when the truck in front loses a large, heavy object and the car cannot brake in time. The car can swerve, but on either side there is a motorcyclist: one wears a helmet and one does not. Assuming the car will swerve to protect the occupant, who should the car swerve into?

If the goal is to minimise harm, then technically the car should swerve into the motorcyclist wearing the helmet, as they're more likely to survive. But why should those who follow the rules be penalised? So instead, the car swerves to take out the one with no helmet, as they are the one not following the law. Is the car now dealing out street justice? Maybe a randomised reaction was the correct response all along.
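For fun, here are those three responses side by side as a sketch; the survival probabilities are numbers I've made up purely for illustration.

```python
# A tiny sketch of the three responses above. The survival probabilities
# are invented for illustration; nothing here reflects real crash data.

import random

survival_if_struck = {"helmeted rider": 0.85, "unhelmeted rider": 0.40}

# Harm minimisation: swerve into whoever is likeliest to survive the hit.
harm_minimising = max(survival_if_struck, key=survival_if_struck.get)

# "Street justice": penalise the rider breaking the helmet law instead.
rule_based = "unhelmeted rider"

# The randomised reaction: refuse to weigh the lives and flip a coin.
randomised = random.choice(list(survival_if_struck))

print(harm_minimising, "|", rule_based, "|", randomised)
```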


One possible framework we have to work from is Germany's Federal Ministry of Transport and Digital Infrastructure report on automated and connected driving. Among their 20 guidelines, they state that in the event of unavoidable collisions, any distinction based on personal features, e.g. age or gender, is strictly prohibited. Udo Di Fabio, who headed the commission, has noted that he imagines a future in which China allows its driverless cars to prioritise people who rank higher within its social-credit system [1][2]. Spooky stuff.

WHO’S AT FAULT IN ACCIDENTS?

Rear-end someone today and you're at fault. Your car manufacturer (assuming none of their parts failed) isn't held responsible. No complaints here. But when you're in an accident in an autonomous car, who is at fault?

I think there will be three possible players here. Will the responsibility for the accident remain with the owner of the vehicle, the car manufacturer, or the company responsible for the code?

For example, say Google's Waymo designed your autonomous car's code, yet Toyota manufactured the car itself. I would imagine both companies would try to shift the blame onto each other. Who would be at fault? I don't know, because it depends. But I hope lawmakers and the like have a decent idea of what the new norm will be.

HOW TO STOP THE SHENANIGANS?

New laws? Will the AI learn from previous pranks and abuse directed towards it? Will driverless cars have to be programmed to be a little irrational to prevent abuse? How much irrationality is enough, and how much is too much?

But the idea I like the most is the comparison to trains. If you stood on the tracks far enough away for the driver to see you, theoretically you could bring the entire train to a halt. But would it be worth the risk? Who would even try to mess with a train? Hopefully it doesn't pose too large a problem.

Although, as mentioned before, I'm sure there are plenty of extremely clever people finding, or who have already found, solutions to these questions. Will you be able to live in a world where your life could be sacrificed at any moment, through no wrongdoing of your own, in order to save the lives of others? Especially at the discretion of an emotionless machine?

FOLLOW ON EFFECTS: ORGAN DONATION

Currently, nearly 11,000 people in the US die each year before receiving a transplant, and as of July 2019, more than 113,000 people in the US alone are waiting for one. Donors from car crashes are classified as deceased donors: their organs are harvested in a brain-dead state while respiration and circulation are maintained artificially.

In 2018, of a total of 17,553 donors, over 60% (10,722) were deceased donors. According to this source, 1 in 5 organ donations comes from the victim of a vehicular accident. Assuming that statistic applies to deceased donors, that would be in the vicinity of 2,144 organ donors per year from traffic accidents. With 36,560 road fatalities in 2018, around 2,144 donors works out to 5.86% of fatalities, which we'll round to 6%.
Given that KPMG predicts accidents could be reduced by 80% by 2040, the fatality number calculated off 2018 would be 7,312. Applying the 6% donor rate to that gives about 439 deceased donors. Wonky maths, I know, but hopefully it proves the point.

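If you'd rather see my poor maths laid out as a script, here it is, using only the figures quoted above (the 6% rate is my rounding of 5.86%):

```python
# Back-of-the-envelope: how an 80% drop in crashes could shrink the
# pool of deceased organ donors. All figures come from the post above.

deceased_donors_2018 = 10_722                 # deceased donors, 2018
crash_donors = deceased_donors_2018 / 5       # "1 in 5" -> ~2,144 per year

fatalities_2018 = 36_560                      # US road fatalities, 2018
donor_rate = crash_donors / fatalities_2018   # ~5.86%, rounded to 6%

fatalities_2040 = fatalities_2018 * (1 - 0.80)  # KPMG's predicted 80% reduction
donors_2040 = fatalities_2040 * 0.06            # ~439 deceased donors

print(f"{crash_donors:.0f} crash donors/year, "
      f"{donor_rate:.2%} of fatalities, "
      f"{donors_2040:.0f} donors after an 80% reduction")
```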

Organ donation already faces a huge shortage, and aside from the last few years, the waiting list has grown every year. It will be interesting to watch the follow-on effects of the self-driving revolution. What other downstream problems of driverless cars have I missed?

YEAH OR NAH?

Did I make sense, and are my problems valid? Am I missing any major points or ideas because I'm blind? Please let me know! Feedback is always helpful, and discussion is encouraged. Leave a comment below, contact me, or feel free to hit me up on Twitter. Thanks for coming to my TED talk ;).

EXTRA RESOURCES