Are autonomous vehicles really safer than human drivers?

A group of self-driving Uber vehicles position themselves

Autonomous vehicles in the press


Automated vehicles, or self-driving cars, have been getting a lot of publicity in the press recently. Much of it is very positive, but in the mix are a few more negative views. It's easy to dismiss these negative comments as scaremongering; after all, they largely relate to isolated incidents which often turn out to be caused by human error rather than the vehicle itself.

Some believe that the holy grail of driving is just around the corner, promising to vastly reduce road deaths, cut commute times and free humans to do something more interesting than spending their time driving. There are sceptics among us too. Are they simply reacting to fear of the unknown, or do they have legitimate concerns?

Levels of driving



When you first learn to drive it's a struggle just to control the vehicle: to turn, accelerate, brake, operate the clutch and change gear. Once this becomes more automatic you turn your attention to following the rules of the road: being in the correct lane, obeying the speed limit, stopping at a red light, yielding when you don't have priority and avoiding hazards such as pedestrians, parked cars and cyclists.

A good human driver isn't just reacting to the road and following the rules; the driver is also planning ahead and constantly managing risk. Their actions are based not only on what they can see, but also on what they can't see and what they could reasonably expect to happen. For example, an experienced driver who sees a car with children in the back driving slowly near a school knows it is about to pull over and drop the kids off. A good driver sees a gap in traffic not only as a space, but also as an invitation for someone else to pull into it.

Clearly some human drivers are better than others. Does that mean that drivers who do less planning ahead and anticipating are more dangerous than those who do more?

Managing risk


Driving, in essence, comes down to balancing risk against reward. The reward is getting to your destination in a timely manner, while the risk is having an accident or breaking the traffic rules. Each driver has a level of risk they are subconsciously happy with. Often this isn't an absolute risk level but a trade-off between risk and reward: if you are running late for an important meeting you may consciously decide to accept more risk (e.g. drive faster or accelerate harder) because the reward of getting there on time is also higher.
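This trade-off can be sketched as a toy model. Everything below (the cost function, the weights, the numbers) is purely illustrative and not a model any manufacturer actually uses: lateness is treated as a cost that falls with speed, risk as a cost that rises with it, and the "driver" simply picks the speed that minimises the total.

```python
# Toy risk-vs-reward model. All numbers are illustrative assumptions,
# not real driver or vehicle data. The lateness penalty shrinks as
# speed rises, while the risk penalty grows with the square of speed;
# the chosen speed is the one that minimises the combined cost.

def best_speed(urgency, distance_m=10_000, risk_weight=1.0):
    """Return the speed (m/s) that minimises lateness cost + risk cost."""
    candidates = [v / 2 for v in range(2, 81)]   # 1.0 .. 40.0 m/s

    def cost(v):
        lateness = urgency * distance_m / v      # time-based penalty
        risk = risk_weight * v ** 2              # rises quickly with speed
        return lateness + risk

    return min(candidates, key=cost)

relaxed = best_speed(urgency=1.0)   # no particular hurry
late = best_speed(urgency=3.0)      # running late: reward of speed is higher
print(relaxed, late)
```

Raising the urgency weight shifts the cost-minimising speed upward, mirroring the driver who accepts more risk when late for a meeting.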

As humans we are very good at dealing with risk; it's something we do every day. Crossing the road, buying dinner or even chopping up a carrot all contain an element of risk we need to assess and manage.

Most drivers will manage their level of risk to match their level of skill, so a driver who is unable to plan far ahead or has a slower reaction time is likely to drive more slowly and carefully to compensate.

Younger people tend to accept higher risk levels than older people, which helps explain why younger drivers have more accidents even when experience is taken into account. On the other hand, as older drivers' ability naturally declines they tend to compensate by driving more slowly, which explains why they don't tend to have more accidents.

Why do accidents happen?


Clearly there is a variety of causes for accidents, but most of them come down to something unexpected happening. These unexpected events are usually caused by one or more humans making a mistake, but sometimes the cause isn't directly human (e.g. mechanical failure).

Usually an unexpected event does not cause an accident; if a pedestrian steps out into the road, for example, there is usually time to avoid them. The chance of reacting to an unexpected event and avoiding it depends on how well the driver has anticipated the risk to start with, how soon the driver is able to detect the event, the reaction time of the driver, and the capabilities of both the driver and the vehicle.

For example a modern car might be able to stop very quickly once the brakes are applied but if the driver doesn't react until the pedestrian is very close to the vehicle it's already too late.
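Some simple arithmetic makes this concrete. The figures below are rough illustrative assumptions (30 mph is roughly 13.4 m/s, a typical human decision-plus-reaction time of about 1 second, and braking at 8 m/s²), not measured data for any particular vehicle:

```python
def stopping_distance(speed_ms, reaction_s, decel_ms2):
    """Total distance to stop: the distance covered while reacting,
    plus the braking distance v^2 / (2a) once the brakes are applied."""
    reaction_distance = speed_ms * reaction_s
    braking_distance = speed_ms ** 2 / (2 * decel_ms2)
    return reaction_distance + braking_distance

speed = 13.4   # ~30 mph in m/s (assumed)
human = stopping_distance(speed, reaction_s=1.0, decel_ms2=8.0)
machine = stopping_distance(speed, reaction_s=0.1, decel_ms2=8.0)
print(round(human, 1), round(machine, 1))   # prints 24.6 12.6
```

At these assumed figures the car covers more ground during the human's one-second reaction than it does actually braking, which is exactly why a late reaction leaves no room to stop.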

Of course, many unexpected events are caused by the driver themselves: a driver who is using their mobile phone and not concentrating on the road may well make a mistake in their own driving.

What's different with autonomous vehicles?


Autonomous vehicles work under very similar constraints to human drivers: they have to manage risk versus reward, they have to cope with unexpected events and they have a reaction time. Let's break each of these down and see where the differences are:

Reaction times

Clearly a machine can react to a known stimulus much more quickly than a human, so if an automated vehicle decides to apply the brakes it can do so far faster than you or I could move a foot onto the brake pedal.

However, reaction time also includes how long it takes to make a decision: when the traffic light turns red, how long is it until you decide to apply the brakes? For most people this is around 1 second (give or take), which still gives the autonomous vehicle plenty of scope to beat a human in the circumstances it has been designed to react to.

On the other hand, if an autonomous vehicle has not been designed to react to a particular circumstance then it won't react at all. This means there are likely to be cases where human reaction time is much better, because we can cope with novel circumstances that computer programs are unable to understand. If you see a hot air balloon flying very low and close to the road you might well slow down, or pull over and stop, before the balloon actually lands on the road, whereas an automated vehicle is unlikely to react to such an odd situation.

See, for example, the Tesla Autopilot crash: although this was not a fully automated vehicle, it illustrates how easy it is to miss something very obvious to a human driver.

Managing risk and reward

One thing an autonomous vehicle can do is ignore the reward side of the equation: an autonomous vehicle won't care if you are late to pick up your children from school, so it won't take extra risks to get there sooner. Users will still want some level of comfort setting which may adjust acceleration, cornering and braking, so while they won't be able to override the fundamental safety of the vehicle there could be some fine tuning allowed.

On the risk side of the scale, a human driver naturally adjusts to conditions; most people, for example, will be more careful if the road is wet or icy. For basic parameters like these, autonomous vehicles can make similar adjustments.

The more difficult aspect of managing risk is anticipating things that might happen and taking small mitigating actions in case they do. This is one of the hallmarks of an experienced driver and is something that autonomous vehicles are unlikely to be good at. For example, if you were driving along a country road and saw a hedgerow with freshly cut leaves, you might well slow down for the upcoming corner, just in case the person or machine cutting the hedges was around it.

The recent Uber crash is a case in point: although the vehicle was technically not at fault, how many people would drive through a busy intersection at 38mph (in a 40mph zone) if they were aware of the risk of vehicles turning across their path?

Assertive driving?

One aspect of the risk/reward ratio is how assertive (or aggressive) the driver is. A very passive driver will appear hesitant and is likely to frustrate other drivers, be taken advantage of by more assertive ones, or even encourage others to take additional risks to overtake. On the other hand, a driver who is too assertive may be perceived as aggressive, come into conflict with other drivers or ultimately end up in more accidents.

Additional complexity in this area arises from local culture and customs. In some places, a vehicle slowing down in response to a pedestrian stepping into the road is seen as an invitation for them to cross. In others it is normal for vehicles to break the traffic rules (e.g. on hatched box junctions), and an isolated vehicle that follows the rules may well be significantly delayed.

Unexpected events

One of the key arguments for autonomous vehicles is that humans are very prone to making errors and so if you remove the human element the number of accidents will surely reduce. 

There are two main sources of unexpected events: those caused by the driver, and external ones caused by other vehicles and people outside the control of the vehicle.

An autonomous vehicle clearly has no control over external events and so is very unlikely to reduce their occurrence. Arguably, as more vehicles become autonomous, and if these vehicles make fewer errors, the overall likelihood of an accident being caused by a vehicle will fall. While this is entirely possible it will not make much difference in the short term, and there will always be other external unexpected events: pedestrians, cyclists, motorcyclists, roadworks, mechanical failures, debris in the road, animals and suchlike.

As anyone who has ever owned a piece of technology will know, even if computers themselves do not make mistakes their programming is always less than perfect, so real-world errors are actually quite common. Moving from a human driver to an autonomous vehicle trades one class of errors for another.

The beauty of autonomous vehicles is that once an error has been found the manufacturer can simply update the software to avoid it, whereas human drivers keep making the same old mistakes over and over. On the other hand, a massive number of vehicles sharing the same software could all be prone to the same error, and in theory that error could prove fatal.

Until automated vehicles are deployed at scale it will be unclear how many driver errors they are making; after all, most errors go unpunished, and these vehicles do not yet have enough miles under their belt to provide credible data on the level of mistakes they make.

Higher standards?

On top of this, the public seem to expect much higher standards from autonomous vehicles than they expect from themselves. Accidents happen every day all over the world and most don't even make the local news unless they are very severe, yet every minor incident involving an autonomous vehicle is scrutinised closely. What will happen the first time a driverless vehicle is responsible for a human death due to its own mistake?

Are autonomous vehicles really safer than human drivers?



I'm afraid there is no simple answer to this question; the honest truth is that we really don't know yet. Clearly autonomous vehicles remove one class of accident, but they are quite likely replacing it with a whole new class of mistakes. There have been many trials of these vehicles by companies such as Google, Uber and Tesla, but most have taken place in parts of the world where the roads are not too complex and the number of difficult circumstances they need to cope with is limited. I believe it's going to be quite a long time before autonomous vehicles can be let loose unguided somewhere as complex as central London.


by Trefor Southwell (trefor@tdlj.net)

