Thinking Logically About The Risks Of Self-Driving Cars

Many people don’t realize that we all incur risk by allowing self-driving cars on our roadways today for public tryouts.

Readers have contacted me asking about the level of risk associated with today’s self-driving cars. In prior columns, I indicated that the risk level is high at this juncture and that you should be cautious and mindful of the risks involved.

First, to clarify: there are not as yet any true driverless self-driving cars, certainly none at Level 5. When I refer to going for a ride in a self-driving car today, it would be one that might barely be considered Level 4, the kind being trial-tested on our roadways in small fleets. Level 4 and Level 5 are considered the autonomous realm, while a Level 3 car is semi-autonomous, and the conventional everyday cars that we all routinely use today would be rated at Level 2 or below.

Level 4 is a constrained or limited variant of what you might imagine a truly autonomous car to be; it allows the automaker or tech firm to stipulate that the driverless capabilities will be viable only in particular circumstances (more formally known as Operational Design Domains, or ODDs).

For public roadway tryouts, most of the Level 4 driverless cars include a human back-up driver. The purpose of the human back-up driver is to closely monitor the self-driving car’s actions and intervene as needed to promote safety during a driving journey.

As such, if you went for a ride in a self-driving car today, you normally would be accompanied by a human back-up driver, someone presumably trained and alert who is ready to take over the driving from the AI system. This admittedly takes some of the “excitement” out of the riding experience, since you aren’t relying entirely on the AI system to do the driving, but it is a useful precaution to protect you; in many states it is also, by and large, a requirement for public roadway tryouts (there are exceptions).

To go for a ride in a self-driving car and do so without a human back-up driver, you would likely need to arrange for a ride at a closed track or proving ground. The companies making the AI systems of self-driving cars are at times practicing and experimenting with their autonomous car systems via the use of a special track that allows for a controlled environment.

Risk Is Everywhere

When referring to risk, it is important to realize that we experience risk throughout our daily lives.

Some people joke that they won’t leave their house because it is too risky to go outside, but this offhanded remark overlooks the truth that there is risk while sitting comfortably inside your home. At any moment, an earthquake could shake your house into the dust. While sitting in your living room, an airplane flying overhead could falter and crash into your domicile.

I don’t want to seem like a doom-and-gloom person, but my point hopefully is well-taken, namely that the chance of an adverse or unwelcome loss or injury is always at hand and ready to occur.

You absorb risk by being alive and breathing air. Risk is all around you and you are enveloped in it. Those who think they only incur risk when they, say, go for a walk or otherwise take action are sadly mistaken. No matter what you are doing, asleep or awake, inside or outside of a building, even if locked away in a steel vault and trying to hide from risk, it is still there, on your shoulder, and at any moment you could suddenly suffer a heart attack or the steel vault might fall and you’d be hurt as an occupant inside it.

This brings us to the equally important point that there is absolute risk and there is relative risk.

We often fall into the mental trap of talking about absolute risk and scare ourselves silly. It is better to discuss relative risk, providing a sense of balance or tradeoff about the risks involved in a matter.

For example, I had earlier stated that I believe the risk of your going for a ride in today’s self-driving cars is high, yet you can’t know for sure what I mean by the notion of “high” related to the risk involved.

Is the risk associated with being inside a self-driving car considered less or more than say going in an airplane or taking a ride on a boat? By describing the risk in terms of its relative magnitude or amount as it relates to other activities or matters, you can get a more realistic gauge of the risk that someone else is alluding to.

I’d like to then bring up these three measures for this discussion about risk:
• R1: Risk associated with a human driving a conventional car
• R2: Risk associated with AI driving a self-driving autonomous car
• R3: Risk associated with a human and AI co-sharing the driving of a car

Let’s unpack those risk aspects.

Relative Risk Associated With Self-Driving Cars

We can use R1 as a baseline since it is the risk associated with a human driving a conventional car.

Whenever you go for a drive in your conventional car, you are incurring the risk associated with you making a mistake and crashing into someone else, or, despite your best driving efforts, there might be someone that crashes into you. Likewise, when you get into someone else’s car, such as ridesharing via Uber or Lyft, you are absorbing the risk that the ridesharing driver is going to get into a car accident of one kind or another.

Consider R2, which is the risk associated with a true self-driving autonomous car.

Most everyone involved in self-driving cars who cares about the advent of driverless cars is hoping that autonomous cars will be safer than human-driven cars, meaning that presumably there will be fewer deaths and injuries due to cars, fewer car crashes, and so on.

You could assert that the risk associated with self-driving cars is hoped to be less than the risk associated with human-driven conventional cars.

I’ll express this via the notation of: R2 < R1

This is aspirational and indicates that we are all hoping that the risk R2 is going to be less than the risk R1.

Indeed, some would argue that it should be this: R2 << R1

This means that the R2 risk, involving the AI driving of a driverless car, would be a lot less, substantially less than the risk of a human driving a conventional car, R1.

You’ve perhaps heard some pundits who have said this: R2 = 0

Those pundits are claiming that there will be zero fatalities and zero injuries once we have true driverless self-driving cars.

I’ve debunked this myth in many of my speeches and writings. There is no reasonable way to get to zero. If a self-driving car comes upon a situation in which a pedestrian unexpectedly leaps in front of the moving driverless car, physics dictates that the car cannot always stop or avoid hitting the person, and so there will be at least a non-zero chance of fatalities and injuries.

In short, here’s what I’m suggesting so far in this discussion:
• R2 = 0 is false and misleading; it won’t happen
• R2 < R1 is aspirational for the near-term
• R2 << R1 is aspirational for the long term

Some believe that we will ultimately have only true self-driving cars on our roadways, and we will somehow ban conventional cars, leading to a Utopian world of exclusively autonomous cars. Maybe, but I wouldn’t hold your breath about that.

The world is going to consist of conventional cars and true self-driving cars, for the foreseeable future, and thus we will have human-driven cars in the midst of AI-driven cars, or you could say we’ll have AI-driven cars in the midst of human-driven cars.

Bringing In The Risk Of Co-Sharing Driving

There’s an added twist that needs to be included, namely the advent of Level 3 cars, consisting of Advanced Driver-Assistance Systems (ADAS), which provide AI-like capabilities that are utilized in a co-sharing arrangement with a human driver. The ADAS augments the capabilities of a human driver.

To clarify, the Level 3 requires that a licensed-to-drive human driver must be present in the driver’s seat of the car. Plus, the human driver is considered the responsible party for the driving task. You could say that the AI system and the human driver are co-sharing the driving effort.

Keep in mind that this does not allow for the human driver to fall asleep or watch videos while driving since the human driver is always expected to be alert and active as the co-sharing driver.

I have forewarned that Level 3 is going to be troublesome for us all. You can fully anticipate that many human drivers will be lulled into relying upon the ADAS and will, therefore, let their guard down while driving. The ADAS will suddenly try to get the human driver to take over the driving controls, but the human driver, by then mentally adrift of the driving situation, will not take appropriate evasive action in time.

In any case, I’m going to use R3 to reflect the risk of the human and AI co-sharing the driving task.

Most everyone is hoping that the co-sharing arrangement will make human drivers safer, presumably because the ADAS will provide a handy “buddy driver” and overcome many of today’s solo-driving issues.

Here’s what people assume:
• R3 < R1
• R3 << R1

In other words, the co-sharing effort will be less risky than a conventional car with a solo human driver, and maybe even be a lot less risky.

Down the road, the thinking is that truly driverless cars, ones driven solely by the AI system, will be less risky not only than conventional cars driven by humans but even than the Level 3 cars that involve co-sharing the driving task.

Thus, people hope this will become true:
• R2 < R3
• R2 << R3

Overall, this is the aim when you consider all three types of driving aspects:
• R2 < R3 < R1
• R2 << R3 << R1

Thus, this asserts that ultimately AI-driven autonomous cars (R2) will be less risky than co-shared driven cars (R3), which in turn will be less risky than conventional human-driven cars (R1), with the long-term aim of being a lot less risky throughout.

Here then is the full annotated list of these equation-like aspects:
R2 = 0 – a false claim that AI autonomous cars won’t have any crashes
R2 < R1 – aspirational near-term, AI-driven cars less risky than human-driven cars
R2 << R1 – aspirational long-term, AI-driven cars a lot less risky than human-driven cars
R3 < R1 – aspirational near-term, co-shared driving less risky than human-solo
R3 << R1 – aspirational long-term, co-shared driving a lot less risky than human-solo
R2 < R3 – aspirational near-term, AI-driven cars less risky than co-shared driving
R2 << R3 – aspirational long-term, AI-driven cars a lot less risky than co-shared driving
R2 < R3 < R1 – aspirational near-term, AI car less risky than co-shared, which is less risky than human-solo
R2 << R3 << R1 – aspirational long-term, AI car a lot less risky than co-shared and human-solo
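The chained orderings above can be sketched as simple checks. The numbers and the factor-of-ten reading of “<<” below are my own assumptions for illustration; the article’s “<<” notation is informal and does not specify a multiplier:

```python
# Check the aspirational chained ordering R2 < R3 < R1 for a set of
# hypothetical risk values (invented numbers, for illustration only).

def ordering_holds(r1: float, r2: float, r3: float) -> bool:
    """True when the near-term goal R2 < R3 < R1 is satisfied."""
    return r2 < r3 < r1

def strong_ordering_holds(r1: float, r2: float, r3: float,
                          factor: float = 10.0) -> bool:
    """True when each risk is at least `factor` times smaller than the next.
    This is one assumed way to read the informal '<<' notation."""
    return r2 * factor <= r3 and r3 * factor <= r1

print(ordering_holds(4.2, 1.0, 2.5))         # True with these invented values
print(strong_ordering_holds(4.2, 1.0, 2.5))  # False: 2.5 is not 10x below 4.2
```

With these made-up values, the near-term ordering holds but the long-term “a lot less risky” version does not, which mirrors how the two goals differ in ambition.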

Today’s Risk Aspects Of Self-Driving Cars

My equation-like expressions indicate the aspirational goals of automating the driving of cars.

We aren’t there yet.

When you go for a ride in a self-driving car that has a human back-up driver, you are somewhat embracing the R3 risk category, but not quite.

The human back-up driver is not per se acting as though in a Level 3 car, in which they would be actively co-sharing the driving task; instead, they serve as a “last resort” driver in case the AI of the self-driving car needs a “disengagement” (industry parlance for a human driver taking over from the AI during a driving journey).

It is an odd and murky position.

You aren’t directly driving the car. You are observing and waiting for a moment wherein either the AI suddenly hands you the ball, or you, of your own volition, suspect or believe that it is vital to take over for the AI.

Some might say that I should add a fourth category to my list, an R4, which would be akin to R3, though it is a co-sharing in which the human driver is more removed from the driving task.

Another approach would be to delineate differing flavors of R3.

For example, some automakers and tech firms are putting into place a monitoring capability that tries to track the attentiveness of the human driver who is supposed to be co-sharing the driving task. This might involve a facial-recognition camera pointed at the driver that alerts if the driver’s eyes don’t stay focused on the road ahead, or a sensor on the steering wheel that makes sure the human co-driver has their hands directly on the wheel, and so on.

If you have those kinds of monitors, they would presumably decrease the R3 risk, though we don’t yet really know by how much.

Another factor that seems to come into play with R3 is whether there is another person in the car during a driving journey. A solo human driver co-sharing the driving task with the ADAS seemingly is more likely to drift away from the driving task when alone in the car. If there is another person in the car, perhaps one also watching the road and urging the human driver to be attentive, it seems to prompt the human driver toward safer driving.

Rather than trying to overload the R3 or attempt to splinter the R2, let’s go ahead and augment the list with this new category of the R4:
• R1: Risk associated with a human driving a conventional car
• R2: Risk associated with AI driving a self-driving autonomous car
• R3: Risk associated with a human and AI co-sharing the driving of a car
• R4: Risk associated with AI driving a self-driving car with a human back-up driver present
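The four categories, including the fact that R4 has no reliable estimate today, can be sketched as follows. All numeric values are invented for illustration, and `None` stands in for the unknown R4:

```python
from typing import Optional

# Hypothetical risk scores for each category; R4 is unknown today, so it
# is recorded as None. All numbers are invented for illustration only.
risks: dict[str, Optional[float]] = {
    "R1": 4.2,   # human driving a conventional car
    "R2": 1.0,   # AI driving an autonomous car
    "R3": 2.5,   # human and AI co-sharing the driving
    "R4": None,  # AI driving with a human back-up driver -- no estimate yet
}

def compare(a: str, b: str) -> str:
    """Compare two risk categories, reporting 'unknown' if either lacks data."""
    ra, rb = risks[a], risks[b]
    if ra is None or rb is None:
        return "unknown"
    return f"{a} < {b}" if ra < rb else f"{a} >= {b}"

print(compare("R2", "R1"))  # decidable with these invented numbers
print(compare("R4", "R1"))  # "unknown" -- mirrors the open question
```

The point of the sketch is that the R4 comparisons posed below cannot be decided either way until real-world data exists.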

This leads us to these questions:
• R4 < R1? – is an AI self-driving car with a human back-up driver less risky than a human-driven car?
• R4 << R1? – is an AI self-driving car with a human back-up driver a lot less risky than a human-driven car?

Or, if you prefer:
• R1 < R4? – is a human-driven car less risky than an AI self-driving car with a human back-up driver?
• R1 << R4? – is a human-driven car a lot less risky than an AI self-driving car with a human back-up driver?

Unfortunately, we don’t yet know the answer to those questions.

Indeed, some critics of the existing roadway tryouts involving self-driving cars are concerned that we are allowing a grand experiment for which we don’t know what the comparative risks are. They would assert that until there are more simulations done and closed track or proving ground efforts, these experimental self-driving cars should not be on the public roadways.

The counterargument usually voiced is that keeping self-driving cars off our public roadways would likely delay their advent, and each day of delay allows, by default, the conventional car to continue its existing injury and death rates.

Conclusion

When someone tells you that you are taking a risk by going for a ride in a self-driving car (assuming there is a human back-up driver), the question is how much difference in risk there is between riding in a conventional car with a human driver versus a self-driving car with a human back-up driver.

Since you presumably are willing to accept the risk of being a passenger in a ridesharing car, you’ve already accepted some amount of risk in going onto our roadways as a rider in a car, albeit one driven by a human.

How much more or less risk is there once you set foot into that self-driving car that has the human back-up driver?

What beguiles many critics is that the risk is not just for the riders in those self-driving cars on our public roadway.

Wherever the self-driving car roams, it radiates risk out to any nearby pedestrians and any nearby human-driven cars. You don’t see this imaginary radiation with your eyes; it occurs simply because you happen to end up near one of the experimental self-driving cars on our public streets.

Are we allowing ourselves to absorb too much risk?

I’ll be further contemplating this matter while ensconced in my steel vault that has protective padding and a defibrillator inside it, just in case there is an earthquake, or I have a heart murmur, or some other calamity arises.

 

This article was written by Dr. Lance Eliot from Forbes and was legally licensed through the NewsCred publisher network. Please direct all licensing questions to legal@newscred.com.