By Mitchell Cunningham and Michael Regan
Few people pay close attention to the traffic situation unfolding around them when they’re travelling as a passenger in a car, even if they’re in the front seat. And that could make partially automated vehicles, which are operating on our roads right now, problematic.
Also known as Level 2 automated vehicles, partially automated vehicles are capable of controlling steering, acceleration and deceleration. Tesla’s Autopilot system is a good example. (Cadillac, Volvo, Audi and Nissan also offer partial automation.)
These kinds of automated vehicles, although designed to optimise driver comfort and safety, require a human driver to remain on standby when the vehicle is in autonomous mode. That means paying close attention to the driving environment, and taking back control of the vehicle if required.
This may sound straightforward, but it’s not.
Passive fatigue and distraction
There are two main reasons why people find it difficult to pay close attention to the driving environment, especially for extended periods of time, when a vehicle is driving itself.
Firstly, people are prone to passive fatigue. Driving conditions that don’t require frequent use of vehicle controls, but do require constant vigilance for hazards, may paradoxically reduce driver alertness – even after only 10 minutes on the road. Such conditions may even put drivers to sleep.
Secondly, prolonged periods of automated driving may become outright boring for some drivers left on standby. Bored drivers tend to engage spontaneously in distracting activities that stimulate them, such as using a phone, reading a magazine or watching a movie. This may be especially true if the driver feels a high level of trust in the automation.
These by-products of automation have been demonstrated in both simulated and real-world driving studies.
Safety concerns
Drivers who are inattentive to the driving environment when a partially automated vehicle is operating in autonomous mode may pose a significant safety risk to themselves and others. They may be less likely to anticipate critical events that spark a takeover request, and be ill-prepared to safely take back control if required.
The 2016 death of the driver of a Tesla operating in Autopilot mode bears directly on this issue. The US National Transportation Safety Board’s accident report notes that:
the probable cause of the Williston, Florida, crash was the truck driver’s failure to yield the right of way to the car, combined with the car driver’s inattention due to overreliance on vehicle automation, which resulted in the car driver’s lack of reaction to the presence of the truck.
Helping people remain vigilant
Autonomous vehicle manufacturers seem to be aware of this problem, and of the need to make the interaction between the driver and the automation safe. To compensate, they require drivers to keep a hand on the wheel when the vehicle is driving itself, or to periodically touch the steering wheel to signal that they remain vigilant.
But it’s unclear whether this is an effective strategy to keep drivers attentive.
Some drivers have devised creative ways of circumventing the requirement to touch the steering wheel, for example by placing a bottle of water on the wheel in lieu of their hand.
Even if a driver touches the wheel when requested, their eyes may be focused elsewhere, such as on a mobile phone display. And even if their eyes are on the roadway when they touch the steering wheel, their minds may not be: there is evidence that prolonged periods of automation can cause drivers’ minds to wander. Indeed, drivers may fail to attend to things on the roadway even while physically looking at them.
This calls into question whether partially automated vehicles can keep drivers attentive to the driving task during periods of autonomous driving. Researchers are actively trying to work out ways of improving this.
A recent paper proposes a set of design principles for the human-machine interface – the technology built into the vehicle that allows it to communicate messages to the driver, and vice versa.
But, in our view, until vehicles become automated to the point there is no longer a requirement for drivers to pay attention to the driving environment, driver inattention is likely to remain a road safety problem.
What about the vehicle itself?
While humans may become inattentive to driving due to mechanisms such as distraction or misprioritised attention, could vehicles operating autonomously become inattentive through similar mechanisms? For example, could they focus their attention, or computational resources, on one aspect of driving to the exclusion of another that is more time critical to safety?
The safe operation of these vehicles will be determined largely by the software algorithms that drive them. Just like a human driver, a vehicle driven by these algorithms will need to prioritise its attention on activities critical for safe driving.
But how do we design algorithms that define what a vehicle should pay attention to from moment to moment, when we don’t yet fully understand what human drivers should attend to at any given moment? Poorly designed automation could make vehicles as vulnerable to inattention as humans.
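To make this risk concrete, here is a deliberately simplified, hypothetical sketch in Python of how it could arise. It does not reflect any real vehicle’s software; the task names, priorities and compute costs are invented for illustration. The sketch shows a scheduler that greedily allocates a fixed computational budget by designer-assigned priority, and how mis-weighted priorities can squeeze out a time-critical task.

```python
# Hypothetical sketch: a perception scheduler that allocates a fixed
# computational budget across driving subtasks by static priority.
# All names, priorities and costs are illustrative, not from any real system.
from dataclasses import dataclass

@dataclass
class PerceptionTask:
    name: str
    priority: float      # designer-assigned importance
    cost: float          # compute budget the task consumes
    time_critical: bool  # must run this cycle to be safe

def schedule(tasks: list[PerceptionTask], budget: float) -> list[PerceptionTask]:
    """Greedily run the highest-priority tasks that fit within the budget."""
    scheduled, remaining = [], budget
    for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
        if task.cost <= remaining:
            scheduled.append(task)
            remaining -= task.cost
    return scheduled

tasks = [
    PerceptionTask("lane_tracking",      priority=0.9, cost=5.0, time_critical=False),
    PerceptionTask("traffic_sign_read",  priority=0.8, cost=4.0, time_critical=False),
    PerceptionTask("cross_traffic_scan", priority=0.5, cost=4.0, time_critical=True),
]

ran = schedule(tasks, budget=10.0)
# With these mis-weighted priorities, the time-critical cross-traffic scan
# is squeezed out of the budget: an algorithmic analogue of inattention.
skipped = [t.name for t in tasks if t not in ran and t.time_critical]
print("skipped time-critical tasks:", skipped)
```

In this toy example the scheduler dutifully runs its two highest-priority tasks and skips the cross-traffic scan, even though that scan is the one that matters most at an intersection. The failure is not a bug in the scheduler but a misjudgement, baked in by the designers, about what deserves attention.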
Driver inattention is currently a problem in partially automated vehicles. In the future, this may morph into “vehicle inattention” unless we can design vehicles capable of reliably attending to all activities critical for safe driving. Until then, inattention as a road safety problem may not be going anywhere.
The authors would like to thank Dr Bill Horrey, Dr Steve Most and Associate Professor Vinayak Dixit for reviewing an earlier version of this article.
This article was originally published on The Conversation.
Mitchell Cunningham is a PhD candidate and casual academic at the University of Sydney, and a senior behavioural scientist at ARRB Group.
Michael Regan is Professor of Human Factors at the Research Centre for Integrated Transport Innovation, UNSW.