Google has released data on how often “drivers” of its autonomous cars have to retake control of the vehicles, either due to technology failure or safety concerns.
The data, released in a compliance report to California’s Department of Motor Vehicles, generally shows the autonomous technology is improving, as Google reports fewer incidents despite its cars covering vast distances.
But it again highlights the need for drivers to remain attentive in autonomous cars rather than becoming distracted by other pastimes.
Between September 2014 and November 2015, Google said there were 272 occasions when a technology failure forced the test driver to retake control.
Such failures – which Google termed “immediate manual control disengages” – may be triggered by communication failures between the primary and secondary self-driving systems (for example, a broken wire) or by anomalies in sensor readings of acceleration, GPS positioning and the monitoring of key functions like steering.
Google’s test drivers were alert to these failures, taking an average of 0.84 seconds to respond.
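Google has not published the internals of this failsafe logic, but the report’s description suggests a watchdog-style check: hand control back the moment the redundant systems stop communicating or a sensor reading falls outside plausible bounds. The sketch below is purely illustrative of that pattern; every name and threshold in it is an assumption, not Google’s code.

```python
# A minimal, invented sketch of an "immediate manual control disengage"
# trigger, based only on the failure types Google describes in its report.

HEARTBEAT_TIMEOUT_S = 0.1   # assumed maximum gap between secondary-system heartbeats
MAX_ACCEL_MPS2 = 12.0       # assumed bound beyond which an acceleration reading is anomalous

def should_disengage(last_heartbeat_s, now_s, accel_mps2, gps_fix_ok):
    """Return True if control must be handed back to the test driver."""
    if now_s - last_heartbeat_s > HEARTBEAT_TIMEOUT_S:
        return True   # lost contact between primary and secondary systems
    if abs(accel_mps2) > MAX_ACCEL_MPS2:
        return True   # sensor anomaly: implausible acceleration reading
    if not gps_fix_ok:
        return True   # GPS positioning can no longer be trusted
    return False

# Example: a heartbeat that is 0.3s stale forces an immediate handover.
assert should_disengage(last_heartbeat_s=0.0, now_s=0.3,
                        accel_mps2=1.2, gps_fix_ok=True)
```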
Beyond the occasions when the computer handed back control of the vehicle, Google reported a further 69 incidents where the test driver decided to take over.
Of the 69, 56 “would very likely not have resulted in a real-world contact if the test driver had not taken over”, Google said, citing simulator test results.
In the remaining 13 incidents, “the test driver prevented our vehicle from making contact with another object”, Google said.
In two of those cases the car would merely have struck a traffic cone, but in three others the accident would have been caused by the unpredictable actions of another driver, Google said.
“In these cases, we believe a human driver could have taken a reasonable action to avoid the contact but the simulation indicated the [self-driving car] would not have taken that action,” the company said.
Google expected the incidence of human takeovers would drop over time and said it was already seeing evidence of that in the latter part of 2015.
“That said, the number of incidents like this won’t fall constantly; we may see it increase as we introduce the car to environments with greater complexity caused by factors like time of day, density of road environment, or weather,” the director of Google’s self-driving cars project, Chris Urmson, wrote in a Medium post.
Sophisticated simulation
Google’s report also reveals the existence of a “powerful simulator program”, developed in-house by its engineers, which is used to run through different scenarios using the data collected when a human test driver manually takes over.
It “allows the team to replay each incident and predict the behaviour of the self-driving car (had the driver not taken control of it) as well as the behaviour and positions of other road users in the vicinity (such as pedestrians, cyclists, and other vehicles).
“The simulator can also create thousands of variations on that core event so we can evaluate what would have happened under slightly different circumstances, such as our vehicle and other road users moving at different times, speeds, and angles,” Google said.
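Google has not disclosed how the simulator works internally, but the description – thousands of variations on a core event, with road users moving at different times, speeds and angles – matches a Monte Carlo approach: perturb the recorded scenario at random and count how often the variants end in contact. The sketch below shows that general shape only; the replay function is a hypothetical stand-in for re-running logged data through the driving software, and all parameters are invented.

```python
import random

def replay_variant(ego_delay_s, ego_speed_mps, other_heading_deg):
    """Hypothetical stand-in for replaying one perturbed scenario.

    A real simulator would re-run logged sensor data through the
    self-driving software; this toy version fakes an outcome so the
    surrounding loop is runnable.
    """
    risk = abs(ego_delay_s) * 0.2 + max(0.0, ego_speed_mps - 13.0) * 0.05
    return random.random() < min(risk, 1.0)  # True means simulated contact

def estimate_contact_rate(base_speed_mps, n_variants=10_000):
    """Generate thousands of variations on the core event and count contacts."""
    contacts = 0
    for _ in range(n_variants):
        # Perturb timing, speed and angle, as Google's description suggests.
        delay = random.uniform(-0.5, 0.5)                # seconds
        speed = base_speed_mps + random.uniform(-2.0, 2.0)  # metres per second
        heading = random.uniform(-5.0, 5.0)              # degrees
        if replay_variant(delay, speed, heading):
            contacts += 1
    return contacts / n_variants

print(f"Estimated contact rate: {estimate_contact_rate(14.0):.1%}")
```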
Urmson wrote on Medium that Google’s test regime was building “measurable confidence” in the self-driving cars’ abilities in various environments.
“This stands in contrast to the hazy variability we accept in experienced human drivers – never mind the 16-year-olds we send onto the streets to learn amidst the rest of us,” he said.
“Although we’re not quite ready to declare that we’re safer than average human drivers on public roads, we’re happy to be making steady progress toward the day we can start inviting members of the public to use our cars.”