ELAINE Herzberg was pushing a bicycle laden with shopping bags across a six-lane freeway in Arizona when she was struck, fatally, by a Volvo travelling at 38mph.
The tragedy was one among dozens of deaths on roads in America that day, but it made headlines because the Volvo was no ordinary car. It had been modified by Uber to drive itself and was equipped with state-of-the-art collision-avoidance technology.
The investigation by police and federal transport authorities is still going on, but the ramifications have already spread across the globe. Uber, the ride-hailing service that was in charge of the Volvo, immediately suspended testing of similar vehicles in other North American cities, including Pittsburgh and Toronto.
Toyota also halted tests of driverless cars on public roads in the US, saying it was concerned about the effect of the incident on its test drivers. Audi asked its engineers for reassurance about its new A8, billed as the most advanced autonomous car of its kind and due to go on sale later this year.
The death of 49-year-old Herzberg may be unremarkable on the scale of traffic accident statistics, but as scrutiny of the technology intensifies, and public opinion cools, it may delay what once seemed an inevitable march towards the adoption of autonomous cars.
For those in the know, that will come as no surprise. Uber’s own data reveals how far its driverless cars really are from taking over from humans.
A document leaked a year ago revealed that the average distance Uber’s driverless test cars could manage before a backup driver was forced to intervene was just 0.8 miles. During a week in 2017 in which Uber’s 43 test cars in Pittsburgh drove more than 20,000 miles, drivers intervened on 25,000 occasions.
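The leaked figures are internally consistent: dividing the miles driven by the number of interventions reproduces the 0.8-mile average. A quick back-of-envelope check, using only the numbers quoted above:

```python
# Values taken directly from the leaked Uber report cited above.
miles_driven = 20_000    # miles covered by the 43 Pittsburgh test cars in one week
interventions = 25_000   # occasions on which a backup driver had to take over

# Average autonomous distance between interventions
miles_per_intervention = miles_driven / interventions
print(f"{miles_per_intervention:.1f} miles per intervention")  # prints "0.8 miles per intervention"
```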
For a time Uber was banned from testing its driverless Volvos in San Francisco, which is one reason it decamped to Arizona, where regulation is looser — though tests in California later resumed.
Christian Wolmar, the author of Driverless Cars: On a Road to Nowhere, points out that Uber has led the charge towards autonomy, believing it offers a cheaper and safer alternative to human drivers.
“In January 2015 the company claimed that entirely autonomous taxis would be available in 2018 and [bosses expected] Uber’s fleet to be entirely driverless by 2030. In reality, the performance [of its cars] demonstrates just how far away full-scale autonomy is.”
Could the backup driver in the Uber car that struck Herzberg have intervened earlier? Interior video footage suggests she had taken her eyes off the road shortly before the collision. Was the autonomous system at fault?
Driverless cars have been “just round the corner” for more than a century. Proponents say they ease jams and cut the risk of collisions because they are interconnected and can talk to one another, freeing up road space and keeping vehicles a safe distance apart. But nobody has yet been able to come up with a solution that improves on you.
It’s easy to take for granted just how incredibly good humans are at driving. How, each day, millions of us stream into cities, filtering into narrow streets, squeezing past parked vans, navigating round potholes and nudging through nose-to-tail traffic, according to — usually — unspoken but readily understood rules of fairness.
We can spot a headphones-wearing pedestrian who’s about to step into a busy road; a wavering driver set to swerve into our lane; or red brake lights that signify a hold-up far down the road.
And we’re so good that things rarely go wrong: there’s only one crash involving even a slight injury for every 2m miles driven in Britain, according to 2016 statistics from the Department for Transport.
No driverless car has yet matched our instinctive understanding and reactions. And yet car manufacturers would have us believe that in little more than a decade a computer will be able to do all this. Not just as well as a human, but faster, more safely and more efficiently.
Joshua Brown, a former US navy bomb disposal expert, appears to have shared this optimistic vision when he set off on May 7, 2016, driving his Tesla Model S on Highway 27 in Florida. The car’s Autopilot function was switched on, maintaining the car’s speed at 74mph and keeping it in lane.
Despite the Autopilot name, drivers are expected to keep their hands on the wheel. Yet in the course of a 37-minute journey the car’s sensors detected Brown’s hands on the wheel for only 25 seconds.
Neither Brown nor the car spotted the lorry that pulled across the freeway, and he died when they collided. An investigation published by the National Transportation Safety Board (NTSB) found that Brown should have been able to see the lorry for at least seven seconds before the impact.
It’s not clear why he didn’t but, perhaps significantly, a month before the crash Brown had posted a video on YouTube showing him at the wheel of his Tesla during another encounter with a truck that swerved into his lane.
On that occasion his Autopilot system reacted instantly, darting out of the truck’s way and avoiding an accident. After being retweeted by Tesla’s co-founder, Elon Musk, the video went viral.
The incident may have led Brown to believe the Autopilot was cleverer than it was. Certainly, on the day he died it didn’t spot the white side of the lorry against a bright sky. The NTSB said data from the car suggested Brown had relied too heavily on the Autopilot function and had not understood its limitations. Tesla’s alert to remind drivers to keep their hands on the wheel was also found to be ineffective — and the company has since updated the design.
It was a situation in which human intervention was required — and another reminder of the illusion of techno-infallibility.
Carlos Ghosn, chairman of Nissan, a car maker that has positioned itself as one of the world leaders in autonomous driving, admitted last year there were problems still to resolve. One obstacle he pointed out — presciently, as it turns out in the light of Herzberg’s death — was “people with bicycles”.
He said: “The car is confused, because from time to time they behave like pedestrians, and from time to time they behave like cars.”
It’s one thing for a car to be able to guide itself at a fixed speed on a road with clear white line markings in good weather, and another to be able to navigate in, say, a multistorey car park, or on a road with no markings, or where there are roadworks or contraflows, or when an emergency vehicle needs to pass, or when the road is obscured by snow, or in a million other unforeseen situations.
Early tests of Google’s driverless cars were hampered by the fact that they could not tell whether an object in the road was a rock or a plastic bag.
Advocates such as Musk point out that most crashes are caused by human error and, in terms of accident rate per miles covered, the records of autonomous cars are excellent. But the figures may be misleading, because the conditions in which they are tested are not always representative of real-world driving and don’t include the day-to-day challenges most drivers face.
It would, in theory, be possible to make robot cars that were accident-proof by programming them to drive slowly and brake more cautiously. But then drivers would not reach their destination on time. And it would open up the possibility that people who were motivated to bring cities to a halt — to draw attention to a campaign, say — could do so simply by standing in the road at a few strategic junctions. The robot cars would shut themselves down to avoid hitting them.
For the foreseeable future it seems likely that autonomy will be limited to electronic systems that help to prevent accidents — such as the systems on many existing cars that apply the brakes to avoid impact, or help the driver to see at night and in bad weather.
The new Audi A8, expected to go on sale later this year, has “level 3” technology that will guide the car on a main road but hand back control to the driver when the speed of traffic goes above 37mph, if the lane markings aren’t clear, in poor weather or if any pedestrian is spotted. Drivers will hear an alert and have about 10 seconds to retake the steering wheel. If this doesn’t happen, the car will come to a halt with its hazard warning lights flashing.
There’s also an on-board nanny: a driver-facing camera to ensure that you don’t close your eyes and grab a quick snooze. Similar driving aids are expected soon from a range of manufacturers.
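The handover rules described above amount to a simple state machine. As an illustration only — every name and threshold below is an assumption drawn from the article’s description, not Audi’s actual software — the logic might be sketched like this:

```python
# Hypothetical sketch of the "level 3" handover logic described above.
# All function names, parameters and thresholds are illustrative assumptions.

HANDOVER_GRACE_SECONDS = 10  # article: "about 10 seconds to retake the steering wheel"

def pilot_step(speed_mph, lanes_clear, weather_ok, pedestrian_seen,
               seconds_since_alert, driver_has_wheel):
    """Return the car's next mode: 'autonomous', 'manual', 'alert' or 'emergency_stop'."""
    conditions_ok = (speed_mph <= 37 and lanes_clear and weather_ok
                     and not pedestrian_seen)
    if conditions_ok:
        return "autonomous"       # system keeps driving
    if driver_has_wheel:
        return "manual"           # driver has taken control back
    if seconds_since_alert < HANDOVER_GRACE_SECONDS:
        return "alert"            # warn the driver, keep counting down
    return "emergency_stop"       # halt with hazard warning lights flashing
```

The point of the sketch is the fallback chain: the system only ever degrades from autonomous driving to an alert, and from an alert to a controlled stop, never directly from autonomy to nothing.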
Audi’s Traffic Jam Pilot system will be allowed to go on sale in Germany after being granted government exemption, but the autonomous system is “geo-fenced” to stop it working in other territories, including Britain, without permission. Herzberg’s death may delay that.
Even if it goes on sale, it won’t mean drivers are free to send texts or watch TV. Guidance from Audi says: “Currently the driver is required to have complete control of the task of driving. In the event of a traffic accident, [relying on automated driving functions] could be considered a breach of his/her obligations, which in turn could lead to questions of liability.”
The government’s Automated and Electric Vehicles Bill, announced in last year’s Queen’s speech, is designed to prepare for more electric and driverless cars by addressing the fact that currently insurance covers drivers and not vehicles.
To allow development of autonomous vehicles, the rules need to change so insurers will also pay out when a car is in autonomous mode and at fault. In those circumstances, the insurer will have to claim back from the manufacturer.
To make assigning blame easier, autonomous cars will have to carry black boxes. Data on whether a car or driver was in control at the time of the crash is likely to be stored online by manufacturers, and insurers given access in the event of a claim — another thing that may make drivers think twice about the advantages.
Whatever happens, the planned legislation won’t change the fact that a driver still needs to be in control of a vehicle. In Britain the law is summed up in the Road Vehicles (Construction and Use) Regulations 1986.
They state: “No person shall drive or cause or permit any other person to drive a motor vehicle on a road if he is in such a position that he cannot have proper control of the vehicle or have a full view of the road and traffic ahead.”
Lawyers warn that drivers risk being prosecuted for an accident in which the car was driving itself. “Someone might step out into the carriageway,” says Nick Freeman, the motoring lawyer known as “Mr Loophole”.
“If your autonomous car doesn’t brake, and that person is killed, then who is to blame? If that car does something wrong, then the driver is still responsible. The buck stops with you. I do not know of any law at the moment that says that is not the case.”
Ministers say they are taking a “step-by-step approach and regulating in waves of reform” — shorthand for a situation in which technology has overtaken the law and politicians have no real idea what to do about it.
If history is a guide, they will follow public opinion, rather than lead it. Simson Garfinkel, an expert on autonomous systems, wrote in the MIT Technology Review: “It will take only a few accidents to stop the deployment of driverless vehicles. This probably won’t hamper advanced autopilot systems, but it’s likely to be a considerable deterrent to the deployment of vehicles that are fully autonomous.”
It’s not yet clear whether Herzberg’s death will have changed the course of history, but certainly the car of the future has hit a bump in the road.
Additional reporting: Dominic Tobin
A brief history of driverless cars
GM FIREBIRD II
This prototype gas turbine car was designed to automatically follow a wire in the ground
CITROËN DS
Modified at the Transport and Road Research Laboratory in Crowthorne, Berkshire, the Citroën would follow two wires in the road
STANFORD CART
This university project could autonomously follow a white line at a top speed of 0.8mph
STANLEY
Also built at Stanford, Stanley was a fully autonomous Volkswagen, which completed a 132-mile desert course without human intervention
AUDI RS7
Audi’s robot racing car lapped the Laguna Seca track in California at the speed of a human driver
Dude, Where’s My Driverless Car?
“A new problem is promised for the traffic policeman by the recent invention of a motor car which is steered and controlled by wireless”
Diss Express (Norfolk), October 7, 1921
“On a 1960s express motorway, a safe distance between cars is maintained by automatic radio control”
Futurama by General Motors, 1939
“As you enter the super-highway, you reach over to your dashboard and push the button marked ‘Electronic Drive’. Selecting your lane, you settle back to enjoy the ride. You may prefer to read … or catch up on work. Fantastic? Not at all”
Electronic Age, January 1958
“Look: no hands! And no driver for that matter. This robot car is the latest project under study by the Road Research Laboratory. It’s all done by computerised electronic impulses relayed to the car. Researchers say that robot cars may well be in everyday use within 30 years”
UK newsreel, 1971
“Driverless cars by year 2000”
The Daily Telegraph, February 19, 1987