At The Information, Amir Efrati has a scoop[1]: the Uber car that killed a pedestrian[2] wheeling a bicycle across the road in Tempe, Arizona in March actually did see her. Its software just decided she was a false positive and accordingly did not stop the car. Anyone familiar with AI classifiers understands the problem, which is where to set the threshold between 'ignore' and 'stop'.
Because this is a car rather than a photo-tagging algorithm trying to differentiate between cats and giraffes, the stakes are higher. Set the threshold too low -- that is, tell the car to stop for too many false positives -- and you annoy your passengers by braking for every shadow and plastic bag. Set it too high -- too many false negatives -- and you kill the pedestrian.
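The trade-off can be sketched in a few lines of code. This is a toy illustration only -- the scores, labels, and thresholds below are invented, and nothing here models Uber's actual perception stack:

```python
# Toy illustration of the detection-threshold trade-off.
# All numbers are hypothetical; this does not model any real system.

def error_counts(scores, labels, threshold):
    """Count false positives (needless stops) and false negatives
    (missed hazards) for a given confidence threshold."""
    decisions = [s >= threshold for s in scores]
    fp = sum(1 for d, real in zip(decisions, labels) if d and not real)
    fn = sum(1 for d, real in zip(decisions, labels) if not d and real)
    return fp, fn

# Detector confidence that each detected object is a real obstacle.
scores = [0.05, 0.20, 0.35, 0.55, 0.70, 0.90]
# Ground truth: only the last three are genuine hazards.
labels = [False, False, False, True, True, True]

# A permissive threshold stops for shadows and bags...
print(error_counts(scores, labels, 0.10))  # -> (2, 0): 2 needless stops
# ...a strict one sails past real hazards.
print(error_counts(scores, labels, 0.80))  # -> (0, 2): 2 missed hazards
```

Any single threshold merely trades one kind of error for the other; the only way to reduce both at once is a better classifier.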
This kind of conundrum is one of the reasons Christian Wolmar gives in his new book, Driverless Cars: On a Road to Nowhere[3], for believing that driverless cars are not going to be filling our streets by the end of this year -- or by 2020, either. Both industry announcements and media reports, he argues, are filled with hype and optimism but little in the way of measured skepticism. This he sets out to provide, calling the present state of the industry "more Blade Runner 2049 than News at Ten".
For one thing, despite billions in investment from venture capitalists, car companies and technology companies, self-driving cars today are more myth than reality. Last year, Uber's human intervention rate leaked to Recode[4]: a human had to take over