My friend Don Norman wrote an op-ed this weekend calling for an FDA-like testing program before autonomous cars are put on the roads in the US. Clearly, some level of government approval is important. But I see lots of problems with using drug testing (FDA = Food and Drug Administration) as a model.
Here is an excerpt from a recent article about testing problems with Uber’s cars, which were the ones involved in the recent fatal accident. After the break, my assessment of how to test such cars before they are allowed on American roads.
Waymo, formerly the self-driving car project of Google, said that in tests on roads in California last year, its cars went an average of nearly 5,600 miles before the driver had to take control from the computer to steer out of trouble. As of March, Uber was struggling to meet its target of 13 miles per “intervention” in Arizona, according to 100 pages of company documents obtained by The New York Times and two people familiar with the company’s operations in the Phoenix area but not permitted to speak publicly about it. Yet Uber’s test drivers were being asked to do more — going on solo runs when they had worked in pairs. And there also was pressure to live up to a goal to offer a driverless car service by the end of the year and to impress top executives.
So Uber’s cars performed more than 400 times worse than Waymo’s (13 miles per intervention versus nearly 5,600)?!
Here are some comments on why I think the FDA is not the right model for regulating Autonomous Vehicles.
1) FAA regulation of new aircraft seems a much more direct analogy than FDA drug regulation.
BUT that level of regulation of autos, while technically possible, would probably slow progress to a crawl for a decade.
The FAA system greatly slows down upgrades and changes, so with rapidly evolving tech the net result is quite possibly worse than no regulation, and is certainly worse than lighter standards would be.
2) There may be some arguments for letting product liability laws, including torts, bear part of the regulatory burden. These laws work through organizations like Underwriters Laboratories, and through insurance companies refusing to insure cars that don’t meet their standards.
But the effects of such laws will depend critically on how Congress and the courts assign liability: to the driver, the automaker, the car’s owner, etc.
3) According to the NYT today, Uber specifically has been having far more near-accidents than other leading AV companies, which suggests that it should have been treated differently, e.g. by state legislators.
4) Testing complex systems like Autonomous Vehicles for very low accident rates of (say) 1 per million miles cannot be done by normal statistical field testing. That is, the pharmaceutical model cannot achieve this level of safety.
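To make the scale of the problem concrete, here is a back-of-the-envelope sketch of my own (the Poisson assumption and the numbers are mine, not from the article): by the standard “rule of three” for zero-failure testing, even demonstrating a rate at or below 1 accident per million miles with 95% confidence takes roughly three million accident-free test miles, and any significant change to the software or sensors arguably resets the count.

```python
import math

# Rough zero-failure "rule of three" sketch (an illustrative assumption,
# not an actual AV certification procedure): model accidents as a Poisson
# process with unknown rate r per mile. If N test miles produce zero
# accidents, the 95% upper confidence bound on r is about 3/N, so
# demonstrating r <= 1 per million miles needs N on the order of 3 million.
target_rate = 1e-6            # accident rate per mile we want to demonstrate
confidence = 0.95
miles_needed = -math.log(1 - confidence) / target_rate
print(f"{miles_needed:,.0f} accident-free miles needed")   # ~3,000,000
```

And that is only for the 1-per-million-mile level; matching human fatal-crash rates, which are roughly one per hundred million miles in the US, would push the required mileage into the hundreds of millions.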
We can partition testing into three pieces:
a) Is the car safe if all of its systems are working ‘properly’? This should be tested by deliberate edge case testing: fog, narrow roads, robotic bike-riding dummies, flat tires, and other known hazards.
b) In use, with normal maintenance, how will systems malfunction?
c) With common malfunctions, repeat the edge case tests of part (a).
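As a rough illustration of why parts (a) and (c) together get expensive, here is a hypothetical sketch; the scenario and malfunction names are invented for illustration and are not drawn from any actual test plan:

```python
from itertools import product

# Hypothetical test matrix: each edge-case scenario from part (a) is rerun
# under every common malfunction identified in part (b), as part (c) asks.
scenarios = ["dense_fog", "narrow_road", "crossing_cyclist_dummy", "flat_tire"]
malfunctions = [None, "degraded_lidar", "camera_glare", "stale_map_data"]  # None = all systems nominal

test_plan = [(s, m) for s, m in product(scenarios, malfunctions)]
print(len(test_plan), "scenario/malfunction combinations")   # 16 here, and it multiplies quickly
```

Even a handful of scenarios and fault modes multiplies into a large matrix, which is one reason part (c) is hard to carry out exhaustively.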
5) What do we do about 10-year-old cars? Various sensors will degrade or break over time.
Regulated commercial aircraft have rigorous methods to deal with this issue, including redundancy, Minimum Equipment Lists which prevent flying if certain equipment is not working, and heavily regulated maintenance methods. It’s hard to imagine such a system being affordable for $30K automobiles.
My conclusion: I’m not sure that the testing problem will have good solutions. It has social, economic, and legal dimensions as well as technical ones. Unfortunately, we may have to accept some tradeoff between lack of safety in the short run, and faster development of this technology (which will lead to greater safety in the long run). Different countries will resolve these conflicts differently.
Hi Roger,
I definitely agree with the need for regulatory oversight, particularly as it pertains to the development and operation of autonomous vehicles (on land, in the air, and on the water). And speaking of FAA-like oversight, should this technology become an integral component of modern-day air transport, the FAA will most certainly oversee/regulate its development and subsequent operation.
As I view the situation, the primary reason or basis for having federal regulatory intervention is that corporations (both past and present) have demonstrated a horrendous track record when it comes to socially, economically, and environmentally responsible self-regulation. This fact was clearly demonstrated in the lead-up to – and aftermath of – the GFC (global financial crisis). And the primary reason(s) for this failure record is/are evidenced in the quote from the NYT article above… That is, there are simply too many “other priorities” (e.g., competitive positioning, investor/executive interests, and individual reward expectations) that take precedence over responsible social/societal and environmental thinking and behaving.
That said, given the prevailing political climate in the country, there’s no guarantee that any federal regulatory body will be able to provide the level of oversight and control needed to serve the best interests of both society and the economy as a whole. Given the present and growing level of lobbying influence and biased politicking that’s taking place – at all levels – within government, any corporations seeking to pursue windfall profits by being on the leading edge of disruptive technologies may not have to be concerned about any real regulatory oversight. In fact, under the prevailing conditions, such organizations might even be inclined to lobby in favor of what would amount to “fake” or “neutered” regulatory oversight.
Finally, when it comes to the potential for having to accept some trade-off(s) between lack of safety in the short run and faster development of this technology over the long run, there’s absolutely no law of cause and effect that could/would guarantee such an outcome. Quite the contrary, higher risk acceptance early on is more than likely going to result in a greater tendency toward higher risk acceptance later on. EVEN IN A FEDERAL AGENCY SUCH AS NASA, which has a long-standing history of risk control and mitigation, there are well-documented instances where “other PRIORITIES” have taken precedence over well-established and highly-recognized risk mitigation first principles and practices, thereby resulting in highly-visible, program/progress-threatening, and very costly incidents.
Bottom line: As has been the case in all successful endeavors to develop and introduce/utilize (commercially or otherwise) disruptive technologies, the government has either taken on the responsibility itself or has provided a high degree of true regulatory oversight.