
The excitement about artificial intelligence (AI) has led to inflated expectations for what machine learning could do for automated driving. But there are fundamental differences between AI's Large Language Models (LLMs), which manipulate sentences, and machines that drive on public roads. Automated driving is a serious safety issue, not just for the passengers in driverless cars but for everyone else on the road. Software that drives a vehicle must meet far higher standards of reliability and accuracy than the LLMs supporting desktop or mobile apps.
Despite well-founded concerns about human driving mistakes, the rate of serious traffic crashes in the U.S. is remarkably low. According to the National Highway Traffic Safety Administration's (NHTSA) traffic statistics, a fatal crash occurs roughly once every 3.6 million hours of driving, and an injury crash roughly once every 61,000 hours of driving. That is equivalent to one fatal crash in about 411 years of continuous 24/7 driving, and one injury crash in about seven years. It is extremely difficult for software-intensive systems to achieve a comparably long mean time between failures, especially if they are mass-produced and affordable.
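As a quick back-of-the-envelope check of those equivalences, the short Python sketch below converts the crash intervals cited above into years of continuous driving; the only assumption is the standard 8,760 hours in a year.

```python
# Back-of-the-envelope check: convert crash intervals (driving hours per crash)
# into equivalent years of continuous 24/7 driving.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

hours_per_fatal_crash = 3_600_000   # ~one fatal crash per 3.6 million driving hours
hours_per_injury_crash = 61_000     # ~one injury crash per 61,000 driving hours

print(f"Fatal crash:  ~{hours_per_fatal_crash / HOURS_PER_YEAR:.0f} years of 24/7 driving")   # ~411 years
print(f"Injury crash: ~{hours_per_injury_crash / HOURS_PER_YEAR:.1f} years of 24/7 driving")  # ~7 years
```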
The problems that Tesla and Cruise have had with NHTSA and with California's safety regulators illustrate some of automated driving's challenges. These problems are not purely technical; they show the risks of the Silicon Valley approach of "moving fast and breaking things." Developing safe systems is a slow process that requires meticulous attention to detail and patience, two qualities that do not combine well with speed. Our vehicles should not be breaking anything, least of all people.
The U.S. needs a rigorous safety regulatory framework to ensure that automated driving is safe. Such a framework would allow the industry to earn public trust once the technology has been thoroughly vetted and approved by safety experts and regulators. Because it is safety-critical, software that controls vehicles must operate at an unprecedented level of reliability. It must be possible to prove to safety regulators, and to explain to the general public, that the software actually improves traffic safety rather than making it worse. That software cannot rely solely on AI-based machine learning; it must also incorporate explicit safety algorithms. The experiences of Tesla and Cruise warn us of the need for this.
In Tesla's case, NHTSA is investigating the safety of the company's Level 2 partial driving automation systems, which are designed to control speed and steering under constant driver supervision and only under specific road and traffic conditions. On December 12 of last year, NHTSA announced an agreement with Tesla on a recall because the company had not included adequate safeguards to prevent misuse by drivers. Unlike similar driving automation features offered by Ford and General Motors, Tesla's Autopilot does not use direct infrared video monitoring of drivers' gaze to determine whether they are vigilant in supervising the system's operation. The software also allows the system to operate anywhere, not just on the restricted-access freeways it was originally designed for. Relatively simple modifications could have ensured a reasonable level of driver attentiveness and limited the system's use to road conditions where it can operate safely. The company has refused to do so, implementing only some additional warnings in Autopilot (via an over-the-air software update) to try to discourage abuse. Stronger regulatory measures are needed to force Tesla to "geofence" the system so that it works only where it has been shown to be safe, and only when the cameras indicate that the driver is looking ahead for hazards the system may not have recognized.
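To make the "geofence plus driver monitoring" idea concrete, here is a minimal, purely illustrative Python sketch of how such a gating check could be structured. The names used here (such as road_type_approved and driver_gaze_on_road) are hypothetical and do not represent any manufacturer's actual implementation.

```python
# Illustrative sketch only: allow a Level 2 driving automation feature to engage
# only inside an approved operating area (the "geofence") and only while driver
# monitoring confirms the driver is watching the road. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class VehicleState:
    road_type_approved: bool    # e.g., restricted-access freeway in the validated map
    driver_gaze_on_road: bool   # e.g., from an infrared driver-monitoring camera

def automation_may_engage(state: VehicleState) -> bool:
    """Engage only when both the geofence and driver-attention conditions hold."""
    return state.road_type_approved and state.driver_gaze_on_road

# Example: the system refuses to engage on an unapproved road,
# even when the driver is attentive.
print(automation_may_engage(VehicleState(road_type_approved=False,
                                         driver_gaze_on_road=True)))  # False
```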
The California Department of Motor Vehicles revoked Cruise's authority to operate driverless ride-hailing services in San Francisco because the company did not provide a complete and timely report of an October 2 incident in which one of its vehicles dragged a pedestrian who was trapped underneath it. The victim was severely injured. The incident forced Cruise to undergo a thorough internal review of its operations, which revealed serious problems with the company's safety culture and the way it interacted with the public and regulators. Cruise had adopted a Silicon Valley-style culture that prioritized speed and expansion at the expense of safety. Unlike other companies developing driverless ride-hailing services, it also lacked an effective corporate safety management system and a chief safety officer. Cruise made safety a major talking point, but the company did not give safety a high priority when making important decisions.
While automated driving technology has not yet matured, and there is not yet enough data to define precise performance-based regulations, progress can be made in the short term by implementing basic requirements at the state or national level to enhance both safety and public perceptions. Developers of automated driving systems (ADS) and fleet operators should be required to ensure that an ADS cannot operate in areas where its behavior has been shown to be unsafe; to report all crashes and near-misses, as well as high-g maneuvers and takeovers by human drivers; and to implement an audited safety management system. They should also develop comprehensive safety cases to be reviewed and approved by state or federal regulators before deployment. Each safety case must identify the reasonably foreseeable hazards and explain how they have been mitigated, backed by quantitative evidence from real-world testing.
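As one way to picture what such reporting could cover, the sketch below shows a hypothetical minimal incident record for the event categories named above (crashes, near-misses, high-g maneuvers, and human takeovers). The schema and field names are illustrative assumptions, not any regulator's actual reporting format.

```python
# Hypothetical sketch of a minimal incident record for the event categories
# mentioned above; not an actual regulatory reporting schema.

from dataclasses import dataclass
from enum import Enum

class EventType(Enum):
    CRASH = "crash"
    NEAR_MISS = "near_miss"
    HIGH_G_MANEUVER = "high_g_maneuver"
    HUMAN_TAKEOVER = "human_takeover"

@dataclass
class IncidentReport:
    event_type: EventType
    timestamp_utc: str               # ISO 8601, e.g. "2024-01-15T08:30:00Z"
    latitude: float
    longitude: float
    peak_acceleration_g: float       # magnitude of the hardest braking or steering input
    ads_engaged: bool                # whether the automated driving system was active
    narrative: str                   # free-text description for reviewers

example = IncidentReport(EventType.NEAR_MISS, "2024-01-15T08:30:00Z",
                         37.77, -122.42, 0.6, True,
                         "Hard braking to avoid a cyclist entering the crosswalk.")
```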
It will take time for the industry to earn the public's trust that automated driving is safe. Doing so will require regulations that set minimum standards for the safe development and operation of these systems and that require the disclosure of enough safety-relevant information for independent safety experts and regulators to review.