Self-braking cars, or vehicles equipped with Autonomous Emergency Braking Systems (AEBS), represent a significant leap in automotive safety technology. These systems are designed to automatically apply the brakes to prevent or mitigate collisions, acting as a crucial safety net. But how are self-braking cars programmed to react in those critical moments? The investigation into a fatal accident involving a self-driving Uber vehicle in Tempe, Arizona, provides a stark example of the complexities and potential pitfalls in programming these life-saving systems.
In March 2018, an Uber self-driving SUV struck and killed a pedestrian. A preliminary report by the National Transportation Safety Board (NTSB) revealed a concerning detail: the Uber vehicle was not programmed to automatically engage emergency braking in scenarios like the one that unfolded. Sensors on the vehicle detected the pedestrian, Elaine Herzberg, crossing the street outside of a crosswalk at night. The system correctly identified the need for emergency braking to avoid a collision. However, according to Uber, this crucial emergency braking function was intentionally disabled while the vehicle operated in autonomous mode. This deactivation was a deliberate programming choice, intended to “reduce the potential for erratic vehicle behavior.”
This revelation immediately raises the question: how are self-braking systems typically programmed, and why was Uber’s system configured this way? Normally, autonomous emergency braking systems rely on a suite of sensors – radar, lidar, and cameras – to constantly monitor the vehicle’s surroundings. These sensors feed data into sophisticated algorithms that are programmed to identify potential hazards, such as pedestrians, cyclists, or other vehicles, and to assess the risk of collision.
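To make that flow concrete, here is a minimal Python sketch of how detections of one object from several sensors might be combined into a single tracked object. The names (SensorDetection, TrackedObject, fuse) and the simple averaging are illustrative assumptions, not how any production stack is built; real systems use probabilistic tracking and far richer object models.

```python
from dataclasses import dataclass
from enum import Enum
from statistics import mean


class ObjectClass(Enum):
    UNKNOWN = 0
    PEDESTRIAN = 1
    BICYCLE = 2
    VEHICLE = 3


@dataclass
class SensorDetection:
    sensor: str                # "radar", "lidar", or "camera"
    distance_m: float          # range to the object along the vehicle's path
    closing_speed_mps: float   # positive when the gap is shrinking
    object_class: ObjectClass  # what this sensor thinks the object is
    confidence: float          # classifier confidence in [0, 1]


@dataclass
class TrackedObject:
    distance_m: float
    closing_speed_mps: float
    object_class: ObjectClass
    confidence: float


def fuse(detections: list[SensorDetection]) -> TrackedObject:
    """Naively fuse detections of one object from several sensors.

    Real systems use probabilistic tracking (e.g. Kalman filters) rather
    than averaging; this is only meant to illustrate the idea.
    """
    best = max(detections, key=lambda d: d.confidence)
    return TrackedObject(
        distance_m=mean(d.distance_m for d in detections),
        closing_speed_mps=mean(d.closing_speed_mps for d in detections),
        object_class=best.object_class,
        confidence=best.confidence,
    )
```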
The programming behind AEBS involves several key stages:
- Object Detection and Classification: The system must first accurately detect objects in its vicinity. This involves processing sensor data to identify shapes, sizes, and movement patterns that correspond to potential hazards. Crucially, the system needs to classify these objects correctly – distinguishing between a pedestrian, a bicycle, another car, or a stationary object like a sign. In the Uber case, the system never settled on a classification for the pedestrian, who was walking a bicycle across the road: it identified her first as an unknown object, then as a vehicle, and then as a bicycle, with shifting predictions of her path each time.
- Risk Assessment and Collision Prediction: Once an object is detected and classified, the system calculates the risk of collision. This involves analyzing the object’s speed, trajectory, and distance relative to the self-driving car. Algorithms are programmed to predict potential collision paths and estimate the time to impact. In the Uber accident, the system determined that emergency braking was needed just 1.3 seconds before impact.
- Decision-Making and Action Initiation: If the risk of collision is deemed high enough, the AEBS is programmed to initiate a response. This typically involves a staged approach. First, the system might provide warnings to the driver (in non-autonomous vehicles or as a backup in autonomous testing). If the driver doesn’t react, or if the system is programmed for full autonomy, it will then automatically engage the brakes. The intensity of braking can be modulated depending on the severity of the situation, aiming for maximum deceleration while maintaining vehicle stability. (A simplified sketch of this staged logic follows the list.)
- System Override and Safety Driver Interaction: In the context of autonomous vehicle testing, like Uber’s program at the time of the accident, a safety driver is present to monitor the system and intervene if necessary. However, the NTSB report highlighted that Uber’s system was not designed to alert the safety driver to the need for emergency braking. The responsibility for initiating braking was entirely placed on the safety driver, who, in this instance, was found to be looking away from the road in the moments leading up to the crash. Furthermore, the Volvo XC90 SUV used by Uber had its own built-in automated braking system, “City Safety,” but this was also disabled when the vehicle was in self-driving mode.
The Uber accident underscores a critical point: the programming of self-braking systems is not just about technical capability, but also about strategic choices and safety philosophy. Uber’s decision to disable emergency braking in autonomous mode, ostensibly to prevent “erratic vehicle behavior,” proved to be a fatal flaw in their system’s design. As Bryant Walker Smith, a professor studying autonomous vehicle regulations, pointed out, disabling Volvo’s safety systems and placing the entire braking responsibility on a single, potentially fallible safety driver seems like a “serious design omission.”
The challenge in programming effective AEBS lies in striking a balance. The system needs to be sensitive enough to react to genuine emergencies and prevent collisions. However, it also needs to be robust enough to avoid false positives that could lead to unnecessary and potentially dangerous sudden braking maneuvers. “Erratic vehicle behavior,” the concern cited by Uber, could stem from overly sensitive systems reacting to harmless scenarios. The programming must therefore incorporate sophisticated algorithms that can accurately differentiate between genuine threats and non-threatening situations.
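One common way to guard against single-frame false alarms, sketched below under the assumption of a fixed-rate perception loop, is to require a threat to persist for several consecutive frames before braking is requested. The BrakeTrigger class and its three-frame default are hypothetical; the point is the trade-off: more required confirmations mean fewer phantom braking events but a slightly longer reaction time.

```python
from collections import deque


class BrakeTrigger:
    """Request emergency braking only when a threat persists across frames.

    Requiring several consecutive confirmations filters out single-frame
    sensor noise at the cost of a short extra reaction delay. The class
    name and the default of three confirmations are illustrative.
    """

    def __init__(self, confirmations_needed: int = 3):
        self.confirmations_needed = confirmations_needed
        self.recent = deque(maxlen=confirmations_needed)

    def update(self, threat_detected: bool) -> bool:
        self.recent.append(threat_detected)
        return (len(self.recent) == self.confirmations_needed
                and all(self.recent))


trigger = BrakeTrigger(confirmations_needed=3)
# Braking is requested only once the threat has been confirmed on three
# consecutive frames (frames 3 and 4 in this sequence).
for frame, threat in enumerate([False, True, True, True, True]):
    if trigger.update(threat):
        print(f"frame {frame}: engage emergency braking")
```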
The aftermath of the Uber accident led to significant scrutiny of autonomous vehicle safety protocols and programming. Uber suspended its self-driving testing in Arizona and initiated a safety review of its autonomous vehicle program. The incident served as a sobering reminder that while self-braking technology holds immense promise for improving road safety, its effectiveness depends entirely on the quality and thoughtfulness of its programming, and on the overall safety architecture within which it operates. As autonomous vehicle technology continues to develop, ensuring robust and reliable emergency braking systems remains a paramount concern for manufacturers, regulators, and the public alike.