Boeing 737 MAX Not Too Big To Have Failed, Driverless Cars Not Too Small To Fail, Too
Lessons can be learned from the Boeing 737 MAX and applied to driverless cars. Photo credit: ASSOCIATED PRESS
The difficulties facing the Boeing 737 MAX 8 have dominated the airline industry news of late, and the story of what went wrong and how to fix it is still evolving. At this stage, it is nonetheless possible to draw on what has so far emerged and find a parallel to the driverless car realm, turning the lessons learned the hard way about the MAX into an aid and a forewarning for autonomous car development.
As the noted philosopher George Santayana observed, those who cannot remember the past are condemned to repeat it.
Boeing 737 MAX Particulars
The Boeing 737 has been around since the late 1960s and has evolved over time, leading to the NG (Next Generation) series of the 1990s and then to the MAX series, the successor to the NG. For the MAX, Boeing opted to retrofit the 737 by mounting the engines farther forward and higher than in prior designs. This placement could cause an upward pitching effect in some flight circumstances, and so a new hardware/software system was added to push the nose down when presumably desired, a system known as MCAS (Maneuvering Characteristics Augmentation System).
The question arises as to when MCAS should try to push down the nose of the plane, a quite serious action and one that needs to be undertaken with great care and sound rationale. The MCAS hardware and software need to know the angle of the plane to ascertain whether a nose-down nudge might be warranted, and thus the Angle of Attack (AOA) sensors mounted on the plane were chosen to provide that data to MCAS.
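To make the risk concrete, here is a deliberately naive sketch of what a single-sensor nose-down trigger looks like in code. This is purely illustrative, not Boeing's actual MCAS logic; the function name and the threshold value are invented assumptions.

```python
# Hypothetical sketch of a single-sensor trigger; NOT the actual MCAS design.
# The threshold and names are illustrative assumptions only.

NOSE_DOWN_AOA_THRESHOLD_DEG = 15.0  # hypothetical trigger angle

def should_push_nose_down(aoa_reading_deg: float) -> bool:
    """Decide a nose-down command from one AOA sensor reading.

    Relying on a single sensor means that one faulty reading,
    by itself, can trigger a needless nose-down command.
    """
    return aoa_reading_deg > NOSE_DOWN_AOA_THRESHOLD_DEG

# A stuck sensor reporting 40 degrees while the true angle is 5 degrees
# would still command nose-down, with no second opinion to stop it.
print(should_push_nose_down(40.0))  # True, even if the reading is bogus
print(should_push_nose_down(5.0))   # False
```

The point of the sketch is the structural flaw: the decision hinges entirely on one input, with no cross-check against a second sensor.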
On paper, this all sounds hunky-dory. Seemingly good news, an added system to augment the human pilots and aid in dealing with a known characteristic of the plane. Makes sense.
As with most things in life, though, the devil is in the details. You can implement this overall notion in a multitude of ways, some of which might yield a more, or less, capable and suitable feature. Tough design choices come into play. Currently, there are various governmental efforts underway to trace how the particular implementation approach was determined, which will likely provide added lessons learned.
You are undoubtedly aware that the Boeing 737 MAX 8 was sadly involved in two fatal crashes, one on October 29, 2018, which was Lion Air flight 610, and another fatal crash on March 10, 2019, the Ethiopian Airlines flight 302.
So far, it appears that the MCAS was integral to those two crashes.
Lessons Learned For Driverless Cars
There are several lessons that can already be gleaned from what is known today about the situation, or at least what has been reported in the media about the matter. Even if later reports recast what was previously thought, or add new elements, these lessons remain worthwhile to observe, regardless of how precisely they end up applying to the Boeing 737 MAX 8.
• Retrofits Versus Starting Anew
Some have pointed fingers at the retrofit of the NG and suggested that the choice of engine placement for the MAX led to a “problem” that never should have arisen (the nose upward pitching effect), one that a start-from-scratch approach presumably would not have had.
You might be under the belief that driverless cars are not prone to a retrofit since they are so new, but you’d be mistaken. Many of the driverless car designs of today are based on prior designs, logically so, attempting to build upon what already works.
Lesson #1: Auto makers and tech firms making driverless cars need to be mindful of how their retrofits can be a boon or a bane in terms of the autonomous capability and safety.
• Sensors Criticality
It has been reported that MCAS apparently relied on just one of the two AOA sensors on the plane, rather than trying to use both, and that MCAS might have been misled by a faulty AOA sensor reading, which told MCAS the angle was higher than it truly was and spurred MCAS to push the nose downward, needlessly and, worse, dangerously.
Driverless cars, meanwhile, are chock full of sensors: cameras aplenty, radar, ultrasonic sensors, LIDAR (Light Detection and Ranging), and so on. They are the eyes and ears of the autonomous car, without which the self-driving car is blind and would be a menace to all while in motion.
Lesson #2: Driverless cars need to employ multiple sensors that act redundantly in case any of them falter or fail; there should be no single point of failure that can doom the driving effort.
Lesson #3: The sensor fusion that combines the sensor readings needs to do so with aplomb, identifying which sensors are working properly and which are misreporting the driving environment, without being fooled or misled by out-of-whack sensors.
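One common way to honor both lessons is to fuse redundant readings with a consensus measure and flag any sensor that disagrees sharply with that consensus, rather than trusting any single input. The sketch below is a minimal illustration under assumed names and tolerances; it is not drawn from any production driverless car stack.

```python
# Illustrative fault-tolerant fusion: take the median of redundant
# readings and flag sensors that stray too far from the consensus.
from statistics import median

def fuse_readings(readings, tolerance):
    """Return (fused_value, suspect_sensor_indices).

    The median tolerates a minority of faulty sensors; any reading
    farther than `tolerance` from the consensus is flagged as suspect
    rather than silently trusted.
    """
    consensus = median(readings)
    suspects = [i for i, r in enumerate(readings)
                if abs(r - consensus) > tolerance]
    return consensus, suspects

# Three redundant angle sensors; the third is stuck at a bogus value.
value, suspects = fuse_readings([5.1, 4.9, 40.0], tolerance=2.0)
print(value)     # 5.1 -- the median ignores the outlier
print(suspects)  # [2] -- the stuck sensor is flagged, not obeyed
```

The design choice here is that a disagreeing sensor triggers a flag for diagnosis instead of a control action, which is precisely the behavior a single-sensor trigger cannot provide.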
• Human Factors Given Their Due
There is controversy associated with the piloting of the Boeing 737 MAX in that some contend the pilots might not have had sufficient training about MCAS, plus MCAS was apparently set up to proceed without necessarily alerting the pilots, and MCAS was given wide latitude in how often and how far it could nudge down the nose of the plane on its own. These are vital design choices that relate to the Human Factors involved in co-sharing the human-machine flying effort.
For driverless cars below Level 5 (the highest level, considered true autonomy that does not require the presence of a human driver), the lesser levels essentially involve co-sharing the driving task, similar in a manner to how a pilot might co-share the flying with MCAS.
Lesson #4: A co-sharing driverless car needs to ensure that the human driver is aware of what the automation is doing versus what the human is expected to do; it is a somber, life-minding dance for both parties.
Lesson #5: It needs to be abundantly clear what training the human driver requires for the co-shared driving effort, and that training has to be undertaken and reinforced over time.
Lesson #6: Design choices for co-shared human-machine driverless cars must take into account human driver frailties, including lack of human attention to the task, human speed-of-response to urgencies, etc.
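The co-sharing lessons above can be sketched as a takeover-request policy: the automation alerts the human, waits a bounded time for a response, and falls back to a safe maneuver rather than assuming the human reacted. All names, timings, and actions below are hypothetical assumptions for illustration, not any manufacturer's actual design.

```python
# Hypothetical takeover-request policy for co-shared driving (below
# Level 5). Timings and action names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TakeoverRequest:
    issued_at_s: float          # when the alert was raised
    response_deadline_s: float  # budget for human reaction, e.g. 8 seconds

    def action(self, now_s: float, human_hands_on_wheel: bool) -> str:
        """Choose what the automation does, never assuming the human responded."""
        if human_hands_on_wheel:
            return "hand_over_control"
        if now_s - self.issued_at_s < self.response_deadline_s:
            return "keep_alerting"
        # The human may be inattentive or too slow; plan for that frailty.
        return "minimal_risk_maneuver"  # e.g., slow down and pull over

req = TakeoverRequest(issued_at_s=0.0, response_deadline_s=8.0)
print(req.action(2.0, human_hands_on_wheel=False))   # keep_alerting
print(req.action(10.0, human_hands_on_wheel=False))  # minimal_risk_maneuver
print(req.action(3.0, human_hands_on_wheel=True))    # hand_over_control
```

The key design choice, echoing Lesson #6, is that the system's default when the deadline lapses is a safe fallback, not continued reliance on a human who may never have noticed the alert.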
I’ve touched upon just some of the key lessons learned based on the recent flight tragedies and assure you that there are many more such lessons hidden within the matter.
Beyond using those lessons to improve airplanes and airplane systems, it is important and useful to extend those automation insights into the realm of self-driving cars. Nothing is too big or too small to fail, and it is prudent to consider how failings can be prevented or mitigated by the design and implementation choices made.