Air-Guardian: The Human-Machine Partnership in Aviation
Imagine being on an airplane with two pilots – one human and one computer. While both have their hands on the controls, they have different areas of focus. If they both notice the same thing, the human gets to steer. But if the human gets distracted, the computer quickly takes over. This is the concept behind Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). It acts as a proactive copilot, working in harmony with the human pilot to enhance safety and attention during critical moments.
Determining Attention: Human and Machine
So how does Air-Guardian determine attention? For the human pilot, it uses eye-tracking technology; for the machine, it relies on saliency maps, which highlight the regions of an image that most influence a model's decision and thereby make an otherwise opaque algorithm easier to interpret. By monitoring these attention markers, Air-Guardian can catch early signs of potential risk, rather than intervening only after a safety breach the way traditional autopilot systems do.
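To make the idea of a saliency map concrete, here is a minimal, simplified sketch. Real systems such as Air-Guardian derive saliency from a trained neural network's internal activations (see the discussion of VisualBackProp below); this toy version, written for illustration only, just uses image gradient magnitude as a stand-in, so that regions of rapid visual change light up while flat regions stay dark. The function name and the example frame are hypothetical.

```python
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Toy saliency map: gradient magnitude of a grayscale image.

    A real saliency method backpropagates through a network's layers;
    this stand-in simply highlights regions of rapid intensity change,
    normalized to [0, 1].
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.sqrt(gx ** 2 + gy ** 2)
    peak = mag.max()
    return mag / peak if peak > 0 else mag

# A frame with a bright square "object" on a dark background:
frame = np.zeros((32, 32))
frame[10:20, 10:20] = 1.0
sal = saliency_map(frame)

# Attention concentrates on the object's edges, not the flat interior:
print(sal[10, 10] > sal[15, 15])  # → True
```

The normalized map can then be compared against the pilot's gaze heatmap, which is the comparison that drives Air-Guardian's intervention logic.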
Implications and Future Applications
Air-Guardian’s potential goes beyond aviation. Similar cooperative control mechanisms could one day be applied to cars, drones, and other robots. Because the system is differentiable, its cooperative layer can be trained end to end along with the rest of the pipeline, and its dynamic attention mapping keeps the partnership between human and machine balanced.
Field Tests and Success
In field tests, both the pilot and Air-Guardian made decisions from the same raw images while navigating to a target waypoint. Success was measured by the cumulative reward earned during flight and the length of the path to each waypoint. Air-Guardian reduced the flights’ risk level and increased the rate at which target points were reached.
The Strength of Air-Guardian’s Technology
Air-Guardian’s strength lies in its foundational technology. It uses an optimization-based cooperative layer driven by the visual attention of both human and machine, and it incorporates closed-form continuous-time (CfC) neural networks, a class of liquid networks known for their ability to decipher cause-and-effect relationships. The system’s attention maps are further refined by the VisualBackProp algorithm, which helps identify a network’s focal points within an image.
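The cooperative layer's core idea, arbitrating control authority from attention agreement, can be sketched in a few lines. The real Air-Guardian layer is optimization-based; the linear blend below is only a hypothetical illustration, and every name in it (the function, the threshold, the scalar commands) is an assumption made for the example. When the two attention maps overlap strongly, the human command passes through; as they diverge, authority shifts toward the machine.

```python
import numpy as np

def cooperative_control(human_att, machine_att,
                        human_cmd, machine_cmd,
                        threshold=0.5):
    """Hypothetical sketch of an attention-based cooperative layer.

    Normalizes both attention maps, measures their overlap (1.0 means
    identical focus, 0.0 means disjoint focus), and blends the two
    control commands: full human authority above the overlap threshold,
    smoothly shifting to the machine as agreement drops.
    """
    h = human_att / (human_att.sum() + 1e-9)
    m = machine_att / (machine_att.sum() + 1e-9)
    overlap = np.minimum(h, m).sum()          # in [0, 1]
    alpha = min(overlap / threshold, 1.0)     # human authority weight
    return alpha * human_cmd + (1 - alpha) * machine_cmd

# When pilot and machine focus on the same region, the pilot steers:
pilot_focus = np.array([1.0, 0.0, 0.0, 0.0])
print(cooperative_control(pilot_focus, pilot_focus, 1.0, -1.0))  # → 1.0

# When the pilot's gaze drifts elsewhere, the machine takes over:
drifted = np.array([0.0, 0.0, 0.0, 1.0])
print(cooperative_control(drifted, pilot_focus, 1.0, -1.0))      # → -1.0
```

Because every step of this blend is differentiable, a layer of this general shape can be trained jointly with the rest of an end-to-end pipeline, which is the property the article highlights.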
Improving the Human-Machine Interface
For mass adoption in the future, the human-machine interface of Air-Guardian needs refinement. Feedback suggests that an intuitive indicator, such as a bar, could be used to signify when the guardian system takes control.
A Safer Sky with Air-Guardian
Air-Guardian represents a step toward safer skies, providing a reliable safety net when human attention wavers. By augmenting human expertise with machine learning, this human-centric approach to AI-enabled aviation reduces operational errors and strengthens both safety and collaboration in the cockpit.
Recognition and Funding
This research was funded in part by the U.S. Air Force (USAF) Research Laboratory, the USAF Artificial Intelligence Accelerator, the Boeing Co., and the Office of Naval Research. The views expressed in the findings do not necessarily reflect those of the U.S. government or the USAF.