New AI tool predicts airport traffic to avert devastating collisions
In managing airport traffic, small errors can cause catastrophe. A group from the CMU Robotics Institute's AirLab used the Pittsburgh Supercomputing Center's Bridges-2 supercomputer to create World2Rules, an AI that draws from airport data and historical crash reports to help human controllers spot collisions before they happen. Their paper is published on the arXiv preprint server.
Why it's important
On March 12, 2026, an air traffic controller told an Air Canada jet that had just landed at JFK Airport in New York to wait before crossing a runway, to avoid an EVA Air jet that was landing. The Air Canada crew acknowledged the instructions.
But they began moving right away, while the EVA jet was still traveling toward them at a high speed. Fortunately, the alert controller transmitted, "Stop stop stop stop" in time. The Air Canada plane stopped, the EVA plane zoomed by it, and there was no collision.
But it had been close.
"The overall idea [is] we've been working on this project to see how we can improve safety in the aviation domain, or [other] safety-critical domains," said Jack Wang, a student working in Sebastian Scherer's AirLab at CMU's Robotics Institute.
"The original idea stemmed from the fact that, as you've seen on the news, runway incursions have been happening … Sometimes, they're minor, but sometimes they can be quite catastrophic."
Wang and Jay Patrikar, another Scherer student, wanted to see if they could build a system that could not only detect airport collisions that were about to happen, but also predict possible future collisions. This could give pilots and controllers extra minutes or seconds to avoid disaster.
To build, train, and test this method, they turned to the Pittsburgh Supercomputing Center's Bridges-2 system, via an allocation from ACCESS, the NSF's network of supercomputers.
How PSC Helped
Prior to World2Rules, the AirLab and the BIG lab jointly developed Amelia-42, a dataset of raw airport surface movement data drawn from two years of Federal Aviation Administration records at 42 U.S. airports.
The team's new goal was for World2Rules to serve as a complementary component in the collaborators' broader collision prediction pipeline. World2Rules would learn safety rules from the Amelia data and use them to interpret trajectory forecasts, explaining its behavior in a way that humans could understand and flagging potential rule violations.
The task would be challenging. To create World2Rules, the CMU scientists looked at two different kinds of AI. Neural models, patterned on a simplified idea of how the human brain works, are good at pulling the gist out of complex data. But their results are a black box. We can't "look under the hood" for the formal guarantees about how they work that we'd like for life-critical functions.
Symbolic methods, on the other hand, are based on symbols that humans can read and understand. But they struggle with imperfect data. Airport records tend to have vast amounts of routine data and only a small amount of data from rare bad incidents. Symbolic methods have a hard time with this kind of large and noisy data set.
Patrikar and Wang decided to pursue a neuro-symbolic AI that combined the strengths of the two methods. Bridges-2 was ideal for their work. For one thing, PSC's management of the system for users made it possible for them to focus on solving their problem rather than running the computer.
Equally important, Bridges-2 handles vast data well. Amelia-42 contains close to 10 terabytes of raw data—ten times as much as the entire drive of a good laptop. Their AI would need to draw from that deep well.
"We basically collect all aircraft surface movement data from airports in the U.S.," Patrikar said. "And that stream is pretty intense. It's, like, 1 megabit per second every day, 24–7, 365 … So it's a ridiculously high amount of data … We used PSC to train large trajectory forecasting models, which was the first of its kind … We don't want to understand that a crash is happening. We want to predict if a crash will happen in the future."
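The scale Patrikar describes can be sanity-checked with back-of-envelope arithmetic, taking the roughly 1 megabit per second figure from the quote above and the two-year window from the Amelia dataset description:

```python
# Back-of-envelope check: a continuous ~1 Mbit/s stream, 24-7, over two years.
BITS_PER_SECOND = 1_000_000          # ~1 megabit per second, per the quote
SECONDS_PER_DAY = 24 * 60 * 60
DAYS = 2 * 365                       # two years of FAA data

total_bits = BITS_PER_SECOND * SECONDS_PER_DAY * DAYS
total_terabytes = total_bits / 8 / 1e12   # bits -> bytes -> decimal terabytes

print(f"{total_terabytes:.1f} TB")   # roughly 7.9 TB, on the order of the ~10 TB cited
```

The result lands within the same order of magnitude as the "close to 10 terabytes" of raw Amelia-42 data mentioned above; the gap is plausibly overhead, metadata, or a stream rate slightly above 1 Mbit/s.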
The team's approach, called World2Rules, is a framework that uses the airport data to generate rules that are consistent and understandable to humans. The method draws on both nominal data and off-nominal crash reports to produce interpretable rules grounded in historical data.
The team had an AI partnership going: Amelia-TF generated trajectory forecasts, and World2Rules learned interpretable safety rules that it used to analyze, verify, and provide explanations for potential collision scenarios within those forecasts.
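The paper's actual rule language and forecaster interface aren't described here, but the division of labor can be illustrated with a toy sketch: a neural forecaster supplies predicted positions, and a symbolic rule checks them. All names, the data schema, the separation threshold, and the rule itself are hypothetical, invented for illustration.

```python
from dataclasses import dataclass
from itertools import combinations
from math import hypot

@dataclass
class Forecast:
    """One predicted aircraft position at a future time step (hypothetical schema)."""
    aircraft_id: str
    t: int        # seconds into the future
    x: float      # surface coordinates, meters
    y: float

# Hypothetical learned rule: two aircraft may never be predicted closer
# than MIN_SEPARATION_M at the same future time step.
MIN_SEPARATION_M = 100.0

def check_separation(forecasts: list[Forecast]) -> list[str]:
    """Return human-readable violations of the separation rule."""
    violations = []
    by_time: dict[int, list[Forecast]] = {}
    for f in forecasts:
        by_time.setdefault(f.t, []).append(f)
    for t, group in sorted(by_time.items()):
        for a, b in combinations(group, 2):
            if hypot(a.x - b.x, a.y - b.y) < MIN_SEPARATION_M:
                violations.append(
                    f"t+{t}s: {a.aircraft_id} and {b.aircraft_id} "
                    f"predicted within {MIN_SEPARATION_M:.0f} m"
                )
    return violations

# In the team's pipeline, a neural forecaster (Amelia-TF) would supply the
# trajectories; these points are made up to show the symbolic check firing.
demo = [
    Forecast("ACA759", 30, 0.0, 0.0),
    Forecast("EVA015", 30, 60.0, 0.0),   # 60 m apart at t+30s -> violation
    Forecast("ACA759", 60, 0.0, 0.0),
    Forecast("EVA015", 60, 500.0, 0.0),  # 500 m apart -> fine
]
print(check_separation(demo))
```

The point of the sketch is the split itself: the hard-to-verify pattern recognition stays in the neural model, while the final judgment is a small, auditable rule whose output a controller can read as a sentence.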
World2Rules was able to recognize unreliable evidence and to identify and discard the kinds of faulty outputs that plague less sophisticated systems. In a direct comparison on recognizing potential collisions, it was 23.6% more accurate than a purely neural AI and 43.2% more accurate than a purely symbolic approach.
The team reported their results at the NASA Formal Methods Symposium in Los Angeles, held May 5–7, 2026.
World2Rules can work even better with more data. The current AI also works from essentially a snapshot of vehicles and their movements at a moment in time. One direction the team would like to move in is to incorporate a time-evolving picture that better deals with uncertainties in what the vehicles will do.
Finally, while World2Rules was designed for airport collision avoidance, the AI can work from any similar data. It should be useful for avoiding bad outcomes in other places where controlling traffic and avoiding conflicts are needed.