Scientists have predicted for some time that computers would one day replace humans in the cockpit, but artificial intelligence simply hadn’t reached the point where it could compete successfully against a human opponent.
But now, at least in simulators, it has. ALPHA, the AI, repeatedly bested a retired Air Force fighter pilot while running on a tiny, cheap Raspberry Pi computer of the kind often used to teach children the basics of coding.
Retired Air Force Col. Gene “Geno” Lee helped guide ALPHA’s programming and flew against it in a series of simulated air battles, every one of which he lost once he was facing the mature version of ALPHA.
At first, ALPHA was used as a tool to create better simulators for training pilots and testing tactics. ALPHA took control of “Red” fighters flying against a “Blue” force. Red typically held a numerical advantage, while Blue had a technological edge: longer-range missiles, a larger payload, and an AWACS flying in support.
The AWACS is an airborne radar and control aircraft that gives the Blue force better situational awareness and targeting data.
In the initial matchups, ALPHA’s Red team won more than it lost but took heavy losses. Then Lee and the programmers at Psibernetix, the company that created ALPHA, began making adjustments to its programming, and ALPHA began to win. Soon, it won every engagement.
So Lee decided to take control of a Blue fighter personally to try to give that team an advantage. He flew engagement after engagement against ALPHA.
ALPHA won every fight, and whenever Lee stayed in the air for a protracted period, he was shot down.
Lee told the researchers that ALPHA was “the most aggressive, responsive, dynamic and credible AI (he’s) seen to date.”
Lee later told UC Magazine reporter M.B. Reilly that, after flying missions against ALPHA, “I go home feeling washed out. I’m tired, drained and mentally exhausted. This may be artificial intelligence, but it represents a real challenge.”
Now, ALPHA does have some advantages of its own. It runs on a “Genetic Fuzzy Tree” (GFT) system, an approach that works more like a human brain than most software does. Rather than trying to calculate every variable when computing a solution, it tracks key bits of data, forms generalities, and breaks a big decision into a cascade of smaller, simpler ones.
But it can form decisions based on those generalities 250 times faster than a human can blink: when controlling four aircraft, it can take in all available sensor data, create a new plan of action, and adjust each jet’s controls to implement that plan every 6.5 milliseconds.
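To make that idea concrete, here is a minimal sketch of a fuzzy-style decision in Python. It is purely illustrative: the variable names, thresholds, and membership functions are invented for the example and are not taken from ALPHA’s actual Genetic Fuzzy Tree code.

```python
# Illustrative sketch of a fuzzy-style decision, NOT ALPHA's actual code.
# All names, thresholds, and membership functions here are invented.

def membership_close(distance_km):
    """Fuzzy degree (0 to 1) to which an opposing jet counts as 'close'."""
    return max(0.0, min(1.0, (20.0 - distance_km) / 20.0))

def membership_closing_fast(closing_speed_mps):
    """Fuzzy degree (0 to 1) to which that jet is 'closing fast'."""
    return max(0.0, min(1.0, closing_speed_mps / 600.0))

def evade_or_engage(distance_km, closing_speed_mps):
    """Combine a few generalized inputs instead of every raw sensor value.

    A fuzzy tree splits the overall decision into small sub-decisions,
    each of which looks only at the handful of variables it needs.
    """
    danger = min(membership_close(distance_km),
                 membership_closing_fast(closing_speed_mps))
    return "evade" if danger > 0.5 else "engage"

print(evade_or_engage(distance_km=8.0, closing_speed_mps=500.0))   # -> evade
print(evade_or_engage(distance_km=40.0, closing_speed_mps=500.0))  # -> engage
```

Because each sub-decision stays this small, evaluating the whole tree is cheap, which helps explain how a millisecond-scale replanning loop is feasible even on modest hardware like a Raspberry Pi.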
This allows ALPHA to constantly choreograph the jets’ movements so they cover one another. If one pair of Red planes is forced to evade and is in danger, ALPHA can instantly direct a second pair to move into position on the attackers.
Researchers believe that if ALPHA were split between two computers, one handling sensor data and the other computing actions, it could update its plans and adjust flight paths 1,100 times per second.
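For a rough sense of scale, here is the back-of-the-envelope arithmetic implied by those figures; it is illustrative only, using the numbers quoted above rather than any new benchmark.

```python
# Rough arithmetic based on the figures quoted in the article; purely illustrative.
single_cycle_s = 0.0065                           # 6.5 ms per sense-plan-act cycle today
cycles_per_second_single = 1 / single_cycle_s     # roughly 154 updates per second

split_updates_per_second = 1100                   # projected rate with sensing and planning split
split_cycle_ms = 1000 / split_updates_per_second  # roughly 0.9 ms per update

print(f"{cycles_per_second_single:.0f} updates/s now vs. {split_updates_per_second} projected "
      f"({split_cycle_ms:.2f} ms per update)")
```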
The success of ALPHA is impressive, but the system isn’t exactly ready for combat. While ALPHA receives sensor data with “noise” incorporated, meaning the errors and missing data that would occur in a real fight, it hasn’t flown in a situation where the signals between planes were jammed, which would make coordinating the aircraft considerably more challenging.
In their paper in the Journal of Defense Management describing ALPHA’s success, its creators note that ALPHA would make a great wingman for human pilots: humans would fly lead and command the mission while sending AI-controlled jets into the knife fight against enemy aircraft. That arrangement matches the Air Force’s own plans for the future.
The full paper, which goes into much greater detail about how ALPHA was created, how it works, and what its limitations are, is available in the Journal of Defense Management.
(h/t Popular Science)