Lt. Col. Benjamin Wysack, Lt. Col. Stephen Graham, Mr. Jack Harman, and Maj. Justin Eagan evaluate the first-ever formation of F-16 Fighting Falcons equipped with Active Electronically Scanned Array (AESA) radars over Eglin Air Force Base, July 2, 2020. U.S. Air Force photo by Jack Harman, via DVIDS.
Skynet still isn’t a reality — but it’s a little closer than you might realize.
In a groundbreaking series of air combat simulations on Aug. 20, an artificial intelligence (AI) pilot skunked a premier US Air Force fighter pilot 5-0, a result some experts say is a bellwether for the coming appearance of AI-controlled platforms in America’s warfighting arsenal, as well as China’s.
Thursday’s dogfighting experiment proved the viability of autonomous AI combat pilot algorithms, which could, in theory, one day divorce a human being from the so-called kill chain of lethal combat operations. Justin Mock of DARPA, a fighter pilot who provided commentary during a webcast of Thursday’s trial, said the AI pilot had a “superhuman aiming ability.”
“There’s a long way to go. This was a far cry from going out in an F-16 and flying actual [basic fighter maneuvers],” Mock said. “But I think we made a really large step, a giant leap if you will, in the direction we’re going.”
In Thursday’s experiment, an AI algorithm produced by Heron Systems controlled a simulated F-16 fighter in a virtual reality dogfighting tournament. The computer repeatedly prevailed over its human adversary — a fighter pilot known by the callsign “Banger” from the District of Columbia Air National Guard.
With more than 2,000 hours in the F-16, Banger is a recent graduate of the Air Force Weapons School’s F-16 Weapons Instructor Course — a prestigious combat pilot training program.
Eight teams were selected last year to compete in the Defense Advanced Research Projects Agency’s AlphaDogfight Trials, which lasted from Aug. 18 to 20 and were meant to showcase the ability of advanced AI algorithms to conduct simulated within-visual-range air combat maneuvering — better known as the “dogfight.”
The competing contractors pitted their AI combat pilots against each other in a round-robin tournament to determine which one would ultimately go up against a human. Apart from Heron Systems, the entrants were Aurora Flight Sciences, EpiSys Science, Georgia Tech Research Institute, Lockheed Martin, Perspecta Labs, PhysicsAI, and SoarTech.
“It’s been amazing to see how far the teams have advanced AI for autonomous dogfighting in less than a year,” said Col. Dan Javorsek, program manager in DARPA’s Strategic Technology Office, prior to Thursday’s simulated air combat exercise.
The US military ramped up the use of armed drone aircraft — known as remotely piloted aircraft, or RPAs, in military parlance — in the post-9/11 counterterrorism campaigns. While those drones are able to perform some parts of their flight profiles autonomously, combat operations are always remotely controlled by a human operator.
According to DARPA, the AlphaDogfight Trials were designed to “energize and expand a base of AI developers for DARPA’s Air Combat Evolution (ACE) program.”
“ACE seeks to automate air-to-air combat and build human trust in AI as a step toward improved human-machine teaming,” DARPA said in a statement posted to its website.
DARPA aims to test AI algorithms on a succession of aircraft, increasing in size, and to ultimately hand the program over to the Air Force by 2024. Officials say the prospect of AI systems actually operating in combat is still likely decades away — if such technology is, in fact, practicable in real-world combat.
DARPA has been researching AI for five decades and currently funds a “broad portfolio” of AI research programs. The agency announced in September 2018 a multi-year investment of more than $2 billion in new and existing programs called the “AI Next” campaign.
“DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools,” DARPA said in a statement on its website.
However, DARPA’s push to develop AI is not happening in a vacuum — and America is not necessarily outpacing its adversaries in the exploitation of AI for military advantage.
Many industry experts say that China is set to become the world’s leader in AI. President Xi Jinping of China has made AI technology a national priority, leveraging China’s vast human capital and the power of its centralized form of government in a bid to overtake the US in the development of this burgeoning technological revolution.
In September 2017, Beijing issued a document called the “Next Generation Artificial Intelligence Plan,” outlining a national effort to jump-start China’s AI technology development.
“In light of new circumstances and demands, we need to take proactive approaches to meet changes, seize the historical opportunity of AI development and take stock of the current situation and make proactive plans to serve socioeconomic development and national security and to lead leapfrog advancement of national competitiveness,” wrote the authors of the Chinese report.
China has been heavily investing in advanced weapons, including AI, in a bid to challenge American military dominance. However, some US defense experts have sounded the alarm on China’s so-called crash-course AI development program, warning that in its bid to “leapfrog” the US, China may create dangerous new weapons systems with unintended or unplanned consequences.
“Chinese advances in autonomy and AI-enabled weapons systems could impact the military balance, while potentially exacerbating threats to global security and strategic stability as great power rivalry intensifies,” wrote Elsa Kania, an adjunct senior fellow with the Technology and National Security Program at the Center for a New American Security, in an April 2020 report.
“In striving to achieve a technological advantage, the Chinese military could rush to deploy weapons systems that are unsafe, untested, or unreliable under actual operational conditions,” Kania wrote.
Thursday’s air combat experiment, while eye-opening, simulated a dogfight in which both aircraft operated within visual range of each other and tried to essentially outturn the opponent for a “gun” shot from the F-16’s cannon.
Modern weapons technology, however, makes such close-in, World War I-style dogfights unlikely. Due to the advanced radars and missiles aboard modern fighters, future air-to-air combat encounters will likely be fought at a distance, beyond a human pilot’s visual range, more like a sniper duel than a boxing match.
“From a human perspective, from the fighter pilot world, […] we trust what works. And what we saw is that in this limited area, in this specific scenario, we’ve got AI that works,” Mock said, adding: “When I’m going downrange, and when I’m going into combat, there is no pride anymore, there is no ego, it’s all about doing what we need to get done and being able to come back home to our families.”
In some cases, the performance limits of modern fighter aircraft are due to the biological limitations of the human pilots. Most importantly, the g-forces produced during highly aggressive aerial maneuvers can cause a fighter pilot to lose consciousness — a condition known as g-induced loss of consciousness, or GLOC.
Modern pilots typically wear g-suits — comprising leggings and a waistband akin to a cummerbund — which inflate during high-g maneuvers to literally squeeze blood back up into a pilot’s brain, counteracting gravitational forces that can reach up to nine times the normal pull of gravity.
In addition to wearing g-suits, modern fighter pilots train like elite athletes to perfect what is known as the anti-g straining maneuver, or AGSM, in which a pilot performs lower body and core muscle contractions while taking intermittent, forceful breaths in order to keep blood in the brain.
While such techniques and equipment increase a pilot’s tolerance for the physical strain of flying high-performance aircraft, the human body often remains the limiting factor in air combat performance. Additionally, life support equipment — items such as oxygen systems and ejection seats — takes up valuable space and weight on high-performance jets. Without a human pilot on board, the weight and space used for life support equipment could be devoted to more weapons systems or simply exploited for better performance.
Nevertheless, the performance advantages enjoyed by AI-piloted aircraft would be less valuable in beyond-visual-range air combat — the most likely nature of a future air war.
During Thursday’s simulated air combat exercise, the simulated F-16 piloted by AI was forced to operate within the performance limits of a manned F-16. The computer’s dominance over the human pilot, therefore, had nothing to do with an unmanned aircraft’s higher g-force tolerance threshold. It was, in fact, the computer’s ability to think beyond the constraints of the human pilot’s training, as well as faster reaction times, which reportedly gave the machine its advantage over the man.
The AI algorithm was reportedly able to make decisions in a “nanosecond,” whereas a human pilot operates according to a decision-making matrix known as the “OODA loop,” which stands for observe, orient, decide, and act. Also, the AI pilot reportedly learned about the human pilot’s patterns after each encounter and subsequently flew more aggressively.
“The advance of technology has evolved the roles of humans and machines in conflict from direct confrontations between humans to engagements mediated by machines,” DARPA said on its website. “The next stage in warfare will involve more capable autonomous systems, but before we can allow such machines to supplement human warfighters, they must achieve far greater levels of intelligence.”