“Deepfake” videos of Tom Cruise surfaced recently, exposing technology with serious implications for national security.
The line “In war, truth is the first casualty” is often attributed to the Greek tragedian Aeschylus.
Well, in this new era of so-called “hybrid” or “gray zone” warfare, truth is not only a casualty of war — it has also become the weapon of choice for some of America’s contemporary adversaries.
Recent “deepfake” videos of the actor Tom Cruise illustrate the power of the new technological tools now available to foreign adversaries who wish to manipulate the American people with online disinformation. The three videos, which appear on the social media platform TikTok under the handle @deeptomcruise, are striking in their realism. To the naked eye of a casual observer, it’s difficult to tell that the videos are fakes.
Equally stunning is an artificial intelligence tool called Deep Nostalgia, which animates static, vintage images — including those of deceased relatives. Together, these technological leaps hark back to the famous line by the writer George Orwell: “Who controls the past controls the future; who controls the present controls the past.”
The technology now exists for America’s foreign adversaries, or other malign actors, to challenge citizens’ understanding of their present reality, as well as the past. Coupled with Americans’ historic loss of confidence in their country’s journalistic institutions, as well as our addiction to social media, the conditions are certainly ripe for deepfake disinformation to become a serious national security threat — or a catalyst for nihilistic chaos.
“The internet is a machine, but cyberspace is in our minds. As both expand and evolve faster than we can defend them, the ultimate target — our brains — is closer every day,” Kenneth Geers, a Cyber Statecraft Initiative senior fellow at the Atlantic Council, told Coffee or Die Magazine.
2 years ago on stage I was asked “when will Deepfake video/audio impact trust & be believable in social engineering?” My response then was that we were 2 years away from undetectable Deepfakes. I wish my prediction then was wrong. We need synthetic media detection + labels ASAP. pic.twitter.com/yUUOTDepYY
— Rachel Tobac (@RachelTobac) February 26, 2021
According to a September Gallup poll, only 9% of Americans said they have “a great deal” of trust in the media to report the news “fully, accurately, and fairly.” Six in 10 Americans, meanwhile, said they had “not very much” trust in the media or “none at all.” Those findings marked a significant decline in Americans’ trust in the media since polling on the topic began in 1972, Gallup reported.
“Americans’ confidence in the media to report the news fairly, accurately and fully has been persistently low for over a decade and shows no signs of improving,” Gallup reported.
That pervasive distrust in the media leads to increased political polarization and is bad for America’s democratic health, many experts say. Americans’ loss of trust in the media could also portend a national security crisis — especially as contemporary adversaries such as Russia and China increasingly turn to online disinformation campaigns to exacerbate America’s societal divisions.
In fact, Russia already used deepfake technology in its disinformation campaign to influence the 2020 US election, said Scott Jasper, author of the book Russian Cyber Operations: Coding the Boundaries of Conflict. In advance of the election, Russian operatives working for the Internet Research Agency created a fake news website called “Peace Data,” which featured an entirely fictitious staff of editors and writers, multiple news agencies reported.
“Their profile pictures were deepfakes generated by artificial intelligence,” Jasper told Coffee or Die Magazine. “The fake personas contacted real journalists to write contentious stories that might divide Democratic voters.”
A Soviet doctrine called “deep battle” supported front-line military operations with clandestine actions meant to spread chaos and confusion within the enemy’s territory. Similarly, modern Russia has turned to cyberattacks, social media, and weaponized propaganda to weaken its adversaries from within. According to an August State Department report, Russia uses its “disinformation and propaganda ecosystem” to exploit “information as a weapon.”
“[Russia] invests massively in its propaganda channels, its intelligence services and its proxies to conduct malicious cyber activity to support their disinformation efforts, and it leverages outlets that masquerade as news sites or research institutions to spread these false and misleading narratives,” wrote the authors of the State Department report, Pillars of Russia’s Disinformation and Propaganda Ecosystem.
Some experts contend that the cyber domain has become the proverbial “soft underbelly” of America’s democracy. In the past, America’s journalistic institutions served as gatekeepers, shielding the American people from foreign disinformation and propaganda. With the advent of social media and the internet, however, America’s adversaries now enjoy direct access to American citizens’ minds. Consequently, the ability to manufacture video content indistinguishable from reality is a powerful force multiplier for adversaries intent on manipulating the American people.
The emerging deepfake threat spurred the Senate in 2019 to pass a bill mandating that the Department of Homeland Security provide lawmakers with an annual report on advancements in “digital content forgery technology” that might pose a threat to national security.
According to the Deepfake Report Act of 2019: “Digital content forgery is the use of emerging technologies, including artificial intelligence and machine learning techniques, to fabricate or manipulate audio, visual, or text content with the intent to mislead.”
However, the bill died in the House and has not become law.
The advancement of deepfake technology has been meteoric. Just a couple of years ago, the casual observer would have been able to rather easily tell the difference between genuine humans and their computer-generated, deepfake doppelgangers. Not anymore. Much like the advent of nuclear weapons, the Pandora’s box of deepfake technology has officially been opened and is now impossible to un-invent.
The potential dangers of this technological leap are practically boundless.
Criminals could conceivably concoct videos that offer an alibi at the time of their alleged crimes. Countries could fabricate videos of false flag military aggressions as a means to justify starting a war. Foreign adversaries could generate fake videos of police brutality, or of racially charged acts of violence, as a means to further divide American society.
“I think it’s a safe assumption that video manipulation is a key short-term weapon in the arsenal of less reputable political-military organizations needing to shape some opinions before the contents can be disputed,” Gregory Ness, a Silicon Valley cybersecurity expert, told Coffee or Die Magazine.
Certain commercially available artificial intelligence, or AI, tools can already detect deepfake videos with a fidelity surpassing that of the human observer. Microsoft, for example, has already developed an AI algorithm for detecting deepfakes.
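Production detectors of this kind are trained neural networks, but the overall shape of the pipeline is simple: a model assigns each frame a probability that it was manipulated, and those per-frame scores are aggregated into a video-level verdict. The sketch below illustrates only that aggregation step; the `score_frame` heuristic is a trivial stand-in for where a trained model would go, not a real detector.

```python
# Minimal sketch of video-level deepfake classification: score each
# frame, then aggregate scores into a verdict. The per-frame "model"
# here is a hypothetical placeholder, not an actual detector.
from statistics import mean

def score_frame(frame_pixels):
    """Stand-in for a trained classifier; returns P(fake) in [0, 1].
    Real detectors learn subtle statistical artifacts of generated
    faces; this demo just thresholds pixel variance."""
    avg = mean(frame_pixels)
    variance = mean((p - avg) ** 2 for p in frame_pixels)
    return 1.0 if variance < 10 else 0.0

def classify_video(frames, threshold=0.5):
    """Label a video 'fake' if the mean per-frame score crosses threshold."""
    avg_score = mean(score_frame(f) for f in frames)
    return ("fake" if avg_score >= threshold else "real", avg_score)

# Toy usage: suspiciously uniform frames trip the stand-in detector,
# frames with natural pixel variation do not.
smooth = [[128] * 16 for _ in range(5)]
noisy = [list(range(0, 160, 10)) for _ in range(5)]
print(classify_video(smooth))  # → ('fake', 1.0)
print(classify_video(noisy))   # → ('real', 0.0)
```

In a real system the threshold and the aggregation rule (mean, max, or a learned combiner) materially affect false-positive rates, which is one reason platform-scale deployment is harder than the sketch suggests.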
Some cybersecurity experts are calling on social media platforms to integrate these deepfake detection algorithms into their sites to alert users to phony videos. For his part, Geers, the Atlantic Council senior fellow, was skeptical that social media companies would police deepfake content on their own initiative.
“Social media profits from our negativity, vulnerability, and stupidity,” Geers said. “Why would they stop?”
Deep fakes are getting scary good and taking over TikTok. Every public figure should just be on there with a verified account – even if they don’t want to make content – to make it easier to identify their fakes. Here’s Tom Cruise: pic.twitter.com/xoSJt1bvVR
— lauren white (@laurenmwhite) February 25, 2021
The overarching intent of disinformation campaigns — particularly those prosecuted by Moscow — is not always to dupe Americans into believing a false reality. Rather, the real goal may be to challenge their belief in the existence of any objective truths. In short: The more distrustful Americans become of the media, the more likely they are to believe information based on its emotional resonance with their preconceived biases. The end goal is chaos, not brainwashing.
“If we are unable to detect fake videos, we may soon be forced to distrust everything we see and hear, critics warn,” the cybersecurity news site CSO reported. “The internet now mediates every aspect of our lives, and an inability to trust anything we see could lead to an ‘end of truth.’ This threatens not only faith in our political system, but, over the longer term, our faith in what is shared objective reality.”
Some experts say the US government should get involved, perhaps by leveraging the power of the Department of Defense, to patrol the cyber domain for deepfake videos being spread by foreign adversaries. The Pentagon, for its part, has already been called in to defend America’s elections against online disinformation.
In the wake of Russia’s attack on the 2016 presidential election, the Department of Defense partially shouldered the responsibility of defending against foreign attacks on America’s elections. By that measure, it’s certainly within the bounds of national security priorities for Washington to leverage the US military’s resources to root out and take down deepfake videos.
“Governments will inevitably step in, but what we really need is for democracies to step up and create innovative policies based on freedom of expression and the rule of law,” Geers said.