
Sunday Morning Coffee with Jeemes: RELUCTANT MISSIVES: THE FUTURE OF WAR (PART II)

General Artificial Intelligence and the Coming War

“Whichever nation is the leader in AI [artificial intelligence] will be the ruler of the world.”

 

Vladimir Putin

“If any major power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”

2015 Open Letter (signed by almost 25,000 individuals, including Stephen Hawking and Elon Musk)

 

“On a smaller scale, suppose two drones fight each other in the air. One drone cannot open fire without first receiving the go-ahead from a human operator in some distant bunker. The other is fully autonomous. Which drone do you think will prevail?”

Yuval Noah Harari

A few weeks ago, I finished a book that has reportedly caused a recent buzz among Pentagon planners, 2034: A Novel of the Next World War.[1] The book portrays two simultaneous warfare scenarios: a U.S.-China naval confrontation in the South China Sea (near the disputed Spratly Islands) and the disabling of a U.S. reconnaissance aircraft over Iranian airspace. Add a Russian gambit in the Arctic on the margins and you have a major crisis for a future incoming presidential administration. Interestingly enough, the book portrays India as the final broker of power (the UN headquarters is moved to Mumbai) after nuclear strikes decimate major U.S. and Chinese cities, as well as aircraft carrier task forces.

Interesting concept, huh?

In many respects, the themes appearing in 2034 resemble those of another recent book that caused a similar stir at the Pentagon shortly after I left the Agency, Ghost Fleet: A Novel of the Next World War by P. W. Singer and August Cole (2015). The similarities: both books are techno-thrillers portraying a future war pitting a declining United States against a rising China; both portray the U.S. conventional warfare advantage being neutralized by Chinese technological breakthroughs (in 2034 a new, but never defined, algorithmic-cyber capability enables Beijing’s leaders to black out portions of the U.S., intercept encrypted naval communications, and shut down defensive weapons systems; in Ghost Fleet a secret space weapon enables a new Chinese military leadership to launch a preemptive strike in the Pacific after the U.S. Defense Department’s constellation of satellites over Asia is blinded); and both suggest U.S. military assets in Asia and elsewhere have become over-reliant on modern technology.

Shortly after finishing 2034, I read an interesting BBC article (May 27, 2021) about recent observations on future AI, and its battlefield applications, by Microsoft president Brad Smith.[2] Smith’s basic concern is that technology is racing ahead of lawmakers’ ability to control it: “If we don’t enact the laws that will protect the public in the future, we are going to find the technology racing ahead, and it’s going to be very difficult to catch up.” The article discusses China’s stated intention to be the world leader in AI by 2030, a contest we in the Western world cannot afford to lose. But after Google withdrew from “Project Maven” in June 2018, Pentagon leaders have found it increasingly difficult to enlist Silicon Valley firms in the bid to win the global AI arms race. The stakes are high. Seth Moulton (chair of the congressional Future of Defense Task Force) says: “Could the AI arms race lead to conflict with China? Absolutely.”

That is a truly scary thought.

After reading the two aforementioned books, and Smith’s AI-related warnings, I was reminded of an Asian wargame scenario I was invited to attend in the years before I retired from the Agency. The wargame, set decades in the future, was designed to assist Pentagon war planners. It was a real eye-opener for me. At that time, it seemed to me, there was a lack of serious regard for what several futurists were contending about the future of war: that lethal autonomous weapons would improve steadily over the next two decades, and that AI would play a growing role in military decision-making, with all the corresponding ethical dilemmas. In short, future wars are likely to be AI-enhanced (or AI-led) wars: the so-called “weaponization of AI.”

During that wargame, I raised the future likelihood of drone swarms and oceangoing mother ships, part of a trend I called the “Gillette-ization” of future war, i.e., the mass production of cheap, cost-effective weapons systems. These developments promise to change the nature of future warfare and render vintage systems such as carrier battle groups all but obsolete.

You can imagine how that went over …

In this vein, one of my favorite writers, David Ignatius, noted in a recent opinion piece in The Washington Post that “the future is now” when it comes to AI-enhanced weapons.[3] As an example, Ignatius cited the “Nova 1,” a small quadcopter drone, less than a foot square, created by a high-tech start-up called Shield AI, that can enter an open window and survey the inside of a building, room by room. The Nova 1, “using artificial intelligence software called Hivemind embedded in the drone,” is fully autonomous: it does not need to connect to a server elsewhere. According to Christian Brose, an executive with the start-up Anduril Industries, the key advantage of such autonomous systems is that “rather than lots of humans operating one system, we have one human operating many systems.” Or, as Ignatius explains: “In other words, rather than having a big vulnerable aircraft carrier, we have swarms of hard-to-target drones.”[4]
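To make Brose’s “one human operating many systems” idea a bit more concrete, here is a minimal illustrative sketch in Python of that supervisory pattern: each platform runs its own onboard decide-and-act loop, and a single operator sees only the handful of decisions the swarm escalates. The class names, states, and numbers are all hypothetical; this is not Shield AI’s Hivemind or any vendor’s actual software, just the general architecture the quote describes.

```python
# Illustrative sketch only: one operator supervising many autonomous platforms.
# All class names, states, and numbers are hypothetical, not any vendor's real API.
import random

class AutonomousDrone:
    """Runs its own onboard decide-and-act loop; no remote server required."""
    def __init__(self, drone_id):
        self.drone_id = drone_id
        self.status = "searching"

    def step(self):
        # Onboard autonomy: the drone classifies what it sees by itself.
        observation = random.choice(["nothing", "obstacle", "possible_target"])
        if observation == "obstacle":
            self.status = "rerouting"        # handled locally, no human needed
        elif observation == "possible_target":
            self.status = "awaiting_review"  # only this is escalated to the operator
        else:
            self.status = "searching"
        return self.status

class Supervisor:
    """A single human operator overseeing many systems at once."""
    def __init__(self, drones):
        self.drones = drones

    def run_cycle(self):
        escalations = []
        for drone in self.drones:
            if drone.step() == "awaiting_review":
                escalations.append(drone.drone_id)
        # The operator sees only the few decisions that need a human,
        # not the full telemetry of every platform.
        return escalations

swarm = [AutonomousDrone(i) for i in range(50)]   # one human, fifty systems
operator = Supervisor(swarm)
print("needs human review:", operator.run_cycle())
```

The design choice that matters is where the loop closes: everything routine is handled on the platform itself, and only the exceptional decisions cross the operator’s desk.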

That was what I was trying to explain to the naval war planners at the wargame, though far less eloquently, I’m afraid.

(Not that it is a perfect process: this week, for example, a tourist from Texas slammed his drone into one of the World Trade Center buildings, causing a fuss among local police and the FBI.)[5]

 

Indeed, over the last several months, I have been trying to keep track of the numerous reports of radical advances in robotic weaponization. It is like trying to sip water from a fire hose. Here is a sampler of articles from recent weeks. In May, CNA, a nonprofit research and analysis group based in nearby Arlington, Virginia, warned that Russia is developing “an array of autonomous weapons platforms utilizing artificial intelligence as part of an ambitious push supported by high-tech cooperation with neighboring China.”[6] The report came a few days after the Russian defense minister announced that the country was beginning to manufacture robots with autonomous militarized capabilities.[7] Earlier that same week, an article quoting Trung Ghi and Abhisek Srivastava, co-authors of “The global AI arms race—How nations can avoid being left behind,” insisted we are in the midst of a global AI arms race and that within the next 10 years such weapons will be “ubiquitous and available to everyone.”[8] Finally, an article in Wired cited a recent DARPA-orchestrated drill involving drone swarms and tank-like robots “to test how AI could help expand the use of automation in military systems, including in scenarios that are too complex and fast-moving for humans to make every critical decision.”[9]

In today’s world, drones are being used by Iran to strike Israeli-linked vessels and Saudi oil facilities, by terrorist elements to bomb American units in Syria, and by Russian-affiliated forces to knock out Ukrainian ammunition dumps. And, as you will see below, that only scratches the surface.

I have told my classes that the next major war, for all intents and purposes, could be decided within the first 15 seconds.

So, what has actually happened in recent months?

The last two large-scale kinetic military engagements have given us an alarming glimpse into the future of warfare: the late-2020 conflict between Azerbaijan and Armenia over Nagorno-Karabakh,[10] and Israel’s May 2021 campaign to punish Hamas for launching as many as 4,000 short-range rockets (as well as explosive-laden drones) over roughly ten days.[11]

And that future is drones and AI-enhanced weapons systems.

In the first example, Azerbaijan’s use of Israeli-supplied IAI Harop drones proved decisive. “The drones, which can operate autonomously, circled over the Armenian defense line until they could detect a radar or heat signal from a missile battery or tank on the ground; then they dove down and crashed, kamikaze-style, into targets.”[12] Armenia’s modern tanks and jet aircraft were helpless against the small, lightweight drones, which were difficult to detect and harder to shoot down. The drones shattered the morale of the Armenian forces. During the conflict, “Azerbaijani and Armenian soldiers rarely even saw each other; it was a very different kind of war—and likely a preview of wars between state actors in the future.”[13]

In the more recent conflict between the Israelis and Hamas in Gaza, AI-enabled weapons played a key role in Israeli military operations (“Operation Guardian of the Walls”), prompting The Jerusalem Post to label the conflict the “world’s first AI war.”[14] Israeli military officials said the Israel Defense Forces (IDF) used technology as a “force multiplier” and, even before the latest round of fighting, “established an advanced AI technological platform that centralized all data on terrorist groups in Gaza (primarily Hamas and Palestinian Islamic Jihad) into one system that enabled the analysis and extraction of intelligence.”[15] A leading role was reportedly played by Israel’s elite cyber Unit 8200, which pioneered algorithms and code leading to several new programs used during the campaign. “While the IDF had gathered thousands of targets in the densely populated coastal enclave over the past two years, hundreds were gathered in real time, including missile launchers that were aimed at Tel Aviv and Jerusalem.”[16] AI-enhanced algorithms sorted through this mass of data to provide target sets for hundreds of strikes against key research and cyber leaders, rocket launchers, storage sites, drones, intelligence offices, and naval commando units (including several autonomous GPS-guided submarines). The result was real-time target changes during the aerial campaign. In addition, Israeli officials used special AI programs to map Hamas’s extensive underground network of tunnels and weapons caches.

Welcome to the new art of war …

 

Israel’s use of AI technology in its weapons and targeting systems illustrates a point that has been made by many futurists: the advantage of speed in future battlefield situations will go to military commanders using AI. In Yuval Noah Harari’s words, “Aside from their unpredictability and their susceptibility to fear, hunger and fatigue, flesh-and-blood soldiers think and move on an increasingly irrelevant timescale.” (Emphasis mine.) Furthermore:

“From the days of Nebuchadnezzar to those of Saddam Hussein, despite myriad technological improvements, war was waged on an organic timetable. Discussions lasted for hours, battles took days, and wars dragged on for years. Cyberwars, however, may last just a few minutes. When a lieutenant on shift at cyber-command notices something odd is going on, she picks up the phone to call her superior, who immediately alerts the White House. Alas, by the time the president reaches for the red handset, the war has already been lost.”[17]

Jeremy Straub agrees:

“AI-coordinated attacks can launch cyber or real-world weapons almost instantly, making the decision to attack before a human even notices a reason to. AI systems can change targets and techniques faster than humans can comprehend, much less analyze. For instance, an AI system might launch a drone to attack a factory, observe drones responding to defend, and launch a cyberattack on those drones, with no noticeable pause.”[18]
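Harari’s and Straub’s warnings are, at bottom, about latency. The toy calculation below (Python, with invented but order-of-magnitude numbers) illustrates the arithmetic: a decision loop that must route through a human operator for approval is slower by a couple of orders of magnitude than one closed entirely in software, which is the whole force of the “which drone will prevail?” question in the epigraph above. Every timing figure here is an assumption for illustration, not a measurement.

```python
# Toy latency comparison: fully autonomous loop vs. human-in-the-loop.
# All timing figures are assumed, order-of-magnitude illustrations only.

SENSOR_PROCESSING = 0.05      # seconds: detect and classify the threat in software
MACHINE_DECISION  = 0.01      # seconds: onboard algorithm selects a response
WEAPON_RELEASE    = 0.10      # seconds: actuate / fire

# Extra steps the human-in-the-loop side must add (assumed values):
ALERT_HUMAN       = 2.0       # seconds: surface the alert on an operator's console
HUMAN_ASSESSMENT  = 20.0      # seconds: operator reads, judges, decides
AUTHORIZATION     = 5.0       # seconds: confirmation travels back to the platform

autonomous_total = SENSOR_PROCESSING + MACHINE_DECISION + WEAPON_RELEASE
supervised_total = autonomous_total + ALERT_HUMAN + HUMAN_ASSESSMENT + AUTHORIZATION

print(f"fully autonomous loop : {autonomous_total:6.2f} s")
print(f"human-in-the-loop     : {supervised_total:6.2f} s")
print(f"speed advantage       : {supervised_total / autonomous_total:.0f}x")
```

Under these assumed numbers the supervised loop takes roughly 27 seconds against a fraction of a second for the autonomous one; quibble with any individual figure and the gap narrows or widens, but it does not disappear.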

Simon Biggs, professor at the University of Edinburgh, has a particularly alarming view of AI-assisted weaponization by the year 2030: “We cannot expect our AI systems to be ethical on our behalf—they won’t be, as they will be designed to kill efficiently, not thoughtfully.”[19]

*****

Okay, Jeemes. Interesting. But have military-related computers actually come close to embroiling all of humanity in war?

Yes. On several occasions.

Let me mention just one. During our discussion of current events in one of my CofO classes four years ago (2017), we mentioned the passing of Stanislav Yevgrafovich Petrov, widely hailed as “the man who single-handedly saved the world from nuclear war.”

The Petrov story is amazing. In late September 1983, when U.S.-Soviet Cold War tensions had heightened following the Soviet military’s controversial shootdown of Korean Air Lines Flight 007, Petrov (then a 44-year-old lieutenant colonel in the Soviet Air Defense Forces) was the late-night duty officer at the secret Serpukhov-15 command center for the Oko nuclear early-warning system. Suddenly, the system’s computers reported that a missile had been launched by the U.S., followed by reports of up to five more. After a tense 15 seconds, Petrov judged the reports to be false alarms and, going against the protocol he was expected to follow, declined to pass the warning up the chain, a decision that prevented an erroneous retaliatory nuclear strike and, quite possibly, World War III. Petrov would later say it was a 50-50 gut decision, based on his distrust of the early-warning system and the relative paucity of missiles supposedly launched. Only about 25 minutes would have elapsed between a real launch and detonation.

An investigation later confirmed that the Soviet satellite warning system had malfunctioned and sent a false alarm when the satellites mistook the sun’s reflection off the tops of clouds for a missile launch. The computer algorithm designed to filter out such information had to be rewritten after the incident. Years later, Petrov would tell a German journalist that “we are wiser than the computers … we created them.”
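The rewrite of that filtering algorithm comes down to a corroboration rule: do not treat a launch warning as real unless independent sensor types agree. The sketch below is a generic Python illustration of that idea only; it is obviously not the Oko system’s actual logic, and the sensor names, inputs, and threshold are all invented for the example.

```python
# Generic corroboration check for an early-warning alert.
# Illustrative only: not the Oko system's logic; names and threshold are invented.

def assess_launch_warning(satellite_tracks, radar_tracks,
                          min_corroborating_sources=2):
    """Escalate only if independent sensor types agree that a launch is underway."""
    sources_reporting = 0
    if satellite_tracks:   # infrared detections from early-warning satellites
        sources_reporting += 1
    if radar_tracks:       # confirming detections from ground-based radar
        sources_reporting += 1

    if sources_reporting >= min_corroborating_sources:
        return "ESCALATE"
    # A single, uncorroborated source looks like sensor error,
    # e.g., sunlight glinting off cloud tops.
    return "PROBABLE FALSE ALARM"

# Hypothetical inputs loosely modeled on the 1983 scenario:
# satellites report tracks, but no second source confirms.
print(assess_launch_warning(satellite_tracks=["track-1", "track-2"], radar_tracks=[]))
```

Of course, no rule this simple would have settled the matter in 1983; the decisive filter that night was a human being, which is precisely the point of the story.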

Petrov was subsequently reprimanded by the Soviet military for failing to accurately record the events in a duty log. He retired from the military in 1984, fading into obscurity as he tended his ailing wife, and in later years was forced to grow potatoes to feed himself. His role in averting a nuclear Armageddon only came to light in 1998, with the publication of the memoir of General Yuriy V. Votintsev, the retired commander of Soviet missile defenses. Petrov gained a measure of recognition on the strength of the book; in 2006 he traveled to the U.S., where he received awards, and in 2013 he was awarded the prestigious Dresden Peace Prize.[20]

Can we rely on a Petrov-like decision-maker to override AI decisions on the future battlefield? I have my doubts.

[1] Elliot Ackerman and Admiral James Stavridis, 2034: A Novel of the Next World War, New York: Penguin Press, 2021. Ackerman has several military-related books to his credit and is a highly decorated former Marine who served five tours of duty in Iraq and Afghanistan; Stavridis, a retired four-star admiral, served as Supreme Allied Commander of NATO and held a number of positions at sea, including command of an aircraft carrier battle group in combat. The authors’ combined military backgrounds make the novel’s scenarios realistic and plausible. See also Howard W. French, “‘2034’ Review: Navigating a Disaster,” The Wall Street Journal, May 2021.

[2] “Microsoft president: Orwell’s 1984 could happen in 2024,” BBC News, May 27, 2021.

[3] David Ignatius, Opinion: “In warfare, the future is now,” The Washington Post, May 27, 2021.

[4] Ibid.

[5] David Hambling, “Drone Striking World Trade Center Is A Wake-Up Call,” Forbes, Aug. 3, 2021.

[6] Tom O’Connor, “Russia is building an army of robot weapons, and China’s AI tech is helping,” Newsweek, May 24, 2021.

[7] Ibid.

[8] Bernard Marr, “The New Global AI Arms Race: How Nations Must Compete On Artificial Intelligence,” Forbes, May 24, 2021.

[9] Will Knight, “The Pentagon Inches Toward Letting AI Control Weapons,” Wired.

[10] See, among others, Robyn Dixon, “Azerbaijan’s drones owned the battlefield in Nagorno-Karabakh—and showed the future of warfare,” The Washington Post, Nov. 11, 2020.

[11] See, among others, David S. Cloud, “Armed drones crisscross Middle Eastern skies, bringing havoc and a new threat to U.S.,” Los Angeles Times, May 24, 2021.

[12] See, among others, Mark Sullivan, “The U.S. is alarmingly close to an autonomous weapons arms race,” Fast Company, May 2021.

[13] Ibid.

[14] Anna Ahronheim, “Israel’s operation against Hamas was the world’s first AI war,” The Jerusalem Post, May 27, 2021.

[15] Ibid.

[16] Ibid.

[17] Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow, 2016.

[18] Jeremy Straub, “Artificial intelligence is the weapon of the next Cold War,” The Conversation, Jan. 29, 2018.

[19] Biggs’ quote is contained in a Pew study on AI that I have cited frequently in earlier works.

[20] Sewell Chan, “Stanislav Petrov, Soviet Officer Who Helped Avert Nuclear War, Is Dead at 77,” The New York Times, Sept. 18, 2017.

 

 
