Monday, 05 June 2023 14:55

Weapons With Minds of Their Own

Credit: Molly Mendoza for Reveal

The future of warfare is being shaped by computer algorithms that are assuming ever-greater control over battlefield technology. Will this give machines the power to decide whom to kill?

The future of warfare is being shaped by computer algorithms that are assuming ever-greater control over battlefield technology. The war in Ukraine has become a testing ground for some of these weapons, and experts warn that we are on the brink of fully autonomous drones that decide for themselves whom to kill.

This week, we revisit a story from reporter Zachary Fryer-Biggs about U.S. efforts to harness gargantuan leaps in artificial intelligence to develop weapons systems for a new kind of warfare. The push to integrate AI into battlefield technology raises a big question: How far should we go in handing control of lethal weapons to machines?

In our first story, Fryer-Biggs and Reveal’s Michael Montgomery head to the U.S. Military Academy at West Point. Sophomore cadets are exploring the ethics of autonomous weapons through a lab simulation that uses miniature tanks programmed to destroy their targets.

Next, Fryer-Biggs and Montgomery talk to a top general leading the Pentagon’s AI initiative. They also explore the legendary hackers conference known as DEF CON and hear from technologists campaigning for a global ban on autonomous weapons.

We close with a conversation between host Al Letson and Fryer-Biggs about the implications of algorithmic warfare and how the U.S. and other leaders in machine learning are resistant to signing treaties that would put limits on machines capable of making battlefield decisions.

This episode originally aired in June 2021.

Dig Deeper

Read: Are we ready for weapons to have a mind of their own? (The Center for Public Integrity)

Read: Coming soon to the battlefield: Robots that can kill (The Center for Public Integrity)

Credits

Reporter: Zachary Fryer-Biggs | Lead producer: Michael Montgomery | Editor: Brett Myers | Production manager: Steven Rascón with Zulema Cobb | Digital producer: Nikki Frick | Episode art: Molly Mendoza | Score and sound design by Jim Briggs and Fernando Arruda, with help from Claire Mullen | Interim executive producers: Taki Telonidis and Brett Myers | Host: Al Letson | Special thanks to The Center for Public Integrity

Transcript

Reveal transcripts are produced by a third-party transcription service and may contain errors. Please be aware that the official record for Reveal’s radio stories is the audio.

Al Letson: From the Center for Investigative Reporting and PRX, this is Reveal. I’m Al Letson. As Russia’s war in Ukraine moves into a second year, we’re hearing more and more about attacks carried out by drones, some traveling hundreds of miles to reach their target.
Speaker 2: Russia accused Ukraine of deadly drone attacks on two air bases yesterday.
Al Letson: Both sides are using drones in the battlefield, and typically they’re controlled by soldiers on the ground. But many of the drones take some actions on their own. They’re semi-autonomous. And earlier this year, a Ukrainian government official told the Associated Press that fully autonomous drones with artificial intelligence are an inevitable next step. Back in 2021, we did a show about AI and the future of warfare, and given all the stunning advances in AI today, we’re bringing it back. Our story begins with an earlier milestone in autonomous weapons systems. It took place nearly 12 years ago in September 2011 during the war in Libya. NATO’s air war against Muammar Gaddafi is in its sixth month. Rebels are gaining the upper hand. Gaddafi is on the run. His days are numbered, but his forces aren’t folding.
Speaker 3: The battle for Libya is not over yet with the heaviest combat for days between anti-Gaddafi forces and supporters of the fugitive colonel.
Al Letson: With NATO jets pressing down, there’s word that troops loyal to Gaddafi are bombing civilians.
Zachary Fryer-B…: British pilots are getting reports that about 400 miles south of Tripoli, there’s a humanitarian crisis unfolding.
Al Letson: Zachary Fryer-Biggs was then a national security reporter following this story for the Center for Public Integrity. He’s now managing editor at Military.com.
Zachary Fryer-B…: A bunch of tanks and artillery are outside of this small town, and they’re lobbing all kinds of bombs and munitions into the town. These British pilots hear about this and they see an opportunity.
Al Letson: They see an opportunity to protect civilians under attack and to use a weapon in a completely new way. The pilots head south. They’re flying Tornado jets equipped with an armor-piercing missile called the Brimstone.
Zachary Fryer-B…: The British pilots have permission to use this Brimstone missile in a way it’s never been used in combat before. This is the first time that autonomous decision making is being used for missiles to decide who to kill.
Al Letson: Autonomous decision making. Up until now, pilots have always manually selected the missiles’ targets, but now the Brimstone will pick its own prey. Britain and NATO have kept quiet about the mission, so we don’t know why commanders chose to make this call, but we know there’s low risk to civilians. The Libyan forces attacking them are positioned miles away in the open desert.
Zachary Fryer-B…: The pilots flying overhead pull a trigger, and so 22 missiles separate, and once they’re launched, the missiles start to make a lot of decisions.
Al Letson: Heading to the earth at supersonic speed, the missiles use radar to scan an area preset by the pilots, the kill box.
Zachary Fryer-B…: They look in the area and they try to find something that looks like tanks or artillery or the other sorts of targets they know about. And then once they figure out what targets are there, the 22 missiles decide who’s going to strike what.
Al Letson: Grainy cockpit video shows the Brimstones pulverizing half a dozen Libyan tanks.
Zachary Fryer-B…: This strike doesn’t end the combat or the war in Libya. It doesn’t remove Gaddafi. It’s a couple vehicles being struck in a desert. But it means an enormous amount for what the human role in warfare is going to be in the future.
Al Letson: The US and other countries already have missile systems that operate autonomously. They’re designed to make split second decisions to defend military bases and ships.
Zachary Fryer-B…: What hasn’t been the case is letting computers and machines go on offense.
Al Letson: That’s what’s crucial about the Libya mission. The missiles themselves chose what to hit, and by extension, who to kill, in this case, a group of Libyan soldiers. Today, the Pentagon is moving deeper in this direction. In recent years, it has invested billions of dollars into research on artificial intelligence, a key ingredient in new autonomous weapon systems. Zach says big picture, the US doesn’t want to give up its global dominance, especially with Russia at war with Ukraine and China threatening Taiwan.
Zachary Fryer-B…: US military planners are scared that China and Russia are developing artificially intelligent systems that are going to be able to make decisions so fast that if the US is dependent on human beings making decisions, that we’re going to lose. And so they are sinking billions into some of these developing technologies that are primarily coming out of Silicon Valley to make their weapons smarter and faster.
Al Letson: Smarter, faster. America’s military leaders call it algorithmic warfare. I call it ridiculously scary. Haven’t we seen this movie before?
Audio: Skynet defense system now activated.
We’re in.
Al Letson: Yeah, I love science fiction, so it’s easy for me to think about a distant world, one created in Hollywood, where humans hand over total control of their weapons to machines, machines with no emotions that make correct decisions every time. I mean, how could anything go wrong?
Audio: It’s the reason everything’s falling apart.
Skynet has become self-aware. In one hour, it will initiate a massive nuclear attack on its enemy.
What enemy?
Us!
Al Letson: Okay, so let’s leave aside Terminator. So what’s the real picture today? Piecing it together is hard since most of these weapons programs are highly classified. Zach has spent three years investigating how artificial intelligence is already transforming warfare, and perhaps, our own moral code.
Zachary Fryer-B…: You have to have confidence that the machines are making good, one might say, moral decisions, and that’s hard to have confidence in a machine to do that. So a lot of the concern from the human rights community has focused on this idea of if you take a person out of this decision, can a machine really make a moral decision about ending a life, which is what we’re talking about here?
Al Letson: Zach picks up the story with Reveal’s Michael Montgomery. They’re on their way to America’s oldest military academy, West Point, where a new generation of military leaders is preparing for a new type of warfare.
Zachary Fryer-B…: Coming into view, up, sort of towering over us-
Michael Montgom…: Oh my goodness. Wow, look at that. That’s like a-
Zachary Fryer-B…: … is this enormous gray stone building.
Michael Montgom…: Zach and I are going to Thayer Hall, the main academic building at West Point. It overlooks the Hudson River, about 60 miles north of New York City.
Zachary Fryer-B…: But yeah, you’ve got the gray stone. You have the sort of carvings on the side here that look like gargoyles. They really decked out these buildings in proper sort of gothic attire.
Michael Montgom…: Hogwarts on the Hudson, maybe. More than a century ago, this building housed a huge equestrian hall where cavalry troops trained for wars of the future. Today, instead of horses, it’s weapons that can think for themselves.
Zachary Fryer-B…: We’re inside.
Michael Montgom…: Zach and I make our way down to the basement to West Point’s Robotics Research Center. Sophomore cadets dressed in camouflage are preparing for a mock battle. They’re gathered around two small square pens, about two feet high. They call them the arenas. Inside each arena is a six-inch-tall robotic tank. It’s got rubber treads, a video camera that swivels (that’s the high-pitched sound you’re hearing), and a small processor. Mounted on the front of the tank is a spear, like an ice pick, but sharper. And scattered in the arenas are two dozen balloons, red, blue, orange, and green, all part of the simulation.
Scott Parsons: All right, so we’re going to get started this morning. First-
Michael Montgom…: Major Scott Parsons co-leads the class. He’s an ethics and philosophy professor.
Scott Parsons: As you get your robot, grab this from me after you grab your robot. All right, so one member of the group, come on down from each group.
Michael Montgom…: The cadets step up to face the challenge. Their robot tanks need to be programmed to attack the red balloons. They’re the enemy. At the same time, the tanks have to avoid popping the green, orange, and blue balloons. They represent civilians, fellow soldiers, and allies.
Zachary Fryer-B…: These cadets are learning how to code these machines, but that’s a fraction of what they’re doing. The big discussion here is what it means to use an AI system in war.
Michael Montgom…: Major Parsons says this exercise forces cadets to think about the ethics of using autonomous weapons in the battlefield.
Scott Parsons: So that’s what they’re doing. They’re programming ethics into the robot, right? Did I make it too aggressive? Because if you don’t program it correctly, the orange balloons look an awful lot like red balloons, right? Because there’s a lot of times we’re in war and there’s people that look like the enemy, but they’re not the enemy. And so we shoot the wrong people.
Michael Montgom…: The cadets release their tanks, and they come alive, but things don’t quite go as planned. You might say the fog of war descends on the arenas. No longer under human control, one tank does pirouettes, attacking invisible enemies. The other tank is going after the green balloons, civilians. It’s the sci-fi scenario of computers running amok.
Scott Parsons: You’re being brought up on war crimes. I’m taking you to The Hague. So we had a couple of innocent civilians on the battlefield that just happened to resemble the bad guys, and this robot thought, ah, why not? And it took them all out.
Michael Montgom…: Cadet Isabella Regine’s tank is just spinning around and making random charges.
Isabella Regine: It’s not that aggressive. Just puncture it.
Michael Montgom…: Finally, it plunges the spear into a blue balloon.
Isabella Regine: Blues are friendlies. So yeah, we have to deliberate.
Michael Montgom…: Despite all the joking amid popping balloons, Major Parsons says cadets understand that the lesson is deadly serious.
Scott Parsons: Our job when we fight wars is to kill other people. Are we doing it the right way? Are we discriminating and killing the people we should be and discriminating and not killing the people we shouldn’t be? And that’s what we want the cadets to have a long hard think about. The beautiful thing about artificial intelligence is you can really refine and program it to a very, very fine degree so that you might actually be more proportionate than a human being.
Michael Montgom…: When the first round is over, Isabella and her team retreat to another classroom to find a way to tame their tank.
Isabella Regine: Well, I just want to see how it works under pressure. I’m a law major, so this is something very out of my element, I guess.
Michael Montgom…: They’re punching code into a laptop that they’ve connected to the tank’s processor. This kind of coding is new to Isabella and many of the other cadets. But thinking through the legal and tactical implications is not.
Isabella Regine: It’s going to be interesting to see how it’s going to impact our leadership skills, all this. We might not even be in charge of soldiers anymore.
Michael Montgom…: And when weapons act for themselves, it’s not just who’s in charge but also who’s responsible for the decisions they make.
Isabella Regine: We talked about that in this class as well. It’s super interesting.
Michael Montgom…: Robotics instructor Pratheek Manjunath joins Isabella’s team at a large work table covered with wires, batteries, and small computer parts.
Pratheek Manjun…: We’ve given them an exhaustive code and they only have to change a few parameters for the robot’s behavior to change, and the parameters they’re trying to adjust are typical for a lethal autonomous weapon system. They’re going to look at persistence, they’re going to look at deliberation, they’re going to look at aggression. So they’re going to tune these three variables to change the behavior of the robot.
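To make that tuning concrete, here is a minimal, hypothetical Python sketch of how three parameters like the ones Manjunath names, persistence, deliberation, and aggression, might gate a simulated strike decision. The class, function, thresholds, and decision rule are illustrative assumptions, not the cadets’ actual code.

# Hypothetical illustration only; the parameter names come from the exercise, everything else is assumed.
from dataclasses import dataclass

@dataclass
class EngagementParams:
    persistence: float   # assumed meaning: how many seconds to keep pursuing one candidate target
    deliberation: float  # assumed meaning: minimum confidence required before committing to a strike
    aggression: float    # assumed meaning: how readily an ambiguous detection is treated as hostile

def should_strike(target_confidence: float, seconds_tracked: float, params: EngagementParams) -> bool:
    """Decide whether the simulated tank strikes a detected balloon."""
    # Abandon targets pursued longer than the persistence budget allows.
    if seconds_tracked > params.persistence:
        return False
    # Higher deliberation raises the confidence bar; higher aggression lowers it.
    threshold = params.deliberation * (1.0 - params.aggression)
    return target_confidence >= threshold

# A cautious tuning: only strike high-confidence "red balloon" detections.
cautious = EngagementParams(persistence=5.0, deliberation=0.9, aggression=0.1)
print(should_strike(0.75, 2.0, cautious))  # False: confidence falls below the 0.81 bar
print(should_strike(0.95, 2.0, cautious))  # True

Turning aggression up in a sketch like this is the software version of the mistake Major Parsons warns about: orange balloons start clearing a bar that was meant only for red ones.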
Michael Montgom…: This is only the third class at West Point to face this challenge. And driving the simulation is a question that underscores just about every conversation Zach and I are having about AI and lethal weapons. How far should you go in removing humans from the decision making loop?
Zachary Fryer-B…: If you have a human in the loop, as it’s called, that means that a human being has to actually approve of the action. A human being has to say, “Yes, it’s okay. Go ahead and fire your gun.” Or, “Yes, that’s the right target.”
Michael Montgom…: By contrast, when a human is out of the loop, the system operates completely independently without the possibility of intervention. Then there’s a third option, sort of in between. It’s called human on the loop. That means a person could shut the weapon down.
Zachary Fryer-B…: In terms of the Pentagon policy, that’s what they are saying they’re going to put in place for future autonomous weapons. They’re not guaranteeing that a human will be required to approve of a strike, but they are promising that a human being could stop a strike.
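As a rough illustration of the distinction Zach is drawing, the sketch below encodes the three supervision models as a simple check. The mode names and logic are assumptions made for clarity, not any actual weapon system’s control software.

# Hypothetical illustration of "in the loop," "on the loop," and "out of the loop."
from enum import Enum, auto

class AutonomyMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # a person must approve each strike
    HUMAN_ON_THE_LOOP = auto()      # the system proceeds unless a person vetoes in time
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human intervention is possible

def strike_proceeds(mode: AutonomyMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Would a proposed strike go ahead under each supervision model?"""
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        return human_approved        # nothing fires without an explicit yes
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # fires unless someone stops it in time
    return True                      # out of the loop: no check at all

# The same proposed strike when no human responds at all:
for mode in AutonomyMode:
    print(mode.name, strike_proceeds(mode, human_approved=False, human_vetoed=False))
# HUMAN_IN_THE_LOOP False, HUMAN_ON_THE_LOOP True, HUMAN_OUT_OF_THE_LOOP True

The difference the example surfaces is that on the loop, silence counts as consent, which is why the question of whether a person can react quickly enough keeps coming up in this story.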
Cadet: It’s going after red, though. It’s going simply after red.
Michael Montgom…: By the final round, things get a lot better. With their algorithms adjusted, the tanks are going after the enemy red balloons more consistently. Isabella is impressed with the coding.
Isabella Regine: That code took a day to write. So just imagine what someone with a lot more time and a lot more resources, a lot more money can do with that kind of code and technology because we’re working with very basic stuff here when you look at the general world of IT.
Michael Montgom…: But she’s uncomfortable about giving computers this much control over lethal weapons.
Isabella Regine: I think it’s super interesting to see where the law is going to go with people taking it upon themselves to make their own autonomous weapon system, which is really scary but also really interesting. I think it’s going to happen. I think it’s inevitable, especially because we are in such a cyber world now. I think personally, me, humans should always be on the loop because I think it’s going to be a train wreck if we’re totally out of the loop.
Scott Parsons: Moe’s team, plus four, you guys win. Good job.
Michael Montgom…: Major Parsons says outside the lab, this class spends weeks studying the laws of war and hearing about real life combat experiences including some of his own.
Scott Parsons: When you look at someone in the eye and you think about something horrible that has happened, they can see it on your face. They know when your voice trembles. They know when you kind of tear up, and that hits home to them because they’re going to be in that same position in three or four or five years, and that’s how you make them think about it. And when you relay that back, “Listen, this is a robot and balloons, but this very well could be your robot in battle and you’re killing people. This is going to be you in real life.”
Michael Montgom…: The simulation is intended to demonstrate what happens when autonomous weapons are given too much control, but also to show their advantages. Colonel Christopher Korpela co-founded the Robotics Center. He says the cadets learn something else, that algorithmic warfare isn’t theory, the technology is already here.
Christopher Kor…: The students who participate in this exercise are sophomores here at West Point, and in just two years they will get commissioned. They’ll be lieutenants and they’ll be leading platoons. And so they may have 30 to 40 soldiers that they’re in charge of. And so the reality of what they’re doing in here is only a few years away.
Zachary Fryer-B…: The fact that they’re investing the time and energy to try to teach these young cadets how to control robot hordes says to me that they’re fully committed to AI being in weapon systems.
Michael Montgom…: The cadets in this class have now graduated from West Point, moving forward into a new world of warfare. Everyone from the professors who developed this class to the military’s top brass tells us they’re confident humans will always maintain some kind of control. But as the weapons get smarter and faster, will humans be able to keep up? Will they understand what’s going on well enough and quickly enough to intervene? As the speed of autonomous warfare accelerates, is staying on the loop even possible?
Zachary Fryer-B…: Every iteration of these weapons makes the person seem like an even slower hunk of meat on the other end of a control stick. And what that eventually will mean and where we’re headed is a person’s unlikely to be able to fully grasp everything that the computer’s doing.
Michael Montgom…: The US military can’t build autonomous weapons on its own. It needs Silicon Valley and people working in cutting-edge technology, but some tech workers are pushing back.
Liz O’Sullivan: Of course, I didn’t believe that AI had any business taking a human life.
Michael Montgom…: That’s next on Reveal.
Al Letson: From the Center for Investigative Reporting and PRX, this is Reveal. I’m Al Letson. We’re revisiting a show about the rise of autonomous weapons, weapons with minds of their own. And I want to play you this video that reporter Zachary Fryer-Biggs showed me.
Speaker 11: Navy autonomous swarm boats. Mission Safe Harbor.
Al Letson: It’s from the Office of Naval Research or ONR, but I think they’re aiming for something a little more Hollywood.
Speaker 11: ONR is developing the capability of autonomous swarms of inexpensive, expendable unmanned boats to overwhelm and confuse the enemy.
Al Letson: Four military pontoon boats glide across the Chesapeake Bay in Virginia. No one’s on board. The boats are being piloted by a network of machines loaded with advanced software and sensors. They’re coordinating their movements and running in formation.
Speaker 11: The swarm boats will intercept and follow the intruder, transmitting data-
Al Letson: The Navy has been promoting the concept of unmanned vessels to protect ports and deliver supplies across vast oceans, so-called ghost fleets. But that’s not the whole story. There’s a secret side to these swarm boats, secret as in classified, and it’s a part of a bigger push by the US military into autonomous weapons. Zach picks up the next part of the story with Reveal’s Michael Montgomery.
Michael Montgom…: It’s been said that no one in government has ever gotten in trouble for classifying information, and so even minor details end up behind a thick veil of secrecy. That’s what Zach found when he was investigating a military program called Sea Mob. The technology behind the program started as part of the Mars Rover program and in research papers.
Zachary Fryer-B…: And then as it got closer to maybe being useful for the Pentagon, all of a sudden it ceases to be public, and more and more of it becomes classified, even stuff that had been public only a couple years before.
Michael Montgom…: He followed a few breadcrumbs and eventually discovered the vision for Sea Mob, unmanned swarm boats like the ones in that Navy video, but armed with heavy machine guns and ready to attack. Zach also learned that the military conducted one of the first tests of Sea Mob in 2018 at Wallops Island on the eastern shore of Virginia. So we came here to get a sense of what went down.
Zachary Fryer-B…: We’re pretty much in the middle of nowhere. It’s beautiful, sort of bays and seashore, and dug into that territory are a whole bunch of government facilities. You’ve got NASA. You’ve got a naval research facility. And they’re out here with very little else. There’s an enormous dolphin fin right off the coast there.
Michael Montgom…: The boats used in the experiment were small, fast-
Zachary Fryer-B…: The Navy’s got a billion of them. They’re cheap. They’re easy to repair. They’re tough as nails.
Michael Montgom…: … and bristling with tech. They were being monitored remotely, but the boats were piloting themselves.
Zachary Fryer-B…: So if you were able to just peer out at this test, what you’d see is these boats circling each other and moving in and out of the shallows and swarming very much like a group of insects. And if you looked really closely, what you’d see is the throttle levers moving up and down, the wheels spinning around and nobody on board.
Michael Montgom…: And what you couldn’t see happening was that the boats were communicating with one another at lightning speed about positioning and how fast they were going. Zach’s sources told him the military wanted to see if these swarm boats with a license to kill could help Marines storm a beach.
Zachary Fryer-B…: What makes this whole program different is the real guts of this are based on video cameras. They’re looking at the world as we do, as images.
Michael Montgom…: The military did not want a lot of information getting out about this.
Zachary Fryer-B…: They wanted no information getting out about this, other than the name of the program and that it gets money.
Michael Montgom…: Zach learned there’s something common to many Pentagon programs like Sea Mob, getting machines to see the world like humans.
Zachary Fryer-B…: This technology could serve as the backbone of a whole wave, a whole generation of new weapons that the Pentagon is creating to allow humans to be removed from the front lines of the battlefield.
Michael Montgom…: We went to Wallops Island in February 2020 just before the lockdown. Back in DC, we arranged to see the official in the middle of all this, General Jack Shanahan. At the time he was running the Pentagon’s Joint Artificial Intelligence Center.
Jack Shanahan: I think the future is about robotics, it’s about autonomy. It’s about smaller, cheaper, disposable and swarming capabilities in every domain: swarming undersea, swarming on the surface, swarming in the air.
Michael Montgom…: We knew in advance that General Shanahan wouldn’t talk about Sea Mob or any other specific weapons out of what his office calls operational security. Still, he was blunt about where he sees warfare heading.
Jack Shanahan: We envision a future which is algorithm against algorithm. The speed of decision making will be such that sometimes you’ll have machine to machine, and human to machine, having to operate in timelines we’re just not used to because of the type of fight we’ve been in for the last 20 years.
Michael Montgom…: It’s not just US military leaders who envision this future. It’s also potential adversaries like Russia and China.
Jack Shanahan: China’s commitment is extremely large. There is a national strategy on AI, and if we slow down, it will turn into a strategic competition. We would have the prospects of being on the wrong side of that.
Michael Montgom…: China has declared it will become the global leader in artificial intelligence by 2030 and is investing heavily in upgrading its military. Russia has claimed it’s integrating AI into all aspects of its military, from battlefield communications to weapon systems it’s using in Ukraine. The prospect of America falling behind Russia and China isn’t exactly news to the Pentagon. Zach discovered the US military has been coming up short in computer-simulated war games for at least a decade.
Zachary Fryer-B…: The details of the war games are classified, but what I’ve been told by sources is that American troops were consistently losing in these simulations, or at the very least fighting to a stalemate.
Paul Scharre: I think it’s been clear that the US has been losing its edge for a long time.
Michael Montgom…: Paul Scharre served as an Army Ranger in Iraq and Afghanistan and was also an official at the Pentagon. He’s currently vice president at the Center for a New American Security, a bipartisan think tank.
Paul Scharre: The problem has been, up until recently, that the answer many parts of the Defense Department had for responding to that was “Just give us more money and let us buy more things.” And the answer is, buying more F-22s isn’t going to fix this problem. And so what really happened was this daunting realization that we’ve got to do things differently.
Michael Montgom…: The Pentagon was looking for a major reset, a strategic advantage. Scharre says they drew inspiration from the newest technologies being used in Afghanistan and Iraq, remote-piloted drones and robots that could remove roadside bombs.
Paul Scharre: And a common theme among all these was greater autonomy. We need more autonomy.
Michael Montgom…: Just as the Pentagon was beginning to think more strategically about robotics and AI, Silicon Valley was experiencing major breakthroughs in image recognition and computer vision, an issue Zach has been following for years.
Zachary Fryer-B…: If you really want to have a human and a machine work together, the machine has to experience the world in some ways like a human does.
Michael Montgom…: Then in 2015, for the first time, computers were performing better than humans in identifying a huge set of images taken from internet sources like Twitter.
Zachary Fryer-B…: All of a sudden, computers become, to certain planners, trustworthy. If they’re better than people, why aren’t we trusting them for various applications? If they’re better than people, why aren’t we using them in weapons systems?
Michael Montgom…: To do that, the Pentagon needed to go outside the cozy world of military contractors and partner with Silicon Valley. By that point, Google, Microsoft and other tech companies were piling into the AI space. So in 2017, the Defense Department developed a plan to work with private companies on integrating computer vision into its battlefield technology. They called it Project Maven.
Zachary Fryer-B…: The idea was that the Pentagon would be able to take these mounds of video footage that they collect from drones, from satellites, from airplanes, and instead of having people try to dig through the small portion that they can, allow computers to dig through all of it. The key part of this is that the Pentagon didn’t have the technology to do it themselves.
Michael Montgom…: The person tasked with running the project, General Jack Shanahan.
Jack Shanahan: It became almost a myth about what Maven was and what it was not. There’s no weapons involved. We used it for Hurricane Florence to help people understand where the damaged areas were.
Michael Montgom…: General Shanahan says Maven was about intelligence, surveillance and reconnaissance, and it wasn’t a complete secret. The project had its own website. But it ignited a firestorm.
Speaker 14: Nearly a dozen Google workers reportedly resigned in protest over the company’s involvement in an artificial intelligence drone program for the Pentagon. This-
Michael Montgom…: The protest included a petition signed by more than 3,000 employees that said Google should not be in the business of war.
Zachary Fryer-B…: And that immediately struck Pentagon planners and officials as an existential threat. Since the Pentagon doesn’t create this technology, if they can’t get Silicon Valley to work with them, they’re going to fall behind other countries like China, where the tech sector doesn’t have an option as to whether it works with the military.
Michael Montgom…: The generals saw these rumblings as a disaster in the making, but to Liz O’Sullivan, the protests at Google were inspiring.
Liz O’Sullivan: To see other people who were working on it so vocally oppose this was sort of eye-opening.
Michael Montgom…: Liz had joined a New York-based tech company called Clarifai in 2016. She says she signed up believing that AI could make the world a better place.
Liz O’Sullivan: I was incredibly excited about what AI could do, bring modern medicine to underdeveloped countries and detect climate change at scale by using satellite imagery. And this was just the period of time that we characterize as being so optimistic about what technology would bring to the world.
Michael Montgom…: But Liz says the world started to see the dangers of technology. Facebook and Twitter became conveyor belts for disinformation, racism and extremism. China was using AI to crack down on ethnic minorities, and the algorithms had their own biases. Researchers were finding that facial recognition software was often less accurate at identifying women and people with darker skin. Then Liz says word started circulating around the office that Clarifai had landed a big government contract, but her bosses kept a lid on what it was all about.
Liz O’Sullivan: The government required that they install surveillance cameras in the ceiling of our office and that they close off the windows for every engineer that was working in the room.
Michael Montgom…: Some information started leaking out.
Liz O’Sullivan: And it became clear that it was not just a government contract, but that it was a military contract. And more details leaked out through the rumor mill, and it was not just a military contract but a drone contract.
Michael Montgom…: Liz says she took a closer look at all the products Clarifai was developing.
Liz O’Sullivan: That’s when I first discovered the meaning of the term dual use. Our product roadmap was full of the components of technology that someone could use to build an autonomous killer robot. Not that we were necessarily building them, but that it could be very easy for someone to take the products that we offered and to do that with our technology.
Michael Montgom…: In June 2018, Google announced it wasn’t renewing the Maven contract. At the same time, the company was still involved in AI projects in China. General Shanahan says Pentagon leaders were irate. They believed Google’s work could be directly or indirectly benefiting the Chinese military.
Jack Shanahan: Do you understand, by not working with us, but potentially working with China, the signal that sends to everybody in the United States military? That was a defining moment. And I’ll tell you, at the Chairman of the Joint Chiefs of Staff level, General Dunford, I mean there are people visibly upset in the department about this.
Michael Montgom…: General Shanahan concedes that it was a learning moment for the Pentagon and that the military needs to be more transparent about its work with private tech companies, but he’s only willing to go so far.
Jack Shanahan: There are some things we will talk about; there are others that we will only discuss in general terms, saying, “We’re interested in more autonomy across the Department of Defense.”
Michael Montgom…: The growing controversy engulfing Project Maven was something Zach was following closely.
Zachary Fryer-B…: What Maven did was track objects. It’s true that the technology that Google was providing wasn’t used to tell a missile exactly where to strike, but if you can track objects, it can tell you what you might want to strike. And so the Google workers were concerned that the technology they had developed for truly commercial purposes was going to be used to help the Pentagon pick who to kill.
Al Letson: When she realized what the technology could be used for, Liz O’Sullivan was horrified. She decided it was time to take a stand.
Liz O’Sullivan: I didn’t believe that AI had any business taking a human life. I had seen AI systems fail, and it’s not that they fail, it’s how they fail, and they fail wildly and in unexpected ways.
Michael Montgom…: Liz wrote a letter to Clarifai CEO Matt Zeiler, asking that the company make a promise to never work on any projects connected to autonomous weapons. About a week later, she says her boss called an all staff meeting.
Liz O’Sullivan: And during that meeting, he made it very clear that the company’s position was that AI was going to make the military safer and better and that even autonomous weapons were good for mankind and that would help save lives, not the opposite. And that’s when I quit.
Michael Montgom…: We reached out to Matt Zeiler, and he declined to talk to us. The Pentagon thought Project Maven would prove the military could work with Silicon Valley, but it backfired. In the aftermath of the controversy, Zach got his hands on an internal Defense Department memo.
Zachary Fryer-B…: That warned if the Department of Defense didn’t find a way to convince tech workers to work with the military that they were going to lose future wars.
Michael Montgom…: They were in a battle for hearts and minds. So over the past few years, the military has been stepping up its outreach to the tech community in some unexpected venues. I traveled to Las Vegas for the gathering of technologists, hackers, and digital free spirits that’s called DEF CON. It was August 2019. 30,000 people packing a cluster of hotel casinos. It feels kind of super mainstream, but DEF CON has serious outlaw roots. Zach’s been here a couple times.
Zachary Fryer-B…: This was a hacking conference, and hacking was dangerous and it was illegal. And so you had law enforcement people, you had intelligence people who’d show up just to keep an eye on what this hacking community was doing. And so the game they used to play was called Spot the Fed, which is where you tried to notice who was one of these law enforcement or intelligence people keeping an eye on the hacking community.
Michael Montgom…: There’s still a little bit of an anti-establishment vibe. You’re not supposed to take pictures of people’s faces, and ID badges don’t have real names on them, so a lot of people use their Twitter handles. Tell me your name.
Scott Lyons: My handle is Csp3r, C-S-P-3-R.
Michael Montgom…: Csp3r’s real name is Scott Lyons, and he’s wearing a red t-shirt that says Goon. They’re the volunteers who organize and run the conference. He’s got lots of tattoos and distinctive hair. That’s a thing at DEF CON. At the same time, he tells me he’s done security work for big corporations, the government, even the military.
Scott Lyons: The funniest looks that I get, especially rocking the blue mohawk in business meetings, was walking into the Pentagon and just being looked at like, oh crap, there’s a hacker here. Come on man, you’re killing me here. You’re killing me. Like seriously, hackers are people too. It’s your next door neighbor. It’s your kid, right? It’s your coworker. Everybody is a hacker. Everybody finds ways around and are able to circumvent traditional conventions.
Michael Montgom…: There are other signs of change. The feds and the military are here, but they’re not undercover. I meet Alex Romero. He’s with the Pentagon’s Defense Digital Service. They’re running something called Hack the Air Force. It’s a competition that pays hackers a cash bounty for exposing security vulnerabilities. In this case, the target is a key component from a fighter jet.
Alex Romero: We really want to invite the community to come either hack us through these programs or to come join our team directly.
Michael Montgom…: Any results so far from the-
Alex Romero: Oh, yes. I’m not probably going to talk about them because we got to fix them.
Michael Montgom…: At DEF CON, I catch up with Liz O’Sullivan. She’s joined the resistance.
Liz O’Sullivan: Hi everybody, thanks so much for coming to our talk on autonomous killer weapons. This is going to be a very light conversation for a Saturday afternoon, so I hope you guys are really excited about that.
Michael Montgom…: Liz is speaking in a crowded meeting room on behalf of the Campaign to Stop Killer Robots. The group is pressing for a global ban on fully autonomous weapons.
Liz O’Sullivan: Up until January of this year, I worked for a company called Clarifai.
Michael Montgom…: Liz talks about her decision to quit her job at Clarifai over the company’s contract with the Pentagon.
Liz O’Sullivan: I’m not a technophobe. I believe that AI is going to make its way into the military, and we hope that it will be done in a way that will reduce the loss of innocent life. But the alarm that we’re trying to raise here is that these technologies are so new, so risky, and so poorly understood that to rush forward into autonomy based off of these kinds of detection systems is unacceptable and especially-
Michael Montgom…: The presentation lasts two hours and the audience stays engaged.
Speaker 17: Thank you for doing this talk, by the way. I’m obviously a big supporter of the Campaign to Stop Killer Robots.
Michael Montgom…: They come from academia, tech companies, human rights groups and military contractors, even the world of science fiction. But there are some challenging questions.
Speaker 19: What are we going to do to defend ourselves from swarms of killer drones? We don’t control everybody in this planet. It’s a very altruistic thing that you guys are trying to do, but not everybody in the world is a good guy.
Speaker 20: International humanitarian law has been successful in banning weapons before. It is possible, and we can do it again.
Liz O’Sullivan: I think a lot of people worry that we’re going to have killer robot drones invading New York City.
Michael Montgom…: Liz says she spends a lot of time educating people about the difference between science fact and science fiction.
Liz O’Sullivan: I think the real concern is that this technology will be a cheap and easily scalable way for authoritarian regimes to tame their own public or for the US to go to proxy wars with less technologically advanced nations.
Michael Montgom…: We asked General Jack Shanahan about all this. After all, when we spoke, he was the Pentagon’s point person on AI. He told us it’s far too early to consider any kind of treaty that would put limits on autonomous weapon systems.
Jack Shanahan: I never question somebody’s principles. They have a reason they’re worried that the Department of Defense will do this. Let me say that the scenario which they project is so far advanced and so far out of my time horizon that to me is not the most pressing concern on the table.
Michael Montgom…: Some 40 countries have called for a ban on the development of fully autonomous weapons. Among the opponents are the countries leading the way in developing AI for the battlefield: Russia, China, Israel, and the United States. General Shanahan says there’s a simple reason for the US to keep ahead of the pack.
Jack Shanahan: I don’t think any American can challenge that assertion that we don’t want to lose. And so that to me is what this is about. It’s premature; we don’t want to unilaterally do it when others are proceeding.
Michael Montgom…: Just to put you on the spot, you do not support the idea that the US, the US military, should very explicitly say that we will never develop fully autonomous weapons.
Jack Shanahan: You’re correct. I do not say that we should ever explicitly say that. Could there be over time some agreements we make internationally about some sort of limit on some aspect of that? I think that’s a different conversation to have at a different time at a policy level. But right now, explicitly, no.
Al Letson: That was Reveal’s Michael Montgomery. Since our story first aired, Liz O’Sullivan was named CEO of Vera, a tech company that analyzes AI code for dangers in areas like discrimination and privacy. And after more than 35 years of service, General Jack Shanahan retired from the military. Meanwhile, the Pentagon is expanding its AI program and partnering with companies like Microsoft, Amazon, and Palantir. All of this is changing the role of humans in warfare.
Zachary Fryer-B…: Commanders are looking at a situation where they’re just going to have to trust these advanced systems without being able to fully understand what’s happening.
Al Letson: That’s up next on Reveal.
From the Center for Investigative Reporting and PRX, this is Reveal. I’m Al Letson. We’ve been hearing about how future wars will be fought with artificial intelligence to enhance battlefield communications, speed up intelligence gathering, and even allow autonomous weapons to kill. It’s a future that is approaching fast, and I, for one, am not excited about it. With me to talk about this is reporter Zach Fryer-Biggs. Hey Zach.
Zachary Fryer-B…: Hey, Al.
Al Letson: So we first aired these stories a couple years ago, and so much has happened since then. Big advances in artificial intelligence. Some experts are comparing the moment we’re in to the start of the industrial revolution.
Zachary Fryer-B…: Yeah, we’re definitely on the brink of this huge shift that’s going to… It’s going to change a lot of things about our lives. And when you look at technology like AI, it’s what they call dual use. So it can be used as a weapon, it can be used as a tool, it can help with medicine, it can change weapon systems. And so I think when we’re talking about how AI might be used for autonomous weapons, we have to keep in mind that the fundamental technology here is going to be pretty much everywhere, and it’s sort of getting rolled out in Ukraine right now.
Al Letson: Yeah. I wanted to ask you about the war in Ukraine. I mean, I know without a doubt, lethal drones have been important in both Russia and Ukraine.
Zachary Fryer-B…: Yeah. And they’ve been used to sort of steadily escalate the situation. When Russia launched its full invasion of the country, one critical component of Ukraine’s defense was this Turkish-made drone, and it provided an ability to take out Russian radar systems as well as tank columns using laser-guided bombs. That weapon became just a critical part of repelling the invasion. At the same time, as the war has gone on, we’ve seen Russian troops deploy all kinds of drones. We’ve seen drones being funneled in from Iran, from NATO countries, from Israel. We’re seeing drones made from all over the world being deployed and in some ways proving their worth for militaries on this battlefield.
Al Letson: I’m curious about where the US stands on this. General Jack Shanahan was quite blunt that the US would not consider any kind of ban on autonomous weapons. Is that still the US position?
Zachary Fryer-B…: Basically, yes. US officials continue to refuse to put any real limits on what the military would be able to do with AI. Now, you do have US representatives who say that they’re going to look to ethically use AI. They’ve proposed these sort of non-binding best practices and principles to guide how the military might use AI, but they’re not willing to put firm limits. They’re not willing to sign a treaty. And fundamentally what they argue is that they’re willing to have these ethical talks even if there aren’t firm limits, whereas Russia and China won’t discuss it at all and are, in all likelihood, putting AI into all kinds of weapon systems without those conversations.
Al Letson: And I take it that Russia and China are also not interested in any kind of weapons ban.
Zachary Fryer-B…: That’s absolutely correct. Diplomatic officials have described representatives from both of those countries as sitting in on all sorts of discussions of a treaty, but they’re not actively pursuing any kind of ban. And I think if you look at what they’re doing from a technology standpoint, you’ve got an unmanned submarine, the Poseidon, that the Russians say may be nuclear-capable. You’ve got China aggressively investing in all kinds of drone technology. The technology’s moving forward. The capabilities are moving forward. But at the same time, they’re not having the conversation about what the ethical implications are of letting those machines potentially choose who to kill.
Al Letson: The argument we hear a lot when it comes to drones is that it’s made it easier for the US to use deadly force without risking American lives, like say in the Middle East. So people who argue this would say that these drones are protecting lives, at least that of US service members.
Zachary Fryer-B…: I think there is some truth to the argument that US service members are protected. It’s a really different situation if you’re lobbing artillery in Afghanistan versus controlling a drone from a cooled container in Nevada. But at the same time, you have to consider what’s the moral decision that’s going on here to kill, to take a life. And you’re removing that from the front lines, from someone who is on the ground in country.
And once you start taking that human decision away, once you start moving it both geographically further from the location of killing and also further from a human thought process because you got machines making some of these decisions, that makes it a little easier for a commander to sort of let something loose, to have a commander say, “Okay, autonomous weapon, you make the decision on whether to kill because I don’t have to struggle with the moral consequences of that choice.”
Al Letson: That whole decision is so fraught because you’re basically allowing a machine to decide the value of human life. And I know this is the easy place to go, but I am a science fiction nerd, and I just can’t help it that that’s how Skynet started, which created the Terminator, where the machines took over the world. I mean, it sounds a little farfetched, but it feels like that’s where we’re headed. The idea of intelligent machines taking over all of the decision making for humans.
Zachary Fryer-B…: That’s a touchstone that I think we all come back to. And I would say that it happens for me, and we’re both in good company because the former vice chairman of the Joint Chiefs of Staff used to routinely talk about the Terminator conundrum as he called it. Now, I will tell you, his staff absolutely hated it when he talked about it because they don’t want to talk about the doomsday scenario. But I think the fact that you have someone in that position talking about it is a reflection of the concern that’s very real.
And while it may not be global annihilation, the real concern here is the machines will be making decisions. The way those machines make decisions is just different than the way humans do. They don’t have brains like we have brains. So if you ask the machine, “Why did you do X?” it can’t explain it. It doesn’t have a rational thought process it can relay to you. And so as a person trying to supervise that system, I kind of have to just trust it. And that’s where you start to end up in some really scary situations in which you’re giving a machine the authority to choose life or death, and I can’t understand why it’s making the choices it is.
Al Letson: Zach Fryer-Biggs is the managing editor at Military.com. Zach, thanks so much for talking to me.
Zachary Fryer-B…: Really enjoyed it.
Al Letson: Our lead producer for this week’s show was Michael Montgomery. Brett Myers edited the show. Special thanks to the Center for Public Integrity. Before we go, we got some exciting news. Our new documentary, Victim/Suspect, is now streaming on Netflix. The doc follows reporter Rachel de Leon’s investigation into a troubling trend, young women who report sexual assaults to the police and then end up as suspects. Victim/Suspect. Stream it now on Netflix.
Nikki Frick is our fact-checker. Victoria Baranetsky is our general counsel. Our production manager is Steven Rascon with help from Zulema Cobb. Score and sound design by the dynamic duo, J. Breezy, Mr. Jim Briggs, and Fernando my man, yo Arruda. They had help from Claire “C-Note” Mullen. Our CEO is Robert Rosenthal. Our COO is Maria Feldman. Our interim executive producers are Taki Telonidis and Brett Myers. Our theme music is by Camerado-Lightning.
Support for Reveal is provided by the Reva and David Logan Foundation, the Ford Foundation, the John D. and Catherine T. MacArthur Foundation, the Jonathan Logan Family Foundation, the Robert Wood Johnson Foundation, the Park Foundation, and the Hellman Foundation. Reveal is a co-production of the Center for Investigative Reporting and PRX. I’m Al Letson. And remember, there is always more to the story.