How AI Is Used in Warfare


It’s late 2020 and war has broken out in a place the world had forgotten. A festering conflict has erupted into full-scale fighting. Ground zero is Nagorno Karabakh… a disputed region in the Caucasus mountains, fought over by two former Soviet republics: Armenia and Azerbaijan.

This looks like a textbook regional war – over territory, over ethnic and national pride. Fought while the rest of the world is consumed by the pandemic, it doesn’t get that much media coverage.

But for those who are paying attention, it is a glimpse of future wars. You can find it right here, in the propaganda pumping out from the start of the war. Azerbaijan’s border patrol posts this video on its YouTube account just as the conflict begins.

The lyrics are a rush of jingoistic fever, with a mantra: “hate” for the enemy. But look carefully, and you’ll see what makes this conflict a watershed in modern war. Watch out for these trucks in the background.

In this shot you can just about see what’s inside. Then… a launch, in slow motion. What emerges is not a rocket or a missile: it has wings that are beginning to unfold just before the video cuts away.

We can see enough to identify what this is. It’s what’s called a “loitering munition” from Israel’s state-owned defence manufacturer, IAI. Its model name: the “Harop.” The company’s promotional videos show what “loitering munitions” can do.

Once launched, they fly – autonomously – to a target area, where they can wait, or “loiter,” in the sky for hours, scanning for a target – typically, air defence systems. Once they find a target, they don’t drop a bomb, but fly into it, to destroy it on impact.
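On paper, the cycle just described is a simple state machine: transit, loiter and scan, then dive. Here is a minimal, purely illustrative Python sketch of that cycle; the state names, timings and detection probability are invented abstractions for the sketch, not details of the Harop or any real system.

```python
import random

# Purely illustrative state machine for the loitering-munition cycle
# described above: transit to an area, loiter while scanning, then dive.
# Every state name, timing and probability here is an invented
# abstraction for the sketch, not a detail of any real system.

def loiter_cycle(max_loiter_minutes: int = 360, detect_prob: float = 0.02) -> str:
    state = "loitering"  # after an autonomous transit to the target area
    for minute in range(max_loiter_minutes):
        # Scan once per simulated minute; detection modelled as a coin flip.
        if random.random() < detect_prob:
            state = "diving"  # no bomb is dropped: the airframe itself impacts
            return f"{state}: target found after {minute} minutes of loitering"
    return "expended: loiter window closed, no target found"

print(loiter_cycle())
```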

This behaviour has earned them the nickname “kamikaze drones.” In the war over Nagorno Karabakh, these weapons didn’t just make for good propaganda. They made a real difference. Azerbaijan had spent years investing in loitering munitions.

Analysis by a US think tank showed that they had more than 200 units across four different models – all of them sophisticated Israeli designs. Armenia only had a single, domestically made model with a limited range.

“The really important aspect of the conflict in Nagorno Karabakh, in my view, was the use of these loitering munitions, so-called kamikaze drones, these pretty autonomous systems.” Ulrike Franke is one of Europe’s leading experts on military drones.

“They also had been used in some way or form before, but here, they really showed their usefulness, militarily speaking, of course. It was shown how difficult it is to fight against these systems.”

As Azerbaijan celebrated victory, you could even call Nagorno Karabakh the first war that was won — in part — by autonomous weapons. Little wonder the Harop was on show that day. And other militaries were paying attention.

“Since Nagorno Karabakh, since the conflict, you could definitely see a certain uptick in interest in loitering munitions. We have seen more armed forces around the world acquiring or wanting to acquire these loitering munitions.”

The Nagorno Karabakh war amounted to a showcase for autonomous weapons technology. With a clear message: this is the future. It’s a future that is coming at us fast. Ever more advanced models are coming onto the market… Designed to hit a wider range of targets… The manufacturer IAI even markets one of its models with the slogan… “fire and forget.”

Fire and forget… think about that. Already, today, autonomous weapons are being used to find a target over long distances and destroy it without human intervention. And this revolution is just getting started – turbocharged by artificial intelligence.

In the United States, a major report from a “national security commission” on artificial intelligence talks about AI enabling a “new paradigm in warfighting” – and urges massive amounts of investment in the field.

This isn’t all about autonomous weapons – there are many other areas of the military which will be using artificial intelligence. “One area where we see a lot of AI-enabled capabilities is in the realm of data analysis.

So we are gathering so much data in military operations. Another area, which I think is quite promising, but also still relatively removed from the battlefield, from the combat, is logistics.

AI can definitely help to make this more efficient, cheaper, better, easier, all of that.” And fuelling all of this is an intensifying global competition, which spans all the way from these more prosaic fields to the autonomous weapons we’re looking at today… “The Chinese and the Russians have made it very clear that they intend to pursue the development of autonomous weapons…” Martijn Rasser, a former analyst at the CIA, covers emerging weapons technology at Washington’s leading defence think tank. “…and they’re already investing heavily in the research and development of those systems.”

It’s not just the superpowers piling in. Britain’s new defence strategy also puts AI front and centre. And as we’ve already seen, Israel is a leader in the autonomous weapons field.

In fact, wherever you look, countries of all sizes are jumping in. No wonder there’s talk of this becoming an arms race. Germany’s foreign minister Heiko Maas is clear that the arms race is already underway.

“We’re right in the middle of it. That’s the reality we have to deal with.” If anything, this might go deeper than an arms race… “AI is here to stay. And there is a belief among the major powers that this could make a difference on the battlefield in the future.

So they are frenetically investing in it.” Indian diplomat Amandeep Singh Gill is the former chair of the UN group of governmental experts on lethal autonomous weapons. “And this is a race, in a sense, which cuts across the military and the civilian fields, because there’s also the sense that this is a multi-trillion-dollar question.


It’s about the future of resilient economies.” That is what sets this new era apart from arms races of the past. During the Cold War, the development of nuclear weapons was driven purely by governments and the defence industry.

Beyond power generation, there wasn’t much commercial use for nuclear technology. Today, AI is rapidly entering our everyday lives. It might even unlock the phone in your pocket when you hold it up to your face.
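Face unlock is a neat illustration of how mundane this technology has become. Under the hood, such systems typically reduce a face image to a numeric “embedding” vector and compare it with the enrolled template. The sketch below shows just that comparison step, with made-up four-dimensional vectors and a made-up threshold; real systems use learned models and embeddings with hundreds of dimensions.

```python
import math

# Toy illustration of the comparison step behind face unlock: a learned
# model maps each face image to an embedding vector, and unlocking is a
# similarity check against the enrolled template. The 4-dim vectors and
# the 0.9 threshold are invented for this sketch.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled = [0.12, 0.88, -0.35, 0.41]   # template stored at enrolment
candidate = [0.10, 0.91, -0.33, 0.40]  # embedding of the face at the camera

unlocked = cosine_similarity(enrolled, candidate) > 0.9
print("unlocked" if unlocked else "stay locked")
```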

This emerging ubiquity of AI is important. Because it means that developments in AI cannot be contained – they are BOUND to bleed between civilian and military fields — whether we like it or not.

“AI is by definition dual use or multi use, it can be used in all kinds of ways. It really is an enabler more than a technology. There is a whole range of applications of artificial intelligence in the civilian realm, from health care to self-driving cars to all kinds of things.”

It means that something as innocuous as a new year’s celebration in Edinburgh… or St Patrick’s Day in Dublin… can be powered by similar swarming technology… …to what the Indian army showed off on its national day.
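What those light shows and military demos share is decentralised flocking of the kind Craig Reynolds called “boids”: each unit steers using only what is around it, with no central controller, which is what lets swarms scale. Below is a stripped-down, illustrative sketch of the two core rules, cohesion and separation; all the constants are arbitrary choices for the illustration.

```python
import random

# Stripped-down 2-D flocking ("boids") step: each drone adjusts its own
# velocity locally -- cohesion pulls it toward the flock, separation
# pushes it away from close neighbours. There is no central controller.
# All constants are arbitrary, and a real boid would use only nearby
# neighbours rather than the whole flock's centre.

N = 20
positions = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(N)]
velocities = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step(positions, velocities, cohesion=0.01, separation=0.05,
         min_dist=5.0, damping=0.95):
    cx = sum(p[0] for p in positions) / len(positions)  # flock centre
    cy = sum(p[1] for p in positions) / len(positions)
    for i, (pos, vel) in enumerate(zip(positions, velocities)):
        # Cohesion: steer gently toward the flock centre (damped so the
        # motion settles instead of oscillating forever).
        vel[0] = vel[0] * damping + (cx - pos[0]) * cohesion
        vel[1] = vel[1] * damping + (cy - pos[1]) * cohesion
        # Separation: push away from any neighbour that gets too close.
        for j, other in enumerate(positions):
            if j != i and abs(other[0] - pos[0]) + abs(other[1] - pos[1]) < min_dist:
                vel[0] += (pos[0] - other[0]) * separation
                vel[1] += (pos[1] - other[1]) * separation
        pos[0] += vel[0]
        pos[1] += vel[1]

for _ in range(100):  # 100 simulated steps: the cloud settles into a loose flock
    step(positions, velocities)
print("flock centre:", sum(p[0] for p in positions) / N,
      sum(p[1] for p in positions) / N)
```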

In fact, swarming is one of the hottest areas of autonomous weapons development right now. The US Navy has released footage of early demonstrations. Here, fighter jets drop over 100 tiny drones in mid-flight.

Once they’re out there, it’s almost impossible for the human eye to keep track of them. The whine of their motors — almost the only sign of the threat in the sky. Experts say they will make highly effective weapons.

“You could take out an air defense system, for example, by — you just throw so much mass at it, so many numbers, that the system is overwhelmed. This, of course, has a lot of tactical benefits on a battlefield.

And no surprise, a lot of countries are very interested in pursuing these types of capabilities.” Not least the head of the body advancing the US army’s modernisation, as he explained in an online think tank forum.

“Most likely drone swarms are something you’re going to see on the battlefield – on a future battlefield. I don’t think it’s a matter of if – as a matter of fact, I think we’re already seeing some of it – it’s a matter of when we begin to see it.”

And feeding the momentum of this potential arms race — in order to fight these weapons, you need these weapons. Humans don’t have a chance. “When you’re defending against a drone swarm, a human may be required to make that first decision.

But I’m just not sure that any human can keep up with a drone swarm.” This issue of speed gets us to a critical emerging danger of autonomous weapons… The weapons we’ve seen so far are capable of a high degree of autonomy.
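The arithmetic behind that worry is stark. In the back-of-the-envelope sketch below, every number is an illustrative assumption rather than a measured figure, but the shape of the result is the point: a human making engage-or-ignore calls one at a time is saturated by a fraction of a hundred-drone swarm, while an automated defence is not.

```python
# Back-of-the-envelope saturation arithmetic for the "humans can't keep up"
# point above. Every number here is an illustrative assumption, not a
# measured value.

swarm_size = 100              # drones arriving at once
attack_window_s = 60.0        # time before the swarm reaches its targets
human_decision_s = 2.0        # seconds per engage/ignore decision by a person
machine_decision_s = 0.05     # seconds per decision for an automated system

human_capacity = attack_window_s / human_decision_s      # 30 decisions
machine_capacity = attack_window_s / machine_decision_s  # 1200 decisions

print(f"human operator handles {human_capacity:.0f} of {swarm_size} drones")
print(f"automated system handles {machine_capacity:.0f} of {swarm_size} drones")
# -> a single human is saturated by a fraction of the swarm; the machine isn't.
```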

But these weapons wouldn’t be IMPOSSIBLE for humans to control. Even a “fire and forget” weapon needs a human to fire it, and they’re still operating in a way that we can pretty much grasp.

Now let’s think ahead, a decade or two into the future. That’s a decade or two of rampant technological development – and adoption – of increasingly autonomous weapons. “I think what is very likely is that in 20 years’ time we will have swarms of unmanned systems, not even necessarily just airborne drones — it can also be ground systems, surface vessels, etc.

So different units operating together and carrying out attacks together, which does indeed require quite a high level of AI-enabled autonomy.” To fight these systems, you will need these systems.

Because human beings are simply too slow. “This is what potentially may drive an arms race — that some actors may be forced to adopt a certain level of autonomy, at least defensively, because human beings would not be able to deal with autonomous attacks as fast as would be necessary.

So speed is definitely a big concern here.” And that could have fateful consequences for how wars begin. “We could find ourselves in a situation where, because of this problem of speed and autonomous systems having to be countered by other autonomous systems, these systems basically react to each other in a way that’s not wanted…

In the literature we call this “flash wars” — where you have an attack, or even just that you think that there is an attack, and an autonomous system reacts to that. Another autonomous system by the opponent reacts to that attack.

And you have this escalation potentially very fast. Hence the “flash” – “flash wars,” where you basically have an accidental military conflict that you didn’t want.”
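That feedback loop is easy to caricature in a few lines of code. In this toy model, the levels and the plus-one response policy are all invented for the illustration: each side’s automated system answers any perceived attack one notch more strongly, so a single false alarm ratchets both sides to full escalation in a handful of machine-speed steps, with no human in the loop to interrupt.

```python
# Toy model of the "flash war" feedback loop described above: two automated
# systems, each programmed to respond one notch above the attack level it
# detects. A single false alarm -- no real attack at all -- is enough to
# ratchet both sides to full escalation. Levels and policy are invented.

MAX_LEVEL = 10  # 0 = calm, 10 = full-scale exchange

def automated_response(detected_level: int) -> int:
    # Policy: answer any perceived attack slightly more strongly.
    return min(detected_level + 1, MAX_LEVEL)

a_level, b_level = 0, 0
b_level = automated_response(1)  # false alarm: B "detects" a level-1 attack
step = 0
while a_level < MAX_LEVEL or b_level < MAX_LEVEL:
    a_level = automated_response(b_level)  # A reacts to B's response
    b_level = automated_response(a_level)  # B reacts to A's reaction
    step += 1
    print(f"step {step}: A at {a_level}, B at {b_level}")
# Within a handful of machine-speed steps, both sides sit at MAX_LEVEL --
# an escalation no human had time to interrupt.
```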

We’ve already seen something like this on the financial markets. The “flash crash” of 2010 wiped more than a trillion dollars off the US stock markets in just minutes. It was driven by trading algorithms feeding off each other in a dizzying spiral.

How it happened is STILL not fully understood. In a flash crash, trading can be halted to prevent disaster. The risk with a “flash war” is that there might be no pulling back. “If the beginning is bad enough, it may not even matter anymore that the original event wasn’t supposed to be an attack in the first place.

But you could have a situation where the counterattack is so bad that you end up in a war.” Now, think back to Nagorno Karabakh — a regional war where autonomous weapons may have tipped the balance.

In a future world with the risk of “flash war,” places like this could face even more instability, even more conflict. “We are moving into a world where systems will be more autonomous.


But we need to make sure that we minimize the risk of unwanted escalation, of lethality decided by machines without any human control.” But how do we do that? How do we prevent the worst? As we’re about to find out… the world is struggling to find a way… “My fear is that there will be more unpredictability in how we get to armed conflict, so the pathways to getting to the battlefield won’t be clear to policymakers.

So they will not understand fully the risks of certain actions or certain happenings, and that will make the whole world a more dangerous place.” Amandeep Singh Gill was at the centre of United Nations efforts to try to get a grip on autonomous weapons… a process that critics say is on the brink of failure.

This is where it all happens… The UN buildings in Geneva. It’s here that delegates from UN member states gather with experts and NGOs to talk about the future of autonomous warfare. This process is part of what’s called the UN Convention on Certain Conventional Weapons.

A diplomatic tongue-twister launched in the 1980s to try to regulate non-nuclear weapons that were deemed so dangerous that they needed special attention. Things like land mines and blinding lasers.

In 2014, lethal autonomous weapons made it onto the agenda. It has been very slow going. The process has yielded a set of “guiding principles” – saying that autonomous weapons should be subject to international humanitarian law, and that humans must have ultimate responsibility for their use.

But these “guiding principles” have no force… they’re just a basis for more discussions. For campaigners calling for a ban, that’s not good enough. “We do get frustrated by the delays that have happened and the delay in moving from discussions to actual negotiations of a new treaty.

The main problem with this forum is that it operates by consensus, meaning any one state can block progress and block that shift from discussions to negotiations.” Bonnie Docherty lectures on human rights at Harvard Law School – and is also a spokeswoman for the “Campaign to Stop Killer Robots” – a high-profile coalition of NGOs.

She has mapped out principles for an international treaty. “The overarching obligation of the treaty should be to maintain meaningful human control over the use of force, and it should be a treaty that governs all weapons operating with autonomy that choose targets and fire on them based on sensor inputs rather than human inputs.”

That idea of keeping “meaningful human control” is broadly echoed by many countries, but only 30 states support the campaign. They’re mostly smaller nations but include one giant in the form of China. But Beijing’s true position is blurred.

But Beijing’s true position is blurred. “China has called for a ban on, or expressed  support for a ban on USE, but has not,   to my knowledge, expressed support for a ban  on development and production.

We believe that you need to prohibit development as well as use of these inherently problematic systems, because once things are developed, the genie is out of the bottle.” And the other great military powers aren’t keen at all on those sorts of limitations either.

Russia is accused by many of taking any opportunity to thwart the Geneva talks. But there are plenty of other objectors too. “Russia has been particularly vehement in its objections… Some of the other states developing autonomous weapon systems, such as Israel, the US, the UK and others, have certainly been unsupportive of a new treaty and have expressed varying degrees of support for actually continuing discussions.

So those are some of the roadblocks that we face.” As things stand, the US is highly unlikely to support a ban. Rather, it has set out its own principles, which include human involvement.

“A ban on autonomous weapons systems is essentially infeasible just because the technology is out there. The Department of Defense has been very clear about its commitment to ethical uses of these technologies, where right now the position is that a human being has to be on the loop or in the loop when those weapons are used, so that it won’t be fully autonomous in the sense that there won’t be any human interaction with these weapons systems.”

But the reality is that the US, China and Russia are competing so intensely in all areas of AI technology that it’s questionable whether any of them would sign up to a treaty that significantly limits what they can do.

“The large powers will always have agendas. They want freedom of manoeuvre. They think that they need to have agency over technology development. And sometimes they’ve been very sceptical of the role of international organizations, multilateral forums in understanding and regulating technology.”

Aside from the lack of interest from crucial players… the challenge of tackling an intangible technology like AI with the traditional tools of “arms control” is genuinely difficult. “A lot of the old ways of arms control and arms control treaties don’t work anymore and don’t apply anymore to these systems because, to put it bluntly, we’re talking about software rather than hardware.

So a lot of arms control systems in the past basically were about, you know, allocating a certain number of systems. You are allowed one hundred warheads of this type and one hundred warheads of that type.

And we’re basically counting. You can’t do this with the AI-enabled weapon systems that we were talking about, because it doesn’t matter what it looks like from the outside, but what’s inside.”


Germany has been quite active in trying to navigate around these problems… its foreign minister says that the world has to find a way… “Just like we managed to do with nuclear weapons over many decades, we have to forge international treaties on new weapons technologies…” Heiko Maas is a member of Germany’s Social Democrats and has been a vocal advocate of arms control.

“They need to make clear that we agree that some developments that are technically possible are not acceptable and must be prohibited globally.” In fact the German government has laid out its intention – in the document that underpins the current coalition.

It says… “We reject autonomous weapons systems that are outside human control. We want to prohibit them worldwide.” That sounds pretty clear. But even this is complicated. Germany, for instance, does NOT support the Campaign to Stop Killer Robots.

It says there’s a better way. “We don’t reject it in substance – we’re just  saying that we want others to be included the   global controls that we would need to ensure that  autonomous weapons systems don’t come into use…   So military powers that are  technologically in a position   not just to develop autonomous weapons but  also to use them.

We need to include them.” So this isn’t just a debate about the rights and wrongs of autonomous weapons. It’s also a debate about PROCESS. On the one hand, Germany says an agreement is only worth anything if the big countries are on board – they want that elusive consensus in the Geneva process.

On the other, the Campaign to Stop Killer Robots says the matter is too urgent to wait. They say there’s just time for one more round in Geneva. “We feel that if states don’t take action by that point, they should strongly consider moving outside of the Convention on Conventional Weapons and looking at other options.

So they could go to the UN General Assembly to negotiate a treaty. They could start an independent process, basically a forum that is not bound by consensus, but is guided by states that actually are serious about this issue and willing to develop strong standards to regulate these weapon systems.”

There’s precedent for this… with land mines, for example. In the 1990s, the Geneva process couldn’t find consensus. Instead, more than 100 countries broke away to create a ban called the “Ottawa Convention.”

But the great powers didn’t sign. And more than 20 years later, the US, Russia and China still haven’t joined the Ottawa Convention. “It’s a dilemma, isn’t it? So you can do away with the rule of consensus and then you can have results quickly, but they will not have near-universal support; at the very least, they will not have support from the countries that are developing these capabilities.

But through the rule of consensus, you force those countries to engage. So I think it’s a choice that the international community makes in these forums.” So the world doesn’t agree on what to do about autonomous weapons.

And it can’t even agree on HOW to agree on what to do about them. In this situation, is there ANY prospect of a solution? “In the end we may end up with rules or norms or indeed agreements that are more focused on specific uses and use cases rather than specific systems or technology.

So you where   you basically agree, for example, to  use certain capabilities only in a   defensive way or only against machines  rather than humans or only in certain   contexts. But as you can imagine, implementing  and, first of all, agreeing to that and then   implementing that is just much harder than  some of the old arms control agreements.

Compounding this is the rock-bottom level of trust between the major powers right now. US-Chinese talks in Alaska in early 2021 descended into a bitter round of accusations. “When there is lack of trust, you tend to attribute all kinds of intentions to the other party and you tend to overestimate what they might be doing and overshoot in your own response.

Today, frankly, the developments on the technology front are actually adding to the mistrust.” As the US, China and Russia slip deeper into an era of “great power competition,” the challenge will be to carve out areas like this — where they can put mutual interest above the visceral drive to be on top.

THAT is the spirit of “arms control.” “You don’t make arms control agreements with your best friends and allies. You always, by definition, you know, try to negotiate them with your enemies.

And this isn’t exactly new… I don’t think it’s impossible that these players, which are already opponents and may eventually become even more adversarial, can come together and agree on certain minimum requirements simply because it is in everyone’s interests.”

For Germany’s foreign minister, the whole world has responsibility here. “The world must take an interest in the fact that we’re moving towards a situation with cyber or autonomous weapons where everyone can do as they please.

We don’t want that.” Climate change serves as an ominous warning of what can happen when humanity sees a common threat on the horizon but FAILS to act in time to stop it. The Rio Summit kicked off the UN’s process of talks to tackle climate change way back in 1992… It took 23 years to get to the Paris Agreement… And it’s clear even THAT wasn’t enough… It’s already too late to prevent much of the devastation that scientists predicted from the start.

With the scenarios we’ve just seen, the warning signs are just as clear — and if anything even more urgent.
