A Tragic Inevitability: The Fusion of Drones & AI in the Wars of the Not-So-Distant Future

by Ruairi Luke McCallan, April 15th, 2018

Image From TechSpot

I’m writing this piece following the decision of the US, UK and France to launch a supposedly one-off missile and air strike against Syrian president Bashar al-Assad and the forces loyal to him, in response to a chemical attack likely carried out by him against civilians in the town of Douma.

What struck me about the attack was just how many people were involved in it. And I mean that in the sense of ‘wow, even something that is a one-off still requires this much manpower, this much flesh-and-blood, to carry out’. Maybe it was the fact that as a child I’d been a big fan of the Star Wars franchise and had probably played one too many hours of Halo and Call of Duty with my friends during my awkward teenage years, but I’d often imagined that by now the battles of the future would be fought by things like androids, clones or small groups of super-human soldiers who could kill an ordinary mortal simply by accidentally brushing into them.

Childish naivety aside, the rapid advancements in the world of tech haven’t been limited to our iPhones, Kindles and Xboxes. Indeed, as pointed out in a documentary by Wired Magazine, much of the technology we hail today as making our lives easier started out, for better or worse, as a military project of some sort. Take those self-driving cars I’m really excited about: the driverless technology that powers them started life as part of a drone-bombing project, designed to help AI better recognise potential targets.

This got me thinking: What else is in the works for the military, especially when it comes to drones? How will this shape the conflicts of the not-so-far-away future, and what legal (and ethical) implications could arise as a result of these new technologies?

Diligent Drones?: The Ever-Evolving Shape of Drone Technology

The use of unmanned drones to target potential terrorist hideouts and enemy structures has become a pretty common feature of warfare over the past five or so years, and since they first ‘debuted’, as it were, the controversy over their use has grown year on year.

The controversy is likely to continue and to become a much more hotly debated topic in tech, military, political and academic circles alike. Recently, the Pentagon announced that it was going to step up its research into the possibilities of AI weaponry, particularly when it came to drones. Speaking at a conference hosted by the New America think-tank, aerospace engineer and undersecretary of defence for research and engineering Mike Griffin noted that whilst drones had proven useful on the offensive side, defensively there was still much work to do, particularly when it came to dealing with the increasing likelihood of “drone swarm” attacks becoming a much more common feature of warfare:

“Certainly, human-directed weapons systems can deal with one or two or a few drones if they see them coming, but can they deal with 103?” Griffin asked. “If they can deal with 103, which I doubt, can they deal with 1,000?”

This fear — of the inability to defend US and allied forces against drone attacks if the technology fell into the hands of enemy forces — is not an unwarranted one; in February for example, Griffin recalled that a Russian airbase in Syria was attacked by what many suspect to be a small swarm of unmanned drones, an attack the base was not prepared for.

This conference and Griffin’s remarks about the need for the US and the West in general to enter an ‘AI arms race’ of sorts come ‘hot on the heels’ of protests by Google employees at the company’s involvement in the Project Maven task-force at the Pentagon. Maven represents the fusion of current drone technology with enhanced, ever-evolving AI capabilities, and according to this DSD memo obtained by tech website The Verge, the Maven project will “provide computer vision algorithms for object detection [and] classification”, meaning that drones will be able to identify potential targets from the data they gather the next time they’re out in the field.
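To make that “object detection [and] classification” phrase a little more concrete, here is a minimal, purely illustrative Python sketch of the kind of output such computer-vision systems produce. Nothing here comes from Project Maven itself (whose code is not public); the `Detection` type and the stubbed detector are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # what the model thinks it is seeing
    confidence: float  # how sure it is, from 0.0 to 1.0
    box: tuple         # (x, y, width, height) in the video frame

def detect_objects(frame) -> List[Detection]:
    """Stand-in for a real computer-vision model run over one frame of drone footage."""
    # A real system would run a neural network here; this stub returns canned results.
    return [
        Detection("vehicle", 0.93, (120, 80, 60, 40)),
        Detection("person", 0.71, (300, 150, 20, 50)),
        Detection("rifle", 0.42, (310, 170, 15, 8)),  # could just as easily be scrap metal
    ]

# Classification is only ever probabilistic: every label comes with a confidence score,
# and the hard question is what the system is allowed to do with the uncertain ones.
for det in detect_objects(frame=None):
    print(f"{det.label:8s} {det.confidence:.0%} at {det.box}")
```

The point of the sketch is simply that ‘identification’ here means a label plus a probability, not a certainty, which is exactly where the questions in the next section come from.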

Image From Wikipedia

Treaties & Treatises: The Legal & Moral Implications of AI-Drone Fusion

This, however, raises a question. Drone strikes have already proven controversial because the Pentagon has, on many occasions, fired at what it purported to be a terrorist cell that later turned out to be, for example, a man sifting through scrap metal. Will a more advanced, AI-driven drone necessarily be any better at identifying terrorists? What if it confuses scrap metal for a Kalashnikov? Whilst such questions may initially seem laughable, there remains, as pointed out by the Campaign to Stop Killer Robots (the name isn’t helping your case to be taken seriously, lads, not going to lie), a need to retain human control over robotic and computational weapons technology in order to avoid potential cock-ups that kill more innocent civilians and end up empowering, not defeating, terrorist groups and their rhetoric.
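To illustrate what ‘retaining human control’ might mean in software terms, here is a small, hypothetical Python sketch of a human-in-the-loop gate: whatever the classifier’s confidence, no action is taken without an explicit human decision. It is a sketch of the principle the campaigners are arguing for, not a description of any real system, and the threshold and prompt are toy choices.

```python
def request_human_review(label: str, confidence: float) -> bool:
    """Ask a human operator to confirm or reject a machine-generated identification."""
    answer = input(f"Model says '{label}' ({confidence:.0%} confident). Confirm target? [y/N] ")
    return answer.strip().lower() == "y"

def authorise_engagement(label: str, confidence: float) -> bool:
    # Low-confidence identifications are rejected outright: a 42% 'rifle'
    # may well be a man sifting through scrap metal.
    if confidence < 0.90:
        return False
    # Even a high-confidence identification still requires a human decision;
    # the machine may recommend, but it never acts on its own.
    return request_human_review(label, confidence)

if __name__ == "__main__":
    print("Engage:", authorise_engagement("rifle", 0.42))    # always False
    print("Engage:", authorise_engagement("vehicle", 0.93))  # defers to the operator
```

The point is structural rather than numerical: removing that `request_human_review` call is precisely the step the open letters discussed below are trying to prevent.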

This desire to rein in AI and its sadly inevitable fusion with the arms industry is nothing new. In fact, the idea of trying to prevent scientific discoveries and research from becoming weaponised is a long-standing ‘historical trope’ of the discipline; Einstein, for example, was horrified that his work on atomic theory was used to help create the first atomic bombs. As in the 1940s, so too in the 2010s, it seems. Tech entrepreneurs and AI developers such as Elon Musk have led calls for an outright ban on autonomous robotic weapons, whilst the Future of Life Institute has created an open letter from robotics researchers of all stripes (which you can still sign, I believe) urging a total and complete de-escalation of the AI arms race, at least with regard to weapons research that may lead to technology requiring no level of human control to operate.

The Institute’s letter is an interesting read, and makes many valid points about the use of AI in combat and what its potential application could mean for a philosophy and ethic of war in the future. They rightly point out that whilst the use of AI and robots in warfare will almost certainly reduce the number of human casualties in conflict, it will also “lower the threshold for going into battle”. Had doubts about the legality of the recent airstrikes against Assad in Syria? Multiply those doubts by around a thousand when it comes to the use of AI in warfare.

Scientists Need To Continue To Be Vigilant: Concluding Thoughts

The words of AI developers in a letter to the UN last year said it all: “Once this Pandora’s box is opened, it will be hard to close”. And they’re right. Recently, AI researchers alongside other scientists led a boycott of the Korea Advanced Institute of Science and Technology (KAIST) over the university’s plan to open an autonomous weapons lab in partnership with the arms firm Hanwha Systems. Whilst the likelihood of Skynet and Terminator-esque scenarios remains rather remote, the possibility of arms firms and defence ministries, whether in the US, the UK or elsewhere, taking advantage of AI’s potential for weapons production is fast becoming a reality, and one that is every bit as ethically questionable as drones were when they were first ‘re-purposed’ for military ends.

I am not a scientist. In spite of enjoying subjects like physics at school, I never truly excelled at the discipline, nor was I interested in it enough to consider a full career in the sciences. This is why scientists need to take a stand against the tragic inevitability of AI being developed to kill rather than to help; liberal arts grads like me, whilst we can shout as loud as we like about the potential philosophical and ethical implications, don’t have the expertise or the gravitas to truly develop the scientific arguments and policy that could shape the future of warfare and reduce the suffering of innocent civilians.

I’ll leave you with the (rather scary) thoughts of Scott Phoenix, co-founder of the robotics firm Vicarious: “[If] you have an autonomous drone army, your army becomes my army if I find a single bug in your code somewhere”. The benefits of AI are huge, but the risks are myriad and clear; robotics and AI scientists need to continue to speak out against the weaponising of their research.