By Tristan Greene
Do you have a family emergency plan for autonomous weapon attacks? Stop, drop, and roll isn't going to fool the drone tracking you. Duck and cover? Does it even matter?
It's hard to separate the hyperbole and science-fiction nonsense from practical concerns for regular people when it comes to autonomous weapons. Despite the late Stephen Hawking's warnings, we're probably decades away from the dystopian nightmare military experts predict the battlefield will become.
And it's not like there's a crime syndicate of extremely well-financed supervillains developing warehouses full of laser-equipped murder bots. It's easy for the average individual to frame the killer robot problem as something that might be important in the future, but not so pressing right now.
Yet there's always some important technology figure warning us about some unknown doom, typically in vague and spooky ways. Do these warnings even matter to the average Joe or Jane?
Probably not. There's a sort of pall of existential dread that comes with knowing the Pentagon and the Kremlin are hellbent on finding ways to exploit AI for warfare, but for the most part we don't have time to worry about autonomous missiles and Project JEDI.
Killer robots don't loom as menacingly in our fear-centers as more familiar threats, so we tell ourselves that, as long as we don't end up in some warzone, we're probably safe.
We live in civilized places with access to indoor plumbing and emergency response services. This gives us the confidence to point at Alexa and Google Assistant and say, "Is that the best you've got?" Then we laugh off the idea that the robots are going to rise up, forgetting it's the ingenuity of evil people we should fear, not the robustness of a neural network's code.
It seems shocking that, as of October 2018, we've yet to see the headline that's going to send the killer robots debate into high gear: "Officials still searching for humans behind terrorist attack carried out by autonomous weapons." But, sadly, it's almost surely coming.
Human Rights Watch understands this. Through its Campaign to Stop Killer Robots, the organization has dedicated itself to the incredibly difficult mission of spreading awareness about autonomous weapons, mostly as it pertains to government, military, and police use.
And, if you ask us, the problem of autonomous weapons is one the general public might not even be capable of fully understanding yet, so the organization has its work cut out for it. Ice skate uphill much?
In a video posted today, the Campaign shows us a dramatic fictional slice-of-life that paints machines as unpredictable and dangerous:
The Campaign's coordinator, Mary Wareham, told TNW:
The video shows what a future attack by fully autonomous weapons might look like. It also shows the serious concerns about the likely lack of accountability for fully autonomous weapons systems, as Human Rights Watch has documented.
These videos often come off as fear-mongering and far-fetched. But consider this: as best as we can tell, there's no technology in this fictional video that isn't already here in reality. This is a future that could have happened yesterday, technologically speaking.
Particularly worrisome, in light of Wareham's comment about accountability, is the notion that weapons developed by governments for military use often end up in the hands of terrorist organizations.
Long-time readers might remember the Slaughterbots video by Stop Autonomous Weapons we reported on last year. In it, a big tech company takes to the conference stage to show off the latest and greatest gadget for military use. Horror ensues.
Much like the "Hated in the Nation" episode of Black Mirror, it shows us how robots could kill us in ways the average person might not have considered. These may be fiction, but they're important for helping those of us who don't think like engineers visualize how autonomous weapons could affect our lives.
But you don't have to turn to fiction to find examples of the ways AI could be used to automate mass murder. Earlier this month Syrian engineer Hamzah Salam built a fully functioning autonomous weapons platform with an AK-47 and a computer. He calls it an "electronic sniper."
It's literally a sentry gun, like the ones from video games such as Call of Duty and Borderlands. And it exists right now.
According to Sputnik News, Salam says the platform:
can use any small-arms weapon, from a machine gun to a sniper rifle. Cameras transmit a signal to a computer, which analyzes the data received. Its main task is to track movement. The computer has several preset scenarios. If it notices odd behavior in a given quadrant, it will open fire.
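Strip away the hardware and the description above is ordinary computer vision: compare successive camera frames, measure how much of a region has changed, and flag a quadrant with unusual activity. The sketch below is purely illustrative of that motion-tracking principle, using simple frame differencing; the function name and thresholds are our own assumptions, not details of Salam's system.

```python
import numpy as np

def motion_quadrants(prev_frame, curr_frame, pixel_threshold=25, min_changed=0.01):
    """Compare two grayscale frames and return the set of quadrant
    indices (numbered row-major: 0 1 / 2 3) where the fraction of
    changed pixels exceeds min_changed."""
    # Pixels whose brightness changed by more than pixel_threshold count as "moved."
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > pixel_threshold
    h, w = diff.shape
    quadrants = [diff[:h // 2, :w // 2], diff[:h // 2, w // 2:],
                 diff[h // 2:, :w // 2], diff[h // 2:, w // 2:]]
    # Flag any quadrant where enough of the area changed between frames.
    return {i for i, q in enumerate(quadrants) if q.mean() > min_changed}

# A bright object appearing in the upper-left of an otherwise static scene:
prev = np.zeros((100, 100), dtype=np.uint8)
curr = prev.copy()
curr[10:30, 10:30] = 255
print(motion_quadrants(prev, curr))  # {0}
```

That this fits in a dozen lines of off-the-shelf code is exactly the point: the "analysis" step is commodity technology, and everything past it is just actuators.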
We live in a world where you can 3D print a firearm, mount it to a battery-powered tripod, and use open-source machine learning software and a Raspberry Pi (that may be an exaggeration, maybe not) to create something that, just a few years ago, would have seemed like an experimental weapon at the cutting edge of military research.
Things are changing faster than public perception can handle.
Should you be worried about some other country's killer machines occupying Main Street, USA? Probably not today.
But we're in the last few innocent moments before the first AI-powered massacre happens somewhere. And the scariest part is that there's likely nothing we can do about it if we remain ignorant of the scope of the immediate threat.
You can learn more about the fight against autonomous weapons by visiting the Campaign to Stop Killer Robots.
This article was previously published on The Next Web.
Tristan Greene is a sailor gleefully writing about consumer-friendly artificial intelligence advances, political policy, and concerning technology.