By Tristan Greene
Russia this week announced plans to develop AI-powered missiles, following earlier news that Russian arms manufacturer Kalashnikov is designing an autonomous small-arms weapons system for military use. The Russians seem intent on preparing for a world where robots, or autonomous missiles powered by machine-learning algorithms, kill people.
TASS reports Tactical Missiles Corporation CEO Boris Obnosov, speaking at the MAKS 2017 airshow in Moscow, responded to a question about his company’s AI missile development:
We saw this example, and when the Americans used it in Syria … when it is possible to re-direct missiles to targets. Work in this area is under way. This is a very serious field where fundamental research is required. As of today, certain successes are available but we’ll still have to work for several years to achieve specific results.
There’s more to the statement than meets the eye. If Mr. Obnosov is claiming the US has autonomous missiles with capabilities that would take Russian weapons-makers years to develop, he’s making a bold claim. It’s more likely he’s describing good ol’ American Tomahawk missiles, which have been called “smart” since the first Gulf War in 1991. They’re a little smarter now, but they don’t learn, and they can’t choose their own targets.
Redirecting a missile isn’t a particularly advanced feat, and it doesn’t require AI: just intelligent programming and limited human interaction. The US launched 59 Tomahawk missiles at Syria recently because the Tomahawk is dependable and doesn’t require a ‘pilot’ in the general area.
The Navy’s Harpoon missile
The US is developing a replacement for the Navy’s Harpoon missile system called the Long Range Anti-Ship Missile (LRASM). This missile does have AI built in, but it cannot select its own targets. It’s designed to be very good at attacking enemy vessels, and the AI will play a huge role in its ability to hit its target without a human doing all of the math. On-the-fly avoidance systems and real-time redirection reduce the need for humans to work out complex firing solutions.
The fear — when a weapons-maker for an advanced nation discusses a potentially years-long development cycle for a weaponized AI technology in the year 2017 — is that they’re giving it the capability to discern its own targets and eliminate them.
Kalashnikov Group
It wouldn’t be the first time a Russian weapons manufacturer has announced its intent to do so. Representatives from the Kalashnikov Group, manufacturer of the AK-74 assault rifle, are developing a completely autonomous weapons system. Kalashnikov Director for Communications Sofiya Ivanova told TASS earlier this month:
In the imminent future, the Group will unveil a range of products based on neural networks. A fully automated combat module featuring this technology is planned to be demonstrated at the Army-2017 forum.
There’s so much hyperbole surrounding AI that it’s easy to get caught up in the “AI-washing” that companies engage in. The idea of giving a non-human entity autonomy to decide whether a human is a target or not without the direction of a real person… is a terrifying one, no hype needed.
Top US military officials warned politicians on Capitol Hill about exactly this type of thing in a meeting last week. US Air Force General Paul Selva stated, “I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life.” According to The Hill, Sen. Gary Peters (D-Mich.) replied:
Our adversaries often do not consider the same moral and ethical issues that we consider each and every day.
It’s clear that Russia isn’t developing completely autonomous terminator bots. Rest assured, the sky isn’t falling. Russia is, however, working to weaponize the learning ability of artificial intelligence. The US probably is too – this isn’t an attack on any given country.
Even with a limited ability to learn, AI is more advanced than we’re ready for. There’s so much data that AI can produce more information than humans have time to read. We’d have to put every AI researcher in a room together for a few years with the data we have right now just to figure out where we actually stand. And this is the infancy of what AI is going to become.
It’s time for a global summit on AI warfare. World leaders need to prioritize this while they can still debate the “before” scenarios.
Tristan Greene is a sailor gleefully writing about living on dry land. He’s been known to brag about his Rockband 2 scores.
This article was previously published on TNW.
Featured Image Source: Visual Hunt.