Autonomous weapons have been with us for a long time; consider sea and land mines, which detonate on their own when triggered by a target’s proximity. However, weapons enabled by artificial intelligence (AI) have only recently crossed the threshold from science fiction to reality, thanks to the giant leaps we have made in AI development.
Military drone swarms, both in the air and below the surface of the sea, are already here. They might not be advanced yet, but give it time. We also have to contend with AI-piloted fighter planes that can defeat human pilots in dogfights. Removing human beings from the equation, along with their slower decision-making but also their capacity for critical thinking and intuition, leaves a system that simply relies on preprogrammed parameters to select a target and destroy it.
This should probably give you pause. It is not hard to make the mental leap from “man, this is a great weapons system that provides a huge tactical and strategic advantage” to “holy crap, this is terrifying and could lead to a level of destruction and killing never before seen in human history.” The latter sentiment underlies international efforts to limit these technologies, such as the push for a “killer robots treaty” meant to save us from ourselves.
Related: How US special operators use Artificial Intelligence to get an edge over China
This article is in no way a call for an outright ban on the development of AI-enabled autonomous weapons. Surely humanity has the capacity to limit the use of such weapons, as we (mostly successfully) do with chemical, biological, and nuclear weapons. I say that with a giant dose of skepticism, though, as we are not always good at avoiding death and destruction on a massive, global scale.
What I do intend to do here is make you think about the potential dangers of AI-enabled autonomous weapons. Once we go down the road of robots battling robots in AI-dominated warfare, it is a slippery slope to warfare as a video game. Countries could be less reluctant to go to war if their leaders see it primarily as a struggle between machines. Leaders who already often show little regard for sending the young off to die in war will presumably show even less reluctance when they are sending killer robots off to fight it out.
Related: US Special Operations Command wants to use “deep fakes” as a weapon, but that could create problems
If wars ceased escalating at that point, with robot fighting robot to decide a victor, things might not be so bad. However, I see little hope that mankind would stop there. Those AI-enabled autonomous weapons would surely, at some point, find themselves pointed at human targets. That is when things become terrifying, even without the further threat of those AIs somehow becoming sentient and turning on humans.
At a certain point, we will have to come to a collective decision to limit just how effectively and industriously we can kill ourselves. Given our history, I do not hold out much hope, but that should not stop us from at least beginning to wrestle with the inherent moral and ethical problems of AI-enabled autonomous weapons. One thing is for certain: science fiction remains fertile ground for countless more terrifying stories of just how bad things could become if we are not careful.
Feature Image: The 11th Armored Cavalry Regiment and the Threat Systems Management Office push a swarm of 40 drones through the town during the battle of Razish, National Training Center on May 8th, 2019. This exercise was the first of many held at the National Training Center. (U.S. Army Photo by Pvt. James Newsome)
Read more from Sandboxx News
- 3 myths about Nazi technology the internet won’t let die
- DARPA’s new missile hints at truly game-changing technology
- Navy launches new drone from one of its strangest ships
- How Val Kilmer used artificial intelligence to speak again in ‘Top Gun: Maverick’
- Marine Corps is ready to bring the sting to enemy aircraft and drones