Autonomous weapons: What are they, and why do they matter?

In the last few years, a new development has emerged in the field of weapons technology.

Some of the most advanced national military programs are beginning to integrate artificial intelligence (AI) into their weapons, essentially making them ‘smart’. This means these weapons will soon be making critical decisions by themselves – perhaps even deciding who lives and who dies.

If you’re safe at home, far from the front lines, you may think this does not concern you – but it should.

What are lethal autonomous weapons?

Slaughterbots, also called “lethal autonomous weapons systems” or “killer robots”, are weapons systems that use artificial intelligence to identify, select, and kill human targets without human intervention.

Whereas with unmanned military drones the decision to take a life is made remotely by a human operator, with lethal autonomous weapons that decision is made by algorithms alone.

Slaughterbots are pre-programmed to kill anyone matching a specific “target profile.” Once deployed, the weapon’s AI searches its environment for that profile using sensor data and techniques such as facial recognition.

When the weapon encounters someone the algorithm perceives to match its target profile, it fires and kills.

What’s the problem?

Weapons that use algorithms, rather than human judgement, to kill are immoral and a grave threat to national and global security.

  1. Immoral: Algorithms are incapable of comprehending the value of human life, and so should never be empowered to decide who lives and who dies. Indeed, United Nations Secretary-General António Guterres agrees that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”
  2. Threat to Security: Algorithmic decision-making allows weapons to follow the trajectory of software: faster, cheaper, and at greater scale. This will be highly destabilising at both the national and international level, because it introduces the threats of proliferation, rapid escalation, unpredictability, and even weapons of mass destruction.

How soon will they be developed?

Terms like “slaughterbots” and “killer robots” remind people of science fiction movies like The Terminator, which features a self-aware, human-like robot assassin. This fuels the assumption that lethal autonomous weapons belong to the far future.

But that is incorrect.

In reality, weapons that can autonomously select, target, and kill humans are already here.

A 2021 report by the U.N. Panel of Experts on Libya documented the use of a lethal autonomous weapons system to hunt down retreating forces. Since then, there have been numerous reports of drone swarms and other autonomous weapons systems being used on battlefields around the world.

The accelerating pace of these use cases is a clear warning that the time to act is quickly running out.

  • March 2021: First documented use of a lethal autonomous weapon
  • June 2021: First documented use of a drone swarm in combat
  • ...

We must not wait to find out what will happen next!
