FLI August 2020 Newsletter

Lethal Autonomous Weapons Systems, Nuclear Testing & More

Described as the third revolution in warfare after gunpowder and nuclear weapons, lethal autonomous weapons are systems that can identify, select, and engage a target without meaningful human control. Many semi-autonomous weapons in use today rely on autonomy for certain parts of their system but maintain a communication link to a human who approves or makes decisions. In contrast, a fully autonomous system could be deployed without any established communication network; it would independently respond to a changing environment and decide how to achieve its pre-programmed goals. The ethical, political, and legal debate underway centers on autonomy in the use of force and the decision to take a human life.

Lethal AWS may create a paradigm shift in how we wage war. They would allow highly lethal systems to be deployed on the battlefield that cannot be controlled or recalled once launched. Unlike any weapon seen before, they could also allow for the selective targeting of a particular group based on parameters like age, gender, ethnicity, or political leaning (if such information were available). Because lethal AWS would greatly reduce personnel costs and could be cheap to obtain (as in the case of small drones), small groups of people could potentially inflict disproportionate harm, making lethal AWS a new class of weapon of mass destruction.

There is an important conversation underway about how to shape the development of this technology and where to draw the line in the use of lethal autonomy. Check out FLI’s new lethal autonomous weapons systems page for an overview of the issue and additional resources.

Nuclear Testing

Video: Will More Nuclear Explosions Make Us Safer?

On August 6th and 9th, 1945, the United States dropped nuclear bombs on the Japanese cities of Hiroshima and Nagasaki. To this day, these remain the only uses of nuclear weapons in armed conflict. As we mark the 75th anniversary of the bombings this month, scientists are speaking up against the US administration’s interest in restarting nuclear testing. Watch here.

Open Letter: Uphold the Nuclear Weapons Test Moratorium

Scientists have come together to speak out against breaking the nuclear test moratorium in an open letter published in Science magazine. Read here.

AI Ethics

Podcast: Peter Railton on Moral Learning and Metaethics in AI Systems

From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics. Listen here.
