Thousands of researchers working in artificial intelligence have signed a pledge not to develop lethal autonomous weapons. The pledge was announced at the International Joint Conference on AI in Stockholm, Sweden. The pledge states that signatories will “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”
The pledge was signed by 2,400 individuals representing more than 160 organizations from 90 countries. The list includes some of the most influential names in the tech industry, science, and academia. Among them are OpenAI co-founder Elon Musk, Skype co-founder Jaan Tallinn, and artificial intelligence researcher Stuart Russell. The three co-founders of Google DeepMind have also signed, as have organizations including the European Association for AI and University College London.
The Future of Life Institute, a Boston-based charity that seeks to reduce risks posed by AI, organized the pledge. Max Tegmark, president of the Future of Life Institute, said in a statement, “AI has huge potential to help the world — if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons and should be dealt with in the same way.”
Lethal autonomous weapons systems can identify, target, and kill without human input. According to Human Rights Watch, such systems are currently being developed in the United States, China, Israel, South Korea, Russia, and the United Kingdom, as well as several other nations. Conventional drones are not considered autonomous weapons systems, because they rely on human pilots and decision-makers to operate.
Toby Walsh, a Scientia professor of artificial intelligence at the University of New South Wales in Sydney, helped organize the pledge. He says, “We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so.”
The pledge reads in part: “Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”
“Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage.”