Nuke-launching AI would be illegal under proposed US law
On Wednesday, US Senator Edward Markey (D-Mass.) and Representatives Ted Lieu (D-Calif.), Don Beyer (D-Va.), and Ken Buck (R-Colo.) announced bipartisan legislation that seeks to prevent an artificial intelligence system from making nuclear launch decisions. The Block Nuclear Launch by Autonomous Artificial Intelligence Act would prohibit the use of federal funds for launching any nuclear weapon by an automated system without “meaningful human control.”
“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” Markey said in a news release. “That is why I am proud to introduce the Block Nuclear Launch by Autonomous Artificial Intelligence Act. We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”
The new bill builds on existing US Department of Defense policy, which states that in all cases, “the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment.”
The new bill aims to codify the Defense Department principle into law, and it also follows the recommendation of the National Security Commission on Artificial Intelligence, which called for the US to affirm its policy that only human beings can authorize the employment of nuclear weapons.
“While US military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” Buck said in a statement. “I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions.”
The new bill comes as anxiety grows over the future potential of rapidly advancing (and sometimes poorly understood and overhyped) generative AI technology, which prompted a group of researchers to call for a pause in the development of systems “more powerful” than GPT-4 in March.
While no one fears that GPT-4 will launch a nuclear strike, some of the AI researchers who evaluate the capabilities of today’s most popular large language models for OpenAI worry that more advanced future AI systems may pose a threat to human civilization. Some of that fear has spread to the broader public, even though worries about existential threats from AI remain controversial within the machine learning community.
Hot topics in technology aside, the new bill is also part of a larger plan from Markey and Lieu to avoid nuclear escalation. The pair recently reintroduced a bill that would prohibit any US president from launching a nuclear strike without prior authorization from Congress. The overall goal, according to the congressmen, is to reduce the risk of “nuclear Armageddon” and discourage nuclear proliferation.
Cosponsors of the Block Nuclear Launch by Autonomous Artificial Intelligence Act in the Senate include Bernie Sanders (I-Vt.) and Elizabeth Warren (D-Mass.).