
Biden announces new rules on AI

Joe Biden announced new rules governing the use of artificial intelligence that will prevent the Pentagon and the intelligence community from using the technology in ways that are not “in line with democratic values.” Biden will release the new guidelines in a national security memorandum that is scheduled to be signed Thursday. Officials said it is the first directive outlining how the U.S. national security apparatus should use artificial intelligence. It aims to set an example for other governments seeking to use and expand the technology responsibly.

Officials added that the new rules are designed to encourage the use of and experimentation with AI while ensuring that government agencies do not employ it for activities that could, for example, violate free speech rights or circumvent nuclear arms controls. “Our memorandum sets out the first government framework on engagement in AI risk management, how to refrain from use that deviates from our nation’s core values, avoid harmful bias and discrimination, maximise accountability, and ensure effective and appropriate human oversight,” U.S. National Security Adviser Jake Sullivan said in a speech Thursday morning.

A step towards election day

The guidelines are not legally binding, and should he win next month’s presidential election, Donald Trump may choose to refrain from enacting them. Vice President Kamala Harris has played a vital role in shaping the Biden administration’s efforts on AI and is expected to focus on emerging technologies if elected.

The directive is the latest effort by the Biden administration to encourage the use of AI as the U.S. seeks to compete with China while simultaneously addressing concerns about possible misuse of the technology. The rules focus on national security applications of AI, such as cybersecurity, counterintelligence, logistics, and other activities that support military operations. The United States has taken several measures to maintain a strategic technological advantage, including issuing export controls to slow China’s advanced AI development. Last year, Biden signed a broad executive order that required private companies whose AI models could threaten U.S. national security to share security information with the U.S. government.

Biden announces new rules on AI to prevent agencies from using the technology in ways that are not “responsible”

Just like autonomous weapons

While the Department of Defense and other agencies funded a significant portion of the work on AI in the 20th century, the private sector has driven much of the progress in the past decade. The lack of guidelines on how AI can be used is hampering development, as companies worry about which applications might be legal. The directive is the latest move by the Biden administration to address artificial intelligence, as congressional efforts to regulate the emerging technology have stalled.

Biden will convene a global security summit in San Francisco next month; last year, he signed an executive order to limit AI’s risks to consumers, workers, minority groups and national security. For example, the memorandum excludes AI systems from deciding when to launch nuclear weapons, leaving that decision to the president as commander in chief. This explicit statement is part of an effort to engage China in deeper discussions about the limits imposed on high-risk applications of artificial intelligence. However, the rules for non-nuclear weapons are less clear, and the memorandum encourages keeping human decision-makers in the loop, especially if Russia and China begin deploying fully autonomous weapons.

An important part of the order is the treatment of private-sector AI advances as national assets to be protected, like early nuclear weapons, from espionage or theft by foreign adversaries. The order calls on intelligence agencies to protect work on large language models and the chips used in their development as “national treasures.” It also describes efforts to attract top AI specialists worldwide to the United States to prevent them from working for rivals like Russia.

Antonino Caffo has worked in journalism, particularly technology journalism, for fifteen years. He covers IT security as well as consumer electronics, and writes for Italy’s most important generalist and trade publications. He occasionally appears on television explaining how technology works, which is not as straightforward for everyone as it may seem.