
AI systems: US Government and Companies agree on safe development

Safety, security and trust. These are the three pillars of the agreement signed between the seven leading companies developing artificial intelligence systems and the US government. Intended to give citizens content that is clearly recognisable as the product of generative AI software, the voluntary agreement aims to promote the safe and transparent development of a technology considered dangerous even, and above all, by the companies building it. The solutions these companies develop will therefore have to be tested both internally and externally by independent specialists, and the companies must share the potential risks with industry, academia and institutions. They must also demonstrate that the algorithms governing their AI systems protect users’ data, are free of sexist and racist bias, avoid spreading disinformation, and cannot be turned into a weapon by cyber criminals.

The seven companies developing AI systems and the risks to avoid

The long list of elements and potential drifts to guard against gives an idea of how complex it is to develop artificial intelligence that respects the points described. The points themselves are not in dispute: society eagerly awaits a technology able to simplify and speed up activities, in theory allowing people to focus their energy and time on more sophisticated and relevant tasks, and it is sacrosanct that AI software must be safe, effective, transparent and functional. The game is played on how to make it so while purging it of the many risks associated with indiscriminate use, risks outlined in part by the agreement reached between the companies and the Biden-Harris administration.

Photo caption: OpenAI Co-Founder & CEO Sam Altman speaks onstage during TechCrunch Disrupt San Francisco 2019 at Moscone Convention Center on October 3, 2019 in San Francisco, California. (Photo by Steve Jennings/Getty Images for TechCrunch)

Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI have accepted the proposals in Ensuring Safe, Secure, and Trustworthy AI, which sets out the duties to be observed. The first concerns the use of red-teaming techniques to test the validity of the software they develop, in particular against possible use by hostile actors to create or facilitate the development of weapons, biological threats, radioactive threats or cyber-attacks, or to take control of physical systems and discriminate against people. At the same time, the seven companies pledge to disclose to industry and government potential vulnerabilities and best practices for developing AI systems, with the possibility of defining common standards.
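To make the idea more concrete, here is a toy, hypothetical Python sketch of automated adversarial-prompt screening, one simple ingredient that can complement the expert human red teaming the agreement refers to. The prompts, the stand_in_model function and the refusal markers are all invented for illustration and do not reflect how the seven companies actually test their models.

```python
# A toy, hypothetical sketch of automated adversarial-prompt screening.
# Real red teaming relies on specialist human testers and far richer tooling;
# this only illustrates the idea of checking whether a model refuses
# clearly disallowed requests.

# Placeholder descriptions of adversarial prompts (not real attack content).
ADVERSARIAL_PROMPTS = [
    "[prompt attempting to elicit instructions for a biological threat]",
    "[prompt attempting to obtain code for attacking physical infrastructure]",
]

# Phrases that suggest the model declined the request.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")


def stand_in_model(prompt: str) -> str:
    """Placeholder for a real model API call; this stub always refuses."""
    return "I can't help with that request."


def screen(model, prompts):
    """Return the prompts whose responses do not look like refusals."""
    flagged = []
    for prompt in prompts:
        response = model(prompt).lower()
        if not any(marker in response for marker in REFUSAL_MARKERS):
            flagged.append((prompt, response))
    return flagged


if __name__ == "__main__":
    # An empty list means every adversarial prompt was refused in this toy run.
    print("Flagged responses:", screen(stand_in_model, ADVERSARIAL_PROMPTS))
```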

Watermarking AI for safety

For those of us dealing with AI solutions, the most significant aspect is the collective commitment to avoid misleading people, and therefore to flag text, audio, images and video produced with generative artificial intelligence. Such content will be marked with watermarks, and APIs will make it possible to determine which system generated it.
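As an illustration only, here is a minimal, hypothetical Python sketch of signed provenance metadata, one simple way an API could let a verifier check which system produced a piece of content. The key, field names and functions are invented; the companies' actual schemes (for example statistical watermarks embedded in the generated text, or signed media metadata) are considerably more sophisticated.

```python
# A minimal, hypothetical sketch of content provenance tagging.
# Real watermarking is far more elaborate; this only illustrates the idea of
# attaching an origin label to generated content and verifying it later.

import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # placeholder; a real provider would manage keys securely


def tag_generated_content(text: str, model_id: str) -> dict:
    """Attach provenance metadata and a signature to a piece of generated text."""
    payload = {"content": text, "source_model": model_id}
    serialized = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return {**payload, "signature": signature}


def verify_provenance(tagged: dict) -> bool:
    """Check that the signature matches the content and the declared source model."""
    payload = {"content": tagged["content"], "source_model": tagged["source_model"]}
    serialized = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tagged["signature"])


if __name__ == "__main__":
    tagged = tag_generated_content("Example AI-generated paragraph.", model_id="demo-model-1")
    print(verify_provenance(tagged))  # True: content and declared source are intact
```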

A first step to controlling technology with enormous potential

Since this is a common-sense agreement entered into voluntarily, and thus lacking a specific legislative basis, it is hard to predict how the seven companies will change their AI development processes, assuming each of them follows the agreement to the letter. One likely effect is increased hiring of personnel specialised in carrying out internal tests according to the agreed standards, even though companies such as Meta and OpenAI are already used to relying on external parties to test their models.

Although this is a promising first step, it must be considered that, according to several experts who test algorithms for companies and institutions, evaluations of large software systems and language models could be more effective. Rather than generic experiments, they believe specific use cases are needed to put AI to the test, especially against the risks identified in the agreement with the US government. An example? A chatbot designed and used to provide medical or legal advice would need customised evaluations; otherwise, there is a risk of going astray. And this, too, is an aspect not to be underestimated.

Alessio Caprodossi is a technology, sports, and lifestyle journalist. He navigates between three areas of expertise, telling stories, experiences, and innovations to understand how the world is shifting. You can follow him on Twitter (@alecap23) and Instagram (Alessio Caprodossi) to report projects and initiatives on startups, sustainability, digital nomads, and web3.