ChatGPT: The man who taught the machines to rewrite the world was, according to the legend tailored for him by his dermatologist mother, Connie Gibstine, and not by artificial intelligence, a child prodigy. By the age of eight, growing up in St. Louis, he had already taken apart and flawlessly reassembled his first Mac computer, and he began writing programs as early in life as the young Mozart composed symphonies.
Generative artificial intelligence increasingly resembles a beautiful road full of pitfalls: an attractive picture, but a worrying one too. And worried, too, was Sam Altman, CEO of OpenAI (the company behind ChatGPT), who confessed before a US Senate subcommittee that his greatest concern is artificial intelligence's potential to persuade and misinform voters ahead of next year's US elections.
Altman, currently the most high-profile CEO in the US technology industry, told the senators he was ready to help regulators as they grapple with drafting rules that would give companies flexibility and give consumers broad, safe access to generative artificial intelligence. "I believe there is a need for new rules, for guidelines. We can and must work together to identify and manage potential risks so that everyone can enjoy the enormous benefits the new technology offers." Altman said he was convinced that AI should be "developed on democratic values" because "it is not social media and needs a different response".
"My biggest fear is that it will cause significant damage," he pointed out, noting that the technology is "still in its early stages and can still make mistakes". The OpenAI co-founder cited governments' regulation of nuclear weapons as a precedent: he believes the US should "lead the way in regulating" AI, but added that it would be necessary to rely on an international body modeled on the International Atomic Energy Agency, which oversees nuclear technology. "There is a possibility," Altman added, "that the US could set standards that other countries would conform to, but it seems an impractical idea."
Like a nuclear weapon
Hence the OpenAI CEO's call for possible licensing of the development of artificial intelligence models, without stifling the growth of small start-ups in the sector. "The regulatory pressure should be on us, on Google" and, in general, on the sector's giants. Altman was very clear: "If this technology goes wrong, it can go very wrong. And we want to be heard on this. We want to work with the government to prevent that from happening." Altman's hearing, his first as CEO of OpenAI, coincided with the publication of a report by Microsoft (OpenAI's biggest investor, with a commitment of more than $10 billion) according to which artificial intelligence is, in some cases, capable of understanding things like a human being. Titled "Sparks of Artificial General Intelligence", the report fuels a heated debate over how human-like artificial intelligence can be.
Also appearing before the US Congress in the coming days will be Christina Montgomery, IBM's chief privacy officer, and Gary Marcus, professor emeritus at New York University, who was part of a group of artificial intelligence experts that called on OpenAI and other technology companies to suspend development of more powerful AI models for six months, to allow more time to weigh the risks.
The letter responded to the March launch of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT. "Artificial intelligence will be transformative in ways we can't even imagine, with implications for elections, jobs, and the security of Americans," said Republican Senator Josh Hawley of Missouri, a prominent member of the panel. "This hearing marks a critical first step in understanding what Congress should do." Altman and other tech industry leaders said they would welcome some form of oversight of AI but warned against what they see as overly burdensome rules. In a copy of her prepared remarks, IBM's Montgomery calls on Congress to adopt a "precision regulation" approach. "This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself," Montgomery said.