OpenAI: “The world is not ready.” Miles Brundage, the researcher in charge of artificial general intelligence (AGI) readiness at OpenAI, stepped down last Friday over concerns about the company’s and the public’s readiness to handle the unknown potential of AGI. According to Brundage, “neither OpenAI nor any other frontier lab is ready,” as he voiced worries about safety and security and about whether increasingly capable AI systems can be developed, deployed, and governed beneficially.
Brundage is not the only high-profile figure who has voiced concerns about AGI and AI. Elon Musk, a well-known critic of OpenAI (which he helped establish in its nonprofit phase), stated a decade ago that “AI is more dangerous than nuclear weapons” and that with “AI, we are summoning the demon.” Geordie Rose, founder of quantum computing company D-Wave and former CEO of robotics firm Kindred AI, echoed similar sentiments in 2017, stating, “The same way that you don’t care about an ant is the same way they (AGIs) are not going to care about you.”
Six years at OpenAI
Brundage worked at OpenAI for over six years, most recently serving as head of policy research and senior advisor for AGI readiness. In his resignation letter, he acknowledged the challenging landscape he faced during those years. “The opportunity costs have become very high,” he wrote, underscoring that his ability to address critical research topics had been constrained within the organisation: “It’s hard for me to publish on all the topics that are important to me,” he added.
After resigning, Brundage plans to redirect his efforts towards AI policy research and advocacy, potentially through a nonprofit organisation. Earlier this year, OpenAI’s chief scientist and co-founder Ilya Sutskever left the company to start Safe Superintelligence (SSI), a startup focused on developing safe AI.
Although Brundage is leaving OpenAI, he expressed optimism about the organisation’s ongoing efforts to ramp up investment in safety culture and processes. However, he remains concerned about the safety gaps that persist, stating that neither OpenAI nor the world is prepared for the arrival of AGI.
He pointed to the need for robust safety protocols and regulatory frameworks to ensure that advancements in AI are managed responsibly. “I’m particularly interested in opportunities for action under existing legal authorities, as well as shaping the implementation of already-approved legislation such as the EU AI Act”, Brundage explained.
AI governance is in its infancy
The landscape of AI governance is still in its infancy and “lags” behind developments in the industry “by a fair amount”, according to Brundage. Closing that gap, he indicated, will require a concerted effort across various sectors. “I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so,” he said, stressing the importance of collaboration between governments, non-profits, and industry leaders to mitigate the risks posed by advanced AI technologies.
“I think AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, non-profits, civil society, and industry, and this needs to be informed by robust public discussion,” Brundage noted.