OpenAI’s recent announcement that it is forming a specialized team to address the risks of superintelligent artificial intelligence (AI) has raised eyebrows among sceptics. While OpenAI emphasizes both the potential benefits and the dangers of superintelligence, critics argue that such initiatives may divert attention from the immediate need for AI regulation and from the societal impact of existing AI technologies.
Mission and Leadership
Ilya Sutskever, OpenAI’s Chief Scientist, and Jan Leike, its head of alignment, will jointly lead the newly formed team. Their primary objective is to develop an automated alignment researcher that can help ensure superintelligence remains aligned with human values and poses no threat.
Despite the grand ambition, sceptics question whether such an effort is necessary and whether a solution can be achieved within the projected timeline.
As OpenAI stated, “While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem.”
An Optimistic Outlook
OpenAI acknowledges the challenges and uncertainties surrounding its ambitious goal but remains optimistic about what a concentrated effort can achieve. The company cites promising ideas from preliminary experiments and claims to have useful metrics for measuring progress. However, critics argue that relying solely on experimental ideas and empirical observations may not be enough to address the complex and far-reaching implications of superintelligence.
Regulatory Concerns
Governments worldwide are grappling with the challenge of AI regulation, prompting OpenAI’s CEO, Sam Altman, to engage with federal lawmakers. OpenAI publicly expresses eagerness to collaborate with policymakers. However, sceptics argue that initiatives like the Superalignment team may inadvertently delay much-needed regulatory action and oversight of AI technologies.
One critic highlights the potential danger of shifting the regulatory burden: “By focusing on hypothetical risks that may never materialize, organizations like OpenAI divert attention from the pressing issues surrounding the interplay between AI and labor, misinformation, and copyright. These are the concerns policymakers need to address today, not tomorrow.”
Shifting the Burden
Echoing this concern, critics caution against an excessive focus on hypothetical risks. By amplifying future threats, organizations like OpenAI may inadvertently delay regulatory action on the challenges AI already poses, such as labour displacement, misinformation, and copyright disputes. These immediate concerns, they argue, require urgent attention and decisive regulatory measures.
While OpenAI’s formation of a dedicated team to manage the risks of superintelligent AI demonstrates a commitment to aligning this powerful technology with human values, scepticism remains. Critics maintain that hypothetical risks should not overshadow the urgency of regulating AI and addressing its present societal impacts.
In response to the challenge posed by superintelligent AI, OpenAI is actively recruiting talented individuals to join the new team. This commitment to addressing the risks of superintelligence demonstrates a proactive approach, and the recruitment effort holds the potential to bring fresh perspectives and expertise to the table, fostering a collaborative environment for responsible AI development and regulation.