
The AI Act: the new EU regulatory legislation for AI applications and use

With AI applications becoming increasingly intertwined with our daily lives, a comprehensive regulatory framework has become necessary. In a landmark move, the European Union (EU) has put forth pioneering legislation known as the AI Act, aiming to regulate the rapidly advancing field of artificial intelligence (AI). The AI Act, the first of its kind by a major regulatory body, seeks to strike a delicate balance between harnessing the transformative potential of AI and protecting users’ fundamental rights. Similar to the far-reaching influence of the EU’s General Data Protection Regulation (GDPR), the AI Act holds the potential to become a global benchmark, shaping the regulation of AI across different jurisdictions.

What does the EU AI Act mean?

As mentioned above, the EU AI Act is a proposed legislation introduced by the European Union concerning artificial intelligence (AI). It holds the distinction of being the first comprehensive AI law put forth by a significant regulatory body. Under this law, AI applications are classified into three categories based on their level of risk. Firstly, applications and systems that pose an unacceptable risk, such as government-run social scoring of the kind employed in China, are prohibited. Secondly, high-risk applications, such as a CV-scanning tool for ranking job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or categorized as high-risk remain largely unregulated.

The Act’s emphasis on flexibility allows for regulatory adjustments that keep pace with emerging technologies, novel use cases, and unforeseen advancements in AI, thereby avoiding an approach that could stifle future innovation. By maintaining an adaptable framework, the legislation encourages ongoing exploration and development in AI while ensuring responsible and ethical practices.

[Image] MEP Brando Benifei during the parliamentary committee vote on the AI Act. [European Parliament]

The Act’s flexible approach also accounts for the dynamic nature of AI risks and benefits. As our understanding of AI continues to evolve, so does our ability to assess and manage its potential risks. By avoiding a one-size-fits-all approach, the legislation aims to provide a regulatory environment that can effectively respond to the nuanced challenges posed by different AI applications, striking an appropriate balance between regulatory oversight and fostering innovation.

Why is this Act a must in today’s world?

AI has become pervasive and influential in various aspects of our lives, including information dissemination, data analysis, law enforcement, healthcare, and more. Its widespread application necessitates a regulatory framework to ensure responsible and ethical use while protecting fundamental rights. Moreover, the rapid and constant evolution of AI technologies requires a proactive approach to address potential risks and challenges. An AI Act can provide guidelines and safeguards that keep pace with technological advancements and changing risk landscapes. By establishing clear rules and accountability measures, the Act aims to strike a balance between fostering innovation and mitigating potential harms. Lastly, a comprehensive AI Act can serve as a global standard, setting the benchmark for AI regulation across different jurisdictions. The EU’s AI regulation has already garnered international attention. In late September 2021, Brazil’s Congress passed a bill establishing a legal framework for artificial intelligence, which awaits the country’s Senate approval.

But the AI Act has sparked some concerns among European startups due to its potential impact. According to a survey conducted by applied AI, a significant number of respondents (33-50%) expressed concerns about their technology falling under the high-risk classification outlined in the proposed legislation. Compared to the initial impact assessment’s estimation of 5-15%, this marked increase in perceived risk has raised alarms within the startup community. Meeting the rigorous conformity assessment requirements for high-risk AI could pose significant challenges for startups and small- to medium-sized enterprises, particularly those with limited resources. However, regulatory sandboxes, if accessible to smaller organizations, may offer a viable solution to mitigate these challenges and strike a balance between innovation and compliance.

That is why, to ensure effective regulation that safeguards fundamental rights and encourages innovation, AI rules must remain adaptable to new advancements, evolving risk classifications, and the vast array of applications. Rigid categorization based solely on perceived risk levels diverts attention from AI’s actual risks and benefits, and runs the risk of becoming quickly outdated or suppressing future innovation.

Kristi Shehu is a Cyber Security Engineer (Application Security) and Cyber Journalist based in Albania. She lives and breathes technology, specializing in crafting content on cyber news and the latest security trends, all through the eyes of a cyber professional. Kristi is passionate about sharing her thoughts and opinions on the exciting world of cyber security, from breakthrough emerging technologies to dynamic startups across the globe.