
It’s time for the AI Act, the European regulation that gets humans and machines talking to each other

A few years ago, European researchers and academics, drawing on ideas coming from the United States, began to envisage a shared document on artificial intelligence. Or rather: on how to create artificial intelligence applications and services that are ethical and do not conflict with what is acceptable in various scenarios. The Artificial Intelligence Act (AIA) defines various areas of intervention for AI systems: applications prohibited because they pose unacceptable risks to fundamental rights and liberties; high-risk applications (not prohibited, but subject to specific conditions to manage the risks); limited-risk applications; and other applications of minimal risk.

The list of prohibited AI systems includes manipulative online practices that cause physical or psychological harm to individuals or exploit their vulnerability on the basis of age or disability; social scoring that produces disproportionate or decontextualized harmful effects; and biometric identification systems used by law enforcement authorities in public spaces (when their use is not strictly necessary, or when the risk of harmful effects is too high).

For the first time, European regulators attempt to define a boundary that should not be crossed when AI-based services or products are deployed in society. Unlike prohibited AI systems, AI systems classified as ‘high risk’ are not banned by default but are subject to various compliance obligations. These include, among others, a risk management plan, compliance certification, a data management plan, and human oversight.

The list of high-risk AI systems in the AIA includes facial recognition; AI used in critical infrastructure; AI used in educational, employment or emergency contexts; AI used in asylum and border contexts; and AI used in social care, for credit scoring, by law enforcement or for judicial purposes. The EU Commission may update this list on the basis of the severity and likelihood of the impact of present and future AI systems on fundamental rights.

Finally, a third category of AI systems is considered to be of ‘limited risk’. This category includes morally questionable AI applications such as algorithms that produce deepfakes (highly realistic fake videos or photos), as well as emotion recognition and biometric categorization systems. The use of these systems does not entail any specific compliance duty, but only very vague transparency obligations: a simple notification to consumers/citizens that an AI system is operating in that context.

The level of risk of an AI system, however, derives not only from the type of technology used, but also from the combination of its domain of application and its human targets. This implies that if a low-risk AI system were used for practices that fall on the list of unacceptable risks, it would be banned.

For example, if AI were used to detect children’s emotions and manipulate them accordingly, it would fall under the prohibited practices. Similarly, if such low-risk AI applications were used in sensitive contexts that fall under the high-risk list, strict compliance obligations would apply. This would be the case, for example, when emotion recognition systems are used to profile suspected criminals, or to assess workers or students.

In short, despite the merits of the AIA, the current proposal seems to fail to sufficiently regulate, and thus classify as high risk, all those practices that use AI to reveal information about a person’s mind. In our view, this insufficiently strict regulation of AI applications for processing mental data opens the way to risk scenarios in which the only safeguard for individuals against the automatic processing of their mental information (e.g. emotions) would be a simple transparency obligation, such as a notification, without any possibility to opt out.

Several ethically questionable uses of AI could benefit from this loophole. For instance, human rights activists have recently revealed that the Uyghur population, a Turkic-speaking, predominantly Muslim ethnic group living in northwest China, has been forcibly subjected to experiments with automatic emotion recognition software for surveillance purposes.

Furthermore, methodologically questionable scientific studies have claimed to be able to infer sensitive characteristics related to individuals’ mental domain, such as their sexual orientation, intelligence or criminal inclinations, from AI facial recognition systems alone. These practices are not currently considered high-risk per se. Neither are other mind-mining practices such as the analysis and manipulation of emotions on social media, or practices aimed at covertly influencing users’ behavior through micro-targeted advertising, except in the rare cases where this produces physical or psychological harm or exploits vulnerabilities related to age or disability.

To date, the only safeguard proposed in the AI Act against such practices is an obligation to notify data subjects (note, however, as also observed by the EDPB, that the proposal does not define who the ‘data subjects’ are). However, we know that simple transparency notices may be misleading or ineffective for uninterested or inattentive users. Moreover, in this particular case, the AIA’s transparency obligations might prove too limited and generic. This becomes clear if we compare them with transparency duties in other fields of EU law.

For example, under the GDPR, when automated decision-making is used, data subjects should receive not just a simple notification but meaningful information about the logic involved, as well as the significance and the envisaged consequences of the algorithmic processing. From this point of view, the AI Act seems to introduce a principle that is inconsistent with, or even contradictory to, the GDPR. Moreover, automated emotion analysis is already treated as a ‘high risk’ practice in the GDPR’s regulatory landscape, as the EDPB has listed indicators of high risk that include innovative technologies, sensitive information, and vulnerable subjects.

However, the AI Act can still be improved. Following the European Commission’s public consultation period, the document now integrates 304 further contributions.

Antonino Caffo has worked in journalism, particularly technology journalism, for fifteen years. He is interested in IT security as well as consumer electronics. Antonino writes for the most important Italian general-interest and trade publications. You can sometimes see him on television explaining how technology works, which is not as straightforward for everyone as it might seem.