Legal x-rays

By Aineias Spiliotis, LL.B.

On June 14, the European Parliament’s plenary gave the green light to the AI Act, opening the trilogue process, in which the European Parliament negotiates with the Council of the EU, with the European Commission facilitating. This process aims to produce a legislative text acceptable to both co-legislators, the Parliament and the Council. The provisional agreement must then be formally adopted by each of these institutions under its own procedures.

More specifically, the AI Act prohibits certain applications, such as manipulative techniques and social scoring, that are considered to pose an unacceptable risk to citizens. The list of prohibited practices was significantly expanded at the insistence of MEPs. The ban now extends to artificial intelligence systems for biometric categorisation, predictive policing, and the untargeted scraping of facial images from sources such as the internet for the purpose of creating facial recognition databases. Moreover, the use of, inter alia, AI systems that detect emotions is to be prohibited in law enforcement, border management, the workplace and education.

In addition to the definition of ‘AI system’ (deliberately kept technology-neutral so as to cover techniques not yet known or developed), the legislators distinguish between ‘foundation models’ and ‘general purpose AI systems’ (GPAI), a distinction adopted in the most recent versions in order to introduce a stricter regime for the former. Article 3(1) of the draft law defines an ‘artificial intelligence system’ as software developed using specific techniques and approaches that can, for a given set of human-defined objectives, produce outputs such as content, predictions, recommendations, or decisions that influence the environments with which it interacts. A ‘general purpose AI system’ means an AI system that can be used in, and adapted to, a wide range of applications for which it was not intentionally and specifically designed.

In accordance with Article 53 of the AI Act, Member States are further required, either jointly or individually, to establish an AI regulatory sandbox at national level, to be operational no later than the date of entry into application of the AI Act. Such AI regulatory sandboxes shall, in accordance with the prescribed criteria, provide a controlled environment that promotes innovation and facilitates the development, testing and validation of innovative AI systems for a limited period of time before they are placed on the market or put into service.

In the health sector, the European Health Data Space will facilitate non-discriminatory access to health data and the training of AI algorithms on those data sets in a secure, transparent and trustworthy manner, while protecting privacy and ensuring appropriate institutional governance. The relevant competent authorities that provide or facilitate access to data can also help supply high-quality data for training and testing AI systems.

Moreover, the right to privacy and the protection of personal data should be guaranteed throughout the life cycle of an AI system. In this respect, the principles of data minimisation and data protection, as laid down in Union data protection legislation, are essential where the processing of data entails significant risks to the fundamental rights of individuals. Providers and users of AI systems should implement state-of-the-art technical and organisational measures to protect those rights. Such measures should include not only anonymisation and encryption, but also increasingly available technologies that allow algorithms to draw valuable conclusions from data without its transmission between parties or the unnecessary copying of raw or structured data.

The European Parliament’s text makes the obligations for providers of high-risk AI systems considerably stricter, particularly with regard to risk management, data governance, technical documentation and record keeping.

The EU AI Act is expected to be in place by the end of 2023.