By Evangelia Manika, Associate, LLB, MSc.
On 8 December 2023, Parliament and Council negotiators reached a provisional agreement on the key terms and provisions of the AI Act, the world’s first horizontal legislation on artificial intelligence. The main objective of the AI Act is to ensure that only AI systems that are both safe and respectful of the EU’s fundamental rights and values are placed on the EU market. This is particularly significant for the healthcare sector, where trust and transparency are essential to the creation and development of AI-based applications. Such safeguards can help improve patient outcomes, reduce healthcare costs, and support patient-centric medical technologies, leading to more personalised and effective solutions for patients.
In this political deal, the co-legislators agreed on some key provisions, including safeguards for general purpose artificial intelligence, limits on the use of biometric identification systems by law enforcement, bans on biometric categorisation systems that use sensitive characteristics, the right of consumers to lodge complaints and receive explanations about decisions based on high-risk AI systems that affect their rights, and fines of up to 7% of a company’s total annual global turnover or EUR 35 million.
In addition, the final wording of the AI Act will include the amended definition of AI systems, the key principles applicable to AI systems, provisions for banned AI systems (those posing an unacceptable risk) and high-risk systems (e.g. AI-based medical devices), copyright-related requirements, and more.
The political agreement will be put into a final text over the coming months, possibly in the first quarter of 2024. A number of technical negotiation meetings have also been scheduled until the end of February to finalise certain technical aspects of the AI Act text. As the AI Act is an EU regulation, it will be directly applicable in Member States without the need for local implementing legislation. Most of the AI Act’s catalogue of obligations will apply 24 months after the AI Act enters into force. However, the current draft provides for certain obligations to apply earlier: for instance, obligations for general purpose AI systems may apply 12 months after the AI Act comes into force, while the prohibitions will apply six months after entry into force.
It is therefore highly advisable for companies to review the obligations of the AI Act as soon as possible, assess whether their products are subject to AI compliance obligations and, if so, prepare for compliance, as the AI Act imposes extensive documentation requirements.
To this end, it is important for a company to understand its position in the AI supply chain, since compliance obligations differ for each operator, and to take into account the obligations arising from other legislation (e.g. the MDR and the GDPR). This is particularly important where AI products are classified as ‘medical devices’ under the MDR and must therefore meet certain safety and quality requirements. The MDR, however, imposes no additional requirements regarding the transparency or explainability of medical devices using AI. For high-risk systems in particular, the AI Act introduces new provisions on transparency, such as registration in an EU database. The combined implementation of all applicable legislation will therefore be needed to achieve full compliance for companies operating in the healthcare sector with AI-based solutions.