Main risks related to the use of Artificial Intelligence: fundamental rights, including personal data and privacy protection and non-discrimination

Artificial Intelligence (AI) can perform many functions that previously could only be done by humans. As a result, citizens and legal entities will increasingly be subject to actions and decisions taken by or with the assistance of AI systems, which may sometimes be difficult to understand and to challenge effectively where necessary. Moreover, AI increases the possibilities to track and analyze people's daily habits. For example, there is a potential risk that AI may be used, in breach of EU data protection and other rules, by state authorities or other entities for mass surveillance, and by employers to observe how their employees behave. By analyzing large amounts of data and identifying links among them, AI may also be used to retrace and de-anonymize data about persons, creating new personal data protection risks even with respect to datasets that per se do not include personal data. AI is also used by online intermediaries to prioritize information for their users and to perform content moderation. The data processed, the way applications are designed and the scope for human intervention can affect the rights to free expression, personal data protection, privacy, and political freedoms.

Certain AI programs for facial analysis display gender and racial bias, demonstrating low error rates in determining the gender of lighter-skinned men but high error rates in determining the gender of darker-skinned women (Source: Joy Buolamwini, Timnit Gebru; Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81:77-91, 2018).

Bias and discrimination are inherent risks of any societal and economic activity. Human decision-making is not immune to mistakes and biases.
However, the same bias, when present in AI, could have a much larger effect, affecting and discriminating against many people without the social control mechanisms that govern human behavior.

Facial Recognition

Speaking to a group of reporters on February 13th 2020, Commission Vice-President for Digital Margrethe Vestager mentioned that “as it stands right now, GDPR would say ‘don’t use it’, because you cannot get consent. The EU’s flagship data protection regime renders automatic identification through facial recognition technology illegal”. More specifically, Article 6 of the EU’s General Data Protection Regulation sets out the conditions under which personal data may lawfully be processed, one of them being the explicit consent of the data subject to the processing of their personal data. There are, however, exemptions to this rule with regard to public security, in which cases facial recognition technologies may lawfully be used to identify persons automatically. It should be noted that facial recognition is already present in our smartphones and at airport passport controls. But its remote use, including by public authorities, is increasingly becoming a controversial issue in the rollout of artificial intelligence, as occurred during the protests in Hong Kong.

Our law firm’s comment

It is crucial to fully seize the opportunities that AI has to offer while at the same time creating a future regulatory framework that protects our fundamental values, such as human dignity and privacy, and finding solutions to the most pressing societal challenges, including climate change and environmental degradation. In the throes of such significant changes one thing is certain: new legal challenges are arising and should be faced and addressed accordingly by setting updated and new legal and regulatory rules.