05.02.2024

Artificial intelligence from a data protection perspective

Security

For the 7th Data Privacy Day, held on the occasion of the European Data Protection Day, the Restena Foundation and the University of Luxembourg raised awareness of data protection in the context of artificial intelligence among more than 260 Luxembourg academic and economic players.

On 29 January 2024, the Restena Foundation and the University of Luxembourg co-organised the 7th edition of the Data Privacy Day dedicated to data protection. Once again, the event was held in hybrid mode, bringing together people interested in the subject beyond Luxembourg's borders. More than 260 participants, a record number for the event since its creation in 2018, gathered around the central theme of the 2024 edition: artificial intelligence (AI).

After the opening speech by Malte Beyer-Katzenberger, Team Leader at the European Commission, speakers from the National Commission for Data Protection (Commission nationale pour la protection des données - CNPD), BEE SECURE, the Luxembourg Institute of Science and Technology (LIST), Maastricht University, the University of Luxembourg and the law firm /c law followed one another on the Data Privacy Day 2024 stage.

Artificial intelligence, a source of challenges and opportunities

Artificial intelligence is all around us. It relies on a wide range of technologies: recommendation systems, image or voice recognition, text generation, self-driving cars, etc. These technologies and the tools that implement them create challenges for protecting privacy and the people whose personal data is processed. Risks include data leaks to third parties, reputational damage, harassment, blackmail, faked evidence, surveillance, and identification, to name just a few.

When artificial intelligence processes personal data, regulatory compliance is essential. The European Union already has several data initiatives, including the Data Governance Act, the Data Act and the General Data Protection Regulation (GDPR), along with the guidance provided by the supervisory authorities to ensure its application.

The GDPR, which has defined the main framework for personal data processing within the European Union since May 2018, was discussed during the conference. Like other legislation, the GDPR establishes rules to allow the free movement of personal data and to protect the rights and freedoms of natural persons. It is considered the key legal text wherever personal data is concerned.

Further regulation expected

Article 22 GDPR, 'Automated individual decision-making, including profiling', already safeguards individual rights in the domain of automated decision-making, including where artificial intelligence is involved. However, the AI Act, specifically designed to regulate the use of artificial intelligence in the European Union and expected to come into force by 2026, will provide specific regulation for AI based on a risk approach, as the GDPR does. Interacting with the GDPR, it will be a key step in the regulation of artificial intelligence. In particular, the AI Act will classify artificial intelligence systems based on their purpose, which will determine the level of compliance, requirements and obligations. Conversely, this new regulation will impact data protection experts and Data Protection Officers (DPOs). To what extent remains to be determined, as the role of DPOs is defined within the GDPR.

Generally speaking, and perhaps even more so in an academic and research context, any institution processing personal data with AI techniques must comply with data protection laws and document its processing. Even where the legislation lags behind technological development and is lacking in certain areas, compliance cannot be ignored. The utmost caution must therefore be taken when using AI technology in combination with personal data.