CNIL (finally) launches a service dedicated to AI: everything you need to know

The CNIL has announced its intention to create a department dedicated to AI in order to strengthen its expertise in personal data protection. This department will be responsible for issues relating to the privacy risks associated with the use of AI.

Artificial intelligence is a fast-growing field of technology with an ever-increasing number of applications. As research and development in this field continues, it is important that measures are taken to protect the privacy of users.

CNIL: what are the objectives of this service dedicated to AI?

Artificial intelligence has become an essential tool for businesses and individuals alike. However, it also raises concerns about the protection of personal data. The French Data Protection Authority (CNIL) has undertaken to protect this sensitive information by launching a specialized service.

There are many issues at stake in AI technology today, and this dedicated service is intended to help the CNIL better apprehend them. It will focus specifically on understanding and preventing the privacy risks associated with this technology.

In addition to fostering links between the various players in the ecosystem, the CNIL's AI department will also be responsible for preparing for the implementation of the forthcoming European regulation on artificial intelligence.

CNIL: about the European regulation on artificial intelligence

A draft regulation on artificial intelligence was unveiled by the European Commission in April 2021. It is designed to introduce, for the first time ever, binding rules for the use of AI. It is expected to enter into force in 2024.

This law will guarantee the safety and transparency of AI, as well as protect the rights and freedoms of individuals. It will cover AI systems used in applications such as facial recognition, autonomous driving, healthcare, justice and financial services. It also includes accountability, transparency and quality assurance requirements for AI systems, along with a monitoring and risk assessment mechanism.

How can the use of AI affect privacy and personal data?

AI is a very powerful technology that can be used to solve complex problems. However, there are a number of concerns about how it can affect users’ privacy and personal data.

The use of AI-based algorithms means that organizations have access to vast amounts of personal and sensitive data. This information is often stored in centralized databases, making it more vulnerable to cyberattacks and other forms of malicious exploitation. The risks of this massive collection include theft, unauthorized disclosure, and sharing without the consent of the owner of this personal information.