Support our independent media ❤️

To continue investigating, uncovering (and creating!) new solutions, advocating with the national/European parliament, and participating in making the digital sphere more responsible... we need 20 000 €.

Support solution-oriented journalism and become an actor in responsible digital practices.

We're counting on you!

Who will make AI yield? The European Union's first attempt

On June 14, 2023, the European Parliament adopted its position on the AI Act, a proposed regulation on Artificial Intelligence. Evaluation of the danger posed by systems, risk management, transparency towards users: a closer look at a regulation emerging in the context of the rapid growth of the AI market.

Artificial Intelligence (AI) is a field of computer science that focuses on the development of systems and technologies capable of performing tasks that would typically require human intelligence. To date, AI is omnipresent in several domains:

  • Customer service ("Chatbots" or conversation robots),
  • Social networks (recommendation algorithms),
  • Everyday life (voice assistants).
Example of a text exchange with the chatbot (conversational robot) of the SNCF (French National Railway Company)

While these artificial intelligence tools may seem indispensable to most of us, they also raise genuine concerns. This mistrust has grown with the emergence of generative AI systems, which, trained on large datasets, are capable of generating various types of content. Notable examples include ChatGPT, Midjourney, and Chatsonic.

Why Regulate the Development and Use of AI?

With the exponential development of artificial intelligence, abuses have been observed. Generative AI systems are, for example, used to create fake content: spreading fake news, generating false images and videos, generating music from a person's voice without their consent, etc.

Other criticisms have been leveled at these systems, including the lack of transparency in the collection of personal data and the absence of age-verification filters for users. It is for these reasons that ChatGPT was blocked in Italy on March 31, 2023, until it complied with these requirements.

It became urgent for authorities to address the unchecked use of artificial intelligence. This regulation, the first legal framework dedicated to Artificial Intelligence, reflects the European Union's intention to position itself early in this sector. Faced with the risks posed by this technology, such as spreading false information, violating copyright, or manipulating individuals, it became essential to take action.

Concretely, the innovations of this regulation are:

  • A risk-based approach, with the classification of AI systems based on their risk level for individuals.
  • Obligations scaled to the type of AI system and to the actor's role in the supply chain (provider, importer, distributor, or user).
  • Sanctions for non-compliance with the regulation.
  • Promotion of the development of responsible AI systems.

Let's explore these measures in more detail.

A Risk-Based Approach

This regulation on artificial intelligence brings a significant innovation: the classification of AI systems into four risk levels: no risk, low or limited risk, high risk, and unacceptable risk.

AI Systems with No Risk

Description: AI systems that do not pose risks to individuals or do not process personal data.
Measures: No specific measures, as there is no danger to individuals.
Examples: Simple connected objects, predictive maintenance systems, spam filters.

AI Systems with Low Risk or Limited Risk

Description: Systems subject to a transparency obligation due to the risk of manipulation of individuals.*
Measures:
  • Transparency: the individual must know that they are interacting with an AI.
  • The individual must be able to decide whether or not to continue interacting with the system.
Examples: Chatbots.

* This transparency allows users to be aware that they are interacting with an AI, promoting informed decision-making. It helps prevent users from being manipulated by false content that AI could make credible.

AI Systems with High Risk

Description: The main category targeted by this regulation: systems with a significant impact on health or safety, or presenting a risk to fundamental rights.
Measures:
  • Evaluation before market entry and throughout the life cycle.
  • Obligation to establish a risk management system.
  • Human oversight to prevent or reduce risks.
Examples:
  • AI systems used in products subject to EU product safety legislation.*
  • AI systems in 8 specific domains that must be registered in an EU database.**

* Products subject to EU legislation on product safety: toys, aviation, medical devices, etc.

** These specific domains are:

  • Biometric identification and categorization of individuals.
  • Management and operation of critical infrastructures.
  • Education and vocational training...
Generative AI

Generative AI systems like ChatGPT, Midjourney, or DALL-E must comply with a transparency obligation, indicating their sources so that AI-generated content can be distinguished as such.

AI Systems with Unacceptable Risk

Description: These systems are considered a threat to individuals.
Measures: Prohibition.
Examples:
  • Systems for the cognitive or behavioral manipulation of vulnerable individuals or groups.
  • Real-time and remote biometric identification systems.*
  • Classification of individuals based on their behavior (credit scoring).**

* Regarding real-time and remote biometric identification, there are exceptions for the preservation of the general interest in three specific areas:

  • Search for potential victims of criminal acts, including missing children.
  • Threats to the life of individuals, including terrorist attacks.
  • Detection and prosecution of perpetrators of criminal offenses covered by Framework Decision 2002/584/JHA (European Arrest Warrant).

** Credit scoring is a rating system used by banks and financial institutions to assess the credit risk of a potential borrower. It can be calculated from elements such as the borrower's current financial situation and behavior (overdrafts, repayment history, etc.).

This risk-based approach makes it possible to impose graduated obligations on actors operating in the European market.

Enhanced Obligations for Actors

This regulation distinguishes between the different actors in the artificial intelligence sector. Its obligations primarily concern actors involved with high-risk AI systems: providers, importers, distributors, and users.

The Provider

This refers to any entity or person who develops or has developed an AI system with the intention of placing it on the market or putting it into service, under its own name or brand, whether for a fee or for free. It also includes any entity or person who adapts general-purpose AI systems for a specific purpose.

The provider is required to:

  • Ensure the compliance of the high-risk AI system with regulatory requirements, from its design and throughout its lifecycle, and affix the CE marking to demonstrate this compliance.
  • Establish technical documentation.
  • Implement corrective measures (if necessary) and inform the relevant national authorities.
  • Provide evidence of the compliance of these systems upon request by the competent authorities.

The Importer

Any natural or legal person established in the EU who places on the market or puts into service an AI system bearing the name or brand of a natural or legal person established outside the Union.

The importer is, in this sense, required to ensure that the provider has fulfilled its obligations.

It is also obliged, at the request of the competent authorities, to provide evidence of the compliance of the systems.

The Distributor

Any natural or legal person in the supply chain, other than the provider or importer, who makes an AI system available on the Union market.

The distributor ensures that all previous obligations have been met before placing high-risk AI systems on the market, including CE marking. They take corrective actions if necessary and provide evidence of compliance at the request of the authorities.

The User

The user refers to any natural or legal person, public authority, agency, or other body using an AI system under its authority — except if the AI system is used for personal (non-professional) activities.

In turn, the user is required to use high-risk AI systems in accordance with the instructions for use. They must inform the provider or distributor and stop using these systems if additional risks to individuals arise.

For systems other than high-risk ones, the regulation encourages, for example, the creation of codes of conduct to promote its voluntary application.

Ensuring Compliance through Enforcement

To ensure that the various actors respect their obligations and the rights of individuals, the regulation provides for financial penalties:

  • Up to 30 million euros or 6% of the annual global turnover for prohibited practices.
  • Up to 10 million euros or 2% of the annual global turnover for refusal to comply with authorities.
  • Up to 20 million euros or 4% of the annual global turnover for other practices in violation of the regulation.
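These ceilings are generally read as "the higher of the fixed amount or the share of worldwide annual turnover," the same mechanism used by the GDPR. As a rough illustration only (the tier names and the function below are ours, not the regulation's), the applicable ceiling can be computed as:

```python
# Illustrative sketch, not part of the regulation: computing the fine ceiling
# per tier, assuming the "whichever is higher" rule as in the GDPR.
# Amounts are those cited above (the Parliament's 2023 position).

FINE_TIERS = {
    "prohibited_practices": (30_000_000, 0.06),  # €30M or 6% of turnover
    "non_cooperation":      (10_000_000, 0.02),  # €10M or 2%
    "other_violations":     (20_000_000, 0.04),  # €20M or 4%
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the ceiling: the higher of the fixed amount or the
    percentage of worldwide annual turnover."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * annual_turnover_eur)

# A company with €1 billion in turnover committing a prohibited practice:
print(max_fine("prohibited_practices", 1_000_000_000))  # 60000000.0 (6% > €30M)
```

For small companies the fixed amount dominates; for large ones the turnover percentage does, which is what makes the ceiling meaningful at any scale.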

The effective enforcement of this regulation will involve the designation of competent authorities at the state level. These national authorities will be responsible for

establishing and carrying out the necessary procedures for the assessment, designation, and notification of conformity assessment bodies and their control.

Official AI Legislation Text

Additionally, a European Committee on Artificial Intelligence will be created to facilitate cooperation between different national authorities and ensure consistent implementation of this regulation.

In France, the CNIL has been designated as the competent authority. It has, therefore, established an Artificial Intelligence Service (SIA). This service will be tasked with understanding the functioning of AI systems, preventing privacy-related risks, and preparing for the enforcement of this regulation.

It is undeniable that the introduction of this regulation will disrupt the growth of the artificial intelligence sector and likely provoke discontent among industry players. Thus, the regulation includes measures to support these actors.

Sanctions, Yes. But Encouragement Too.

In addition to sanctions, this regulation also aims to support innovation. Regulatory sandboxes will be established to provide

a controlled environment that facilitates the development, testing, and validation of innovative AI systems for a limited period before their market introduction or commissioning.

Official AI Legislation Text

This will involve the possibility of using legally collected data for development and testing purposes, under certain conditions.

The purpose of this initiative will be to make legally collected data available in the context of these regulatory sandboxes to:

  • Develop innovative AI systems used for the preservation of public interests (public safety, crime prevention).
  • Give small providers and users priority access to these regulatory sandboxes (and the data they contain), under eligibility conditions.

These measures will thus ensure ethical use of AI systems while encouraging innovation.

The implementation of this regulation necessarily involves considering data protection principles. As Marie-Laure Denis (President of the CNIL) emphasized on September 11 during her hearing in the Senate:

It is indeed essential to ensure a harmonious articulation of the AI regulation with the GDPR, especially since the European Parliament proposes to condition the obtaining of CE marking on compliance with Union law on data protection.

Marie-Laure Denis, President of the National Commission on Informatics and Liberties (CNIL)

In France, the CNIL will play a crucial role in this regard.

Negotiations are still ongoing among the member states on the final form this text will take. The aim is to reach an agreement by the end of the year.

[Cover Photo: Eric Mclean]

Nousseu Douon
I am a student in data governance. With my background in legal studies and my passion for digital technology, I am interested in regulating the digital sphere and promoting responsible digital practices.

