AI Regulation (AI Act): Methodological Approach and Case Studies
Article REF: SE2034 V1


Authors : Nicolas DUFOUR, Matthieu BARRIER

Publication date: January 10, 2026


ABSTRACT

This paper discusses the European regulation on artificial intelligence systems, the AI Act. This regulation addresses the principles of good management of an artificial intelligence system using a risk-based approach and provides related compliance and control procedures. The article provides a series of use cases and feedback to facilitate the understanding and integration of the AI Act within organizations.


AUTHORS

  • Nicolas DUFOUR: Doctor of Management, Associate Professor, CNAM Lirsa, Deputy Risk Director of a diversified group, Antony, France

  • Matthieu BARRIER: Risk Management Specialist, Paris, France

INTRODUCTION

This article discusses the European regulation on artificial intelligence (AI), also known as the "AI Act" (Regulation (EU) 2024/1689), which aims to establish harmonized rules on the use of artificial intelligence. This regulation follows a series of texts referring to the principle of AI regulation (discussions initiated in Europe in 2014), in a context where certain AI-based solutions can be used for malicious purposes (fraud, cyberattacks) or in inappropriate contexts (for example, decisions depriving individuals of their rights on the basis of AI processing). It is scheduled to come into force in 2026 for high-risk AI and is based on a risk-based approach for organizations designing or using AI.

This text is part of a major trend in Europe towards the implementation of risk management and internal control principles for organizations defining or using artificial intelligence solutions. Its principle is to consider the organization associated with the use of artificial intelligence in order to protect not only the consumers and customers of companies, but also citizens and the employees of organizations.

One of the underlying principles is simple: implementing an AI-based process can present risks, particularly when certain AI systems are used without an appropriate governance framework or without sufficient control over their operation. Without security measures or principles for controlling both the solution and its use by users, high risks could arise (e.g., information security breaches related to the use of AI in data management and analysis software, or discriminatory practices in insufficiently controlled AI-based recruitment processes whose algorithms are beyond the control of users).


