Overview
ABSTRACT
This paper discusses the European regulation on artificial intelligence systems, the AI Act. The regulation sets out principles for the sound governance of artificial intelligence systems using a risk-based approach and defines the associated compliance and control procedures. The article presents a series of use cases and practitioner feedback to help organizations understand and integrate the AI Act.
AUTHORS
- Nicolas DUFOUR: Doctor of Management, Associate Professor, CNAM Lirsa, Deputy Risk Director of a diversified group, Antony, France
- Matthieu BARRIER: Risk Management Specialist, Paris, France
INTRODUCTION
This article discusses the regulation on artificial intelligence (AI), also known as the "AI Act." This European regulation (Regulation (EU) 2024/1689) aims to establish harmonized rules on the use of artificial intelligence. It follows a series of texts on the principle of AI regulation (discussions initiated in Europe in 2014), in a context where certain AI-based solutions can be used for malicious purposes (fraud, cyberattacks) or in inappropriate contexts (for example, decisions depriving individuals of their rights on the basis of AI processing). It is scheduled to come into force in 2026 for high-risk AI and rests on a risk-based approach for organizations designing or using AI.

This text is part of a broader European trend toward applying risk management and internal control principles to organizations that define or use artificial intelligence solutions. Its principle is to consider the organization surrounding the use of artificial intelligence in order to protect not only companies' consumers and customers, but also citizens and employees of organizations.

One of the underlying principles is simple: implementing an AI-based process can present risks, particularly when certain AI systems are used without an appropriate governance framework or without sufficient control over their operation. Without security measures or principles for controlling both the solution and its use by users, high risks could arise (e.g., information security breaches related to the use of AI in data management and analysis software, or risks of discriminatory practices in insufficiently controlled AI-based recruitment processes whose algorithms are beyond the users' control).
KEYWORDS
artificial intelligence | risk management | compliance | internal control
This article is included in
Safety and risk management
Bibliography
Standards
- ISO/IEC 42001:2023, Information technology – Artificial intelligence – Management system.
Regulations
"AI ACT": Regulation (EU) 2024/1689 of the European Parliament and of the Council of June 13, 2024, laying down harmonized rules on artificial intelligence and amending Regulations (EC) No. 300/2008, (EU) No. 167/2013, (EU) No. 168/2013, (EU) 2018/858, (EU) 2018/1139, and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797, and (EU) 2020/1828 (Regulation on Artificial Intelligence) (text with EEA relevance).