AI Act compliance: Definition of Artificial Intelligence according to the AI Act

The definition of artificial intelligence under the AI Act is set out in Article 3(1), which defines an “AI system” as:

(a) a machine-based system

(b) designed to operate with varying levels of autonomy

(c) that may exhibit adaptiveness after deployment

(d) that infers, from the input it receives, how to generate output such as predictions, content, recommendations, or decisions

(e) that can influence physical or virtual environments.

Let us now examine each of these requirements in detail.

(a) Automated system or “machine-based system”

According to the European Commission’s Guidelines, the term “automated” refers to the fact that AI systems operate through the use of machines, meaning a system designed by and run on “machines,” where “machines” should be understood as covering both hardware and software.

(b) Autonomy

Recital 12 of the AI Act clarifies that the expression “varying levels of autonomy” means that the AI system is designed to operate with some degree of independence from human involvement and intervention.


This implies that only systems that operate exclusively with full human involvement and intervention are excluded. Indeed, the mere fact that an AI system is capable of generating output that is not entirely controlled by a human makes it “autonomous” under the Regulation.

As a result, this requirement is very easy to meet and is even satisfied by software programs that have been in use for years, such as email classification systems.

(c) Adaptiveness

The concept of adaptiveness is related to autonomy and refers to the AI system’s ability to modify its behavior during use.

However, the use of the term “may” clearly indicates that this characteristic is not decisive for classifying a system as an AI system.

Thus, in practical terms, adaptiveness is not a distinguishing feature.

(d) Inference

Inference capability refers to the process of generating outputs such as predictions, content, recommendations, or decisions.

Recital 12 of the AI Act clarifies that an AI system’s inference capability goes beyond basic data processing, enabling learning, reasoning, or modeling.

Therefore, artificial intelligence systems differ from traditional software systems in that the former are not solely based on rules defined by natural persons to perform automated operations. For instance, the recitals of the regulation explicitly include “machine learning software” among those having inference capabilities.

This definition of “inference” would be extremely broad if not for the fact that the Commission, in its Guidelines, has excluded systems with limited inference due to their restricted ability to analyze patterns and autonomously adapt their outcomes, such as:

– Systems designed to improve mathematical optimization or to accelerate and approximate traditional optimization methods, such as linear or logistic regression;

– Data processing systems that exclude “learning, reasoning, or modeling” or systems for mere “data description”;

– Systems whose outputs can be obtained through the application of basic statistical rules, such as financial forecasting programs used to predict future stock prices (e.g., based on historical average price predictions) or weather forecasting systems that use last week’s average temperature to predict tomorrow’s temperature.
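As a minimal, hypothetical illustration of the last category, the Guidelines’ weather example can be sketched as follows (the function name and sample figures are ours, chosen only for illustration):

```python
# Hypothetical sketch: a "forecaster" that predicts tomorrow's temperature
# as the average of the last seven days. Its output follows from a basic
# statistical rule, with no learning, reasoning, or modelling, so under the
# Commission's Guidelines it would fall outside the AI Act's definition of
# an AI system.
def predict_tomorrow(last_week_temps: list[float]) -> float:
    """Return the mean of the past week's temperatures."""
    return sum(last_week_temps) / len(last_week_temps)

# Illustrative data only (degrees Celsius).
temps = [18.0, 19.5, 21.0, 20.5, 19.0, 18.5, 20.0]
print(predict_tomorrow(temps))  # → 19.5
```

The rule is fully specified by its developer in advance; nothing in the system’s behaviour is inferred from the input beyond applying that fixed formula, which is precisely why such systems sit outside the definition.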

(e) Ability to influence physical or virtual environments

According to the Commission’s Guidelines, it is sufficient that the system has an impact on the surrounding environment in a broad sense (physical or virtual).

This requirement is once again extremely broad: a system only needs to be embedded in a process (e.g., a management decision-making process) to have an impact on its environment and thus fall within the definition of an AI system.

In conclusion, the definition of an artificial intelligence system adopted by the AI Act is extremely broad: if we use the term “AI” to describe our system or product, it is highly likely to fall within this definition and, consequently, within the material scope of the regulation.
