European Commission Guidelines on the Definition of Artificial Intelligence Systems

Blog 15/7/2025

Ana Martins, Vice President Consulting at Timestamp, explains the key points of the new AI Act and how the European Commission’s guidelines help identify what is considered an Artificial Intelligence system under the new regulation.

On 2 February 2025, the first general provisions of the Artificial Intelligence Regulation (Regulation (EU) 2024/1689, known as the “AI Act”) became applicable. The Regulation imposes strict rules on the development, making available and use of AI systems in the European Union.

To provide practical criteria for identifying when a technology qualifies as AI — and so enable better interpretation and application of the Regulation — the European Commission issued Guidelines on the definition of an Artificial Intelligence system. The Commission notes that the Guidelines are non-binding and that it is ultimately for the Court of Justice of the European Union to interpret and apply the concept of an AI system within the EU legal framework.

1. WHY ARE THESE GUIDELINES IMPORTANT?

The European Commission’s guidelines are essential for companies and professionals in the tech sector to assess whether their solutions fall under the definition of an AI system and, consequently, whether they are subject to the requirements of the AI Act. Beyond helping the various stakeholders (those who develop, implement, import, distribute or use AI systems) comply with the new rules, the guidelines also serve as a reference for supervisory authorities, ensuring a consistent interpretation and uniform application of the rules across the European Union.

2. WHAT DEFINES AN ARTIFICIAL INTELLIGENCE (AI) SYSTEM?

1. Machine-based
  • Developed and operated using hardware and software;
  • Uses components such as processors, memory, code and operating systems.

Examples:
• Virtual assistant that receives customer queries online and uses specialised software to interpret and generate responses;
• Access control system that checks digital credentials and authorises or blocks building entry;
• Real-time facial recognition through the use of cameras and servers;
• Industrial monitoring system that predicts equipment failures using sensors and software.

2. Autonomy
  • Operates with varying degrees of independence;
  • Can function without constant human supervision.

Examples:
• Autonomous vehicles that use sensors, cameras and software to interpret their surroundings and make driving decisions;
• Voice system that allows users to control lights and heating at home without manual input;
• System that analyses market trends and automatically adjusts investments based on user risk profile.

3. Adaptability
  • Able to learn and adjust its behaviour after deployment;
  • Its responses may evolve over time.

Examples:
• Streaming platform that adjusts suggestions based on previously watched content;
• Medical diagnostic system that updates its results based on new data;
• Translator that improves translation accuracy by learning from user corrections;
• Forecasting algorithm that refines market demand or needs predictions based on market changes.

4. Objectives
  • Designed to achieve explicit or implicit goals;
  • Can be programmed to optimise processes or uncover patterns.

Examples:
• Financial risk analysis system that detects fraud based on transaction patterns;
• AI for logistics optimisation;
• Product recommendation algorithm suggesting items based on user/consumer habits;
• Dynamic pricing system that automatically adjusts prices in line with market demand.

5. Inference capability
  • Analyses input data and generates results without relying strictly on predefined fixed rules;
  • Uses techniques such as machine learning and logic-based models.

Examples:
• CV screening algorithm that recommends suitable candidates;
• Intelligent virtual assistant that understands queries and generates coherent responses;
• Anomaly detection system that identifies suspicious transactions in real time;
• System that predicts peak electricity consumption periods and automatically adjusts energy distribution;
• Programme that analyses network traffic and forecasts cyberattacks based on suspicious activity.

6. Outputs
  • Produces predictions, content, recommendations or decisions;
  • Its responses can affect the physical or digital world.

Examples:
• Film and music recommendation algorithms (e.g. Netflix, Spotify);
• Systems generating images and videos, such as deepfakes or automated designs;
• Credit scoring systems that influence loan approvals;
• Automatic moderation platforms that remove harmful content from social media.

7. Interaction with the environment
  • Directly influences physical environments (such as robots) or virtual ones (such as AI assistants).

Examples:
• Traffic control systems that adjust traffic lights based on vehicle flow;
• Industrial robots that adapt their movements to different manufacturing processes;
• Environmental monitoring systems that analyse climate data and recommend actions;
• Wildfire detection systems that collect temperature and humidity data to identify risks.
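The seven elements above can double as an internal triage checklist before a solution is escalated for legal review. The sketch below is purely illustrative: the class, field names and heuristic are our own, not part of the guidelines, and whether each element must be present in a given case (adaptability, for instance, is described as optional) is a legal question this code does not answer.

```python
from dataclasses import dataclass, fields

@dataclass
class AISystemChecklist:
    """Hypothetical self-assessment mirroring the seven elements above."""
    machine_based: bool            # 1. runs on hardware and software
    autonomy: bool                 # 2. operates with some independence
    adaptability: bool             # 3. may adjust behaviour after deployment
    objectives: bool               # 4. pursues explicit or implicit goals
    inference_capability: bool     # 5. infers outputs rather than applying fixed rules
    outputs: bool                  # 6. produces predictions, content, recommendations or decisions
    environment_interaction: bool  # 7. influences physical or virtual environments

    def likely_ai_system(self) -> bool:
        # Naive heuristic: flag for legal review only when every element
        # is present. Real qualification requires case-by-case analysis.
        return all(getattr(self, f.name) for f in fields(self))

# Two illustrative profiles (the assessments themselves are assumptions):
chatbot = AISystemChecklist(True, True, True, True, True, True, True)
spreadsheet_macro = AISystemChecklist(True, False, False, True, False, True, False)

print(chatbot.likely_ai_system())            # True
print(spreadsheet_macro.likely_ai_system())  # False
```

A checklist like this only structures the inventory exercise described below; borderline results still need review against the Regulation itself.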

3. HOW TO PROCEED?

Organisations should map and analyse all AI systems in use within their operations to determine whether they fall within the definition of an AI system and whether the requirements of the AI Act apply.

4. HOW CAN TIMESTAMP HELP?

We offer a systematised methodology to map, analyse and classify your organisation’s AI systems, identifying critical points and associated risks in line with the regulatory framework.

TIMESTAMP provides a 360° approach to Artificial Intelligence, covering Regulatory Compliance, Technological and Functional Consulting, and Technology Solutions, supporting your business throughout the full AI project lifecycle — from diagnosis, design, development and implementation to monitoring.

We align Artificial Intelligence with your business strategy, requirements and needs, in an ethical and responsible way, ensuring the use of best practices, top-tier technologies, and regulatory compliance.
