Obligation for Literacy in the Field of Artificial Intelligence

Blog 4/6/2025

Ana Martins, Managing Director – Compliance, Governance & Sustainability at Timestamp, explains the regulatory requirement for AI literacy, essential for proper compliance and responsible use across all organisations.

The Artificial Intelligence (AI) Regulation entered into force in August 2024 and is being applied in stages. Since 2 February 2025, the prohibition of a set of AI practices and the obligation of AI literacy have been applicable.

According to the Regulation, AI literacy means the skills, knowledge, and understanding that allow providers, deployers, and other affected persons, taking into account their respective rights and obligations under the Regulation, to make an informed deployment of AI systems and to become aware of the opportunities and risks of AI, as well as the harm it may cause. What this covers depends on the context: it includes the correct application of technical elements during development, the precautions to be taken during use, the best way to interpret the results generated, and, for affected persons, an understanding of how decisions assisted by these technologies can influence their lives.

Compliance with this obligation therefore covers, among other aspects:

  • Protection of fundamental rights and the safety and health of individuals;
  • Assurance of legal oversight in the context of artificial intelligence and of corporate compliance;
  • Support for informed decision-making by all parties involved in AI systems;
  • Understanding the correct application of technical elements during system development;
  • Implementation of protective measures during the use of these technologies;
  • Appropriate interpretation of results generated by the systems;
  • Clarification of the impact of automated decisions on affected individuals;
  • Compliance with current regulations in the field of AI;
  • Promotion of safe and reliable innovation within the European Union;
  • Awareness of the benefits, risks, and safeguards, as well as the rights and duties, associated with the use of these systems.

In this context, organisations should adopt a structured approach to ensure that AI literacy is effective and aligned with their specific needs. To do so, they should:

  • Identify the role of departments and their obligations in the context of AI.
    • Map which areas of the organisation interact directly or indirectly with AI systems (a simplified inventory sketch follows this list).
    • Understand the legal and regulatory requirements applicable to each department under the AI Regulation.
    • Define clear responsibilities to ensure regulatory compliance.
  • Evaluate the risks and impacts of the AI systems used.
    • Identify the different AI systems in operation and analyse their potential risks to the organisation, employees, and end users.
    • Assess the level of autonomy and adaptability of the systems, as well as the associated ethical, legal, and security challenges.
    • Implement monitoring mechanisms to minimise risks and ensure the responsible use of AI.
  • Adapt AI training and literacy to the roles performed by employees.
    • Develop training programmes tailored to the level of involvement each employee has with AI (for example, programmers, end users, and managers will have different needs).
    • Create specific guidelines for professionals involved in the development and implementation of AI systems.
    • Ensure that employees who use AI daily understand how to interpret results and make informed decisions.
    • Raise awareness throughout the organisation about the rights, responsibilities, and ethical implications of AI use.
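
By way of illustration only, the outcome of the mapping and risk-assessment steps above could be recorded in a simple internal inventory. The sketch below (in Python) uses entirely hypothetical system names, departments, and risk categories, loosely inspired by the Regulation's risk-based approach; it is not a tool prescribed by the Regulation, merely one possible way to structure the information.

# Illustrative sketch only: a minimal inventory of AI systems with a simple
# risk classification. All names, categories, and training actions below are
# hypothetical and not prescribed by the AI Regulation.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"


@dataclass
class AISystem:
    name: str
    owner_department: str        # who is accountable for the system
    purpose: str                 # intended use, in plain language
    affected_parties: list[str]  # employees, customers, applicants, ...
    risk_tier: RiskTier          # outcome of the internal assessment
    training_required: list[str] = field(default_factory=list)  # literacy actions


# Hypothetical examples of how an organisation might record its systems.
inventory = [
    AISystem(
        name="CV screening assistant",
        owner_department="Human Resources",
        purpose="Pre-rank job applications",
        affected_parties=["job applicants", "recruiters"],
        risk_tier=RiskTier.HIGH,
        training_required=["recruiter training on interpreting rankings"],
    ),
    AISystem(
        name="Customer support chatbot",
        owner_department="Customer Service",
        purpose="Answer routine customer questions",
        affected_parties=["customers", "support agents"],
        risk_tier=RiskTier.LIMITED,
        training_required=["agent awareness of the chatbot's limitations"],
    ),
]

# Group systems by risk tier to prioritise monitoring and role-specific training.
for tier in RiskTier:
    systems = [s for s in inventory if s.risk_tier is tier]
    if systems:
        print(f"{tier.value}:")
        for s in systems:
            print(f"  - {s.name} ({s.owner_department}); training: {', '.join(s.training_required)}")

A register of this kind can then feed directly into the role-specific training plans described in the third step.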

HOW CAN TIMESTAMP HELP?

  • Systematised methodology to map, analyse, and classify your organisation’s AI systems, identifying relevant critical points and associated risks in alignment with the regulatory framework;
  • Technical and functional consultancy in the design, implementation, and monitoring of Policies and Procedures, Governance Models, and Evaluation Frameworks that support responsible AI development and use, promoting transparency, ethics, and ongoing compliance;
  • Training: as a DGERT-certified company, we support organisations in defining and implementing AI training plans and provide specialised training to empower teams in AI matters.

TIMESTAMP offers a 360º approach to Artificial Intelligence, combining regulatory compliance, technological and functional consultancy, and technology solutions, and supports your company throughout the entire AI project lifecycle: diagnostics, design, development, implementation, and monitoring.

We tailor Artificial Intelligence to your business strategy, requirements, and needs in an ethical and responsible way, ensuring the use of industry best practices, leading technologies, and regulatory alignment.
