AI Act Literacy: Are You Fluent Yet?

Viewpoints
January 31, 2025

Clients regularly ask us a variant of the following question: if, for the purposes of the European Union’s AI Act, we do not develop or deploy prohibited systems and are not providers of general-purpose AI models placed on the market after 2 August 2025, do we have any immediate obligations under the Act?

And the answer is: generally, no — but with an important caveat.

The AI Act, which entered into force on 1 August 2024, imposes obligations across a staggered timeline running from February 2025 to August 2027. Unsurprisingly, the focus of many organisations has been on the obligations that apply to prohibited and high-risk use cases, which apply from 2 February 2025 and 2 August 2026, respectively. For more information on the AI Act and a timeline for compliance, please see our previous Viewpoint here.

There is, however, an obligation that applies to providers and deployers of AI systems irrespective of the risks of such systems, and it takes effect on 2 February 2025. That obligation is to implement AI “literacy measures”.

Under the AI Act, AI literacy means the skills, knowledge and understanding that allow entities and individuals to deploy AI systems on an informed basis and to gain awareness of the opportunities and risks of AI and the possible harm it can cause. Guidance on the topic is currently limited, although the AI Board and the relevant authorities in EU member states have been tasked with producing guidance, such as codes of conduct, under the AI Act.

How does it apply?

The requirements of AI literacy apply generally to:

  • Providers, i.e., an entity that develops an AI system or a general-purpose AI model (or has one developed) and places it on the market, or puts the AI system into service, under its own name or trademark, whether for payment or free of charge.
  • Deployers, i.e., an entity that uses an AI system under its authority (except where the AI system is used in the course of a personal non-professional activity), regardless of the risks and capabilities of the relevant AI system.

The requirements do not apply to other entities under the AI Act, such as importers or distributors. These entities are, however, subject to their own set of obligations under the Act, and may be deemed providers of AI systems in certain circumstances (for example, where they put their name or trademark on, or make a substantial modification to, a high-risk AI system already placed on the market or put into service in the EU).

What are the requirements?

The exact requirements and standards of AI literacy will be context-specific. Among other things, they will depend on:

  • The type and risk of the relevant AI system. AI literacy requires providers and deployers to take into account the respective rights and obligations of entities and individuals under the AI Act, and to consider the persons or groups of persons on whom the AI system is to be used. This means that providers of high-risk AI systems (such as AI systems used in education and vocational training) are likely to be held to a higher standard of AI literacy than providers of lower-risk AI systems.
  • The size and resources of the organisation. The AI Act requires organisations to ensure, “to their best extent”, a sufficient level of AI literacy among their staff and other relevant persons dealing with the operation and use of their AI systems. This suggests that the size and resources of an organisation will be taken into account when determining what constitutes a compliant level of AI literacy under the Act.
  • The relevant employees. The AI Act states that all “relevant actors” across the AI value chain should be provided with appropriate knowledge. When implementing measures to ensure a sufficient level of AI literacy, providers and deployers must take into account the technical knowledge, experience, education and training of the persons concerned, as well as the context in which the AI systems are to be used. The standard of AI literacy will therefore depend on the personnel developing or using the relevant AI system. Notably, the AI Act expressly provides that “persons assigned to implement the instructions for use and human oversight” of high-risk AI systems must have, among other things, an appropriate level of AI literacy to fulfil their tasks for a deployer of an AI system.

While the AI Act’s obligations on AI literacy take effect on 2 February 2025, its provisions on penalties for non-compliance will apply only six months later, on 2 August 2025. From that date, each EU member state’s national competent authorities will be able to impose enforcement measures and sanctions to ensure compliance with the Act. Prior to that date, the main avenue of enforcement is likely to be private litigation.

What’s next?

To meet the AI Act’s requirements, and to the extent they have not already done so, organisations should formalise their approach to ensuring AI literacy among staff (both technical and non-technical) and other relevant individuals. Because these requirements are context-specific and do not lend themselves to a one-size-fits-all approach, in-scope organisations should consider:

  • The role they play and the obligations placed on them under the AI Act.
  • The type and risks presented by their AI systems.
  • How to tailor their AI literacy measures to the roles their employees play, in particular with regard to the development and use of AI systems within the organisation.

While public enforcement will only be possible from 2 August 2025, organisations should take steps now to design, implement and socialise their AI literacy measures. If you would like assistance in understanding how to do this, or to discuss what good practice looks like in your industry, please do get in touch.
