Five Takeaways From the EU Commission’s AI Literacy Q&As

Viewpoints
May 21, 2025

In the nearly four months since the EU AI Act’s literacy requirements took effect, we regularly receive variants of the following questions: what, exactly, does it mean to be AI-literate, and how does my organisation meet that standard? 

Earlier this month, the European Commission released its AI Literacy – Questions & Answers (available here). The Q&As are a useful primer, both for organisations at the start of their literacy journey and for those that, having already taken steps to educate their employees on AI development and use (whether via training, policies, or both), are now looking for insights into how the Act’s requirements will be interpreted and enforced.

1. Most importantly, the Commission confirms, seemingly for the first time, that the AI literacy requirements will not be enforced by regulators until 3 August 2026. (Private enforcement remains possible before that date.)

The “national market surveillance authorities” (i.e., EU member state regulators) are responsible for supervising and enforcing the literacy requirements. As to whether those authorities could impose penalties for historic non-compliance, the Commission says only: “The prohibitions apply since 2 February 2025.”

Notably, a commissioner from the Irish regulator recently stated at a public event that, rather than being enforced as a standalone matter, the failure to comply with the Article 4 literacy requirement would likely act as a mitigating or aggravating factor in any wider enforcement action under the AI Act. It remains to be seen whether other authorities take a similar position — but the Commission acknowledges that enforcement of the AI Act could be more likely if non-compliance is due to a lack of “appropriate training and guidance” of relevant individuals.

2. The Q&As reiterate that the AI literacy requirement is contextual and should be assessed on a case-by-case basis.

Small AI Deployer GmbH does not need to have the same measures in place as Multinational AI Provider Inc. Indeed, such an approach would miss the point of the literacy principle, as well as what it means to define an effective compliance programme more broadly. That said, organisations should not take this flexibility to mean that an off-the-shelf solution, adopted without careful consideration, will be appropriate. Embedding AI literacy within an organisation is, or should be, about more than tick-box compliance. Rather, as described below, taking into account your organisation’s specific use cases and risk profile underpins the entire principle.

3. The reference in Article 4 to ensuring the literacy of “staff and other persons” concerns “persons broadly under the organisational remit”.

The Q&As state that such individuals could include contractors, service providers and clients. One can appreciate why some contractors and third-party providers (e.g., service engineers) will need to be literate in their customers’ AI technologies, as well as in the vendor’s own AI systems used when servicing those customers. Indeed, we are already starting to see training and competence requirements being contractually flowed down, and this approach is likely to become commonplace in the coming months and years.

By contrast, it remains to be seen why and how an organisation would be responsible for its clients’ AI literacy standards. As a starting point, organisations are therefore advised to focus on the literacy of the individuals within their control.

4. The AI Act does not require organisations to measure their employees’ knowledge of AI, nor will the AI Office impose strict requirements around what constitutes a “sufficient” level of AI literacy.

That said, the Commission makes clear that in order to comply with Article 4 of the Act, in-scope entities should, as a minimum, ensure:

  • A general understanding of AI within the organisation;
  • A specific understanding of the context in which the organisation will develop and/or use AI systems, including by reference to industry sectors, use cases and the differences in knowledge, experience and training of the relevant individuals; and 
  • That AI literacy measures reflect the role of the organisation in the AI ecosystem — i.e., a developer of its own AI systems and/or a user of third-party systems — and the risks of those systems.

Although the Q&As confirm that AI training is not mandatory, the Commission cautions that, in many cases, “simply relying on the AI systems’ instructions for use or asking staff to read them might be ineffective and insufficient”. We would go further: this approach will usually be insufficient. With the best will in the world, relying on employees to self-educate rarely amounts to a robust or defensible approach to compliance. Save for limited exceptions, staff training should be a core aspect of AI literacy for all organisations, and a decision not to provide it will be closely scrutinised, and likely viewed negatively, by regulators, customers and other stakeholders.

5. The Commission will continue to provide guidance on AI literacy, as may the national market surveillance authorities, once they are established.

The Commission’s upcoming guidance for providers of high-risk AI systems will also “touch upon issues of literacy”, such as in relation to human oversight and risk management. That said, providers whose products and services do, or are likely to, constitute high-risk AI systems are advised not to wait for the publication of that guidance before starting their literacy journey. For its part, the AI Office will continue to update its living repository on AI literacy practices (available here), and a dedicated webpage will be developed for literacy-related activities.  
