R&G Tech Studio: Navigating AI Literacy—Understanding the EU AI Act

Podcast
April 29, 2025
13:07 minutes

On this episode of the R&G Tech Studio podcast, Rohan Massey, a leader of Ropes & Gray’s data, privacy and cybersecurity practice, is joined by data, privacy and cybersecurity counsel Edward Machin to discuss the AI literacy measures of the EU AI Act and how companies can meet its requirements to ensure their teams are adequately AI literate. The conversation delves into the broad definition of AI systems under the EU AI Act, the importance of AI literacy for providers and deployers of AI systems, and the context-specific nature of AI literacy requirements. They also provide insights into the steps organizations should take to understand their roles under the AI Act, develop training modules, and implement policies and procedures to comply with AI literacy principles.


Transcript:

Rohan Massey: Hello, and welcome to the latest episode of the R&G Tech Studio podcast. I’m Rohan Massey, a partner in the data, privacy & cybersecurity practice at Ropes & Gray, and today, I’m thrilled to be joined by my colleague and data, privacy & cybersecurity counsel, Edward Machin. We’re going to dig into the AI literacy measures of the EU AI Act and how companies can meet its requirements so that your teams are adequately AI literate.

Edward, let’s get started. Now, as most people know, the EU AI Act came into force in August 2024. It’s the first piece of comprehensive legislation regulating AI systems and their use. The definition of an “AI system” is very broad—it means any machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Wow, there’s a lot in there, but basically, if you’ve got a system that infers its own outputs rather than following explicit human instructions for each one, it’s going to be captured by the EU AI Act. Edward, what do we need to be aware of in understanding these systems and who is caught by the act?

Edward Machin: Thanks, Rohan. Under the act, AI literacy is the skill, knowledge, and understanding that allows organizations—and, crucially, individuals—to make an informed deployment of AI systems and to gain a better understanding of the opportunities and risks of AI, of which there are many, but also the possible harms it can cause. The AI Act, as you may know, applies to a wide range of organizations using specific terminology: providers, deployers, importers. The entities in scope for the AI literacy obligation, which took effect in February 2025, are primarily “providers” of AI (i.e., the entity that develops the AI system or the general-purpose AI model) and “deployers,” which is the AI Act’s term for “users”—both organizations and individuals—regardless of the risks and capabilities of the AI system.

The AI Act categorizes AI systems into risk tiers. Importantly, the AI literacy principle applies regardless of risk: if you’re a provider of AI systems or a deployer (a user of those systems), you will need to comply with the AI literacy principle irrespective of whether your use falls in a high-risk, limited-risk, or low-risk context. The caveat is that the principle doesn’t apply to other entities under the act—importers and distributors of AI systems—although those entities are subject to their own set of obligations, and in some cases they may become providers, depending on the roles they play within the supply chain or ecosystem. So, if you are not currently a provider or deployer of an AI system but you are or will be an importer or distributor, it’s important to remember that those roles can shift: you could find yourself classified as a provider, in which case you do need to comply with the AI literacy principles.

Rohan Massey: Okay, and just for clarification: the importer and the distributor are organizations importing or distributing AI systems around the European Union, as this act applies to AI systems placed on the market within the European Union. I know we have quite a global audience, so that bit of clarification may be helpful. Edward, on the basis of what you’ve said, if I am a deployer or a provider, irrespective of the risk of the system I’m using, I have to ensure my employees or users are AI literate. Now, does that mean there’s a one-size-fits-all standard for what AI literacy is?

Edward Machin: It’s a really good question. Having said that the AI Act buckets uses of AI into certain risk tiers, the benefit of the AI literacy principle, for organizations seeking to get their arms around it, is that it’s a subjective standard. There is no one-size-fits-all: the specific requirements and standards will be context specific, depending on your use of AI, your business, and a whole host of other factors that I’ll spend a little time going through now. These are not exhaustive, but one thing we’re seeing organizations do well is consider the type and risk of the relevant AI system.

  • Obviously, the higher-risk the system, the greater the need for the people developing it, using it, or (as Rohan and I have touched on) importing or distributing it in a way that makes them a provider, to be AI literate. And because this is not one-size-fits-all, there will be context-specific differences even within the same risk categories. You may be using the same system to receive different inputs or produce different outputs, so you will need to consider how your business does or wants to use, deploy, or develop the AI system, and then think about the types of risks that follow from that.
  • Secondly, you need to think about the size and resources of the organization. Big tech organizations will obviously have significantly more resources baked into their compliance and governance programs and frameworks than a small or medium-sized enterprise (“SME”). That is not to say an SME is not required to comply with the AI literacy principle, but it does help that regulators will, we hope, take into account the resources, size, budgets, and so forth of organizations when considering whether they are AI literate.
  • The relevant employees are also an important factor. Again, context is key here. Folks in non-AI or non-technology-facing roles, or with limited roles, will need much less literacy than folks in, for example, product or HR. Even in legal and compliance, frankly, when you’re advising on these laws, it’s important that the breadth of understanding across the organization fits the relevant personnel who are either using or developing the system.

There has been some guidance on this—we are still in early days, both with the literacy principle itself and with the AI Act more generally. Non-technical employees are also subject to the literacy requirements. The Dutch regulator, one of the few supervisory authorities to have issued guidance on this topic so far, makes what we have said clear: the level of AI literacy for each employee must be in line with the context in which the systems are used and the groups of people affected by the systems. All of this is to say that it is not one-size-fits-all—there is, and should be, room for context. What is important is that you can justify and demonstrate, as an organization, why you have taken the steps you have taken and why they are proportionate and reasonable in the circumstances.

Rohan Massey: Context is key—that’s very clear. With this part of the EU AI Act having come into force a couple of months ago, on February 2, what should organizations be doing to understand their own context and to identify which of their employees will require literacy training? What should that training look like? What resources should they put behind it? You mentioned there has only been a little guidance, and I think that follows the EU playbook: a big piece of legislation comes out with a staggered, multi-year implementation timeline, and we expect guidance to come from regulators as they develop their thinking—or, in some cases, especially with the EU AI Act, once the regulators themselves have been formally established. But without that guidance, what should companies be thinking about?

Edward Machin: We are telling clients—and we already see clients doing—some of the following things. The first and most important thing organizations need to do, before even turning to AI literacy and what they need to roll out, is understand the role they play under the AI Act and the obligations placed on them, because everything flows from that. This can build into either a wider AI governance strategy or a standalone project just for AI literacy—we typically see, and recommend, the former. Once you have your arms around the types of AI systems you’re going to be using or developing, you can develop your training modules and your policies and procedures around literacy. You can work out whom to triage and focus on first—the most critical employees within your business, those at the front line of AI use or development—and then cascade the training and the policies and procedures down from there. As Rohan has said, this is all context specific, so there is no set playbook, but we are seeing the same themes emerge across organizations: training that is not the tick-box compliance training some folks may have done or seen in the past, but living, breathing training that allows the people at the front line of AI use or development, in particular, to understand what they’re doing and the attendant risks and opportunities of AI.

We’re also seeing organizations start to beef up their policies and procedures around this—to hold workshops, seminars, and similar interactions with their employees—so that it’s not necessarily prescriptive (“You must do this,” or, “You must do that”), but instead helps employees think: “When you’re using AI”—and most organizations now are—“what do you need to think about? What is our organization’s particular view on this, our risk tolerance, our approach to compliance?” It will require multiple strands, both within the organization and, in some cases, outside it, including conversations with your suppliers and critical pieces of the supply chain.

Then, thirdly, as Rohan said, the guidance so far has been limited. The European AI Office, which is the omnibus regulator for the AI Act, published what it calls a Living Repository of AI Literacy Practices. To be clear, this is not guidance on how to comply with the requirements; rather, it shows examples of how certain companies are thinking about AI literacy and the steps they have taken, and intend to take, to meet their requirements under the act. We think it’s a really good tool and resource. Again, there doesn’t have to be a one-size-fits-all approach, but it is interesting to see how organizations that may be in your sector, or of a similar size, are dealing with this.

The final point: as Rohan said, guidance is still scant from both national regulators and the EU authorities themselves, so you should keep an eye on this. Ropes & Gray will continue to alert our clients and others as and when guidance is published. We are expecting further guidance, but that is not to say you should wait for it to be published—it’s important to be taking steps now to meet the requirements, even though formal enforcement isn’t expected for a number of months. And, as you said, Rohan, it will be interesting to see how, and whether, that enforcement takes shape over the coming months and years. This is a critical part not only of your AI literacy obligations but of your wider AI governance framework, so it is important for your employees and other stakeholders to understand, and work toward, the AI approach, risk tolerance, policies and procedures, and other steps your organization requires in order to be compliant.

Rohan Massey: Great—thank you very much, Edward. It certainly seems there is a lot for organizations to do to get their heads around these requirements and get ahead on AI literacy. Of course, if there’s anything Ropes & Gray can do to help, we are here for you. Edward, with that, I want to thank you very much for joining me today—this has been a really insightful conversation. And thank you to all of our listeners. This has been the R&G Tech Studio podcast, available on the Ropes & Gray website, on the R&G Tech Studio podcast page, and wherever you get your podcasts. Thank you, everyone, for listening.
