Decoding Digital Health: AI/ML Webinar Recap—U.S. FDA & European Perspectives

Podcast
May 21, 2024
18:43

On this episode of Ropes & Gray’s podcast series, Decoding Digital Health, host Kellie Combs, a partner in the life sciences regulatory and compliance group and co-chair of the firm’s cross-practice digital health group, is joined by Greg Levine, chair of the firm’s global life sciences regulatory and compliance practice, and Lincoln Tsang, head of the European life sciences practice. They unpack key insights from a recent webinar, hosted by Ropes & Gray and HLTH, on regulatory issues surrounding artificial intelligence (“AI”) and machine learning (“ML”) devices. The conversation covers when AI and ML software is regulated as a medical device, the evolving regulatory frameworks in the U.S. and Europe, and upcoming guidance and developments in the field.


Transcript:

Kellie Combs: Welcome to Decoding Digital Health, a Ropes & Gray podcast series focused on legal, business and regulatory issues impacting the digital health space. My name is Kellie Combs. I am a partner in the life sciences regulatory and compliance group and a co-chair of our cross-practice digital health group. I am joined today by my partners Greg Levine, chair of the firm’s global life sciences regulatory and compliance practice, and Lincoln Tsang, head of our European life sciences practice. On this episode, we will discuss Greg and Lincoln’s recent webinar on “Regulatory Issues Surrounding AI and Machine Learning Devices: U.S. FDA and European Perspectives,” which was hosted on April 30 by Ropes & Gray and HLTH. Greg and Lincoln were joined on the webinar by Sonja Fulmer, the Deputy Director of FDA’s Digital Health Center of Excellence, and they had a fascinating and wide-ranging discussion about the use of AI and ML in medical products. Today, Greg and Lincoln will share with us some of the highlights coming out of that conversation and give us a preview of what’s on the horizon for the regulation of medical devices that incorporate AI and ML.

Greg, let's start with you. When we talk about AI and ML in medical devices, what do we mean? What’s the difference between “locked” and “adaptive” algorithms? And what are some common use cases?

Greg Levine: The FDA follows the U.S. government’s definition of artificial intelligence as “a machine-based system that can, for a given set of defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments.” In more lay terms, I think of it as software that recognizes patterns in data and makes predictions or recommendations based on those patterns. For machine learning, the FDA and the government’s definition refers to “a set of techniques that can be used to train AI algorithms to improve performance of a task based on data.” “Locked” algorithms are, as the name suggests, those that stay constant unless and until they are updated to reflect improvements, which might be developed through additional machine learning training, for example. “Adaptive” algorithms, by contrast, are those that incorporate improvements automatically.
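
To make the “locked” versus “adaptive” distinction concrete, here is a minimal Python sketch under illustrative assumptions: the class names, model choice, and retraining logic are hypothetical and not drawn from any FDA definition or authorized product. The locked model’s behavior is fixed once deployed, while the adaptive model refits itself as new labeled data arrive.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class LockedModel:
    """A "locked" algorithm: parameters are frozen at deployment, and any
    change requires a deliberate, reviewable update."""
    def __init__(self, X_train, y_train):
        self.clf = LogisticRegression().fit(X_train, y_train)

    def predict(self, X):
        # Behavior is constant until the manufacturer ships a new version.
        return self.clf.predict(X)

class AdaptiveModel:
    """An "adaptive" algorithm: it incorporates improvements automatically
    as confirmed outcomes arrive from the field."""
    def __init__(self, X_train, y_train):
        self.X, self.y = X_train, y_train
        self.clf = LogisticRegression().fit(self.X, self.y)

    def incorporate(self, X_new, y_new):
        # Append newly labeled data and refit: the deployed behavior
        # changes over time without a discrete release event.
        self.X = np.vstack([self.X, X_new])
        self.y = np.concatenate([self.y, y_new])
        self.clf = LogisticRegression().fit(self.X, self.y)

    def predict(self, X):
        return self.clf.predict(X)
```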

By far the most common use cases we’ve seen in the U.S. for AI in medical device software are in various radiology applications: reviewing scans to identify potentially cancerous lesions, for example. In our conversation with Sonja Fulmer, we also discussed new directions, or new kinds of applications for AI, that the Agency is starting to see. As an example, FDA recently issued the first de novo marketing authorization for an AI/ML product intended to aid in the rapid diagnosis and prediction of sepsis in hospitalized patients, so that’s the kind of new application the FDA is seeing now.

Kellie Combs: Great—thanks, Greg, for providing that background. Lincoln, now let me turn to you to talk about the regulatory framework. It sounds like the regulatory frameworks in the U.S. and across the pond are continuing to evolve, and at a high level, it looks like regulators have adopted a risk-based approach to regulation of medical devices that incorporate AI and ML. Lincoln, can you explain?

Lincoln Tsang: There are three key components that regulators across the world, not just in Europe or the U.S., are increasingly focusing on, and they have been the subject of growing debate at an international level through the International Medical Device Regulators Forum. The first component that people tend to focus on, from a risk minimization or risk management perspective, is life cycle regulation, one of the important tools to control and manage the risks associated with the use and construction of an AI algorithm. The second, arising from the first, is potential algorithmic bias, a very important component that I will elaborate on a bit more. And the third is transparency about the AI tool, so that users are aware of its limitations and utility.

Now, let’s focus a bit more on the algorithmic bias and transparency points, which underpin the risk minimization process. With AI and ML, we are essentially dealing with mathematical models built on data analytics, and each stage of the data analytics journey presents its own level of risk. Descriptive models, for example, uncover patterns, as I think Greg has alluded to, in past or current events. Predictive models then use historical data to predict a particular outcome or identify prognostic factors. And prescriptive models, sometimes labeled “optimization models,” are designed to find the best solution to a given problem. Each of those models carries unique risks. Now, algorithmic bias is not actually unique to AI and ML; it occurs in real life in clinical diagnosis, because diseases can be affected by a number of prognostic factors: gender, race, socioeconomic status. Any model built on data that are not sufficiently generalizable may give rise to algorithmic bias. Those are the risk issues that regulators are increasingly focused on from a risk minimization perspective.
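
As a rough illustration of the generalizability point, a simple subgroup performance check is one way this kind of bias can surface before deployment. The Python sketch below uses made-up data and hypothetical group labels; it is not a method prescribed by any regulator.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def subgroup_performance(y_true, y_pred, groups):
    """Accuracy per subgroup; large gaps suggest the training data were
    not sufficiently generalizable across that attribute."""
    return {g: accuracy_score(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

# Made-up example: the model does noticeably worse for subgroup "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(subgroup_performance(y_true, y_pred, groups))
# {'A': 0.75, 'B': 0.25}: a gap like that would warrant investigation
```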

Kellie Combs: Let's move on and talk about the regulatory analysis, and how we should be thinking about products that incorporate AI and ML. Part of your conversation with Sonja focused on the divide between software functions that trigger regulatory oversight, and those that are “non-device.” Greg, let’s start with you: How does the inclusion of AI/ML technology into a medical product impact the analysis of device regulatory classification in the U.S.?

Greg Levine: The analysis itself is done the same way as it would be for other types of software. FDA has put out guidance to help software developers distinguish between applications that would be regulated as medical devices and those that would not: for example, clinical decision support software that might not be regulated by FDA based on certain statutory criteria and FDA’s interpretation of those criteria. Part of the conversation with Sonja was that there’s been some controversy about how FDA has interpreted those criteria in its published guidance on clinical decision support. Sonja’s position was that FDA’s guidance lays out criteria that can be understood, and that for those who are unsure where they stand, FDA has a process that allows you to come to the Agency and obtain its opinion on whether a particular software application is or is not clinical decision support. But the analysis itself applies the same criteria as you would for other types of non-AI software.

Kellie Combs: Got it—thanks, Greg. What about you, Lincoln: What’s it like across the pond?

Lincoln Tsang: Across the pond, in the U.K. and in Europe, the definition of a general medical device under the current regulatory regime includes software, but not all software used in a health care setting is, in and of itself, considered a medical device. We have the benefit of case law to guide the classification. Largely, it is based on an assessment of the functionality of the software and the intended use or purpose ascribed by the manufacturer. A medical device, including a software medical device, must have a medical purpose ascribed by the manufacturer, such as restoring, correcting, or modifying physiological functions in human subjects.

On top of the software medical device regime in Europe, as people may be aware, there is a piece of legislation going through the legislative process called the AI Act. The likely future landscape will be a combination of sector-specific legislation (i.e., controlling medical devices), with AI systems that are not medical devices separately regulated by the so-called “AI Act.” The AI Act essentially introduces EU-wide requirements for AI and machine learning systems based on risk evaluation, a sliding scale of rules keyed to risk (i.e., unacceptable-risk, high-risk, and limited-risk AI systems). Attached to that, regulators have the power to impose maximum financial penalties for infringement. Underpinning the AI Act are two legislative proposals: the first expands the current strict product liability regime, and the second, the so-called “AI Liability Directive,” represents a more targeted harmonization of national civil liability rules for AI.

Kellie Combs: That’s a great segue to talk about what’s on the horizon. Greg, coming back to you, I know that Sonja mentioned several items on FDA’s to-do list for this year. Can you elaborate?

Greg Levine: Yes, there are a couple of guidance documents specific to AI that FDA has on its agenda for this year. One is to finalize the draft guidance FDA previously published on predetermined change control plans. Those are plans that a software developer would submit to FDA as part of a marketing application, and that would give the manufacturer some flexibility to update its algorithms without going back through FDA’s premarket review process. There is a draft out already that describes what such a submission would need to include, and FDA has already authorized a number of products that include these kinds of plans, so the Agency intends to clarify some aspects of that guidance, such as the issue of “locked” versus “adaptive” algorithms. We may see that final guidance as soon as this year.

Then, another one the Agency is working on that Sonja mentioned, which might come out this year, presumably in draft form, is a new guidance document addressing life cycle management of AI. Lincoln mentioned before some of the important aspects of risk management for AI applications and the need to keep an eye on them throughout the life cycle of the device, including post-market, once it is already on the market. If we see bias in the performance of the algorithm that was not previously observed, or other signs that the algorithm is not working as expected, one might need to make adjustments to manage those risks even after the product is on the market. So, we expect to see some guidance on that coming out this year as well.

The last guidance Sonja mentioned is not specific to artificial intelligence but deals with medical device software more broadly: finalization of a draft guidance on risk characterization for medical device software put out by the International Medical Device Regulators Forum (“IMDRF”), rather than FDA itself. At the time we recorded this podcast, I believe there were two or three days left to comment on that draft, so the comment period will have closed by now, but we may see a final version of that guidance come out this year as well.

Kellie Combs: Got it—thanks, Greg. Lincoln, are there other important developments that we should be on the lookout for across the pond?

Lincoln Tsang: Not just across the pond, I think, but internationally as well. I mentioned earlier the three important components for ensuring safe use of AI and ML, and life cycle regulation will be the focus of future development. How are you going to manage bias, and how are you going to meet transparency requirements for users? You need to build trust in the system and in the regulatory structure. With those considerations in mind, a number of so-called international standards have been issued by the International Organization for Standardization (“ISO”) looking at issues like the trustworthiness of AI and the bias that may arise from AI systems and AI-aided decision-making tools. The aim is for regulation to be drawn up within that overall framework of controls, providing greater certainty about the requirements manufacturers can look to in developing AI or machine learning tools to guide, for example, clinical decision-making.

There is a great deal of collaboration amongst regulators nowadays. Sonja mentioned in her talk the collaboration between Health Canada, FDA, and the U.K. Medicines and Healthcare products Regulatory Agency (“MHRA”) on setting out guiding principles for good machine learning practice. In the U.K., the government is focusing very much on how to encourage innovation in AI and ML. A number of broader guiding principles have already been developed by various government departments, not only the MHRA. Those principles will not be codified into legislation, but sector-specific legislation is likely to be promulgated in the coming years. Obviously, this year is going to be challenging, because in the run-up to the election it is unlikely there will be sufficient parliamentary time for new legislation to be put through.

Kellie Combs: Thanks, Lincoln. When we’re thinking about regulation on a global scale, can you talk a little bit more about some of the harmonization efforts that are occurring in this area?

Lincoln Tsang: I think Greg has already touched on that. The harmonization effort is largely mediated through the International Medical Device Regulators Forum. A position paper has already been developed, essentially to encourage greater cooperation and collaboration amongst international regulatory authorities, not limited to well-established authorities like FDA, Health Canada, and the U.K. agency, but also engaging broader constituents like the Korean FDA and the Singapore Health Sciences Authority. The reason for greater international collaboration is the recognition, reflected for example in the E.U. AI Act, that many devices, tools, or services provided through AI will be accessed by end users who are not necessarily based in the provider’s home country, so the regulation has to be fit for purpose for the global community.

Kellie Combs: Thanks, Lincoln. Anything to add from your perspective, Greg?

Greg Levine: In addition to the documents we’ve mentioned already, there is also a guiding principles document put out by those three regulators, FDA, Health Canada, and the MHRA in the U.K., on predetermined change control plans for machine learning, and I would expect that we may see similar kinds of guidance or rules being developed in those jurisdictions. In addition, we see these broader frameworks being carried forward globally through the IMDRF. And Lincoln has already alluded to some of the international standard-setting organizations. So, through those various mechanisms, there is already a fairly intentional effort to achieve some degree of harmonization across the globe for these applications, which don’t necessarily have geographic boundaries.

Kellie Combs: Thanks to you both for joining me today. Unfortunately, we’re out of time, but I’ve definitely learned a lot. For our listeners, the webinar recording is available online, if you’d like to take a deeper dive on these topics. Also, as a follow-up to the webinar, Lincoln has provided some additional thoughts on the evolving regulatory landscape, which are also available on our website. Please stay tuned for additional podcasts in the series—we’ll be discussing further trends and hot topics in digital health.

We also wanted to make you aware of an upcoming webinar on AI/ML issues, hosted by Ropes & Gray and led by me and Christine Moundas, one of my co-chairs in the digital health practice and a partner in the health care group. On that webinar, slated for June 13, we’ll be discussing “Considerations for AI in Healthcare and Life Sciences Applications.” Please be on the lookout for the invitation and let us know if you’d like to join.

Thank you so much for listening today. We appreciate you tuning in to our Decoding Digital Health podcast series. If we can help you navigate any of the topics we’ve been discussing, please don’t hesitate to get in touch. For more information about our practice or other topics of interest in the digital health field, or to sign up for our mailing list with access to alerts and updates around notable developments as well as invitations to digital health-focused events, please visit ropesgray.com/digitalhealth. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple and Spotify. Thanks again for listening.
