Challenges and opportunities: A global review of regulatory developments surrounding AI and machine learning as devices in healthcare settings

Viewpoints
May 5, 2024
6 minutes

My colleague Greg Levine, who chairs our FDA regulatory compliance practice, and I had a very informative webinar conversation on 30 April 2024 with Dr Sonja Fulmer, Deputy Director of the FDA Digital Health Center of Excellence, covering a number of cross-cutting regulatory issues. It was a successful event, with over 600 registered participants. The webinar, organised by HLTH, can now be accessed here.

Below are some thoughts on the evolving regulatory landscape. 

The world of AI/ML-enabled devices has been expanding rapidly, with increased interest and investment in the development of wide-ranging applications.

Industry has already deployed these technologies to improve efficiency and productivity in drug discovery, clinical development, and other activities throughout the product life-cycle.

Such emerging technologies have the potential to reshape healthcare delivery and patient care. With their ability to analyse vast quantities of data rapidly, AI/ML technologies can identify new patterns and predictors to improve health outcomes through individualised treatment pathways.

Such software-based technologies are increasingly being applied to ‘smart clinical monitoring’ and will become clinically important tools for improving patient outcomes through timely and personalised clinical interventions.

AI-linked diagnostic systems have been developed to identify signatures of early disease and disease severity in various settings, such as radiology for early cancer diagnosis, diabetes, high blood pressure, and cardio-physiological monitoring. The US Food & Drug Administration has authorised software as the first AI tool capable of guiding rapid diagnosis and prediction of sepsis based on biomarkers and clinical data.

Sepsis is a complex, life-threatening condition, and its early diagnosis has presented practical clinical challenges for decades across healthcare systems. Time-sensitive medical intervention is critical to improving patient survival. In Europe, an AI software product has been granted a European conformity (CE) certificate; its approved label claim is to assist in stroke triage and to provide decision support for life-saving treatment using non-contrast computed tomography, so that patients can receive treatment more quickly.

Innovation and growth of emerging technologies may benefit from greater clarity and certainty in regulatory requirements.

Legislatures and regulatory authorities around the world use regulation to deliver various public policy objectives. For the life sciences and healthcare sectors, this means ensuring a high level of protection of public health and of consumers or patients. Regulation is characterised by a set of rules and expected behaviours that economic operators or individuals should follow. Very often, regulatory oversight involves one or more regulators enforcing or influencing compliance.

Regulation of innovative therapeutic or diagnostic approaches in the life sciences and healthcare sectors follows essentially the same paradigm. Some have advocated a more agile approach to regulating AI and ML, one that supports innovation whilst protecting the public and the safety and rights of users.

President Biden’s Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence” of October 2023 states that the U.S. government “should advance the responsible use of AI in healthcare and development of affordable and life-saving drugs”, and directs the Department of Health and Human Services to “establish a safety program to receive reports of – and act to remedy – harms or unsafe healthcare practices involving AI.”

In addition, the Digital Health Center of Excellence has been created by FDA to meet three principal objectives:

  • To connect and build partnerships to accelerate digital health advancements;
  • To share knowledge to increase awareness and understanding, drive synergy, and advance best practices; and
  • To innovate regulatory approaches to provide efficient and least burdensome oversight whilst meeting the FDA standards for safe and effective products.

FDA engages other federal agencies to address the regulatory challenges and opportunities posed by such emerging technologies, and has considered that it may need updated legal authority to regulate AI-enabled medical devices.

In the EU, the European Parliament approved the AI Act on 13 March 2024, a key step in the legislative process. The Act aims to ensure safety and compliance with fundamental rights whilst promoting innovation. It has extraterritorial reach, applying to providers placing on the market or putting into service AI systems in the EU regardless of whether those providers are based within the EU.

The Act introduces EU-wide requirements for AI systems with a sliding scale of rules based on risk (unacceptable-risk, high-risk, and limited-risk AI systems). It also introduces specific powers to impose maximum penalties of:

  • Up to €35 million or 7% of worldwide annual turnover for non-compliance with the data requirements;
  • Up to €15 million or 3% of worldwide annual turnover for breaches of other requirements or obligations, including the rules on general-purpose AI models; and
  • Up to €7.5 million or 1% of worldwide annual turnover for supplying incorrect, incomplete, or misleading information.
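
To make these ceilings concrete, here is a minimal Python sketch of the cap calculation, assuming the Act’s “whichever is higher” rule for undertakings; the tier names are shorthand for the list above, and the example turnover figure is invented for illustration.

```python
# Illustrative sketch of the AI Act's maximum penalty ceilings.
# Tier names and figures mirror the list above; the "whichever is
# higher" rule for undertakings is an assumption stated in the text.

TIERS = {
    "data_requirements": (35_000_000, 0.07),      # up to EUR 35m or 7% of turnover
    "other_obligations": (15_000_000, 0.03),      # up to EUR 15m or 3% of turnover
    "misleading_information": (7_500_000, 0.01),  # up to EUR 7.5m or 1% of turnover
}

def max_penalty(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the penalty ceiling for a given tier and annual turnover."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)

# Example: a provider with EUR 1bn turnover breaching the data requirements
# faces a ceiling of max(EUR 35m, 7% of EUR 1bn) = EUR 70m.
print(max_penalty("data_requirements", 1_000_000_000))  # 70000000.0
```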

The AI Act, when it becomes applicable, will intersect with the regulatory regime governing medical devices. Software intended by a manufacturer to be used, alone or in combination, for a medical purpose such as diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease is classified as a software medical device under the EU (and, similarly, the UK) regulatory system.

There are two proposed EU law instruments to provide redress for harms caused by AI systems:

  • An updated strict liability regime, based on the existing EU product liability directive with its scope expanded to cover AI systems; and
  • An AI liability directive supplementing the strict product liability regime, representing a more targeted harmonisation of national civil liability rules for AI.

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by several core principles: safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

  • UK regulators will implement the framework in their sectors or domains by applying existing laws and issuing supplementary regulatory guidance. The framework will not be codified into law at present, particularly given the parliamentary election likely in 2024, but the Government anticipates the need for targeted legislative interventions in the future.
  • These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex general-purpose AI and the economic operators or key players involved in its development. 

AI/ML technological advances have challenged, and will continue to challenge, regulatory systems around the world. As the technologies intertwine with the data analytics lifecycle, borderline product classification can present a practical challenge, since not all data analytics can properly be classified as medical devices. Functionality, safety, and intended purpose appear to be the relevant considerations in borderline product classification.

Novel regulatory compliance questions may arise from limited generalisability, continuous learning, and lack of transparency, each of which stems from the ability of these technologies to operate and develop with limited or no human input.

The highly iterative, autonomous, and adaptive nature of these tools requires a new total product life cycle regulatory approach that checks for improvements as well as performance deterioration. This may call for a continuous assurance protocol whereby the product undergoes continual, or frequent periodic, monitoring and review, as sketched below.
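
For illustration only, continuous assurance might look like periodically comparing a deployed model’s performance on fresh data against a pre-specified baseline and flagging any drift; the AUC metric, baseline, and tolerance in this Python sketch are hypothetical, not drawn from any regulator’s guidance.

```python
# Purely illustrative periodic review of a deployed model's performance;
# the metric, baseline, and tolerance are hypothetical values, not
# figures from any regulator's guidance.

def review_performance(baseline_auc: float, current_auc: float,
                       tolerance: float = 0.02) -> str:
    """Classify the latest monitoring result against a pre-specified baseline."""
    delta = current_auc - baseline_auc
    if delta < -tolerance:
        return "DETERIORATION: investigate and take corrective action"
    if delta > tolerance:
        return "IMPROVEMENT: document and assess under the change control plan"
    return "STABLE: continue routine monitoring"

# Example: a quarterly review of a diagnostic model's AUC on fresh data.
print(review_performance(baseline_auc=0.91, current_auc=0.87))
# -> "DETERIORATION: investigate and take corrective action"
```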

To this end, a plan or protocol ought to be put in place describing what modifications will be made to the system and how such modifications are to be assessed. This is, in essence, what the policy guidance developed by FDA and its international partners, the UK MHRA and Health Canada, terms a “Predetermined Change Control Plan” (PCCP), which consists of three components: (a) a description of modifications, (b) a modification protocol, and (c) an impact assessment.
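
For illustration only, the three components might be captured in a structure like the following Python sketch; the field names and examples are hypothetical and are not taken from the FDA/MHRA/Health Canada guidance.

```python
# Hypothetical sketch of the three PCCP components named above; the field
# names are illustrative, not taken from the FDA/MHRA/Health Canada guidance.
from dataclasses import dataclass

@dataclass
class PCCP:
    # (a) Description of modifications: what the manufacturer plans to change
    # (e.g. periodic retraining, expansion of the input population).
    planned_modifications: list[str]
    # (b) Modification protocol: how each change will be developed, verified,
    # and validated before deployment.
    modification_protocol: dict[str, str]
    # (c) Impact assessment: the benefits and risks of each change and how
    # residual risks will be mitigated.
    impact_assessment: dict[str, str]

plan = PCCP(
    planned_modifications=["Retrain the model quarterly on newly collected scans"],
    modification_protocol={"validation": "held-out test set with a pre-specified AUC threshold"},
    impact_assessment={"drift risk": "monitor subgroup performance after each update"},
)
print(plan.planned_modifications[0])
```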

Given the complexity and the cross-border nature of research, innovation and standardisation of the AI/ML landscape, international cooperation – in developing internationally-aligned good regulatory practices to guide risk classification and regulatory requirements – can improve cost-efficiency for developers seeking to innovate for the global market. 

Such international regulatory cooperation is underway amongst regulatory bodies in the key geographical regions of North and South America, Asia Pacific, and Europe, as well as the World Health Organization, through the International Medical Device Regulators Forum.

In 2023, the FDA, UK MHRA, and Health Canada jointly issued a “guiding principles” document on Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices. 
