The Biden Administration’s Swan Song on Digital Health: Two FDA Guidances on Artificial Intelligence and FDA’s Defense of its Clinical Decision Support Guidance

Alert
January 31, 2025

In the waning days of the Biden administration, the FDA released a flood of new guidance documents and other agency actions, including several important items related to digital health. On January 7, FDA published two new draft guidance documents addressing artificial intelligence (“AI”). One is a long-awaited draft guidance addressing marketing submission and lifecycle management considerations for AI-enabled medical devices, titled “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations” (the “Device AI Draft Guidance”). The other is a draft guidance addressing the use of AI tools in the drug product lifecycle, titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products” (the “Drug AI Draft Guidance”). Although they address different topics, both documents focus on the quality of datasets used to train and validate AI tools, the need to address algorithmic limitations and biases, lifecycle maintenance to ensure continued reliability, and the importance of engaging with FDA. These key concepts are familiar from prior FDA actions addressing AI, which our Ropes & Gray team has discussed in previous podcasts (U.S. Life Sciences Regulatory and Compliance Outlook 2024 (Part IV): Digital Health and Non-binding Guidance: FDA Regulatory Developments in AI and Machine Learning) and recently in an Alert on FDA’s final guidance on Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions.

Separate from the new draft guidance documents, the agency also issued two citizen petition responses on January 17 (Docket FDA-2023-P-2808 and Docket FDA-2023-P-0422), rejecting challenges to FDA’s 2022 final guidance on Clinical Decision Support Software (the “CDS Final Guidance”). We previously discussed the CDS Final Guidance, including some of its more controversial elements, in an Alert and podcast. In the petition responses, FDA defends its CDS Final Guidance against challenges asserting, among other things, that the guidance exceeds the agency’s statutory authority, fails to follow proper processes, stifles innovation, and violates the First Amendment. These responses appear to be FDA’s attempt, in the final days of the Biden administration, to defend its more restrictive interpretation of the statutory exemption for CDS software. It remains to be seen, however, whether the new administration will take a different approach.

This Alert summarizes the key points and takeaways from the two new AI-focused draft guidance documents and FDA’s citizen petition responses defending the CDS Final Guidance.

Device AI Draft Guidance

The Device AI Draft Guidance provides FDA’s recommendations for lifecycle management and marketing submissions for AI-enabled device software functions. It provides both specific recommendations on the information and documentation that should be included in a marketing submission and recommendations for the design, development, deployment, and maintenance of AI-enabled devices, including performance management, consistent with a total product lifecycle (“TPLC”) approach to AI-enabled devices. The draft guidance also suggests strategies to address transparency and bias throughout the TPLC and encourages early engagement with FDA to guide product development and submission preparation. Some key aspects of the draft guidance include:

  • Marketing Submission Content Recommendations. The Device AI Draft Guidance provides detailed recommendations for what documentation and information should be included in a marketing submission for AI-enabled devices, including why FDA considers the information important and where in the submission it should be provided. The draft guidance describes AI-specific submission considerations in the following areas: (1) device description; (2) user interface and labeling; (3) risk assessment; (4) data management; (5) model description and development; (6) validation; (7) performance monitoring; and (8) cybersecurity. FDA clarifies that the guidance is meant to supplement recommendations in other applicable guidance documents. For example, a sponsor should consult FDA’s guidance on Content of Premarket Submissions for Device Software Functions and refer to FDA’s other digital health resources, including FDA-recognized consensus standards and FDA webpages and workshops dedicated to AI.
  • Transparency. Consistent with prior FDA statements addressing AI-enabled devices, the Device AI Draft Guidance emphasizes the importance of transparency for AI-enabled devices, which are heavily data driven and incorporate algorithms exhibiting at least some degree of opacity. The draft guidance recommends that sponsors consider transparency throughout the TPLC, starting early in the design phase. FDA provides detailed recommendations, within the body of the guidance and in Appendix B (Transparency Design Considerations), on the information sponsors should provide and where it should be provided, including in the user interface, labeling, and the public submission summary for a marketing application. FDA suggests a “model card” as one possible format for delivering this information and provides examples in Appendix E to the draft guidance.
  • AI Bias. AI bias refers to the potential tendency of a device to produce incorrect results in a systematic, but sometimes unforeseeable, way, which can impact the safety and effectiveness of the device within all or a subset of the intended use population (e.g., different healthcare settings, different input devices, sex, age). The Device AI Draft Guidance emphasizes the need for a comprehensive approach to address the potential for bias, which should include addressing representativeness in data collection for development and testing, evaluating performance across subgroups, and monitoring performance throughout the product lifecycle.
  • Data Management. The Device AI Draft Guidance underscores that the training and tuning of AI-enabled devices, as well as the accuracy and usefulness of their performance validation, depend heavily on the amount, quality, and diversity of data used to train and test the AI model. To help FDA evaluate expected device performance and the soundness of device validation, sponsors should describe the training and testing data used in device development in detail in their marketing submissions. This description should include, for example, how the data were collected; the size, limitations, quality assurance processes, and mechanisms used to improve diversity of the dataset; any steps used to clean or process the data; how data are stored; and how data are representative of the intended use population, including whether data collected outside the U.S. are used in device validation. The description should also address the steps taken to ensure that training and testing datasets are independent of one another, with data used for testing sequestered from the development process and generally coming from different clinical sites, to ensure robust external validation of the AI performance.
  • Validation and Usability Recommendations. The Device AI Draft Guidance provides detailed recommendations for AI-enabled device performance validation, focusing on those aspects that are uniquely important in the AI context. For example, FDA emphasizes the importance of subgroup analyses as part of performance validation for AI-based technologies because their reliance on relationships learned from large amounts of data, and the relative opacity of models to users, make them particularly susceptible to unexpected differences in performance across subgroups. Likewise, FDA highlights human factors and usability testing as part of AI-enabled device validation, with the type of assessment that may be needed varying based on the role human users play in the “human-AI team.” Whereas, for some devices, more emphasis may be placed on the AI model’s standalone performance, for others, such as image analysis software intended to aid the physician’s read, the focus may be on assessing the performance of the human-AI team (e.g., did the intended user working with the new device perform the same as, or better than, the operator alone or with another device?).
  • Performance Monitoring. In keeping with the TPLC approach, the Device AI Draft Guidance provides recommendations relevant to the development and post-market management stages of a device’s life cycle, including with respect to post-market performance monitoring and cybersecurity. As FDA has previously described, a key risk with AI-enabled devices is that their performance in a real-world environment may change or degrade over time due to a variety of factors, such as changes in patient populations over time, disease patterns, or data drift. As such, a post-market performance monitoring plan may serve as an important risk control for AI-enabled medical devices. Sponsors of AI-enabled devices that elect to employ proactive performance monitoring as a means of risk control should include information on the performance monitoring plan as part of the premarket submission.

Drug AI Draft Guidance

Jointly authored by seven offices within FDA, the Drug AI Draft Guidance articulates the agency’s recommendations on the use of AI to produce information or data intended to support regulatory decision-making regarding the safety, effectiveness, or quality of drugs. Specifically, the draft guidance lays out a risk-based credibility assessment framework for sponsors to evaluate and establish the credibility of an AI model for a particular context of use (“COU”) in the drug product lifecycle. The draft guidance covers a broad range of AI use cases in the drug product lifecycle, including use of AI-enabled tools in nonclinical and clinical development, manufacturing, and pharmacovigilance. The guidance does not apply to AI tools used for drug discovery or for operational efficiencies that do not impact patient safety, drug quality, or the reliability of results from a nonclinical or clinical study. Some key aspects of the draft guidance include:

  • Increasing Prevalence of AI Tools in the Drug Product Lifecycle. As FDA has previously described, the Drug AI Draft Guidance acknowledges the rapidly expanding role of AI-enabled tools across all stages of the drug product lifecycle. Examples of AI use cases provided in the guidance include predictive modeling for clinical pharmacokinetics and/or exposure-response analyses; analyses of real-world datasets for the development of clinical trial endpoints or assessment of outcomes; identifying, evaluating, and processing post-marketing adverse drug experience information; and facilitating the selection of manufacturing conditions.
  • Risk-Based Credibility Assessment Framework. The Drug AI Draft Guidance presents a seven-step, risk-based framework to establish and document the credibility of an AI tool for a particular COU. The framework defines risk based on two factors: (1) model influence (i.e., the importance of the AI-derived evidence relative to other evidence being considered), and (2) decision consequence (i.e., the impact of an incorrect decision on an adverse outcome). In consideration of the risk and COU of the particular AI tool, the draft guidance provides detailed recommendations for developing and executing a credibility assessment plan. As in the Device AI Draft Guidance, FDA’s recommendations for AI used in the drug product lifecycle address, for example, data quality and management (including separation of training and test data), mitigation of AI bias, and performance assessment of the AI model. FDA strongly encourages sponsors planning to use AI-enabled tools covered by the guidance to engage early with the agency, emphasizing that the risk-based credibility assessment framework envisions interactive feedback from FDA concerning both the assessment of the AI model risk and the adequacy of the credibility assessment plan.
  • Challenges with AI-Enabled Tools. While acknowledging the potential of AI to accelerate the development of safe and effective drugs, FDA also underscores that AI tools can present unique challenges that sponsors must address. These challenges include susceptibility to bias, model opacity, and potential degradation in the tool’s credibility over time. Like the Device AI Draft Guidance, the Drug AI Draft Guidance includes a variety of recommendations aimed at identifying and addressing these challenges. Given the breadth of use cases covered by the Drug AI Draft Guidance, however, it is important to keep in mind that not all recommendations will be applicable to all AI-enabled tools. For example, FDA provides lifecycle maintenance recommendations for continued credibility of AI outputs, which it suggests would generally not be applicable for AI models that produce locked data or information at a static timepoint but would be crucial for AI models used throughout the product’s lifecycle (e.g., in the case of pharmaceutical manufacturing).

CDS Citizen Petition Responses

On January 17, 2025, FDA issued two responses to citizen petitions that challenged its CDS Final Guidance. The CDS Final Guidance provides FDA’s interpretation of the four statutory criteria that exempt certain CDS software from the definition of a device. The guidance was a significant departure from the previous draft iterations, and stakeholders have raised a variety of concerns with FDA’s approach since it was issued in 2022, including through the citizen petition process.

One of the January 2025 citizen petition responses addressed a February 2023 citizen petition submitted on behalf of the CDS Coalition, which requested that FDA rescind and repropose its CDS Final Guidance, as well as an earlier petition submitted in 2016 by the same group. FDA found the 2016 petition moot and denied the 2023 petition. The February 2023 citizen petition argued, among other things, that the CDS Final Guidance construes the statutory exemption for certain CDS software more narrowly than Congress intended and impedes innovation. FDA defends its interpretation as consistent with the plain language of the statute and as giving meaning to each of the words Congress chose. In particular, in defending its controversial interpretation of the third statutory criterion to limit the CDS exemption to software that “does not provide a specific preventive, diagnostic, or treatment output or directive,” FDA points to the statute’s use of the plural “recommendations” to describe the intended use of non-device CDS. FDA further rejects the assertion in the petition that the CDS Final Guidance impedes innovation, arguing that “innovation is meaningless if there is no assurance that ‘innovative’ devices are appropriately safe and effective.”

The other response denied a July 2023 citizen petition that requested a “prompt, full investigation” of First Amendment concerns, asserting, among other things, that the CDS Final Guidance imposes content-based restrictions on the professional speech of physicians in violation of the First Amendment. In its response, FDA disagreed on various grounds that the informational outputs of CDS software are speech entitled to First Amendment protection, and asserted that, even if the informational outputs of CDS software were to constitute speech, the CDS Final Guidance would pass muster under First Amendment scrutiny.

It remains to be seen how the legal framework for CDS regulation will change, if at all, during the Trump administration. In the first days of his second term, President Trump issued an executive order revoking the Biden administration’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the initiatives of which have, for the most part, already been achieved or implemented) and issued an Executive Order titled “Removing Barriers to American Leadership in Artificial Intelligence.”

Conclusion

The two new AI-focused draft guidance documents represent the latest step FDA has taken to develop its regulatory approach to AI-enabled tools, which the agency and stakeholders have long recognized present unique regulatory challenges as well as extraordinary potential to benefit the public health. The new draft guidance provides some helpful clarity for sponsors developing AI-enabled medical devices or incorporating AI tools into their drug development programs, and sponsors should carefully consider the agency’s recommendations in their planning process. Comments to the Drug AI Draft Guidance and the Device AI Draft Guidance are due by April 7, 2025.

For those companies developing CDS tools, many of which are AI-enabled, FDA continues to stand behind its controversially restrictive CDS Final Guidance. However, it remains to be seen whether the agency may begin to enforce regulatory requirements more actively for CDS tools that do not adhere to all recommendations in the CDS Final Guidance or whether there will be changes to FDA’s approach to CDS tools under the Trump administration. The new Trump administration may, in an effort to foster digital health tool development, direct federal agencies to adopt a more hands-off regulatory approach.

Our team will continue to monitor developments in this area. If you have any questions regarding this Alert, please contact any members of Ropes & Gray’s FDA Regulatory practice or your usual Ropes & Gray advisor.