Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

November 13, 2023

On October 30, 2023, President Biden issued an executive order (“EO”) on the safe, secure, and trustworthy development and deployment of artificial intelligence (“AI”) that has the potential to set far-reaching standards governing the use and development of AI across industries. Although the EO does not directly regulate private industry, apart from certain large-scale models or computing clusters deemed to potentially impact national security (discussed below), it requires federal agencies including the Departments of Commerce (principally through the National Institute of Standards and Technology (“NIST”)), Energy, and Homeland Security, among others, to issue standards and guidance and to use their existing authorities, including regulatory authorities, to police the use of AI in ways that will impact business for years to come. In addition, it devotes federal resources toward AI-related education, training and research, including the further development of privacy enhancing technologies (“PETs”) such as differential privacy and synthetic data generation.

Even though the required agency guidelines will in many cases not directly apply to private industry, they are still likely to have significant impact through their incorporation into federal contracts. Similarly, standards set by NIST, like the NIST Cybersecurity Framework, have had a substantial industry impact through voluntary adoption – which has set industry expectations – and it would not be surprising to see a similar effect here.

The EO follows on the heels of the administration’s Blueprint for an AI Bill of Rights, issued in October 2022 (the “Blueprint”), and the administration’s meetings with leading AI and technology companies earlier this year. Like the Blueprint, the EO sets forth a number of guiding principles to ensure a safe, reliable and unified approach to AI governance: ensuring safety and security; promoting responsible innovation, competition, and collaboration; supporting the rights of American workers; advancing equity and civil rights; protecting the interests of American citizens; protecting privacy and civil liberties; promoting government efficiency; and advancing American leadership abroad.

Safety and Security: The EO requires NIST to issue new guidelines and standards for the testing and development of AI that are likely to shape business processes going forward. It also imposes rules under the Defense Production Act on “dual-use foundation models” that have the potential to pose a serious risk to national security, national economic security, or national public health or safety.

Testing the Safety and Efficacy of AI: NIST, in collaboration with the Departments of Energy and Homeland Security, is tasked with creating guidelines, best practices, and standards to ensure the safe and secure development of AI technologies. These efforts include developing standards and guidelines around managing AI risks, incorporating secure practices for generative AI, and developing testing environments for AI safety. NIST is specifically instructed to issue a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI. The required guidelines will also establish standards for red-team testing of the safety and efficacy of AI systems to promote trustworthy AI deployment. Additionally, the Secretary of Energy is instructed to develop tools and testbeds for assessing AI systems’ capabilities, focusing on preventing security threats across various domains like nuclear, biological, and critical infrastructure.

Identification and Labeling of Synthetic Content: The EO calls for strengthening the integrity and traceability of digital content amidst the rise of AI-generated synthetic media. The Secretary of Commerce, with input from other agencies, is tasked to report on and then develop guidance for standards and techniques for authenticating and tracking digital content’s provenance, labeling synthetic media, and detecting AI-generated content. This includes preventing the misuse of AI in creating harmful materials. Subsequently, the Office of Management and Budget (“OMB”), in consultation with various department heads, will issue directives for labeling and verifying government-produced digital content to bolster public trust. The Federal Acquisition Regulatory Council is instructed to consider revising acquisition regulations in line with these new guidelines, ensuring that government procurement aligns with the practices of digital content authentication and synthetic content management.

Defense Production Act: The EO issues new requirements directly applicable to businesses developing, or demonstrating an intent to develop, so-called “dual-use foundation models” that have the potential to impact national security, including national economic security and public health, or that are in possession of so-called “large-scale computing clusters.” In doing so, the EO relies on the Defense Production Act, as amended, 50 U.S.C. § 4501 et seq., which provides the President with authority to influence industry in the interest of national defense. The Secretary of Commerce, in consultation with the Secretaries of State, Defense and Energy and the Director of National Intelligence, is required to define the set of technical conditions for the models and computing clusters that would make them subject to these requirements. Until then, the terms are defined with reference to a degree of computing capacity that is unlikely to impact most businesses outside of some of the largest cloud computing vendors or AI-model developers. However, because the requirements will apply during the development and acquisition stage, businesses should be mindful of whether their activities could meet these thresholds.

Businesses developing “dual-use foundation models” are required to make reports regarding their planned activities related to development and production, including the cybersecurity protections taken to protect the integrity of the training process. They must also report on the ownership and protection of model weights and the results of relevant AI red-team testing. Companies acquiring large-scale computing clusters are required to report these acquisitions, the location of the clusters and amount of total computing power.

The EO will also add a “Know Your Customer” requirement applicable to certain Infrastructure as a Service (“IaaS”) Providers. The Secretary of Commerce is required to make regulatory proposals to ensure that IaaS Providers report when foreign entities use their services. Moreover, these regulations will demand verification of foreign persons’ identities by resellers of U.S. IaaS Products and ensure compliance with cybersecurity best practices to mitigate the misuse of American IaaS Products by foreign malicious cyber actors.

The Secretary of Commerce is further mandated to engage with various sectors via a public consultation within 270 days to evaluate the implications of making dual-use AI foundation models with accessible weights widely available, focusing on the potential security risks, such as the disabling of built-in safeguards, alongside the benefits to AI innovation. The input will guide a comprehensive report to the President, assessing the balance of risks and benefits, and shaping policy and regulatory recommendations for managing dual-use AI models whose weights are broadly distributed.

Equity and Civil Rights: While not establishing specific new rules, the EO instructs the Attorney General, in cooperation with other agencies, to use existing federal laws and authorities to address civil rights and discrimination related to the use of AI. The EO specifically calls for the Civil Rights Division to convene within 90 days to discuss the comprehensive use of agency authorities to address discrimination in the use of automated systems, including in particular algorithmic discrimination. The Biden administration has paid particular attention to the risks around AI-influenced discrimination, so this should not come as a surprise. In April, for example, the Consumer Financial Protection Bureau (“CFPB”), Federal Trade Commission (“FTC”), Equal Employment Opportunity Commission (“EEOC”) and other federal agencies issued a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. The EO reinforces the focus that agencies are likely to place on these issues, heightening the risk of enforcement. In addition, the EO gives specific instructions around the use of AI in areas like criminal justice and government benefits.

Privacy: Following on the Blueprint’s concerns around the potential for AI to exacerbate privacy risks, the EO includes measures to enhance privacy protections. The Director of the OMB is tasked with multiple initiatives aimed at addressing privacy risks. These include evaluating and potentially revising the processes for how agencies handle commercially available information that contains personal data, which may be directly purchased or procured through third-party vendors. There is a specific call for the Director of OMB to issue a Request for Information (“RFI”) to review the effectiveness of privacy impact assessments under the E-Government Act of 2002 and consider potential enhancements in light of AI’s capabilities. The Director is also instructed to take necessary steps informed by the RFI to update guidance and collaborate with other agencies and the Federal Privacy Council as required.

Notably, the EO directs further federal resources toward the development of PETs. To further support the advancement and implementation of PETs, the Director of the National Science Foundation (“NSF”), in collaboration with the Secretary of Energy, is directed to establish a Research Coordination Network (“RCN”) to foster communication and collaboration among privacy researchers, especially in the development and scaling of PETs. The NSF Director will also identify opportunities for incorporating PETs into agency operations and prioritize research that propels the adoption of PETs across agencies. Moreover, the NSF will utilize insights from the US-UK PETs Prize Challenge to guide the research and adoption strategies for PETs.

Promoting Innovation: The EO includes a number of initiatives designed to foster innovation of new AI-related technologies. These include efforts to attract talent through streamlined immigration procedures. Additionally, the EO includes a number of instructions related to intellectual property (“IP”) rights and ownership. The Under Secretary of Commerce for Intellectual Property and Director of the USPTO is instructed to provide guidance to patent examiners and applicants on issues of inventorship in the context of AI within 120 days. This guidance is expected to include examples illustrating the various roles AI may play in the inventive process and how inventorship should be determined in such cases. Further guidance will be issued within 270 days to address broader considerations at the intersection of AI and IP law, which could involve updated guidelines on patent eligibility concerning AI innovations.

Additionally, the USPTO Director is expected to consult with the Director of the United States Copyright Office and make recommendations to the President on executive actions relating to copyright issues raised by AI, subsequent to the publication of a study by the Copyright Office. To combat AI-related IP risks, the Secretary of Homeland Security, via the Director of the National Intellectual Property Rights Coordination Center and in consultation with the Attorney General, is to develop a training, analysis, and evaluation program. This program will include dedicated personnel for handling reports of AI-related IP theft, coordinating enforcement actions where appropriate, and sharing information with other agencies and stakeholders. Guidance and resources will be provided to the private sector to mitigate AI-related IP theft risks, and information will be shared to help AI developers and law enforcement identify and deal with IP law violations, as well as to develop mitigation strategies. This initiative is part of a broader update to the Intellectual Property Enforcement Coordinator Joint Strategic Plan to encompass AI-related issues.

Industry-Specific Impacts Including Health Care: The EO includes a number of initiatives focused on particular industries. For example, the Secretary of the Treasury is required to issue, within 150 days, a public report on best practices for financial institutions in managing AI-specific cybersecurity risks.

In particular, the EO focuses on risks in the health care industry, given the significant opportunities, and also risks, presented by the use of AI for diagnoses and treatment. The Secretary of Health and Human Services (“HHS”), in collaboration with other key departments, is directed to establish an AI task force within 90 days of the order. This task force is to create a strategic plan within a year, focusing on the deployment and use of AI in health care delivery, patient experience, and public health. The plan will address several critical areas, including the development and use of AI for predictive purposes, safety and performance monitoring, incorporating equity principles to prevent bias, ensuring privacy and security standards, and developing documentation for appropriate AI uses. Strategies will also be formulated to work with state and local agencies, advance positive AI use cases, and promote workplace efficiency.

Furthermore, within 180 days, HHS is to develop AI performance evaluation policies and strategies to assess AI-enabled technologies in health care for quality maintenance, including pre-market assessment and post-market oversight. HHS will also consider actions to ensure understanding of and compliance with federal nondiscrimination laws in relation to AI. Within a year, an AI safety program is to be established, providing frameworks to identify and capture clinical errors from AI use and disseminating recommendations to avoid such harms. Finally, a strategy will be developed for regulating the use of AI in drug-development processes, identifying needs for new regulations or authority, resources, and potential partnerships, and addressing risks related to AI.

Conclusion: The EO sets a number of deadlines for agencies to issue new guidance and regulations, and so we expect additional significant developments in the coming months. Already, the OMB has issued draft guidance for federal agencies following the issuance of the EO. The OMB guidance directs each federal agency to designate a Chief AI Officer (“CAIO”) and requires some agencies to develop a specific AI strategy. CAIOs are tasked with coordination around the agency’s use of AI, promoting innovation and managing risk. Similarly, the EO anticipates a number of multilateral initiatives, tasking the Secretary of State and others with leading efforts to establish strong international frameworks around AI risks.

With that said, many issues addressed in the EO may require legislative action—the fact sheet accompanying the EO notably calls on Congress to adopt federal privacy legislation, for example. While the EO will have direct impact on the federal government, in most cases it does not require specific industry actions. The Biden Administration has demonstrated a clear focus on guiding the development and secure use of AI technologies, however, and is likely to use the considerable resources of the federal government to push forward these initiatives through agency regulation and enforcement. Ropes & Gray will continue to monitor developments in this space.