R&G Tech Studio: AI Innovation vs. Safety—Insights on a New Executive Order

Podcast
May 7, 2025
14:18 minutes

On this episode of the R&G Tech Studio podcast, Ropes & Gray partners and co-leaders of the firm’s AI initiative, Megan Baca and Ed McNicholas, delve into the key implications of President Trump’s new AI Executive Order 14179, contrasting it with the previous Biden administration’s approach to AI regulation. They explore the nuances of AI innovation versus AI safety, the potential conflicts between federal and state regulations, and the global landscape of AI governance. Tune in for an insightful conversation on how companies can navigate the evolving regulatory environment while balancing innovation and compliance.


Transcript:

Megan Baca: Hello, and welcome to the latest episode of the R&G Tech Studio podcast. I’m Megan Baca, co-head of the IP transactions and licensing team, as well as the digital health and AI initiatives at Ropes & Gray. Today, I’m thrilled to be joined by my colleague and co-lead of the AI initiative, as well as head of Ropes & Gray’s global data, privacy & cybersecurity practice, Ed McNicholas. Ed will be discussing the key implications of President Trump’s new AI Executive Order. Before we delve into the details of the Order, Ed, why don’t you go ahead and introduce yourself and share a bit about your practice?

Ed McNicholas: Wonderful. Great to be here, Megan. The Ropes & Gray data, privacy & cybersecurity practice is focused on the law of data as it’s evolving, and obviously, the law of AI is going to be a crucial piece of that. We live in a fantastic age where AI is becoming agentic, and quantum computing is right around the corner. This is an amazing time, and the law is going to keep up a little more slowly than the technology. That’s going to be a very interesting place to be, and Ropes is a great place to be guiding clients on this wonderful journey.

Megan Baca: Terrific. So, let’s dive right in. First, how does the Trump administration’s view of AI regulation differ from the previous Biden administration? Let’s start there.

Ed McNicholas: There are certainly differences. Recent Executive Orders, in general, have directed federal agencies to prioritize AI innovation and national security, including rescinding previous Orders and replacing them with directives that emphasize national security concerns and the elimination of perceived barriers to technological advancement. The Trump administration’s recent pronouncements have tended to decry what they call “ideological bias” or “engineered social agendas,” which they say are antithetical to continued American leadership in AI. And so, we see the Executive Order from President Trump—Executive Order 14179—replacing and repealing President Biden’s Executive Order 14110 on AI safety. You can hear that already—it’s AI innovation versus AI safety. It reflects a theme that Vice President Vance put forward in a speech at the Paris AI summit in February, when he said that “we feel very strongly that AI must remain free from” what he called “ideological bias.” The Trump administration seems focused on making sure that regulation of non-discrimination, AI safety, and AI transparency is not used to put American AI development at a disadvantage in the global marketplace. I think the DeepSeek launch reinforced these concerns and was a wake-up call to American industry about these issues.

Megan Baca: It sounds like the new Executive Order 14179 is really indicative of this bifurcation between the safety-focused approach of the Biden administration and the innovation-focused approach that the Trump administration prefers. So, let’s dig into that one layer deeper: what are the contours of each of those approaches?

Ed McNicholas: Let’s go back to the Biden Order, which was not that old—it died young. Executive Order 14110 was the AI safety approach. President Biden required Executive Branch adherence to eight guiding principles and priorities, including safety and security; promoting innovation and competition—that element was there as well—responsible development; advancing equity and civil rights; consumer protection; privacy and civil liberties; risk management; and American AI leadership. You can see that the elements the Trump administration is championing were already present in the Biden Order. Under the Biden Order, these elements were meant to be operationalized through agency-level guidelines focused on safe, secure, and trustworthy AI, the idea being to distribute these principles across the various agencies.

Now, the Trump administration took elements of this and focused much more on the goal of solidifying American AI leadership. The new Order is much shorter than President Biden’s, and it basically has only two functions—it has very little substance on its own. The first requires agency heads to submit an action plan to sustain and enhance America’s global AI position. The second directs all agencies to review, and where appropriate suspend or revoke, any actions inconsistent with American AI dominance. And so, really, the only two sections are 1) the development of an action plan by the agency heads, and 2) a review by all agencies, including the non-cabinet agencies, looking for anything that is holding America back from being dominant in AI. It’s a competition-focused approach, and we’re seeing it show up in a lot of agency rhetoric. For instance, we’ve seen an FTC Commissioner suggest that DeepSeek may have violated unfair competition rules in building its data sets. So, we see an approach of taking this very general Order and trying to replicate it across the various agencies.

Megan Baca: That makes sense. Should companies expect direct conflict in the future between competing sets of rules and laws—for example, between federal and state law in the U.S., or between U.S. and international rules? Where is all of this heading?

Ed McNicholas: We’re seeing a global competition in AI develop, both on the technology side and in the regulatory space. As the Trump administration has tried to put aside some of the AI safety concerns in favor of AI innovation, we’re seeing other jurisdictions strike that balance differently. For instance, starting in February 2026 under Colorado law, developers and deployers of high-risk AI systems will have a duty under state law to use reasonable care to protect consumers from the “foreseeable risk of algorithmic discrimination,” defined as “differential treatment or impact that disfavors an individual or group” based upon protected categories. Virginia recently followed suit with a similar set of rules, and Maryland takes a similar approach, requiring impact assessments for all high-risk AI systems procured or deployed by state governments starting July 1, 2025—so not that far off. We’re going to see this focus on differential treatment or impact at the state level even as the federal rules go a different way. Now, the interesting thing is that President Trump’s Executive Order is not a statute—it’s an Executive Order directed to the agency heads. While the agencies may develop their plans, those would still just be plans for the Executive Branch. Executive Orders tend not to preempt state law the way a federal statute would preempt a state statute. And so, we’re going to see state laws that focus on transparency, differential impact, and consumer protection—basically the same things that President Biden wanted—continue along.

We hear those issues echoed in the global competition. Other jurisdictions, such as the EU, have launched regulation in this area. There’s the EU AI Act, and one of its stated main goals is to reduce bias and safety risk in high-risk AI and in training data sets. To that end, the Act relaxes existing rules surrounding the processing of sensitive data where doing so is strictly necessary for bias detection and correction. Likewise, there’s existing NIST guidance in the U.S., which is non-binding but promotes safety, highlighting the potential for AI to be used in offensive cyber attacks—which it certainly is—to generate harmful content, and even to develop weapons systems. All of these—the statutes at the state level, the EU AI Act, and the NIST guidance—are focused on safety rather than innovation, or I should say, they try to balance safety and innovation in a way that the Trump Executive Order does not.

Megan Baca: It does sound complicated. Going forward, what practical steps can companies take to facilitate their compliance with both these innovation-focused and safety-focused AI regulations that, frankly, are constantly changing?

Ed McNicholas: The Trump administration’s position here does present something of a quandary for a global private enterprise. If you’re trying to develop AI that will work in London and New York and Tokyo, the fact that the Trump administration is focused on U.S. dominance can be an issue. We’re going to have to come up with ways for companies to comply with existing state and international safety and non-discrimination rules, while still producing products that will be attractive in the federal marketplace and, hopefully, will not incur the anger of federal regulators. I think there are really practical ways to do this. You’re going to have to follow the different laws as they develop and look at some of the details. One focus is going to be on which rules are actually binding. There are a lot of policy speeches being given and a lot of guidance being issued, but there are fewer binding statutes. Obviously, noncompliance with binding rules carries much more risk than noncompliance with a policy preference. So, performing and documenting a risk assessment—mapping where your AI systems will be deployed and where you’re going to do business with those particular systems—can be very helpful. Look, for example, at California’s data set transparency rules: they apply only to entities that produce generative AI systems made available to Californians. The EU AI Act, by contrast, applies broadly to entities that make AI systems available in the EU, are based in the EU, or produce, develop, or import models whose output is used in the European Union. And so, it’ll be important to focus on which markets you’re in and which laws flow from them.

Now, taking it to the next level means recognizing that prioritizing AI innovation does not necessarily mean a trade-off with existing safety, transparency, or non-discrimination imperatives. You can be extremely innovative while being safe, transparent, and non-discriminatory, and there are practical and technical steps you can take to hit both goals. For instance, you can use synthetic training data, which may reduce data collection costs and the risk of data protection noncompliance while mathematically mimicking the effectiveness of real data—that could align innovation goals with safety goals. Another approach is to ensure that decisions made with the help of AI ultimately have an informed human in the loop. This may slow things down slightly, but it would significantly mitigate safety concerns while still allowing the AI systems to do what they do best: parsing large quantities of information for patterns and statistical predictions that would otherwise require manual review.

Overall, though, it is going to be important to constantly monitor legislative developments. President Trump’s Order asked each agency to develop its plan. These plans may lead to congressional proposals, they may lead to agency regulations, and they certainly can lead to restrictions on the type of AI that the government itself is allowed to purchase. People should expect continued evolution in this area, and, unfortunately, continued conflict as differing approaches spin out, and they can rely upon continued coverage and analysis of these issues on our blogs and podcasts.

Megan Baca: That’s fantastic. There’s so much to consider and so many layers to keep track of, so we’ll continue to monitor it as well. Ed, thank you for joining us today. It’s been a really insightful conversation and you’ve given me and our listeners a lot to think about. And thank you to our listeners. This has been the R&G Tech Studio podcast. It’s available on the Ropes & Gray website, on the R&G Tech Studio podcast page, and wherever you get your podcasts. Thank you, everyone, for listening.
