In this installment of the AI Unlocked webinar series, asset management partner Melissa Bender, private equity partner Howard Glazer, and Director of Technology Innovation Sergey Polak presented exclusive findings from the November 2025 Ropes & Gray and Corporate Counsel Emerging AI Trends in Investment Management survey, revealing how investment managers are progressing from AI pilots to enterprise-wide adoption. This session explored how AI is being utilized by investment managers and private equity firms, highlighting trends, potential barriers, governance, ROI measurement, and key priorities for in-house counsel in the sectors’ evolving AI landscape. Speakers also discussed current AI merger and acquisition and deal trends and emerging AI use cases.
Transcript
Melissa C. Bender: Welcome everyone and thank you for joining us for our final installment in the AI Unlocked series, where we dive into a discussion of AI use by asset managers. My name is Melissa Bender and I'm a Partner in our Asset Management Practice based in San Francisco.
I'm a Co-Head of our Private Funds Practice and spend a significant amount of my time on technology and AI integration here at the firm. At Ropes & Gray, our Asset Management, Transactional, and other practices are committed to practical innovation, adopting and operationalizing AI in ways that enhance efficiency and deliver measurable outcomes for our clients.
Likewise, we are committed to supporting our clients as they navigate the AI landscape within their own organizations. Our previous AI Unlocked webinars covered a range of topics, including how our clients can begin to use AI, and some of the regulatory issues you need to consider when using AI.
We also hosted demos for two of the tools in our AI toolbelt, Hebbia and ProVision. We'll touch briefly on Hebbia, ProVision, and some of the other tools we use a little later in today's session. I did want to mention that all of the recordings from our prior sessions can be made available if you weren't able to attend.
Before we begin, I'd like to introduce my amazing colleagues who are joining me on today's call. Howard Glazer, Leader of our Private Equity Transactions Practice, and Managing Partner of our Los Angeles office; and Sergey Polak, Director of Technology Innovation, who sits in our Boston office.
Earlier this fall, Ropes & Gray collaborated with Corporate Counsel on a strategic study on AI adoption and governance among investment managers. The goal of the survey was to understand how investment managers are adopting and using AI in their businesses.
I know that many of you watching today participated in that survey. And thank you very much for contributing your insights. We really appreciate it. Our discussion today will center on the findings from the survey. And with that, I'll hand things over to Howard.
Howard S. Glazer: Thank you, Melissa. Let's dive right in to see what the collective insights were from our collaboration with Corporate Counsel. There were five key themes coming out of the survey: AI adoption, motivations for implementation, barriers to expansion, AI governance, and measuring success.
You know, once we do it, how do we figure out whether we've succeeded or not? These really reflect the real-time challenges that our clients are facing. Melissa, Sergey, and I and many, many of our colleagues across Ropes & Gray are out talking to clients every day about artificial intelligence.
And what we've found in the survey is consistent with what we hear, you know, around tables, again, on a daily basis. Let me start off by going down the road of AI adoption, the first of the five key themes. Within the investment management sector, and more broadly within the range of industries that we're talking about, a few years ago AI adoption was essentially zero.
People were afraid of it. What is it? What might it be? Obviously over the last few years, we've gone from, "Should we even be thinking about using AI," to where we are today, which is how to use it more effectively and efficiently. More than half of asset managers are now actively using AI in their business.
And this trend has really been driven mostly by individuals. If Melissa goes home and plans her vacations in ChatGPT, she thinks, "You know something? I may be able to use this at work." And that's how this has kind of developed historically.
What we're seeing now is billions of dollars being invested into technology companies, including legal and investment management technology companies. The vendors are coming to us now and saying, "Hey, we have tools for you." So the next stage of our progression is going to be the tools coming to us and seeing how we can use them.
The pace of adoption is just going to increase. You know, we said in the survey it's expected to increase. We all know it's going to increase. And it's going to really transform the way all of us do our work. Law firms, investment managers, everyone. With that, why don't I hand it over to Sergey to talk about AI governance?
Sergey Polak: Thank you, Howard. So effective governance is essential to using these AI tools safely and effectively. Because there's a high level of regulatory scrutiny in the asset management sector, it's not surprising to see that nearly three quarters of respondents have a policy in place and a process in place, or they're developing one.
Many organizations have established committees or dedicated teams responsible for this process. And these groups are responsible for evaluating the tools. They're responsible for monitoring the use of these tools, making sure that you adhere to regulatory requirements, and that people are using these things in compliance with policies.
We find that compliance functions play a central role in all of this for many of our clients. Regulations are evolving rapidly, and the tools are evolving even more rapidly than that. And so it is absolutely essential to keep your policies up to date.
And you need to make sure that the policies are addressing all the key areas, things like privacy, things like data protection, things like ethical use of these tools. We counsel our clients quite a bit about using sensitive data in these tools.
And we help them come up with strategies to mitigate the risks of doing so. We also partner with many of our clients to help them build robust governance frameworks. That means being able to balance innovation, which is a key driver for all of this, with risk management.
We help clients develop clear policies, making sure that there are approval processes. We support ongoing training initiatives, and we also help by monitoring what's going on out in the regulatory world and then advising our clients around policy updates that might be necessary as a result.
I just want to be very clear: effective governance is not one-size-fits-all. What works for one organization is certainly not going to work for everybody. And what we have seen in organizations that are successful is that you have to make sure that you're collaborating with other stakeholders in your business.
So that means collaborating with primary business users, including legal, including compliance functions and, yes, including information technology so that you have the right process in place, you have the right tools, and you're using them most effectively. Let me turn things over to Melissa to talk a little bit about primary motivations for adopting AI.
Melissa C. Bender: Great. Thanks, Sergey. And just, you know, an observation on some of Sergey's thoughts on governance, right? I think one thing that we've been seeing increasingly is that our clients have been essentially looking to outside counsel as AI counsel, right, because in many cases these governance issues touch many different sort of regulatory areas, right, and many different functions within the organization.
So that's sort of been an interesting development that we've seen over the past couple of years. Okay. Motivations for implementing AI. So enhanced operational efficiency is the clear winner here, followed by strengthening compliance and risk management functions, along with keeping up with the Joneses and staying competitive with one's peers.
I have to say that I was a little bit surprised because I was expecting that reducing overhead and expenses was going to take the lead here, just because when we're reading the headlines, right, we're seeing a lot of organizations really leaning into the idea of reducing their head count, right, because they can be replacing folks with AI, right, effectively.
And so, we're just not seeing that I think quite as much in the asset management space quite yet. You know, these responses underscore that the motivations behind AI adoption really reflect both strategic and operational goals, right? You know, the idea of being able to automate repetitive, time-consuming tasks, and freeing up human resources for higher value work is obviously very attractive to managers, where many of their teams are strapped for time.
You know, common use cases that I hear from clients often involve sort of quick research when they're looking to understand a new area of law, preparing initial drafts of bespoke agreements, or reviewing term sheets or contracts for key issues.
It's not surprising to me either that regulatory compliance is another major motivator here. The asset management industry is subject to complicated and evolving regulations and is increasingly devoting, you know, resources to compliance with all those regulations.
Likewise, compliance obligations are also arising because of lengthy and complicated side letter provisions. You know, increasingly, managers are entering into funds of one. Likewise, new retail products have, you know, a whole overlay of compliance and regulatory obligations that are new.
Similarly, global distribution, right, and the increasing number of placement arrangements that folks may have are driving compliance as well. You know, we do see a number of tools on the market that can assist with all of these functions.
I would say that, you know, it's really important to be vetting those tools before managers are onboarding a particular software tool to ensure that it is in fact meeting their needs. Here at the firm, we have assessed many, many vendors in the space. And we'll talk about that a little bit more later on in the presentation.
But, you know, we would be more than happy to be making introductions or sharing our perspectives with you as you do evaluate these tools, particularly in relation to compliance functions. And we're also happy to set up demos, right, and sort of walk you through some of the tools that we use and that we're familiar with.
In terms of our own internal implementation here at Ropes & Gray, you know, we've really been helping clients to track and manage disclosures, right, the voluminous disclosures in PPMs and ADVs, right? One of the challenges can be ensuring that you have consistency across all of your products.
We found that AI tools are incredibly useful on this front. Likewise, we've been deploying AI in order to help benchmark ADV disclosures across the industry really to help managers ensure that their documents are best in class relative to their peers. Okay, next slide please.
So this next question focused on the areas of the organization in which AI was currently being used or targeted. So it's interesting because here we see portfolio management and optimization, investment due diligence and underwriting, and ongoing investment monitoring and analysis quite clearly in the lead.
You know, this data reflects that the portfolio management and investment side of the house has been leading the way initially in terms of AI adoption and integration. You know, that is what I've been hearing from clients in discussion with them.
This makes sense, right, given the possibility that these tools can really contribute and enhance ROI. Now I'm hearing that, because investment management teams have had such a positive experience using AI, organizations are now looking to adopt these tools more broadly, right, in the legal, finance, IR, and other operational functions. And so, we see that in this data as well.
One area where I expect that we will see real transformation is on the contract management side. Automated workflow tools, some of which may or may not be powered by generative AI, can help ensure that contracts sort of move efficiently through approval processes.
Likewise, we see that AI tools designed to facilitate the markup and negotiation of ordinary course contracts, things like NDAs where terms are highly standardized, are very, very effective in those cases. Likewise, here at Ropes, we've been very focused on similar types of use cases.
One example is helping to automate LP transfers. This is a workstream that I know many in-house lawyers spend a significant amount of time on, dealing with internal transfers, particularly for closed-end funds. Likewise, here at Ropes, you know, we have large teams, typically of junior associates, that handle these workstreams at the end of every quarter.
You know, this can be a real friction point with investors because, notwithstanding the fact that in concept a transfer should be really simple and straightforward, this ends up consuming a lot of time and money. And one of the things we've done is we've partnered with a software provider to build an automated platform here where the transferor and the transferee essentially receive a link to input all of their own information related to the transfer.
The documentation is then completely auto-populated, including all of the tax forms. And the tool addresses the entire workflow from soup to nuts, including pushing out a DocuSign integration at the end to have the parties execute all the documents.
We're currently in the process of beta testing that product for Q4 transfers. But we plan to be opening it up more broadly to clients in Q1 and Q2 of next year. But I think this is just an example of the kinds of things that we're also seeing people build and develop on an in-house basis as well. So, I'm going to turn things back over to Sergey to talk a little bit about barriers to expanding AI internally at organizations.
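To make the intake-and-merge idea concrete, here is a minimal, hypothetical sketch in Python of the kind of data such a transfer platform might collect from each party before auto-populating the transfer documents and tax forms. The field names and the flattening step are illustrative assumptions, not the actual platform described above.

```python
from dataclasses import dataclass

# Hypothetical intake data an automated LP transfer platform might collect
# from each party before merging it into document templates.
@dataclass
class TransferParty:
    legal_name: str
    tax_id: str
    jurisdiction: str
    authorized_signatory: str
    email: str

@dataclass
class LPTransfer:
    fund_name: str
    transferor: TransferParty
    transferee: TransferParty
    interest_percentage: float  # transferred interest as a fraction, e.g. 0.05 for 5%
    effective_date: str

    def to_template_fields(self) -> dict:
        """Flatten the intake data into the merge fields a document template
        (transfer agreement, tax forms) would consume."""
        return {
            "FUND_NAME": self.fund_name,
            "TRANSFEROR_NAME": self.transferor.legal_name,
            "TRANSFEREE_NAME": self.transferee.legal_name,
            "INTEREST_PERCENTAGE": f"{self.interest_percentage:.2%}",
            "EFFECTIVE_DATE": self.effective_date,
        }
```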
Sergey Polak: Thank you, Melissa. So, the benefits of using these tools are absolutely becoming clear all around. But there are still barriers. These tools are not cheap by any measure. And so, it was kind of really interesting to us to see that the budgetary constraints are not the top barrier. Only 7% said that.
What is not surprising to us are the things that are at the top of this list, the concerns over security of these tools and just the sheer volume of these tools. So let's talk about security first, and then we'll talk about the volume. Security and confidentiality, that is the number one concern.
It's the right concern to be at the top of the list. There are absolutely risks out there. And I'm sure you've all heard horror stories and seen things on the news. So, it is very, very important to use these tools correctly and to use the right tools.
We advise our clients to make sure that they only use properly vetted tools, that they conduct the right level of due diligence, that they ensure that the tools they're using have the right technical controls in place, and that they have the right contractual controls in place.
And then, of course, using AI by and large means using the cloud. And that requires robust controls, robust vendor selection, and just knowing what you're doing, being careful with it. The volume of these tools, the number of tools in the market, is also a significant issue.
The market has just absolutely exploded in recent years. It feels like everybody and their grandmother is coming out with a tool with generative AI built in. And quite frankly, it can be overwhelming. We spend a fair amount of time, maybe even more than a fair amount of time, a significant amount of time reviewing these tools, reviewing the market, piloting the tools. And it is a substantial investment.
We help our clients with this process. We help them evaluate tools. We help them select tools. And we advise people to not just, you know, buy the shiny, new tool, but to really think carefully about getting the tools that will address the most valuable use cases for your organization.
Change management with these tools, although not reflected on here, is actually also often underestimated. AI is different from other tools. It's not like, "Oh, here's a new version of Outlook," and everybody just starts using it. It requires a mindset shift to really use it effectively.
And you have to have the right level of education and training. You have to allow for experimentation. You have to make sure that you share successes. And we encourage people to start small and to learn through hands-on experience. And these tools are highly, highly adaptable.
They can be integrated into your systems and processes. I mean, the integration with legacy systems is a significant concern. But it can be done. But I will say that the barrier to entry with these tools is relatively low. I mean, just anybody can power up ChatGPT and start typing stuff, which means that these tools are very easy to misuse.
And the barriers to effective use can be very high. To get good quality results, you really need to ensure careful collaboration and careful configuration of these tools. And that means, at the risk of repeating myself, collaborating with all the right people in the business, with the technical folks, and your IT groups, with the legal and compliance folks to make sure that you're using these tools correctly, that they are correctly configured for your use case and your organization and your level of risk.
And because these tools are evolving so rapidly, it's also important to keep on top of it and refine and update things. Just because a prompt worked for you yesterday, doesn't mean it's going to work today when a new version of the model has been released.
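To make the point about prompts breaking across model versions concrete, here is a minimal sketch of a prompt regression check: re-run a fixed set of prompts whenever the underlying model changes and flag answers that drift from what a reviewer previously accepted. The call_model() stub and the sample prompt are illustrative assumptions, not any particular vendor's API.

```python
# Minimal sketch of a prompt regression check across model versions.
def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wrap your provider's chat/completions API here")

# Each prompt maps to phrases a reviewer expects in an acceptable answer.
REGRESSION_SUITE = {
    "In two sentences, what is an MFN election in a side letter compendium?":
        ["most favored nation", "side letter"],
}

def find_drifted_prompts(new_model: str) -> list[str]:
    """Return the prompts whose new-model answers are missing expected phrases."""
    drifted = []
    for prompt, expected_phrases in REGRESSION_SUITE.items():
        answer = call_model(new_model, prompt).lower()
        if not all(phrase in answer for phrase in expected_phrases):
            drifted.append(prompt)
    return drifted
```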
Howard S. Glazer: Yeah. I mean, I think, Sergey, that's something that we saw just publicly, but we've certainly seen internally as well. When ChatGPT went to 5.0, there was a lot of uproar, or maybe uproar is an overstatement, but, you know, people were saying 5.0 was worse than 4.5.
And in retrospect, it probably wasn't worse. It was just people were using the exact same prompts from the old model on the new model, and not getting what they were expecting. It just required experimenting. Now, of course, a later model can actually be worse as well. So we're in that stage where the fact that this tool, used this way, is the best thing for this purpose today doesn't mean that three or six months from now some other tool won't actually be better at it.
Sergey Polak: Or six days from now, at the rate that we're evolving, yes?
Howard S. Glazer: Yeah. No, I mean, I'm trying to be super careful because, as you know, Sergey, I've mentioned this before, I've yet to be at a client meeting, or a webinar, or any sort of meeting where I've not made at least one mistake based simply on the fact that the world had changed in the last week. You know, I'll say something on our internal partner calls. "We don't have a tool for this." And then Sergey will pipe up and say, "Well, actually, turns out we do. We didn't have it last week, to be fair, but, you know, we have it now."
Melissa C. Bender: Yeah. I would just echo too, right, that change management portion, right? And just in terms of looking back at this survey question, I think when we were developing these back in September, I think reflecting on, you know, some of the work that we've been doing internally at Ropes & Gray, we might revisit what these options are, right, and add a change management portion.
I think that, as we have been working to roll out AI more broadly across the firm, you know, I think I somewhat naively was of the view that it would largely be a question of-- that the hardest parts of this would be a question of evaluating, you know, the use cases and matching up the appropriate tools on that front, and ensuring that there was training available, right?
And it's not to say that those things are easy, because those things are challenging in and of themselves, right? But I think actually encouraging people and finding ways to sort of repattern the ways in which they do their jobs on a day to day basis, and the way they think about tackling problems, right, and, you know, getting to answers, like, fundamentally that has to change, right?
Howard S. Glazer: Yeah. Yeah.
Melissa C. Bender: And that in and of itself, I think, has been one of the biggest challenges for Ropes & Gray and is something that we're still continuing to work on, right? So yeah.
Howard S. Glazer: Yeah, but I think it's universal. As the most senior member of the panel, I will share one quick story, which is--
Melissa C. Bender: Barely, anyway.
Howard S. Glazer: A senior partner at a firm I was at when I first started working referred to his desktop computer as "a piece of furniture on my credenza." He ended up retiring early because he was never able to evolve to work at the pace and with the tools that were available back then. And AI is that on steroids, you know?
Sergey Polak: Very much so, yes.
Howard S. Glazer: Thirty years ago, there were still senior people dictating their responses to emails. They were able to function, just not as effectively as they could have. Without being able to use AI tools, we're not going to be able to work, no matter where we are in the seniority chart.
Sergey Polak: Absolutely. Let's go to the next slide please. Melissa, I believe you'll be talking about ROI.
Melissa C. Bender: Yup. Great. So the next question that we had up had to do with how it is folks are currently measuring the success, or ROI, of their AI initiatives internally. You know, the jury is still out on this front. You know, folks either have not measured, have not been able to measure yet, or it's been too early to assess, right?
So that's the bulk of the respondents here. You know, I think that measuring success in ROI is actually challenging, right? It's a hard thing to do. And this feedback, you know, reflects the fact that people I think are almost universally struggling with this.
You know, there are a couple of key questions here: one, you know, what are you measuring, and two, how is it that you're going to measure it, right? And we constantly get this question from clients, right: "Please explain to us, you know, what it is you're doing on the AI front," right? "But what's the efficacy of what it is you're doing, and what are the cost savings to us," right?
And we basically assess success for us along two main dimensions, right? One is the accuracy of the AI outputs. And the second relates to the efficiency gains that we're seeing. In terms of accuracy, right, like, this is sort of a funny metric because there's a real question of how accurate you need the data to be, right?
We've all used Google to get a basic handle on a new legal concept, right? And AI search functions are simply an enhanced version, right, of a traditional search engine. And for many basic queries, they are going to meet a lot of people's needs, right, and then some.
You know, if you're just trying to get your arms around, you know, some new regulatory regime, or a regulatory regime in another country that you may not be familiar with, right, you know, even if the output is 80% or 90% accurate, that's going to be serving your needs, right, just to get a basic handle on that.
But, you know, let's assume for these purposes that complete accuracy is absolutely necessary, right? That's a lot of what we do here at the firm. I would say that, as AI tools become more sophisticated, their ability to generate reliable, high-quality outputs really, really improves.
And it is, I mean, just even in, you know, the past couple of years where we've really been piloting, and adopting, and using these tools, even within a particular given tool, based on feedback we may be providing to the service provider in that case, right, the software provider, the improvements are absolutely remarkable, right, where we see increasing improvement in terms of the accuracy of the output.
I will say though that, you know, accuracy of the output also really depends tremendously on who is using the tool, right, and that person's experience and their ability to craft effective prompts and actually interact with the tool, right?
You know, we are in many cases becoming prompt engineers, right? You know, figuring out how it is we're going to really extract the best and most accurate data by developing strong prompts is one of the things we are very focused on here at the firm.
And, you know, we found that, as people become more skilled over time, as associates and attorneys are doing more and more of that work in how to structure queries, they actually are able to get better results too. I think one of the things I've also come to appreciate is that, you know, where you really are looking to solve, you know, complex research questions or sort of multi-layered questions, breaking those questions down into multiple discrete prompts can actually be very, very helpful and can yield much more accurate and actionable information. And sometimes I just find that ChatGPT and other search engines just become overwhelmed when you're asking fairly complicated questions, so.
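As a rough illustration of the decomposition approach described above, here is a minimal sketch that chains several discrete prompts rather than asking one sprawling question, carrying each answer forward as context. The ask() stub and the example prompts are assumptions for illustration, not a specific tool's API.

```python
# Minimal sketch of breaking a complex research question into discrete prompts.
def ask(prompt: str) -> str:
    raise NotImplementedError("wrap your AI tool's API here")

def research_chain(topic: str) -> str:
    # Step 1: orient on the regime before asking anything specific.
    overview = ask(f"In plain terms, what is {topic} and who does it apply to?")
    # Step 2: narrow to the obligations that matter.
    obligations = ask(
        f"Given this overview:\n{overview}\n"
        f"List the key compliance obligations for a fund manager."
    )
    # Step 3: turn the obligations into concrete next steps.
    return ask(
        f"Based on these obligations:\n{obligations}\n"
        f"Draft a short checklist of action items, flagging open questions."
    )
```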
Howard S. Glazer: Oh, Melissa?
Melissa C. Bender: Oh yeah.
Sergey Polak: I would add, Melissa, that, yeah, the conventional wisdom has been that prompt engineering will be dead because these models will just become so good, that you won't need to worry about it. And I will say that may still come to pass, but it is not here yet.
Howard S. Glazer: I think the question is: where are we? How do we use these tools? And, you know, I've never had more than a five-year plan, and it was-- you know, 30 years ago, that was kind of silly. But now, it doesn't make sense to have more than a five-year plan. We have no idea what it's going to be.
Certainly, the five-year plan includes people being very good at prompts. And I know you know the connection, but this goes to what you raised earlier. Why is it more important to manage the people and get people using the tools? Why is that a bigger barrier than picking the tool?
And this is the reason. The reason is that with these tools, you can't just push a button and get an answer. You have to know how to ask the questions. And, as with anything, you get better at asking the questions when you have experience. And that's why that's so important.
Sergey Polak: And the tools are non-deterministic, right? These are different tools. It's not like, you know, if A, then B always. That's how software has always been. And that is not how these tools work. So it is very different. And, you know, you say five-year plan, Howard. I'd be surprised to make a five-month plan at the rate things are evolving.
Melissa C. Bender: Yeah. And I think, look, I mean, just to, you know, expand on what Howard was saying, right, I think that, you know, at some level, right, even when the prompt engineering becomes very, very good and the user input may be minimal, at some level there is no replacement for experience, right, because even if you have AI generating, you know, the most accurate prompt, right, that elicits extraordinarily accurate data, if AI is not actually asking the right question, right, if it's not, right, if we aren't targeting what the issue actually is, right, then the data isn't actually any good even if it's 100% accurate, right?
Howard S. Glazer: Yeah.
Melissa C. Bender: So I think, you know, one of the things that we really continue to emphasize internally is that even though these AI tools can do a tremendous amount to sort of speed up the rate of review and the ability to extract very accurate information, you know, there still really is no replacement for sort of the experience and the understanding of, "Okay, but what is the core issue? And what is the core risk here," right, in terms of being able to actually counsel clients on that front, so.
And then efficiency gains, right? This is the other key metric that I was mentioning. You know, where we can compare the time and resources it used to take versus the time and resources it takes now, right, then you can end up with a pretty good assessment of, you know, how much of a savings there is.
Here at the law firm, this is very easy, right? We measure our time in billable hours, right? And, you know, folks who are in-house-- you know, so we can look at how long it took us to complete a project historically, and how much time it takes to complete a project now, right?
And we can measure the difference. I think in-house, that can be a little bit more challenging because people may not have, you know, sort of written down their hours in 0.1 increments in the same way that we do here at the firm. I do think that, as people look to implement, it's actually really important to sort of be trying to assess that and trying to tie some numbers to that so that you can really evaluate whether or not the AI is improving things, right, for these purposes, so.
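As a rough illustration of the before-and-after comparison described above, here is a minimal sketch that computes time savings per matter type. The matter types and hours are placeholder figures for illustration, not survey results or firm data.

```python
# Minimal sketch: compare hours for a matter type done the traditional way
# versus with an AI-assisted workflow. All numbers are illustrative only.
baseline_hours = {"side_letter_compendium": 120.0, "lease_abstraction": 300.0}
ai_assisted_hours = {"side_letter_compendium": 40.0, "lease_abstraction": 50.0}

for task, before in baseline_hours.items():
    after = ai_assisted_hours[task]
    savings = (before - after) / before
    print(f"{task}: {before:.0f}h -> {after:.0f}h ({savings:.0%} time saved)")
```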
Sergey Polak: But I think, Melissa, it goes back to the point you made earlier. It's what are you measuring? What is success? Is success being more efficient? Is success being faster? Is success being more thorough because you can now do things you couldn't do before? So you have to know what you're measuring, what success means for you for your particular initiative.
Melissa C. Bender: Yeah. And I think another fascinating dimension of this, right, is, you know, that a lot of clients have been asking us about what the ROI is, or what the increased efficiency may be, associated with certain tasks that historically we would not even have undertaken because of the barriers associated with the amount of time it would take, right?
So, for example, in my universe one of the things we're commonly doing is really tracking what are the latest and greatest terms on, you know, new fund launches. And people want to understand what their peer firms are doing, right? You know, what does X, Y, or Z provision look like?
And, you know, historically that's the kind of thing that we've done on a regular basis for many different types of terms, right? This is part of our jobs on a day-to-day basis. But we now have the ability to extract very, very nuanced data, right, relative to what we've been able to access previously.
And so in some cases, you know, we're even looking at data sets or analyzing questions where it would've been cost prohibitive, right? And so there isn't necessarily even, you know, an ROI there, right, because what we're doing is we're basically leaning into a totally new dimension of work, right, that we haven't done before.
Sergey Polak: Yeah, absolutely.
Howard S. Glazer: Or think of the real estate fund. Five hundred leases in a transaction. We're going to spend hundreds of hours reviewing the top fifty because that's expensive enough. Well, now we can spend 50 hours and analyze all of them.
Melissa C. Bender: Yes.
Howard S. Glazer: But we know it won't be 100% accurate. We'll spot-check it. It won't be 100% accurate. But you know what? We probably shouldn't say this on a recorded line, but our associates aren't 100% accurate, you know? You know, reading contracts and moving things to a chart doesn't happen with 100% accuracy either.
So now if there's a particular term, a change in control terms or something like that, you know, you'll have someone check each of the five hundred just to be sure, or certainly the most important leases, as we used to. But we can understand a lot more about a portfolio in much less time, just using these tools.
Melissa C. Bender: Yeah. And I think one of the other things too to keep in mind where folks are measuring success or ROI in terms of using these tools is the ability to refine and edit the tools effectively on more or less a real-time basis, right? You know, the service providers that we work with in many cases are selected because they are nimble technology companies, right?
And they are able to adapt their product very specifically to our needs. And so one thing that, you know, we really recommend to clients, where you are selecting an external software provider, is that you understand their commitment and willingness to work with you to adapt the tool, right, to meet your internal use cases. We spend an extraordinary amount of time, you know, working with the external service providers to essentially build the technology that we need in order to accomplish the types of tasks that Howard has just described. So, you know, a whole other dimension of ROI is really who you can work with in partnership to accomplish the goal as well. All right. I think we're going to turn to the next slide, and Howard's going to provide a global overview.
Howard S. Glazer: Yeah. Our global market report, our AI global report, I recommend it. My only fear with this is that we do it quarterly. And as Sergey suggested, quarterly probably isn't often enough. But there's only so much time we can spend doing our reports.
We're already working on obviously the fourth quarter one. But what we try and do is consolidate what's going on in AI, both what's going on with the technology itself, and what's important to investors. You know, what we're seeing in terms of the impact that AI is having on the actual investment space.
What we've seen is, you know, we've seen AI move from experimentation to kind of full-scale integration. And, certainly from an investment perspective, what we've seen is capital flowing, obviously, overwhelmingly into data and, you know, into infrastructure.
You know, this is not the headline of this webinar. I think we all know that. Even, you know, we see money flowing into Agentic AI. Agentic AI does exist. But it's at a very, you know, basic level. Agentic AI, you know, it's when the AI itself can figure out what it wants to do, then do it, and then course correct.
That does exist. That's what people are imagining. But, for example, most of my practice is private equity transactions. I would love to be able to put a term sheet, you know, into the machine, and get back a draft of a purchase agreement. That doesn't even come close to existing. It's not going to be able to pick out--
Sergey Polak: We're not there yet.
Howard S. Glazer: No. It's not going to be able to pick out the precedents. And to Melissa's point earlier, we have one team in particular working with one vendor in particular trying to develop this tool. And if we're able to do it, we will have it before anyone else has it.
But all the large, you know, legal technology companies are trying to develop that tool. As of today, it's the holy grail. Once it's developed, we'll find a new holy grail. But that's where money is flowing into. Again, just stepping back and thinking about the report, we created it to try, as briefly and as succinctly as possible, to get the message out to our clients about what's going on in the world of AI.
What's happening, trying to separate the hype from the reality. You know, Elon Musk tells me that pretty soon there won't be money, because there'll be so much stuff in the world that none of us will have jobs and we won't need the money.
I think we all know that's hype and probably nothing we'll see in our lifetime. But that's what's going on. What's going to define the next waves of investment? That's what we're trying to communicate, you know, to our clients. Just looking down, you know, just quickly to summarize what you'll see in the report, what we talk about, adoption versus impact.
What we've been talking about now, how do you measure it? In fact, we are at the beginning of this. Adoption is increasing. More than half of our asset management clients have adopted this. Not all law firms have adopted it. We've been very aggressive in adopting artificial intelligence.
What do we see in terms of profits and losses? You know something? I think that we're more likely to see losses than profits, you know, in the next six months, for example. At Ropes & Gray, in order to make sure the tools are working, we do the job the old way, and we do it the new way. And we compare.
And, you know, over time and over a very short period of time, we're going to be able to be way more efficient and way more cost effective. But similar to how we're doing it, the time it takes people to learn, the time it takes people to change their routines and start using the computer instead of using, you know, pen and paper, those sorts of things take a little bit of time.
And so while we're seeing a huge uptick in adoption in many, many industries, we're not yet seeing a full impact on the P&L. In some industries, we certainly are. In consulting. There are many, many industries where the large language models and the generative AI can really replace large numbers of people.
That hasn't yet hit the investment management industry, and it hasn't hit the legal industry yet. One huge thing is the breakout of Agentic AI. It's here. The data says that a billion dollars was invested in Agentic AI in 2024. And the projections are it will be over $50 billion within a few years.
And I wouldn't be surprised if it ends up being more than that from what we see Amazon, and Apple, and Alphabet announcing in terms of just their investments into artificial intelligence. As we already said, and as I think everyone knows, it's an infrastructure-led cycle.
From the perspective of the investment manager and investments, that's what's really driving the investment cycle. And this, I just wanted to grab this. It comes from our global AI report. Harvard economist Jason Furman attributes 92% of U.S. GDP growth in the first half of 2025 to investment in AI data centers.
I mean, to have this one sector of the economy, which didn't-- I know for Sergey, it's existed for a long time. But, you know, until The New York Times ran an article about ChatGPT, I didn't know what this was. And that was three years ago. So to have something that's really that new to the scene have this level of impact on the economy is just amazing.
Melissa C. Bender: Well, and I think we see that in our day-to-day practice, right, both in fund formation and the transactional side, right? I mean, it's not just a headline, right? I mean, that's something that we experience on a day-to-day basis in our practices, so yeah.
Howard S. Glazer: Yeah. And so what does that mean in practice? Big tech. Big tech is underwriting a lot of these investments, and private capital is also moving in and supporting and financing this huge investment in artificial intelligence and in the data infrastructure and computing resources necessary to allow the AI to work.
Partnerships are forming. Again, just read the papers. In fact, just trying to, you know, read my notes before we came on, I looked at my computer screen, and Disney just invested a billion dollars, I think, into OpenAI. I was more interested in my notes than my computer screen, but I think that's what they did today.
And the number-- you know, Oracle's partnering with-- the cross-partnerships, you know, we see will only increase. Public-private consortiums, not as much in the U.S., but certainly in other countries we're going to see that a lot more because nobody wants to fall behind.
And how do you keep up? And in other countries, it's going to be sovereign wealth funds. It's going to be governments actually getting directly involved with creating infrastructure for their artificial intelligence. And then finally, deal activity.
Again, as someone who works mostly on private equity transactions, the number of add-on acquisitions to existing platforms that don't have artificial intelligence but are trying to get it is actually surprising to me. I would have thought that we're more likely to see just people going out and buying tools.
But we actually see people going out and buying the capacity, buying the capability, trying to add it to their platforms. So yeah, I mean, strategy just matters. Again, Melissa mentioned it earlier: even just in terms of what tools you use, partnering with the right people is going to be vitally important in how we adopt AI for our businesses and for the targets of, you know, the investment managers of the world. Who you're partnering with, who you're working with, that's just key. I think, Melissa, you want to talk a little bit about how we're using AI?
Melissa C. Bender: Yeah. Next slide. Great. Yeah, so this is a list of the tools that we use here at Ropes & Gray. In some cases, these are used for sort of a single function, right. In many cases, they are used across a large number of practice groups and on the operational side of the house as well. We, to get to this list, Sergey can attest, we've actually vetted dozens, and dozens, and dozens of providers. We are constantly being bombarded actually by--
Sergey Polak: Hundreds.
Howard S. Glazer: I was going to say, just two quick things. It's hundreds, and the first thing we do is verify the security.
Sergey Polak: Absolutely.
Howard S. Glazer: There's one I probably shouldn't mention, but you would know the name immediately. And we did not test it for months because it did not have satisfactory security. So that's the first thing we do. But if it's out there and it passes our security standards, again, by "we," I mean Sergey and other people, you know, in our Information Technology team, look at everything.
Melissa C. Bender: That's right. Yeah. So, you know, once it's sort of passed through those threshold security questions where we vetted, right, the tool, then what we're really doing is we are working hard to identify, you know, very, very specific use cases.
And, you know, I think that's one of the other arts too of really adopting and successfully integrating AI is being pretty clear about what AI is and is not good at, right? I think a lot of people imagine that AI is going to sort of be a magic bullet, right? And that's not the case, right?
I think there are certainly use cases where it's really important to recognize that, while certain portions of the task will be made much more efficient, or can be made much more efficient by having an AI integration, right, at the end of the day, you can't actually shortcut, right, some of these things.
So for example, you know, one of the use cases that we have developed over time really involves, you know, analysis, specific analysis of documents, right. But we still have to go through and confirm, right, that the AI hasn't hallucinated, right? We still have to go back in and confirm and provide comments to our clients around the types of things that they should be pushing back on, or they should be negotiating in that context, so.
Sergey Polak: Just on this part of Melissa's point, there was actually a study out of Harvard Business School, I think at this point maybe even a couple of years ago, that proved sort of empirically that using AI incorrectly makes you less efficient. So don't just start using it because you think it's going to solve for every single problem.
Howard S. Glazer: They call it falling asleep at the wheel. You can't fall asleep at the wheel.
Sergey Polak: Right.
Howard S. Glazer: The studies they've done show that, used correctly, you know, under-performing employees become at least as productive as the average employee. But, you know, focusing on it, it actually then lifts the better employees even more. These are, you know, things done at consulting firms, things like that. Some, you know, double-blind studies and things like that. But the real risk is just taking what comes off the computer and just assuming it's correct.
Melissa C. Bender: All right. Yeah. All right, we're going to talk about specific implementation of a few of these tools and the ways in which we use them at Ropes & Gray. I guess I would just highlight too that, you know, to the extent that, you know, you are evaluating any of these tools as a client, right, you know, in many cases, we've gotten on the phone with clients to demo, right, the ways in which we use the tools.
I mean, certainly, all of these software providers are happy to get on a demo with you, (LAUGH) right, and sort of show you the magic, right? But in many cases, our clients have found it to be helpful just to sort of see the ways in which Ropes & Gray actually implements and uses the tools sort of in live demos, and being able to compare some of the tools on a head-to-head basis because we do appreciate that, you know, while Ropes & Gray has the resources to be licensing all of these tools, right, folks in-house may only be able to choose one, right?
So sort of picking the right horse that's going to meet the most needs internally is something where, you know, we're committed to helping our clients figure out what the right tool is for them. All right, so maybe we can go to the next slide. Oh yeah. Hebbia. All right.
So Hebbia is an advanced AI platform that we use to review and streamline analysis of complex legal documents. Hebbia actually has the benefit of being able to create matrices, where you can compare large data sets, right, across many, many documents.
And you can sort of parse out the different terms. In my universe, in the fund formation universe, right, we use Hebbia in order to be able to sort of lay out all of the different terms of a partnership agreement, right. In many cases, it can be quite helpful if you're needing to digest a partnership agreement quickly, and understand, "Well, what does the clawback provide for? What are the management fees?"
But additionally when we're looking to compare a large number of documents for benchmarking purposes, we found Hebbia to be extremely helpful. I have a real-world scenario that just came up for me in the past weekend that I would like to share with this group.
One of the areas where I focus my practice is in the digital asset space. Increasingly, because digital assets are back in vogue again, right, in the current environment, we've been asked increasingly to be reviewing custody agreements for digital assets, right?
And there are essentially a very limited number of providers in the digital asset space that provide custody, right? And those agreements tend to have some unusual and bespoke terms that are unique to that universe. One of the things that I did is, using Hebbia, I worked with two summer associates and a senior attorney in our IP transactions team to build a matrix that essentially analyzes all of the terms of a digital asset custody agreement and lays them all out in a grid.
We've reviewed a few dozen of these over time, right. So we've included all of those in our database. And I had a client reach out to me on a Friday afternoon where they needed to negotiate and get one of these agreements in place within a couple of days, right before Thanksgiving, right?
And so what I was able to do-- and this was a situation where they were looking to negotiate an agreement with a provider where I didn't know sort of the market terms off the top of my head, right? So what I was able to do was I was able to essentially put that agreement into our Hebbia matrix that I had built with two summer associates, right.
And, within the space of an hour, I was essentially able to dissect all of the terms. And then I was able to benchmark the terms against every other agreement we had historically negotiated, right, to understand where that agreement was essentially off-market, and to be highlighting key terms that they would be wanting to negotiate in the context of, right, being able to get this agreement done in a couple of days.
And then I was able to hand that off to a more senior lawyer to essentially digest the information, mark up the agreement. And we were able to get it done, right, on their timeline. I think one of the things that, you know, I really learned from that experience working with summer associates on a project like this is, you know, people really worry a lot about how it is we are going to be including partners at the firm, right, how it is we're going to be training the next generation of lawyers, and whether AI is effectively going to be, you know, taking over such that we can't train junior lawyers anymore.
And, you know, one of the things that I realize is that in order to actually build a matrix and analyze the data in these agreements, the summer associates I was working with had to figure out what an indemnity was, and how does an indemnity work, right, because they weren't able-- you know, unless they knew what it is we were looking for and the types of things we would be commenting on, they couldn't generate and build prompts, right, and build a summation of the data from the other agreements until they'd actually gone through that exercise and learned how to do it, right?
And so I think it was a good example of how education can actually be accelerated in many respects, right, to sort of get associates up to speed more quickly so that they can actually be contributing value at a higher level earlier on in their career.
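As a generic illustration of the matrix concept described above, here is a minimal sketch that builds a grid of extracted terms across agreements, one row per agreement and one column per term, ready for benchmarking. The extract_term() stub and the term list are assumptions for illustration; this is not Hebbia's actual interface.

```python
# Minimal, generic sketch of a term-comparison matrix across agreements.
TERMS = ["indemnification", "limitation of liability", "fee schedule", "termination"]

def extract_term(document_text: str, term: str) -> str:
    """Hypothetical stand-in for an extraction call to a document-analysis tool."""
    raise NotImplementedError("wrap your document-analysis tool here")

def build_matrix(agreements: dict[str, str]) -> dict[str, dict[str, str]]:
    """agreements maps an agreement name to its full text; the result is a
    grid keyed by agreement and term, ready for side-by-side benchmarking."""
    return {
        name: {term: extract_term(text, term) for term in TERMS}
        for name, text in agreements.items()
    }
```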
So I think that's a good example of the ways in which AI is actually a very constructive tool, right, on many levels and helps us to save time. So that's Hebbia. The next one that we wanted to talk about briefly was ProVision. This is under Intelligent Legal Solutions. It's the ProVision tool.
ProVision is a tool that addresses the side letter process. As many of you may be familiar with, the proliferation of side letter requests in the fund formation context has been really extreme. And now the amount of time that it takes to put together side letter compendiums, from a compliance perspective but likewise for purposes of being able to address MFN obligations at the end of a fundraise, is very significant.
And one of the things that we found is that increasingly, a larger portion of the organizational expense budget is really dedicated towards this MFN process. And you know what? This is a process that is just a thankless task, right, on many levels.
You have to digest in some cases hundreds of side letters for a large fundraise. You have to redline them. You have to organize them into a giant compendium. You have to go through the MFN to decide who is entitled to what rights. Then you actually have to build out a specific, you know, compendium for a particular investor.
You send it to them. They fill it out. You know, it comes back. You have to digest all that information. And so we essentially knew that we wanted to come up with a technology solution for this. We identified Intelligent Legal Solutions as a provider that we were going to work with to build a bespoke tool on this front that essentially was aligned with the way Ropes & Gray conducts that process.
We had a team of associates and senior attorneys who worked with ProVision to build this tool essentially to our specifications. And we've now been implementing that on behalf of clients. And we estimate that the time savings associated with this is 60%, 70%, right, of the time that it previously took.
So, you know, what ProVision does is it essentially allows us to dump all of the side letter provisions into the tool. It will then organize all the provisions. And then upfront you can essentially code all of the provisions, and whether or not particular investors would be entitled to those provisions.
And ProVision automates the process of creating all of the compendiums. It automates the process of distribution and execution. It automates the process of tracking what provisions have been elected by particular investors. And so this has been a tremendous time savings that is both saving our clients money but is also freeing up our associate time to be doing higher value work here at the firm.
And honestly, they're a lot happier, right, not having to spend so many hours tracking large word documents and things like that. So this has been a really great success story on our part, you know, where we are building bespoke tools in partnership with particular providers. Great.
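As a generic illustration of the bookkeeping a tool like this automates, here is a minimal sketch of a side letter provision data model with a simple size-based MFN eligibility check. The fields and the eligibility rule are assumptions for illustration, not ProVision's actual data model or logic.

```python
# Minimal, generic sketch of side letter provision coding and MFN eligibility.
from dataclasses import dataclass, field

@dataclass
class Provision:
    provision_id: str
    category: str              # e.g. "fees", "reporting", "transfer"
    text: str
    minimum_commitment: float  # smallest commitment entitled to elect it

@dataclass
class Investor:
    name: str
    commitment: float
    elected: list[str] = field(default_factory=list)  # provision_ids elected so far

def eligible_provisions(investor: Investor, provisions: list[Provision]) -> list[Provision]:
    """Provisions this investor may elect under a simple size-based MFN."""
    return [p for p in provisions if investor.commitment >= p.minimum_commitment]
```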
Howard S. Glazer: What's next? Is Harvey next?
Melissa C. Bender: He's up next. I think that's you, Howard.
Howard S. Glazer: Yup. I'm looking at the clock. I don't want to spend too much time. I think Harvey is used widely. As a general purpose, you know, AI tool, it can analyze documents. We can create, you know, custom workflows within Harvey. Something that we have that most of our competitors actually don't is that Harvey's integrated with our document management system.
Now, you can't tell it just to go, you know, "Find me all the MFNs in our document management system." But the efficiencies that we find just by having that link are tremendous. But as with all the AI tools, you have to give it the data to analyze. You can't just ask a question and hope to get the answer. As we've said a number of times, that's the future. Right now, Harvey is really a general purpose, you know, AI tool that we use regularly. Want to go to R2G2?
Sergey Polak: Sure. So many of you are probably saying, "What is this? I've never heard of it." And there's a good reason for that. That is because this is our own internal tool. We built it ourselves. And we built it before really the rise of legal AI tools. You know, after ChatGPT came on the scene, but before the explosion that we talked about in legal AI technology. And so we called it R2G2.
Howard S. Glazer: Well, to be clear, Sergey, just to say, I know we have very little time, we had an internal poll. People got--
Sergey Polak: We did.
Howard S. Glazer: --to vote. But I don't think anyone imagined we'd be then showing this in front of clients, the name of the silly thing. I would've voted for something else. But sorry, go ahead.
Sergey Polak: So it is a lightweight chatbot. It is built on the GPT models. And one of the main reasons we built it is to ensure security and confidentiality, so that all of our clients' data that might be used in this tool would be protected.
And it is a general-purpose tool. It supports document analysis. It's able to summarize content, retrieve information from documents. It helps people to do edits and refine their text. And we've built custom functionality into it, in terms of integrating it with our own systems.
And, you know, we're able to build knowledge bases of proprietary content in this tool. We have integrated with our email to make it easier for people to interact with it. And even though we have Harvey and Harvey's available enterprise-wide, R2G2 is something we continue to support. And it continues to be one of our most heavily used tools.
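As a generic illustration of what a lightweight internal chatbot along these lines might look like, here is a minimal sketch built on the OpenAI Python client, with a guardrail system prompt and in-memory conversation history. The model name, prompt, and structure are assumptions for illustration; this is not R2G2 itself.

```python
# Minimal, generic sketch of a lightweight internal chat assistant.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an internal assistant. Do not reveal or retain client data; "
    "answer concisely and flag anything you are unsure about."
)

def chat(history: list[dict], user_message: str, model: str = "gpt-4o") -> str:
    """Append the user's message, call the model, and return the reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```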
Melissa C. Bender: Yeah. And the name is catchy, right? So I think that goes back to the change management --
Sergey Polak: It's hard to type it, but it's catchy.
Melissa C. Bender: Yeah. So it's a good way for getting people to adopt it internally, so. I guess just we had a couple of questions. So just to circle back, I think, you know, a question that has come up is, you know, talking more about this whole change management piece, right, and, you know, what are some of the things that we're seeing in terms of skills, teaming, and other change management factors that make the biggest difference in terms of successful AI adoption, and how leaders can sort of measure successful readiness for that? Howard, do you wanna take that question?
Howard S. Glazer: Sure. I mean, it's something that, leading our Private Equity Transactions group, I'm focused on. I think we've talked about it a lot. I mean, Sergey mentioned it. I think one of the most important things is that internally you need cross-functional teams.
That I think is the most important change. And I joke with Sergey that, you know, I've spoken to him more in the last three weeks than, you know, my entire career before that. You know, until recently, it's like, "Get my computer to work. I will call you when it doesn't work and complain. And otherwise, I don't need to talk to you. I just need my computer to work."
But now, we need to kind of combine our collective skills, you know, as a law firm, to provide service to clients. It's vitally important. There's no other way to do it. We are all, you know, part of a team, trying to, you know, serve clients, trying to get things done.
When we talk about change management, the other thing, and I think, Melissa, you started on it, is, you know, getting people to actually change their individual behaviors. You know, my old boss who thought the computer was a piece of furniture, getting that change is probably harder.
But more important is thinking of everyone as members of the team trying to solve your problem. As a law firm, we're trying to provide client service. And everyone is trying to get there, you know, to that same end goal. To me, again, it's cross-functional teams and real teams, not just, you know, keep them in their box sort of thing. It's real teams--
Melissa C. Bender: Yeah, right. And I think that applies across all organizations, right? I think that, you know, that sort of the technology integration piece is absolutely essential.
Howard S. Glazer: Exactly. "I want my Bloomberg monitor to work" is different than, "How do we get this done?"
Melissa C. Bender: Correct. Right. Yeah. Exactly. Exactly. Yeah. And then I guess the other question we have is, Sergey, right, what are some of the practical steps that you would recommend, right, where companies are looking to build trust around data privacy, security, and bias? Like, what are some of the core steps that people should be taking as they're looking to do AI integration?
Sergey Polak: Sure. I will go quickly in the interest of time--
Melissa C. Bender: Very quickly. Yeah, one minute. Yes--
Sergey Polak: Yeah, I have one minute--
Melissa C. Bender: Key ground.
Sergey Polak: I will speak fast. So, of course, the number one thing is make sure you have a policy. Make sure you have a governance framework in place. You want to avoid shadow IT because people will go and use these tools if you don't let them use the tools safely and correctly. So have something in place.
Require a minimum level of training for people to use these tools. Yes, you can just go and start typing away, but to use them effectively and safely you need to have some basic fundamentals. So make sure you require that, and we do that here.
Make sure people are using the right tools, vetted tools. Go through your evaluation process. Make sure you have enterprise agreements. And if people are using ChatGPT, that they're using the enterprise version and not the free version. Make sure that you conduct that thorough technical review to make sure that the tools you're using are safe, they're doing the right things, they're not going to be training on your data. And that'll get you started and on your way to success.
Howard S. Glazer: And Sergey, quickly, do we all have access to ChatGPT, or is it just a limited number of us?
Sergey Polak: It's a limited number. It is not something we make available firm-wide at this time.
Howard S. Glazer: Okay, because I was just looking at the questions. Yeah, so I do. To answer the question, it's R2G2. We do want to be careful. And those of us who have access to ChatGPT, I understand what I can and cannot put into ChatGPT--
Melissa C. Bender: Correct. You do not put any client information--
Howard S. Glazer: --as opposed to R2G2--
Melissa C. Bender: --in ChatGPT.
Howard S. Glazer: I cannot put client information. But I can do deep, deep research on you, our clients, for example. I have access to that. And then just quickly, I see the second question. The first question was answered. The question about the end of the hourly billing for legal services. Different people will have different opinions.
Yeah. I assume that this is the beginning of the end of that for most services. We're seeing it in accounting. We're seeing it in consulting. What we do is less prone to being eaten away by AI as it exists today, as compared to our friends in accounting and consulting. But I think it's just a question of time.
Melissa C. Bender: Yeah. Great. All right. Well, thank you everyone for the questions and for joining us today. We absolutely look forward to continuing the conversation. If there's anything that we didn't cover or anything that you'd like to explore more coming out of this conversation, feel free to reach out to any of us or your relationship partner at Ropes & Gray. And we look forward to hearing from you. Have a great day, everyone. Thank you.
Howard S. Glazer: Bye bye.
Sergey Polak: Thank you. Bye.