Part I: Award-Winning Author Benjamin van Rooij Explains Why Understanding Human Behavior is Essential to Law and Compliance

Podcast
April 19, 2023
35:25
Speakers:
Hui Chen, Zachary N. Coseglia,
Benjamin van Rooij

On this episode of There Has to Be a Better Way?, the first in a two-part series, co-hosts Zach Coseglia and Hui Chen talk to author and researcher Benjamin van Rooij about why it’s dangerous for lawmakers and compliance professionals to rely on their “own intuitions about human misbehavior” when designing laws, rules and compliance programs. They also examine the challenges organizations face when trying to promote ethical behavior. Stay tuned for part two of our conversation, where Professor van Rooij dives into lessons from his award-winning popular book The Behavioral Code, co-authored with Adam Fine.


Transcript:

Zach Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab, where we are on a journey to find Better Ways and to spotlight folks who are disrupting the status quo in an effort to tackle longstanding organizational challenges. I’m Zach Coseglia, here, as always, with my partner in crime, Hui Chen. Hi, Hui.

Hui Chen: Hi, Zach—I’m very excited today.

Zach Coseglia: Me too, because we are joined by a very special guest. Joining us today is Benjamin van Rooij. Benjamin is a professor of law and society and the director of research at the Faculty of Law at the University of Amsterdam. He is one of the most respected legal scholars writing about the role of behavioral science in the law, so he is a natural friend of the Better Way? podcast and of R&G Insights Lab. Today, he’s joining us to discuss his book, The Behavioral Code, which was published in 2021, and the lessons that it might have for us, for compliance professionals, and for our listeners. Welcome, Benjamin—thanks for joining us.

Benjamin van Rooij: Thanks for having me today.

Hui Chen: Benjamin, I’m going to start off by saying that I’ve known you now for a few years. We first met at the first academic conference I ever went to, which you organized and which was the inaugural meeting of ComplianceNet, a network of scholars who study compliance-related issues. What made you and Adam Fine, whom I also met at ComplianceNet, decide to write a book for a popular audience instead of the usual academic publications? What triggered that decision to take that route?

Benjamin van Rooij: We saw that there was a large body of science—empirical studies relevant to how we organize and operate law in criminal justice, corporate law, and other fields—that just didn’t have much of an influence in practice. At first, we thought, “We’ll just bring the information to practice.” The more I talked with people in practice, I saw that people in companies are only looking at the regulators, for instance—and the regulators, in the end, are looking at the politicians. The politicians, in the end, are in a public sphere where you don’t have a lot of space to make complex arguments. And I saw that a lot of simplistic arguments were being made, both about criminal justice and about corporate and other issues, so we really felt we had to go into that space. To bring that complexity of science into a space where everybody who’s interested in reading something that’s a little bit more difficult, but not really complex, could get this knowledge and talk about it—that was our original aim: to change the conversation on these issues, which is very ambitious, obviously.

Zach Coseglia: Benjamin, you’re a lawyer, like us, and a legal scholar. When and how did you become interested in behavioral science?

Benjamin van Rooij: I can pinpoint the exact meeting where this happened. I had applied for a Ph.D. position in environmental law in China. I was going to my soon-to-be supervisor to pitch him an idea for a purely legal tort analysis of environmental torts in China. I had this whole idea in my head—I was at this house in Amsterdam—but he spoke first, and he said, “Why don’t you study environmental law enforcement? Why don’t you go to China, why don’t you go to Beijing, and if you hear something interesting happening, you take a train to Guangzhou?” And he had me at the word “train”—I still remember that. I was like, “Wow, we’re going to be on a train. That’s way more fun than looking at case law.” So, before I knew it, I was in China, trying to understand how the Chinese government enforces law. I couldn’t answer that question through legal analysis, so that’s how I moved into the social and behavioral sciences, and I’ve never looked back.

Zach Coseglia: One of the reasons I ask is because we obviously talk about behavioral science in the law a lot. We talk about behavioral science in the context of solving complex organizational challenges, like compliance, which is an area that we spend a lot of time talking about. I find that we often get confused looks from the lawyers who we’re working with as we’re introducing some of these concepts. And so, I want to share with our listeners a quote from your book that I love, and I want you to unpack it for us. You say, “Because lawyers who design and operate our legal code rarely receive any mandatory training in social and behavioral sciences, they are forced to rely on their own intuitions about human misbehavior, many of which have been proven false by empirical studies.” And then you say, “We have left the most important coding of human conduct, the legal code, in the hands of behavioral novices.”

Benjamin van Rooij: Yes, that’s how you write a popular book, or at least how you try. We do overstate some of these things. Of course, there are lawyers, like in your firm, who have a lot of behavioral knowledge. I think that’s changing, but generally speaking, if you look at legal education, it’s not part of it. I’m in a fortunate situation, where in Amsterdam, we have a compulsory first-year class for 1,000 students on law and behavior, where we don’t focus only on compliance issues, but, for instance, also on the human side of judicial decision-making, or the human side of lawyers or even clients—people who seek justice, how they actually come to understand grievances and develop ways to deal with them. I feel really strongly that, yes, lawyers have a tremendous amount of power in designing rules and operating rules, but I think they could wield that power better, and for better interests, if they had a broader understanding of the social and behavioral sciences. And I use both terms, “social” and “behavioral,” because I think sometimes when we say “behavioral,” we immediately go into behavioral economics, and to me, that’s a very narrow understanding of some of the issues here.

Hui Chen: Benjamin, I’m also curious. Besides the lack of training that most lawyers have in the sciences generally—the approach to how to assess problems and how to assess the effectiveness of a solution—do you also believe that there is an issue with lawyers having different interests, in that their focus is often more on avoidance of liability than on prevention? One of my favorite examples from your book is the 4,000-page lab manual that the University of California, after a serious fire, chose to enforce by requiring everyone who works in any type of lab to at least acknowledge receipt of the 4,000-page document. It does nothing to prevent the next lab fire. What it does is help the lawyers say, “Look, we did something.” I see this many times in talking about corporate compliance programs, particularly training—a lot of times, we ask people what their objective is in doing the training. They may or may not know why they’re doing it, but one reason they all acknowledge is that they want to satisfy law enforcement and regulators. And law enforcement and regulators—this is an incentive, or at least a perspective, that not a lot of people appreciate—have a compelling interest, particularly law enforcement, in having people document their training, so that when those people commit crimes, it gives them automatic proof of mens rea. The person cannot say they did not know this was the law—there are training records that show it. People forget that this is certainly in prosecutors’ interest. How do you reconcile the differences, not just in background and mindset, but in professional interests?

Benjamin van Rooij: In the book, we talk about two views on behavior. One is an ex-post view: the bad behavior has happened—what do you do then? That’s really the liability view. That view is very dominant. It is why companies have lawyers and hire lawyers. It’s even in contracts. I don’t think lawyers normally make contracts because they think, “We’ll really perform them.” What they’re thinking is, “What if there’s nonperformance—what happens then?” It’s an ex-post view of behavior that’s not desired. Our book argues that we should also have an ex-ante view, which asks, “How do we prevent the behavior that we don’t want?” That is a much more difficult question, because, for instance, if you’re looking at legal knowledge, most people don’t know most of the rules, even if you do training. So, with your example of training—we just published a paper where we looked at different anti-bribery policies, a long one and two short ones, and then we had a group that got no policy. We found no effect of just having people read these policies. People don’t remember rules because there are just too many rules. You may remember another example from the book: by the late 1990s, a U.S. company was already bound by 300,000 rules backed by criminal sanctions. Nobody can know and understand these, let alone communicate them. In an ex-post view—even the example of the University of California system having these 4,000-page protocols—that’s fine, because you find the applicable rule and you apply it. In the ex-ante view, you need to think in a different way—you need to think, “How do we actually get to people when they’re potentially engaging in conduct that the law doesn’t desire? How do you actually reach them?”

There are several things to say here. One is that liability thinking is not going to go away. Compliance 1.0—I think in your HBR article you also mentioned this—is not going anywhere. A lot of people, including me and probably you, are arguing, “We need to have an ex-ante and behavioral view.” But that view comes on top of liability thinking—I think it’s really important to recognize that. We’re not going to get any organization, corporation, or prosecutor that will, in the end, give up on liability thinking. So, we need to find a way to combine liability thinking with preventative thinking, and that’s very hard. I’ve seen some instances where I see the beginnings of this—I’ve met one prosecutor in California, a state prosecutor who did environmental cases, who said, “If I see a company that has been caught in a gross violation of environmental rules and they have a good compliance management system, I actually double down on them.” The opposite of what the federal sentencing guidelines say, because, he said, “They had the system, but then they still broke the rules.” I think there may be changes among at least some of the regulators and some of the prosecutors toward thinking where these things become more aligned—where, in terms of your liability management, you can get lower liability only if you also do something preventative. I think we also see this in some of the things that have happened at the Department of Justice in the last years, also with your influence, Hui.

I see Compliance 2.0 as sort of where we are now, where companies and firms are trying to draw on behavioral science and are really focused on measurement of effectiveness. The problem there is you get these very simplistic interventions where you do A/B testing of one small thing that you may be able to measure—they hardly ever work in the real world, and they hardly ever work for complex problems. I don’t know how you nudge somebody out of bribery. Bribery is a very complex, contextual thing that has no simple intervention. So, what I’m really arguing is that we need to recognize that there’s going to be liability thinking—we need to embrace it, even. And then, once we combine that ex-post liability thinking with ex-ante steering of behavior, we need to think about how we best get those to work together.

Hui Chen: I thought of you recently when I was looking, for other reasons, at a couple of pieces of legislation that had to do with evidence-based policymaking. The federal government, in 2018, passed the Evidence Act, which requires federal agency data to be accessible, and requires an agency to plan and develop statistical evidence to support policymaking. Yet, for example, when DOJ made decisions on compliance certification, I’m not seeing any data-based evidence to back up those decisions.

Benjamin van Rooij: There is a problem here. If I look at the evidence, where do we have conclusive evidence that any sort of intervention actually does what it’s designed to do? Hardly anywhere. The only thing I have in the book that’s really conclusive is that rehabilitation programs for street crime, both mandatory and non-mandatory, work really well, and that finding is really consistent. But then, still, I can critique it if I go deeper—I can see that the study designs suffer from selection bias, because you put the people who are more likely to succeed into the programs. I’m doing a paper now on the potential effects of punishment on illegal behavior. We’ve made this whole causal loop model where you put in all these different potential effects—deterrence, rehabilitation (that doesn’t apply for corporate), incapacitation, but also all kinds of other side effects, like evasion or ending impunity, etc. I first summarized, for each of these potential mechanisms, what the evidence is based on. And for all of them, except rehabilitation, it’s inconclusive, mixed, or nonexistent. Second, they’re all interactive—that’s why we’re designing this model, and we’re going to work with physics professors and math professors to see if we can model it further and do simulations. So the knowledge base that we have is (a) shaky for each of these individual mechanisms, and (b) full of interactions.

Zach Coseglia: I want to talk a little bit about the real challenge that exists for a lot of the people who may be listening and for our clients who are corporate compliance officers—folks who are working within an organization with good intentions, trying to promote ethical behavior, trying to shape behaviors in certain ways. I want to share another excerpt from your book to frame this. This is early in the book—you say the following: “People tend to let their moral convictions about what is right and wrong shape how they think about what is an effective way to deal with misconduct. It’s what psychologists call moral coherence.” Then you say, “This means that people conflate what they perceive to be morally correct with what is actually effective.” That resonated so deeply with me, having been in the room so many times when compliance programs are being designed, when policies are being shaped, when trainings are being created. So, we want to do what’s effective, but picking up on what you just shared, how do we know what’s effective when the research that we have available to us more or less tells us that the way that we’ve done things, the way that we’ve tested things, doesn’t seem to really have much of an impact?

Benjamin van Rooij: Yes, that’s a hard question. There are a lot of things that we’ve grown used to doing, for instance, in the corporate sphere—having a certain compliance program, having certain aspects of it that everybody has—and because of that, we’ve really grown used to them, and people have become invested in them. The problem that we see is that we are not sure, based on the evidence that does exist, that it does much, and what we do know is that it doesn’t really work where we need it the most. The evidence that we have, for instance, with corporate compliance programs, is that they work best if you have committed leadership, good external oversight, and a good internal culture. As we write in our book, if you have all that, you probably don’t need much of a compliance program. I think the first thing to do is to say, “Let’s just start with summarizing some of the knowledge that we do have about things that we’re not sure about.” And I think our book helps with that—it helps to frame the questions. It doesn’t tell you what works, because that’s really particular to the situation; rather, it shows you the different ways in which you can try to approach a problem.

Zach Coseglia: Does it start with making an effort within the organization to actually see whether or not your set of circumstances creates a scenario where this intervention works? So, perhaps we can’t rely as much as we’d like on research that’s been done elsewhere, especially because it’s, in many ways, an emerging area, but let’s make the effort internally to actually measure ourselves.

Benjamin van Rooij: I would have a step before measurement—I would start with analyzing the root causes of the problem that you’re facing. And before that even, what is the variation in the problem that you’re facing? For instance, bribery—there may be many types of bribery happening. There may be really willful bribery for the person’s own personal interest. There may be accidental bribery. Those have very different root causes and very different ways to address them. So, I think it first starts with scoping. What is the problem, and is it even a problem? That’s number one. There are way too many problems for any organization to solve them all, so you need to prioritize: What is the biggest problem you really want to throw a preventative approach at? There are going to be other problems where you may just want to stay with the liability approach, because they’re just not big enough—they don’t really cut into your core business. Maybe you can’t even prevent them—there are always going to be some problems that are not, and should not be, on your radar. So, the first thing is to prioritize which problems. Second, the question: Are these really problems, and why are they problems? And then third, what’s the variation in the problems? Then, you go into a root cause analysis, and we have six questions in the back of the book to help with that. You go into a root cause analysis: What are the types of causes? Is it something that exists because it’s easy to do? If that’s the case, can you make it harder to do? Is it something that exists because people are unable to refrain from it—because maybe they don’t know the rules, maybe they don’t have the technical qualifications, etc.? Then you need to support them. Is it something that exists because it’s normal, because most people do it? Then you have a really big problem. I would try to go back to A and B, but there are also some interventions that work on reducing the social norms impacts here. In that way, you have a more comprehensive understanding of the root causes before you even talk about interventions. I think the problem is we’re too often jumping to interventions.

Zach Coseglia: Could you define what “complexity science” is for folks who probably are hearing it for the first time?

Benjamin van Rooij: Yes, my own simple definition is that it moves beyond just thinking that science is about finding out how A influences B. A lot of science is reductionist by design—it’s all based on this idea that we can have a full causal view of how one thing influences another thing, for instance, how a training program would affect compliance. And what you do in those types of studies, in this traditional science, is design an experimental approach where you isolate the effect of A on B, controlling for other factors. The problem with that is it becomes quite artificial. So, the best causal research, especially on the things that we’re interested in, is done in game theory experiments in labs with undergraduates. They’re asked to play a bribery game—then we get an article, and the claim of the article is that it says something about bribery that folks in the corporate world could use. I think complexity science is the opposite—it sees the world rather as an interaction of processes that exist within a larger system. For instance, it wouldn’t see compliance as just an outcome; compliance itself also shapes all these other elements. There are different ways of going at this. One way that we use is more bottom-up and inductive. For instance, in the case studies of these organizations where we’re looking at culture, the existing research on organizational culture says, “Let’s do a survey in an organization,” and based on that survey, we can get to the culture. If the survey says it’s a bad culture, then we know it’s bad. And I’m not overstating this, I think—this is the mechanism that we have.

Zach Coseglia: Yes, I don’t think you are.

Benjamin van Rooij: What we are doing is the opposite. What we are doing is saying, “Let’s take organizations that have had structural, long-term misconduct, and let’s try to trace back what is happening to elements of what we, from an anthropological and organizational science view, can see as culture.” We’re going to learn, really from these cases, what these elements are. Then, once we have one case, we go to the next one—we compare, and we gradually build from the ground up what these elements of a negative culture are. And then, only once we’ve done that, we can look at, “How can we then develop this into a diagnostic tool?” It may include survey questions, looking at certain data from companies, or focus groups, so it’s really grounded. That’s one way of doing complexity science. The other way is the opposite. For instance, we did a study of compliance with social distancing rules, and we included variables in this study from across different theories. Then we put all the data on the different variables into one big model, and in that way, we could see how they all interacted, including with compliance itself. This showed us that this interaction created a network of all these forces that was stable, and it meant that if you just changed one thing—for instance, you start more law enforcement—it probably wouldn’t affect the rest of the network. We’re using that now to do simulations where we actually see, if we switch one more variable on, what the ripple effect is through the network and how it interacts with compliance. It means that, in that way, you can have a much more realistic view of what one intervention might do if it is let loose in a real environment. And the cool thing with things like this is that it can be done with real data, but still capture the complexity.

Hui Chen: Benjamin, I’m also looking at your book, where you write, “Capturing an organization’s culture requires a complex form of research.” Certainly, from what I’ve seen, most compliance departments are not equipped to carry out, or oftentimes even appreciate, the complexities that you’re talking about. Where do we start changing that?

Benjamin van Rooij: What we need, ideally, is a group of anthropologists going in to really do the ethnography and find which units in the organization have which types of cultural problems. I think there’s a large market here. Once this is seen as a service that could be offered, and once prosecutors see, “We could do this,” then at least you can start with an analysis. So, that’s in the really bad, big cases. For all the others, you can never do this—it’s way too expensive. For other, normal clients, what you want is to be able to assess risk—to assess the risk in your culture. What is the risk that we are developing negative elements? What we are doing in our project—and I’m sorry it will take a couple of years, because we want to do it really well—is developing a data-based risk assessment tool. It’s based on the worst cases, and the argument would be going to a company and saying, “You never want to become company X, Y, or Z. X, Y, and Z became like that, in part, because they developed a negative culture, and we want you not to develop that.” I think the most important thing is that it should be self-regulatory. I don’t think that regulators or prosecutors should be involved in this culture assessment, because as soon as they are, I don’t think companies can do this anymore. It needs to be very safe—even if you find negative elements, it needs to be without consequences, because otherwise you’re not going to find them. So, the idea here is that we develop, based on the data we have, diagnostic tools that include, as I said earlier, elements that can be done through surveys, through analysis of existing data the company has access to, as well as through focus groups and maybe small bits of qualitative interviews.

Hui Chen: Let me share some contrasting stories. You talk about the companies that generated lots of headlines—huge worldwide scandals, monitors, $1 billion fines. I worked on those cases. Even from the outside, I have observed a lot of those cases, as most of the public can if they care to. The current system is fraught with the problem that we started with—it’s lawyers who don’t know this who are in charge. The prosecutors are never going to suggest that you hire anthropologists to study culture. Prosecutors are there to prosecute cases. When it moves to the monitor phase, they’re on to other cases already—they’ve made the headlines, and they are now onto the next case. The monitors tend to be almost always lawyers, with rare exceptions, certainly in the U.S. They are not the ones hiring people with these types of specialties—they are not evaluated on these. Let me give you a contrasting story about a really well-known household-name company. I had an opportunity to hear a closed-door presentation by one of its senior leaders. She was, at the time, their chief marketing officer, and she has since become a member of their board. She talked about how, when they changed their company’s logo (and if I named this logo, you would all know it), they hired an anthropologist who was embedded in their company for a year before they changed the logo. This company would invest a year in hiring an anthropologist to study how the logo affects their mission and their performance of the mission. Never in a million years would they think, “Let’s hire an anthropologist to assess our culture when it comes to compliance.” What’s the disconnect?

Benjamin van Rooij: Let me give you some counterexamples. You’re right—in the U.S. context, first of all, we don’t see a lot of behavioral teams in government, and we also don’t see a lot of behavioral teams in-house. Of course, you’re really unique in that you have one in a private firm. But if we move to England and the Netherlands, for instance, I know a big bank in the U.K. where the behavioral team has anthropologists in the audit unit, which is where the power is—way more powerful than the compliance unit. In the Netherlands, in this project, you would be surprised to know that we collaborate with the Dutch Central Bank, the main white-collar crime prosecution unit of our central prosecutors’ office (who are also focused on culture), the main environmental regulators at the Port of Rotterdam, the national labor regulators, and the national pharmacy and health care regulators—we have nearly everybody there from the government, and they have all, in the last year, said culture is the key thing. They don’t know what to do with it, and they all have different ways of looking at it. The Dutch Central Bank even has a book—a full-length book on culture, in English. I think it starts with having a first conversation that some of us should be having with people in supervisory and prosecutorial roles about culture.

If there’s one thing that our book makes clear—and the chapter is even called that: “Eating Systems for Breakfast”—it’s that whatever you do, if the culture is wrong, it’s not going to work. So, at some point, you need to address the culture, but addressing the culture is very difficult, and there are a lot of ways to do it wrong. But I think there’s some traction—I think some of the DOJ documents mention culture. I think it’s coming up more—I’m pretty sure it’ll become more central, and at least there are going to be more and more models from other countries.

Zach Coseglia: I want to share another excerpt from the book as a continuation of this discussion. But before I get there, I’ll just say that in our first episode, I talked about how one of the reasons I’m on the path that I’m on now is a frustration that I’ve experienced with the way we’ve historically done compliance. The creation of the Lab grew out of that frustration: “Are we doing it right? I’m not so sure.” You say, “It’s clear that compliance and ethics programs alone are not enough, and that they only work under particular conditions. There must be independent oversight. Corporate leadership must be committed to compliance and ethics. And the organizational climate and culture must be ethical.” This is all of what you’ve shared in our discussion now. “Just let that sink in,” you say. “The chief reason to have compliance and ethics programs is because some or all of these conditions are missing. This brings us to a startling conclusion: compliance and ethics programs only really work exactly where they are not really needed.” So, what do we do?

Benjamin van Rooij: This actually goes back to the conversation about culture. That chapter is not an argument against having a compliance or ethics program—it’s not. I think there’s a lot of value in having them—it’s just that it’s probably a necessary condition, but not a sufficient one. So, you need to look around that—you need to make a broader analysis, going back to what I said earlier about the root causes of the problems. And if they’re in the culture—and I know culture is a hard thing to define and understand—whatever you throw at it is going to be eaten by that culture. That is one of the things. The other thing is leadership. One of the things we also talk about: a lot of people say, “You need to have the right tone at the top.” I have yet to find a company, other than one of these big tech companies, that really had a bad tone at the top. It’s one of those companies where the CEO had to leave because of problems. But most companies have a fine tone at the top—it sounds wonderful. In these bad cases that I’ve collected, for instance, at some of these automobile companies, they’re not saying, “Let’s cheat on our emissions.” No, they’re actually saying, “We’re going to be a green company.” And they all say, “We are about integrity. We’re about compliance.”

One of the things we also see as part of this cultural analysis is when the tone at the top doesn’t match what some accountants have called the “smell at the bottom.” There’s this mismatch between where your mouth is and where you actually put your money. For instance, one of the oil companies that we looked at said at some point, “No, we’re all about environmental protection, all about safety.” Yet, they kept having 25% budget cuts across the board, which cut into their safety programs, and that also led the lower-level employees who manage compliance and safety to see a double message. On the one hand, they saw, “Here are all these new policies coming in, and this is what we’re supposed to do.” And then, they saw, “We’re not really backed up to actually do it, and if we complain that we can’t do it, all we hear is, ‘You have to stick to the budget cuts.’” I think with those things, having a compliance program without understanding the context in which it operates—including culture, leadership, very basic things such as what the targets are and what you’re supposed to do, mechanisms such as whether you’re able to speak out, and power structures, for instance, having a whistleblower protection program without addressing structural hierarchy problems or safety concerns—is not going to work. It’s those types of things where you get to the harder stuff, the stuff that is harder to address, change, and understand. I think that’s also what your group is doing there: you need high-end advice. You don’t just have a cookie-cutter, “Here’s your problem,” copy, paste, let’s do it. I think there’s a market for this—there’s a market for companies who really want to showcase this, who really want to be leaders, who want to be ahead of the curve, because at some point, I think the regulators and the prosecutors are going to catch up and demand all of these things, and I think people like your group can really help with that.

Zach Coseglia: That’s all the time we have in this first part of our conversation with Benjamin van Rooij. Thank you, Benjamin and Hui, for a great conversation. Please join us for the second part of our discussion where we discuss the role of compliance in innovation, where we walk through the Behavioral Code that Benjamin outlines in his book, and, of course, when we get to know Benjamin a little better with our Better Way? questionnaire. For more information about this or anything else that’s happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to the series wherever you regularly listen to podcasts, including on Apple and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don’t hesitate to reach out—we’d love to hear from you. Thanks again for listening.

Speakers

Zachary N. Coseglia
Managing Principal and Head of Innovation of R&G Insights Lab

Benjamin van Rooij
Professor of Law and Society at the Faculty of Law, University of Amsterdam; Author, The Behavioral Code