Skin in the Game? Dr. Caitlin Handron Discusses How Compliance Incentives Might Work in Practice
In this episode of There Has to Be a Better Way?, the second in a three-part podcast series on the DOJ’s recent policy announcements, Hui Chen and Zach Coseglia speak with cultural psychologist Dr. Caitlin Handron about both personal experiences and academic research around behavioral incentives. They discuss the complexity of both human and organizational behavior, and advocate for companies to take a nuanced look at their cultures and gather data to find creative ways to incentivize compliance without unintended consequences.
Zach Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab. This is a curiosity podcast, where we are on a hunt for good ideas and for better ways of tackling long-standing organizational challenges. I’m Zach Coseglia, and I’m joined by my friend and co-host, Hui Chen. Hi, Hui.
Hui Chen: Hi, Zach.
Zach Coseglia: Hui, we have a special episode today—this is a continuation in a series. And we’re also joined once again by our colleague and friend, Caitlin Handron. Dr. Handron, hello.
Welcome back to the Better Way? podcast. I bet you didn’t think that you’d be coming back quite so soon. I think it just popped up on your calendar and we said, “Caitlin, we want you to opine on incentives and policy set by the Department of Justice—join us,” and here we are.
Caitlin Handron: Which I’m absolutely thrilled to do. I love speaking with you both, so this is a real joy.
Zach Coseglia: We’re really happy to have you. Hui, that is what we’re going to be talking about today—we’re going to be talking about the comments made, the policy set, the speeches delivered by Lisa Monaco and Kenneth Polite at the ABA conference recently. Specifically, today, we’re going to talk about DOJ policy around incentivizing compliance through compensation and review structures.
Hui Chen: Let me start with a quote from Deputy Attorney General Lisa Monaco, where she introduces this topic about DOJ’s initiative on compensation and clawback by saying, “Nothing grabs attention or demands personal investment like having skin in the game, through direct and tangible financial incentives.” Here, the focus is squarely on direct and tangible financial incentives. But, incentives take more forms than financial incentives—there are other forms of incentives that operate to motivate people. So, Caitlin, can you tell us a little bit more about the world of incentives?
Zach Coseglia: What’s an incentive?
Caitlin Handron: Essentially, an incentive is just something we’re given in order to do something—a reward or a punishment intended to get us to behave in a particular way. The world of research on incentives is quite rich—there is a lot going on in this world. Unfortunately, the recommendations we can glean, or the insights we can draw from the literature about what we should be doing, are actually pretty mixed. So, as you mentioned, there are multiple kinds of incentives—there are financial incentives, but there are also other kinds that are non-monetary. As you might guess, the research does suggest that our reactions to these incentives can be pretty varied depending on what kind of incentive it is and how much it is.
Zach Coseglia: Incentives can take various forms, both financial and otherwise—I think we all get that. The research is telling us that incentives may work in some cases and they may not work in others, and that in some cases, they may actually have counterintuitive outcomes. So, what do we do?
Caitlin Handron: That is a great question. I’d say it really depends on what you’re trying to do and the context. I think, as a psychologist, people can get kind of annoyed that the response is so often: “It depends.” And it can depend on so many factors in this case. I know one topic of interest for Hui is this question of: Should we really be incentivizing behavior that people should be doing anyway, or should you be incentivized to do the right thing? I think that is a great question and one that researchers have spent a lot of time wondering about. One big objection to the use of incentives is the idea that they can crowd out intrinsic motivation. If we’re given an incentive or a reward for doing something that we already wanted to do anyway, then eventually, we might begin to like that activity less and less and associate our desire to do it only with receiving that reward. What the research suggests is that this can be the case under certain circumstances: if you’re given a reward for doing something that otherwise might be fun, like a puzzle you would do regardless, then once the incentive is taken away, that intrinsic motivation can go down, and people might not be as excited or willing to do it.
Zach Coseglia: Hui, I want to ask you a question on this front because I think we hear this a lot, not just from social scientists, like Caitlin, but we hear it just from people who are operating in this space—we hear, “Why should we be incentivizing people to do things that they’re obligated to do? Why should we be incentivizing people to do the right thing?” The question I have is that’s an interesting question to ponder, but also—in the organizational context, which is the context within which we’re discussing it—isn’t everything that one does within their organization ultimately driven by an incentive to get paid for it? We work at a place; we get paid. We do our job, and we are compensated and rewarded for it, oftentimes, financially.
Hui Chen: I think that’s true. Certainly, I would say my own experience is that I consider a lot more than financial compensation. When I graduated from law school, I had the choice to go to a very well-paying law firm job or to be a prosecutor. There was no amount of money the law firm could offer that would get me to turn down the prosecutor opportunity because that was what I always wanted to do—that was why I went to law school. I wanted to do justice. I wanted to go get the bad guys. It paid a fraction of what the law firm job was offering, but there was no question in my mind which job I was going to take. There are a lot of things that, again, speaking only to my own experience, I do because I want to do them, because I believe they’re the right thing. One of the stories I read in a book about incentivizing goes like this: Think about yourself hosting a Thanksgiving dinner. You prepared the dinner with a lot of labor of love. Everybody sits down, has a wonderful time. At the end of the meal, instead of just saying, “Thank you—that was so nice,” one of the guests gets up, takes out $200, and says, “Thanks very much for that dinner.” How does that make you feel? This was something that you did out of love. Your compensation was seeing everyone enjoying your food—it was seeing everyone having a good time in your home. And somebody just turned all of that into $200—that almost feels insulting, I would think.
Zach Coseglia: Yes, especially to you because I know that you never miss an opportunity to give us a good story, and you never miss an opportunity to give us a good story that involves food or an analogy that involves food.
Hui Chen: This is true. I also will share an example, again, from my own experience—and I invite you all to think back to your own experience of similar things. I am an environmentalist—I try to avoid plastic waste, for example. For probably over 30 years, I have always brought my own reusable bag when I go grocery shopping. A couple of years ago, my local farmer’s market started a campaign—they wanted to get more people to do what I was already doing. At the end of your shopping trip, you could go to a table and show them how many purchases you had made from different vendors without using plastic, carrying everything in your own reusable bag, and you would earn prizes at certain levels—the more you had, the better the prize. You would not believe how eagerly I competed in this. When I discovered it, I was like, “Wow, somebody’s rewarding me for this. I am really good at this and I deserve to be rewarded.” So, every time I went to the farmer’s market that month, I was at that table making sure that I got my level of prize. The intellectual part of my head said, “You really shouldn’t be doing this, because you’ve been doing this for 30 years and you didn’t do it because of this money. In fact, you’re taking prizes away from people who need to be incentivized to do this.” Did that stop me from going to claim every prize? No, I’m embarrassed to say. I also wonder how many people shopped, got the plastics, and just hid the plastic bags and put everything in a reusable bag to claim the prize. I don’t know if that happened, but I can certainly imagine it. Caitlin, I would love to hear your reactions and any particular research that sheds light on this embarrassing behavior I just confessed to. How widespread is this in terms of what we know? Is this common?
Caitlin Handron: You offered two stories and I want to react to both. There’s quite a bit of research that came to mind. First, on this piece about offering money in response to a social offering that someone gives us: there is research suggesting that the psychology, or the mindset, we bring really differs depending on whether we’re dealing with money or non-cash incentives. What the researchers hypothesized was that when we are working with money, we’re really in a cost–benefit mindset, and so, the greater the incentive, the harder we’re willing to work or the more we’re willing to do. But with a non-cash reward, they hypothesized, we’re willing to work hard regardless, because it’s more about the social contract—it’s like you’re being given a gift and you want to deserve that gift. It doesn’t necessarily matter as much how large that gift is, to the point that, in some of these studies, people actually worked harder when they received no money than when they received a small amount of money.
Then, to your second story about the farmer’s market, the shopping bags, I think we all have these kinds of stories that really challenge some of our intuitions about whether incentives work in the ways that we expect them to. What the research suggests is that incentives seem to work very well, at least in the short term, but then once they’re removed, that’s really where we see some of the negative results happen—the crowding out of intrinsic motivation or frustration. I imagine, if you had a 30-year habit of using the bags, you continued to use the bags even once the rewards stopped, and yet, there’s still the emotional experience that you had around it. Any time we introduce something new to a system, it’s going to affect it, potentially, and not always in ways that we can anticipate or that we hope for. And so, I think your story is common, and I think we all can probably reflect on these different examples that we have in our own minds about when incentives did work or didn’t work. There’s actually some research to suggest that if it’s too large, it can also backfire. For instance, the question of: “Do you want a nuclear waste site in your backyard?” If you’re offered a lot of money, it might signal to you that that’s really dangerous and you don’t want that to happen. Whereas, if you are not offered an incentive, then maybe you’re weighing more of the consideration of, “Is it my civic duty? Do I have to agree to this just because it’s part of being a citizen?”
Zach Coseglia: Caitlin, I’m wondering if you can tell us a little bit more about this childcare study that we all hear about a lot and that, I think, has become a part of the pop culture zeitgeist around behavioral science, because I’d love to hear about the study itself but then also talk about what happened since the study was released and how that contributes to this discussion around the complexity of incentives.
Caitlin Handron: Absolutely. In the study, essentially, the researchers were interested in working with childcare centers to reduce the number of parents who were showing up late to collect their children, and what they decided to try was a disincentive. They introduced a very low fine—I think it translated to about $3.00 at the time—for parents who arrived late to pick up their children. And what they found was that with the introduction of the fine, the number of parents who were showing up late actually increased. Unfortunately, when the centers decided the fine wasn’t working out as well as they intended and rolled it back, the norms persisted—parents continued to show up late to pick up their children. And so, I know this research made a big splash—it was very heavily cited, both in academic research and in the popular press—but more recent research has failed to replicate it.
Zach Coseglia: If I could just jump in there—when you say it “failed to replicate,” what does that mean for folks listening?
Caitlin Handron: It means that other researchers used the same or similar methods to try to see if they could discover the same finding, and they could not. In science, you expect that if you use the same or similar ways of doing things, you should get the same or similar results. When we fail to see that, it calls into question the original findings, and suggests that maybe the finding was just a fluke, or maybe it reflected something very particular about the conditions of that study that won’t generalize to other samples. And so, this is really speaking to a broader crisis in the behavioral sciences, where a number of studies are failing to replicate. It’s really a cause for concern, because so much of what the field of behavioral science is trying to advocate for is that we need a much more scientifically based approach to how we are structuring our society. I think we advocate very strongly for the use of behavioral sciences in business, government, and basically anywhere we can fit it in.
I think being psychologically informed is tremendously important—and not just psychologically; the behavioral sciences span a number of fields. The more you know, hopefully, the more informed you will be when you’re making these decisions. However, of course, we’re in this crisis where people are now debating whether they can trust the results, or which results to trust. For instance, even on the studies I was discussing earlier about the crowding out of intrinsic motivation, I was reading that three meta-analyses came out around the same time with differing results. What do you do when even one meta-analysis is suggesting something different from another? It’s a little discouraging, but I would say that, like any field with growing pains, the behavioral sciences are evolving. There is increasing attention being placed on the idea that these systems are far more complex than we’ve given them credit for. Sticks and carrots like incentives are very simple tools. If we were dealing with a merely complicated situation or system, there might be right answers, and identifiable conditions under which they’ll work in one way or another. But when you’re dealing with a complex system—things aren’t as straightforward, not as linear, less predictable—we don’t necessarily have clear answers about whether it’ll work, when it’ll work, and under which conditions.
Zach Coseglia: Caitlin, one thing I just want to underscore there is you talked about complicated systems and complex systems. Now, I think a lot of us might hear that and think, when you use “complicated,” you’re using it as a synonym for “complex,” but you’re actually being very intentional in the ways in which you’re using those words. So, tell us just a little bit about what is the difference between a complicated system and what I think we’re talking about here—in the context of incentives, compliance, and shaping behaviors in humans—which is a complex system.
Caitlin Handron: Yes, thank you so much. This is based on the Cynefin framework from Dave Snowden, which differentiates between ordered systems (the simple and the complicated, which are constrained and fairly predictable) and complex systems. With something that’s complicated—you can think of a car, for instance—it’s very complicated. I don’t know how to fix a car—I need an expert to help me any time I’m dealing with my car, even for the most basic thing. But what we know to be true of a car is that, if you have the expertise, then you have the expertise—you know how it works, you can offer recommendations, you can take it apart and put it back together. That’s not necessarily the case in a complex system, where you have a number of independent actors that are all engaging with one another and are all complex in and of themselves. A single human is so complex, and then you put a whole bunch of humans together and you’ve got an extraordinarily complex situation where, even if incentives have worked previously in a number of different studies, there might be something particular that emerges in a complex system that will disrupt how that plays out.
Zach Coseglia: That’s great. I think we actually just experienced the living example of that, which is Hui. Hui is a complex system. Hui is not motivated by money in the job that she takes, is not motivated by money when she cooks a meal for someone (is, in fact, offended when they offer her money for the meal that she’s created), but is really motivated by small benefits associated with using environmentally friendly bags at the farmer’s market—that is complexity right there.
Hui Chen: As are all of us. I was just only foolish enough to share some of my complexities, but I think that the point is, when we think about it, we’re complex because we’re not all two plus two equals four.
I also think it’s interesting because some of the more simplistic financial or otherwise immediate incentive systems might work better in a more simplistic situation—a one-step decision as opposed to a multi-step decision. The example I would give is I had an uncle who was always very late to every single family event. So, my mother required him to give a $100 U.S. deposit, and he got that refunded if he showed up on time, but he lost it if he was more than 15 minutes late. It worked—he was never late to events that my mother hosted. But that was one behavior: just don’t be late. Some of the behaviors that we’re talking about in an organization, where we’re talking about ethics and compliance, it’s not that simple—it oftentimes involves a series of decisions, different decisions, or repeated decisions over time. I think the complexity in all of that is there’s really no simple way to incentivize a series of decisions over time. Or am I simplifying the matter?
Caitlin Handron: The research that I know of in the education space is consistent with your intuition here that it depends on what the ask is. With simple tasks, incentives are more effective—things like attendance—so, incentives work well if all you have to do is show up. But, if it’s a matter of performance, that’s where things begin to get a little trickier. I think your intuition is right about it being repeated over time. And any time there’s more distance between the action and the incentive, I think that also makes everything a bit murkier, as well.
Zach Coseglia: One of the things I also want to talk about, shifting gears just a little bit, is a topic that we talked about the last time you were on the podcast and something that we talk about a lot, which is measurement, outcomes, data, and testing. You made this point, Caitlin, when you were talking about the childcare study and the fact that it hasn’t replicated: it required testing, measurement, and data to ultimately figure that out. The truth is, there’s probably not going to be a silver bullet for every organization or every group of people. And so, it really does underscore how important it is, whenever we put in place new policies that we hope will shape behavior, that we actually measure the outcomes, using data, to determine whether or not they work. Here, though, we have policy being articulated, but we don’t know whether it’s going to work. So, what do you do? Maybe, Hui, I come to you first. The Department of Justice says that this is something that folks should do—I think a lot of folks will do it for that reason, but what we really want, and I think what the Department ultimately really wants, is that we do things that work. So, what do we do? There’s some conflict there.
Hui Chen: There’s definitely conflict there, because what I’m assuming, based on the speeches that were made surrounding this initiative, is that there were certain relatively concrete outcomes they’re hoping to promote. They’re hoping, I believe, to promote less recidivism. They’re hoping to promote better prevention of criminal behavior in organizations. They’re hoping to promote better, stronger cultures of ethics and/or compliance—and I distinguish those two words, but we can talk about that later. So, the question is: Are they collecting data to study whether these initiatives are working towards those goals? I think what we need to be cognizant of is that when a pilot program like this is announced, in some ways, it’s like a call for a social experiment. I’m now quoting specifically from the pilot program announcement itself—one of the things they want to see is “incentives for employees who demonstrate full commitment to compliance processes.” I’m not even sure what that means exactly, but if somebody could figure out what this is, maybe they can articulate a clear hypothesis. And if there’s a clear hypothesis, maybe we can have data that would help us measure whether it’s accomplishing its intended objectives. But there really is no mention of what kind of data they would be collecting to evaluate the success of this program or the validity of their hypothesis. Caitlin, if it were you, assuming the goals are the ones I articulated—basically, less criminal activity in corporations—what kind of data should they be collecting for measurement?
Caitlin Handron: I think there are a lot of, maybe, more straightforward responses in the compliance space in terms of how we look at compliance, but I would say what comes to mind for me is really just the importance of culture—and that’s probably not surprising. When we’re talking about complexity, and when we’re talking about human behavior and what drives human behavior, there’s just so much disconnect between what we know about human psychology, how humans operate, and how the world is structured. There’s now significant research and conversation around just neuroscience and how much of our cognition is happening outside of our conscious awareness. Yet, we have built systems and structures based on the very small percentage of our cognition that’s happening consciously and more or less rationally. And so, I think that definitely needs to be a part of the conversation, and it’s why I always turn back to culture in terms of thinking about how we ultimately want people to behave.
I’d say, in terms of measurement, there’s so much that can go into assessing the culture and whether it’s ethical, but I think it comes down to listening to people and finding out what their experiences are and what motivates them. The research seems to suggest that people are both driven by what they think is right and by what they think others think is right. It comes down to creating a culture where people want to be there and know that this is an environment where people do the right thing, so that you attract the talent and the people who, like Hui, are motivated intrinsically to do the right thing. But then, you also want to make sure that you’re supporting and encouraging the broader cultural context to align with that, as well, so you’re bringing people into an environment that’s consistent with their own goals. And so, concretely, what does that actually look like? Listening to people, gathering stories, finding out what their experiences are like, finding out what the processes are like for them: Is it comfortable and safe to report? But also, considering leadership. The introduction of incentives potentially is going to call into question trust, and trust is just so important within the organizational context. Once you introduce incentives, the question is: Why are people doing what they’re doing? Before, you maybe would have inferred that it was of their own volition, that they were acting on what they believe is right. Once you introduce incentives, now you’re maybe questioning that and you may really begin to doubt how genuine people’s motivations are, and so, that’s definitely something to consider, as well.
Hui Chen: As we come closer to the end of our time, I also want to pick up on that last point you made in terms of how you perceive people who are acting under compensation. Going back to the language in the pilot announcement, “incentives for employees who demonstrate full commitment to the compliance processes,” you can easily see that being applied to leaders in the company. So, let’s incentivize them for promoting compliance. Now, one of my worries is when my leader is standing up at a town hall talking about ethics and value; right now, I like to think that they are doing it out of their personal conviction—so, intrinsically motivated. If I learn that they’re actually getting paid according to the pro-compliance messages that they deliver, for me, that would change my perception, because I might think that they are only saying that because they’re paid to say it and not because they really believe it. What’s your view on that?
Caitlin Handron: Absolutely, I agree 100%. The research does suggest that the introduction of incentives can disrupt trust and the development of trust across different processes, so I think your intuition is spot on.
Zach Coseglia: One of the things that this raises for me that I think is inevitable is the risk that what we’re actually doing is creating compliance theater as opposed to actual compliance, that we’re manufacturing it and treating it, to your point, Caitlin, like a complicated system—like a car, robot, or machine. When in fact, it is this complex system that—to really work, to be real, to be authentic, to engender trust, to shape behaviors—it actually really does require it to be intrinsically motivated. If that’s the case, what’s an organization to do if it’s not?
Caitlin Handron: I think the way to get people to behave in the way that you are hoping that they will is really to make it social, to make it something that everyone’s doing, to make it so that you’re floating downstream with the current and not fighting against it. And so, I think any time we’re asking a question of—how do we increase intrinsic motivation?—there are a number of steps we can take. Making things fun, making things engaging, making them social—these are all strategies that can increase people’s willingness to engage in an activity.
Zach Coseglia: We’re talking about a complex system—so, who is the organization? What is the organization? The organization may have values and there may be leadership that’s driving decision-making consistent with those values that is intrinsically motivated, but you’ve got 100,000 employees out there and some of them may not be. That’s what I’m thinking about: How do we drive compliance when there may be actors who aren’t intrinsically motivated? Hui?
Hui Chen: I think precisely by starting to do what you just did, Zach, which is distinguishing the organization in a way that recognizes the individuals. People oftentimes talk about an organization as if it were one single individual, which fails to recognize that complexity. Recognizing that there are going to be people who are more intrinsically motivated and people who are going to be externally motivated, you really need to deploy a combination that speaks to these different motivations. I would say what I have learned from this conversation, the Better Way, to me, is to think of incentives in a way that’s complex. I’m learning a new use of an old word. Think about incentives as a complex concept: it’s not just about financial compensation, and it’s not all carrots or all sticks. Some people, on some things, will respond better to the carrots, and other people, on other things, will respond better to the sticks. So, what is challenging here is grappling with that complexity—figuring out in what circumstances, with whom, and in what way we find those intrinsic or extrinsic motivations.
Zach Coseglia: Yes, that makes good sense. When I first read this, my initial reaction, and I think I even said this earlier, was, “Here’s policy that’s intended to shape behavior that isn’t actually tested. Where’s the data? Where’s the research?” And so, my mind originally went to, “If an organization’s going to put in place an incentive program, it should then measure its outcomes.” But, I actually wonder if the Better Way—and Caitlin, I think this is what you’ve been saying—is we should really understand the psychology of our organization. We should understand the complexity of our organization. We should understand, through measurement of our culture, whether or not our people are incentivized by the financial reward or whether or not they are already intrinsically motivated. Let’s understand where people are, and then let’s shape policy in response to that. To your point, Hui, we have the opportunity to set policy that isn’t monolithic because an organization isn’t monolithic. And so, if we have that data, if we understand our organization in that way, we’re able to put policies in place that are targeted to the people and the complexity of our organization, so that the way that we do it here might be slightly different than the way that we do it there because we understand our people.
Caitlin Handron: I love that—I couldn’t agree more. I think it really does begin with finding out where you are, listening to folks, and understanding them and what’s driving them. Complexity science experts suggest that when you’re dealing with a complex system, there aren’t necessarily best practices—there are emergent practices. There will be the discovery, the testing, and then the emergence of something that might completely surprise you—I think being prepared for that is another piece of this. What I always recommend is that people go into it—and we’ve talked about this—with a scientific mindset, with curiosity, but also with the willingness to collect the data and find out that you might be wrong, completely wrong. This goes back to the idea that things aren’t necessarily structured optimally, based on what we know from neuroscience. I just feel so strongly that we need to overcome our attachment to the status quo and really be willing to get creative and experiment. I know Hui told me a story about offering a compliance ambassador dinner with the CEO instead of a financial incentive. I think that’s a really creative alternative that could be very motivating for some people. It might be less motivating for others. It might be a nightmare scenario for some. But it could work well, and I think that element of curiosity, paired with a willingness to actually look at the data, is exactly what we need when we’re dealing with complex systems.
Zach Coseglia: What I’m hearing is your “Better Way” is to find Better Ways. Be creative. Be innovative. Disrupt the status quo. Try something new. Don’t land on what’s easy and predictable.
Hui Chen: And gather evidence to validate.
Zach Coseglia: And data, yes. This has been such a fun discussion. Caitlin, thank you so much for coming back. Any final words for our audience?
Caitlin Handron: I do just want to put a plug in for creativity and thinking outside the box. We don’t often get a lot of room to express creativity and to move beyond what’s already been done before. There’s so much psychological resistance to changing what has been done before that I think we all need to be a little bit more intentional about allowing a little bit of off-the-wall thinking or out-of-the-box thinking and just really question some of these foundational assumptions that structure so much of our lives. I think there’s a lot of possibility once we begin to move beyond some of those assumptions.
Zach Coseglia: I couldn’t have said it better myself—I fully agree. Hui, any final thoughts from you?
Hui Chen: I would say embrace the complexity, but always be prepared to defend it. You defend it by collecting evidence and data along the way.
Zach Coseglia: That’s all the time we have on this episode of the Better Way? podcast. Caitlin, thank you so much for joining us again. And, thank you all for tuning in to the Better Way? podcast and exploring all of these better ways with us. For more information about this or anything else that’s happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to the series wherever you regularly listen to podcasts, including on Apple, Google, and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for better ways we should explore, please don’t hesitate to reach out—we’d love to hear from you. Thanks again for listening.