Data-Powered Insights: Telling Your Risk, Culture, and Ethics Stories with Data

Podcast
April 3, 2024
38:57

In today’s business world, we’re increasingly being asked to make sense of data. But how do we turn numbers into meaningful insights about a company’s risks, culture, and compliance? On this episode of the Better Way? podcast, co-hosts Zach Coseglia and Hui Chen are joined for a second time by R&G Insights Lab’s very own David Yanofsky to go deeper into the world of data storytelling. With a background in data journalism, David talks about how data can be used to tell multiple stories and why context is key to making them meaningful. The three explore various data concepts, such as correlation, causation, magnitude, distribution and flow. Even if you’re not a numbers person, you’ll walk away with a more sophisticated understanding of how data can be useful to help understand and navigate your organization.


Transcript:

Zach Coseglia: Welcome back to the Better Way? podcast, brought to you by R&G Insights Lab. This is a curiosity podcast, where we ask, “There has to be a better way, right?” There just has to be. I’m Zach Coseglia, the co-founder of R&G Insights Lab, and I am joined, as always, by the one and only Hui Chen. Hi, Hui.

Hui Chen: Hi, Zach. How are you today?

Zach Coseglia: I am good. I am super excited because we have another internal guest, and another repeat guest, in David Yanofsky. David, welcome to the podcast.

David Yanofsky: Hi, Zach and Hui.

Zach Coseglia: David, why don’t you just reintroduce yourself to our listeners?

David Yanofsky: I’m David Yanofsky—I run the Lab’s data visualization and data analysis practice. Prior to doing that, I spent a number of years in journalism, investigating and trying to understand companies from the outside, rather than from the inside.

Zach Coseglia: Terrific. We’re going to talk today about a topic we explored with you on your first visit to the podcast, which is data storytelling, but we’re going to dig a little bit deeper. David, one of my favorite parts of your original time on the podcast was you introducing us to the concept of “precision journalism,” and then us adopting that phrase for “precision compliance,” “precision DEI,” or “precision fill-in-the-blank.” Why don’t you remind folks about “precision journalism,” and your work as a journalist prior to joining the Lab.

David Yanofsky: “Precision journalism,” which is also referred to more colloquially as “data journalism,” is using quantitative social science methods to report on the world around you. And so, it’s using data that already exists—it’s collecting data that you don’t have and applying scientific methods and statistical methods to it to learn about your community, the economy, or whatever else you’re reporting on. That’s what I was doing—I did that most recently at Quartz, and prior to that at Bloomberg.

Zach Coseglia: You say that “data storytelling is just storytelling, but with data.” Talk to us about what you mean by that, and how we can make the concept of “data storytelling” just more relatable to those of us who probably are in the business of telling stories in one way or another every day.

David Yanofsky: There are folks who need to communicate the metrics and successes of their business unit, their group, or whatever their work is. As soon as you have a manager, a culture in your organization, or some other business demand to "start quantifying those successes and areas that could use improvement," people really struggle with how to keep telling that story in terms as specific as the ones numbers give you. I say "data storytelling is just storytelling" because in journalism, your work doesn't get better or worse just because you've added a number—adding a number doesn't change the requirements of what you need to communicate to your audience. It's another detail, it's another fact, it's another avenue to reinforce the ideas and concepts you're trying to convey to people who may be more predisposed to understanding the success of something as a percentage increase, or through a chart, instead of through a sentence or a feeling.

Hui Chen: There are other careers I might have gone into had I been more comfortable with numbers. I think your concept of "telling a story with data" is really new to a lot of people, because certainly in the compliance field, many of the people working in it are just being introduced to the idea of "data analytics," and, I think, many of them are still struggling with what that is. Then you layer on top, "Not only do you have to analyze, but you have to tell a story," and that can feel overwhelming to a lot of people. Hence, I think a lot of times in compliance presentations you just get these rows and rows of data, and it's difficult for both the presenter and the audience to easily make sense of it. This is where a lot of what you call "visualization" comes into play. When I'm looking at a massive spreadsheet with lots of fields and numbers, I don't know what to make of it. Yet, when you do the right visualization, you can see that spike in the trend, or you can see a lot of dense points in a particular area, and that's when you start making sense of some of these things. To the people who are struggling, what would be the first couple of steps that you would advise them to take to transform this "data dump" into "data storytelling"?

David Yanofsky: Good stories have beginnings, middles, and ends. You need to know what you want to tell people, you have to introduce the issues, and, in introducing those issues to your audience, you need to capture their attention and set expectations; that's the beginning. In the middle, you need to provide details: "What's happening?" You need to communicate status, trends, and maybe discuss challenges. In the end, you want to summarize: Ensure that your points are being made, reinforce the reason why you're communicating this stuff to begin with, and perhaps even have a call to action. You're giving someone this information for a reason, and you want them to be able to take an action because of it. That's just a normal storytelling arc. But to your point, Hui, about not being comfortable—you see a big spreadsheet full of numbers and you just don't get it, or you see some charts and you're confused by them. When I think of folks who have that discomfort, I try to remind them that they look at the weather report every morning, or that they might open the newspaper to a page of tiny numbers that fills the entire broadsheet—there are all sorts of maps, charts, and data visualizations, and the text is tiny—but you can look at that and understand the story it's telling. By looking at all of that data and picking out the exact visualizations or metrics that are relevant for you, you can answer lots of questions about the day. "What am I supposed to wear today?" "Am I going to be able to exercise outside today?" "Should I walk to work?" "Do I need an umbrella?" All of these things are just based on the numbers that are on that page. Not everyone reads the newspaper—some people get their weather report from, say, the local TV station, and there you have someone presenting all of that information to you. You have someone standing in front of a lot of the same data, pointing out all of its various features, and telling a story about the weather of the day based on the data that's coming in. That is a completely data-focused story that people are used to consuming, proactively or passively, on a near-daily basis. And so, have confidence in your ability to understand stories with numbers, because you're already doing it.

Hui Chen: I love that analogy because, to me, it's very easily applicable to compliance. When you're looking at all of this data, what you want to know is, "Is there a storm coming?" "Is there a part of our program that's particularly susceptible to a certain type of risk that I want to fortify?"

Zach Coseglia: One of the things that I really like about storytelling as a device for helping the uncomfortable get more comfortable is that it's not only a device for thinking about the way in which we're going to present the data—I actually use it as a way to start reframing the discussion on the front end about what we need to be telling stories about at all. We talk to some of our stakeholders, some of our clients, some of our friends in the space, and I hear folks starting the conversation around things like, "We have X number of ERP systems. This one communicates with that one, and this one doesn't communicate with the other. We have a bunch of data in Excel spreadsheets. We want to get access to our HR data, but we don't have access to that, and there are political landmines associated with getting it. We don't have systems set up for this, but we have systems set up for that." What winds up happening is that a swirl arises that prevents folks from making progress. And so, rather than starting the conversation around, "What data sources do we need?" or "What data sources are connected?" or talking explicitly about "data," I always try to reframe the question as simply, "What story do you want to be able to tell?" When you start the conversation about "data," you wind up ending the conversation with just a dump of data, of numbers, of that experience that you described, Hui, of looking at the Excel spreadsheet. When, instead, you start with, "What stories do we have to tell?" or "What stories do we want to be able to tell?" we're naturally putting ourselves on the right path to creating the kind of final work product that ultimately is going to bring value to the stakeholder—that's going to be actionable, and that isn't going to be susceptible to the "So what?" factor, which, all too often, winds up getting in the way of meaningful, data-driven analytics progress.

Hui Chen: Also, the story concept may be very helpful to the people who ask Zach and me a lot of these questions, like, "What kind of data should we be looking at? Everybody tells me I have to be doing data analytics. I don't know what data I should be looking at, other than the traditional ones of how many people are under investigation, how many cases are closed and open, and how many people completed training." When you start talking about "data as a story," when it's reframed that way, you start asking the questions that would help you build the story. So, if you look at one set of data, say, how many investigations the company has completed, and the traditional breakdown by business unit and by geography, that's pretty much as far as most compliance data analytics go. But then you start thinking about it: How do you put a story together about investigations? It's the traditional: Who? What? When? Where? Why? Who is committing this misconduct? Is it people who are senior in the company? So, besides the geographical breakdown and the business unit breakdown, there's a lot of demographic information that can come into play, and you can start looking at those and comparing them with the investigations numbers. When are they committing this misconduct? Within their first six months at the company? Is it happening more frequently when they're 20 years into their tenure? If you go through these factors, you start building a story by thinking about what other datapoints can fill in this story to answer all these who, what, when, where, why questions. Zach, I know we've done work with clients on precisely that kind of storytelling.

Zach Coseglia: One-hundred percent. But it's even more than just more effectively telling the who, what, when, why story—it's how you can use that data. Let's take investigations data, for example. We can use it to tell a story about something like, "Have we effectively mitigated risk?" or about whether our employees are making good decisions. Does our compliance program work? Are our employees learning from the trainings that we're giving them? These are all stories that, I think, are on all of our minds as compliance people, and we can answer these questions with data. And so, we should not start by saying, "Let's talk about ERP systems. Let's talk about Excel spreadsheets. Let's talk about data." We should start by saying, before we even get into a discussion about what data we need or have, how it's structured, or how much confidence we have in it—what's the question we want to answer? We want to answer the question: Are our employees making good decisions? Alright, now let's talk about what data we need to answer that question.

David Yanofsky: I think it's also good to remember that a lot of different stories can be told with the exact same data. I think about this from the media landscape perspective: if you have a local candidate for office, they're probably only covered by the local newspaper, the local radio, the local TV. In that same area, a member of Congress might be covered a little more broadly, perhaps by a couple of national newspapers that are focused on politics. Someone running for Senate might be a bigger race, and now you have a national story being told about that candidate, but you also have a local story being told about that candidate by those local media, which might focus more on individual issues in that community. Whereas the larger national or global publications reporting on the race are going to be talking about the implications for control of the legislature, for national policy, or for the types of votes that will happen if one or the other of those candidates gets elected. All of those are valid stories to be telling, but each reaches a different audience, and you're trying to find the things that are most important to tell that audience in that story.

Hui Chen: Allow me to do the translation for the ethics and compliance audience. You want to have your data storytelling in a way that different parts of your organization can engage with the data in a way that’s relevant to them. So, looking at the same data, the country manager of country X would have different interests in the same set of data as the global chief financial officer. They’re all looking at the same data, but you want to have your “data story” set up in a way that the country manager can go in and do whatever filtering that she needs to do and come up with an engagement with the data in a way that’s relevant to her, and the global CFO can do the same thing with whatever is relevant to him. And you do this in a way that’s customized for these different stakeholders so that they can get the story that interests them, that matters to them.

Zach Coseglia: David, could you talk a little bit about a project that we recently did for a client where we were tasked with creating that kind of high-level but also filterable story around compliance performance? And specifically, talk about how we framed the narrative into different buckets using various datapoints, but also framing or shaping individual KPIs that fell into several high-level buckets.

David Yanofsky: Absolutely. We came up with a whole framework for nesting and overlapping all of the various concepts that you use to tell a data story—taking all of those pieces and interlocking them, overlapping them, putting them side by side, fitting them in opposition—so that you can mix and match them and view them in all sorts of different ways that make sense. The most atomic item in this system was a metric. All of those metrics rolled up into analysis theories, or analysis ideas, and those ideas were questions. They were questions, Zach, like the ones you were just bringing up: "Is our compliance program working? Are our employees being non-compliant? In what manner are our employees being non-compliant?" Under each of those, we had various metrics that could help inform that question or statement, and all of those sat inside a framework built around understanding employees' conduct, understanding their knowledge, and understanding their perceptions. Because each analysis theory has metrics that drive it, you can now pick, at any level, the relevant pieces of your compliance program that you need to report on to your audience. You might have people who are interested in specific metrics. You might have people who are interested in specific questions or analysis theories. You might have people who are interested in various components, or any combination of those things. And now, you can roll them all together and interlock them in a way that tells a cohesive story about your program that is relevant to your audience, rather than just saying, "Here's a bunch of numbers that we collect, and aren't you interested in them?"

What made these metrics really powerful, though, was ensuring that our client was not just capturing the number but capturing information about the number—so, understanding what the normal range of that number is, or their comfortable range for it. Obviously, organizations are human, and humans are variable, and so these metrics are going to change from month to month, from week to week, from quarter to quarter, just by the fact of human nature. Instead of saying, "The perfect number is 50," you say, "An acceptable number is between 40 and 60," and you define that range for every single number you're collecting—for another metric, that range might be zero to 60. Then you need to know how far above 60 you should start getting more and more concerned. Where are the breakpoints between "something that's a little concerning" and "needs immediate attention"? That can happen on both sides of those metrics. When we're talking about creating these comprehensive metrics, it's about adding in those features at the start and saying, "We have numbers, we're collecting them, and we're calculating them. Here's where we expect them to be, and here's where we start worrying about them." Now, instead of just putting numbers in front of people—whether those people are your line managers or your CEO—they can see that, "This is an unacceptable place to be," or, "This is just an uncomfortable place to be."
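
To make the range idea concrete, here is a minimal sketch in Python of how a metric with an acceptable range, a wider concern range, and a place in a conduct/knowledge/perceptions hierarchy might be represented. The metric names, values, ranges, and nesting are invented for illustration; they are not the actual framework or thresholds delivered to the client.

```python
# Minimal, illustrative sketch of a range-aware compliance metric.
# All names, values, and ranges below are invented examples.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float
    ok_range: tuple        # (low, high): the "comfortable" range
    concern_range: tuple   # (low, high): outside this, escalate immediately

    def status(self) -> str:
        low_ok, high_ok = self.ok_range
        low_c, high_c = self.concern_range
        if low_ok <= self.value <= high_ok:
            return "acceptable"
        if low_c <= self.value <= high_c:
            return "a little concerning"   # off the comfortable range; keep watching
        return "needs immediate attention"

# Each metric rolls up into an analysis question, which rolls up into a
# higher-level bucket (conduct, knowledge, perceptions).
framework = {
    "Conduct": {
        "Are our reporting channels operating as they should?": [
            Metric("reports per 1,000 employees", value=7.2,
                   ok_range=(4, 10), concern_range=(2, 14)),
            Metric("% of reports filed anonymously", value=85,
                   ok_range=(30, 60), concern_range=(20, 75)),
        ],
    },
}

for bucket, questions in framework.items():
    for question, metrics in questions.items():
        for m in metrics:
            print(f"{bucket} > {question} > {m.name}: {m.value} ({m.status()})")
```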

Zach Coseglia: We started by saying that we want to make sure we're telling stories with data, but also that we want to make sure the discussion about how we're going to use our data, how we're going to cut our data, and how we're going to analyze our data is framed around stories—and that's exactly what was done here. We started by asking, "What are the questions you want to answer? What are the stories you want to be able to tell?" One of the questions we wanted to be able to answer was: Are our reporting channels operating as they should? That then becomes a level in and of itself—it's actually the middle level in the framework that you just articulated. Above it sits a higher-level designation of, "What do we know about our conduct? How are our people behaving?" Then, you've got that question of whether the reporting channels are operating as they should, along with a number of other questions that speak to conduct. And then, under that, you have the individual metric, which could be the number of reports, the number of reports per X number of employees, or the percentage that are anonymous—whatever the metric may be. What often happens, though, is that people start with those individual metrics, then wind up putting together a bunch of metrics, as you said, and then they dump all of those metrics on people. So, what I love about the framework you created here is that we have these three different levels, each of which gives an indicator in and of itself. But even when we get down to that bottom level of the metric, it's not just a "number"—that number is being communicated in a way that says, this is good, this is bad, or this is indifferent, and being able to do that is a huge part of telling the story. Oftentimes, we think of "good," "bad," or "indifferent" on a sliding scale, where "good" is on one side and "bad" is on the other, but that doesn't work for all of these metrics, so, how do you deal with that?

David Yanofsky: Yes—sometimes, the "good" range is in the middle. In the investigations context, the "number of reports per employee" is something that you don't want to be too low and you don't want to be too high. Too high and you have a lot of issues in your organization. Too low and maybe people aren't speaking up enough—they aren't comfortable with your reporting program or don't know how to report things. And so, just because a number is "low" doesn't mean it's "good." You do need to try to understand when something should have a lower limit that's not zero. In this system, we also identified related metrics to look at alongside each one: when this number is low, check whether that related number is low too, or whether it has moved the other way, and that will help give you a fuller sense of what's happening in your program.

Hui Chen: There are a couple of things that really jump out at me from this particular client work that you did. One is that I loved the fact that your deliverable was called a "blueprint," because, again, as a layperson, I think of it in architectural terms. At the beginning of our conversation with this organization, they were basically saying, "We have a lot of this data, and what do we do with it?" It's almost like somebody saying, "We've got a lot of bricks. Here they are: What do we do with them?" And you delivered a blueprint for the house they want to build with all of those bricks. Without the blueprint, it would just be piles of bricks. A "blueprint," when you think about it like a "story," is the house that you want to build, and so, I really loved that.

The other concept you're talking about is "contextualizing" a lot of the data when you say "information about the numbers," and, I think, this is, again, where people who are not comfortable with numbers or statistics can get lost in a lot of concepts. I know when we have you in front of our training workshops, you go through certain concepts, like causation, correlation, and things like that. I think a lot of people can recite, "Correlation does not equal causation," but they conflate the two all the time despite knowing better—it's constantly confused.

Zach Coseglia: Take us to school. Let’s start with “correlation” and why it’s different from “causation”.

David Yanofsky: "Correlation" is the idea that you can have two datasets that move in similar ways. A positive correlation would be: every year around the time of performance reviews, the number of calls to our hotline goes up as well. The date of performance reviews is tied to the date of reports to the hotline. That is a feature seen in a lot of organizations, and it has to do with disgruntled employees who are finally given a bad performance review, and the final straw is reporting their boss for something. That is "correlation." It is also "causation" when you look into it and you can say, "This has caused that." But sometimes, there are things that just happen to move in parallel with each other that aren't causal—it is complete happenstance that they are tightly correlated.
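
As a rough illustration of checking a correlation like the one David describes, here is a small Python sketch using invented monthly counts. A high correlation coefficient on its own still would not establish the causal story; that takes the extra digging described below.

```python
# Illustrative only: invented monthly counts, not real client data.
# Pearson correlation between performance reviews delivered and hotline
# reports received each month.
from statistics import correlation  # available in Python 3.10+

reviews_per_month = [5, 3, 4, 120, 6, 4, 5, 3, 110, 4, 5, 6]
hotline_reports   = [8, 7, 9,  31, 9, 8, 7, 6,  28, 8, 9, 7]

r = correlation(reviews_per_month, hotline_reports)
print(f"Pearson r = {r:.2f}")  # close to 1.0: the two series move together

# A strong r only says the series move together (correlation). Deciding why,
# e.g., whether reviews prompt reports, takes more work: the timing of
# individual reports, what they allege, and a plausible mechanism.
```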

Hui Chen: One of my favorite examples of this, I think, came from one of those medical shows that I watch. All the doctors were sitting in grand rounds; it's early morning, and they're all drinking coffee. Whoever the main character was looked up and said, "How many of you are doctors?" They all raise their hands. "How many of you are drinking coffee?" 90% of them raise their hands. He says, "So, drinking coffee makes you doctors?" That's a classic confusion of these two concepts. You're not looking at other factors, like how many people who are not doctors are also drinking coffee at this time—that's the failure to contextualize, right there.

Zach Coseglia: I think it also underscores a really important point that all compliance professionals need to remember, which is that when we start making our programs more data-driven, we also naturally have to move away from binary thinking around "good and bad" and "right and wrong," because that correlation may actually reflect causation, but you've got to ask additional questions, interrogate the data more, and dig deeper to figure that out. The trap with a spurious correlation is taking it at face value, and that's exactly what you can't do as you start making your program more data-driven.

Hui Chen: Companies love to ask the question about “measuring the effectiveness of their policies and procedures,” or “training and communications.” There are many important things here, but one of the ones I immediately thought of was that this cannot be a one-time exercise. The first question is what do you mean by ‘effective’? So, what is your policy and procedure, and training and communication trying to achieve? Assuming you’re trying to achieve essentially “good behavior,” a “good compliance culture,” we can assess that culture. But without comparisons—let’s say we go into the company and we assess, and they have a “strong compliance culture”—we don’t know if this compliance culture existed before you had any compliance program, any training, any policy. That “compliance culture” may have nothing to do with your “compliance program.” If we don’t do these measurements over time, which is what you need to do in this type of contextualization, you really cannot answer the “causation” question. All you can do at one point is the “correlation”—that you’ve done this number of training and communications exercises, you revised your code of conduct, and at this point in time, you have a culture that looks like this, but is it “because” of your training and communications and all of those? We don’t know because no baseline measurement was taken before you did any of the intervention. We are just coming in “taking the temperature” now, so there is really no way to answer the “causation” question at all without the time-over-time measurement.
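
A minimal sketch of the baseline-versus-follow-up comparison Hui is describing, using invented culture-survey scores. Even a clear before/after improvement is only suggestive of causation without a fuller design, which is her point.

```python
# Invented example: culture-survey scores (1-5 scale) before and after a
# training rollout. Without the "before" measurement there is nothing to
# compare the "after" against.
from statistics import mean

baseline_scores = [3.1, 3.4, 3.0, 3.3, 3.2, 3.5, 3.1]   # before the program
followup_scores = [3.9, 4.1, 3.8, 4.0, 4.2, 3.9, 4.1]   # one year later

change = mean(followup_scores) - mean(baseline_scores)
print(f"Average change: {change:+.2f} points")

# Even a clear improvement is still only suggestive; a fuller design would
# also track comparable groups that did not receive the intervention.
```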

Zach Coseglia: Absolutely. David, I like this game. Let’s do another data concept for you to educate us on. How about “magnitude”?

David Yanofsky: "Magnitude" is just how big something is. Lots of people are used to reporting on magnitude. You have: "How many cases did you have this month? How many cases were about HR issues?" "How many" is "magnitude," and putting it in scale with something else is sometimes a "part-to-whole" comparison—which would be another data concept—but oftentimes it's not; you're just giving the context of magnitude: "We had 10 cases about HR issues, and we had 50 cases about expense issues," and so "five times as many" is a statement of magnitude.

Zach Coseglia: Alright. How about “distribution”?

David Yanofsky: "Distribution" is one that people do not use enough to tell stories with. "Distribution" is: if we start grouping things together, what does the frequency of those groups look like? One of the examples that we all like to give is about expenses and expense limits. If you chart the distribution of per-person dinner costs in your organization's expenses, you will not get a "normal distribution"—you will get a big spike right under your limit. That is because people view a "meal limit" as something to maximize toward, rather than something that simply keeps them from going over, and also because people play a game. Lots of people play games with their expenses, because if they don't, they won't get reimbursed, or they'll be on the hook for a corporate expense where they weren't familiar enough with the policy, or there's a culture that encourages squeezing within some rule programmed into a system that won't even let you file your expenses if you've deviated from policy.
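
Here is a small sketch of the distribution David describes, using invented per-person dinner amounts and a hypothetical $75 meal limit. Bucketing the amounts makes the spike just under the limit visible.

```python
# Illustrative sketch: invented per-person dinner expenses and a hypothetical
# $75 meal limit. Bucketing the amounts shows the spike just under the limit
# rather than the bell curve you might naively expect.
from collections import Counter

expenses = [22, 35, 41, 48, 55, 62, 68, 71, 72, 73, 73, 74, 74, 74, 74, 74,
            74, 75, 75, 75, 75, 75, 75, 75, 79, 88]

bucket_size = 10
buckets = Counter((amount // bucket_size) * bucket_size for amount in expenses)

for low in sorted(buckets):
    bar = "#" * buckets[low]
    print(f"${low:>3}-{low + bucket_size - 1:>3}: {bar}")
# The $70-79 bucket dominates: people maximize toward the limit instead of
# landing under it incidentally.
```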

Zach Coseglia: Let’s do one more: How about “flow”?

David Yanofsky: "Flow" is a really important one when you're trying to talk about a complicated way in which items move through processes, and those items could be dollars, people, or documents. A hiring process is a great use case for talking about "flow." "We had this many people apply, and we moved some segment of them on to the next stage. So, of our first-round people, we did a second-level review of their resumes, their cover letters, their portfolios, whatever it is, and we interviewed an even smaller segment of them. And of those, we offered further interviews to some—we had finalists." All of this narrowing is not actually the one-way funnel we usually talk about, where we have all of these applicants and it narrows down to finally the person we hired. At each step of that filtering, there are people being spun out. Perhaps they're being spun out into other roles they're better suited for, or they're being categorized into a group of candidates for a future role if it were to open up, or for a more junior role after the senior person gets hired. Keeping track of that whole system of people as you move from the big cohort into the smaller cohorts, as you filter them, is "flow."
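
A short sketch of tracking "flow" through a hiring funnel, with invented stage names and counts, where candidates who leave the main path are recorded in other pools rather than simply dropped.

```python
# Illustrative sketch of "flow": a hiring funnel where candidates who leave
# the main path are spun out into other pools rather than just disappearing.
# Stage names and counts are invented for the example.
pipeline = [
    # (stage, number advancing past this stage, where everyone else went)
    ("applied",          250, {}),
    ("resume review",    120, {"future-role pool": 40, "not advanced": 90}),
    ("first interview",   40, {"junior-role pool": 15, "not advanced": 65}),
    ("final interview",    8, {"not advanced": 32}),
    ("offer made",         1, {"finalists not selected": 7}),
]

for stage, advanced, spun_out in pipeline:
    outflows = ", ".join(f"{n} to {pool}" for pool, n in spun_out.items())
    print(f"{stage:<16} -> {advanced:>3} advanced"
          + (f"  ({outflows})" if outflows else ""))
```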

Zach Coseglia: One of the things that I want to say here to folks who are listening to this, and who may be some of those who are not naturally inclined or comfortable with data: The goal is not to make you a data scientist if you’re not a data scientist. The goal is not to make the compliance or the human resources professional a data scientist. The goal is to make you more savvy and more sophisticated in your ability to have conversations with the folks who do have those skillsets that could help you build this stuff.

David Yanofsky: I talk about data concepts like this because it helps people frame the ideas that they’re already having into concrete terms that they know how to communicate. The way in which you communicate a “magnitude” is different than the way in which you communicate a “correlation,” but until you give it a name, you might not make that leap and say, “I have these two ideas in my head, and I need to tell people about them.” You might start telling people about them in the same way, and your audience might not get it, because they don’t realize that in one case, you’re talking about how something has a “big magnitude,” and in the other one, you’re talking about something that is “really tightly correlated.”

Hui Chen: I also think it’s important to get to a level of robust thinking, where whatever you’re presenting and the points that you’re making are defensible. I think, for example, if somebody gets up without a time-over-time measurement, and says, “We’ve done a bunch of activities and we have this graph to show our strong compliance culture,” you want to be able to defend the causation that you’re implying there—that all of this great culture is “because” of all the activities that you have conducted. In order to do that in a way that’s business savvy—you’re asking your stakeholders to invest in your activities, so you need to prove the value of those activities—you need to be able to stand up under scrutiny when people ask you hard questions about the investments that you’re asking them to make.

We are talking about a "beginning," a "middle," and an "end," and so, I'd like to see what some of the "end" looks like. You talked about compelling people into action. From having heard you present on this, I know that by "action" you don't necessarily mean things like "we need to institute something," "we need to fire people," or "we need to promote people." So, tell us what you mean by the kind of "ending" that a "story" should provoke.

David Yanofsky: You need your audience to come away with an appreciation for the thing you're telling them. It might be the traditional action. It might be: yes, now I can sign this document. Now I can approve your budget. Now I will allocate more headcount to you. Now I will give you a promotion. These are all things that can happen immediately when you present to someone, because you gave them enough information to take an action. We're in a business context here—people need to make business decisions of all kinds throughout their day, especially your senior leaders. All they are doing is making decisions and enabling other people to make decisions. And so, if you are giving them information that lets them make their decisions in a more informed, smarter way, with less risk—or at the very least, with a greater appreciation for the risks that do exist—you have helped improve the business. Hopefully your leaders have an appreciation for that, but if they don't, that's part of the story you also need to tell: because you know this information, you now understand our business so much better that your decisions will be so much more informed, we will be more successful, we will be more strategically aligned, we will be hitting our goals, and we will be closer to achieving everything we want to achieve, because we all understand what is happening.

Zach Coseglia: David, we’ve reached the end. What final words of encouragement, advice, or just knowledge do you want to share with our listeners?

David Yanofsky: I like telling folks in compliance that they need to understand and create their version of “a chart that goes up and to the right.” The leaders in your organization are used to seeing a sales team come in and show a chart that goes up and to the right, and the leader says, “Good job, team.” The marketing team can do the same thing, an R&D team can do the same thing—they can show metrics that are, on their face, understood by leaders to be “good” and “showing a positive trend.” Compliance teams are not yet great at that, and so, to be able to come up with this chart, having your at-a-glance view of success that you can show a leader, is going to be one of the most powerful things that you can do, and one of the most important reasons why data storytelling is going to be core to your message. Your colleagues in other functions are already telling their story through data, and your leaders are already used to consuming it, and so, you have to meet your leaders where they are, and tell the story in that way.

Zach Coseglia: Terrific. David, thank you so much for joining us.

Hui Chen: Thank you, David.

Thank you all for tuning in to the Better Way? podcast and exploring all of these Better Ways with us. For more information about this or anything else that’s happening with R&G Insights Lab, please visit our website at www.ropesgray.com/rginsightslab. You can also subscribe to this series wherever you regularly listen to podcasts, including on Apple and Spotify. And, if you have thoughts about what we talked about today, the work the Lab does, or just have ideas for Better Ways we should explore, please don’t hesitate to reach out—we’d love to hear from you. Thanks again for listening.

Speakers

Zachary N. Coseglia
Managing Principal and Head of Innovation of R&G Insights Lab