What it really takes to thrive in the age of data, algorithms, and AI

In “The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI,” authors Paul Leonardi and Tsedal Neeley specify the categories of skills that you’ll need and what 30% competence looks like in each of those categories. Once you have achieved the 30%, you will have created the platform from which you’ll start to think differently: to think digitally. In Part One of my interview with Paul Leonardi, we do a deep dive into the 30% Rule and discuss collaboration, second brain apps, and more! Scroll down for a complete transcript.

Paul Leonardi is the Duca Family Professor of Technology Management at the University of California, Santa Barbara. Paul’s co-author, Tsedal Neeley, is the Naylor FitzHugh Professor of Business Administration at Harvard Business School, and an award-winning scholar, teacher, and expert on virtual and global work.

My name is Peter Clayton. My focus on this channel is to provide a Total Picture of innovation in HR Tech, TA Tech, Recruiting, Talent Acquisition, and Career Strategies.

The authors of The Digital Mindset focus on the following questions that many people have today regarding how to interact in a digital world, questions such as:

  1. How much technical capability do I need?
  2. Do I need to learn how to code?
  3. What do I need to know about algorithms?
  4. What do I need to understand about big data?
  5. How do I use digital tools effectively?
  6. What exactly is AI?
  7. Do I need to prepare to have a bot or robot on my team?
  8. How do I collaborate successfully when people are working remotely?
  9. What are the best ways to make sure my data and systems are secure?
  10. How do I develop skills to compete in a digital economy?
  11. Is digital transformation different from other transformations?
  12. How do I build a digital-first culture?
  13. Where do I start?


Paul Leonardi: [00:00:00] I think that’s a key role that HR in particular can help play in our organizations today, which is making sure that there are supplements to the array of quantified metrics that we have about people in our organizations, ones that capture more of the, let’s call them social, cultural, interpersonal, political kinds of things that actually make our jobs work and that are very difficult to quantify and don’t show up in a tool. It’s really easy to track how fast somebody completes a project or how often they talk to this person. But it’s much more difficult for us to identify how persuasive somebody is when they’re trying to make a pitch, or how well somebody explains knowledge to someone else. And because those are really difficult to quantify, they often don’t show up in those kinds of models. And if you just relied on those data, you could start making hiring or retention decisions that are very, very skewed to the parameters that we can quantify and that are in the tool, but that don’t represent an employee holistically.

Peter Clayton: [00:01:20] Today we’re going to talk about a new book, The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI, published by Harvard Business Review Press and written by Paul Leonardi and Tsedal Neeley. According to the authors, you can develop a digital mindset, and this book shows you how. It introduces three approaches: collaboration, computation, and change, and the perspectives and actions within each approach that will enable you to develop the digital skills you need.

Peter Clayton: [00:01:54] Hi, this is Peter Clayton, host of the TotalPicture Podcast. I focus on innovation in HR tech, TA tech, recruiting, talent acquisition, and career strategies. Today I’m joined by Paul Leonardi, who is the Duca Family Professor of Technology Management at the University of California, Santa Barbara. Paul’s co-author, Tsedal Neeley, is the Naylor FitzHugh Professor of Business Administration at Harvard Business School, an award-winning scholar, teacher, and expert on virtual and global work. In part one of our interview, Paul and I discuss how to ask the right questions, make smart decisions, and appreciate new possibilities for a digital future. So how will adopting a digital mindset future-proof you and your career? Let’s find out. Paul, welcome to the TotalPicture Podcast. Let’s start at the beginning, so we’re all on the same page here. How do you and Tsedal define a digital mindset?

Paul Leonardi: [00:02:55] Great question, and it’s a perfect place to start. We think of a mindset as a set of approaches toward dealing with the world. If you look at broad definitions of what a mindset is, not just a digital mindset but any kind, most people think of it that way: you have a way of approaching, making sense of, and understanding the world around you, and those approaches help you determine what the possibilities are for your action and how you’re going to proceed and act in the subsequent things that you do. And so for us, a digital mindset is really a set of approaches for seeing the possibilities for how to act in the digital age.

Peter Clayton: [00:03:34] I’ve been obsessed recently with second brain apps, things like Notion and Obsidian and Roam. Do you personally use any of these digital tools?

Paul Leonardi: [00:03:47] I don’t. I know several people who do and find them really helpful, but I don’t. And I’m not sure why, exactly, to be totally honest. I’ve played around with Obsidian before, and I think it’s kind of cool. One of the issues that I’ve been exploring, even since we published the book, is thinking through the kind of exhaustion that people are experiencing using so many different kinds of digital tools. And I’ve certainly been experiencing this as well. We don’t really talk about this issue so much in the book, but I think it’s a subsequent project for me to explore. I’m just finding that there are so many different options of tools like those that we could be using, and I find myself getting worn out if I try to do too many things. So I’ve been trying to make a more purposeful strategy of deciding which tools are really important for me to use and in what kinds of contexts. And that’s one of the reasons why my own tool proliferation hasn’t happened so much.

Peter Clayton: [00:04:48] So how do you retain all of the research papers and books that you’re currently reading and make sense of all this stuff?


Paul Leonardi: [00:04:55] Yeah, it’s a big process. I have a pretty elaborate database that I’ve developed over the years. When I find new articles that I think are interesting and relevant to the topics that I want to explore, I put the citations and key points from those articles into this database, which is then searchable again in the future. It’s taken me a good decade to really build it up and orchestrate it into a shape that is useful for me. But learning is a lifelong process, and it’s a slow process in many ways. And I find that being able to create good reference materials for all the things that I’ve been reading is so useful, because it allows me to go back again and say, oh, this thing that I’m experiencing now that I’m working with this company, or something that I’ve read, reminds me of what I read and some evidence I saw before. Let’s pull those together, juxtapose them, and see what they say and how they relate to one another.


Peter Clayton: [00:05:57] One thing I’m really curious about is the process you used in writing this book. Your co-author, Professor Neeley, is at Harvard. You’re in, like I said, God’s country in Santa Barbara. So how did you start work on, and collaborate on, this book?


Paul Leonardi: [00:06:21] Yeah, well, Tsedal and I met about 20 years ago, a little more now, when we were both PhD students at Stanford in the School of Engineering. We had a lot of similar research interests and we hit it off as friends, and so we have been working together on a number of projects over the years. Right when we both left Stanford, she went to Boston and I went to Chicago, and we started some collaborations there where we would work on platforms like Skype, which was a big one at the time. We would meet for lots of virtual face-to-face sessions, talk through the ideas that we had, divide up the work in ways that made sense, go do our different parts of the job, and come back together and brainstorm. So we have a pretty long history of doing that, and we both study virtual collaboration, so we practice a lot of the things that we preach. But when it came to thinking about this book, she and I were just having a fairly casual conversation one day, catching up and seeing how things were going. We were both reflecting on experiences that we had had recently, talking to executives and doing some consulting work. And the thing that we kept hearing was: we’re really bought into this idea of digital change, and we know that we need to develop new business models and operating models to be successful.


Paul Leonardi: [00:07:47] But honestly, we don’t totally know what that means. It was always said in a little bit of a hushed whisper, and we thought, that’s actually pretty common. We’ve seen this a lot over our years of experience working with companies: senior leaders don’t necessarily know what skills and approaches they need to be successful in the digital world. They certainly don’t know how much of their workforce needs those, or what kinds of skills their workforce needs. And so we thought, we actually have quite a bit of evidence that we’ve amassed, even though we haven’t been asking this exact question. This is a time for us to try to put this together and see if we can provide some kind of guide for people that would allow them to focus their energy and attention on developing the kinds of skill sets and mindset that would really help them be successful in this digital age. And so we went from there. We started figuring out who has what kind of data. Let’s pull it together, let’s do analyses, let’s map out the various chapters that we think we need and the things we need to say in those chapters to make this argument really persuasive and clear and actionable. And then we divided the work a little bit, and then we switched, and we collaborated in that manner. And the book took about a year, I would say about a year and a half, to write.


Peter Clayton: [00:09:07] Yeah. And you were doing this all basically during the pandemic?


Paul Leonardi: [00:09:11] That’s right. It’s funny, the last flight that I took right before lockdown started was in February of 2020, and it was to go visit Tsedal in Boston. I gave a talk at the Harvard Business School Digital Initiative, and then Tsedal and I mapped out the first chapter and started writing that great story about Sara Menker that opens the book. And then I flew home, and like two days later, lockdown started, basically. So that commenced the writing process for the book. I always think of it as coincident with the pandemic.


Peter Clayton: [00:09:49] And at the same time, she was working on another book, Remote Work Revolution, which was rather timely.


Paul Leonardi: [00:09:59] I would say so, yes. Tsedal is an expert on global and virtual work and had been developing that book for quite a while, and she had a complete draft of it before we started working on The Digital Mindset. And then the timing worked out such that that was the moment for her to send that book off to the presses.


Peter Clayton: [00:10:19] Yeah, no kidding. Well, at this time I’d like you to discuss a large theme in this book, the 30% rule, a concept that really drives the narrative throughout the book.


Paul Leonardi: [00:10:35] Yes. Well, one of the questions that we get very frequently when we talk about developing digital skills and having a digital mindset is: do I need to learn how to program? Do I need to learn how to be a coder? People are really concerned: I don’t have a computer science background and I’m not that technically literate; am I going to be able to develop a sufficient number of competencies to be successful in the digital age? Our short answer to the coding question is, well, it depends, which is not always satisfactory. But the reason it depends is that if you’re in an area of work that heavily depends on understanding the ins and outs of code and developing algorithms to do your work, of course you’re going to need to learn how to code. But for most of us, we don’t need to. We’ve talked to lots of experts, both those who are very proficient in computer programming and those who have excelled in technology-based businesses and don’t know how to code. And we said, well, let’s try to see if we can quantify in some way the amount of knowledge that you would actually need so that you can be conversant, talk to people, and understand what’s going on with your core products, or the way that different kinds of algorithmic technologies are configured within our organizations, so that we could give people what they need to know.


Paul Leonardi: [00:12:08] And to be able to tell them that you don’t need 100%. So what do you need? That’s what got us on this path of trying to think about the minimum threshold for competence across a broad array of skills in the digital age. One analogy that was really useful for us in thinking through this is the analogy of learning a second language. There’s lots and lots of research by linguists and second-language-learning specialists about this process, and they pretty much all agree that to have native-level fluency in a language, you need about 12,000 words or so. But to be proficient enough to interact at really high levels, not fluent, but at levels that enable you to collaborate, understand people, and ask the right kinds of questions, you need about 3,500 to 4,000 words, and that’s roughly 30% of the total. We thought that was a really nice analogy to what we were looking at, which is that we’re not aiming to be native speakers, if you will, in data science or computer programming, or cybersecurity, or knowing all of the ins and outs of running an A/B experiment, or whatever the skill might be.


Paul Leonardi: [00:13:28] But you need enough to be able to understand what the people we’re interacting with are doing. Can we ask them questions to really get at the heart of how they’ve built something, or where the data are pulled from, or how those data are categorized? Do we know enough of the statistical language to be able to problematize some of their answers and say, well, you’re saying this is significant, but when I look at the data, it shows that maybe the confidence intervals aren’t quite what we would need them to be to make those kinds of claims? These are the kinds of questions we need to be able to ask to really be successful in the digital age. But if we don’t have that 30%, if we can’t get there, we don’t even know enough to be able to ask the questions. So it’s a level of fluency that gives you enough to be dangerous, I guess I would say. You can ask the questions, you can make sense out of what people are asking you, and you can interpret things well enough to be part of the conversation, rather than letting the conversation pass you by while you’re always wondering what exactly is going on here.
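As a rough illustration of the statistical fluency Paul is describing, here is a minimal Python sketch of the kind of check a non-specialist might run before accepting a claim of significance. The sample numbers are invented for illustration, and it uses a simple normal (z) approximation rather than a full t-test:

```python
import math
import statistics

def confidence_interval(sample, z=1.96):
    """Approximate 95% confidence interval for a sample mean,
    using a normal (z) approximation with the sample standard error."""
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / math.sqrt(len(sample))
    return (mean - z * sem, mean + z * sem)

# Hypothetical daily conversion rates (%) from one variant of an A/B test
variant_b = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]

low, high = confidence_interval(variant_b)
print(f"mean {statistics.mean(variant_b):.2f}%, 95% CI ({low:.2f}%, {high:.2f}%)")
```

If a vendor claims variant B is "significantly better" than a baseline of 4.2%, a quick look shows 4.2% sits inside this interval, which is exactly the kind of question the 30% threshold equips you to raise.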


Peter Clayton: [00:14:32] Right. Yeah, I think that makes a lot of sense. There’s another question I’d like you to address, which is: how do I use digital tools effectively? A lot of the people who tune in to this show are involved in HR and talent acquisition and recruiting, and they either develop digital tools such as applicant tracking systems, chatbots, background checking applications, scheduling apps, and digital assistants, or are using many of these tools on a daily basis. However, trying to integrate all of these tools is really difficult, time-consuming, and oftentimes really expensive. For example, a lot of recruiters I talk to are frustrated because they’re spending half their time inputting data into these systems while trying to figure out how to use them, which has little to do with what their job really is, which is recruiting. I think a lot of knowledge workers today are overwhelmed and frustrated by the number and scale of digital applications they’re required to use. In your book, you write about getting to know your technology stack. So what advice can you share on this?


Paul Leonardi: [00:15:50] Yeah, it’s a thorny issue for sure. We do see rampant proliferation of digital tools in organizations, and I think part of the reason for that, really in the last 5 to 10 years especially, has been that SaaS-based tools are really easy to implement. A strategy of a lot of these SaaS companies is to have specific price points for their tools that fall inside a manager’s credit card purchasing power in the organization. So it’s really easy for someone who’s, let’s say, the director of HR to get some kind of SaaS tool and put the recurring per-seat charges for their organization on their expense card, and nobody’s going to blink an eye at that. That has really encouraged so many different kinds of tools to enter the organization at a more managerial level. Then you have a similar trend happening at the IT level, where there’s increasing pressure across the organization to track and quantify what’s happening in various business units and various administrative units. So there’s been a proliferation of tools there also. And so you’re getting it from both sides, right? If I’m an individual contributor in an organization, my manager has asked me to use these three tools, then IT is asking me to use these couple of tools, and HR maybe is asking me to use these other tools. I’m like, oh, that’s just too much.


Paul Leonardi: [00:17:30] And I think there are two ways to approach your question here. One is to ask: what is it that the decision makers in our organizations can and should be doing to help employees not experience so much of this digital tool overload? The advice that I typically give when I’m working with companies is that you have to always remember that technologies are tools that we should be using because they make an important difference in our work. That difference should be improving our work in ways that we can identify, that we can discuss, and hopefully that we can also measure. If we’re implementing tools for the sake of implementing them, because we think they might be helpful but we’re not quite sure how, or because other organizations are doing it and so we think we should too, without a really clear sense of how they’re helping us do something that we couldn’t do before, those are bad reasons to be implementing tools. And, you know, it’s funny, about three months ago I sat down with a financial services organization, and we went through and did an audit of the various systems that folks were using. I was pretty ruthless in my audit, right? I was asking, what functionality does this tool give you, and why do you need that functionality? And we basically axed about ten different software applications from the list because the answer was, well, we don’t really know.


Paul Leonardi: [00:19:08] And could you be accomplishing those activities in one of your other tools? Yes? Well, then get this one out of here. I know that sounds simple in many ways, but it’s really important for leaders in our organizations to be purposeful about what technologies they’re using for what purposes. So I think that’s part of it at the managerial level. At the individual level, one of the things that’s really key, and that we talk a lot about in the chapter on data in our book, is making sure that we understand what kinds of data streams are feeding into these tools and what manipulations are happening to the data within the tool. Are there machine learning algorithms that are analyzing the data or classifying the data in particular ways? The reason it’s important to know these things is that, presumably, the reason to use many of these kinds of technologies, especially in HR functions, as you mentioned, is that they’re providing us with data and insights that we wouldn’t otherwise have. However, data are not natural substances that exist in the world. Data are social constructs: we produce data through the ways that we sense the world, through the ways that we collect information on the interactions that people have on our tool, and then we classify and sort those in particular ways. You can think of all the ways that we measure something like wind. We measure wind in terms of its velocity.


Paul Leonardi: [00:20:38] We measure it in terms of the humidity that’s associated with it; we measure wind and temperature together. All of these are ways of segmenting this natural thing that happens in the world into data points that we think make sense and that we value. So my advice for individual contributors and people who are using these tools is that you really need to understand what that data stream looks like, how those data are being produced, and the way that they’re being analyzed, so that you can decide: is this the kind of recommendation, or is this the kind of insight, that I can actually use in my work? And you’d be surprised how many times, when we work with companies, we see that, you know what, these data aren’t actually very representative of the behaviors that I’m interested in, or the kinds of recommendations that this tool makes don’t actually align with the things that we really want to reward within our organization. If you can understand that, you can stop some of this rampant tool proliferation as well. But you don’t know those things unless you know what questions to ask about the data and about the way that the tools are processing them. So that’s a little bit of a long-winded answer that I hope gives a sense of what we might think about at the managerial level, but also at the level of team members and individual contributors.


Peter Clayton: [00:21:58] I think that that’s a really great observation because as you know, a lot of these tools are really smart and can do really cool things. And, a lot of people are using them because the graph or the chart or whatever they’re able to whip up is so cool looking. But when you get to the basis of it, like you were saying, a lot of these things really don’t provide you with information that’s useful and can be acted upon.

Paul Leonardi: [00:22:30] Right. I’ve been doing some work with a number of different software companies that are interested in harvesting relational data from various communication tools that we use in the organization. So if you think about every time that you’re on Slack or Microsoft Teams, when you direct message somebody or when you join a channel, that creates metadata about your interactions. There are so many ways now to harvest those metadata via API access and then construct calculations on top of them that can make predictions about, say, how reliable a particular employee is at finishing tasks on time, or who’s most likely to turn over within the organization based on declining centrality in their communication networks over time. There are lots of metrics that we can use to do that. But part of the problem is that those data aren’t always capturing and representing exactly the things that are important to us, right? Or we spend too much time paying attention to the data that we’ve captured and that we can quantify, and we don’t pay enough attention to all the other kinds of data points that are out there that are much more difficult to quantify and capture and therefore don’t appear in the tool as part of the analysis.
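To make the centrality idea concrete, here is a small Python sketch of the kind of calculation these tools run on message metadata. The names and messages are invented, and a real system would pull these (sender, recipient) pairs from a chat platform’s API rather than a hard-coded list:

```python
from collections import defaultdict

# Hypothetical message metadata harvested from a chat tool:
# each tuple is (sender, recipient) for one direct message.
messages = [
    ("ana", "ben"), ("ana", "carla"), ("ben", "carla"),
    ("carla", "dev"), ("ana", "dev"), ("ben", "ana"),
]

def degree_centrality(edges):
    """For each person, the fraction of other people they exchange
    messages with (treating the message graph as undirected)."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(neighbors)  # number of people seen in the data
    return {person: len(nb) / (n - 1) for person, nb in neighbors.items()}

print(degree_centrality(messages))
```

A tool might flag someone whose score drifts downward month over month as a turnover risk, which is exactly where Paul’s caution applies: the score only reflects interactions the platform happened to capture.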

Paul Leonardi: [00:23:56] And I think that’s a key role that HR in particular can help play in our organizations today, which is making sure that there are supplements to the array of quantified metrics that we have about people in our organizations, ones that capture more of the, let’s call them social, cultural, interpersonal, political kinds of things that actually make our jobs work and that are very difficult to quantify and don’t show up in a tool. It’s really easy to track how fast somebody completes a project or how often they talk to this person. But it’s much more difficult for us to identify how persuasive somebody is when they’re trying to make a pitch, or how well somebody explains knowledge to someone else. And because those are really difficult to quantify, they often don’t show up in those kinds of models. And if you just relied on those data, you could start making hiring or retention decisions that are very, very skewed to the parameters that we can quantify and that are in the tool, but that don’t represent an employee holistically.

Peter Clayton: [00:25:13] Yeah, I think that’s a really, really smart observation, and you’re absolutely right, because you look at people who are extroverts versus people who are introverts, and the way people are having to interact today because so many people are working hybrid or remote. And let’s talk about the Zoom frustration that everybody is experiencing. So I think that’s a really interesting approach, because you’re right, you can’t put all of these things into algorithms and have them spit out things that are absolutely accurate.

Paul Leonardi: [00:25:56] Right. And that’s one thing that worries me about this discourse about datafication and quantification and the advanced role of analytics in HR. There’s absolutely a role for that, and we highlight several cases in the book where that kind of insight proves really useful and valuable. But relying too much on those kinds of insights, unfortunately, can cause us to overlook many of those things that are, not intangible, I won’t say it that way, but less easy to quantify, less easy to measure, less easy to capture. And I’ve got one quick story that I think illustrates this really well. I was doing some work with an IT services organization, probably a decade ago now, and they implemented this new tool that was going to track how well these technicians responded to user requests from across the organization. The manager of this group was really excited about the tool because he thought, you know, I’ve got no way of really assessing the performance of the individuals in my group, because they’re all out at different parts of the business and I can’t see what they’re doing. All I get occasionally are customer service evaluations that maybe come back, but I don’t really know what they’re doing. This tool is going to quantify how fast they respond to a problem and how long it takes them to resolve the issue, and it’s going to include documentation that describes the process that somebody went through. So all of these things are going to give me much better insights into the performance of my team members. Six months went by using this tool, and the manager thought that he had a pretty good handle on who the top employees were and who they weren’t.

Paul Leonardi: [00:27:50] And one of the women who did not make it into the top part of the list was really frustrated by the fact that she kept getting overlooked for a promotion and her contributions weren’t really being represented. And so she quit. The evaluations that came in from across the organization began to plummet, and he couldn’t figure out why, until they realized that this woman, and several other people who did similar work, actually did a whole lot of things to grease the skids for the services group to be effective when they went out on these user calls. But none of that greasing-the-skids work was ever captured in the tool. So the manager fell victim to this classic blunder of rewarding the things that he was measuring. It isn’t necessarily that he was measuring the wrong things, but the things that he measured came to matter, and the things that he couldn’t measure were invisible to him, so he overlooked those. And it wasn’t until he hired this woman back, at about a 30% salary increase, that customer service evaluations went way up. So we had to work to say, all right, if the tools that you are implementing are only capturing data on these particular kinds of variables, how do you create a more robust performance management system that is going to take into account all of these other things that people do that are valuable and that your tools can’t capture?

Peter Clayton: [00:29:19] Got it. Interesting. So there’s one example that you give that I think would be helpful to my audience. And it’s Amtrak’s chatbot called Julie, right? Because obviously a lot of people in recruiting and talent acquisition use a chatbot, but I think it’s really fascinating how Julie was set up and how Amtrak made it clear from the get-go that you were talking to a bot.

Paul Leonardi: [00:29:51] Mm hmm. Yeah. We find this to be a really interesting and important issue around human-machine interaction and collaboration. There’s been lots and lots of research about the anthropomorphic qualities of digital agents, showing that the more they look human, the more we tend to treat them as human, and it’s easy to forget that we’re interacting with a machine when in fact we really are. Many interfaces that use AI-powered bots behind the scenes are really trying to give the impression that you can interact very, very conversationally, like you would with a human, and thus trigger responses from users in the form of natural-language queries, which is appropriate. However, our ability to process that natural language and respond to it is still in its infancy. I mean, it’s grown by leaps and bounds in the last decade or so, don’t get me wrong, but it’s still in its infancy. And what we find is that the more you can signal, as an organization, that you’re using a bot, the more you’re going to be encouraging people, or reminding them, that they’re interacting with a piece of technology. That means they’ll be much more explicit in how they talk to it, they’ll make sure that they’re using keywords, and their interactions actually flow much better when they don’t have the anticipation and expectation that they’re talking to an actual person.

Paul Leonardi: [00:31:36] And that’s because we’re pretty good these days at knowing how to interact with machines. But this anthropomorphic quality often hides the fact that we’re actually talking to a machine, and when the machine can’t respond like a human can, it just increases our frustration. We give another example in the book that deals with this, where we looked at a comparison of two companies that had developed AI bots for scheduling. These were bots that would help you schedule your calendar, and you would do that by copying one of these virtual assistants on every email and saying, hey, Assistant Amy, can you help set up this meeting? In one of the organizations, it was very clear that this was a bot that was helping, and people were very explicit about their schedules with this AI bot, and they were pretty happy with the scheduling process. In the one where the AI bot was kind of masquerading as a person, people’s frustration grew and grew and grew, because they expected the perceived person, which was actually an AI-powered chatbot,

Paul Leonardi: [00:32:46] to be able to do the kind of high-level thinking and integration and anticipation work that a human could do, and the AI bot just couldn’t. The bot was great, I mean, technically very sophisticated, but when it broke down, in the sense that it couldn’t schedule in a complex situation, users got really, really frustrated. What was even worse was that they transferred that frustration to the principal, the person who was deploying the bot. So if I was trying to schedule with you, Peter, and you had this bot and it was really frustrating, I would end up getting annoyed at you, right? Not for anything you did, but because the bot couldn’t respond to me in the right kinds of ways. But in the other organization, where I actually knew it was a bot, even if it didn’t do the things I wanted it to do, we saw that people didn’t transfer their frustration and anger to the principals. And that’s the same thing we see operating in the Amtrak example: when you know that it’s a machine and you know how to interact with it like a machine, interaction flows much more smoothly. I think that’s going to be the case for the foreseeable future.

Peter Clayton: [00:33:55] Yeah, it’s interesting. I just interviewed a friend of mine who has a chatbot company and a texting service that is used for recruiting. Most of the companies he works with are large organizations hiring part-time or hourly people. So the fact that, on this large scale, they’re able to schedule things via text, because the applicant just has one piece of data they’re dealing with: would you like to schedule an interview? When are you available? It’s this one thing, and they know they’re dealing with technology, not with a person, and it makes it much more efficient.


Paul Leonardi: [00:34:48] Absolutely. And I think that most of the users or customers really appreciate that.

Peter Clayton: [00:34:53] Yeah, absolutely. You know, I’ve seen a lot of research showing that millennials specifically would much rather interact with a chatbot when they’re initially exploring a job within a specific company, for the basic things: can I work remote? Can I do this? What are your benefits? All of those things. A lot of people would rather deal with a chatbot than a human when it’s just these very rote questions.

Paul Leonardi: [00:35:28] Yeah, you can control the timing and the pace of that conversation; it’s completely up to you. And you often don’t feel silly or embarrassed for asking questions where you’re like, maybe I should know this, maybe they told me this somewhere, but I don’t remember. So there are a lot of advantages.

Peter Clayton: [00:35:43] In part two of my interview with Paul, we drill down on the three major approaches to the digital mindset: collaboration, computation, and change. If you like this content, I’d greatly appreciate your subscribing and clicking the like button; it will really help to attract new viewers. My name is Peter Clayton. Stay tuned to the TotalPicture Podcast by hitting the bell icon. I hope to see you soon.