
What GenAI Means for Companies Right Now


TOM STACKPOLE: For the last year and a half, we’ve been hearing that generative AI is going to change everything. In that time, companies have invested huge amounts of time, money, and resources, and most of them are still waiting for the payoff, even as it seems like everyone else is on the cusp of cracking how to use this technology. If you’re a leader, you’re probably asking, “Is this real or is this just hype?” and honestly, here at HBR, we’re asking ourselves the same thing.

Welcome to Tech at Work, a four-part special series of the HBR IdeaCast. I’m Tom Stackpole.

JUAN MARTINEZ: And I’m Juan Martinez. Every other Thursday, we’ll bring you research, stories, and advice about the technology that’s changing work and how to manage it.

TOM STACKPOLE: We’re both senior editors covering technology here at the Harvard Business Review, and in the last year, we’ve seen more pitches than we can count on generative AI. Juan, how would you say the questions we’re seeing have evolved over the last year, and where are we now?

JUAN MARTINEZ: I’d say that in the beginning, everyone was asking what’s possible. Now they want to refine their use. They want to figure out exactly how to use it for their business case, with their employees, and within the way their security structure is set up. It’s about getting everyone on board, making sure it works within the organization, and then optimizing it. That’s where I think we are now, the optimization stage.

TOM STACKPOLE: I think I’m feeling a little more skeptical than you are about this, but I’m curious, what are the big questions that you have right now, and what would convince you that this is the real thing?

JUAN MARTINEZ: GenAI is not going anywhere, so the question now is, how do I use it, how does my team use it, and how does my company use it. There are a lot of questions companies are going to have to answer, and the way to answer them is to get in there, start playing around, and learn how to converse with AI. It’s really important that you do that, whether we’re using GenAI 10 years from now or whether we’re just using regular AI to answer our customer service questions.

TOM STACKPOLE: So I think you make a lot of good points. For me, there are still a few things that give me pause, especially when thinking about adoption at scale. This could have huge ecological costs, and it could really change our media environment. But thinking about the questions that businesses specifically need to be able to answer: one, what do we want to be able to do with these tools; two, what impact will trying to do that have on our employees and our customers, or to say that another way, what are the risks; and three, are we confident enough in the promise of this tech to really invest in it and see what it can do? And maybe I’m just naturally pessimistic, but I see the high compute costs, I see risk around copyrighted materials, I see trust issues with employees, I see regulation coming down the pike, and I still don’t see proven use cases that are going to make it all worth it. But tell me why I’m wrong.

JUAN MARTINEZ: It’s the kind of thing like 40 years ago, if I had told you that there was a way for people to send letters to each other that would arrive instantly, you would’ve said, “Wait, hang on a second now. People could intercept those messages, and it’ll take up compute power because people will be sending so many messages all the time,” and those are really good questions that you would’ve asked about email. But we’re using email. We started using it, and we had to figure out how to use it, because it wasn’t going anywhere. It was too convenient and too impactful for businesses to just pretend it wasn’t there because they were concerned about those questions.

TOM STACKPOLE: Well, I think our guest today will probably agree with you. Today, we’re talking to Ethan Mollick, a professor of innovation at The Wharton School at the University of Pennsylvania, who has become one of the leading experimenters with these new tools.

Way back in 2022, right after OpenAI launched ChatGPT, I called Ethan to ask him to write an article for us because he seemed to have an immediate intuitive understanding of how to use it. Now, a year and a half later, he’s just published a new book, Co-Intelligence: Living and Working with AI, about what he’s learned about using generative AI, what it can and can’t do, and the risk companies face in trying to integrate it into their work. We start out talking about what Ethan has learned through direct experimentation with these tools.

ETHAN MOLLICK: So the crazy thing about the state of AI right now is that nobody knows anything. I talk to all the major AI labs on a regular basis, we have conversations, and I think people think there’s an instruction manual that’s hidden somewhere. There is not. Nobody knows anything. There’s no information out there. So the best advice I can give, and this is the principle of my book, is that you should use AI for everything you legally and ethically can, because that’s the way you get experience with how these systems operate. So for me, I will use it for research and for writing. I use AI as an experiment for most things that I do, to see what it works for and what it doesn’t, and the results are often quite surprising. So I think a lot of people are waiting for instructions that are not forthcoming, and you have to take charge and do this yourself to some degree.

TOM STACKPOLE: For people who are kind of anxious about this, do you have a pitch for why people should start messing around with this or even just trying to figure out what it’s actually like to use this?

ETHAN MOLLICK: I have a few pitches. I mean, the first pitch is I think it’s important. I think a lot of people think this might be going away or, “AI is here, and now I got time to adjust.” This is a rapidly advancing technology, and I don’t think there’s any indication in any circumstance that it’s going to disappear. I also don’t think we’re going to see the advances plateau that quickly. Maybe a year from now, maybe two years, but it already operates at a very high level. I think we need to get used to a world that has AI in it, and trying to put that off doesn’t help you, and in fact, knowing how it works will help you adjust as the systems get better.

The second reason is that it is helpful. I mean, it’s funny, because when I talk to large groups of employees and executives, the executives almost always are not using AI, but lots of employees are already automating their jobs. Right now, the huge advantage comes to you as a user: if you can figure out a way to make your job better, you can use AI to do that. And a lot of people are secretly using AI at work. The best survey we have says over 60% of people use AI secretly at least some of the time. So there are advantages to you. We can talk about what that means for organizations and leaders in just a bit, but there is value in doing that.

And then I would say the third reason is once you get over the freakiness, it’s super interesting and fun to explore. Like this system does a lot of things that are really neat, and you can be the first person to figure out what those things are.

JUAN MARTINEZ: In my mind, you’re like the prompt master. I go to your LinkedIn page, I see prompt after prompt, and you’re giving good feedback. What research would you cite to talk about the best prompts and the best way to use the answers? And do you have your own best practices for how to use prompts and how to take answers?

ETHAN MOLLICK: So there are two core ways to interact with the AI, which we call conversational and structured. In a conversational prompt, you’re literally just chatting back and forth with the AI, and everything matters in this case. There are papers showing that punctuation matters, that if you ask the questions in a dumber way, you get less accurate answers. If you approach it as a debate, the AI will argue with you. If you approach it as, “I am the teacher, you’re the student,” and you sort of imply that’s what you want, the AI will be much more pliable. If you approach it as, “You are a dumb machine that just does work,” it’ll act like a machine that just does work. We don’t fully understand how to make that work best, so I just don’t worry about it. Like I said, I give it a context, “You are X, I am Y. Let’s work together on this,” and that gets you a large part of the way there.

Part of the problem is that people are putting too much magic into prompting. There is not a single one of the AI lab people I talk to who thinks prompts are going to be that important two years from now, or that prompt crafting or prompt engineering is a durable skill. It will be if you’re trying to build large-scale enterprise deployments, but for most people’s work, the AI can already tell you what to do, and it will only get better at helping you prompt. When I teach my students how to prompt, I typically make them prompt four or five times before producing something, and interestingly enough, by the time you’ve prompted four or five times, not only is it hard to recognize it’s AI writing, but the AI detectors don’t work anymore. So it feels much more like a blend of human and AI work. This is not type a query in and get a result back. It’s a conversation with an intern, with an employee that you are trying to get to do good work.
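
[Editor’s note: for readers who want to try the iterative, conversational prompting Ethan describes, here is a minimal sketch in Python against OpenAI’s chat completions API. The model name, the persona, and the feedback prompts are illustrative assumptions, not anything from the episode; the point is simply context-setting (“You are X, I am Y”) followed by several rounds of feedback in one continuous conversation.]

```python
# A minimal sketch of the conversational, iterative prompting style described above.
# The model name, persona, and feedback prompts are illustrative, not from the episode.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # assumption: any capable chat model works here

# Context first, as Ethan suggests: who the AI is, who you are, what you're doing together.
messages = [
    {"role": "system", "content": (
        "You are an experienced editor. I am a manager drafting a memo. "
        "Let's work together to make it clear and persuasive."
    )},
    {"role": "user", "content": "Draft a short memo announcing our pilot of AI tools."},
]

# Several rounds of feedback, the way you'd direct an intern,
# rather than accepting the first answer that comes back.
feedback_rounds = [
    "Make the tone warmer and less formal.",
    "Add one concrete example of a task the tools can help with.",
    "Cut it to under 150 words.",
    "Give me three alternative opening sentences.",
]

for feedback in feedback_rounds:
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    # Keep the draft in the conversation so each round builds on the last.
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": feedback})

final = client.chat.completions.create(model=MODEL, messages=messages)
print(final.choices[0].message.content)
```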

TOM STACKPOLE: In the book, you talk a lot about how you used AI to write this. Can you tell us a little bit more about what that process was like, what you learned worked well, what didn’t work well, how that whole process changed how you were thinking about using these tools?

ETHAN MOLLICK: So one of my principles in the book is to be the human in the loop, to figure out what you’re good at and do that well. For right now, where AI is on the ability curve, you are probably better than it at whatever core task you like to do most, and AI is probably not as good as you at that.

So I like to style myself a pretty good writer, and AI is not as good a writer as me, but the AI made writing the book much better because it did a lot of ancillary tasks. So that could be anything from the stuff that typically stops you from writing a book, which is, “Give me 20 versions to end this sentence because I’m stuck.” I would ask the AI to just give me variants. You know, “Summarize these 200 research papers,” and I would read them myself, but then I had the summaries available to work from. I’d actually send the summaries to some of my fellow academics, and they quite liked them. They thought they were well done. AI does a pretty good job simulating customers, so I had AI readers read chapters and give me feedback on it.

So it wasn’t about the writing task itself; it was about removing the friction from all the other stuff that would’ve stopped me from doing my writing. And I think that’s part of what you want to think about when you’re using AI: not so much “how do I replace the core thing I love to do and think I’m better at,” maybe AI could do that, maybe not, but “how do I make it so that’s what I get to focus on.”

TOM STACKPOLE: So one of the things that you write about that I think is really useful is you have a great way of thinking about the kinds of work tasks that we should continue to do versus the kinds of tasks that GenAI might be able to take on. Can you kind of break these down into categories for us?

ETHAN MOLLICK: Sure. So there are sort of four categories and a subcategory, not to make it hard. But I talk about Just Me tasks, which are tasks that only you as a human can do, either because you want to do them or because a human should remain in the loop of that particular task or idea.

JUAN MARTINEZ: You’re a professor. What are some of your Just Me tasks on your day-to-day?

ETHAN MOLLICK: So an interesting Just Me task in my case is letters of recommendation. It’s about purposefully setting my time on fire as a signal flare to people that I care about someone. So I’m supposed to spend 45 minutes or an hour doing that, and I’m supposed to struggle with what I write. If I just give the AI the resume of the person, the job they’re applying for, and say, “I’m Ethan Mollick,” and do a thumbs up emoji, “Write a letter of recommendation,” I will get a better letter of recommendation than the one I would write in 45 minutes or an hour.

So there’s an open question to me about whether we do that or not. You know, I still do grading by hand even though I know the AI will do a better job, because I feel like that’s an obligation as a professor. When I write reviews of academic papers, I do them by hand, but on the principle of using AI for everything, I turn them in and then have the AI do a review and see where the AI and I differed. So I’m trying to think about these things, but those are kind of moral lines that may get crossed soon.

So those are Just Me tasks. Then there are delegated tasks, where we do some of the work with the AI. And then zooming out again, we finally have automated tasks. You say, “Go handle this,” and it goes and solves your problems for you. And that’s the explicit goal of OpenAI this year, to release fully-working autonomous agents.

TOM STACKPOLE: Yeah, I mean, one of the things that’s been interesting is that critics of some of these LLMs have been saying, “There are limits to this architecture. There is going to be a plateau, and it’s going to come sooner than people think.” What do you make of that argument?

ETHAN MOLLICK: So I think that nobody knows the answer, and I see splits even when I talk to people at OpenAI between we’re on an infinite curve here to AGI versus this is going to level off, and I don’t think anyone knows, and people don’t know until the training is done and the model is shipped, essentially. So I would suspect that this summer of 2024, we will see GPT-5 class models that represent a significant improvement over GPT-4. After that, that’s the next question. Is there a GPT-6 that is as big a leap over GPT-5? Are we seeing the top of the curve? I don’t know the answer.

JUAN MARTINEZ: I know how I feel personally about working with AI or having AI work for me, but there are a lot of studies happening now about AI as a boss, AI as a supervisor. Can you talk a little bit about what that means and how people are responding to being controlled or supervised by AI?

ETHAN MOLLICK: So we don’t actually have a lot of evidence on the current version of AI. Part of the problem is that our conversations about AI are confusing, because prior to ChatGPT, when you had a podcast or an HBR article about AI, it was about AI as algorithmic approaches, usually math-based forecasting. So when we talk about AI supervision, almost all the studies are about those earlier models and their sort of cold algorithmic control: what does it mean to be an Uber driver and have the algorithm telling you what to do, what happens when doctors or managers are getting advice from AI, and how do they feel about that. I think we’re not seeing the same sort of effect with large language models, because they feel more like a person. It feels warmer, it feels more humane, and we don’t quite know what the effects are going to be. That being said, I think the dangers of algorithmic control are still there. It’s just dressed up in a nicer way.

TOM STACKPOLE: In this context, you’ve talked about how important it is to still have expertise. My first magazine job was being a fact-checker, and it was brutally slow and surprisingly hard, because there are facts baked into all kinds of things, and it’s not always immediately obvious what assumptions are being made. So how do we make sure that we can still be experts and do this kind of fact-checking work?

ETHAN MOLLICK: Expertise is kind of great because it gives you heuristics and rules. You can glance at something and say, “Is this good? Is this bad?” Right? And you build expertise through deliberate practice, from trying something over and over again and getting feedback on it.

To me, the biggest risk, actually, is the destruction of deliberate practice inside organizations, because we actually have a medieval apprenticeship system inside white-collar work, right? When my students graduate from Penn, I think they’re awesome, but they’re not specialized at working at HBR, or Goldman Sachs, or McKinsey, or whatever, name your company of choice. They go there and they spend a couple of years learning the ropes. That’s how we teach people: we get an intern who is very, very smart but inexperienced, and in return for doing some of our work, they learn, right? They get paid relatively little, and then if they’re good enough, they advance in the organization. That’s the basics of how organizations work.

Intern work is the most delegatable work to AI. It’s so easy to hand off: “Write this deal memo, do this research project, give me the initial briefing for this interview, create a transcript and highlight the key points.” And I think the real danger is that we’re going to destroy expertise-building inside organizations, because people are just going to have AI do that work for them.

JUAN MARTINEZ: Well, you’ve actually studied this in a real-world setting. Can you talk a little bit about the study with BCG, where consultants were given GPT-4 to perform some of their work tasks? How did it work, what did they do, and what did you learn from it?

ETHAN MOLLICK: So this is work with a whole set of co-authors at Harvard, MIT, and Warwick. We went to Boston Consulting Group, the elite consulting company. They gave us 8% of their global workforce, which was amazing, and we did a couple of different experiments. In one of the main ones, we developed 18 realistic business tasks, ranging from analysis, to persuasion, to creative tasks, and we asked people to do them. We measured everyone first doing tasks without AI, and then on a second set of tasks, half the people were randomly assigned to use AI and half were not.

What we found was pretty extraordinary: a 40% improvement in the quality of answers, about a 26% improvement in speed, and about a 12.5% improvement in the amount of work done, and that was with GPT-4 out of the box, without any of the special training lots of companies are spending their time and money trying to build. We found the impact largest on the bottom performers, not the top performers, though we’re still trying to figure out whether that was a result of early use, where people didn’t know the tools very well, or whether it’s a universal thing; it is a result people have found before. So very, very powerful results right out of the box for a very elite set of tasks, which was fascinating.

JUAN MARTINEZ: So if you’re an enterprise, how do you sort of take your weakest “employees” and then give them ChatGPT-4 and help them become better?

ETHAN MOLLICK: I think if you’re an enterprise owner or a manager, there’s a lot to think about, because the incentive right now is for your employees to secretly use these systems, and they’re doing that all the time. You can think about the reasons. There are a lot of them, right? One is, “What if my rules aren’t clear and I get fired for using it?” Second is, “You guys think I’m a wizard right now because I’m suddenly producing all this amazing work and you don’t know how I’m doing it. If you know it’s AI, you might value my work less.” Or maybe, “Hey, I just showed that I’m replaceable by AI. I don’t want to show you that.”

So there’s a lot of reasons people don’t want to share. So it starts with a culture problem and an incentive problem, right? How do we incentivize people to do this? How do we build a culture that people want to share what they do? So I find in startups, in nonprofits, in cooperative enterprises, they’re really hitting it off with AI because people share, “Hey, I figured out a way to do something cool.” In large-scale bureaucratic organizations or highly-competitive organizations, everyone’s hiding their AI use all over the place for all those reasons.

JUAN MARTINEZ: Can you give us examples, if you have any, of these secret cyborgs that really messed stuff up because they just copied and pasted or because they didn’t do the work that you suggest people do in order to make the most of GenAI?

ETHAN MOLLICK: The weird thing about it is that the secret work people are doing is often for tasks they know well, and they experiment until it’s good, because they know what good looks like. So I haven’t seen huge incidents inside work environments, right? There are famous cases of lawyers using this, thinking it works kind of like Google, and citing cases that aren’t real, because it hallucinates citations all the time, and not checking and getting in trouble with judges. That’s become an increasingly common phenomenon in the legal field. Again, something we’ll see growing, but it comes from inappropriate use. For example, one use I see people putting this to that’s completely inappropriate, and it was one of the most common things people tell me they use the system for, is performance reviews. Of all the use cases, right? Performance reviews suck to do, but they are made meaningful by the process of doing them, right? And when there are only a couple of AI users, maybe it doesn’t feel so bad, but once everybody starts using them, we have to rethink how HR works in that case.

TOM STACKPOLE: I think the stories about the hazards of how this could be applied by companies are really interesting, and I want to look at a different study, this one by Harvard Business School researcher Fabrizio Dell’Acqua, who studied recruiters using AI. Some were given a good AI, some were given a mediocre one. So how does this play out, and what does this example tell us about how companies should be careful about how they’re starting to use these tools?

ETHAN MOLLICK: And Fabrizio is the lead author on the BCG study as well. In this one, he studied a phenomenon that we also found in the BCG study, which is that when people use an AI system that’s good enough, they actually stop paying attention. He calls it falling asleep at the wheel, and it’s most common when the systems are best. So if you have a bad AI system, you check all the work. If it seems really smart or good, you stop checking it. There’s an additional factor, which is that the AI’s errors become very subtle, so it’s very hard to even check the facts.

So between those two factors, fact-checking gets harder and we fall asleep at the wheel, which means you stop using your brain as much when you’re working with AI, and it’s really hard to figure out a good way around that. There’s no sort of Boeing-style disaster at this stage of somebody turning something in, right? It’s much more a bunch of small disasters of people not paying attention. So the real danger to me is less that we’re going to see a plane fall out of the sky and more that we see a steady, creeping set of indifferent work appearing.

TOM STACKPOLE: Coming up after the break, we’re going to talk about what organizations that are successfully using generative AI are doing differently and why these tools can’t be managed like other enterprise technology. Be right back.

So one of the things that you articulate I think really well in this book is that there’s this tension between how easy it is for individuals to be innovative with these tools and how hard it is for institutions to do the same thing, for a variety of reasons. So what questions should companies be asking as they start to figure out what to do about generative AI, how they should be thinking about it, and what it means within the organization?

ETHAN MOLLICK: The truth about innovation is that it’s very expensive to do. R&D is expensive, and the reason is that the way we learn is trial and error. It’s why drug trials are very expensive, and why figuring out whether a software product is good or bad is very expensive. But R&D is very easy for people on the tasks they do in their own jobs, right? If we try recording this podcast slightly differently every time, if every time you send an email out you do it slightly differently, that’s pretty costless, and you get fairly fast feedback, because you’re experimenting in a domain you know well and you can do it all the time. That’s how we learn.

The problem with organizations is that they’re viewing this as an IT product that needs to be centrally controlled and implemented, and there’s a lot of problems with that kind of central control approach with AI. You’re waiting for centralized instruction to tell you how to use it, and it’s unclear how a senior management team would be able to tell a line worker how to improve their sales technique using AI, or if they’d listen anyway.

JUAN MARTINEZ: Have you come across any examples of companies that have incentivized this use well and have actually brought use cases to their employees and said, “Hey, do this. It’ll help you”?

ETHAN MOLLICK: I mean, one of the more effective, and also more extreme, examples was IgniteTech, which is a software holding company. The CEO got into the idea of AI very early and gave everybody GPT-4 access last summer and said, “Everyone needs to use it,” and he has told me that he then fired everybody who didn’t put a couple of hours in by the end of the month, but he also offered cash prizes for the best prompts. Another organization I know, when they do hiring, requires you to try to automate the person’s job with AI before you put the job request out, and then you put in a different job request that’s altered for what you think the job is going to look like in the future. So I think modeling behavior, incentivizing with rewards, and thinking about the future are the three things you want to do to incentivize organizations properly.

TOM STACKPOLE: It sounds like you’re also saying that companies need to really change how they’re thinking about where innovation comes from, who’s responsible for it, and who’s being rewarded for it, right? I mean, there’s also something structural, something about how they’re even thinking about this, that needs to change.

ETHAN MOLLICK: I absolutely think that we’re not ready for this world that’s happening. We’ve built corporations around the idea that the only control system we have is other humans and that the only way to get advice is to escalate things up a chain, and that’s not true anymore. So organizations need to change in lots of ways. The locus of innovation has always been on the edge, but now it matters more than ever, because your only advantage as a large company with AI is that you have more people using it and you can adapt faster. Otherwise, everybody else has the same AI tool you do, and most companies I talk to have worse AIs than every kid has access to in most of the world, because they’re so scared about privacy and other concerns, sometimes rightly, sometimes wrongly, that they don’t allow the most advanced version.

TOM STACKPOLE: You know, we’ve had this kind of natural experiment where Bloomberg invested in their own GPT, and now we’re looking at how that compares to frontier models basically for doing the same task. You have one that’s just out there and one that’s been trained with all this really valuable proprietary data. What is that sort of telling us? What have we seen in that kind of experiment?

ETHAN MOLLICK: So just for people who aren’t that familiar, frontier models are the most advanced models, and right now there’s a very strong scaling law in AI: the bigger your model is, which also means the more expensive it is to train, the smarter it is. The result is that the most advanced frontier models are often much better than specialized models built for specialized tasks.

And we don’t have all the answers yet, but Bloomberg decided to build a finance GPT, and they spent over $10 million on it from what I can tell, and they trained it on finance data, and it was supposed to do things like sentiment analysis for stocks and so on. And it did that, and it was pretty solid, but then in the fall, the team retested it against GPT-4, the advanced model available to everybody all over the world, and GPT-4 beat it in almost every category. We’re seeing the same sort of effect with specialized medical models being beaten by GPT-4, which is not built for medicine, and with specialized law models in the same way. What this tells you as a company is that you have to think about your use cases. If smarts are valuable, you’ll need to use a frontier model, but the future is probably not training your own AIs.

TOM STACKPOLE: One of the things that’s kind of surprising is I think a lot of companies are looking at generative AI and they’re saying, “This is great. We can cut headcount. We can automate tasks, so now we can have 10 people doing the work that 80 people used to do.” But I’m curious what you think of that instinct of, “This is a labor-saving tool. Let’s cut headcount. Let’s keep doing the same stuff with fewer people and just count the bonus profits.”

ETHAN MOLLICK: To me, I mean, that is a short-sighted failure of imagination in many different ways. Seriously, if you really think that this is another general-purpose technology like electricity, steam, or the computer, but happening in a very compressed time period, then the worst thing you could do is say, “Let’s make sure to keep productivity exactly the same by firing people,” when those might be the people who could leverage this into the next generation of things. And I see that happening. People will reduce headcount to keep performance exactly the same at the exact moment that everybody else is getting a performance boost. What I fear is that companies have gotten into the view that cost-cutting is the highest-value thing they can do. Expansion was hard to do; you did expansion through acquisition. Now you can do expansion through your workers, through everyone becoming more productive, and companies need to change their mindsets, or workers will detect that and they’re not going to come along as part of the voyage.

I mean, I have these rules for organizations that I’ve been thinking about, four questions I would ask any company. One, what did you do that was valuable that’s no longer valuable? If providing white-glove customer service to everybody was your big differentiator, that’s about to go away.

The second thing is, what can you do now that was impossible before? So if you’re a consulting company, it used to take 10 or 20 hours of someone’s work to give you a basic outline of a piece of work. Now you can produce a bunch of stuff with a consultant’s mindset in five minutes. What does that let you do? Do I offer individual advice to everybody? Does everybody get their personal consultant? It’s very exciting.

The third thing is, what can you move upmarket that you couldn’t before? So now we can provide white-glove service to everyone. Your customer service agents know everything about a person and have a great interaction with them. Your salespeople can give personalized sales pitches.

And then the fourth thing is, what can you move downmarket or democratize? I think again about BCG. Those results showed that AI makes consulting much easier to do. They work for high-end Fortune 500 companies. What can they do now for small and medium businesses that they couldn’t do before? How do they move downmarket?

So I think people who are thinking about this strategic shift will be much better off than people who don’t.

TOM STACKPOLE: Two final questions, I think. First, what is your advice for a manager or a team leader about how to get started with generative AI?

ETHAN MOLLICK: The advice I would give a manager is to start using it, and using it publicly, to model behavior. The idea is that you show where you fail or succeed, you model curiosity, you try to figure out from other people how to use it, and you publicly share that you don’t know what you’re doing and are working on it. I’d say, “Hey, let’s try using AI in this meeting. Let’s have the meeting recorded and have the AI give us advice, and let’s give it feedback on that advice.” Making it casual and making it interesting is going to be really important.

TOM STACKPOLE: So what about for senior leadership? What should they be sort of doing with this?

ETHAN MOLLICK: Absolutely. I think the same things apply, but I also think they have to realize that this is the real thing, right? And if it is, this is enough of an emergency that you should be using it so you know what it does, your employees should be using it, you should be thinking about this, and you need a multi-pronged approach. How do we reorient the organization? We don’t have answers yet. We don’t know what an organization chart looks like with AI included; we don’t know what processes need to be changed. This is where leaders and strategy actually matter, and I would love to see more leaders stepping up and saying, “I have a vision for how to build a better organization in this time period.” You want to be the visionary leader of the next century that everyone looks up to, that there are biographies written about? You figure out how to use AI in a positive way to build the next great enterprise. That’s how you get famous.

TOM STACKPOLE: Juan, do you want to get famous?

JUAN MARTINEZ: I’m already famous. I’m on a podcast with Ethan Mollick for Harvard Business Review. My mother never thought I would get this famous.

ETHAN MOLLICK: Well, nobody paid attention to me for like the last decade, so it’s been a very funny rise of like, “I’ve been talking about stuff like this for a while, but okay.”

JUAN MARTINEZ: Ethan, I learned so much from this, and your book was fantastic.

ETHAN MOLLICK: Thank you guys so much. This was really interesting.

TOM STACKPOLE: Thanks for coming on.

That was Ethan Mollick. He’s a professor of management at The Wharton School at the University of Pennsylvania. His new book is Co-Intelligence: Living and Working with AI.

So Juan, we touched on this super briefly with Ethan, but before we wrap up, let’s talk for a minute about how people feel about working with generative AI.

JUAN MARTINEZ: Yeah, this is a really, really important topic, and researchers are exploring it because it’s important to understand the challenges that we’ll face as AI is integrated into human teams. We’ve published a few articles on this already at HBR. One that I really love just came out in our May-June magazine issue. Full disclosure, I edited it, so you know it’s awesome.

TOM STACKPOLE: Okay, tell us about it.

JUAN MARTINEZ: First of all, Tom, have you ever played Super Mario Party?

TOM STACKPOLE: No. The last Super Mario game I played was probably Super Mario 3 on original NES.

JUAN MARTINEZ: All right. So a team of researchers used Super Mario Party to explore how integrating AI into a team affects humans. In the experiment, people were asked to play Super Mario Party together. They were paired in teams of two, and they had to work together to gather fruits and veggies from around the kitchen. They also had to coordinate with other teams to make sure their onscreen characters didn’t bump into one another, and they all had to do it really, really fast. So if you were watching the game, you’d see these Mario characters rushing around a kitchen, grabbing tomatoes and lettuce, and maybe Mario is bumping into a Goomba, and Toad has a huge stack of teetering plates. You can picture it, right? Very chaotic, very fun.

TOM STACKPOLE: And how does AI come into this?

JUAN MARTINEZ: The researchers asked these teams, the two humans, to play together for six rounds. Then they added an AI team member and asked the team to play another six rounds with the AI. And even though the AI is really, really good at Super Mario Party, they found that team performance declined in all kinds of ways when the AI was added. They looked at coordination, like how often Mario actually bumped into a Goomba, they looked at how many ingredients the teams gathered each round, and they asked all the people on the teams how motivated they felt when playing with the AI teammate, and all of that was worse with the AI. Here’s Bruce Kogut, one of the authors of this paper. He’s a strategy professor at Columbia Business School.

BRUCE KOGUT: So we had initially all humans playing this thing, but then suddenly we put in this change. You know, we would take out their best friend Luigi and replace it with an AI algorithm, and they just did not like it. You had the humans getting more and more depressed over time playing with this AI-driven Luigi or Mario.

TOM STACKPOLE: So Juan, what are the takeaways for managers who may be trying to figure out how to integrate generative AI into their teams?

JUAN MARTINEZ: Eighty-four percent of the people in the experiment said they preferred to play with their human teammates over the AI teammate. That led researchers to conclude that when you add AI to the mix, team sociability can fall. That means human team members feel less motivated, less trusting, and they make less effort. So if you’re going to add AI to your team, keep that very real downside in mind.

TOM STACKPOLE: Okay. So what’s up next, Juan?

JUAN MARTINEZ: All right. So the second article was published on hbr.org in February 2024, and the focus is on how working for an algorithm, which is already a reality if you’re an Uber driver, changes workplace dynamics.

TOM STACKPOLE: Okay, we’re talking about algorithmic management, right?

JUAN MARTINEZ: Exactly, that’s it. The researchers started by surveying workers in the transportation, distribution, and logistics sectors, and they found that workers who were managed algorithmically were less inclined to help or support colleagues, and this was true even when they controlled for factors like the size of the organization, employee turnover, type of job, income, gender, et cetera, et cetera.

Next, the researchers did a field experiment. They paid 1,000 gig workers from an online labor platform to create slogans for a van rental company’s social media marketing campaigns. The workers were randomly divided into two groups. One group was guided and evaluated by an algorithm, and the other group by a human. After the workers had completed the task, the researchers asked them to offer advice to others on how to create effective marketing slogans, and they found that the workers managed by the algorithm offered roughly 20% less advice to their peers than the workers managed by the person.

TOM STACKPOLE: Okay, but what about the marketing slogans? Was there a difference between the quality of their work?

JUAN MARTINEZ: No, that’s the thing. The quality of the actual slogans that the two groups came up with didn’t differ significantly, which suggests that algorithmic management doesn’t necessarily affect workers’ task-based performance, but it can decrease their pro-social motivation, and this was especially true when algorithms were monitoring and evaluating employee performance. Here’s Stefano Puntoni, one of the authors of this paper. He’s a marketing professor at The Wharton School and co-director of AI at Wharton. This is his advice for companies using algorithmic management.

STEFANO PUNTONI: So the moment that you, as a worker, feel you’re being appraised by an algorithm, that your performance, your worth as an employee, basically, is being measured by a machine, we find that people tend to objectify coworkers and be less helpful to them. Companies really ought to think about the byproducts for organizational culture, for individual feelings and cognition, when they deploy algorithms. It’s not just about whether the machine can do it. The question is, should you let the machine do it?

TOM STACKPOLE: Yeah. So it sounds like what he’s saying is that there’s really a cultural element we need to be thinking about here, because this can really change how people feel about each other, even if they’re not working with a machine coworker but the terms of the system are being dictated by a machine.

JUAN MARTINEZ: Yeah, I mean, this is the early days, right? Nobody knows exactly how to use these tools to get the maximum benefit out of them, and every role within every organization is going to be a little bit different, so you have to figure out what the right option is for your organization, but then you have to go into use cases and start to figure out how it works for each individual role, and we’re definitely not there yet.

TOM STACKPOLE: Okay. Thanks, Juan.

Next time on Tech at Work, how will the end of third-party cookies change the internet? It’s a big shift that’s coming up fast, and it has huge implications for digital advertising and publishing and how all kinds of incentives on the internet really work.

JUAN MARTINEZ: We’ll talk with the researcher who’s studying how ad effectiveness will be affected, and we’ll speak with an agency executive who’s guiding her clients through this transition. That’s in two weeks, right here in the HBR IdeaCast feed.

TOM STACKPOLE: Did you know HBR has more podcasts to help you manage your team, your organization, and your career, including nearly 1,000 episodes of IdeaCast alone? Find them at hbr.org/podcasts or search HBR in Apple Podcasts, Spotify, or wherever you listen. And if you want to help our show, go to your podcast app and rate the show five stars. It helps more than you may know, and we read every comment.

JUAN MARTINEZ: Thanks to our team, senior producer Anne Saini, senior editor Curt Nickisch, audio product manager Ian Fox, and senior production specialist Rob Eckhardt. Special thanks to our friends on HBR’s video and social teams, Nicole Smith, Ramsey Khabbaz, Kelsey Hansen, Scott LaPierre, and Elainy Mata. And much gratitude to our fearless leaders, Maureen Hoch and Adi Ignatius.

Thanks for listening to Tech at Work, a special series of the HBR IdeaCast. I’m Juan Martinez.

TOM STACKPOLE: And I’m Tom Stackpole. We’ll be back in two weeks.


