Ethan Mollick: The brave new world of generative AI
In this episode of the McKinsey Global Institute's Forward Thinking podcast, co-host Michael Chui talks with business professor Ethan Mollick, an associate professor at the Wharton School at the University of Pennsylvania. Mollick covers topics including what today's generative AI can already do, what it means for productivity and the labor force, and why he requires his students to use it.
Michael Chui (co-host): Janet, have you tried using ChatGPT?
Janet Bush (co-host): I hadn't, actually. I was really nervous about it, but then you persuaded me to sign up for it just before we had this conversation. And I was absolutely gobsmacked—that's an English word for amazed. So I am fascinated to hear more about it.
Michael Chui: Well, as you know, we will actually be publishing research about the economic potential of generative artificial intelligence in June, including its impact on the labor force. But today's guest has also been struck by the enormous potential of generative AI in business. And not only has he been thinking about it and tweeting about it, but he has been experimenting with these ideas, too. And as a business professor teaching entrepreneurship, he actually requires students to use generative AI as they develop business plans in his courses.
Janet Bush: Well, I like that because it means, if I write using ChatGPT, I won't be cheating. I am fascinated to hear what he has to say.
Michael Chui: Ethan, welcome to the podcast.
Ethan Mollick: I’m so glad to be here. Thanks for having me.
Michael Chui: Great. Let's start with your background. Where’d you grow up? What did you study? How’d you end up doing what you’re doing today?
Ethan Mollick: I may speak like an East Coaster, but I was born and raised in Milwaukee, Wisconsin, which surprises everybody. I have a love of cheese curds to prove it. I did my undergraduate degree at Harvard and started a company with a college roommate after my mandatory stint in management consulting.
We invented the paywall, which I still kind of feel bad about. And then I decided, since I didn't know what I was doing—we were making it up as we went along—I'd get an MBA to learn how to do it right. I went to MIT for the MBA and then stayed for a PhD, once I realized nobody knew what they were doing when it came to start-ups.
During that time I also started working at the Media Lab with some of the folks there who were interested in AI, and I have been working on games for a while. I went to Wharton afterward and have been teaching there ever since, launching internal Wharton start-ups and researching individual performance and how to teach better, basically.
Michael Chui: What's up with the paywall? What did you invent there?
Ethan Mollick: My college roommate, who was the technical genius, actually developed the first paywall, the first charge-for-access system. For a while, The New York Times and The Wall Street Journal used our makeshift software.
And I was a 22-year-old going to large-scale publishers and telling them they should go on the internet, without knowing enough to realize that was probably a bad idea. At the time, we got everyone online, so that was good. We went through the whole process—we got acquired. I've been out the other side of that process.
Michael Chui: All right. We’ll blame you from now on. That's great. Well, you recently told me that you’re betting your entire career on generative AI. So just for our listeners, what is generative AI?
Ethan Mollick: I feel like of the two things, betting the career feels much more ominous, but I’ll talk about the generative AI piece first. Generative AI is the category that we’re assigning to the kind of artificial intelligence you see with ChatGPT or Midjourney or DALL-E.
It's kind of ill-defined, but you can sort of think all AI is doing the same thing, which is trying to predict the future based on past data. It used to be about predicting how many widgets we’re going to sell or where should our UPS trucks be.
Generative AI's starting to become about predicting what the next word in a sentence should be so it can write a paragraph for you, what an image should look like based on a prompt. So it's about the creative and productive use of AI for generating words and images, essentially.
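To make that next-word prediction concrete, here is a toy sketch in Python. The tiny vocabulary and the scores are invented for illustration; a real large language model learns scores like these from enormous amounts of text, but the core loop is the same: score every candidate next token, then pick one.

```python
import math
import random

# Invented scores ("logits") for what might follow "The cat sat on the".
# A real model computes these from patterns in its training data.
logits = {"mat": 4.0, "roof": 2.5, "moon": 0.5}

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)      # roughly {'mat': 0.80, 'roof': 0.18, 'moon': 0.02}
print(next_word)  # usually 'mat'
```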
Michael Chui: And what makes this different than other technology trends? You’ve been in technology for a while, even before you were an academic.
Ethan Mollick: I sat out all of NFTs and Web3, which feels good in retrospect, although hopefully none of your listeners kill me for that. I think what makes this trend very, very different is that it's already here.
We’re used to trends being, sort of, "In five years the world will be different. In five years we’ll all be doing all our financial transactions through the blockchain. In five years we’ll all be talking in VR." There's a couple brave companies experimenting with it, but the technology's not really real yet, but maybe it gets there.
AI is here now. So we don't need to be worried about a future AI—I mean, we can worry about that—to see change happening today. So the product that's available in 169 countries right now, which is GPT-4 in the form of Bing, is the most advanced AI publicly available on the planet.
It's available to billions of people. It literally can write code for you. It can literally do reports for you. It can pass the bar exam. It can pass the neurosurgery residency exam. We don't need future advancement for that to happen. So it's impossible to imagine there's not going to be a change as a result of that technology being widely available.
Michael Chui: Let's talk about some of those things that it can do. Again, you’re a legacy blue-check Twitter guy. You tweet often about things that you have discovered that it can do. What are some of the provocative things that you’ve discovered this technology can do today?
Ethan Mollick: Among the many provocative things it can do—I think a lot of people know it can write code, but the new versions can actually write and execute that code. You can ask it for something really strange.
I asked it to "show me something numinous," which is a gold-plated SAT word that talks about something otherworldly touching higher powers, angelic. And it modeled for me the Mandelbrot set, saying fractals have this numinous pattern.
I said, "Show me something eldritch," which if anyone listens to or reads any horror novels knows it's associated with Cthulhu and H. P. Lovecraft. It generated spontaneously for me an H. P. Lovecraft text generator that used the first bits of H. P. Lovecraft's story to create a Markov chain that would create ominous-sounding text.
It's making intuitive leaps in these kinds of discussions that you wouldn't expect, as well as creating the code and executing it. I asked it to give me a science fair project that would win a high school science fair, and it wrote and executed code to show me how machine learning worked, and then drew all the diagrams and everything else. It does a lot of human-style thinking as part of the process.
Michael Chui: So this is not only, "Please change this poem into something that Shakespeare would write," or something like that. This is, "Please write me a piece of code based on some term that is obscure." And so it not only understands, quote unquote, the term, but then writes code for it as well and executes it.
Ethan Mollick: Well, it understands the human viewpoint that might make a difference here. And again, understanding and everything else—people can't see me, because this is a podcast, but I’m doing air quotes. Any time I use a human-like word to describe AI, assume there's air quotes around it.
It's much easier to talk about AI as if it were doing human things, but it's important to recognize that it isn't, in the way that we are. But, yes, it's doing the kinds of things we would not expect software to do—things we wouldn't expect a simple word-completion tool, which is ultimately what a large language model like ChatGPT is, to be able to do.
Whether that is illusion, or unexplained behavior, or that it's found the deep patterns of human language the way that Stephen Wolfram thinks, I can't tell you. But it's doing more than we would expect on those fronts.
Michael Chui: Why does this matter, at least economically or from a corporate standpoint? Why should people be caring about this other than they’re worried about their kids’ homework or that sort of thing?
Ethan Mollick: There's a ton of reasons. The first is that this is a useful tool. I gave myself a challenge the other day: how much marketing could I do in 30 minutes? I launch products all the time. I run something at Wharton called Wharton Interactive, which produces educational games.
And I was like, "Here's our new product. Look it up, and then let's go and market as much as we can."
In 30 minutes, the AI, with just a little bit of prompting from me, came up with a really good marketing strategy, a full email marketing campaign—which was excellent, by the way, and I’ve run a bunch of these kind of things in the past—wrote a website spec, created the website along with CSS files, everything else you would need, created the images needed for the website, created the script for a video, and actually created a fake video with human-sounding voices and fake AI actors in it, and created a full social media campaign.
Thirty minutes. I know from experience that this would be a week of work for a team of people. And that is the same everywhere. If we look at where AI has the biggest impact, it's exactly those kinds of things—the things we pay humans the most to do, that require the most education, that are the most creative—and it frees people up from some of those tasks, for better or for worse.
That's one reason to care is that it actually does real stuff that we care about in the business world. And the second reason to care is that a lot of people have access to it. Companies don't have any particular advantage here over individuals.
In fact, it doesn't work very well as enterprise software. It works really well as a tool I can delegate to—almost an assistant I can hand the tasks I don't want to do, and it will handle them for me. That has a big economic impact.
And then third, let's just talk numbers. We're seeing, in early controlled experiments, anywhere between 30 percent and 80 percent performance improvements on individual tasks ranging from coding to writing, marketing, and business materials. Thirty to 80 percent. To give you some context, steam power, when it was added to a factory in the early 1800s, increased performance by 18 to 22 percent. These are numbers we've never seen before.
Michael Chui: Say more about what that means, to increase performance by 30 to 80 percent. Give a task, and by what metric is it better by 30 to 80 percent?
Ethan Mollick: Let's take one example of what a 30 to 50 percent performance improvement looks like, because there are actually a lot of dimensions to it. There's a really great study out of MIT, an early experiment using GPT-3.5, the slightly older version of ChatGPT.
What they did was give realistic business writing tasks to people with business backgrounds and then have the results judged in various ways. What they found was not only that it decreased the time it took people to do the work by 30 percent-plus—they would have ChatGPT do a lot of the writing for them—but that the quality of the end product was actually judged higher than when humans created it. And the humans who used it liked their jobs better, because they outsourced the annoying stuff. So when we talk about performance improvements, we're talking about better outcomes, faster speed, and potentially even a better job.
Michael Chui: Wow. That's quite remarkable. We’ve also seen this increase in the productivity of software developers. Some of my colleagues are actually doing some experiments with our groups as well, and it's really interesting because in some cases it's the best engineers who get the most improvement from the use of these tools. I don't know if you’ve heard of or seen similar types of effects.
Ethan Mollick: It is very scattershot right now. That is one of the giant questions—who benefits? Some of the work shows that the worst performers benefit the most. Some show top performers. We don't understand yet enough about who benefits, and that's going to be a big deal in the future.
Michael Chui: Speaking of huge benefits, you made the point about some of the most highly educated and highly compensated people and roles are the ones where these technologies can actually increase productivity. What does this mean for the labor force? What does it mean for jobs?
At MGI we’ve been doing a lot of research on the potential impacts of automation. We have different scenarios that we’ve modeled out over time. What are your reflections or analysis as you start to think about what this might mean for the labor force?
Ethan Mollick: First off, let's be clear that nobody knows anything—I want to put that caveat up front. We don't. We have comforting models from the past, of short-term disruption followed by long-term gains, but we've never had an automation threat aimed broadly at the highest-paid white-collar workers.
We don't know what that means, and we don't know how it will be exploited. We don't know what kind of situation this is. And that's assuming the technology stays static where it is today. There are a lot of assumptions in place.
The short-term, hopeful version is we outsource tasks, not jobs. The really annoying parts of your job that you don't want to do, those are things that get outsourced to AI—maybe you’re doing it yourself—and you focus on the more interesting, creative, human parts of your job.
The more threatening version is, it turns out that a lot of our jobs are basically spent managing other humans in ways that AI might do better. So you’re producing a report that is helping your higher-ups understand what your people who are working under you are doing, and that's what a lot of your job is. "Is that permanent or not?" becomes a big question.
I think a lot of this is that we don't even know how companies are weighing in on this yet. I do think, disturbingly, of—I was onstage recently with the head of a company, the CEO of Turnitin, and this is public because he talked about it onstage. He's been playing with GPT for longer than a lot of us. His business is booming.
But he said onstage that he thinks he could get rid of 70 percent or 80 percent of his engineers and marketers within 18 months and replace some of them with high school students thanks to ChatGPT. I don't know if I’d go that far. But I do think the fact that some people are thinking about it should make us a little bit nervous.
Michael Chui: What should people do, given that there's some worry here?
Ethan Mollick: I think worry and excitement go together. Part of the reason there's a threat from this is that it actually makes you much more productive. And productivity is key to everything. The more work you get done, the more, presumably, you can get paid; and the more productive we are as a society, the higher our standard of living. That's the whole reason standards of living increase. So there's a good side to this.
Michael Chui: You had mentioned that the stuff doesn't work very well as enterprise software. But we also know basically every enterprise software company is adding generative AI as a feature. So whether or not it's an email system or customer relationship management system, they’re adding this as a feature. What does this mean as you think about enterprise software and how this technology might be adopted in actual companies?
Ethan Mollick: I think it's important to recognize how companies are using AI in software versus what AI is good at. What they're not using AI for is processing data or writing code, which it's actually quite good at—or at least, that's not what they're releasing. It's not deep in their APIs.
What it's doing is sort of a slap-on-the-surface kind of thing, which is like, "OK, there's a chatbot that will help you do unstructured tasks on top of this." Almost everybody's got chatbots for unstructured tasks. You can talk to a chatbot in Slack and ask it to write an essay for you. You can talk to a chatbot in—name whatever software you want.
There's obviously a bit more work on this, and on customer service. But the thing is that these systems don't play well with others, because they don't actually work like software. We want software to be reliable. We want it to produce the same results every time. Having run software organizations, I know that's sometimes a fantasy, but that's what we want.
This is not reliable. Sometimes it’ll refuse to do things. Sometimes it’ll do different things. If you turn down the temperature enough, change the randomness level so it starts being more predictable, the results become much less interesting.
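As a rough sketch of what that temperature setting does: the model's raw scores are divided by the temperature before being turned into probabilities, so a low temperature concentrates almost all the probability on the single safest answer, while a higher one spreads it out. The numbers below are invented for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale scores by temperature, then convert to probabilities."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [round(e / total, 3) for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

print(softmax_with_temperature(logits, 0.2))  # roughly [0.99, 0.01, 0.00] -- predictable
print(softmax_with_temperature(logits, 1.5))  # roughly [0.53, 0.27, 0.20] -- more varied
```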
It's a trap. Now, we'll get better at it, but for right now—I've been using all the API versions of the plug-ins that are available for ChatGPT, which I have early access to, and it sometimes forgets it can use them. It gets confused by them. It sometimes makes stuff up. That will get better, but it doesn't work like software does. And so it changes the paradigm of what software is: we expect software to be repeatable, and it's not. And it's not explainable.
We expect software to be explainable. We expect software to come with a manual, so you know the commands available to you and what they do. Here, there is no fixed set of commands, and the same request can do different things every time, depending on what's in its memory, what happened in the past, and what its random seed is. That's not how traditional software works. And when people think about this as software, they lose sight of what makes it so important and interesting.
Michael Chui: It certainly sounds interesting and important, but some of the words that you used there are maybe scary if you’re trying to apply this stuff in business. So I’d love for you to talk more about the fact that, as you said, these systems aren't necessarily reliable.
People talk about "hallucinations" when you ask for facts, in the sense that it sometimes will hallucinate not only facts but actually the supporting documents that supposedly support those facts. Or you also talked about challenges around explainability. Why did it produce what it produced? If you have a system that's not reliable and not explainable, why use it in business?
Ethan Mollick: Because people are already not explainable and not reliable. The right analogy is to think about people—about interns—and not about software. Just because it's made of software doesn't mean that's the most useful analogy for it, just as the fact that we're made of meat doesn't really help us think about what we do.
Now, I’m not saying AI is in any way sentient, alive, a person. But it's trained on human thought. It's built around a system that is designed to reproduce human language. It's not surprising that the deep structure of it is human-feeling.
We know this even to the extent that there's a great research paper showing that if you make it anxious—if you prime it with the same anxiety primes we use for humans and say things like, "Write 100 words about something that makes you anxious"—it acts differently when it's anxious.
It actually shows higher levels of bias but also becomes more innovative and more diverse in its answers, like humans do. So there are real reasons to think of it that way. And as a result, people who are outsourcing their IT department or their data science department to it are making a mistake. It is an incredible creative tool. It is an incredible innovation tool. It is mediocre as something that gives you the same answer every time.
Michael Chui: Say more about this anxiety thing. So, you can treat these systems as if they had emotions because you can engender the kinds of responses that a person would if they’re anxious, angry, sad, what have you?
Ethan Mollick: Not just that. There's a paper out of Harvard that's really excellent, that shows that you can use this for market research, because if you tell it it's a particular person, it answers enough like that person that you can get pricing information from it.
There's a nice econ paper that shows that it reacts to classic issues of cognitive biases with the kind of cognitive biases humans have. Again, not human, not sentient, but trained in a way that pushes it to seem like it's sentient. So thinking about it that way can be a very powerful tool. And again, I think that that's the analogy a lot of people are missing.
Michael Chui: And so that can be useful in the sense that if you want it to simulate what a person might do. But presumably that could also be a set of issues, if it's trained on data which exhibits bias with regard to gender or race or ethnicity, and then we’re asking it to perform tasks. Does that mean that it potentially could actually perform tasks in a way that has those biases that we see in people as well?
Ethan Mollick: Absolutely. It's absolutely a danger of bias. If it wasn't for its guardrails, it’d be incredibly biased. The guardrails add different sets of biases that antagonize other sets of people. It absolutely has bias. It absolutely makes stuff up. It hallucinates. Again, though, that's why I think thinking about it like a person can actually be helpful, because it is biased. It's not a machine that thinks like a machine.
We have to take everything with a grain of salt. That doesn't mean that it can't do tremendous work. It means that we have to be careful about the work that it does. And it does human work with human issues.
I think we tend to overestimate the danger of some of these concerns, especially hallucination and making up facts. There's a nice paper that gave GPT and Google's Bard the neurosurgery qualifying exam and found not only that it passed with flying colors but that the hallucination rate went from 44 percent with Google's Bard, to 22 percent for GPT-3.5, to 2 percent for GPT-4. I don't think the hallucination problem is unsolvable. But you probably wouldn't let anyone working under you send out a client report without looking at it; I feel the same way about GPT-4.
Michael Chui: That's interesting. You teach at a school of management. Is the right way to think about these systems [that] it's almost like managing a person as much as it is programming a computer?
Ethan Mollick: It isn't, but it also feels that way. Obviously it's a very different thing, but that's a useful starting prior. We were talking earlier about people creating very complicated prompts with post-processing, and I'm doing the same kind of thing. But you can get 80 percent of the way to getting the most out of these systems just by dealing with it like a person that you're in charge of.
Michael Chui: That's crazy. [laughs] What are the sorts of principles of management that actually are applicable to thinking about how to use these systems effectively?
Ethan Mollick: We don't know the answers to all these things. I can tell you some of my experience, some of it backed up with data. Two approaches work best. The first, like anything else, is asking it to think step by step through a problem—that produces better outcomes than not telling it to—and then examining the steps to make sure they're the right steps.
That's the same thing I would do with someone who's doing a task for the first time. The second is giving it examples: "Do it like this, because this is how we've done it in the past." It will do a better job. Those are human things. Again, as a teacher, it's pretty great, because I'm like, "I know how to teach stuff, and I can teach this, and it works pretty well."
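To make those two techniques concrete, here is the shape such a prompt might take, written as a Python string. The task, the example summaries, and the format are invented placeholders for illustration, not a tested recipe.

```python
# A hypothetical prompt combining the two techniques described above:
# worked examples ("do it like this") plus an instruction to reason step by step.
prompt = """You are reviewing customer feedback for a software product.

Here are two examples of the summary format we have used before:
Example 1: "Theme: onboarding. Sentiment: negative. Action: simplify setup."
Example 2: "Theme: pricing. Sentiment: mixed. Action: clarify tier limits."

Now summarize the feedback below in the same format.
Think step by step: first identify the theme, then the sentiment,
then propose one action. Show your steps before the final summary.

Feedback: {feedback_text}"""

print(prompt.format(feedback_text="The new dashboard is great, but exporting is slow."))
```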
You'll see all these people with very elaborate prompts online, and you expect this to be magic, but really it's conversational. The thing to remember, though, is that it doesn't get mad—or at least not really mad—so you can ask it to redo work 400 times, where I would feel very bad telling an intern, "No. Do it again. Do it again. Do it again." Not a problem for the AI.
But I still find myself thanking it and telling it, "Good job, but could you tweak this," even though I know that doesn't matter. But it may end up mattering. We don't even know. It might turn out that being nice to it—I have some suspicions that being nice to it results in better outcomes, but we have no idea.
I think the overall thing I would say is we don't know the full principles, but that would be my starting point, is thinking about it like a person. And again, that's where I think a lot of large companies are getting this wrong. They’re making this an IT and strategy issue. It's kind of an HR issue.
Michael Chui: That's fascinating. So again, as you think about it as a person, the skills as a teacher, the skills as a manager, those are the skills that you’re bringing as you try to make this thing work better, these systems work better. Tell me what it means to view this as an HR issue.
Ethan Mollick: It's an HR issue in a bunch of different ways. One way it's an HR issue is that this is about people and policy. Do you let people use these systems? Who gets to use them?
It's a tool for people to use. It's not an IT tool. It's not regulated. Once people start using these systems, it's not easy to know what they’re using it for or what the results are. Their work gets contaminated with the work of the AI. That's a policy decision, not an IT decision.
It's a security threat, but that's sort of secondary to the question, "How do we feel about the fact that—and I've talked to a bunch of HR people about this—all their reviews are now being written by AI?"
Michael Chui: Wait. Say that again. What is going on there?
Ethan Mollick: If you paste in someone's résumé and their last performance report and say, "Write a good performance report for them," you get a performance report that often feels much better to the person reading it, that feels more accurate, than if the HR person had spent an hour on it.
Michael Chui: But does the prompt include some view of how the person's performance was? Or you’re saying it improves a draft that a person wrote.
Ethan Mollick: No, no, no, no. I'm saying paste in their résumé, paste in a paragraph about their previous performance goals, and then write two sentences—"They're doing really well on their goals. Here's what their manager said about them"—followed by, "Write a nice performance review. Include lots of details. Write it from a professional HR perspective. Include actionable points." Then hit enter, and you'll get a good review.
Michael Chui: How do you feel about that?
Ethan Mollick: Bad. But I’m intimidated by the fact that there's a whole bunch of stuff we do that's about setting our time on fire to show that we’re very considerate of people, which is good.
If I’m asked to write a letter of recommendation for someone, I spend a lot of time on that letter of recommendation. That's a big deal. The letter of recommendation I end up producing for them is probably worse than if I pasted their résumé in, pasted the job in, and said, "Write a really good letter of recommendation for this person," and then when I got it said, "No, actually, make paragraph two more glowing. Make paragraph one mention a weakness." I will get better outputs that will probably do better for the person I’m writing a letter for.
I’m not doing it. But that's the challenge. There's a lot of work that we do in organizations that depends on there being a human in the loop to have any meaning but is still not that well done. But AI can do it better.
How do we feel about that? I don't know. I think we're about to discover a lot of our work has large elements like this. And by the way, when Microsoft releases Copilot, which basically adds AI to Office, you're going to send an email composed with AI, with a document attached that AI wrote, to a manager who's using AI to read the document and respond to you. "What does that mean for work?" is, I think, a question that we're barely beginning to grapple with. Again—an HR issue, a strategy issue, not an IT issue.
Michael Chui: There's also a trust issue. I told this story to The New York Times, that I sent an email to a colleague and he immediately texted me and said, "Is this email legit?" And I said, "What are you talking about, Rob?" And he said, "It seemed suspicious." And I said, "Well, maybe I should use ChatGPT to draft it," and he said, "I thought you did."
So do we all start to worry that communication we’re receiving from other people isn't in some ways genuine or authentic?
Ethan Mollick: I think if you’re not already worried, you’re behind. This is done. The horse is out of the barn, and all the other animals are out of the barn, too. This is done.
I can already tell you the writing of all my students is now excellent. I require AI in my class, so they’re going to be excellent anyway, but it's excellent. If someone's not sending you a well-written email, then they didn't care enough to use AI. Every image online is suspect. Every communication is suspect. Of course, I mean, everything just broke. It's just taking people a while to realize it.
Michael Chui: What kind of world is this where we’re worried about all of these things?
Ethan Mollick: The flip answer is, it's the world we’re in right now, and we have to reconstruct meaning in it. But this is the actual challenge for any business person listening to this. What does this world look like? What work is meaningful at this point and what isn't? What should you delegate to AI, obviously?
People are secretly using AI around you all the time. I cannot emphasize how much secret AI use is happening in places you don't expect. People come up to me after talks all the time, people you wouldn't expect, people in charge of writing policy, and they’re using AI to do stuff because once you start using it you’re like, "Why do I want to handwrite a document again?"
It feels like you’re going from word processing to handwriting. Why would you do that? I know plenty of people at companies where AI is banned who just bring their phones and do all their work on AI and then email it to themselves because why would you not do that?
We’re already in this world. And what's happening is companies are like, "Let's position it into a policy paper. Let's wait for someone to tell us what to do." Your employees are already using AI everywhere. And by the way, not just your employees.
Again, available everywhere in the world. So there are a billion people in countries that have lots of talent but not a lot of opportunity who can now write in perfect English, write code, produce results. What are you doing about those? I don't think the scale of this change is really noticed by most people yet.
Michael Chui: I think you mentioned there's an analogy to what we used to call shadow IT spend, that technology is so compelling that people use it even if it's not sanctioned by central IT. And as a former CIO, that resonates with me, certainly. I’m very curious, though, as you said, you’re using it within the classroom. Tell me how you think about that. How are you using it in the classroom?
Ethan Mollick: I made it mandatory. I teach a lot of entrepreneurship classes, some innovation classes. AI is required for all of them. I have policies on that. They should tell me what prompts they used at the end, write a paragraph reflecting on it. But I don't care how much is written by AI at this point.
What I've done now is three different things. I've vastly expanded the amount of work people do. We've had a lot of good start-ups come out of Wharton. The 801 class I teach, the introductory MBA class—not just me, but a bunch of other talented teachers teach it—people have raised billions of dollars from that class over time, or exited for billions.
It's been a very successful class at Wharton. I would like to claim all the credit for it but can't. This is talented students. But at the end of a semester-long class, what have you done? You maybe have the pitch for your idea. Maybe you’ve done a survey.
Now I’m requiring people, "Have working software. I don't care if you can't write software. You should have a working piece of software. I think you should have a working website. I think you should have images, and I think you should have fake market reviews. I think you should have interviewed 50 fake people and ten real people."
I can just ask for so much more work in the same amount of time. And it's amazing—you now have five more people on your team, ten more people on your team. That's the way to think about it. I'm expecting all work to be perfect now. I don't want grammatical errors anymore. I don't want any issues. Why would I ever see those again?
And then it also lets me do more as a teacher. I noticed my undergraduates stopped raising their hands in class as much—and, yeah, I do pretty well teaching; people give me high scores. "Why?" I asked them. Because they'd rather ask the AI a question later, and have it explain something four different ways, than tell the entire class, "I don't understand something."
I think the shift is already here. The future is in so many places already, and we just haven't recognized it yet. And this rearguard action's not going to work.
Michael Chui: How do you feel about students asking a system a question rather than you?
Ethan Mollick: It changes how we do lecture. First of all, lectures are always dumb. I do them, but they’re always dumb. They were never the way you should do work. So school's going to be fine. We can talk more about it, but there are ways of making school work.
People are going to still want to have it. I’m not worried about that, at least in the medium term. We’ll see what happens if the AGI [artificial general intelligence] people are right, then we’ll worry about that later. But I feel like this is showing me what we should be doing.
It is kind of weird that someone who's confused has to tell everyone in class, "Hello, I'm confused by this," so that I get to explain it a different way. That's great for me—it lets me explain things multiple times. But in some ways, it's a weakness of mine that not everyone understands it the first time.
But of course not everyone understands it. I either teach at too high a level and some people miss it, or at too low a level and some people get bored. So why would we not want people to ask the AI, "Explain it like I'm five," or, "I'm an MBA with a banking background. Explain how this works"? Why not?
Right now we should worry about errors. We should worry about mistakes. We should worry about hallucinations. But I also make mistakes. And my students mishear things I say all the time. Is this worse or better? I don't know.
Michael Chui: One of the things I’m reflecting on—as you said, the students now are generating more work—is if we could go back to the labor discussion as well. While you could increase productivity, that doesn't necessarily mean you’ve reduced the number of people you have working. You might just have them produce more, given that they have become much more productive.
Ethan Mollick: There is no doubt in my mind, based on the data that we have so far, that there's going to be massive productivity increases for your analysts. "What do you do with that time?" is really your question.
Are you going to let them work less and pay them the same? Are you going to expect them to do more work in the same time? Are you going to shift the kind of work they're doing so it's more high-end, creative work? Are you going to hire fewer of them?
That is the problem. If I got to interview you, that's what I’d be asking you, "What are you going to do with this stuff? What does this mean?" And I think that that's the question.
Michael Chui: There are historical precedents with other technologies. I mean, there was a time we paid people to calculate, and then Excel came along. We still have people that are in similar roles but, again, they’ve sort of all up-leveled, in terms of the things that they do.
Ethan Mollick: I agree, but we've never seen such a broad-based general-purpose technology arrive so fast, aimed at the highest-income, highest-educated, most creative people. While I absolutely think every precedent says this is great—that it frees us up to do more creative, more interesting work—I do worry when a lot of the creative, interesting work can also be done by AI.
I want a really clear indication. That's why I keep telling people, use it. Figure out what your unique ability as a human is, but also make sure you can defend that, because AI is getting better, not worse.
Michael Chui: Talk about how it's getting better. You’ve talked, and others have noted, how much better each generation of this technology becomes. There are calls to pause the continuing development of this technology, for instance. There are questions about whether or not the technology can use itself to get better. What are your reflections on how this technology evolves over time?
Ethan Mollick: Every technology development curve is an S-curve. It starts off slow, goes exponential, and then slows down as it maxes out. We're on the exponential part of the curve, the steep part of the S-curve. And the problem with the steep part of an S-curve is that it's literally unpredictable when it eases off. We can't really tell.
People kept expecting Moore's law to fail. In fact, when you look at Moore's law, which predicted computer chip growth—I interviewed Gordon Moore about exactly this issue—he thought that a large part of the early stage of the curve would come from a thing called bubble memory, which turned out not to work.
The curve actually went a little slower than he thought at first, but then it took off, because silicon—standard transistor chips—ended up being better than he thought. We don't know what the shape of an S-curve is going to be in advance. We don't know what's going to happen as a result of this curve.
We can plan for three scenarios. One is the pause happens, regulation happens, in which case this is the best AI we’re ever going to use. I still think it's going to absolutely disrupt work.
The other option is we’re on a regular kind of exponential. It gets a bunch better, but maybe not 100 times better. Maybe ten times better. Well, then we’ve got a really disruptive tool out there that is going to really substitute for a lot of labor. What does that mean? What do we do with our time?
And then we've got the scary scenario that everybody talks about—but which I think gets too much attention relative to the other two—which is, "What if this gets so good that it becomes artificial general intelligence, outsmarts us, and then becomes our benevolent dictator?" Hopefully benevolent.
I think that a lot of time is spent planning for that third eventuality, but the first two are more likely. And I think we need to be ready for this. But either way, I think it's likely that the AI you’re using today is the worst AI you’ll ever use.
Michael Chui: And you say we’re spending too much time worrying about the super-intelligence type of scenarios.
Ethan Mollick: I think we should worry about it. I just think it ends up being the exclusive worry because it sort of takes all the air out of the room. The idea of building an alien god that sort of rules over us—there's a lot of people who are super concerned about this.
We should be worried about it. We absolutely should be concerned about that, and large-scale alignment issues. But as we’ve been talking about today, the world of work and education just changed dramatically over six months.
I see much less work going into processing what that means. It tends to get reduced to "will people lose jobs or not?" That's obviously important, but it's just one piece of what's happening to work and education.
How do we find meaning when we are using AI tools? What kind of work is valuable for humans to do? What should people be investing in? What is it OK to hand off to AI? How do we regulate those choices? Those are much bigger issues that we haven't paid attention to.
Michael Chui: Do you have tentative answers to any of those questions?
Ethan Mollick: The tentative answer to those questions is, again, I think that this is about models. One of the things that people who are listening to this podcast, and especially people at organizations, need to think about is, "What do you want to model?"
The future can be what we want it to be. We have agency here. So do you want this to be something where we keep our employees through this transition and we figure out ways for them to do even more and better work and we figure out how we use this as a competitive advantage to expand, rather than to cut costs?
That's in your power, and you should do that, because I think that model is the great model of the future. There will be plenty of companies on the other path—IBM, for example, announced they're not hiring as many people because AI will do that work, like the example I gave earlier. That's another model.
I think we need to model the behavior we want to see by testing different approaches to AI that work better. I think that it's completely plausible that we’re in a world where AI expands productivity tremendously, takes away our worst tasks. There was always this thought that AI and robots would take away the dirtiest, most dangerous tasks, coal mining, truck driving.
And then the thought was, "We’ll find other work for people in those spaces." I think we need to think the same way in white-collar work. What's the dirty, dangerous equivalent job that you want to give up? How do we do that? I think it's within our power to make this extremely positive. We just need to think about how to do that.
Michael Chui: So there's still a role for human agency here. There's still a role for deciding how we want to use this incredibly powerful general-purpose technology.
Ethan Mollick: Absolutely. And that's where, again, I think the emphasis on the "will AI result in the alien god that will kill us all" question leads us astray, because it reduces everything to a weird choice: do we hit the stop button or do we hit continue?
That's not the only choice we're facing about AI. It's not even the only choice about AGI issues. But leaving that aside, it's not the choice that really matters. The choice that matters is the one every executive should be thinking about: are they going to figure out a way to use this, and what are they doing to make that happen?
I see so much passivity from senior executives when I talk to them about this that it kind of scares me. This is the issue of the moment. This is the most important thing you should be spending time on. And they’re delegating it down to a committee or waiting for some outside advisory force to tell them what to do. There are no answers forthcoming. You’ve got to do stuff.
Michael Chui: What does the actively involved executive do in addition to, as you said, start playing with the technology so that they develop some intuitions about it?
Ethan Mollick: You think about, how do I run a crash program to figure out how this works in my work? And that may literally mean something as radical as pulling 20 percent of your most creative workers off whatever they're doing and having them use just generative AI for a week. See how much of their job it can do. And give a million-dollar prize to whoever comes up with the best idea. I think that would pay for itself in most organizations.
But I think you also have to think in advance about what happens if it turns out they can automate 80 percent of their jobs. What am I going to do if that happens? I think you need to have that philosophy tied to this, too.
Am I committed to my employees, to working through this with them? The only way I'm going to get them to show me what they're doing is for them to feel safe revealing it. Otherwise they'll keep using AI secretly, against you.
Michael Chui: Coming back full circle, you said you’re betting your career on this technology. What does that mean?
Ethan Mollick: I haven't really believed in other technologies this way—I am a nerd who's also a technology skeptic. I've been in the Media Lab at MIT, built software companies, built game companies, and I've always been like, "Eh, I don't know about this." Always been a little bit skeptical.
And this I’m not skeptical about. I really worry that people are not taking this seriously enough. I worry also that there seems to be a kind of reactance to it among a lot of smart people I know who use the system for, like, ten minutes and then they’re like, "Uh, I don't want to use it anymore."
Sometimes they have a reason for it, like, "It gave me a wrong answer so I never want to touch it again." Sometimes it's just, "I don't really want to deal with this right now." And I think that that's really dangerous. So I’m betting my career on this to some degree because, look, I’m tenured, so it's a lightweight bet, so don't take this too seriously from me. I’ve got a job even if I’m wrong.
But I am betting that I’m not. I’m betting that this is the big thing. This is the moment that is really going to start changing things, that this fundamentally is going to be a shift in how we work and how we interact at a level that's as big as anything we’ve seen in our lifetimes. The internet was a big deal, but it took a long time from its birth to have an effect. I think this will be much sooner.
Michael Chui: You think that the impact will come faster for this set of technologies.
Ethan Mollick: Yes, because it already is, and if you don't realize that, you're not looking closely enough, because you're not using it. There are very few people I know—I mean, again, there's selection bias and everything else—in fact, no one I know, who has used the technology for five or ten hours and then said, "Eh, that's not that interesting. I'm not going to use it again." I just haven't seen that happen.
I’ve only seen people convert from skeptics to believers. And then some of the believers become, like, cult members, which also worries me, because I’m not one. But talk to scientists who are deep into GPT-4 and they’re like, "We’ve begun." And you’re like, "Ooh, that's not the kind of thing I want to hear from the researchers."
I’m not there, but I do think skepticism is only warranted enough that you should play with it for five or ten hours and then decide what you think.
Michael Chui: What does it mean, then? What are these changes that it will produce at faster paces than we think?
Ethan Mollick: It's a general-purpose technology. There's not an industry that will be left unchanged by this except maybe roofing, which is the industry apparently least exposed. But I have talked to a couple people in roofing, and they’re like, "Oh, no. This is going to be a big deal in roofing, too, because it changes how we order and how we interact with customers."
So I would say that the competitive advantage you have right now is you could figure this out for your industry. You could figure out what it changes. It's going to shift the nature and meaning of work for lots of people.
I think that the even more profound part that we’re not grappling enough with is how we’ve organized work for the last 180 years—since the invention of the railroad and telegraph forced us to do large-scale org structures—has not changed very much. Maybe we had the birth of agile, but that's also a highly structured process for software.
None of those things make much sense in an AI world. Agile is a stupid method for an AI world, because it requires everyone to coordinate in very particular ways, and that doesn't work well when AI lets you make sudden leaps in the code.
We don't have answers to a lot of these things, which I think is scary but also super exciting. And the thing I worry about is that people who are waiting for the answers are going to skip this entire generation of AI and not be ready for what's about to happen.
Michael Chui: With that said, having mentioned answers, let me give you a lightning round of quick questions, quick answers just to wrap things up. All right, here we go. What do you find most exciting about the development of generative AI?
Ethan Mollick: The absolute closing of the gap between creativity and outcomes. I have many, many ideas, and I used to build whole organizations to implement those. And now I’m just like, "Hey, can the AI build a game that teaches me about this topic, entropy, for middle schoolers?" Yes, it can, in two seconds. That's awesome. I mean, what a tool for increasing performance.
Michael Chui: What worries you most about the development of generative AI?
Ethan Mollick: We don't know where it's going to end. We don't know the social implications. And there's some very immediate things about faking news, faking information, the information environment becoming polluted, not just in social media but inside companies, that I think we really are not grappling with enough. No matter how much I try and freak out about it with other people, I think people don't recognize what's about to happen with just the massive content that's coming our way.
Michael Chui: What's the most underappreciated use case of generative AI?
Ethan Mollick: As a creative partner. People don't like to hear that AI is creative, but it maxes out all our creativity tests. It's really good at creative work. It's really good at being a partner to create a video game for you, or run a Dungeons & Dragons campaign for you, or generate 500 ideas for your start-up, or take your ideas and make them more interesting, or work within constraints: "Come up with ten ideas for how I can brush my teeth better, but they only apply to astronauts in space." It gives you a volume of ideas and creativity that is a really important key to innovation.
Michael Chui: What's the most overhyped use case?
Ethan Mollick: The most overhyped use case right now is this idea of AutoGPT—the idea that we can give AI its own goals and it executes on those goals autonomously. The reality is that it doesn't work very well. The AI gets caught in a loop, just like an intern sent off to execute autonomously would. It's going to get confused.
And then on top of that, what's the value in that? It's much better for you to actually be in the loop, give it a command, see how far it goes, and then correct it, rather than have it run autonomously. So I think people are leaping too quickly onto the next AI thing. You guys have not absorbed how much GPT-4 can do. We’ve got five years of technology exploration before we’re ready to move on to the next thing.
Michael Chui: Which industry is most underappreciating the impact of generative AI?
Ethan Mollick: It's hard to know directly, but I think it's consulting, to be honest, because consultants view themselves as very unique, as doing—I talk to them—things that AI can't do. But consulting is exactly in the crosshairs: pulling together data from multiple sources for analysis, writing up great analyses, doing autonomous slide generation, doing complex data work. It does all of that. And I think we have to think about that more.
Michael Chui: Which occupation, other than consultant, is most underappreciating generative AI's impact?
Ethan Mollick: I think there's a very obvious impact of this in marketing writing. And I think that even though marketers are sort of vaguely thinking about this, I think a lot of them are still trying to kind of put it in a box and assuming it can't do things it can do. That doesn't mean there's not a great role for human marketers, but I think they really need to rethink a lot about what does it mean to do marketing writing and analysis when a tool does a lot of the low-end work that you used to do.
Michael Chui: What's your go-to question when you want to test the performance of a generative AI system?
Ethan Mollick: I have a few of them, and differing levels of weirdness. I ask it to write a sestina about the elements, which is a very complex poetic form. Very good test of AI.
A really good test of whether you’re using an advanced AI system is to ask it to give you ten sentences that end with the word apple. Only the most advanced systems can do that because AIs don't see words the way we do. Anything earlier than GPT-4 will mess that up completely. So it's my actual go-to test to figure out whether I’m using a GPT-4-based system, the most advanced system, or an older system.
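The reason this test works is tokenization: models operate on subword tokens rather than whole words, which makes it hard to plan a sentence so it lands on a specific final word. As an illustration, OpenAI's open-source tiktoken library can show how text breaks into tokens; the exact splits depend on the tokenizer version, so the code prints them rather than assuming them.

```python
# pip install tiktoken  -- OpenAI's open-source tokenizer library
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

for text in ["apple", "I would like an apple.", "applesauce"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {len(ids)} tokens: {pieces}")

# The printed pieces show that a "word" is often several tokens, and that
# spaces and punctuation attach to tokens—part of why older models struggle
# to plan a sentence so that its final word comes out as "apple".
```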
I also find "show me something that delights me" is a nice answer, too. And you can see what it comes up with creatively as a result.
Michael Chui: What would you be doing professionally if you weren't doing what you are today?
Ethan Mollick: As somebody who is a professor studying AI, I’m very happy with where I am, but I think entrepreneur is the obvious option. This is the golden time for you. You now have a staff of ten people under you. What are you going to do with that? You just got ten free employees. That feels like a moment.
Michael Chui: What would you recommend someone graduating from high school today to study?
Ethan Mollick: I think about that a lot. The easy, cynical answer is, go to a regulated industry, because those will take the longest time to adopt AI. Pharma, banks, hospitals, that's the great way to go.
But the other option is, go into the storm. What's the area that you think is going to be most affected by this? How do you become part of the new generation that uses this?
That's the million-dollar question I think about all the time: what does the industry look like? At two years out, I think we're overestimating change. But five, ten years? I think we're underestimating it.
Michael Chui: And what's one piece of advice you’d have for listeners of this podcast?
Ethan Mollick: Use this thing. I think the only way out is through, and I have a theory that the only way you know you’ve really started to get what this thing means is you have three sleepless nights.
The point is to get you to the three sleepless nights, the nights where you’re like, "Oh, my God. This is so exciting. This is so terrifying. What's it mean to be human? What does this mean? I don't understand."
If you don't get there—if you're not anxiously getting up in the middle of the night to try a query and then going back to bed thinking, "Oh, my God. I can't believe it did that," or, "Why didn't it do that?"—I think you haven't had your moment. I don't know if you've had your three sleepless nights yet, but that's what I would urge you to do. Until you get there, you haven't really gotten this.
Michael Chui: Ethan Mollick, on behalf of our listeners, thank you for giving us sleepless nights.
Ethan Mollick: Thank you.
Ethan Mollick is an associate professor at the Wharton School at the University of Pennsylvania. Michael Chui is a partner at the McKinsey Global Institute, where Janet Bush is an executive editor.
Forward Thinking is a production of the McKinsey Global Institute. It is hosted by Michael Chui and Janet Bush, and produced by Vasudha Gupta. Our audio engineer is Collin Warren. Find us online at mckinsey.com/mgi or @McKinsey_MGI on Twitter.
The opinions expressed by podcast guests are their own and do not reflect the views or opinions of the McKinsey Global Institute. References to specific products, services, or organizations do not constitute any endorsement or recommendation by MGI.