Why You (and Your Company) Need to Experiment with ChatGPT Now


CURT NICKISCH: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Curt Nickisch.

You don’t have to be at the cutting edge of your industry or profession to be working with artificial intelligence. You’ve probably already heard the term ChatGPT — that is, if you’re not using it already yourself. Late in 2022, businesses of all sizes and individuals started talking about how this easily accessible type of AI really was poised to change work as we knew it. Really this time. ChatGPT has burst into public consciousness by making it much easier for anyone to use AI and see its potential, and its use cases have been making headlines for reasons good and bad.

Even with this rapid adoption, companies are still very far behind in developing strategies, policies, and best practices, and that needs to change quickly, according to today’s guest. He says using ChatGPT could bring something like a 50% productivity increase for some workers. Companies need to know how to harness that power, and they need to develop a strategy fast.

Ethan Mollick is an associate professor of management at the Wharton School of the University of Pennsylvania. He’s been researching and teaching about artificial intelligence as part of his work on innovation, and he’s been using ChatGPT himself and in his classroom. He wrote the HBR article “ChatGPT Is a Tipping Point for AI.” Hi, Ethan.

ETHAN MOLLICK: Hello, nice to be here.

CURT NICKISCH: What exactly is different about this technology or this ecosystem this time around?

ETHAN MOLLICK: So actually the fact that there’s no ecosystem is part of what’s interesting, but the technology’s not new, right? GPT-3, the sort of base generation of this technology, has been around for over a year, and it was okay, right? You’d look at it and it was distinctly a sort of D-minus student. It produced okay work. Then when the chatbot came out, with the associated 3.5 model, they were just larger models with some additional tweaking, not radical changes in technology.

But somehow we crossed a threshold of ability with the size of these models that has made them incredibly more useful. Suddenly they became B-minus students in classes, and now, if you use an even more advanced model like Bing that’s connected to the internet, they’re A-minus students. There’s something about the size of the model, interacting with the kind of information that’s in there and the chat method, that has made them much more useful suddenly. It’s a radical change in a very short period of time.

We have the first early results on productivity from these tools. One is a controlled experiment with people doing business writing. The other was a controlled experiment with coding, and both found productivity improvements of 30 to 50%, and that’s for people not trained on the system, not using optimized systems, just pasting things into ChatGPT and its related tools like Copilot and getting results out of it.

Something clearly big is here. We don’t usually see productivity improvements like that. To give you context, the change in productivity when an American plant added steam power in the 1800s was about 25%. These are very big numbers on an initial pass. It’s not just anecdotal; now we have data that suggests something big is happening.

CURT NICKISCH: Got it. These are preliminary studies. They are not peer reviewed yet, but what kind of applications are being opened up specifically in this moment for companies?

ETHAN MOLLICK: There is a lot, right? What’s interesting is that there is no rule book or manual to work from. You have to explore opportunities and options yourself. Obviously it’s terrific at writing computer code and writing documents, right? You want to write a letter of recommendation, you want to write a performance review, it’ll do an amazing job at all those things. When you start to incorporate the internet with Bing AI and some of the new AIs that are coming our way, it will do things like: you could tell it to learn how to do a Porter’s five forces analysis, then look up everything about the precision agriculture industry and give you a table with the growth rates of every company and their strengths and weaknesses from a five forces perspective, and save you 12 hours of work building a document that you can build on, because you do have to check the work of AI. It will generate marketing ideas for you. It’s very, very good at idea generation, just a very wide range of tasks.

CURT NICKISCH: A lot of those sound like applications for individuals. I guess maybe that’s where it starts, right? A lot of those sound like they are applications for individuals within a company setting rather than a departmental application or effort.

ETHAN MOLLICK: That’s part of what makes it so interesting, though: we are in a moment where the technology is released to individuals, and so ChatGPT was the fastest product to a hundred million users of any product that we know of in history, and the only people who can use it are individuals. There’s no sort of benefit to large-scale corporate use at this point. I mean, at the individual level there is, but corporations don’t have a huge advantage. You can’t call up a consulting company and have them tell you how to implement this. It doesn’t require all of those additional technologies that it would if you were implementing Salesforce or some other system.

There will be more and more corporate and organizational use, but when you talk about a performance difference of 30% to 80% on your time, an individual can get that today in their work. There are secret users of ChatGPT in every company, I guarantee you, who have cut their work by two-thirds and don’t have anyone to tell about it, or don’t want to tell anyone about it. I’m hearing examples about this all the time.

For lots of people, you have to spend about an hour or two working with these systems to know how they work and what they’re good and bad at. But I’ve talked to tons of people who have … First of all, everyone in programming is using this to help with coding. That’s sort of moved from Stack Exchange to ChatGPT as their method of figuring out what to code next. But I’ve increasingly talked to managers who have actually just automated things. The review process is real, like grant applications or regulatory compliance. I’ve seen people do a whole range of different applications out there. You’d be surprised; it’s a general-purpose technology. I didn’t have it write what I’m telling you now, but I could have done that.

CURT NICKISCH: I mean, you just mentioned that there are people doing this in every company and companies don’t even realize it. What do they need to know and what do they need to start developing policies and strategies for?

ETHAN MOLLICK: It’s a really interesting question. There are a number of organizations — I know JP Morgan, at the time we’re talking here, banned the use of ChatGPT in the office. But I’m willing to bet there are tons of people who are just doing Chat on their phone and emailing it to themselves and making changes. When you have a technology that cuts your time by a considerable margin, and the really interesting thing is it cuts the most boring parts of your work, you’re going to figure out a way to use it. I think companies need to realize that there isn’t somebody out there who has the answers.

But at the same time, this technology’s happening so quickly, and the performance improvements are so large, that you need to figure something out. And you can’t wait by just writing a strategy document and seeing what happens in the industry. I would think about: what is our crash research program for how this works for us? Who in our organization is experimenting with this? Should we offer some sort of insanely large prize and have everyone spend four hours next Monday hacking away at how they can automate their jobs with ChatGPT, and give large cash prizes to the people who come up with the best ideas? There is no outside organization to turn to here. This is a company thing. I think people really need to assess their exposure to this and understand where it can benefit them.

CURT NICKISCH: The way you describe that, it sounds like companies that have a pretty strong innovation framework and ways to bring it out within the organization are going to be better set up to get rolling with this.

ETHAN MOLLICK: I don’t mean to keep saying the same answer, but we actually don’t know. The issue is that, as we talked about earlier on, this is really an individual interaction. You basically gained a teammate, everybody did, an infinitely patient intern that sometimes lies to you. What do you do with that? It’s not clear a traditional innovation framework is as helpful with that question, because your R&D team, wherever that is, doesn’t have an advantage over a line manager who’s using this or a customer service rep who’s using this.

CURT NICKISCH: You used the word crash before. Should some companies even be in triage mode? Like, bring back the war room from the pandemic?

ETHAN MOLLICK: I think this is as big a deal in the medium term. I absolutely think you should be, right? Pick a topic, idea generation, writing: what’s happening there, and then what’s the sort of red-team approach? How could other people use this to attack your business model? So I think this is a war room situation, because things are evolving so quickly. It’s very possible for you to kind of wait, use your usual approach to new technology, be cynical about this — and I totally get why people would be cynical about the latest new tech — and miss the opportunity in front of us.

CURT NICKISCH: How do you as an organization start using this technology fairly? Should workers have to use it if they haven’t already started?

ETHAN MOLLICK: I don’t know about forcing people to use it, especially if there’s no corporate use case. The policy will come from the use case, and in cases where high levels of accuracy are required, the current versions of Chat are probably not going to be what you need. But already, for example, some of the fine-tuned models like Google’s medical model are giving, in another controlled study, advice to patients with common illnesses that’s at least equal to that of doctors, and other doctors are rating the answers from the medical AI as being less dangerous than the answers from other doctors. There’s going to be specialized disruption coming next, which is part of why I think the general-purpose technology needs to be understood so quickly.

CURT NICKISCH: That process makes sense to me on an abstract level. Can you give an example of a specific company and use case, and maybe how that could play out? Just as an example.

ETHAN MOLLICK: Let’s say you are a marketing consulting company. One set of stuff you do is market research. Another set is perhaps writing articles on behalf of your clients to help them improve their search rankings, SEO optimization. You might also provide creative services. What I would be doing is asking my people in those fields to start using Chat and documenting use cases, and you’ll start to build policy out of that.

For example, I would start any creative endeavor that I had with ChatGPT having a seat at the table, and I do work on creativity and innovation. It adds a huge amount to any innovative process because it can generate tons of ideas, it can help you play off ideas, it can find novel combinations.

Effectively it’s creative, by the way; it maxes out every creativity test we have, even though it’s not really sentient, right? Try that. How does that work? Now let’s talk about another piece that you might do, which is that SEO article writing. Try using this to actually write the SEO articles. How much human oversight is required? How much do you want to let Chat do the writing, and how much should it be a combination of Chat and humans, which usually works better? And then let’s talk about that last piece, the marketing consulting. Let’s use Bing AI for research. How often is it right? How often is it wrong? How much time is it saving us as a research partner?

I think you have to turn everybody in your organization into scientists in some way. Let’s experiment. Let’s see what works and what isn’t working, but let’s take our daily tasks and see how AI helps us do those things.

CURT NICKISCH: Yeah, it really sounds like a tremendous opportunity for hybrid work, right? You’re talking about another teammate at the table, or an intern who, in the same way you go through your old emails to write the same kind of thing again, basically goes out and does that kind of stuff for you. But you also have to exercise human judgment. You have to make decisions about what to use and what to leave out. It sounds like the possibilities really are not for automation but for hybrid work, where humans are interacting with this technology to be more productive and do better work?

ETHAN MOLLICK: And also offload the worst parts of their work. People who use ChatGPT in these preliminary studies are happier because they offload the worst parts of their job and get to do the interesting creative stuff. The more creative and interesting tasks stay in human hands; the less creative and interesting tasks get handed off to these chatbots and other AIs. The one caveat, of course, is that’s today’s technology. This is improving rapidly enough that it’s not clear where humans are in the loop in the future, and that’s both exciting and super worrying.

CURT NICKISCH: What kind of implications does this have for hiring?

ETHAN MOLLICK: So I think part of the question with hiring is: what does ChatGPT do to skills? And I think it’s going to depend on the organization and the setting, but you can think about four different scenarios.

One scenario is that it makes everybody more productive. We have some early evidence that this seems to be the case, and if that’s true, that’s great, but it might affect how many very high-talent people you’re trying to hire, people who can kind of run these chatbots because they can do more of their own work.

The second option is that it makes your best workers more productive. The people who are already good at work are best at using Chat in this kind of hybrid mode, or whatever other AIs are out there, in which case you want to make sure you’re hiring more stars and thinking about how many lower-level hires are required.

There’s an option that it could end up sort of leveling work. The worst performers do just as well as good performers now because Chat is helping them solve problems and that has implications for who you’re hiring.

The final option is this idea that there’s some prompt engineering role in the future, that some people are just good AI whisperers, and if you can find and hire those people, they’ll have many times more productivity even if they’re not the best traditional workers, because they’re the best at getting AI to do work for them. And I think you’re going to have to figure out which of these scenarios applies for each kind of job you have in your company.

CURT NICKISCH: If I’m a manager leading a team, not necessarily somebody at the organizational decision level, what should I be doing right now?

ETHAN MOLLICK: I think you should be, in the middle of this podcast, opening a Chat window, trying to do as much of your work through Chat for the day as possible, experimenting with your job and seeing what it automates for you and what you feel comfortable with. The most tedious parts of your tasks, right? Writing memos: it’s incredible at writing memos and other information. It’s great at helping you run meetings. You could say, “I want to have a meeting with these four people about this topic. Do the research on the science of meetings and give me the steps I should follow,” and it will give you the steps.

And you could say, “Okay, great. The first thing you said is create an agenda. Create an agenda for me,” and it will create an agenda. And you could say, “Great, create the emails I need to send to each of the people on the team for the agenda.” It will do that. Then during the meeting you can take the transcript, just speech to text, paste it in afterwards and say, “Summarize the meeting notes for me and give me action points and write an email for each of the people with an action point,” and it’ll do a pretty good job of that. You have to try this. I mean, it’s the only way to learn how it works. And what I find is that when you have that aha moment, and people tend to have it at different points, suddenly you’re like, “Wow, this changes my job.”

CURT NICKISCH: What do you say to anybody who’s intimidated by trying this out or just worried about the implications it has for their own work?

ETHAN MOLLICK: I think anyone who’s afraid of this because it’s technology or AI shouldn’t be. There is no advantage, really, to being a coder here. There’s not even really a huge advantage to understanding how large language models work when learning to use this for yourself. These are tools that you program in English-language prose. You literally give it instructions: “Hey, write a four-page market research report for me using the following major bullet points. Write it in an academic but authoritative style, use clear metaphors,” and it will do that, right? There’s no reason not to dive in or to be intimidated in that way.

In terms of long-term career impacts, I don’t think we know the answers to that. I mean, in general, new technologies tend to first disrupt jobs and then add jobs. I’m hopeful that that’s the case here, but we haven’t really seen a leap in technology like this. One of the things that I advise workers to do is to figure out how much of their own job they can automate that both makes their life easier, but also gives them a sense of their exposure and risk to these technologies.

I think everybody should be trying to use it for work. The problem is that when people first use ChatGPT, they often bounce off it or stop using it very quickly, because the first things they try always give the worst possible answers. The first thing you tend to do is use it like a Google search, and it doesn’t know anything. It’s not connected to the internet. You get kind of frustrated. Then maybe you try and treat it like Alexa or Siri and ask it kind of fun questions, and it doesn’t have much of a fun personality. You bounce off for that reason. Then maybe you ask it a factual question about something like, “What’s my biography?” And it makes stuff up, it hallucinates, and you’re like, “This is terrible. Why is anyone using this?”

The issue is that that’s not how you use this tool. As you start to explore more with it, you realize it can automate huge amounts of writing and other tasks. You can paste in a resume and say, “Write a letter of recommendation that makes these three points,” and it will do that for you. You could say to Bing, “Go out and summarize the research on this topic and give me answers to it,” and it will do that for you. You can say, “Come up with a hundred ideas for my marketing slogan for my company; here’s the company description,” and it will do that for you. These sorts of use cases start to be transformative, but they’re not what we do instinctively.

CURT NICKISCH: Are there any dangers you would warn people of? Any disadvantages?

ETHAN MOLLICK: I mean, there are tons of risks. One thing we already talked about, the most obvious, in-your-face risk, is hallucination. You can think about these AIs as trying to do a bunch of different things. They’re trying to give you an answer. They’re predicting the next word, the next token, to give you in response to your prompt. They’re doing that to try and maximize your happiness, to give you something that makes you happy, to give you something that’s accurate, and to give you something that doesn’t break the moral guidelines that OpenAI and Microsoft and other companies have placed on these systems. And so in its attempt to do number one, to make you happy, it will often make things up. If you ask it for information it doesn’t have access to, it will make up a completely plausible, completely convincing argument that’s wrong.

The first thing is you have to be accountable for content. As you start to use these more, you understand what the limits are, but you will find it just makes up facts left and right. You have to be very careful about the areas where you use it, to start off with.

The second set of issues really starts to be about morality and ethics in general. These systems are trained using the entire internet. Does that mean they’re plagiarizing work? Not directly, not using someone else’s words, but what does that mean? How do we keep the use of these systems ethical? I was able, using AI, to create a very credible deepfake video of me giving a speech I never gave, in my voice, and it took about four minutes to do, right? Misinformation is a risk; chatbots are so convincing that you can think they’re human.

How do we deal with that? Then there are the larger-scale work implications. What does it mean if you automate people’s tasks? Who gets the benefit from that? Is it the workers or the companies? Then there are the even larger implications of what we do in a world where there are machines that can do some of these intelligent tasks without being sentient, and how we deal with their growth in ability over time. I think there are a lot of concerns all across the board that we need to be worried about.

CURT NICKISCH: Fascinating. Now we’re talking about one company’s product here. A lot of other companies are working on different applications but using essentially the same technologies, right? Natural language processing and large language models. What other kinds of things may be coming down the pike?

ETHAN MOLLICK: I think everybody’s aware that from OpenAI, maybe even by the time people hear this, there are going to be elements of what’s called GPT-4 released, which is a model that’s much larger than the ChatGPT, GPT-3.5 models we’ve been playing with. And if the capability increases are at the scale they’ve been before, that’s going to be a big game changer.

Google has by far the largest research team in AI, and they’ve been cautiously rolling out products; they have specialized products in different industries that might appear soon. Then you’re just going to see all kinds of other explosions beyond large language models. There are other companies, like Anthropic with Claude, which you can get access to, and a lot of LLMs will be out there soon. Large language models are these text-completion tools, but they’re not the only tools out there.

There are these diffusion-based models, which are often used for creating images. With DALL-E, for example, you can easily create images with text. You’ll be able to do that very soon with video: just describe the video you want to see and get a copy of that video. You can do that for user interfaces; I can describe a UI in a couple of sentences, and there are tools now that will create that for you. So there are a lot of specialized tools coming out: ones designed to be companions for meetings, ones that will help you with particular kinds of writing. We’re seeing an explosion of tools, and the fundamental technology is only growing faster.

CURT NICKISCH: That’s where the strategy documents need to start coming out from organizations too.

ETHAN MOLLICK: And experimentation. I mean, I think it’s great to have the strategy documents. I think the compliance questions are sticky, and I don’t know if they’re going to be resolved anytime soon, right? The questions about what’s okay to have out on the web versus what isn’t, when I can use this and when I can’t, what the legal copyright implications are, and what my legal obligations are, are going to be unanswered for a while, while these technologies are out in the world.

CURT NICKISCH: But don’t wait for all that to be answered before you get started.

ETHAN MOLLICK: I mean, use it ethically and in compliance with the law, obviously. But I think that to not do experimentation because you’re waiting for clarity seems like a bad idea. I obviously would not be doing this in cases where danger is involved. You should not be using this as a doctor to write notes to patients; that would be unethical and wrong. I think you shouldn’t be doing this where there’s federal compliance that you’d be violating, but I think that to wait to start experimenting is a mistake.

CURT NICKISCH: Ethan, this has been wonderful. Thank you so much.

ETHAN MOLLICK: Thank you for having me.

CURT NICKISCH: That’s Ethan Mollick, associate professor of management at the Wharton School of the University of Pennsylvania, and he wrote the HBR article “ChatGPT Is a Tipping Point for AI.”

If you liked today’s episode, we have more podcasts to help you manage yourself, your team, and your organization. Find them by searching HBR in Apple Podcasts, Spotify, or wherever you listen.

This episode was produced by Mary Dooe. We get technical help from Rob Eckhardt. Hannah Bates is our audio production assistant, and Ian Fox is our audio product manager. Thanks for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Curt Nickisch.
