The Future of Work Transcript: Tomorrow’s AI-Augmented Workforce


[MUSIC PLAYING]

Michael Wu: See, one of the challenges I hear all the time from business leaders is, well, can I use it to do this? Can I use it to do that? There are lots of things where they don't know the limits and possibilities of this new technology because, simply, they just don't know how it works. But if you actually understand a little bit of how it works, then you would know whether someone is just trying to throw words at you to sell you an idea, or whether it really has any meat in it.

Jill Finlayson: Welcome to the Future of Work podcast with Berkeley Extension and the EDGE in Tech Initiative at the University of California, focused on expanding diversity and gender equity in tech. EDGE in Tech is part of CITRIS, the Center for Information Technology Research in the Interest of Society and the Banatao Institute. UC Berkeley Extension is the continuing education arm of the University of California at Berkeley.

Job listings mentioning AI have doubled on LinkedIn, and demand for AI skills is appearing across a wide range of industries and sectors. Job seekers are jumping on the bandwagon and adding AI skills to their resumes, but what does this mean? What are employers looking for? Where can you learn these skills? And how do you talk about and use AI in meaningful ways?

To answer these and other burning questions about artificial intelligence, machine learning, and the future of work, we turn to Dr. Wu, a leading authority on artificial intelligence and behavioral economics. He's currently the chief AI strategist at PROS, an AI-powered solutions provider, and was recently appointed senior data science research fellow at the École des Ponts Business School.

A popular advisor and lecturer for UC Berkeley Extension's AI programs, Michael has a triple undergraduate major in applied math, physics, and molecular and cell biology, and his PhD is from UC Berkeley's Biophysics Program. And given the interest in the popular Oppenheimer movie that was filmed, in part, on the Berkeley campus, an interesting tidbit about Michael is that he was also a Department of Energy fellow at the Los Alamos National Lab, where Oppenheimer developed the atomic bomb. Welcome, Michael.

Michael Wu: Well, thank you for the intro, Jill. It's my pleasure to be here.

Jill Finlayson: I want to start out with the big question everybody's asking: how will AI impact the future of work?

Michael Wu: Well, that is quite a big question, and I think there's no easy way to answer it. But if you look at the history of how technology has evolved, I think we can get a good sense of what the future will look like with this new technology around AI. And the thing to recognize is that in the future, everything that we do will have some form of AI in it. Everybody will be using some kind of AI to do their work.

If you look at computers, like, 50 years ago, it used to be only programmers or IT specialists who used them. But today, everybody uses computers in everything they do. It doesn't matter what your role is, whether you're in marketing or customer service or administration, you will be using a computer. Exactly the same thing is probably going to happen with AI in the near future. That will be the trend. I don't know when it will happen, but it will probably come sooner than we expect.

Jill Finlayson: I think it's really important we talk about the long history that AI has had, that this isn't something completely new. And yet, it feels new to a lot of people. Why do you think that is?

Michael Wu: Yeah, I think it feels new because there is, recently, a new kind of AI called generative AI, and that has basically lowered the barrier to entry to this field a lot. As a technology matures, it's always the case that it requires less and less technical skill to use it, right? I mean, before, like I said, you needed to be a highly specialized person to use a computer. But now, everybody can use one. And in fact, everybody carries a really, really powerful computer in their pocket right now. It's called the smartphone, right?

Similarly, I think that with the advent of ChatGPT, this technology has become accessible to everyone. Basically, you just have to be able to speak some language. It doesn't even have to be English, because ChatGPT can actually understand multiple languages as well, right? So you just have to be able to describe what you want, and then you can use it. It will actually help you write an essay, write an email, respond to someone, or even generate images, and so on. So it's very accessible in that way.

Jill Finlayson: So it's really having this democratizing effect, like the example you gave: now most people have access to a computer, and many people have computers in their home. So, AI will be everywhere. But what if you don't have coding skills, or you don't know how to use it? What does that mean for you?

Michael Wu: Today, anyone can use a computer. You have graphical user interfaces, which were developed so people can just drag and drop things. With a few clicks, you can execute certain commands, and that's it. And I think similarly for AI: right now, it may require you to have a computer science or data science degree, or a background in statistics or machine learning. But in the future, it won't.

In fact, today, with ChatGPT, you can use English or whatever language you prefer to describe what you want, and it will actually write the code to do what you wanted to do. It doesn't do that perfectly yet, so it's not the case that you can just use ChatGPT and replace data scientists today. It's certainly not there. You still need data scientists to look at the code, understand it, and make sure that the code generated by ChatGPT is actually doing what they want it to do.

Jill Finlayson: Yeah, I'm old enough that I remember when you had to actually tell the computer to start bolding and stop bolding, before WYSIWYG, What You See Is What You Get. So you're right, the computer is making it easier for people to design things, to write things. So, with ChatGPT, is it important to know what artificial intelligence is? To use a computer, you don't have to know about 1s and 0s or how to code, to your point, today. But is it important to understand what artificial intelligence is, and how that is different from generative AI?

Michael Wu: Even though people may not fully understand all the 1s and 0s, the Boolean logic behind the foundation of computation, it certainly helps, right? And I think for AI, especially right now, because it's so new, it's very helpful if you understand a little more about how AI actually works and what it's actually doing underneath, so you know what's possible and what the limitations of this technology are, right?

I think one of the challenges I hear all the time from business leaders is, well, can I use it to do this? Can I use it to do that, right? There are lots of things where they don't know the limits and possibilities of this new technology because, simply, they just don't know how it works, right?

But if you actually understand a little bit of how it works, then you would know that, OK, this is probably not possible; this is probably OK; this is probably a stretch. So you have a much better idea of whether someone is just trying to throw words at you to sell you an idea, or whether it really has any meat in it.

Jill Finlayson: All right, so, to level set so we know what it can do: can you define artificial intelligence?

Michael Wu: Yeah, sure. The simplest way I like to define AI is that it really is a machine mimicry of human behavior with two very important characteristics. The first characteristic is that it's able to automate human decisions or actions. Whatever humans do to make decisions, or whatever actions they take, a machine is able to automate those, OK?

Now, the second criterion is, basically, it has to have the ability to learn. And what that means is that it has to be able to improve its performance. Whatever it's doing, whatever decisions it's making, it has to be able to make better decisions as people use it more and more, OK? So, that's the learning component, right?

So, basically, it really just requires two very simple things, right? One is to automate decisions or actions, and two is to learn. And that's what gives these AI the ability to adapt, because that's how you actually adapt to the world: through learning, from data, about what's different and what's changed in the world.

Jill Finlayson: So, can I ask a clarifying question, which is, if the computer can automate human decisions or actions, what are the inputs it needs in order to do that?

Michael Wu: Well, humans also require some inputs, too, right? When we make a decision, we don't just make it out of a vacuum. If we make a decision to bring an umbrella, it's probably because we saw that it's raining outside. So, we have some visual sensory input, and then we make some decision about what our actions will be. Machines need to have that input as well, right?

So, whatever humans need to make the decisions they make, machines are not able to make those decisions without those inputs. You could provide the input as images, pictures of the rain outside, or as a formal weather forecast that says it's raining today, so that "is raining" equals true, you know? That's the type of data, right?

So I think anything that human beings now use to make decisions, you could actually feed in a digitized version of those data. In fact, I always say that a lot of those data are probably already digitized. Then you just have to feed those data into your machine learning algorithm, and the AI will be able to mimic how humans make those decisions and automate them.
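
To make that concrete, here is a minimal sketch of digitizing the umbrella decision above; the feature encoding and the stand-in decision rule are invented purely for illustration.

```python
# Hypothetical sensory inputs, digitized into numbers an algorithm can use.
weather = {"is_raining": True, "chance_of_rain": 0.85, "temp_celsius": 14}

features = [
    1.0 if weather["is_raining"] else 0.0,  # the visual cue, encoded as a flag
    weather["chance_of_rain"],              # the forecast, already a number
    weather["temp_celsius"] / 40.0,         # temperature, scaled to a comparable range
]

# A stand-in for the decision rule a machine learning model would learn from data.
take_umbrella = features[0] > 0.5 or features[1] > 0.6
print(features, take_umbrella)
```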

Jill Finlayson: So it is getting data and making decisions based on that, but how is it learning?

Michael Wu: Yeah, the learning actually comes from the feedback, OK? The feedback is essentially the outcome of those decisions. A classic example is building an AI that tells you whether an image is a cat or a dog, right? If you show it an image of a cat, and the AI says it's a dog, then you know that's an incorrect answer, right? And that feedback basically says, this is actually a wrong answer.

So, what we call error backpropagation will basically go and change the model, the machine learning model that's inside the AI, changing the parameters of the model such that next time, when you show it pictures of cats, it will properly say cat, and pictures of dogs, it will properly say dog. So it learns through the outcome. It's actually very similar to how humans learn as well.
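
To make the feedback loop concrete, here is a minimal sketch of a one-layer classifier adjusting its parameters from prediction error, the single-layer case of the gradient-based updates he describes; the "image features" and labels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # 100 images, 4 made-up features each
true_w = np.array([1.5, -2.0, 0.5, 1.0])
y = (X @ true_w > 0).astype(float)       # 1 = cat, 0 = dog (synthetic labels)

w = np.zeros(4)                          # the model starts out knowing nothing
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))       # predicted probability of "cat"
    grad = X.T @ (p - y) / len(y)        # feedback: prediction minus true label
    w -= 0.5 * grad                      # adjust parameters to reduce future error

p = 1 / (1 + np.exp(-(X @ w)))
print("accuracy after learning from feedback:", ((p > 0.5) == (y > 0.5)).mean())
```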

Jill Finlayson: So, is there a human in the loop doing this training, or is it training itself?

Michael Wu: OK, so, that depends on the type of machine learning paradigm that you use. With the supervised learning paradigm, you're just creating and preparing the data, then you give it to the machine, and the machine just learns it. There's no human in the loop there. But you could certainly have other learning paradigms.

For example, today there are lots of reinforcement learning algorithms that use a human in the loop, which essentially allows the machine to get its feedback from a human. Instead of getting the feedback from the environment, which sometimes can be fuzzy, right, you have a human actually tell it, OK, this is the right answer, this is the wrong answer. You essentially have a human teach it what it needs to learn. You can have it both ways.

Jill Finlayson: So it's really not machine learning. It's human teaching.

Michael Wu: Yes, yes. In reality, I would say that everything is human teaching, right? Even in supervised learning, the data scientists have to prepare the data, the training data and the outcome they're trying to predict. You have to prepare those data, and those data come from some human labeling them, right? So, in fact, I would say that all forms of machine learning can probably be thought of as some form of humans teaching the machine.

Jill Finlayson: So if there's a bad outcome, there's really a person or people behind that outcome.

Michael Wu: Yeah, I would say that's probably true in most cases. There was probably some human who made some bad decisions. I think this is very much related to the bias question, right? Because humans are biased, and these AI learn from data that's generated by those humans, the AI learning from those data becomes biased that way. The AI is actually not inherently biased one way or the other. It's the data we use to train it that's biased. But where do those data come from? They come from other humans making biased decisions.

Jill Finlayson: So, building on this, you kind of talked about artificial intelligence. How does machine learning differ from that? And then where does generative AI come into play?

Michael Wu: Yeah, you can think of machine learning as essentially the engine behind most AI. Machine learning is actually used in every single AI system. Every single AI system has this kind of engine, like the engine of a car, right? If you think of the car as the AI, then machine learning is essentially that engine. And what that engine allows the AI to do is actually learn.

So, machine learning is crucial for AI because that's the part that allows AI to learn from data. Any time you feed it data, those data go through your machine learning process, and then the AI is able to automate decisions in a different way. Before, your AI may have made one particular decision whenever it saw a certain kind of input, right?

But with the feedback, which tells it whether that was a good idea or a bad idea, it may make a completely different decision the next time it sees the same input.

Jill Finlayson: So, generative AI, how is this similar or different? What is generative AI doing and predicting?

Michael Wu: Yeah, so generative AI, depending on which flavor you're using-- the one most people are familiar with is ChatGPT, right? Really, what it's trying to do is simply predict the next word, and it does that over and over again. The prompt may include something, whether it's a question, an instruction, or just a fragment of something.

For example, if I say, "the sky is," what would be the next word that you would predict? Probably, you would say "blue," right? That is the most obvious, the most likely word to come after "the sky is." So it's predicting the next word, but then once you have that next word, it goes back into the algorithm again, and it's going to predict the next word, right?

So basically, it's going to generate one word after another. It creates sentences, longer and longer sentences, sentence after sentence. They become paragraphs, many, many paragraphs, and become essays, and so on.
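
Here is a minimal sketch of that generate-append-repeat loop; the tiny bigram table is invented purely for illustration, whereas a real model scores the entire preceding context, not just the last word.

```python
# Toy "next word" probabilities, keyed by the previous word only.
next_word = {
    "the":  {"sky": 0.6, "sun": 0.4},
    "sky":  {"is": 0.9, "was": 0.1},
    "is":   {"blue": 0.7, "clear": 0.3},
    "blue": {"today": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    words = prompt.split()
    for _ in range(max_words):
        candidates = next_word.get(words[-1])
        if not candidates:
            break                                           # no known continuation
        words.append(max(candidates, key=candidates.get))   # greedy: most likely word
    return " ".join(words)

print(generate("the sky"))  # -> "the sky is blue today"
```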

Jill Finlayson: So, no matter how long the answer is, it has gone through this cycle of just predicting the next word. So if I say, here's my resume, here's the job description, write a cover letter for me, it has learned from millions and millions of cover letters. But it's looking at my data, and it's trying to predict each word, one at a time?

Michael Wu: Exactly. That's actually what it's doing. It's surprising how well it's able to come up with a cover letter simply by doing that, right? I mean, think about what humans do when they write a letter anyway. In some ways, we have something called planning, which these generative AI currently don't have. So, we may plan: OK, here's what I want to write. I want an intro, a little bit about my background, my strengths, and my experience, and then some conclusion, right?

So, you may plan that. But when you actually go and write each sentence, you are pretty much doing the same thing, right? You're coming up with one word after another. You may change your words, go back, and see how it sounds. But this generative AI is so good at predicting the next most likely word that it really does just do one word at a time, generating longer and longer sentences, one sentence after another, one word at a time.

Jill Finlayson: Are we really that predictable? Like, aren't we more creative than that?

Michael Wu: I think we are much more predictable than you think. I was in the social media space before, and it's interesting. Back then, when I was studying consumer behavior on social media platforms, people were saying, oh, human behavior, consumer behavior, is unpredictable. But I saw so much data that said otherwise. Humans are actually very, very predictable. In fact, we are so predictable that there are now books written about how we are actually predictably irrational, to some extent.

So we are actually quite predictable. When you look at humans at the aggregate level, at the group level, our behaviors are very, very predictable. If you look at individual behavior, there may be some part that's not predictable, but the majority of it is actually quite predictable. And when you aggregate across a population, those individual variations average out. What remains is the collective group behavior, and that becomes very, very predictable.

Jill Finlayson: Is AI going to drive us toward mediocrity and averageness by kind of bringing everybody into this narrow norm?

Michael Wu: Well, I don't think so. I think it will actually probably unleash some new forms of creativity as well. For example, let's switch gears a little bit and talk about other generative AI, such as Midjourney or DALL-E. These are generative AI that generate images.

You can describe what you want, and it'll actually generate an image for you, right? You can describe the style, the way you want it to look, the kind of lighting. You can describe it in every detail that you want. And the thing is, it can play almost a PhD advisor type of role next to you. In order to create something truly novel, how do you know that it's actually novel? Before, you may not have known, right?

It's like being a PhD student: I say, oh, I want to do my thesis on this, and my advisor says, no, no, that was actually done 10 years ago by so-and-so. Then, how about this idea? And the advisor says, well, that was actually done five years ago by another colleague of mine. Having someone who actually knows everything that's out there is really helpful for people who are looking to create something new, right?

Then you actually know what has been done before, because you can see: has this been done? What does this look like? Because now, with this multimodal generative AI, you can go back and forth. You can go from text to image, but you can also have the AI look at an image and find something similar, or describe it. You can even ask it: is there anything that looks like this?

Jill Finlayson: So you can use this as a brainstorming partner, as an exploration tool, looking for different ideas and perspectives that you wouldn't necessarily have come up with on your own?

Michael Wu: Yeah, exactly. I think that is actually where the biggest value of generative AI is. The creation process used to require a lot of time, certainly, time and effort, to come up with a painting or something, right? But now, you can do that very quickly. You can go through hundreds or maybe thousands of different designs.

If it's, say, a 2D graphic design for a poster, you can run through many, many ideas and see that maybe there are parts you like from one and other parts you like from another. You can actually ask it to combine those, right? So it's amazing in that way: it can do that for you without taking much effort or much time.

Jill Finlayson: Yeah, I think along with this creativity comes a question mark, though. Generative AI can make stuff up. It can hallucinate, and that could lead to fake news. That could lead to false information. Why does generative AI hallucinate? And how do we deal with that?

Michael Wu: Yeah, so generative AI is predicting the next word, and the way it predicts the next word is actually by sampling from a probability distribution over the likely words, right? Because you're sampling, you're essentially drawing a random number from a distribution. There's always a distribution: some words are more likely than others, some words are less likely.

But two words in this distribution may be pretty close in terms of probability, and sometimes you may pick one or the other, because you're sampling, right? So there's always a little bit of that sampling randomness. And it will hallucinate when the empirical probabilities of these words are very, very low. That means there's not a lot of prior data, so it's probably not going to predict the next word very well, right? But it's still going to sample the next word from some distribution.

And in that distribution, even the most likely word may have a really low probability anyway.
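
To make the sampling idea concrete, here is a minimal sketch of drawing the next word from a categorical distribution, with a temperature knob controlling how much randomness creeps in; the words and raw scores are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["blue", "clear", "falling", "green"]
scores = np.array([3.0, 2.5, 0.2, 0.1])     # the model's raw preference for each word

def sample_next_word(temperature: float) -> str:
    p = np.exp(scores / temperature)
    p /= p.sum()                            # softmax -> a probability distribution
    return words[rng.choice(len(words), p=p)]

print([sample_next_word(0.5) for _ in range(5)])  # low temperature: mostly "blue"
print([sample_next_word(2.0) for _ in range(5)])  # high temperature: more surprises
```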

Now, I would say hallucination could be a feature or a bug, depending on how you look at it. In a lot of these creative domains, if you're trying to generate art or music or stories or something like that, you want to create something new that never existed before, right? That's something you want. You don't want it to create something that already exists. So it could be a feature if you look at it that way. But very often, we want these AI to give us a definite answer, and these are what I call fact-based applications. There, hallucination would be a bug. The way you deal with hallucination in this type of fact-based application is to use something called retrieval augmented generation, a very robust mechanism for what we call grounding.

Grounding is any process you use to essentially ground the generative AI so that it limits hallucination. Nothing will guarantee eliminating hallucination 100%, because if you did that, you wouldn't have generative AI anymore. It couldn't generate anything new that never existed before, and you might as well just go back to your old search engine, right? And you have to have some knowledge to be able to tell what's not real.

And I think this is precisely why I say that you can tell generative AI to write code for you, but you still need the data science background to be able to read that code and make sure it's actually doing what you asked it to do. It will generate some code, and it will probably compile, with no syntax errors whatsoever. But how do you make sure it's actually doing what you asked? For that, you still need the data science skills, right? Essentially, humans need to be the guardrail against this type of hallucination.

I wanted to go back to retrieval augmented generation, because I went off on a tangent a little bit. In this RAG, Retrieval Augmented Generation, framework, basically, what you are doing is using information retrieval, such as a search engine, to augment your generation. Say you have a question. You ask ChatGPT something like, OK, what's the nearest star that's not our sun?

If you ask ChatGPT something like that, it may know the answer, right? But if you ask something it actually doesn't know the answer to, it will hallucinate. With the RAG framework, what you do instead is give the question to the search engine first. OK? So: what's the closest star to us that's not the sun?

Basically, this becomes a search query that goes out on the internet, and you retrieve documents that may contain the answer to the question. Then you change the prompt to your ChatGPT, right? The prompt used to be: what's the nearest star that's not the sun? That's your original prompt.

But now your prompt says: based on these documents that I found, which could be pages and pages that people have written about this, answer the question. You ask ChatGPT to read all those documents and then answer the question based on what it read. So you're grounding the generative AI so it doesn't create things out of fabrication. You're not asking it to answer the question from nothing. You're asking it to answer based on these facts, these documents you found with the search engine.
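
Here is a minimal sketch of that retrieve-then-rewrite-the-prompt flow; `search` and `llm` are hypothetical stand-ins with canned responses, not a real library's API.

```python
def search(query: str) -> list[str]:
    # Hypothetical stand-in for a real search engine or document-store call.
    return ["Proxima Centauri, about 4.24 light-years away, is the nearest "
            "star to the Sun."]

def llm(prompt: str) -> str:
    # Hypothetical stand-in for a real large language model API call.
    return "Based on the documents provided: Proxima Centauri."

def answer_with_rag(question: str) -> str:
    snippets = search(question)                  # 1. retrieve supporting documents
    context = "\n\n".join(snippets)
    grounded_prompt = (                          # 2. rewrite the prompt around them
        "Answer the question using ONLY the documents below. "
        "If the answer is not in them, say you don't know.\n\n"
        f"Documents:\n{context}\n\nQuestion: {question}"
    )
    return llm(grounded_prompt)                  # 3. generate from facts, not memory

print(answer_with_rag("What is the nearest star that's not our sun?"))
```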

Jill Finlayson: This sounds like a really important guardrail. So you're able to tell it specifically: I don't want you generating new ideas. What I want you to do is answer based on facts from this knowledge base.

Michael Wu: That's right. That's right. In fact, if you go to Microsoft Bing search or even Google search now, most of these search engines have incorporated generative AI, and that's actually what they are doing. They're not passing your query, your question, directly to the generative AI. They're passing your query to the plain old search engine, which searches the web as it normally does, but then they give the results found on the web, along with your question, to ChatGPT, and ask ChatGPT to answer the question based on the documents that were found. Right?

Now, that's a much, much more grounded prompt, right? You're not asking ChatGPT to answer something it doesn't know, because it has the documents that were found as part of the prompt you sent to the large language model.

Jill Finlayson: I think knowledge management is a big issue for a lot of companies. They have a lot of information, but people can't always find the right document or get to the answer quickly. And the conversational aspect of ChatGPT, being able to ask, what was the answer to this, and have it search all those documents for you and come back with the answer, seems really valuable.

Michael Wu: Exactly. It makes not just information retrieval but, I would say, employee engagement much, much better. Because it's one thing to search and find a document, but then I still need to go through it. The document may contain a lot of things. There may be only one obscure paragraph, or maybe one sentence in an entire document that's 50 pages long, that answers my specific query. I'd still need to go through all of that to find the answer to my question.

But if you have ChatGPT, it's going to read all of that for you, and it's going to be able to answer your question by picking out the relevant information from the relevant paragraph, the relevant sentences within that document, and then summarizing it in a way that's really concise and really understandable. So that's huge, I would say, in terms of employee engagement.

Jill Finlayson: So before you start giving all your documents to ChatGPT, what is GPT-3.5? What is GPT-4? What is the enterprise version? What are all the differences? And what should we be using for what?

Michael Wu: I would say that there's always a danger if you use a free version, whether it's ChatGPT or some other large language model: you run the risk of data leakage. If you give your documents to ChatGPT, then, essentially, it's going to learn from those documents and try to encode them as part of its knowledge, right? Once it reads them, there's no guarantee it will ever forget them. It may forget, but there's no guarantee. It's like hallucination, right? It may not hallucinate, but there's no guarantee that prevents it 100%.

The way I think enterprises should use this is to essentially have an enterprise version of it in a secure environment. For example, in my company, we access OpenAI's ChatGPT through an Azure subscription. We have a cloud subscription that is a secure environment.

So, basically, they are running this ChatGPT inside our secure environment. We basically have our own private instance of ChatGPT, right? Whatever we give to this ChatGPT, it may remember, but it never goes outside the Azure subscription we have inside the firewall, provisioned by Azure.
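
For readers who want to see the shape of this, here is a minimal sketch of calling a privately provisioned model through the `openai` Python package's Azure client (v1+); the endpoint, key variable, deployment name, and API version below are placeholders and assumptions, not real values.

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-company.openai.azure.com",  # your private instance
    api_key=os.environ["AZURE_OPENAI_API_KEY"],              # never hard-code keys
    api_version="2024-02-01",                                # assumed API version
)

# Requests stay inside the enterprise subscription rather than a public service.
response = client.chat.completions.create(
    model="your-gpt4-deployment",  # the deployment name you provisioned in Azure
    messages=[{"role": "user", "content": "Summarize this internal document: ..."}],
)
print(response.choices[0].message.content)
```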

Jill Finlayson: And the difference between the public version and the paid version, is it worth it? Should we be spending $20 a month? Why?

Michael Wu: Well, that depends on what you're going to use it for. Certainly, the quality is going to be different. And I can guarantee you, if you pay the $20, you're going to get answers that are a lot better. It will follow your prompt much more specifically, right?

In the free version, it will give you something decent. But if you refine your prompt and say, OK, I want this and this, sometimes it may not do all those things. And in some cases, if you go through several iterations, it starts to forget what you told it to do at the beginning.

If you go through 20 iterations, it starts to forget, because you only have a limited what we call context length. Think of it like working memory, right? With the paid version, that's really long, like 128,000 tokens, versus 8,000, which is very, very limited in that sense.
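
To see how quickly a prompt eats into that working memory, here is a minimal sketch using the tiktoken library to count tokens; the prompt text is a placeholder, and the 8,000 and 128,000 limits simply echo the figures mentioned above.

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")   # tokenizer matching the model family
prompt = "You are a helpful assistant. Remember: always answer in French. ..."
n_tokens = len(enc.encode(prompt))

for name, limit in [("8k context", 8_000), ("128k context", 128_000)]:
    print(f"{name}: prompt uses {n_tokens} tokens, {limit - n_tokens} remain")
```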

Jill Finlayson: And which of the generative GPTs is most current? You hear that they're very out of date. Like, they don't have what happened today in the news, for example. So what's the difference between the different versions, and which gives you the most current info?

Michael Wu: Well, the thing is that, remember, you don't need ChatGPT itself to be current to get current information. You can use that Retrieval Augmented Generation mechanism. With the paid version, I believe the most recent training data stops in April of 2023. So if you ask ChatGPT what happened this morning, it's not going to know, right? Even a week ago, it's not going to know. It's going to say, my training stops in April 2023, and I don't know anything after that.

But you don't have to rely completely on ChatGPT for that. You can use Retrieval Augmented Generation, right? You can ask what happened last night in San Francisco, something like that, and go search the web to see what you find: what happened last night in this stadium, in this place, who's singing in this concert. You can find that information.

Then you give that information to ChatGPT and ask it to summarize it for you. In essence, as an end user, you probably can't tell the difference, right? Because it's ChatGPT writing in the way you like to see, in this conversational interface.

On the backend, it's not asking ChatGPT what happened yesterday. It's searching the web, and then, based on what it finds, it gives everything to ChatGPT and asks ChatGPT to summarize what happened yesterday. So now ChatGPT becomes a summarization engine rather than just a question-answering engine, which is a lot easier for it to do.

Jill Finlayson: This comes back to your point about knowing what the tool is used for, and that it is not a Google search engine. It is something quite different, and it's important to know the difference between the two.

Michael Wu: Exactly, yeah. And the thing about these technologies today is that you can actually combine them, right? A lot of times, when we see some generative AI that seems able to answer a question about something much more recent, well, it's not actually doing the answering.

It's the search engine that's retrieving the information, and ChatGPT is just summarizing what was found. So you have to think about it from that perspective: it's not really the large language model doing the answering, right? It's simply summarizing what the search engine finds, and the search engine can find anything that exists up to now.

Jill Finlayson: Exactly, and you mentioned the enterprise version. Why do you think organizations, companies, institutions need to adopt AI now?

Michael Wu: Well, I think it's mainly because there's a huge first mover advantage. Companies that adopt AI are going to do everything they do much, much more effectively and much, much more efficiently. And with the cost of labor rising, each employee is going to be much, much more productive, able to do a lot more in a lot less time with a lot less effort.

And if you look at these productivity and efficiency gains over a long period of time, it's huge. It's actually huge. It's like Elon Musk's point: if you work just a couple of hours more than another company, and you accumulate that over years and years, you actually do a lot more. You magnify those little differences, even small productivity gains, over time.

So, there's a huge first mover advantage, and every company should adopt AI right now. They should use it for everything their employees are comfortable using it for, right? Certainly, there need to be guardrails, but they should definitely adopt it. Because if they don't, then after a while the technology becomes commoditized, meaning everybody has access to the same technology, and it levels the playing field, right?

I mean, again, if you look back in history 50 years ago, the first few companies that had computers were able to do a lot more. They were a lot more successful, right? But now everybody has computers, so it's a level playing field, and you have to do something more clever, use it more efficiently and more effectively, to win.

Same thing for AI, right? We are now at the stage where companies are figuring out how to apply AI in the enterprise environment, where there's a lot more compliance, where you have to use it responsibly and safely. They're all trying to figure that out, and those that figure it out first are going to gain a huge first mover advantage.

Jill Finlayson: In our previous podcasts, we've really encouraged people to go play with AI, play with ChatGPT. But that doesn't seem like enough. How else can we help people to use AI and use it productively?

Michael Wu: I think there are many things that companies can do, right? Obviously, the first is to encourage people to do it. But moreover, I would say you need to actually invest in it as well. For example, you should create incentives for people to do it, right? Because in companies, a lot of times, employees are quite reactive. It's not like an academic environment, where people are much more self-motivated to do the research and try things.

So, I would say, create goals. Maybe there should be a corporate goal to use AI in something you do, to make your work easier or to make other people's work easier, right? Everybody should have that goal as part of their objectives, and it should essentially be cascaded down from the CEO level, so everybody in the company has something.

And when they actually do use AI to make their work much more efficient and effective, you should reward them, right? Recognize them, reward them, promote them, because now they're able to do their work much, much more effectively.

The second thing is, obviously, to offer some education, right? A lot of times, people are reluctant to try because they just don't know what they're getting into. Sometimes the fear of the unknown is greater than any risk the new technology actually poses. If they know how it works, if they have a little bit of an idea of what it's actually doing, then it's a lot easier for people to get their hands on it, because they won't be so afraid of it.

Finally, the last one, I would say, is to make it easy for people to actually try. A lot of times, companies are risk averse, and because there is risk in using new technology, such as data leakage, people often have policies or rules for how to use the technology. And these sometimes become just overbearing. If you have 10 pages of rules about what you can and cannot use AI for, then no one's going to use it. AI is supposed to make their lives easier.

But by having all these rules and policies, you are actually making their lives harder. Having Copilot help engineers write code is great. But if you want me to document every line that's generated by Copilot, then why should I do it, right? I'm making my life harder and wasting more time documenting. It defeats the purpose of having people try this.

So, I would say have lightweight governance. You definitely need some governance, some kind of guardrail, to prevent really obvious mistakes, but keep it really lightweight. Make it easy for people to try, and actually make it safe for people to try. For example, if you don't want data leakage, going back to OpenAI, then invest in getting an enterprise version through, say, Microsoft Azure. Then you have a secure environment.

Then even if people accidentally leak some information, it's only leaked within that enterprise instance of ChatGPT, right? You could just restart that instance after a month or so, and no harm is done. No data is leaked outside the firewall of your enterprise.

Jill Finlayson: You mentioned education, so let's break this down a little bit. As a leader, I might be intimidated: oh my gosh, I've got to create governance. If I'm an individual contributor, I might wonder: how do I figure out what in my job I should be automating, and how I can integrate it? What should I look for if I want to take a course in AI? What types of things should I be trying to learn?

Michael Wu: Well, certainly, there is the literacy, right? And it depends on your role and your job. If you are an AI engineer or a data scientist, then obviously you can't get away from writing code. That requires some formal technical training in statistics, machine learning, and all that. There's no silver bullet for it.

But if you are, say, a leader or a manager, then what you need to do is equip yourself with the language that the data scientists speak, right? You need to understand what they mean.

When they say, oh, I'm training my model, and it's overfitting, and it's not converging, you have to know what that means, right? You have to know what challenges they are actually facing. Is this a really serious challenge, or is it something simple? You need to have some idea of what they're talking about. That's what I mean when I say you need to have that literacy.

And the main thing that literacy enables leaders and managers to do is collaborate with the technical data scientists and AI engineers much, much better, right? Because if two groups of people are speaking different languages, they're going to spend a lot of time figuring out what each other is saying, and not actually solving the problem and moving the agenda forward. So I think it depends on what you want to accomplish.

Jill Finlayson: Great, and any final words of advice for our audience who are going to be this AI augmented workforce of tomorrow? What should they be excited about and looking forward to?

Michael Wu: Yeah, I think one thing people can look forward to is that there will be no more mundane, repetitive jobs, which every one of us probably doesn't like to do anyway. One of the biggest advantages of AI is that it can help us automate a lot of those mundane, repetitive jobs, because those are exactly the things where you can get a lot of data and use those data to train the AI to mimic what you do. In that sense, those mundane, repetitive jobs can be automated by AI, and you never have to deal with them again.

So you can focus on solving problems that are new, that are interesting, that have never been solved before, right? That's much more motivating, much more exciting. It's what gets me excited as a scientist.

Jill Finlayson: So you're going to be teaching a course this summer. Can you tell me a little bit about what are you going to cover? And how is this going to help people enter this productive, new future?

Michael Wu: Yeah, sure. I will be teaching a course that's essentially AI for everybody. It's geared toward business leaders and businesspeople who don't have a technical background. You don't need a computer science or machine learning or statistics background at all, as long as you have grade school math. You can't get away without a little bit of math.

You'll learn how these AIs are built, starting from the foundation, which is the data that feeds the AI. We'll talk about how data is essentially the fuel for AI. We'll learn the foundations of big data and some of its foundational concepts, and how we use these data to build machine learning models, which are essentially the engine for AI, right? And then how these AI are actually used in business contexts.

And the challenges we are facing today: for example, how do you deal with bias? How do you deal with ethics and deepfakes? And with generative AI, there are copyright issues. How do you deal with those? So we will be going from the very basics, from data to machine learning to AI, to the current enterprise challenges of AI.

Jill Finlayson: Amazing. Thank you so much, Michael, for walking us through what is really happening behind the scenes with artificial intelligence and generative AI, and for giving us some examples of why we need to jump on this bandwagon, start learning, start embracing these technologies today, and use them to generate a better future. Thank you so much for joining us.

Michael Wu: It's my pleasure. Thank you.

Jill Finlayson: And with that, I hope you enjoyed this latest in a long series of podcasts that we'll be sending your way every month. Please share with friends and colleagues who may be interested in taking this Future of Work journey with us. And make sure to check out extension.berkeley.edu to find Michael's course and a variety of courses to help you thrive in this new working landscape.

And to see what's coming up in EDGE in Tech, go ahead and visit edge.berkeley.edu. Thanks so much for listening, and we'll be back next month with another look at the Future of Work. The Future of Work podcast is hosted by Jill Finlayson, produced by Sarah Benzuly, and edited by Matt Dipietro.

[MUSIC PLAYING]