December 16, 2022
Positively Dreadful
Reading the GPT Leaves

In This Episode

If you could go back in time and had the power to stop the social media revolution in its tracks, what would you do? Would you let things proceed exactly as they did? Or would you try to warn people? We're at a similar fork in the road right now with artificial intelligence, and the recent advent of OpenAI's ChatGPT. It's fun to play around with, but when you do, it becomes clear that the horizons have changed. And if AI like ChatGPT becomes as ubiquitous as smartphones and earbuds and the Internet of Things, it's going to change the world. A lot. How can we impose order or mitigate the risks around artificial intelligence? Could these early-stage innovations do more harm than good in the near term—that is, before the machines get smart enough to build weapons and enslave us? New York Times technology reporter and host of the Hard Fork podcast Kevin Roose joins Brian Beutler to talk about what upheavals AI might bring, including the amazing ones, the scary ones, and the robot apocalypse ones.

 

TRANSCRIPT

 

 

Brian Beutler: Hi, everyone. Welcome to Positively Dreadful with me, your host, Brian Beutler. So here's a question I'd like you all to ponder for about 5 seconds, wherever you happen to be listening. If you could go back in time to the days just before Friendster and then MySpace and then Facebook snaked their way through the world, and you had the power to do anything from stopping it in its tracks, so the social media revolution would never happen, to nothing at all, what would you do? Would you hit the kill switch and spare us all from the social web? Would you let things proceed exactly as they did? Would you try to warn people: look, this is going to start out really cool and chill, but soon it will consume way more of our lives and societies than is probably healthy; it'll be a breeding ground for hatreds and anxieties; we should probably get ahead of that now? I don't think of myself as any kind of Luddite, but I'm sure I would intervene somehow, maybe even by shutting the whole thing down. And I've been thinking about this a lot in the past couple of weeks, not just because of Elon Musk or every other bad thing that's come out of social media in the past several years, but because I think we may be at one of these fork-in-the-road moments right now. And I think there's hope that we can be a bit less naive about what risks we might be courting now than we were in the innocent days of the early web. So like a lot of people, I've been playing around with ChatGPT. ChatGPT refers to the Generative Pre-trained Transformer, but if you've heard about it, you probably know it as, like, the new AI chat bot. It's a tool built by the company OpenAI, which builds artificial intelligence, and ChatGPT is basically just as I described: a web-based chat window. But instead of talking to a person, you're talking to a machine. And the machine is built by humans and refined by humans. But most of the learning ChatGPT has done to inform its answers, to write them in proper English, and to create a remarkable degree of verisimilitude, all of that is automated. Basically, if it was on the web as of the year 2021, the chat bot knows about it and can draw on it when you ask it for help in some way. And I mean, it's honestly pretty fun to play around with, but when you do, it becomes clear that the horizons have shifted. If this kind of thing becomes ubiquitous, the way smartphones and earbuds and the Internet of Things are, it's going to change the world. A lot. By contrast to the dawn of social media, I don't think it's hard to imagine all kinds of plausible upheavals AI could cause: amazing ones, scary ones, everything in between. I'm not sure who the most prescient Cassandra was back when social media was new. I remember it mostly as a moment when we were all like, hey, this is pretty cool, we can keep up with folks, it's like a phone book with text and pictures and stuff. Whereas writers and philosophers and movie producers have been thinking about this AI watershed moment forever. But here's the thing: we're going to have to think all this stuff through now, and hard, and fast, because it's in the nature of the technology itself to spread and get better. And it's a species of technology that, quite frankly, scares some of the people who know the most about it. If you ever got stoned out of your mind in college and talked about the robot apocalypse, like, that is a scenario that the builders of this stuff can't rule out. At the same time, there are so many smaller dilemmas between here and there.
It's almost like you need a whole AI system to figure out what they are, prioritize them by urgency, and help you think through how we collectively could impose order on it or mitigate the risks around it. Whether these early-stage AI innovations generally help people rather than hurt them is an important question well before the machines get smart enough to build weapons and enslave us. So this is going to be in some ways a relatively rudimentary conversation, because I don't really know what I'm talking about, but also a bit more meta than usual, because I don't think a podcast, or at least this podcast, is a great place to understand what AI is and how it works. But it could be a good place to start thinking about how to think about AI before AI starts thinking about us. Kevin Roose is an author and technology columnist for The New York Times. And I happen to know he's thought a lot about these very questions, because I cribbed some of them from his podcast Hard Fork, and he's here this week with the answers. Kevin, thanks for coming on the show.

 

Kevin Roose: Thanks so much for having me. 

 

Brian Beutler: I guess the first question I want to ask you is the most general, which is: why might we want advanced AI in our lives? And by we I mean society, and not, like, me, a guy who could really use a robot maid.

 

Kevin Roose: [laughs] Well, I think humans have always found uses for new technology. I mean, there's a lot of things that we spend time doing today that we don't particularly like doing. One example from my own life: I'm sure you remember, years ago in journalism, if you wanted to transcribe an interview, you had to sit there and type it out by hand while listening to the audio. And that was, you know, sort of menial work, and nobody really likes doing it. And so now you just throw the WAV file into an AI transcription program and it gives it right back to you. So that's a kind of time saving device that I find very useful and satisfying, and you can imagine lots more of those. I mean, what if an AI could draft the, you know, the first version of a podcast intro or of a newspaper column or of a marketing email? We can see many, many uses for these things.
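[Editor's note: a minimal sketch of the kind of AI transcription Roose describes, using the open-source Whisper speech-to-text model as an illustrative choice; the audio file name is hypothetical.]

    # pip install openai-whisper
    import whisper

    # Load a small pretrained speech-to-text model and transcribe an audio file.
    model = whisper.load_model("base")
    result = model.transcribe("interview.wav")  # hypothetical interview audio
    print(result["text"])  # the transcript as plain text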

 

Brian Beutler: I always end up, when I think about what we could get out of AI socially, more than just what's of individual use to a particular person who doesn't like their chores or whatever, at stuff like what you're talking about, where I feel like, in almost every case, you can imagine the innovation without the artificial intelligence, right? Like you don't need AI to do transcription per se. Like you don't need machine learning. Do you?

 

Kevin Roose: Sure you do. 

 

Brian Beutler: I mean— 

 

Kevin Roose: That's how these programs work. They use, you know, speech-to-text machine learning models to make it so that your transcriptions actually match what the person said.

 

Brian Beutler: Okay. Maybe my naivete is showing here, but I guess what I mean is that the technology you need to get words in audio converted into written words doesn't require the sort of, it can just be a program that stays a program, and doesn't need to get better and smarter and start thinking, like, doesn't need humans constantly feeding back into it to make sure that it's not getting out of its box. Do you know what I mean?

 

Kevin Roose: You’re saying we should just have AI that is sort of mediocre and that it should stay mediocre? 

 

Brian Beutler: [laughs] No, I don't know. I guess what I'm saying is that a lot of the innovations, or a lot of the benefits, that I think people see in AI are things that we could figure out how to get to without opening the Pandora's box that I think I want to talk to you about. Like, on your show, for instance, you guys talked about mental health, and basically AI could be like having a life coach and a therapist at the ready at all times. And if you had that, it would help any one person or large groups of people make better decisions, grapple with crises as they arise. And that obviously sounds pretty good, like, better than the status quo. But why would a tech utopian say that working toward that goal is better than just, as a political project, trying to build an abundance of human-to-human coaching and therapy? And I think you could apply a similar analysis to, like, transcription. If you want transcription, you can have transcription without having all, like, that, the whole—

 

Kevin Roose: Right, like, why do we need to build the, you know, Skynet in order to have these sorts of narrowly useful apps.

 

Brian Beutler: Yes. There we go. Thank you. [laughter]

 

Kevin Roose: So what I think a tech utopian or an AI evangelist would say to that is, well, this is a debate that happens in the AI community all the time: whether we should be trying to build what AI researchers refer to as AGI, or artificial general intelligence, or whether we should be focused on what they call narrow AI, which is AI that can do one thing or a small set of things very competently. But, you know, you can't tell your dishwasher to wash your clothes. Maybe you shouldn't be able to tell your, you know, therapy chat bot to write your essay. Maybe these things should be contained. And I think the response that I've heard from a lot of AI researchers is that a lot of these so-called narrow AI tasks actually require an AI that has kind of a model of all human language and behavior [laughs] in order to work well. I mean, just take your example of a therapy bot, which I think is something we can both agree would be very useful for a lot of people, if AI could, you know, talk to you and help you through your problems and have a working model of your psyche and the issues that you may have. That's actually not an easy problem. It requires, first of all, solving language, and language is what these models are working toward. That turns out to be quite a hard problem. And once you've solved it, or at least produced an AI model that does understand language very well, that has just a tremendous number of applications. That's really what we're talking about here: an AI that has been trained on, you know, millions, billions of examples of human communication, and has come up with a very sophisticated production engine for human-like language. These are not self-driving cars; there are various other types of problems that these large language models don't solve. But what they do is communicate across a range of different styles in various ways that could be helpful to us.
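[Editor's note: the "production engine for human-like language" Roose describes is next-token prediction. A minimal sketch, using the small open-source GPT-2 model via Hugging Face's transformers library as a stand-in for the far larger models behind ChatGPT:]

    # pip install transformers torch
    from transformers import pipeline

    # A pretrained language model generates text by repeatedly predicting
    # the most plausible next token given everything written so far.
    generator = pipeline("text-generation", model="gpt2")
    out = generator("A good therapist listens carefully and", max_new_tokens=30)
    print(out[0]["generated_text"])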

 

Brian Beutler: So to make sure I'm understanding right: the one side of the argument is, we should focus on specific concrete tasks that would just make the experience of human life easier, better, or whatever. The other is, you can't really do that well unless you build the big scary Skynet thing. Right. I mean, do you have a sense, just as a reporter, of who has the better of that argument? Because that seems critical to understanding whether we should view this as something that will just improve individual aspects of life that are complex, or something that, you know, is going to open a Pandora's box that could go very badly. [laughs]

 

Kevin Roose: Well, so I think there are a couple of things here. I think one is just we can observe the performance of the narrow approach versus the generalist approach. 

 

Brian Beutler: Mm hmm. 

 

Kevin Roose: So, you know, I'm sure you've had the experience of, like, contacting a customer service chat bot in the past ten years, on, like, you know, the Delta website or whatever. It's like a pretty bad experience, right?

 

Brian Beutler: Yeah. 

 

Kevin Roose: That is a narrow AI application, and it's not that good, and it pretty quickly gets to the end of its capacity. It says, you know, I can't answer that question, I'm going to transfer you to a human, or whatever. So that's the narrow approach, and we've seen where that ends. But now all of these customer service interactions are going to be redesigned using these large language models like ChatGPT, and that will be much better, and you'll be able to solve a much wider range of customers' problems. So it just is true empirically that the larger and more general the language model, the better it is at handling queries and solving problems. That's just sort of what the research shows us. Now, the flip side of that is: are the risks of developing this kind of general AI so great that they might overpower, in our minds, the added benefits of this functionality? And that's where I think you're making subjective judgment calls.
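[Editor's note: a sketch of what rebuilding a customer service bot on a large language model might look like, using OpenAI's Python client as it existed when this episode aired; the model name, prompt, and airline scenario are illustrative, not any company's actual system.]

    # pip install openai  (the pre-2023 0.x SDK is assumed here)
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Instead of a hand-built decision tree, hand the customer's question,
    # plus a role description, to a general-purpose language model.
    prompt = (
        "You are a polite airline support agent.\n"
        "Customer: My flight was canceled. How do I rebook?\n"
        "Agent:"
    )
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=120
    )
    print(response.choices[0].text.strip())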

 

Brian Beutler: Right. Okay. So I think I understand, broadly speaking, what people worry about when they worry about the risks that you might uncork by just unleashing that kind of technology. What do the evangelists for the large language models and, you know, an AI future say about what they think the world will look like if we allow this to just kind of run loose in society?

 

Kevin Roose: Well, they argue about it constantly. I mean, it's a big topic; there are entire message boards with, you know, thousands of posts about various risks of AGI and what might happen. And there are camps, right? There's a camp that is very concerned about the risk of rogue AI that would get so smart and so capable that it could turn on us and end civilization. I mean, there are serious philosophers and AI researchers who believe that that's a possibility. There are also people who believe that that's totally overblown, and that we are nowhere near an AI that will approach human sentience and that will be able to, for example, you know, build autonomous weapons and deploy them on its own. And then there's a lot of people in the middle who sort of worry about some pieces of this and think that we ought to design safeguards and put guardrails around these systems so that they can't or won't be used in ways that are harmful to society. But that's a very active, very ongoing discussion in the AI community. And I would say you could ask 100 AI researchers this question and you'd get a hundred different answers.

 

Brian Beutler: Do any of them sort of take seriously the, like, [?] utopian vision, where it's like, look, we won't have to work anymore, all of our productive capacity will be handled by the machines, and humans will be able to flourish? Or does that kind of utopianism get sort of laughed out of these discussions?

 

Kevin Roose: No, it's definitely a view that's out there. There's a fun book that was published a few years ago called Fully Automated Luxury Communism. [laughter] And it was sort of this argument for this kind of utopian vision where the robots and the AIs are just taking care of all of our material needs, so we don't have to work anymore, everything is abundant and free or close to free, and we can just sit around and play video games all day or make art or hang out with our friends or do whatever we want to do. And I think that's sort of the vision that a lot of these researchers have in mind. Now, they don't all think it's going to be a seamless transition to that. They're not naive. No technological transformation has ever been seamless since the Industrial Revolution. And so I think there are a lot of people who sort of, you know, want to get through that period in as cautious a way as possible, but who do think that at the end of it, we will arrive at this techno-utopia.

 

Brian Beutler: Okay. So, A, the fact that there are people who assume, and I'm not saying I definitely disagree, but I think it's worth questioning, that a world where nobody had to work, nobody had to till soil to get food, where machines took care of all the productive stuff and we just got to be creative and socialize and so on, that that world would be good. Like, I'm not certain it would be. It sounds fun, because working sometimes sucks, but there are many questions about what that would do to society and what that would do to humans on an individual level. But beyond the question of whether that's a good goal, the idea that AI is the way to get to a sort of post-labor world, and the fact that people are clinging to that as the end goal of all this, is part of the reason why I find it so unsettling, because it seems so detached to me. Right? Like—

 

Kevin Roose: Well, how else would we get there? I mean, if billions of people now have jobs that require them to, you know, write things, to send emails, to make phone calls, to move data around in a database, to write programs, to drive cars, how else, other than AI, do you get to a world where those people don't have to do those things?

 

Brian Beutler: I, I, I don’t have an answer, but, I mean. 

 

Kevin Roose: Yeah. 

 

Brian Beutler: I, you know, if it's as you say, that you couldn't address individual tasks with individual dumb AI, then maybe advanced AI is the only way to do it. But I guess I think with many kinds of innovation, the collective benefit to humanity is so much greater, at least in hindsight, but I think often in foresight too, than any major drawback, that you'd never think about going back: you would never unmake penicillin or the internal combustion engine. And even where the good and evil are kind of comparable in magnitude, like fission power, you get why humans collectively would probably say, you know, yeah, we think the good is worth the bad. Like, maybe eventually the fact that we mastered fission power will give us fusion power and clean energy abundance, right? We can make those kinds of complex trade-offs. But the utopian vision of benevolent AI, it can't promise anything concrete, like curing diseases or making exploring the world possible. It's typically, I think, machines will do our productive work and we will enjoy the spoils: we'll have more time for leisure and creativity. And I feel like the leap you have to make to get from we invented AI to our bartenders and farmers and doctors are all robots is pretty attenuated, and it's much more attenuated than we invented a cure for cancer and now nobody dies from cancer.

 

Kevin Roose: Oh, see, I totally disagree with that. And I think it's actually quite related to this conversation we've been having about these large language models like ChatGPT. So several years ago now, a company called DeepMind, which is a subsidiary of Google, started using these models, these so-called transformer models, to try to solve a problem that had vexed scientists and molecular biologists for decades. It's called the protein folding problem. It's basically: if you have a, you know, one-dimensional sequence of amino acids, what does the 3-D structure of that protein look like? And scientists had tried and failed for years to solve this using conventional methods. They trained an AI, the same type of AI that goes into GPT-3 and all these other large language models. They trained it to take these amino acid sequences and predict protein structures, and it solved the protein folding problem, essentially.

 

Brian Beutler: Right. 

 

Kevin Roose: So now, if you are a scientist who's looking to make a breakthrough on using a certain type of protein to make a new drug, for example, or to cure some type of cancer, that is going to be much, much easier for you, because the AI researchers figured out that the same model that was generating chat bots could also be used to fold proteins. So I actually think there's a real link here between some of these seemingly, you know, sort of luxurious applications of the tech and, like, solving real life-and-death problems.
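[Editor's note: a toy sketch of the idea described above, treating an amino acid sequence as tokens for a transformer, the same model family behind chat bots. Purely illustrative; DeepMind's actual AlphaFold system is far more elaborate.]

    import torch
    import torch.nn as nn

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
    seq = "MKTAYIAKQR"                    # a made-up example sequence
    tokens = torch.tensor([[AMINO_ACIDS.index(a) for a in seq]])

    # Embed each residue, run a small transformer encoder over the sequence,
    # and predict an (x, y, z) coordinate for every residue.
    embed = nn.Embedding(len(AMINO_ACIDS), 64)
    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)
    to_xyz = nn.Linear(64, 3)

    coords = to_xyz(encoder(embed(tokens)))
    print(coords.shape)  # torch.Size([1, 10, 3]): one 3-D point per amino acid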

 

Brian Beutler: Right. Right. Like, if you imagine a problem that you want to solve, like curing cancer, you apply the artificial intelligence to that problem and you hasten the goal. And that seems like an unalloyed good. It's the utopian end point where I feel like, we have automated stuff before, and sometimes it's a net win for humankind. But we know the displaced workers don't get to keep the labor savings and go off and make art. And that's the distance that I'm talking about. That AI could help find a cure for cancer, and then people will stop dying of cancer, that seems like a pretty direct line to me. We will have advanced AI and thus the labor-free utopia of our dreams, that seems like there's, like, a million political problems—

 

Kevin Roose: Totally. 

 

Brian Beutler: —that will throw that into dis—

 

Kevin Roose: But I would say those are political problems, not technological problems. Right. So that has to do with how we implement the technology and who's getting the increased productivity fruits of that technology. And this can go both ways, right? So I wrote a book about AI a few years ago, and I went back and looked at previous waves of automation, starting in the Industrial Revolution all the way through the PC revolution in the eighties and nineties. And it's gone both ways. So during the Industrial Revolution, for example, workers didn't see real wage gains as a result of the increased productivity of the factory machines for about 50 years in some cases. So an entire generation went through the workforce not necessarily having their paychecks go up as a result of the fact that they were working in these hyper-productive factories rather than doing backbreaking farm labor. But in the middle of the 20th century, when a lot of American factories were automating things like auto manufacturing, workers actually did see almost immediate gains from the increased productivity of their workplaces. And the big reason why, obviously, is because they had unions. Right. These were unionized workforces; they were able to advocate and collectively bargain for a larger share of the increased profits. And so the workers' lives did end up being better. Those jobs were better as a result of the automation of those factories. So it can go both ways. But I would just say, those are generally problems with labor and management, and political questions, rather than questions about the actual technology.

 

Brian Beutler: Right, yes. Okay. And I'm glad you said that, because it speaks to the part of me that does not want to be mistaken for, like, a Luddite. Like, I think it's super cool that humans have been able to get this far in the AI development process. It's that there's no abstracting it from the political world. Like, you can't, I don't think. And given that our political world is fallen and corrupt, and there are no perfect societies but plenty of bad ones, you know, it doesn't take a lot of imagination to start thinking through how the technology might be abused and really hurt people. Like, even short of the robot apocalypse, where the robots themselves turn on us. Right.

 

Kevin Roose: Sure. I mean, there are all kinds of things that could go wrong. But I think a good sort of general test for whether, you know, [laughs] technological progress in general benefits us is: would you switch lives with someone who lived 200 years ago? I would not. I don't think a lot of people would. You wouldn't have penicillin. You wouldn't have, you know, cars. I mean, just the simple litmus test for one's belief in technological progress is, you know, would you rather live now, or would you rather live at any point in the distant past?

 

Brian Beutler: Well, I think that's true for us right now. But it hasn't always been true, at every point that was once today, that yesterday didn't look better; there has been backsliding in the world. And I think AI is this watershed thing. You can't just assume that the world as changed by AI 50 years from now is going to be one that we don't regret having walked into.

 

Kevin Roose: For sure, and I make this point a lot in my book: technology does generally make us better off in the long run, but people don't live in the long run. Right. There are people who lost their jobs—

 

Brian Beutler: Another good point, yeah. 

 

Kevin Roose: —you know, people who lost their jobs to the Industrial Revolution or the manufacturing automation boom in the 20th century, who weren't able to sort of seamlessly make the transition from one era to the next. They fell through the cracks. They, you know, lost high-paying jobs and were forced to take lower-paying jobs in a different sector. So I don't want to be mistaken for someone who thinks that this is all going to work out just fine for everyone. I think what we can start to do is build the systems that can help people as they fall through the cracks, as they get displaced by this new technology, to make it so that they can easily get back on their feet.

 

Brian Beutler: How useful do you think it is to look to the pitfalls we've lived through with social media when we're thinking about how to steel ourselves for the AI future? Like you were just talking about, this will displace people from their jobs. It just will, right? And it will probably have all kinds of other unforeseen consequences, even if the benefits, we ultimately decide, outweigh them. But with social media, we did no advance planning. There was no blue ribbon commission to try to figure out how we were going to integrate this technology into our lives. And, you know, we're living with the consequences of that now. Do you think that there's any way to imagine we're in the year 2003 or 2004 right now, and start using social media as, like, a model for preparing for what might be coming down the pike with AI, so that we're not tripped up, at least by the most obvious consequences?

 

Kevin Roose: Yeah, I think that's already happening to some degree. I mean, you do have, you know, task forces and AI ethics committees and, you know, lots of people writing and thinking about potential downsides. But I think it's a great question, because I think that we made at least a couple of mistakes in evaluating social media when it first emerged. The first was that people just didn't think it would work. Like, it wasn't so much that they thought it was bad or destroying society or increasing polarization. They just thought it was kind of stupid. Like, there was this whole era where it was like, why would I go on Instagram and, like, post pictures of my breakfast? [laughter] Why would I go on Twitter and, like, talk about what I ate for lunch? It was seen as this kind of novelty that was dumb and useless, and that people would eventually get sick of and move on from. And what I don't think people were taking seriously enough at the time was the possibility that this actually would succeed, and succeed beyond anyone's expectations. And you see a little bit of that happening now with AI. I mean, certainly less than you used to, when the technology was worse and less developed. But you still see people saying, oh, this AI, it can't even write, you know, a perfect essay [laughter] on the philosophy of John Locke; this would only get a B-minus in a college class. And it's like, yeah, but on the other hand, this would get a B-minus in a college class, and it was invented, like, a week ago. So I think that we need to take seriously the possibility that all of this will work very well, and that the rate of progress will either stay constant or continue to accelerate, and extrapolate from there, rather than saying, oh, these systems now, they're so dumb, they can't even do X, Y, and Z.

 

Brian Beutler: What does the universe that you report on, where these discussions and debates and this sort of thinking are happening, basically look like? It sounds like it's think tanks, universities, you know, tech companies, a largely online discussion among people who are very smart but are a degree or two, at least, away from, say, a government decision maker. Am I wrong about that? Is there, like, a level at which our government, or other governments, are in conversation with each other about, you know, what are the rules of war in a world where AI is advanced and operating at a very high level?

 

Kevin Roose: Well, governments are obviously very interested in this, not least because it's going to affect things like, you know, military weaponry. And there are already, you know, blue ribbon commissions. And Eric Schmidt, formerly of Google, has been, you know, trying to convince the military and Silicon Valley to work together now for years. He actually wrote a book on AI with Henry Kissinger, [laughter] which I reviewed in The New York Times Book Review using AI, [laughter] because I was just like, this is not a task that I actually want to do, I'm just going to give it to the AI. And it did a pretty passable job.

 

Brian Beutler: Nice. 

 

Kevin Roose: But I do think that governments are paying attention. I don't know how sophisticated the conversations they're having with the tech companies are, but I know that there are efforts to make that conversation more sophisticated. So Stanford recently held a bootcamp for congressional staffers, where the staffers, you know, flew out to Palo Alto and went to workshops and, you know, talked about the latest capabilities of these systems. So I know that there are efforts to get that conversation up to speed. You know, frankly, I'm a little bit pessimistic about it, just because it seems like our lawmakers are still figuring out how social media works—

 

Brian Beutler: Yes, this is what I was gonna say. 

 

Kevin Roose: —and social media has been around for 20 years at least. So I think that what we need to get our heads around is the fact that all this is happening very quickly and that the conversation needs to progress just as quickly. 

 

Brian Beutler: Right. Right. I mean, as you were talking, I was imagining the House and Senate [laughter] hearings with the social media executives, where the members of Congress just embarrassed themselves with how little they knew. And it was also after the horse was way out of the barn. Right. They were having these hearings after the 2016 election, where for the first time they realized, oh, social media, you know, can be used for more than just keeping up with your ex-girlfriend from wherever. And having just been through that experience, an AI optimized to subvert U.S. elections, like, on the one hand, that's textbook fighting the last war. But it also seems like something that will obviously happen, like, maybe before the decade is out. Right? Like who's—

 

Kevin Roose: Yeah. Or, you know, has already happened— 

 

Brian Beutler: Yeah. 

 

Kevin Roose: —I mean, the Facebook newsfeed is run by AI. There's this sort of famous quote in technology that it's only AI until it works, at which point it's just a newsfeed or, you know, Siri or, you know, a smart toaster or whatever. We just stop referring to it as AI once it becomes part of our daily lives.

 

Brian Beutler: Right. 

 

Kevin Roose: So yeah, I think you could argue that, that AI has already affected all of these things that we’re talking about. [music break]

 

[AD BREAK] 

 

Brian Beutler: I guess there's a loose analogy to, like, AI can help scientists solve the how-do-you-cure-disease problem; AI can help nation states screw around in each other's elections. But I don't think we have yet, as far as we know at least, seen a nation state develop a technology that is programmed to try to understand how best to deploy messages to, whatever, you know, screw with an election. And that could easily, I mean, if you play with the OpenAI chat bot, you will instantly realize how this could be deployed to ends like that. You know, not necessarily specifically that one, but just, like, chicanery in general.

 

Kevin Roose: For sure. And, you know, the number of governments that are working on this, I don't know exactly. But, for example, China has made developing next generation AI a huge national priority for years now, and that's a big part of what their government technology spending is going toward. And they're also, you know, intimately involved with private sector technology companies in China. So we're sort of talking about this as if it's in the future tense, but I think it's in the present and past tense as well.

 

Brian Beutler: That's scary. [laughter] Okay. Well, you alluded to this a little bit, but let's talk about how AI might end up competing with human creators. I sort of built this conversation around the chat bot, but there's another AI tool, I think it's called Lensa, I think that's how you pronounce it, which, as far as I can tell, works like: you give Lensa all your information and it generates—

 

Kevin Roose: It makes you hot. [laughter]

 

Brian Beutler: Yeah. Yeah. Generates avatars for you to use on social media where you look super cool. 

 

Kevin Roose: Yeah. 

 

Brian Beutler: I haven't played around with Lensa, in part because the few artists that I know are horrified by it. And they seem to think that art-generating artificial intelligence is, like, a terminal parasite for human artists. And part of me trusts their instincts. But another part of me thinks, you know, art has survived lots of technological developments that make it easy for non-artists to create aesthetically cool stuff. So who do you think has the stronger instincts in this debate? The people who are like, we need to shut this down or humans will never work as artists again? Or the people who are like, it's really not worth getting so worked up about?

 

Kevin Roose: I mean, I think I understand why folks are upset about it. At the same time, people have been making this exact criticism of new art-making technology for hundreds of years. I mean, if you go back and look at the reactions among artists to the camera, there are art critics and very serious, thoughtful people who proclaimed that the camera was the death of art. And obviously, that wasn't true. It just created a new genre of art, and amateurs and professionals, you know, could use photographs to make kinds of art that weren't possible with paintings. And so I think of AI-generated art as sort of just a new genre, in the same way that photography was a new genre. It's slightly different, because what's happening with AI-generated art is that these models are trained on millions and billions of examples of art, some of which was, you know, created and copyrighted and sold by actual living human artists. And it really sucks for some of those artists to see that there's now an AI that's capable of mimicking their style. There's an interesting example: a few weeks ago, there was an artist named Greg Rutkowski, who is sort of a famous fantasy artist. He makes a lot of, like, you know, fantasy graphics. And people who are using these AI art generators discovered that if you just typed Greg Rutkowski into the prompt engine and told it to make your image look like his work, it would make it amazing, because Greg Rutkowski is an amazing artist. So it was sort of this, like, secret hack that people discovered that you could use to make your art in these apps turn out way better. And that sucks—

 

Brian Beutler: Yeah. 

 

Kevin Roose: —for Greg Rutkowski, because all of a sudden, like, he has been commoditized, and now people can use this AI art generator that was trained, you know, in part on his productions to make art that looks every bit, or, you know, most bits, as good as an actual Greg Rutkowski creation. So I think that's a genuine problem and a genuine thing that artists are correct to be mad about. And I hope that these AI programs will, for example, allow them to opt out of having their imagery included in the training sets. But I do think that this is going to, and already is, transforming art in many, many ways. And I think that, you know, the cat's sort of out of the bag with that.
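[Editor's note: for readers who have not used one, this is roughly the entire interface to an AI art generator; a sketch using the open-source Stable Diffusion model via the diffusers library. The prompt is illustrative; appending an artist's name to it is the "hack" discussed above.]

    # pip install diffusers transformers torch  (requires a CUDA GPU as written)
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The entire user-facing "program" is one sentence of text.
    image = pipe("a dragon circling a castle at dawn, detailed fantasy art").images[0]
    image.save("dragon.png")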

 

Brian Beutler: Yeah, maybe this is a little half baked, but my instinct is that AI is probably a greater threat the more abstract the creative enterprise is. Like, you know, people have asked me, are you concerned as a journalist that AI is going to make your work irrelevant? And I think, in the realm of journalism, if you're not doing paint-by-numbers journalism, where it's just like, okay, dial a quote here, dial a quote there, put it into a story, but you're applying actual judgment to, you know, what stories are interesting, who are the meaningful people to talk to, who are the most interesting sources, what is the right way to think about a development in the world, right, like, if you outsource those kinds of questions to an AI, the final product is going to seem weird, like, very uncanny. On the other hand, I can imagine AIs creating magical realist fiction or music or abstract art of the kind you were just talking about that nobody could reliably identify as machine generated. And that said, I think it touches on this utopia question we were discussing: if AI is supposed to liberate us to be our best creative selves, but it's simultaneously colonizing the creative realm, what's left for us [laughs] to do with our newfound free time?

 

Kevin Roose: Yeah, I, I sort of disagree with that, actually. 

 

Brian Beutler: Okay. 

 

Kevin Roose: I think that there have been some interesting studies done where, if you ask people what percentage of workers they think will be displaced by AI, you know, something like 75% of people say it's a huge problem that's going to displace lots of workers. And then if you ask people if they're worried about their own jobs being displaced by AI, only about 25% of people say that they're worried about that. So there's this kind of hubris that people have where it's like, oh, AI could replace, like, those people, but I am so special—

 

Brian Beutler: I’m special, yeah

 

Kevin Roose: —and unique and talented that it couldn't replace what I do. And, you know, I love your writing and I would never say anything bad about it. But for my own writing, for my own, you know, newspaper work, I think there's a decent amount of what I do that could be automated. Like, some days I'm really creative, I'm really firing on all cylinders, I'm coming up with ideas that are sort of original. Some days, if I'm being totally honest, it's pretty predictable: something happens in the news, I come out with a take, it's not all that original. And so I think even those of us who consider ourselves sort of special and talented and beyond the reach of these AI programs, like, we might have to step our games up.

 

Brian Beutler: Yeah, yeah. I didn't mean to suggest that it would affect nothing. Like, actually, the competitive pressure would be to lean on what we have over the machines, which is this abstract notion of judgment, I guess.

 

Kevin Roose: But, but I don’t know. I don’t even know if I agree with that— 

 

Brian Beutler: No? 

 

Kevin Roose: —because a lot of what these AIs are doing is a lot of what we are doing as writers and journalists and analysts. We're synthesizing ideas. We're, you know, taking an argument and combining it with another argument. It's actually kind of what these models are doing, in some sense. And I just don't think we can rest on our laurels here. And I don't think we can assume that, because we think that we're so talented and special, these AIs aren't going to be a problem for us.

 

Brian Beutler: So you think the abstract creators are, like, more insulated from the AI colonization than us lowly hacks?

 

Kevin Roose: It could be. It could be. But I don't think anyone's safe. I don't think there—

 

Brian Beutler: [laughter] There we go. 

 

Kevin Roose: And I don’t mean that in, like, a doomsaying way. I just genuinely think that, like, you know, one hard thing that we’re going to have to get our minds around as a society is that, you know, a lot of us are pattern recognition machines. 

 

Brian Beutler: Yeah. 

 

Kevin Roose: We are people who are doing pretty basic sorts of synthesis. I mean, one thing that I've gotten wrong about AI over the years is that I thought the sort of manual jobs would be automated before the sort of creative jobs. And what we're seeing now is that a lot of so-called creative jobs are actually just about rote repetition of processes. Like a fashion designer: maybe they're an amazing, genius, original fashion designer. Or maybe they're just saying, let's combine these three styles in a new way, or, let's take, you know, what Gucci did last spring, change it in a couple of different ways, and make some variations on it ourselves. It's, like, kind of pattern recognition and synthesis.

 

Brian Beutler: I guess here's why, and I honestly think I came to this less because I want to think of myself as some sort of special entity in the world that can't be out-competed by AI, but just because of the nature of the best journalism. Like, for instance, let's say you were tipped off by a source about some 19 year old kid at Stanford or wherever who had done something amazing technologically, with AI or anything else, just, this kid was, like, the Einstein of tech or something. And you were on this lead, and you tracked him down, and you got an interview, and you came to understand what he was working on, and you spun it into this incredible story that was part biography of the kid and part exploration of what he was doing. Right? Okay: new information comes to light, you recognize it as a fruitful topic to report on, and you can do that. If you tell an AI, at least AIs as they exist today, about that, and have it try to do the same work, I think it would look weird. I don't think it would look like a story, a piece of narrative nonfiction or whatever, that anybody recognized as being done by a human. I think they would know that it was, like, a machine that had tried to figure out how the world had changed, even though it couldn't actually see that the world had changed. You know what I mean?

 

Kevin Roose: Totally. And and that’s why I think, for example, opinion writers are going to be easier for AIs to replace than— 

 

Brian Beutler: Right. 

 

Kevin Roose: —investigative reporters, because investigative reporting relies on human relationships and, you know, getting people to divulge secrets, and doing archival research, and all these things that, for now, depend on human skills. Whereas, you know, some opinion writing, and I'm not naming names, is sort of just paint by numbers: something happened, and here's my take.

 

Brian Beutler: Yeah, yeah. I have, over the course of the last decade or so, written a couple and abandoned a couple of, like, book proposals. Because with a lot of the work I do, I worry that if I spent six months or a year writing a book on it, it would be relevant for, like, a month or two and then be forgotten. And I want to avoid that. So I asked the chat bot. I said: I'm a life coach advising the journalist Brian Beutler. Brian wants to write a book, but only if he's confident that the book won't become irrelevant, commercially or otherwise, very quickly. Given his interests and areas of expertise, please provide three brief synopses of books he might write that will continue to be cited, purchased, or assigned in universities for years after publication. And it gave me th—

 

Kevin Roose: What did it say?

 

Brian Beutler: Oh my god. Okay, I'll just read them all. One: The Art of Persuasion: A Guide to Influential Writing and Speaking. This book would explore the various techniques that writers and speakers can use to persuade their audiences and make a lasting impact. It would cover topics such as logical argumentation, the use of emotional appeals, and the importance of framing and storytelling. Two: Digital Democracy: The Future of Politics and the Media in the Information Age. This book would examine the ways in which the rise of the Internet and social media is transforming the political landscape and the role of the media. It would discuss the challenges and opportunities posed by the shift to digital communication and offer strategies for navigating the new political environment. Three: The Science of Happiness: How to Increase Well-Being and Reduce Suffering. This book would delve into the latest research on happiness and well-being, and offer practical advice on how individuals can improve their own happiness and the happiness of those around them. It would cover topics such as mindfulness, gratitude, and the importance of social connection, and would provide a scientific framework for understanding and enhancing well-being.

 

Kevin Roose: That’s pretty good. 

 

Brian Beutler: I know. I mean, I'm not going to write any of those books. They don't—

 

Kevin Roose: Why not? 

 

Brian Beutler: They're not. Well, I mean, they actually kind of go in order: the first one is the one that I could, I think, do most authoritatively, followed by the next two—

 

Kevin Roose: Do all three. Use ChatGPT to write the books. 

 

Brian Beutler: Well, I mean, but I found this, I was like, this isn't quite helpful yet, but the next generation, you know, before you cross the line into just basically outright plagiarism, it's like a sounding board. And these aren't particularly bad ideas. And they do—

 

Kevin Roose: No, there are bestselling books about all of those topics.

 

Brian Beutler: All of those topics. And they intersect with stuff that I've written about recently that clearly the chat bot is aware of. And I was, like, a little scared by the response, and also a little wary about reading the response out on the podcast. But I was like, screw it. People should understand how much the chat bot already knows about them.

 

Kevin Roose: Totally. 

 

Brian Beutler: Like, by this point, you know, if you have any kind of public profile.

 

Kevin Roose: And even if you don’t, I mean you could say, you know, I’m a geologist, you know, with an interest in, you know, antique cars or whatever, you know, three books. So it’s it’s not even dependent on sort of you having a public profile and a body of existing work. It could just do that based on some things that you tell it about yourself. 

 

Brian Beutler: Right, right, right. But I mean, I assume in my case, because I was on the Internet as of 2021, it knew who I was in some sense.

 

Kevin Roose: Yeah. 

 

Brian Beutler: I know you have to go soon. But since we're at this, like, how will creators use this? How will this affect creators? There are definitely benevolent ways creators can use this to help jog their thinking or whatever. But because so many people are interfacing with AI for the first time on this bot, a lot of the navel gazing has been about how it might disrupt things like teaching and journalism. Like, can teachers teach the way they know how to teach if a chat bot can just do a kid's homework for them? You know, if a journalist can just say, write me an article about this? And I guess my question is: is this kind of concern inextricable from AI, or can AI be developed in a way that at least preserves this realm where we know what is human creation, rather than it being kind of a mystery, where everything might be plagiarized in a way?

 

Kevin Roose: I think we've already crossed that bridge. I mean, I've heard from students who are using this to do all their homework. I've heard from, you know, marketers who have been using AI to write their client emails. This particular toothpaste is out of the tube already. So I think that it's going to be a challenge, especially for evaluators. Like, I talked to some teachers who basically consider this the end of homework and say, you know, I'm going to have to move to oral exams and in-class essays, things like that. Because, you know, I assume that if something like ChatGPT is available, all of my students will be using it. Why would they not? And frankly, why would I not allow them to? It seems a little arbitrary.

 

Brian Beutler: But I mean, for the human engineers behind it, is AI something that is definable in a way that would, in some sense, sort of watermark what is the creation of an AI versus what is the creation of a human? So, I mean—

 

Kevin Roose: Sure. 

 

Brian Beutler: It does feel like cheating to me, and maybe I'm just being a little bit of a grandpa here. But there's a difference to me between, help me understand why nine times six equals 54, and, hey, my homework assignment is what is nine times six, give me the answer. And, I mean, obviously, we're way past simple math calculation. But, you know, my prompt for the essay is this, write the essay, versus, my prompt for the essay is this, really help me understand what the prompt means so that I can write the essay. Like, those are two different creative enterprises. One's theft to me, and one's a human learning, which is great. And I don't know if that's a line that the creators of AI can brighten, so that AI just isn't used to help people skip out on learning.

 

Kevin Roose: [laughs] Well, there's that famous quote, good artists borrow, great artists steal. I think that was maybe Picasso. So I think it's maybe a little bit romanticized to say that we were these engines for original generation of ideas, and now we're all just going to be glorified plagiarists. But I do think there's an idea in there about watermarking AI-generated content that's probably going to be important, especially in situations where work is being evaluated. But yeah, I mean, I'm aware of the downsides, and I think those are really important. And at the same time, I've been using ChatGPT as my personal tutor for the last week, and it's amazing.
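[Editor's note: a toy sketch of one proposed scheme for watermarking generated text, in which the generator statistically favors a pseudorandom "green" subset of words and a detector counts how often consecutive words land in it. The hash rule and threshold here are illustrative, not any vendor's actual method.]

    import hashlib

    def is_green(prev_word: str, word: str) -> bool:
        # The previous word seeds a hash that marks half the vocabulary "green."
        digest = hashlib.sha256((prev_word + "|" + word).encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(text: str) -> float:
        words = text.split()
        hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
        return hits / max(len(words) - 1, 1)

    # Ordinary human text should hover near 0.5; a watermarked generator that
    # deliberately favors green words would score well above that.
    print(green_fraction("the quick brown fox jumps over the lazy dog"))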

 

Brian Beutler: Yeah. 

 

Kevin Roose: I mean, it's explaining stuff to me that I have been meaning to learn about for years, but maybe I couldn't find the right, you know, journal article, or it was too technical or something like that. And I've just been having it explain concepts to me, and it's really, really good. So yes, there will be issues for students; yes, there will be issues for teachers assigning homework. But the idea that every student could have access to a personalized tutor that would explain anything they wanted at any level of difficulty, and do it in a reliable way, is, I think, just a really, really exciting possibility.

 

Brian Beutler: That's a good note to end on, but instead I'm going to make you end on this: Are you worried about the robot apocalypse?

 

Kevin Roose: Sure. I think you have to be. [laughs] It doesn't keep me up at night, but it's also, you know, something that I think we all need to be paying attention to.

 

Brian Beutler: Yeah. Yeah. I mean, when the creators of this technology acknowledge they can't promise this won't end the world, it raises novel questions that you just don't get to when you're talking about, like, drug development. And I don't know how you can approach it with the sense of utopianism that some of the people in the tech world seem to have about it, if you can't rule out that scenario.

 

Kevin Roose: Yeah. I mean, it's sort of how you want to live your life. Do you want to see something new and think of all the ways that it could end civilization? [laughter] Or do you want to see something new and figure out how we can use it for good, and put real pressure on the companies developing this stuff to make it harder to use for nefarious purposes and easier to use for good purposes? I think that's a totally legitimate goal.

 

Brian Beutler: Yeah, you can tell that the chat bot is being refined to be less, I mean, it's a fun game to try to get the chat bot to admit that, for instance, it would ultimately start lying to you if that was optimal in some way. And as it's gotten smarter, I guess it sort of works like, you punish the AI when it starts giving these answers, like how do you build a bomb or whatever, so that it stops doing that. And I've noticed just in the last couple of weeks that it's gotten harder to trick it.
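[Editor's note: the "punishing" described here resembles reinforcement learning from human feedback, in which a reward model is trained to score preferred answers above refused or harmful ones, and the chat bot is then tuned against that reward. A toy sketch of the preference loss; the scores are made up.]

    import torch
    import torch.nn.functional as F

    # Reward-model scores for two candidate answers to the same prompt.
    reward_chosen = torch.tensor([1.3])    # the answer human raters preferred
    reward_rejected = torch.tensor([0.2])  # the answer they flagged as harmful

    # Training pushes preferred answers to out-score rejected ones; the language
    # model is then fine-tuned to maximize the learned reward.
    loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
    print(loss.item())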

 

Kevin Roose: Yeah. 

 

Brian Beutler: And so maybe there's some hope there, that public pressure on the people responsible for creating this thing will be great enough that they will be policing it at least well enough so that we don't fast forward from here to the nightmare scenarios. Kevin Roose, I kept you too long. Thank you for spending so much of your time with us.

 

Kevin Roose: Thanks for having me. [music break]

 

Brian Beutler: All right. I got to say, I envy Kevin's chill. And in the spirit of that chill, I'm not going to try to end on any profound note, because, as you probably gathered, I'm not sure any one person can really wrap their head around what it all means when the "it" is advanced artificial intelligence. So instead, I'll close with some more ChatGPT content. We thought it'd be fun to ask the chat bot for some episode ideas for Positively Dreadful. But then we realized ChatGPT can't really be aware of our existence, because we launched this year. So first I asked about our precursor show. I said: give me three good episode ideas for the podcast Rubicon. And it offered: The Rise and Fall of Ancient Rome. This episode could explore the history of ancient Rome from its founding to its eventual collapse. The episode could focus on key events and figures in Roman history, such as Julius Caesar, Augustus, and Constantine. The Power of Propaganda. This episode could delve into the ways in which propaganda has been used throughout history to manipulate public opinion and shape political outcomes. The episode could explore examples of propaganda from different time periods and discuss how it has evolved over time. The Role of Religion in Politics. This episode could explore the intersection of religion and politics, examining the ways in which religious beliefs have influenced political decisions and vice versa. The episode could discuss examples of religious influence in politics from different time periods and regions of the world. That first answer in particular seemed pretty fishy to me, because while the show was called Rubicon, it had nothing to do with ancient Rome. So I asked ChatGPT how it was able to generate those ideas, and it said: in this case, I was able to generate ideas for the podcast based on my understanding of what a podcast is and the context of the prompt, as well as my knowledge of history, politics, and other relevant subjects. I did not have any specific information about a podcast called Rubicon, because that information is not part of my training data. Bummer. So then I said: Positively Dreadful is a weekly interview podcast hosted by the journalist Brian Beutler. The show is about worrisome trends in politics and society, and how people can better understand those trends and hopefully improve them. Given this theme, give me three possible Positively Dreadful guests, along with the topics they'd discuss. And ChatGPT offered: a political scientist who studies the rise of authoritarianism in democracies around the world; they could discuss the warning signs of authoritarianism and offer advice on how individuals and communities can protect against it. A social psychologist who studies the effects of social media on mental health and political discourse; they could discuss potential negative effects of social media, as well as strategies for using social media in a more positive and constructive way. An expert on disinformation and propaganda who studies the ways in which false information is spread and used to manipulate public opinion; they could discuss the tactics used by those who spread disinformation and offer advice on how to identify and counter false information. I'm pretty sure we've done at least two of those three shows already, which means the AI knew, with a one sentence prompt, how to do in an instant what my producers and I spend at least an hour a week working on, with all our combined years of experience. Yeah.
[music break] Positively Dreadful is a Crooked Media production. Our executive producer is Michael Martinez, our producer is Olivia Martinez and our associate producer is Emma Illick-Frank. Evan Sutton mixes and edits the show each week. Our theme music is by Vasilis Fotopoulos.