© 2024 WLRN

How AI is polluting our culture

Close-up of a person's hand using the Midjourney generative AI image generator in Lafayette, California on May 7, 2024 (Smith Collection/Gado/Getty Images)

AI-generated content online is almost impossible to avoid. There are AI-boosted Google search results, AI-generated imagery, AI-generated articles, AI-generated music, even AI-generated children's TV shows.

Neuroscientist Erik Hoel says we're drowning in "AI dream slop."

Today, On Point: The cost to our humanity in a world of synthetic culture.

Guest

Erik Hoel, neuroscientist and writer. Author of the Substack newsletter The Intrinsic Perspective.

Transcript

Part I

DAVE ESPINO: Imagine making $1.2 million creating AI-generated videos. We're going to show you how in this video.

MEGHNA CHAKRABARTI: That's Dave Espino, host of a YouTube channel called Making Money With AI. In that clip you just heard, Espino's using a company called RV AppStudios as an example of how to make those bags of money.

RV AppStudios makes games and apps for adults, some of which you have to pay for. They also make free games, apps, and online videos for children. And they do that with AI, as Espino says. He describes how RV AppStudios makes the content on one of their kids' learning channels, called Lucas and Friends.

ESPINO: Yeah, this channel is mind-boggling. It has 917,000 subscribers. This is the type of animation that you can create. Let me show you really quick. And you could create that type of animation using AI, and these are mostly learning videos. So the kid is getting some great value, they're learning all kinds of stuff.

These are for toddlers, as you can see.

CHAKRABARTI: Dave's co-host, James Renouf, emphasizes repeatedly that gone are the days when children's television makers needed a huge staff of writers and animators. All you need now is some AI software, and, in the quote "writer's room," ChatGPT.

JAMES RENOUF: And I don't want people to say, gosh, it takes, these videos are 30 minutes long.

First of all, these are like not crazy dialogue here, okay? We're not writing Shakespeare, okay? It's like the letter A, the letter B. And you can use ChatGPT to make these little scripts. So say, make me a little script where we teach kids ABCs, or we teach them numbers, etc. And then you use the power of AI to make these videos.

You don't have to be a graphic artist. It's amazing what can be done very simply. So little Johnny here, go learn your ABCs while mommy puts her feet up, okay? You create these kinds of videos and you can have a channel blow up.

CHAKRABARTI: (LAUGHS) Sorry, as a mommy myself, yeah, sometimes you want to put your feet up, but that's not necessarily why you would need your child to learn the ABCs.

But the point is that kids' content on YouTube these days is a very different beast from traditional educational kids' television. For example, Sesame Workshop, home to Big Bird of course, has more than 1,000 employees in the United States alone. And among them are an army of childhood learning experts who obsess over the latest research on child development, and they even commission studies of their own, such as a 2021 study Sesame Street commissioned to investigate how the pandemic was impacting young children in particular, and how those findings could be incorporated into their television programming.

Over at RV AppStudios, it's not clear how much research or academic expertise informs how they create Lucas and Friends. Maybe there is some, but it's not easy information to find. But it is clear that the company is having an impact, judging by the kind of metrics that matter most to digital organizations: 400 million free kids' app downloads so far, pacing at 15 million more every month.

Their AI-generated YouTube videos get millions of views within weeks of being posted. So here's an example of one. This is from Lucas and Friends, their animated YouTube channel. And you can't see it, but it's Lucas, a yellow animated animal figure, happily teaching your baby how to wave.

LUCAS: Hello, friends! Wave your hand and say, Hello! Hello! Hello! When you meet someone, you say, Hello! When you meet someone, you say, Hello!

CHAKRABARTI: Joining us today is Erik Hoel. He's a neuroscientist and writer and author of "The World Behind the World: Consciousness, Free Will, and the Limits of Science." He's also author of a Substack newsletter called "The Intrinsic Perspective." Erik, welcome to On Point.

ERIK HOEL: Thank you so much for having me.

CHAKRABARTI: You were literally grimacing when we played that sound from Lucas and Friends. Is it up there in sort of the Baby Shark level of adult irritation for you?

HOEL: No, I think Baby Shark is so much better. Only a human could write a tune as catchy as Baby Shark.

I first stumbled across this stuff while I was researching the proliferation of AI-generated content on the internet. And I wrote about it on my Substack, and then Wired ended up doing an investigation into some of the channels. And what I found was that if you look at the actual content of these videos, most of the time there are numerous errors baked deep into them.

So they'll show a shape that's a hexagon, and they'll say it's a pentagon. They are incredibly formulaic, because they have to be. In the end, with these technologies, forgetting about the moral, philosophical question of whether we should be entrusting non-human minds with the education of our children, there's still just the practical question that these things get a lot of things wrong.

I was recently testing some of the more frontier models of AI. I've been teaching my son to read. He turns three in a week. And so we've been going over simple sentences with the most common letter sounds. And so you can create these simple sentences. And it's really the best way, I think, to teach a child to read.

So you say something like, Bob sat in mud. And I have to create a lot of these sentences to give him new sentences to practice, and it's this boring task. Perfect, I would think, to maybe outsource to an AI. So I tried honestly using the AI for some lesson planning, to give me back simple sentences.

And I asked the smartest AI, which I think is probably Claude Pro, though maybe this new recently released GPT-4o is slightly better. But at the time that I asked, Claude was basically the leading model, and it couldn't do it. It couldn't come up with sentences that are all the simplest sounds, because it's something that's not in their training set.

It's a weird ask, and in the end, it would say something like, Bob was big. Was? That's not the way an S normally sounds, is it? And even when I pointed out the mistakes to it, it would go back and make the same mistakes over again. And if you can't teach a two-year-old basic letter sounds, how are you going to scale that up to the dreams of AI supplementary tutors teaching physics to kids?
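[Editor's note: The "was" failure Hoel describes is the kind of thing that can be caught mechanically, even when the model can't. Below is a minimal sketch of such a filter in Python; the irregular-word list and the spelling rule are illustrative assumptions, not a real phonics curriculum.]

```python
# Sketch: filter generated phonics sentences so only "decodable" words
# with the most common letter sounds get through. The word list and
# rules below are illustrative, not a real phonics curriculum.

import re

# Irregular words whose spelling doesn't match the common letter sounds
# (e.g. the "s" in "was" sounds like "z").
IRREGULAR = {"was", "the", "said", "of", "to", "is", "his", "has"}

def is_decodable(word: str) -> bool:
    """True if the word is a simple consonant-vowel-consonant pattern
    and isn't a known irregular word."""
    w = word.lower().strip(".,!?")
    if w in IRREGULAR:
        return False
    # Allow short words built from single-letter sounds only: no
    # digraphs like "sh"/"th", no silent-e endings.
    pattern = r"[bcdfghjklmnprstvz]?[aeiou][bcdfghjklmnprstvz]{0,2}"
    return bool(re.fullmatch(pattern, w)) and not w.endswith("e")

def sentence_ok(sentence: str) -> bool:
    return all(is_decodable(w) for w in sentence.split())

print(sentence_ok("Bob sat in mud"))   # decodable: passes
print(sentence_ok("Bob was big"))      # rejected: "was" is irregular
```

A loop like this could sit between the model and the lesson plan, regenerating any sentence the filter rejects, which is exactly the checking step the raw model output lacked.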

CHAKRABARTI: I suppose you could argue that the more complex the learning, maybe the better the AI is at it, but we'll come back to that in a second. It's interesting that you made the example of 'was.'

And the S sound. Of course, in American English the S sometimes makes that more Z-like sound. But the key thing is the progression in which a child learns the different kinds of S sounds there are, right? And it stuck the 'was' in there, whereas for an early reader, 'sat' or something like that would probably have been better, right?

So it's understanding how the child learns, which, at least in that example, seems to be missing a little bit.

HOEL: And I don't think that the people making these AI-generated YouTube videos are using even the latest frontier models. They're using whatever the cheap, free versions are.

CHAKRABARTI: To that point, we heard them say all you need is an AI animation program and ChatGPT. Okay, so Erik, I love doing real-time experiments. It drives the staff crazy because they have no idea what I'm going to do. But I have a computer here, obviously, in the studio with us, and I've got ChatGPT.

This is 3.5, the free one. Alright, so let's write a script for a kids' animated YouTube short.

HOEL: (LAUGHS)

CHAKRABARTI: So I'm going to say, write a script for a toddler's YouTube video, should we say that?

HOEL: Sure.

CHAKRABARTI: About what? Let's do learning how to read. Okay, teach the child. Let's make it a little more specific, to be fair.

HOEL: Let's say, teach them the most common letter sounds.

CHAKRABARTI: Teach the child the most common letter sounds. Should we specify how long the script should be?

HOEL: Yeah, can you say, create simple sentences?

CHAKRABARTI: Okay.

HOEL: That use only the most common letter sounds.

CHAKRABARTI: Sentences. I'll say, create 15 sentences. Simple.

HOEL: Sentences that use only the most common letter sounds.

CHAKRABARTI: Letter sounds. Ready?

HOEL: Okay. Live experiment.

CHAKRABARTI: Okay. Oh, it's thinking. Ha! Title card! Let's learn letter sounds with Tim, Timmy. Cheerful music plays as the video begins. Oh my god. This is actually quite a long script, ChatGPT. Good thing I don't have to pay it. Then Timmy says, Hi friends.

I am Timmy. And today we're going to learn some super cool letter sounds together. Are you ready? Yay. Okay, he claps his hands, and he says, let's start with the letter 'A.' Can you say A? Oh, A, it says ah, so that's the letter sound.

HOEL: So already we asked it to create 15 sentences that only used the most common letter sounds.

It didn't create 15 sentences that used the most common letter sounds.

CHAKRABARTI: Oh, there's more, though. I scroll, I gotta scroll. Oh no, it didn't. It just says the letter sounds: C, cat. D, dog. I'm getting my child-YouTube-video voice on here. E, elephant. Oh, where are the, there's no sentences.

HOEL: Yeah. Okay. Yeah, exactly.

'Cause you asked it to do something slightly weird.

CHAKRABARTI: It doesn't seem weird.

HOEL: No. It doesn't seem weird to us. But if you think about what's in their training set, there's probably a huge number of scripts in the training set. But you asked for something very specific. And most of the time, these AIs are just not very good at out-of-distribution sampling.

So what you've been presented with is something that looks impressive. It's a lot of text, right? But it's not actually really grokking the fundamental thing of what you just asked it for.

CHAKRABARTI: Yeah.

HOEL: And now if you imagine the feedback between a confused child and the AI, right? You get this spiral of confusion.

And this is when it's interactive. Most of these scripts for these YouTube videos are exactly like this. They're just the most obvious, what would it be, 'A' for Apple, et cetera, et cetera, et cetera. Getting it to do anything beyond that is actually surprisingly difficult.

CHAKRABARTI: Wow. Okay, so Timmy goes on to say, Timmy, I'm actually calling this character like it really exists. Whee! That was fun! Let's do more! And then, you get down to P! Penguin! Claps! Q! Queen! Oh, see, Q is a really interesting one. That's really interesting.

HOEL: Queen.

CHAKRABARTI: Queen. Yeah. You're not actually teaching what the 'Q' needs to do.

Do you know, when I was in kindergarten, I took a little test, and I was asked by the test giver, Meghna, say a sound, say a word that starts with 'Q.' And I said cute, right?

HOEL: (LAUGHS) Clever.

CHAKRABARTI: But again, it's like, how does the brain work in terms of processing information? And here's the thing. This is why we've invited you, not to use ChatGPT to write new YouTube kids' content, Erik. But you've written extensively about how this kind of quickly generated AI content is everywhere, to the point where you say it's hurting our culture. It's hurting the way we consider, think of, ourselves as human beings.

Part II

CHAKRABARTI: Now, I should say, Erik, we got a lot of responses from listeners when we said we were going to talk about the ubiquity of AI-generated content, and how it's really impossible, or growing increasingly challenging, to find a space on the internet where there's not clearly this sort of synthetic content.

So let me just play a little bit of what some of our listeners said. This is Rachel Chu from Charleston, South Carolina, and she told us that recently she started to notice a lot of this stuff on Facebook.

RACHEL CHU: Just today I saw an AI-generated photo of a young girl, like a toddler, on the shore of a beach with an oxygen mask, lying in the water next to a birthday cake, and the caption said something like, "My birthday is today, hope I get birthday greetings."

And I guess this is to create comments and likes, and whoever is behind these is trying to play on users' emotions and make them think they can help somehow. I'm not sure if they're making money, but it does feel wrong when there are real people and real causes that do need attention.

CHAKRABARTI: So that鈥檚 Rachel from Charleston, South Carolina, and here鈥檚 Eli Hornstein, a scientist who works with plants and reptiles.

He left us a message talking about two recent Google searches that he did. In one, he asked whether there were vegetarian snakes. Oh boy. And in the second, he asked if there were edible bromeliads other than pineapples.

ELI HORNSTEIN: And the only results on the entire internet for those questions were AI generated lies, which are perfectly composed, sound factual, but use the names of real organisms in completely made-up ways, saying that the rainbow boa, which is a real snake, is vegetarian, which it is not. Or that a long list of plants are edible bromeliads when they鈥檙e neither edible nor bromeliads.

I really don't understand what goes on underneath the surface to produce this type of content there waiting for me, but I'm frankly quite alarmed by it.

CHAKRABARTI: And Eli, my apologies for mispronouncing bromeliads. Okay, so Erik, these are examples of how we already know that sometimes AI can be very factually challenged, let's put it that way.

But you take your analysis, or your criticism, even further, and you say that our entire culture is becoming affected, as you say, by AI's runoff. What do you mean by that?

HOEL: I think the reason I use analogies like runoff, and I think they are appropriate, is that if you look at the history of technological change, there have been these various problems that have cropped up, and among the most significant are issues around climate change, global warming, and also local destruction of environments, and it required a change in thinking in the 20th century.

Where we went from thinking of the environment as this big immutable thing that could not really be injured by us, because it's so big, so omnipresent, to something that's actually fragile and that we needed to protect, and we needed to enact regulations to protect it. I think the same realization has to happen in the 21st century for human culture.

It's been this big immutable thing. There is human culture, it's produced by humans, it's the water in which we swim, and it's so big that we don't think anything can really hurt it. At this point, it would not surprise me if 5% of the content online was being produced by AIs. You can go to any leading tweet, or I guess now post, and find the top reply will be something very obviously AI-written. Once you hear that cheery Wikipedia voice of ChatGPT, you will find it everywhere.

Even on my own blog, I've had to ban people for posting AI comments just for engagement. And it's because there's this economic incentive. We are a content-hungry economy, and the ability to create cheap content, even if it's not good, even if the quality is much lower than a human's, if it's orders of magnitude cheaper, there are just pure economic reasons to pursue that.

CHAKRABARTI: So then, let's get back. Let's look to understand your concern here. I want to have some shared definitions, just so that we're all talking about the same things when you talk about culture. Because of its ubiquity, it's an amorphous concept. And obviously there are millions of various cultures and many more microcultures, etc.

So how are you defining what culture in this case is?

HOEL: Everything you see online, everything you read, everything you watch. Let me give a brief example of this. Sports Illustrated, right? Culture, right? Okay. It's being produced by humans as content for humans, but they were recently caught using fake AI writers to create their articles, because there's a clear economic incentive for them to do that.

And there is a possibility that, whereas when I was born, everything I saw, everything I read, everything I watched, even the lowly labels at a grocery store, were thought over and created by human minds, it's very possible that I will die in a world where the vast majority of the things I read, see, or watch are not created by human minds.

They're created by unconscious artificial neural networks. And I think that is the real immediate risk of AI, because it's what we're already seeing. Again, you can go out on the internet and find all these numerous examples of that. And there's this creeping weirdness to the changes.

And let me give a little brief story about how this seeps even into the real world. As I said, my son Roman is turning three, so we're going to host a birthday party, and it's Curious George themed. So we got all these Curious George stickers that we ordered online, and we were about to put them all into the little packs for people to take to go, right?

And my wife is looking through them, and she comes to me, and she says, these are very strange, some of these. And some of them are fine, and some of them are like, Curious George holding an automatic rifle, Curious George without skin, Curious George OD'ing, bi-curious George holding a banana evocatively.

It's just obviously not-safe-for-work content. And if any human had been in the process of making these stickers, and I don't know, I have no evidence that they did use AI, or if it's just a company somewhere in China that doesn't know what Curious George is and doesn't really care.

And it's just pulling stuff. But the point is that when you create culture algorithmically, you begin to run into these scenarios where clearly there was no conscious thought behind it at all. And that's only going to continue. There's going to be this creeping alien-ness to our culture.

CHAKRABARTI: Yeah. Algorithms are the perfect rule followers, right?

So it's following the sets of rules given to it. But there's no discernment about whether what it's producing is good, bad, appropriate, that kind of thing. As long as it fits within the framework of the rules given.

HOEL: Yeah. If you've noticed an increasing high strangeness to some things, it often is either AI or algorithmically produced content.

And I think at this point, there's not a huge distinction, but soon most algorithmically produced content will be fully AI-generated.

CHAKRABARTI: Okay. Okay, so I want to get some more examples here, now that I have a better understanding of what you're talking about when you say culture. We started by giving the example of the increasing amount of AI-enhanced or AI-generated kids' content, right?

Say on YouTube. And other issues with that are that there's oftentimes no narrative cohesion to what kids see. Yes, you said it's getting things completely wrong sometimes, and that is troubling, but it also has great reach. On the other end of the spectrum, you say there's not only this AI-generated content, but there's a feedback loop that happens when more AI-generated content gets out there and then AI is learning from the content that it made.

And you say you can see that even in the scientific literature?

HOEL: There's this phenomenon, little discussed but very well known, called model collapse. And the funny thing is that, in a way, these companies and the rest of us have common cause, in that these companies don't want to train the latest version of their AI on their old AI's output. They don't like that. They don't want that. You might ask, wait a minute, why wouldn't you want that?

And it's because of this issue of model collapse. What happens is, when you take a model and you start feeding it its own data, it eventually collapses. Researchers have compared it to getting mad cow disease. Because of course, with mad cow disease, the cattle eat the brains of other cattle.

And in this case, they're eating their own dog food. And they begin to collapse in on themselves, and you trigger either something that looks like schizophrenia or just incredibly simplistic outputs, and so on. And you have to think about how strange that is, that the companies don't want their AI-generated products even in the training of their next-generation model, but it's fine for us to consume them, right?
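[Editor's note: The collapse dynamic Hoel describes can be illustrated with a toy experiment. In the sketch below, the "model" is just the empirical word distribution of a corpus, and each generation is trained only on the previous generation's samples. Rare words that miss one generation's output vanish for good, so diversity only ever shrinks. This is a cartoon with made-up numbers, not a simulation of real LLM training.]

```python
# Toy illustration of model collapse: each generation's "model" is the
# empirical distribution over a small vocabulary, and each new corpus
# is sampled only from the previous one. A word that fails to appear
# in one generation's sample can never come back, so the number of
# distinct words is monotonically non-increasing.

import random

random.seed(42)

vocab = [f"word{i}" for i in range(20)]
corpus = vocab * 5          # generation 0: every word appears equally

history = []                # distinct-word count per generation
for generation in range(100):
    history.append(len(set(corpus)))
    # "Train" on the current corpus (its empirical distribution) and
    # emit a same-sized corpus by sampling from it with replacement.
    corpus = random.choices(corpus, k=len(corpus))

history.append(len(set(corpus)))
print("distinct words, every 10th generation:", history[::10])
```

Run it and the distinct-word count drifts downward generation after generation: the same one-way loss of diversity, in miniature, that makes companies avoid training a model on its own output.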

There's this strange hypocrisy baked into the whole thing. And that's because these systems are trained on our data, and they are very impressive in some ways. As much as I sometimes point out their simplistic reasoning flaws, at other times I play around with them.

And I think, this is world-changing. This is crazy. I can't believe that I'm living in a world in which this sort of thing is possible. So it flips to me like a Necker cube, like an optical illusion, where I can see it one way, and then sometimes I see it the other way. And I do think that, in the end, we're going to have to start making some decisions about to what degree we put limits on this.

They themselves are very protective of their own models. We ourselves are neural networks, biological neural networks. Should we be concerned if this stops being just 5% of culture, just funny Curious George stickers, and starts being most of the content that you read or see, or a huge amount of what's posted online? That seems to me to be the immediate concern.

Other people are very concerned about AI, but they're often concerned about things more like existential risk. These more sort of sci-fi scenarios.

CHAKRABARTI: Oh, the world coming to an end.

HOEL: The world coming to an end.

CHAKRABARTI: Terminator scenario.

HOEL: Yeah, exactly. The Skynet scenario. But there is an effect that's happening right now, which is that the internet is getting filled up with junk because of the economics of it.

Back in 1968, Garrett Hardin wrote a very famous article in Science that was instrumental for the environmental movement, and in it he coined the term "the tragedy of the commons." And it got people to think that way, that there was this commons that needed to be protected, that you couldn't just say, a chemical plant wants to make money, and so they can just go pollute this river. No, you actually can't do that.

You're damaging the commons in a particular way. And I think human culture is a commons. Even Curious George stickers are a commons, right? I expect my Curious George stickers to be okay. And this AI creates this fundamental mistrust, and I don't think we should necessarily throw out the entire technology or anything, but we need to start putting in the same sort of pressure and regulatory guidelines that we did for actual physical pollution.

CHAKRABARTI: We're going to talk about that in detail a little bit later in the show, Erik, but I want to lean on your academic expertise as a neuroscientist, right? Because there's our common experience of culture, which you're already saying we should be thinking about, or concerned about, in terms of AI's impact on it.

But I'm also wondering about just how we as human beings, how our brains, absorb this information or take in that cultural feedback. AI at scale hasn't been around long enough, I would say, for any sort of real robust study on this question. So I want to put that out there.

But in your Times piece you actually quote Einstein, right? Let me see if I can find it here. Oh yeah. Einstein once said: If you want your children to be intelligent, read them fairy tales. If you want them to be more intelligent, read them more fairy tales.

So why'd you use that quote?

HOEL: That's such a great question. What's funny is that this connects to an issue that I've been fascinated with ever since I was young. You mentioned in the introduction that I grew up in my mother's independent bookstore, the Jabberwocky, which is here on the East Coast.

And so I was always surrounded by fictions. And I was also interested in science and neuroscience. And at some point, I began thinking, what is the purpose of these things, right? One could imagine a race of aliens who are literalists, who are like, why do you care about Harry Potter?

Everything about Harry Potter is a lie, right? Everything that happens in Harry Potter is a lie. And yet you people seem to care massively. And the common explanation, which is normally given by evolutionary psychologists, and this would be something that Steven Pinker would probably say, is that the fictions of our culture, the stories of our culture, are just super stimuli, and we just like them for the same reason we like cheesecake.

In fact, I think Steven Pinker once said that music was auditory cheesecake. And I always thought, that can't be right. And one way in which I think that's not right is that if you think about humans as a continuously learning neural network, we need to sample things that are outside of our day-to-day distribution in order to generalize our learning.

And so this is now getting more theoretical. So I introduced this hypothesis called the overfitted brain hypothesis. And the idea is that during your day-to-day learning, you're becoming very statistically fitted to what you're doing, and you need something to shake you out of that. Probably that's one of the reasons why dreams initially evolved, but also one of the reasons why we tell stories, and we tell fictions.

We talk about things that never happened and couldn't happen. And these things probably are cognitively important to us. So it's not just cheesecake. It's not just some super stimulus that we're attracted to because there are heightened emotions or because there's lots of action. We're actually getting, maybe, something fundamental out of human culture, for our brains themselves, for our learning.

And then if you take that view of things, and you begin asking, okay, so what are the effects going to be of filling up our culture with text that's the most obvious continuation, with all these sorts of properties of these artificial neural networks? The answer is, we might start damaging this really fundamental thing that I think humans have relied on, which is having an enriching culture that allows you to generalize your day-to-day learning.

CHAKRABARTI: Did I hear correctly? Did you say brain micronutrients? Or is my brain remembering that from your article?

HOEL: That's from the article. And yeah, I think stories contain within them cognitive micronutrients, you could call them something like that. And I'll caution listeners that if you think neuroscience is a set of well-established facts, I have unfortunate news for you. Neuroscience is like a bunch of competing narratives and hypotheses, and this is one of them. But it does say that we are entering this unknown space, where we don't know exactly what the risks are of letting go of the control of our own culture. And I think that's a microcosm of the problem with AI generally, which is this problem of, do humans maintain agency?

Yeah, if we don't maintain agency over the content that we create, right? What are our chances of maintaining agency in the long run?


Copyright 2024 NPR
