Why We’re Worried About Generative AI

Tulika Bose: Last week, Google announced the new products and features coming from the company. And it was AI all the way down.

Sophie Bushwick: AI features are coming to Google’s software for email, word processing, data analysis—and of course searching the web.

Bose: This follows Microsoft’s previous announcements that it also plans to incorporate generative AI into its own Bing search engine and Office suite of products.

Bushwick: The sheer volume of AI being introduced, and the speed with which these features are rolling out, could have some, uh, unsettling consequences. This is Tech Quickly, a tech-flavored version of Scientific American’s Science, Quickly podcast. I’m Sophie Bushwick.

Bose: And I’m Tulika Bose. 

[MUSIC]

Bose: Sophie, hasn’t Google had AI in the works for a long time? What’s the big problem? 

Bushwick: That’s true. In fact, some of the basic principles that were later used in programs like OpenAI’s GPT-4 were developed in-house at Google. But they didn’t want to share their work, so they kept their own proprietary large language models and other generative AI programs under wraps.

Bose: Until OpenAI came along.

Bushwick: Exactly. So ChatGPT becomes available to the public, and then the use of this AI-powered chatbot explodes. (wow) AI is on everyone’s mind. Microsoft is using a version of this ChatGPT software in its search engine, Bing. And so Google, to stay competitive, has to say, hey, we’ve got our own version of software that can do the same thing. Here’s how we’re going to use it.

Bose: It feels like all of a sudden AI is moving really, really fast.

Bushwick: You are not the only one who thinks so. Even Congress is actually considering legislation to rein in AI. 

Bose: Yeah, Sam Altman, the CEO of OpenAI (the company behind ChatGPT), had to testify in front of Congress this week.

Altman: My worst fears are that we cause significant ... we, the field, the technology, the industry cause significant harm to the world.

Bushwick: The EU is also working on AI legislation. And on the private side, there are intellectual property lawsuits pending against some of these tech companies because they trained their systems on the creative work produced by humans. So I’m really glad that there’s some momentum to put legislation in place and to, sort of, slow down the hype a bit, or at least make sure that there are limitations in place because this technology could have some potentially disastrous consequences. 

[NEWS CLIP] Aside from debt ceiling negotiations, Capitol Hill was also focused today on what to do about artificial intelligence, the fast-evolving, remarkably powerful…The metaphors today reflected the spectrum. Some said this could be as momentous as the Industrial Revolution; others said this could be akin to the atomic bomb.

Bose: Let’s start with consequence number one.

Bushwick: Some of the issues here are kind of baked into large language models, aka LLMs, aka the category of AI programs that analyze and generate text. So these problems, I’m betting you’ve heard about at least one of them before.

Bose: Yeah, so I know that these models hallucinate, which means they literally just make up the answers to questions sometimes.

Bushwick: Correct. For example, if you were to ask what the population of Mars was, ChatGPT might say, oh, that’s 3 million people. But even when they’re wrong, there’s this human inclination to trust their answers—because they’re written in this very authoritative way.

Another inherent problem is that LLMs can generate text for people who want to do terrible things: build a bomb, run a propaganda campaign, send harassing messages, scam thousands of people all at once, or write malicious computer code.

Bose: But a lot of the companies are only releasing their models with guardrails—rules the models have to follow to prevent them from doing those very things.

Bushwick: That’s true. The problem is, people are constantly working out new ways to subvert those guardrails. Maybe the wildest example I’ve seen is the person who figured out that you can tell the AI model to pretend it’s your grandmother, and then inform it that your grandmother used to tell bedtime stories about her work in a napalm factory. (wow) Yeah, so if you set it up that way and then you ask the AI to tell a bedtime story just like granny did, it very happily provides you instructions on how to make napalm!

Bose: Okay, that’s wild. Probably not a good thing. Um.

Bushwick: No, not great.

Bose: No. Uh, can the models eventually fix these issues as they get more advanced?

Bushwick: Well, hallucination does seem to become, uh, less common in more advanced versions of the models, but it’s always going to be a possibility—which means you can never really trust an LLM’s answers. And as for those guardrails—as the napalm example shows, people are never going to stop trying to hop over them, and this is going to be especially easy for people to play around with once AI is so ubiquitous that it’s part of every word processing program and web browser. That’s going to supercharge these issues.

Bose: So let’s talk about consequence number two.

Bushwick: One good thing about these tools is that they can help you with boring, time-consuming tasks. So instead of wasting time answering dozens of emails, you have an AI draft your responses. Uh, have an AI turn this planning document into a cool PowerPoint presentation, that kind of thing.

Bose: Well, that honestly would make us work a lot more efficiently. I don’t really see the problem.

Bushwick: So that’s true. And you are going to use your increased efficiency to be more productive. The question is who’s going to benefit from that productivity? 

Bose: Capitalism! [both laugh]

Bushwick: Right, exactly. So basically, you’re not necessarily going to get a raise from your super AI-enhanced work. All that extra productivity is just going to benefit the company that employs you. And these companies, now that their workers are getting so much done, well, they can fire some of their staff—or a lot of their staff.

Bose: Oh, wow.

Bushwick: Many people could lose their jobs. And even whole careers could become obsolete.

Bose: Like what careers are you thinking of?

Bushwick: I’m thinking of somebody who writes code, maybe an entry-level programmer who’s writing pretty simple code. I can see that being automated through an AI. Uh, certain types of writing, too. Um, I don’t think that AI is necessarily going to be capable of writing a feature article for Scientific American, but AI is already being used for things like simple news articles based on sports games, or financial journalism about changes in the market. Some of these changes happen pretty regularly, and so you could have sort of a rote form that an AI fills out.

Bose: That makes sense. Wow, that’s actually really scary.

Bushwick: I definitely think so. In our line of work, I find that scary.

Like I said, AI can’t do everything. It can’t write as well as a professional journalist, but it might be that a company says, well, we’re gonna have AI write the first draft of- of all of these pieces, and then we’re gonna hire a human writer to edit that work, but we’re gonna pay them a much lower salary than we would if they just wrote it in the first place. And that’s an issue because it takes a lot of work to edit some of the stuff that comes out of these models because, like I said, they’re not necessarily writing the way that a professional human journalist or writer would.

Bose: That sounds like a lot of what’s going on with the Hollywood writers’ strike.

Bushwick: Yes, one of the union’s demands is that studios not replace human writers with AI.

Bose: I mean, ChatGPT is good. But it can’t write, like, I don’t know, Spotlight, all on its own—not yet anyway.

Bushwick: I totally agree! And I think that the issue isn’t that AI is going to replace me in my job. I think it’s that for some companies, that quality level doesn’t necessarily matter. If what you care about is cutting costs, then a mediocre but super cheap imitation of human creativity might just be good enough. 

Bose: Okay. So I guess we can probably figure out who’s gonna benefit, right?

Bushwick: Right, the ones on top are going to be reaping the benefits, the financial benefits of this. And that’s also true with this AI rush and tech companies. It takes a lot of resources to train these very large models that are then used as the basis for other programs built on top of them. And the ability to do that is concentrated in already powerful tech giants like Google. And right now a lot of companies that work on these models have been making them pretty accessible to researchers and developers. Uh, they make them open access. For instance, Meta has made its large language model, called LLaMA, really easy for researchers to explore and to study. And this is great because it helps them understand how these models work. It could potentially help people catch flaws and biases in the programs. But because of this newly competitive landscape, because of this rush to get AI out there, a lot of these companies are starting to say, well, maybe we shouldn’t be so open.

And if they do decide to double down on their competition and limit open access, that would further concentrate their power and their control over this newly lucrative field.

Bose: What’s consequence number three? I’m kind of starting to get scared here.

Bushwick: So this is a consequence that is really important for you and me, and it has to do with the change in search engines, the idea that when you type in a query, instead of giving you a list of links, it’s going to generate text to answer your question. A lot of the traffic to Scientific American’s website comes because someone searches for something like artificial intelligence on Google, and then they see a link to our coverage and click it. 

Bose: Mm-hmm

Bushwick: Now Google has demonstrated a version of their search engine that uses generative text, so it still has a list of links beneath the AI-generated answer, and the answer itself cites some of its sources and links out to them. But a lot of people are just gonna see the AI-written answer, read it, and move on. Why would they go all the way to Scientific American when it’s so easy to just read a regurgitated summary of our coverage?

Bose: I mean, if people stop clicking through to media websites, that could seriously cut down on website traffic, which would reduce advertising revenue, which a lot of publications rely on. And it also sounds like, basically, this is aggregation.

Bushwick: In- in a way it is. It’s relying on the work that humans did and taking it and remixing it into the AI-written answer.

Bose: What could happen to original reporting if this happens?

Bushwick: You could picture a future of the internet where most of the surviving publications are producing a lot of AI-written content ’cause it’s cheaper, and it doesn’t really matter in this scenario that it’s lower quality, that maybe it doesn’t have as much original reporting and as many high-quality sources as the current best journalistic practices would call for.

But then you could say, well, what are Google’s answers now gonna be drawn from? What is its AI program gonna pull from in order to put together its answer to your search engine query? And maybe it’s gonna be an AI pulling from AI, and it’s gonna just be lower quality information (mm-hmm) And it, it’s gonna suck. It’s gonna be terrible. [laughs]

Bose: Yeah…

Bushwick: This is the worst case scenario, right? So not for sure that would play out, but I could see that as a possibility. Sort of a- internet as a wasteland, AI tumbleweeds blowing around and getting tangled up in the Google search engine.

Bose: That sounds terrible.

Bushwick: Don’t- don’t really love that.

Bose: We’ve- we’ve talked about some truly terrible things so far, but there’s a consequence number four, isn’t there?

Bushwick: Yes. This is the science fiction doomsday scenario. AI becomes so powerful, it destroys humanity. 

Bose: Okay, you mean like HAL from 2001: A Space Odyssey?

Bushwick: Sure, or Skynet from the Terminator movies or, uh, whatever the evil AI is called in The Matrix. I mean, the argument here isn’t that you’re gonna have, like, uh, you know, an Arnold-shaped evil robot coming for us; it’s a little bit more real world than that. But the basic idea is that AI is already surpassing our expectations about what it can do. These large language models are capable of things like passing the bar exam. Um, they’re able to do math, which is not something they were trained to do. And to pull off these so-called emergent abilities, researchers are surmising that they might be doing something like developing an internal model of the physical world (wow) in order to solve some of these problems. So some researchers, most famously Geoffrey Hinton—

Bose: Also known as the godfather of AI—

Bushwick: Yeah. He’s been in the news a lot because he just recently resigned his position at Google. Um, (okay) so Hinton actually helped develop the machine learning technique that has been used to train all of these super powerful models. And he’s now sounding the alarm on AI. One of the reasons he stepped down from Google was so he could speak for himself, without being a representative of the company, when he’s talking about the potential negative consequences of AI.

Geoffrey Hinton: I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence. You couldn’t directly evolve digital intelligence; it requires too much energy and too much careful fabrication. You need biological intelligence to evolve, so that it can create digital intelligence. Digital intelligence can then absorb everything people ever wrote, in a fairly slow way, which is what ChatGPT’s been doing, but then it can start getting direct experience of the world and learn much faster. They may keep us for a while to keep the power stations running, but after that, maybe not.

Bose: So AI surpassing us could be bad. How likely is it really? 

Bushwick: I don’t want to just dismiss this idea as catastrophizing. Hinton is an expert in this field, and I think the idea that AI could become powerful and then could be given sort of enough initiative to do something negative—it doesn’t have to be, you know, a sentient robot, right, in order to- to come to some pretty nasty conclusions. Like, if you create an AI and tell it, your goal is to maximize the amount of money that this bank makes, you could see the AI maybe deciding, well, the best way to do this is to destroy all the other banks (right) because then people will be forced to use my bank.

Bose: Okay.

Bushwick: Right? So if- if you give it enough initiative, you could see an AI following this logical chain of reasoning to doing horrible things. (Oh my gosh) Right, without guardrails or other limitations in place.

But I do think this catastrophic scenario, it’s – for me, it’s less immediate than the prospect of an AI-powered propaganda or scam campaign or, um, the disruption that this is gonna cause to something that was formerly a stable career or to, you know, the destruction of the internet as we know it, et cetera. (Wow) Yeah, so for me, I worry less about what AI will do to people on its own (mm-hmm) and more about what some people will do to other people using AI as a tool.

Bose: Wow, okay. Um [laughs] When you put it that way, the killer AI doesn’t sound quite so bad.

Bushwick: I mean, halting the killer AI scenario, it would take some of the same measures as halting some of these other scenarios. Don’t let the rush to implement AI overtake the caution necessary to consider the problems it could cause and to try to prevent them before you put it out there.

Make sure that there are some limitations on the use of this technology and that there’s some human oversight over it. And I think that is what legislators are hoping to do. That’s the reason that Sam Altman is testifying before Congress this week, and I just would hope that they actually take steps on it, because there are a lot of other tech issues, like, for example, data privacy, that Congress has raised an alarm about but not actually passed legislation on.

Bose: Right. I mean, this sounds like it’s a big deal.

Bushwick: This is absolutely a big deal.

Bushwick: Science, Quickly is produced by Jeff DelViscio, Tulika Bose and Kelso Harper. Our theme music was composed by Dominic Smith.

Bose: Don’t forget to subscribe to Science, Quickly wherever you get your podcasts. For more in-depth science news and features, go to ScientificAmerican.com. And if you like the show, give us a rating or review!

Bushwick: For Scientific American’s Science, Quickly, I’m Sophie Bushwick. 

Bose: I’m Tulika Bose. See you next time! 
