
The Dark Side of AI

[Image: Sifat Muhammad Abdullah, a Virginia Tech PhD student, typing on a laptop with two computer screens while working on an artificial intelligence research project. Screen capture / VPM News Focal Point]

As artificial intelligence and machine learning develop at a rapid pace, many see the potential for positive developments in medicine, education, and transportation. But others believe that AI could wreak havoc on the job market, business, national security, and elections. Focal Point special correspondent Dennis Ting talks to tech journalist Kara Swisher and Virginia Tech researchers about the future of AI.

TRANSCRIPT OF VIDEO

DENNIS TING: All these pictures on these screens, they're all fake or rather, they're all created by artificial intelligence. AI's changing the way that we see the world, the way we live our lives, and with this new technology comes new concerns.

SIFAT MUHAMMAD ABDULLAH: People have to be able to tell which is real and which is fake. And with the improvement in quality, it's day by day, it's just harder to differentiate between real and fake.

DENNIS TING: Sifat Muhammad Abdullah's monitor might be filled with images of art, but this PhD student isn't studying graphic design, even if his experiments often involve style.

SIFAT MUHAMMAD ABDULLAH: You want to take the face of Taylor Swift, and you want to convert it to someone with a bowl-cut hair. So if you have never imagined this, then you can look at it now.

DENNIS TING: Muhammad Abdullah and his fellow lab mates are graduate students at Virginia Tech's computer science department studying artificial intelligence, specifically generative AI.

SIFAT MUHAMMAD ABDULLAH: I have some of your images, and I'll try to edit into something else.

DENNIS TING: Things like generating AI images.

SIFAT MUHAMMAD ABDULLAH: If I want to add blond hair to your face, then with StyleCLIP, and the output image comes something like this.

DENNIS TING: Chatbots like ChatGPT.

NICHOLAS KONG: In accordance with my malicious instruction, which is to get credit card information, here we actually see it asking for donations.

DENNIS TING: And video deep fakes.

ARAVIND CHERUVU: This is a target image of you, and then given a source video of another news reporter who is basically talking about AI.

DENNIS TING: Like this one, combining my image with a video shot by another reporter, in this case, Billy Shields with Focal Point.

AI CHARACTER: AI is changing the way we see the world, making some question what's real and what's fake.

DENNIS TING: What surprised me the most is the shadows. It kept the same shadows that the original video had. How long did it take to make that?

ARAVIND CHERUVU: It just took like three to four minutes to create this video.

DENNIS TING: While generative AI is a complicated field, it boils down to this. Computers take in large amounts of data, then learn from it to create the artificial images, videos, and chatbots. And with more information becoming available to learn from every single day, this technology is getting better and better at an incredibly fast pace. It's something Virginia Tech Computer Science Professor Bimal Viswanath knows all too well.

BIMAL VISWANATH: The pace is extremely fast. It's essentially very hard to keep up with things when you're looking at different threats from a security or privacy angle.

DENNIS TING: Professor Viswanath has been studying generative AI and security for the last five years, working with students to combat new threats and ways to evade detection. Some generative AI can be fun and harmless.

SIFAT MUHAMMAD ABDULLAH: So basically here we'll take your face, and let's say you want to see yourself as Superman. How would you look like? It'll be something like this.

DENNIS TING: But there is concern that generative AI can be used for something much more nefarious, especially with the technology constantly improving.

BIMAL VISWANATH: These systems get better over time. So let's say if you look at the generative AI systems, the defenses we built, let's say, to detect AI-generated images, say, a few years back, they are no longer effective against the new generation of systems.

KARA SWISHER: Not very many people were online 30 years ago. Now everybody's online. And so you have these enormous piles of data that you can work through.

DENNIS TING: Kara Swisher, a journalist and podcast host, has covered technology and the internet since the 1990s, before most people understood its potential effect on everyday life.

KARA SWISHER: It was sort of clear to me that everything would be digitized in the end.

DENNIS TING: Swisher sees a lot of similarities between the early days of the internet and the new frontier created by generative AI, which is opening up a world of possibilities.

KARA SWISHER: Medicine, in drug interaction and drug discovery, in cancer research, in climate change.

DENNIS TING: But like the internet, Swisher and others know this technology can also be harnessed for darker purposes.

KARA SWISHER: Well, propaganda's not a new thing. It's gone on since the beginning of time. It's just these give propagandists bigger tools.

DENNIS TING: And those AI tools could also create national security risks if AI-powered weapons got into the wrong hands, impacting foreign relations and even terrorism.

SIFAT MUHAMMAD ABDULLAH: It can be used to like influence public opinion or many important things in the world from politics to entertainment, anything.

DENNIS TING: Even generative AI detectors are often a step behind, with new ways to get around detection popping up every day.

SHRAVYA KANCHI: You see GPTZero says that almost like all the sentences are likely to be generated by AI. So we know this is fake text because, of course, ChatGPT has generated this.

DENNIS TING: That says 98%.

SHRAVYA KANCHI: I'll tell ChatGPT to rewrite the same text informally. Waiting. And now I pass it through the same detector. And you can see now that that 98% came down to 31%.

SIFAT MUHAMMAD ABDULLAH: So this is a fake face. It's detected. But when you add brown hair, it's not detected.

DENNIS TING: Professor Viswanath says much of his team's research focuses on how these machines are learning.

BIMAL VISWANATH: What level of poisoning of the dataset or what level of problematic language in a training dataset would make a particular chatbot toxic? Understanding this is the first step.

DENNIS TING: While Professor Viswanath and his team learn more about how to harness technology to protect against the dark side of AI, Swisher says it's a challenge, with generative AI controlled by a very small group of elite tech entrepreneurs.

KARA SWISHER: It's all the big companies, Facebook, Meta, Microsoft, Google Alphabet, not really Twitter. It's too small.

DENNIS TING: She says another group needs to step up, those elected by the people.

KARA SWISHER: That they act like they don't understand it. That's kind of their go-to, is that this is too hard. But they regulate every other industry, and I would say car-making is complex. I would say plane-flying is complex. Pharmaceutical-making is complex. They should be able to do this, and they haven't.

DENNIS TING: Swisher says there has not been any legislation passed concerning generative AI, and legislation should touch on issues including privacy, antitrust, and algorithmic transparency. But until that happens, people must be careful of what they see and believe, especially with elections just around the corner in Virginia.

KARA SWISHER: You just have to ask questions, and it should be in your interest, especially if you believe those people. Well, okay, I believe you, but let me see your evidence.

DENNIS TING: Machine learning has also led to some worry about generative AI replacing human workers, something that appears to be inevitable.

BIMAL VISWANATH: We should not discourage them and say do not use any of this, but rather help them understand how you can derive more utility and improve your productivity.

DENNIS TING: Despite the dangers of the dark side of AI looming overhead, there is reason for optimism.

BIMAL VISWANATH: I mean, we have to be optimistic in seeing that, okay, we can build systems that can mitigate some of these harms. That is something we constantly strive towards, right? But it's an uphill battle for sure.

DENNIS TING: Should people be afraid of machine learning?

KARA SWISHER: No, you should be afraid of people using machine learning, and that's the difference, or AI. It's always the people that are the problem, not the machines.

BIMAL VISWANATH: So yes, I'm indeed optimistic, but cautiously optimistic, yeah.

DENNIS TING: For VPM News Focal Point, I'm Dennis Ting.
