Artificial intelligence “can be a weapon, but it's a tool” - an interview with tech journalist Kara Swisher
VPM News Focal Point special correspondent Dennis Ting spoke with journalist Kara Swisher about artificial intelligence, machine learning, and their impact on society. Swisher has been reporting on the tech industry since the 1990s, covering the rise of the Internet and many Silicon Valley companies. She also hosts two podcasts, "On with Kara Swisher" and "Pivot," and appears as a contributor on several media networks.
Swisher discusses how artificial intelligence is changing the world, the potential dangers of exploitation and the need for safeguards. She said Congress has not passed any legislation to regulate generative AI and warned that we need regulations to address privacy, antitrust and algorithmic transparency. Until that happens, Swisher said, people must be careful about what they see and believe, especially with upcoming elections.
NOTE: This interview transcript has been edited for clarity.
DENNIS TING: How did you get into tech journalism?
KARA SWISHER: I spent some time at Duke University studying misinformation and propaganda; it was always my area of study. And you could start to see, as the World Wide Web came into being, how manipulative it could be, how bad it could be for people, and also how helpful. And so, I did things like downloading books and all kinds of experiments. And it was sort of clear to me that, in the end, everything that could be digitized would be digitized, and it would change every industry.
Back then, did you foresee what we see now, with AI, social media, all of that being as big a factor as it is today?
Well, propaganda is not a new thing. It's gone on since the beginning of time. It's just that these give propagandists bigger tools. And so, I was always quite aware of that part, because it was my area of study in college. And, you know, look, Hitler didn't need Instagram. Mussolini didn't need Twitter. Had they had them, or other tools of social media to target people, it would have been devastating in a lot of ways. And of course, it is devastating right now, given all that we've seen.
Should people be afraid of machine learning?
No, you should be afraid of the people using machine learning, or AI. That's the difference. It's always the people that are the problem, not the machines. And anything you put into it is what you get out of it. I used to call Google a “database of human intentions.” And that's what it is. It's human knowledge, human intentions, human thoughts, and it depends on what you put into it. You know, you can put in bad data, or data that's skewed in some way, and then it spits out something else. But it's not anything else but us, reflecting ourselves back, just faster. And eventually, it starts to see patterns that humans can't, or pull from knowledge that the human brain just isn't big enough to hold, like medical knowledge. That's an area of great promise. They can do protein folding, they can do all kinds of things, because humans can't do that with their small, tiny brains, compared to computers.
It’s getting harder to detect what's machine-generated and what's not. Is that a concern going forward?
Everything is machine-generated. This is a digital world we live in.
Crap in, crap out. It's really not that complicated. And so, if the data say, on criminals, that more Black people are arrested, does that mean they're more criminal? Or that the system is skewed and racist? Well, I'd say the latter. There's more record of that, and therefore they seem to the computer to be more prone to crimes, and therefore the community will say, “go look over there more,” and then you'll find more people committing the crimes. And over here, this pile of white people is attacking the Capitol.
How do you detect what's real and what's not?
Provenance, where it comes from. What's going in, you know, how do you detect anything? How do you detect anybody lying, right? You have provenance, you follow it, and that's one of the big issues: where does it come from? What are they using in these large language models, these LLMs? And where does it come from? That has a lot to do with their rights to use it, because of copyright. But the problem is, it can be deeply manipulated and very hard to follow for the average citizen.
Do you see AI, machine learning, this area, having an impact on human creativity?
Humans are very creative. That's the greatest part of it. These machines are not creative. They can take other people's things and mash them together. You give them the prompts. I want a picture of, you know, Dolly Parton dancing, and then it'll pull from what exists. So, the creativity started with humans; the computer just has the data and is bringing it out again.
You know, human beings are sort of like a third-world nation whose minerals and jewels are getting taken. That's really what it's like, you know. They take them away and do things with them, and the question is, who owns them? Who has provenance over them? Where do they come from? And how did you use it? And so if you take someone's copyrighted material, you should be paying for it. It's not that difficult. Tech people always act like it's difficult, but it's not particularly difficult.
When you talk about AI, is it hard to know the algorithms that go in? To my knowledge, I don't think there's any legislation or requirement that says -
No, there's no algorithmic transparency now, there's no legislation on anything, really. You can ask about all of them; there's no legislation.
Why do you think that is?
The politicians are incompetent, I don't know. They've been bought and sold by lobbyists. They act like they don't understand it. That's kind of their go-to, that this is too hard. But they regulate every other industry. And I would say car making is complex, plane flying is complex, pharmaceutical making is complex. They should be able to do this, and they haven't. I think there's been a real celebration of entrepreneurs in a way that's idolatry, especially because they're the richest people on earth. So you tend not to. It's not unusual. This happened with the robber barons, and then they got them under control. Eventually.
Do you see potential legislation coming in the near future?
In 25 years of the internet, there have been exactly zero pieces of legislation passed. So, I would hope that maybe they would pay attention. They're doing a lot of meetings, holding a lot of get-togethers, you know, bringing in the powerful people, talking to them. All citizenry should have a role in it. We paid for the Internet. They didn't. They benefited from it; we paid for it. So elected officials should have a very big role in this, and they haven't passed any privacy legislation, no data transparency legislation, no antitrust legislation, no algorithmic transparency. They haven't passed anything. And in fact, the single rule that exists tells us we can't sue them for their bad behavior.
That’s Section 230?
I know you're not a politician. But if you were to think of legislation that would be able to, you know, check a lot of these powers. What would that look like?
It's multifaceted. It's not a single piece of legislation. Cars are not regulated by a single piece of legislation; there's not "the car act," you know. I mean, there's a lot of safety issues. I would absolutely have a national privacy bill that has teeth. I'd have a data transparency bill, an algorithmic transparency bill. I would overhaul antitrust, which hasn't been overhauled in, I don't know, a century. I would be pushing research and innovation for small companies. It's a package of 10 to 12 really important pieces of legislation.
You can pretty much create deepfake videos of politicians and have them say the wrong thing, and you might start a war or some international conflict. What impact do you see this having on democracy?
It's already here. Look, Photoshop has been here and caused problems, but people figured that stuff out in a lot of ways. I think the issue is that it's now even worse. I mean, it's already happened. Like, I don't know if you've gotten on Facebook anytime recently, but there's all kinds of crap on there. It's been working its way into the American psyche for 25 years. And very bad actors, including domestic ones, are taking advantage of it because they want power. That's really what's happening: they want power.
So do you think these deepfakes are kind of the equivalent of the memes that we saw on Facebook during the 2016 election cycle?
It's just another thing, another terrible thing we're gonna have to contend with. It's the ability to target a million different people with a million different messages, each one aimed at that person instead of a spray-and-pray kind of thing. It's a real problem.
For the people who are concerned about AI taking their job, what should they do? What can they do?
Nothing. I think, you know, it's interesting, because white-collar jobs are now what's being debated, and therefore white-collar people are in a panic, right? This has been happening all across, whether it's manufacturing or farming or anything else. Delivery. Automation has been here for a long, long time. And so, the question is, what do we do about it and what new jobs can we create from it? Some jobs should be done by a computer, not by people. It's stupid that we continue to insist on it, including long-haul trucking, for example. We can say, "oh, no, the jobs are being lost," or we can think of new jobs to create from that: the shorter hauls, trucking into cities, maintaining the fleets, all kinds of things. But people shouldn't be doing a lot of the things they're doing, you know. You think about a lot of the wasted energy of young associates at law firms. When a computer can do it, why? To keep their jobs? That makes no sense. Then figure out what the new jobs are.
What ways are there to protect people, especially kids, people who are more vulnerable and might not understand the difference, from that toxicity that, to be fair, has existed since the internet has been around?
It's existed long before that. You know, media education, fact-based education, people learning critical thinking. A lot is going to come at these people, lots of people, and so you have to have critical thinkers, and that has to do with our education system. Some of it is very hard to discern. And so, if you swallow everything whole, I don't quite know what you can do about that, except train people to understand what they're consuming. Just like food: what's in that food? What's it doing to your body? Is it causing cancer? Is it bad for you? Is there too much sugar? It's not unlike that. It's actually exactly like that.
But who controls AI and who should be responsible for what happens?
Right now it's controlled by giant companies. It's not controlled by our federal officials, or people we elected. So right now, a lot of it is being controlled and decided on by a small group of people, the richest people on earth, with an interest in keeping their power. So, there we have it. It's all the big companies: Facebook (Meta), Microsoft, Google (Alphabet), not really Twitter, it's too small. You know, all the big companies, Amazon, Apple, the same ones who own everything else.
How do you break into it to create change?
You legislate. It's called legislation; it's called doing your job as legislators. Also, citizens have to demand it of legislators; otherwise, they'll be telling you how to think about everything. And they may be good or they may be bad, that's the problem. And, again: unelected, unaccountable, all-powerful. Sounds great.
It sounds like legislation's what's needed, but it's been 25 years and nothing's happened. Should people be optimistic that things will change, or that we can harness AI for the better and mitigate some of these dangers?
I'm optimistic because it happened with trains. Yes, we have an ability to do it. We've done it before, again and again and again, when absolute power controls everything. And so, we certainly can do it again. It's just the will of the people and of elected officials to do something about it. Again, elected officials are really problematic, but they were elected, right? You can crap all you want on government, but they were elected. So I'm going to start with them. No one elected any of these tech people, and they have unimaginable power over your life. And you decide, because it's convenient that you get a map or you're able to get a dating service or whatever, that this is the trade you want to make? You're a cheap date for these people.
The government created the internet, just so you know. The government created rockets. Why do Jeff Bezos and Elon Musk control it all now? We created that, we made that possible. The privatization of all the things that we share in common is really quite disturbing. It's disturbing to me; it may not be disturbing to other people, but I find it disturbing.
There's all kinds of things in medicine, in drug interaction, in drug discovery, in cancer research, in climate change, all kinds of things that we can use these miraculous technologies for. But it will be determined by people who may have interests like "I don't want to deal with that," you know. Or child poverty, oh my God, everything. Every problem we have, this could help. Not solve it, but help solve it, with better information and ideas of how to fix it. But again, it's in the hands of a small group of people who have their own self-interest at heart.
We have to think about what we’d want to use this for and it should be what’s good for the greater good of humanity versus a small group of people.
Is there one area that you are personally particularly excited about?
Medicine, medicine and all kinds of stuff like drug discovery, drug interaction, cancer research. That's just one disease; there's many. All kinds of research into diseases, understanding patterns. The kinds of things that take time, making that time shorter. And AVs, I think, not electric vehicles, but autonomous vehicles; transportation could be transformed in this way. Saving fossil fuel, getting rid of fossil fuel, you know. Development of nuclear energy, or other energy sources, another area. So many things. And education, those are all really important. That's great stuff. That's really great stuff. But it has to be, again, someone thinking about the whole world versus a small slice of it.
The 10 richest people in the world are tech people, except for a couple of exceptions. And the 10 most valuable companies in the world are tech companies, except for maybe one or two exceptions. And they all look alike, whether it's Elon Musk, or Tim Cook, or Mark Zuckerberg. Satya Nadella looks a little different, but not much, you know. So it's a demographic that decides everything. And there's a lot of people on this planet; maybe there should be some more voices, and that's an issue.
So you've got an issue of diversity. And then the ones that are speaking up around, say, facial recognition: most of the people speaking about facial recognition are the people who get affected by it, not the people who don't. Because why would it concern them? It doesn't affect them, you know. That's the kind of thing you have to think about. Look with your eyes to see who controls everything. It's the same people. They're a lot alike, and that's not ever any good. Being homogeneous is not good in this case. Heterogeneity is really important: different points of view, disagreeing with each other, that's what's powerful.
As a journalist yourself, you know, how have you seen AI impact your work or the work of other people?
It will make things easier, like writing headlines, giving you ideas, generating ideas, taking care of stuff that really doesn't need to be done by a human. I know journalists get all prickly about it, but there's all kinds of stuff, just earnings stories or headlines or idea generation. It's okay for it to generate ideas, the way you use the internet; it's okay to look things up on Wikipedia. It's fine. It doesn't mean it's the end of your search, right? And so it could be used on all kinds of patterns, on files; you could see it in investigative journalism. You could see all kinds of applications for journalists. You could also see all kinds of laziness, where you rely on it too much, or you assume just because it says it, it's so. But you know, any journalist that does their reporting from Wikipedia deserves to lose their job. So, you know, you've got to combine humanity with these tools, just like anything else.
And then when it comes to ethics, in journalism or just in general, of using AI?
It's fine. It's like using a car or using a phone. It's fine as long as you use it properly. If you use it and you are the one in control of the situation, that's fine. If you rely on it and are lazy, you know, that's very typical. You've seen both cases with the internet, or anything else; anything can be manipulated. And so you have to think of it as a tool. But it can be a weapon. That's how you have to really look at it. It can be a weapon, but it's a tool.
I think that there are dangers of automating everything where humans are involved; that's always going to be an issue. We're already there, by the way, in a lot of ways. But you know, I think government agencies have to be updated. There should probably be a Department of Technology, the way there's an FAA or an FCC or an SEC, you know. Every other industry has something governing it. I think it's a little more dicey, because it veers into free speech, but technology is not necessarily speech. And so probably every single agency has to have some element of this, and they do. There's Cyber Command, there's all kinds of things. But when these tools become so sophisticated, we really should have oversight over them.
With elections coming up, what should people know about generative AI and the election process, voting, being an informed voter?
Well, as with elections in the past, buyer beware, right? Just be clear about what you're getting in the information; make sure it's accurate. What happens is people manipulate the stuff and allow it to go on, allow lies to just stand once they say them, and then it's hard to figure out. This is a place where people who lie thrive, they really do. They just thrive in these environments. And so, you have to be much more aware of lying and the manipulation. But again, this is not new; it's just more sophisticated.
It totally amplifies the lies. And you can see that; you can look at a lot of these, and it takes them to a place that's really ugly. So you just have to ask questions, and it should be in your interest, especially if you believe those people: "Well, okay, I believe you. But let me see your evidence." What's wrong with that? You know what's wrong with it? It's a lie, that's what's wrong with it. But people want to live in those worlds, and then they get radicalized, and you see it everywhere.
Click Here to watch our VPM News Focal Point story that features Kara Swisher