Technology Expert on the Biggest Threats to Democracy
Media studies professor and Director of the Center for Media and Citizenship at the University of Virginia, Siva Vaidhyanathan explains what he sees as the biggest threat to American democracy today.
TRANSCRIPT OF VIDEO
BILLY SHIELDS: Our guest this week is an expert on the impact of technology on society. He's written books about social media, search engines, and big data. He's also a Professor of Media Studies and Director of the Center for Media and Citizenship at the University of Virginia. Siva Vaidhyanathan, thank you for joining us.
SIVA VAIDHYANATHAN: Oh, it's my pleasure!
So, tell us about the focus of your teaching at UVA and the Democracy Lab.
Yeah, so one of the things we've been doing at the University of Virginia for the past few years is exploring the essence of democracy, historically, philosophically, economically, politically, and in my case, technologically. And so, there are a number of scholars at UVA working in the College of Arts and Sciences, also in the School of Data Science, and in the College of Engineering. And we are committed to understanding this complicated relationship between digital technology and our aspirations for democracy, because of course, we would all love to live in a world in which our technologies allowed us to deliberate clearly and equally, and raise the whole knowledge base of society, so we make better decisions collectively. But we also know that's not how it turned out, right? It turned out that technologies drive us apart as much as they bring us together. And so, the question is under what conditions, and how can we do better?
Now, you've been at the forefront of analyzing the tech world for decades. What's your top priority today?
Oh, my top priority today is to get the public to understand that the technologies that arrive in our lives are not given. We get to resist, and we get to insist that they be better to us and better for us. We can only do that by having a vocabulary about how technology affects our lives. Having an agenda saying, look, we want to minimize surveillance! Maximize privacy. We want to have our own personal autonomy. We also want the ability to speak clearly, openly, and freely to each other, but we would like to do so in an environment in which we're not harassed, and we're not dismissed, and we're not dehumanized, so that the best ideas can float to the top. If we care about those issues, if the public demands those issues, then our regulators, our legislators will have to respond. And we'll have to influence the technology industries to do better upfront rather than rolling out technologies, and then trying to fix them later, right? That's what we saw happen with Google, with Facebook, with WhatsApp, with Instagram, with Twitter. We saw that happen, and it was largely a frustrating process; in some parts of the world, a disaster. What I would love is for our society to have a clear sense of what we actually want out of a democracy going forward, and then have our technology companies and our technology leaders respond to our clear demands. That can help us build a better information ecosystem for the 21st century. Instead, we've been doing things backwards. We let technologies arrive in our lives, we figure out how they're going to affect us, and we try to cope, and then we demand that the companies fix what they've already created. And that's just been a chaotic process.
So, what are your greatest concerns about fake news and the improvements in AI that make it even harder to distinguish real from fake?
Yeah. So, I have a bigger issue than just fakeness, right? Fakeness has always been with us. The real challenge for us is not so much discerning truth from falsity. That's always going to be a challenge, it always will be a challenge. I don't mean to minimize it. But the bigger problem is the amount of garbage that arrives in our lives. And it doesn't all have to be fake, false, untrue, or misleading; it can just be twisted, narrow, or highly emotional. All of that garbage that arrives in our lives distracts us, and makes it really hard for us to connect with our fellow citizens, and talk deeply and informatively about the challenges that are in front of us! So, whether it's a challenge like how to deal with the next pandemic, which could come in 12 months or 36 months, we don't know! How to deal with climate change, how to deal with human migration, which is happening all over the world, not just the southern border of the United States. What are the consequences? What can we do about it? These are deep conversations we should be having with the best information in the calmest possible way. And nowhere in the world are we doing that, largely because we are so overwhelmed by so much highly emotionally charged stimulation. Again, not all of it untrue, just not healthy. And so, I would love to be able to have a new system, a deliberation system: places where we can talk it out in ways that are calm and respectful, and informed. That's our only hope. I know that sounds a little bit naive, given where we are right now, and the anger that we have in the world, but I do think that has to be our north star, and we have to aspire to improve ourselves, both as citizens and as technological consumers, to get that done.
So, you say social media undermines our democracy. Why and how is that?
Yeah, social media undermines our democracy largely by keeping us segmented, keeping us in our tribes, in our interest groups, keeping us with a constant sense of affirmation. If you engage on Facebook or on Instagram, or even on WeChat, a Chinese system that is the most popular social media platform in China, those systems are designed to give you more of what you say you want, right? So, every time you engage with those systems, somebody is telling you that you're really smart, and you're really funny, and you really know what's goin' on, and your opinions are correct, and you're very good looking, right? So, all of that positive affirmation, which keeps us going back, that's a good thing to have in your life, but when it comes to understanding complicated issues in the world, it's not actually healthy. So, all of that stimulation over time, all that affirmation, all that segmentation, so that you're pretty much only conversing with people who agree with you, after a while, that solidifies, and really undermines a sense of citizenship. Because a responsible citizen is one who is open and respectful of his or her neighbors, regardless of where they come from and what their assumptions are. And that's hard work. That's always been hard work, it was hard work long before we had social media and cell phones in our lives, and it's even harder now.
How concerned are you about the impact of social media and fake news on the upcoming 2024 election?
Yeah, you know what's interesting? I'm a lot less concerned about social media and what we tend to call fake news in 2024 than I was in 2016, when it was a real problem. And it was not just a real problem in the United States, it was a real problem in the Philippines. The year before, it was a real problem in India. One of the things that I have appreciated about the ways that people have engaged over the past eight years is we recognize these problems even if we don't fully understand them, and sometimes we simplify. We do have a sense that our media diet should be bigger than what we see on Facebook and Twitter, right? We do have a sense that there's a lot of nonsense out there, and we should be suspicious. That sense is growing, so I'm actually pretty optimistic about our ability to keep that in perspective. What I'm not optimistic about is our ability to be straightforward about the problems that we face. Again, problems like the migration of human beings, right? Which is only going to increase, and there's nothing any president can do about it. The warming of the planet, which is only going to increase, and there's very little any president can do about it. And the preparation for the next big health emergency, which no president can prevent, but a decent president can adjust to. That we seem incapable of having grownup conversations about things like that, conversations that are not so heated, not so full of accusations and bigotry, that's troubling to me much more than any sense that there'll be an AI-infused video of one or the other candidate. That's going to happen; whether or not people buy it or care about it is a kind of separate, independent, almost trivial question. No election will be decided based on a few AI-produced videos. An election is going to be decided based on a thousand different influences in the world.
And the real question for us, not just for 2024, but for the next 40 or 80 years, is can we actually generate a healthy, deep, thoughtful democracy, so we can face these challenges without tearing each other apart?
Hmm. How is AI going to change how we live?
Yeah, yeah. Well, look, AI is already changing how we live. So, it's not even a future tense question. We sometimes see these sort of science fiction visions of what AI could do for us, right? That we could be walkin' around with eyeglasses that present information about everybody who's coming toward us. That we could have lots of things in the world, stimulation that confuses us and undermines us. That we all could lose our jobs if robots can do our jobs better than we can, right? All of these things are probably not worth worrying about anytime soon, largely because they all assume that AI will work as it works in the movies, and the movies usually aren't accurate. What we do have to pay attention to is the ways in which AI is currently being used. It's currently being used, embedded in Facebook, embedded in Twitter, embedded in YouTube, embedded in Google, in ways that influence what we think about the world, what we think we know about the world. And it's almost invisible. We have no way of demanding to know what the principles and priorities of those systems are; we just don't have the laws and regulations that can get us to the point of demanding that those companies be transparent with us. AI is also built into our legal system. It's being used to determine the length of sentences in some states. It's being used in cities like Los Angeles for predictive policing, where you take historical crime data and it can help guide police to put resources in certain neighborhoods and not in others. Now, of course, that means historical data is influenced by our racist past, and that's not healthy. So, we actually have real world consequences of AI right now that we tend not to pay attention to because we're so focused on the future and on the science fiction.
Along the same lines of real world consequences, how do you think that AI might affect the current crisis in the Middle East?
Oh my gosh. Well, so one of the things we know is that AI has been tested for nearly two decades on systems like missile targeting, right? We can assume that Israel, with the support of the U.S., is using state-of-the-art missile targeting systems! That means that part of what they're doing is probably influenced by AI, but we can't know that, right? The systems are proprietary, and of course, military matters are highly secretive. But we do know in general that AI is being used for things like missile targeting. Beyond that, AI is being used for security in all sorts of ways, and it has been in Israel for a long time. It's being used for facial recognition. And that is increasing around the world. It's being used for facial recognition in the United States by police forces too, when people enter stadiums, for instance. And one thing about facial recognition is it doesn't work very well. And because it doesn't work very well, all kinds of false positives can come up. And you can find yourself in handcuffs because you happen to look like the wrong person. And that's especially true for people who have facial characteristics that are hard for AI systems to pick up. Like African Americans, or people of African descent, who have a much harder time with facial recognition AI systems, because those systems are trained on inadequate samples. And they're generally trained to identify the facial features of lighter skinned people. So, we've seen this problem around the world. And in a place like Israel, which is full of diverse people, it can create all kinds of trouble if the state depends on it too much. So, I would keep an eye on that. With missile targeting, we'll really never know how well things work. It'll always be top secret. We might get some indication two, three years from now. But when it comes to getting a sense of who's on a train, who's on a bus, who's crossing a border, right? Those are really important questions.
And those are the situations where we have to ask, do we want to outsource this very crucial decision-making process to a machine we are not allowed to understand? I mean, it's one thing to outsource it to a machine, but to outsource it to a machine with no accountability, that will then influence the exercise of power over people, that's somethin' we really have to dig deeper into.
Hmm. Siva Vaidhyanathan, media professor at UVA, thank you very much for joining us.