A new AI tool can moderate your texts to keep the conversation from getting tense

AYESHA RASCOE, HOST:

How many times have you been in some kind of text exchange with a friend, a colleague, maybe your partner, and suddenly, it looks like, out of nowhere, things have gone south? I have been there. Well, artificial intelligence can help. Researchers at Cornell University are working on an AI tool called ConvoWizard. Basically, it works like a browser extension and warns you when it senses things may be getting tense in your text exchange. We're joined now by Professor Cristian Danescu-Niculescu-Mizil, who is one of the creators of the tool. Welcome to the show.

CRISTIAN DANESCU-NICULESCU-MIZIL: Hi, Ayesha.

RASCOE: How does ConvoWizard sense when a conversation is getting heated?

DANESCU-NICULESCU-MIZIL: We have an algorithm that, together with my student Jonathan Chang, we developed over the last few years, trying to model the dynamics within a conversation, right? The magic really comes from the fact that we teach the computer to have an intuition about where the conversation is going by showing it a lot of - millions of conversations. It's really interesting 'cause I think we all kind of have an intuition about when the conversation is getting tense. The problem is to act on that intuition in the heat of the moment, right? It's actually harder when you're not talking face-to-face, when you're online. You are actually missing those signals. So what we're doing with this tool is trying to supplement our intuition.
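
To make that idea concrete, here is a rough Python sketch of a function that reads the conversation so far plus a draft reply and returns a tension estimate. The keyword scorer is a stand-in invented for illustration; the researchers' actual system is a learned model trained on millions of real conversations.

```python
# A rough sketch of the forecasting idea described above. The scorer is a toy
# keyword heuristic standing in for the researchers' learned model, which is
# trained on millions of real conversations; the cue phrases are invented.

TENSE_CUES = {"who do you think", "how little you know",
              "ridiculous", "nonsense", "you clearly", "you obviously"}

def tension_score(context: list[str], draft_reply: str) -> float:
    """Estimate (0 to 1) how likely the exchange is to turn tense.

    `context` holds the conversation so far, one string per turn, and
    `draft_reply` is the message the user is about to send. A learned model
    would read both; this placeholder just counts hostile-sounding cues.
    """
    text = " ".join(context + [draft_reply]).lower()
    hits = sum(cue in text for cue in TENSE_CUES)
    return min(1.0, hits / 3)  # crude squashing into [0, 1]

context = ["You should think how little you know about the court."]
print(tension_score(context, "Fair point; here is the ruling I was citing."))  # lower
print(tension_score(context, "Who do you think you are to tell me?"))          # higher
```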

RASCOE: It's supposed to, like, dial down the situation, so I guess if you could just show me how it works.

DANESCU-NICULESCU-MIZIL: This is a conversation about the Supreme Court, so this is a community that we have collaborated with - it's Change My View. It's a pretty large community that has, like, about 3 million subscribers, and they have a very nice space for discussing controversial topics.

RASCOE: OK, so what I'm seeing here now is this discussion is getting somewhat tense 'cause this person has already said basically, like, you should think how little you know about the court, and - but see - I see ConvoWizard is now saying this comment might - it's turning red, and it's saying might increase the tension.

DANESCU-NICULESCU-MIZIL: Yeah, can you make it even - you can actually make it redder. You can try.

RASCOE: The way you would do that is you would be like, who do you think you are to tell me? (Laughter).

DANESCU-NICULESCU-MIZIL: Who do you think you are?

RASCOE: Yeah, OK, it's very red now. (Laughter). And why make it so that it just kind of nudges the user? Why a nudge instead of saying, don't say that, or giving, like, say it this way, or something?

DANESCU-NICULESCU-MIZIL: Our goal is not to constrain the people that are having the conversations. Like, we believe that it's ultimately their decision what they want to say. There's also another aspect here, where we have to recognize that these signals are really read by an algorithm. Algorithms are biased. And therefore, using an algorithm to constrain your conversations can be very problematic from an ethical perspective.
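
To picture the "nudge, not constrain" design he describes, here is a minimal sketch in which a tension estimate only controls an indicator color and an optional warning. The threshold and colors are invented for illustration, and nothing is ever blocked or rewritten automatically.

```python
# A minimal sketch, with invented thresholds and colors, of the "nudge, don't
# block" behavior: the tool returns advisory feedback only and never rewrites
# or withholds the user's message.

def nudge(score: float) -> dict:
    """Map a tension estimate to an indicator color and an optional warning."""
    red = int(255 * score)  # higher estimated tension -> redder indicator
    warning = None
    if score > 0.5:  # threshold chosen purely for illustration
        warning = "This comment might increase the tension in the conversation."
    return {"indicator_rgb": (red, 80, 80), "warning": warning}

print(nudge(0.2))   # calm draft: pale indicator, no warning
print(nudge(0.85))  # tense draft: red indicator plus an advisory note
```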

RASCOE: You're testing this tool out on Reddit, but where else do you imagine people would use it? Like, do you see it being used on Twitter, on Slack, WhatsApp?

DANESCU-NICULESCU-MIZIL: Yeah, we imagine that this tool could be useful in many places where well-intentioned participants might need some aid to their already existing intuition, right?

RASCOE: Have you learned anything about de-escalating language?

DANESCU-NICULESCU-MIZIL: So we started by looking at what people actually do to de-escalate their own conversation, and what we're finding is actually that people, when they try to de-escalate the situation, they use more polite language. They try to be less direct. They use more formal language sometimes. And importantly, they try to use more objective statements, right? So less subjectivity and more objective statements. It's really interesting that we actually do have those skills, but we sometimes forget to use them.
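
As a toy illustration of the cues he lists (politeness markers, hedged rather than direct phrasing, fewer subjective statements), the sketch below counts such phrases in a draft message. The word lists are invented examples, not the features used in the research.

```python
# Toy illustration of the de-escalation cues mentioned above: politeness
# markers, hedged (less direct) phrasing, and subjective statements to avoid.
# The word lists are invented examples, not the study's actual features.

POLITE = {"please", "thanks", "thank you", "i appreciate"}
HEDGES = {"perhaps", "maybe", "it seems", "i might be wrong"}
SUBJECTIVE = {"i feel", "you always", "you never", "obviously"}

def deescalation_profile(message: str) -> dict:
    """Count cue phrases; more politeness and hedging with less subjectivity
    roughly matches what de-escalating writers were observed to do."""
    text = message.lower()
    def count(cues):
        return sum(cue in text for cue in cues)
    return {"politeness": count(POLITE),
            "hedging": count(HEDGES),
            "subjectivity": count(SUBJECTIVE)}

print(deescalation_profile("You never listen. Obviously you are wrong."))
print(deescalation_profile("Thanks for explaining. Perhaps I am missing "
                           "something, but the ruling seems to read otherwise."))
```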

RASCOE: That's Cristian Danescu-Niculescu-Mizil, associate professor at Cornell University's Department of Information Science. Thank you so much for joining us.

DANESCU-NICULESCU-MIZIL: Thank you, Ayesha.

Transcript provided by NPR, Copyright NPR.
