
AI agents: Chatbots that do more than chat

There is a move toward AI "agents" that take actions on users' behalf rather than just regurgitate information. (David Espejo/Getty Images)

Chatbots can already regurgitate information to answer questions, generate text and images, and converse with users.

But the new “gpt2-chatbot” can do more than that. It arrives as engineers are building AI “agents” that can take action on users’ behalf, everything from booking a flight to handling a customer service complaint. There are benefits, but also risks, and Axios chief technology correspondent Ina Fried says some companies have hesitated to implement AI agents.

The restaurant reservation system OpenTable is already using AI agents to book tables for people. Fried says the company’s model is low-risk, but other companies could face problems letting AI have more control over operations.

“A car dealership put in a chatbot to answer questions, and somebody negotiated a binding deal to buy a car for ridiculously cheap,” Fried says. “A court said, ‘Hey, you put it there. You got to live with it.’ That’s a small taste of what’s coming.”

3 questions with Ina Fried

How is AI different from technology like Apple’s Siri or Amazon’s Alexa?

“Siri and Alexa are kind of ChatGPT’s grandparents. They were actually trained a different way. The type of generative AI that powers Google’s and OpenAI’s chatbots, all these things, is a new generation. But it is based on the same idea that we can use natural language to interact with computers.

“The idea of an agent is that instead of saying, ‘Where’s the best deal on a ticket to Paris?’ you would say, ‘Hey, book me a ticket to Paris.’ And maybe you’d say so long as it meets the following criteria, and set a price limit. And that’s doable today.”

How is this technology being used already?

“In customer service, we’re seeing companies do it two ways. One is that a human being has a chatbot there so they can answer calls more efficiently and there is a productivity boost.

“But we’re seeing other companies say, ‘We’re willing to take the risks, and let’s have AI answer more of the questions on its own,’ which obviously scales much more, but opens up huge categories of risk as well.”

What risks can we anticipate with this technology growing?

“I think the biggest risk is these chatbots are still not accurate. They still make what we call hallucinations, but it’s really being confidently wrong. Today the human is really serving as an important check on that.

“If you start letting agents take action on their own and they’re wrong, what happens? Especially if an agent starts talking to another agent.

“I do think that’s our future, because the productivity gains and the idea of having your computer do menial tasks for you is so appealing. But I think we have to get to a place where the AI systems are in better shape, and we have better safeguards to make sure when they are wrong, there’s recourse versus actions that are irreversible.”

Chris Bentley produced and edited this interview for broadcast with Peter O’Dowd. Grace Griffin adapted it for the web.


Copyright 2024 NPR.