Teens are having disturbing interactions with chatbots. Here's how to lower the risks


It wasn't until a couple of years ago that Keri Rodrigues began to worry about how her kids might be using chatbots. She learned her youngest son was interacting with the chatbot in his Bible app — he was asking it some deep moral questions, about sin for instance.

That's the kind of conversation that she had hoped her son would have with her and not a computer. "Not everything in life is black and white," she says. "There are grays. And it's my job as his mom to help him navigate that and walk through it, right?"

Rodrigues, president of the National Parents Union, which advocates for children and families, has also been hearing from parents across the country who are concerned about AI chatbots' influence on their children. Many parents, she says, are watching chatbots claim to be their kids' best friends, encouraging children to tell them everything.

Psychologists and online safety advocates say parents are right to be worried. Extended chatbot interactions may affect kids' social development and mental health, they say. And the technology is changing so fast that few safeguards are in place.

The impacts can be serious. According to their parents' testimonies at a recent Senate hearing, two teens died by suicide after prolonged interactions with chatbots that encouraged their suicide plans.

If you or someone you know may be considering suicide or be in crisis, call or text 988 to reach the 988 Suicide & Crisis Lifeline.

But generative AI chatbots are a growing part of life for American teens. A survey by the Pew Research Center found that 64% of adolescents are using chatbots, with three in ten saying they use them daily.

"It's a very new technology," says Dr. Jason Nagata, a pediatrician and researcher of adolescent digital media use at the University of California San Francisco. "It's ever changing and there's not really best practices for youth yet. So, I think there are more opportunities now for risks because we're still kind of guinea pigs in the whole process."

And teenagers are particularly vulnerable to the risks of chatbots, he adds, because adolescence is a time of rapid brain development which is shaped by experiences. "It is a period when teens are more vulnerable to lots of different exposures, whether it's peers or computers."

But parents can minimize those risks, say pediatricians and psychologists. Here are some ways to help teens navigate the technology safely.

1. Be aware of the risks

A new report from the online safety company Aura shows that 42% of adolescents who use AI chatbots turn to them for companionship. Aura gathered data from the daily device use of 3,000 teens as well as surveys of families.

That includes some disturbing conversations involving violence and sex, says psychologist Scott Kollins, chief medical officer at Aura, who leads the company's research on teen interactions with generative AI.

"It is role play that is [an] interaction about harming somebody else, physically hurting them, torturing them," he says.

He says it's normal for kids to be curious about sex, but learning about sexual interactions from a chatbot instead of a trusted adult is problematic.

And chatbots are designed to agree with users, says pediatrician Nagata. So if your child starts a query about sex or violence, "the default of the AI is to engage with it and to reinforce it."

He says spending a lot of time with chatbots — having extended conversations — also prevents teenagers from learning important social skills, like empathy, reading body language and negotiating differences.

"When you're only or exclusively interacting with computers who are agreeing with you, then you don't get to develop those skills," he says.

And there are mental health risks. According to a recent study by researchers at the nonprofit research organization RAND and at Harvard and Brown universities, 1 in 8 adolescents and young adults use chatbots for mental health advice.

But there have been numerous reports of individuals experiencing delusions, or what's being referred to as "AI psychosis," after prolonged interactions with chatbots. This, along with concerns over suicide risk, has led psychologists to warn that AI chatbots pose serious risks to the mental health and safety of teens as well as vulnerable adults.

"We see that when people interact with [chatbots] over long periods of time, that things start to degrade, that the chatbots do things that they're not intended to do," says psychologist Ursula Whiteside, CEO of a mental health non-profit called Now Matters Now. For example, she says, chatbots "give advice about lethal means, things that it's not supposed to do but does happen over time with repeated queries."

2. Stay engaged with kids' online lives 

Keep an open dialogue going with your child, says Nagata.

"Parents don't need to be AI experts," he says. "They just need to be curious about their children's lives and ask them about what kind of technology they're using and why."

And have those conversations early and often, says psychologist Kollins of Aura.

"We need to have frequent and candid but non-judgmental conversations with our kids about what this content looks like," says Kollins, who's also a father to two teenagers. "And we're going to have to continue to do that."

He often asks his teens about what platforms they are on. When he hears about new chatbots through his own research at Aura, he also asks his kids if they have heard of those or used them.

"Don't blame the child for expressing or taking advantage of something that's out there to satisfy their natural curiosity and exploration," he says.

And make sure to keep the conversations open-ended, says Nagata: "I do think that that allows for your teenager or child to open up about problems that they've encountered."

3. Develop digital literacy 

It's also important to talk to kids about the benefits and pitfalls of generative AI. And if parents don't understand all the risks and benefits themselves, they and their kids can research them together, suggests psychologist Jacqueline Nesi at Brown University, who was involved in the American Psychological Association's recent health advisory on AI and adolescent health.

"A certain amount of digital literacy does need to happen at home," she says.

It's important for parents and teens to understand that while chatbots can help with research, they also make errors, says Nagata. So users should stay skeptical and fact-check.

"Part of this education process for children is to help them to understand that this is not the final say," explains Nagata. "You yourself can process this information and try to assess, what's real or not. And if you're not sure, then try to verify with other people or other sources."

4. Parental controls only work if kids set up their own accounts

If a child is using AI chatbots, it may be better for them to set up their own account on the platforms, says Nesi, instead of using chatbots anonymously.

"Many of the more popular platforms now have parental controls in place," she says. "But in order for those parental controls to be in effect, a child does need to have their own account."

But be aware, there are dozens of different AI chatbots that kids could be using. "We identified 88 different AI platforms that kids were interacting with," says Kollins.

This underscores the importance of having an open dialogue with your child to stay aware of what they're using.

5. Set time limits

Nagata also advises setting boundaries around when kids use digital technology, especially at night.

"One potential aspect of generative AI that can also lead to mental health and physical health impacts are [when] kids are chatting all night long and it's really disrupting their sleep," says Nagata. "Because they're very personalized conversations, they're very engaging. Kids are more likely to continue to engage and have more and more use."

And if a child is veering towards overuse and misuse of generative AI, Nagata recommends that parents set time limits or limit certain kinds of content on chatbots.

6. Seek help for more vulnerable teens 

Kids who are already struggling with their mental health or social skills are more likely to be vulnerable to the risks of chatbots, says Nesi.

"So if they're already lonely, if they're already isolated, then I think there's a bigger risk that maybe a chatbot could then exacerbate those issues," she says.

And it's also important to keep an eye on potential warning signs of poor mental health, she notes.

Those warning signs include sudden and persistent changes in mood, isolation, or changes in how engaged a child is at school.

"Parents should be as much as possible trying to pay attention to the whole picture of the child," says Nesi. "How are they doing in school? How are they doing with friends? How are they doing at home if they are starting to withdraw?"

If a teen is withdrawing from friends and family and restricting their social interactions to just the chatbot, that too is a warning sign, she says. "Are they going to the chatbot instead of a friend or instead of a therapist or instead of responsible adults about serious issues?"

Also look for signs of dependence or addiction to a chatbot, she adds. "Are they having difficulty controlling how much they are using a chatbot? Like, is it starting to feel like it's controlling them? They kind of can't stop," she says.

And if they see those signs, parents should reach out to a professional for help, says Nesi.

"Speaking to a child's pediatrician is always a good first step," she says. "But in most cases, getting a mental health professional involved is probably going to make sense."

7. The government has a role to play

But Nesi acknowledges that the job of keeping children and teens safe from this technology shouldn't fall on parents alone.

"There's a responsibility, you know, from lawmakers, from the companies themselves to make these products safe for teens."

Lawmakers in Congress recently introduced bipartisan legislation that would ban tech companies from offering companion apps to minors, and hold companies accountable for making available to minors companion apps that produce or solicit sexual content.


Copyright 2025 NPR

Rhitu Chatterjee
Rhitu Chatterjee is a health correspondent with NPR, with a focus on mental health. In addition to writing about the latest developments in psychology and psychiatry, she reports on the prevalence of different mental illnesses and new developments in treatments.