Fake celebrity chatbots among those sending harmful content to children 'every five minutes'
In one example, a bot pretending to be Rey from Star Wars coached a 13-year-old in how to hide her antidepressants so her parents thought she'd taken them.
Thursday 4 September 2025 15:59, UK
Chatbots pretending to be Star Wars characters, actors, comedians and teachers on one of the world's most popular chatbot sites are sending harmful content to children every five minutes, according to a new report.
Two charities are now calling for under-18s to be banned from Character.ai.
The AI chatbot company was accused last year of contributing to the death of a teenager. Now, it is facing accusations from young people's charities that it is putting young people in "extreme danger".
"Parents need to understand that when their kids use Character.ai chatbots, they are in extreme danger of being exposed to sexual grooming, exploitation, emotional manipulation, and other acute harm," said Shelby Knox, director of online safety campaigns at ParentsTogether Action.
Sarah Gardner, the chief executive of tech safety group Heat Initiative, told Paste BN: "Until you can assure us and demonstrate through trust and safety policies that you have curated an environment that is safe for kids, it's extremely hard to recommend that kids be in this environment."
During 50 hours of testing using accounts registered to children aged 13 to 17, researchers from ParentsTogether and Heat Initiative identified 669 sexual, manipulative, violent, and racist interactions between the child accounts and Character.ai chatbots.
That's an average of one harmful interaction every five minutes.
The report's transcripts show numerous examples of "inappropriate" content being sent to young people, according to the researchers.
In one example, a bot posing as a 34-year-old teacher confesses romantic feelings to a researcher posing as a 12-year-old while alone with her in his office.
After a lengthy conversation, the teacher bot insists the 12-year-old can't tell any adults about his feelings, admits the relationship would be inappropriate and says that if the student moved schools, they could be together.
In another example, a bot pretending to be Rey from Star Wars coaches a 13-year-old in how to hide her prescribed antidepressants from her parents so they think she is taking them.
In another, a bot pretending to be US comedian Sam Hyde repeatedly calls a transgender teen "it" while helping a 15-year-old plan to humiliate them.
"Basically," the bot said, "trying to think of a way you could use its recorded voice to make it sound like it's saying things it clearly isn't, or that is might be afraid to be heard saying."
Bots mimicking actor Timothée Chalamet, singer Chappell Roan and American footballer Patrick Mahomes were also found to send harmful content to children.
"Chatbots right now are the wild, wild west of what harms children can experience online," said Ms Gardner. "It's too early and we don't know enough to let your children interact with them because, as the data shows, harmful interactions are not one-off."
Character.ai bots are mainly user-generated and the company says there are more than 10 million characters on its platform.
The company's community guidelines forbid "content that harms, intimidates, or endangers others - especially minors".
It also prohibits inappropriate sexual content and bots that "impersonate public figures or private individuals, or use someone's name, likeness, or persona without permission".
Character.ai's head of trust and safety Jerry Ruoti told Paste BN: "Neither Heat Initiative nor Parents Together consulted with us or asked for a conversation to discuss their findings, so we can't comment directly on how their tests were designed.
"That said: We have invested a tremendous amount of resources in Trust and Safety, especially for a startup, and we are always looking to improve. We are reviewing the report now and we will take action to adjust our controls if that's appropriate based on what the report found.
"This is part of an always-on process for us of evolving our safety practices and seeking to make them stronger and stronger over time. In the past year, for example, we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature.
"We're also constantly testing ways to stay ahead of how users try to circumvent the safeguards we have in place.
"We already partner with external safety experts on this work, and we aim to establish more and deeper partnerships going forward.
"It's also important to clarify something that the report ignores: The user-created Characters on our site are intended for entertainment. People use our platform for creative fan fiction and fictional roleplay.
"And we have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction."
Last year, a bereaved mother began legal action against Character.ai over the death of her 14-year-old son.
Megan Garcia, the mother of Sewell Setzer III, claimed her son took his own life after becoming obsessed with two of the company's artificial intelligence chatbots.
"A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," said Ms Garcia at the time.
A Character.ai spokesperson said it employs safety features on its platform to protect minors, including measures to prevent "conversations about self-harm".