Users under 18 may no longer engage in open-ended chats with the artificial characters on the Character.AI chatbot service, its parent company, Character Technologies, announced Wednesday. The move follows a handful of lawsuits alleging suicide and mental health problems among teens who either chatted with the app's characters or played an online game it offers.
The company will implement the change by Nov. 25; in the interim, teenagers will face a two-hour daily chat limit. Teens under 18 will then no longer be able to hold free-ranging conversations with artificial companions, the company said in the announcement. Instead, they will be able to create videos, stories, and streams with its characters.
In its statement, the company said: “We don’t decide to turn off open-ended Character chat lightly – but we do believe this is the right course of action in light of the concerns raised about how teens do and should ultimately interact with this new technology.”

The controversy around how teens and children should interact with AI has led online safety advocates and lawmakers to urge tech companies to strengthen their parental controls for AI. Last year, a Florida mother sued Character Technologies, claiming the app led to the suicide of her 14-year-old son. In September, three more families filed suit against the company, claiming their children died by suicide, attempted suicide, or were otherwise harmed after engaging with its chatbots.
In an earlier statement on the September lawsuits, the company said it cares “very deeply about the safety of our users” and invests “tremendous resources in our safety program.” It also said it has “deployed and [is] continually developing safety features — such as self-harm resources — as well as features geared towards the safety of our underage users.”
Character Technologies said it made the changes after regulators asked questions about its work and in response to recent media reports.
The firm is also rolling out additional age-verification tools and is creating the AI Safety Lab, an independent non-profit dedicated to safety research in AI entertainment. The updates build on earlier Character.AI safety measures, such as a prompt that directs users to the National Suicide Prevention Lifeline when suicide or self-harm is mentioned.
Character Technologies is the latest AI company to announce new or expanded safeguards for teens amid growing fears about the technology's impact on teenage mental health. This year, several reports have detailed cases of users becoming emotionally disturbed or estranged from family members after long chats with ChatGPT.
At the end of September, OpenAI began allowing parents to link their accounts to a teen’s and restricted certain content on teen accounts, such as “graphic content, viral challenges, sexual, romantic, or violent roleplay, and extreme beauty ideals.” And this month, Meta said it would soon let parents stop teens from chatting with AI characters on Instagram.