In addition to verifying users' ages, platforms must regularly post messages reminding users that the person they are talking to is a machine.
California yesterday enacted the first laws in the United States to set rules for chatbots, artificial intelligence programs that simulate human conversation, in response to the suicides of teenagers who had formed "relationships" with these tools, the state's governor, Gavin Newsom, announced.
The Democratic governor signed a series of laws that require, among other things, companies to confirm the age of users, post regular warning messages and implement suicide prevention protocols.
“We have seen truly horrific, tragic examples of young people falling victim to unregulated technologies, and we will not stand idly by while these companies operate without limits and accountability,” Newsom said in a statement.
One of the laws, SB243, regulates chatbots that act as "companions" or "confidants," such as those developed by platforms like Replika and Character.AI. The latter is being sued by the parents of a 14-year-old named Sewell, who died by suicide in 2024 after falling in love with a Game of Thrones-inspired chatbot that allegedly fueled his suicidal thoughts.
“We can continue to lead in the field of AI (…) but we must act responsibly, protecting our children,” added the governor of California, the state that is home to the industry's leading companies, such as OpenAI (ChatGPT), Google (Gemini) and xAI (Grok).
In addition to verifying users' ages, platforms will have to post regular messages reminding users that they are talking to a machine (every three hours for minors). They are also required to detect suicidal thoughts, refer users to prevention programs and report statistics on the issue to the authorities.
“The technology industry has an incentive to capture young people's attention at the expense of their real-world relationships,” said Senator Steve Padilla, the law's sponsor.
“Emerging technologies like chatbots and social networks can inspire, educate and connect people, but without real safeguards, they can also exploit, mislead and endanger our children,” the governor insisted.
There are currently no federal rules in the US limiting the risks associated with AI.