SACRAMENTO — Julianna Arnold wasn’t alarmed when her teen daughter first joined Instagram.
Many people her age were using it. And her daughter Coco had a social life and other hobbies, like track and gymnastics, to balance out her time online.
“It was music and dancing videos and it seemed innocent,” said Arnold, who resides in Los Angeles, explaining that she would look over the content Coco watched.
But Arnold said a man used Instagram to target her daughter while they were living in New York in 2022, sending private messages and acting like a “big brother” to earn her trust. Two weeks after her 17th birthday, Coco met him near her home — and died after taking a fentanyl-laced fake Percocet that he provided.
Similar stories are playing out nationwide as parents grapple with how to protect their children from a myriad of threats online.
As home to many tech giants, California is paving the way for legislative restrictions on social media and artificial intelligence, Gov. Gavin Newsom has said. But while child safety advocates agree progress was made at the state Capitol this year, they argue there’s still a long way to go and plan to fight for more protections when legislators reconvene in January.
“I would say California is definitely leading on this,” said Jai Jaisimha, co-founder of the Transparency Coalition, a nonprofit researching the risks and opportunities associated with AI. “[But] I would love to see a willingness to be a bit stronger in terms of understanding the impacts and taking action faster. We can’t afford to wait three or four years — harm is happening now.”
A survey last year from the Pew Research Center found nearly half of U.S. teens ages 13 to 17 say they’re online “almost constantly.” Nine in 10 teens said they use YouTube, and roughly 6 in 10 said they use TikTok and Instagram. Fifty-five percent reported using Snapchat.
During the recent legislative session, Newsom signed a slate of legislation intended to make the internet safer, particularly for minors.
One new law requires operating system providers to ask account holders for the user’s age when setting up devices such as laptops or smartphones. The system providers then send a signal to apps about the user’s age range so content can be adjusted for age-appropriateness. Another measure requires certain platforms to display warning labels about the adverse mental health effects social media can have on children.
A third new law requires companion chatbots to periodically remind users they are not interacting with a human and to put suicide prevention processes in place to help those who show signs of distress. A companion chatbot is a computer program that simulates humanlike conversations to provide users with entertainment or emotional support.
Newsom, however, vetoed what was arguably the most aggressive bill, saying it was too broad and could prevent children from accessing AI altogether.
Assembly Bill 1064 would have prohibited making companion chatbots available to minors if the chatbots were “foreseeably” capable of promoting certain behaviors, like self-harm, disordered eating or violent acts. It would also have required independent safety audits on AI programs for children.
“That is one piece that we are going to revisit next year,” said Sacha Haworth, executive director of the Tech Oversight Project. “We are in conversations with members’ offices and the governor’s office about getting that legislation to a place where he can sign it.”
Another organization is taking a different approach.
Common Sense Media Chief Executive Jim Steyer has launched a campaign for a state ballot initiative, dubbed the California Kids AI Safety Act, to take the issue directly to voters. Among other provisions, it would strictly limit youth access to companion chatbots and require safety audits for any AI product aimed at children or teens. It would also ban companies from selling the personal data of users under 18 without consent.
Steyer added that AB 1064 had widespread support and likely would have been signed were it not for the tech industry’s aggressive lobbying and threats to leave the state.
“In the world of politics, sometimes you have to try and try again,” Steyer said. “[But] we have the momentum, we have the facts, we have the public and, most of all, we have the moral high ground, so we are going to win.”
Ed Howard, senior counsel and policy advocate for the Children’s Advocacy Institute at the University of San Diego, said one of its goals for next year is to give more teeth to two current laws.
The first requires social media platforms to provide a mechanism for minors to report and remove images of themselves being sexually abused. The second requires platforms to create a similar reporting mechanism for victims of cyberbullying.
Howard said the major platforms, like TikTok, Facebook and Instagram, have either not complied or made the reporting process “incredibly difficult.”
“The existence of such imagery haunts the survivors of these crimes,” he said. “There will be a bill this year to clean up the language in [those laws] to make sure they can’t get away with it.”
Howard believes legislators from both sides of the aisle are committed to finding solutions.
“I’ve never before seen the kind of bipartisan fury that I have seen directed at these [tech] companies,” he said.
Lishaun Francis, senior director of behavioral health for Children Now, said the organization is still exploring potential legislative priorities for 2026.
She explained they often take a measured approach because stronger legislation tends to get tied up in lawsuits from the tech industry. Meta, Google and TikTok, for example, are challenging a California law enacted last year that restricts kids’ access to personalized social media feeds.
“We are still trying to do a little bit more research with our young people about how they want to interact with AI and what they think this should look like,” Francis said. “We think that is an important missing piece of the conversation; you’ve just got a bunch of 40-and-up adults in the room talking about technology and completely ignoring how young people want to use it.”
David Evan Harris, senior policy advisor for the California Initiative for Technology and Democracy, said he’s keeping an eye on Washington as he prepares for the state session.
“There are people in Congress and in the White House who are trying to make it impossible for states” to regulate AI, he said. “They want to take away that power from the states and not replace it with any type of federal regulation, but replace it with nothing.”
The White House has a draft executive order on hold that would preempt state laws on artificial intelligence through lawsuits and by withholding federal funds, Reuters reported Saturday.
When advocates speak out at the statehouse next year, Arnold will be among them. Since her daughter died three years ago, she has co-founded Parents Rise — a grassroots advocacy group — and works to raise awareness about the risks youth face online.
Even before Coco was targeted by a predator, Arnold said technology had already taken a toll on their lives. Her once-lively daughter became addicted to social media, withdrawing from activities she used to love. Arnold took Coco to therapy and restricted her time online, but it resulted in endless fights and created a rift between them.
“You think your kid is safe in their bedroom, but these platforms provide a portal into your home for predators and harmful content,” Arnold said. “It’s like they’re just walking through the front door.”