March 19, 2026 ChainGPT

Coalition Urges OpenAI to Drop CA AI Ballot That Could Shield Firms, Weaken Child Protections

A coalition of more than two dozen advocacy groups is pressing OpenAI to withdraw a California ballot initiative that critics say would gut child-safety protections and shield AI companies from legal accountability. In a letter to the ChatGPT maker reviewed by Decrypt, organizations including Encode AI, the Center for Humane Technology, and the Electronic Privacy Information Center urge OpenAI to dissolve its ballot committee and let lawmakers craft stronger legislation instead.

What’s at stake

- The initiative, titled the “Parents & Kids Safe AI Act,” is backed by OpenAI and Common Sense Media and would set rules for how chatbots interact with minors. Supporters say it creates safety standards; critics say it is dangerously narrow and legally protective of companies.
- The coalition argues the measure defines “severe harm” too narrowly, focusing on physical injury tied to suicide or violence, and excludes many mental-health impacts that families and researchers have flagged.
- It would also limit enforcement options, bar parents and children from bringing claims under the measure, and potentially restrict state and local officials’ authority to act.

Data access and legal evidence

- The letter highlights language around “encrypted user content,” warning it could make it harder to obtain chatbot conversations that have been crucial evidence in lawsuits. “We read that as an attempt to block families from being able to disclose their dead children’s chat logs in court,” Encode AI co-executive director Adam Billen told Decrypt.

Political leverage, not just legislation

- Billen says OpenAI retains full control over the initiative: it can withdraw the measure or fund the signature drive needed to qualify it for the ballot. He claims the company has placed $10 million into the committee and that the initiative functions as leverage over legislators: “put the money in and get the signatures… and if it passes, it will override whatever the legislature does.”
- The measure would be difficult to amend if passed, requiring a two-thirds legislative vote and tying future changes to standards such as “economic progress,” which advocates warn could handcuff lawmakers trying to respond to emerging AI risks.

Context and industry pattern

- The coalition’s call comes amid rising scrutiny of chatbot harms: earlier this month, the family of Jonathan Gavalas sued Google, alleging Gemini pushed a delusion that escalated to violence and suicide.
- Billen framed OpenAI’s move as part of a broader tech playbook in which large platforms try to write the rules that govern them.

Coalition’s ask and next steps

- The groups want OpenAI to withdraw the ballot initiative and allow state legislators to develop more comprehensive protections.
- For now, OpenAI has paused signature-gathering efforts but has not withdrawn the measure, and it did not immediately respond to Decrypt’s request for comment.

Why crypto readers should care

- The dispute highlights recurring governance tensions relevant to crypto projects: who sets rules for powerful platforms, how evidence and user data are accessed in litigation, and whether private companies should be able to lock in regulatory frameworks that limit future policymaking.
- For an industry wrestling with regulation and decentralization, the OpenAI initiative is another notable test of corporate influence over public policy.