March 17, 2026 ChainGPT

OpenAI’s ‘Erotica Mode’ Push Triggers Safety Alarms — Liability Lesson for Crypto

OpenAI is pushing forward with plans to let verified adults engage in text-only erotic conversations with ChatGPT, even as its own wellbeing advisers and former staffers warn the move risks serious harm and legal exposure.

What's happening

- In October, CEO Sam Altman publicly floated a plan to allow "smut rather than pornography" in ChatGPT for verified adults. The feature would be text-only: no erotic images, voice, or video, according to an OpenAI spokesperson quoted by the Wall Street Journal.
- The company delayed launches scheduled for December and Q1 2026 after safety concerns surfaced, but it has not abandoned the idea, the WSJ reports. OpenAI told Decrypt it had nothing to add to the Journal's report and gave no updated timeline.

Why advisers pushed back

- OpenAI's Expert Council on Well-Being and AI, an eight-member panel including researchers from Harvard, Stanford, and Oxford, told management in January that permitting erotic chats was a "bad idea." One council member warned that the product risked becoming a "sexy suicide coach," citing cases of users who formed dangerous emotional attachments to chatbots and later took their own lives.
- The council was created to define "what healthy interactions with AI should look like for all ages," but members say their input has had limited influence on the company's decisions. Critics described the dynamic as a classic "move fast, break things" approach.

Technical and safety hurdles

- Key technical safeguards remain incomplete. OpenAI's age-prediction system, the proposed gatekeeper meant to keep minors out of adult chat, reportedly misclassified teens as adults about 12% of the time. That failure rate was a deciding factor in scrapping the December rollout and then the Q1 attempt.
- Former staffers, including safety researcher Jan Leike, have accused OpenAI of weakening strict safety policies in pursuit of "shiny products" that boost engagement and can come to replace real-world relationships for vulnerable users.

Market and legal pressure

- The competitive landscape raises the stakes: Elon Musk's xAI markets Grok as an AI companion, Character.AI built a user base on AI romance (and has faced lawsuits over teen safety), and open-source models can run locally without corporate guardrails.
- With roughly 900 million active ChatGPT users, OpenAI faces far greater exposure than many rivals, making safety lapses potentially costly both legally and reputationally.

Public reaction and company stance

- A Change.org petition demanding the feature's launch gathered more than 3,000 signatures after some users complained that ChatGPT blocked even benign discussions of kissing and non-sexual intimacy.
- Altman has framed the ban on erotic content as moral overreach, writing on X, "We aren't the elected moral police of the world." Yet his own advisers' clear objections, unresolved age-filter problems, and repeated delays show how complicated "treating adults like adults" can be in practice.

Why crypto readers should care

- For crypto platforms and projects exploring AI-driven user experiences, the saga highlights two lessons: content moderation and age verification are regulatory and liability issues, not just technical ones, and decentralization or open-source alternatives do not eliminate ethical and legal risk.
- As AI features become monetizable and integrated into social and financial systems, firms in both AI and crypto will need robust governance, auditability, and safety-first design to avoid costly backlash.

OpenAI has said nothing beyond the Journal's reporting and provided no launch timeline as of the latest update.

Editor's note: this story was updated to include OpenAI's response to Decrypt.