April 24, 2026 ChainGPT

Altman: Anthropic’s 'Fear-Based' Claude Mythos Pitch Risks Centralized AI Control, Threat to Crypto

OpenAI CEO Sam Altman has accused Anthropic of using “fear-based marketing” to promote its powerful new AI model, Claude Mythos, arguing the company is framing the model’s risks in ways that could justify concentrating control of advanced AI among a small set of actors.

Speaking on the Core Memory podcast with tech journalist Ashlee Vance, Altman said legitimate safety concerns exist but warned that rhetoric about catastrophic models can be weaponized to push for restrictive access. “You can justify that in a lot of different ways, and some of it’s real,” he said. “But if what you want is like ‘we need control of AI, just us, because we’re the trustworthy people,’ I think fear-based marketing is probably the most effective way to justify that.”

He added a blunt metaphor: “It is clearly incredible marketing to say: ‘We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million.’”

Why the fuss over Mythos

Anthropic unveiled Claude Mythos last month, and it quickly became a lightning rod. Tests indicate Mythos can autonomously find software vulnerabilities and simulate complex, multi-stage cyber operations, behavior that has drawn attention from researchers, security firms and governments alike. In one test, Mythos reportedly identified hundreds of vulnerabilities in Mozilla’s Firefox. Anthropic says the model can speed defensive work by helping detect critical flaws faster, but it also acknowledges the potential for misuse.

Controlled rollout and industry split

Rather than a broad public release, Anthropic is distributing Mythos selectively through Project Glasswing, allowing a handful of organizations, including Amazon, Apple and Microsoft, to test the system. Anthropic has argued defenders should benefit from the technology first and has pledged resources to open-source security efforts.
That approach highlights a broader industry divide: some companies favor tightly controlled deployments to limit risk, while others push for wider access to accelerate research and understanding.

Measuring Mythos and pushback

Anthropic has conceded that many existing cybersecurity benchmarks are inadequate for gauging Mythos’ capabilities. A group of researchers, however, claimed last week to have reproduced Mythos-style findings using publicly available models, a sign that the barrier between restricted and public capabilities may be narrowing. Meanwhile, parts of the U.S. government have urged caution, or even a pause, on the use of systems like Mythos amid concerns about their potential applications in warfare and surveillance. Yet the National Security Agency reportedly began testing a preview version on classified networks.

Market odds and industry reaction

On the prediction market Myriad (owned by Decrypt’s parent company, Dastan), traders assigned a 49% chance that Claude Mythos would be released to the wider public by June 30, reflecting uncertainty about whether Anthropic will broaden access or keep the model tightly controlled.

Altman’s stance and OpenAI’s posture

Altman said to expect growing rhetoric about models “too dangerous to release” as capabilities rise, but he urged skepticism toward blanket claims. “There will also be very dangerous models that will have to be released in different ways,” he said. He argued Mythos could be valuable for cybersecurity and insisted OpenAI has a plan for releasing powerful capabilities responsibly.

He also pushed back against stories that OpenAI is pulling back on infrastructure spending. “People really want to write the story of pulling back… but very soon it will be again, like, ‘OpenAI is so reckless. How can they be spending this crazy amount?’” he said, affirming the company will continue expanding its compute capacity.
What this means for crypto and security

For the crypto sector, where smart contracts, wallets and infrastructure are frequent targets, advances in automated vulnerability discovery could be a double-edged sword: tools like Mythos may accelerate defenses and auditing, but they could also lower the bar for large-scale offensive exploits if access isn’t tightly controlled.

The debate over Mythos underscores a wider question facing the AI and security communities: who should get access to powerful models, and under what governance?