Anthropic’s Mythos is forcing crypto security to evolve — fast
Anthropic’s new AI model, Mythos, is reshaping how the crypto industry thinks about risk. Built to act like an adversary, Mythos doesn’t just hunt for single bugs; it maps how small weaknesses can be chained together across systems to produce real-world exploits. That capability is exposing a blind spot in decentralized finance: the infrastructure that sits behind smart contracts.
From code to keys: the new perimeter of risk
For years, DeFi security has centered on smart contracts: audits, bug bounties, and taxonomies of common exploits. Mythos flips that focus outward. Security leaders now warn that the riskiest attack vectors are often outside contract code — in key management systems, signing services, bridges, oracle networks and the cryptographic glue that ties these pieces together.
“The bigger risks sit in infrastructure,” says Paul Vijender, head of security at risk firm Gauntlet. “When I think about AI-driven threats, I’m less concerned about smart contract exploits and more focused on AI-assisted attacks against the human and infrastructure layers.”
That concern isn’t theoretical. Recently, web-infrastructure provider Vercel disclosed a breach that may have exposed customer API keys, forcing crypto teams to rotate credentials and audit deployments. Vercel traced the intrusion to a compromised Google Workspace connection via the third-party AI tool Context.ai — a reminder of how human workflows and third-party services can open doors.
AI as simulated adversary — and stress tester
Mythos belongs to a growing class of AI systems designed to simulate attackers rather than simply flag known vulnerabilities. By modeling how protocols interact, these tools can surface multi-step exploit chains and infrastructure weaknesses that traditional audits rarely touch. That capability has attracted attention beyond crypto: banks such as JP Morgan are treating AI-driven cyber risk as systemic, and reportedly exploring tools like Mythos for stress testing. CoinDesk reported that Coinbase and Binance have also approached Anthropic about testing with the model.
“AI models are especially valuable for multi-step exploit chains that historically only get discovered after money is lost, and for infrastructure-layer vulnerabilities that traditional audits never touch,” Vijender says.
Composability multiplies the danger
DeFi’s composability — the very feature that enables innovation and capital efficiency — also creates pathways for cascading failures. Protocols share liquidity, oracles and integrations in ways that are hard to fully map. A minor flaw in one service can propagate, becoming a critical attack vector across multiple protocols. Recent bridge incidents, including an exploit that enabled an attacker to mint large amounts of bridged tokens, highlight how cross-system verification failures can lead to huge losses.
Without AI, these long dependency chains are difficult and time-consuming to trace. With AI, the same chains can be mapped and exploited at scale, converting isolated bugs into systemic failures.
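The dependency-tracing idea above can be sketched in a few lines. The snippet below is a toy model, not any real protocol graph: the protocol names (`PriceOracle`, `LendingProtocol`, and so on) are hypothetical, and the edges simply say "X depends on Y." Given one compromised component, it enumerates every chain of protocols exposed through shared dependencies — the kind of multi-step mapping that is tedious by hand but trivial to automate.

```python
from collections import defaultdict

# Toy dependency graph (hypothetical protocol names).
# An edge "A depends on B" means a failure in B can propagate to A.
DEPENDS_ON = {
    "LendingProtocol": ["PriceOracle", "BridgeX"],
    "DexAggregator": ["PriceOracle"],
    "YieldVault": ["LendingProtocol", "DexAggregator"],
}

def exploit_paths(compromised: str, graph: dict) -> list:
    """Enumerate chains of protocols reachable from one compromised component."""
    # Invert the graph: who is exposed if `compromised` fails?
    exposed_by = defaultdict(list)
    for proto, deps in graph.items():
        for dep in deps:
            exposed_by[dep].append(proto)

    paths = []
    def walk(node, path):
        for upstream in exposed_by.get(node, []):
            if upstream in path:  # avoid cycles
                continue
            paths.append(path + [upstream])
            walk(upstream, path + [upstream])
    walk(compromised, [compromised])
    return paths

# A single oracle flaw fans out into four distinct exposure chains.
for p in exploit_paths("PriceOracle", DEPENDS_ON):
    print(" -> ".join(p))
```

Even in this four-node toy, one oracle weakness yields exposure chains reaching protocols that never integrate the oracle directly — which is exactly why long dependency chains turn isolated bugs into systemic risk.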
Defenders adapt: AI for offense and defense
Industry voices frame Mythos less as a turning point than as an acceleration of an already adversarial environment. “Web3 is no stranger to well-funded and motivated adversaries,” Stani Kulechov, founder of Aave Labs, told CoinDesk. “AI models represent an evolution in the tools used to achieve exploits.”
Aave, Gauntlet and other firms are responding by moving beyond the traditional “audit-then-deploy” model. The new playbook includes continuous auditing, real-time simulation, and designing systems under the assumption that breaches will happen. Aave has already integrated AI into simulations and code reviews to complement human auditors. “We take an AI-first approach where it adds clear value,” Kulechov said, “but it complements, rather than replaces, human-led auditing.”
Hayden Adams, CEO of Uniswap Labs, sees the same duality. “AI gives builders better ways to stress test and harden systems,” he says. Over time he expects a widening gap: projects that prioritize security and use AI to harden systems will be more resilient, while others will be increasingly exposed.
The new security reality
The arrival of AI-driven adversaries compresses timelines. Attacks can be conceived and refined at machine speed, while traditional defenses — periodic audits and manual monitoring — operate at human speed. The result: security is becoming a continuous, adaptive process rather than a one-time checkbox.
In short, Mythos is accelerating a long-simmering shift: DeFi can no longer treat security as primarily about eliminating code bugs. It must assume vulnerabilities will be continuously rediscovered and recombined — and invest in tools, processes and AI-driven defenses that operate at the same velocity as the threats.
Bottom line: AI amplifies both attack and defense. Projects that adapt and adopt these tools will harden over time; those that don’t will face growing systemic risk.