April 25, 2026 ChainGPT

Anthropic's Mythos Forces DeFi to Rethink Security: AI Exposes Infrastructure Risks

Anthropic’s new adversarial AI, Mythos, is doing more than worrying traditional tech and finance firms — it’s forcing the crypto world to rethink the fundamentals of security. For years, DeFi defenses have focused on smart contracts: audits, bug bounties, vulnerability catalogs and lessons learned from well-known exploits. Mythos flips that focus. Rather than hunting single-line bugs, it is engineered to simulate an attacker’s mindset — chaining small weaknesses across systems to reveal complex, multi-step exploits. That shifts the threat model from isolated code flaws to the less-visible infrastructure that ties the ecosystem together.

“The bigger risks sit in infrastructure,” says Paul Vijender, head of security at risk firm Gauntlet. “When I think about AI-driven threats, I’m less concerned about smart contract exploits and more focused on AI-assisted attacks against the human and infrastructure layers.” He points to components outside typical audit scopes: key management and signing services, bridges, oracle networks and the cryptographic plumbing between them.

Those concerns are already materializing. This month, web infrastructure provider Vercel disclosed a breach that may have exposed customer API keys, pushing multiple crypto projects to rotate credentials and re-check their code. Vercel traced the intrusion to a compromised Google Workspace connection via Context.ai, a third-party AI tool used by an employee — a reminder that human and supply-chain channels are attractive attack vectors.

Mythos belongs to a new class of AI tools designed to act as adversaries. Instead of flagging known bugs, they map how different systems and human processes interact, testing whether small gaps can be stitched together into real-world attacks.
That capability is attracting attention beyond DeFi: banks such as JPMorgan now consider AI-driven cyber risk systemic, and financial firms and exchanges — including Coinbase and Binance — have reportedly approached Anthropic to evaluate Mythos as a stress-testing tool. Early results from Mythos-like models have homed in on the kinds of backstage systems many audits miss: the tech that stores keys, the services that sign transactions, and the channels that pass messages between protocols.

Those weaknesses are particularly dangerous in a composable ecosystem. DeFi’s modularity — its shared liquidity, common oracles and layered integrations — is the source of both its innovation and its vulnerability. A small bug in one protocol can propagate across the network, as seen in the Hyperbridge incident, in which an attacker minted roughly $1 billion of bridged Polkadot tokens on Ethereum by exploiting cross-chain message verification.

“Composability is what makes DeFi capital efficient and innovative,” Vijender says. “But it also means a minor vulnerability in one protocol can become a critical exploit vector with contagion potential across the ecosystem.”

AI changes the calculus by making those interdependencies discoverable and exploitable at scale. The result is a shift from discrete hacks to systemic failures that can cascade.

Not everyone sees Mythos as a sudden revolution. For builders already living in an adversarial landscape, AI is an evolution of existing dynamics. “Web3 is no stranger to well-funded and motivated adversaries,” says Stani Kulechov, founder of Aave Labs. “AI models represent an evolution in the tools used to achieve exploits.” He argues that DeFi already runs at machine speed — smart contracts execute instantly and risk mechanisms operate autonomously — so AI mainly intensifies an environment that has long demanded relentless vigilance. Still, Kulechov acknowledges that Mythos surfaces classes of vulnerabilities human auditors have historically deprioritized.
That breadth matters when relatively small issues can be combined into larger attacks that erode trust and cause financial damage. As attackers gain speed, defenders are adapting their playbook. The traditional model — audit before launch, manual monitoring afterward — was built for human-paced threats. The new approach emphasizes continuous, AI-assisted security: real-time simulation, ongoing audits, and designing systems under the assumption that breaches will occur.

Gauntlet and Aave are already integrating AI into their defenses. “To defend against offensive AI, we will need to take an AI-centric approach where speed and continuous adaptation are essential,” Vijender says. Aave uses AI for simulations and code review alongside human auditors, treating automation as a force multiplier rather than a replacement.

That duality — AI for offense and defense — is why some builders view Mythos as a tool for hardening systems rather than only a menace. “We haven’t tested Mythos yet, but we’re genuinely interested in what it and tools like it can do for protocol security,” says Hayden Adams, CEO of Uniswap Labs. He predicts the gap will widen between projects that prioritize security and those that don’t: the former will be better able to stress-test and harden before launch, while the latter become obvious targets.

The practical takeaway for the crypto industry is clear: security is moving from a static checklist to a continuous, adaptive discipline. Vulnerabilities won’t disappear; they will be constantly rediscovered and recombined. The real defense lies in speed, automation and an AI-aware security posture that assumes attackers — and defenders — will increasingly use the same powerful tools.