At Consensus in Hong Kong this February, Cardano founder Charles Hoskinson pushed back against a common worry: that leaning on hyperscalers like Google Cloud or Microsoft Azure threatens blockchain decentralization. His argument rested on advanced cryptography, specifically multi-party computation (MPC) and confidential computing, and on the idea of “cryptographic neutrality”: if the cloud can’t see the data, it can’t control the system.
That’s an attractive position, but it deserves closer scrutiny. MPC and Trusted Execution Environments (TEEs) are powerful tools, yet they don’t erase the core risk: concentration at the physical and operational layers.
MPC and TEEs reduce some attack surfaces, but they create others
MPC fragments secret material so no single party can reconstruct it, which mitigates the single-node compromise risk. But it also expands the security surface: coordination, communication channels, governance, and correct protocol implementation all become critical. The single point of failure doesn’t disappear — it becomes a distributed trust surface that must be correctly managed.
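To make that trade-off concrete, here is a minimal sketch of additive secret sharing, one common MPC building block. It is illustrative only: real protocols layer authenticated channels, verifiable sharing and secure multi-party arithmetic on top of this primitive.

```python
import secrets

PRIME = 2**127 - 1  # field modulus; an illustrative choice

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Every share is required; losing one party loses the secret."""
    return sum(shares) % PRIME

key = 0xC0FFEE
parts = share(key, n_parties=3)
assert reconstruct(parts) == key
# Any single share is uniformly random, so a host that sees one share
# learns nothing -- but channels, coordination, and correct protocol
# implementation are now all part of the trust surface.
```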
TEEs encrypt data during execution and limit exposure to cloud operators, but they rest on hardware assumptions — microarchitectural isolation, firmware integrity and flawless implementation. Academic research has repeatedly exposed side-channel and enclave vulnerabilities. TEEs narrow the security boundary compared with traditional cloud, but they don’t make it absolute.
Crucially, both MPC and TEEs typically sit on top of hyperscaler infrastructure. Even if cryptography prevents data inspection, an infrastructure provider that controls machines, bandwidth and regions still retains operational leverage: it can throttle throughput, shut down capacity, or apply policy interventions. Cryptography raises the bar for certain attacks, but it doesn’t remove infrastructure-level failure modes.
You don’t need Layer 1 to run everything — but you do need verifiable results
Hoskinson is right that Layer 1 chains weren’t built to run AI training loops, high-frequency trading engines, or enterprise analytics. L1s exist to maintain consensus, verify state transitions and provide durable availability. The modern reality is that heavy computation increasingly happens off-chain; what matters is that its results are provable and verifiable on-chain. That’s the premise behind rollups, zero-knowledge systems and verifiable compute networks.
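A rough sketch of that division of labor follows, with a hash commitment standing in for a real proof system (a production design would use a SNARK or STARK verifier; `run_job` and `Verifier` are hypothetical names for this example):

```python
import hashlib, json

def _digest(inputs: dict, result: dict) -> str:
    payload = json.dumps({"in": inputs, "out": result}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_job(inputs: dict) -> tuple[dict, str]:
    """Off-chain: heavy work on whatever infrastructure is available."""
    result = {"sum": sum(inputs["values"])}  # placeholder workload
    return result, _digest(inputs, result)   # stand-in for a ZK proof

class Verifier:
    """On-chain analogue: a cheap check that never re-runs the heavy work.
    A real proof system would also establish that the result was computed
    correctly; this hash only binds the claim to the inputs."""
    def verify(self, inputs: dict, result: dict, artifact: str) -> bool:
        return artifact == _digest(inputs, result)

inputs = {"values": [1, 2, 3]}
result, proof = run_job(inputs)  # runs on any provider, or none
assert Verifier().verify(inputs, result, proof)
```

The structural point survives the simplification: verification stays cheap and decentralized even when execution does not, which is exactly why the ownership of the execution layer matters.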
The debate should therefore center on who controls the off-chain execution and storage infrastructure that feeds verification, not whether an L1 can shoulder global compute. If off-chain computation relies on centralized cloud providers, you inherit centralized failure modes: settlement may stay decentralized in theory, but the path to producing valid state transitions is concentrated in practice.
Hardware is the new battleground for decentralization
Cryptographic neutrality is vital, but cryptography runs on hardware. That physical layer determines participation economics: who can afford to run nodes or provers, who can scale, and who is vulnerable to censorship. If hardware production, distribution and hosting remain concentrated, protocol-level neutrality becomes fragile under real-world pressure. A neutral protocol on concentrated infrastructure is neutral in theory but constrained in practice.
This is why the community should focus on diversifying hardware ownership and building infrastructure that aligns economically with protocol participants. Without that, a small set of providers can exert outsized influence by rate-limiting workloads, restricting regions, or imposing compliance gates.
Hyperscalers are efficient — but specialization wins for heavy, predictable workloads
It’s tempting to treat hyperscalers as the enemy. They’re not. They offer flexibility, global reach and robust enterprise tooling. But they optimize for generality and elasticity, which carries cost overhead. Zero-knowledge proving and verifiable compute are deterministic, compute-dense, memory-bandwidth constrained, and pipeline-sensitive. Those workloads reward specialization.
A purpose-built proving network that vertically integrates hardware, prover software, circuit design and aggregation logic can outperform hyperscalers on the metrics that matter for proofs: proofs per dollar, proofs per watt, and proof latency. For steady, high-volume tasks, dedicated clusters with sustained throughput often beat elastic, multipurpose cloud instances, both economically and technically.
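A back-of-envelope illustration of the proofs-per-dollar point; every number below is an assumption chosen for the example, not a measured benchmark:

```python
def cost_per_proof(hourly_cost_usd: float, proofs_per_hour: float) -> float:
    return hourly_cost_usd / proofs_per_hour

# Hypothetical elastic cloud instance: flexible, but pays a generality tax.
elastic = cost_per_proof(hourly_cost_usd=12.0, proofs_per_hour=300)
# Hypothetical dedicated prover cluster: amortized hardware, tuned pipeline.
dedicated = cost_per_proof(hourly_cost_usd=7.0, proofs_per_hour=450)

print(f"elastic:   ${elastic:.4f} per proof")    # $0.0400
print(f"dedicated: ${dedicated:.4f} per proof")  # $0.0156
```

At sustained volume that gap compounds, which is why specialization wins before proofs per watt or latency even enter the picture.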
A pragmatic way forward
Hyperscalers should be part of a resilient architecture — used for burst capacity, geographic redundancy, and edge distribution — but they shouldn’t be the foundation of systems that generate and persist the critical artifacts used for verification. Settlement, final verification, and availability of proof artifacts must remain intact even if a cloud region fails or a vendor exits a market.
Decentralized storage and compute — operated by economically aligned participants and structured to be hard to switch off — offer a more robust alternative for preserving decentralization in practice. If a hyperscaler disappears, a properly designed network should slow, not grind to a halt, because core functions are distributed across many owners rather than rented from a single chokepoint.
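A toy model makes the graceful-degradation claim tangible. It assumes independent operator failures, which real networks only approximate, and the probabilities are illustrative:

```python
def p_total_outage(n_operators: int, p_down: float) -> float:
    """Chance that every independent operator is down at the same time."""
    return p_down ** n_operators

# One provider as the chokepoint: a 1% outage chance halts everything.
print(p_total_outage(1, 0.01))   # 0.01
# Fifty independent operators: expected capacity dips about 1%, but a
# simultaneous total outage is vanishingly unlikely.
print(p_total_outage(50, 0.01))  # 1e-100
```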
Bottom line: cryptography is necessary but not sufficient. To honor crypto’s decentralization ethos, the industry must pair advanced cryptographic techniques with diversified, incentive-aligned hardware and purpose-built compute networks. Only then will we have systems that are not only provably fair on paper, but resilient and permissionless in reality.