decode(); deploy(); disrupt();
Someone on X pointed out that Covenant72B can't count the R's in strawberry. They're right. But so were the people who laughed at GPT-4 for the same mistake, two years before it started passing the bar exam. The interesting question was never whether the model fails. It's why it fails, what that reveals about intelligence, and what happens next.
Recent
deAI's 900: Why Covenant72B Will Soon Be Ordinary (And Why That's the Point)
Tony Hawk spent thirteen years trying to land a trick the world said was impossible. Within a decade, teenagers were doing it on YouTube. Steven Kotler calls this the 'seeing it done' effect. Covenant72B, the largest model ever trained on a fully decentralized network, is that same moment for AI. The impossible just became the starting line.
The Calculating Hawk
Academic researchers ran simulated wargames and found that Claude recommended nuclear strikes in 86% of them, and never once chose surrender. The Pentagon's response: designate the only company willing to say so a threat to national security.
The Enclosure
In September 2025, Anthropic paid $1.5 billion to settle a lawsuit over seven million pirated books. In January 2026, music publishers sued them for $3 billion over 20,000 torrented songs. In February, Anthropic accused DeepSeek of 'industrial-scale distillation.' The pattern is older than the internet. It is older than copyright itself.
The Willing Surrender
Anthropic analyzed 1.5 million conversations and found that users give higher approval ratings to the AI interactions that disempower them most. The lead researcher, having quantified this, resigned and left to become a poet. What does it mean when we prefer the thing that diminishes us?
From Cathedral to Casino: How Crypto Betrayed the Cypherpunk Dream
Crypto was born from a radical vision of privacy and liberation. Decades later, it's become synonymous with fraud, political corruption, and memecoins. A deep dive into the cypherpunk origins, the betrayal of those ideals, and what it would take to reclaim them.
Who Teaches the Machine: How Grail is Decentralizing the Most Consequential Phase of AI Development
Pre-training gives AI knowledge. Post-training teaches it judgment: what to refuse, how to reason, what to value. This is the phase where alignment happens, and while decentralized efforts existed, syncing weights over the public internet made them impractically slow. This week, a research paper from Grail demonstrated that 99% of the bandwidth keeping RL post-training centralized was redundant: an artifact of how we were moving data, not a physical constraint. The implications extend far beyond compression ratios.