AI and the War Machine

The mask has finally slipped.

On October 21, 2025, Dario Amodei—CEO of Anthropic, the company that once promised to build "AI to serve humanity's long-term well-being"—published a statement that would have been unthinkable just two years ago. He endorsed Trump administration officials, adopted Pentagon terminology ("Department of War" instead of "Department of Defense"), and framed AI development as a zero-sum competition where American dominance is the paramount goal.

The statement reveals $200 million in military contracts to develop "frontier AI capabilities that advance national security"—a sanitized description of technologies that will assist in targeting, surveillance, and intelligence analysis. Through partnerships with Palantir Technologies, Claude models now operate in classified environments, processing data up to the "Secret" level and supporting operations that explicitly include "legally authorized foreign intelligence analysis" and "identifying covert influence campaigns."

This represents the culmination of a systematic betrayal. Companies founded explicitly to benefit "all of humanity" are now building weapons systems, processing classified intelligence, and adopting the language of empire. Meanwhile, they invoke "AI safety" to justify regulatory capture that further entrenches their monopolies. The decentralized AI movement offers a fundamentally different vision: intelligence as a public good, permissionlessly accessible, transparently governed, and owned by everyone rather than controlled by a handful of defense contractors masquerading as humanitarians.

Amodei's statement exposes the nationalist turn

Most tellingly, Amodei positions Anthropic as "the only frontier AI company to restrict selling of AI services to PRC-controlled companies," casting business decisions as patriotic acts while emphasizing the existential threat from China. This is textbook AI nationalism: framing technology development as civilizational competition where democracies must achieve "decisive strategic and military advantage over their adversaries."

The rhetoric has completely inverted. Where Anthropic once promised to build "AI to serve humanity's long-term well-being," it now touts being "the fastest-growing software company in history," with revenue exploding from $1 billion to $7 billion in nine months. Where it once delayed releasing Claude for safety testing despite competitive pressure, it now rapidly deploys to classified military networks with minimal public discussion of the unique risks. Where it structured itself as a Public Benefit Corporation to prioritize mission over profits, leaked memos reveal Amodei seeking investment from the authoritarian monarchies of the UAE and Qatar, because refusing money from "bad people" is "an impossibly high bar."

The cascade is complete across all major labs

Anthropic isn't an outlier—it's following a pattern established across every major AI company in a stunning cascade running from January 2024 to today.

OpenAI removed its explicit ban on "military and warfare" from usage policies in January 2024, quietly deleting the prohibition without announcement. By December 2024, it had partnered with Anduril Industries for battlefield drone defense. In June 2025, it secured a $200 million DoD contract and launched "OpenAI for Government," with two executives sworn in as military reservists. The company founded as a nonprofit to ensure AGI "benefits all of humanity, unconstrained by a need to generate financial return" is now a defense contractor supporting "warfighting and enterprise domains."

Meta reversed its military prohibition in November 2024, opening Llama models to U.S. defense agencies and expanding to NATO allies by September 2025. Partners now include Lockheed Martin, Palantir, Anduril, and the full ecosystem of defense contractors. CEO Mark Zuckerberg, once challenged by privacy advocates, now positions Meta as a "proud American company" supporting "the most technologically advanced military in the world."

Google faced employee revolts in 2018 when 4,000 workers protested Project Maven—Pentagon AI for analyzing drone surveillance footage. The company backed down, declined to renew the contract, and adopted AI Principles that included not using AI for weapons. By July 2025, Google had quietly secured its own $200 million DoD contract. Nearly 200 DeepMind employees signed a letter protesting the return to military work, but they were ignored. Palantir took over Project Maven, which the Pentagon credits with targeting support for 2024 U.S. airstrikes in Iraq, Syria, and Yemen.

xAI, Cohere, and Scale AI complete the picture. Every frontier AI lab now sells to the military. Every one has rationalized it as supporting "democratic values." Every one frames it as necessary to counter China. Every one has subordinated its stated mission to national security partnerships and defense revenues.

The talking points are identical: China threat, defensive use, democratic values. The timeline is coordinated: major contracts were announced within weeks of one another in mid-2025. The intermediaries are the same: Palantir and Anduril provide the bridge into classified environments. The financial motivation is transparent: training frontier models costs hundreds of millions, and the Pentagon budget approaches $1 trillion, with half going to contractors.

Decentralized AI offers the philosophical alternative

The centralized AI model—concentrated in a handful of corporations, aligned with military-industrial interests, optimizing for national advantage—directly contradicts the original vision of AI benefiting humanity. The decentralized AI movement provides the counter-narrative.

Bittensor articulates this most clearly: "To ensure that the supremely important commodity of intelligence is owned by everyone." Not owned by Anthropic. Not owned by OpenAI. Not owned by the Pentagon. Everyone.

This philosophy has roots in Bitcoin's demonstration that critical infrastructure can be governed permissionlessly through distributed consensus rather than centralized control. Just as Bitcoin proved you don't need Chase or the Federal Reserve to have a functioning monetary system, decentralized AI proves you don't need Google or the DoD to develop intelligence systems.

The core principles stand in direct opposition to the centralized model:

Open ownership over open source: Beyond sharing code, deAI distributes control and governance to the community. Centralized labs call themselves "open" while maintaining proprietary control over models, training data, and decision-making. This is open-washing—claiming openness while retaining power.

Permissionless innovation: Anyone can contribute compute, develop models, and earn tokens based on value provided—no credentials, no approval, no gatekeepers. As Bittensor states: "Anyone can join, no questions asked, and contributions get paid directly and continuously, no contracts. No HR department—this is capitalism in its purest form." Compare this to Anthropic requiring military security clearances and Palantir partnerships to access frontier capabilities.

Transparency and auditability: Decentralized systems make training data, model architecture, and decision processes inspectable. Centralized AI operates behind APIs and classification barriers. Which model do you trust more—one you can audit, or one processing your data in a classified intelligence facility?

Democratic distribution of value: Economic benefits flow to contributors, not concentrate in corporate hands. OpenAI's $300 billion valuation accrues to shareholders and investors. Bittensor's TAO tokens distribute value to the network of participants providing intelligence.

Resistance to centralized control: Single points of failure, censorship, and monopolistic control are fundamental threats. When OpenAI's API goes down, every dependent application fails. When the U.S. government can pressure one company, civil liberties vanish. When five corporations control AI, we get exactly what we're seeing now: alignment with power rather than people.

Hugging Face CEO Clément Delangue testified to Congress that "open science and open-source AI are critical to incentivize and are extremely aligned with American values" because they "create a safer path for development of the technology by giving civil society, nonprofits, academia and policymakers the capabilities they need to counterbalance the power of big private companies." This is the deAI position: democratizing access counterbalances concentrated power.

The philosophical divide is between those who believe intelligence is too important to be controlled by corporations, and those who believe massive resources require massive organizations. Between those who think transparency and community governance ensure accountability, and those who think professional oversight in closed systems ensures safety. Between those who see permissionless innovation as the path to progress, and those who see it as dangerous chaos.

Bittensor frames it as an existential choice: "We are approaching a forking point for mankind; down one road is the centralization of power and resources, in large regulated industries... Down the other road is the potential for sharing these resources through open protocols, via technological foundations, which enable global participation and ownership."

We're watching in real-time which road the centralized labs have chosen.

Historical parallels warn of militarization's costs

This isn't the first time transformative technologies became militarized with promises of security, only to delay beneficial applications, create global inequalities, and ultimately fail at their stated security goals.

GPS spent 17 years degraded for civilians through "Selective Availability"—intentional signal corruption to prevent adversaries from using accurate positioning. From 1983 to 2000, civilian accuracy was limited to ~100 meters versus ~15 meters for military. This delayed the entire ecosystem of location-based innovation: precision agriculture, smartphone navigation, autonomous vehicles, location services that generate hundreds of billions in economic value annually. When President Clinton finally ended Selective Availability in 2000, accuracy improved 10x overnight and innovation exploded. The security justification proved hollow—adversaries simply developed alternative systems (GLONASS, Galileo, BeiDou), while civilians paid the price in delayed progress.

Cryptography export controls in the 1990s treated encryption software as munitions, limiting exports to 40-bit key lengths (easily broken) while 128-bit encryption was technically feasible. The stated goal was preventing adversaries from obtaining strong encryption. The actual result: U.S. software companies lost billions in sales to foreign competitors without export restrictions, domestic users received weakened security, and adversaries developed their own cryptography anyway. A 1996 survey identified 1,181 foreign cryptographic products—the controls failed completely at their security objective while successfully handicapping the U.S. tech industry. When restrictions finally lifted in 2000, strong encryption enabled secure e-commerce, online banking, and the modern internet economy.

Nuclear technology remains classified 80 years after the Manhattan Project, creating permanent global inequality between nuclear "haves" and "have-nots." The strictest secrecy in human history couldn't prevent the Soviet Union from developing weapons within four years. Meanwhile, beneficial applications in medicine and power generation were delayed by decades of military control and secrecy. Classification didn't work—it just created a two-tier world.

The Internet itself originated as ARPANET, a Pentagon project, but its massive civilian benefits were unplanned byproducts that emerged only after transitioning to civilian control. The military intended a resource-sharing network for defense researchers; it got a global communication revolution when freed from military constraints. The value of civilian applications—e-commerce, social connection, information access, remote work—vastly exceeds military applications, and emerged precisely because development moved away from the military control model.

The pattern is consistent: militarization delays beneficial use, fails to prevent adversary development, harms domestic industry competitiveness, and creates lasting inequalities. Controls work best on law-abiding parties who pose the least threat. Sufficiently motivated actors—nation-states, well-funded organizations—develop indigenous capabilities within years. Knowledge, once discovered, cannot be permanently restricted in a connected world.

Every lesson from these historical cases applies directly to AI. Export controls on AI algorithms and models would repeat the cryptography mistake—handicapping U.S. companies while failing to prevent foreign AI development. Classification of AI research would slow beneficial applications and AI safety work precisely when speed is critical. Restricting foreign researchers would reduce U.S. innovation capacity. Framing AI as a zero-sum national competition will motivate adversaries to develop alternatives while reducing international cooperation on governance.

The technologies that generated the most human benefit did so after restrictions lifted, not while controls were in place. GPS after Selective Availability ended. Encryption after export controls lifted. The Internet after commercialization beyond military control. AI is heading in the opposite direction—toward greater restriction, nationalist competition, and military control. History suggests this path leads to delayed progress, global inequality, and ultimate failure at stated security objectives.

The contradictions reveal the hypocrisy

The gap between rhetoric and reality has never been wider, and the contradictions expose the bankruptcy of the centralized model's justifications.

"AI safety" becomes weapons development: Anthropic built its brand on Constitutional AI, harmlessness, and safety-first development. It delayed releasing Claude in 2022 for additional safety testing despite competitive pressure. Now it rapidly deploys to classified military networks processing intelligence data with minimal public discussion of unique risks. The company that exists because its founders thought OpenAI was moving too fast toward commercialization now operates in "time-sensitive situations" supporting military operations. When "safety" means safe for the Pentagon to use in targeting decisions, the term has lost all meaning.

"Benefiting all humanity" becomes American nationalism: OpenAI's charter explicitly states the mission to "ensure that artificial general intelligence benefits all of humanity." Amodei speaks of "humanity's long-term well-being." These are universal, cosmopolitan commitments. Amodei's October 2025 statement mentions "American" or "America" 15 times. It endorses specific politicians. It uses nationalist, competitive framing where AI development is about securing advantage over adversaries. There is no coherent way to reconcile "all of humanity" with "American AI leadership" in zero-sum competition with China. Either the mission is universal or it's nationalist. They chose nationalist while pretending otherwise.

"Democratic values" while seeking authoritarian money: Amodei's statement emphasizes that "democracies must work together to ensure AI development strengthens democratic values globally." Yet leaked memos from July 2025 reveal Amodei told staff Anthropic was seeking investments from UAE and Qatar—absolute monarchies with documented human rights abuses. Amodei acknowledged this would "likely enrich dictators" but rationalized it as necessary because refusing money from "bad people" is "an impossibly high bar." So: democratic values as marketing, authoritarian money in practice. The contradiction is absolute.

Safety rhetoric enables regulatory capture: The centralized labs invoke AI safety to justify regulations that would create enormous barriers to entry—expensive testing, documentation, compliance costs that favor large incumbents. Yet safety testing is actually cheap (approximately $235,000 versus $191 million for training). David Sacks, White House AI Czar, accused Anthropic of running a "sophisticated regulatory capture strategy based on fear-mongering." Whether or not that's fair to Anthropic specifically, the pattern is clear: "safety" becomes the justification for rules that entrench monopolies. Meanwhile, actual present harms—labor exploitation, bias, misinformation, worker displacement—receive far less attention than hypothetical AGI extinction scenarios.

"Open source" without actual openness: Meta positions Llama as "open source" and frames this as democratizing AI while simultaneously opening it to military contractors. But Llama isn't open-source by any meaningful definition—it's developed centrally at Meta using proprietary methods and data, then released as "open weights" without training data or development transparency. This is open-washing: claiming the virtue of openness while maintaining centralized control. True openness would mean transparent training data, community governance, and distributed development—the model that platforms like Hugging Face and Bittensor actually provide.

Independence from commercial pressure becomes total integration: Anthropic was founded explicitly because OpenAI had become too commercially compromised by Microsoft's $1 billion investment. The Amodei siblings and other senior researchers left to build an independent organization prioritizing mission over profits. Fast forward to 2025: Anthropic has taken $8 billion from Amazon, $2 billion from Google, and $200 million from the Pentagon. It's partnered with Palantir, the surveillance company founded by Peter Thiel that's synonymous with military-industrial integration. It's structured deals that commit it to Amazon's cloud infrastructure and chips. There is no independence—it's totally embedded in the same commercial and military-industrial structures it was founded to avoid.

These aren't minor inconsistencies. They're fundamental contradictions that reveal the original missions were either never serious, or have been completely abandoned in favor of commercial growth and political alignment with power.

AI colonialism and the global equity gap

The nationalist and militarized turn in AI development doesn't just contradict universal benefit rhetoric—it actively reproduces colonial patterns of extraction, exploitation, and inequality.

Research from MIT Technology Review's AI Colonialism series and academic scholars documents how AI development mirrors historical colonialism through resource extraction, labor exploitation, and infrastructure dominance concentrated in the Global North.

Resource extraction operates through data harvesting—companies scrape data globally, particularly from the Global South, enriching wealthy nations while externalizing costs. This is the new resource colonialism: data as the oil of the 21st century, extracted from populations worldwide to train models that serve Western commercial and military interests.

Labor exploitation appears in data labeling and content moderation. During Venezuela's economic crisis, data-labeling firms paid poverty wages with no protections. Workers in Kenya, the Philippines, and across Southeast Asia provide content moderation at exploitative rates. Western firms outsource to regions with weak labor laws, creating a race to the bottom. Leaked documents reportedly show Facebook devoting 87% of its misinformation-moderation budget to the United States, leaving just 13% for the rest of the world—the Global South provides the data and labor while receiving inadequate protection.

Infrastructure dominance concentrates power. AI model development, cloud infrastructure, funding, and decision-making center in the Global North—primarily the U.S., with some in China and Europe. Five corporations (Google, Microsoft, Amazon, Meta, OpenAI/Microsoft) control the compute infrastructure, data access, and deployment platforms. This creates permanent dependency reminiscent of neocolonial relationships where multinational corporations dominated post-independence economies.

Algorithmic bias embeds Western values. AI systems trained on Global North data and values deploy worldwide without contextual adaptation. Facial recognition trained on white faces fails on darker skin tones. Language models prioritize English and European languages. Content moderation over-removes legitimate content in non-Western contexts while failing to remove harmful material quickly. Scholar Sabelo Mhlambi notes this makes it "almost impossible for people to determine their own futures on their own terms"—the definition of colonial control.

Top-down development excludes local participation. Projects like crop disease detection in Africa are "designed for communities, not necessarily with them," reinforcing extractive innovation models. The Decolonial AI Manifesto argues current AI development excludes diverse objectives and values, concentrating problem definition and funding among "western-educated engineers" and "Silicon Valley venture capitalists."

Now add militarization and nationalism to this already-colonial structure. When AI development becomes about "American AI leadership" and "strategic advantage over adversaries," the inequality deepens. Export controls would restrict Global South access to frontier capabilities. Military applications prioritize U.S. national security over global human needs. Intelligence services use AI for surveillance that disproportionately targets developing nations. The nationalist frame explicitly positions AI as a tool of geopolitical competition rather than shared human progress.

AI Now Institute researchers argue that "AI nationalism cannot be understood without careful attention to how racism and imperialism underpin the AI arms race... it is a fundamental contest over racial and civilisational superiority, one deeply rooted in previous histories of colonial violence and racial capitalism." The competitive framing against China specifically reproduces "Yellow Peril" racial tropes. The presumption that Western democracies should lead AI development embeds civilizational hierarchy.

You cannot simultaneously claim "AI for everyone" while building systems that extract value from the Global South, concentrate power in Western hands, serve nationalist military interests, and reproduce colonial hierarchies. The contradiction is total.

The effective accelerationism capture of discourse

The debate between "effective accelerationism" (e/acc) and "AI safety" creates a false binary that obscures the real issue: who controls AI and in whose interests.

E/acc advocates like Marc Andreessen and Guillaume Verdon argue for unrestricted technological acceleration, dismissing AI safety concerns as "doomerism" that would stifle innovation. They claim market forces and competition provide sufficient discipline. Andreessen's "Techno-Optimist Manifesto" lists among enemies: sustainability, social responsibility, trust and safety, and tech ethics. E/acc presents itself as anti-establishment, championing permissionless innovation against regulatory capture.

But look at what's actually happening: e/acc rhetoric serves incumbent power. When Andreessen advises Senate Majority Leader Chuck Schumer while opposing all regulation, when his firm invests heavily in defense tech companies while promoting "American Dynamism," and when the Trump administration's AI policy dismantles safety guardrails while accelerating military AI, e/acc functions as ideological cover for concentration, militarization, and corporate dominance.

The e/acc position that safety regulations would create barriers favoring incumbents contains truth. Safety testing requirements, documentation mandates, and compliance costs do favor large companies with legal teams and resources. This IS regulatory capture risk. But e/acc's solution—eliminate regulation entirely and let markets decide—ignores that market concentration is already happening through network effects, economies of scale, control of infrastructure, and capital requirements. The "free market" in AI has produced exactly what unregulated markets always produce: monopoly.

Meanwhile, AI safety advocates working at centralized labs like Anthropic invoke safety to justify their existence, secure funding, and advocate for regulations that happen to disadvantage competitors. When safety becomes a product differentiator and regulatory moat rather than a genuine commitment, it's been captured.

Both sides of this debate can be simultaneously right about the other's capture and wrong about the solution. E/acc correctly identifies safety rhetoric as potentially serving regulatory capture, but incorrectly assumes unregulated markets solve the problem. Safety advocates correctly identify real risks from powerful AI systems, but many work for organizations that instrumentalize safety for competitive advantage.

The decentralized AI position transcends this false binary. The problem isn't too much safety or too much acceleration—it's who decides, who benefits, and who governs. Vitalik Buterin's "defensive accelerationism" (d/acc) gestures at this middle path: pro-technology but recognizing that optimizing solely for profit may not lead to desirable outcomes. D/acc advocates for targeted development that prioritizes technologies making the world safer.

But deAI goes further: decentralize control itself. When intelligence is developed through permissionless networks, transparent models, and distributed governance, both the regulatory capture problem and the market concentration problem diminish. When anyone can contribute and value accrues to contributors, you don't need to choose between corporate-controlled "safety" and corporate-controlled "acceleration." When models are auditable and governance is democratic, safety emerges from transparency rather than trusting centralized labs.

The e/acc vs safety debate is a distraction from the real question: Will AI development serve narrow corporate and national interests, or will it be genuinely decentralized to serve humanity?

The path forward: decentralization as principle and practice

The centralized model has failed by its own standards. Labs founded to benefit humanity now serve nationalist military interests. Companies claiming to democratize AI have created monopolies. Organizations promising safety deploy to weapons systems. The rhetoric was always a marketing claim, not a genuine commitment—and now even the marketing has dropped the pretense.

Decentralized AI offers the alternative in both philosophy and technical architecture:

Bittensor's subnet model creates permissionless markets for intelligence where contributors earn rewards based on value provided, not credentials or corporate affiliation. The TAO token distributes ownership to participants. Yuma Consensus enables decentralized validation of AI quality without centralized authority. Anyone can build, contribute, earn—no gatekeepers, no military clearances, no Palantir partnerships required.
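The clipping idea behind this kind of consensus can be illustrated with a toy model. The sketch below is not Bittensor's actual Yuma Consensus implementation (the real mechanism and its parameters live in the chain's source code); it is a simplified, hypothetical stake-weighted scoring scheme showing the core intuition: validators score miners, scores above the stake-weighted consensus are clipped so a minority of stake cannot pump a colluding miner, and rewards are distributed from the clipped scores.

```python
import numpy as np

def consensus_rewards(weights, stake, kappa=0.5):
    """Toy stake-weighted consensus (illustrative, not Bittensor's algorithm).

    weights: (V, M) matrix; row v is validator v's scores over M miners
    stake:   (V,) validator stake, used to weight each validator's opinion
    kappa:   fraction of total stake that must support a score for it to count
    """
    stake = stake / stake.sum()                 # normalize stake to fractions
    _, M = weights.shape
    consensus = np.zeros(M)
    for m in range(M):
        col = weights[:, m]
        order = np.argsort(-col)                # sort this miner's scores descending
        cum = np.cumsum(stake[order])           # cumulative stake behind each score
        # consensus score: the highest score backed by at least `kappa` of stake
        consensus[m] = col[order][np.searchsorted(cum, kappa)]
    clipped = np.minimum(weights, consensus)    # clip scores above consensus
    rewards = stake @ clipped                   # stake-weighted average of clipped scores
    return rewards / rewards.sum()              # normalize to an emission split

# Example: the third validator tries to funnel 80% of its weight to a miner
# the majority scores at zero; clipping removes that miner's reward entirely.
w = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.1, 0.1, 0.8]])
s = np.array([40.0, 40.0, 20.0])
print(consensus_rewards(w, s))  # honest miners split rewards; the pumped miner gets nothing
```

The design choice this illustrates: because rewards follow the stake-weighted consensus rather than any single validator's opinion, collusion requires a majority of stake, not just a majority of participants.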

Hugging Face's platform hosts 3+ million models, datasets, and apps, providing infrastructure for 10 million+ AI builders. It champions transparency through Model Cards documenting limitations and biases. It enables researchers globally to audit, improve, and customize models. It demonstrates that open infrastructure can support an ecosystem far more diverse than any single corporation could produce.

The broader deAI ecosystem—including Basilica.ai (SN39), Lium.io (SN51), Chutes (SN56), io.net, Render Network, and Akash Network for decentralized compute; Templar (SN3), Nous Research, Prime Intellect, and Pluralis Research for decentralized model pre-training; with Macrocosmos's IOTA (SN9) and DSTRBTD (SN38) entering the race to frontier-level model training; Ocean Protocol and SingularityNET for data markets; various DAOs experimenting with community governance—shows the technical path exists. The infrastructure is being built. The alternative to corporate-military control exists, not as a hypothetical but as an active movement.

Even Chinese competitors like DeepSeek, Moonshot, Zhipu/Z.AI, Qwen, and other major players release open-source models but remain centralized and subject to government interference, making them suspect as well. At least their openness provides some transparency and accessibility to the broader AI community.

The principles that guide this movement stand in direct opposition to everything Anthropic, OpenAI, and the centralized labs now represent:

Intelligence as public good, not proprietary asset. Too important to be owned by corporations, too powerful to be controlled by governments, too valuable to be restricted by borders.

Permissionless participation over credentialed access. Contribution and merit determine influence, not institutional affiliation or security clearances.

Transparency and auditability over opacity and classification. Open models, open data, open governance—because accountability requires visibility.

Democratic distribution over monopolistic capture. Value flows to contributors, not concentrates in shareholder hands. Economic benefits distribute widely, not enrich defense contractors.

Global cooperation over nationalist competition. AI development as shared human project, not zero-sum race for military advantage.

Community governance over corporate control. Decisions made through transparent processes involving stakeholders, not behind closed doors by executives aligned with power.

This represents practical necessity, not philosophical idealism. The centralized model is producing exactly what concentrated power always produces: alignment with existing power structures (military-industrial complex), capture by incumbent interests (regulatory moats), reproduction of historical inequalities (AI colonialism), and betrayal of stated values (humanity's benefit → American dominance).

History teaches that technologies controlled by military-industrial interests delay beneficial applications, create lasting global inequalities, and ultimately fail at stated security goals while succeeding at concentrating power and profit. GPS, cryptography, nuclear technology—the pattern is consistent. AI is following the same trajectory unless we choose the other fork in the road.

Bittensor frames it as existential: centralization of power versus sharing resources through open protocols for global participation and ownership. That choice is live right now. Every dollar that flows to Anthropic's military contracts, every model deployed in classified intelligence systems, every nationalist frame that treats AI as a weapon of competition—these cement one path. Every contribution to decentralized networks, every open model released, every collaboration across borders—these build the alternative.

Dario Amodei's statement clarifies the stakes. The centralized labs have chosen. They will build AI for American empire, for military advantage, for corporate dominance. They will invoke safety while deploying to weapons systems. They will claim humanitarian missions while seeking authoritarian investment. They will promise to benefit humanity while serving national security apparatus.

The decentralized AI movement rejects this path entirely. Intelligence belongs to everyone or it will serve only power. There is no middle ground, no compromise between these visions. The centralized labs have made their choice clear. Now we must build the alternative—technically, economically, politically—before the concentration becomes irreversible.

The forking point is here. Which road will we take?


Disclosure: I am directly involved with and invested in several of the decentralized AI projects mentioned in this post, including Bittensor subnets and other initiatives. For full transparency about my involvement and investments, see my projects page. Any opinions expressed in this post are entirely my own and do not necessarily reflect those of my employer.

For more on Bittensor's vision of decentralized intelligence, visit bittensor.com. For community discussions and updates on decentralized AI development, join the Bittensor Discord.

This analysis builds on research from MIT Technology Review's AI Colonialism series, academic work on AI nationalism, and ongoing developments in the decentralized AI ecosystem.


Related Reading: Vote No on BOT-08: We Must Not Fund the Weaponization of Humanoid Robotics — Why I'm urging my robotics DAO to reject an investment in the only U.S. humanoid company openly building killer robots.