The Silicon Valley Warlord

The Political Coming-of-Age of Corporate Power

A former Uber executive now controls DARPA, the Pentagon's AI office, and $200 billion in defense lending authority. In a revealing podcast, Emil Michael laid out the timeline: 20-30% of defense spending on autonomous weapons within a decade, robots as 'the new front line,' and an open door for startups that want to build the machines that kill. What remains for those of us who refuse to accept this trajectory?

In 2008, the political theorist Sheldon Wolin published Democracy Incorporated, a book that described something he called "inverted totalitarianism." In the classical version—Hitler's Germany, Stalin's Russia—the state seizes economic power. In the inverted version, economic power captures the state. Corporate executives don't abolish democracy; they operate through it, hollowing out institutions from within. The forms persist—elections, legislatures, courts—while the substance quietly migrates to boardrooms and quarterly earnings calls. Wolin called this process "the political coming-of-age of corporate power."

I found myself thinking about Wolin while listening to a recent podcast interview with Emil Michael.

Michael is the former Chief Business Officer of Uber, a company not known for gentle treatment of obstacles. He reportedly suggested digging up dirt on journalists critical of the company. He is now the Under Secretary of Defense for Research and Engineering, which means he oversees DARPA, the Chief AI Office for the Pentagon's 3 million employees, the Defense Innovation Unit in Mountain View, and the Office of Strategic Capital with its $200 billion in lending authority.

The interview, on the venture capital podcast No Priors, is revealing in ways that seem almost accidental.

What emerges over the course of the conversation is something like a blueprint for the next decade of American military technology—and, perhaps unwittingly, an illustration of how the relationship between Silicon Valley and the state has fundamentally changed.

The Consolidation

The interview opens with Michael describing his portfolio:

"I'm now responsible for DARPA... in the last few months I took over as chief AI—the chief AI office in the Department of War which with 3 million employees, the biggest organization with the biggest budget in the world... the Defense Innovation Unit which is based in Mountain View... and then the Strategic Capability Office... So all that has technology underneath it and the idea is to unify that across the department because we spend $150 billion a year on tech."

The scope is striking. DARPA—the agency whose funding helped create the internet, GPS, and much of modern computing—now reports to a former Uber executive once described as the company's "most scandal-ridden exec." The entire military's AI strategy has been unified under a single person who describes his management philosophy as being "impervious to barriers."

What's perhaps more notable than Michael himself is the atmosphere of the conversation. The interviewers—venture capitalists who have invested in defense technology—offer no pushback. They congratulate him. "It's been impressive in terms of the types of talent that you all have been able to recruit."

There's a word for what this represents, though it sounds almost too academic: capture. Not the dramatic kind, with jackboots and coercion, but the quiet kind—mutual benefit, shared assumptions, aligned incentives. The revolving door doesn't need to revolve when everyone is already on the same side.

The Language Shift

Notice what Michael calls his employer: the "Department of War."

Not Defense. War.

This language choice is deliberate. Dario Amodei used the same terminology in his October statement. Secretary Hegseth uses it. The euphemism era appears to be ending. America isn't defending anything. America is preparing to fight.

Michael frames it in explicitly competitive terms:

"The military buildup in China is the biggest military buildup in world history. And so there's a real urgency on our side to ensure that we are ahead, but we stay ahead."

The China threat functions as a mantra throughout the interview. It justifies everything: the urgency, the consolidation, the open door for startups, the $200 billion lending facility. The logic is circular: China is building up, therefore we must build up, therefore they will build up more. An arms race presented as defensive necessity.

History suggests where this leads. The naval arms race between Britain and Germany before World War I followed the same logic: each buildup justified the next, each escalation proved the need for counter-escalation, until the continent was primed for catastrophe. The nuclear arms race between the US and Soviet Union produced 70,000 warheads, enough to destroy civilization several times over, before anyone questioned whether more weapons actually meant more security.

Arms races have their own momentum. They create the threats they claim to prevent.

GenAI.mil: AI for Three Million Warfighters

The most concrete announcement is GenAI.mil, which Michael describes as deploying AI to "three million people" across the Pentagon:

"We had over a million people uniques use it in the last 30 days which is kind of awesome—you got one-third of the enterprise on one model."

One million unique users in thirty days. One-third of the largest organization on Earth running queries through military AI in a month. They started with Google's Gemini, are adding Grok from xAI, and are "seeing where we get to" with other models.

The architecture prevents data from flowing back to commercial providers, a reasonable security precaution that also means the military is building its own sovereign AI capability, increasingly independent of oversight or scrutiny from the companies that created the underlying models.

Michael frames the applications in three categories: "enterprise use cases," "intelligence," and "war fighting." The intelligence applications are revealing:

"We get a lot of intelligence from satellites, from all the things the United States does across the enterprise that previously used human analysts for. And so if you could improve the leverage of the human analyst by saying, here's 10 things we saw that you should look at, man, you're going to make that analyst way more efficient."

This is the targeting pipeline. AI identifies potential targets; humans approve strikes. The human remains "in the loop" technically, while the machine increasingly determines what the loop contains. This is how Palantir's Maven system works—the same system that provided targeting support for U.S. airstrikes in Iraq, Syria, and Yemen in 2024.

The pattern echoes Vietnam, where Robert McNamara's "systems analysis" approach promised to rationalize warfare through data. Body counts became metrics. Villages became coordinates. The abstraction made atrocity manageable. Fifty years later, we're building the same abstraction layer with better algorithms.

Robots as the New Front Line

Michael is candid about the timeline for autonomous weapons:

"In 10 years it wouldn't surprise me if 20-30% of the budget is spent on those kinds of systems which are also cheaper. So for 10-20-30% of the budget you get way more firepower than you would for the other systems."

Twenty to thirty percent of a trillion-dollar budget means two to three hundred billion dollars annually flowing to autonomous systems within a decade. The framing is economic: "cheaper," "more firepower." It reads almost like a startup pitch.

On drones specifically:

"Think of it as robots as the new front line... in a territorial battle, if you're fighting over land, you know, less costly in human life if you have robots fighting first and then before humans come in."

The phrase "less costly in human life" is doing significant work here. It refers, of course, to American life. The humans on the receiving end of robotic warfare are not absent from the equation—they're simply on the other side of it.

Michael points to Ukraine as a model:

"Without the drone warfare in that area of the world, you probably would have seen way more human casualties. It would—it's already devastating. It would have been way more devastating."

This is the humanitarian case for autonomous weapons: that they reduce total suffering by making war more precise and less lethal. It's worth taking the argument seriously, even if one is skeptical. But the evidence from drone warfare over the past two decades suggests a more complicated picture. Suffering doesn't disappear; it redistributes. It moves from soldiers to civilians misidentified as targets, to populations living under the constant presence of machines that can strike without warning, to the operators themselves—watching kills on screens in Nevada, developing their own particular forms of moral injury. The immediacy of the battlefield gives way to a longer, more diffuse aftermath.

There's a historical echo here. When Curtis LeMay's B-29s firebombed Tokyo in March 1945—killing more people in a single night than either atomic bomb—the logic was similar: air power saves American lives by ending wars faster. The civilian cost, being distant and statistical, becomes acceptable in ways that ground combat never would. Autonomous weapons represent something like the logical endpoint of this trajectory: killing without even the residual human connection of a pilot above the flames.

The Open Door

Perhaps the most significant moment in the interview concerns the shift in Pentagon procurement culture:

"It's crazy to me that SpaceX and Anduril and Palantir all had to sue the Department of War for their first contract. So the idea is you don't have to sue anymore. Come through the front door because people are not going to fight you. We're now excited about lower cost, faster, more sophisticated options."

The old military-industrial complex, whatever its flaws, had gatekeepers. The prime contractors—Lockheed, Raytheon, Northrop—controlled access. Startups, no matter how innovative, often had to litigate their way to a first contract. That friction is being deliberately removed.

Michael describes an "Arsenal of Freedom" tour: twenty defense companies visited in two weeks, a roadshow to signal that the door is open, the welcome mat laid out. "We're serious about the fitness and the expansion and health of the defense industrial base."

The venture capitalists interviewing him seem genuinely pleased:

"It's wonderful that you've been so early in this... defense technologies that has now become much more mainstream... it sounds ridiculous to say but it is acceptable and exciting to work on American defense now."

Acceptable and exciting. The phrase lingers. There was a time—not so long ago, within living memory of most people in tech—when defense work carried a stigma in Silicon Valley. Google employees walked out over Project Maven. Engineers worried about what their work would be used for. That cultural resistance, whatever remained of it, appears to be dissolving.

This is perhaps what Wolin's "inverted totalitarianism" looks like in practice: not jackboots and rallies, but funding rounds and podcast interviews; not forced compliance, but genuine enthusiasm; not ideology imposed from above, but incentives that make participation feel like the obvious choice.

The historical parallel is the aerospace industry after Sputnik. What had been a scattered collection of aircraft manufacturers became, within a decade, an integrated military-industrial complex—the very thing Eisenhower would warn "could endanger our liberties or democratic processes" in his 1961 farewell address. By the mid-1960s, companies like Boeing and Lockheed derived much of their revenue from military contracts. The defense establishment didn't force this transformation; it simply made it profitable. Something similar now seems to be happening to Silicon Valley, but at a faster pace.

The $200 Billion Lending Facility

Michael also discusses the Office of Strategic Capital, which has "$200 billion in lending authority" for defense tech companies:

"Low-cost loans... that sends a demand signal to these companies that other private capital crowds around it equity capital often... in the last four or five months we've done five critical minerals deals really fast."

This is industrial policy at unprecedented scale. The government is financing the entire supply chain, from critical minerals to manufacturing capacity, using below-market loans to crowd in private equity. The "valley of death" between prototype and production is being systematically eliminated through capital intervention.

The stated goal is national self-sufficiency in everything from rare earth minerals to pharmaceuticals. But the vehicle for achieving that self-sufficiency is the defense establishment, ensuring that civilian supply chains remain permanently entangled with military priorities.

Compare this to the Reconstruction Finance Corporation of the 1930s and 1940s, which financed American industrial mobilization for World War II. The RFC's wartime lending was treated as a temporary expedient, to be unwound after victory, and it was. The Office of Strategic Capital has no sunset provision. The militarization of supply chains is designed to be permanent.

Wolin described this mechanism precisely: in inverted totalitarianism, the state serves as financier and guarantor for corporate accumulation, while corporate methods and personnel colonize state functions. The result is not socialism for the rich, exactly, but something more insidious: the permanent fusion of public power and private profit, with war as the organizing principle.

The Resource Wars Have Already Begun

Michael's discussion of critical minerals and supply chain self-sufficiency takes on a different character when you consider what the administration has been doing.

In the interview, Michael describes the urgency around critical minerals:

"Who knew germanium was a thing three years ago? The American public wasn't thinking about it. So what we're trying to get ahead of is what are the 10, 20 things that are components... we're dependent on China for brushless motors... what kind of semiconductors besides TSMC go into a data center, what are other things we have to make sure we domesticate?"

He frames this as defensive necessity. But the methods for achieving "self-sufficiency" have become clear.

Greenland contains 25 of the 34 materials on the European Commission's list of critical raw materials, including what may be the world's second-largest rare earth reserves after China's. When National Security Advisor Mike Waltz was asked about the administration's designs on the island, he was direct: "It's our national security. It's critical minerals." The administration has refused to rule out military force, despite 85% of Greenlanders opposing any transfer to U.S. control.

Canada has been threatened with annexation as the "51st state" since the administration took office. The explicit linkage to resources came when the president offered Canada protection under the Golden Dome missile defense system free of charge, but only if Canada became a U.S. state; otherwise, he claimed, coverage would cost $61 billion. This came after 25% tariffs on all Canadian goods, which the president explicitly framed as economic pressure toward annexation. Prime Minister Trudeau warned: "What he wants is to see a total collapse of the Canadian economy because that'll make it easier to annex us."

Venezuela is no longer merely threatened; it is an ongoing military operation. The administration launched strikes beginning in September 2025, established a naval blockade in December, and in January 2026 exfiltrated President Maduro from Caracas. The stated justifications shifted from drug trafficking to regime change, but the president made the real motivation explicit: "The United States will run Venezuela for a time and will maintain a presence in Venezuela as it pertains to oil." Venezuela holds the world's largest proven oil reserves: 300 billion barrels, roughly 17% of the global total.

Three territories. Three resource profiles: rare earth minerals, strategic geography and resources, and oil. Three different approaches: threatened annexation, economic coercion, and military invasion.

The international response has been extraordinary. In Greenland, European NATO allies including Germany, France, Sweden, Norway, the Netherlands, and Finland have deployed troops to joint exercises with Denmark. Danish Prime Minister Mette Frederiksen warned directly: "If the United States chooses to attack another NATO country militarily, then everything stops... including our NATO." The Atlantic Council has called the situation potentially "NATO's darkest hour," not because of any external threat, but because of the conduct of the alliance's own founding member.

In Canada, the response has been to pivot. Prime Minister Carney's trade deal with China, which admits 49,000 Chinese electric vehicles at a 6.1% tariff instead of 100% and reduces Chinese tariffs on Canadian canola, represents exactly the kind of diversification that U.S. pressure was meant to prevent. In Beijing, Carney described the partnership as positioning Canada "well for the new world order," by which he means the breakdown of the multilateral trading system: "The multilateral system that has been developing these is being eroded, to use a polite term, undercut to use another term. The question is, what gets built in its place?"

The answer, apparently, is bilateral deals with whoever isn't threatening to annex you.

When Michael talks about the Office of Strategic Capital making "critical minerals deals really fast," this is the context. When he describes rebuilding the defense industrial base and achieving supply chain self-sufficiency, these are the methods. The language of national security and the language of resource extraction have become indistinguishable.

As a Canadian watching this unfold, the podcast takes on a different resonance. Michael's discussion of supply chains, critical minerals, and the defense industrial base sounds technical and abstract until you realize the concrete policy implications include treating your country as a resource to be acquired. The "front door" that's now open for defense startups is the same door through which territorial ambitions walk.

The historical parallel is not subtle. When great powers build military-industrial complexes at scale, they tend to use them. When they identify resource dependencies as national security threats, they tend to resolve those dependencies through force. The British Empire didn't colonize half the world for abstract geopolitical advantage. It colonized for cotton, rubber, oil, and strategic ports. The logic of military self-sufficiency, taken seriously, leads to resource imperialism.

Michael probably believes he's building defensive capabilities. The critical minerals he's securing probably feel like prudent supply chain management. But the same infrastructure that enables "self-sufficiency" enables extraction. The same military that "deters" adversaries can coerce allies. The line between defense and empire has always been a matter of perspective. The perspective from north of the border looks increasingly imperial.

The Shape of Things

Three months ago, writing about Anthropic's growing military partnerships, I observed that the major AI labs seemed to have made a choice about whose interests they would serve. What's changed since then is not the direction but the velocity.

Consider what the Michael interview reveals, taken together. One person now oversees DARPA, the Pentagon's AI office, the Defense Innovation Unit, and $200 billion in strategic lending authority. Three million Pentagon employees have access to military AI systems—a million active users in the first month alone. The timeline for autonomous weapons has become explicit: potentially a quarter of military spending within a decade. The institutional and cultural barriers that once separated Silicon Valley from the Pentagon are being deliberately removed.

And industrial policy is reshaping supply chains around military priorities. Two hundred billion in below-market loans. Critical minerals, manufacturing capacity, semiconductors—all being reconstructed with federal financing and what appears to be permanent military entanglement.

I keep returning to Wolin's phrase: "the political coming-of-age of corporate power." He died in 2015, before any of this happened, but he described its logic precisely. Corporate executives move into government. Government contracts flow to corporations. The boundaries between public and private power blur until the distinction starts to feel almost academic.

What's striking is how natural it all seems to the participants. No one on the podcast expresses doubt. There's no wrestling with tradeoffs, no acknowledgment of what's being lost. Just enthusiasm, momentum, opportunity.

The Honest Assessment

Listening to Michael describe $150 billion in annual technology spending, I found myself asking a question I suspect many readers share: What can anyone who finds this trajectory troubling actually do about it?

I don't have a satisfying answer. Part of intellectual honesty is acknowledging constraints. State power operates at a scale that individual action cannot directly counter. The Pentagon's budget exceeds the GDP of most countries. The institutional momentum behind military AI—the careers, the contracts, the lobbying, the genuine conviction among participants that they're doing necessary work—is largely self-sustaining.

Those of us who work on decentralized systems and privacy infrastructure face a particular version of this frustration. We can build parallel structures. We can opt out of some forms of surveillance. We can create encrypted communications and private financial rails. But we cannot prevent governments from weaponizing the underlying technologies. The same neural networks that enable ChatGPT enable targeting systems. The same robots that might someday care for the elderly can be fitted with weapons.

And decentralization, I've come to realize, doesn't solve this. Bittensor aims to build decentralized AI so that "intelligence is owned by everyone"—but if that intelligence can be accessed by military users, we've simply democratized the tools of war. Open-source AI doesn't filter its users.

The international order offers little help. The UN General Assembly voted 166-3 to regulate lethal autonomous weapons. Nothing happened. Treaties require enforcement; enforcement requires power; power is precisely what states refuse to surrender.

Geographic exit, too, is less useful than it once was. You can leave the United States, but you cannot leave a world where American military AI operates globally. There's no geography entirely outside the system.

What Remains

And yet. The question persists: What do you do when you find yourself on the wrong side of something this large?

I keep thinking about the cypherpunks.

The cypherpunks of the 1990s knew they couldn't stop the NSA. They built PGP anyway. Phil Zimmermann faced a three-year criminal investigation for publishing encryption software. He persisted. Thirty years later, every HTTPS connection and encrypted message exists because people built the tools despite knowing they couldn't win outright.

The cypherpunk victory, such as it was, was partial. The surveillance state grew anyway. But encryption preserved spaces of autonomy that would otherwise have been foreclosed. The existence of alternatives mattered, even when the alternatives remained marginal.

I find myself drawn to this model—not because it promises success, but because it offers a framework for meaningful action in the face of overwhelming institutional momentum. You build the things you think should exist. You refuse, where you can, to participate in the things you find objectionable. You document what's happening with enough specificity that the euphemisms don't go unchallenged.

The Pentagon, after all, cannot build weapons without workers. Defense contractors cannot recruit without willing engineers. The cultural validation of military work in Silicon Valley—the "acceptable and exciting" that the podcast hosts celebrate—depends on enough people accepting that validation. The Google employees who walked out over Project Maven didn't stop the project, but they forced Google to decline the contract renewal. The DeepMind researchers who protested military contracts didn't prevent them, but they made the internal debate visible.

During World War II, a handful of German physicists—Werner Heisenberg among them—could have accelerated Nazi nuclear weapons development but chose not to. Whether from moral conviction, bureaucratic obstruction, or deliberate sabotage remains debated. What's clear is that the German bomb was not built. Individual choices, aggregated across enough people, can change outcomes, even when the machinery seems unstoppable.

There's something to be said, too, for institutional memory. The narrative that "this is just how the world works" depends on forgetting that other arrangements are possible. There was a time when AI labs claimed to be building technology for humanity's benefit. There was a time when robotics companies pledged not to weaponize their products. There was a time when Silicon Valley's relationship with the Pentagon was genuinely adversarial. Those moments passed. But remembering them keeps the door open for their return.

The anti-war movement of the 1960s didn't stop Vietnam quickly, but it changed the culture enough that subsequent interventions faced resistance. The nuclear freeze movement of the 1980s didn't achieve its stated goals, but it contributed to a context in which arms reduction became thinkable. Trajectories that seem permanent can shift suddenly. The Berlin Wall fell faster than anyone predicted. Apartheid ended. The draft ended.

The current trajectory of military AI will face crises—an autonomous weapons atrocity that penetrates public consciousness, a technological failure with catastrophic consequences, a political realignment that changes priorities. When those moments arrive, the alternatives that were built in the apparent hopelessness beforehand become suddenly relevant.

The Fork We Didn't Take

In my October piece, I quoted from the Bittensor whitepaper: "Down one road is the centralization of power and resources... Down the other road is the potential for sharing these resources through open protocols."

Three months later, the centralized road has widened considerably. The Department of War is integrating AI across three million employees. The defense industrial base is being rebuilt with $200 billion in federal loans. Autonomous weapons are projected to claim a quarter of military spending within a decade.

The decentralized road still exists. Bittensor still builds. Privacy technologies advance. The cypherpunks still write code. But the disparity in resources between the two paths has never been more apparent.

I don't have a hopeful conclusion. Military AI will be built. Autonomous weapons will be deployed. The integration of Silicon Valley and the Pentagon will continue regardless of what anyone writes about it. The state has resources, legitimacy, and conviction. Critics have blog posts and encrypted chat apps.

And yet, the current trajectory is not a law of nature. It is chosen, daily, by technologists who take defense jobs, by investors who fund weapons companies, by executives who sign military contracts, by politicians who appropriate military budgets, by voters who accept the framing that security requires domination. Choice points remain choice points, even when the momentum feels irresistible.

I've been trying to hold two things in mind simultaneously. The first is that Emil Michael almost certainly sees himself as a builder bringing urgency and innovation to national defense. From his perspective—and the perspective of everyone on that podcast—he's protecting America and its allies from authoritarian adversaries. This view is sincere. It's not a mask.

The second is that, from where I sit, he's building the infrastructure of permanent war. He's applying technologies that could serve human flourishing to the optimization of killing. He's recruiting the best minds of a generation to work on weapons systems. And he's celebrating the whole enterprise as patriotic innovation, with venture capitalists nodding along.

Both of these framings can be true at once. That's part of what makes the situation so difficult to think about clearly. And it's part of what makes this "inverted totalitarianism" rather than the classical kind: no one is forced. Everyone chooses. The coercion is economic. The compliance is voluntary. The ideology is entrepreneurship itself.

I know which fork I'd prefer. The path there, if it exists at all, runs through the small choices that accumulate into something larger: building alternatives, declining participation where conscience demands it, documenting what's happening with enough specificity that the euphemisms don't go unchallenged, remembering that other arrangements have existed and might exist again.

It isn't much. But it's what remains.


Disclosure: I am directly involved with and invested in several decentralized AI projects, including Bittensor subnets and other initiatives. For full transparency about my involvement and investments, see my projects page. Any opinions expressed in this post are entirely my own.

The No Priors podcast with Emil Michael is available on YouTube. I'd encourage readers to listen to the full interview and draw their own conclusions.


Related Reading:

  • AI and the War Machine: The original piece documenting how centralized AI labs abandoned their humanitarian missions.
  • Vote No on BOT-08: My argument against funding the only U.S. humanoid company openly building killer robots.
  • The Second Crypto War: How cypherpunks are fighting back against surveillance states.