Vote No on BOT-08: We Must Not Fund the Weaponization of Humanoid Robotics

I'm going to ask you to walk away from money.
That's not easy in crypto. We're here because we believe in asymmetric returns, in backing transformative technology early, in being rewarded for taking risks others won't. Foundation Robotics, on paper, checks the boxes: first paid deployment within 11 months, $10 million in government contracts already secured, a $1.1 billion valuation, proprietary cycloidal actuators with superior torque specifications. The financial case is straightforward.
But the proposal before us isn't just financial. The Northstar Council was honest about this—BOT-08 explicitly asks the DAO to decide "whether our portfolio should include defense-focused companies or remain purely commercial." They're asking us to set a precedent. To decide who we are.
I'm voting no. And I'm asking you to do the same.
Foundation is building killer robots. Full stop.
Let me be direct about what "defense-focused" means in Foundation's case, because the proposal's language sanitizes the reality.
Foundation Robotics is the only known humanoid robotics firm deliberately building machines for military applications. Not "defense applications" in the euphemistic sense of logistics or maintenance support. Actual weapons platforms. The company's co-founder Mike LeBlanc has stated explicitly that Foundation is open to attaching guns or other weapons to its robots.
When Foundation employees raised internal concerns about weaponization, LeBlanc's response was: "Are you a vegetarian? If you are a meat-eater who doesn't like that we kill the animals, I don't respect that."
This isn't a company reluctantly accepting some defense contracts while maintaining commercial focus. This is a company that from the start intended to sell to the military. CEO Sankaet Pathak has stated on X: "Unlike most humanoid robot companies in the U.S., which have committed to non-weaponization, we believe it's essential for our robots to master these tasks to support human expansion."
Foundation is currently exploring Phantom use cases with the U.S. Army, Air Force, Marine Corps, and the Department of Homeland Security—including using robots to patrol U.S. borders. They're in negotiations with Anduril, the Peter Thiel-backed defense contractor that represents everything wrong with the military-industrial complex's capture of Silicon Valley.
The proposal mentions Foundation's "current deployments are focused on maintenance and aircraft support, not weaponization." This is technically true and deeply misleading. Foundation's master plan explicitly commits to "collaborating with the Department of Defense to ensure our allies always maintain the technological superiority they require." The company positions itself as filling "a strategic niche its competitors have avoided" precisely because those competitors understand what building humanoid weapons platforms means for humanity.
Every other major robotics company has refused this path
The proposal frames Foundation's willingness to build weapons as a "unique positioning" and "strategic opportunity." Let's be clear about what this positioning actually represents: a deliberate rejection of the ethical consensus that the robotics industry has reached.
In October 2022, Boston Dynamics led a coalition of robotics companies in signing an open letter pledging not to weaponize their robots. The signatories included Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics, and Unitree Robotics—essentially the major players in advanced robotics.
Their reasoning was explicit: "We are concerned about recent increases in makeshift efforts by individuals attempting to weaponize commercially available robots... For this technology to be broadly accepted throughout society, the public needs to know they can trust it."
Figure AI—arguably Foundation's closest competitor in the humanoid space—has taken an even stronger position. Founder Brett Adcock's Master Plan states unequivocally: "We will not place humanoids in military or defense applications, nor any roles that require inflicting harm on humans. Our focus is on providing resources for jobs that humans don't want to perform."
Apptronik has reportedly stepped back from defense work. Every major humanoid robotics company has looked at the weapons question and answered: no.
Foundation looked at the same question and saw a market opportunity.
When the entire industry reaches ethical consensus and one company deliberately positions itself as the exception, that's not visionary contrarianism. That's a warning sign.
The founder's track record should concern us
The BOT-08 proposal focuses on Foundation's technology and market position. It doesn't mention Sankaet Pathak's previous company, Synapse Financial Technologies. It should.
Synapse was a banking-as-a-service platform that filed for bankruptcy in April 2024—the same month Foundation was founded. The collapse wasn't a quiet wind-down. It was described by Fortune as "the spectacular Synapse collapse" that left nearly $160 million in customer funds frozen, with ongoing questions about where $85-95 million went.
A U.S. Trustee filed an emergency motion to convert Synapse's bankruptcy to Chapter 7 liquidation, citing "gross mismanagement" of the estate. Court documents revealed that in 2023, as discrepancies in Synapse's ledgers were piling up, the board discussed removing Pathak as CEO—but investors from Andreessen Horowitz argued against it.
Pathak acknowledged taking two loans from the company totaling $320,000 in late 2023 and early 2024, as the company was failing. The "board" that approved these loans consisted of Pathak himself, a seed investor, and a co-founder.
According to TechCrunch, Pathak began aggressively fundraising for Foundation "shortly after Synapse filed for bankruptcy"—even while questions remained about the whereabouts of customer funds. He's now seeking a $1 billion valuation.
This is who we're being asked to trust with $203,500 of DAO treasury funds.
There's more. In June 2024, CNBC reported that Foundation had circulated a pitch deck claiming General Motors was about to invest, place a $300 million order, and give Foundation access to its factories. GM flat-out denied it: "GM has never invested in Foundation Robotics and has no plans to do so."
Exaggerated investor claims. Gross mismanagement findings. Missing customer funds. This isn't FUD—these are documented facts from court filings and major business publications.
The world is trying to ban killer robots. We shouldn't fund them.
The international community is converging on a consensus that autonomous weapons represent an existential threat requiring prohibition.
On December 2, 2024, the United Nations General Assembly adopted a resolution on Lethal Autonomous Weapons Systems with overwhelming support: 166 votes in favor, only 3 opposed (Belarus, North Korea, and Russia). The resolution explicitly mentions the potential for prohibiting certain autonomous weapons under international law.
The UN Secretary-General has called for a legally binding treaty to prohibit autonomous weapons systems by 2026. Pope Francis warned G7 leaders: "No machine should ever choose to take the life of a human being."
The Campaign to Stop Killer Robots—a coalition of over 270 NGOs in 70 countries—has been working since 2013 to prevent exactly what Foundation is building. Human Rights Watch has published extensive research on why autonomous weapons systems "pose grave risks to human rights during both war and peacetime."
The core ethical argument is simple: machines cannot appreciate the value of human life. As philosopher Peter Asaro argues: "Distinguishing a 'target' in a field of data is not recognizing a human person as someone with rights." Giving machines the power to decide to end human lives reduces people to objects, a fundamental violation of human dignity regardless of how sophisticated the underlying technology.
Academic research frames this as the third revolution in warfare after gunpowder and nuclear weapons. The difference is we have a chance to prevent this one before it proliferates.
Foundation positions itself against this global consensus. The company frames building weapons as patriotic necessity, arguing that "adversaries are actively developing defense-oriented robots" and therefore the U.S. must keep pace. This is the same arms race logic that gave us nuclear proliferation, and it will have the same result: making everyone less safe while enriching weapons manufacturers.
The atomic parallel is instructive
The proposal anticipates ethical objections, framing the decision as a choice between "strategic opportunity" and "values alignment." But this framing understates the stakes.
The atomic parallel is worth taking seriously. Nuclear technology offers perhaps our clearest historical example of how transformative technology can be captured by military interests in ways that shape, and constrain, its development for decades.
The Manhattan Project concentrated nuclear expertise under military control and classification. Eighty years later, nuclear technology remains bifurcated into military and civilian tracks, with the former dominating development, investment, and public perception. Beneficial applications in medicine and power generation were delayed by decades of secrecy and military prioritization. The "Atoms for Peace" program was largely propaganda—the real investment went to weapons.
The strict classification regime couldn't prevent the Soviet Union from developing nuclear weapons within four years. It just created a two-tier world of nuclear "haves" and "have-nots" while convincing the public that nuclear meant bombs.
We're at a similar inflection point with robotics. Foundation is explicitly positioning humanoid robots as weapons platforms. If this framing takes hold—if humanoid robots become associated with military applications the way nuclear technology did—it will shape public perception, regulatory approaches, and development priorities for decades.
Boston Dynamics and Figure AI understand this. Their anti-weaponization commitments aren't just ethics theater—they're strategic positioning to ensure robotics can achieve broad civilian adoption without the stigma and restrictions that captured nuclear technology.
Foundation is choosing the opposite path. And they're asking us to fund it.
Financial returns don't justify everything
I can already hear the counterarguments. "We're a robotics investment DAO. Our job is to generate returns for token holders. If Foundation succeeds, we profit. If they fail, we lose $203,500. That's the calculus."
But DAOs aren't just investment vehicles. They're experiments in collective decision-making, in aligning capital with values, in proving that decentralized coordination can work. If we reduce every decision to expected financial return, we're just a venture fund with extra steps.
The proposal itself acknowledges this: "This is not just a financial decision—it is a reflection of the DAO's values and priorities moving forward."
What do we value?
I joined XMAQUINA because I believe robotics will transform human civilization. I believe decentralized ownership structures can align development with broad human benefit rather than concentrated private gain. I believe DAOs can make decisions that traditional capital structures can't—including walking away from profitable opportunities that conflict with our values.
If we fund Foundation, we're saying that financial returns justify weapons development. That our portfolio can include companies building machines designed to kill humans. That the global consensus against autonomous weapons doesn't bind us because we're in it for the gains.
I don't want to be part of that DAO.
This vote defines who we are
The Northstar Council was right to frame BOT-08 as precedent-setting. A yes vote doesn't just allocate $203,500 to Foundation—it establishes that XMAQUINA will invest in weapons manufacturers if the risk-reward looks attractive. Future proposals will cite BOT-08 as proof that defense applications are within scope.
A no vote establishes the opposite principle: that some opportunities are outside our mandate regardless of potential returns. That we're a robotics DAO, not a weapons DAO. That we'll evaluate future defense-focused proposals with the same skepticism.
I'm not arguing for a blanket prohibition on any company that works with government. Commercial robotics companies will inevitably have some government customers, and not all government applications are ethically problematic. Logistics, inspection, hazardous environment operations—these are legitimate uses.
But Foundation isn't a commercial robotics company with incidental government business. They're a weapons company that happens to use humanoid form factors. The entire thesis—the "strategic niche," the "differentiated positioning," the premium valuation—depends on building military robots. That's the bet.
I won't take that bet. Not for any return.
Vote no
To my fellow DAO members: I understand the appeal. The valuation looks reasonable for the stage. The technology has real differentiation. The market for humanoid robots will be enormous. Foundation might generate exceptional returns.
But some money isn't worth making.
We have an opportunity to demonstrate that decentralized organizations can hold principles that traditional capital markets won't. We can show that tokenholders can prioritize values alongside returns. We can draw a line that says humanoid weapons platforms are outside our investment mandate.
Or we can take the money and become complicit in the militarization of robotics.
I know which DAO I want to be part of.
Vote no on BOT-08.
Disclosure: I am a tokenholder in XMAQUINA DAO and have participated in previous governance votes. I have no financial relationship with any of Foundation's competitors mentioned in this post. The views expressed are entirely my own.
Sources for this analysis include reporting from SF Standard, TechCrunch, Fortune, CNBC, Newsweek, Boston Dynamics, Figure AI, Stop Killer Robots, Human Rights Watch, and academic research on autonomous weapons and human dignity.