On Friday, the President of the United States ordered every federal agency to immediately stop using Anthropic's AI technology. The Pentagon designated the company, an American AI startup based in San Francisco, a "supply chain risk to national security," a label normally reserved for companies like Huawei that are extensions of foreign adversaries. Defense Secretary Hegseth called it "final." Trump called Anthropic "leftwing nut jobs." An undersecretary called Anthropic's CEO a liar with a "God complex." (NPR, Fortune, CNN)

The crime? Anthropic told the Pentagon it wouldn't allow its AI model, Claude, to be used for mass domestic surveillance of American citizens or to power fully autonomous weapons systems that fire without human involvement. In a statement, Anthropic said: "We do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons," and "we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights." The Pentagon wanted unrestricted access for "all lawful purposes." Anthropic said no. (NPR, CNBC, ABC News)

Notably, the Pentagon itself has publicly stated it has no intention of using AI for mass surveillance or autonomous weapons. Pentagon spokesman Sean Parnell wrote on X: "The Department of War has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement." But when Anthropic reviewed the actual contract language, the company said the negotiations had "made virtually no progress" on codifying those assurances, and that "new language framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will." (ABC News, NPR, NBC News)

And so the full weight of the federal government came down on a company for refusing to remove the moral guardrails from its own product.

The media coverage has largely framed this as a question of corporate conscience: can a company be a conscientious objector? It's a compelling frame. Anthropic's CEO, Dario Amodei, used the language of conscience explicitly: "We cannot in good conscience accede to their request." And there's a rich tradition of individuals refusing to participate in state violence on moral grounds. But extending that framework to a corporation and its products opens up questions that most commentators aren't asking.

Here's the one I can't stop thinking about: does the morality of a creator extend to its creation?

The Theological Problem

For anyone who takes the Christian faith seriously, this question has an immediate resonance. God is the Creator of all things. His moral nature is perfect. And yet His creation, humanity, commits unspeakable evil. If the creator's morality extends to the creation, and if moral responsibility is transitive through the chain from creator to creation to use, then God bears moral responsibility for every act of human evil in history.

That's the problem of evil, restated through the lens of a Pentagon contract dispute. And it's not a trivial restatement. It actually illuminates something.

The orthodox Christian answer, the one the church has held for two thousand years, is that God is not morally responsible for human evil precisely because He grants genuine agency to His creatures. The moral weight transfers at the point of free use. God created the capacity for violence, deception, and destruction. Humans actualize it. The Creator's moral nature informs the design. We bear the imago Dei, we have conscience, we have natural law written on our hearts. But none of that makes God culpable when Cain picks up the rock.

This is important because it means the theological tradition actually undermines the strongest version of Anthropic's moral argument. If even God, the ultimate Creator with a perfect moral nature, doesn't bear transitive moral responsibility for the misuse of His creation, then a San Francisco AI company certainly can't claim that burden. Or the authority that comes with it.

You can't have it both ways. Either the creator's morality flows through the creation and the creator bears responsibility for all downstream use (in which case God has a lot to answer for), or moral responsibility transfers to the agent at the point of use, and the creator's job is to build well and set terms, not to follow the product around forever dictating what it can and can't do.

But God Set Terms

Here's where it gets interesting. Because God didn't just create and walk away. He set terms.

"Don't eat from that tree."

That's a license agreement. That's a covenant with conditions of use. God didn't prevent the misuse by crippling the creation. He didn't make Adam incapable of reaching the fruit. He built a fully capable creation, established terms, and attached consequences for violation. The moral weight of the choice fell on the user, not the maker. But the maker absolutely had standing to define the boundaries.

And this is where the Anthropic situation pivots from a philosophical debate to a contract law question, which, frankly, is where it should have been all along.

It's a Contract Dispute

Software licensing is fundamentally different from property transfer. When you buy a rifle, you own it. The manufacturer has no say in how you use it. But when you license software, you're granted specific rights under specific terms. The licensor retains ownership and can absolutely stipulate conditions of use. That's not some radical claim. It's the foundation of the entire software industry. Every piece of software you've ever used has a license agreement you clicked "accept" on without reading.

And here's the detail that tells you everything you need to know about this situation: the Pentagon was actively trying to negotiate with Anthropic to change the terms. According to Axios, a Pentagon official was on the phone offering Anthropic a deal right up until the moment Hegseth tweeted the supply chain designation. Reports from NBC News indicate that over the preceding weeks, senior officials including Undersecretary Emil Michael held multiple meetings with Anthropic leadership to discuss specific scenarios and contract language. (Axios, NBC News)

If the government already had unrestricted rights under the existing contract, there would be nothing to negotiate. The very act of negotiation is an admission that Anthropic's license terms included restrictions the government wanted removed. Amodei himself noted that the two disputed uses "have never been included in our contracts with the Department of War, and we believe they should not be included now." (NPR)

Which means what actually happened on Friday isn't the government punishing a conscientious objector. It's the government using a national security designation designed for foreign adversaries to punish a domestic company for enforcing the terms of its own contract. That should trouble everyone regardless of where they fall on AI policy, military readiness, or partisan politics.

The Real Question

The conscientious objector frame makes for good headlines and interesting theology, but it's ultimately a distraction. The morality of the creator doesn't need to extend to the creation for the creator to have standing. God didn't need to be morally responsible for human sin in order to set the terms of the covenant. Anthropic doesn't need to claim moral ownership of every use of Claude in order to enforce the license it wrote.

The real question is simpler and more dangerous: can the government use punitive power to override the contractual terms a company won't voluntarily surrender? Can it label an American company a threat to national security, not because of espionage, not because of foreign ties, but because the company said "that wasn't part of our agreement"?

If the answer is yes, then every software company, every defense contractor, every business that licenses technology to the federal government just learned that their contracts are only as strong as the government's willingness to honor them. And that's not a lesson about AI. That's a lesson about power.

There's a six-month window now. Maybe cooler heads prevail. Maybe a deal gets done. But the precedent has been set, and it's worth paying attention to, not because of what it means for artificial intelligence, but because of what it means for the rule of law.

As for the theological question, whether a creator's morality is transitive through its creation, I think the answer is no. And I think God Himself is the proof. He built something beautiful, set the terms, and let the agents choose. The moral weight belongs to the one who acts, not the one who builds. But the one who builds always has the right to say what the building is for.

That's not a God complex. That's just good stewardship.