Larmedias

Explore Scientific Knowledge. Understand Intelligence, Autonomy, and Decision-Making.

Covering the latest trends in AI, engineering, and space science, as well as the mechanisms of intelligence, autonomy, and decision-making. Search for topics of interest and navigate smoothly to related concepts and resources.

Trace the knowledge behind events

Explore the latest news and related concepts

Highlighted Story

Military AI Policy Needs Democratic Oversight

<img src="https://spectrum.ieee.org/media-library/a-white-man-in-his-40s-speaking-into-a-microphone-he-is-wearing-glasses-a-suit-jacket-and-tie.jpg?id=65162768&width=1245&height=700&coordinates=0%2C469%2C0%2C469"/><br/><br/><p>A <a href="https://www.nytimes.com/2026/02/23/us/politics/pentagon-anthropic-ai.html" rel="noopener noreferrer" target="_blank">simmering dispute</a> between the United States Department of Defense (DOD) and Anthropic has now escalated into a <a href="https://www.techpolicy.press/a-timeline-of-the-anthropic-pentagon-dispute/" rel="noopener noreferrer" target="_blank">full-blown confrontation</a>, raising an uncomfortable but important question: who gets to set the guardrails for military use of artificial intelligence — the executive branch, private companies or Congress and the broader democratic process?</p><p>The conflict began when Defense Secretary Pete Hegseth reportedly gave Anthropic CEO Dario Amodei a deadline to allow the DOD <a href="https://www.politico.com/news/2026/02/24/hegseth-sets-friday-deadline-for-anthropic-to-drop-its-ai-red-lines-00795641" rel="noopener noreferrer" target="_blank">unrestricted use</a> of its AI systems. When the company refused, the administration moved to designate Anthropic a <a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">supply chain risk</a> and ordered federal agencies to phase out its technology, dramatically escalating the standoff.</p><p>Anthropic has refused to cross <a href="https://www.anthropic.com/news/statement-department-of-war" rel="noopener noreferrer" target="_blank">two lines</a>: allowing its models to be used for domestic surveillance of United States citizens and enabling fully autonomous military targeting. 
Hegseth has objected to what he has described as “<a href="https://www.war.gov/News/Transcripts/Transcript/Article/4377190/remarks-by-secretary-of-war-pete-hegseth-at-spacex/" rel="noopener noreferrer" target="_blank">ideological constraints</a>” embedded in commercial AI systems, arguing that determining lawful military use should be the government’s responsibility — not the vendor’s. As he put it in a <a href="https://www.war.gov/News/Transcripts/Transcript/Article/4377190/remarks-by-secretary-of-war-pete-hegseth-at-spacex/" rel="noopener noreferrer" target="_blank">speech at Elon Musk’s SpaceX</a> last month, “We will not employ AI models that won’t allow you to fight wars.”</p><p>Stripped of rhetoric, this dispute resembles something relatively straightforward: a procurement disagreement.</p><h2>Procurement policies</h2><p>In a market economy, the U.S. military decides what products and services it wants to buy. Companies decide what they are willing to sell and under what conditions. Neither side is inherently right or wrong for taking a position. If a product does not meet operational needs, the government can purchase from another vendor. If a company believes certain uses of its technology are unsafe, premature or inconsistent with its values or risk tolerance, it can <a href="https://www.anthropic.com/news/responsible-scaling-policy-v3" rel="noopener noreferrer" target="_blank">decline to provide them</a>. For example, a coalition of companies has signed an open letter pledging <a href="https://bostondynamics.com/news/general-purpose-robots-should-not-be-weaponized/" rel="noopener noreferrer" target="_blank">not to weaponize general-purpose robots</a>.
That basic symmetry is a feature of the free market.</p><p>Where the situation becomes more complicated — and more troubling — is in the decision to designate Anthropic a “<a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">supply chain risk</a>.” That tool exists to address genuine national security vulnerabilities, such as foreign adversaries. It is not intended to blacklist an American company for rejecting the government’s preferred contractual terms. </p><p>Using this authority in that manner marks a significant shift — from a procurement disagreement to the use of coercive leverage. <a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">Hegseth has declared</a> that “effective immediately, no contractor, supplier, or partner that does business with the U.S. military may conduct any commercial activity with Anthropic.” This action will almost certainly face <a href="https://x.com/SecWar/status/2027507717469049070" rel="noopener noreferrer" target="_blank">legal challenges</a>, but it raises the stakes well beyond the loss of a single DOD contract.</p><h2>AI governance</h2><p>It is also important to distinguish between the two substantive issues Anthropic has reportedly raised.</p><p>The first, opposition to domestic surveillance of U.S. citizens, touches on well-established civil liberties concerns. The U.S. government operates under constitutional constraints and statutory limits when it comes to monitoring Americans. A company stating that it does not want its tools used to facilitate domestic surveillance is not inventing a new principle; it is aligning itself with longstanding democratic guardrails.</p><p>To be clear, DOD is not affirmatively asserting that it intends to use the technology to surveil Americans unlawfully. Its position is that it does not want to procure models with built-in restrictions that preempt otherwise lawful government use. 
In other words, the Department of Defense argues that compliance with the law is the government’s responsibility — not something that needs to be embedded in a vendor’s code. </p><p>Anthropic, for its part, has invested heavily in training its systems to refuse certain categories of <a href="https://www-cdn.anthropic.com/78073f739564e986ff3e28522761a7a0b4484f84.pdf" rel="noopener noreferrer" target="_blank">harmful or high-risk tasks</a>, including assistance with surveillance. The disagreement is therefore less about current intent than about institutional control over constraints: whether they should be imposed by the state through law and oversight, or by the developer through technical design.</p><p>The second issue, opposition to fully autonomous military targeting, is more complex. </p><p>The DOD already maintains policies requiring <a href="https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf" rel="noopener noreferrer" target="_blank">human judgment in the use of force</a>, and debates over autonomy in weapons systems are ongoing within both military and international forums. A private company may reasonably determine that its current technology is not sufficiently reliable or controllable for certain battlefield applications. At the same time, the military may conclude that such capabilities are necessary for deterrence and operational effectiveness.</p><p>Reasonable people can disagree about where those <a href="https://itif.org/publications/2026/02/26/survey-most-americans-say-tech-companies-should-allowed-set-ai-limits/" rel="noopener noreferrer" target="_blank">lines should be drawn</a>.</p><p>But that disagreement underscores a deeper point: the boundaries of military AI use should not be settled through ad hoc negotiations between a Cabinet secretary and a CEO. Nor should they be determined by which side can exert greater contractual leverage.</p><p>If the U.S. 
government believes certain AI capabilities are essential to national defense, that position should be articulated openly. It should be debated in Congress, and reflected in doctrine, oversight mechanisms and statutory frameworks. The rules should be clear — not only to companies, but to the public.</p><p>The U.S. often distinguishes itself from authoritarian regimes by emphasizing that power operates within transparent democratic institutions and legal constraints. That distinction carries less weight if AI governance is determined primarily through executive ultimatums issued behind closed doors.</p><p>There is also a strategic dimension. If companies conclude that participation in federal markets requires surrendering all deployment conditions, some may exit those markets. Others may respond by weakening or removing model safeguards to remain eligible for government contracts. Neither outcome strengthens <a href="https://www.reuters.com/business/retail-consumer/big-tech-group-tells-pentagons-hegseth-they-are-concerned-about-declaring-2026-03-04/" rel="noopener noreferrer" target="_blank">U.S. technological leadership</a>.</p><p>The DOD is correct that it cannot allow potential “ideological constraints” to undermine lawful military operations. But there is a difference between rejecting arbitrary restrictions and rejecting any role for corporate risk management in shaping deployment conditions. In high-risk domains — from aerospace to cybersecurity — contractors routinely impose safety standards, testing requirements and operational limitations as part of responsible commercialization. AI should not be treated as uniquely exempt from that practice.</p><p>Moreover, built-in safeguards need not be seen as obstacles to military effectiveness. In many high-risk sectors, layered oversight is standard practice: internal controls, technical fail-safes, auditing mechanisms and legal review operate together. 
Technical constraints can serve as an additional backstop, reducing the risk of misuse, error or unintended escalation.</p><h2>Congress is AWOL</h2><p>The DOD should retain ultimate authority over lawful use. But it need not reject the possibility that certain guardrails embedded at the design level could complement its own oversight structures rather than undermine them. In some contexts, redundancy in safety systems strengthens, not weakens, operational integrity.</p><p>At the same time, a company’s unilateral ethical commitments are no substitute for public policy. When technologies carry national security implications, private governance has inherent limits. Ultimately, decisions about surveillance authorities, autonomous weapons and rules of engagement belong in democratic institutions.</p><p>This episode illustrates a pivotal moment in AI governance. AI systems at the frontier of technology are now powerful enough to influence intelligence analysis, logistics, cyber operations and potentially battlefield decision-making. That makes them too consequential to be governed solely by corporate policy — and too consequential to be governed solely by executive discretion.</p><p>The solution is not to empower one side over the other. It is to strengthen the institutions that mediate between them.</p><p>Congress should clarify statutory boundaries for military AI use and investigate whether sufficient oversight exists. The DOD should articulate detailed doctrine for human control, auditing and accountability. Civil society and industry should participate in structured consultation processes rather than episodic standoffs, and procurement policy should reflect those publicly established standards.</p><p>If AI guardrails can be removed through contract pressure, they will be treated as negotiable.
However, if they are grounded in law, they can become stable expectations.</p><p>Democratic constraints on military AI belong in statute and doctrine — not in private contract negotiations.</p><p><em>This article is adapted by the author with permission from </em><a href="https://www.techpolicy.press/" rel="noopener noreferrer" target="_blank"><em>Tech Policy Press</em></a><em>. Read the </em><a href="https://www.techpolicy.press/why-congress-should-step-into-the-anthropicpentagon-dispute/" rel="noopener noreferrer" target="_blank"><em>original article</em></a><em>.</em></p>

Published
Mar 8, 2026, 10:00 AM
Source
IEEE Spectrum AI

More News

Latest Knowledge

Recently added important knowledge

Higher Education and National Security

The intersection of higher education and national security concerns how educational institutions manage foreign students and researchers in the context of national interests. Concerns include intellectual property, research security, and the potential for espionage.

Conflict of Interest

A conflict of interest occurs when an individual or organization has multiple interests, one of which could potentially corrupt the motivation for an act in another. In research, this can lead to biased results and undermine trust in scientific findings.

Aging Research

Aging research focuses on understanding the biological processes that lead to aging and developing interventions to slow down or reverse these processes. This field encompasses various disciplines, including genetics, cellular biology, and biochemistry.

Research Independence

Research independence is the ability of a PhD student to conduct their research autonomously, making decisions about their project direction and methodology. This skill is essential for developing critical thinking and problem-solving abilities in a scientific context.

Cognitive Computing

Cognitive Computing refers to systems that simulate human thought processes in complex situations. It combines elements of AI, machine learning, and data analytics to create systems that can learn from data, reason, and interact naturally with humans.

International Student Policies

International student policies refer to the regulations and guidelines that govern the admission and enrollment of students from other countries in educational institutions. These policies can be influenced by political, economic, and social factors, and can vary significantly from one country to another.