
Artificial intelligence is rapidly becoming dual-use infrastructure for both civilian and military systems. When national security imperatives collide with ethical guardrails, who sets the boundary — the state, the firm, or the market?
This piece argues for a “Defense First” doctrine: prioritizing AI applications that strengthen infrastructure resilience, cybersecurity, and biosecurity while maintaining explicit red lines around autonomous lethal systems. The future of AI governance will not be decided by rhetoric, but by institutional alignment.
I. The Real AI Arms Race
The artificial intelligence arms race is frequently framed in cinematic terms: autonomous drones, battlefield robotics, machine-speed escalation. That imagery is powerful, but it misdirects attention from the more consequential structural question now confronting governments and frontier AI firms.
When national security imperatives collide with ethical guardrails in AI development, who sets the boundary: the state, the firm, or the market?
This tension surfaced publicly in 2026 when reports indicated that the U.S. Department of Defense pressed Anthropic to relax usage safeguards on its Claude models for “all lawful purposes,” including expanded military applications.¹ Anthropic reportedly resisted removing constraints on fully autonomous weapons and domestic surveillance use cases, even at risk to its defense relationship.² The dispute was not scandalous. It was structural.
It exposed a pressure gradient that will define the next decade of AI governance.
Today, the most advanced AI systems are developed by private firms whose capabilities rival or exceed those of many states. These systems are dual-use infrastructure. They can accelerate biomedical discovery, fortify cybersecurity, optimize logistics, and potentially automate lethal decision-making.
The AI arms race is therefore not merely geopolitical. It is institutional.
At its core lies a question of doctrine.
II. Governance Convergence in Principle, Fragmentation in Practice
Global AI ethics frameworks show remarkable convergence at the level of principle. A 2025 systematic review surveying twenty-one major governance sources found broad agreement around fairness, transparency, accountability, explainability, and sustainability.³
The European Union’s AI Act codified a risk-tiered regulatory regime and prohibited certain high-risk uses.⁴ The World Health Organization articulated ethical guidance for AI in health centered on autonomy, equity, and safety.⁵ Regional bodies such as ASEAN issued governance frameworks intended to harmonize development standards.⁶
Yet convergence in principle has not yielded convergence in enforcement. Oversight mechanisms differ widely. Some frameworks carry binding legal authority. Others remain advisory. Regulatory fragmentation creates space for strategic maneuvering.⁷
International coordination efforts reinforce this convergence in principle. The OECD AI Principles, the G7’s ongoing dialogues on high-performance computing and AI governance, and UNESCO’s Recommendation on the Ethics of Artificial Intelligence each articulate commitments to human oversight, accountability, and rights protection. These initiatives signal that the ethical boundaries under discussion are not idiosyncratic corporate positions, but part of an emerging global consensus. The challenge remains translating consensus into enforceable architecture.¹³
As Mueller observes, AI governance resembles distributed computing itself. It is decentralized, fragmented, and evolving faster than formal oversight structures can stabilize.⁸
When national security enters the equation, ethical guidelines drafted in civilian contexts are stress-tested under procurement pressure. What is aspirational in peacetime becomes contested under strategic competition.
III. The Anthropic–DoD Dispute as Institutional Signal
The reported dispute between Anthropic and the Department of Defense illustrates the tension clearly.
The Pentagon sought expanded flexibility for military AI applications. Anthropic maintained guardrails around autonomous lethal systems and domestic surveillance.¹
Both positions are rational within their incentive structures. The Department of Defense is tasked with safeguarding national security. Frontier AI capabilities promise strategic advantage in logistics, intelligence modeling, and cyber defense. From a procurement perspective, flexibility enhances readiness.
Anthropic faces a different calculus. Its brand identity rests in part on safety commitments. Its employees, investors, and customers expect ethical constraint. Its long-term legitimacy depends upon public trust.
The episode demonstrates how quickly ethical red lines encounter compression when national security arguments intensify.
Outcomes in such scenarios depend not on rhetoric, but on institutional alignment.
IV. Framing Doctrine: Defense Versus War
The United States maintains a Department of Defense, not a Department of War. That distinction is not semantic. It encodes doctrine.
Framed as war, national security implies offensive projection and dominance. Framed as defense, it implies preservation and continuity. Most Americans do not conceptualize national security as the pursuit of lethal capability for its own sake. They conceptualize it as protection of daily life: uninterrupted electricity, functioning financial systems, accessible healthcare, secure communication, and stable governance.
Under that framing, the most strategically transformative application of AI in national security is not battlefield automation.
It is infrastructure protection.
V. AI as Shield: Cybersecurity and Infrastructure Resilience
Modern civilization operates through digitally integrated systems. Electrical grids, water utilities, transportation networks, financial clearinghouses, hospital systems, and telecommunications backbones are interdependent. A coordinated cyber intrusion into one domain can cascade across others.
Artificial intelligence excels at pattern recognition within large, dynamic datasets. Intrusion detection is a temporal anomaly problem. It requires identifying deviations within streams of behavioral data.
Research into Spiking Neural Networks offers a compelling illustration. Knapp et al. conducted a 2025 comparative study evaluating bare SNN architectures for intrusion detection.⁹ Across multiple datasets, SNNs achieved competitive or superior accuracy while consistently generating fewer false positives than conventional neural networks.¹⁰
False positives degrade cybersecurity posture by overwhelming analysts and delaying genuine threat response. Reducing them strengthens resilience.
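The tradeoff between sensitivity and false alarms can be made concrete with a toy detector. The sketch below is not the spiking-network approach from the cited study; it is a deliberately minimal rolling z-score monitor, with an invented traffic stream, meant only to illustrate why the alert threshold governs the false-positive burden on analysts.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flags values that deviate sharply from a rolling baseline.

    A simple stand-in for temporal anomaly detection; real intrusion
    detection models (SNNs included) learn far richer behavioral
    features than a single traffic count.
    """

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold  # higher -> fewer false positives

    def observe(self, value):
        """Return True if `value` is anomalous relative to the window."""
        if len(self.window) >= 10:  # require a minimal baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var) or 1.0
            anomalous = abs(value - mean) / std > self.threshold
        else:
            anomalous = False
        self.window.append(value)
        return anomalous

detector = RollingAnomalyDetector(window=50, threshold=3.0)
baseline = [100 + (i % 7) for i in range(60)]   # steady synthetic traffic
alerts = [detector.observe(v) for v in baseline]
spike_alert = detector.observe(900)             # sudden burst
print(sum(alerts), spike_alert)                 # prints: 0 True
```

With the threshold at 3 standard deviations, ordinary fluctuation produces no alerts while the burst is caught; lowering it trades earlier detection for exactly the analyst-fatiguing false positives described above.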
SNNs also possess architectural advantages. On neuromorphic hardware, event-driven computation reduces energy demands relative to conventional models.¹¹ In national-scale monitoring systems, efficiency becomes strategic.
Here, AI is not a weapon. It is a stabilizer.
VI. Cyberbiosecurity and the Expanding Perimeter
National security now extends into biological data systems. Genomic repositories, synthetic biology pipelines, and pharmaceutical research networks represent economic and strategic assets.
Houser describes this emerging domain as cyberbiosecurity, emphasizing vulnerabilities at the intersection of digital and biological systems.¹² Compromise in this space could disrupt healthcare delivery, intellectual property, and public health stability.
Artificial intelligence can reinforce defenses here by modeling anomalous access patterns, detecting data exfiltration, and protecting distributed biomedical infrastructure.
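As a hedged illustration of what "modeling anomalous access patterns" can mean in practice, the sketch below flags users whose daily record pulls from a hypothetical biomedical data store far exceed the cohort norm, using a robust median-plus-MAD cutoff. All names, counts, and thresholds are invented for exposition.

```python
def flag_exfiltration(daily_pulls, k=5.0):
    """Return users whose pull count exceeds median + k * MAD.

    Median and MAD (median absolute deviation) are used instead of
    mean and standard deviation so one bulk downloader cannot drag
    the baseline upward and hide inside it.
    """
    counts = sorted(daily_pulls.values())
    n = len(counts)
    median = counts[n // 2] if n % 2 else (counts[n//2 - 1] + counts[n//2]) / 2
    deviations = sorted(abs(c - median) for c in counts)
    mad = deviations[n // 2] if n % 2 else (deviations[n//2 - 1] + deviations[n//2]) / 2
    cutoff = median + k * max(mad, 1)  # floor MAD so a flat cohort still has a cutoff
    return {user for user, c in daily_pulls.items() if c > cutoff}

pulls = {"alice": 40, "bob": 55, "carol": 38, "dave": 4200, "erin": 47}
print(flag_exfiltration(pulls))  # dave's bulk pull stands out
```

A production system would model per-user history, time of day, and record sensitivity rather than a single daily count, but the shape of the defense is the same: learn the baseline, flag the deviation.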
The frontier of defense is increasingly informational and biological.
AI belongs there as shield infrastructure.
VII. The Ethical Fault Line: Autonomous Lethality and Human Agency
The decisive ethical fault line emerges when systems remove meaningful human judgment from lethal decision-making.
At stake is not only risk, but moral agency.
Human-in-the-loop oversight is not a procedural nicety. It is an acknowledgment that lethal force carries irreducible moral weight. Ethical governance frameworks consistently emphasize accountability and explainability.³ When lethal decision cycles collapse into opaque algorithmic processes, responsibility diffuses and moral agency erodes.
Autonomous lethal systems compress time for reflection, reduce space for discretion, and complicate attribution of error. They raise escalation risk by delegating irreversible decisions to systems optimized for speed and pattern recognition rather than moral reasoning.
Defensive AI fortifies society. Autonomous lethality externalizes risk.
Philosophically, this is the difference between systems that preserve the conditions for human flourishing and systems that instrumentalize human life within probabilistic frameworks.
Boundaries here must be explicit. The principle of meaningful human control is not a technical preference. It is a civilizational safeguard.
VIII. Consortium Stability and Collective Constraint
Competitive markets alone cannot guarantee restraint. Individually, firms face an incentive to relax safeguards under contract pressure. Collectively, they can alter the equilibrium.
A consortium of frontier AI developers could ratify shared prohibitions on fully autonomous lethal systems while affirming commitment to defensive infrastructure priorities. Such coordination would not override democratic authority. It would represent voluntary refusal to build systems that undermine human oversight.
Historical analogues demonstrate that coordinated constraint can stabilize high-risk domains. Without coordination, defection is rational. With coordination, ethics becomes baseline rather than liability.
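The coordination logic above can be sketched as a two-firm game. The payoff numbers below are invented purely for exposition, not drawn from any cited source; they show how an individually dominant "defect" strategy flips once a consortium attaches even a modest sanction to relaxing safeguards.

```python
# Illustrative two-firm "safeguard game" with invented payoffs.
# Strategies: "restrain" (keep guardrails) or "defect" (relax them).
PAYOFFS = {  # (row move, column move) -> (row payoff, column payoff)
    ("restrain", "restrain"): (3, 3),
    ("restrain", "defect"):   (0, 4),
    ("defect",   "restrain"): (4, 0),
    ("defect",   "defect"):   (1, 1),
}

def best_response(opponent, penalty=0.0):
    """Row player's best strategy given the opponent's move.

    `penalty` models a consortium sanction applied to defection.
    """
    def payoff(mine):
        base = PAYOFFS[(mine, opponent)][0]
        return base - penalty if mine == "defect" else base
    return max(("restrain", "defect"), key=payoff)

# Without coordination, defection dominates either way:
no_coord = (best_response("restrain"), best_response("defect"))

# A modest collective sanction makes restraint the best reply:
with_coord = (best_response("restrain", penalty=2.0),
              best_response("defect", penalty=2.0))
print(no_coord, with_coord)
```

Under these stipulated payoffs, the uncoordinated game yields mutual defection, while the sanctioned game yields mutual restraint, which is the sense in which coordination turns ethics from a competitive liability into the baseline.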
The question is not whether capability exists. It is whether institutions align to govern it.

Figure 1. AI Arms Race Structure — Escalation or Stability?
IX. Institutional Lag and Strategic Choice
Legal frameworks governing privacy and executive authority were not written with AI-enabled cross-agency data aggregation in mind. Litigation over federal data consolidation initiatives illustrates the ongoing friction between accelerating executive practice and aging statutes.
When governance lags capability, escalation becomes easier than stabilization.
The artificial intelligence arms race is not destined to culminate in battlefield automation. It can become a race for resilience: grid stability, financial integrity, cyberbiosecurity, and infrastructure continuity.
Artificial intelligence can amplify destruction or reinforce stability.
The difference lies in doctrine.
When ethics competes with national security, the future will be determined not by who speaks most forcefully, but by who aligns most effectively.
The wiser course is to coordinate, not to blink.
Notes
¹ Axios, “Pentagon Threatens to Cut Off Anthropic Over AI Safeguards Dispute,” February 15, 2026.
² Reuters, “Pentagon Threatens to Cut Off Anthropic in AI Safeguards Dispute,” February 15, 2026.
³ Osama Ismail and Naim Ahmad, “Ethical and Governance Frameworks for Artificial Intelligence: A Systematic Literature Review,” International Journal of Interactive Mobile Technologies 19, no. 14 (2025): 121–36.
⁴ European Parliament and Council, Artificial Intelligence Act, 2024.
⁵ World Health Organization, Ethics and Governance of Artificial Intelligence for Health, 2021.
⁶ ASEAN, Guide on AI Governance and Ethics, 2024.
⁷ Ibid.
⁸ Milton L. Mueller, “It’s Just Distributed Computing: Rethinking AI Governance,” Telecommunications Policy, 2025.
⁹ Leonard Knapp et al., “Efficacy of Spiking Neural Networks for Intrusion Detection Systems,” 2025 International Conference on Cybersecurity and AI-Based Systems.
¹⁰ Ibid.
¹¹ M. Davies et al., “Loihi: A Neuromorphic Manycore Processor with On-Chip Learning,” IEEE Micro 38, no. 1 (2018): 82–99.
¹² Ryan Scott Houser, “Cyberbiosecurity: An Emerging Answer to Bio-Based Cybersecurity Vulnerabilities,” Journal of Homeland Security & Emergency Management 23, no. 1 (2026).
¹³ OECD, OECD Principles on Artificial Intelligence (Paris: Organisation for Economic Co-operation and Development, 2019); UNESCO, Recommendation on the Ethics of Artificial Intelligence (Paris: United Nations Educational, Scientific and Cultural Organization, 2021); G7 Leaders, Hiroshima AI Process Comprehensive Policy Framework (Hiroshima: Group of Seven, 2023).
Sources
ASEAN. Guide on AI Governance and Ethics. 2024.
Axios. “Pentagon Threatens to Cut Off Anthropic Over AI Safeguards Dispute.” February 15, 2026.
Davies, Mike, Narayan Srinivasa, Tsung-Han Lin, et al. “Loihi: A Neuromorphic Manycore Processor with On-Chip Learning.” IEEE Micro 38, no. 1 (2018): 82–99.
European Parliament and Council of the European Union. Artificial Intelligence Act. 2024.
G7 Leaders. Hiroshima AI Process Comprehensive Policy Framework. Hiroshima: Group of Seven, 2023.
Houser, Ryan Scott. “Cyberbiosecurity: An Emerging Answer to Bio-Based Cybersecurity Vulnerabilities.” Journal of Homeland Security & Emergency Management 23, no. 1 (2026).
Ismail, Osama, and Naim Ahmad. “Ethical and Governance Frameworks for Artificial Intelligence: A Systematic Literature Review.” International Journal of Interactive Mobile Technologies 19, no. 14 (2025): 121–136.
Knapp, Leonard, S. Nitzsche, M. Börsig, A. Vasilache, I. Baumgart, and J. Becker. “Efficacy of Spiking Neural Networks for Intrusion Detection Systems.” In 2025 International Conference on Cybersecurity and AI-Based Systems (Cyber-AI), 89–95. Varna, Bulgaria, 2025.
Mueller, Milton L. “It’s Just Distributed Computing: Rethinking AI Governance.” Telecommunications Policy, 2025.
OECD. OECD Principles on Artificial Intelligence. Paris: Organisation for Economic Co-operation and Development, 2019. https://oecd.ai/en/ai-principles.
Reuters. “Pentagon Threatens to Cut Off Anthropic in AI Safeguards Dispute.” February 15, 2026.
UNESCO. Recommendation on the Ethics of Artificial Intelligence. Paris: United Nations Educational, Scientific and Cultural Organization, 2021.
World Health Organization. Ethics and Governance of Artificial Intelligence for Health. Geneva: WHO, 2021.

