As of this morning, March 5, 2026, the United States and Israel are on Day 6 of an active war with Iran. Operation Epic Fury, launched February 28, has already killed Supreme Leader Ali Khamenei, struck nuclear facilities across 24 of Iran's 31 provinces, and triggered a wave of retaliatory missile and drone strikes on US bases across Bahrain, Kuwait, Qatar, the UAE, Jordan, and Iraq. In the first 12 hours of the campaign, the US and Israel reportedly carried out nearly 900 strikes. For context, that tempo would have taken days in any conflict before this decade, probably a week. A week of work, compressed into a single morning.
And the thing that made it possible is the same technology that just got its biggest AI supplier banned from the Pentagon five days ago.
This is the AI arms race. It's happening right now, in real time, and most people covering it are still writing about it like it's a future concern.
The Problem AI Actually Solved
To understand why this matters, you have to understand what problem AI solved in the first place. Modern militaries lose because of information gaps more often than because their soldiers lack courage or their equipment fails. The decisive gap is the time it takes to go from "we know where a target is" to "we hit it." You have to verify the intelligence. Cross-reference it against other sources. Brief the commanders. Work through the targeting sequence. Consider what happens if you're wrong. In a complex conflict, that full cycle can take hours. For a high-value leadership target, days.
Iran built its entire defense strategy around that window. Hardened facilities. Leadership compounds that moved on irregular schedules. Nuclear sites buried deep enough that you couldn't hit them without knowing exactly where to go. The assumption baked into Iranian deterrence was that any adversary would need time, and that time bought survival.
AI closed the window.
The systems running underneath Operation Epic Fury were fusing drone feeds, satellite imagery, and telecommunications intercepts at speeds no human analytical team could come close to. And crucially, they were doing it across all target categories simultaneously. Leadership targeting, air defense suppression, nuclear facility strikes. All at once, rather than sequentially. Craig Jones, a senior lecturer at Newcastle University who studies military kill chains, described what that looks like from the outside: AI systems "making recommendations for what to target" at speeds that exceed human cognitive processing, enabling "simultaneous execution at scale."
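To make "fusion" concrete, here is a deliberately toy sketch, in Python, of what multi-source target scoring looks like in principle: independent confidence reports from different sensors combined into one ranked list, with every target category sitting in the same queue. Every source name, number, and threshold in it is invented for illustration; none of this reflects the actual classified systems.

    from collections import defaultdict

    # Hypothetical illustration only: fuse independent confidence reports
    # from multiple sensor sources into one ranked target list.
    # Combination rule (an assumption, not doctrine): treat sources as
    # independent evidence, so the fused score is 1 - product(1 - p_i).

    reports = [
        # (source, target_id, category, confidence) -- all invented values
        ("drone_feed", "T-001", "air_defense", 0.72),
        ("satellite",  "T-001", "air_defense", 0.65),
        ("sigint",     "T-001", "air_defense", 0.58),
        ("drone_feed", "T-002", "leadership",  0.40),
        ("satellite",  "T-003", "nuclear",     0.81),
    ]

    fused = defaultdict(lambda: {"category": None, "miss": 1.0})
    for source, target_id, category, confidence in reports:
        entry = fused[target_id]
        entry["category"] = category
        entry["miss"] *= (1.0 - confidence)  # probability every source is wrong

    ranked = sorted(
        ((tid, e["category"], 1.0 - e["miss"]) for tid, e in fused.items()),
        key=lambda row: row[2],
        reverse=True,
    )

    THRESHOLD = 0.90  # hypothetical review cutoff
    for target_id, category, score in ranked:
        flag = "recommend" if score >= THRESHOLD else "hold"
        print(f"{target_id} ({category}): fused score {score:.2f} -> {flag}")

The point of the toy is its shape, not its math: leadership, air defense, and nuclear targets all sit in one queue, and the ranking updates as fast as new reports arrive. That, rather than any individual calculation, is the property a human staff can't match.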
Nine hundred strikes in twelve hours works out to one strike every 48 seconds, sustained around the clock. That's what a targeting system running faster than any human staff can sustain actually looks like in practice.
How the US Actually Built This
Here's something most people don't know: the US military almost didn't have any of this.
Project Maven launched in 2017 with a modest goal - use machine learning to scan drone surveillance footage and automatically flag objects of military interest, so analysts didn't have to manually watch hours of video looking for a weapons cache or a vehicle. When you can process surveillance faster than a target can move, you change the whole logic of the battlefield. Google won the contract, then over 4,000 employees signed a petition refusing to build it, and Google walked away. The Pentagon scrambled.
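For a sense of how technically modest that original goal was, here is a minimal sketch of the same idea built from generic open-source parts: a pretrained object detector scanning sampled video frames and flagging timestamps worth a human's attention. It assumes torchvision, a COCO-pretrained model, and a hypothetical surveillance.mp4; it is emphatically not Maven's implementation.

    import torch
    from torchvision.io import read_video
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Generic sketch of "scan footage, flag frames with objects of interest".
    # Off-the-shelf COCO-pretrained detector; not Project Maven's code.
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    # COCO class indices we pretend are "of interest" (car=3, truck=8).
    INTERESTING = {3, 8}
    SCORE_CUTOFF = 0.8

    frames, _, info = read_video("surveillance.mp4", pts_unit="sec")  # hypothetical file
    fps = info.get("video_fps", 30.0)

    flagged = []
    with torch.no_grad():
        # Sample one frame per second rather than scoring every frame.
        for i in range(0, frames.shape[0], int(fps)):
            img = frames[i].permute(2, 0, 1).float() / 255.0  # HWC uint8 -> CHW float
            (out,) = model([img])
            hits = [
                (label.item(), score.item())
                for label, score in zip(out["labels"], out["scores"])
                if label.item() in INTERESTING and score.item() >= SCORE_CUTOFF
            ]
            if hits:
                flagged.append((i / fps, hits))

    for timestamp, hits in flagged:
        print(f"{timestamp:6.1f}s: {hits}")

The model decides nothing. It decides what an analyst looks at first, and that alone is where hours of footage collapse into a short list of timestamps.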
Then Palantir stepped in. By May 2024 it held a $480 million Army contract for the Maven Smart System, a platform fusing satellite imagery, geolocation data, and communications intercepts into a single battlefield interface, now deployed across five combatant commands and adopted by NATO's Allied Command Operations.
Alongside Maven, the Pentagon built GenAI.mil, a platform every military and civilian DoD employee can access. By December 2025, xAI's Grok models were being integrated into it at a classification level that allows handling of sensitive controlled information. A poster in Pentagon hallways told employees the new AI tool was available and they were "highly encouraged" to use it.
Then came Venezuela. Earlier in 2026, during the US operation that captured Nicolás Maduro, Anthropic's Claude, deployed through its Palantir contract, supported intelligence analysis and targeting. According to the Wall Street Journal, Claude was at that moment the only AI model running inside the Pentagon's classified networks.
That arrangement lasted until five days ago, when the relationship between the Pentagon and Anthropic publicly fell apart.
The breakdown came down to a specific disagreement about what the military could use AI for. Anthropic drew two lines: no fully autonomous weapons, and no mass domestic surveillance of Americans. The Pentagon wanted authorization for any lawful use. Those two positions couldn't be reconciled. The Trump administration designated Anthropic a "supply chain risk to national security" and ordered all government agencies to stop using its products. Within hours, OpenAI announced a deal. xAI followed days later. The transition is actively underway while strikes continue over Tehran.
What that reshuffling tells you is this: the US military now treats frontier AI as infrastructure. The kind where losing a supplier creates an immediate operational hole, not an inconvenience you address next quarter.
Arms Race vs AI Race
People keep reaching for the nuclear analogy when they talk about AI and geopolitics. It's worth asking whether that analogy holds. The Cold War arms race had a physical constraint built into it. Enriching uranium is hard. Building missiles requires factories. Counting warheads is possible because they exist as physical objects. That physical scarcity is what eventually made arms control treaties work, because you could verify compliance. The horror of mutually assured destruction was at least a stable horror.
AI runs on compute, data, and talent. Compute can be manufactured domestically, purchased through intermediaries, or built around different chip architectures entirely. Data can be stolen, synthesized, or built up from open-source foundations. Talent moves wherever the work is. The moat is real, and it leaks constantly.
The more honest historical parallel is Britain's Chain Home radar network in 1940. Chain Home was genuinely decisive in the Battle of Britain. German pilots flew into airspace where British controllers could see them coming. The Luftwaffe's strategic plan assumed approximate informational parity. They were wrong, and it cost them the campaign. Germany had radar technology too. What Germany didn't have was the system around it: the network of stations, the protocols for relaying intercept data to controllers in real time, the doctrine for acting on that data under fire, the trained personnel who made the whole thing function when it actually mattered.
That distinction between technology and system is the most important thing to understand about where the US stands right now. The advantage is the years of classified deployment infrastructure, the operational doctrine built around AI-generated intelligence, the battlefield data from three actual conflicts that has been feeding back into the systems themselves. That takes years to build. It doesn't replicate overnight from a procurement document.
The question is how long it stays ahead.
Where China Stands
The PLA's doctrinal framework calls the goal "intelligentized warfare." The concept treats AI as the organizing principle for the entire future military, not a layer added onto existing structures. Georgetown's Center for Security and Emerging Technology reviewed thousands of PLA procurement requests from 2023 and 2024 and found something pointed: China is building AI decision-support systems specifically designed to compensate for perceived weaknesses in its own officer corps. The PLA doesn't fully trust its chain of command to outthink American commanders in a fast-moving conflict. So it's building AI to do it instead.
And China has a real card to play. DeepSeek's emergence in early 2025 showed that a highly capable reasoning model could be built with significantly less compute than Western frontier labs require. That efficiency advantage matters in a military context because edge-deployed systems, drones and autonomous vehicles operating far from cloud infrastructure, can't run heavy server-side inference. PLA procurement notices referencing DeepSeek accelerated throughout 2025. The model runs on Huawei's domestically produced chips, which is exactly the kind of "algorithmic sovereignty" Beijing has been building toward for years.
The Pentagon's own December 2025 China report acknowledged the performance gap had "narrowed."
The harder gap to measure is operational. The PLA hasn't fought a war since 1979. Its AI systems have been tested in simulations and procurement benchmarks, not in the live-fire conditions that US and Israeli systems have been refined through across three actual conflicts in five years. Simulation-trained AI and combat-tested AI are different things. How different is something you only discover when it matters.
And there are zero ethical debates happening inside Beijing about any of this. The same Georgetown procurement review found nothing resembling the Anthropic-style red lines around autonomous kill chains. A March 2025 paper from PLA-linked researchers described fully autonomous execution of combat decisions in urban environments, including the decision to engage, as a straightforward development goal. Moving that fast toward autonomous lethal AI probably creates real failure modes: systems that misidentify targets, escalate in ways operators can't reverse, behave unpredictably under stress. But the countries that find those limits will be the ones that deployed first.
What the Rest of the World Demonstrated
Ukraine showed the first generation of AI-enabled warfare in practice. AI-assisted drone targeting went from roughly 30-50% accuracy to around 80%. Both sides developed electronic warfare countermeasures, and both sides adapted around them. Ukrainian volunteer developers were shipping AI targeting modules for $25 a drone. The whole conflict became a live machine-learning competition where the training data was real battlefield performance.
If Ukraine surprised you, Gaza went further still. Israel deployed a targeting stack with no real precedent in open warfare. The Gospel generated building target lists. Lavender identified individual Hamas members from commanders down to foot soldiers. "Where's Daddy" tracked targets' phones to their homes. The IDF maintained that human validation occurred at the final step, but the pace of operations had compressed that window to seconds.
Iran, this week, is the inverse demonstration. Shahed drones in large numbers. Ballistic missiles aimed at fixed, known targets. The strikes have caused real damage: six American soldiers killed, airports hit across the Gulf, Amazon's data centers offline. But the UAE Ministry of Defense reported intercepting 165 ballistic missiles, two cruise missiles, and 541 Iranian drones since the counterstrikes began. Most of them never arrived.
When one side has AI-enabled precision and the other is launching at volume without it, that intercept ratio is what the divergence actually looks like in practice.
So Is AI Actually a Competitive Edge?
Yes. Definitively, in 2026. The evidence is running right now over Iranian airspace, and it's been accumulating since 2020.
What it is, specifically, is a significant multiplier on existing military capability. It makes capable militaries faster, more precise, and able to sustain operational tempo that human staff alone could never match. It doesn't transform an underfunded military with bad doctrine into a formidable one.
And the advantage sits on a narrower foundation than it looks. A small number of American companies control the frontier models. Those companies have their own views on what their technology should do, and those views are now demonstrably negotiable under political pressure, in ways that create real instability at the worst possible moments. The operational data that makes battlefield AI good accumulates only through actual conflicts. The talent pipeline for building frontier models doesn't respect borders.
The arms race parallel is real. The Manhattan Project was classified for three years before it changed everything. This race is playing out in corporate press releases, Pentagon procurement notices, and X posts from AI company CEOs, with active strikes in the background and an ongoing negotiation about what the models are even allowed to do.
The window in which the US holds a commanding lead in military AI is open. It is not permanent.
Sources: Al Jazeera, CNBC, Washington Post live conflict coverage (March 2026); Interesting Engineering, "Iran war exposes the expanding role of AI in military strike planning"; MIT Technology Review, "OpenAI's compromise with the Pentagon is what Anthropic feared"; Foreign Affairs, "China's AI Arsenal" (March 2026); CSET, "China's Military AI Wish List" (February 2026); DefenseScoop, GenAI.mil and Pentagon AI coverage; Breaking Defense, "NATO picks Palantir's Maven AI" (April 2025); U.S. Army War College, "AI's Growing Role in Modern Warfare" (August 2025); CSIS, "Technological Evolution on the Battlefield" (October 2025); UK House of Commons Library, "US-Israel strikes on Iran: February/March 2026."