This post was written 2026-03-15 01:50:00 -0700 by Robert Whitney.
In late February the US struck over 1,000 targets in the first 24 hours of the war with Iran.
Now we are mid-way into March with over 5,500 total targets having been destroyed.
This campaign is moving far faster than past wars such as the 2003 invasion of Iraq.
This acceleration comes from AI processing vast treasure troves of data from satellites, drones, and intel signals.
The core tools in use are Palantir's Maven Smart System and Anthropic's Claude (despite recent drama).
The Maven Smart System, fused with the Claude LLM, scans imagery and intel for anomalies, while Claude acts as a layer for analysts to query data and rank priorities.
The system also suggests strike sequences and simulates scenarios.
Although the Pentagon blacklisted Anthropic in February as a "supply chain risk," Claude remains integral, with existing integrations staying in place until they are phased out by replacements.
CENTCOM Commander Adm. Brad Cooper has stated that "Humans will always make final decisions on what to shoot and what not to shoot and when to shoot."
It seems, for right now anyway, that AI is strictly allowed to recommend targets, prioritize them, and provide coordinates, but all lethal action requires human review.
I am sure there are plenty of commanders who would like autonomous lethal action; however, that doesn't seem to be on the schedule right now, and for that I am very glad!
If anything the Terminator franchise should have been a warning and not an instruction manual!
While the US may never allow autonomous action, other nation states (*cough* Ukraine *cough*) may do something completely different with AI. That in itself is a very scary thought, don't you think?
Anthropic draws the line at unrestricted military use: no full autonomy, no mass domestic surveillance.
This has led to them being blacklisted and to the Pentagon pivoting to OpenAI and xAI.
Labs want guardrails, but the DoD is pushing for "all lawful purposes."
Meanwhile there are congressional pushes for clearer rules on AI in warfare (statutory boundaries on autonomy & surveillance).
The key takeaways here are that AI is supercharging the kill chain and speeding up operations overseas. Humans still pull the trigger, for now, unlike in the Ukraine/Russia conflict, where drones are autonomously killing targets.
Below is a list of currently known events, as of this publication, showing how AI has been used in the Iranian war.
| Date / Period | Key Event Description | AI Usage / Role in the Event |
|---|---|---|
| February 27, 2026 | President Trump issues final "go order" for Operation Epic Fury from Air Force One. | Pre-operation planning; AI (Maven + Claude) used in intelligence prep and target list generation for initial strikes. |
| February 28, 2026 (opening salvo) | US and Israel launch coordinated strikes (~900 in first 12 hours); Supreme Leader Ali Khamenei and senior officials killed; targets include command/control, air defenses, missile sites, naval assets. Over 1,000 targets hit in first 24 hours. | Heavy AI reliance: Maven/Claude fused satellite/drone/intel data, proposed hundreds of prioritized targets with coordinates/weapons recommendations, compressed analysis from days to seconds for rapid synchronized wave. Enabled historic speed/scale. |
| February 28 – March 1, 2026 (initial phase) | Focus on "dazing" Iranian forces: strikes on C2 infrastructure, naval forces, ballistic missile sites, intel assets. Iran retaliates with missiles/drones across region. | AI tools (Claude via Maven) for real-time intelligence assessment, target ranking, "what-if" simulations, and prioritizing high-value assets to blunt Iranian response. |
| March 1–2, 2026 | Continued strikes on missile production, navy, proxies; air superiority established over parts of Tehran. | Ongoing AI support for data sifting/prioritization; accelerated kill-chain processing to maintain momentum. |
| March 3–5, 2026 | Intensified campaign; degradation of Iranian missile/drone capabilities reported; CENTCOM highlights AI's role in faster decisions. | Adm. Brad Cooper (CENTCOM) publicly touts "variety of advanced AI tools" for sifting vast data in seconds; Maven/Claude central to target selection/prioritization across thousands of strikes. |
| March 6–10, 2026 | Sustained high-intensity strikes; buried launchers hit with B-2 bombers; naval minelayers destroyed near Strait of Hormuz; Iranian launch rates drop sharply (to ~10% of Day 1 for missiles). | AI integration for real-time ops: processing multi-source intel, updating target lists dynamically, supporting Space Force/Cyber Command non-kinetic elements. |
| March 11, 2026 | "Most intense day" of strikes reported; total targets exceed 5,500; entire Soleimani-class warships eliminated; naval interdiction near Hormuz. | AI emphasized in CENTCOM update: tools enable overwhelming pace/precision; humans retain final lethal decisions, but AI compresses planning to real-time. |
| March 12–15, 2026 (ongoing) | Campaign continues with focus on remaining missile/naval/proxy threats; Trump signals potential extension (weeks); oil volatility, regional escalation risks. | Persistent AI use for sustainment/targeting; phase-out of Claude delayed due to operational reliance (replacements sought post-blacklist drama). |
Notes on AI overall:
- AI (primarily Maven Smart System powered by Claude) acted as an "accelerator" — enabling massive scale (e.g., 1,000+ targets Day 1, 5,500+ total) by automating data fusion, anomaly detection, prioritization, and scenario modeling.
- No autonomous strikes: CENTCOM/DoD repeatedly stress "humans will always make final decisions on what to shoot."
- Ethical backdrop: Use persisted despite Anthropic's Feb 2026 blacklist over guardrails on autonomy/surveillance.
Edits: After publishing this it came to my attention that Ukraine is already using autonomous AI systems in its war against Russia. I have updated this post to reflect that new information and have written another post about that conflict.
xnite, real name Robert Whitney, is a self-taught computer programmer with a passion for technology. His primary focus is on secure, reliable, and efficient software development that scales to meet the needs of the modern web. Robert has been writing since 2010 and has had contributions published in magazines such as 2600: The Hacker Quarterly. His background in technology & information security allows him to bring a unique perspective to his writing. Robert's work has also been cited in scientific reports, such as "Future Casting Influence Capability in Online Social Networks: Fake Accounts and the Evolution of the Shadow Economy" by Matthew Duncan, DRDC Toronto Research Centre.
For Minecraft-related inquiries, feel free to reach out to the community on the Break Blocks Club Discord server; for everything else, please email me at Sheepenheimer@proton.me
You can also find me on the various social media platforms listed here on my website, but I do not check them often.