Key Takeaways
- AI can act as “artificial staff support” by automating clerical tasks such as mission receipt, mission‑analysis drafting, voice‑to‑doctrine translation, order production, and red‑team questioning, freeing commanders and staff for conceptual work.
- AI must never be tasked with commander‑level judgments—intent, decisive operation, risk acceptance, or order approval—because these are nondelegable, responsibility‑bearing functions.
- Product‑governance discipline (single source of truth, version control, named validators) is a prerequisite for safe AI use; without it, speed translates into hidden errors that can cause fratricide or synchronization breakdowns.
- Deployability in contested environments is a critical limitation: many current tools rely on cloud connectivity and could fail when denied, degraded, intermittent, or limited (DDIL) conditions arise.
- Adversaries are pursuing similar AI‑enabled C2 improvements; relative governance discipline, not raw adoption speed, will determine which force gains a tempo advantage in future conflict.
AI as a Force‑Multiplier for Staff Work
During the 3rd Brigade Combat Team's rotation at the Joint Readiness Training Center (JRTC) at Fort Polk, the lead plans officer stood between a whiteboard and a map, put on a headset, and walked the staff through a complex defensive operation. As he spoke, a transcription tool captured every word, and thirty minutes later the brigade had a first‑draft operations order in correct doctrinal format, drafted not by any officer in the room but by an AI trained on Army operations‑order structure. "It was the twelfth hour of the planning cycle when our lead plans officer stood between a whiteboard and a map… and started describing a complex defensive operation," the authors recall. This single instance illustrated how AI can compress the transcription and formatting steps that traditionally consume hours of staff labor.
Doctrinal Boundaries for AI Employment
Army Doctrine Publication 5‑0, The Operations Process, makes clear that planning is commander‑driven and staff‑supported; the commander owns the conceptual dimension while the staff handles the supporting labor. The authors note that both Major Michael Zequeira and Colonel Jason Adler have argued that AI’s greatest immediate value lies in “unburdening staffs without severing the human role in judgment.” Consequently, the brigade confined AI to tasks that required high clerical effort but limited conceptual judgment, leaving the commander’s intent, decisive operation selection, risk acceptance, and order approval firmly in human hands.
Mission Receipt: Faster Warning Orders
When the division‑level operation order arrived, the staff fed it and its supporting products into an AI tool that extracted specified and implied tasks, constraints, restraints, command relationships, and critical deadlines. Although human interpretation and validation remained essential, the raw output was faster and less error‑prone than manual scanning. “We published a complete warning order to subordinate battalions within one hour of receipt—a pace that exceeds standard brigade performance for a warning order subordinates can actually execute against,” the article states. This acceleration gave subordinate units more time to prepare and rehearse.
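The article does not describe the tool's internals, but a minimal sketch of a prompt‑driven extraction step might look like the following; the `run_model` stub, prompt wording, and JSON keys are assumptions for illustration, not the brigade's actual tool:

```python
import json

def run_model(prompt: str) -> str:
    """Stub for whatever accredited model endpoint the unit is cleared to use."""
    raise NotImplementedError("wire this to an approved model service")

EXTRACTION_PROMPT = """\
From the operation order below, return one JSON object with the keys
specified_tasks, implied_tasks, constraints, restraints,
command_relationships, and critical_deadlines. Each item must be a short
string ending with its source paragraph number so a human can verify it.

OPORD TEXT:
{opord}
"""

def extract_mission_factors(opord_text: str) -> dict:
    raw = run_model(EXTRACTION_PROMPT.format(opord=opord_text))
    factors = json.loads(raw)
    # Machine output is a draft, never a validated product: a staff officer
    # still checks every task and deadline against the source order.
    factors["validated_by"] = None
    return factors
```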
Mission Analysis: Starting With a Structured Draft
A separate AI‑enabled tool ingested planning documents, pulled out mission‑analysis factors, and produced structured outputs for staff refinement. While assumptions, risks, and essential tasks still demanded human judgment, the operations officer entered the mission‑analysis brief with a working product rather than a folder of marked‑up notes. The authors observe that “a staff working from a structured draft under deadline behaves fundamentally differently than one staring at empty paragraphs at hour four of a twenty‑four‑hour clock.” The shift allowed the team to spend more time refining assumptions instead of gathering them.
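To make "starting from a structured draft" concrete, here is one hypothetical way the extracted factors could be poured into a briefing skeleton, with the judgment fields deliberately left blank for humans; the template and field names are illustrative only:

```python
MISSION_ANALYSIS_TEMPLATE = """\
1. SPECIFIED TASKS: {specified_tasks}
2. IMPLIED TASKS: {implied_tasks}
3. CONSTRAINTS: {constraints}
4. RESTRAINTS: {restraints}
5. ASSUMPTIONS: TBD (staff must author and own these)
6. RISKS / ESSENTIAL TASKS: TBD (commander and staff judgment only)
"""

def draft_mission_analysis_brief(factors: dict) -> str:
    # Clerical fields are pre-populated from the extraction step; the
    # judgment fields stay blank so the staff debates content, not format.
    return MISSION_ANALYSIS_TEMPLATE.format(
        specified_tasks="; ".join(factors.get("specified_tasks", [])),
        implied_tasks="; ".join(factors.get("implied_tasks", [])),
        constraints="; ".join(factors.get("constraints", [])),
        restraints="; ".join(factors.get("restraints", [])),
    )
```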
Voice‑to‑Doctrine Translation: Preserving Commander’s Thought
Perhaps the most consequential use case was converting the plans officer's verbal walkthrough into doctrinal prose. Commanders naturally think in narrative, correction, and refinement, but traditional staff work forces that thinking through a bottleneck of transcription, interpretation, and formatting. The AI took the transcript and "put it into the format the Army requires… preserving and formalizing command thought—narrowing the gap between how commanders speak and how doctrine requires headquarters to publish." By automating the translation step, the team reduced the risk that commander's intent would be diluted between spoken guidance and the published order.
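As a sketch only, the pipeline this section describes reduces to two stages, transcription and doctrinal reformatting; the `transcribe` and `run_model` stubs below stand in for tools the article does not name:

```python
def transcribe(audio_path: str) -> str:
    """Stub: any accredited speech-to-text tool fits here."""
    raise NotImplementedError

def run_model(prompt: str) -> str:
    """Stub: the approved text model endpoint, as in the earlier sketch."""
    raise NotImplementedError

FORMAT_PROMPT = """\
Rewrite the planner's spoken walkthrough below as OPORD paragraph 3
(Execution) in doctrinal format. Preserve every unit, control measure,
and time exactly as spoken; do not add details that were not said.

TRANSCRIPT:
{transcript}
"""

def voice_to_doctrine(audio_path: str) -> str:
    transcript = transcribe(audio_path)
    # The prompt constrains the model to formalize, not invent, command
    # thought; the plans officer still reviews the draft line by line.
    return run_model(FORMAT_PROMPT.format(transcript=transcript))
```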
Orders Production and Synchronization: Absorbing Product Churn
After the scheme of maneuver was set, an AI tool drafted warning orders, built timelines, created synchronization matrices, and generated first‑draft versions of supporting products. In many headquarters, each refinement cascades into changes across timelines, tasks, matrices, and rehearsal materials, causing staff to expend disproportionate energy merely keeping products aligned. By letting AI absorb that churn, the brigade could “spend our energy assessing the quality of the plan rather than maintaining the mechanics of the plan.” The result was a higher‑fidelity plan with less internal friction.
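The "churn" the authors describe is essentially a dependency problem: when one product changes, everything downstream must be regenerated rather than hand‑patched. A toy illustration of that idea, with hypothetical product names:

```python
# Hypothetical product dependency map: a change to an upstream product
# invalidates every draft that was derived from it.
DEPENDS_ON = {
    "timeline": ["scheme_of_maneuver"],
    "sync_matrix": ["scheme_of_maneuver", "timeline"],
    "rehearsal_script": ["sync_matrix"],
}

def stale_products(changed: str) -> list[str]:
    """Return downstream products, in rebuild order, affected by a change."""
    out: list[str] = []
    frontier = [changed]
    while frontier:
        node = frontier.pop(0)
        for product, deps in DEPENDS_ON.items():
            if node in deps and product not in out:
                out.append(product)
                frontier.append(product)
    return out

# A change to the scheme of maneuver invalidates everything below it.
assert stale_products("scheme_of_maneuver") == [
    "timeline", "sync_matrix", "rehearsal_script"
]
```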
AI as a Staff Coach: Enhancing Rigor Without Replacing Judgment
In a final, underappreciated application, the brigade executive officer prompted an AI to generate pointed, cross‑functional questions, such as "What is our biggest vulnerability during a forward passage of lines?" or "What happens to the scheme of fires if the main effort breaches thirty minutes early?" The AI functioned as a staff coach, surfacing considerations the staff might otherwise miss, while humans retained the authority to answer and act on those questions. "The machine enhanced rigor without displacing judgment," the authors note, underscoring the importance of keeping AI in a supportive, not directive, role.
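A hypothetical version of that technique, with the key design choice visible in code: the model generates the questions, but each one is bound to a named human owner; the prompt and function are assumptions, not the unit's actual workflow:

```python
def run_model(prompt: str) -> str:
    raise NotImplementedError("approved model endpoint goes here")

RED_TEAM_PROMPT = """\
You are red-teaming a brigade defense plan. Write {n} pointed,
cross-functional questions, one per line, of the kind an opposing
commander would want this staff unable to answer.

PLAN SUMMARY:
{plan}
"""

def red_team_questions(plan_summary: str, owners: list[str], n: int = 5):
    raw = run_model(RED_TEAM_PROMPT.format(n=n, plan=plan_summary))
    questions = [line.strip() for line in raw.splitlines() if line.strip()]
    # The machine asks; named humans answer. Pairing each question with a
    # staff owner keeps accountability for the answer off the tool.
    return list(zip(questions, owners))
```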
Where AI Stayed Back: Course of Action Development
During course of action (COA) development, the AI deliberately took a back seat. This stage demands tactical imagination, terrain appreciation, enemy understanding, and the commander’s sense of acceptable risk—functions that are inherently conceptual. The plans officer stood at the map with the commander and operations officer, building COAs by hand against terrain, enemy disposition, and intent. “The AI was in the room. The AI was not the author,” the article emphasizes, reinforcing that AI’s proper place is to support, not to supplant, the commander’s creative and judgmental work.
The Risks Masked by the Training Environment
Even in a disciplined JRTC rotation, three failure modes emerged. First, hidden confidence: generative models can produce polished but factually incorrect output—wrong unit designations, inverted phase lines, fabricated control‑measure names, or time‑distance errors of thirty to sixty minutes. If such errors slipped into an executed order, they could cause fratricide or missed linkups. The section chiefs caught them only because they refused to treat AI output as validated, highlighting that “a polished paragraph is not a correct paragraph. A clean matrix is not an approved matrix.”
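The time‑distance class of error is also the easiest to backstop mechanically. A simple check of the following kind (the function, tolerance, and figures are illustrative, not from the article) would flag the thirty‑to‑sixty‑minute discrepancies the section chiefs caught by hand:

```python
def march_time_min(distance_km: float, rate_kmh: float) -> float:
    """Time-distance math a generative model can silently get wrong."""
    return distance_km / rate_kmh * 60.0

def flag_time_distance(legs: list[tuple[str, float, float, float]],
                       tolerance_min: float = 10.0) -> list[str]:
    """legs: (name, distance_km, rate_kmh, minutes claimed in the AI draft)."""
    flags = []
    for name, dist, rate, claimed in legs:
        expected = march_time_min(dist, rate)
        if abs(expected - claimed) > tolerance_min:
            flags.append(f"{name}: draft says {claimed:.0f} min, "
                         f"math says {expected:.0f} min")
    return flags

# Example: a 24 km road march at 16 km/h takes 90 minutes; a generated
# timeline claiming 45 minutes is exactly the kind of error that, if
# published, could cause a missed linkup.
print(flag_time_distance([("RTE IRON march", 24.0, 16.0, 45.0)]))
```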
Second, architectural fragility: the tools relied on cloud‑hosted, commercial models requiring continuous connectivity. In denied, degraded, intermittent, or limited (DDIL) conditions—or when operating on classified networks—these tools may fail or risk exposing classified data. The authors warn that dependence on such a service creates a logistics vulnerability as critical as water or fuel.
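One cheap mitigation, assuming nothing about any particular tool: decide at the start of planning whether the cloud endpoint is even reachable, and fall back to the rehearsed manual process if it is not. The host argument and workflow labels below are placeholders:

```python
import socket

def model_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Preflight check: can the staff even reach the cloud model endpoint?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_workflow(endpoint_host: str) -> str:
    """Pick the planning workflow once, up front, not mid-product."""
    if model_reachable(endpoint_host):
        return "ai_assisted"
    # DDIL path: connectivity is a consumable like water or fuel, so the
    # staff falls back to the manual process it has actually rehearsed.
    return "manual_templates"
```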
Third, strategic parity: adversaries are pursuing comparable AI‑enabled C2 improvements. If they integrate AI into their planning cycles more effectively, the tempo advantage observed at Fort Polk could evaporate. The decisive factor will be whose force first masters governance discipline, not merely who adopts the tools fastest.
Implications for Army Adoption
The authors distill four imperatives for brigade commanders and staffs considering AI:
- Apply AI aggressively to receipt of mission, mission‑analysis drafting, voice‑to‑doctrine translation, order production, and red‑team questioning—areas where clerical labor yields the greatest cognitive‑time return.
- Prohibit AI authorship of commander’s intent, decisive operation, or any approval‑authority product without explicit human validation; encode these limits in unit SOPs before operational pressure tests them.
- Institute product‑governance discipline up front—single source of truth, version control, named validators per product class—because governance, not speed, determines success; see the sketch of such a product ledger after this list. A brigade that adopts AI without these safeguards is worse off than one that abstains.
- Plan for connectivity loss by identifying which AI tools function in degraded conditions and rehearsing planning processes without them; dependence on a tool that cannot operate in DDIL environments creates a dangerous single point of failure.
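As referenced in the third bullet above, here is a minimal sketch of what encoding "named validators per product class" in tooling, rather than in memory, might look like; the billets, product classes, and record fields are assumptions, not drawn from the article:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical mapping of product class to the named billet that must
# validate it before release.
VALIDATORS = {
    "warning_order": "S3",
    "sync_matrix": "battle captain",
    "fires_annex": "FSO",
}

@dataclass
class ProductRecord:
    """One entry in the single source of truth for a staff product."""
    product_class: str
    version: int
    body: str
    validated_by: str | None = None
    validated_at: datetime | None = None

    def release(self, validator: str) -> "ProductRecord":
        required = VALIDATORS[self.product_class]
        if validator != required:
            raise PermissionError(
                f"{self.product_class} v{self.version} must be validated "
                f"by the {required}, not {validator}")
        self.validated_by = validator
        self.validated_at = datetime.now(timezone.utc)
        return self
```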
The broader Army should embed AI governance into professional military education (CGSC, captains’ courses, pre‑command courses) and eventually update FM 5‑0, Planning and Orders Production, with a chapter on AI‑enabled staff processes. By doing so, the force can preserve the essential human role in command while leveraging AI to reclaim time for the conceptual work that only humans can perform.