Key Takeaways:
- The US Department of War has released a new strategy to accelerate the deployment of AI for military purposes, aiming to make the US military the world’s leading AI-enabled fighting force.
- The strategy focuses on experimentation with AI models, eliminating bureaucratic barriers, and investing in AI infrastructure, but ignores the limitations and potential risks of AI capabilities.
- The approach has been criticized for being overly reliant on marketing hype and lacking technical rigour, with potential consequences including increased civilian harm and vulnerability in military settings.
- The US is not alone in incorporating AI into its military, with countries like China and Israel also investing in AI-powered military projects.
- The strategy’s emphasis on speed and efficiency may lead to unintended consequences, such as accelerated civilian death tolls and unnecessary escalation of conflicts.
Introduction to the US AI Strategy
The United States is set to become the world’s leading artificial intelligence-enabled fighting force, according to the country’s Department of War. The department’s new "AI Acceleration Strategy" aims to accelerate the deployment of AI for military purposes, with the goal of making the US military more lethal and efficient. However, critics argue that the strategy ignores the limitations and potential risks of AI capabilities, and instead relies on marketing hype to drive its adoption. This approach has been dubbed "AI peacocking," where the public signalling of AI adoption and leadership clouds the reality of unreliable systems.
The Strategy’s Objectives and Projects
The US AI strategy sets an unambiguous objective of establishing the US military as the frontrunner in AI warfighting. To achieve this goal, the department will encourage experimentation with AI models, eliminate bureaucratic barriers to implementing AI across the military, support investment in AI infrastructure, and pursue a set of major AI-powered military projects. One of these projects aims to use AI to turn intelligence into weapons in hours, not years, raising concerns about the potential for increased civilian harm. Another seeks to put American AI models directly in the hands of three million civilian and military personnel, without clear justification or consideration of the potential impacts.
The Narrative vs. Reality
Despite the hype surrounding AI, the reality is that the technology has significant limitations. A recent MIT study found that 95% of organizations saw zero return on their investment in generative AI, owing to technical limitations such as the inability to retain feedback, adapt to new contexts, or improve over time. These findings apply broadly, beyond just business contexts, and highlight shortcomings that marketing hype often conceals. AI is an umbrella term covering a spectrum of capabilities, from large language models to computer vision models, each with different uses and purposes. Yet most AI applications have been bundled together into a globally successful marketing agenda, reminiscent of the dotcom bubble of the late 1990s.
The Risks of AI Peacocking
The Department of War’s AI-first strategy reads more like a guide to "AI peacocking" than a legitimate plan to implement technology. AI is posited as the solution to every problem, including problems that do not exist, and the marketing behind AI has manufactured a fear of falling behind. The strategy feeds off this fear by alluding to a technically advanced military posture, but in reality these capabilities fall short of their claimed effectiveness. In military settings, such limitations can have devastating consequences, including increased civilian death tolls. By implementing AI across its military through a marketing-led business model, without technical rigour and integrity, the US risks leaving the Department of War dangerously exposed when these brittle systems fail.
The Consequences of AI-Powered Military Projects
The use of AI in military settings raises significant concerns about increased civilian harm. For example, the Israeli military’s use of AI-enabled decision support systems has been linked to a higher civilian death toll in Gaza. The US strategy’s emphasis on speed and efficiency may lead to similar outcomes: an accelerated pipeline from intelligence to weapons risks both higher civilian death tolls and unnecessary escalation of conflicts. Furthermore, the widespread dissemination of military AI capabilities across a civilian population could have unintended consequences, including the misuse or exploitation of those capabilities. The absence of clear justification or consideration of these impacts underscores the need for a more nuanced and informed approach to developing and deploying AI-powered military projects.
Conclusion
The US Department of War’s AI strategy is overly reliant on marketing hype and ignores the limitations and potential risks of AI capabilities. It has been criticized for prioritizing speed and efficiency without sufficient consideration of the consequences. As the US and other countries continue to invest in AI-powered military projects, it is essential to prioritize technical rigour and integrity, and to weigh carefully the potential impacts of these technologies on civilian populations and the broader geopolitical landscape. A more nuanced and informed approach would ground the development and deployment of AI-powered military projects in a clear understanding of the technology’s capabilities and limitations, aligned with our values and principles.