Key Takeaways
- Cybercriminals are likely to develop AI-powered toolkits that can automate cyberattacks at scale in the near future
- These toolkits may be used to enhance various stages of attacks, including initial network reconnaissance, phishing, and data theft
- The development of AI-powered toolkits could lead to a "democratization of threats," making it easier for less sophisticated attackers to launch complex attacks
- Defenders may need to redefine success in the post-AI era, focusing on minimizing the impact of attacks rather than preventing them entirely
- AI-enabled defenses that can detect and respond to malicious activity in real time will be crucial in mitigating the threat of AI-powered attacks
Introduction to the Threat of AI-Powered Attacks
The world of cybersecurity is on the cusp of a significant shift, with cybercriminals likely to develop AI-powered toolkits that can automate cyberattacks at scale. According to Heather Adkins, Vice President of Security Engineering at Google, this development is probably still a few years away, but the beginnings of the trend are already visible. Cybercriminals are using AI to enhance small parts of their workflows, such as checking the grammar and spelling of phishing copy, and it's only a matter of time before these individual components are combined into a full, end-to-end toolkit.
The Current State of AI in Cyberattacks
The Google Threat Intelligence Group (GTIG) has published an overview of the most recent developments in how attackers are experimenting with AI. According to the report, malware families are already using Large Language Models (LLMs) to generate commands for stealing victim data. Additionally, state-backed actors from countries such as China, Iran, and North Korea are abusing AI tools to aid different stages of their attacks, including initial network reconnaissance and command and control (C2) development. These small components may seem insignificant on their own, but they could be chained together to provide functionality similar to today's exploit kits.
The Fear of Democratization of Threats
The fear among senior Googlers is that these small components will be combined to create a powerful toolkit that can be used by anyone, regardless of their level of sophistication. This "democratization of threats" could lead to a significant increase in the number of complex attacks, making it more difficult for defenders to keep up. Anton Chuvakin, Security Advisor at Google’s Office of the CISO, compared this potential development to the "Metasploit moment" 20 years ago, when exploit frameworks became easily accessible and made it easier for attackers to launch complex attacks.
The Potential Impact of AI-Powered Attacks
The potential impact of AI-powered attacks is significant. The worst-case scenario is an autonomously executing ransomware toolkit that spreads rapidly, encrypting computers en masse. But even a well-intentioned actor who releases an AI-powered tool that autonomously patches vulnerabilities could still cause widespread disruption. The key point is that the motives of the person or group behind such a tool will play a significant role in determining its impact.
The Limitations of Current AI Technology
While the potential of AI-powered attacks is significant, it's worth noting that current AI technology still struggles with the basics. LLMs cannot discern right from wrong, and when hunting for vulnerabilities they often wander down unproductive reasoning paths and fail to change course. However, as AI technology continues to evolve, these limitations will likely be overcome, and the threat of AI-powered attacks will grow accordingly.
The Need for AI-Enabled Defenses
To mitigate the threat of AI-powered attacks, defenders will need AI-enabled defenses that can detect and respond to malicious activity in real time. This may involve using intelligent reasoning systems to decide when to shut down an instance or turn off a particular service. Implementing these systems will require care, however, because an overly aggressive automated response can itself break production systems. As Adkins noted, "we're going to have to put these intelligent reasoning systems behind real-time decision-making and disrupt decision-making on the ground, without causing reliability problems."
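To make the reliability tension concrete, here is a minimal, hypothetical sketch of a gated auto-response policy: autonomous isolation is allowed only above a high-confidence threshold and never for critical services, with everything else escalated to a human. The thresholds, field names, and actions are illustrative assumptions, not any real Google system.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    risk_score: float        # 0.0-1.0 from an upstream detection model (assumed)
    is_critical_service: bool

ISOLATE_THRESHOLD = 0.95     # act autonomously only on very high confidence
REVIEW_THRESHOLD = 0.60      # otherwise hand off to a human analyst

def decide(alert: Alert) -> str:
    """Pick a response action; never auto-isolate a critical service,
    to avoid causing the reliability problems Adkins warns about."""
    if alert.risk_score >= ISOLATE_THRESHOLD and not alert.is_critical_service:
        return "isolate-host"
    if alert.risk_score >= REVIEW_THRESHOLD or alert.is_critical_service:
        return "escalate-to-human"
    return "log-only"

print(decide(Alert("web-42", 0.97, False)))  # isolate-host
print(decide(Alert("db-01", 0.97, True)))    # escalate-to-human
print(decide(Alert("dev-07", 0.30, False)))  # log-only
```

The design choice here is that automation handles only the unambiguous cases, which keeps the speed benefit of real-time response without letting a model's misjudgment take down a critical service.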
Redefining Success in the Post-AI Era
The development of AI-powered attacks may require defenders to redefine what success looks like in the post-AI era. Rather than focusing on preventing attacks entirely, defenders may need to focus on minimizing the impact of attacks, and reducing the amount of time that attackers are able to spend inside a network. This may involve using a range of tactics, including AI-enabled defenses, to disrupt and degrade the capabilities of attackers. As Adkins noted, "we have to start reasoning about real-time disruption capabilities or degradation, and use the whole information operations playbook to change the battlefield to confuse AI attackers."
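One concrete way to "change the battlefield" is deception: seeding the environment with decoy hosts and canary credentials so that automated reconnaissance ingests misleading data, and any use of a decoy becomes a high-fidelity alarm. The sketch below is a hypothetical illustration of that idea; the record format and naming scheme are assumptions, not a description of any tool mentioned in the article.

```python
import json
import secrets

def make_decoy_host(index: int) -> dict:
    """Build one plausible-looking but entirely fake inventory record."""
    return {
        "hostname": f"backup-srv-{index:02d}",               # fake host
        "service": "ssh",
        "credential": f"svc_backup:{secrets.token_hex(8)}",  # canary token
    }

def build_decoy_inventory(count: int = 5) -> list[dict]:
    """Decoys pollute an attacker's reconnaissance data; because the
    credentials are never used legitimately, any login attempt with
    one is by definition malicious and can trigger an alert."""
    return [make_decoy_host(i) for i in range(count)]

inventory = build_decoy_inventory()
print(json.dumps(inventory[0], indent=2))
```

A defender would plant these records where automated scrapers look (configuration stores, internal wikis, host inventories) and wire the canary credentials into alerting, turning the attacker's own automation into a detection signal.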