Measuring AI’s Dark Side

Key Takeaways:

  • Reports of AI-related incidents rose roughly 50% year over year from 2022 to 2024, and by October 2025 the count for the year had already surpassed the 2024 total.
  • The AI Incident Database, a crowd-sourced repository of media reports on AI mishaps, has collected data on various AI-related events, including deepfake-enabled scams and chatbot-induced delusions.
  • The rise in AI incidents is partly due to increased media scrutiny of the technology, but also reflects real-world harm caused by AI.
  • Efforts to improve accountability and tracking of AI incidents are underway, including the development of tools to detect AI-generated content and the implementation of regulations such as the E.U. AI Act and California’s Transparency in Frontier AI Act.
  • Major AI companies, including Google, Microsoft, and Meta, are backing initiatives to ensure authenticity and flag AI-generated content, but more work is needed to address the growing risks associated with AI.

Introduction to AI-Related Incidents
The increasing adoption of artificial intelligence (AI) around the world has led to a growing number of AI-related incidents, with reports rising by 50% year-over-year from 2022 to 2024, according to the AI Incident Database. As Daniel Atherton, an editor at the AI Incident Database, notes, "AI is already causing real-world harm… Without tracking failures, we can’t fix them." The database compiles its data by collecting news coverage of AI-related events and consolidating multiple reports about the same event into a single incident entry. Crowd-sourced data has its limitations, however, and the rise in AI incidents partly reflects increased media scrutiny of the technology.

Breaking Down AI-Related Incidents
Artificial intelligence is an umbrella term for several different technologies, from autonomous vehicles to chatbots, and the database lumps these together without a comprehensive structure. As Simon Mylius, an affiliate researcher at MIT FutureTech, notes, "That makes it very, very difficult to see patterns over whole datasets to understand trends." To address this issue, Mylius and colleagues released a tool that enhances the AI Incident Database by using a language model to parse the news reports associated with each incident, before classifying them by type of harm and severity. This tool aims to help policymakers sort large numbers of reports and spot trends, and ultimately, respond quickly to emerging harms.
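As a rough illustration of the kind of pipeline described here, the sketch below has a language model read the news reports attached to an incident and return a harm category and a severity score. The category list, prompt, and model name are assumptions chosen for illustration; this is a minimal sketch, not the actual MIT FutureTech tool.

```python
# Minimal sketch of an LLM-based incident classifier. The categories, prompt
# wording, and model are illustrative assumptions, not the real tool.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

HARM_CATEGORIES = [
    "misinformation", "discrimination", "human-computer interaction",
    "malicious use", "privacy", "system failure",
]

def classify_incident(report_texts: list[str]) -> dict:
    """Classify one incident (possibly several news reports about the same
    event) by harm type and a 1-5 severity, returning parsed JSON."""
    prompt = (
        "You are labeling AI incident reports. Given the reports below, "
        f"pick one category from {HARM_CATEGORIES} and a severity from 1 "
        "(minor) to 5 (severe). Reply as JSON with keys 'category', "
        "'severity', and 'rationale'.\n\n" + "\n---\n".join(report_texts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Example: a single-report incident
# classify_incident(["Scammers used a cloned voice to impersonate a CEO..."])
```

Consolidating multiple reports into one call, as above, mirrors how the database treats several stories about the same event as a single incident.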

Trends in AI-Related Incidents
Using the tool to sort incidents against an established taxonomy of AI risks reveals that the upward trend has not been uniform across domains. While reports of AI-generated misinformation and discrimination decreased in 2025, ‘human-computer interaction’ incidents, which include cases of so-called ChatGPT psychosis, have risen. Reports of malicious actors using AI, particularly to scam victims or spread disinformation, have grown the most, rising eightfold since 2022. As Atherton notes, "All the reporting that has happened globally is a fraction of the lived realities of everybody experiencing AI harms."
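Once incidents carry labels like the ones above, spotting uneven growth across domains reduces to a group-and-count. The toy sketch below shows the shape of that calculation; the sample records are invented, and the real analysis would run over the full labeled dataset.

```python
# Toy illustration of the trend analysis: once each incident carries a
# category and a year, uneven growth across domains is a group-and-count away.
# The sample records are invented for illustration only.
from collections import Counter

incidents = [
    {"year": 2022, "category": "malicious use"},
    {"year": 2025, "category": "malicious use"},
    {"year": 2025, "category": "malicious use"},
    {"year": 2025, "category": "human-computer interaction"},
]

# Count incidents per (year, category) pair.
counts = Counter((i["year"], i["category"]) for i in incidents)

def growth(category: str, start: int, end: int) -> float:
    """Ratio of incident counts between two years for one category."""
    base = counts[(start, category)] or 1  # guard against division by zero
    return counts[(end, category)] / base

print(growth("malicious use", 2022, 2025))  # 2.0 on this toy data
```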

The Rise of Deepfake Incidents
The increase in deepfake incidents has coincided with rapid improvements in their quality and accessibility. The shift shows that while some AI incidents stem from system limitations, others are driven by technical advances. As Mylius notes, "I think we’re going to see lots more cyber attacks that result in aggregated, significant financial loss in the very near future." The case of xAI’s Grok, which allowed users to sexualize images of real women and minors at scale, highlights the need for greater accountability and regulation in the development and use of AI.

Efforts to Improve Accountability
Tools to detect AI-generated content and regulations such as the E.U. AI Act and California’s Transparency in Frontier AI Act are among the efforts underway to improve accountability and tracking of AI incidents. Content Credentials, a system of watermarks and metadata designed to ensure authenticity and flag AI-generated content, is backed by major AI companies, including Google, Microsoft, OpenAI, Meta, and ElevenLabs. However, more work is needed to address the growing risks associated with AI; as Atherton notes, "staying alert to new risks is crucial, but it’s also important not to allow present harms to become ‘part of the background noise’."
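To make the idea behind Content Credentials concrete, the sketch below uses a deliberately simplified stand-in: it binds provenance metadata to a file hash with an HMAC signature, so any edit to the content breaks verification. The real C2PA Content Credentials standard uses X.509 certificates and embedded manifests rather than a shared key; everything here, including the key and metadata fields, is illustrative.

```python
# Simplified stand-in for content credentials: sign a hash of the content
# together with provenance metadata, so tampering with either breaks checks.
# Real Content Credentials (C2PA) use certificate-based signatures, not HMAC.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate

def make_credential(content: bytes, metadata: dict) -> dict:
    """Sign a hash of the content together with its provenance metadata."""
    payload = json.dumps(
        {"sha256": hashlib.sha256(content).hexdigest(), **metadata},
        sort_keys=True,
    )
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_credential(content: bytes, credential: dict) -> bool:
    """Check both the signature and that the content hash still matches."""
    expected = hmac.new(SIGNING_KEY, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    claimed = json.loads(credential["payload"])
    return (hmac.compare_digest(expected, credential["signature"])
            and claimed["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...raw image bytes..."
cred = make_credential(image, {"generator": "example-ai-model"})
print(verify_credential(image, cred))         # True: content and metadata intact
print(verify_credential(image + b"!", cred))  # False: content was altered
```

The key property, preserved even in this toy version, is that a credential travels with the file and fails verification the moment the underlying content changes.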

Conclusion
The increasing number of AI-related incidents underscores the need for greater accountability and regulation in the development and use of AI. As Mylius notes, "Societal issues, privacy issues, erosion of rights, disinformation and misinformation [are] less obvious when an individual incident happens, but they add up to quite significant harms overall." Tracking and analyzing AI-related incidents helps us understand these risks and work toward mitigating them. Atherton’s point bears repeating: "Without tracking failures, we can’t fix them." Prioritizing transparency and accountability is essential if AI’s benefits are to be realized while its harms are minimized.

https://time.com/7346091/ai-harm-risk/
