Welcome to the Artificial Intelligence Incident Database
Searchable, open repository of real-world AI harms and near-misses — autonomous-vehicle fatalities, facial-recognition misidentifications, deepfake fraud, hallucinated legal citations and biased algorithmic decisions.
Official source: incidentdatabase.ai
Editorial summary
A searchable, open repository of real-world AI harms and near-misses, operated by the Responsible AI Collaborative. It provides incident taxonomies, public submission tools, and an API, and plays a role analogous to aviation-safety databases.
Why this matters
The AI Incident Database is to AI what aviation-safety databases are to flying: a searchable, open record of real-world harms and near-misses that helps builders of future systems avoid repeating them. Each incident, from an autonomous-vehicle fatality to a hallucinated legal citation, is classified under shared taxonomies, and anyone can submit new reports. For journalists, regulators, and buyers of AI products, it is the first place to check whether a vendor's chosen technique has already produced documented harm, either through the site's search or its API (see the sketch below). Operated by the Responsible AI Collaborative, it is one of the few sources in the field that genuinely welcomes external submissions.
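Because the database exposes an API, that kind of pre-purchase triage can be scripted rather than done by hand. The Python sketch below is illustrative only: the endpoint URL, query parameters, and response fields are assumptions for the sake of the example, not the documented AIID API, whose actual shape is described at incidentdatabase.ai.

```python
# Minimal sketch of programmatically searching an incident database.
# NOTE: the endpoint and parameters below are hypothetical assumptions,
# not the documented AIID API; consult incidentdatabase.ai for the real one.
import requests

API_URL = "https://incidentdatabase.ai/api/incidents"  # hypothetical endpoint


def search_incidents(keyword: str, limit: int = 10) -> list[dict]:
    """Return up to `limit` incident records whose text matches `keyword`."""
    resp = requests.get(
        API_URL,
        params={"q": keyword, "limit": limit},  # assumed query parameters
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # assumed to be a JSON list of incident objects


if __name__ == "__main__":
    # Example triage: has facial recognition already produced documented harm?
    for incident in search_incidents("facial recognition"):
        print(incident.get("incident_id"), incident.get("title"))
```

A real client would work the same way in outline, issue a search query, then inspect the matching incident records and their taxonomy classifications, even if the field names differ from those assumed here.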
At a glance
- Type: Tool
- Country: Global / multi-national
- Agency: —
- Published: Jan 1, 2023
- Last verified: Apr 2, 2026
- Permalink: https://airesourcezone.com/resources/ai-incident-database
Ready to read it at the source? Visit incidentdatabase.ai →
Related resources
Other resources that share at least one topic with this one.
- Tool: Model cards — Google DeepMind
  Directory of structured model cards for Google DeepMind systems (Gemini, Gemma, Imagen, Veo, Lyria) giving a consistent, skimmable summary o...
- Consumer: Trustworthy Artificial Intelligence
  Mozilla Foundation consumer-facing explainer for what trustworthy AI should mean in practice, organised around five values — privacy, fairne...
- Government (Singapore): Model AI Governance Framework for Generative AI
  Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering acco...
  Source: AI Verify Foundation
- Government (United States): NIST AI Risk Management Framework (AI RMF 1.0)
  Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure,...
  Source: National Institute of Standards and Technology (NIST) — Artificial Intelligence
- Research (United Kingdom): Research Programme on AI, Government and Policy
  Oxford Internet Institute research programme (Oct 2021 to Oct 2026) focused on how AI intersects with governance — legislative responses lik...
- Government (Japan): AI Guidelines for Business (Ver. 1.0)
  Joint METI and MIC guidelines consolidating earlier Japanese AI R&D, utilization, and governance texts into a single living document setting...