After years of eye-opening statistics about cybersecurity attacks, it is AI incidents’ turn to be tracked and tallied. The AI Incident Database (AIID), a research effort, has compiled reports on 1,140 publicly disclosed AI-related incidents, classified into 23 types of harms and risks. The Organisation for Economic Co-operation and Development runs another, mostly automated tracker that has added approximately 330 AI incident reports per month to its database this year. Additionally, in April, the non-profit MITRE launched an AI incident-sharing site that incentivizes companies to confidentially report model tampering, adversarial data injection, voice cloning and other malicious acts targeting AI systems; it has already released 32 case studies for professionals and policymakers to reference. This article presents observations from leaders at MITRE and the AIID on the maturity of AI incident tracking, how to define what counts as an AI incident, incident trends and the benefits of sharing AI incident information. See “First Independent Certification of Responsible AI Launches” (Apr. 12, 2023).