Safety Declining -15.8%

Open-source safety tooling stalls on funding

First observed Jan 9, 2026 · last updated Apr 6, 2026 · Brazil


Summary

Several widely used open-source evaluation and interpretability libraries report flat or declining maintainer budgets despite growing downstream adoption. The gap risks leaving safety infrastructure dependent on volunteer labor even as regulators begin to cite those tools in guidance.

Facts on record: 6 observed sources contribute to this signal.

Related articles

  • NIST AI Risk Management Framework explainer

    The NIST AI RMF offers a voluntary map for identifying, measuring, and managing risks in AI systems across their life cycle.

    AI Resource Zone Admin · Apr 10, 2026

  • UK AI Safety Institute, what it does

    Britain stood up a public body to test frontier models before deployment. Its remit, methods, and limits are worth understanding.

    AI Resource Zone Admin · Apr 5, 2026

  • Deepfakes and election integrity, research summary

    Research on synthetic media in elections is still catching up with the technology. Early findings are more mixed than headlines suggest.

    AI Resource Zone Admin · Apr 1, 2026

  • Why AI literacy matters for older adults

    Older adults face distinct risks and opportunities as AI moves into banking, health, and communication tools they already use.

    AI Resource Zone Admin · Mar 25, 2026

  • How to read an AI incident report

    Incident reports are becoming standard. Knowing how to read them helps consumers tell a minor bug from a serious failure.

    AI Resource Zone Admin · Mar 20, 2026

  • Hiring tools and bias, regulatory posture

    Automated hiring tools face growing rules on bias audits and candidate disclosure. The map is uneven but becoming clearer.

    AI Resource Zone Admin · Mar 10, 2026
