Safety Declining -21.3%

Model transparency disclosures regress at frontier labs

First observed Jan 12, 2026 · last updated Apr 15, 2026


Summary: Independent transparency indexes show the top frontier labs publishing less detail about training data, compute, and evaluation methods than they did a year ago. The backslide complicates third-party review just as governments lean on disclosure as a core oversight mechanism.

Facts on record: 8 observed sources contribute to this signal.

Related articles

  • NIST AI Risk Management Framework explainer

    The NIST AI RMF offers a voluntary map for identifying, measuring, and managing risks in AI systems across their life cycle.

    AI Resource Zone Admin · Apr 10, 2026

  • UK AI Safety Institute, what it does

    Britain stood up a public body to test frontier models before deployment. Its remit, methods, and limits are worth understanding.

    AI Resource Zone Admin · Apr 5, 2026

  • Deepfakes and election integrity, research summary

    Research on synthetic media in elections is still catching up with the technology. Early findings are more mixed than headlines suggest.

    AI Resource Zone Admin · Apr 1, 2026

  • Why AI literacy matters for older adults

    Older adults face distinct risks and opportunities as AI moves into banking, health, and communication tools they already use.

    AI Resource Zone Admin · Mar 25, 2026

  • How to read an AI incident report

    Incident reports are becoming standard. Knowing how to read them helps consumers tell a minor bug from a serious failure.

    AI Resource Zone Admin · Mar 20, 2026

  • Hiring tools and bias, regulatory posture

    Automated hiring tools face growing rules on bias audits and candidate disclosure. The map is uneven but becoming clearer.

    AI Resource Zone Admin · Mar 10, 2026

Related resources