Ethics & Safety

Responsible AI use, compliance, and risk awareness.

Featured resources

  • Model cards — Google DeepMind

    Directory of structured model cards for Google DeepMind systems (Gemini, Gemma, Imagen, Veo, Lyria) giving a consistent, skimmable summary o...

  • Trustworthy Artificial Intelligence

    Mozilla Foundation consumer-facing explainer for what trustworthy AI should mean in practice, organised around five values — privacy, fairne...

  • Model AI Governance Framework for Generative AI

    AI Verify Foundation

    Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering acco...

  • AI Guidelines for Business (Ver. 1.0)

    Joint METI and MIC guidelines consolidating earlier Japanese AI R&D, utilization, and governance texts into a single living document setting...

  • CNIL Recommendations on AI and the GDPR

    Commission Nationale de l'Informatique et des Libertés (CNIL)

    CNIL's set of recommendations helping AI developers apply the GDPR through the AI lifecycle, covering purpose definition, legal basis, train...

  • Home — AI Now Institute

    AI Now is an independent research institute producing expert analysis of AI in the public interest — landscape reports on AI market concentr...

  • AI and cyber security: what you need to know

    UK National Cyber Security Centre plain-English guide pitched at non-technical readers, board members and small-business owners. Covers AI h...

  • Ai2: Truly Open Breakthrough AI

    The Allen Institute for AI (Ai2) is a Seattle-based nonprofit research institute releasing fully-open language and multimodal models (OLMo,...

  • Generative AI — position statement

    Australia's eSafety Commissioner position statement on generative AI, written for the public and industry. Explains how tools like ChatGPT,...

  • ICO Guidance on AI and Data Protection

    Information Commissioner's Office (ICO)

    ICO hub of guidance explaining how UK GDPR principles apply to AI systems, covering fairness, lawfulness, transparency, accountability, and...

  • NIST AI Risk Management Framework (AI RMF 1.0)

    National Institute of Standards and Technology (NIST) — Artificial Intelligence

    Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure,...

  • Welcome to the Artificial Intelligence Incident Database

    Searchable, open repository of real-world AI harms and near-misses — autonomous-vehicle fatalities, facial-recognition misidentifications, d...
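The NIST AI RMF entry above organizes risk work around four named functions: Govern, Map, Measure, and Manage. As an illustrative sketch only (the register structure, helper names, and example activities below are assumptions; just the four function names come from the framework), a minimal risk register keyed by those functions might look like:

```python
# Minimal risk-register sketch keyed by the NIST AI RMF's four functions.
# The register layout and helper functions are illustrative assumptions;
# only the function names (Govern, Map, Measure, Manage) come from the RMF.

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

def new_register():
    """Return an empty register with one bucket per RMF function."""
    return {fn: [] for fn in RMF_FUNCTIONS}

def log_activity(register, function, description):
    """Record a risk-management activity under one of the four functions."""
    if function not in RMF_FUNCTIONS:
        raise ValueError(f"unknown RMF function: {function!r}")
    register[function].append(description)
    return register

register = new_register()
log_activity(register, "Map", "Inventory all models in production")
log_activity(register, "Measure", "Track error rates by demographic group")
log_activity(register, "Govern", "Assign an accountable owner per AI system")

for fn in RMF_FUNCTIONS:
    print(f"{fn}: {len(register[fn])} activities")
```

The point of the bucketed layout is the one the framework itself makes: a gap under any one function (here, "Manage" is empty) is immediately visible.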

Recent articles

  • NIST AI Risk Management Framework explainer

    The NIST AI RMF offers a voluntary map for identifying, measuring, and managing risks in AI systems across their life cycle.

    AI Resource Zone Admin · Apr 10, 2026

  • UK AI Safety Institute: what it does

    Britain stood up a public body to test frontier models before deployment. Its remit, methods, and limits are worth understanding.

    AI Resource Zone Admin · Apr 5, 2026

  • Deepfakes and election integrity: a research summary

    Research on synthetic media in elections is still catching up with the technology. Early findings are more mixed than headlines suggest.

    AI Resource Zone Admin · Apr 1, 2026

  • Why AI literacy matters for older adults

    Older adults face distinct risks and opportunities as AI moves into banking, health, and communication tools they already use.

    AI Resource Zone Admin · Mar 25, 2026

  • How to read an AI incident report

    Incident reports are becoming standard. Knowing how to read them helps consumers tell a minor bug from a serious failure.

    AI Resource Zone Admin · Mar 20, 2026

  • Hiring tools and bias: the regulatory posture

    Automated hiring tools face growing rules on bias audits and candidate disclosure. The map is uneven but becoming clearer.

    AI Resource Zone Admin · Mar 10, 2026
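The article above on reading AI incident reports turns on knowing which fields to check before judging severity. As a hedged sketch (this schema is hypothetical and not drawn from any particular incident database; every field name and the severity scale are assumptions), a minimal incident record and a reader's quick triage rule might look like:

```python
# Hypothetical minimal schema for an AI incident record. The field names
# and the 1-5 severity scale are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class IncidentRecord:
    system: str      # the AI system involved
    date: str        # when the incident occurred (ISO 8601)
    harm_type: str   # e.g. "misidentification", "financial loss"
    severity: int    # 1 (minor bug) through 5 (serious failure)
    verified: bool   # whether the report was independently confirmed

def needs_attention(record: IncidentRecord) -> bool:
    """A reader's quick triage: verified reports of severity 3+ stand out."""
    return record.verified and record.severity >= 3

minor = IncidentRecord("chatbot-v2", "2026-03-01", "wrong answer", 1, True)
serious = IncidentRecord("face-match", "2026-02-14", "misidentification", 4, True)

print(needs_attention(minor))    # False
print(needs_attention(serious))  # True
```

The two-condition check mirrors the article's framing: separating a minor bug from a serious failure means weighing both how bad the harm was and whether the report has been confirmed.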