Australia's AI Ethics Principles

Eight voluntary principles — including human-centred values, fairness, privacy, reliability, transparency, contestability, and accountability — published by the Department of Industry, Science and Resources to guide safe and responsible AI.


Official source

industry.gov.au

Published
Nov 7, 2019
Last verified
Mar 11, 2026

Why this matters

Australia does not yet have a dedicated AI law, so these eight voluntary principles — human-centred values, fairness, privacy, reliability, transparency, contestability, and accountability among them — are what federal agencies, state governments, and many large Australian employers cite when drafting internal AI policies. They are deliberately high-level, which cuts both ways: easy to claim adherence to, harder to enforce. The official page is worth reading because its phrasing is copied into procurement clauses and codes of conduct across the country, and the accompanying case studies show what the principles look like in practice.

At a glance

Type
Government
Country
Australia
Published
Nov 7, 2019
Last verified
Mar 11, 2026
Permalink
https://airesourcezone.com/resources/australia-ai-ethics-principles

Ready to read it at the source? Visit industry.gov.au →

Related resources

Other resources that share at least one topic with this one.

  • Government Singapore

    Model AI Governance Framework for Generative AI

    Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering acco...

    Source: AI Verify Foundation

  • Government United States

    NIST AI Risk Management Framework (AI RMF 1.0)

    Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure,...

    Source: National Institute of Standards and Technology (NIST) — Artificial Intelligence

  • Government Japan

    AI Guidelines for Business (Ver. 1.0)

    Joint METI and MIC guidelines consolidating earlier Japanese AI R&D, utilization, and governance texts into a single living document setting...

  • Government France

    CNIL Recommendations on AI and the GDPR

    CNIL's set of recommendations helping AI developers apply the GDPR through the AI lifecycle, covering purpose definition, legal basis, train...

    Source: Commission Nationale de l'Informatique et des Libertés (CNIL)

  • Government United Kingdom

    ICO Guidance on AI and Data Protection

    ICO hub of guidance explaining how UK GDPR principles apply to AI systems, covering fairness, lawfulness, transparency, accountability, and...

    Source: Information Commissioner's Office (ICO)

  • Government United States

    Blueprint for an AI Bill of Rights

    White House OSTP non-binding framework setting out five principles for the design and use of automated systems: safe and effective systems,...