Generative AI — position statement
Australia's eSafety Commissioner position statement on generative AI, written for the public and industry.
Official source
esafety.gov.au
- Published: Aug 31, 2023
- Last verified: Feb 28, 2026
Editorial summary
Explains how tools like ChatGPT, Gemini and image generators work, documents harms (child-safety risks, deepfake abuse and misinformation) and sets out the Safety-by-Design interventions providers are expected to adopt.
Why this matters
The eSafety Commissioner is Australia's world-first online-safety regulator, with actual power to order takedowns. This position statement lays out how the office thinks about ChatGPT, Gemini, and image generators — how they work, where the harms concentrate (child safety, deepfake abuse, misinformation), and what Safety-by-Design interventions it expects providers to adopt. For Australian parents, teachers, and platform teams, reading the original is the fastest way to understand which specific behaviours the regulator considers out of bounds and what it will ask for if it formally investigates a product or incident.
At a glance
- Type: Consumer
- Country: Australia
- Agency: —
- Published: Aug 31, 2023
- Last verified: Feb 28, 2026
- Permalink: https://airesourcezone.com/resources/esafety-generative-ai-position-statement
Ready to read it at the source? Visit esafety.gov.au →
Related resources
Other resources that share at least one topic with this one.
- Consumer
Trustworthy Artificial Intelligence
Mozilla Foundation consumer-facing explainer for what trustworthy AI should mean in practice, organised around five values — privacy, fairne...
- Consumer, United Kingdom
AI and cyber security: what you need to know
UK National Cyber Security Centre plain-English guide pitched at non-technical readers, board members and small-business owners. Covers AI h...
- Government, United States
Blueprint for an AI Bill of Rights
White House OSTP non-binding framework setting out five principles for the design and use of automated systems: safe and effective systems,...
- Consumer, United States
Operation AI Comply: Detecting AI-infused frauds and deceptions
FTC consumer alert describing the agency's sweep against AI-powered scams — bogus chatbot lawyers, AI-written fake review services and "guar...
- Government, Singapore
Model AI Governance Framework for Generative AI
Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering acco...
Source: AI Verify Foundation
- Government, United States
NIST AI Risk Management Framework (AI RMF 1.0)
Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure,...
Source: National Institute of Standards and Technology (NIST) — Artificial Intelligence