Model AI Governance Framework for Generative AI
Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering accountability, data, trusted development, incident reporting, testing, security, content provenance, safety research, and AI for public good.
Official source: aiverifyfoundation.sg
- Published: May 30, 2024
- Last verified: Apr 4, 2026
- Agency: AI Verify Foundation
Why this matters
Singapore has positioned itself as the Asia-Pacific clearinghouse for practical AI governance, and this framework is a key reason why. Published by IMDA and the AI Verify Foundation, it extends the earlier Model AI Governance Framework specifically to generative AI across nine dimensions: accountability, data, trusted development, incident reporting, testing, security, content provenance, safety research, and AI for public good. Multinationals with a Singapore footprint routinely adopt it as their internal baseline because it maps cleanly onto the EU AI Act and the NIST AI RMF. Reading the official source gives you the current wording that compliance teams in the region are working against.
At a glance
- Type: Government
- Country: Singapore
- Agency: AI Verify Foundation
- Published: May 30, 2024
- Last verified: Apr 4, 2026
- Permalink: https://airesourcezone.com/resources/singapore-model-ai-governance-genai
Ready to read it at the source? Visit aiverifyfoundation.sg →
Related resources
Other resources that share at least one topic with this one.
- NIST AI Risk Management Framework (AI RMF 1.0) (Government, United States)
  Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure,...
  Source: National Institute of Standards and Technology (NIST) — Artificial Intelligence
- AI Guidelines for Business (Ver. 1.0) (Government, Japan)
  Joint METI and MIC guidelines consolidating earlier Japanese AI R&D, utilization, and governance texts into a single living document setting...
- CNIL Recommendations on AI and the GDPR (Government, France)
  CNIL's set of recommendations helping AI developers apply the GDPR through the AI lifecycle, covering purpose definition, legal basis, train...
  Source: Commission Nationale de l'Informatique et des Libertés (CNIL)
- ICO Guidance on AI and Data Protection (Government, United Kingdom)
  ICO hub of guidance explaining how UK GDPR principles apply to AI systems, covering fairness, lawfulness, transparency, accountability, and...
  Source: Information Commissioner's Office (ICO)
- Blueprint for an AI Bill of Rights (Government, United States)
  White House OSTP non-binding framework setting out five principles for the design and use of automated systems: safe and effective systems,...
- Australia's AI Ethics Principles (Government, Australia)
  Eight voluntary principles — including human-centred values, fairness, privacy, reliability, transparency, contestability, and accountabilit...
  Source: Department of Industry, Science and Resources — National AI Centre