Model cards — Google DeepMind
Directory of structured model cards for Google DeepMind systems (Gemini, Gemma, Imagen, Veo, Lyria) giving a consistent, skimmable summary of how each model was designed and evaluated.
Official source
deepmind.google
- Published
- Nov 1, 2024
- Last verified
- Mar 24, 2026
Editorial summary
The directory gives each Google DeepMind model (Gemini, Gemma, Imagen, Veo, Lyria) a consistent, skimmable summary of how it was designed and evaluated. It is useful as a transparency-checking tool for consumers and researchers comparing frontier models, and it replaces the retired modelcards.withgoogle.com.
Why this matters
Model cards are short, structured summaries that document how an AI model was designed, trained, and evaluated — the nearest thing the industry has to a drug label. Google DeepMind's directory gathers cards for Gemini, Gemma, Imagen, Veo, and Lyria in one place, using a consistent format that makes them genuinely skimmable. For consumers and researchers comparing frontier models, this is a useful transparency-checking tool, especially because it replaces the retired modelcards.withgoogle.com site that many older references still point to. Reading the cards directly is how you verify claims that a vendor or a third-party benchmark makes about a specific Google model.
At a glance
- Type
- Tool
- Country
- Global / multi-national
- Agency
- —
- Permalink
- https://airesourcezone.com/resources/deepmind-model-cards
Related resources
Other resources that share at least one topic with this one.
- Tool
Welcome to the Artificial Intelligence Incident Database
Searchable, open repository of real-world AI harms and near-misses — autonomous-vehicle fatalities, facial-recognition misidentifications, d...
- Consumer
Trustworthy Artificial Intelligence
Mozilla Foundation consumer-facing explainer for what trustworthy AI should mean in practice, organised around five values — privacy, fairne...
- Government · Singapore
Model AI Governance Framework for Generative AI
Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering acco...
Source: AI Verify Foundation
- Government · United States
NIST AI Risk Management Framework (AI RMF 1.0)
Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure,...
Source: National Institute of Standards and Technology (NIST) — Artificial Intelligence
- Research · United Kingdom
Research Programme on AI, Government and Policy
Oxford Internet Institute research programme (Oct 2021 to Oct 2026) focused on how AI intersects with governance — legislative responses lik...
- Government · Japan
AI Guidelines for Business (Ver. 1.0)
Joint METI and MIC guidelines consolidating earlier Japanese AI R&D, utilization, and governance texts into a single living document setting...