Research Programme on AI, Government and Policy

Oxford Internet Institute research programme (Oct 2021 to Oct 2026) focused on how AI intersects with governance — legislative responses like the EU AI Act, AI safety and security, regulation of surveillance and AI-enabled weapons, and how AI exacerbates discrimination and misinformation.


Official source: oii.ox.ac.uk
Published: Oct 1, 2021
Last verified: Feb 21, 2026

Editorial summary

Oxford Internet Institute research programme on AI, government and policy, running October 2021 to October 2026 and funded by the Dieter Schwarz Foundation. Its projects span legislative responses such as the EU AI Act, AI safety and security, regulation of surveillance and AI-enabled weapons, and the ways AI exacerbates discrimination and misinformation.

Why this matters

The Oxford Internet Institute is one of the world's leading academic centres for studying how the internet and AI intersect with public life, and this five-year research programme focuses specifically on AI in government and policy. Its projects cover the EU AI Act, AI safety and security, the regulation of surveillance and AI-enabled weapons, and how AI can exacerbate discrimination and misinformation. The programme page offers a reliable map of the academic work that regulators read before drafting the rules the rest of us will eventually follow, and it often surfaces publications before they appear in the general press.


At a glance

Type: Research
Country: United Kingdom
Agency:
Published: Oct 1, 2021
Last verified: Feb 21, 2026
Permalink: https://airesourcezone.com/resources/oii-research-programme-ai-government-policy


Related resources

Other resources that share at least one topic with this one.

  • Research

    The 2026 AI Index Report

    Stanford HAI's flagship annual benchmarking report — nine chapters covering research output, technical performance, responsible AI, economic...

  • Government Germany

    EU Artificial Intelligence Act (Regulation 2024/1689)

    The world's first comprehensive AI law, taking a risk-based approach that prohibits certain practices and imposes strict obligations on high-risk...

  • Consumer

    Trustworthy Artificial Intelligence

    Mozilla Foundation consumer-facing explainer for what trustworthy AI should mean in practice, organised around five values — privacy, fairness...

  • Government Singapore

    Model AI Governance Framework for Generative AI

    Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering accountability...

    Source: AI Verify Foundation

  • Government United Kingdom

    A pro-innovation approach to AI regulation (UK AI White Paper)

    UK government white paper proposing a context-based, principles-led approach to AI regulation delivered through existing sectoral regulators...

  • Government United States

    NIST AI Risk Management Framework (AI RMF 1.0)

    Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure, and Manage functions.

    Source: National Institute of Standards and Technology (NIST) — Artificial Intelligence