
Directive on Automated Decision-Making

Mandatory Treasury Board directive requiring Canadian federal departments to complete an Algorithmic Impact Assessment and apply proportional safeguards when using automated decision systems to render administrative decisions.


Official source

tbs-sct.canada.ca

Published
Mar 4, 2019
Last verified
Mar 6, 2026
Visit official page



Why this matters

This directive is not optional guidance: it is a binding Treasury Board instruction that federal departments in Canada must follow before using an automated system to make administrative decisions about people. It requires a public Algorithmic Impact Assessment and proportional safeguards tied to the system's assessed impact level. For Canadians, this means you have a right to know when a federal benefit, visa, or tax decision involved an automated system, and the assessment behind it is meant to be publicly findable. Reading the directive itself clarifies what departments can and cannot quietly automate.
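The directive's core mechanism, safeguards that scale with an assessed impact level, can be sketched in a few lines. The four impact levels (I through IV) come from the directive itself; the specific mapping of level to safeguards below is a simplified illustration, not a quotation of the directive's actual requirements table.

```python
# Illustrative sketch only: the four impact levels are from the
# Directive on Automated Decision-Making, but this level-to-safeguard
# mapping is a simplified example, not the directive's own appendix.

SAFEGUARDS = {
    1: {"plain-language notice"},
    2: {"plain-language notice", "peer review"},
    3: {"plain-language notice", "peer review",
        "human intervention points"},
    4: {"plain-language notice", "peer review",
        "human intervention points", "final decision by a human"},
}

def required_safeguards(impact_level: int) -> set[str]:
    """Return the cumulative safeguards for an impact level from 1 to 4."""
    if impact_level not in SAFEGUARDS:
        raise ValueError("impact level must be between 1 and 4")
    return SAFEGUARDS[impact_level]
```

The point of the sketch is the proportionality: a low-impact system carries light obligations, while a high-impact one cannot avoid human involvement in the final decision.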

At a glance

Type
Government
Country
Canada
Agency
Treasury Board of Canada Secretariat
Published
Mar 4, 2019
Last verified
Mar 6, 2026
Permalink
https://airesourcezone.com/resources/canada-directive-automated-decision-making

Ready to read it at the source? Visit tbs-sct.canada.ca →

Related resources

Other resources that share at least one topic with this one.

  • Government Singapore

    Model AI Governance Framework for Generative AI

    Framework published by IMDA and the AI Verify Foundation extending Singapore's Model AI Governance Framework to generative AI, covering acco...

    Source: AI Verify Foundation

  • Government United States

    NIST AI Risk Management Framework (AI RMF 1.0)

    Voluntary framework released by NIST to help organizations manage risks across the AI lifecycle, organized around the Govern, Map, Measure,...

    Source: National Institute of Standards and Technology (NIST) — Artificial Intelligence

  • Government Japan

    AI Guidelines for Business (Ver. 1.0)

    Joint METI and MIC guidelines consolidating earlier Japanese AI R&D, utilization, and governance texts into a single living document setting...

  • Government France

    CNIL Recommendations on AI and the GDPR

    CNIL's set of recommendations helping AI developers apply the GDPR through the AI lifecycle, covering purpose definition, legal basis, train...

    Source: Commission Nationale de l'Informatique et des Libertés (CNIL)

  • Government United Kingdom

    ICO Guidance on AI and Data Protection

    ICO hub of guidance explaining how UK GDPR principles apply to AI systems, covering fairness, lawfulness, transparency, accountability, and...

    Source: Information Commissioner's Office (ICO)

  • Government United States

    Blueprint for an AI Bill of Rights

    White House OSTP non-binding framework setting out five principles for the design and use of automated systems: safe and effective systems,...