Methodology

How country scores are constructed, what the numbers mean, and what this tool can and cannot tell you.

What this is

The AI Regulation Map visualizes the current state of AI governance in 196 countries across six dimensions, each scored 1–5. It is intended as a reference view for policy researchers, academics, and civil society: a place to form a comparable mental model of how different jurisdictions approach AI, and a jumping-off point to the primary sources behind each score.

Epistemic caveat. Scores are inferred by Claude from public sources and refreshed monthly. They are not produced by human coders and have not been independently audited. Treat them the way you would treat a well-read colleague's summary: useful for orientation, worth verifying before citation.

The six dimensions

Each country is scored on six dimensions. Five of them describe how regulation operates; the sixth (average score) is their mean, shown by default.

Regulation Status

The existence and maturity of AI-specific regulation — whether binding law, a draft bill, a national strategy, or nothing at all.

1: No regulation or minimal engagement; AI is not named in policy documents. Example: many least-developed countries with no ICT ministry position on AI.
2: Early-stage engagement: voluntary guidelines, a sectoral code of practice, or an advisory committee. Example: a national ethics advisory board with non-binding recommendations.
3: National strategy or draft legislation in progress; public consultation underway. Example: a published national AI strategy without enabling legislation.
4: Active, enacted AI regulation covering a substantial slice of deployment contexts. Example: a jurisdiction with binding sector-specific AI rules (finance, health).
5: Comprehensive, binding, cross-sector AI regulation with explicit enforcement mechanisms. Example: the EU under the AI Act.

Policy Lever

The breadth of policy instruments in use — from narrow, sector-specific tools to broad, cross-cutting frameworks.

1: Narrow: one tool, one sector, or indirect leverage only (e.g. data protection law being stretched to cover AI).
3: Mixed: several instruments — standards, procurement guidance, R&D funding, some sectoral rules.
5: Broad: a horizontal regulatory framework plus sectoral adaptations, public investment, and compliance infrastructure.

Governance Type

Where authority for AI regulation sits — concentrated in a single body vs. distributed across agencies, sectors, and levels of government.

1: Centralized: a single national authority sets and enforces policy.
3: Hybrid: a lead body coordinates with sectoral regulators or sub-national jurisdictions.
5: Distributed: authority is spread across independent regulators, courts, and sub-national governments; coordination is emergent rather than imposed.

Actor Involvement

Which actors shape AI policy — narrow expert circles vs. broad engagement with industry, civil society, academia, and the public.

1: Limited: policy is set inside government with minimal external input.
3: Consultative: published consultations, industry working groups, some academic input.
5: Broad: structured multi-stakeholder processes including civil society, trade unions, and international partners.

Enforcement Level

How strictly existing rules are enforced — from rules on paper only, to active supervision with penalties.

1: No enforcement mechanism; obligations are not tied to any authority.
3: Soft enforcement: oversight bodies exist but audits and penalties are rare.
5: Active enforcement: penalties have been issued, audits are routine, and a dedicated authority publishes enforcement actions.

Average Score

The arithmetic mean of the five substantive dimensions above. Shown by default on the map because it gives the fastest cross-country read, but the individual dimensions are where the interesting signal lives.
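The averaging itself is just an unweighted arithmetic mean. A minimal sketch, with hypothetical field names for the five dimensions (the real dataset's keys may differ):

```python
def average_score(scores: dict[str, int]) -> float:
    """Arithmetic mean of the five substantive dimension scores."""
    assert len(scores) == 5, "expected exactly five dimensions"
    return sum(scores.values()) / len(scores)

# Hypothetical record, for illustration only -- not real data.
example = {
    "regulation_status": 4,
    "policy_lever": 3,
    "governance_type": 3,
    "actor_involvement": 5,
    "enforcement_level": 3,
}
print(average_score(example))  # 3.6
```

Note how the mean flattens the profile: this hypothetical country scores 3.6 overall despite ranging from 3 to 5 across dimensions, which is exactly why the individual dimensions carry the more interesting signal.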

Confidence levels

Each country record carries a confidence label that reflects how much public, primary evidence was available to the research pass.

Confidence is currently assigned at the record level (per country), not per individual dimension. We consider per-field confidence a future enhancement.

Data update cadence

A GitHub Action runs on the 1st of each month and researches any country whose record is older than a staleness threshold or flagged as low confidence. Manual overrides (single-country re-research, forced refresh of the whole dataset) are possible via scripts/update_data.py.
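The selection rule the monthly run applies can be sketched as follows. The threshold value and field names here are assumptions for illustration; the actual logic lives in scripts/update_data.py:

```python
from datetime import date

# Hypothetical threshold -- the real value is set in scripts/update_data.py.
STALENESS_DAYS = 90

def needs_research(last_updated: date, confidence: str, today: date) -> bool:
    """A country is re-researched if its record is stale or flagged low confidence."""
    stale = (today - last_updated).days > STALENESS_DAYS
    return stale or confidence == "low"

print(needs_research(date(2025, 1, 1), "high", date(2025, 6, 1)))   # True: record is stale
print(needs_research(date(2025, 5, 20), "low", date(2025, 6, 1)))   # True: low confidence
print(needs_research(date(2025, 5, 20), "high", date(2025, 6, 1)))  # False: fresh and confident
```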

Every month's run is preserved as a snapshot in public/history.json, which drives the timeline slider on the main page — so the "score at date X" view is always reproducible from the raw data.

Known limitations

Scores are model-inferred from public sources, not human-coded, and can lag real-world developments between monthly refreshes. Confidence is recorded per country rather than per dimension, so a high-confidence record can still contain individually weaker scores. And the average score compresses five distinct dimensions into one number, which can mask large differences between them.
Citing this site

Every view on the site — a single country, a comparison set, a historic date, a specific score dimension — has a stable permalink encoded in the URL query string. Include that permalink in your citation so readers can reproduce the exact view.
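Because the view state lives entirely in the query string, a permalink round-trips cleanly through standard URL tooling. The parameter names below are hypothetical; copy the real permalink from the browser's address bar rather than constructing it by hand:

```python
from urllib.parse import urlencode, parse_qs

# Hypothetical view state and parameter names, for illustration only.
view = {"country": "DE", "dimension": "enforcement", "date": "2025-05-01"}
permalink = "https://example.org/?" + urlencode(view)

# Recover the view state from the query string.
restored = {k: v[0] for k, v in parse_qs(permalink.split("?", 1)[1]).items()}
print(restored == view)  # True
```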

The in-app "Cite" button on any country panel generates a suggested citation string for the current view and copies it to your clipboard.

Source code and contact

The site is open source at github.com/riadeane/airegulationmap. Issues and pull requests welcome. Made by Ria Deane.