Post‑Labor Economics Series • Policy Brief • July 2025
Executive Snapshot
2025 marks the first year in which an AI system banned on one continent can legally operate on another.
The EU AI Act began prohibiting “unacceptable‑risk” uses of AI on 2 February 2025 and will impose transparency duties on general‑purpose models such as GPT‑4o from August 2025.
Meanwhile, the United States is still relying on executive orders, agency guidance and state bills rather than a federal statute, and China’s Interim Measures for Generative AI focus on social‑stability filters and state oversight.
Canada, Brazil, Singapore and others are writing their own rules, but not the same rules.
Question: Can the world afford a regulatory patchwork when AI models—and the labour shocks they trigger—cross borders at the speed of an API call?
Answer: Only if we build a Bretton Woods‑style framework that aligns safety, trade and labour safeguards across jurisdictions.
This brief maps today’s regulatory landscape, pinpoints flashpoints of conflict, and proposes a three‑pillar Global AI Framework that policymakers and tech executives can start shaping now.
1 | The Emerging Patchwork: Five Regulatory Archetypes
| Region | Core Approach | Flagship Instrument | Labour‑Impact Hooks |
| --- | --- | --- | --- |
| European Union | Precautionary, rights‑driven, tiered by risk | EU AI Act – bans “unacceptable‑risk” uses (social scoring, predictive policing); heavy duties on high‑risk workplace AI; transparency for general‑purpose models | Mandatory fundamental‑rights impact assessments include workforce effects; algorithmic management tools treated as high risk. |
| United States | Innovation‑first, sectoral & executive‑order driven | AI Executive Orders (Oct 2023; Jan 2025) + NIST RMF; draft Senate bills; 25+ state laws | Federal guidance urges agencies to evaluate labour displacement but imposes no binding standard; Congress debating a 10‑year moratorium on state AI laws. |
| China | State‑control / social‑stability lens | Interim Measures on Generative AI (2023, amended 2024) | Providers must ensure outputs uphold socialist values; must label synthetic content; labour issues framed mainly as social stability. |
| Middle‑powers (Canada, Brazil, Singapore) | EU‑inspired risk model + domestic tweaks | Canada’s Bill C‑27 (AIDA) stalled when Parliament was prorogued in Jan 2025; Brazil’s PL 2338/2023 passed the Senate in Dec 2024; Singapore’s AI Verify voluntary testing regime | Canada’s AIDA would require impact assessments incl. “effects on workers.” Brazil’s bill adopts EU risk tiers; enforcement details TBD. |
| Multilateral Soft‑Law | Principles & fora | OECD AI Principles (updated 2024); UNESCO Ethics Rec.; GPAI; G7 Hiroshima & Seoul AI Summits | Non‑binding; encourage governments to consider labour share, inclusion and human‑rights impacts. |
Friction Warning
- Data‑flow barriers: EU’s data‑export rules + U.S. state privacy patchwork risk fragmenting model‑training pipelines.
- Market access asymmetry: A hiring algorithm legal in Texas could be banned in France—firms face regulatory forum‑shopping or multi‑model compliance stacks.
- Subsidy & tariff disputes: If one bloc taxes AI‑driven productivity while another subsidises it, WTO disputes loom.
2 | Lessons from Past Tech Treaties
| Precedent | What We Can Re‑Use for AI |
| --- | --- |
| Bretton Woods (1944) – fixed‑rate currency regime | Align incentives via shared metrics (e.g., “AI Risk Tiers”) + pooled safety infrastructure (testing institutes). |
| Nuclear Non‑Proliferation Treaty (1968) | Dual‑track bargain: peaceful tech access in exchange for inspections; echoes idea of AI audit privileges for signatories. |
| Montreal Protocol (1987) – phased CFC ban | Time‑staged sunset of “unacceptable‑risk” AI practices, plus tech‑transfer fund for developing nations. |
| Basel III (post‑2008) financial rules | Globally agreed stress‑testing scenarios could map to AI catastrophe‑risk evaluations. |
These precedents show that global coordination is possible when nations perceive both shared existential risk and shared economic upside.
3 | Blueprint for a Global AI Framework (G‑AIF)
Pillar 1 — Risk & Safety Alignment
- Common Risk Taxonomy (build on EU’s four‑tier model).
- Distributed AI Safety Institutes share red‑team findings; 10 democracies + the EU already pledged cooperation after the Seoul AI Summit 2024.
- AI Incident Reporting Exchange for near‑miss harm events (modelled on aviation); a minimal record sketch follows below.
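To make the exchange concrete, here is one shape a shared incident record could take. This is a minimal sketch in Python; the `IncidentReport` fields are illustrative assumptions rather than any published standard, and only the four risk tiers echo the EU AI Act taxonomy referenced above.

```python
# Hypothetical record schema for the proposed AI Incident Reporting Exchange.
# Field names are illustrative assumptions; the four tiers mirror the EU AI Act.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring
    HIGH = "high"                  # e.g. workplace / algorithmic management
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no extra obligations


@dataclass
class IncidentReport:
    """One near-miss or harm event, filed fast and shared across regulators."""
    reporting_jurisdiction: str   # ISO country code of the filing regulator
    system_risk_tier: RiskTier
    occurred_on: date
    summary: str                  # de-identified narrative of the event
    harm_averted: bool            # True for near-misses
    corrective_actions: list[str] = field(default_factory=list)


# Example filing: a near-miss caught in a high-risk hiring system.
report = IncidentReport(
    reporting_jurisdiction="FR",
    system_risk_tier=RiskTier.HIGH,
    occurred_on=date(2025, 6, 12),
    summary="Hiring model quietly down-ranked applicants over 55; caught in audit.",
    harm_averted=True,
    corrective_actions=["retrain on age-balanced data", "add bias gate to pipeline"],
)
```

A de‑identified, near‑miss‑friendly record of this kind mirrors aviation practice, where no‑blame reporting is what keeps filing rates high.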
Pillar 2 — Fair Labour Transition
- AI Labour‑Impact Registry: Firms above a size threshold file annual disclosures on task automation, wage effects, retraining budgets. Data feeds an OECD dashboard to monitor labour‑share shifts.
- Just Transition Fund: 0.5 % levy on frontier‑model compute spend finances re‑skilling and guaranteed‑income pilots in lower‑income states (a worked levy calculation follows below).
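The levy arithmetic is worth making explicit: at 0.5 %, every $1 billion of annual frontier compute spend yields $5 million for the fund. A quick sketch, using entirely hypothetical spend figures:

```python
# Back-of-envelope sketch of the proposed 0.5 % compute levy.
# All spend figures are invented placeholders, not estimates of real labs.
LEVY_RATE = 0.005  # 0.5 % of annual frontier-model compute spend

frontier_compute_spend_usd = {
    "lab_a": 2_000_000_000,  # hypothetical $2 bn annual training spend
    "lab_b": 750_000_000,
    "lab_c": 120_000_000,
}

contributions = {lab: spend * LEVY_RATE
                 for lab, spend in frontier_compute_spend_usd.items()}

for lab, amount in contributions.items():
    print(f"{lab}: ${amount:,.0f}")                          # lab_a: $10,000,000 ...
print(f"Fund intake: ${sum(contributions.values()):,.0f}")   # $14,350,000
```

Even at hypothetical frontier‑lab scale the intake is modest relative to global retraining needs, which is why Section 6 pairs the levy with existing digital‑service taxes as seed funding.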
Pillar 3 — Trusted Cross‑Border Data & Trade
- Regulatory Equivalence Passports: Systems certified in one adherent country gain fast‑track access to others if they meet baseline safety + labour criteria (see the sketch after this list).
- AI Standards Exchange (a UN advisory body proposal, Sept 2024) enables rapid harmonisation of technical norms.
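To show how a passport decision could compose those two baselines, here is a minimal sketch in Python. The criteria names, the `fast_track_eligible` function and the signatory set are all assumptions for illustration, not part of any adopted text.

```python
# Minimal sketch of the equivalence-passport check described above.
# Criteria names and the signatory set are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Certification:
    issuing_country: str
    safety_eval_passed: bool       # baseline red-team / stress-test evaluation
    labour_disclosure_filed: bool  # current Labour-Impact Registry filing
    risk_tier: str                 # per the common taxonomy, e.g. "high"


G_AIF_ADHERENTS = {"EU", "CA", "SG", "BR"}  # hypothetical signatory set


def fast_track_eligible(cert: Certification, destination: str) -> bool:
    """Grant fast-track market access only when both baselines are met."""
    return (
        cert.issuing_country in G_AIF_ADHERENTS
        and destination in G_AIF_ADHERENTS
        and cert.safety_eval_passed
        and cert.labour_disclosure_filed
    )


cert = Certification("CA", safety_eval_passed=True,
                     labour_disclosure_filed=True, risk_tier="high")
print(fast_track_eligible(cert, "EU"))  # True: the passport is honoured
```

The point of the sketch is the conjunction: a passport is honoured only when both the safety evaluation and the labour disclosure hold, so neither pillar can be traded off against the other.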
4 | Policy Choices for 2025‑26
| Decision Point | EU | U.S. | China | What Executives Should Do |
| --- | --- | --- | --- | --- |
| Adopt G‑AIF ‘Risk Taxonomy v1’ | Already aligned via AI Act; push for global uptake | Decide whether to federalise risk tiers or maintain sectoral patchwork | Could endorse taxonomy at UN AI forum, retain sovereignty in enforcement | Build compliance to the strictest tier now to avoid retrofit costs. |
| Labour‑Impact Registry | High chance—ties to European Pillar of Social Rights | Politically fraught; unions in favour, Chamber of Commerce opposed | May reject, citing proprietary data | Voluntary disclosure shows leadership; mitigates supply‑chain scrutiny. |
| Just Transition Fund | Supports; carbon levy precedent | Questionable; anti‑tax climate | Might back south‑south funding to gain soft power | Budget for 0.5 % compute levy in long‑range plans if consensus emerges. |
5 | What If We Fail to Coordinate?
- Race‑to‑the‑Bottom Regulation – firms relocate AI labs to weakest jurisdictions, undermining safety standards.
- Trade Fragmentation – digital‑services tariffs or data‑localisation walls slice global AI markets into incompatible blocs.
- Shadow Supply Chains – exploitative gig‑work and label‑laundering flourish where labour rules are lax.
- Geostrategic Tension – dual‑use frontier models drive arms‑race dynamics absent verification protocols.
History shows coordination is cheaper than crisis management. The Bretton Woods architects understood this in 1944. AI’s cross‑border labour shock requires an equally bold design.
6 | Action Agenda for 2025
For Policymakers (EU & U.S. priority)
- Convene a “Paris Charter on AI & Work” at the 2025 UNESCO Ethics Forum, embedding labour safeguards into G‑AIF.
- Mandate mutual audit rights for high‑risk AI exported into each other’s markets—think “Schrems II” but for algorithms.
- Seed the Just Transition Fund with $5 bn from existing digital‑service taxes (EU) and CHIPS Act residuals (U.S.).
For Tech Executives
- Map your model portfolio against EU risk tiers—even if not operating in Europe.
- Pre‑register automation roadmaps with labour agencies to shape forthcoming disclosure rules.
- Join the AI Standards Exchange pilot to influence interoperable safety testing methods.
7 | Conclusion—A Bretton Woods Moment
The first half of 2025 proved that national AI laws can—and will—clash. The question is whether leaders seize the next eighteen months to craft a Global AI Framework that makes safe, inclusive automation the default rather than the exception.
A world without common digital rails risks repeating the competitive currency chaos of the 1930s, only this time it is jobs, not just money, on the line.
I invite regulators, firms and civil‑society researchers to co‑draft G‑AIF v0.1.
Subscribe at thorstenmeyerai.com/newsletter to receive the consultation draft and contribute use‑cases or data.
End‑Note Citations
- European Parliament. “EU AI Act: First Regulation on Artificial Intelligence.” Feb 2025.
- White House. “Executive Order on Advancing U.S. Leadership in AI Infrastructure.” Jan 2025.
- DLA Piper. “Ten‑Year Moratorium on AI Regulation Proposed in U.S. Congress.” May 2025.
- China Cyberspace Administration. “Interim Measures for Generative AI Services (amended 2024).”
- Parliament of Canada. Bill C‑27 (Artificial Intelligence and Data Act) – Status June 2025.
- White & Case. “AI Watch: Brazil Regulatory Tracker.” Jun 2025.
- AI Verify Foundation. “Building Trustworthy AI – Press Release ATxSG 2025.”
- OECD. “Principles for Trustworthy AI (2024 update).”
- IFOW. “Reflections on the Seoul AI Summit: A Prelude to Paris.” Nov 2024.
- Reuters. “UN Advisory Body Makes Seven Recommendations for Governing AI.” Sep 2024.