Post‑Labor Economics Series • Policy Brief • July 2025

Executive Snapshot

2025 marks the first year in which an AI system banned on one continent can legally operate on another.

The EU AI Act began prohibiting “unacceptable‑risk” uses of AI on 2 February 2025, and its transparency duties for general‑purpose models such as GPT‑4o take effect on 2 August 2025 [1].

Meanwhile, the United States still relies on executive orders, agency guidance and state bills rather than a federal statute [2], and China’s Interim Measures for Generative AI focus on social‑stability filters and state oversight [4].

Canada, Brazil, Singapore and others are writing their own rules – but not the same rules [5][6][7].

Question: Can the world afford a regulatory patchwork when AI models—and the labour shocks they trigger—cross borders at the speed of an API call?

Answer: Only if we build a Bretton Woods‑style framework that aligns safety, trade and labour safeguards across jurisdictions.

This brief maps today’s regulatory landscape, pinpoints flashpoints of conflict, and proposes a three‑pillar Global AI Framework that policymakers and tech executives can start shaping now.

1 | The Emerging Patchwork: Five Regulatory Archetypes

Each archetype below is summarised by core approach, flagship instrument and labour‑impact hooks.

European Union – precautionary, rights‑driven, tiered by risk
  • Flagship instrument: the EU AI Act, which bans “unacceptable‑risk” uses (social scoring, predictive policing), imposes heavy duties on high‑risk workplace AI and sets transparency requirements for general‑purpose models [1].
  • Labour‑impact hooks: mandatory fundamental‑rights impact assessments include workforce effects; algorithmic management tools are treated as high risk.

United States – innovation‑first, sectoral and executive‑order driven
  • Flagship instruments: AI Executive Orders (Oct 2023; Jan 2025) plus the NIST AI Risk Management Framework, draft Senate bills and 25+ state laws [2].
  • Labour‑impact hooks: federal guidance urges agencies to evaluate labour displacement but imposes no binding standard; Congress is debating a 10‑year moratorium on state AI laws [3].

China – state‑control / social‑stability lens
  • Flagship instrument: the Interim Measures on Generative AI (2023, amended 2024) [4].
  • Labour‑impact hooks: providers must ensure outputs uphold socialist values and must label synthetic content; labour issues are framed mainly as a matter of social stability.

Middle powers (Canada, Brazil, Singapore) – EU‑inspired risk model with domestic tweaks
  • Flagship instruments: Canada’s Bill C‑27 (AIDA), in third reading in 2025 [5]; Brazil’s PL 2338/2023, passed by the Senate in December 2024 [6]; Singapore’s AI Verify voluntary testing regime [7].
  • Labour‑impact hooks: Canada’s AIDA requires impact assessments including “effects on workers”; Brazil’s bill adopts EU‑style risk tiers, with enforcement details TBD.

Multilateral soft law – principles and fora
  • Flagship instruments: OECD AI Principles (updated 2024) [8]; UNESCO Ethics Recommendation; GPAI; the G7 Hiroshima process and the Seoul AI Summit.
  • Labour‑impact hooks: non‑binding; these encourage governments to consider labour share, inclusion and human‑rights impacts.

Friction Warning

  • Data‑flow barriers: EU’s data‑export rules + U.S. state privacy patchwork risk fragmenting model‑training pipelines.
  • Market access asymmetry: A hiring algorithm legal in Texas could be banned in France—firms face regulatory forum‑shopping or multi‑model compliance stacks.
  • Subsidy & tariff disputes: If one bloc taxes AI‑driven productivity while another subsidises it, WTO disputes loom.

2 | Lessons from Past Tech Treaties

Each precedent below is paired with what it offers a future AI accord.

  • Bretton Woods (1944), the fixed‑rate currency regime: align incentives via shared metrics (e.g., “AI Risk Tiers”) plus pooled safety infrastructure (testing institutes).
  • Nuclear Non‑Proliferation Treaty (1968): a dual‑track bargain of peaceful tech access in exchange for inspections, which echoes the idea of AI audit privileges for signatories.
  • Montreal Protocol (1987), the phased CFC ban: a time‑staged sunset of “unacceptable‑risk” AI practices, plus a tech‑transfer fund for developing nations.
  • Basel III (post‑2008) financial rules: globally agreed stress‑testing scenarios could map to AI catastrophe‑risk evaluations.

These precedents show that global coordination is possible when nations perceive shared existential risks and economic upside.

3 | Blueprint for a Global AI Framework (G‑AIF)

Pillar 1 — Risk & Safety Alignment

  • Common Risk Taxonomy (build on EU’s four‑tier model).
  • Distributed AI Safety Institutes share red‑team findings; ten democracies plus the EU already pledged cooperation after the Seoul AI Summit in 2024 [9].
  • AI Incident Reporting Exchange for near‑miss harm events, modelled on aviation safety reporting; a schematic record format is sketched below.
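
To make Pillar 1 concrete, here is a minimal sketch in Python of what a shared incident record might look like, assuming a four‑tier taxonomy that mirrors the EU model. Every field name is an illustrative assumption, not a published G‑AIF schema.

```python
# A minimal sketch of a shared incident record for the proposed exchange.
# All field names are illustrative assumptions, not an agreed G-AIF schema.
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class RiskTier(Enum):
    """Four tiers mirroring the EU AI Act's risk model (Pillar 1 taxonomy)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class IncidentReport:
    """One near-miss or harm event, filed to the exchange."""
    system_id: str          # identifier of the AI system involved
    risk_tier: RiskTier     # tier under the common taxonomy
    jurisdiction: str       # ISO country code where the event occurred
    harm_category: str      # e.g. "employment", "safety", "privacy"
    near_miss: bool         # True if the harm was averted
    occurred_at: datetime
    summary: str            # free-text description for red-team review


report = IncidentReport(
    system_id="hiring-screener-v3",
    risk_tier=RiskTier.HIGH,
    jurisdiction="FR",
    harm_category="employment",
    near_miss=True,
    occurred_at=datetime(2025, 6, 12),
    summary="Screening model downgraded applicants returning from "
            "parental leave; caught in an internal audit.",
)
```

The aviation analogy is the design point: like near‑miss reports in air safety, records are structured, blame‑free and pooled so that every signatory’s red teams learn from each incident.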

Pillar 2 — Fair Labour Transition

  • AI Labour‑Impact Registry: firms above a size threshold file annual disclosures on task automation, wage effects and retraining budgets. The data feeds an OECD dashboard that monitors labour‑share shifts.
  • Just Transition Fund: a 0.5 % levy on frontier‑model compute spend finances re‑skilling and guaranteed‑income pilots in lower‑income states; the arithmetic is sketched after this list.
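
The levy arithmetic is simple enough to sanity‑check in a few lines. The sketch below applies the brief’s 0.5 % rate to hypothetical compute budgets; the spend figures are invented for illustration.

```python
# Back-of-envelope arithmetic for the proposed 0.5 % compute levy.
# Only the 0.5 % rate comes from the brief; the spend figures are assumed.

LEVY_RATE = 0.005  # 0.5 % of frontier-model compute spend


def just_transition_levy(compute_spend_usd: float) -> float:
    """Annual contribution owed to the Just Transition Fund."""
    return compute_spend_usd * LEVY_RATE


# Illustrative frontier-lab training budgets (hypothetical, not reported).
labs = {"lab_a": 2_000_000_000, "lab_b": 750_000_000, "lab_c": 120_000_000}

for name, spend in labs.items():
    print(f"{name}: ${just_transition_levy(spend):,.0f}")
# lab_a: $10,000,000
# lab_b: $3,750,000
# lab_c: $600,000
```

Even at this modest rate, a $2 bn frontier training budget would contribute $10 m a year – small against any single lab’s budget, but material when pooled across the industry.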

Pillar 3 — Trusted Cross‑Border Data & Trade

  • Regulatory Equivalence Passports: systems certified in one adherent country gain fast‑track access to the others if they meet baseline safety and labour criteria – a passport check is sketched below.
  • AI Standards Exchange: proposed by a UN advisory body in September 2024, it would enable rapid harmonisation of technical norms [10].
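
A minimal sketch of how a passport check might work, assuming hypothetical adherent jurisdictions and criteria names; none of these identifiers come from an agreed text.

```python
# Sketch of the passport logic: a system certified in one adherent
# jurisdiction gains fast-track access elsewhere if it clears shared
# baselines. Criteria names and the adherent list are assumptions.
from dataclasses import dataclass


@dataclass
class Certification:
    issued_by: str                 # jurisdiction that certified the system
    risk_tier: str                 # tier under the common taxonomy
    safety_eval_passed: bool       # baseline red-team / stress-test result
    labour_disclosure_filed: bool  # registry filing under Pillar 2


ADHERENTS = {"EU", "CA", "SG", "BR"}  # hypothetical early signatories


def passport_valid(cert: Certification) -> bool:
    """Fast-track market access if baseline safety + labour criteria hold."""
    return (
        cert.issued_by in ADHERENTS
        and cert.risk_tier != "unacceptable"
        and cert.safety_eval_passed
        and cert.labour_disclosure_filed
    )


cert = Certification("CA", "high", True, True)
print(passport_valid(cert))  # True: certified once, accepted across adherents
```

The design choice mirrors mutual‑recognition regimes in trade law: certify once against the shared baseline, rather than re‑certifying in every market.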

4 | Policy Choices for 2025‑26

Each decision point below lists the likely stance of the EU, the U.S. and China, plus what executives should do in the meantime.

Adopt G‑AIF “Risk Taxonomy v1”
  • EU: already aligned via the AI Act; expect a push for global uptake.
  • U.S.: must decide whether to federalise risk tiers or maintain the sectoral patchwork.
  • China: could endorse the taxonomy at a UN AI forum while retaining sovereignty over enforcement.
  • Executives: build compliance to the strictest tier now to avoid retrofit costs.

Labour‑Impact Registry
  • EU: high chance of adoption, given ties to the European Pillar of Social Rights.
  • U.S.: politically fraught; unions are in favour, the Chamber of Commerce opposed.
  • China: may reject, citing proprietary data.
  • Executives: voluntary disclosure shows leadership and mitigates supply‑chain scrutiny.

Just Transition Fund
  • EU: supportive; the carbon levy offers a precedent.
  • U.S.: questionable in an anti‑tax climate.
  • China: might back south‑south funding to gain soft power.
  • Executives: budget for the 0.5 % compute levy in long‑range plans if consensus emerges.

5 | What If We Fail to Coordinate?

  1. Race‑to‑the‑Bottom Regulation – firms relocate AI labs to weakest jurisdictions, undermining safety standards.
  2. Trade Fragmentation – digital‑services tariffs or data‑localisation walls slice global AI markets into incompatible blocs.
  3. Shadow Supply Chains – exploitative gig‑work and label‑laundering flourish where labour rules are lax.
  4. Geostrategic Tension – dual‑use frontier models drive arms‑race dynamics absent verification protocols.

History shows coordination is cheaper than crisis management. The Bretton Woods architects understood this in 1944. AI’s cross‑border labour shock requires an equally bold design.

6 | Action Agenda for 2025

For Policymakers (EU & U.S. priority)

  • Convene a “Paris Charter on AI & Work” at the 2025 UNESCO Ethics Forum, embedding labour safeguards into G‑AIF.
  • Mandate mutual audit rights for high‑risk AI exported into each other’s markets—think “Schrems II” but for algorithms.
  • Seed the Just Transition Fund with $5 bn from existing digital‑service taxes (EU) and CHIPS Act residuals (U.S.).

For Tech Executives

  • Map your model portfolio against EU risk tiers—even if not operating in Europe.
  • Pre‑register automation roadmaps with labour agencies to shape forthcoming disclosure rules.
  • Join the AI Standards Exchange pilot to influence interoperable safety testing methods.

7 | Conclusion—A Bretton Woods Moment

The first half of 2025 proved that national AI laws can—and will—clash. The question is whether leaders seize the next eighteen months to craft a Global AI Framework that makes safe, inclusive automation the default rather than the exception.

A world without common digital rails risks repeating the competitive currency chaos of the 1930s – only this time it is jobs, not just money, on the line.

I invite regulators, firms and civil‑society researchers to co‑draft G‑AIF v0.1.

Subscribe at thorstenmeyerai.com/newsletter to receive the consultation draft and contribute use‑cases or data.

End‑Note Citations

  1. European Parliament. “EU AI Act: First Regulation on Artificial Intelligence.” Feb 2025.  
  2. White House. “Executive Order on Advancing U.S. Leadership in AI Infrastructure.” Jan 2025.  
  3. DLA Piper. “Ten‑Year Moratorium on AI Regulation Proposed in U.S. Congress.” May 2025.  
  4. China Cyberspace Administration. “Interim Measures for Generative AI Services (amended 2024).”  
  5. Parliament of Canada. Bill C‑27 (Artificial Intelligence and Data Act) – Status June 2025.  
  6. White & Case. “AI Watch: Brazil Regulatory Tracker.” Jun 2025.  
  7. AI Verify Foundation. “Building Trustworthy AI – Press Release ATxSG 2025.”  
  8. OECD. “Principles for Trustworthy AI (2024 update).”  
  9. IFOW. “Reflections on the Seoul AI Summit: A Prelude to Paris.” Nov 2024.  
  10. Reuters. “UN Advisory Body Makes Seven Recommendations for Governing AI.” Sep 2024.  
