The next five years will be pivotal for artificial intelligence (AI) policy. Governments have begun to translate ethical principles into binding law, and the transition from voluntary guidelines to compulsory regulation will accelerate between 2025 and 2030. The European Union’s AI Act, the United States’ evolving executive orders and state legislation, and China’s rapidly developing rules for generative and surveillance AI form the core of this landscape. At the same time, international cooperation is emerging through declarations and treaties, but divergent approaches may create compliance challenges and innovation bottlenecks. This report summarises the current state of major AI regulatory regimes and uses scenario analysis to project how compliance demands, innovation obstacles and international cooperation could evolve by 2030.

EU AI Act

Risk‑Based Framework and Timeline

The EU AI Act is the world’s first comprehensive AI law. It categorises AI applications into unacceptable risk, high risk, and limited or minimal risk. Unacceptable‑risk practices, such as subliminal manipulation, social scoring and certain predictive policing, are banned (digital-strategy.ec.europa.eu). High‑risk systems include AI used in critical infrastructure, education, employment, essential services, law enforcement and migration control; these must meet strict obligations: comprehensive risk management, use of high‑quality training data, logging for traceability, detailed technical documentation, human oversight, and robust cybersecurity (digital-strategy.ec.europa.eu).
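To make the tiering concrete, here is a minimal sketch of how an organisation might triage an AI inventory against these categories. The tier names follow the Act, but the keyword lists and the classify_risk helper are illustrative simplifications, not a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high risk (Annex III / regulated products)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

# Simplified, non-exhaustive keyword lists; a real classification
# requires legal analysis of the Act and its annexes.
PROHIBITED = {"social scoring", "subliminal manipulation", "predictive policing"}
HIGH_RISK = {"critical infrastructure", "education", "employment",
             "essential services", "law enforcement", "migration"}

def classify_risk(use_case: str) -> RiskTier:
    """Map a described use case to an indicative EU AI Act risk tier."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED):
        return RiskTier.UNACCEPTABLE
    if any(term in text for term in HIGH_RISK):
        return RiskTier.HIGH
    if "chatbot" in text or "deepfake" in text:
        return RiskTier.LIMITED  # transparency obligations apply
    return RiskTier.MINIMAL

print(classify_risk("CV screening for employment"))  # RiskTier.HIGH
```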

The Act entered into force on 1 Aug 2024, with staggered application dates. Prohibitions and AI‑literacy requirements apply from 2 Feb 2025, obligations for general‑purpose AI models and the AI Office governance rules from 2 Aug 2025, and full application (covering most high‑risk systems) from 2 Aug 2026; high‑risk AI embedded in regulated products has a further transition period until 2 Aug 2027 (digital-strategy.ec.europa.eu). This timeline implies that compliance preparation must intensify over the next two years.
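Because the obligations phase in on different dates, even a simple lookup helps compliance planning. The sketch below encodes the milestones above; the data structure and function are our own illustration, not an official tool.

```python
from datetime import date

# Staged application dates of the EU AI Act, per the published timeline.
MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibitions and AI-literacy requirements",
    date(2025, 8, 2): "general-purpose AI obligations; AI Office governance rules",
    date(2026, 8, 2): "full application, including most high-risk systems",
    date(2027, 8, 2): "high-risk AI embedded in regulated products",
}

def obligations_in_force(on: date) -> list[str]:
    """Return the milestones that already apply on a given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= on]

for item in obligations_in_force(date(2025, 9, 1)):
    print("-", item)
```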

The EU AI Act works alongside the General Data Protection Regulation (GDPR), the Digital Operational Resilience Act (DORA) and the Data Act. The Cloud Security Alliance notes that 2024–2025 bring a “major overhaul” of privacy and AI laws: several US states implemented new privacy laws in January 2025, DORA took effect for EU financial services on 17 Jan 2025, and the AI Act’s prohibitions began on 2 Feb 2025 (cloudsecurityalliance.org). Many organisations will need cross‑jurisdictional compliance strategies to navigate this patchwork.

U.S. Executive Orders and State Regulation

Executive Orders

The United States lacks a federal AI statute, so the executive branch and states are shaping policy. President Donald Trump’s Executive Order 14179 (“Removing Barriers to American Leadership in Artificial Intelligence”, 23 Jan 2025) directs agencies to promote innovation and eliminate “ideological bias.” It rescinds or suspends actions taken under President Joe Biden’s 2023 AI order (EO 14110) and tasks agencies with developing an AI action plan within 180 days (whitehouse.gov). The order emphasises free‑market leadership and rejects approaches perceived as limiting innovation.

President Biden’s earlier Executive Order 14110 (“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”, 30 Oct 2023) established eight guiding principles: safety and security, responsible innovation, support for workers, equity and civil rights, consumer protection, privacy, federal government use and international leadership (lawfaremedia.org). It required large AI developers and cloud providers to share safety test results with the government and emphasised fairness, transparency and risk mitigation.

In January 2025, President Biden issued a separate Executive Order on Advancing United States Leadership in Artificial Intelligence Infrastructure, calling for domestic AI compute capacity to protect national security, ensure economic competitiveness and integrate clean energy into data centres (bidenwhitehouse.archives.gov). This order signals that U.S. policy will pivot toward strategic infrastructure as well as governance, and underscores the tension between open markets and strategic autonomy.

State Laws and Sector‑Specific Initiatives

By 2025, eleven U.S. states have enacted comprehensive privacy laws with AI provisions, and more states are drafting algorithmic‑discrimination bills and facial‑recognition bans. Without federal legislation, companies face a patchwork of requirements that vary widely in scope and enforcement. Industry‑led standards (the NIST AI Risk Management Framework and ISO/IEC 42001) are being adopted voluntarily, but state‑level laws and sectoral rules (e.g., in finance and healthcare) drive compliance costs and uncertainty.

China’s AI Regulation and Surveillance Rules

China is pursuing a layered approach: generic AI rules, sector‑specific measures and strict controls on surveillance technologies.

Generative AI Measures

In July 2023, seven Chinese regulators (led by the Cyberspace Administration of China, CAC) released the Interim Measures for the Management of Generative AI Services (effective 15 Aug 2023). They apply to organisations providing public generative AI services and set core principles: providers must use legitimate and high‑quality training data, respect intellectual‑property and personal rights, establish service agreements, be transparent about content generation and ensure security assessments (securiti.ai). Providers must also prevent discrimination, protect minors and maintain algorithmic transparency.

Labeling Requirements

On 14 Mar 2025, the CAC issued the Measures for Labeling AI‑Generated Content and national standard GB 45438‑2025, effective 1 Sep 2025. These rules require platforms and “internet information service providers” to label AI‑generated content with visible tags and embedded metadata. Platforms must detect AI‑generated content and classify it as “confirmed,” “possible,” or “suspected” AI‑generated, applying different labeling requirements to each; metadata must include the original producer and generation method (insideprivacy.com). Authorities have launched enforcement campaigns to improve labeling, combat misinformation and regulate AI applications (insideprivacy.com).
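In practice, a provider’s label record would carry both the visible tag and the embedded metadata. The sketch below is purely illustrative: the field names and JSON serialisation are our assumptions, and the binding schema is defined by GB 45438‑2025, which is not reproduced here.

```python
import json
from dataclasses import dataclass, asdict
from enum import Enum

class AIGenStatus(Enum):
    CONFIRMED = "confirmed"   # platform verified the content is AI-generated
    POSSIBLE = "possible"
    SUSPECTED = "suspected"

@dataclass
class ContentLabel:
    """Illustrative label record; field names are our own assumptions."""
    status: AIGenStatus       # detection verdict, drives the visible tag
    visible_tag: str          # explicit, user-facing label text
    producer: str             # original producer of the content
    generation_method: str    # e.g. the model or service used

def to_metadata(label: ContentLabel) -> str:
    """Serialise the label as embedded metadata (illustrative format)."""
    record = asdict(label)
    record["status"] = label.status.value
    return json.dumps(record, ensure_ascii=False)

label = ContentLabel(AIGenStatus.CONFIRMED, "AI-generated",
                     producer="ExampleStudio", generation_method="text-to-image model")
print(to_metadata(label))
```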

Surveillance and Video Regulations

China’s regulation of surveillance technology continues to tighten. The new Public Security Video System Regulations (promulgated 13 Jan 2025, effective 1 Apr 2025) limit the installation of cameras in private spaces (guest rooms, dormitories, bathrooms) and require operators to file records with local authorities and delete footage after 30 days unless longer retention is legally required (loc.gov). They prohibit sharing or publishing video information without consent and impose fines for improper retention or disclosure (loc.gov). This signals an emerging privacy framework around surveillance even within an authoritarian context.
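The retention rule is mechanical enough to encode directly. A minimal sketch, assuming a flat 30‑day default and a legal‑hold exception (both simplifications of the regulation’s actual terms):

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # default retention period assumed from the 2025 regulations

def deletion_due(recorded_on: date, legal_hold: bool = False) -> date | None:
    """Return the date by which footage must be deleted.

    Returns None when a legal requirement mandates longer retention.
    """
    if legal_hold:
        return None  # retained for as long as the law requires
    return recorded_on + timedelta(days=RETENTION_DAYS)

print(deletion_due(date(2025, 4, 1)))  # 2025-05-01
```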

International Cooperation and Global Frameworks

Declarations and Treaties

The Bletchley Declaration (Nov 2023), signed by 28 countries including the U.S., China and EU members, affirms that AI should be human‑centric, safe, trustworthy and responsible, acknowledging both opportunities and significant risks. Signatories commit to transparency, fairness, accountability, bias mitigation, privacy and data protection. The declaration emphasises that frontier AI risks are international and require global cooperation (gov.uk).

The Council of Europe Framework Convention on Artificial Intelligence (Vilnius, 5 Sep 2024) is the first legally binding treaty covering the entire AI lifecycle. It was signed by various Council of Europe members, Israel, the United States and the European Union. The convention aims to ensure AI respects human rights, democracy and the rule of law across the design, development and deployment stages (coe.int). It will enter into force after ratification by five signatories, including at least three Council of Europe member states.

Other Regional Initiatives

Other jurisdictions—including Canada, Australia, Japan, Brazil and India—are drafting AI strategies or bills. Many adopt risk‑based frameworks similar to the EU, with stricter controls on high‑risk applications and sector‑specific guidelines. Africa and Latin America face resource constraints and rely on soft‑law frameworks inspired by UNESCO’s AI ethics recommendations. Global regulatory divergence is thus expected to persist.

Projected Scenarios (2025–2030)

Scenario 1: Coordinated Compliance and Standardisation

Drivers: Successful implementation of the EU AI Act and the Council of Europe convention, global uptake of the Bletchley principles, growing adoption of international standards (ISO 42001, NIST AI RMF) and closer transatlantic cooperation.

Outcomes:

  • Streamlined compliance—Companies adopt modular compliance architectures and risk‑based audits that satisfy multiple jurisdictions (see the sketch after this list). This reduces duplication and lowers costs.
  • Improved interoperability—Convergence around shared safety tests, transparency requirements and data‑protection principles fosters cross‑border AI trade and research.
  • Innovation remains robust—Clear rules for high‑risk and general‑purpose AI give developers legal certainty and encourage investment. EU guidelines for general‑purpose models (e.g., open weights, safety testing) may influence global norms.
  • International institutions—A global AI governance forum or an OECD/UN body emerges to harmonise standards and mediate disputes.
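One way to realise such modular compliance is a single internal control catalogue mapped onto the external regimes each control helps satisfy. A minimal sketch follows; the regime names are real, but the control IDs and mappings are invented for illustration.

```python
# Each internal control is implemented once and mapped to the external
# regimes it helps satisfy; IDs and mappings are illustrative only.
CONTROLS = {
    "CTRL-01 risk assessment":     {"EU AI Act", "NIST AI RMF", "ISO/IEC 42001"},
    "CTRL-02 training-data audit": {"EU AI Act", "China GenAI Measures"},
    "CTRL-03 human oversight":     {"EU AI Act", "NIST AI RMF"},
    "CTRL-04 content labeling":    {"China GenAI Measures"},
}

def coverage(regime: str) -> list[str]:
    """List the internal controls that contribute to a given regime."""
    return [ctrl for ctrl, regimes in CONTROLS.items() if regime in regimes]

print(coverage("EU AI Act"))  # controls reused across jurisdictions
```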

Scenario 2: Fragmented Regulation and Innovation Bottlenecks

Drivers: Divergent national interests (e.g., U.S. emphasis on market dominance vs. EU risk control, China’s political and security priorities), lack of federal AI law in the U.S., geopolitical tensions, and rapid technological advances (e.g., autonomous agents, multimodal models) outpacing regulation.

Outcomes:

  • High compliance costs—Companies face multiple overlapping or conflicting requirements. The Ethicalogic study warns that regulatory chaos increases operational costs and delays product launches; fragmentation hampers public trust and slows R&D (ethicalogic.com).
  • Innovation bottlenecks—Diverse testing and certification regimes delay global deployment. Small and medium enterprises struggle to navigate complex regulatory landscapes, leading to market consolidation and reduced competition (ethicalogic.com).
  • Strategic decoupling—Geopolitical blocs develop incompatible standards (e.g., EU safety tests vs. U.S. national security priorities). China’s algorithm registry and content labeling create barriers for foreign AI providers.
  • Privacy and surveillance tensions—China continues to strengthen surveillance AI while imposing some privacy safeguards; Western democracies push for stronger privacy and anti‑bias measures. These competing paradigms hinder mutual recognition of compliance.

Scenario 3: Responsible Innovation and Inclusive Governance

Drivers: Civil society advocacy, public pressure to address algorithmic harms, increasing AI literacy programs (as mandated by the EU AI Act), and success of pilot projects using AI for public goods (healthcare, climate mitigation).

Outcomes:

  • Human‑centric design—AI systems are co‑created with stakeholders, emphasising fairness, accessibility and inclusion. Requirements for explainability and auditability become standard.
  • Worker protection and re‑skilling—Policies supporting workforce transition and AI augmentation (as in Biden’s EO) reduce resistance and build social trust.
  • Global South engagement—Developing countries participate in governance frameworks, adapt rules to local contexts and contribute to standard‑setting. Regional bodies (e.g., African Union) develop AI strategies leveraging UNESCO’s ethical guidance.
  • Ethical innovation labs—Public–private partnerships create sandboxes for safe experimentation and share results internationally.

Implications for Organisations

The following summary sets out, for each regulatory regime, its key obligations/policies and the implications for compliance and innovation.

EU AI Act
Key obligations/policies: bans unacceptable‑risk practices; high‑risk systems must meet strict obligations (risk management, data quality, documentation, human oversight, cybersecurity) (digital-strategy.ec.europa.eu); staged application from Feb 2025 to Aug 2027 (digital-strategy.ec.europa.eu).
Implications: companies must map their AI inventory, classify risk, conduct impact assessments and maintain documentation; long‑term certainty and cross‑sector harmonisation may spur innovation but require significant upfront investment.

U.S. Executive Orders
Key obligations/policies: Trump’s EO 14179 prioritises innovation and revokes prior regulatory actions (whitehouse.gov); Biden’s EO 14110 emphasised safety, equity and international leadership (lawfaremedia.org); the infrastructure EO calls for domestic AI compute (bidenwhitehouse.archives.gov).
Implications: the lack of federal law creates uncertainty; regulatory requirements may fluctuate with administrations; industry standards (NIST, ISO) will likely serve as the baseline; state laws necessitate agile compliance systems.

China’s regulations
Key obligations/policies: the generative AI measures require legitimate training data, respect for IP and personal rights, transparency, and security assessments (securiti.ai); labeling rules mandate explicit and metadata tags for AI‑generated content (insideprivacy.com); video regulations restrict surveillance‑camera installation and mandate retention and deletion standards (loc.gov).
Implications: foreign providers must register algorithms and comply with content controls; labeling requirements add technical overhead; local data storage and security assessments may limit cross‑border collaboration; privacy safeguards on surveillance systems reflect gradual tightening.

International frameworks
Key obligations/policies: the Bletchley Declaration calls for safe, human‑centric AI and global cooperation (gov.uk); the Council of Europe convention is legally binding and covers the AI lifecycle (coe.int).
Implications: voluntary declarations build soft norms; the convention may serve as a template for broader treaties; companies may align their practices with emerging international standards.

Conclusion

Between 2025 and 2030, AI regulation will shift from principles to enforcement. The EU AI Act’s phased implementation will set a global benchmark, while U.S. policy may oscillate between deregulation and safety mandates depending on political leadership. China will continue to balance rapid AI adoption with content control and emerging privacy rules. Global cooperation initiatives—the Bletchley Declaration and Council of Europe convention—show promise but must contend with divergent national interests. Organisations should invest in adaptable governance frameworks, cross‑functional compliance teams and transparent AI practices to navigate this evolving landscape and harness AI responsibly.
