Overview of the EU AI Act and the voluntary Code of Practice

The European Union’s Artificial Intelligence Act (AI Act) is the first comprehensive legal framework for artificial intelligence. It uses a risk‑based model to ban unacceptable‑risk AI (social scoring, manipulative persuasion, etc.) and impose strict obligations on high‑risk AI systems used in fields like finance, healthcare and employment (digital-strategy.ec.europa.eu). The law entered into force on 1 August 2024; prohibitions and AI‑literacy requirements became binding on 2 February 2025, general‑purpose AI (GPAI) rules apply from 2 August 2025, and obligations for high‑risk systems start on 2 August 2026, with existing models having until 2 August 2027 to comply (digital-strategy.ec.europa.eu; ttms.com).

One of the most important developments of 2025 was the General‑Purpose AI Code of Practice. Published by the European Commission on 10 July 2025, the Code is a voluntary framework developed by 13 independent experts after consultation with more than 1,400 stakeholders (digital-strategy.ec.europa.eu). It interprets the AI Act’s obligations for providers of large models, offering templates and benchmarks for:

  • Transparency – providers must produce detailed technical documentation and a public summary of their training data (model name, modalities, dataset sizes, data sources and processing methods) (ai-analytics.wharton.upenn.edu). The EU AI Office provides a standardized model documentation form (digital-strategy.ec.europa.eu).
  • Copyright compliance – the Code offers guidance on respecting EU copyright law and implementing opt‑out mechanisms for text‑and‑data mining (digital-strategy.ec.europa.eu).
  • Safety & security (systemic risk) – targeted at the most advanced models (those trained using more than 10²⁵ floating‑point operations) (digital-strategy.ec.europa.eu), this chapter outlines risk‑management, incident‑reporting and cybersecurity measures (digital-strategy.ec.europa.eu).
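As a rough illustration of the systemic‑risk threshold, the sketch below checks whether a model’s estimated training compute crosses 10²⁵ FLOPs. The function names and the 6·N·D compute heuristic are illustrative assumptions for this article, not part of the Act or any official tooling:

```python
# Illustrative sketch only — hypothetical helper names, not official tooling.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # AI Act presumption threshold for GPAI systemic risk

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """True if cumulative training compute exceeds the 1e25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Common rule of thumb for dense transformers: FLOPs ≈ 6 × params × tokens."""
    return 6.0 * n_params * n_tokens

# Example: a 70B-parameter model trained on 15T tokens ≈ 6.3e24 FLOPs
flops = estimate_training_flops(70e9, 15e12)
print(is_presumed_systemic_risk(flops))  # False — just below the 1e25 threshold
```

A provider near the threshold would need to track cumulative compute across training runs, since the presumption attaches to the total.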

Signing the Code is voluntary, but it creates a “presumption of conformity”: providers that adhere to it are deemed to comply with the AI Act’s GPAI obligations and benefit from reduced administrative burdens and legal certainty (ttms.com). Companies that decline must still meet the law’s requirements and demonstrate equivalence (ai-analytics.wharton.upenn.edu). Most major model developers—including Amazon, Google, Microsoft, OpenAI and Anthropic—signed the Code, while some (Meta) declined and others (xAI) signed only the safety chapter (ttms.com). European heavyweights such as Airbus and ASML even called for a two‑year “clock stop” to delay implementation, arguing that the Act’s complexity could harm competitiveness (chathamhouse.org).

How the Code of Practice influences competition

A safe‑harbour for compliance

The Code functions as a safe‑harbour. Providers that adopt it gain a legally endorsed compliance pathway and avoid uncertainty around the AI Act’s requirements (ai-analytics.wharton.upenn.edu). For large players with robust compliance teams, signing the Code is a manageable cost; the real advantage lies in signalling responsible behaviour and securing EU market access. Smaller developers may find the administrative overhead significant—documentation, data‑provenance checks and risk‑mitigation procedures consume resources. However, the AI Office has pledged a cooperative grace period: during the first year after 2 August 2025, regulators will not treat partial implementation as a violation (digital-strategy.ec.europa.eu), helping new entrants adapt gradually.

Potential competitive distortions

  1. Barriers to entry – Comprehensive documentation and risk‑management obligations favour firms with deep pockets. For example, summarising training data requires tracking web domains and dataset composition (ai-analytics.wharton.upenn.edu); copyright policies must vet datasets; and systemic‑risk models must implement advanced security testing (digital-strategy.ec.europa.eu). These costs may discourage smaller European startups or open‑source projects from releasing models, potentially consolidating the market around a few giants.
  2. Legal certainty and trust – Adhering to the Code provides clarity for regulators and customers. Companies that follow it can avoid fines of up to €35 million or 7 % of global turnover, whichever is higher, for the most severe violations (ttms.com). Investors may prioritise compliant providers, further tilting the field towards signatories.
  3. Global spill‑over – Because the AI Act applies extraterritorially to providers whose AI outputs are used in the EU (goodwinlaw.com), non‑European firms must either align with the Code or implement equivalent controls. This may create de facto global standards, raising compliance costs worldwide. On the other hand, by clarifying expectations, the Code reduces legal risk for cross‑border innovators and may encourage companies outside Europe to adopt similar governance, levelling the playing field.
  4. Regulatory pressure vs. innovation – Critics argue that the Code’s transparency and copyright provisions could slow innovation. Google’s Kent Walker warned that the Code might “slow down Europe’s development and deployment of AI” (chathamhouse.org), while Meta claimed it would “throttle the development and deployment of frontier AI models” (chathamhouse.org). This tension reflects a broader debate: does stricter regulation foster trust and thus adoption, or does it deter investment and push research elsewhere? The answer may differ by vertical.
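The fine ceiling in point 2 is straightforward to illustrate. The function name below is hypothetical; the mechanic shown — the cap is the higher of €35 million or 7 % of worldwide annual turnover — is the point:

```python
# Hedged sketch of the AI Act's maximum fine for the most severe violations.
# Function name and turnover figures are illustrative, not from any official source.

def max_fine_severe_violation(global_turnover_eur: float) -> float:
    """Cap is the HIGHER of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

print(max_fine_severe_violation(100_000_000))     # smaller firm: the €35M floor applies
print(max_fine_severe_violation(10_000_000_000))  # large firm: 7% of €10B ≈ €700M
```

For any firm with turnover above €500 million, the 7 % branch dominates — which is why the exposure scales with company size rather than plateauing.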

Impact across key verticals

1. Healthcare (Medical devices & digital medicine)

High‑risk classification and timeline. AI/ML‑enabled medical devices and software are classified as high‑risk under the AI Act (pmc.ncbi.nlm.nih.gov). Obligations for these high‑risk systems apply 36 months after entry into force, meaning digital medical products must comply by August 2027 (pmc.ncbi.nlm.nih.gov). Developers must maintain an AI quality management system, conduct risk management, and ensure data governance, transparency and human oversight (pmc.ncbi.nlm.nih.gov). Incident reporting and post‑market monitoring are also mandatory (pmc.ncbi.nlm.nih.gov).

Competitive implications. Established medical device manufacturers already operate under the EU Medical Device Regulation (MDR); integrating AI Act requirements could reinforce their advantage. Providers outside the EU must appoint an authorized representative and meet EU standards (pmc.ncbi.nlm.nih.gov), raising barriers for foreign entrants. However, compliance could increase trust among hospitals, clinicians and patients, creating demand for certified AI diagnostics and boosting adoption of EU‑compliant products.

Customer impact. For patients and clinicians, the AI Act aims to ensure that AI‑driven diagnostics and treatment recommendations are accurate, fair and explainable. Data governance provisions require evaluating training datasets for biases and geographical context (pmc.ncbi.nlm.nih.gov). This may reduce the risk of models misdiagnosing minorities or producing unsafe recommendations. Continuous incident reporting creates accountability, potentially enhancing patient safety and trust.

2. Financial services (banking, insurance and fintech)

High‑risk AI use cases. The AI Act designates credit‑scoring systems and insurance risk‑assessment models as high‑risk (goodwinlaw.com). Banks must ensure models are robust, accurate and embedded in a strong risk‑management framework, with human oversight (kpmg.com). Prohibited practices include social‑scoring systems that evaluate people’s behaviour to determine creditworthiness (goodwinlaw.com).

Implications for competition. Large financial institutions already operate under stringent supervisory regimes; compliance with the AI Act may dovetail with existing model‑risk management. For fintechs and insurtechs, the cost of conformity assessments and documentation may be significant. However, the AI Act emphasises equal treatment—Recital 158 of the Act aims “to ensure consistency and equal treatment in the financial sector” (goodwinlaw.com)—which may prevent unfair competitive advantages. Smaller players could leverage compliance as a trust signal to customers, but may need partnerships or regulatory tech solutions to manage costs.

Customer impact. Consumers stand to benefit from fairer credit and insurance decisions. Providers of high‑risk AI must explain how models reach decisions and maintain logs (goodwinlaw.com). Transparency and data governance reduce the risk of discriminatory outcomes, while human oversight ensures that algorithms do not make final decisions without review. On the other hand, stricter documentation may slow the rollout of innovative credit products and could lead to more conservative lending criteria during the transition.

3. Employment and human resources

High‑risk HR systems. AI systems used for recruitment, candidate screening, targeted job advertisements, performance evaluation or decisions on promotion/termination are classified as high‑risk (hunton.com). Obligations apply from 2 August 2026 (with some rules on AI literacy beginning February 2025) (hunton.com).

Compliance duties. Employers deploying such systems must:

  • Inform candidates and employees about the use of AI and provide explanations of how decisions are made (hunton.com).
  • Ensure training data is accurate, representative and free from bias (hunton.com).
  • Continuously monitor AI systems and maintain human oversight (hunton.com).
  • Conduct data‑protection impact assessments when personal data is processed (hunton.com).
  • Provide AI‑literacy training for staff involved in AI operations (hunton.com).

Competitive effects. Compliance may strain small recruiting platforms but can also differentiate trustworthy providers. Open‑source or in‑house HR tools will need significant documentation and monitoring capabilities. Large HR tech vendors who sign the Code can use standardized documentation and risk‑management practices as marketing advantages, potentially consolidating the market.

Customer impact. Job seekers and employees gain new rights to understand algorithmic decisions and to request human review. By requiring representative datasets and transparency, the Act aims to mitigate biases against protected groups, potentially leading to fairer hiring and promotion practices. However, some firms may reduce reliance on AI for HR to avoid compliance costs, slowing innovation in recruitment technologies.

4. Education and access to essential services

The AI Act treats AI systems that determine access to education (e.g., exam scoring or admissions) and essential services like credit, insurance or social benefits as high‑risk (digital-strategy.ec.europa.eu). Providers must meet the same obligations: risk management, data quality, logging and human oversight (digital-strategy.ec.europa.eu). For educational institutions and public service agencies, the Code’s templates simplify documentation. The compliance burden may push smaller ed‑tech firms to partner with larger providers, potentially reducing competition but improving reliability. Students and citizens, meanwhile, gain assurance that algorithmic decisions affecting life opportunities are transparent and subject to human review.

5. Law enforcement, biometric and critical infrastructure

High‑risk designations also cover AI systems used in law enforcement (e.g., evidence evaluation), migration and border control, and critical infrastructure safety components (digital-strategy.ec.europa.eu). These sectors must implement robust risk controls, cybersecurity and human oversight. The Code’s safety chapter (for systemic‑risk models) and the AI Act’s prohibitions (such as bans on real‑time remote biometric identification for law enforcement) (digital-strategy.ec.europa.eu) impact vendors of surveillance technologies and biometric solutions. Compliance costs may limit smaller surveillance startups, while providing citizens greater protection against intrusive AI. Critical infrastructure operators (transport, energy) will need to document AI safety components and coordinate with regulators (digital-strategy.ec.europa.eu).

Recommendations for businesses and policy outlook

  1. Perform a risk inventory and classify AI systems. Identify all AI‑enabled products and categorize them into unacceptable, high‑risk, limited‑risk and minimal‑risk categories (digital-strategy.ec.europa.eu). Determine whether you are a provider (developer), deployer (user) or importer, as obligations differ. For GPAI developers, evaluate whether your model may fall under “systemic risk” (training compute above 10²⁵ FLOPs) (digital-strategy.ec.europa.eu).
  2. Adopt internal AI governance. Establish cross‑functional teams (legal, technical, risk) to oversee compliance. For high‑risk applications, implement AI quality management systems, risk‑management processes, dataset governance and human oversight (pmc.ncbi.nlm.nih.gov). Create clear procedures for incident reporting and keep logs up to date.
  3. Leverage the Code of Practice. Even if you choose not to sign, use the Code’s templates for model documentation, data summaries and risk assessment. Signing reduces administrative burden and signals accountability to customers and regulators (ttms.com). The AI Office has promised a cooperative approach during the first year of enforcement (digital-strategy.ec.europa.eu).
  4. Invest in transparency and customer communication. Develop plain‑language explanations of AI decisions for clients and users. Prepare to answer questions from regulators and customers about training data, copyright compliance and model limitations. Transparency will increasingly become a competitive differentiator (ttms.com).
  5. Monitor regulatory updates and engage. The AI Office will update the Code every two years (digital-strategy.ec.europa.eu). Participate in consultations, industry associations and standard‑setting efforts to shape future iterations. Engage with international bodies, as the Code may become a global template (chathamhouse.org).
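The risk inventory in step 1 amounts to tagging each system with an AI Act risk tier. In the sketch below, the four tiers come from the Act itself, while the system names and the mapping are purely hypothetical examples, not legal advice:

```python
# Illustrative risk-inventory sketch. Tiers mirror the AI Act's categories;
# the inventory entries are hypothetical examples only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices (e.g., social scoring)
    HIGH = "high"                  # e.g., credit scoring, hiring, medical AI
    LIMITED = "limited"            # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"            # everything else

# Hypothetical inventory: map each internal system to its tier.
inventory = {
    "credit-scoring-model": RiskTier.HIGH,
    "cv-screening-tool": RiskTier.HIGH,
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

# Systems in the HIGH tier need conformity assessment before August 2026.
high_risk = sorted(name for name, tier in inventory.items()
                   if tier is RiskTier.HIGH)
print(high_risk)  # ['credit-scoring-model', 'cv-screening-tool']
```

Even this trivial structure forces the two questions the Act turns on: which tier does each system fall into, and what role (provider, deployer, importer) does your organisation play for it.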

Conclusion

The EU’s General‑Purpose AI Code of Practice and the broader AI Act herald a new era of accountability for artificial intelligence. By clarifying obligations around transparency, copyright and safety, the Code provides a safe‑harbour for model providers and sets expectations for high‑risk AI across sectors. While compliance imposes costs—especially for smaller firms—it also promises legal certainty, fosters trust, and may level competition by preventing irresponsible deployment. For customers and citizens, the Code and the AI Act aim to ensure that AI systems making decisions about health, finance, jobs or education are fair, explainable and safe. As enforcement escalates through 2026 and 2027, organisations should proactively adopt the Code’s guidance and integrate AI governance into their core strategies to remain competitive in a regulated AI ecosystem.
