There is a version of this story that has been written many times: regulators propose sweeping rules for artificial intelligence, industry pushes back, timelines slip, and the frameworks that eventually emerge are weaker than originally envisioned. That story is still unfolding in some jurisdictions. But in 2026, for the first time, it is also being displaced by something new — actual enforcement, actual penalties, and actual compliance deadlines that technology companies are scrambling to meet.
The European Union's AI Act, which became the world's most comprehensive binding AI law when it entered full enforcement in early 2026, marks a genuine inflection point. It is not that the law is perfect — practitioners cite ongoing ambiguities, contentious definitions, and implementation costs that have fallen unevenly across the industry. But it is law, and it applies to a market of roughly 450 million people and every company that wants to serve that market, regardless of where it is incorporated.
Around the world, other governments are watching closely and charting their own courses. The regulatory landscape that is emerging is not a unified global standard — far from it. It is a patchwork of national and regional approaches, each shaped by different political traditions, industrial interests, and conceptions of what AI governance is actually supposed to achieve. For the technology sector, navigating this patchwork has become one of the defining compliance challenges of the decade.
The EU AI Act: From Text to Enforcement
The EU AI Act had been years in the making before its final adoption. First proposed by the European Commission in 2021 and substantially revised through parliamentary negotiations, the Act creates a tiered regulatory structure based on the risk level of AI applications. At the top are prohibited uses — systems deemed to pose unacceptable risks, such as social scoring by governments and most forms of real-time remote biometric identification in public spaces. Below that are high-risk systems, which face the most extensive requirements. Limited-risk systems, such as chatbots that interact directly with people, must meet lighter transparency obligations, while minimal-risk applications — the vast majority of AI software in daily use — remain largely unregulated.
The high-risk category is where most of the compliance action is concentrated. It covers AI deployed in critical infrastructure, education, employment (particularly automated CV screening and job applicant assessment), essential services such as credit scoring and insurance underwriting, migration and border control, and the administration of justice. For developers and deployers of such systems, the obligations are substantial: detailed technical documentation, conformity assessments, registration in a publicly accessible EU database, post-market monitoring, and mechanisms for human oversight.
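To make the tiered structure concrete, the sketch below shows how a team might triage use cases against it in code. It is a deliberately simplified illustration in Python: the tier names and domain lists paraphrase the structure described above, and any real classification decision turns on the Act's detailed legal definitions rather than keyword matching.

```python
# Simplified, illustrative triage of use cases against the tiered structure
# described above. The tier names and keyword buckets paraphrase the article,
# not the legal text; they are not a legal mapping.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk (transparency obligations)"
    MINIMAL_RISK = "minimal-risk"

PROHIBITED_USES = {"social scoring", "real-time remote biometric identification"}
HIGH_RISK_DOMAINS = {
    "critical infrastructure", "education", "employment",
    "credit scoring", "insurance underwriting",
    "migration and border control", "administration of justice",
}
LIMITED_RISK_USES = {"chatbot", "content generation"}

def classify(use_case: str) -> RiskTier:
    """Toy triage of a use-case description into a risk tier."""
    text = use_case.lower()
    if any(term in text for term in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(term in text for term in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH_RISK
    if any(term in text for term in LIMITED_RISK_USES):
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK

print(classify("automated CV screening for employment decisions"))  # RiskTier.HIGH_RISK
```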
The European AI Office, created in 2024 to coordinate implementation and enforcement alongside national authorities, has begun its first wave of compliance reviews. Several large-scale complaints have been filed by consumer advocacy groups, primarily targeting AI-powered hiring platforms and automated credit assessment tools. As of March 2026, no formal penalties have been issued under the Act's full enforcement framework, but the Office has signalled it expects to conclude initial reviews and issue findings before year end.
Compliance in Practice: What Companies Are Doing
For companies with products in scope, the practical work of compliance has turned out to be considerably more intensive than many anticipated. The conformity assessment process for high-risk systems requires assembling documentation spanning the entire development lifecycle — from data sourcing and model training decisions to testing methodologies and post-deployment monitoring protocols. For organisations that have historically managed AI development as a purely engineering function, the Act has required the creation of new governance structures and documentation practices from the ground up.
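In practice, much of that documentation work amounts to keeping structured records of decisions made across the lifecycle. The sketch below shows one minimal, hypothetical shape such a record might take; the field names are illustrative and are not drawn from the Act or from any published standard.

```python
# A minimal sketch of a machine-readable lifecycle documentation record,
# assuming a team wants data sourcing, training decisions, testing, and
# monitoring artefacts in one place. Field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConformityRecord:
    system_name: str
    intended_purpose: str
    risk_tier: str
    data_sources: list[str] = field(default_factory=list)
    training_decisions: list[str] = field(default_factory=list)   # e.g. architecture and hyperparameter rationale
    evaluation_reports: list[str] = field(default_factory=list)   # links to test methodologies and results
    human_oversight_measures: list[str] = field(default_factory=list)
    post_market_monitoring_plan: str = ""
    last_reviewed: date = field(default_factory=date.today)

record = ConformityRecord(
    system_name="cv-screening-assistant",
    intended_purpose="ranking job applications for human review",
    risk_tier="high-risk",
    data_sources=["internal applicant records 2019-2024 (anonymised)"],
)
print(record.system_name, record.risk_tier)
```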
A recurring challenge involves the supply chain complexity of modern AI products. Many enterprise software vendors build their products on top of foundation models provided by third parties — using, for example, large language models accessed through APIs as a core component of a higher-level application. Under the Act, responsibility for compliance is distributed across this chain, but working out exactly where each obligation falls requires careful legal analysis that the Commission's guidance documents have not fully resolved.
"The challenge isn't understanding the broad intent of the regulation — that's reasonably clear," said one product compliance specialist at a Brussels-based consultancy, speaking on background. "The challenge is the detail. What counts as a 'substantial modification' that triggers a new conformity assessment? When does an AI system used in recruitment fall inside the high-risk definition versus outside it? These questions matter enormously for compliance planning."
Smaller companies, including startups building AI-powered professional tools, are finding the compliance overhead particularly burdensome. Several European AI ecosystem reports from early 2026 note that the Act's requirements have increased the perceived cost of building in high-risk application areas, leading some early-stage companies to defer product launches or explore whether product designs can be adjusted to fall into lower-risk categories.
The General-Purpose AI Question
One of the most technically and politically contentious aspects of the EU AI Act involves its treatment of general-purpose AI models — the large, multi-capability foundation models that underpin a growing share of AI applications globally. The final version of the Act introduced specific provisions for GPAI models, requiring providers of models above certain computational thresholds to publish technical documentation, comply with EU copyright law in their training data practices, and cooperate with downstream deployers.
For the small number of companies that provide the most widely used foundation models, these requirements involve disclosing information they have historically treated as proprietary. The Act distinguishes between standard GPAI models and those deemed to pose "systemic risk" due to their scale and capabilities, with more stringent obligations for the latter tier. Industry representatives have argued that the criteria for systemic risk classification are ambiguous and that documentation requirements could harm competitive positioning without meaningfully improving safety.
The AI Office has published interim guidance on GPAI compliance, but several aspects remain under active negotiation between the Office and major model providers. The European Parliament's AI committee is monitoring the process, with several members expressing concern that industry lobbying could water down the provisions before meaningful enforcement begins.
The United States: Fragmentation and Selective Progress
Across the Atlantic, AI regulation in the United States presents a starkly different picture. There is no federal AI law, and the prospects for one in the near term remain uncertain. The AI governance landscape instead consists of executive branch directives, sector-specific guidance from agencies like the Federal Trade Commission and the Equal Employment Opportunity Commission, and an increasingly active state-level regulatory environment.
The National Institute of Standards and Technology's AI Risk Management Framework, first published in 2023, has become the closest thing to a de facto national standard for AI governance practices. Adoption among federal agencies is widespread, and many large private-sector organisations have incorporated it into their internal programmes. The Framework is voluntary, however, and its practical impact on companies with no federal contracting relationships or particular motivation to adopt it is limited.
In Congress, AI legislation has been introduced repeatedly without advancing to final passage. The most substantive proposals have focused on specific high-impact use cases: requiring algorithmic impact assessments for AI used in consequential decisions affecting individuals, mandating transparency disclosures for AI-generated content, and establishing baseline requirements for AI used in federal agency functions. A bipartisan group of senators has been working on a narrower bill addressing transparency and accountability in AI-assisted government decision-making, which observers consider more likely to advance than broader proposals.
The state level has been more active. California, which enacted landmark AI transparency and safety requirements in 2024, has continued to develop its regulatory framework. Colorado, Connecticut, and Texas have passed laws addressing algorithmic discrimination in consequential decisions. By early 2026, more than a dozen states had enacted AI-related legislation covering specific areas, creating a patchwork that is increasingly difficult for nationally operating companies to navigate consistently.
The FTC and Sector-Specific Enforcement
The Federal Trade Commission has emerged as one of the more active federal players in AI accountability, using its existing authority over unfair and deceptive trade practices to pursue cases involving AI-related consumer harms. The Commission has opened investigations into several companies over allegations relating to deceptive marketing claims about AI capabilities, discriminatory outcomes in AI-powered credit and housing decisions, and unfair data practices in the training of consumer-facing models.
The FTC's AI enforcement approach is notable for focusing on actual consumer outcomes rather than process requirements — a contrast to the EU Act's more documentation-centric framework. Critics argue this means enforcement is reactive rather than preventive, arriving only after harm has occurred. Supporters counter that outcome-focused enforcement avoids bureaucratic overhead and adapts more flexibly to a rapidly changing technology landscape.
The Food and Drug Administration has developed guidance for AI used in medical devices and clinical decision support, with a risk-based classification framework that shares structural similarities with the EU approach. The Consumer Financial Protection Bureau has taken a firmer stance on AI in credit decisions, issuing guidance that lenders cannot use AI models to discriminate against protected classes even if the discrimination is an unintended output of a model trained on historical data.
China's Targeted Regulatory Strategy
China has pursued a deliberately different regulatory architecture: rather than a single horizontal framework, the government has enacted a series of targeted rules addressing specific use cases and risks as they emerged. This approach has allowed faster regulatory action in areas of immediate concern while deferring broader governance questions.
The most significant early moves covered recommendation algorithms, with regulations effective from 2022 requiring providers of algorithmic recommendation systems to disclose their use and offer users options to adjust or disable algorithmic curation. Generative AI regulations followed in 2023, requiring providers of publicly accessible services to register with authorities and submit content moderation plans. In late 2025, China issued new rules on synthetic media requiring disclosure of AI-generated content to viewers, technical watermarking standards, and platform liability for distributing unlabelled synthetic material.
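What a disclosure label might look like in practice can be illustrated with a small sketch. The structure below is purely hypothetical: China's labelling rules define their own technical standards, which are not reproduced here, and real implementations also involve watermarks embedded in the media itself.

```python
# An illustrative sketch of attaching a machine-readable provenance label to
# generated content. Field names are hypothetical and do not follow any
# official labelling standard.
import json
from datetime import datetime, timezone

def label_as_synthetic(content: str, provider: str, model_id: str) -> dict:
    """Wrap generated content with an explicit AI-generation disclosure."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "provider": provider,
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

print(json.dumps(label_as_synthetic("...", "example-provider", "example-model-v1"), indent=2))
```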
Industry analysts note that China's regulatory interventions have been shaped partly by the government's own significant investments in AI development. Domestic companies have generally accepted the compliance requirements while benefiting from regulatory barriers that create friction for foreign competitors — a pattern regulators in other jurisdictions are watching with interest.
Japan, South Korea, and the Voluntary Approach
Japan and South Korea have both adopted lighter-touch AI governance strategies relying primarily on voluntary industry guidance rather than binding legislation. Japan's AI Governance Guidelines, developed through a multi-stakeholder process, articulate principles around transparency, fairness, privacy, and safety without creating enforceable legal obligations. South Korea's approach has been similar, though the government has signalled it may move towards binding provisions in specific high-risk domains, particularly AI used in healthcare diagnostics and criminal justice.
Observers sometimes characterise this as a "soft law" approach — creating norms and expectations that influence industry behaviour without enforcement mechanisms. Proponents argue this preserves regulatory flexibility and avoids locking in technical requirements that may quickly become outdated. Critics suggest it creates inadequate accountability and allows harms to accumulate before any official response is possible.
International Coordination Efforts
The absence of a binding international AI governance framework remains one of the most significant structural gaps in the current regulatory landscape. The G7 Hiroshima AI Process produced a set of guiding principles and a code of conduct for developers of advanced AI systems, but lacks enforcement mechanisms and relies on voluntary adherence. The Council of Europe's Framework Convention on AI and Human Rights, opened for signature in 2024, obligates signatories to ensure AI systems are used consistently with human rights and democratic processes — a more legally substantive instrument, though one that allows considerable national discretion in implementation.
At the UN level, the General Assembly adopted a non-binding resolution on AI governance in 2024. A follow-up process is underway but has been complicated by geopolitical tensions. Experts generally view a binding UN AI treaty as unlikely in the near to medium term, though the accumulation of shared principles through multilateral processes may gradually influence domestic legislation, particularly in smaller economies that look to major blocs for regulatory direction.
How Businesses Are Responding
For technology companies operating across multiple jurisdictions, the emerging regulatory environment creates a complex compliance planning challenge. The fundamental difficulty is not just the number of regulatory regimes but their diversity: different definitions of high-risk AI, different documentation requirements, different enforcement mechanisms, and different timelines. A system compliant in the United States may require significant additional documentation to meet EU standards.
Large platform companies and AI providers have generally responded by investing heavily in centralised compliance infrastructure — teams of policy and legal specialists who track regulatory developments globally, internal governance frameworks that attempt to satisfy the most demanding of the applicable requirements, and documentation practices designed to be auditable across multiple regulatory contexts.
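The "strictest baseline" logic is simple to express, even if applying it is not. The sketch below illustrates the idea: for each control area, keep the most demanding requirement found across jurisdictions. The requirement levels shown are invented for illustration and do not summarise any actual legal text.

```python
# Illustrative only: requirement levels and per-jurisdiction entries are
# invented stand-ins, not summaries of real obligations.
REQUIREMENT_LEVELS = {"none": 0, "transparency notice": 1, "impact assessment": 2, "conformity assessment": 3}

jurisdiction_requirements = {
    "EU":       {"hiring tools": "conformity assessment", "chatbots": "transparency notice"},
    "US-state": {"hiring tools": "impact assessment",     "chatbots": "none"},
}

def strictest_baseline(requirements: dict[str, dict[str, str]]) -> dict[str, str]:
    """For each control area, keep the most demanding requirement across jurisdictions."""
    baseline: dict[str, str] = {}
    for rules in requirements.values():
        for area, level in rules.items():
            current = baseline.get(area, "none")
            if REQUIREMENT_LEVELS[level] > REQUIREMENT_LEVELS[current]:
                baseline[area] = level
    return baseline

print(strictest_baseline(jurisdiction_requirements))
# {'hiring tools': 'conformity assessment', 'chatbots': 'transparency notice'}
```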
Mid-sized companies face more difficult trade-offs. Several have adopted a strategy of building compliance to EU AI Act standards as a baseline — on the reasoning that meeting the most demanding requirements provides a reasonable starting point for meeting others. Smaller startups building in higher-risk domains face the full weight of applicable requirements with limited dedicated compliance resources, and legal advisors report increasing demand for guidance on structuring products to minimise regulatory exposure.
Looking Ahead: The Rest of 2026
Regulatory activity in AI is likely to intensify in the second half of 2026. The European AI Office is expected to issue its first substantive enforcement findings under the Act, providing important clarity on how broad provisions are being interpreted in practice. In Canada, the Artificial Intelligence and Data Act (AIDA) is expected to advance further through Parliament, establishing requirements for high-impact AI systems including impact assessments, mitigation measures, and incident reporting obligations.
In the United States, the most likely federal legislative movement involves narrower, targeted provisions — AI-generated content disclosure, safety testing for high-stakes government applications, and algorithmic accountability in financial services among the areas where bipartisan coalitions are working. The trajectory remains significantly influenced by the political composition of Congress and the administration's regulatory priorities.
For companies and practitioners in the AI sector, the fundamental message from the regulatory landscape in 2026 is that AI governance has moved from the domain of policy aspiration into the domain of practical compliance obligation — at least in some major markets, with more to follow. The era of AI development proceeding largely in the absence of binding external requirements is ending. The era of navigating a complex, multi-jurisdictional compliance landscape has well and truly begun.
Enforcement Gaps and the Credibility Question
One of the most important questions hanging over every new AI regulatory framework is whether enforcement will be credible. Regulation that exists on paper but is not enforced has limited practical effect — companies will calculate the expected cost of non-compliance as very low and adjust their behaviour accordingly. Several factors affect whether enforcement credibility develops in the new AI governance landscape, and the picture varies significantly across jurisdictions.
In Europe, the EU AI Act's enforcement architecture distributes responsibilities between the European AI Office and national authorities designated by member states. The AI Office has primary responsibility for supervising GPAI models and for cases with cross-border implications, while national authorities handle enforcement of other AI Act provisions in their jurisdictions. This distributed model has both strengths and weaknesses. National authorities bring proximity and local knowledge, but there are significant differences in the resources and capacity that different member states have devoted to AI oversight. A startup building a high-risk AI system and launching across the EU might face quite different levels of enforcement scrutiny depending on which member state is deemed the lead authority for that case.
The AI Office itself has been building its capacity rapidly but is still in its early stages. It has recruited technical and legal expertise and published guidance materials, but as of early 2026 it has not yet completed any major enforcement action. The absence of visible enforcement — while partly a product of the Act's recent entry into full force — risks creating a perception among regulated companies that compliance urgency is lower than the legislative text implies. The Office's first significant enforcement actions, widely expected before year end, will be closely watched as a signal of regulatory intent and stringency.
In the United States, the picture is more fragmented. Federal agencies using existing authority — primarily the FTC, EEOC, CFPB, and FDA depending on the sector — have more established enforcement track records but limited AI-specific mandate. State-level AI regulation is enforced by state attorneys general and relevant state agencies, with varying levels of resources and political priority. The absence of a federal AI law means there is no single enforcement authority whose posture and priorities set the tone for the broader US AI compliance environment.
Algorithmic Transparency and the Explainability Debate
A thread running through AI regulatory frameworks across jurisdictions is the question of algorithmic transparency — to what degree should AI systems be required to explain how they reach their decisions, and who should be entitled to that explanation? This question sits at the intersection of technical feasibility, privacy, commercial confidentiality, and fundamental rights, and different frameworks have resolved it differently.
The EU AI Act requires that high-risk AI systems provide sufficient transparency to allow their outputs to be interpreted and reviewed by human overseers, but the specific technical mechanisms through which this transparency is to be achieved are largely left to the developer and deployer to determine. For some AI systems — particularly those based on rule sets or decision trees — meaningful explanations of individual decisions are technically straightforward. For large neural networks that determine credit scores, hiring recommendations, or medical risk assessments, the technical challenge is more fundamental: the model's internal processing does not map onto human-interpretable reasoning in any simple way. It is possible to generate plausible-sounding post-hoc explanations that do not actually reflect how a decision was made, but such explanations are arguably misleading.
Several AI regulation frameworks specifically address the right of individuals affected by automated decisions to receive an explanation. The EU AI Act, in conjunction with GDPR's existing provisions on automated decision-making, gives individuals affected by certain AI-driven decisions rights to explanation and to human review. Implementing these rights in practice raises difficult questions about what constitutes a meaningful explanation, who bears the burden of providing it, and how to handle cases where the explanation reveals information the deployer considers commercially sensitive.
Academic and industry research on explainable AI (XAI) has produced a range of techniques for generating explanations of neural network behaviour, but the adequacy of these explanations for regulatory purposes remains contested. Some researchers argue that the explanations generated by current XAI methods are plausible but not necessarily faithful — they describe something that could be a reasonable basis for the decision but may not accurately reflect the actual computational basis. Regulators and courts are beginning to grapple with what level of explanation is legally sufficient, and the resolution of this question will significantly shape how AI transparency requirements are implemented in practice.
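The faithfulness concern is easiest to see with a concrete example. The sketch below uses permutation importance, a common model-agnostic technique, on a hypothetical dataset: it produces a defensible global ranking of features, but it says nothing about the model's internal reasoning for any individual decision, which is precisely the gap regulators are weighing.

```python
# A minimal sketch of a model-agnostic, post-hoc attribution technique
# (permutation importance). The data, features, and model are hypothetical
# stand-ins, not a real credit-scoring or hiring system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular data: 5 features, binary outcome driven by features 0 and 2.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out accuracy.
# This yields a global, model-agnostic ranking; it does not reconstruct the
# model's internal reasoning for any single decision.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```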
Civil Society, Advocacy, and the Role of Litigation
Alongside formal regulatory processes, civil society organisations and individual litigants are playing an increasingly active role in shaping how AI is governed in practice. This includes both advocacy and direct litigation challenging specific AI deployments, and it represents an important complement to formal regulatory enforcement in jurisdictions where official enforcement capacity is limited or slow.
In Europe, several digital rights organisations — including noyb (None of Your Business), AlgorithmWatch, and Access Now — have been active in filing complaints about specific AI deployments under the EU AI Act and under GDPR. These organisations have the legal standing to file complaints on behalf of affected individuals or in the public interest, and they have used this standing to challenge AI systems used in contexts ranging from border control biometric screening to automated social benefit decisions. Even in cases where formal enforcement action is slow, the filing of complaints creates public attention and can prompt companies to modify or suspend challenged deployments pre-emptively.
In the United States, litigation challenging AI deployments has been proceeding through civil rights and consumer protection frameworks rather than dedicated AI regulation. Cases challenging AI-powered hiring tools, predictive policing systems, and algorithmic content moderation have proceeded through the courts with varying results. The legal theories available under existing US law — primarily anti-discrimination law and consumer protection statutes — impose constraints on AI that may differ in important ways from the risk-based framework of the EU AI Act, and the development of a body of US case law on AI liability is still at an early stage.
International human rights bodies have also begun engaging with AI governance questions, with the UN Human Rights Committee and the Council of Europe's Human Rights Commissioner publishing guidance on the human rights implications of AI in public decision-making. While these bodies do not have binding enforcement authority in most contexts, their positions contribute to the normative environment in which AI governance is debated and can influence domestic legal and regulatory interpretation.
Sector-Specific Regulation: Healthcare, Finance, and Employment
While horizontal AI frameworks like the EU AI Act receive the most attention, much of the practical regulatory action around AI is happening at the sector level — through regulatory bodies with existing authority over specific industries, applying that authority to AI applications within their remit. Understanding sector-specific regulatory developments is essential for any organisation deploying AI in a regulated industry.
In healthcare, the regulatory challenge is how to oversee AI systems used in clinical decision support, diagnostic assistance, and patient risk stratification while supporting beneficial innovation. Regulators in the EU, US, UK, and several other jurisdictions have been developing guidance on how existing medical device and pharmaceutical frameworks apply to AI, with a general trend toward treating AI systems that influence clinical decisions as requiring registration and review commensurate with their risk. The FDA's Digital Health Center of Excellence has been developing a risk-based framework for AI-based Software as a Medical Device (SaMD) that attempts to balance oversight with an acknowledgement that AI systems, unlike static medical devices, may update and evolve over time.
In financial services, AI is being used across a wide range of functions — credit scoring, fraud detection, trading algorithms, insurance underwriting, customer service — and is regulated through a patchwork of existing financial services frameworks that are being updated to explicitly address AI. The Basel Committee on Banking Supervision has published principles for banks' use of AI that emphasise governance, model risk management, and explainability. The Financial Stability Board has examined AI and machine learning in financial services from a systemic risk perspective, raising concerns about correlated behaviour and potential market stability implications of widely shared models.
Employment AI — the use of algorithmic tools in hiring, promotion, performance management, and compensation decisions — is receiving increasing regulatory attention given its potential for discriminatory outcomes and its impact on fundamental economic interests. The EU AI Act's classification of AI systems used in employment contexts as high-risk is one of the most consequential aspects of that legislation for the HR technology industry, which has been rapidly deploying AI tools for CV screening, interview assessment, and workforce planning. Several major HR software vendors are undertaking significant compliance work to bring their AI offerings into line with the Act's requirements.
Small Business and Startup Impacts
The compliance burden of AI regulation falls disproportionately on smaller organisations — startups, small businesses, and academic researchers — who lack the legal and compliance resources of large enterprises but may be developing or deploying AI systems that are subject to the same regulatory requirements. This distributional concern has been raised consistently in consultations on AI regulatory frameworks across jurisdictions, and regulators have responded with varying degrees of accommodation.
The EU AI Act includes some provisions intended to reduce the burden on SMEs — including simplified documentation requirements and access to regulatory sandboxes where small companies can test their products under supervised conditions before full compliance is required. In practice, however, the overall architecture of the Act is complex, and navigating it without substantial legal and compliance support is difficult. Several European AI industry associations have published guidance for SMEs and offer compliance assistance programmes, but coverage is uneven and the quality of available guidance varies.
Academic AI research is another area where regulatory requirements create challenges. Research institutions that build AI systems as part of research projects may technically be in scope of AI regulation for systems that would be classified as high-risk in commercial deployment, but the research context creates different risk profiles and compliance capabilities. The EU AI Act includes an exemption for AI developed exclusively for research and development purposes, but the boundary between research and deployment is not always clear in practice, particularly for research groups that make their systems available as open-source tools or as demonstrations accessible to external users.
Regulatory sandboxes — controlled environments in which AI developers can test products with regulatory oversight but without full compliance requirements — have been established or announced in several jurisdictions as a mechanism for supporting innovation while maintaining engagement with regulators. Spain, Norway, and several other EU member states have launched AI sandboxes under the EU AI Act framework. The effectiveness of these mechanisms in practice depends on their accessibility, the speed of engagement, and whether they genuinely reduce time-to-market for compliant products rather than simply adding an additional regulatory process.
Corporate Responsible AI Programmes
Alongside external regulatory requirements, a significant share of the practical AI governance action in 2026 is taking place inside organisations themselves — through internal responsible AI programmes that set policies, processes, and guardrails for how AI is developed and deployed. These programmes have grown substantially in scale and sophistication over the past two years, driven by a combination of regulatory compliance requirements, reputational concerns, and genuine organisational commitment to deploying AI responsibly.
The structure and content of responsible AI programmes vary significantly across organisations. At larger technology companies, they typically include dedicated teams of AI ethics and safety specialists, structured review processes for AI products at different stages of development, red-teaming exercises that attempt to identify potential harms before deployment, and post-deployment monitoring systems that track model behaviour in production. At smaller companies, responsible AI practices may be embedded in existing product development processes rather than organised as a separate function.
Several companies have published voluntary transparency reports on their AI development practices, model capability assessments, and safety evaluations — partly in response to regulatory expectations and partly as a form of stakeholder communication. These publications vary in depth and verifiability; some represent substantive technical reporting while others are primarily communications exercises. Third-party auditing of AI systems is an emerging industry, and several organisations offer assessment and certification services for AI governance, though the standards and methodologies used vary and no universal certification framework has yet emerged.
One area where corporate responsible AI programmes have had visible practical impact is in the implementation of content policies and usage restrictions for AI products — the rules governing what AI systems will and will not do, what content they will and will not generate, and what use cases they are intended or not intended to support. These policies are genuinely consequential: they determine how AI products behave for hundreds of millions of users and have been the subject of significant public debate about the appropriate role of private companies in setting content norms for AI systems with broad societal reach.
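At the implementation level, such policies often surface as gates in front of a product's generation or decision endpoints. The sketch below is a minimal, hypothetical illustration; the category names and decision logic are invented, and production policies are far more granular and context-dependent.

```python
# Hypothetical usage-policy gate. Restricted categories and the decision
# logic are invented for illustration and do not reflect any company's policy.
RESTRICTED_USE_CASES = {
    "medical diagnosis without clinician review",
    "automated employment decisions without human oversight",
}

def policy_gate(declared_use_case: str) -> tuple[bool, str]:
    """Return (allowed, reason) for a declared use case."""
    if declared_use_case.lower() in RESTRICTED_USE_CASES:
        return False, "use case is restricted by the product's usage policy"
    return True, "permitted subject to the general terms of use"

allowed, reason = policy_gate("automated employment decisions without human oversight")
print(allowed, "-", reason)
```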
AI Liability: Who Is Responsible When Things Go Wrong?
One of the most consequential and unresolved questions in AI law and regulation is liability — who bears legal responsibility when an AI system causes harm? This question matters enormously for how AI risk is allocated across the value chain, and the different answers that different legal systems are converging on will significantly shape incentives for AI development and deployment.
The EU is addressing AI liability directly through the AI Liability Directive, which was in late-stage legislative process as of early 2026. The draft directive would create a rebuttable presumption of causality in cases where an AI system has breached applicable requirements and that breach plausibly contributed to harm — effectively shifting part of the evidentiary burden from plaintiffs to defendants in AI liability cases. This is a significant change from the standard tort law position in most EU member states, where plaintiffs bear the burden of proving both causation and fault. Combined with the EU AI Act's compliance requirements, the Liability Directive creates a framework where failure to comply with the Act's requirements not only exposes companies to regulatory enforcement but also creates litigation risk.
The US approach to AI liability is being shaped primarily through existing tort and statutory frameworks rather than AI-specific legislation. Product liability, negligence, and anti-discrimination law are all potentially applicable depending on the facts of a given case, but each framework has limitations when applied to AI. Product liability law was developed for physical products and does not map cleanly onto software systems that are regularly updated and whose behaviour is probabilistic. Negligence requires establishing a duty of care, breach, and causation — proving causation in cases involving complex AI systems presents particular challenges. The question of whether AI system providers or deployers are primarily liable, and how responsibility is allocated between them, is being worked out case by case in US courts.
Insurance is one practical mechanism through which AI liability risk is being managed, and the AI-specific insurance market is developing rapidly. Cyber liability policies are being updated to cover AI-specific risks, and dedicated AI liability insurance products are emerging. The pricing and terms of these products reflect the insurers' assessment of AI risk, and the development of an active AI insurance market is itself a useful indicator of how risk professionals are thinking about the probability and magnitude of AI-related harms.
Building Regulatory Capacity for AI
Effective AI regulation requires not only well-drafted rules but also regulatory institutions with the technical capacity to understand, evaluate, and oversee AI systems. This is a significant challenge: the technical complexity of modern AI systems, the pace of change in the field, and the highly competitive labour market for AI expertise all make it difficult for public sector regulatory agencies to build and retain the in-house expertise needed for effective AI oversight.
The EU has been building the European AI Office's technical capacity since its establishment in 2024, recruiting AI engineers, data scientists, and policy specialists. Several member states are also investing in technical capacity within their national AI oversight bodies. But the challenge of competing with private sector compensation for scarce AI expertise is real, and regulatory agencies in most jurisdictions have fewer technical specialists working on AI oversight than the scope of the regulatory task would ideally require.
Several jurisdictions have explored models for supplementing in-house regulatory expertise: convening academic advisory boards that can provide technical assessment on specific questions, establishing regulatory sandbox arrangements that create ongoing engagement with companies at the frontier, and developing capacity to commission independent technical assessments of AI systems. The UK's AI Safety Institute, established in 2023, was an early model for a government body focused on technical AI evaluation capability, and similar institutions are being considered or established in other countries.
Regulatory capacity at the international level is even more constrained. The bodies involved in AI governance — the OECD, UNESCO, the G7, the Council of Europe — have policy and legal expertise but limited technical AI evaluation capability. Developing shared evaluation tools, methodologies, and findings that can inform regulatory decisions across jurisdictions is an important challenge for international AI governance that is receiving increased attention but has not yet been adequately resourced.