In April 2026, the European Union's AI Act came into full force — the first comprehensive legal framework for artificial intelligence in the world. It was greeted by some as a landmark of democratic governance and by others as a cautionary tale of regulatory overreach. This division is itself instructive. There is no consensus on what good AI regulation looks like, because there is no consensus on what AI fundamentally is: a tool, a service, an agent, a risk, or something entirely unprecedented.
The challenge facing regulators is not merely technical. It is philosophical. AI systems do not fit neatly into existing legal categories. A large language model is not a product in the conventional sense; it is a capability. It does not have a single use case; it has infinitely many. It does not simply break; it fails in subtle, context-dependent ways that may not be apparent until millions of people have been harmed. Regulators trained to govern cars, pharmaceuticals, and financial instruments are being asked to govern something that behaves like none of them.
The European Experiment
The EU AI Act takes a risk-based approach, classifying AI systems into tiers — unacceptable risk (prohibited), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). High-risk systems — those used in healthcare, critical infrastructure, law enforcement, and credit scoring — must be registered in a public database, undergo conformity assessment, and maintain detailed technical documentation. Developers must ensure human oversight mechanisms and demonstrate that their systems meet standards for accuracy, robustness, and cybersecurity.
"Regulation that arrives five years too late is not caution — it is abdication. The question is not whether to regulate AI, but whether we have the institutional capacity to regulate it well."
— Prof. Priya Sharma, Oxford Internet Institute

The Act's critics — largely from the technology industry and some academic researchers — argue that it conflates risk with capability, that its definitions are too narrow to capture AI's evolving nature, and that its compliance costs disproportionately burden smaller companies and research institutions. Critics from civil society take the opposite view: that the Act's exemptions are too broad, that its enforcement mechanisms are too weak, and that foundation models — the large-scale AI systems from which hundreds of derivative products are built — were added as an afterthought rather than treated as the central regulatory challenge they represent.
Both critiques contain truth. The AI Act is an imperfect instrument for an imperfect moment. But imperfect governance is not the same as failed governance. The Act's existence forces companies to document their systems, test for foreseeable harms, and maintain oversight mechanisms they would otherwise ignore. These are not small achievements.
The United States: Fragmentation by Default
In contrast, the United States has taken what might charitably be called a federalist approach to AI governance — and what critics call a governance vacuum. In the absence of comprehensive federal legislation, individual states have moved aggressively: California's AB 2013 requires developers of generative AI systems to publicly disclose information about the data used to train them. Colorado has regulated AI in insurance. Texas and Illinois have passed laws governing automated hiring tools. The result is a patchwork that creates compliance complexity for companies without providing coherent protection for citizens.
The Biden-era AI Executive Order established significant process requirements for federal agencies and companies providing AI to the government, but executive orders are vulnerable to reversal — as subsequent events have demonstrated. The legislative branch has struggled to develop durable AI policy in an environment of deep partisan polarisation and rapid technological change that outpaces the traditional legislative cycle.
Asia's Divergent Paths
Asia presents a study in contrasts. China has moved rapidly and comprehensively to regulate specific AI applications — deepfakes, recommendation algorithms, and generative AI services — through targeted regulations that impose specific obligations on providers while maintaining broad state authority over AI development as a strategic national capability. The result is a regulatory system that protects certain citizen interests while simultaneously enabling surveillance at a scale unprecedented in history.
Japan has taken a markedly different approach. The Japanese government's AI Governance Guidelines are advisory rather than mandatory, emphasising co-creation between government, industry, and civil society over top-down prescription. Japan's model reflects its broader regulatory philosophy: preference for industry self-governance, collaborative standard-setting, and incremental adjustment over transformative intervention. Whether this approach will prove adequate to the challenges ahead is a question this magazine will continue to investigate.
What Effective Governance Requires
Effective AI governance requires three things that are currently in short supply: technical expertise in regulatory bodies, international coordination to prevent regulatory arbitrage, and mechanisms to keep pace with technology that evolves faster than any legislative process can accommodate.
On expertise: the gap between those who understand AI systems deeply and those who govern them is vast. Regulatory bodies need the ability to hire and retain technical talent, which means either competing with industry on compensation or offering mission-driven work compelling enough to attract the best people despite the pay gap. Several proposals for standing AI regulatory agencies — modelled on the FDA or the NRC — have been advanced in both the US and UK. None has succeeded.
On coordination: AI is a global technology governed by national laws. A company can train a model in one jurisdiction, fine-tune it in a second, deploy it in a third, and serve users in a fourth. Without international frameworks for regulatory recognition and enforcement cooperation, even the strongest national rules can be circumvented. The G7's Hiroshima AI Process and the OECD's AI Policy Observatory are steps in the right direction — but they remain voluntary, advisory, and chronically underfunded.
The governance gap is real. But it is not inevitable. The question before governments, companies, civil society, and citizens is whether we have the collective will — and the institutional imagination — to close it before the costs become irreversible.