Artificial intelligence (AI) and automation systems are now used in a wide range of applications, from hiring tools and autonomous vehicles to exam-proctoring systems, credit scoring, and medical diagnostics. These technologies are changing how people work, travel, and access services, but the rules governing them remain dangerously underdeveloped.
As ChatGPT, Claude, Gemini, and countless automation solutions reshape industries, the need for clear regulatory guardrails is becoming more urgent. While governments, corporations, and civil society organizations attempt to respond, their approaches vary widely and are nowhere near a global consensus. As such, one question looms: Who's in charge of the machines?
The Fragmented Regulatory Landscape
European Union
The European Union has taken the lead with the world's first comprehensive AI legislation: the EU AI Act. Finalized in 2024, the Act takes effect in stages from February 2, 2025, through August 2, 2026 and beyond. It classifies AI systems by risk level, from "minimal" to "unacceptable." Under this framework, systems deemed too risky, such as social scoring or real-time biometric surveillance, are banned outright. High-risk applications, such as AI in education, healthcare, or law enforcement, must undergo strict pre-market assessments, transparency obligations, and post-deployment monitoring.
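To make the tiered structure concrete, here is a minimal Python sketch of how a compliance team might model the Act's risk tiers and the duties attached to each. The tier names follow the Act's framework, but the specific use-case mappings, function names, and duty lists below are illustrative assumptions, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict pre-market and post-market duties
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no specific obligations under the Act

# Illustrative, simplified mapping of example use cases to tiers.
# Real classification depends on the Act's annexes and legal analysis.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "real_time_biometric_surveillance": RiskTier.UNACCEPTABLE,
    "exam_scoring_in_education": RiskTier.HIGH,
    "medical_diagnostics": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> list[str]:
    """Return a rough, non-exhaustive list of duties for an example use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited: may not be placed on the EU market"]
    if tier is RiskTier.HIGH:
        return ["pre-market conformity assessment",
                "transparency and documentation",
                "post-deployment monitoring"]
    if tier is RiskTier.LIMITED:
        return ["disclose AI use / label AI-generated content"]
    return ["no specific obligations under the Act"]

if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(case, "->", obligations_for(case))
```

Running the sketch prints each example use case alongside its rough obligations, which is enough to show why classification is the first question any provider faces under the Act.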
Importantly, the EU AI Act doesn't just target European developers. It applies to any company offering AI systems in the EU market, effectively exporting European standards to the world, much as the GDPR did. In response, global tech firms are adjusting their product development pipelines to remain compliant. The EU also mandates the creation of national AI regulatory sandboxes to foster innovation while monitoring safety, a balance that other regions are still struggling to strike.
United States – Reactive, Fragmented, and Lobby-Heavy
The U.S. continues to operate without a comprehensive national AI law. Regulatory efforts have been piecemeal, led largely by agencies such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST). Rather than formal legislation, the federal government has relied on executive orders and voluntary guidance, notably the 2022 Blueprint for an AI Bill of Rights and NIST's AI Risk Management Framework. These frameworks promote fairness, transparency, and accountability, but in practice they lack enforcement power. The newly rebranded Center for AI Standards and Innovation (CAISI) adds a national security focus, covering bio, cyber, and foreign-influence use cases.
This void in federal regulation has pushed individual U.S. states to step in. As of June 2025, states from California to Massachusetts had introduced more than 1,000 AI-related bills, covering everything from facial recognition bans to algorithmic transparency in hiring. The resulting patchwork leaves startups facing compliance chaos, especially those operating across multiple states.
China
China's regulatory strategy is tightly interwoven with state control. The Cyberspace Administration of China (CAC) oversees AI content and behavior, requiring providers of generative AI tools to pre-register with authorities, ensure political alignment, and clearly label AI-generated content. The Chinese government has banned AI systems that generate content contradicting state narratives or spreading "socially harmful" information.
In China, generative AI services must undergo security assessments and refrain from producing outputs that endanger national unity. China's governance model is deeply embedded in its authoritarian framework, leveraging AI to enhance surveillance, regulate expression, and reinforce ideological boundaries. But even in this centralized system, there are signs of caution. The government recently rolled back some advanced autonomous vehicle testing and suspended AI-enabled features during the 2025 national college entrance exam to prevent cheating, demonstrating an awareness of the technology's unintended consequences.
Africa and the Global South
Outside the Global North, regulation varies widely. In Africa, Mauritius, Kenya, Nigeria, and South Africa have launched national AI strategies or stakeholder consultations, although these efforts are still in their formative stages. Nigeria's draft AI policy, for instance, emphasizes inclusion, data sovereignty, and ethical safeguards, but capacity constraints make implementation difficult. Because of this lack of regulatory clarity, many Nigerian AI startups are forced to operate in regulatory grey zones. While this has fueled rapid experimentation, it raises concerns around data privacy, algorithmic bias, and long-term safety.
In Latin America and Southeast Asia, a similar pattern emerges: high-level commitments to ethical AI and digital inclusion but limited regulatory infrastructure or enforcement power. The result is a reliance on international standards or frameworks developed by multilateral organizations.
The Core Challenges
- Pace vs. Policy: Technology moves faster than legislative cycles. By the time a bill is passed, the system it targeted may already be obsolete.
- Opacity of Models: Many AI systems operate as "black boxes," offering little insight into how decisions are made, complicating legal accountability.
- Jurisdictional Complications: AI tools deployed globally may be regulated differently in each country, leading to conflicting obligations.
- Data Ownership and Consent: Questions about who owns, processes, and benefits from AI-generated insights remain unresolved in many jurisdictions.
Why Regulation Matters
Unchecked AI development poses significant risks:
- Discrimination and bias in hiring, lending, and policing decisions
- Displacement of workers without social protections or reskilling frameworks
- Mass surveillance and erosion of civil liberties
- Proliferation of misinformation, deepfakes, and election manipulation
Who Should Regulate the Machines?
No single actor can govern AI alone. Governments bring legal authority. Companies build the tools. Academia provides independent research. Civil society pushes for rights and inclusion. International collaboration through groups like the UN, OECD, and GPAI is essential. New multistakeholder bodies, such as AI Safety Institutes emerging in the U.S., UK, Canada, and Japan, point toward shared governance models. However, power imbalances, geopolitical rivalries, and divergent ethical priorities still complicate this ideal.
Conclusion
The global regulatory landscape for AI and automation is inconsistent, fragmented, and fiercely contested. Europe leads with structure. The U.S. dithers under fragmentation. China enforces control through ideology. Africa and the Global South seek structure but lack resources. And in between lies a growing urgency to get this right.
We are standing at a crossroads: regulate the machines now or risk being governed by them later through bias, misinformation, exclusion, and surveillance. The time to decide who writes the rules for AI isn't in the future. It's now.