šŸŒChapter 2 : AI Governance in the 21st Century

As AI systems grow more capable and are deployed at greater scale, keeping them under control becomes harder. The problem? The better an AI system performs its task, the harder it is to understand what is happening inside the black box.

Many challenges in the 21st century stem from the lack of proper regulation of automated systems. These systems collect data, surveil people's lives, and can produce unfair outcomes, especially in areas like health care, criminal justice, and housing.

Deepfakes and misinformation on social media platforms stoke social unrest. Inadequately governed technologies are contributing to democratic backsliding, growing insecurity, and the erosion of trust in institutions worldwide.

Just as the 1981 Convention 108 on the protection of personal data was a milestone in establishing international norms for data privacy and the handling of personal data, the surge of interest and work around responsible AI principles since 2016 marks a similar turning point.

With major investments pouring into AI, large language models (LLMs), and even one-person unicorn startups, the risks associated with AI, such as autonomous weapons, must be mitigated through governance mechanisms that protect humanity without stalling innovation.

Multilateral mechanisms like the UN Charter and the Universal Declaration of Human Rights could guide international AI governance.

Major AI Governance Frameworks

Currently, the most significant AI governance frameworks to monitor include:

  • EU AI Act Implementation

  • Council of Europe AI Treaty Ratification

  • UNESCO Recommendation on AI Ethics Implementation

  • G7 Hiroshima Process

  • U.S. AI Initiatives (e.g., California’s regulatory efforts)

Link: OECD AI Principles

Key Principles of AI Governance

Main Quest šŸ›”ļø: What should an AI governance framework aim to achieve?

Let’s inspect Microsoft’s AI principles and cross-reference them with the OECD AI Principles. [50 XP]

Principles for Ethical AI:

  • Fairness: Ensuring AI treats everyone equally and does not favor one group over another.

  • Reliability & Safety: Guaranteeing that AI systems function well and do not cause harm.

  • Privacy & Security: Protecting people’s personal information.

  • Transparency: Being open about how AI works and how decisions are made.

  • Inclusiveness: Ensuring AI benefits all individuals, regardless of background or ability.

  • Accountability: Taking responsibility for AI’s impacts and ensuring ethical compliance.

AI Governance Around the World

United States: Biden Administration’s Efforts

In October 2022, the White House released the Blueprint for an AI Bill of Rights, outlining five key principles:

āœ… Safe and effective systems

āœ… Data privacy

āœ… Algorithmic discrimination protections

āœ… Notice and explanation

āœ… Human alternatives, consideration, and fallback

China: New Generation AI Development Plan

China aims to become a global AI leader by 2030, emphasizing ethical AI development through the "Ethical Norms for New Generation Artificial Intelligence," which focuses on human well-being, fairness, and privacy protection. Enforcement mechanisms include regulations requiring AI-generated content to be clearly labeled.

EU's AI Act: The GDPR of AI

The EU AI Act follows a regulatory approach similar to the GDPR. Given the EU’s precedent in privacy law, compliance with the AI Act will likely involve structured enforcement and significant penalties for violations (a worked example follows the list of fine tiers below):

āŒ Up to €35 million or 7% of global annual turnover for prohibited practices.

āŒ Up to €15 million or 3% for other regulatory breaches.

āŒ Up to €7.5 million or 1% for providing incorrect or misleading information.

Why AI Governance Matters: Trust is the New Oil

Trust isn’t just an abstract concept—it’s a tangible asset. A recent Cisco report highlights the rise of ā€œprivacy activeā€ consumers who make purchasing decisions based on a company’s data protection policies.

The Business Case for Trust

āœ… Trust accelerates sales cycles, as customers are more willing to buy from reputable brands.

āœ… Regulatory preparedness reduces last-minute compliance scrambles.

āœ… Even in markets with low privacy awareness, strong governance can be a differentiator—Apple has successfully used privacy as a marketing advantage.

AI Act Compliance: Getting Started

In the Beginning, There Was GDPR

While AI governance presents new challenges, established mechanisms like GDPR offer guiding principles. The EU AI Act explicitly states that existing EU data protection laws apply to AI-related personal data processing.

Roles Under GDPR vs. AI Act

  • GDPR: The company collecting and controlling personal data is the "data controller," while cloud providers or software vendors are "data processors."

  • EU AI Act: The AI system’s developer (e.g., a cloud provider building an AI tool) is the "Provider," holding more responsibility than an organization that simply implements the AI into its own product (the "Deployer").

A company utilizing an AI API must ensure compliance by:

āœ… Assessing what data the system ingests.

āœ… Ensuring fairness and representativeness of training data (see the sketch after this list).

āœ… Establishing governance mechanisms to mitigate bias and risks.
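
As one small illustration of the fairness point above, the following is a minimal sketch in Python of a basic representativeness check: it compares how groups are represented in a training set against a reference distribution. The group labels and reference shares are hypothetical, and a real audit would be far more involved.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares):
    """For each group, report its share in the data minus its reference share.
    Positive values indicate over-representation, negative under-representation."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference_shares.items()
    }

# Hypothetical training records and reference population shares.
training_data = [
    {"age_band": "18-34"}, {"age_band": "18-34"}, {"age_band": "18-34"},
    {"age_band": "35-54"}, {"age_band": "55+"},
]
reference = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

print(representation_gaps(training_data, "age_band", reference))
# ā‰ˆ {'18-34': 0.30, '35-54': -0.20, '55+': -0.10}
```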

The Journey to AI Governance Maturity

AI governance isn’t a one-time compliance checkbox. Companies must build an AI governance program with:

  • Annual KPIs and quarterly milestones.

  • Cross-functional engagement from engineering, compliance, and legal teams.

  • Continuous monitoring of evolving regulations and best practices.

As a compliance leader, you’re the game master of this journey—coordinating teams, embedding ethical controls, and ensuring responsible AI adoption.
