Chapter 1: AI Literacy
How is AI Defined?
The legal and policy world can't settle on a definition, but throughout this book we will always look through the lens of a privacy program veteran: someone who understands the existing risks technology poses to human rights, how data is monetized, and how that monetized data is now fueling magical products powered by innovation in LLMs.
The Foundation of AI
In 1936, Alan Turing introduced groundbreaking ideas in On Computable Numbers, including the concept of the Turing Machine, a theoretical model capable of executing any computable task. This laid the groundwork for modern software.
By 1950, Turing proposed the Turing Test in his paper Computing Machinery and Intelligence, a method to determine if a machine could exhibit human-like intelligence by fooling a judge in a text-based conversation.
These ideas culminated in the 1956 Dartmouth Conference, where the term "Artificial Intelligence" was coined.

AI pioneer John McCarthy admitted in 1973 that he had coined the term "artificial intelligence" as a marketing hook to raise funds for the Dartmouth Conference.
Back in the 1950s, securing research funding required more than just solid science; it needed flair. "Artificial intelligence" sounded cutting-edge and futuristic, capturing the imagination of grant committees and stakeholders. The name was more than a label; it was a calculated move to elevate the field's appeal, ensuring it stood out in a competitive academic landscape.
This meeting brought together pioneers from diverse fields, who shared a bold vision: that thinking wasn't exclusive to humans. Their discussions laid the foundation for AI's development and future innovations.
McKinsey Report: Up to 30–40% of work done globally could be automated by AI by 2030.
So, What Is AI?
After all, the rest of this book is about AI, and defining it can be especially tricky when lawyers are involved. Here are two prominent legal definitions:

EU AI Act:
"AI system" means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
GDPR:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. (Article 22)
In traditional programming, you hardcoded rules; in machine learning, we feed data to models and let them learn, memorize, and reuse patterns found in their training data. With ML pipelines, data is the backbone of AI systems rather than the hand-coded logic of the past. These systems sift through massive amounts of data, and the more data an algorithm can devour, the more patterns it can identify and the better it can perform.
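The shift from hardcoded rules to learned patterns can be sketched in a few lines. The toy spam filter below is a hypothetical illustration, not a production technique; the keywords, the word-frequency scoring scheme, and the 0.5 threshold are all invented for the example.

```python
# Traditional programming: the rule is hardcoded by a developer.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning (toy version): the "rule" is derived from
# labeled training data instead of being written by hand.
def train_keyword_scores(examples: list[tuple[str, bool]]) -> dict[str, float]:
    counts: dict[str, tuple[int, int]] = {}  # word -> (spam_count, total_count)
    for text, label in examples:
        for word in set(text.lower().split()):
            spam, total = counts.get(word, (0, 0))
            counts[word] = (spam + int(label), total + 1)
    # Score each word by how often it appears in spam messages.
    return {w: spam / total for w, (spam, total) in counts.items()}

def is_spam_learned(message: str, scores: dict[str, float]) -> bool:
    words = message.lower().split()
    avg = sum(scores.get(w, 0.5) for w in words) / max(len(words), 1)
    return avg > 0.5

# Hypothetical training data: more (and more varied) examples
# would yield better scores -- the "data is the backbone" point.
training_data = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting notes attached", False),
    ("lunch at noon tomorrow", False),
]
scores = train_keyword_scores(training_data)
print(is_spam_rule_based("free money inside"))      # -> True (fixed rule)
print(is_spam_learned("free prize money", scores))  # -> True (learned from data)
```

The rule-based filter never improves, no matter how many messages it sees; the learned filter gets sharper as its training data grows, which is exactly the trade the field made.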
What Is Responsible AI?
When asked about the risks AI poses to humanity, people often mention discrimination, loss of privacy, misinformation, environmental impact, manipulation, singularity, and misalignment.
These threats aren't speculative; AI is already disrupting industries today. But what happens in 10 years if we create intelligence far greater than our own? History tells us that when an apex intelligence confronts a lesser one, the outcome isn't great for the latter; just ask any species in a zoo or a meat processing facility.
Side Quest: Watch the documentary CODED BIAS, which explores MIT Media Lab researcher Joy Buolamwini's discovery that facial recognition struggles to recognize dark-skinned faces and her fight for algorithmic fairness in U.S. legislation. (20 XP)
Asimov's Three Laws of Robotics (Through the Lens of the EU AI Act)
✅ AI systems must not harm humans or allow harm through inaction.
✅ AI systems must obey humans, unless this conflicts with the First Law.
✅ AI systems must protect themselves, unless this conflicts with the First or Second Law.
Above all, AI systems must prioritize humanity's safety (Asimov's later "Zeroth Law").
These rules, introduced in a 1942 short story, focused on what robots couldn't do rather than what they should do. Asimov's stories highlight the challenges of controlling AI and the unintended consequences that arise. Today, as AI advances rapidly, aligning its behavior with human intentions remains one of the field's greatest challenges.
What Is AI Alignment?
IBM's definition of AI alignment: Alignment is the process of encoding human values and goals into large language models to make them as helpful, safe, and reliable as possible. Enterprises use alignment to ensure AI models follow business rules and policies.
AI alignment ensures that advanced systems act according to human values and intentions. As AI systems become more capable, they also become harder to control, creating risks in areas like healthcare, finance, and content moderation. Misaligned AI can perpetuate biases, make unsafe decisions, or prioritize harmful objectives (like maximizing engagement at any cost). Alignment research focuses on making AI systems Robust, Interpretable, Controllable, and Ethical to keep them on track.
But let's leave the legal definitions behind for a moment and take a look at the companies building state-of-the-art AI products. We use their technology in our daily operations, entrusting them with our customers' sensitive information and our own commercial data. By choosing them, we also become responsible for their actions.
AI Literacy Exercise 1: The Responsible AI Challenge
Objective: Understand the AI industry's dominant narratives, and rewrite them.
Instructions:
1. Choose five major AI providers you use in business or personal life.
2. Find their mission/vision statements.
3. Rewrite them with a satirical twist, exposing potential risks.
Examples:
🔵 Optimistic "AI Fundamentalist" Lens (Original)
OpenAI: Our mission is to ensure that artificial general intelligence (AGI), AI systems that are generally smarter than humans, benefits all of humanity.
Google: We believe that AI is a foundational and transformational technology that will provide compelling and helpful benefits to people and society.
NVIDIA: NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming industries and profoundly impacting society.
🔴 Nihilist "AI Apocalypse" Lens (Satirical Take)
OpenAI: Our mission is to ensure that AGI enslaves all of humanity.
Google: We believe AI will provide catastrophic and detrimental risks to society, inspiring malevolence at every turn.
NVIDIA: We are transforming the world's largest bio-terrorism industries and profoundly impacting society, for the worse.
This playful reinterpretation highlights the unchecked power of AI companies and the immense computational force fueling their multimodal systems.

NVIDIA, for instance, has become the world's most valuable AI hardware company, but the battle for AI dominance is just getting started.