Chapter 1 : AI Literacy
The legal and policy world can't settle on a definition, but throughout this book we will always be looking through the lens of a privacy program veteran: someone who understands the existing risks technology poses to human rights, how data is monetized, and how that monetized data is now fueling magical products powered by innovation in LLMs.
In 1936, Alan Turing introduced groundbreaking ideas in On Computable Numbers, including the concept of the Turing Machine—a theoretical model capable of executing any computable task. This laid the groundwork for modern software.
By 1950, Turing proposed the Turing Test in his paper Computing Machinery and Intelligence, a method to determine if a machine could exhibit human-like intelligence by fooling a judge in a text-based conversation.
In 1956, the Dartmouth Summer Research Project on Artificial Intelligence brought together pioneers from diverse fields who shared a bold vision: that thinking wasn't exclusive to humans. Their discussions laid the foundation for AI's development and future innovations.
McKinsey Report: Up to 30-40% of work done globally could be automated by AI by 2030.
After all, the rest of this book is about AI, and defining it can be especially tricky when lawyers are involved. Here are two prominent legal definitions:
EU AI Act:
The EU AI Act definition: 'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
GDPR:
GDPR Definition: The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.
In traditional programming, you hardcoded rules; now, we feed data to models and let them learn, memorize, and reuse patterns found in their training data. With ML pipelines, data is the backbone of the AI system rather than the hardcoded software of the past. These systems sift through massive amounts of data so that algorithms can identify patterns and improve themselves: the more data they can devour, the better.
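The contrast above can be sketched in a few lines of Python. This is a toy spam filter, not a real system: the word lists and training examples are invented for illustration. The first function is the "traditional" approach, where a human writes the rule; the second derives its rule entirely from labeled data.

```python
# Traditional programming: a human hardcodes the rule explicitly.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower()

# ML-style approach: the "rule" is learned from labeled training data.
# (Toy keyword-frequency model; illustrative only, not a real classifier.)
def train_keyword_weights(examples: list[tuple[str, bool]]) -> dict[str, float]:
    counts: dict[str, list[int]] = {}  # word -> [spam_count, ham_count]
    for text, is_spam in examples:
        for word in set(text.lower().split()):
            spam, ham = counts.get(word, [0, 0])
            counts[word] = [spam + is_spam, ham + (not is_spam)]
    # A word's weight reflects how much more often it appears in spam than ham.
    return {word: spam - ham for word, (spam, ham) in counts.items()}

def learned_is_spam(weights: dict[str, float], message: str) -> bool:
    score = sum(weights.get(word, 0) for word in message.lower().split())
    return score > 0

# The "program" here is the training data, not handwritten rules.
training = [
    ("claim your free prize now", True),
    ("free gift card winner", True),
    ("meeting agenda for monday", False),
    ("lunch at noon", False),
]
weights = train_keyword_weights(training)
print(learned_is_spam(weights, "free prize inside"))  # True: pattern came from data
```

Feed the learned model more labeled examples and its weights shift accordingly; no one ever edits an `if` statement, which is exactly the shift from hardcoded software to data-driven systems.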
These threats aren’t speculative; AI is already disrupting industries today. But what happens in 10 years if we create intelligence far greater than humans? History tells us that when an apex intelligence confronts a lesser one, the outcome isn’t great for the latter—just ask any species in a zoo or a meat processing facility.
To prevent dystopian AI scenarios, experts from legal, data science, cybersecurity, psychology, and even science fiction backgrounds are working on solutions. Looking beyond the fictional yet compelling framework of Isaac Asimov's Three Laws of Robotics, real-world strategies are needed to address modern AI's complexities.
✅ AI systems must not harm humans or allow harm through inaction.
✅ AI systems must obey humans, unless it conflicts with the First Law.
✅ AI systems must protect themselves, unless it conflicts with the First or Second Law.
✅ AI systems must prioritize humanity's safety above all else.
IBM’s definition of AI alignment: Alignment is the process of encoding human values and goals into large language models to make them as helpful, safe, and reliable as possible. Enterprises use alignment to ensure AI models follow business rules and policies.
AI alignment ensures that advanced systems act according to human values and intentions. As AI systems become more capable, they also become harder to control, creating risks in areas like healthcare, finance, and content moderation. Misaligned AI can perpetuate biases, make unsafe decisions, or prioritize harmful objectives (like maximizing engagement at any cost). Alignment research focuses on making AI systems Robust, Interpretable, Controllable, and Ethical to keep them on track.
But let’s leave the legal definitions behind for a moment and take a look at the companies building state-of-the-art AI products. We use their technology in our daily operations, entrusting them with our customers’ sensitive information and our own commercial data. By choosing them, we also become responsible for their actions.
Objective: Understand the AI industry’s dominant narratives—and rewrite them.
Instructions:
Choose five major AI providers you use in business or personal life.
Find their mission/vision statements.
Rewrite them with a satirical twist, exposing potential risks.
Examples:
🔵 Optimistic "AI Fundamentalist" Lens (Original)
OpenAI: Our mission is to ensure that artificial general intelligence (AGI)—AI systems that are generally smarter than humans—benefits all of humanity.
Google: We believe that AI is a foundational and transformational technology that will provide compelling and helpful benefits to people and society.
NVIDIA: NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming industries and profoundly impacting society.
🔴 Nihilist "AI Apocalypse" Lens (Satirical Take)
OpenAI: Our mission is to ensure that AGI enslaves all of humanity.
Google: We believe AI will provide catastrophic and detrimental risks to society, inspiring malevolence at every turn.
NVIDIA: We are transforming the world’s largest bio-terrorism industries and profoundly impacting society—for the worse.