Chapter 3 - Getting Started with AI Act Compliance
Dear reader, this chapter dives into AI Act compliance, explaining how GDPR fits into the picture and how to make sure your AI systems follow your organization's privacy and data protection measures.
Foreword for Chapter 3 by Görkem Çetin, CEO @VerifyWise
"AI and data protection go hand in hand. If you're building AI systems, you can't ignore the rules around personal data. The EU has set clear guidelines with the EU AI Act, and it all started with the GDPR. But following these laws isnโt just about avoiding fines. Itโs also about building trust and doing things the right way.
Over time, the technology and the need for stronger rules evolved. The GDPR became the gold standard for personal data protection, giving individuals more control over their information. Now AI is playing a bigger role in our lives and the EU AI Act extends those protections to ensure AI systems handle personal data securely.
Mert's chapter breaks it all down in simple terms. You'll learn who is responsible for what when it comes to handling personal data. You'll read how to design AI systems that respect privacy from the start. And you'll get practical steps to stay compliant without slowing down innovation.
AI compliance isn't a one-time task. It's rather an ongoing process. With the right approach, you can build ethical AI." - Görkem Çetin (Check him out on LinkedIn)
As we venture into the ever-changing and sometimes unpredictable world of AI, it's essential to remember that we aren't starting from scratch.
In the Beginning, There Was GDPR
But let's rewind even more. Back in 1981, the Council of Europe's Convention 108 laid the groundwork for privacy and data protection rules and principles.
From there, various regulations evolved, shaping how we treat personal data. Flash forward to today, and we've got the AI Act and the GDPR. Still, there are several other significant frameworks we should consider to fill the gaps these EU regulations leave when it comes to handling personal information ethically in AI and data protection compliance.
The EU AI Act clearly ties AI systems that deal with personal data to GDPR compliance FIRST!
Now, here's the key takeaway: if your AI system uses personal data, it falls under GDPR. Full stop. Article 2 of the EU AI Act tells you that GDPR applies if your AI system is involved in personal data processing. Not a suggestion, not a guideline: mandatory.
Understanding GDPR Requirements for AI Systems
Here are the basics to get you covered and make sure your AI doesn't get you into hot water:
1. Roles and Responsibilities
In the world of GDPR, you'll encounter two main roles: data controllers and data processors.
Data Controllers and Processors: In GDPR lingo, the controller (your startup) makes the calls on data use. The processor (hello, cloud vendors) simply carries out the instructions. If something goes sideways, the controller takes the heat.
Data Controller: This is your organization, the one determining what data is collected, how it's processed, and for what purpose. For example, if you're using a cloud service like DigitalOcean to host your app, you (the startup) are the data controller, and DigitalOcean is the processor.
Data Processor: This is the entity that processes data on behalf of the controller. In this case, DigitalOcean hosts your data, but it doesn't control it.
However, when it comes to AI systems, the roles can shift. Under the EU AI Act, the game changes a bit: the system provider (say, your third-party AI platform) bears a bigger share of the responsibility, while your startup, as the "deployer," has its own set of duties to make sure everything's in line.
Example: As a deployer of AI (i.e., the one using an AI Assistant API like the one from OpenAI), your organization may act as a "processor" for some of the processing, while the AI provider takes on more of a "controller" role, with higher compliance obligations this time on their end.
It's essential to understand these shifts in roles and responsibilities, as they affect your data protection strategy and the obligations set out in your DPAs with AI providers.
2. Implement Privacy by Design and by Default Strategies
Consent or Legal Bases: Users must know if their data is being used, and they need to consent, or you need another valid legal basis to process it. No sneaky stuff.
Data Security: Data should be protected. Not just "protected," but properly secured. People expect that.
Monitoring: Inform your users about the ML models they're interacting with, and monitor the system for unauthorized access or vulnerabilities (e.g., LLM attacks, data poisoning).
Data Minimization: Only collect what you actually need. No unnecessary data hoarding.
If possible, anonymize or mask sensitive data before feeding it into the AI model (see the masking sketch after this list).
Transparency: Be upfront with your users about what personal data you're collecting and why, how long you'll keep it, and what rights they have. People respect clarity.
This includes informing them about who is responsible for their data and how to contact you.
Proportionality: Just because you can collect data doesn't mean you should. Don't go overboard. Design your AI system to use only the data necessary to achieve the desired outcome.
Avoid over-collecting data or retaining it for longer than necessary.
Purpose Limitation: Use the data only for the purpose it was collected for, and don't hold on to it longer than necessary.
If you're processing data for training, be clear about that and make sure it's in line with fundamental GDPR principles.
Lawfulness, Fairness, and Transparency: Play by the rules. It's the only way to avoid problems. Your AI use case should not breach any laws, and you should be clear and open about how data is processed.
Respect for Data Subject Rights: Give users control over their data, whether it's correcting it or deleting it. Make sure it's easy for them to do so.
Once you're processing personal data, design workflows that allow users to exercise their rights (e.g., object, correct, or delete their data) within the specified timeframes.
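To make the data minimization point concrete, here's a minimal sketch of masking obvious identifiers before a prompt ever leaves your system. The regexes and placeholder tags are illustrative assumptions; production systems should use a vetted PII-detection library (Microsoft Presidio is one example) rather than hand-rolled patterns.

```python
import re

# Hypothetical helper: the patterns and placeholder tags below are
# assumptions for illustration only. Use a vetted PII-detection library
# in production instead of hand-rolled regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

prompt = "Contact me at jane.doe@example.com or +49 170 1234567."
print(mask_pii(prompt))  # -> "Contact me at [EMAIL] or [PHONE]."
```

The design choice here is to mask at the boundary, before data leaves your control, so downstream AI vendors never see the raw identifiers in the first place.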
3. Data Mapping and Third-Party Risk Management
Know your data. Where is it? Who's using it? How's it being used? It's like tracking your socks. Know where they all are.
It's vital to understand and map the personal data flow in and out of your app. Start by creating an inventory of the personal data you're collecting, where it resides, who has access, and whether sensitive data is included.
As part of your risk management strategy, make sure third-party vendors and subprocessors have appropriate data protection measures in place and are included in your data maps. Use Data Processing Agreements (DPAs) to establish the legal framework for data processing, and monitor compliance throughout the contract.
If you're working with vendors like DeepSeek, Stable Diffusion, or OpenAI, check their data policies. Get that DPA signed, and ensure they follow it. If they mess up, you're the one who looks bad.
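To make data mapping tangible, here's a minimal sketch of a machine-readable data inventory. The field names, vendors, and retention figures are assumptions made up for illustration, not a prescribed schema.

```python
# Illustrative data-flow inventory; field names, vendors, and retention
# periods are assumptions for the example, not a prescribed schema.
DATA_MAP = [
    {
        "dataset": "support_chat_logs",
        "categories": ["name", "email", "message_content"],
        "sensitive": False,
        "stored_at": "EU-hosted Postgres",
        "shared_with": "OpenAI (Assistants API)",
        "dpa_signed": True,
        "retention_days": 90,
    },
]

def vendors_missing_dpa(data_map: list[dict]) -> list[str]:
    """Flag any third-party recipient without a signed DPA."""
    return [row["shared_with"] for row in data_map if not row["dpa_signed"]]

print(vendors_missing_dpa(DATA_MAP))  # -> [] once every DPA is in place
```

Even a simple structure like this lets you script compliance checks, such as flagging vendors without a signed DPA, instead of chasing spreadsheets.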
4. Automated Decision-Making and Explainability
Automated decision-making is a hot topic. If your AI system makes decisions without human intervention, tell your users. Transparency is key here; don't try to pull a fast one.
Article 22 of the GDPR applies to AI systems involved in solely automated decision-making that significantly affects individuals. It requires that individuals have meaningful information about the logic behind decisions made by AI models.
This is where explainable AI (XAI) becomes crucial. While still an emerging field, the ability to explain AI decisions is vital for protecting user rights.
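As a taste of what XAI looks like in practice, here's a minimal sketch using scikit-learn's permutation importance to surface which features drive a model's output. It's one of the simplest techniques, and the toy data is made up for the example; real Article 22 workflows typically need richer, per-decision explanations (SHAP or LIME are common choices).

```python
# Minimal explainability sketch: which features drive the model's decisions?
# Toy data and model; real Article 22 workflows need per-decision explanations.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much the score drops: a large drop
# means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
```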
Moving Beyond GDPR: Integrating AI Act Requirements
While GDPR provides a strong foundation for data protection, the EU AI Act introduces additional requirements that focus on the safe and ethical deployment of AI systems. As you begin to integrate these into your AI system, remember that the principles outlined in GDPR still apply, but now with added complexity due to the AI Act's scope.
Treat the following principles as guidelines during your planning:
Transparency: Be upfront about how your AI models operate, especially in cases where AI decisions affect individuals' lives.
Accountability: Be ready to take responsibility for the AI systems you deploy and the outcomes they produce.
Risk Management: Identifying, assessing, and mitigating AI risks at every lifecycle stage is key. This could involve continuous monitoring of AI models to ensure they comply with regulatory requirements (see the drift-monitoring sketch below).
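For the risk management bullet, here's a hedged sketch of what continuous monitoring can look like: comparing live input statistics against a training-time baseline and alerting on drift. The Kolmogorov-Smirnov test and the alpha threshold are illustrative assumptions, not regulatory requirements.

```python
# Sketch of ongoing input-drift monitoring for a deployed model.
# Threshold and test choice are illustrative, not prescribed by the AI Act.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live feature distribution differs significantly."""
    _, p_value = ks_2samp(baseline, live)
    return p_value < alpha

baseline = np.random.default_rng(0).normal(size=1_000)        # training-time feature
live = np.random.default_rng(1).normal(loc=0.5, size=1_000)   # shifted live inputs

if check_drift(baseline, live):
    print("Input drift detected: review the model and update its risk file.")
```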
The Road Ahead
As you embark on AI Act compliance, remember: this is a journey, not a destination. The principles may seem daunting at first, but think of them as a guiding light through the complexities of AI development. With careful planning, transparency, and accountability, you'll ensure that your AI system remains compliant and trustworthy.