
EDPB Opinion on AI Models and Personal Data: What Businesses Need to Know

The European Data Protection Board (EDPB) has released an opinion addressing key data protection concerns in AI model development and deployment. Here’s what businesses need to understand about using personal data in AI models.

Key Takeaways from the EDPB Opinion

  • Personal data in AI models isn’t limited to training data – even unintentional retention of data in model parameters counts as personal data processing

  • AI models trained on personal data cannot be considered anonymous in all cases – anonymity claims must be assessed case by case by the competent Data Protection Authority

  • Using legitimate interest as a legal basis requires documented proof of three elements: a legitimate interest, necessity of the processing, and a balancing test

  • DPIAs are required when AI model processing is likely to result in a high risk to individuals’ rights and freedoms

  • Unlawful processing during model development may affect whether the model can lawfully continue to be used – and businesses deploying someone else’s model must verify that it was developed lawfully.

Understanding AI Model Development and Personal Data

What Counts as Personal Data in AI Models?

Personal data in AI models extends beyond just training data. Even when models aren’t designed to process personal information, they may retain personal data in their parameters. This “absorbed” data still falls under GDPR protection.

The Anonymity Challenge

Many businesses claim their AI models are anonymous. However, the EDPB clarifies that models trained on personal data cannot automatically be considered anonymous. For a model to qualify, the likelihood of two outcomes must be insignificant:

  1. Direct extraction of personal data used in training
  2. Obtaining personal data from queries, whether intentional or not

Documentation Requirements

To prove anonymity, businesses must maintain comprehensive documentation including:

  • Data Protection Impact Assessments (DPIAs)
  • DPO advice and feedback
  • Technical measures for reducing identification risks
  • Lifecycle protection measures
  • Proof of resistance to re-identification attempts

Using Legitimate Interest as Legal Basis

When relying on legitimate interest for AI model development or deployment, businesses must demonstrate:

1. Legitimate Interest Exists

  • The interest must be lawful
  • It must be clearly articulated
  • It must be real and present, not speculative

2. Processing Necessity

  • Show why processing is needed for the stated purpose
  • Prove no less intrusive alternatives exist

3. Balancing Test

  • Evaluate the impact on data subjects’ rights
  • Consider reasonable expectations
  • Document safeguards and mitigating measures

Implications of Unlawful Development

The EDPB addresses three scenarios for AI models developed with unlawfully processed personal data:

  1. Same controller continues using the model
  2. Different controller uses the model
  3. Model is anonymized before deployment

Important Considerations

  • Supervisory authorities can order data erasure
  • Using another company’s model requires due diligence
  • Anonymization doesn’t automatically solve prior unlawful processing
  • Documentation requirements apply throughout the model lifecycle

Best Practices for Businesses

  1. Document Everything
    • Maintain detailed records of processing activities
    • Keep technical documentation of anonymization measures
    • Record all legitimate interest assessments
  2. Conduct Regular Assessments
    • Perform DPIAs for high-risk processing
    • Review legitimate interest balancing tests
    • Evaluate model anonymity claims
  3. Implement Safeguards
    • Technical measures to prevent re-identification
    • Organizational controls throughout the AI lifecycle
    • Regular monitoring and updates

Conclusion

The EDPB opinion makes clear that using personal data in AI models requires careful consideration and robust documentation. Businesses cannot simply claim anonymity or rely on legitimate interest without substantial proof. Even when using third-party models, companies must ensure compliance with data protection requirements.

Noa Kahalon
COO, hoggo

Noa holds CIPM and CIPP/E certifications and is a Fellow of Information Privacy (FIP) with the IAPP. She has worked in marketing, project management, operations, and law. She is co-founder and COO of hoggo, an AI-driven digital governance platform that enables legal and compliance teams to connect, monitor, and automate digital governance across all company workflows.