EDPB aims to give clarity on personal data and AI
The European Data Protection Board (EDPB) has issued an opinion on the use of personal data for the development and deployment of AI models.
The guidance follows a request from Ireland’s data watchdog, the Data Protection Commission (DPC).
It examines:
- When and how AI models can be considered anonymous,
- Whether and how legitimate interest can be used as a legal basis for developing or using AI models, and
- What happens if an AI model is developed using personal data that was processed unlawfully.
Anonymous models
The board says that whether an AI model is anonymous should be assessed on a case-by-case basis by the national authorities.
For a model to be anonymous, the guidance states, it should be:
- “Very unlikely” that individuals whose data was used to create the model can be directly or indirectly identified, and
- “Very unlikely” that such personal data can be extracted from the model through queries.
The opinion provides a list of methods to demonstrate anonymity.
Legitimate interest
The opinion also sets out general considerations that national data-protection watchdogs should take into account when assessing whether legitimate interest is an appropriate legal basis for processing personal data to develop and deploy AI models.
The EDPB gives the examples of a conversational agent to assist users, and the use of AI to improve cyber-security.
“These services can be beneficial for individuals and can rely on legitimate interest as a legal basis, but only if the processing is shown to be strictly necessary and the balancing of rights is respected,” it states.
Criteria
The opinion also includes a number of criteria to help authorities assess whether individuals may reasonably expect certain uses of their personal data.
These criteria include:
- Whether or not the personal data was publicly available,
- The nature of the relationship between the individual and the controller,
- The nature of the service,
- The context in which the personal data was collected,
- The source from which the data was collected,
- The potential further uses of the model, and
- Whether individuals are actually aware that their personal data is online.
The guidance also provides a list of examples of mitigating measures that can be taken to limit the negative impact on individuals of the use of their data.
The EDPB also stresses that, when an AI model is developed with unlawfully processed personal data, this could have an impact on the lawfulness of its deployment, unless the model has been duly anonymised.
DPC sought ‘clarity’
Welcoming the opinion, the DPC said that it had made the request with a view to securing Europe-wide regulatory harmonisation and clarity on a number of key questions.
The commission said that the guidance would benefit supervisory authorities across Europe in regulating the responsible development of AI products.
EDPB chair Anu Talus said that AI technologies could bring many opportunities and benefits, but added that the board needed to ensure these innovations were done ethically, safely, and in a way that benefited everyone.
“The EDPB wants to support responsible AI innovation by ensuring personal data are protected and in full respect of the General Data Protection Regulation (GDPR),” she stated.
Gazette Desk