This week the European Data Protection Board (EDPB) adopted new Recommendations concerning the transfer of personal data to countries outside the European Economic Area. The EDPB, established by the General Data Protection Regulation (GDPR), is the lead agency in the European Union on data protection. It is composed of representatives of the national data protection authorities and the European Data Protection Supervisor. The Recommendations follow the Schrems II decision earlier this year and could significantly influence the development of AI systems.
In announcing the Recommendations, EDPB Chair Andrea Jelinek said: “Our goal is to enable lawful transfers of personal data to third countries while guaranteeing that the data transferred is afforded a level of protection essentially equivalent to that guaranteed within the European Economic Area.” Dr. Jelinek added: “The implications of the Schrems II judgment extend to all transfers to third countries. Therefore, there are no quick fixes, nor a one-size-fits-all solution for all transfers, as this would be ignoring the wide diversity of situations data exporters face.”
The EDPB Recommendations require data controllers and data processors to adopt supplementary measures to ensure “an essentially equivalent level of protection” for the data they transfer to third countries. The EDPB advises that “data exporters must proceed with due diligence and document their process thoroughly, as they will be held accountable to the decisions they take on that basis, in line with the GDPR principle of accountability.” And the EDPB warns that “data exporters should know that it may not be possible to implement sufficient supplementary measures in every case.”
In the first issue of the CAIDP Update, we reported on the Schrems II judgment and said that the privacy decision would have global consequences. In a subsequent article for the European Law Journal, we urged the United States Congress to update federal privacy law, to establish a data protection agency, and to ratify the Council of Europe Privacy Convention.
The key point is this: in the realm of privacy and AI there is a significant difference between systems that rely on the collection and use of personal data and those that do not. For example, AI systems that track climate change or progress toward sustainable development goals do not generally require personal data, whereas AI systems for criminal justice assessments, hiring determinations, and facial surveillance do implicate data protection and privacy. This dichotomy is also reflected in “Data Free Flow with Trust (DFFT),” the proposal of former Japanese Prime Minister Shinzo Abe, backed by the OECD, to enable the free flow of anonymized data, aggregate data, and industrial data, but to ensure strong controls and safeguards for personal data. This distinction is critical to understanding the relationship between AI and privacy.
And the Recommendations of the European Data Protection Board are a further indication that “trustworthy AI,” “human-centric AI,” and AI that advances social benefits will be built on strong data protection rules.
Marc Rotenberg, Director
Center for AI and Digital Policy at the Michael Dukakis Institute
The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.