In a recent ruling, a court in Amsterdam ordered Uber to reveal the data it uses as the basis for evaluating drivers. The case established landmark rights for workers in the “gig economy” and points toward critical questions that are likely to arise more frequently as companies move to AI-based techniques.
The drivers objected to the opaque, automated systems (Uber’s Real Time ID and Ola’s Guardian system) that determined how much they earned. In December they sought access to the data that determined their payments. Uber was ordered to provide anonymized customer information but withheld other information sought by Worker Info Exchange, the group representing the drivers.
In a separate ruling, Ola Cabs was ordered to reveal driver performance-related profiles, including the controversial “fraud probability” profile and “earnings profile” it maintains on every driver. The court did not find, as some drivers had alleged, that accounts were terminated solely on the basis of algorithms.
In the report Artificial Intelligence and Democratic Values, CAIDP identified “algorithmic transparency” as one of the key metrics for trustworthy and human-centric AI. As the report explained, “One of the most significant AI policy issues today is Algorithmic Transparency. We take the position that individuals should have the right to access the logic, the factors, and the data that contributed to a decision concerning them.”
Countries that have established algorithmic transparency ranked more highly in the CAIDP Index. The CAIDP report noted that algorithmic transparency is currently established in the GDPR (Article 22) and the modernized Council of Europe Privacy Convention (Article 9).
Center for AI and Digital Policy at the Michael Dukakis Institute
The Center for AI and Digital Policy advises governments on technology policy.