Algorithmic recommendations are part of our daily lives. TikTok, Tinder, Deliveroo, Uber, Expedia, Booking.com, Spotify, YouTube, Google, Amazon, LinkedIn, Facebook, Instagram: all of them feed their algorithms with hundreds of billions of our interactions. We are two billion humans training them to influence us.
But who audits these algorithms? Who checks that they do not cheat us, that they respect the law — competition, consumer protection, labor, and privacy rules — and that they do not discriminate against us?
After illustrating the main families of rights that are violated and the evolution of the European regulatory framework, we highlight the new challenges that auditing these algorithms poses for the authorities in charge of their supervision and regulation, and even for the companies producing user-facing AI themselves. We stress the role played by artificial intelligence and the new forms of manipulation it enables, as well as the difficulty of auditing algorithms that are both personalized and multi-dimensional. Finally, we introduce the notion of black-box auditing, which seems to us a key element for regaining transparency.
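To make the notion of black-box auditing concrete, the following is a minimal sketch, not taken from the article: the audited system is treated as an opaque function, and it is probed with paired synthetic profiles that differ only in one attribute. The recommender, the attribute names, and the bias are all hypothetical, invented for illustration.

```python
import random

def opaque_recommender(profile):
    """Hypothetical stand-in for a real system we cannot inspect.
    Deliberately biased: it quotes a higher price for 'premium' devices."""
    base_price = 100.0
    if profile["device"] == "premium":
        return base_price * 1.15  # hidden surcharge the audit should expose
    return base_price

def paired_probe_audit(system, profiles, attribute, values):
    """Query the black box with profile pairs identical except for
    `attribute`, and return the mean output gap between the two values."""
    gaps = []
    for p in profiles:
        a = dict(p, **{attribute: values[0]})
        b = dict(p, **{attribute: values[1]})
        gaps.append(system(b) - system(a))
    return sum(gaps) / len(gaps)

random.seed(0)
profiles = [{"age": random.randint(18, 70), "city": c}
            for c in ["Paris", "Lyon", "Lille", "Nice"]]
gap = paired_probe_audit(opaque_recommender, profiles,
                         "device", ["standard", "premium"])
print(f"mean price gap standard -> premium: {gap:.2f}")
```

A real audit faces the difficulties the article names: personalization means each probe sees a different model state, and multi-dimensional outputs (rankings, prices, exposure) cannot be compared with a single scalar gap.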