YouTube’s Algorithmic Pedo-Failure
This week, YouTube turned to Facebook and said, “Hold my beer.”
Yesterday, after intense pressure, the platform decided to demonetize channels that promote hate speech and discrimination. And in a terrifying story on Monday, The New York Times reported that YouTube’s recommendation engine surfaces videos of partially clothed children to viewers who’ve watched similar videos or erotic content in the past. These videos may seem innocuous in and of themselves. But consider who is likely to be binge-watching them alongside erotic content, in such volume that the behavior trains YouTube’s algorithm, and one thing becomes clear: YouTube is enabling pedophilia.
YouTube is certainly not the first company to learn the hard way that AI can have harmful unintended consequences. Its parent company Google’s first foray into computer vision is a case study in algorithmic bias. But there’s an important difference between this situation and those that preceded it. In this case, YouTube’s algorithm is working exactly as it is supposed to. It detected a behavior among a specific cohort of people, then exploited that pattern to increase viewership. It did precisely what it was designed to do.
The solution here is therefore not about having the right training data or even about model explainability. Instead, it’s about ensuring that algorithmic models don’t violate moral codes in their ruthless pursuit of business objectives. YouTube’s recommendation engine is optimized for one thing: engagement. It wants users to view as many videos as possible so it can maximize ad revenue. It is not optimized for child safety. I doubt child welfare was even a consideration in the creation of its recommendation system. Obviously, it needs to be.
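To make the distinction concrete, here is a minimal, hypothetical sketch of the difference between ranking purely for engagement and imposing a hard safety constraint before ranking. This is not YouTube’s actual system; the names `Video`, `predicted_watch_time`, and `involves_minor` are invented for illustration only.

```python
# Hypothetical sketch: an engagement-only ranker vs. the same ranker with a
# hard child-safety constraint. All names are illustrative assumptions, not
# YouTube's real API or data model.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    predicted_watch_time: float  # the engagement signal the ranker optimizes
    involves_minor: bool         # the safety signal the ranker should respect


def rank_by_engagement(candidates: list[Video]) -> list[Video]:
    """Engagement-only objective: maximize expected watch time, nothing else."""
    return sorted(candidates, key=lambda v: v.predicted_watch_time, reverse=True)


def rank_with_safety_constraint(candidates: list[Video]) -> list[Video]:
    """Same objective, but videos of children never enter the candidate pool."""
    eligible = [v for v in candidates if not v.involves_minor]
    return rank_by_engagement(eligible)


if __name__ == "__main__":
    pool = [
        Video("a", 12.5, involves_minor=False),
        Video("b", 30.0, involves_minor=True),   # highest engagement, but off-limits
        Video("c", 18.2, involves_minor=False),
    ]
    print([v.video_id for v in rank_by_engagement(pool)])           # ['b', 'c', 'a']
    print([v.video_id for v in rank_with_safety_constraint(pool)])  # ['c', 'a']
```

The point of the sketch is that the constraint is not another weight in the objective to be traded off against watch time; it is a filter applied before the objective is ever computed.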
How YouTube responds to this revelation will be an important lesson for companies in how to resolve ethical issues created by AI when the fix may cut into their business models. Apparently, YouTube’s recommendation engine no longer links these underage videos together. However, YouTube has not taken the ideal preventive measure: removing videos of children from its recommendation system altogether. Since 70% of YouTube’s views come from its recommendations, and advertising dollars come from views, it is loath to do so. But it should do so anyway.
Ethics and values can be highly cultural. Much like societies, companies need to decide on their own moral codes. Those moral codes should be inviolable, even when they impinge on the bottom line.
Yesterday, YouTube decided to take a moral stance by demonetizing channels promoting hate speech. It should take a similar (and universally agreed-upon) moral stance and safeguard children by removing videos of them from its recommendation system. Recommendation engines are meant to exploit behavioral patterns, not children. YouTube, let’s keep it that way.