Once models are trained to identify potentially violative content, the role of content moderators remains essential throughout the enforcement process. Machine learning identifies potentially violative content at scale and nominates content that may be against our Community Guidelines for review. Content moderators then decide whether the flagged content should be removed.
This collaborative approach helps improve the accuracy of our models over time, as they continuously learn and adapt based on content moderator feedback. It also means our enforcement systems can manage the sheer scale of content uploaded to YouTube (over 500 hours of content every minute) while still digging into the nuances that determine whether a piece of content is violative.
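To make this human-in-the-loop workflow concrete, here is a minimal sketch in Python. The class, threshold, and method names are hypothetical illustrations, not YouTube's actual systems: a classifier nominates uploads for review, and each moderator decision flows back as a labeled example the model can later learn from.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Hypothetical human-in-the-loop moderation pipeline (illustrative only)."""
    nomination_threshold: float = 0.7  # assumed score above which content is nominated
    pending: list = field(default_factory=list)
    training_feedback: list = field(default_factory=list)

    def nominate(self, video_id: str, violation_score: float) -> None:
        # The model flags content at scale; nothing is actioned at this stage.
        if violation_score >= self.nomination_threshold:
            self.pending.append(video_id)

    def record_decision(self, video_id: str, is_violative: bool) -> None:
        # A moderator's decision becomes a labeled example, which is how
        # the models "continuously learn and adapt" over time.
        self.pending.remove(video_id)
        self.training_feedback.append((video_id, is_violative))
```

The design point the sketch captures is the feedback loop: the queue stores moderator outcomes as training data rather than discarding them, so accuracy can improve with each review.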
For example, a speech by Hitler at the Nuremberg rallies with no additional context may violate our hate speech policy. But if the same speech were included in a documentary that decried the actions of the Nazis, it would likely be allowed under our EDSA (Educational, Documentary, Scientific, or Artistic) guidelines. EDSA makes allowances for otherwise violative material when enough context is included, such as in an educational video or historical documentary.
This distinction may be more difficult for a model to recognize, while a content moderator can more easily spot the added context. This is one reason why enforcement is a fundamentally shared responsibility, and it underscores why human judgment will always be an important part of our process. For most categories of potentially violative content on YouTube, a model simply flags content to a content moderator for review before any action is taken.
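The key property of that last step can be sketched as simple gating logic. Again, the names below are hypothetical, not a description of YouTube's actual code: the model's output is only a flag, and removal requires an explicit moderator decision.

```python
from enum import Enum, auto
from typing import Optional

class Action(Enum):
    NO_ACTION = auto()
    QUEUE_FOR_REVIEW = auto()
    REMOVE = auto()

def enforcement_step(model_flagged: bool,
                     moderator_decision: Optional[bool]) -> Action:
    """Illustrative gating: the model alone never removes content."""
    if not model_flagged:
        return Action.NO_ACTION
    if moderator_decision is None:
        # Flagged but not yet reviewed: the only automatic outcome is review.
        return Action.QUEUE_FOR_REVIEW
    return Action.REMOVE if moderator_decision else Action.NO_ACTION
```

Structuring enforcement this way means ambiguous cases, like the documentary framing above, always reach a human before any removal happens.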