YouTube removed more than twice as many videos during Q2 as it did during the first quarter, the Google-owned company said on Tuesday, as it shifted away from human moderators and towards artificial intelligence tools to spot videos that break its rules.
Between April and June, YouTube removed 11.4 million videos, compared to the 5.6 million videos it removed during the first three months of the year. Back in March, YouTube warned creators of a potential surge in Q2 removals because it had to rely less on human moderators, who were sent home due to the coronavirus pandemic; that prediction ended up being on the money.
“When reckoning with greatly reduced human review capacity due to COVID-19, we were forced to make a choice between potential under-enforcement or potential over-enforcement,” YouTube said in its Q2 Transparency Report. “Because responsibility is our top priority, we chose the latter — using technology to help with some of the work normally done by reviewers.”
YouTube’s “over-enforcement” also meant the video giant took down plenty of videos that didn’t actually violate its policies. That led to a jump in appeals, from 166,000 in Q1 to 325,000 in Q2, which ultimately resulted in 161,000 videos being reinstated, about four times as many as in the first quarter.
Removals doubled for videos violating YouTube’s “child safety” rules and for videos containing “nudity and graphic” content; removals for “violent extremism” jumped from about 260,000 in Q1 to 921,000 in Q2.
You can read the full report here.
YouTube’s report comes soon after Facebook shared its own report on moderation policies, which showed the social network removed 22.5 million posts for violating its hate speech rules in Q2.