On Monday, Facebook, Alphabet Inc.’s YouTube, and Twitter warned users that more content and videos could be removed in error by AI. Social media companies are emptying their offices and relying on artificial intelligence, or AI, to take down content that might violate their policies.
However, the software is not always as accurate as a human reviewer, and “turnaround times for appeals against these decisions may be slower,” Google said. The shift has already led to posts and videos being taken down on Facebook, Twitter, and YouTube, and has, of course, prompted conspiracy theorists to speculate.
Facebook drew public criticism last week when it declined to send its content moderators home, since it lacked secure technology for them to moderate remotely. Now, as it sends those same employees home indefinitely, with pay, it has chosen to rely on automated tools. These tools identify offensive material by analyzing digital clues from previous takedowns, an approach with clear limitations.
Twitter said that it, too, would begin switching to automated tools as its employees practice social distancing. Citing accuracy concerns, however, it will not ban users based on automated takedowns of posts alone.
Google is looking to implement human review of automated policy decisions but said that this process would be slow as well, and that phone support would be limited. Its content rules cover ad campaigns, new apps uploaded to the Google Play Store, and business reviews on Google Maps.
“Some users, advertisers, developers, and publishers may experience delays in some support response times for non-critical services, which will now be supported primarily through our chat, email, and self-service channels,” Google said in a blog post.
The content review teams for Facebook and Google span several countries, including India, Ireland, Singapore, and the United States, all of which have been affected by the virus.