Facebook-owned Instagram will now let users in the United States report posts that contain false information.
Given the cottage industry of influencers seemingly always with a camera on hand to capture implausibly perfect moments, you might think that applies to a lot of Instagram’s content. But the feature specifically targets the scourge of parent company Facebook: fake news. In the run-up to the 2020 election, the company is keen to crack down on any channel where misinformation can spread like wildfire, after what happened last time.
Posts flagged by users and then confirmed by Facebook to be false will be removed from areas where people discover new accounts, such as the Explore tab and hashtag searches. In other words, the company won’t delete the posts, but it will stop them from spreading.
“This is an initial step as we work towards a more comprehensive approach to tackling misinformation,” Stephanie Otway, a Facebook company spokesperson, told The Guardian.
You might think Instagram, with its focus on photography rather than text and video, wouldn’t be a particularly effective medium for propagandists, but the jury is still out. Indeed, an independent report commissioned by the Senate Select Committee on Intelligence claims Instagram is “perhaps the most effective platform” for nation-states looking to disseminate material designed to selectively boost or suppress voter turnout.
The research was conducted by New Knowledge and points to engagement levels on Instagram that appear to outstrip those on Facebook. “Our assessment is that Instagram is likely to be a key battleground on an ongoing basis,” the researchers wrote.
The move comes as Facebook steps up its moderation of the platform. Just last month, the company said it would ban more people on Instagram in a bid to tackle problems such as hate speech, bullying and harassment.
Do you think Facebook can effectively police fake news on Instagram? Let us know what you think on Twitter: @TrustedReviews.