Instagram’s New Fact-Checking Tool May Have Limited Impact on Disinformation

Researchers worry that a new feature giving Instagram users the power to flag false news on the platform won’t do much to head off efforts to use disinformation to sow political discord in 2020.

The role of Instagram in spreading political disinformation took centre stage in a pair of Senate reports in December, which highlighted how Russian state operatives used fake accounts on the platform, masquerading as members of activist groups like Black Lives Matter during and well after the 2016 election. Researchers found that some Instagram posts by Russian trolls generated more than twice as much “engagement” among users as they did on either Facebook or Twitter.

While Instagram and its parent company, Facebook, have cracked down on the kinds of coordinated campaigns launched by Russia, Instagram still serves as a potent source of memes and images laden with misinformation, especially for younger voters.

“Even though we don’t talk about it as much as Twitter and YouTube, it could potentially sway elections, on the local level especially,” says Joan Donovan, director of the Technology and Social Change Research Project at Harvard University’s Shorenstein Center. “Instagram is where a lot of younger audiences are, so the threat isn’t necessarily about swaying someone from one candidate to another but what kind of wedge issues are going to be impacted by posts on Instagram.”

Donovan points to gay rights and immigration issues as political topics that gain traction on the platform.

“There’s definitely a concern that there’s many more young people on Instagram using it as a news source, and that those groups could be targeted by disinformation.”

Yet it wasn’t until April that Instagram began a pilot to proactively send content to US fact-checking partners. Facebook launched its fact-checking initiative in 2016, and CEO Mark Zuckerberg has praised the program as a powerful tool against false news. While Instagram’s use of fact-checkers is still in a testing phase in the United States, the company is hoping to fast-track results with a new tool it released last week that allows US users to flag a post as “false information.” A flag does not guarantee a post will be seen by a fact-checker, but it is weighed alongside other factors when the company’s algorithms decide whether to select the content for a fact-check review, and the reports are meant to make the artificial intelligence smarter at finding similar content next time.

Content that a fact-checking partner determines to be “false” will be removed from hashtag search results and from Explore, a page that surfaces new content to Instagram users. (Facebook fact-checking partners in other countries can still access Instagram content, but the less aggressive version of the program has been criticised by at least one British partner as ineffectual.)

It’s hard to say how much even that modest change will help in removing false information from the image-sharing platform.

Instagram declined to share how much content is reviewed by fact-checkers or removed from hashtag search results and the Explore page as a result of the process. Researchers have long cautioned against treating flagging tools as a one-size-fits-all solution to content moderation, arguing that they not only put the burden on users but also ignore an unsavoury truth many platforms turn a blind eye to: Extremist content thrives because there’s an audience for it.

“When it comes to political disinformation or extremist content, there are enormous communities [on Instagram] that are existing in plain sight,” says Cristina López G., an extremism researcher and former deputy director for extremism at Media Matters for America. López says these communities exist through hashtags and networks of individual accounts.

A search by The Washington Post two days after the platform announced its new feature found that the hashtag #voterfraud surfaced a number of memes that have been independently debunked as false by organisations Facebook uses in its fact-checking partner program. One post that turns up on the first page of results, a meme from November 2018, repeats the false claim that Democratic billionaire donor George Soros owns the voting machine company Smartmatic. Another post claims that 90,000 illegal immigrants voted in the midterms, which is also false.

If fact-checking partners had rated those posts “false,” Instagram would have removed them from the #voterfraud results; that they still show up on the hashtag page means Instagram’s fact-checkers haven’t found them yet. Instagram spokeswoman Stephanie Otway says that’s where the new flagging tool can help.

“The more reports there are, the more signals we have to determine the pervasiveness of fake news on Instagram,” Otway explains.

When asked if Instagram would institute proactive bans for election misinformation, similar to its attempts to de-emphasise anti-vaxxer misinformation this spring, Otway says the service would block hashtags “designed to prevent or deter people from voting in line with our voter suppression policies” (policies it shares with Facebook). Furthermore, any tag associated with “a certain amount of violating content” is automatically restricted from search until that number “drops back down.” It’s unclear how much disinformation is required for a hashtag to warrant such a penalty, but Otway cited hashtags removed for being misused to share violating content, such as nudity, as an example.

Otway says Instagram largely shares its policies with Facebook and works to best adapt programs like the fact-checking pilot to the specific needs of its platform. It also shares Facebook’s concerns about misinformation.

“In general, our misinformation efforts are focused on keeping our elections safe,” says Otway.

Instagram isn’t alone in trying to better understand how political misinformation spreads on its platform.

There isn’t much concrete research on how misinformation and disinformation spread on Instagram beyond the 2018 Senate reports. Some high-follower accounts, such as @the_typical_liberal, a meme account described by The Atlantic’s Taylor Lorenz as a popular source of conservative information for teens, are set to private. So even if researchers like López wanted to monitor their content, there’s no guarantee they’d have access to it. Compared with Twitter or even Facebook, Instagram provides researchers with extremely limited access to its internal data.

The company is exploring other approaches, such as computer vision technology that would help detect text overlaid on images, but those seeking to spread fake news or political propaganda still have other tricks for avoiding detection by Instagram.

Accounts seeking to spread misinformation could easily omit hashtags or certain text from their post captions, says López. Donovan points out there have been cases of sites paying popular meme accounts to share their content without disclosing their advertising partnership, something that could also prove dangerous in 2020.

© The Washington Post 2019
