Great misinformation clean-up act

CALIFORNIA (Axios): Social media giants have taken a number of steps to clear misinformation off their platforms, but those efforts aren’t likely to appease furious lawmakers in both parties, Axios’ Ashley Gold and Margaret Harding McGill report.

What’s happening: When they testify virtually before House lawmakers on Thursday, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey will point to recent company policy changes to argue they’re doing what they can to stem the tide of misinformation and extremism online.

Yes, but: Policy changes are not the same thing as effective results.

“The performance that they’ve shown us to date is largely, much of it, unacceptable,” Rep. Jan Schakowsky, who chairs the House Energy and Commerce consumer protection panel, said at an event Monday. “We are moving ahead with legislation and with regulation …. It’s happening.”

Flashback: Democratic lawmakers have long been angry about misinformation on social platforms and have previously questioned the CEOs on the problem.

The anger peaked again after pro-Trump insurrectionists stormed the U.S. Capitol on Jan. 6, with lawmakers pointing to extremists organizing in Facebook groups, documenting their actions on Instagram Live and following former President Trump's tweets calling on supporters to go to the Capitol.

Shortly after, Twitter permanently suspended Trump, while Facebook and YouTube suspended his accounts until further notice. Trump's appeal of the Facebook suspension is currently before the company's Oversight Board.

Conservative lawmakers have argued that the platforms' decisions to suspend Trump and groups that support him are examples of censorship and political bias.

Facebook outlined its work to deter misinformation in an op-ed Monday, noting that warning screens placed on false posts deter people from clicking through to them 95% of the time.

This month, Facebook expanded its restrictions on recommending civic and political groups to users worldwide, having previously imposed those limits only in the U.S.

Other changes include penalizing groups that break Facebook's rules by recommending them less often and warning users when they're about to join a group that violates Facebook's standards.

In February, the company announced a crackdown on pandemic misinformation, saying it would bar the posting of debunked claims about vaccines.

Twitter has suspended more than 150,000 accounts for sharing QAnon content since the Capitol attack, a spokesperson told Axios.

The company also announced this month it will label tweets with potentially misleading information about COVID-19 vaccines, and introduce a strike system that can lead to permanent account suspension.

Twitter is revisiting its policies on politicians and government officials, seeking public input on whether world leaders should be subject to the same rules as everyone else.

YouTube said this month that it has taken down more than 30,000 videos over the last six months for making misleading or false claims about COVID-19 vaccines.

The other side: Facebook could have prevented an estimated 10.1 billion views of pages sharing misinformation had it implemented certain algorithm and moderation policies in March 2020, according to a new study from the progressive nonprofit Avaaz.

Facebook disputes the findings, saying Avaaz used flawed methodology.