Meta to Eliminate Third-Party Fact-Checking
Fact-checkers are being phased out at Meta.
“We will end the current third-party fact-checking program in the United States and instead begin moving to a Community Notes program,” announced Joel Kaplan, Meta’s Chief Global Affairs Officer, in a company blog post on Tuesday.
Kaplan added that Meta would address the “mission creep” that had made its platform rules too restrictive and too aggressively enforced.
“We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate,” he wrote. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”
Meta will also modify its automated systems for detecting policy violations. “[T]his has resulted in too many mistakes and too much content being censored that shouldn’t have been,” Kaplan wrote.
These systems will prioritize illegal and severe violations, such as terrorism, child exploitation, drugs, fraud, and scams. Less severe violations will require user reporting before action is taken.
Furthermore, Meta is raising the bar for content removal, requiring multiple reviewers to agree before content is taken down, and will show more civic content (posts about elections, politics, and social issues) to users who want to see it.
Censorship Tool
Kaplan explained that the 2016 launch of the independent fact-checking program aimed to avoid Meta becoming the arbiter of truth.
“The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read,” he wrote.
“That’s not the way things played out, especially in the United States,” he continued. “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how.”
“Over time, we ended up with too much content being fact-checked that people would understand to be legitimate political speech and debate,” he noted. “Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.”
David Inserra, a fellow at the Cato Institute who previously served on a Facebook content policy team, criticized the selection bias among the program’s fact-checkers. “The only people who joined to be fact-checkers wanted to moderate content,” he told TechNewsWorld. “People who wanted users to make their own decisions about content didn’t become fact-checkers.”
“My experience with the effectiveness of Facebook’s fact-checking was pretty mixed overall,” added Darian Shimy, CEO of FutureFund.
“It added a layer of accountability, but… I found it was too slow and inconsistent to keep up with…viral misinformation,” he told TechNewsWorld. “Most people felt that relying on third-party fact-checkers created a perception of bias.”
‘Not a Victory for Free Speech’
Irina Raicu, director of the internet ethics program at Santa Clara University, noted that disinformation persisted on Facebook despite the fact-checking program.
“Part of the problem was the automation of content moderation,” she told TechNewsWorld. “The algorithmic tools…missed the nuances… And the problem was even more widespread in posts in languages other than English.”
“With billions of pieces of content posted daily, it was…impossible for human fact-checkers to keep up,” added Paul Benigeri, co-founder and CEO of Archive.
“Fact-checking felt more like a PR move,” he told TechNewsWorld. “Sometimes it worked, but it never…caught the full volume of misleading posts.”
Tal-Or Cohen Montemayor, founder of CyberWell, questioned Meta’s decision.
“While the previous fact-checking system…[was] ineffective…the answer cannot be less accountability…from the platforms,” she told TechNewsWorld.
“This is not a victory for free speech,” she declared. “It’s an exchange of human bias…for human bias at scale through Community Notes…[We need] legal requirements…that enforce social media reform and transparency.”
Flawed Community Solution
Cody Buntain, an assistant professor at the University of Maryland, commented on Community Notes, the crowdsourced system Meta plans to use instead. “The community-based approach…deals partially with the scale issue…. It allows many more people to engage…and add context.”
“The problem is that community notes…is generally not fast enough and gets…overwhelmed with new major events,” he explained.
“We saw this…after the attacks in Israel…in October of 2023,” he continued. “…Twitter…got swamped…with misinformation…. Community notes aren’t…set up to deal with those issues.”
“I’ve never been a fan of community notes,” added Karen Kovacs North, a clinical professor at the USC Annenberg School for Communication and Journalism.
“The type of people…willing to put notes on something are usually polarized…,” she told TechNewsWorld. “The middle-of-the-roaders don’t…put their comments…on a story.”
Currying Trump’s Favor
Vincent Raynauld, an assistant professor at Emerson College, noted that community moderation, though appealing, has flaws. “Even though the content might be flagged…it is still available…,” he told TechNewsWorld.
Along with Kaplan’s announcement, Meta released a video of CEO Mark Zuckerberg praising the changes. “We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression,” he said.
“Zuckerberg’s announcement has nothing to do with…better platforms and everything to do with currying favor with Donald Trump,” asserted Dan Kennedy, a journalism professor at Northeastern University.
“There was a time when Zuckerberg cared about…dangerous misinformation…,” he told TechNewsWorld. “Now Trump is returning to office…so Zuckerberg is just getting with the program.”
Musk as Trend Setter
Damian Rollison, director of marketing for SOCi, highlighted an irony in Meta’s move. “No one predicted Elon Musk’s chaotic takeover of Twitter would become a trend…,” he told TechNewsWorld.
“Musk established a standard…for the loosening of online content moderation, one that Meta has now embraced in advance of the incoming Trump administration,” he said. He predicted the shift would bring more political speech and more controversial posts to Facebook and Instagram. “This change may make the platform less attractive to advertisers,” he added.