But will fines and shaming actually decrease the appetite for fake news? We look at the legal implications and at alternative ways to counter misinformation
Increasingly worried about the impact of misinformation on Germany’s upcoming elections, as well as political polarization over the refugee crisis, the chairman of the country’s ruling party has called for Facebook either to remove flagged content within 24 hours or face fines of over $500,000 for each instance.
“The legislation would require social media companies to set up offices that would respond to complaints from people affected by hateful messages within 24 hours,” Deutsche Welle reports. The government is seeking to write a new law on hate speech and fake news, though it won’t find support from the German media in this endeavor, apparently, as the federal association of newspaper publishers said the social media platforms “should be viewed and regulated like telecom companies which are not responsible for what people are saying into the handset.”
Shortly before this was announced, Facebook stated it would revise its policies for news stories, making it easier to report content as untrue or harassing, and would bring in fact-checkers from “third-party” organizations, including the AP, Snopes, FactCheck.org, PolitiFact, ABC, and Poynter, reports Business Insider. Facebook will add a disclaimer to sources these checkers judge to be “disputed,” which will affect how the platform sorts news items, demoting those marked as such.
That would, in effect, disincentivize the thriving market of scammers deliberately posting fake news, because a handicap like this should, in theory, keep such content from going viral and racking up clicks.
“Once a story is flagged, it can’t be made into an ad and promoted, either,” Facebook says, and, “On the buying side we’ve eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications.”
Since many fake news posters are driven by financial incentives, depriving them of Facebook and Google ad venues is potentially useful because it does not deprive them of their right to free speech – even if it’s lying – and thus avoids that thorny issue.
It is not censorship to pull product banners off a site you find objectionable; respecting free speech doesn’t mean you have to do business with someone, which Facebook is belatedly realizing after a lot of negative coverage of its initial responses.
Defining what’s fake isn’t going to go down easy
But the issuance of fines, and proposals to flag “fake news” with some app or special policy, stand on less stable ground. Specifically, these measures raise questions about free speech.
Flagged news (or “news”) sites will sue, for starters, as shown by the imbroglio between web developer Daniel Sieradski and the news site Naked Capitalism. Sieradski designed, as a proof-of-concept, a database and Chrome extension that marked 500 websites as peddlers of “B.S.”
“I built this in about an hour,” Sieradski wrote on producthunt.com, “after reading Zuck’s BS about not being able to flag fake news sites. Of course you can. It just takes having a spine to call out nonsense.” He added, “The domains cover the political spectrum from left to right and I have done my absolute best to be impartial in my selections.”
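The core of a proof-of-concept like Sieradski’s is simple: a content script that checks the current page’s domain against a blocklist. The following sketch shows that matching logic; the flagged domains here are invented placeholders, not entries from his actual database.

```javascript
// Minimal sketch of the domain-matching core of a "fake news flagger"
// browser extension. The entries below are hypothetical placeholders.
const flaggedDomains = new Set([
  "example-fake-news.com",
  "totally-real-reports.net",
]);

// Match the exact domain or any subdomain of a flagged domain,
// e.g. "news.example-fake-news.com" also matches.
function isFlagged(hostname, flagged = flaggedDomains) {
  const parts = hostname.toLowerCase().split(".");
  for (let i = 0; i < parts.length - 1; i++) {
    if (flagged.has(parts.slice(i).join("."))) return true;
  }
  return false;
}

// In a real extension, a content script would call
// isFlagged(window.location.hostname) on page load and, on a match,
// inject a visible warning banner into the page.
```

Note that everything here hinges on who curates the blocklist and how: the code is an hour’s work, as Sieradski said, but the editorial judgment behind each entry is where the trouble starts.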
Naked Capitalism, among other outlets, cried foul and sent its legal team out. Naked Capitalism has since been removed from the database of “bullshit” sites, and Sieradski apologized, saying that the site and several others were “erroneously included.”
This shows that outlets will pursue legal action against such designations, demanding that social media platforms or the makers of fake-news-flagging apps defend those labels in court against defamation claims.
That’s a high burden of proof in Western democracies. RT UK has, for example, been able to avoid such a fate, despite official press watchdog sanctions and legal maneuvers, even though UK media laws are much less liberal than American ones.
Flagging itself isn’t illegal, of course; attempts to sue private individuals will run into the defense that, in naming these sites, the app makers are exercising their own right to free speech: to “call out nonsense,” as Sieradski wrote.
The problem then lies in defining exactly what constitutes “fake news,” because under this theory someone could make a credible argument that any wrong reporting qualifies. Once the power to make that call is written into law, it will be hard to roll back, and the politicians demanding it now may find it abused to their detriment if they are out of power after the next election.
There is also the distinction that some fake news is not maliciously wrong, but wrong because new facts emerge in quick succession and initial reports make mistakes or grasp at straws.
Sieradski touched on this issue at producthunt.com, too, saying that his app is not designed for that purpose and shouldn’t be used for it.
Drawing lines between bad lies and honest mistakes
“You can’t use an algorithm,” he wrote in response to a complaint, “to detect whether a claim in a news story from a verifiable source like bloomberg or even fox is false,” and “that’s why my list focuses on well-known bogus news sites, rather than legitimate news agencies.”
Just look at coverage in the first few hours of a mass shooting or terrorist attack, where information about death tolls, number of attackers, places under fire, and claims of responsibility shift rapidly. With proposed fines and legal actions, would Facebook be liable for promoting a story that has incorrect information here and failing to take it down?
So what can be done to call out misinformation more clearly? Is Facebook really going to start handicapping its advertising machine for “societal considerations”? Even the most abrasive, outright censorship has failed when tried. How do you educate readers in basic fact-checking and journalism when most really just don’t care at all? People sharing, liking, and upvoting fake content without even reading past the headline aren’t going to be deterred by “B.S. Detector,” since they’re barely registering who the source is in the first place. They also aren’t going to take the time to read the site’s “about us” page, run Google image searches, or check to see who else is quoting a source cited. How do journalists and tech companies actually address the demand for fake news, then?
Automatic disclosure is, perhaps, one of the most promising ways forward, one that would place the burden on Facebook et al., while avoiding thorny legal issues with the outlets.
“In order to become a verified news provider on the website,” Adam Klasfeld proposes, “Facebook should force an outlet to disclose information about its funding, masthead, editorial structure and conflicts of interest, which the social media should make available to users.” And fake news farms that made a killing in 2016 and are now humming away in anticipation of the next big political moment would be marked “in the same way newspapers use a special design to denote advertorials from partisan advocacy groups.” Such disclosure would make clear, for example, whether an outlet presenting itself as an independent news source is actually funded by a government, and what its owners’ on-the-record partisan agenda is.
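Klasfeld’s disclosure requirement could be sketched as a simple structured record that a platform validates before granting “verified” status. The field names and validation rule below are illustrative assumptions, not any real Facebook schema.

```javascript
// Hypothetical disclosure record per Klasfeld's proposal: funding,
// masthead, editorial structure, and conflicts of interest.
// These field names are illustrative, not a real platform schema.
const requiredFields = [
  "funding",
  "masthead",
  "editorialStructure",
  "conflictsOfInterest",
];

// An outlet qualifies as "verified" only if every disclosure
// field is present and non-empty.
function isDisclosureComplete(outlet) {
  return requiredFields.every(
    (f) => typeof outlet[f] === "string" && outlet[f].trim().length > 0
  );
}

// Example record for a made-up outlet.
const exampleOutlet = {
  name: "Example Daily",
  funding: "Funded by the Example Media Group (private)",
  masthead: "Editor-in-chief: A. Editor",
  editorialStructure: "Independent newsroom with a standards desk",
  conflictsOfInterest: "None disclosed",
};
```

The appeal of this approach is that the platform never rules on truth or falsity; it only checks whether the disclosure fields are filled in, leaving readers to judge the outlet on the record it provides.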
Klasfeld also thinks Facebook should work with governments to set up the sorts of safeguards traditional news outlets have, like a public editor and a review board that can independently take an outlet to task for overreach and sins of omission.