No, there was no bombing in Bangkok. But Facebook Safety Check thought so

Image Credit: Facebook

This mistake shows a potential shortcoming in Safety Check’s November update

According to Khaosod English, a year-old report about the Erawan Shrine bombing was mistakenly republished on Bangkok Informer with a 2016 date. From there it was picked up by an MSN bot and reproduced on MSN.com. More and more sites began running it after that, and Facebook’s Safety Check algorithm took the surge for a breaking news item.

Having detected these hits, Safety Check activated, despite the fact that all of the reports were word-for-word reproductions of the original 2015 coverage and centered on a BBC video dated to 2015, not 2016. Moreover, Bangkok Informer is only a content aggregator, not an actual news outlet, according to Thai journalist Saksith Saiyasombut.
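Facebook has not detailed how Safety Check decides that a crisis is underway, but the Bangkok incident suggests a volume-based trigger that counts matching reports without deduplicating them or checking their provenance. Here is a minimal sketch of that failure mode in Python; all function names, data fields, and thresholds are hypothetical, not Facebook’s actual system:

```python
import hashlib
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the apparent failure mode: a trigger that counts
# raw report volume, versus one that deduplicates copies and checks dates.
# Each report is assumed to be a dict with a "body" string and an
# aware-datetime "source_date".

def naive_trigger(reports, threshold=20):
    """Fires whenever enough sites carry the same story."""
    return len(reports) >= threshold  # raw volume alone looks like breaking news

def cautious_trigger(reports, threshold=20, max_age=timedelta(days=2)):
    """Collapses word-for-word copies and ignores stale source material."""
    # Dozens of verbatim reproductions of one 2015 article collapse
    # to a single fingerprint.
    unique_bodies = {hashlib.sha256(r["body"].encode()).hexdigest()
                     for r in reports}
    if len(unique_bodies) < threshold:
        return False
    # A story built around year-old material (e.g. a BBC video dated 2015)
    # should not count as breaking news.
    now = datetime.now(timezone.utc)
    fresh = [r for r in reports if now - r["source_date"] <= max_age]
    return len(fresh) >= threshold
```

Fifty verbatim copies of the 2015 Erawan Shrine report would satisfy the naive trigger but collapse to a single stale fingerprint under the cautious one, which is roughly the check that was missing in Bangkok.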

The error, though quickly corrected, exposes a potential shortcoming in Safety Check’s November update. Since that update, Facebook has delegated Safety Check activation to user communities and third-party verification services, in response to complaints that it had been too selective with its warnings.

Clearly, with Bangkok, that system failed. (Nor did it function properly earlier this year, when an alert meant only for Pakistan went global.) The record is not all failures, though: Safety Check was not fooled by the mass panics at JFK Airport and the Jersey Gardens Mall this year, both of which terrified crowds erroneously reported as mass shootings, fueled by online chatter.

Safety Check false alarm. Image Credit: Facebook

But since the system can be manipulated by spamming the web with “breaking news” that isn’t breaking (or news) at all, this raises the question of whether people could fabricate reports of disasters or violence specifically to trigger Safety Check. It is possible in theory, and it would follow a precedent established two years ago, on September 11, 2014.

Safety Check did not debut until October of that year, so it never had to face down what turned out to be a coordinated disinformation campaign to convince Americans that ISIS had attacked their country on the thirteenth anniversary of 9/11.

On that date, a Russian troll farm known as the Internet Research Agency perpetrated the “Columbian Chemicals” hoax. Using spoofed websites, doctored images, Twitter bots, and mass texting of fake alerts, the trolls made it appear that a major industrial plant had been bombed. Adrian Chen, writing for The New York Times, later connected this campaign to other hoaxes involving fake Ebola outbreaks and police shootings.

Facebook would probably not have been tricked by these hoaxes back then, though. Data scientist Gilad Lotan analyzed the campaign for betaworks, and found that “Even though it was carefully planned, and seeded across different platforms, the content generated did not gain enough user trust, and hence no network effects were triggered.”

In other words, it didn’t go viral, and would have been easily called out by human overseers.

Footage of a 2014 bombing that didn’t happen. Image Credit: YouTube

But that was two years ago, and since then it has become clearer that mischief-makers can make hoaxes look real. Legitimate news websites and official government Twitter accounts can be, and have been, hacked to spread disinformation, unwillingly lending their good names to a hoax. Coordinating such an effort would not be all that hard: just look at how easily the OurMine group simultaneously hacked the accounts of multibillion-dollar entertainment companies last week, simply to prove that it could.

And since Facebook was fooled by a Thai content aggregator, it can presumably be gamed by other aggregators, like those out in force on both the left and the right of the political spectrum in 2016.

The obvious artificiality of fake-news influencers, which betaworks cited as the reason the “Columbian Chemicals” hoax failed in 2014, no longer acts as a safeguard against disinformation. One thing this year has made abundantly clear is that more and more social media users can’t, or won’t, distinguish fantasy from reality.

This is something Facebook must keep in mind, since its “crisis hub” goes into action based on what its algorithms decide is happening in the world, especially now that there is less human oversight to flag hoaxes before they go viral on Facebook or on other platforms and news sites.

