From the New York Times:
By SINAN ARAL MARCH 8, 2018
Sinan Aral (@sinanaral) is a professor at the M.I.T. Sloan School of Management.
The spread of misinformation on social media is an alarming phenomenon that scientists have yet to fully understand. While the data show that false claims are increasing online, most studies have analyzed only small samples or the spread of individual fake stories.
My colleagues Soroush Vosoughi, Deb Roy and I set out to change that. We recently analyzed the diffusion of all of the major true and false stories that spread on Twitter from its inception in 2006 to 2017.
Our data included approximately 126,000 Twitter “cascades” (unbroken chains of retweets with a common, singular origin) involving stories spread by three million people more than four and a half million times.
We started by identifying thousands of true and false stories, using information from six independent fact-checking organizations, including Snopes, PolitiFact and Factcheck.org. These organizations exhibited considerable agreement — between 95 percent and 98 percent — on the truth or falsity of these stories.
In other words, these are not “all of the major true and false stories,” they are all of the factually controversial stories that Snopes and the like got involved with. Presumably, a vast number of true stories (e.g., “Donald Trump wins the election”) are not Snopes-ized. Further, lots of false stories (e.g., “I know what I’m talking about!”) never spread enough for Snopes et al. to bother with.
Finally, there is the issue of political and institutional bias. For example, I couldn’t find either Snopes or Factcheck.org touching the Rolling Stone “A Rape on Campus” story. (PolitiFact did mention it, but mostly in a tangential way.)
Moreover, huge stories in the press — e.g., the collapse of the Housing Bubble, crime, the Republican Party’s possible strategy for winning back the White House — are allowed only certain angles of interpretation. Thus, after the 2012 election, I pointed out that Romney had lost because he narrowly lost six states in the north central region by failing to appeal to white working-class voters. But almost nobody spread that story, with the MSM giving overwhelming credence instead to the idea that the only way the GOP could get back to the White House was by doing exactly what the Democrats wanted: amnesty.
Then we searched Twitter for mentions of these stories, followed the sharing activity to the “origin” tweets (the first mention of a story on Twitter) and traced all the retweet cascades from every origin tweet. We then analyzed how they spread online.
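The procedure described above — follow each retweet chain back to its origin tweet, then group tweets by that origin — is straightforward bookkeeping. A minimal sketch, assuming simplified tweet records with a hypothetical `retweet_of` parent pointer (the study's actual pipeline is not published in this form):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tweet:
    tweet_id: str
    story_id: str                      # which fact-checked story this tweet mentions
    retweet_of: Optional[str] = None   # parent tweet in the retweet chain, if any

def build_cascades(tweets):
    """Group tweets into cascades: unbroken retweet chains with one common origin."""
    by_id = {t.tweet_id: t for t in tweets}
    cascades = {}  # origin tweet id -> list of tweet ids in its cascade
    for t in tweets:
        # Walk up the retweet chain to find the origin (first mention on Twitter).
        node = t
        while node.retweet_of is not None:
            node = by_id[node.retweet_of]
        cascades.setdefault(node.tweet_id, []).append(t.tweet_id)
    return cascades

tweets = [
    Tweet("a", "story1"),                  # origin tweet
    Tweet("b", "story1", retweet_of="a"),
    Tweet("c", "story1", retweet_of="b"),  # retweet of a retweet, same cascade
    Tweet("d", "story1"),                  # a second origin -> a separate cascade
]
print(build_cascades(tweets))
# {'a': ['a', 'b', 'c'], 'd': ['d']}
```

Note that, as the authors define it, two independent tweets of the same story start two separate cascades, which is why 126,000 cascades can involve only thousands of distinct stories.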
For all categories of information — politics, entertainment, business and so on — we found that false stories spread significantly farther, faster and more broadly than did true ones.
That’s why you never heard on social media that Trump won until a couple of weeks later.
No, what they are doing is defining “true stories” as those that sound dubious enough to get considered by one of their fact-checking sites.
Falsehoods were 70 percent more likely to be retweeted, even when controlling for the age of the original tweeter’s account, its activity level, the number of its followers and followees, and whether Twitter had verified the account as genuine. These effects were more pronounced for false political stories than for any other type of false news.
Surprisingly, Twitter users who spread false stories had, on average, significantly fewer followers, followed significantly fewer people, were significantly less active on Twitter, were verified as genuine by Twitter significantly less often and had been on Twitter for significantly less time than were Twitter users who spread true stories.
In other words, most of the activity on Twitter is passing along fairly true stories, but they don’t count as true stories because Snopes et al. didn’t bother fact-checking them.
Falsehood diffused farther and faster despite these seeming shortcomings.
And despite concerns about the role of web robots in spreading false stories, we found that human behavior contributed more to the differential spread of truth and falsity than bots did. Using established bot-detection algorithms, we found that bots accelerated the spread of true stories at approximately the same rate as they accelerated the spread of false stories, implying that false stories spread more than true ones as a result of human activity.
Why would that be?
There are a certain number of “too good to be true” stories. I have a decent nose for them so I don’t fall for them all that often, but sometimes I do. (Here’s me falling for an expensive video created by professionals to fool people.) Also, as a professional, I have the time to read further into the story to see if it checks out, and to search for supporting information. [Comment at Unz.com]