Ever wondered how you can contribute to combating misinformation on platforms like WhatsApp? A study found that, in the context of Indian WhatsApp users, user-driven corrections were effective in lowering people's belief in misinformation.
Sumitra Badrinathan from the University of Pennsylvania, Simon Chauchard from Leiden University and D. J. Flynn from IE University recently conducted a study titled "'I Don't Think That's True, Bro!' An Experiment on Fact-checking WhatsApp Rumors in India", in which they investigated the role of users in fact-checking mis/disinformation on WhatsApp.
The study revealed that corrections to potentially misleading information on WhatsApp threads can reduce belief in the content of such messages, even when the corrections are unsophisticated (citing no source) and the identity of the correcting user is unknown.
The researchers recommended that WhatsApp create a "button" that lets users easily express doubt over claims made in the app. This would minimise the effort required to flag a message and could thus effectively slow down the dissemination of such messages.
"Our findings suggest that though user-driven corrections work, merely signaling a doubt about a claim (regardless of how detailed this signal is) may go a long way in reducing misinformation," Badrinathan said in a tweet.
The study was conducted by recruiting over 5,000 Hindi speakers through Facebook, who were exposed to nine different WhatsApp threads. Each thread (a screenshot of a WhatsApp conversation) included a claim made by an unknown user, drawing on pro-ruling party and anti-ruling party sources, followed by a response from another user. The response varied from a simple "thank you" (the control condition, with no correction) to an expression of simple disbelief to a fact-check of the claim that cited a source.
The subjects of the claims in the nine threads ranged from politics and health to sports, while the fact-checks by the respondent in the thread used one of five sources: AltNews, VishwasNews, Times of India, Facebook and WhatsApp.
Existing literature on people's responses to fact-checking initiatives in countries like the United States has found that motivated reasoning and partisanship are highly influential factors in whether a fact-check is accepted.
However, in the context of WhatsApp users in India, the study found that motivated reasoning and partisanship played less of a role in how users responded to the claim and the fact-check.
It also found that the sophistication of the message (citing fact-checks by media organisations) had little to do with whether people believed the responding user's fact-check. Rather, an expression of doubt or a counterargument by a peer in a WhatsApp group was enough to lower belief in the initial claim.
"Importantly, we find that the source and the sophistication of corrective messages _does not_ strongly condition their effect: A random guy posting an unsourced correction achieves an effect comparable to fact checking by credible sources. Results from 4 rumors below," Badrinathan (@KhariBiskut) tweeted on January 22, 2020.
The "Beacon" Of Doubt
The study argues that expecting users in real life to consistently counter claims made by their peers with sophistication and details would be unrealistic. The researchers suggested that a button-like feature should be added to messengers like WhatsApp, which would allow users to express doubt over a claim with the simple click of a button.
Last year, a similar suggestion was made by a few researchers at the London School of Economics, who conducted a WhatsApp-funded study in India to investigate the role of the messenger in orchestrating and influencing mob violence around the country. They had suggested the addition of a "beacon" like feature for users to flag potentially dangerous misinformation that may lead to violence.