Twitter bots had 'disproportionate' role spreading misinformation in 2016 election: study

The spread of an article that claimed 3 million illegal immigrants voted in the 2016 U.S. presidential election. The links show the article's spread through retweets and quoted tweets, in blue, and replies and mentions, in red. Credit: Filippo Menczer, Indiana University

An analysis of information shared on Twitter during the 2016 U.S. presidential election has found that automated accounts, or “bots,” played a disproportionate role in spreading misinformation online.

The study, conducted by Indiana University researchers and published Nov. 20 in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017, a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017.

Among the findings: A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the low-credibility information on the network.
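
As a rough illustration of how such a share can be computed, the sketch below tallies low-credibility posts made by accounts flagged as bots, assuming hypothetical tweet records with invented field names; it is not the study's actual pipeline.

```python
# Hedged sketch: what fraction of low-credibility shares came from accounts
# flagged as bots. Records and field names are invented for illustration.

tweets = [
    {"account": "a1", "is_bot": True,  "low_credibility": True},
    {"account": "a2", "is_bot": False, "low_credibility": True},
    {"account": "a3", "is_bot": False, "low_credibility": False},
    {"account": "a1", "is_bot": True,  "low_credibility": True},
]

bot_accounts = {t["account"] for t in tweets if t["is_bot"]}
all_accounts = {t["account"] for t in tweets}

low_cred = [t for t in tweets if t["low_credibility"]]
from_bots = [t for t in low_cred if t["is_bot"]]

print(f"Bot accounts: {len(bot_accounts) / len(all_accounts):.0%} of all accounts")
print(f"Bot share of low-credibility posts: {len(from_bots) / len(low_cred):.0%}")
```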

The study also found that bots played a major role promoting low-credibility content in the first few moments before a story goes viral. The brief length of this window, 2 to 10 seconds, highlights the challenges of countering the spread of misinformation online. Similar issues are seen in other complex environments such as the stock market, where serious problems can arise in mere moments due to the impact of high-frequency trading.
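
As an illustration of that kind of timing measurement, the minimal sketch below computes the delay between a link's first appearance and its first share by a bot-flagged account; the timestamps and records are invented, not the study's data.

```python
# Hedged sketch: delay from a link's first appearance to its first bot share.
# Timestamps (in seconds) and records are invented for illustration.
shares = [
    {"link": "http://usatoday.com.co/story", "time": 0.0, "is_bot": False},
    {"link": "http://usatoday.com.co/story", "time": 3.0, "is_bot": True},
    {"link": "http://usatoday.com.co/story", "time": 40.0, "is_bot": False},
]

first_seen = min(s["time"] for s in shares)
first_bot = min((s["time"] for s in shares if s["is_bot"]), default=None)

if first_bot is not None:
    print(f"First bot amplification after {first_bot - first_seen:.1f} seconds")
```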

“This study finds that bots significantly contribute to the spread of misinformation online, as well as shows how quickly these messages can spread,” said Filippo Menczer, a professor in the IU School of Informatics, Computing and Engineering, who led the study.

The analysis also revealed that bots amplify a message’s volume and visibility until it’s more likely to be shared broadly.

“People tend to put greater trust in messages that appear to originate from many people,” said co-author Giovanni Luca Ciampaglia, an assistant research scientist with the IU Network Science Institute at the time of the study. “Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”

Information sources labeled as low-credibility in the study were identified based on their appearance on lists, produced by independent third-party organizations, of outlets that regularly share false or misleading information. These sources, such as websites with misleading names like “USAToday.com.co,” included outlets with both right- and left-leaning points of view.
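
A minimal sketch of that labeling step, assuming a hypothetical list of low-credibility domains drawn from such third-party sources; the domains and URLs below are placeholders, not the lists the study actually used.

```python
from urllib.parse import urlparse

# Hypothetical list of low-credibility outlets (placeholder values only).
LOW_CREDIBILITY_DOMAINS = {"usatoday.com.co", "fake-news-example.com"}

def is_low_credibility(url: str) -> bool:
    """Label a shared link by matching its domain against the list."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[len("www."):]
    return domain in LOW_CREDIBILITY_DOMAINS

print(is_low_credibility("http://www.USAToday.com.co/some-story"))  # True
print(is_low_credibility("https://www.usatoday.com/real-story"))    # False
```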

The researchers also identified other tactics for spreading misinformation with Twitter bots. These included amplifying a single tweet, potentially controlled by a human operator, through hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.
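
The sketch below flags two of those tactics, repeated links and mention-targeting, over a handful of hypothetical tweet records; the accounts, links and counts are invented for illustration.

```python
from collections import Counter

# Hypothetical tweet records (invented accounts, links and mentions).
tweets = [
    {"account": "bot_1", "link": "http://usatoday.com.co/story", "mentions": ["realDonaldTrump"]},
    {"account": "bot_1", "link": "http://usatoday.com.co/story", "mentions": ["realDonaldTrump"]},
    {"account": "user_9", "link": "http://example.org/a", "mentions": []},
]

# Tactic: the same account repeating the same link in recurring posts.
repeats = Counter((t["account"], t["link"]) for t in tweets)
repeated_links = [pair for pair, n in repeats.items() if n > 1]

# Tactic: repeatedly mentioning highly influential accounts.
mention_counts = Counter(m for t in tweets for m in t["mentions"])

print("Repeated account/link pairs:", repeated_links)
print("Most-mentioned accounts:", mention_counts.most_common(3))
```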

For example, the study cites a case in which a single account mentioned @realDonaldTrump in 19 separate messages about millions of illegal immigrants casting votes in the presidential election, a false claim that was also a major administration talking point.

The researchers also ran an experiment inside a simulated version of Twitter and found that deleting 10 percent of the accounts in the system, based on their likelihood to be bots, resulted in a major drop in the number of stories from low-credibility sources in the network.
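
The published experiment used the authors' own simulation; the sketch below is only a simplified stand-in that removes the 10 percent of accounts with the highest hypothetical bot-likelihood scores and recounts low-credibility shares, with all scores and records invented.

```python
import random

random.seed(1)

# Hypothetical accounts with a bot-likelihood score (higher = more bot-like).
accounts = {f"acct_{i}": random.random() for i in range(1000)}

# Hypothetical low-credibility shares, skewed toward bot-like accounts.
shares = [name for name, score in accounts.items() for _ in range(int(score * 10))]

# Remove the top 10 percent of accounts ranked by bot-likelihood.
cutoff = int(len(accounts) * 0.10)
removed = set(sorted(accounts, key=accounts.get, reverse=True)[:cutoff])

before = len(shares)
after = len([name for name in shares if name not in removed])
print(f"Low-credibility shares: {before} before, {after} after removal "
      f"({1 - after / before:.0%} drop)")
```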

“This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks,” Menczer said.

The study also suggests steps that companies could take to slow the spread of misinformation on their networks. These include improving algorithms to automatically detect bots and requiring a “human in the loop” to reduce automated messages in the system. For example, users might be required to complete a CAPTCHA to send a message.
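
As a loose illustration of the “human in the loop” idea, the sketch below gates an account behind a hypothetical CAPTCHA challenge once it exceeds an invented posting-rate threshold; this is not a description of anything Twitter actually implements.

```python
from collections import defaultdict, deque

# Invented threshold: challenge an account that posts more than MAX_POSTS
# messages within WINDOW seconds.
MAX_POSTS = 5
WINDOW = 60.0

recent_posts = defaultdict(deque)

def requires_captcha(account: str, now: float) -> bool:
    """Return True if this post should be gated behind a human challenge."""
    q = recent_posts[account]
    while q and now - q[0] > WINDOW:
        q.popleft()
    q.append(now)
    return len(q) > MAX_POSTS

# A rapid burst of automated posts trips the challenge; a single post does not.
print(any(requires_captcha("bot_account", i * 0.5) for i in range(10)))  # True
print(requires_captcha("human_account", 0.0))                            # False
```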

Although their analysis focused on Twitter, the study’s authors added that other social networks are also vulnerable to manipulation. For example, platforms such as Snapchat and WhatsApp may struggle to control misinformation on their networks because their use of encryption and disappearing messages complicates the ability to study how their users share information.

“As people across the globe increasingly turn to social networks as their primary source of news and information, the fight against misinformation requires a grounded assessment of the relative impact of the different ways in which it spreads,” Menczer said. “This work confirms that bots play a role in the problem, and suggests their reduction could improve the situation.”

To explore election messages currently shared on Twitter, Menczer’s research group has also recently launched a tool to measure “Bot Electioneering Volume.” Created by IU Ph.D. students, the program shows the level of bot activity around specific election-related conversations, as well as the topics, user names and hashtags they are currently pushing.
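
This is not the IU tool's published implementation, but as a rough sketch of the idea, bot activity per hashtag could be aggregated by counting tweets whose authors exceed a bot-likelihood threshold; the scores, cutoff and records below are invented.

```python
from collections import Counter

# Hypothetical tweets with a bot-likelihood score for the posting account.
tweets = [
    {"hashtags": ["Election2018"], "bot_score": 0.9},
    {"hashtags": ["Election2018", "vote"], "bot_score": 0.8},
    {"hashtags": ["vote"], "bot_score": 0.1},
]

BOT_THRESHOLD = 0.5  # invented cutoff for treating an account as bot-like

# Crude "bot electioneering volume": likely-bot tweets per hashtag.
volume = Counter(
    tag
    for t in tweets
    if t["bot_score"] >= BOT_THRESHOLD
    for tag in t["hashtags"]
)
print(volume.most_common())  # [('Election2018', 2), ('vote', 1)]
```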



More information:
Chengcheng Shao et al., The spread of low-credibility content by social bots, Nature Communications (2018). DOI: 10.1038/s41467-018-06930-7

Journal reference:
Nature Communications


Provided by:
Indiana University

