Social bots play a significant role in spreading low-credibility content on social media, as shown by a study analyzing 14 million tweets from 2016 to 2017. The research found that bots disproportionately amplified misinformation from low-credibility sources, especially in the early stages of content spread. Bots targeted users with large followings through replies and mentions, influencing humans to share the content. Successful low-credibility sources were heavily supported by bots, suggesting that curbing bot activity could help mitigate the spread of misinformation.
The study analyzed 389,569 articles from low-credibility sources and 15,053 from fact-checking organizations, finding that low-credibility content was just as likely to go viral as fact-checked content. Bots were identified as key actors in this process, with a significant portion of tweets and article shares attributed to bot activity. The study also found that bots often targeted influential users, increasing the likelihood of content being shared by humans.
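The bot-attribution analysis described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the data, the `bot_score` values, and the 0.5 cutoff are hypothetical assumptions (the original study scored accounts with the Botometer classifier and examined how the likely-bot fraction varied over an article's sharing timeline).

```python
# Illustrative sketch: estimate what fraction of an article's shares come
# from likely bots, given per-account bot scores (e.g., from a classifier
# such as Botometer). All data and the threshold below are hypothetical.

BOT_THRESHOLD = 0.5  # assumed cutoff; not the study's exact value

def bot_share_fraction(shares, bot_scores, threshold=BOT_THRESHOLD):
    """Fraction of shares posted by accounts scoring above `threshold`.

    shares: list of account ids that shared the article, in time order.
    bot_scores: dict mapping account id -> bot score in [0, 1].
    """
    if not shares:
        return 0.0
    likely_bot = sum(1 for acct in shares
                     if bot_scores.get(acct, 0.0) > threshold)
    return likely_bot / len(shares)

# Hypothetical example: early shares dominated by high-scoring accounts.
scores = {"a": 0.9, "b": 0.8, "c": 0.1, "d": 0.2, "e": 0.95}
shares = ["a", "b", "e", "c", "d"]

overall = bot_share_fraction(shares, scores)    # 3 of 5 shares -> 0.6
early = bot_share_fraction(shares[:3], scores)  # first 3 shares -> 1.0
print(overall, early)
```

Comparing the early-window fraction against the overall fraction is one simple way to express the study's finding that bot activity is concentrated in the first moments of an article's spread.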
The research highlights the need for better detection and mitigation strategies against social bots. While platforms have begun to address the problem, the effectiveness of current measures remains unclear. The findings suggest that curbing bot activity could be an effective lever for improving the quality of information on social media.