BotOrNot: A System to Evaluate Social Bots


2 Feb 2016 | Clayton A. Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, Filippo Menczer
**Abstract:** Social bots, also known as sybil accounts, are automated agents that interact with humans on social media platforms. These bots have been used to manipulate discussions, alter user popularity, spread misinformation, and even engage in terrorist activities. This paper introduces BotOrNot, a publicly available service that evaluates the likelihood that a Twitter account is controlled by a social bot. Since its launch in May 2014, BotOrNot has processed over one million requests through its website and APIs.

**Introduction:** Social bots are computer algorithms that produce content and interact with humans on social media. They have been observed performing a range of malicious activities, such as manufacturing artificial grassroots support for political causes and manipulating stock prices. BotOrNot aims to determine whether a Twitter account is controlled by a human or a machine by computing a bot-likelihood score based on more than 1,000 features.

**Release Timeline:** BotOrNot was initially available only through its website because of capacity concerns. After rate limits were implemented to address robustness issues, the service was opened via public APIs in December 2015. The APIs have since served over 540,000 requests, bringing the total to more than one million queries.

**System Design:** The BotOrNot service uses Twitter's REST API to retrieve recent activity for a specified screen name. The server then computes a bot-likelihood score using a classification algorithm over features in six main categories: network, user, friends, temporal, content, and sentiment features. The model is trained on a dataset of 15,000 manually verified social bots and 16,000 legitimate accounts, achieving 0.95 AUC in cross-validation.
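The evaluation setup described above (feature vectors in six categories fed to a supervised classifier, scored with cross-validated AUC) can be illustrated with a short sketch. This is not the authors' code: the feature matrix below is synthetic, the dataset is scaled down, and the Random Forest classifier is an assumed choice standing in for whatever model the service actually uses.

```python
# Minimal sketch of the evaluation protocol: a feature-based bot classifier
# scored with cross-validated AUC. Synthetic data only; not the BotOrNot model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for labeled accounts (y = 1 for bots, 0 for humans).
# The real training set is ~15,000 verified bots and ~16,000 legitimate accounts
# described by 1,000+ network/user/friends/temporal/content/sentiment features.
X, y = make_classification(
    n_samples=3_100,        # scaled down for a quick demo
    n_features=100,         # stand-in for the full feature set
    n_informative=30,
    weights=[0.52, 0.48],   # roughly the human/bot split in the paper's dataset
    random_state=42,
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.3f}")
```

From a client's perspective, the service takes a Twitter screen name and returns a bot-likelihood score. A hypothetical client might look like the following; the endpoint URL, query parameter, and response fields are illustrative assumptions, not the service's documented interface.

```python
# Hypothetical client for a BotOrNot-style scoring endpoint (placeholder URL).
import requests

API_URL = "https://example.org/botornot/api/score"  # illustrative, not the real endpoint

def bot_likelihood(screen_name: str) -> float:
    """Fetch a bot-likelihood score in [0, 1] for a Twitter screen name."""
    resp = requests.get(API_URL, params={"screen_name": screen_name}, timeout=30)
    resp.raise_for_status()
    # Assumed response shape: {"screen_name": "...", "score": 0.0-1.0}
    return resp.json()["score"]

if __name__ == "__main__":
    print(bot_likelihood("example_user"))
```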
**Conclusion:** BotOrNot provides a free service for evaluating social bots, aiming to lower the barrier to entry for researchers, reporters, and enthusiasts. The service offers ready-made reports as well as an API for programmatic access to classification results. The authors welcome applications from the social media community built on top of their public bot classification service.

**Acknowledgments:** This work was supported by the NSF, DARPA, and the J.S. McDonnell Foundation.