Toward Mitigating Misinformation and Social Media Manipulation in LLM Era

May 13-17, 2024 | Yizhou Zhang, Karishma Sharma, Lun Du, Yan Liu
The proliferation of misinformation on social media has become a significant concern, especially with the rise of Large Language Models (LLMs). These models enable manipulators to generate highly convincing deceptive content, posing challenges for both users and social media platforms. This tutorial introduces advanced machine learning research aimed at mitigating misinformation and social media manipulation. It covers three main areas: (1) detecting social manipulators, (2) learning causal models of misinformation and social manipulation, and (3) detecting LLM-generated misinformation. The tutorial also discusses future directions for research in this field.

The challenges posed by LLMs include more interactive social bots, more deceptive misinformation content, and higher efficiency in generating misinformation. Traditional methods for detecting misinformation, such as identifying social bots that post predefined content, are becoming less effective as LLMs enable the creation of human-like interactive bots. Additionally, LLMs can generate misleading content that is difficult to detect, such as out-of-context multi-modal media.

The tutorial outlines the history and recent advances in research directions that can help address these threats. It includes sections on misinformation detection in the large model era, manipulator detection on social media, causal models of misinformation and social manipulation, and the road to the future. The tutorial also discusses the relevance of these topics to the research community and provides a detailed schedule for the presentation.

The tutorial is intended for researchers interested in social media analysis and introduces advanced machine learning tools for combating misinformation and social media manipulation. It covers topics such as detecting social bots, understanding the causal effects of misinformation, and future directions for research.
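As a toy illustration of the third theme (detecting LLM-generated misinformation), one commonly cited heuristic, not a method from this tutorial, is "burstiness": human writing tends to vary sentence length more than uniform machine output. The sketch below, using only the Python standard library, scores a text by the standard deviation of its sentence lengths; the `threshold` value is an arbitrary assumption for demonstration, and real detectors combine many stronger signals.

```python
import re
import statistics


def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words).

    Higher values are typical of varied human writing; very uniform
    text scores near zero. Returns 0.0 for fewer than two sentences.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)


def looks_machine_generated(text: str, threshold: float = 2.0) -> bool:
    # Hypothetical threshold chosen for illustration only; a single
    # heuristic like this is far too weak for production use.
    return burstiness(text) < threshold
```

For example, the repetitive text `"I like cats a lot. I like dogs a lot. I like fish a lot."` has zero burstiness and is flagged, while a passage mixing one-word and ten-word sentences is not. This captures only one weak stylistic signal; the tutorial's point is precisely that such surface heuristics break down against modern LLMs.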
The tutorial also includes references to related works and provides support materials such as slides and recordings. The authors are researchers from various institutions, including the University of Southern California and Coupang, Inc. The work is supported by NSF grants and other funding sources.