1 Feb 2024 | Xinlin Peng, Ying Zhou, Ben He, Le Sun, and Yingfei Sun
This paper presents an adversarial evaluation of methods for detecting AI-generated student essays. The authors construct the AIG-ASAP dataset, a collection of AI-generated student essays produced with several generation strategies and then modified with text perturbation methods designed to evade detection. They evaluate existing AI-generated content (AIGC) detectors on this dataset and find that current detectors can be circumvented with simple adversarial attacks, underscoring the need for more accurate and robust detection of AI-generated student essays.
The AIG-ASAP dataset is built on the ASAP dataset of essays written by high school students in the United States. The authors create machine-written counterparts using three generation methods: instruction-based writing, refined writing, and continuation writing. They then apply perturbation methods, namely essay paraphrasing, word substitution, and sentence substitution, to produce essays that are harder to detect; a rough sketch of the word-substitution idea follows.
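As an illustration of the word-substitution attack in general (not the authors' exact procedure), one could use a masked language model to propose in-context replacements for a fraction of the words. The model choice, substitution rate, and helper function below are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of a word-substitution perturbation: mask a sampled word
# and let a masked language model suggest a plausible in-context replacement.
# This illustrates the general technique, not the paper's exact method.
import random

from transformers import pipeline  # Hugging Face transformers

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def substitute_words(essay: str, rate: float = 0.1, seed: int = 0) -> str:
    """Replace roughly `rate` of the words with MLM-suggested alternatives."""
    rng = random.Random(seed)
    words = essay.split()
    for i in rng.sample(range(len(words)), k=max(1, int(rate * len(words)))):
        original = words[i]
        masked = " ".join(words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:])
        # Take the highest-scoring candidate that differs from the original word.
        for cand in fill_mask(masked, top_k=5):
            if cand["token_str"].strip().lower() != original.lower():
                words[i] = cand["token_str"].strip()
                break
    return " ".join(words)

print(substitute_words("The students wrote their essays in the library."))
```

Because each replacement is chosen by its fit in the surrounding context, the perturbed essay stays fluent while its token statistics drift away from what detectors were trained on.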
The experiments show that while paraphrasing alone offers some help in evading detection, word and sentence substitution degrade detection performance far more severely. These results indicate that current AIGC detectors are vulnerable to adversarial attacks and can be fooled by subtle changes to the text. The authors also conduct human evaluations of essay quality and find that AI-generated essays are often preferred by human evaluators, especially when they more closely resemble human-written essays.
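To make the reported degradation concrete, the sketch below shows one plausible way to measure a detector's ROC-AUC on human versus AI essays before and after perturbation. The off-the-shelf GPT-2 output detector stands in for the detectors studied in the paper; the model id and its "Real"/"Fake" labels describe that public checkpoint, not the authors' setup.

```python
# A hedged sketch of measuring detector robustness: score human and AI essays
# with an off-the-shelf detector and compare ROC-AUC on raw vs. perturbed text.
from sklearn.metrics import roc_auc_score
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
    truncation=True,
)

def machine_score(text: str) -> float:
    """Probability the detector assigns to the text being machine-generated."""
    out = detector(text)[0]
    # This checkpoint labels outputs "Fake" (machine) vs. "Real" (human).
    return out["score"] if out["label"] == "Fake" else 1.0 - out["score"]

def detection_auc(human_essays: list[str], ai_essays: list[str]) -> float:
    scores = [machine_score(t) for t in human_essays + ai_essays]
    labels = [0] * len(human_essays) + [1] * len(ai_essays)
    return roc_auc_score(labels, scores)

# Comparing detection_auc on raw vs. word-substituted AI essays would show,
# in miniature, the drop the authors report for substitution attacks.
```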
The paper also discusses the limitations of the study, notably that AIG-ASAP consists of English essays from U.S. high school students, which may limit the generalizability of the findings, and that the practical implementation and deployment of detection methods are left unexplored.
Overall, the study highlights the challenge of detecting AI-generated student essays: the perturbation attacks evade current detectors while largely preserving essay quality. The authors conclude that mimicking human writing plays a crucial role in evading detection, and that more research is needed to develop detection methods that are robust to such attacks.