Adaptive Ensembles of Fine-Tuned Transformers for LLM-Generated Text Detection

20 Mar 2024 | Zhixin Lai, Xuesheng Zhang, Suiyao Chen
This paper addresses the challenge of detecting text generated by large language models (LLMs) to mitigate risks such as fake news and copyright infringement. The authors trained five transformer-based classifiers on different datasets and evaluated their performance on both in-distribution and out-of-distribution datasets. Single classifier models showed decent performance on in-distribution data but struggled with out-of-distribution data. To improve generalization, the authors employed adaptive ensemble algorithms, which significantly enhanced the average accuracy from 91.8% to 99.2% on in-distribution data and from 62.9% to 72.5% on out-of-distribution data. The results highlight the effectiveness and potential of adaptive ensemble algorithms in LLM-generated text detection, demonstrating their robustness and enhanced generalization capabilities.
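The summary above does not spell out how the ensemble combines the five classifiers, so the following is only a minimal sketch of one common adaptive-weighting scheme: each model's predicted probability that a text is LLM-generated is weighted by that model's accuracy on a held-out validation set. All names, probabilities, and accuracy values here are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch of an adaptive ensemble over binary text classifiers.
# Assumption: each fine-tuned transformer outputs P(text is LLM-generated),
# and models are weighted by their held-out validation accuracy.

def adaptive_ensemble(probs, val_accuracies):
    """Combine per-model probabilities using accuracy-proportional weights.

    probs          -- list of P(LLM-generated) scores, one per model
    val_accuracies -- hypothetical validation accuracy of each model
    Returns (ensemble_score, predicted_label).
    """
    total = sum(val_accuracies)
    weights = [a / total for a in val_accuracies]          # normalize to sum to 1
    score = sum(w * p for w, p in zip(weights, probs))     # weighted average score
    return score, int(score >= 0.5)                        # threshold at 0.5

# Example: five transformer classifiers scoring the same input text.
model_probs = [0.9, 0.8, 0.4, 0.95, 0.7]                   # illustrative outputs
val_acc = [0.92, 0.88, 0.61, 0.94, 0.75]                   # illustrative accuracies
score, label = adaptive_ensemble(model_probs, val_acc)
```

Weighting by validation accuracy lets stronger classifiers dominate the vote, which is one plausible way an ensemble could stay robust when individual models degrade on out-of-distribution text.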