2024 | Anna Yoo Jeong Ha, Josephine Passananti, Ronik Bhaskar, Shawn Shan, Reid Southen, Haitao Zheng, Ben Y. Zhao
Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?
The rise of generative AI has significantly disrupted the art world, making it increasingly difficult to distinguish human art from AI-generated images. This paper evaluates the effectiveness of various methods for identifying AI-generated images, including automated detectors and human experts. The study covers 280 human artworks across seven styles and 350 AI-generated images from five models, tested against eight detectors: five automated ones and three human groups (crowdsourced non-artists, professional artists, and expert artists). Hive and expert artists perform well, but each has weaknesses: Hive is vulnerable to adversarial perturbations, while expert artists produce more false positives. A combination of human and automated detectors provides the best accuracy and robustness.
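As a point of reference for how such a benchmark scores each detector, here is a minimal sketch (not the paper's code). The names `detector`, `images`, and `labels` are hypothetical placeholders, with label 1 meaning AI-generated and label 0 meaning human art; the false positive rate captures how often human art is wrongly flagged as AI.

```python
def evaluate_detector(detector, images, labels):
    """Return accuracy and false positive rate (human art flagged as AI).

    detector: hypothetical callable returning 1 (AI-generated) or 0 (human art).
    images, labels: a labeled benchmark, e.g. 280 human + 350 AI images.
    """
    correct = 0
    false_positives = 0
    num_human = 0
    for image, label in zip(images, labels):
        pred = detector(image)
        correct += int(pred == label)
        if label == 0:  # human art: a prediction of 1 here is a false positive
            num_human += 1
            false_positives += int(pred == 1)
    accuracy = correct / len(labels)
    fpr = false_positives / num_human if num_human else 0.0
    return accuracy, fpr
```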
The study finds that non-artists struggle to distinguish AI-generated images from human art, while professional and expert artists perform better. Supervised classification, particularly Hive, performs well with zero false positives. However, accuracy correlates with training data availability: Hive performs best among the detectors, while images from Firefly are the hardest to classify. Hybrid images and AI-upscaled photos pose no significant challenge. Adversarial perturbations significantly degrade ML detectors like Hive, with feature space perturbations being the most effective. Expert artists perform well overall but produce more false positives, flagging some human art as AI-generated. A combined approach of human and automated detectors is most effective.
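To illustrate the kind of feature-space attack involved, the sketch below (a PyTorch-based assumption, not the paper's actual method) nudges an image so that its embedding under a pretrained encoder drifts away from its original position while pixel changes stay within a small L-infinity budget. The ResNet-18 encoder, step count, and budgets are illustrative choices.

```python
import torch
import torchvision.models as models

# Pretrained encoder whose penultimate features serve as the embedding space.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()
encoder.eval()

def feature_space_perturb(image, steps=20, eps=8 / 255, alpha=2 / 255):
    """PGD-style attack: maximize feature distance under an L-inf budget.

    image: float tensor of shape (1, 3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        target = encoder(image)  # original embedding to move away from
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.norm(encoder(adv) - target)  # push features apart
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # gradient ascent step
            adv = image + (adv - image).clamp(-eps, eps)  # stay within budget
            adv = adv.clamp(0, 1)                         # keep pixels valid
    return adv.detach()

# Example usage on a random placeholder image:
# adv = feature_space_perturb(torch.rand(1, 3, 224, 224))
```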
The paper frames accurate detection as a response to the broader challenges raised by generative AI, including ethical concerns, fraud prevention, and copyright protection. Automated detectors (Hive, Optic, and Illuminarty) are evaluated alongside research-based detectors (DIRE and DE-FAKE), and the study further examines how adversarial perturbations affect each detection method. The results show that while automated detectors perform well, they are not foolproof, and human expertise remains crucial to preserving the integrity of the art world.
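One way such a human-automated combination could work in practice is a simple deferral rule: trust the automated detector when it is confident, and fall back to a majority vote of expert artists otherwise. The function below is an illustrative assumption, not the paper's exact policy; the thresholds are arbitrary.

```python
def combined_verdict(model_score, expert_votes, hi=0.9, lo=0.1):
    """Decide whether an image is AI-generated.

    model_score: P(AI-generated) from an automated detector, in [0, 1].
    expert_votes: list of booleans, True = expert judges the image AI-made.
    """
    if model_score >= hi:
        return True   # confident automated "AI" verdict
    if model_score <= lo:
        return False  # confident automated "human" verdict
    # Ambiguous model output: defer to the human experts' majority.
    return sum(expert_votes) > len(expert_votes) / 2

# Example: an uncertain detector plus a 2-of-3 expert majority flags the image.
print(combined_verdict(0.55, [True, True, False]))  # True
```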