October 14–18, 2024, Salt Lake City, UT, USA | Anna Yoo Jeong Ha*, Josephine Passananti*, Ronik Bhaskar, Shawn Shan, Reid Southen, Haitao Zheng, Ben Y. Zhao
The paper "Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?" by Anna Yoo Jeong Ha, Josephine Passananti, Ronik Bhaskar, Shawn Shan, Reid Souther, Haitao Zheng, and Ben Y. Zhao explores the challenge of distinguishing human art from AI-generated images. The authors highlight the growing impact of generative AI on the art world, emphasizing the need to address this issue for legal, ethical, and practical reasons. They review various approaches to detection, including supervised learning classifiers, research tools targeting diffusion models, and professional artists' expertise. The study curates a dataset of 280 real human art images across 7 styles and generates matching images from 5 generative models. It evaluates 8 detectors (5 automated and 3 human) and considers adversarial perturbations. Key findings include:
1. **Commercial Detectors**: Hive performs the best, with 98.03% accuracy and no false positives on human art. Optic and Illuminarty misclassify human art far more often.
2. **Perturbations**: JPEG compression and adversarial noise have minimal impact, while Glaze significantly degrades detection, particularly for AI-generated images (see the sketch after this list).
3. **Human Detection**: Professional artists and expert artists perform better than non-artists, with expert artists showing the highest accuracy.
4. **Training Data**: Detector performance is influenced by the availability of training data, with newer models like Firefly performing poorly due to limited training.
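The evaluation protocol summarized above reports overall accuracy together with the false-positive rate on human art, with and without a perturbation applied before detection. Below is a minimal illustrative sketch of that setup, not the paper's code: the detector callable stands in for a real service such as Hive's API, the labeled file list and the JPEG quality value are assumptions, and the Pillow library is assumed to be available.

```python
from io import BytesIO
from typing import Callable, List, Optional, Tuple

from PIL import Image  # requires Pillow


def jpeg_roundtrip(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode an image with lossy JPEG compression (one perturbation studied)."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)  # quality is an assumed value
    buf.seek(0)
    return Image.open(buf)


def evaluate(detector: Callable[[Image.Image], bool],
             labeled_paths: List[Tuple[str, bool]],
             perturb: Optional[Callable[[Image.Image], Image.Image]] = None
             ) -> Tuple[float, float]:
    """Return (overall accuracy, false-positive rate on human art).

    labeled_paths: (image path, is_ai) pairs, where is_ai is the ground truth.
    detector: hypothetical stand-in that returns True when it judges the image AI-generated.
    """
    correct = human_total = human_flagged = 0
    for path, is_ai in labeled_paths:
        img = Image.open(path)
        if perturb is not None:
            img = perturb(img)          # e.g. JPEG round-trip before detection
        pred = detector(img)
        correct += int(pred == is_ai)
        if not is_ai:
            human_total += 1
            human_flagged += int(pred)  # human art misclassified as AI
    accuracy = correct / len(labeled_paths)
    fpr = human_flagged / human_total if human_total else 0.0
    return accuracy, fpr


# Hypothetical usage:
# acc, fpr = evaluate(my_detector, pairs, perturb=lambda im: jpeg_roundtrip(im, 75))
```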
The study concludes that a combination of human and automated detectors provides the best accuracy and robustness. The authors also discuss the limitations and future directions, including the need for more diverse and realistic training datasets to improve detection accuracy.
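The combined setup recommended in the conclusion can be approximated by pooling independent verdicts from automated detectors and human raters, for example with a simple majority vote. The sketch below is illustrative only; the equal weighting and strict-majority rule are assumptions, not the paper's exact combination scheme.

```python
from typing import Iterable


def majority_vote(verdicts: Iterable[bool]) -> bool:
    """Return True ("AI-generated") if strictly more than half of the verdicts say so."""
    v = list(verdicts)
    return sum(v) > len(v) / 2


# Example: two automated detectors flag the image as AI, one expert artist does not.
print(majority_vote([True, True, False]))  # -> True
```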