Feb 23, 2024 | Zejun Zhang, Li Zhang, Xin Yuan, Anlan Zhang, Mengwei Xu, Feng Qian
This paper presents a comprehensive longitudinal study of the GPT app ecosystem, focusing on two major GPT app stores: GPTStore.AI and the official OpenAI GPT Store. The study analyzes the evolution, landscape, and vulnerabilities of the emerging LLM app ecosystem over a five-month period. The research develops two automated tools and a TriLevel configuration extraction strategy to efficiently gather metadata and user feedback for all GPT apps, as well as configurations for the top 10,000 popular apps. The findings reveal that user enthusiasm for GPT apps consistently rises, while creator interest plateaus within three months of GPTs’ launch. Nearly 90% of system prompts can be easily accessed due to widespread failure to secure GPT app configurations, leading to considerable plagiarism and duplication among apps. The study highlights the need to enhance the LLM app ecosystem by app stores, creators, and users. The research also identifies significant concentration of user engagement within a small subset of available apps, indicating that most apps are underutilized. The high success rate of configuration extraction exposes the vulnerabilities of GPT apps, indicating a lack of sufficient protective mechanisms. Notable instances of plagiarism within the GPT app store have raised copyright issues. The study provides a detailed dataset on GPT apps, which is intended to be open-sourced if the paper is accepted. The research does not raise any ethical issues. The findings have significant implications for GPT users, creators, and app stores. For GPT app stores, the study recommends implementing continuous monitoring to notify GPT creators of potential leaks and duplication issues. For GPT creators, they should be aware of the importance of protecting GPT configurations. For users, the study highly advises them to actively provide feedback and ratings. The study's contributions include the first extensive monitoring and analysis of the LLM app ecosystem, a novel TriLevel GPT app configuration extraction approach, and a detailed dataset on GPT apps. The study also explores the prevalence of plagiarism among GPT apps, highlighting the need for strategies to prevent this phenomenon. The findings suggest that the ease of accessing GPT app configurations has led to a notable issue of plagiarism within the current GPT app store. The study recommends implementing more rigorous review processes and enabling users and developers to easily report suspected plagiarism to help app stores quickly identify and take action against such apps.This paper presents a comprehensive longitudinal study of the GPT app ecosystem, focusing on two major GPT app stores: GPTStore.AI and the official OpenAI GPT Store. The study analyzes the evolution, landscape, and vulnerabilities of the emerging LLM app ecosystem over a five-month period. The research develops two automated tools and a TriLevel configuration extraction strategy to efficiently gather metadata and user feedback for all GPT apps, as well as configurations for the top 10,000 popular apps. The findings reveal that user enthusiasm for GPT apps consistently rises, while creator interest plateaus within three months of GPTs’ launch. Nearly 90% of system prompts can be easily accessed due to widespread failure to secure GPT app configurations, leading to considerable plagiarism and duplication among apps. The study highlights the need to enhance the LLM app ecosystem by app stores, creators, and users. The research also identifies significant concentration of user engagement within a small subset of available apps, indicating that most apps are underutilized. The high success rate of configuration extraction exposes the vulnerabilities of GPT apps, indicating a lack of sufficient protective mechanisms. Notable instances of plagiarism within the GPT app store have raised copyright issues. The study provides a detailed dataset on GPT apps, which is intended to be open-sourced if the paper is accepted. The research does not raise any ethical issues. The findings have significant implications for GPT users, creators, and app stores. For GPT app stores, the study recommends implementing continuous monitoring to notify GPT creators of potential leaks and duplication issues. For GPT creators, they should be aware of the importance of protecting GPT configurations. For users, the study highly advises them to actively provide feedback and ratings. The study's contributions include the first extensive monitoring and analysis of the LLM app ecosystem, a novel TriLevel GPT app configuration extraction approach, and a detailed dataset on GPT apps. The study also explores the prevalence of plagiarism among GPT apps, highlighting the need for strategies to prevent this phenomenon. The findings suggest that the ease of accessing GPT app configurations has led to a notable issue of plagiarism within the current GPT app store. The study recommends implementing more rigorous review processes and enabling users and developers to easily report suspected plagiarism to help app stores quickly identify and take action against such apps.