2 Jan 2024 | Simiao Zhang, Jiaping Wang, Guoliang Dong, Jun Sun, Yueling Zhang, Geguang Pu
The paper introduces AISD (AI-aided Software Development), a prototype designed to automate small program construction using large language models (LLMs). AISD aims to free software engineers from low-level coding by generating detailed use cases, system designs, and implementations based on high-level user requirements. Unlike existing approaches, AISD emphasizes user engagement throughout the development process, particularly during requirement engineering and system testing. The system generates initial use cases and system designs, which are refined through user feedback. AISD then automatically implements the system and performs iterative testing, allowing users to validate and refine the prototype. The paper evaluates AISD using a novel benchmark called CAASD (Capability Assessment of Automatic Software Development), which assesses the quality and completeness of system implementations. Experimental results show that AISD achieves a 75.2% pass rate with significantly fewer tokens than the baselines, highlighting the importance of human engagement in AI-aided software development. The study concludes that AISD can significantly improve software development efficiency and effectiveness, suggesting a future where software engineering may focus more on requirement engineering and system testing.
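To make the described workflow concrete, here is a minimal sketch of an AISD-style loop: use cases and a system design are drafted by an LLM and refined with user feedback, then an implementation is generated and repaired across iterative test rounds. The helper names (`ask_llm`, `refine_with_user`), prompts, and loop structure are illustrative assumptions, not the paper's actual implementation.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM chat-completion API; plug in your own client."""
    raise NotImplementedError("Connect this to an LLM provider of your choice.")


def refine_with_user(artifact: str, label: str) -> str:
    """Show a generated artifact to the user and revise it until it is accepted."""
    while True:
        print(f"\n--- Proposed {label} ---\n{artifact}")
        feedback = input(f"Accept {label}? (Enter to accept, or describe changes): ").strip()
        if not feedback:
            return artifact
        artifact = ask_llm(
            f"Revise the {label} below according to this feedback.\n"
            f"Feedback: {feedback}\n\n{artifact}"
        )


def aisd_workflow(requirement: str, max_test_rounds: int = 5) -> str:
    # 1. Requirement engineering: draft use cases, refined with the user in the loop.
    use_cases = refine_with_user(
        ask_llm(f"Write detailed use cases for this requirement:\n{requirement}"),
        "use cases",
    )

    # 2. System design, also subject to user feedback.
    design = refine_with_user(
        ask_llm(f"Produce a concise system design for these use cases:\n{use_cases}"),
        "system design",
    )

    # 3. Implementation plus iterative testing: the user exercises the prototype
    #    and reports failures, which are fed back to the LLM for repair.
    code = ask_llm(f"Implement the system described by this design:\n{design}")
    for _ in range(max_test_rounds):
        print(f"\n--- Current implementation ---\n{code}")
        bug_report = input("Run the prototype; describe any failure (Enter if it works): ").strip()
        if not bug_report:
            break
        code = ask_llm(
            f"Fix the code so it satisfies the use cases.\n"
            f"Bug report: {bug_report}\n\nUse cases:\n{use_cases}\n\nCode:\n{code}"
        )
    return code


if __name__ == "__main__":
    print(aisd_workflow("A command-line to-do list manager with add, list, and done commands."))
```

The sketch keeps the human in exactly the two places the paper emphasizes, requirement engineering and system testing, while the design and coding steps run without intervention.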