AI literacy and its implications for prompt engineering strategies

2024 | Nils Knoth, Antonia Tolzin, Andreas Janson, Jan Marco Leimeister
Artificial intelligence (AI) technologies are rapidly advancing, with large language models (LLMs) increasingly used in human-AI interaction. Prompt engineering, the skill of formulating precise instructions to elicit desired responses from LLMs, is essential for effective interaction. However, research on how non-experts use LLMs through prompt engineering, and on the role of AI literacy in this process, is limited. This study explores the relationship between non-experts' AI literacy and their prompt engineering skills, particularly in the context of higher education.

The study involved 45 university students, 17 of whom had no prior experience with generative AI. Participants completed two tasks using an LLM: creating a travel plan for Andorra and planning a scientific project on automated essay scoring. The quality of LLM outputs was assessed using an integrative complexity score, while the quality of prompt engineering was evaluated both quantitatively and qualitatively.

Quantitative analysis showed that higher-quality prompt engineering predicted higher-quality LLM outputs, supporting the hypothesis that prompt engineering is a critical skill for goal-directed use of generative AI. AI literacy was also found to influence prompt engineering behavior, with certain aspects of AI literacy positively affecting prompt quality, although the relationship was not consistent across all AI literacy subscales. Qualitative analysis of prompts revealed that students often used human-like communication styles, treating the AI as a conversational partner rather than a tool. Many prompts were formulated as questions, reflecting a tendency to view generative AI as an information repository rather than a content generator. This behavior may stem from a lack of awareness of generative AI's capabilities, leading non-experts to fall back on habits formed with familiar technologies.
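The question-versus-instruction distinction observed in the qualitative analysis can be illustrated with a minimal sketch. The heuristic classifier and the example prompts below are illustrative assumptions, not materials or methods from the study itself:

```python
def classify_prompt_style(prompt: str) -> str:
    """Crude heuristic: question-style prompts end with '?' or open with an
    interrogative word; everything else is treated as instruction-style.
    Illustrative only -- the study coded prompts qualitatively, not with rules."""
    interrogatives = {"what", "how", "which", "where", "when", "why",
                      "who", "can", "is", "are", "do", "does"}
    text = prompt.strip().lower()
    first_word = text.split(maxsplit=1)[0] if text else ""
    if text.endswith("?") or first_word in interrogatives:
        return "question"
    return "instruction"

# Question-style prompt: the AI is queried like an information repository.
q = "What are good places to visit in Andorra?"

# Instruction-style prompt: the AI is directed like a content generator,
# with role, constraints, and output format made explicit.
i = ("Act as a travel planner. Draft a 3-day Andorra itinerary for two "
     "hikers on a mid-range budget. Present it as a day-by-day table "
     "with activities, estimated costs, and travel times.")

print(classify_prompt_style(q))  # question
print(classify_prompt_style(i))  # instruction
```

The contrast shows why prompt style matters for goal-directed use: the instruction-style prompt constrains the output's role, scope, and format, whereas the question-style prompt leaves all of that to the model.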
The study highlights the importance of integrating AI literacy into higher education curricula so that students can use generative AI tools effectively. By fostering AI literacy, students can develop the skills needed for prompt engineering and interact with AI systems more productively. The findings suggest that AI literacy is a crucial competency for higher education and academic success, enabling students to navigate the complexities of AI technologies and their applications.