2024 | Andrea Cartolano, Alfredo Cuzzocrea, Giovanni Pilato
This paper explores the application of Explainable AI (XAI) to smart agriculture, focusing on a multiclass classification problem over the Crop Recommendation dataset. The task is to predict the most suitable crop from a set of features: soil nitrogen, phosphorus, and potassium content, temperature, humidity, pH, and rainfall. Two prominent XAI approaches, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations), are employed to interpret and explain the models' predictions. Both methods produce visualizations that let users follow the logic behind a prediction without delving into the underlying mathematics.
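As a rough illustration of the task setup, the following sketch loads the dataset and prepares a train/test split. The file name and column names are assumptions taken from the publicly available Kaggle version of the Crop Recommendation dataset; the paper's actual preprocessing is not reported in this summary.

```python
# Minimal sketch (not the authors' code) of preparing the Crop Recommendation data.
# File name and column names are assumed from the public Kaggle release of the dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv("Crop_recommendation.csv")  # assumed file name

feature_cols = ["N", "P", "K", "temperature", "humidity", "ph", "rainfall"]
X = df[feature_cols]

label_encoder = LabelEncoder()
y = label_encoder.fit_transform(df["label"])  # crop names encoded as integer classes

# Hold-out split; the paper's actual split ratio and random seed are not specified here.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
```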
The paper discusses why interpretability matters in smart agriculture, where the decisions made by AI systems can have significant consequences for farmers and agronomists. It highlights the challenges and limitations of current XAI approaches, such as computational cost and potential biases, and details the experimental setup, in which five models are trained and tested: Extreme Gradient Boosting, a Multi-Layer Perceptron, and three variants of Support Vector Machine.
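Continuing the sketch above, one plausible way to set up the five-model comparison is shown below. The three SVM variants are assumed to differ by kernel (linear, RBF, polynomial), and all hyperparameters are illustrative rather than those used in the paper.

```python
# Sketch continuing from the data preparation above; hyperparameters are illustrative only.
from xgboost import XGBClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

models = {
    "XGBoost": XGBClassifier(eval_metric="mlogloss"),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=42)),
    # The three SVM variants are assumed to use different kernels; the paper may differ.
    "SVM-linear": make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True)),
    "SVM-rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "SVM-poly": make_pipeline(StandardScaler(), SVC(kernel="poly", probability=True)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```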
The results show that both SHAP and LIME effectively explain the models' predictions, particularly in identifying common misclassifications, such as rice being misidentified as jute under heavy rainfall. The visualizations these tools provide give a deeper understanding of model behavior and of which features drive individual predictions.
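To make the explanation step concrete, the sketch below (again continuing from the code above) produces a global SHAP summary for the XGBoost model and a local LIME explanation for one test instance. This is the kind of query that would surface a rice sample being pushed toward jute by a high rainfall value; the specific instances and plots analyzed in the paper are not reproduced here.

```python
# Sketch continuing from the training code above; not the authors' exact analysis.
import shap
from lime.lime_tabular import LimeTabularExplainer

xgb = models["XGBoost"]

# Global view: per-feature SHAP attributions over the test set
# (the return shape of shap_values varies across shap versions for multiclass models).
shap_explainer = shap.TreeExplainer(xgb)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# Local view: LIME explanation for one test instance, e.g. a misclassified sample
# (such as rice predicted as jute, per the paper's discussion of heavy rainfall).
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=feature_cols,
    class_names=label_encoder.classes_,
    mode="classification",
)
i = 0  # index of the instance to explain (illustrative)
exp = lime_explainer.explain_instance(
    X_test.values[i], xgb.predict_proba, num_features=7, top_labels=1
)
print(exp.as_list(label=exp.available_labels()[0]))
```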
The paper concludes by discussing the practical implications and challenges of integrating XAI into agricultural practices, emphasizing the need for seamless integration with existing systems, infrastructure requirements, and training for stakeholders. It also outlines future research directions, including the exploration of different XAI approaches and their application in multidisciplinary fields.