This paper addresses current challenges and opportunities in the field of Explainable Artificial Intelligence (XAI). It identifies two complementary cultures within XAI: BLUE XAI, which focuses on human- and value-oriented explanations, and RED XAI, which emphasizes model- and validation-oriented explanations. The paper argues that RED XAI, despite being under-explored, holds significant potential for ensuring the safety and reliability of AI systems, and it highlights the need for new methods to question and debug models, to extract knowledge from well-performing models, and to identify and fix bugs in faulty ones. The paper also discusses common misconceptions about XAI, such as the beliefs that interpretability is binary, that a single silver-bullet explanation method exists, and that user studies alone suffice for validating explanations. It proposes several challenges for future research, including the construction of complementary explanations; the development of benchmarks, tools, and standards; and the adoption of an explorer mindset in model validation. The paper concludes by emphasizing that addressing these challenges is essential to advancing the field of trustworthy machine learning.