From COBIT to ISO 42001: Evaluating Cybersecurity Frameworks for Opportunities, Risks, and Regulatory Compliance in Commercializing Large Language Models

February 27, 2024 | Timothy R. McIntosh, Teo Susnjak, Tong Liu, Paul Watters, Raza Nowrozy, Malka N. Halgamuge
This study evaluates the readiness of four leading cybersecurity Governance, Risk, and Compliance (GRC) frameworks—NIST CSF 2.0, COBIT 2019, ISO 27001:2022, and ISO 42001:2023—to address the opportunities, risks, and regulatory compliance associated with integrating Large Language Models (LLMs) into cybersecurity operations. Using qualitative content analysis and expert validation, the study identifies gaps in these frameworks, particularly in their ability to manage LLM-specific risks and to ensure compliance with emerging regulations such as the EU AI Act. While ISO 42001:2023 is the most comprehensive in addressing LLM opportunities, COBIT 2019 aligns most closely with the EU AI Act. The findings suggest that all four frameworks require improvements to address LLM-related risks more effectively and comprehensively and to ensure regulatory compliance. To support secure and compliant LLM integration, the study proposes integrating human-expert-in-the-loop validation processes and argues that cybersecurity GRC frameworks must be updated continuously to keep pace with a dynamic technological landscape. The research contributes an academic evaluation of the preparedness of leading cybersecurity frameworks for LLM integration, highlighting the need for a multi-dimensional approach to LLM risks and compliance with emerging regulations, and identifies gaps in risk oversight along with the need for more detailed controls to manage LLM hallucination risks.
The analysis underscores the urgency of framework modernization to address the risks and compliance issues associated with emerging AI technologies while capitalizing on the opportunities of their adoption and integration through improved regulatory compliance and secure LLM guidelines. The study recommends continuous evolution of cybersecurity standards to keep pace with rapid technological changes such as LLMs.