ChIRAAG is a novel framework that leverages Large Language Models (LLMs) to generate SystemVerilog Assertions (SVAs) from natural-language design specifications. The framework first restructures the design specifications into a standardized JSON format, which is then passed to an LLM to produce SVAs. The generated assertions are iteratively refined using simulation logs and error messages to ensure correctness, with manual intervention reserved for cases the LLM cannot resolve or where implementation bugs are detected. Evaluated on six OpenTitan designs, ChIRAAG produced correct SVAs with only 27% of the generated assertions requiring refinement, and assertions were generated in under 15 seconds. The framework is implemented with OpenAI's GPT-4 model, which provides a large context window and training data up to December 2023. ChIRAAG also generated more assertions than those shipped with the OpenTitan designs and detected implementation bugs, demonstrating its effectiveness in formal and assertion-based verification. The results indicate that LLMs can effectively assist in generating functional assertions and can significantly reduce the time and effort of assertion writing, although further refinement with domain-specific LLMs may be needed for specialized applications.
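To make the flow concrete, the sketch below illustrates the generate-then-refine loop described above: a specification item rendered as JSON is sent to GPT-4 to obtain an SVA, and a failing assertion is sent back together with the simulation log for repair. This is a minimal illustration under stated assumptions, not the paper's implementation: the JSON field names, prompt wording, and helper functions (`generate_sva`, `refine_sva`) are hypothetical, and only the standard OpenAI chat-completions API is assumed.

```python
# Minimal sketch of a ChIRAAG-style flow. The JSON schema, prompt text, and
# function names here are assumptions for illustration; the paper does not
# publish its exact schema or prompts.
import json
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

# Hypothetical JSON rendering of one natural-language specification item.
spec = {
    "module": "timer_core",                                # assumed field names
    "signals": ["clk_i", "rst_ni", "active_o", "tick_o"],
    "property": "When the timer is inactive, no tick pulse may be produced.",
}

def generate_sva(spec_item: dict) -> str:
    """Ask the LLM to translate a JSON-formatted spec item into an SVA."""
    prompt = (
        "Translate the following design specification, given as JSON, into a "
        "SystemVerilog Assertion (SVA). Return only the assertion code.\n"
        + json.dumps(spec_item, indent=2)
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def refine_sva(sva: str, sim_log: str) -> str:
    """Feed simulator errors back to the LLM to repair a failing assertion."""
    prompt = (
        "The following SVA failed in simulation. Fix it using the log below "
        "and return only the corrected assertion.\n"
        f"Assertion:\n{sva}\n\nSimulation log:\n{sim_log}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

In such a loop, an assertion that compiles and passes simulation is accepted as-is, while simulator feedback is looped back through `refine_sva`; per the reported results, roughly 27% of generated assertions needed at least one such refinement pass, and unresolvable failures fall back to manual review.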