This paper proposes a method to bootstrap large language models (LLMs) for radiology report generation (RRG). The approach combines in-domain instance induction with a coarse-to-fine decoding process to align an LLM with the medical domain and improve report quality. In-domain instance induction aligns the LLM with radiology reports through contrastive learning, while coarse-to-fine decoding refines intermediate reports using visual features and refinement prompts.

The method is evaluated on two benchmark datasets, IU X-RAY and MIMIC-CXR, where it outperforms existing state-of-the-art solutions on both natural language generation (NLG) and clinical efficacy (CE) metrics. The results show that the induction process enables the LLM to better align with the medical domain, and that coarse-to-fine generation yields more precise text; notably, even limited in-domain data suffices for effective domain adaptation and generation optimization. The method is implemented on MiniGPT-4 as the base model, with three main components: visual encoding, in-domain instance induction, and coarse-to-fine decoding. Overall, the study highlights the importance of incorporating domain-specific knowledge and refining the generation process to produce accurate and informative radiology reports.
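The summary states that in-domain instance induction aligns the LLM with radiology reports via contrastive learning, but does not specify the objective. A minimal sketch, assuming a standard InfoNCE-style contrastive loss over paired embeddings (the function name, temperature value, and use of paired text/report embeddings are assumptions, not details from the paper):

```python
import numpy as np

def info_nce_loss(text_emb, report_emb, temperature=0.07):
    """InfoNCE-style contrastive loss: row i of text_emb is the positive
    pair of row i of report_emb; all other rows are negatives.
    (Hypothetical sketch of the paper's contrastive alignment.)"""
    # L2-normalize so the dot product is cosine similarity
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    r = report_emb / np.linalg.norm(report_emb, axis=1, keepdims=True)
    logits = (t @ r.T) / temperature          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; minimize their negative log-likelihood
    return -np.mean(np.diag(log_probs))
```

With matched pairs the loss is near zero; shuffling the report embeddings (breaking the pairing) drives it up, which is the gradient signal that pulls the LLM's representations toward the in-domain reports.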
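The coarse-to-fine decoding step described above can be pictured as an iterative loop: draft a coarse report, then repeatedly prompt the model to refine it against the visual features. A minimal sketch, in which `generate_fn`, the prompt wording, and the fixed number of refinement rounds are all hypothetical stand-ins for the paper's actual procedure:

```python
def coarse_to_fine_generate(generate_fn, visual_features, rounds=2):
    """Hypothetical coarse-to-fine decoding loop.

    generate_fn(prompt, visual_features) -> str is a stand-in for a
    multimodal LLM call (e.g. a MiniGPT-4-style model); the prompts
    below are illustrative, not the paper's refinement prompts.
    """
    # Coarse pass: produce an initial draft report from the image features
    report = generate_fn("Describe the findings in this radiograph.",
                         visual_features)
    # Fine passes: feed the draft back with a refinement prompt
    for _ in range(rounds):
        prompt = ("Refine the following draft report using the image "
                  "features; correct omissions and inaccuracies.\n"
                  "Draft: " + report)
        report = generate_fn(prompt, visual_features)
    return report
```

The design point is that each round conditions on both the visual features and the previous draft, so later passes can fix errors the coarse pass made rather than generating from scratch.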