The paper introduces R3CD (Relation-aware Compositional Contrastive Control Diffusion), a novel framework for generating images from scene graphs with large-scale diffusion models. R3CD addresses two shortcomings of prior scene-graph-to-image methods: their inability to capture precise interactions among entities and their tendency to generate incomplete entities. The framework consists of two main components: (1) SGFormer, a scene graph transformer that encodes both nodes and edges to capture local and global information, and (2) a relation-aware compositional contrastive control mechanism that operates on attention maps across denoising steps to guide image generation.

Evaluated on the Visual Genome and COCO-Stuff datasets, R3CD achieves superior performance in both quantitative metrics (IS, FID) and qualitative comparisons. Extensive ablation studies validate the contribution of each module, showing that SGFormer captures global and local information while the contrastive losses enforce spatial and interaction consistency. The proposed method outperforms existing models in generating realistic, diverse images that respect the scene-graph specification, especially for abstract relations.
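To make the contrastive-control idea more concrete, below is a minimal PyTorch sketch of a contrastive loss over per-entity cross-attention maps, where scene-graph edges supply positive pairs and unconnected entities supply negatives. This is an illustration under assumptions, not the paper's exact formulation; the names `attention_contrastive_loss`, `attn_maps`, `related_pairs`, `unrelated_pairs`, and `temperature` are hypothetical.

```python
# Sketch only: an InfoNCE-style contrastive loss over cross-attention maps,
# illustrating how relation-aware control might pull together attention of
# entities linked by a relation and push apart attention of unrelated entities.
# All names and shapes are assumptions, not the paper's actual interface.
import torch
import torch.nn.functional as F


def attention_contrastive_loss(attn_maps, related_pairs, unrelated_pairs, temperature=0.1):
    """attn_maps: (num_entities, H, W) cross-attention maps at one denoising step.
    related_pairs / unrelated_pairs: lists of (i, j) entity index pairs."""
    flat = F.normalize(attn_maps.flatten(1), dim=-1)  # (N, H*W), unit-normalized

    def pair_sim(pairs):
        i, j = zip(*pairs)
        # Cosine similarity between the two attention maps of each pair.
        return (flat[list(i)] * flat[list(j)]).sum(-1) / temperature

    pos = pair_sim(related_pairs)      # entities joined by a scene-graph edge
    neg = pair_sim(unrelated_pairs)    # entities with no edge between them
    # InfoNCE-style objective: each positive pair scored against all negatives.
    logits = torch.cat([pos.unsqueeze(1), neg.unsqueeze(0).expand(len(pos), -1)], dim=1)
    targets = torch.zeros(len(pos), dtype=torch.long, device=attn_maps.device)
    return F.cross_entropy(logits, targets)


# Toy usage: 4 entities with 16x16 attention maps, one relation edge (0, 1).
maps = torch.rand(4, 16, 16)
loss = attention_contrastive_loss(maps, related_pairs=[(0, 1)], unrelated_pairs=[(0, 2), (1, 3)])
```

In a diffusion setting, such a term would typically be added to the denoising objective and evaluated at selected timesteps, so the attention maps of related entities are steered toward consistent spatial arrangements as generation proceeds.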