RCooper is a real-world, large-scale dataset for roadside cooperative perception, aimed at enhancing autonomous driving and traffic management. It comprises 50,000 images and 30,000 point clouds covering two typical traffic scenes, intersections and corridors, captured under diverse weather and lighting conditions. All data are manually annotated with 3D bounding boxes and trajectories, enabling the training and evaluation of cooperative perception approaches across multiple tasks, including detection, tracking, prediction, localization, counting, and monitoring. Corridors and intersections use different sensor configurations, and the dataset supports both single-infrastructure and cross-infrastructure cooperative perception, addressing practical challenges such as data heterogeneity, cooperative representation, and perception performance.

RCooper provides benchmarks for 3D object detection and tracking, on which cooperative perception methods using early, intermediate, and late fusion strategies are evaluated. The results show that cooperative perception improves detection and tracking performance, particularly in complex scenarios. The dataset is publicly released to facilitate further research in roadside cooperative perception and to support the development of practical roadside perception systems.
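To illustrate one of the evaluated fusion strategies, the sketch below shows a minimal late-fusion step: each infrastructure unit produces its own detections, which are pooled in a shared world frame and deduplicated. The function name, the `(x, y, z, score)` detection format, and the center-distance suppression rule are all illustrative assumptions, not the benchmark's actual implementation (which would typically use BEV IoU-based non-maximum suppression over full 3D boxes).

```python
import numpy as np

def late_fuse(detections_per_agent, dist_thresh=2.0):
    """Merge 3D detections from multiple roadside units (late fusion sketch).

    Each agent contributes detections already transformed into a shared
    world frame, as (x, y, z, score) tuples. A detection whose center lies
    within `dist_thresh` metres of a higher-scoring kept detection is
    suppressed -- a simplified stand-in for BEV non-maximum suppression.
    """
    # Pool all agents' detections and sort by descending confidence.
    pooled = sorted(
        (d for dets in detections_per_agent for d in dets),
        key=lambda d: d[3],
        reverse=True,
    )
    kept = []
    for x, y, z, score in pooled:
        # Keep only detections that do not duplicate an already-kept one.
        if all(np.hypot(x - kx, y - ky) > dist_thresh for kx, ky, _, _ in kept):
            kept.append((x, y, z, score))
    return kept

# Hypothetical example: two infrastructure units observe the same
# intersection; the first detection of each agent is the same vehicle.
agent_a = [(10.0, 5.0, 0.5, 0.9), (30.0, -2.0, 0.4, 0.7)]
agent_b = [(10.5, 5.2, 0.5, 0.8)]
fused = late_fuse([agent_a, agent_b])
print(len(fused))  # the duplicated vehicle is merged into one detection
```

Late fusion of this kind exchanges only final detections between units, which keeps communication cost low; intermediate fusion instead shares learned features, and early fusion shares raw sensor data, trading bandwidth for accuracy.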