This paper introduces MSMT17, a new large-scale person Re-Identification (ReID) dataset that addresses the limitations of existing benchmarks by capturing complex scenes, substantial lighting variation, and a large number of identities. MSMT17 was collected with a 15-camera network deployed in both indoor and outdoor environments over a long period, so it covers diverse lighting conditions. With 4,101 identities and 126,441 bounding boxes, it is the largest and most challenging public dataset for person ReID.

The paper also proposes the Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap between different ReID datasets. PTGAN transfers person images from one dataset to another, adapting their style to the target domain while preserving each person's identity. Experiments on several datasets, including MSMT17, show that training on transferred images significantly improves ReID accuracy, indicating that PTGAN effectively reduces the domain gap and can be used to train models for new testing domains with limited labeled data. Together, the proposed dataset and method advance person ReID research by providing a more realistic, challenging benchmark and a novel approach to the domain-gap problem.
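The core idea of identity preservation during transfer can be illustrated with a toy loss term: alongside the adversarial style-transfer loss, an identity loss penalizes changes to the person (foreground) region while leaving the background free to adapt. Below is a minimal, hypothetical sketch, not the paper's actual implementation; scalar lists stand in for image tensors, and the function name and values are illustrative only.

```python
# Hypothetical sketch of an identity-preserving loss term, in the spirit
# of PTGAN: penalize foreground (person) pixels that change during style
# transfer, while background pixels may change freely.

def identity_loss(original, transferred, mask):
    """Mean masked squared difference between original and transferred
    'pixels'. mask[i] is 1.0 for foreground (person) pixels and 0.0 for
    background pixels, so only changes to the person are penalized."""
    assert len(original) == len(transferred) == len(mask)
    n = len(original)
    return sum(m * (t - o) ** 2
               for o, t, m in zip(original, transferred, mask)) / n

# Toy example: the first pixel is background (mask 0) and changes a lot
# at no cost; the last pixel is foreground (mask 1) and its change is
# penalized.
orig  = [0.2, 0.8, 0.5, 0.9]
trans = [0.6, 0.8, 0.5, 0.7]
mask  = [0.0, 1.0, 1.0, 1.0]

loss = identity_loss(orig, trans, mask)
```

In the full model this term would be weighted against the adversarial style loss, so the generator adapts backgrounds and lighting to the target domain without altering the person's appearance.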