Deep CORAL: Correlation Alignment for Deep Domain Adaptation

6 Jul 2016 | Baochen Sun* and Kate Saenko**
Deep CORAL is a method for unsupervised domain adaptation that extends the CORAL algorithm to align the second-order statistics of layer activations in deep neural networks. The original CORAL aligns the second-order statistics of the source and target distributions using a linear transformation applied to pre-extracted features. Deep CORAL instead introduces a differentiable CORAL loss that minimizes the difference between source and target feature correlations, effectively learning a nonlinear transformation that integrates seamlessly with deep CNNs.

The setting is unsupervised adaptation: labeled data is available in the source domain but not in the target domain. The network is initialized from a generic pre-trained model and fine-tuned end-to-end on the labeled source data. Training jointly minimizes the classification loss on the source domain and the CORAL loss, which penalizes the difference in second-order statistics between source and target feature activations, so the learned features are both discriminative and invariant to the domain shift. In the reported experiments the CORAL loss is applied to the activations of the last classification layer, but the method is flexible and can be applied to other layers or network architectures.

Evaluated on the standard Office benchmark, Deep CORAL achieves state-of-the-art accuracy, outperforming several existing methods, including CORAL, DDC, and DAN, across the different domain shifts. The experiments also show that the CORAL loss preserves strong classification accuracy on the source domain while improving performance on the target domain.
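The CORAL loss described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it computes the squared Frobenius norm of the difference between the source and target feature covariances, scaled by 1/(4d²) as in the paper, where d is the feature dimension.

```python
import numpy as np

def coral_loss(source, target):
    """CORAL loss between two batches of d-dimensional features.

    Squared Frobenius distance between the source and target
    feature covariance matrices, scaled by 1 / (4 d^2).
    """
    d = source.shape[1]
    # Sample covariance of each batch (rows are examples).
    c_source = np.cov(source, rowvar=False)
    c_target = np.cov(target, rowvar=False)
    return np.sum((c_source - c_target) ** 2) / (4.0 * d * d)
```

In training, this term would be weighted and added to the source-domain classification loss (e.g. `total = ce_loss + lam * coral_loss(fs, ft)`, with `lam` a trade-off hyperparameter); in a deep-learning framework the same expression written with differentiable tensor ops makes the loss trainable by backpropagation.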
Overall, Deep CORAL provides a powerful and efficient approach to unsupervised domain adaptation in deep neural networks.