Learning Robust Global Representations by Penalizing Local Predictive Power

5 Nov 2019 | Haohan Wang, Songwei Ge, Eric P. Xing, Zachary C. Lipton
The paper introduces Patch-wise Adversarial Regularization (PAR), a method for training convolutional neural networks to focus on global image structure rather than local patterns. PAR penalizes the predictive power of local representations in earlier layers, forcing the network to rely on global concepts instead of superficial features such as color and texture. This improves generalization on both synthetic and benchmark domain adaptation tasks. The authors also introduce a new dataset, ImageNet-Sketch, consisting of sketch-like images that match the ImageNet classification validation set in categories and scale.

The method works by attaching a patch-wise classifier that predicts labels from local features, while simultaneously training the main network to fool this classifier. Because the adversarial term makes local patterns unreliable as a shortcut, the network is pushed to encode global concepts instead.

The paper evaluates the method on several datasets, including MNIST, CIFAR10, and PACS, and shows that PAR outperforms existing methods, especially when domain information is unavailable. The results show that PAR improves robustness to distribution shift and performs well on out-of-domain tasks. The authors also explore variants of PAR, including more powerful pattern classifiers, broader local patterns, and higher-level local concepts; these variants show differing levels of effectiveness across datasets. The study highlights the importance of learning global concepts for robust visual recognition, since local patterns may not transfer across domains. PAR is particularly effective on sketch-like images, where traditional methods struggle.
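The adversarial objective described above can be sketched numerically. The following is a minimal NumPy illustration, not the authors' implementation: the patch classifier is modeled as a hypothetical linear map shared across spatial positions (a 1x1-conv-style head), `L_global` is a placeholder for the main classifier's loss, and `lam` is an assumed trade-off weight. The feature extractor's objective subtracts the patch classifier's loss, so improving local predictiveness *hurts* the main objective.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over the class axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(probs, label):
    # per-position negative log-likelihood of the true label
    return -np.log(probs[..., label] + 1e-12)

rng = np.random.default_rng(0)
H, W, d, n_classes = 4, 4, 8, 3

# toy "local features": an H x W grid of d-dimensional patch vectors
# (standing in for one spatial slice of an early conv layer)
features = rng.normal(size=(H, W, d))

# hypothetical patch classifier: one weight matrix shared across positions
W_patch = rng.normal(size=(d, n_classes))
patch_probs = softmax(features @ W_patch)   # shape (H, W, n_classes)

label = 1
# patch loss: mean cross-entropy over all spatial positions
L_patch = cross_entropy(patch_probs, label).mean()

# placeholder global loss from the main classifier head (assumed value)
L_global = 0.9
lam = 0.1  # assumed trade-off weight

# PAR-style objective for the feature extractor: classify well globally
# while *increasing* the patch classifier's loss (adversarial term)
L_total = L_global - lam * L_patch
```

In training, the two players alternate: the patch classifier is updated to minimize `L_patch`, while the feature extractor is updated against `L_total`, so gradients from the patch head flow into the encoder with a reversed sign.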
The paper concludes that PAR offers a promising approach for building robust image classifiers that generalize well to out-of-domain data.