The paper introduces a novel training method for neural classifiers that improves their ability to detect out-of-distribution (OOD) samples while maintaining high classification accuracy. The key idea is to train classifiers to produce low-confidence predictions on OOD samples and confident predictions on in-distribution (ID) samples. This is achieved by adding two terms to the standard cross-entropy loss: a confidence term that minimizes the Kullback-Leibler (KL) divergence between the predictive distribution on OOD samples and the uniform distribution, and a term that couples the classifier to a generative adversarial network (GAN) used to generate effective OOD training samples. The GAN is designed to generate samples in the low-density region of the ID distribution, which helps the classifier learn to distinguish between ID and OOD samples. The method is evaluated on image datasets including CIFAR-10, SVHN, ImageNet, and LSUN, and shows significant improvements in detection performance over existing threshold-based detectors. The results demonstrate that the proposed method improves the classifier's confidence calibration, which in turn yields better OOD detection. The paper also provides visual interpretations of the method's effectiveness and discusses potential extensions to other tasks such as regression, Bayesian models, and semi-supervised learning.
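The KL-to-uniform confidence term can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the `beta` weight combining the two loss terms, the KL direction (predictive distribution toward uniform, as worded in the summary), and the use of plain Python lists in place of network outputs are all assumptions made for the example.

```python
import math

def kl_to_uniform(probs):
    """KL divergence from a predictive distribution p over K classes to
    the uniform distribution U: KL(p || U) = sum_i p_i * log(p_i * K).
    Zero when p is uniform; large when p is confidently peaked."""
    k = len(probs)
    return sum(p * math.log(p * k) for p in probs if p > 0)

def confidence_loss(id_probs, true_label, ood_probs, beta=1.0):
    """Hypothetical combined objective for one ID/OOD sample pair:
    standard cross-entropy on the ID sample, plus a beta-weighted
    penalty pushing the OOD prediction toward the uniform distribution."""
    cross_entropy = -math.log(id_probs[true_label])
    return cross_entropy + beta * kl_to_uniform(ood_probs)

# A confident prediction on an OOD input is penalized; a uniform
# (maximally uncertain) prediction incurs no penalty at all.
peaked = [0.97, 0.01, 0.01, 0.01]
uniform = [0.25, 0.25, 0.25, 0.25]
print(kl_to_uniform(peaked) > kl_to_uniform(uniform))  # True
print(abs(kl_to_uniform(uniform)) < 1e-12)             # True
```

Minimizing this penalty on OOD (or GAN-generated) inputs is what flattens the classifier's predictive distribution away from the training data, so that a simple confidence threshold separates ID from OOD samples.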