Evidential Deep Learning to Quantify Classification Uncertainty

31 Oct 2018 | Murat Sensoy, Lance Kaplan, Melih Kandemir
This paper proposes a method for quantifying classification uncertainty using the theory of subjective logic. Rather than producing point estimates through a softmax layer, the neural network's predictions are treated as subjective opinions: the network learns a function that collects evidence for each class, and this evidence parameterizes a Dirichlet distribution over the class probabilities. The result is a more detailed uncertainty model than traditional softmax outputs.

The approach builds on the Dempster-Shafer theory of evidence, which represents uncertainty through belief masses assigned to each class together with an overall uncertainty mass. Concretely, the network's non-negative outputs are interpreted as evidence; adding one to each evidence value yields the parameters of a Dirichlet distribution, from which the belief masses and the uncertainty mass follow directly.

The model is trained by integrating out the Dirichlet-distributed class probabilities from the classification loss. The variant used in the experiments decomposes into a prediction-error term and a variance term, and it is combined with a regularizer, motivated by information-theoretic complexity, that penalizes evidence not supporting the correct class via a KL divergence to the uniform Dirichlet distribution.

Evaluated on the MNIST and CIFAR datasets, the method yields more accurate uncertainty estimates than competing approaches, and it is particularly effective at detecting out-of-distribution queries and resisting adversarial perturbations.
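To make the construction concrete, the sketch below (assuming PyTorch; the function names are illustrative, not the authors') converts network outputs into evidence and Dirichlet parameters, derives the subjective-logic belief masses and uncertainty, and computes the expected sum-of-squares loss with its KL regularizer. The paper anneals the KL weight as min(1, t/10) over training epochs t.

```python
import torch
import torch.nn.functional as F

def dirichlet_parameters(logits):
    # Evidence must be non-negative; the paper uses a ReLU output layer.
    # Dirichlet parameters are alpha_k = e_k + 1.
    evidence = F.relu(logits)
    return evidence + 1.0

def subjective_opinion(alpha):
    # Subjective-logic opinion from Dirichlet parameters:
    # belief b_k = e_k / S and uncertainty u = K / S, with S = sum_k alpha_k,
    # so that u + sum_k b_k = 1.
    S = alpha.sum(dim=-1, keepdim=True)
    K = alpha.shape[-1]
    belief = (alpha - 1.0) / S
    uncertainty = K / S
    return belief, uncertainty

def kl_to_uniform_dirichlet(alpha):
    # KL( Dir(p | alpha) || Dir(p | <1, ..., 1>) ), the regularization term.
    K = alpha.shape[-1]
    S = alpha.sum(dim=-1, keepdim=True)
    return (torch.lgamma(S.squeeze(-1))
            - torch.lgamma(torch.tensor(float(K)))
            - torch.lgamma(alpha).sum(dim=-1)
            + ((alpha - 1.0) * (torch.digamma(alpha) - torch.digamma(S))).sum(dim=-1))

def edl_mse_loss(alpha, y, kl_weight):
    # Expected sum-of-squares error under Dir(p | alpha) decomposes into a
    # squared prediction-error term plus a predictive-variance term:
    #   sum_k (y_k - alpha_k/S)^2 + alpha_k (S - alpha_k) / (S^2 (S + 1)).
    S = alpha.sum(dim=-1, keepdim=True)
    p_hat = alpha / S
    err = ((y - p_hat) ** 2).sum(dim=-1)
    var = (p_hat * (1.0 - p_hat) / (S + 1.0)).sum(dim=-1)

    # Regularize only the misleading evidence: alpha_tilde removes the
    # evidence assigned to the true class before taking the KL term.
    alpha_tilde = y + (1.0 - y) * alpha
    kl = kl_to_uniform_dirichlet(alpha_tilde)
    return (err + var + kl_weight * kl).mean()

# Example usage with random data (K = 10 classes) and one-hot labels:
logits = torch.randn(32, 10)
y = F.one_hot(torch.randint(0, 10, (32,)), num_classes=10).float()
alpha = dirichlet_parameters(logits)
belief, u = subjective_opinion(alpha)
loss = edl_mse_loss(alpha, y, kl_weight=min(1.0, 3 / 10.0))  # e.g. epoch t = 3
```

Removing the true-class evidence before the KL term means the regularizer shrinks only evidence that would support an incorrect class, pushing the model toward the "I do not know" uniform Dirichlet on unfamiliar inputs instead of penalizing correct confidence.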