Robust Loss Functions under Label Noise for Deep Neural Networks

27 Dec 2017 | Aritra Ghosh, Himanshu Kumar, P. S. Sastry
This paper investigates robust loss functions for deep neural networks in the presence of label noise. The authors derive theoretical conditions under which a loss function is inherently noise-tolerant in multiclass classification, and show that the mean absolute error (MAE) loss satisfies them. Experiments demonstrate that risk minimization with MAE is more robust to label noise than risk minimization with other common losses such as categorical cross entropy (CCE) and mean squared error (MSE). The paper also provides theoretical results on the robustness of risk minimization under different types of label noise, including symmetric, simple non-uniform, and class-conditional noise, and shows that MAE satisfies the sufficient conditions for robustness under each of these noise scenarios. Comparisons of the loss functions on real-world datasets show that MAE maintains high accuracy even under high rates of label noise, supporting risk minimization with MAE as a viable approach for learning neural networks from noisily labeled data.
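The key property behind MAE's noise tolerance is symmetry: summing the loss over all possible labels gives a constant that does not depend on the network's prediction, whereas for CCE this sum varies with the prediction. The sketch below (illustrative NumPy code, not from the paper) checks this numerically for a one-hot target: MAE against class j equals 2(1 - p_j), so the sum over k classes is always 2(k - 1).

```python
import numpy as np

def mae_loss(p, j):
    """Mean absolute error between prediction vector p and the
    one-hot encoding of class j: ||e_j - p||_1 = 2 * (1 - p[j])."""
    e = np.zeros_like(p)
    e[j] = 1.0
    return np.abs(e - p).sum()

def cce_loss(p, j):
    """Categorical cross entropy for class j: -log p[j]."""
    return -np.log(p[j])

# Any softmax output over k = 4 classes works here.
p = np.array([0.7, 0.1, 0.1, 0.1])
k = len(p)

mae_sum = sum(mae_loss(p, j) for j in range(k))
cce_sum = sum(cce_loss(p, j) for j in range(k))

print(mae_sum)  # 2 * (k - 1) = 6.0, independent of p: MAE is symmetric
print(cce_sum)  # varies with p: CCE is not symmetric
```

Repeating the check with a different probability vector leaves `mae_sum` unchanged but shifts `cce_sum`, which is the intuition for why minimizing MAE risk is unaffected by symmetric label noise while minimizing CCE risk is not.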