Learning from Noisy Labels with Deep Neural Networks: A Survey


10 Mar 2022 | Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee
This survey presents a comprehensive review of robust training methods for deep neural networks (DNNs) in the presence of noisy labels, which can severely degrade a DNN's generalization performance. It categorizes 62 state-of-the-art robust training methods into five groups based on their methodological differences and evaluates them against six properties. The survey also analyzes noise rate estimation, summarizes typical evaluation methodologies, including public noisy datasets and evaluation metrics, and highlights promising directions for future research.

The paper begins by framing learning with label noise from a supervised learning perspective and analyzes the types of label noise, distinguishing instance-independent from instance-dependent noise. It then reviews non-deep-learning approaches for managing noisy labels, such as data cleaning, surrogate losses, probabilistic methods, and model-based methods.

The survey's core covers five families of deep learning approaches to robust training: robust architecture, robust regularization, robust loss functions, loss adjustment, and sample selection. Robust architectures include noise adaptation layers and dedicated architectures; loss adjustment spans loss correction, loss reweighting, label refurbishment, and meta-learning. Finally, sample selection methods aim to identify true-labeled examples within noisy training data.

The paper concludes by emphasizing the importance of robust training for DNNs under label noise and the need for further research in this area.
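To make the sample selection idea concrete: many such methods rely on the small-loss criterion, treating examples with small training loss as likely clean, since mislabeled examples tend to incur larger loss early in training. Below is a minimal NumPy sketch; the function name and the toy numbers are illustrative, not from the survey:

```python
import numpy as np

def small_loss_selection(losses, noise_rate):
    """Keep the (1 - noise_rate) fraction of examples with the smallest
    loss, treating them as likely true-labeled (the small-loss trick)."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    keep = np.argsort(losses)[:n_keep]  # indices of the smallest losses
    return np.sort(keep)

# Toy per-example cross-entropy losses: the large values mimic
# mislabeled examples that the network has not memorized yet.
losses = np.array([0.10, 2.50, 0.30, 0.20, 3.10, 0.15])
clean_idx = small_loss_selection(losses, noise_rate=0.5)
# keeps the three smallest-loss examples: indices 0, 3, 5
```

In co-training-style methods covered by the survey (e.g., Co-teaching), two networks each select their own small-loss examples and feed them to the peer network for its parameter update.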
The survey provides a comprehensive overview of the current state of research on robust training methods for DNNs and serves as a guide for future studies.
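Among the loss adjustment techniques the survey discusses, forward loss correction adjusts training by pushing the model's predictions through an estimated label-noise transition matrix before computing cross entropy. A minimal NumPy sketch under instance-independent (symmetric) noise, with illustrative function names and noise rate:

```python
import numpy as np

def symmetric_transition_matrix(num_classes, noise_rate):
    """T[i, j] = P(observed label = j | true label = i) under
    instance-independent (symmetric) label noise."""
    off_diag = noise_rate / (num_classes - 1)
    T = np.full((num_classes, num_classes), off_diag)
    np.fill_diagonal(T, 1.0 - noise_rate)
    return T

def forward_corrected_loss(probs, noisy_labels, T):
    """Forward correction: map the clean-class posterior through the
    noise process (p_noisy = p @ T), then take cross entropy against
    the observed (noisy) labels."""
    adjusted = probs @ T
    rows = np.arange(len(noisy_labels))
    return -np.log(adjusted[rows, noisy_labels])

T = symmetric_transition_matrix(num_classes=3, noise_rate=0.2)
probs = np.array([[1.0, 0.0, 0.0]])  # model is certain of class 0
loss = forward_corrected_loss(probs, np.array([0]), T)
# loss[0] == -log(0.8): the correction accounts for the 20% flip chance
```

Estimating T accurately is the hard part in practice, which is why the survey treats noise rate estimation as its own topic.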