5 May 2019 | Mengye Ren, Wenyuan Zeng, Bin Yang, Raquel Urtasun
This paper proposes a novel meta-learning algorithm for reweighting training examples to improve the robustness of deep learning models. The method learns to assign weights to training examples based on their gradient directions, and determines the example weights by performing a meta gradient descent step on the current mini-batch example weights to minimize the loss on a clean unbiased validation set. The proposed method can be easily implemented on any type of deep network, does not require any additional hyperparameter tuning, and achieves impressive performance on class imbalance and corrupted label problems where only a small amount of clean validation data is available.
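In the paper's notation (roughly; per-example training losses $f_i$ over a mini-batch of size $n$, clean validation losses $f^v_j$ over $M$ examples, perturbation weights $\varepsilon_i$ initialised to zero, learning rate $\alpha$), one reweighting step takes a differentiable SGD step on the $\varepsilon$-weighted mini-batch loss and then measures how each weight affects the validation loss:

$$\hat{\theta}_{t+1}(\varepsilon) = \theta_t - \alpha \nabla_\theta \sum_{i=1}^{n} \varepsilon_i \, f_i(\theta) \Big|_{\theta_t}, \qquad u_i = -\frac{\partial}{\partial \varepsilon_i} \, \frac{1}{M} \sum_{j=1}^{M} f^v_j\big(\hat{\theta}_{t+1}(\varepsilon)\big) \Big|_{\varepsilon = 0}$$

The weights are then rectified and normalised, $w_i = \max(u_i, 0) / \sum_j \max(u_j, 0)$ (taken as zero when the denominator vanishes), and the actual parameter update uses the $w$-weighted mini-batch loss.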
The paper discusses the challenges of training deep neural networks on biased or noisy datasets, and presents a meta-learning approach that dynamically adjusts example weights during training. This approach leverages a small validation set to guide the training process and adaptively assigns importance weights to examples in every iteration. The method is shown to significantly increase the robustness to training set biases in both class imbalance and noisy label scenarios.
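For concreteness, here is a minimal PyTorch sketch of one such step; the functional model, the helper names (`reweight_step`, `linear`), and the learning rate are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def reweight_step(params, forward_fn, x_train, y_train, x_val, y_val, lr=0.1):
    # Perturbation weights, one per training example, initialised to zero.
    eps = torch.zeros(x_train.size(0), requires_grad=True)

    # Weighted training loss on the current mini-batch.
    losses = F.cross_entropy(forward_fn(params, x_train), y_train, reduction="none")
    weighted_loss = (eps * losses).sum()

    # Differentiable one-step SGD update of the parameters
    # (create_graph=True keeps the graph so we can differentiate through it).
    grads = torch.autograd.grad(weighted_loss, params, create_graph=True)
    updated = [p - lr * g for p, g in zip(params, grads)]

    # Clean validation loss under the updated parameters.
    val_loss = F.cross_entropy(forward_fn(updated, x_val), y_val)

    # Meta-gradient w.r.t. the example weights; negated so larger values
    # mean "upweighting this example lowers the validation loss".
    u = -torch.autograd.grad(val_loss, eps)[0]

    # Rectify and normalise.
    w = torch.clamp(u, min=0.0)
    return w / w.sum() if w.sum() > 0 else w

# Toy functional model so parameters can be swapped for updated copies.
def linear(params, x):
    W, b = params
    return x @ W + b

W = torch.randn(10, 3, requires_grad=True)
b = torch.zeros(3, requires_grad=True)
weights = reweight_step([W, b], linear,
                        torch.randn(32, 10), torch.randint(0, 3, (32,)),
                        torch.randn(16, 10), torch.randint(0, 3, (16,)))
```

The returned `weights` would then scale the per-example losses in the real update of `params`, so examples that hurt validation performance receive zero weight.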
The paper also provides a detailed analysis of the convergence properties of the reweighting method, showing that it converges to a critical point of the validation loss under mild conditions. The method is implemented using automatic differentiation and is shown to be effective on standard benchmarks such as MNIST and CIFAR, outperforming existing methods in both class imbalance and noisy label settings. The results demonstrate that the proposed method is less affected by changes in the noise type and achieves better performance on both CIFAR-10 and CIFAR-100. The method is also shown to resist overfitting to label noise and to benefit from even a small clean validation set.
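Paraphrasing the convergence guarantee in rough form (constants elided; it assumes a Lipschitz-smooth validation loss $G$ and bounded training-loss gradients), the smallest expected squared gradient norm decays as

$$\min_{0 \le t < T} \mathbb{E}\big[\|\nabla G(\theta_t)\|^2\big] \le \frac{C}{\sqrt{T}},$$

so in expectation the iterates approach a critical point of the validation loss; the constant $C$ depends on the smoothness and gradient-bound assumptions.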