Can Biases in ImageNet Models Explain Generalization?


1 Apr 2024 | Paul Gavrikov, Janis Keuper
Paul Gavrikov and Janis Keuper from Offenburg University investigate whether biases in ImageNet models, such as shape bias, spectral bias, and critical band bias, can explain generalization performance. They analyze 48 ImageNet models trained with different methods to assess how these biases interact with generalization across benchmarks covering in-distribution (ID) accuracy, robustness, conceptual changes, and adversarial robustness. Their findings reveal that these biases are insufficient to predict generalization holistically: while some biases correlate with generalization, many do not, and some even correlate negatively with measures of alignment with human perception. The research highlights the complexity of generalization in neural networks and suggests that promoting individual biases alone is not sufficient to improve it. The authors emphasize the need for further research into how these biases affect generalization and into developing more robust models. All checkpoints and evaluation code are available at https://github.com/paulgavrikov/biases_vs_generalization/.
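To make the kind of analysis described above concrete, here is a minimal, hypothetical sketch of a bias-vs-generalization correlation study: relating a per-model bias score (e.g., shape bias) to accuracy on a generalization benchmark across a set of models. The model names and numbers below are illustrative placeholders, not results from the paper, and the exact evaluation pipeline used by the authors lives in their released code.

```python
# Hypothetical sketch: does a per-model bias score predict a per-model
# generalization score? All values below are made-up placeholders; the
# paper runs this kind of comparison over 48 ImageNet models, several
# bias measures, and multiple benchmark families.
from scipy.stats import spearmanr

# Placeholder bias scores (e.g., shape bias in [0, 1]) per model.
shape_bias = {"resnet50": 0.21, "vit_b16": 0.41, "convnext_b": 0.30}
# Placeholder accuracies on some out-of-distribution benchmark.
ood_accuracy = {"resnet50": 0.36, "vit_b16": 0.52, "convnext_b": 0.47}

# Rank correlation across models: a high |rho| would suggest the bias
# is predictive of generalization on this particular benchmark.
models = sorted(shape_bias)
rho, p = spearmanr(
    [shape_bias[m] for m in models],
    [ood_accuracy[m] for m in models],
)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

Repeating this comparison over many models, bias measures, and benchmarks is what lets the authors conclude that no single bias predicts generalization holistically: a bias can correlate with one benchmark family while being uninformative, or even negatively correlated, on another.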