November 1994 | Francis X. Diebold, Roberto S. Mariano
This paper by Francis X. Diebold and Roberto S. Mariano proposes and evaluates tests for comparing the predictive accuracy of two competing forecasts. The authors address the limitations of existing tests, which typically assume quadratic loss and ignore realistic features of forecast errors such as non-Gaussianity, non-zero mean, and serial and contemporaneous correlation. They introduce a wide range of accuracy measures that can be tailored to specific decision-making contexts while allowing for non-Gaussian, non-zero-mean, serially correlated, and contemporaneously correlated forecast errors. The paper presents both asymptotic and exact finite-sample tests, evaluates their performance through Monte Carlo simulations, and provides an empirical example on exchange-rate forecasting. The results show that the proposed tests maintain approximately correct size under a variety of conditions and are more robust to violations of assumptions than existing tests. The authors conclude by discussing potential extensions and future research directions, emphasizing model selection, estimation, and prediction under the relevant loss function.
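The comparison the paper formalizes rests on the loss differential between the two forecasts: a series of per-period loss differences whose mean is tested against zero, with a standard error that accounts for serial correlation. Below is a minimal sketch of such a statistic, assuming squared-error loss and a simple truncated long-run-variance estimator; the function name and implementation details are illustrative, not code from the paper.

```python
import numpy as np

def dm_statistic(e1, e2, h=1):
    """Sketch of a Diebold-Mariano-style statistic under squared-error loss.

    e1, e2 : forecast-error arrays from the two competing forecasts.
    h      : forecast horizon; the loss differential is treated as at most
             MA(h-1), so h-1 sample autocovariances enter the variance.
    """
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2  # loss differential
    T = len(d)
    dbar = d.mean()
    # Long-run variance: gamma_0 plus twice the first h-1 autocovariances.
    lrv = np.mean((d - dbar) ** 2)
    for k in range(1, h):
        lrv += 2 * np.mean((d[k:] - dbar) * (d[:-k] - dbar))
    # Asymptotically standard normal under the null of equal accuracy.
    return dbar / np.sqrt(lrv / T)
```

A large positive value indicates the first forecast incurs higher loss; swapping the two error series simply flips the statistic's sign.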