This paper presents a method for testing whether two nested models have equal out-of-sample predictive accuracy, with the smaller model as the null hypothesis. The authors propose adjusting the mean squared prediction error (MSPE) of the larger model to account for the noise introduced by estimating parameters whose population values are zero under the null. They argue that comparing the resulting adjusted statistic to standard normal critical values yields actual sizes close to, though slightly below, the nominal size. Simulation evidence supports this claim, showing that the adjusted MSPE test performs well in both size and power relative to other methods. Comparing the MSPE-adjusted, MSPE-normal, and CCS tests, the authors find the MSPE-adjusted test to be the best sized and the most powerful. The paper concludes that the MSPE-adjusted test is a reliable method for comparing the predictive accuracy of nested models.
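
To make the adjustment concrete, the sketch below shows how such an MSPE-adjusted statistic is typically computed from two sets of out-of-sample forecasts: the larger model's squared error is reduced by the squared difference between the two models' forecasts, which removes the noise contributed by estimating parameters that are zero under the null. The function name `mspe_adjusted` and the simple (non-HAC) variance estimator are illustrative assumptions rather than the authors' implementation; multi-step forecasts would call for a serial-correlation-robust variance.

```python
import numpy as np
from scipy import stats

def mspe_adjusted(y, f_small, f_large):
    """One-sided test of equal predictive accuracy for nested models.

    A minimal sketch: the per-period loss differential subtracts the
    squared forecast difference from the large model's squared error,
    then a t-statistic on its mean is compared to standard normal
    critical values.
    """
    e_small = y - f_small                # forecast errors, small model
    e_large = y - f_large                # forecast errors, large model
    adj = (f_small - f_large) ** 2       # estimation-noise adjustment
    f_t = e_small**2 - (e_large**2 - adj)
    n = len(f_t)
    t_stat = np.sqrt(n) * f_t.mean() / f_t.std(ddof=1)
    # Reject equal accuracy if t_stat exceeds the standard normal
    # critical value (e.g., 1.645 at the 5% level, one-sided).
    p_value = 1.0 - stats.norm.cdf(t_stat)
    return t_stat, p_value

# Usage with simulated one-step-ahead forecasts (hypothetical data):
rng = np.random.default_rng(0)
y = rng.standard_normal(200)
t_stat, p = mspe_adjusted(y, f_small=np.zeros(200),
                          f_large=0.1 * rng.standard_normal(200))
print(f"MSPE-adjusted t = {t_stat:.3f}, one-sided p = {p:.3f}")
```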