This paper examines Bayesian and bootstrap methods for generating error bands on impulse responses in dynamic linear models. It shows that Bayesian intervals have a stronger theoretical foundation in small samples, are easier to compute, and in small samples perform as well by classical criteria as the best bootstrap intervals. Bootstrap intervals based on the simulated small-sample distribution of an estimator, used without bias correction, perform poorly. The paper also shows that a method that has been used to extend Bayesian intervals to overidentified cases is incorrect and explains how to obtain correct Bayesian intervals for such cases.
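As a concrete illustration of the kind of Bayesian interval at issue, the sketch below simulates the posterior of a reduced-form VAR and reads error bands off the quantiles of the implied impulse responses. It is a minimal sketch, not the paper's exact algorithm: it assumes a VAR(1) without a constant, a flat prior, a residual covariance fixed at its estimate, and 68% bands; the function names `irf` and `bayes_irf_bands` are illustrative.

```python
import numpy as np

def irf(B, horizons, shock=0):
    """Responses of the VAR(1) y_t = B y_{t-1} + e_t to a unit shock in one variable."""
    k = B.shape[0]
    resp = np.empty((horizons + 1, k))
    cur = np.zeros(k)
    cur[shock] = 1.0
    for h in range(horizons + 1):
        resp[h] = cur
        cur = B @ cur                          # propagate the shock one period forward
    return resp

def bayes_irf_bands(y, horizons=12, draws=1000, coverage=0.68, rng=None):
    """Monte Carlo posterior bands for VAR(1) impulse responses (sketch only)."""
    rng = np.random.default_rng(rng)
    X, Y = y[:-1], y[1:]                       # lagged and current observations
    T, k = Y.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    Phi_hat = XtX_inv @ X.T @ Y                # OLS = flat-prior posterior mean (k x k)
    U = Y - X @ Phi_hat
    Sigma = U.T @ U / (T - k)                  # residual covariance estimate
    cov = np.kron(Sigma, XtX_inv)              # conditional posterior covariance of vec(Phi)
    irfs = np.empty((draws, horizons + 1, k))
    for d in range(draws):
        phi = rng.multivariate_normal(Phi_hat.ravel(order="F"), cov)
        irfs[d] = irf(phi.reshape(k, k, order="F").T, horizons)   # B = Phi'
    lo = np.quantile(irfs, (1 - coverage) / 2, axis=0)
    hi = np.quantile(irfs, (1 + coverage) / 2, axis=0)
    return Phi_hat.T, lo, hi                   # point estimate of B, lower and upper bands
```

In practice the lower and upper arrays would be plotted around the point-estimate responses, one panel per variable.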
The paper discusses the theory of classical confidence intervals and regions, explaining why they are harder to construct than Bayesian posterior probability regions. It explains why bootstrap methods that use computer simulation to determine the sampling distribution of an estimator conditional on a single true parameter value can produce correct confidence intervals only under strong auxiliary assumptions. It also discusses the scope for divergence between classical confidence levels and Bayesian posterior probabilities, and the sense in which each can be "biased" from the point of view of the other.
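The bootstrap approach described above can be sketched in the same setting. The code below treats the OLS estimate as the single "true" parameter value, generates artificial samples by resampling residuals, re-estimates on each sample, and forms naive percentile bands without any bias correction; it reuses the `irf` helper from the previous sketch, and the setup and names are again illustrative assumptions rather than the paper's exact procedure.

```python
def bootstrap_irf_bands(y, horizons=12, reps=1000, coverage=0.68, rng=None):
    """Percentile bootstrap bands for VAR(1) impulse responses, no bias correction (sketch)."""
    rng = np.random.default_rng(rng)
    X, Y = y[:-1], y[1:]
    k = y.shape[1]
    Phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0]    # k x k, one equation per column
    U = Y - X @ Phi_hat                               # residuals to resample from
    irfs = np.empty((reps, horizons + 1, k))
    for r in range(reps):
        e = U[rng.integers(len(U), size=len(y))]      # resampled shocks
        y_star = np.empty_like(y)
        y_star[0] = y[0]                              # initial condition taken from the data
        for t in range(1, len(y)):
            y_star[t] = y_star[t - 1] @ Phi_hat + e[t]
        Phi_star = np.linalg.lstsq(y_star[:-1], y_star[1:], rcond=None)[0]
        irfs[r] = irf(Phi_star.T, horizons)           # re-estimated responses for this sample
    return (np.quantile(irfs, (1 - coverage) / 2, axis=0),
            np.quantile(irfs, (1 + coverage) / 2, axis=0))
```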
The paper shows that bias corrections to bootstrap methods, which have been suggested but are not widely used, can produce classical confidence intervals whose coverage probabilities diverge less from their nominal levels than those of uncorrected intervals in most of the models considered. It also shows that Bayesian intervals, which are easier to compute than bootstrap ones, are competitive with corrected classical intervals even on classical criteria. The paper documents the bias and imprecision of "corrected" classical bootstrap intervals as summaries of the implications of the data for the unknown true impulse responses.
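One common form of such a correction, sketched below in the spirit of a bootstrap-after-bootstrap adjustment rather than the specific variants studied in the paper, estimates the small-sample bias of the coefficient estimator from a first round of bootstrap replications and subtracts it; the function name `bias_corrected_phi` and the VAR(1) setup are illustrative assumptions.

```python
def bias_corrected_phi(y, reps=500, rng=None):
    """Bootstrap bias correction for the VAR(1) coefficient matrix (sketch only)."""
    rng = np.random.default_rng(rng)
    X, Y = y[:-1], y[1:]
    Phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
    U = Y - X @ Phi_hat
    Phi_stars = np.empty((reps,) + Phi_hat.shape)
    for r in range(reps):
        e = U[rng.integers(len(U), size=len(y))]
        y_star = np.empty_like(y)
        y_star[0] = y[0]
        for t in range(1, len(y)):
            y_star[t] = y_star[t - 1] @ Phi_hat + e[t]
        Phi_stars[r] = np.linalg.lstsq(y_star[:-1], y_star[1:], rcond=None)[0]
    bias = Phi_stars.mean(axis=0) - Phi_hat           # estimated small-sample bias of OLS
    return Phi_hat - bias                             # bias-corrected coefficient matrix
```

A corrected interval would then repeat the simulation of the previous sketch with this corrected matrix in place of the OLS estimate, possibly after enforcing stationarity of the corrected coefficients.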
The paper's objective is not mainly to advance the state of the art of constructing error bands with good classical coverage probabilities. It regards classical coverage probabilities as of secondary interest and leaves to others the task of finding random intervals with better coverage probabilities in dynamic models. It documents the fairly good performance of Bayesian intervals by classical criteria partly as a way to reassure classically trained econometricians that Bayesian intervals do not misbehave badly by the criteria they usually study. It examines, by Bayesian criteria, the performance of "bias-corrected bootstrap intervals" motivated by classical reasoning, because these methods may be used in practice, and econometricians who share the authors' view that the Bayesian performance criteria are primary will want to know how deficient such bootstrap methods are. Conscientious classical econometricians will also be interested in these results, because they will recognize that confidence levels are not decision-making probabilities and will want to warn readers or clients of cases where the discrepancy is likely to be large. The paper's main objective is to show that Bayesian intervals are relatively straightforward to compute and well-behaved, and to show how to construct them for overidentified models, where there are some nontrivial computational difficulties.