This paper presents a theoretical and empirical basis for a predictor identification procedure proposed by the author. The procedure is based on the final prediction error (FPE), which is defined as the mean square prediction error of a predictor. The method applies the least squares approach to identify predictors when the observed stochastic process is an autoregressive process generated from a strictly stationary and mutually independent innovation process. The identification is achieved by fitting autoregressive models of increasing orders, computing FPE estimates for each model, and selecting the model with the smallest FPE estimate.
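The fit-compare-select loop described above can be sketched as follows. This is an illustrative implementation, not the author's code: the helper names (`fit_ar_least_squares`, `select_order`) and the particular FPE correction factor used are assumptions made for the example.

```python
import numpy as np

def fit_ar_least_squares(x, p):
    """Fit an AR(p) model by least squares.

    Returns the lag coefficients (lag 1 first) and the
    mean-square residual, which estimates the innovation variance.
    """
    n = len(x)
    # Design matrix: column j holds the series at lag j+1.
    X = np.column_stack([x[p - j - 1 : n - j - 1] for j in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return coeffs, np.mean(resid ** 2)

def fpe(sigma2, n_eff, p):
    """FPE estimate: residual variance inflated by an
    order-dependent factor (one common form of the statistic)."""
    return sigma2 * (n_eff + p + 1) / (n_eff - p - 1)

def select_order(x, max_order):
    """Fit AR models of increasing order; return the order
    whose FPE estimate is smallest."""
    best_p, best_score = None, np.inf
    for p in range(1, max_order + 1):
        _, sigma2 = fit_ar_least_squares(x, p)
        score = fpe(sigma2, len(x) - p, p)
        if score < best_score:
            best_p, best_score = p, score
    return best_p
```

As the paper notes, minimizing FPE this way is not a consistent estimator of a finite true order: the penalty factor shrinks too slowly with sample size, so the selected order can overshoot even asymptotically.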
The statistical properties of the FPE estimates and the overall procedure are discussed to demonstrate their practical utility. A modified version of the procedure is proposed, which addresses the lack of consistency in the original method for estimating the order of a finite autoregressive process. The concept of FPE is also applied to determine constants in a decision procedure proposed by Anderson for determining the order of a Gaussian autoregressive process, resulting in a third procedure.
The performances of the three procedures are compared on various artificial time series. The results indicate that for practical applications, where the true orders of autoregressive processes are generally infinite, the original procedure is the most useful. The implications of this identification procedure for the estimation of power spectra are discussed in a subsequent paper.
The paper defines FPE as the mean square prediction error of a predictor and provides a mathematical expression for it. It also introduces the concept of a predictor structure that depends on the recent values of the observed process and is identified using the entire past history of the process. The paper concludes with a discussion of the statistical properties of the FPE estimates and their application in predictor identification.
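The mathematical expression is not reproduced in this summary. For orientation, a commonly cited form of Akaike's FPE for an order-$p$ autoregressive fit (with a fitted mean) to $N$ observations is

```latex
\mathrm{FPE}(p) \;=\; \hat{\sigma}_p^{2}\,\frac{N + p + 1}{N - p - 1},
```

where $\hat{\sigma}_p^{2}$ is the least-squares residual variance of the fitted AR($p$) model; the multiplicative factor corrects the in-sample residual variance upward to estimate the out-of-sample prediction error. The exact correction factor in the paper may differ slightly (e.g. $(N+p)/(N-p)$ when no mean is fitted).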