This chapter presents a unified asymptotic theory for nonlinear statistical models. It shows that estimators as varied as nonlinear least squares and method of moments estimators share a common asymptotic structure. The key idea is that each estimator solves an optimization problem, and the objective function can be treated as if it were a likelihood in order to derive analogs of the Wald, likelihood ratio, and Rao score test statistics. These statistics have asymptotic distributions analogous to those arising in maximum likelihood estimation.
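In generic notation chosen here for illustration (not necessarily the chapter's own), suppose the estimator minimizes a sample objective over the parameter, and a restricted estimator minimizes the same objective subject to the hypothesis h(lambda) = 0. A schematic sketch of the three statistics is below; exact scaling constants depend on the objective function.

```latex
% Schematic forms of the three tests of H: h(lambda) = 0. Here \hat{\lambda}_n
% minimizes the sample objective s_n(\lambda), \tilde{\lambda}_n minimizes it
% subject to the restriction, \hat{V} estimates the asymptotic variance of the
% estimator, and H(\lambda) = (\partial/\partial\lambda')\, h(\lambda).
% Illustrative notation; scaling constants vary with the objective function.
\begin{align*}
  W_n &= n\, h(\hat{\lambda}_n)'\,
         \bigl[ H(\hat{\lambda}_n)\, \hat{V}\, H(\hat{\lambda}_n)' \bigr]^{-1}
         h(\hat{\lambda}_n)
         && \text{(Wald)} \\
  L_n &= 2n\, \bigl[ s_n(\tilde{\lambda}_n) - s_n(\hat{\lambda}_n) \bigr]
         && \text{(likelihood ratio)} \\
  R_n &= n\, \Bigl(\tfrac{\partial}{\partial\lambda}\, s_n(\tilde{\lambda}_n)\Bigr)'\,
         \hat{V}\,
         \Bigl(\tfrac{\partial}{\partial\lambda}\, s_n(\tilde{\lambda}_n)\Bigr)
         && \text{(Rao score)}
\end{align*}
```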
The chapter discusses two main types of estimators: least mean distance estimators and method of moments estimators. Least mean distance estimators minimize an objective function that depends on the data and possibly on preliminary estimates of nuisance parameters. Method of moments estimators use instrumental variables to estimate parameters by matching moments of the data with theoretical moments.
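As a concrete sketch of the two classes, a nonlinear least squares fit is a least mean distance estimator, while driving instrument-weighted residual moments to zero gives a method of moments estimator. The exponential regression model, the simulated data, and the instruments (1, x) below are assumptions made for illustration, not examples taken from the chapter.

```python
# Sketch: a least mean distance estimator (nonlinear least squares) and a
# method of moments estimator with instruments z_t = (1, x_t).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.5, 2.0, n)
y = np.exp(1.5 * x) + rng.normal(0.0, 1.0, n)  # nonlinear regression, true theta = 1.5

# Least mean distance: minimize the average squared distance
# s_n(theta) = (1/n) * sum_t (y_t - f(x_t, theta))^2.
def s_n(theta):
    return np.mean((y - np.exp(theta[0] * x)) ** 2)

theta_lmd = minimize(s_n, x0=[1.0]).x[0]

# Method of moments: choose theta so the sample moments of the
# instrument-weighted residuals, (1/n) * sum_t z_t e_t(theta), are near zero.
Z = np.column_stack([np.ones(n), x])

def moment_norm(theta):
    e = y - np.exp(theta[0] * x)   # residuals e_t(theta)
    m = Z.T @ e / n                # sample moment conditions
    return m @ m                   # squared norm; zero when moments match exactly

theta_mom = minimize(moment_norm, x0=[1.0]).x[0]
print(theta_lmd, theta_mom)       # both should be near 1.5
```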
The chapter shows that these estimators can be analyzed using similar asymptotic techniques, leading to a unified theory. It also addresses the issue of model misspecification, where the assumed model or error distribution may not hold. The chapter demonstrates that the asymptotic theory can be applied to both correctly and incorrectly specified models.
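One practical payoff of allowing misspecification is a variance estimate that does not lean on the assumed error model. The sketch below illustrates the familiar "sandwich" form for least squares with heteroskedastic errors; the simulated data and the least squares objective are illustrative assumptions, not the chapter's own example.

```python
# Sketch: the "sandwich" covariance, which remains valid when the assumed
# error distribution is wrong, versus the naive covariance that relies on it.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([0.5, 2.0]) + np.abs(x) * rng.normal(size=n)  # non-constant variance

beta = np.linalg.solve(X.T @ X, X.T @ y)           # least squares estimate
e = y - X @ beta                                    # residuals

J = X.T @ X / n                                     # average Hessian of the objective
I = (X * e[:, None]).T @ (X * e[:, None]) / n       # outer product of per-obs scores
J_inv = np.linalg.inv(J)
cov_sandwich = J_inv @ I @ J_inv / n                # valid under misspecification
cov_naive = np.var(e) * np.linalg.inv(X.T @ X)      # trusts the assumed error model
print(np.sqrt(np.diag(cov_sandwich)), np.sqrt(np.diag(cov_naive)))
```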
The chapter concludes by showing that the asymptotic distributions of the estimators and test statistics depend on the objective function and the model assumptions. It provides examples of estimators, such as M-estimators and iteratively rescaled M-estimators, to illustrate the theory. The results are applicable to a wide range of nonlinear statistical models and can be used to assess the robustness of inference procedures under specification error.
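For instance, a Huber-type M-estimator of location can be computed by iteratively reweighted averaging, as in the minimal sketch below. The tuning constant, the fixed unit error scale, the stopping rule, and the contaminated sample are illustrative choices rather than the chapter's; an iteratively rescaled variant would additionally update a scale estimate at each step.

```python
# Sketch of a Huber M-estimator of location computed by iteratively
# reweighted averaging (error scale fixed at one for simplicity).
import numpy as np

def huber_location(y, c=1.345, tol=1e-8, max_iter=100):
    mu = np.median(y)                                        # robust starting value
    for _ in range(max_iter):
        r = y - mu
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weights
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(2)
sample = np.concatenate([rng.normal(0.0, 1.0, 95),      # bulk of the data
                         rng.normal(10.0, 1.0, 5)])     # 5% gross outliers
print(huber_location(sample), np.mean(sample))          # M-estimate vs. ordinary mean
```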