"Model Selection and Inference: A Practical Information-Theoretic Approach" by Kenneth P. Burnham and David R. Anderson is a comprehensive guide to using information-theoretic methods in statistical analysis. The book introduces graduate students and researchers to the use of these methods for analyzing empirical data. It emphasizes the importance of selecting a good approximating model that best represents the data, and discusses the Kullback-Leibler distance between models as a fundamental quantity in science. The authors present Akaike's Information Criterion (AIC) as a key tool for model selection, which is based on Fisher's maximized log-likelihood. AIC provides a new paradigm for model selection and is relatively simple to use, though not widely taught in statistics classes.
The book argues against the notion of a single "true model" in the biological sciences and instead views models as approximations of the explainable information in empirical data. It advocates the information-theoretic approach for observational studies, where hypothesis-testing approaches often lack theoretical justification. For designed experiments, traditional approaches such as analysis of variance are generally supported, though even there the authors emphasize fitting explanatory models and estimating the size and precision of treatment effects, with less emphasis on testing null hypotheses.
The book discusses a range of information criteria, including AIC, AICc, QAIC, and TIC, and presents methods for estimating model selection uncertainty and incorporating it into estimates of precision. Examples illustrate the technical issues, and the authors stress the importance of parsimony in model selection and the use of Akaike weights for model averaging, which they recommend as a way to avoid basing inference on a single best model.
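As a concrete illustration of how these quantities are typically computed, here is a minimal Python sketch (the numbers, function names, and variable names are hypothetical and not taken from the book):

    import numpy as np

    def aicc(log_likelihood, k, n):
        # Small-sample corrected AIC: AIC plus the correction term 2K(K+1)/(n-K-1).
        aic = -2.0 * log_likelihood + 2.0 * k
        return aic + (2.0 * k * (k + 1)) / (n - k - 1)

    def akaike_weights(criterion_values):
        # Convert AIC/AICc values into Akaike weights via differences from the best model.
        delta = criterion_values - np.min(criterion_values)   # delta_i = AICc_i - AICc_min
        rel_likelihood = np.exp(-0.5 * delta)                  # relative likelihood of each model
        return rel_likelihood / rel_likelihood.sum()           # normalize so the weights sum to 1

    # Three candidate models fit to n = 50 observations (hypothetical values).
    log_liks = np.array([-112.3, -110.9, -110.5])   # maximized log-likelihoods
    n_params = np.array([2, 3, 5])                  # number of estimable parameters K
    values = np.array([aicc(ll, k, 50) for ll, k in zip(log_liks, n_params)])
    weights = akaike_weights(values)

    # Model-averaged estimate of a parameter that each model estimates (hypothetical estimates).
    theta_hats = np.array([4.1, 3.8, 3.6])
    theta_bar = np.dot(weights, theta_hats)

The Akaike weights sum to one and can be read as the relative support for each candidate model; weighting each model's estimate by its Akaike weight, as in the last line, is the basic form of model averaging the book describes.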
The book is written for biologists, statisticians, and researchers in other life sciences and medicine. It is intended as a text for a 3-credit-hour course for students with substantial experience in statistics and data analysis. The authors also acknowledge the contributions of various individuals and organizations that helped in the preparation of the book. The book is structured with an introduction, chapters on information theory, practical use of the information-theoretic approach, model selection uncertainty, and statistical theory, followed by a summary and references. The authors emphasize the importance of objective science, the use of information-theoretic criteria, and the incorporation of model selection uncertainty into statistical inferences.