This chapter reviews the key identities of probability calculus relevant to Bayesian inference and examines three fundamental contexts in parametric modeling: off-line inference, on-line inference of time-invariant parameters, and on-line inference of time-variant parameters. Each context is explored in detail in subsequent chapters.
The Bayesian approach quantifies degrees of belief using probabilities and the rules of probability calculus, ensuring consistency in inductive inference. Unlike classical methods, which use various criteria like relative frequency or distance in a normed space, Bayesian methods leverage the marginalization operator (integration) to focus inference on specific subsets of quantities of interest. This capability is particularly valuable for tasks such as projection into a desired subset of the hypothesis space and model complexity measurement, which naturally aligns with Ockham's razor.
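To make the marginalization step concrete, the sketch below integrates a nuisance parameter out of a joint posterior on a grid. The Gaussian model, the flat prior, the grid ranges, and all variable names are assumptions made for this illustration, not taken from the text.

```python
import numpy as np

# Illustrative sketch: marginalizing a nuisance parameter on a grid.
# Assumed model (not from the text): Gaussian data with unknown mean mu
# and unknown standard deviation sigma; a flat prior is used throughout.
rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=30)

mu_grid = np.linspace(-2.0, 4.0, 200)
sigma_grid = np.linspace(0.1, 5.0, 200)
MU, SIGMA = np.meshgrid(mu_grid, sigma_grid, indexing="ij")

# Log-likelihood of the whole data record at every (mu, sigma) grid node.
log_lik = np.sum(
    -0.5 * ((data - MU[..., None]) / SIGMA[..., None]) ** 2
    - np.log(SIGMA[..., None]),
    axis=-1,
)

# Unnormalized joint posterior (flat prior), stabilized before exponentiation.
joint = np.exp(log_lik - log_lik.max())

# Marginalization: integrate sigma out, keeping only the quantity of interest.
d_sigma = sigma_grid[1] - sigma_grid[0]
marg_mu = joint.sum(axis=1) * d_sigma
d_mu = mu_grid[1] - mu_grid[0]
marg_mu /= marg_mu.sum() * d_mu  # normalize to a proper density over mu

print("posterior mean of mu:", (mu_grid * marg_mu).sum() * d_mu)
```

The same projection works for any subset of the hypothesis space: the integral simply runs over whichever coordinates are not of interest.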
Despite these advantages, Bayesian methods are often avoided in practical applications due to concerns about the prior's existence and influence. Non-Bayesians argue that probabilities should only be attached to objects or hypotheses that vary randomly in repeatable experiments and that inferences should be based solely on data. However, the core strength of Bayesian methods lies in their ability to marginalize, which is crucial for practical applications.
The chapter also distinguishes between off-line and on-line parametric inference. Off-line inference updates beliefs once, after all data have been collected, while on-line inference updates beliefs as each new data record arrives. In the on-line scenario, Bayesian methods must update the state of knowledge conditioned on the incremental data at each time step, which makes this mode essential for control and decision tasks.
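As a concrete illustration of the two modes, the sketch below compares batch and recursive conjugate updating for a Bernoulli parameter. The Beta-Bernoulli model and the Beta(1, 1) prior are assumptions made for this example only; for a time-invariant parameter, processing the record at once and processing it one observation at a time yield the same posterior.

```python
import numpy as np

# Illustrative sketch (assumed Beta-Bernoulli model, not from the text):
# off-line inference conditions on the whole data record in one step,
# while on-line inference updates the posterior after every observation.
rng = np.random.default_rng(1)
data = rng.binomial(1, 0.7, size=100)  # Bernoulli observations

# Off-line: condition on all data at once, starting from a Beta(1, 1) prior.
a_off = 1 + data.sum()
b_off = 1 + len(data) - data.sum()

# On-line: the posterior after step t serves as the prior at step t + 1.
a_on, b_on = 1.0, 1.0
for y in data:
    a_on += y
    b_on += 1 - y

# For a time-invariant parameter, both routes reach the same posterior.
assert (a_on, b_on) == (a_off, b_off)
print(f"posterior Beta({a_on:.0f}, {b_on:.0f}), mean {a_on / (a_on + b_on):.3f}")
```

For time-variant parameters this equivalence breaks down: the on-line recursion must also account for parameter changes between steps, which is what distinguishes the third context treated in this chapter.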