January 2002 | ALISON L. GIBBS AND FRANCIS EDWARD SU
The paper discusses the importance of choosing an appropriate probability metric when studying the convergence of measures. It provides a summary of, and some new results on, bounds among important probability metrics used by statisticians and probabilists. Such bounds are useful in applied problems: a bound established in one metric can be transferred to another, and each metric offers its own insight. The paper also shows that the rate of convergence can depend strongly on the chosen metric, underscoring the need for care when selecting one.
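To make the metric-dependence concrete, consider the classic example of point masses δ_{1/n} and δ_0: they converge in the Wasserstein sense but stay at total variation distance 1 for every n. The short Python sketch below is illustrative only (not code from the paper); the helper functions and their names are assumptions made for this example.

```python
# Illustrative sketch (not from the paper): delta_{1/n} vs. delta_0 shows how the
# choice of metric changes the convergence story. Total variation stays at 1 for
# every n, while the Wasserstein (L1) distance is 1/n -> 0.

def total_variation(p, q):
    """Total variation distance between two discrete distributions given as dicts."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def wasserstein_point_masses(a, b):
    """Wasserstein-1 distance between point masses delta_a and delta_b on the real line."""
    return abs(a - b)

for n in (1, 10, 100, 1000):
    p = {1.0 / n: 1.0}   # delta_{1/n}
    q = {0.0: 1.0}       # delta_0
    print(n, total_variation(p, q), wasserstein_point_masses(1.0 / n, 0.0))
# total variation stays 1.0; the Wasserstein distance shrinks like 1/n
```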
The paper reviews ten important probability metrics: discrepancy, Hellinger distance, relative entropy, Kolmogorov, Lévy, Prokhorov, separation, total variation, Wasserstein, and χ²-distance. It describes their definitions, properties, and relationships, provides new bounds between several of them, and illustrates how the choice of metric can affect both the rate and the nature of convergence.
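As a concrete illustration of such bounds, here is a minimal Python sketch (not from the paper) that computes total variation, Hellinger, and relative entropy for two discrete distributions and numerically checks two standard relations, d_H²/2 ≤ d_TV ≤ d_H and Pinsker's inequality d_TV ≤ √(d_KL/2). The conventions used here (d_TV = sup_A |P(A) − Q(A)|, Hellinger distance taking values in [0, √2], KL divergence in nats) and the test distributions are assumptions of this sketch.

```python
import math

def tv(p, q):
    """Total variation: sup_A |P(A) - Q(A)| = 0.5 * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def hellinger(p, q):
    """Hellinger distance: (sum_i (sqrt(p_i) - sqrt(q_i))^2)^(1/2), values in [0, sqrt(2)]."""
    return math.sqrt(sum((math.sqrt(pi) - math.sqrt(qi)) ** 2 for pi, qi in zip(p, q)))

def kl(p, q):
    """Relative entropy (KL divergence) in nats; assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

d_tv, d_h, d_kl = tv(p, q), hellinger(p, q), kl(p, q)
print(f"TV={d_tv:.4f}  Hellinger={d_h:.4f}  KL={d_kl:.4f}")

# Two of the standard relationships collected in the paper's diagram:
assert d_h ** 2 / 2 <= d_tv <= d_h      # Hellinger vs. total variation
assert d_tv <= math.sqrt(d_kl / 2)      # Pinsker's inequality
```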
The relationships among these metrics are summarized in a diagram, and the paper provides proofs of some of the bounds. It also discusses how these metrics arise in various applications, such as the study of Markov chains, random walks, and Bayesian statistics. The paper concludes that the choice of metric is crucial when measuring convergence, since different metrics can lead to different conclusions about the convergence behavior of measures.
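In the Markov chain setting, for example, convergence to the stationary distribution is typically measured in total variation. The following sketch is illustrative only (not from the paper); the two-state transition matrix and all names are chosen purely for this example.

```python
# Illustrative sketch: total variation distance of a two-state Markov chain to
# its stationary distribution, the quantity such convergence bounds control.
P = [[0.9, 0.1],
     [0.2, 0.8]]        # transition matrix of a two-state chain
pi = [2 / 3, 1 / 3]     # its stationary distribution (solves pi P = pi)

def step(mu, P):
    """One step of the chain: mu_{t+1}[j] = sum_i mu_t[i] * P[i][j]."""
    return [sum(mu[i] * P[i][j] for i in range(len(mu))) for j in range(len(P[0]))]

def tv(p, q):
    """Total variation distance between two distributions on the same finite set."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

mu = [1.0, 0.0]          # start deterministically in state 0
for t in range(10):
    print(t, round(tv(mu, pi), 6))
    mu = step(mu, P)
# the distance decays geometrically, here like (0.7)^t
```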