Conformal prediction is a method that uses past experience to determine precise confidence levels for new predictions. Given an error probability \(\epsilon\) and a prediction method that produces a label \(\hat{y}\), it generates a set of labels, typically containing \(\hat{y}\), that also contains the true label \(y\) with probability \(1 - \epsilon\). This method can be applied to various prediction methods, including nearest-neighbor methods, support-vector machines, ridge regression, and more.
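As a minimal illustration of the set-valued output, here is a sketch of the split (inductive) variant for regression, which simplifies the full transductive computation by using a held-out calibration set; the function name and the choice of absolute residuals as the nonconformity score are illustrative, not taken from the tutorial.

```python
import numpy as np

def conformal_interval(residuals, y_hat, epsilon=0.1):
    """Split-conformal interval: y_hat +/- the ceil((1-epsilon)(n+1))-th
    smallest absolute calibration residual, giving coverage >= 1 - epsilon."""
    n = len(residuals)
    k = int(np.ceil((1 - epsilon) * (n + 1)))
    q = np.sort(np.abs(residuals))[min(k, n) - 1]
    return y_hat - q, y_hat + q

# Illustrative calibration residuals and a point prediction of 5.0:
print(conformal_interval(np.arange(1, 11) / 10, 5.0, epsilon=0.1))
# -> (4.0, 6.0): the 10th smallest of 10 residuals is 1.0
```

With \(\epsilon = 0.1\) and ten calibration residuals, the quantile index is \(\lceil 0.9 \times 11 \rceil = 10\), so the interval half-width is the largest residual.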
Conformal prediction is designed for an online setting in which examples are predicted successively, each one's true label being revealed before the next prediction is made. The key feature of conformal prediction is that if the examples are independently sampled from the same distribution, the successive predictions will be correct \(1 - \epsilon\) of the time, even though they are based on an accumulating dataset rather than independent datasets.
The tutorial covers the theory of conformal prediction, including the concept of validity in the online setting, the relationship between exchangeability and independence, and the construction of prediction regions using nonconformity measures. It also provides numerical examples to illustrate the concepts and algorithms.
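To make the construction of prediction regions concrete, the following sketch implements a full conformal predictor for classification with a nearest-neighbor nonconformity measure (distance to the closest same-label example divided by distance to the closest other-label example); the function names and toy data are illustrative choices, not from the tutorial.

```python
import numpy as np

def nn_scores(X, y):
    """Nearest-neighbor nonconformity: distance to the closest same-label
    example over distance to the closest other-label example."""
    scores = np.empty(len(y))
    for i in range(len(y)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        scores[i] = d[y == y[i]].min() / d[y != y[i]].min()
    return scores

def prediction_set(X, y, x_new, labels, epsilon=0.1):
    """Full conformal predictor: keep every candidate label whose
    conformal p-value exceeds epsilon."""
    kept = []
    for cand in labels:
        Xa = np.vstack([X, x_new])         # provisionally add the new example
        ya = np.append(y, cand)            # ...with the candidate label
        s = nn_scores(Xa, ya)
        p = np.mean(s >= s[-1])            # p-value of the candidate label
        if p > epsilon:
            kept.append(cand)
    return kept

# Toy data: two well-separated classes; the new point sits in class 0.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
print(prediction_set(X, y, np.array([0.0, 0.5]), [0, 1], epsilon=0.25))
# -> [0]
```

With only \(n + 1 = 5\) examples the attainable p-values are multiples of \(1/5\), which is why the toy example uses \(\epsilon = 0.25\); at realistic sample sizes much smaller \(\epsilon\) becomes meaningful.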
The validity of conformal prediction under exchangeability is demonstrated through a law of large numbers for exchangeable sequences, which ensures that a high proportion of the high-confidence predictions will be correct. The tutorial emphasizes the online concept of validity, the meaning of exchangeability, and the generalization to other online compression models, such as the Gaussian linear model.
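The validity guarantee is easy to check empirically: under an exchangeable (here i.i.d.) sequence, the conformal p-value of the newest observation exceeds \(\epsilon\) roughly \(1 - \epsilon\) of the time. A small simulation sketch, using distance from the sample mean as an illustrative symmetric nonconformity score:

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon, n, trials = 0.1, 100, 2000
hits = 0
for _ in range(trials):
    z = rng.normal(size=n + 1)         # exchangeable (i.i.d.) sequence
    scores = np.abs(z - z.mean())      # symmetric nonconformity score
    p = np.mean(scores >= scores[-1])  # conformal p-value of the newest point
    hits += p > epsilon                # region covers the true value
coverage = hits / trials
print(coverage)                        # close to 1 - epsilon = 0.9
```

Because the score treats all \(n + 1\) observations symmetrically, the rank of the newest score is uniform, which is exactly what the exchangeability argument in the tutorial exploits.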
The tutorial also discusses the efficiency of conformal prediction, which depends on the probability distribution \(Q\) and the nonconformity measure used. It highlights the importance of choosing a nonconformity measure that is efficient under the assumptions made about the data distribution.