This paper explores the relationship between Precision-Recall (PR) curves and Receiver Operating Characteristic (ROC) curves for evaluating machine learning algorithms. It shows that the two kinds of curves are deeply connected: a curve dominates in ROC space if and only if it dominates in PR space. This equivalence allows the computation of an achievable PR curve, analogous to the convex hull in ROC space, and the paper gives an efficient algorithm for computing it.

The paper also highlights the importance of PR curves for highly skewed datasets, where ROC curves can present an overly optimistic view of performance. It demonstrates that linear interpolation between points in PR space is not valid and shows how to interpolate correctly, which in turn yields a proper approximation of the area under the PR curve. It further shows that algorithms optimizing the area under the ROC curve do not necessarily optimize the area under the PR curve, and that the achievable PR curve can be used to identify the best-performing classifier in PR space. The paper concludes that while ROC curves remain useful for certain applications, PR curves are often more informative for highly skewed datasets.
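The contrast between the two spaces on skewed data can be seen from a single confusion matrix, which fixes one point in ROC space and one point in PR space at the same time. The following is a minimal sketch; the counts are hypothetical, chosen only to mimic a highly skewed dataset, and are not taken from the paper:

```python
def roc_point(tp, fp, fn, tn):
    """Return (false positive rate, true positive rate) for one confusion matrix."""
    return fp / (fp + tn), tp / (tp + fn)

def pr_point(tp, fp, fn, tn):
    """Return (recall, precision) for the same confusion matrix."""
    return tp / (tp + fn), tp / (tp + fp)

# Hypothetical skewed dataset: 100 positives, 10,000 negatives.
tp, fp, fn, tn = 80, 400, 20, 9600
print(roc_point(tp, fp, fn, tn))  # FPR = 0.04, TPR = 0.8 -- looks strong
print(pr_point(tp, fp, fn, tn))   # recall = 0.8, precision ~ 0.167 -- far weaker
```

With 400 false positives against only 10,000 negatives, the ROC point sits near the top-left corner, yet precision is barely above one in six; this is the overly optimistic picture ROC curves can give on skewed data.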
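Because precision does not vary linearly with recall, one way to interpolate correctly is to work in the underlying TP/FP counts and convert each intermediate count back to a PR point. The sketch below follows that idea; the operating points and positive count are hypothetical, and `steps` is an illustrative resolution parameter:

```python
def interpolate_pr(tp_a, fp_a, tp_b, fp_b, total_pos, steps=10):
    """Return (recall, precision) points between two operating points given
    as (TP, FP) counts; assumes tp_b > tp_a."""
    slope = (fp_b - fp_a) / (tp_b - tp_a)  # false positives gained per extra TP
    points = []
    for i in range(steps + 1):
        x = (tp_b - tp_a) * i / steps      # extra true positives so far
        tp = tp_a + x
        fp = fp_a + slope * x              # FPs accrue along the count path
        points.append((tp / total_pos, tp / (tp + fp)))
    return points

# Hypothetical points: (TP=5, FP=5) and (TP=10, FP=30), with 10 positives total.
pts = interpolate_pr(5, 5, 10, 30, total_pos=10)
```

Between the endpoints (recall 0.5, precision 0.5) and (recall 1.0, precision 0.25), the midpoint here has precision 0.3, below the 0.375 a straight line in PR space would suggest; summing trapezoids over such points gives a sounder approximation of the area under the PR curve than linear interpolation.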