This paper introduces a new approach to multiple-hypothesis testing: fix the rejection region and estimate the error rate, rather than the traditional approach of fixing the error rate and estimating the rejection region. The authors propose the positive false discovery rate (pFDR) as a more appropriate measure than the traditional false discovery rate (FDR), defining pFDR as the expected proportion of false positives among rejected hypotheses, conditional on at least one rejection occurring. Because the method estimates the proportion of true null hypotheses rather than implicitly assuming all hypotheses are null, the resulting inference is more powerful, while the error-rate estimates themselves remain conservative.
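As a concrete sketch of the fixed-rejection-region estimates described above, for p-values rejected on [0, γ]: the true-null proportion is estimated from the p-values above a tuning point λ, and that estimate is plugged into the pFDR estimate. The function names and the default λ = 0.5 are choices of this sketch, not the paper's.

```python
import numpy as np

def estimate_pi0(pvalues, lam=0.5):
    """Estimate pi0, the proportion of true null hypotheses:
    pi0_hat(lambda) = #{p_i > lambda} / (m * (1 - lambda))."""
    p = np.asarray(pvalues, dtype=float)
    return np.sum(p > lam) / (p.size * (1.0 - lam))

def estimate_pfdr(pvalues, gamma, lam=0.5):
    """Estimate pFDR for the fixed rejection region [0, gamma]:
    pFDR_hat = pi0_hat * gamma / (Pr_hat(P <= gamma) * (1 - (1 - gamma)^m)),
    where Pr_hat(P <= gamma) = max(R(gamma), 1) / m and R(gamma) counts
    the p-values at or below gamma."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    r = max(int(np.sum(p <= gamma)), 1)  # clamp to 1 to avoid dividing by zero
    return estimate_pi0(p, lam) * gamma / ((r / m) * (1.0 - (1.0 - gamma) ** m))
```

The intuition: p-values from true nulls are uniform, so the density of p-values above a well-chosen λ is almost entirely null, which is what makes π̂₀ a sensible, conservatively biased estimate.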
The authors also introduce the q-value, the pFDR analogue of the p-value. The q-value measures the strength of an observed statistic in terms of pFDR and allows more flexible hypothesis testing. The q-value of an observed statistic is the minimum pFDR attained over all rejection regions that contain that statistic.
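That minimum can be computed by evaluating the pFDR estimate at each observed p-value as a threshold and taking a running minimum from the largest p-value downward. The sketch below assumes strictly positive p-values and caps estimates at 1; these choices, and the function name, are assumptions of this sketch rather than details from the paper.

```python
import numpy as np

def qvalues(pvalues, lam=0.5):
    """Assign each p-value the minimum pFDR estimate over all thresholds
    t >= p_i, where the estimate at threshold t = p_(i) is
    pi0_hat * m * t / (i * (1 - (1 - t)^m)). Assumes all p-values > 0."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    sorted_p = p[order]
    pi0 = np.sum(p > lam) / (m * (1.0 - lam))  # true-null proportion estimate
    q = np.empty(m)
    running_min = np.inf
    # Walk from the largest p-value down so each q-value is the minimum
    # pFDR estimate over every rejection region containing that statistic.
    for i in range(m - 1, -1, -1):
        t = sorted_p[i]
        pfdr = pi0 * m * t / ((i + 1) * (1.0 - (1.0 - t) ** m))
        running_min = min(running_min, min(pfdr, 1.0))
        q[order[i]] = running_min
    return q
```

By construction the q-values are monotone in the p-values, so rejecting all hypotheses with q-value at or below a cutoff is well defined.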
The paper compares the proposed method with the traditional Benjamini-Hochberg method for controlling FDR. The proposed method is shown to be more powerful and flexible: because the Benjamini-Hochberg procedure implicitly treats all hypotheses as null, estimating the true-null proportion directly yields a sharper, yet still conservative, bound on the false discovery rate. The authors also provide numerical results showing that the proposed method can yield more than an eightfold increase in power over the Benjamini-Hochberg method.
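For reference, the Benjamini-Hochberg step-up procedure being compared against can be sketched as follows; this is the standard textbook formulation, not code from the paper:

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: with sorted p-values
    p_(1) <= ... <= p_(m), find k = max{ i : p_(i) <= i * alpha / m }
    and reject the hypotheses with the k smallest p-values."""
    p = np.asarray(pvalues, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * alpha
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))  # largest index meeting the bound
        rejected[order[: k + 1]] = True
    return rejected
```

Note the contrast with the approach above: here the error rate α is fixed and the rejection region is the output, whereas the paper's approach fixes the rejection region and estimates the error rate.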
The paper also discusses the theoretical properties of the proposed method, including finite-sample and large-sample results for pFDR and FDR. The authors show that the proposed method provides strong control of the error rate and that its estimates are asymptotically equivalent to the maximum likelihood estimate. They also argue that pFDR and the q-value are preferable to FDR, since their estimates remain conservative while conditioning on at least one rejection actually occurring.
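The conservative behaviour can be illustrated with a small Monte Carlo sketch (not from the paper): on average, the pFDR estimate should sit at or above the realized false-positive proportion among rejections. The mixture below (80% true nulls with uniform p-values, alternatives drawn as Beta(0.1, 1)) and all parameter values are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pfdr(p, gamma=0.05, lam=0.5):
    """Plug-in pFDR estimate for the rejection region [0, gamma]."""
    m = p.size
    pi0 = np.sum(p > lam) / (m * (1.0 - lam))
    r = max(int(np.sum(p <= gamma)), 1)
    return pi0 * gamma / ((r / m) * (1.0 - (1.0 - gamma) ** m))

m, pi0_true, gamma = 1000, 0.8, 0.05
estimates, realized = [], []
for _ in range(500):
    null_p = rng.uniform(size=int(m * pi0_true))       # true nulls: uniform
    alt_p = rng.beta(0.1, 1.0, size=m - null_p.size)   # alternatives: near zero
    p = np.concatenate([null_p, alt_p])
    r = np.sum(p <= gamma)
    if r > 0:
        realized.append(np.sum(null_p <= gamma) / r)   # V/R this round
    estimates.append(estimate_pfdr(p, gamma))

# The average estimate should weakly exceed the average realized rate.
print(round(np.mean(estimates), 3), round(np.mean(realized), 3))
```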
Finally, the authors provide a method for choosing the tuning parameter λ, which is used in estimating pFDR and FDR. The method uses the bootstrap to estimate the mean-squared error of the estimates and selects the value of λ that minimizes this error. The authors show that this produces accurate, conservative estimates of the error rate and a more robust approach to multiple-hypothesis testing.
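The bootstrap selection of λ can be sketched as follows. The plug-in device, in which the minimum of the pFDR estimate over the λ grid stands in for the unknown true pFDR, follows the idea described above; the grid, the number of bootstrap samples, and the function names are assumptions of this sketch.

```python
import numpy as np

def pfdr_hat(p, gamma, lam):
    """Plug-in pFDR estimate for the rejection region [0, gamma]."""
    m = p.size
    pi0 = np.sum(p > lam) / (m * (1.0 - lam))
    r = max(int(np.sum(p <= gamma)), 1)
    return pi0 * gamma / ((r / m) * (1.0 - (1.0 - gamma) ** m))

def choose_lambda(p, gamma=0.05, lambdas=None, n_boot=100, seed=0):
    """Pick lambda by bootstrap: treat the minimum of pfdr_hat over the
    lambda grid as a stand-in for the true pFDR, then select the lambda
    whose bootstrap estimates have the smallest mean-squared error
    around that plug-in value."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p, dtype=float)
    if lambdas is None:
        lambdas = np.arange(0.05, 0.95, 0.05)
    plug_in = min(pfdr_hat(p, gamma, lam) for lam in lambdas)
    mse = np.zeros(len(lambdas))
    for _ in range(n_boot):
        pb = rng.choice(p, size=p.size, replace=True)  # resample the p-values
        for j, lam in enumerate(lambdas):
            mse[j] += (pfdr_hat(pb, gamma, lam) - plug_in) ** 2
    return float(lambdas[np.argmin(mse)])
```

The trade-off being balanced: larger λ reduces the bias of the true-null proportion estimate (fewer alternatives land above λ) but increases its variance, since fewer p-values remain in the tail; the bootstrap MSE criterion picks a compromise automatically.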