Multiple Hypothesis Testing in Microarray Experiments


Statistical Science, 2003, Vol. 18, No. 1, 71-103 | Sandrine Dudoit, Juliet Popper Shaffer and Jennifer C. Boldrick
This article discusses multiple hypothesis testing in DNA microarray experiments, where thousands of genes are simultaneously tested for differential expression. The goal is to identify genes whose expression levels are associated with a response or covariate of interest. Due to the large number of tests, traditional methods for controlling Type I errors (false positives) may be too conservative. The article reviews various approaches to multiple hypothesis testing, including the family-wise error rate (FWER), false discovery rate (FDR), and adjusted p-values. It compares different procedures on microarray and simulated data sets, highlighting the trade-offs between controlling Type I errors and maintaining statistical power.

The article also discusses resampling methods, such as permutation, to estimate p-values without assuming a specific distribution for the test statistics. The key challenge is to balance the number of false positives and false negatives while accounting for the complex dependencies among test statistics in microarray experiments. The article concludes that FDR-based methods are often more powerful than FWER-based methods, especially when the number of tests is large.
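The contrast between FWER and FDR control can be sketched with two standard procedures: the Bonferroni correction (FWER) and the Benjamini-Hochberg step-up procedure (FDR). This is a minimal illustration, not the article's own implementation; the p-values below are made-up examples.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha/m; controls the FWER at level alpha."""
    m = len(pvals)
    return pvals <= alpha / m

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH procedure; controls the FDR at level alpha
    under independence (or positive regression dependence)."""
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    # find the largest k with p_(k) <= (k/m) * alpha, then reject H_(1)..H_(k)
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# hypothetical p-values from m = 8 gene-level tests
pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205])
print(bonferroni(pvals).sum())          # -> 1 rejection under FWER control
print(benjamini_hochberg(pvals).sum())  # -> 2 rejections under FDR control
```

On the same p-values, BH rejects more hypotheses than Bonferroni, illustrating the power gain from controlling FDR rather than FWER when many tests are run.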