SigmaPedia - The Free Online Lean Six Sigma Encyclopedia


Multiple Comparisons


Definition

When the overall ANOVA test of several population Means indicates that at least one pair of population Means differs significantly, you might want to know which pair(s) differ. You could test all the pairs of Means for significance, but this approach poses a problem: when a large number of tests are performed on the same data, the chance of getting a falsely significant result is greatly increased. This problem is called multiplicity.

In fact, if you perform n = 10 tests of significance on the same sample data, each at a significance level of α = 0.05, then (assuming the tests are independent) the probability that at least one of the tests will be significant by random chance alone is given by:

1 - (1 - α)^n = 1 - (1 - 0.05)^10 ≈ 1 - 0.60 = 0.40

So there is actually a 40% chance of committing a Type I Error somewhere among the ten tests! To avoid this, you can use a multiple comparisons procedure to test all the pair-wise differences of Means. The advantage of a multiple comparisons technique is that it controls the overall (combined) rate of false positives (Type I Errors). This overall rate is also called the Experiment-wise error rate.
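As a quick check of the arithmetic above, the short Python sketch below computes the Experiment-wise error rate for n independent tests and shows the simplest remedy, a Bonferroni-style adjustment that runs each test at α/n so the combined rate stays near the nominal α.

```python
alpha = 0.05   # per-test significance level
n = 10         # number of pairwise tests, assumed independent

# Probability that at least one of the n tests is significant
# purely by chance (the Experiment-wise error rate).
fwer = 1 - (1 - alpha) ** n
print(f"Experiment-wise error rate: {fwer:.3f}")           # ~0.401

# Bonferroni adjustment: test each pair at alpha / n so the
# combined false-positive rate stays at or below alpha.
fwer_adjusted = 1 - (1 - alpha / n) ** n
print(f"With Bonferroni adjustment: {fwer_adjusted:.3f}")  # ~0.049
```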

Application

Many multiple comparisons procedures have been proposed over the years, among them:

Fisher's Least Significant Difference (can be used ONLY if the ANOVA test is significant)
Tukey-Kramer method (Tukey's Honestly Significant Difference)
Scheffé's Method
Bonferroni's Adjustment Method
Newman-Keuls Procedure
Duncan's Multiple Range Test
Hsu's Multiple Comparison with the Best
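Most statistical packages implement several of these procedures. As a minimal sketch, the Tukey-Kramer method can be run with the pairwise_tukeyhsd function from the Python statsmodels library; the three groups of measurements below are hypothetical, chosen only for illustration.

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements from three processes, four readings each.
values = np.array([8.2, 8.4, 8.1, 8.5,    # process A
                   9.1, 9.3, 9.0, 9.4,    # process B
                   8.3, 8.2, 8.6, 8.4])   # process C
groups = np.repeat(["A", "B", "C"], 4)

# Tests every pairwise difference of Means while holding the
# Experiment-wise Type I Error rate at alpha = 0.05.
result = pairwise_tukeyhsd(values, groups, alpha=0.05)
print(result)  # table of pairwise differences, confidence
               # intervals, and a reject flag for each pair
```

Note that, unlike Fisher's Least Significant Difference, the Tukey-Kramer method controls the Experiment-wise error rate on its own and does not require a significant ANOVA beforehand.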

External Links

More on Multiple Comparisons from NIST: http://www.itl.nist.gov/div898/handbook/prc/section4/prc47.htm
Multiple Comparison Procedures: http://www.tufts.edu/~gdallal/mc.htm