Statistical tests can be either parametric or non-parametric.
Parametric Tests: The Power of Normality
- Parametric tests assume that the data follow an approximately normal distribution.
- They involve continuous or interval-scale variables and require a sufficiently large sample size (typically n > 30).
- They also assume homogeneity of variances across groups (homoscedasticity); both the normality and equal-variance assumptions can be checked in practice, as in the sketch below.
These tests have higher statistical power: when their assumptions hold, they are more likely to correctly reject a false null hypothesis.
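The following is a minimal sketch of how these assumptions can be verified in Python, assuming scipy is available; the two samples are synthetic illustration data, with the Shapiro-Wilk test checking normality and Levene's test checking homoscedasticity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.0, size=40)  # hypothetical sample A
group_b = rng.normal(loc=5.5, scale=1.2, size=40)  # hypothetical sample B

# Normality: Shapiro-Wilk tests H0 "the sample comes from a normal distribution".
for name, sample in [("A", group_a), ("B", group_b)]:
    w, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk, group {name}: W = {w:.3f}, p = {p:.3f}")

# Homoscedasticity: Levene's test checks H0 "the groups have equal variances".
stat, p = stats.levene(group_a, group_b)
print(f"Levene's test: W = {stat:.3f}, p = {p:.3f}")
```

A small p-value in either check suggests the corresponding assumption is violated, which is the usual cue to consider a non-parametric alternative.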
Examples of parametric tests we have covered include the Z-test (based on the standard normal distribution), Student’s t-test, ANOVA (Analysis of Variance), and the Pearson correlation coefficient r.
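As a brief illustration of two of these tests, the sketch below runs Student's t-test and computes Pearson's r with scipy.stats; the samples and the strength of the correlation are made up for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(10, 2, size=50)
y = 0.8 * x + rng.normal(0, 1, size=50)  # y constructed to correlate with x

# Student's t-test for two independent samples (equal variances assumed).
t_stat, p = stats.ttest_ind(x, y)
print(f"t = {t_stat:.3f}, p = {p:.3f}")

# Pearson correlation coefficient r between the two variables.
r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.3f}")
```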
Non-Parametric Tests: Versatility and Flexibility
Non-parametric tests, by contrast, do not assume any particular distribution and do not rely on estimating population parameters such as the mean, variance, or standard deviation.
In simple terms, non-parametric tests can be broadly categorized as follows:
1) Goodness-of-fit tests, which compare observed values with expected values (as in the chi-square sketch below).
2) Tests that serve as non-parametric alternatives to parametric tests.
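To make the first category concrete, here is a minimal goodness-of-fit sketch: a chi-square test comparing hypothetical die-roll counts with the equal counts expected from a fair die. The counts are invented for illustration.

```python
from scipy import stats

observed = [18, 22, 16, 25, 20, 19]    # hypothetical counts of faces 1-6
expected = [sum(observed) / 6] * 6     # a fair die expects equal counts

# Chi-square goodness-of-fit: H0 "observed counts match the expected ones".
chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # a large p gives no evidence against fairness
```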
Examples of non-parametric tests we have discussed include the Chi-square test, the Wilcoxon test, the Spearman rank correlation coefficient, and the Kendall rank correlation coefficient.
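The sketch below shows some of these as counterparts of the second category: the Wilcoxon signed-rank test in place of a paired t-test, and the Spearman and Kendall rank correlations in place of Pearson's r. The skewed paired samples are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.exponential(scale=2.0, size=30)        # skewed, non-normal data
after = before + rng.normal(0.3, 0.5, size=30)      # paired measurements

# Wilcoxon signed-rank test: paired-samples alternative to the paired t-test.
w_stat, p = stats.wilcoxon(before, after)
print(f"Wilcoxon: W = {w_stat:.3f}, p = {p:.3f}")

# Spearman and Kendall rank correlations: alternatives to Pearson's r.
rho, p_rho = stats.spearmanr(before, after)
tau, p_tau = stats.kendalltau(before, after)
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3f})")
print(f"Kendall tau = {tau:.3f} (p = {p_tau:.3f})")
```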
General Considerations
Generally, parametric tests are more powerful than non-parametric tests, but they require the data to meet the conditions described above. When those conditions are not met, non-parametric tests provide a sound alternative.
Non-parametric tests can also be applied even when the data do meet the parametric requirements; in such cases they are often chosen for their robustness or to avoid overly restrictive assumptions.
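One way to put this decision into practice is to check normality first and fall back to a rank-based test when the check fails. The sketch below is only illustrative: the 0.05 cut-off, the synthetic samples, and the choice of the Mann-Whitney U test (the unpaired rank-sum counterpart of the Wilcoxon test) are assumptions, not a prescribed procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.lognormal(mean=1.0, sigma=0.8, size=35)  # skewed sample
b = rng.lognormal(mean=1.2, sigma=0.8, size=35)

# Treat the data as "normal enough" if Shapiro-Wilk is non-significant for both groups.
normal_enough = all(stats.shapiro(s).pvalue > 0.05 for s in (a, b))

if normal_enough:
    stat, p = stats.ttest_ind(a, b)       # parametric: Student's t-test
    test_name = "t-test"
else:
    stat, p = stats.mannwhitneyu(a, b)    # non-parametric alternative
    test_name = "Mann-Whitney U"

print(f"{test_name}: statistic = {stat:.3f}, p = {p:.3f}")
```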