TIBCO Statistica® Power Analysis and Interval Estimation
The Power Analysis module implements techniques for statistical power analysis, sample size estimation, and advanced confidence interval estimation. The main goal of the first two techniques is to let you decide, while designing an experiment, (a) how large a sample is needed for statistical judgments that are accurate and reliable, and (b) how likely your statistical test is to detect effects of a given size in a particular situation. The third technique supports objectives (a) and (b) and is also useful for evaluating the size of experimental effects in practice.
Performing power analysis and sample size estimation is an important aspect of experimental design, because without these calculations, sample size may be too small or too large. If the sample is too small, the experiment will lack the precision to provide reliable answers to the questions it investigates. If it is too large, time and resources will be wasted, often for minimal gain.
Suppose you are planning a 1-Way ANOVA to study the effect of a drug. While planning the study, you find that a similar study has already been conducted: it had 4 groups, with N = 50 subjects per group, and obtained an F-statistic of 15.4. From this information, you can (a) gauge the population effect size with an exact confidence interval, and (b) use that estimate to set a lower bound on an appropriate sample size for your own study.
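As a minimal sketch of step (a), computed here with SciPy rather than with Statistica itself (the module's internal algorithm is not shown), one can invert the noncentral F cumulative distribution to obtain an exact confidence interval for the noncentrality parameter, and then rescale it to an effect-size metric such as Cohen's f:

```python
# Sketch of "noncentrality interval estimation" for the ANOVA example above,
# using SciPy (an assumption: this is not Statistica's own code).
# Observed F = 15.4, k = 4 groups, n = 50 subjects per group.
import math
from scipy.optimize import brentq
from scipy.stats import ncf

F_obs, k, n = 15.4, 4, 50
dfn, dfd = k - 1, k * (n - 1)      # numerator df = 3, denominator df = 196
N_total = k * n

def nc_at_percentile(p):
    """Noncentrality lambda that places F_obs at the 100*p percentile of the
    noncentral F(dfn, dfd, lambda) distribution."""
    return brentq(lambda nc: ncf.cdf(F_obs, dfn, dfd, nc) - p, 1e-6, 500.0)

# 95% confidence interval for the noncentrality parameter lambda
lam_lo = nc_at_percentile(0.975)   # lower limit: F_obs at the 97.5th percentile
lam_hi = nc_at_percentile(0.025)   # upper limit: F_obs at the 2.5th percentile

# Rescale to Cohen's f effect size: f = sqrt(lambda / N_total)
f_lo, f_hi = math.sqrt(lam_lo / N_total), math.sqrt(lam_hi / N_total)
print(f"95% CI for lambda:    [{lam_lo:.1f}, {lam_hi:.1f}]")
print(f"95% CI for Cohen's f: [{f_lo:.2f}, {f_hi:.2f}]")
```

The lower confidence limit for the effect size is the quantity that can then serve as a conservative input when solving for the sample size of the new study.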
Other features available with this module:
- calculates power as a function of sample size, effect size, and Type I error rate for the tests listed below:
- 1-sample t-test
- 2-sample independent sample t-test
- 2-sample dependent sample t-test
- planned contrasts
- 1-way ANOVA (fixed and random effects)
- 2-way ANOVA
- Chi-square test on a single variance
- F-test on 2 variances
- Z-test (or chi-square test) on a single proportion
- Z-test on 2 independent proportions
- McNemar's test on 2 dependent proportions
- F-test of significance in multiple regression
- t-test for significance of a single correlation
- Z-test for comparing 2 independent correlations
- Log-rank test in survival analysis
- Test of equal exponential survival, with accrual period
- Test of equal exponential survival, with accrual period and dropouts
- Chi-square test of significance in structural equation modeling
- Tests of "close fit" in structural equation modeling confirmatory factor analysis
- calculates probability distributions that are of special value in performing power and sample size calculations
- noncentral distributions are also distinguished by the ability to calculate the noncentrality parameter that places a given observation at a given percentage point of the noncentral distribution; this calculation is essential to the technique of "noncentrality interval estimation"
- routines, which include the noncentral t, noncentral F, noncentral chi-square, binomial, Pearson correlation, and the exact distribution of the squared multiple correlation coefficient, are characterized by their ability to solve for an unknown parameter and to handle "non-null" cases
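To illustrate the kind of calculation these features describe, the sketch below computes power for the 2-sample independent t-test from the noncentral t distribution, and then solves that power function for an unknown parameter, here the per-group sample size. It uses SciPy as a stand-in; Statistica's own routines are not reproduced:

```python
# Illustrative SciPy-based sketch (an assumption: not Statistica's internal code).
from math import sqrt
from scipy.stats import nct, t

def t2_power(d, n_per_group, alpha=0.05):
    """Two-sided power for an independent-samples t-test with equal group
    sizes, standardized effect size d (Cohen's d), and Type I error alpha."""
    df = 2 * n_per_group - 2
    ncp = d * sqrt(n_per_group / 2)    # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)  # two-sided critical value
    # Power = P(|T| > t_crit) when T follows the noncentral t with parameter ncp
    return nct.sf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)

def n_for_power(d, target=0.80, alpha=0.05):
    """Solve for the smallest per-group n that reaches the target power."""
    n = 2
    while t2_power(d, n, alpha) < target:
        n += 1
    return n

print(round(t2_power(0.5, 64), 3))  # power for a medium effect, n = 64 per group
print(n_for_power(0.5))             # per-group n needed for 80% power
```

The same pattern, evaluating power from a noncentral distribution and inverting it for sample size, effect size, or alpha, applies to each of the tests listed above, with the appropriate distribution substituted.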
For additional information on noncentrality interval estimation see Steiger and Fouladi (1997).