Hypothesis testing is a common statistical technique for assessing whether a condition in question actually holds. Because in the real world we work with samples rather than the whole population, we need a way to judge whether a trait observed in a sample is likely to hold in the population as well.
To do this, we set up two opposing hypotheses (the null and the alternative) and apply one or more testing procedures (Student's t-test, z-test, F-test, χ²-test, etc.) to reach a conclusion. The ability of a test to correctly reject a null hypothesis that is actually false is called its statistical power, measured as the probability of that rejection.
The probability of failing to reject (i.e. accepting) a null hypothesis that is actually false is called the 'type II error' rate (β). Thus, statistical power = 1 − β. Power is also referred to as the 'sensitivity' of the test.
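Power can be estimated empirically by simulation: repeatedly draw samples where the null hypothesis is known to be false and count how often the test rejects it. The sketch below does this for a two-sample t-test; the effect size, sample size, and significance level are illustrative assumptions, not values from the text.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05       # significance level
n = 30             # sample size per group (assumed)
effect = 0.8       # true mean difference in SD units (assumed Cohen's d)
trials = 2000      # number of simulated experiments

rejections = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)        # group with null mean
    b = rng.normal(effect, 1.0, n)     # group shifted by the true effect
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1

power = rejections / trials            # estimated 1 - beta
print(round(power, 2))
```

The fraction of rejections approximates 1 − β; with these assumed values it comes out near the textbook figure of roughly 0.86 for d = 0.8 and n = 30 per group.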
Since sample size often translates into cost in real life, power analysis gives a fair idea of the minimum sample size needed to keep the type II error rate acceptably low.
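One common way to turn a power requirement into a sample size is the normal-approximation formula n ≈ 2(z₁₋α/₂ + z₁₋β)²/d² per group for a two-sample comparison. A minimal sketch, assuming a two-sided α of 0.05, target power of 0.80, and a medium effect size d = 0.5 (all illustrative choices):

```python
import math
from scipy import stats

alpha = 0.05   # two-sided significance level (assumed)
power = 0.80   # desired power, 1 - beta (assumed)
d = 0.5        # anticipated effect size, Cohen's d (assumed)

z_alpha = stats.norm.ppf(1 - alpha / 2)   # critical value for alpha
z_beta = stats.norm.ppf(power)            # quantile for the power target
n_per_group = math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)
print(n_per_group)
```

With these inputs the formula gives 63 subjects per group; exact t-based calculations (e.g. via a power-analysis library) give a slightly larger number because they account for estimating the variance.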
Analysis of statistical power also helps compare two testing procedures, e.g. a parametric test versus a non-parametric one, and can guide which test to use in similar situations.
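Such a comparison can itself be run as a simulation: apply both tests to the same simulated data and compare rejection rates. The sketch below contrasts the t-test with the non-parametric Mann-Whitney U test under heavy-tailed (Student's t with 3 degrees of freedom) noise, where the rank-based test is typically more powerful; the noise distribution, shift, and sample size are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, trials = 0.05, 40, 1000   # assumed settings
shift = 1.0                         # assumed true location shift

t_rej = u_rej = 0
for _ in range(trials):
    # heavy-tailed noise: Student's t with 3 degrees of freedom
    a = rng.standard_t(df=3, size=n)
    b = rng.standard_t(df=3, size=n) + shift
    if stats.ttest_ind(a, b).pvalue < alpha:
        t_rej += 1
    if stats.mannwhitneyu(a, b, alternative='two-sided').pvalue < alpha:
        u_rej += 1

t_power = t_rej / trials   # estimated power of the t-test
u_power = u_rej / trials   # estimated power of the Mann-Whitney U test
print(round(t_power, 2), round(u_power, 2))
```

Under these heavy-tailed conditions the Mann-Whitney test rejects the false null noticeably more often; with normally distributed data the ordering would narrow or reverse, which is exactly the kind of guidance a power comparison provides.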