In most areas in life, it is difficult to work with populations and hence researchers work with samples. The calculation of the sample size needed depends on the data type and distribution. Elements include consideration of the alpha error, beta error, clinically meaningful difference, and the variability or standard deviation. The final number arrived at should be increased to include a safety margin and the dropout rate. Over and above this, sample size calculations must take into account all available data, funding, support facilities, and ethics of subjecting patients to research.
In most areas in life, it is very difficult to work with populations. During elections, for instance, news channels interview a few hundred people and predict results based on their choices. Similarly, in a factory manufacturing light bulbs, a few bulbs are chosen at random to assess their quality. Likewise, in research, while it is ideal to work with the entire population, it is almost impossible to do so. Hence researchers choose to work with samples. Sample size calculations enable researchers to draw robust conclusions from a limited amount of information and also permit generalization of results. It is, however, important to remember that since it is very difficult to predict the outcome of any clinical study or laboratory experiment, sample size calculations will always remain approximate.
There is no single unique method for estimating the minimum sample size required for a study, but the concepts underlying most methods are similar. The determination of the sample size is critical in planning clinical research because it is usually the most important factor determining the time and funding needed to perform the experiment. In most studies, there is a primary research question that the researcher wants to investigate, and sample size calculations are based on this question. Sample size calculations must take into account all available data, funding, support facilities, and the ethics of subjecting patients to research. The present paper outlines the principles of sample size calculation for randomized controlled trials (RCTs) with a few solved examples.
Sample size calculations begin with an understanding of the type of data and distribution we are dealing with. Very broadly, data are divided into quantitative (numerical) and categorical (qualitative) data. For the former, the mean responses in the two groups, μ1 and μ2, are required, as well as the common standard deviation of the two groups. For categorical data, p1 and p2, the proportions of successes in the two groups, are needed. This information is usually obtained from the published literature, a pilot study, or at times a guesstimate. The other two key components are the alpha and beta errors. Because the estimated sample size represents the minimum number of subjects required for the study, a “safety factor” should be added; the size of the safety factor is again an educated guess. Additions for drop-outs/attrition during the course of the study should also be made. Apart from this, an understanding of whether the data are normally distributed (follow the Gaussian or bell-shaped curve) or otherwise is also needed.
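The attrition adjustment described above is commonly applied by dividing the calculated sample size by one minus the expected dropout fraction, so that the target number of subjects remains after attrition. A minimal sketch in Python (the function name is illustrative, not from the paper):

```python
from math import ceil


def inflate_for_dropout(n, dropout_rate):
    """Inflate a calculated sample size so that roughly n subjects
    remain after the expected fraction of dropouts.

    Dividing by (1 - dropout_rate) is the usual adjustment: with a 20%
    expected dropout, 63 subjects per group become ceil(63 / 0.8) = 79.
    """
    return ceil(n / (1 - dropout_rate))
```

Any additional “safety factor” would be applied on top of this, and remains a matter of judgment rather than formula.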
The calculation of sample size based on power considerations requires that an investigator specify the points given below. The first three items are under the control of the investigator:
The size of the effect that is clinically worthwhile to detect (d). This is the difference between μ1 and μ2 for quantitative data and between p1 and p2 for categorical data. It is also called the clinically meaningful difference, that is, the difference that would make the physician change his or her practice.
The probability of falsely rejecting a true null hypothesis (α-error). This is also called the false positive error and is the probability of finding a difference where none exists. This error is perceived to be the more dangerous of the two errors, since it can impact clinical practice. It is also called the regulator’s error. The alpha error is linked to the P-value or probability value and is conventionally set at 5%.
The probability of failing to reject a false null hypothesis (β-error). This is also called the false negative error and is the probability of NOT finding a difference when one actually exists. This is conventionally set either at 10% or 20% and is also called the investigator’s error.
The standard deviation of the population being studied (SD or σ). This is the variability or spread associated with quantitative data.
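The four inputs above feed into the standard normal-approximation formulas for two-group comparisons: for means, n per group = 2(z<sub>α/2</sub> + z<sub>β</sub>)²σ²/d², and for proportions, n per group = (z<sub>α/2</sub> + z<sub>β</sub>)²[p1(1−p1) + p2(1−p2)]/(p1−p2)². A sketch in Python, using only the standard library (function names are illustrative; this uses the unpooled-variance form for proportions, and other texts use slightly different variants):

```python
from math import ceil
from statistics import NormalDist


def n_per_group_means(d, sd, alpha=0.05, beta=0.20):
    """Sample size per group for a two-sided comparison of two means:
    n = 2 * (z_{alpha/2} + z_beta)^2 * sd^2 / d^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(1 - beta)       # e.g. 0.84 for 80% power
    return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / d ** 2)


def n_per_group_proportions(p1, p2, alpha=0.05, beta=0.20):
    """Sample size per group for a two-sided comparison of two
    proportions (normal approximation, unpooled variance)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)


# To detect a 10-unit difference in means when sd = 20, at alpha = 5%
# and 80% power:
print(n_per_group_means(d=10, sd=20))           # 63 per group

# To detect a change in success rate from 40% to 60%:
print(n_per_group_proportions(p1=0.4, p2=0.6))  # 95 per group
```

These are minimum numbers per group; as noted above, they should then be inflated for the safety margin and the expected dropout rate.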
Standard values of the alpha and beta errors are given in the solved examples and can be found in most statistics books. The examples below can be solved by hand using simple or scientific calculators. The document at http://www.idfbridges.org/files/BRIDGES-sample-size-calculation-and-example-of-budget.pdf provides an easy introduction to using online sample size calculators.
Source of Support: Nil
Conflict of Interest: None declared