The choice of a valid study design is critical. Selection of a design and its features appropriate to the study's context will minimize threats to internal validity by providing unbiased estimates of effect measures (all italicized words are defined in Data Supplement S1, available as supporting information in the online version of this paper). Research is generally grouped into four general categories: experimental, quasi-experimental, preexperimental, and observational designs. Previous ED-based research has used all of these designs to maximize the internal validity of research performed to improve the public's health broadly.
Emergency department–based public health research may also be separated into four specific categories that include surveillance, screening or testing, interventions, and economic evaluation. Ideal study designs depend primarily on the study question being asked and the type of investigation being conducted; however, some universal principles that increase the methodologic quality of the studies do apply.
Surveillance is a term that describes the systematic collection of population-based information, primarily to report disease occurrence and etiology. Generally, surveillance has been divided into research- or non–research-related categories. Surveillance is likely to be nonresearch when it involves the regular, ongoing collection and analysis of health-related data, conducted to monitor the frequency of occurrence and distribution of disease or injury in a population. As such, these systems typically are under the purview of governmental and international organizations, including the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO).1,2
Surveillance conducted under a research purview, on the other hand, involves the collection and analysis of health-related data conducted either to generate knowledge that is applicable to populations and settings other than those from which the data were collected or to contribute new knowledge about the health condition. Surveillance research in EDs is becoming increasingly common.3,4
Surveillance research, by definition, is observational, and the most robust observational design includes prospective data collection. Surveillance research may also involve linking data collected in the ED to other sources of data, including those from national organizations, state health departments, or legal or welfare systems. Additionally, the validity and generalizability of surveillance research in the ED may be increased by use of multiple centers, standardized definitions of cases and outcomes, and around-the-clock recruitment. Excellent examples of such research include defining the prevalence of violence-related injury and sexually transmitted infections (STIs).5–7
A large, high-impact ED-based surveillance study conducted near the beginning of the HIV epidemic in the ED at Johns Hopkins University used a then-novel identity-unlinked testing approach to characterize the prevalence of, and demographic factors associated with, HIV infection.8 This approach has since been widely adopted to help characterize various other public health issues in both EDs and other health care settings.9
Linking data collected in the ED with other data sources related to major public health issues may improve our understanding of and perspective on these health issues. For example, Davidson et al.10 in 1997 linked ED visits for alcohol-related complaints with the Colorado Death Registry to learn that 5-year mortality rates among alcohol-intoxicated patients were 2.4 times those of an age- and sex-matched comparison group and that these alcohol-related visits were a significant predictor of increased morbidity and mortality. Similarly, a study of men with injuries inflicted by a female partner found that over 50% of the injured men had previous histories of arrest for domestic violence.11
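When a shared identifier exists across data systems, the mechanics of such linkage are simple. The following is a minimal sketch of deterministic linkage of ED visits to a death registry, in the spirit of the Davidson et al. study; the identifiers, field names, and dates are hypothetical. In practice, a shared unique identifier is often unavailable, and probabilistic linkage on names, dates of birth, and similar fields is required.

```python
import pandas as pd

# Hypothetical deterministic linkage of ED visits to a death registry.
# All identifiers, field names, and dates below are illustrative only.
ed_visits = pd.DataFrame({
    "patient_id": ["A1", "B2", "C3"],
    "visit_date": pd.to_datetime(["2010-01-05", "2010-02-10", "2010-03-15"]),
})
death_registry = pd.DataFrame({
    "patient_id": ["B2"],
    "death_date": pd.to_datetime(["2013-06-01"]),
})

# A left join keeps every ED visit; visits with no registry match
# receive a missing death_date.
linked = ed_visits.merge(death_registry, on="patient_id", how="left")

# Flag deaths within 5 years of the index ED visit, as in a mortality
# follow-up analysis (a missing death_date evaluates to False).
linked["died_within_5y"] = (
    (linked["death_date"] - linked["visit_date"]).dt.days <= 5 * 365.25
)
print(linked)
```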
Real-time surveillance research has the potential for immediate reporting to the public health community, serving as an alert of incipient and ongoing threats to the public's health, and it can also be used to assess the effectiveness of ongoing interventions.3
ED-based infectious disease surveillance for HIV and methicillin-resistant Staphylococcus aureus (MRSA), for example, was critical in the early identification of these national epidemics and helped inform public health strategies directed at curtailing their spread.12
Audit studies can be used to track access to outpatient care for vulnerable populations.13–15
Lack of access to outpatient primary and specialty care is both a major public health concern and a contributor to ED crowding. Auditing, which relies on experimental designs, is a research tool that employs an intentionally deceptive approach to uncover intentional or unintentional discrimination in a variety of markets (e.g., housing, credit, employment).16,17
Asplin et al.14 incorporated this approach into the health care sector by having the same person call the same clinic twice with the same scripted request for an appointment following a visit to the ED. The investigators varied only the person's stated insurance status to assess its impact on access to care, demonstrating that patients with private insurance were more likely to receive an appointment than those with Medicaid or those without insurance.
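Analytically, such an audit often reduces to comparing appointment rates across the varied attribute. The sketch below uses hypothetical counts and an unpaired two-proportion z-test for simplicity; because each clinic is called twice, a paired analysis such as McNemar's test would better match the actual design.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical audit counts: appointments granted out of 100 scripted
# calls per insurance condition (all numbers are illustrative).
granted = [78, 45]          # private insurance, Medicaid
calls = [100, 100]

z_stat, p_value = proportions_ztest(granted, calls)
print(f"appointment rate: private {granted[0]/calls[0]:.0%}, "
      f"Medicaid {granted[1]/calls[1]:.0%}")
print(f"two-proportion z = {z_stat:.2f}, p = {p_value:.4f}")
```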
Capture–recapture is a relatively uncommon surveillance technique originally described for population biology. In recent years, however, it has been extended to epidemiologic investigations to estimate incidence or prevalence.18,19
Although several capture–recapture approaches have been used, the general methodology involves examining the overlap in identification of cases from different data sources or populations. By calculating the expected number of persons in each combination of data sources, capture–recapture methodology allows for estimation of the number of persons identified by no data source. Adding the persons identified by no source to the number of unique persons already identified provides an estimate of disease prevalence.20–22
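As a concrete illustration of the two-source case, the sketch below applies the Chapman variant of the Lincoln–Petersen estimator with hypothetical counts; real applications must also verify the method's assumptions (a closed population, independent sources, and accurate case matching).

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Nearly unbiased two-source capture-recapture estimate of the total
    population size, including cases identified by neither source.

    n1 -- cases found in source 1 (e.g., ED records)
    n2 -- cases found in source 2 (e.g., a state registry)
    m  -- cases identified by BOTH sources (the recaptured overlap)
    """
    if not 0 <= m <= min(n1, n2):
        raise ValueError("overlap must lie between 0 and min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative counts: 120 ED cases, 90 registry cases, 30 in both.
total = chapman_estimate(120, 90, 30)
unseen = total - (120 + 90 - 30)    # cases missed by both sources
print(f"estimated total cases: {total:.0f} (missed by both: {unseen:.0f})")
```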
An intervention is broadly defined as anything introduced into a research environment that is specifically controlled by the investigator. Examples of interventions in ED-based public health research include a unique approach to screening, use of different types of counselors or different approaches to providing specific content to patients, or a multifaceted screening or testing program. Studies that attempt to evaluate the efficacy or effectiveness of an intervention typically use experimental designs; however, quasi-experimental and preexperimental designs have been used with some success. Experimental designs, often referred to as randomized controlled trials, are considered the highest-quality and most valid approach to assessing the impact of an intervention. This design provides the opportunity to create two or more study groups that are theoretically balanced across all measured and unmeasured characteristics, with the exception of the intervention. As such, investigators are better able to assess the effect of the intervention while minimizing bias. There are a number of well-designed randomized controlled trials described or discussed in the ED-based public health research literature.23–28
Randomized controlled trials are subject to certain biases, two of the more common being selection bias and measurement bias. Selection bias occurs when the study sample does not represent the population from which it was selected. This may occur when the sample size is too small, when refusal rates are high, or when eligibility criteria are too stringent. As an example, ED-based studies evaluating brief interventions for unhealthy alcohol use generally have strict inclusion and exclusion criteria. Many of these studies exclude patients with alcohol dependence, other drug use, or psychiatric illness, thus selecting a sample that is not representative of the broader group of patients with unhealthy alcohol use. Also, choosing an ED as the site for recruitment limits the sample population and creates a potential bias relative to other clinical venues.
A number of quasi-experimental designs have been used in ED-based public health research, often when performance of a randomized controlled trial is too expensive, is premature relative to the maturation of the content area, or is simply impractical or not feasible. Specific quasi-experimental designs include the nonequivalent control group, interrupted time–series, and equivalent time–samples designs. Equivalent time–samples designs, for example, in which the intervention is alternated sequentially or in a randomized fashion with the control condition, allow the investigator to approximate true randomization. This approach attempts to balance two or more study groups (depending on the number of times the intervention and control are alternated, the length of each time period, and the number of subjects enrolled in each period). All quasi-experimental designs use some form of quasi-randomization to assess the effect of an intervention.
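As an illustration of the equivalent time–samples idea, the sketch below randomizes the order of intervention and control conditions within balanced two-week blocks; the block size and weekly granularity are arbitrary choices for illustration.

```python
import random

def build_schedule(n_weeks: int, seed: int = 7) -> list:
    """Randomly order intervention and control within balanced
    two-week blocks so conditions alternate over the study period."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_weeks // 2):
        block = ["intervention", "control"]
        rng.shuffle(block)          # randomize order within each block
        schedule.extend(block)
    return schedule

print(build_schedule(8))
# e.g., ['control', 'intervention', 'intervention', 'control', ...]
```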
Preexperimental designs are considered the weakest designs in terms of ensuring internal validity. In general, two unique preexperimental designs exist: the one-group pretest–posttest design (also commonly referred to as the "before–after" or "pre–post" design) and the static group comparison design (also commonly referred to as the "historical control group" or "nonconcurrent control group" design). These designs do not allow for control of secular trends, and the groups being compared are likely to be dissimilar. The dissimilarity is most apparent when the physician is selecting patients for an intervention. When a preexperimental design is used, the two groups are likely not comparable, thus requiring multivariable modeling to adjust for variation between the groups while assessing the independent effect of the intervention. This approach can result in erroneous conclusions about the usefulness of an intervention. Regardless of the study's design, duplicate findings by different groups of investigators in different settings are required to provide evidence that an intervention is effective.
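Such a regression-based adjustment might look like the following sketch, in which the data file and column names are hypothetical; as noted above, no amount of modeling removes confounding by unmeasured differences or secular trends.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical before-after cohort; the file and column names are
# illustrative. 'period' is 0 for the historical control group and 1
# for the intervention group; age and sex are measured confounders.
df = pd.read_csv("prepost_cohort.csv")

# Logistic model estimating the intervention effect adjusted for the
# measured differences between the nonconcurrent groups. Unmeasured
# confounding and secular trends remain uncontrolled.
model = smf.logit("outcome ~ period + age + C(sex)", data=df).fit()
print(model.summary())
```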
Economic analyses are important components of program evaluation, providing integrated financial considerations related to public health interventions. Cost–effectiveness, or the cost per outcome (e.g., per quality-adjusted life-year), must be distinguished from cost–benefit, in which both costs and outcomes are expressed in monetary terms. In cost–effectiveness research, the denominator can be identified and enumerated; in cost–benefit analysis, the benefit is often nebulous and difficult to value in dollars. While many cost–effectiveness analyses related to public health interventions exist, there is still a need for critical economic evaluation of many specific ED interventions.29–35
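The core arithmetic of a cost–effectiveness comparison is the incremental cost-effectiveness ratio (ICER): the difference in costs divided by the difference in health outcomes between an intervention and its comparator. A minimal sketch with hypothetical figures:

```python
# Incremental cost-effectiveness ratio (ICER): additional cost per
# additional quality-adjusted life-year (QALY) gained relative to a
# comparator. All figures below are hypothetical.
cost_program, cost_usual_care = 250_000.0, 180_000.0   # total costs ($)
qaly_program, qaly_usual_care = 120.0, 105.0           # QALYs produced

icer = (cost_program - cost_usual_care) / (qaly_program - qaly_usual_care)
print(f"ICER: ${icer:,.0f} per QALY gained")           # about $4,667/QALY
```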
Most cost–effectiveness analyses are also constructed using theoretical models (i.e., combining data from multiple unique sources), which is useful, but use of actual clinical trial data to inform the economic evaluation is essential. Economic analyses performed concurrently with controlled clinical trials may provide a strong basis for understanding the financial impact of the intervention being studied and can be used to leverage funding to support research. Conducting a cost–benefit analysis, however, in which the actual costs of an intervention (the numerator in the analysis) must be identified, is complex and generally requires the input of a health care economist.