

Health Serv Res. 2011 October; 46(5): 1628–1645.
PMCID: PMC3207196

Comparative Logic Modeling for Policy Analysis: The Case of HIV Testing Policy Change at the Department of Veterans Affairs

Erika M Langer, M.S., Assistant Professor, Allen L Gifford, M.D., and Kee Chan, Ph.D.



Objective

Logic models have been used to evaluate policy programs, plan projects, and allocate resources. Logic Modeling has rarely been used for policy analysis in health services research, but it can be helpful in evaluating the content and rationale of health policies. Comparative Logic Modeling is applied here to human immunodeficiency virus (HIV) policy statements from the Department of Veterans Affairs (VA) and the Centers for Disease Control and Prevention (CDC). We created visual representations of proposed HIV screening policy components in order to evaluate their structural logic and research-based justifications.

Data Sources and Study Design

We performed content analysis of VA and CDC HIV testing policy documents in a retrospective case study.

Data Collection

Using comparative Logic Modeling, we examined the content and primary sources of policy statements by the VA and CDC. We then quantified evidence-based causal inferences within each statement.

Principal Findings

VA HIV testing policy structure largely replicated that of the CDC guidelines. Despite similar design choices, chosen research citations did not overlap. The agencies used evidence to emphasize different components of the policies.


Conclusions

Comparative Logic Modeling can be used by health services researchers and, more generally, by policy analysts to evaluate structural differences in health policies and to analyze the research-based rationales used by policy makers.

Keywords: Evidence-based practice, HIV, health policy, Centers for Disease Control and Prevention (U.S.), Veterans Affairs (U.S.)

Health care decision makers have used research evidence to justify adoption of Centers for Disease Control and Prevention (CDC) human immunodeficiency virus (HIV) testing guidelines into a variety of health care settings. In 2006, the CDC released revised HIV testing guidelines for adults and adolescents (Branson et al. 2006). Within the U.S. Department of Veterans Affairs (VA), reports emerged indicating that HIV testing rates were low (Owens et al. 2007), prompting reevaluation of VA HIV testing practices and policies. As a result, in 2009, the VA adopted CDC recommendations by eliminating written informed consent requirements for HIV testing and making testing a routine part of Veterans' health services (Department of Veterans Affairs 2009).

Translation of specific clinical recommendations and evidence into policies and practice within health care systems is a major challenge. If the structural logic and rationale of the CDC recommendations are not clear, the initiatives and resource allocations necessary for a change in testing policy could be difficult to implement. James and Jorgensen (2009) have suggested that research utilization theory offers a robust conceptual framework for assessing the policy process. As health care settings implement the CDC guidelines, there is a pressing need for effective evaluation tools that reveal the evidence-based resources, inputs, and outputs of policy in a systematic, logical, and transparent manner. Logic Modeling is a technique that offers a way to analyze and quantify how policy makers use research—how their “research utilization” informs the process and outcomes of translating specific research-based knowledge into evidence-based practice (Rich 1997).

To understand research utilization, we propose a novel application of the logic model. Logic Modeling provides a visual representation of input, throughput, and output components that are brought together to produce intended change. While Logic Modeling can be used after completion of primary activities as a way of evaluating how well policy was able to meet intended outcomes, it may also be used earlier in institutional change processes to plan for and guide future evaluation. Logic Modeling as an evaluation tool has been used widely to examine the resources, inputs, outputs, and outcomes of programs in a clear and systematic fashion. Therefore, we applied Logic Modeling to better understand the evolution of VA HIV testing policy and to assess how evidence was brought to bear on the policy design.

Over the past 40 years, logic models have largely been applied to evaluations of specific social programs (Wholey 1987), but Logic Modeling has also been used in both community-based and systems-level initiatives (Julian 1997; Moyer, Verhovsek, and Wilson 1997; Kaplan and Garrett 2005), in planning (Dwyer 1996; Macaskill et al. 2000), management (Millar, Simeone, and Carnevale 2001), and preevaluation to develop indicators and document outcomes (Kellogg Foundation 2004; Innovation Network 2008). Logic Modeling can stimulate reflection (Moyer, Verhovsek, and Wilson 1997), enable communication, and promote continued learning toward health service objectives (Kellogg Foundation 2004; Innovation Network 2008).

Policy documents such as the VA HIV directive and CDC HIV recommendations represent endpoints within the decision making process and are particularly relevant to successful replication of policies within other contexts. Used as primary sources for analysis, policy documents can counter the problematic “fuzziness” of health policy by providing insights into the formal rules and intentions of policy measures (Kroneman and van der Zee 1997). James and Jorgensen (2009) have suggested that by working backward from final policy statements, it is possible to reconstruct policy decision making to explore knowledge utilization, a term which encompasses both scientific and nonscientifically generated information sources apparent in the policy making process (Trostle, Bronfman, and Langer 1999; Dobrow, Goel, and Upshur 2004). In this formulation, “research utilization” is a sub-type of knowledge utilization that can describe the development of evidence-based practices.

Use of research data in policy making has previously been characterized as instrumental (i.e., direct, actionable), conceptual (i.e., diffuse, gradually enlightening), or symbolic (i.e., strategic, tactical) (Pelz 1978; Weiss 1979; Trostle, Bronfman, and Langer 1999; Hanney et al. 2003; Almeida and Báscolo 2006). We explore the symbolic use of research evidence. This use can occur to attain political legitimacy, support a position, give confidence, reduce uncertainty, and raise financial resources for policy decisions (Hanney et al. 2003).

Despite the personal, economic, and public health benefits of HIV screening, the revised 2006 CDC guidelines have not been universally adopted. Lack of consensus in how testing policy should be applied may stem in part from lack of transparency in how evidence-based resources, inputs, and outputs have been synthesized, translated, and adopted in different settings. We propose that when informed by research utilization theory, policy Logic Modeling can generate this transparency. Modeling policy alongside causal assumptions allows us to compare across policy settings in order to better understand the rationales of specific policy choices. Where research-based justifications are entirely lacking in the model, there may exist gaps in CDC or VA policy logic. We propose that by generating insights into the policy making process, comparative Logic Modeling is thus an effective tool for health policy analysis.


Methods

Using final CDC and VA policy statements on HIV testing (Branson et al. 2006; Department of Veterans Affairs 2009), this case study retrospectively examines the structural logic and research-based rationale of each policy. To describe the structural logic of each agency's policy design, we adopted the outcomes-based logic model from the W. K. Kellogg Foundation (2004). We used content analysis of agency policy documents to identify elements of the five policy design components presented in the model. These components are resources/inputs, activities, outputs, outcomes, and impact (labeled 1–5 in Figure 1).

Figure 1
Logic Model with Policy Design Components, 1–5, and Research-Based Links, A–D

Comparison of Design Logic

Resources/inputs include space, technology, equipment, materials, and the human, financial, organizational, and community inputs available to direct toward proposed activities. Activities include processes, tools, events, technology, and actions needed to bring about intended change, and encompass the services, products, advocacy, and infrastructures of the intervention. Outputs are the direct measurable or tangible products of activities quantified as types, levels, and targets of delivered services. They are not themselves the anticipated change, but they help to assess how well change is being implemented. Outcomes are the individual, community, systemic, or organizational changes to behavior, knowledge, skills, status, or level of functioning. Impact is the fundamental change that occurs over the longer term (Kellogg Foundation 2004; Innovation Network 2008).
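The component definitions above can be represented as a simple data structure. The sketch below is a hypothetical illustration, not part of the original analysis: it encodes the five components and the four adjacent links (A–D) described in Figure 1, with one invented example element.

```python
# Hypothetical sketch: the Kellogg outcomes-based logic model as data.
# Components 1-5 and the adjacent "if-then" links A-D from Figure 1.

COMPONENTS = [
    "resources/inputs",  # 1
    "activities",        # 2
    "outputs",           # 3
    "outcomes",          # 4
    "impact",            # 5
]

# Each link (A-D) connects one component to the next adjacent component.
LINKS = {
    label: (COMPONENTS[i], COMPONENTS[i + 1])
    for i, label in enumerate("ABCD")
}

# A policy's logic model is then a mapping of component -> quoted elements.
va_model = {component: [] for component in COMPONENTS}
va_model["activities"].append("eliminate written informed consent")
```

In this representation, coding a policy document amounts to appending direct quotes to the appropriate component list, which makes later cross-checking and comparison between agencies straightforward.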

We used these definitions to construct separate logic models for the CDC and VA HIV testing policy statements. First, we categorized direct quotes from the statements according to the components of the logic model, and we cross-checked this categorization between researchers. Next, we compared structural components across logic models to determine the extent of policy overlap, and we abbreviated these quotes for the purpose of display (see Tables 1 and 2). Sections of the CDC guidance pertaining to HIV screening for adolescents, pregnant women, and their infants were excluded from the analysis, as maternity care is typically provided outside the VA, and the agency does not provide care for adolescents or infants (Department of Veterans Affairs 2010).

Table 1
Logic Modeling of the Five Components of HIV Screening Policy from the Centers for Disease Control and Prevention
Table 2
Logic Modeling of the Five Components of HIV Screening Policy from the Department of Veterans Affairs

Comparison of Design Rationales

Next, we adapted the logic model to draw specific attention to the presence or absence of a research-based rationale or evidence-based practice for policy design. The research-based policy rationale was described through use of the model arrows or “links” (labeled A–D in Figure 1). These represent causal inferences of the design which connect one policy component to another in an “if-then” statement (Kellogg Foundation 2004). The causal inferences of link B, for example, state that if the policy includes certain activities, then it will produce a particular output. This causal inference may or may not be supported explicitly by research evidence. While causal inferences may be supported by a variety of information types, our study was concerned only with research-based links.

To quantify research-based causal inferences, we first identified all research references or formal citations in the texts. We then looked to see if these references were used to support causal inferences about any elements within the five policy design components, as determined previously. To be counted as a link, at least two elements belonging to different, adjacent model components had to be present. The number of links for each location (A–D) in the logic model was counted and compared within and across policy designs (see Figure 2).

Figure 2
Research-Based Causal Inferences in Two Federal HIV Testing Policy Statements

To assess accuracy and permit possible revision to these coding rules, we piloted them on the proposed rule for Veteran HIV testing policy that was published in the Federal Register on December 29, 2008 (Department of Veterans Affairs 2008). This early draft of the VA policy invited public comment prior to the final announcement on July 16, 2009. As a result of the pilot, we assigned links spanning more than two elements to all relevant pairings; for example, a research-based link between a single output and two or more outcomes would be counted as two pairings for Link C of the model (i.e., both “Output 1 > Outcome 1” and “Output 1 > Outcome 2”).
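The counting rule above can be sketched in a few lines of code. This is a hypothetical illustration only: the element names and citation structure are invented, not drawn from the policy texts, and the sketch assumes each coded citation records which elements it connects under each component.

```python
from itertools import product

# Link labels A-D connect adjacent logic model components (Figure 1).
LINKS = {
    "A": ("resources/inputs", "activities"),
    "B": ("activities", "outputs"),
    "C": ("outputs", "outcomes"),
    "D": ("outcomes", "impact"),
}

def count_links(citations):
    """Tally research-based causal inferences per link location.

    Each citation is a dict mapping a component name to the list of
    elements that one research reference was used to connect. A link
    spanning more than two elements is expanded into all pairwise
    combinations across the two adjacent components.
    """
    counts = {label: 0 for label in LINKS}
    for cite in citations:
        for label, (src, dst) in LINKS.items():
            pairs = product(cite.get(src, []), cite.get(dst, []))
            counts[label] += sum(1 for _ in pairs)
    return counts

# Hypothetical citation: one output linked to two outcomes,
# which expands into two Link C pairings under the pilot rule.
example = [{"outputs": ["increased testing rates"],
            "outcomes": ["earlier diagnosis", "linkage to care"]}]
```

Running `count_links(example)` yields two pairings under Link C and none elsewhere, mirroring the “Output 1 > Outcome 1” and “Output 1 > Outcome 2” expansion described above.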

The final comparison quantified the number of references given and the number of research-based causal inferences under each link of the model, indicating where each agency focused its evidentiary emphasis. We incorporated these findings into a conceptual framework connecting policy context, content, and rationale (Figure 3).

Figure 3
Conceptual Framework for Comparative Logic Modeling across Two Health Policies


Results

CDC and VA HIV testing policy shared design elements under every component of the logic model (see Tables 1 and 2).


Resources/Inputs

Both agencies invited community input into the design process. Only the CDC mentioned potential barriers to proposed testing activities in the form of human, financial, organizational, and legal resources that might not be readily available. The VA listed technological resources, such as an educational HIV website, electronic medical records, and a computerized provider ordering system, that could better enable policy activities.


Activities

Both agencies proposed routine, voluntary HIV testing and removed prior requirements for written consent and pretest counseling. They required oral informed consent and its documentation within the medical record, as well as provision of multilingual educational materials to accompany testing. Both recommended annual repeat testing of patients at high risk for infection. The VA differed from the CDC in that it specifically eliminated a requirement for posttest counseling. The CDC did not require direct personal contact between patient and provider to convey negative test results, but it did recommend counseling referrals for high-risk patients and required efforts that would link patients who test positive to counseling. The CDC proposed additional criteria for screening in populations where HIV prevalence is greater than 1 percent. The CDC noted that where these policy changes were incompatible with existing state laws, steps should be taken to resolve the conflicts.


Outputs

Both agencies anticipated increased screening rates. The CDC recommended that all health care settings implement HIV testing of patients aged 13–64, patients seeking treatment for tuberculosis or sexually transmitted diseases, patients who are starting a new sexual relationship, patients who are sources of occupational exposure, and patients who are thought to be at high risk for infection. The VA made recommendations for Veterans receiving medical care benefits. Here there was a marked difference in policy scope between the disease-monitoring agency and the VA's health services delivery arm.


Outcomes

Both agencies anticipated a reduction in high-risk behaviors in the short term. Patients who tested positive would be linked to treatment. Long-term goals for both agencies were to reduce HIV-related morbidity and mortality. The CDC foresaw further benefit in removing stigma against HIV testing, ensuring a good patient–provider relationship, and achieving cost-effectiveness.


Impact

Both agencies anticipated advancement of public health goals and patient rights.

Research Evidence Application

Both the CDC and the VA used research references to support policy design inferences, but the chosen evidence base differed greatly between these two agencies.

The VA emphasized a need for Veteran-specific research, and, unlike the CDC, cited studies conducted at VA health care facilities and with a primary focus on the U.S. Veteran population. The VA disagreed with public comment that would maintain mandated pre- and posttest counseling because that literature was “drawn from settings outside the VA” (Department of Veterans Affairs 2009).

Whereas the VA was issuing a final rule governing health practices of its own service facilities, the CDC was generating policy recommendations for adoption externally. As such, the CDC required a strong level of transparency to make the guidelines readily adoptable to a variety of U.S. settings.

Consistent with these different policy mandates, we found no overlapping references between HIV testing policy statements, despite similar design components. The VA did not cite the CDC guidelines formally, but it stated its intention to bring procedures in line with current CDC HIV testing recommendations, suggesting that the agency was familiar with the evidence base of the CDC policy and saw no need to reiterate those references.

There were differing quantities of research evidence, with the CDC citing many more references than the VA. It is important to note that this comparison captures only symbolic research references in the final policy statements. The policies may be products of greater instrumental and conceptual research use occurring earlier in the process in order to identify desired evidence-based practices.

We found additional differences between agency policy statements after quantifying and comparing the research-based causal inferences used to link design components. Every link in the CDC logic model was supported with at least one research-based causal inference (Figure 2), but the VA model did not link outcomes to impact. Particular links of the logic model took on greater relative importance according to the agency (Figure 2). In the sections that follow, we provide only a few examples of these causal links.

Link A, Resources to Activities

Both agencies cited minimal evidence linking policy resources to activities and focused more on the limitations of resources to produce successful alternatives, such as risk-based screening.

Link B, Activities to Outputs

The VA emphasized this link with the bulk of its research citations. Both agencies believed that an increase in testing rates would result from the elimination of pretest counseling and prior written informed consent requirements.

Link C, Outputs to Outcomes

The majority of research-based causal inferences made by the CDC linked outputs to outcomes. The agency focused on findings of reduced transmission, improved health outcomes, and reasonable cost-effectiveness, which occurred as a result of early HIV diagnosis and treatment. The VA also offered justifications for this model link, stating that research exists which supports an “excellent record of linkage to care” following positive HIV diagnosis.

Link D, Outcomes to Impact

Only the CDC cited research to support an impact to patient rights, justice, and public health over the longer term.


Discussion

Since 2006, the CDC has recommended routine, one-time HIV testing for all U.S. adolescents and adults ages 13–64, in all health care settings except those with undiagnosed HIV prevalence known to be <0.1 percent. Current guidelines support a broad opt-out testing approach under the patient's general consent for medical care, with annual repeat testing of patients at highest behavioral risk (Branson et al. 2006). These guidelines differ from past CDC recommendations, which focused screening on those with risk behaviors, required signed informed consent, and included pre- and posttest behavioral counseling to reduce risk behaviors (Centers for Disease Control and Prevention 2001). The use of Logic Modeling as a comparative tool for policy analysis provides a systematic and transparent approach to examine design logic and research use in relating the CDC guidelines to requirements in VA health care facilities that began in 2009.

Prior study has indicated that the policy making and research processes are heavily influenced by context (Rütten et al. 2003; Almeida and Báscolo 2006; Contandriopoulos et al. 2010), and that policy making setting may be a factor in research utilization (Weiss 1978; James and Jorgensen 2009). In this study, we developed a conceptual framework to evaluate context, content, and rationale of the policy process simultaneously (Figure 3).

While we found some differences in CDC and VA policy design and rationale, the agencies' HIV testing policies are largely comparable. The strong overlap of CDC and VA HIV testing policy components in this study suggests that VA policy may be generalizable to other settings considering adoption of, or alignment with, CDC guidelines. The CDC has identified “states, local jurisdictions, or agencies” as the regulatory bodies that oversee HIV screening, in settings such as “hospital emergency departments, urgent care clinics, inpatient services, substance abuse treatment clinics, public health clinics, community clinics, correctional health care facilities, and primary care settings” (Branson et al. 2006). These settings all stand to benefit from observation of HIV testing policy implementation at the VA.

Content overlap between agencies' policies permitted comparison of research-based causal inferences. Despite similar policy design choices, however, we found no overlapping references between the CDC and VA policy statements. This may indicate the use of other (non-research-based) information types to make policy design choices. But because it reflects symbolic research use by the agencies, this finding provides additional insights into the tactics, formal interests, and design focus of agency policy makers. We found that the VA cited evidence that was specific to U.S. Veterans in VA health service settings, while the CDC drew from a broader evidence base, reflecting the agencies' differing scopes of practice.

We found that the CDC offered evidence in support of all logic model links, while the VA did not justify its claims of a longer-term impact for the proposed policy change. This finding may represent an evidence gap in VA policy rationale, or the agency may be citing supportive evidence as a formality, having already decided to adopt the CDC guidelines. When used symbolically, research utilization in policy statements may be a tactical move designed to create stakeholder buy-in of an earlier, and authoritative, decision making process.

Depending on the agency, particular links of the logic model took on greater relative importance. The majority of the VA's research-based justifications were given to link planned activities to expected outputs, while the CDC placed the most evidentiary emphasis on linking outputs to outcomes, a finding also related to the agencies' different public health missions. The VA is responsible for operations specific to U.S. Veterans' health and has focused on policy throughputs. The CDC is charged with setting broad policy to influence service providers and the public, and it has placed greater emphasis on policy results. These research emphases differ because symbolic research use can achieve different strategic purposes. Policy makers' purpose will determine the need for supportive evidence that justifies the decision. The VA is running a health care system and must directly allocate resources as well as persuade practitioners, patients, and other stakeholders that the new processes and procedures put into place will achieve desired change. Alternatively, the CDC develops and promotes public health policies with the goal of improving care across the U.S. health care system, and it must communicate the wider benefits of adopting new testing guidelines to an array of health care settings.

Our study connects symbolic research utilization to the federal policy making process through HIV testing policy content and rationale. Others have suggested that different types of evidence are useful at different times in the policy process (Bowen and Zwi 2005), and the use of Logic Modeling in this study provided a content-based window to compare these contexts, as illustrated by our conceptual framework (Figure 3).

There are several limitations to the approach we have taken here. Policy documents used to reconstruct policy logic are windows into decision making, but they do not capture instrumental or conceptual uses of research. However, it may be valuable to judge the policy at face value—that is, how it is presented in its final form to other agencies, states, service organizations, patients, and stakeholders who will interpret and apply the policy. Policy statements that provide minimal insight into the intentions, strategies, and rationale of a policy decision are noteworthy for their lack of logic and research-based rationale.

Policy decision making is frequently criticized for a lack of rationality (Buse, Mays, and Walt 2005). This study does not determine if supportive research evidence exists but was not cited, or if policy logic was based on other types of information. Future studies might adapt our proposed framework so that logic model links represent multiple information types. Indeed, it is this combination of information sources that is critical to the meaning of evidence-based policy (Bowen and Zwi 2005). Future studies may also look explicitly at included and excluded information as it varies across settings and policy designs. Others have advocated this approach, as context-specific evidence is critical to effective policy making (Bowen and Zwi 2005; James and Jorgensen 2009).

A final limitation of the study is the potential for researcher bias in creating and employing the counts used to quantify research-based links. We reduced such bias by defining our recording units in advance (i.e., the model components and links) and by piloting our coding rules on the proposed VA rule to check their stability and reproducibility.


Conclusions

This case study compared the 2009 policy change in HIV testing at the VA with existing screening guidelines from the CDC. Through the use of comparative Logic Modeling as a tool for policy evaluation, we examined the substance and rationale of policy choices and the research emphases of these designs in determining an evidence-based practice.

We found considerable overlap in agency policy logic despite dissimilar use of research evidence. The VA largely replicated the CDC HIV testing guidelines, a result suggesting that future evaluations of the VA's policy adoption could inform comparisons with non-VA health care facilities. In fact, while the VA is unique as one of the world's largest integrated health care systems providing services to U.S. Veterans, it is often overlooked as a relevant source of information for other health care organizations. There are important similarities between the VA and other large health care systems, such as large proportions of older clients in need of chronic disease management, overall size, geographic spread, and level of system integration.

While the CDC and the VA shared policy elements under every component of the logic model, there were no shared research citations used to justify model links. This difference in research use may be attributable to the distinct missions of the agencies and to the particular focus of the VA on determining evidence-based practice from a Veteran-specific context. This conclusion is further supported by the agencies' different emphases on model links. Recently, Logic Modeling was used to evaluate the health policy process in Vietnam across three maternal health case studies (Ha et al. 2010). These research utilization findings suggest that the method may be useful in identifying policy maker interests and intentions in other contexts as well: for example, in comparing policy drafts across legislative bodies, time periods, or settings, such as historical or international comparisons of similar health care services.

The quantification of causal links in Logic Modeling is an effective approach to comparing research use in health policy. As we continue to develop new methods for characterizing this research–policy relationship, advancement of our theoretical knowledge through new tools of policy analysis will be critical to the promotion of effective public health policies.


Joint Acknowledgment/Disclosure Statement: This project was funded by the Ruth Freelander Kotlier Graduate Student Enrichment Fund from Boston University, College of Health and Rehabilitation Sciences: Sargent College. The authors wish to thank Ms. Nicole Germino for her editorial and administrative assistance in preparation of the manuscript.

Disclosures: None.

Disclaimers: None.

Supporting Information

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.


References

  • Almeida C, Báscolo E. Use of Research Results in Policy Decision-Making, Formulation, and Implementation: A Review of the Literature. Cadernos de Saúde Pública, Rio de Janeiro. 2006;22(suppl):S7–33. [PubMed]
  • Bowen S, Zwi AB. Pathways to “Evidence-Informed” Policy and Practice: A Framework for Action. PLoS Medicine. 2005;2:600–05. [PMC free article] [PubMed]
  • Branson BM, Handsfield HH, Lampe MA, Janssen RS, Taylor AW, Lyss SB, Clark JE. Revised Recommendations for HIV Testing of Adults, Adolescents, and Pregnant Women in Health-Care Settings. Morbidity and Mortality Weekly Recommendations and Reports. 2006;55:1–17. [PubMed]
  • Buse K, Mays N, Walt G. Making Health Policy (Understanding Public Health). Maidenhead, UK: Open University Press; 2005.
  • Centers for Disease Control and Prevention. Revised Guidelines for HIV Counseling, Testing, and Referral. Morbidity and Mortality Weekly Recommendations and Reports. 2001;50:1–57. [PubMed]
  • Contandriopoulos D, Lemire M, Denis JL, Tremblay E. Knowledge Exchange Processes in Organizations and Policy Arenas: A Narrative Systematic Review of the Literature. Milbank Quarterly. 2010;88(4):444–83. [PubMed]
  • Department of Veterans Affairs. Elimination of Requirements for Prior Signature Consent and Pre- and Post-Test Counseling for HIV Testing. Proposed Rule. Federal Register. 2008;73(249):79428–30. [PubMed]
  • Department of Veterans Affairs. Elimination of Requirement for Prior Signature Consent and Pre- and Post-Test Counseling for HIV Testing. Final Rule. Federal Register. 2009;74(135):34500–03. [PubMed]
  • Department of Veterans Affairs: Office of Public Health and Environmental Hazards. 2010. “Women Veterans Health Care: Frequently Asked Questions” [accessed on January 1, 2011]. Available at
  • Dobrow MJ, Goel V, Upshur REG. Evidence-Based Health Policy: Context and Utilisation. Social Science and Medicine. 2004;58(1):207–17. [PubMed]
  • Dwyer J. Applying Program Logic Model in Program Planning and Evaluation. Public Health and Epidemiology Report Ontario. 1996;7:38–46.
  • Ha BTT, Green A, Gerein N, Danielsen K. Health Policy Processes in Vietnam: A Comparison of Three Maternal Health Care Studies. Health Policy. 2010;98(2):178–85. [PubMed]
  • Hanney S, Gonzalez-Block M, Buxton M, Kogan M. The Utilisation of Health Research in Policy-Making: Concepts, Examples and Methods of Assessment. Health Research Policy and Systems. 2003;1(1):1–28. [PMC free article] [PubMed]
  • Innovation Network. Logic Model Workbook. Washington, DC: Innovation Network; 2008. [accessed on January 1, 2011]. Available at
  • James TE, Jorgensen PD. Policy Knowledge, Policy Formulation, and Change: Revisiting a Foundational Question. Policy Studies Journal. 2009;37(1):141–62.
  • Julian D. The Utilization of the Logic Model as a System Level Planning and Evaluation Device. Evaluation and Program Planning. 1997;20(3):251–57.
  • Kaplan SA, Garrett KE. The Use of Logic Models by Community-Based Initiatives. Evaluation and Program Planning. 2005;28:167–72.
  • Kellogg Foundation. Logic Model Development Guide: Using Logic Models to Bring Together Planning, Evaluation, and Action. Battle Creek, MI: W.K. Kellogg Foundation; 2004.
  • Kroneman MW, van der Zee J. Health Policy as a Fuzzy Concept: Methodological Problems Encountered When Evaluating Health Policy Reforms in an International Perspective. Health Policy. 1997;40(2):139–55. [PubMed]
  • Macaskill L, Dwyer JJM, Uetrecht C, Dombrow C, Crompton R, Wilck B, Stone J. An Evaluability Assessment to Develop a Restaurant Health Promotion Program in Canada. Health Promotion International. 2000;15(1):57–69.
  • Millar A, Simeone RS, Carnevale JT. Logic Models: A Systems Tool for Performance Management. Evaluation and Program Planning. 2001;24:73–81.
  • Moyer A, Verhovsek H, Wilson VL. Facilitating the Shift to Population-Based Public Health Programs: Innovation through the Use of Framework and Logic Model Tools. Canadian Journal of Public Health. 1997;88(2):95–8. [PubMed]
  • Owens DK, Sundaram V, Lazzeroni LC, Douglass LR, Tempio P, Holodniy M, Sanders GD, Shadle VM, McWhorter VC, Agoncillo T, Haren N, Chavis D, Borowsky LH, Yano EM, Jensen P, Simberkoff MS, Bozzette SA. HIV Testing of at Risk Patients in a Large Integrated Health Care System. Journal of General Internal Medicine. 2007;22(3):315–20. [PMC free article] [PubMed]
  • Pelz DC. Some Expanded Perspectives on Use of Social Science in Public Policy. In: Yinger JM, Cutler SJ, editors. Major Social Issues: a Multidisciplinary View. New York: Free Press; 1978. pp. 347–57.
  • Rich RF. Measuring Knowledge Utilization: Process and Outcomes. Knowledge and Policy. 1997;10:11–24.
  • Rütten A, Lüschen G, von Lengerke T, Abel T, Kannas L, Rodríguez Diaz JA, Vinck J, van der Zee J. Determinants of Health Policy Impact: A Theoretical Framework for Policy Analysis. Sozial- und Präventivmedizin. 2003;48(5):293–300. [PubMed]
  • Trostle J, Bronfman M, Langer A. How Do Researchers Influence Decision-Makers? Case Studies of Mexican Policies. Health Policy and Planning. 1999;14:103–14. [PubMed]
  • Weiss C. The Many Meanings of Research Utilization. Public Administration Review. 1979;39:429–31.
  • Weiss CH. Improving the Linkage between Social Research and Public Policy. In: Laurence EL, editor. Knowledge and Policy: The Uncertain Connection. Washington, DC: National Academy of Sciences; 1978. pp. 23–81.
  • Wholey JS. Evaluability Assessment: Developing Program Theory. In: Bickman L, editor. Using Program Theory in Evaluation, New Directions for Program Evaluation. San Francisco: Jossey-Bass; 1987. pp. 77–92.
