Occup Environ Med. 2007 May; 64(5): 353–358.

How to undertake a systematic review in an occupational setting

Although there are many narrative reviews of occupational health topics, there are few high‐quality systematic reviews, and no single, concise source of advice on how to undertake such reviews in the occupational setting.

A “review” is any attempt to synthesise the results and conclusions of two or more publications on a given topic. A “systematic review” aims to identify and appraise all the literature on a topic, ranking the credibility accorded to evidence depending on the likelihood of bias influencing data collection and interpretation. A meta‐analysis incorporates a specific statistical strategy to combine the results of several studies investigating a particular effect (for example, of an exposure or intervention) into a single estimate.
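To make the statistical idea concrete, the sketch below shows a minimal fixed‐effect, inverse‐variance meta‐analysis in Python; the study estimates and standard errors are invented for illustration, and a real meta‐analysis would also need to consider heterogeneity and random‐effects models.

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis.
# The study estimates below are invented for illustration only.
import math

# (effect estimate, standard error) per study, e.g. log relative risks
studies = [(0.42, 0.21), (0.35, 0.15), (0.51, 0.30)]

# Each study is weighted by the inverse of its variance
weights = [1.0 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled estimate: {pooled:.3f}")
print(f"95% CI: {pooled - 1.96 * pooled_se:.3f} to {pooled + 1.96 * pooled_se:.3f}")
```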

Systematic reviews provide the evidence‐based findings required for writing scientifically supportable practice guidelines, which help occupational health professionals and others to practise in a way that secures the best health outcomes for workers. They also resolve uncertainty regarding the potential benefits or harm of workplace and clinical interventions where there is conflicting research or opinion. Clearly written lay summaries of the main findings of systematic reviews give employers a sound evidence base for robust management policy and decisions, and promote good understanding of risks and safe practice among workers, with the intent of preventing ill health caused by work.

The systematic review process has clearly defined and interdependent phases.

Choose a topic and target group for review

Identify a topic where no systematic review has been or is being undertaken. To avoid duplication, search key databases for published or ongoing systematic reviews and contact appropriate experts. Assess the need for a review based on the practical significance to occupational health practitioners and employers and, most importantly, the potential to benefit a significant number of workers. Define clearly the target users of the review‐derived guidelines.1

Choose your “team”

Systematic evidence reviews frequently rely on the work of volunteers who undertake such commitments in addition to their often busy jobs. Performed properly, systematic reviews are demanding with respect to time and effort. Therefore, it is important to make the expected level of commitment explicit when approaching prospective research working group (RWG) members. Choose members who will really contribute and will do so consistently, especially when the going gets tough, to avoid placing unfair burdens on others. Provide each member with terms of reference to define their main responsibilities and a letter of appointment outlining the aims, projected duration of the project and key deliverables.

The RWG must have the right knowledge of and experience in the topic under review, and relevant members must have, or acquire and develop, the appropriate competencies in systematic review methods (including critical appraisal skills, epidemiology and statistics).2,3,4 Choose members from different settings: occupational physicians and nurses, allied professionals (eg, clinicians from outside the occupational field, ergonomists, occupational hygienists, toxicologists and so on, as appropriate), workers and employers. These consumers are particularly helpful in determining topics for review, as referees during the editorial process2 and in drafting lay summary leaflets.

A large RWG will have a chairperson and a deputy responsible for leading the work. Choose the chairperson carefully, especially when managing a multidisciplinary group, some of whom will understandably have their own agenda. The chairperson should be someone who can command the respect of the group and whose decisions will be accepted as final, when necessary. Where resources permit, appoint a scientific secretary/information officer to search the literature systematically, acquire documents, manage the critical appraisal process, and draft the evidence review and the guidelines derived from the evidence. Appoint separately a meetings secretary responsible for arranging meetings and maintaining concise minutes and action points.

Scope the work

Invite all members to provide scoping questions, themes, project title, project aims and objectives, and proposed deliverables. Scoping the questions to which evidence‐based answers are needed requires a clear discussion of content. The whole RWG should agree on what should and what should not be included, based on the importance to target groups of workers, employers and health and/or safety professionals. The views and perspectives of healthcare consumers often differ from those of healthcare providers and researchers. Their involvement helps ensure that reviews address problems that are important to people.2

Manage conflict of interest

The RWG must be editorially independent from any organisation that contributes funds to the project, and conflicts of interest of RWG members must be recorded.1 Conflict of interest exists when an RWG member (or their institution) has financial or personal relationships that might inappropriately influence their action—for example, employment, consultancies, stock ownership, honoraria or reviewing papers where the member is an author or had taken part in the research contributing to the published paper. All RWG members and reviewers must confirm that they have read and understood an independence policy and disclose all relationships that could be viewed as a potential conflict of interest. Personal gifts or entertainment from companies or external contacts should not be accepted when representing the RWG.

Project manage the research

Project management helps schedule the time needed to complete a review2 and deliver the project on time, to specification and within budget.5 A SMART plan (table 1) can be used to identify the specific, measurable, agreed, realistic and timed objectives of the research project.5 Share the SMART plan with the RWG to gain alignment and commitment. Plan phases carefully, identify each strand of work and set realistic completion dates. Consider RWG members' constraints and resolve any conflict.5

Table 1 Specific, measurable, agreed, realistic and timed (SMART) objectives plan
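Table 1 gives the full plan. As a purely hypothetical sketch, the same SMART structure can also be held as data so that owners and dates are easy to track; the field names and the example entry below are invented:

```python
# Hypothetical sketch of a SMART objectives plan held as structured data.
# Field names mirror the SMART headings; the entry is an invented example.
from dataclasses import dataclass
from datetime import date

@dataclass
class SmartObjective:
    specific: str      # what exactly will be done
    measurable: str    # how completion will be judged
    agreed: str        # who has signed up to deliver it
    realistic: str     # why it is achievable with current resources
    timed: date        # completion date

plan = [
    SmartObjective(
        specific="Complete literature search in two databases",
        measurable="Search log and deduplicated reference list produced",
        agreed="Information officer",
        realistic="Search strategy already approved by the RWG",
        timed=date(2007, 6, 30),
    ),
]

for obj in plan:
    print(f"{obj.timed}: {obj.specific} (owner: {obj.agreed})")
```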

Design the evidence review

Scope and plan the research appropriately, ensuring that the aims, design and methodology are justifiable, verifiable and scientifically valid.4 To conduct research to a lower standard may constitute misconduct.6 Specify the overall objectives of the systematic review, the scoping questions, the workers to whom the associated guidelines apply, the search strategy and the criteria for selecting the evidence.1 Supervision and checking are an integral part of the process.3 Send the protocol to external reviewers or an advisory committee for approval. The experts should also peer review the draft evidence report before publication.1,7

The method used for systematic reviews may be separated into three stages: ask, access and appraise. Decide the scoping questions to ask; access relevant studies through computerised search engines, using appropriate key words and free text; and appraise the relevant full papers to determine their robustness and reliability.

Search the literature

Perform a comprehensive search to discover as many studies as possible and minimise selection bias. Using just one search engine is not considered adequate, since studies show that only 30%–80% of all known published randomised, controlled trials (RCTs) are identifiable using Medline.8 Since the overlap in journals covered by Medline and Embase is estimated to be 34%,8 use at least two search engines to ensure more comprehensive results. The search for papers published in peer‐reviewed journals may include personal bibliographies, internet searches, citation tracking and scanning of relevant journals. Narrative reviews, conference proceedings and textbooks, although excluded from contributing to the evidence, should be perused, since they often contain references to original studies in peer‐reviewed journals that can contribute to the evidence base.9 Ensure that the literature search is performed as defined in the scoping document and that studies are selected according to agreed inclusion and exclusion criteria. It may also be helpful to contact relevant research centres and experts in the field under review; contacts can provide important information on ongoing research and may be useful for involvement in peer review of the systematic review.9
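Part of such a search can be scripted. The sketch below queries PubMed through the public NCBI E-utilities interface; the query string is a placeholder, and a second engine such as Embase has its own interface, so this covers only one arm of the required two-engine search.

```python
# Minimal sketch: query PubMed via the NCBI E-utilities API.
# A real search would repeat this against a second engine (e.g. Embase)
# and combine the results. The query below is a placeholder.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search(term: str, retmax: int = 100) -> list[str]:
    """Return PubMed IDs matching the search term."""
    resp = requests.get(
        ESEARCH,
        params={"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# Key words and free text as defined in the scoping document
ids = pubmed_search('("occupational asthma") AND ("systematic review" OR guideline)')
print(f"{len(ids)} records found")
```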

Assess the quality of studies

The first task is to select the papers that potentially address the scoping questions. This is achieved by a double‐blind review of the abstracts of all papers identified by the search. If the abstract leaves doubt, so that the article cannot definitely be rejected, the full text must be obtained. Retrieve these papers in full for independent critical appraisal by at least two reviewers, who must ultimately agree on a rating of the strength of the evidence. Where reviewers disagree about the score of a paper or its relevance to the scope of the research, they should discuss it to reach resolution; involve a third reviewer if resolution is not achieved. It is important to critically appraise all studies in a review even if there is no variability in validity or results, since although the results may be consistent, all the studies may be flawed. Check for multiple publications based on the same data and cite only the most definitive results.7
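Agreement between the two screeners can be quantified before adjudication. A minimal sketch of Cohen's kappa on invented include/reject decisions:

```python
# Sketch: Cohen's kappa for agreement between two abstract screeners.
# Decisions are invented; 1 = obtain full text, 0 = reject.
reviewer_a = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
reviewer_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]

n = len(reviewer_a)
observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

# Chance agreement from each reviewer's marginal inclusion rate
p_a, p_b = sum(reviewer_a) / n, sum(reviewer_b) / n
expected = p_a * p_b + (1 - p_a) * (1 - p_b)

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement {observed:.2f}, kappa {kappa:.2f}")
# Disagreements (here items 2 and 8) go to discussion or a third reviewer.
```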

Many scales and checklists have been used to assess the validity and quality of studies, but there is no gold standard method. Scales with multiple items and complex scoring systems take more time to complete than simple approaches and they have not been shown to provide more reliable assessments of validity. It is preferable to use simple approaches for assessing validity that can be fully reported.10 Assess each study for clinical importance, applicability, validity, study design, selection bias, confounding, blinding, data collection and classification of outcomes, measurement error, follow‐up, attrition bias and analysis.11
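A simple, fully reportable approach can be as plain as one record per study, with a fixed field per domain listed above. The record type and entries below are hypothetical, not a prescribed instrument:

```python
# Hypothetical sketch of a simple, fully reportable appraisal record.
from dataclasses import dataclass, asdict

@dataclass
class Appraisal:
    study_id: str
    clinical_importance: str   # magnitude and relevance of the effect
    applicability: str         # external validity / generalisability
    study_design: str          # e.g. RCT, cohort, case-referent
    selection_bias: str
    confounding: str
    blinding: str
    outcome_measurement: str   # data collection and measurement error
    follow_up_and_attrition: str
    analysis: str
    overall_rating: str        # agreed grade from the two reviewers

record = Appraisal(
    study_id="Smith 2005",     # invented example
    clinical_importance="moderate effect, directly relevant outcome",
    applicability="comparable workforce and exposure levels",
    study_design="prospective cohort",
    selection_bias="low: near-complete workforce enrolment",
    confounding="smoking adjusted; age and sex matched",
    blinding="outcome assessors blinded to exposure",
    outcome_measurement="validated questionnaire",
    follow_up_and_attrition="12% loss to follow-up",
    analysis="multivariate adjustment reported",
    overall_rating="+",
)
print(asdict(record))
```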

Clinical importance and applicability are determined by the magnitude of the estimate of effect and relevance of the outcomes measured.11 Applicability (external validity or generalisability) is the extent to which results of studies provide a correct basis for generalisation to other circumstances. Validity (internal validity) is the extent to which the study design (judged by hierarchy of evidence), conduct and analysis are likely to prevent systematic errors or bias. It implies that the differences observed between comparison groups may, apart from random error, be attributed to the intervention or exposure under investigation.

Randomisation in experimental studies—that is, RCTs—minimises differences between groups by allocating subjects with particular characteristics randomly to exposed and non‐exposed groups. However, observational (analytical) studies—that is, case–referent, cohort and cross‐sectional studies, and case series—are more common than RCTs in occupational health. Four sources of bias should be assessed when appraising the validity of observational studies: selection, performance, attrition and detection biases (table 2).10

Table 2 Sources of systematic bias

Although RCTs and observational case–referent and cohort studies are similar in that they compare outcomes in groups, observational studies do not allocate individuals by chance12 and hence can cause selection bias wherein the subjects being studied are not truly representative of the target population. Observational studies only control or adjust for confounders that are known and measured, whereas large RCTs additionally control for those that are unknown and unmeasured.10 Reviewers must determine the important confounders and how effectively these were controlled.

To assess performance bias in observational studies, one must check that there were no differences between the comparison groups in their exposure to other factors, confounders and modifying effects12 that could affect outcomes, and no differences in the way exposures and health effects were measured or identified. One must also determine whether exposure was measured in a similar and unbiased way in the groups being compared. Measurements of exposure or health effect are subject to measurement error due to variability in measurements of the same quantity on the same individual.13 Several measurements of the same health effect in the same subject, or of the same occupational exposures, may not give the same results from one day to another, in different hands, with different equipment or at different centres. This may be because of natural variation in the subject and observer, variation in the measurement or identification process, or both.
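A small simulation illustrates why this matters: random error in an exposure measurement attenuates the observed exposure–outcome association towards the null. All parameters below are invented.

```python
# Sketch: random measurement error in an exposure attenuates the
# observed exposure-outcome association. Parameters are invented.
import random
import statistics

random.seed(1)
n = 10_000
true_exposure = [random.gauss(0, 1) for _ in range(n)]
outcome = [x + random.gauss(0, 1) for x in true_exposure]    # true slope = 1
measured = [x + random.gauss(0, 1) for x in true_exposure]   # noisy measurement

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

print(f"Slope on true exposure:     {slope(true_exposure, outcome):.2f}")  # ~1.0
print(f"Slope on measured exposure: {slope(measured, outcome):.2f}")       # ~0.5
```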

Concerns about attrition bias are common to all studies and relate to the extent that all subjects in a study are accounted for in the results. Concerns about detection bias relate to the effectiveness of blinding in cohort studies and the case definition that is used in case–referent studies, since people are entered into studies based on a knowledge of the outcome of interest.

Summarising key findings from important studies

The reviewers should not only grade each paper for quality, but should also summarise the key findings in evidence tables, which become the working data for writing evidence‐based recommendations. The reviewers must not simply cut and paste abstracts from Medline as the only source of data in evidence tables, since abstracts often omit important findings or key details from a study. Pasting an abstract is quick, whereas summarising the full text takes longer; ensure that you receive what you need.
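As a hypothetical sketch, an evidence table can be kept as a simple delimited file; the column set and the example row below are assumptions for illustration, not a prescribed format:

```python
# Hypothetical sketch of an evidence table row written as CSV.
import csv
import sys

COLUMNS = ["reference", "design", "population", "exposure_or_intervention",
           "outcome", "key_findings", "quality_grade"]

rows = [{
    "reference": "Smith 2005",                     # invented example
    "design": "prospective cohort",
    "population": "1,200 bakery workers",
    "exposure_or_intervention": "flour dust",
    "outcome": "occupational asthma",
    "key_findings": "summary written from the full text, not a pasted abstract",
    "quality_grade": "+",
}]

writer = csv.DictWriter(sys.stdout, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
```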

Score the evidence and recommendations

Avoid adhering rigidly to a hierarchy of evidence with the RCT at the pinnacle (see box), since this concept neglects methodological appropriateness.14 Adopt a balanced approach, recognising the strengths and limitations of well‐conducted observational studies.15 Hierarchy may be unhelpful when answers are needed to questions that, for scientific or practical reasons, cannot be studied by RCTs16 and when appraising the evidence for community health interventions.14 Other study designs and hierarchies are more appropriate to answer questions about aetiology, pathogenesis, disease frequency, diagnosis, prognosis and adverse effects,17,18 the areas most likely to be of interest in occupational health. Observational studies have proved essential in identifying unexpected and unpredictable adverse effects—for example, of smoking in lung cancer, and of asbestos in mesothelioma.18

Hierarchy of evidence14

  1. Systematic reviews and meta‐analyses
  2. Randomised, controlled trials (RCTs) with definitive results
  3. RCTs with non‐definitive results
  4. Cohort studies
  5. Case–referent studies
  6. Cross‐sectional surveys
  7. Case reports

Observational studies can be grouped hierarchically, with prospective cohort studies being more valid than retrospective studies using historical controls.7 Case–referent studies are more susceptible to bias and must be graded according to how well referents have been matched for confounding variables.

Since definitions of levels vary between hierarchies,17 avoid using numerical level alone to grade evidence. Hierarchies can produce anomalous rankings—for example, a statement about one intervention may be graded level 1 on the basis of a systematic review of a few small, poor‐quality RCTs, whereas a statement about another intervention may be graded level 2 on the basis of one large, well‐conducted, multicentre RCT.17 Use hierarchies that help the target reader understand the strength of evidence readily as well as those required as a matter of academic excellence.
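One way to avoid reporting a bare number is to carry the hierarchy and a quality note alongside the level. A hypothetical sketch, using the anomaly described above:

```python
# Hypothetical sketch: an evidence grade that carries its hierarchy and a
# quality note, so a bare level number is never reported on its own.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceGrade:
    level: int          # numerical level within the named hierarchy
    hierarchy: str      # which hierarchy the level comes from
    quality_note: str   # why the level may over- or understate the evidence

    def __str__(self):
        return f"Level {self.level} ({self.hierarchy}): {self.quality_note}"

# The anomaly above: level alone would rank these the wrong way round
a = EvidenceGrade(1, "ref 14", "systematic review of a few small, poor-quality RCTs")
b = EvidenceGrade(2, "ref 14", "one large, well-conducted multicentre RCT")
print(a)
print(b)
```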

Write the systematic review

Start by outlining the purpose and scope of the review. Describe the method including the search strategy—that is, inclusion and exclusion criteria, search words and phrases, search engines used, dates of the literature covered, the scoping questions asked, the occupational groups at risk, the number of abstracts discovered and papers reviewed, and so on. Focus on consistency and conciseness. If there are several authors, iron out style differences. Use print preview on the computer screen to check whether paragraphs are balanced. Never use a long word where a short one will do, and eliminate words that do not add to the meaning. Use active rather than passive verbs and avoid overusing noun forms of verbs. Omit redundant pairs—for example, “finish” implies “complete”, so the phrase “completely finish” is redundant.

Ensure that RWG members insert comments and track changes so that they are available for all to review. Use reference management software to simplify reordering and renumbering. Allow adequate time between writing and proofing to achieve distance from your work and have others from relevant stakeholder groups proof‐read the text.

Write recommendations

Practitioners may not follow guidelines;19 therefore, write recommendations based on the evidence in the review in a way that makes them more likely to be followed, providing solid take‐home messages. Make recommendations specific, unambiguous and easily identifiable, and make an explicit link between the recommendations and the supporting evidence.1 The more precisely behaviours are specified, the more likely they are to be performed.20 Use precise behavioural terms, specifying who, what, why, when, where and how, to assist implementation.20 In a study of 10 national clinical guidelines, general practitioners followed guidelines on 67% of occasions when they were concrete and precise, and on 36% of occasions when they were vague and non‐specific.21 Recommendations were also adhered to more when the evidence was described clearly, in a straightforward and non‐conflicting manner.

Despite the high‐quality evidence underpinning the first clinical guideline from the National Institute for Health and Clinical Excellence, recommendations are not behaviourally specific and in the short form exceed 20 pages.20 For example, one recommendation states, “Cognitive behavioural therapy should be available as a treatment option for people with schizophrenia.”

To improve that specification of behaviour, Michie and Johnston recommend a more specific version:20

  • What: Offer cognitive behavioural therapy to all patients with schizophrenia
  • Who: Trust board and health professional responsible for offering treatment options

Include good practice points, based on the clinical experience of the RWG, a legal requirement or other consensus, where there is no research evidence and none is likely.

Publish the findings

In addition to the evidence review report and summaries of the evidence, publish in a peer‐reviewed journal so that the findings can be assessed by peers.3,4 Publish in a timely manner and present the results at scientific meetings,3 aiming for the results to appear in journals before they are reported in other media.6 Endeavour to ensure that the findings are reported in a balanced and understandable manner when presenting to the non‐medical press.4

Systematic reviews are sometimes criticised for not providing specific guidance, concluding that little evidence exists to answer the question,22 and for missing the opportunity to identify outstanding gaps in the evidence.23 Uncertainty often remains, and systematic reviews should acknowledge this, mapping the areas of doubt.14 Use the uncertainty to help clarify the options available to clinicians, workers and managers, and to stimulate more and better research to help resolve the uncertainty.24

Attribute contributions properly

Decide who will be credited with authorship early in the planning stages.3 This helps to avoid disputes over attribution of credit for the published review and for papers in peer‐reviewed journals. Authorship should be credited to those who made substantive intellectual contributions to the published study25—that is, (1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; (2) drafting the article or revising it critically for important intellectual content; and (3) final approval of the version to be published.26

Contributors who do not meet the criteria for authorship (eg, a person who provided purely technical help, writing assistance or only general support) should be listed in the acknowledgments section and financial and material support should be acknowledged.3 Because readers may infer their endorsement of the data and conclusions, those cited must give written permission to be acknowledged.

Provide summaries of evidence

There is moderate‐quality evidence that involving consumers in the development of patient information material results in material that is more relevant, readable and understandable to patients, without affecting their anxiety. This “consumer‐informed” material can also improve patients' knowledge.26 Write short executive summaries of the evidence involving the key target audiences. Start with the questions that each audience is likely to ask and summarise the evidence‐based answer for each question. Technical editing should be considered to ensure that the evidence statements from the report are properly reflected in the summaries.

Maintain records

Maintain complete and accurate records and retain them safely in a manner that permits a complete retrospective audit.3,4 Long‐term retention—for example, up to 15 years—of all records and primary outputs is recommended.6 Back up the data regularly, making duplicate copies on a disk and a hard copy of particularly important data,3 archiving older versions. Publication of the report does not negate the need to retain source data3 and notes of RWG meetings recording key decisions.
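A minimal sketch of making a timestamped duplicate copy; the paths are placeholders for wherever the RWG keeps its records:

```python
# Minimal sketch: a timestamped duplicate copy of the project records.
# The paths are placeholders for wherever the RWG keeps its files.
import os
import shutil
from datetime import datetime

SOURCE = "review_project"  # folder holding data, minutes and drafts (assumed)
os.makedirs("backups", exist_ok=True)

stamp = datetime.now().strftime("%Y%m%d_%H%M")
archive = shutil.make_archive(f"backups/review_project_{stamp}", "zip", SOURCE)
print(f"Archived to {archive}")
```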

Key points

  • More high‐quality systematic reviews need to be undertaken in the occupational setting to generate more evidence‐based and practical guidelines.
  • It is important to include employers and workers as healthcare consumers in the process of evidence reviews, since they provide valuable additional insight into the major issues that affect business and people.
  • Literature searches must be comprehensive in order to avoid bias and must include the use of at least two search engines as well as manual searching.
  • Observational studies are particularly important in contributing to evidence in the occupational health setting, more so than randomised, controlled trials, which for practical reasons have not been and cannot be performed to address some of the more important questions of aetiology, diagnosis and prognosis.
  • Describing the required behaviours clearly and precisely helps ensure that practitioners implement guidelines.
  • Publish lay executive summaries for employers to provide them with a sound evidence base for robust management policy and decisions, and for workers to promote good understanding of risks and safe practice.

Seek evaluation and feedback

As with any product, it is important to be in touch with the specific consumers of each product—that is, the full systematic review and each of the associated guidelines for health professionals, employers, workers and their representatives—to determine whether it meets their needs and offers real practical assistance. Systematic reviews should be reviewed at least every five years, not only in light of new evidence but also to attend to consumer feedback as part of a programme of continual improvement. Questions that are useful to ask include

  • Do the guidelines address the most important questions?
  • Are the evidence statements useful in informing and influencing practice?
  • Are the recommendations practicable?
  • How do you rate the overall value of the guidelines?
  • To which other questions would you like answers?

When using numerical scoring, it is useful to score between 1 and 6 rather than between 1 and 5, to avoid people choosing the easy option of the middle number rather than making a judgement.

What needs to be done in the future

We all rely on our colleagues volunteering to engage in undertaking systematic reviews to help us ensure that we practise in a way that is based on, though not dictated by, evidence. Few occupational health practitioners participate in systematic reviews. What can we do to engage, energise and enable more of our colleagues to add to our state of knowledge? Firstly, we must promote existing high‐quality practical reviews widely to demonstrate their value to key stakeholder groups, particularly employers and the government. This will help to win support, be it financial or other resources, for future projects. It is difficult to get employers to release talent in today's competitive world unless there is a clear, tangible benefit beyond corporate social responsibility. Each one of us can be an ambassador for evidence‐based guidelines within our own spheres of influence, promoting the tangible benefits. We need better access to training in critical appraisal skills, ideally through accredited schemes.

Although evidence relating to the benefits and risks of healthcare interventions is essential for guiding practice, what is the true and wider impact of evidence‐based reviews? It would be beneficial to evaluate the impact of such reviews, including any cost benefit of implementing evidence‐based guidelines.

How can the process be made easier? An effective search requires the use of at least two search engines. Although this ensures that as much published work as possible is discovered, there is considerable overlap, which generates work in identifying and removing duplicate records. A one‐stop shop, where one search engine could provide all the relevant published studies, would simplify and accelerate one important part of the process.
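Until such a one-stop shop exists, duplicate removal can be partly automated. A sketch that merges two result sets, matching on DOI where present and otherwise on a normalised title; the record fields are assumptions for illustration:

```python
# Sketch: merge results from two search engines and drop duplicates,
# matching on DOI where present, else on a normalised title.
import re

def key(record: dict) -> str:
    if record.get("doi"):
        return record["doi"].lower()
    # Normalise title: lower-case, strip punctuation and surrounding spaces
    return re.sub(r"[^a-z0-9 ]", "", record["title"].lower()).strip()

def merge(*result_sets: list[dict]) -> list[dict]:
    seen, merged = set(), []
    for results in result_sets:
        for record in results:
            k = key(record)
            if k not in seen:
                seen.add(k)
                merged.append(record)
    return merged

medline = [{"title": "Flour dust and asthma.", "doi": "10.1000/xyz"}]
embase = [{"title": "Flour Dust and Asthma", "doi": "10.1000/XYZ"}]
print(len(merge(medline, embase)))  # 1: the overlapping record is removed
```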

We need to conclude the debate regarding hierarchy of evidence. RCTs cannot be performed for many occupational health interventions. This means that there is very little level 1 evidence and therefore few grade A recommendations. Cohort, case–referent and cross‐sectional studies are natural experiments in which outcomes are measured in the real world.27 These are often more appropriate in the occupational setting, but they are scored as level 2 evidence and grade B or C recommendations. It would be helpful if current hierarchies of evidence were applied as they were intended—that is, to answer questions about the effectiveness of different treatments, and if consensus was reached for a different approach in domains such as public health and occupational health.

As more systematic reviews appear, and as the limitation of RCTs in an occupational setting is managed, practitioners both inside and outside the occupational field will become better informed and their practice will be more likely to provide better outcomes for the workers for whom they provide care.

Questions (see answers on p305)

Which statements are true and which are false?

  1. Systematic reviews:
    1. aim to identify and appraise all the relevant literature on a topic
    2. rank the credibility of evidence based on the likelihood of bias affecting data collection and interpretation
    3. must include a meta‐analysis
    4. may be performed adequately using studies obtained from a literature search using one search engine
    5. help to inform policy and decision‐making about the organisation and delivery of health services
  2. When assessing the quality of studies:
    1. it is better that one person is allocated to the task
    2. you should use a validated gold standard checklist
    3. internal validity refers to the extent to which the study design, conduct and analysis prevents error and bias
    4. applicability refers to the extent to which the results may be generalised to routine practice
    5. performance bias refers to how well confounders were controlled
  3. The hierarchy of evidence approach:
    1. considers the appropriateness of the study design in relation to the nature of the problem being investigated
    2. presents issues when appraising the evidence for public health and occupational health interventions
    3. ranks cohort studies above case‐referent and cross‐sectional studies
    4. can result in inconsistent rankings because of inconsistencies between hierarchies and reliance on a rigid pecking order
    5. with the randomised controlled trial having the highest rank, is well suited to examining the effects of pharmacological treatment
  4. The written systematic review and the associated evidence‐based guidelines:
    1. should define the methodology to the extent of stating which search engines were used, the key words and phrases used in the search, any language restrictions and the dates searched
    2. may indicate the strength of evidence sufficiently by simply using a numerical level
    3. should make a clear link between recommendations and the supporting evidence
    4. are more likely to be implemented by practitioners when the recommendations are more precise
    5. do not benefit from the involvement of patient or other stakeholders in the process
  5. Cohort studies:
    1. are used to compare outcomes in groups that received a specific intervention with groups that received different interventions or no intervention
    2. allocate individuals to groups by chance
    3. can be conducted retrospectively or prospectively
    4. may require sophisticated multivariate techniques to adjust for confounding when analysing results
    5. and case‐referent studies are controlled observational studies, whereas cross‐sectional studies have no control group

Acknowledgement

I thank Mr Brian Kazer, Chief Executive, BOHRF, for his comments and for the SMART plan.

Footnotes

Competing interests: None declared.

References

1. The AGREE Collaboration. Appraisal of Guidelines for Research and Evaluation (AGREE) instrument. London: 2001. http://www.agreecollaboration.org (accessed 22 Feb 2007). This instrument provides a validated, internationally agreed framework for assessing the quality of clinical practice guideline development in six domains: scope and purpose, stakeholder involvement, rigour of development, clarity and presentation, applicability and editorial independence.
2. Green S, Higgins J, eds. Logistics of doing a review. Cochrane handbook for systematic reviews of interventions 4.2.6 (updated September 2006), section 2.3. http://www.cochrane.org/resources/handbook/Handbook4.2.6Sep2006.pdf (accessed 5 March 2007). This resource provides practical guidance on the review team, consumer involvement, advisory groups, resources, training and seeking funding.
3. Medical Research Council. Good research practice. London: MRC, 2000 (updated September 2005). http://www.mrc.ac.uk/utilities/Documentrecord/index.htm?d=MRC002415 (accessed 5 March 2007).
4. General Medical Council. Research: the role and responsibilities of doctors. London: GMC, 2002. http://www.gmc‐uk.org/guidance/current/library/research.asp (accessed 6 October 2006).
5. Lyratzopoulos G, Allen D. How to manage your research and audit work. BMJ Career Focus 2004;328:196–197.
6. Committee on Publication Ethics. Guidelines on good publication practice. The COPE Report 2003. London: BMJ Books, 2004:69–73.
7. NHS Centre for Reviews and Dissemination. Undertaking systematic reviews of research on effectiveness: CRD guidelines for those carrying out or commissioning reviews. CRD report number 4, 2nd edn. York: University of York, 2001. http://www.york.ac.uk/inst/crd/report4.htm (accessed 22 Feb 2007). This publication provides comprehensive and practical guidance about various aspects of systematic reviews on effectiveness in light of the current understanding of methodology.
8. Green S, Higgins J, eds. Locating and selecting studies for review. Cochrane handbook for systematic reviews of interventions 4.2.6 (updated September 2006), section 5. http://www.cochrane.org/resources/handbook/Handbook4.2.6Sep2006.pdf (accessed 5 March 2007).
9. NHS Centre for Reviews and Dissemination. Finding studies for systematic reviews: a checklist for researchers. York: University of York, 2005. http://www.york.ac.uk/inst/crd/revs.htm (accessed 22 Feb 2007).
10. Green S, Higgins J, eds. Assessment of study quality. Cochrane handbook for systematic reviews of interventions 4.2.6 (updated September 2006), section 5. http://www.cochrane.org/resources/handbook/Handbook4.2.6Sep2006.pdf (accessed 5 March 2007). This resource describes the assessment of studies, albeit mainly RCTs, for applicability of findings, validity of individual studies and design characteristics that affect interpretation of results.
11. Rychetnik L, Frommer M, Hawe P, et al. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health 2002;56:119–127.
12. Armstrong BG. Effect of measurement error on epidemiological studies of environmental and occupational exposures. Occup Environ Med 1998;55:651–656.
13. Bland JM, Altman DG. Measurement error. BMJ 1996;313:744–753.
14. Petticrew M, Roberts H. Evidence, hierarchies and typologies: horses for courses. J Epidemiol Community Health 2003;57:527–529. This paper provides coherent arguments in favour of methodological appropriateness, and of typologies rather than hierarchies of evidence in organising and appraising public health evidence.
15. Concato J. Observational versus experimental studies: what's the evidence for a hierarchy? NeuroRx 2004;1:341–347.
16. Ogilvie D, Egan M, Hamilton V, et al. Systematic reviews of health effects of social interventions: 2. Best available evidence: how low should you go? J Epidemiol Community Health 2005;59:886–892.
17. Glasziou P, Vandenbroucke JP, Chalmers I. Assessing the quality of research. BMJ 2004;328:39–41. This paper presents an excellent argument for broadening the scope by which evidence is assessed, justifying hierarchies other than those that have the RCT at the apex, when answers are needed to questions on aetiology, disease frequency, diagnosis and prognosis.
18. Vandenbroucke JP. When are observational studies as credible as randomised trials? Lancet 2004;363:1728–1731. An excellent paper which elucidates under what circumstances the evidence from observational research is as good as that from RCTs.
19. Sheldon TA, Cullum N, Dawson D, et al. What's the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients' notes, and interviews. BMJ 2004;329:999–1004.
20. Michie S, Johnston M. Changing clinical behaviour by making guidelines specific. BMJ 2004;328:343–345. A useful paper that describes how the wording of a behavioural instruction affects the likelihood that it will be followed, by influencing comprehension, recall, planning and behaviour.
21. Grol R, Dalhuijsen J, Thomas S, et al. Attributes of clinical guidelines that influence use of guidelines in general practice: observational study. BMJ 1998;317:858–861.
22. Petticrew M. Why certain systematic reviews reach uncertain conclusions. BMJ 2003;326:756–758.
23. Brown P, Brunnhuber K, Chalkidou K, et al. How to formulate research recommendations. BMJ 2006;333:804–806.
24. Alderson P, Roberts I. Should journals publish systematic reviews that find no evidence to guide practice? Examples from injury research. BMJ 2000;320:376–377.
25. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals. Med Educ 1999;33:66–78.
26. Nilsen ES, Myrhaug HT, Johansen M, et al. Methods of consumer involvement in developing healthcare policy and research, clinical practice guidelines and patient information material. Cochrane Database Syst Rev 2006;(3):CD004563.
27. Rochon P, Gurwitz JH, Sykora K, et al. Reader's guide to critical appraisal of cohort studies: 1. Role and design. BMJ 2005;330:895–897.
