Although there are many narrative reviews of many occupational health topics, there are few high‐quality systematic reviews, and no single and concise source of advice on how to undertake such reviews in the occupational setting.
A “review” is any attempt to synthesise the results and conclusions of two or more publications on a given topic. A “systematic review” aims to identify and appraise all the literature on a topic, ranking the credibility accorded to evidence depending on the likelihood of bias influencing data collection and interpretation. A meta‐analysis incorporates a specific statistical strategy to combine the results of several studies investigating a particular effect—for example, of an exposure or intervention—into a single estimate.
Systematic reviews provide the evidence‐based findings required for writing scientifically supportable practice guidelines that help to ensure that occupational health professionals and others practise in such a way as to ensure that workers have the best health outcomes. They also resolve uncertainty regarding the potential benefits or harm of workplace and clinical interventions, where there is conflicting research or opinion. Clearly written lay summaries of the main findings of systematic reviews provide employers with a sound evidence base for robust management policy and decisions, and promote good understanding of risks and safe practice among workers, with the intent of preventing ill health caused by work.
The systematic review process has clearly defined and interdependent phases.
Identify a topic where no systematic review has been or is being undertaken. To avoid duplication, search key databases for published or ongoing systematic reviews and contact appropriate experts. Assess the need for a review based on the practical significance to occupational health practitioners and employers and, most importantly, the potential to benefit a significant number of workers. Define clearly the target users of the review‐derived guidelines.1
Systematic evidence reviews frequently rely on the work of volunteers who undertake such commitments in addition to their often busy jobs. Performed properly, systematic reviews are demanding with respect to time and effort. Therefore, it is important to make the expected level of commitment explicit when approaching prospective research working group (RWG) members. Choose members who will really contribute and will do so consistently, especially when the going gets tough, to avoid placing unfair burdens on others. Provide each member with terms of reference to define their main responsibilities and a letter of appointment outlining the aims, projected duration of the project and key deliverables.
The RWG must have the right knowledge of and experience in the topic under review and relevant members must have and/or acquire and develop the appropriate competencies in systematic review methods (including critical appraisal skills, epidemiology and statistics).2,3,4 Choose members from different settings: occupational physicians and nurses, allied professionals (eg, clinicians from outside the occupational field, ergonomists, occupational hygienists, toxicologists and so on, as appropriate), workers and employers. These consumers are particularly helpful in determining topics for review, as referees during the editorial process2 and in drafting lay summary leaflets.
A large RWG will have a chairperson and a deputy responsible for leading the work. Choose the chairperson carefully, especially when managing a multidisciplinary group, some of whom will understandably have their own agenda. The chairperson should be someone who can command the respect of the group and whose decisions will be accepted as final, when necessary. Where resources permit, appoint a scientific secretary/information officer to search the literature systematically, acquire documents, manage the critical appraisal process, and draft the evidence review and the guidelines derived from the evidence. Appoint separately a meetings secretary responsible for arranging meetings and maintaining concise minutes and action points.
Invite all members to provide scoping questions, themes, project title, project aims and objectives, and proposed deliverables. Scoping the questions to which evidence‐based answers are needed requires a clear discussion of content. The whole RWG should agree on what should and what should not be included, based on the importance to target groups of workers, employers and health and/or safety professionals. The views and perspectives of healthcare consumers often differ from those of healthcare providers and researchers. Their involvement helps ensure that reviews address problems that are important to people.2
The RWG must be editorially independent from any organisation that contributes funds to the project, and conflicts of interest of RWG members must be recorded.1 Conflict of interest exists when an RWG member (or their institution) has financial or personal relationships that might inappropriately influence their action—for example, employment, consultancies, stock ownership, honoraria or reviewing papers where the member is an author or had taken part in the research contributing to the published paper. All RWG members and reviewers must confirm that they have read and understood an independence policy and disclose all relationships that could be viewed as a potential conflict of interest. Personal gifts or entertainment from companies or external contacts should not be accepted when representing the RWG.
Project management helps schedule the time needed to complete a review2 and deliver the project on time, to specification and within budget.5 A SMART plan (table 1) can be used to identify the specific, measurable, agreed, realistic and timed objectives of the research project.5 Share the SMART plan with the RWG to gain alignment and commitment. Plan phases carefully, identify each strand of work and set realistic completion dates. Consider RWG members' constraints and resolve any conflict.5
Scope and plan the research appropriately, ensuring that the aims, design and methodology are justifiable, verifiable and scientifically valid.4 To conduct research to a lower standard may constitute misconduct.6 Specify the overall objectives of the systematic review, the scoping questions, the workers to whom the associated guidelines apply, the search strategy and the criteria for selecting the evidence.1 Supervision and checking are an integral part of the process.3 Send the protocol to external reviewers or an advisory committee for approval. The experts should also peer review the draft evidence report before publication.1,7
The method used for systematic reviews may be separated into three stages: ask, access and appraise. Decide the scoping questions to ask; access relevant studies in computerised search engines, using appropriate key words and free text; and appraise the relevance from relevant full papers to determine robustness and reliability.
Perform a comprehensive search to discover as many studies as possible and minimise selection bias. Using just one search engine is not considered adequate, since studies show that only 30%–80% of all known published randomised, controlled trials (RCTs) are identifiable using Medline.8 Since the overlap in journals covered by Medline and Embase is estimated to be 34%,8 use at least two search engines to ensure more comprehensive results. The search for papers published in peer‐reviewed journals may include personal bibliographies, internet searches, citation tracking and scanning of relevant journals. Narrative reviews, conference proceedings and textbooks, although excluded from contributing to the evidence, should still be perused during the review process, since they may identify references to original studies in peer‐reviewed journals that can contribute to the evidence base.9 Ensure that the literature search is performed as defined in the scoping document and that studies are selected according to agreed inclusion and exclusion criteria. It may also be helpful to contact relevant research centres and experts in the field of the review in question. Contacts can provide important information on ongoing research and may be useful for involvement in peer review of the systematic review.9
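The de‐duplication that follows a multi‐database search can be sketched in code. This is purely illustrative: the record fields (doi, title) and the matching rule are assumptions for the sketch, not part of any published search method.

```python
# Illustrative sketch: merge results from two bibliographic databases
# (eg, Medline and Embase exports) and keep one copy of each study.
# Field names and the matching heuristic are assumptions.

def normalise(title: str) -> str:
    """Lower-case a title and strip punctuation/whitespace for matching."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def merge_search_results(medline: list, embase: list) -> list:
    """Combine two result sets, removing duplicate records.

    Prefer a DOI match where a DOI is present; otherwise fall back to a
    normalised-title match (a crude proxy for manual de-duplication).
    """
    seen_dois = set()
    seen_titles = set()
    unique = []
    for record in medline + embase:
        doi = (record.get("doi") or "").lower()
        title_key = normalise(record.get("title", ""))
        if doi and doi in seen_dois:
            continue  # same DOI already kept
        if not doi and title_key in seen_titles:
            continue  # no DOI; fall back to title match
        if doi:
            seen_dois.add(doi)
        seen_titles.add(title_key)
        unique.append(record)
    return unique
```

In practice, reference management software performs this step; the sketch only shows why the 34% journal overlap between databases translates into duplicate records that must be identified and removed before screening.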
The first task is to select those papers that potentially address the scoping questions. This is achieved by a double‐blind review of the abstracts of all papers identified by the search. If the abstract leaves doubt, so that the article cannot definitely be rejected, the full text must be obtained. Retrieve these papers in full for an independent critical appraisal by at least two reviewers, who will ultimately agree on a rating of the strength of the evidence. Where reviewers disagree about the score of a paper or its relevance to the scope of the research, they should discuss it to reach resolution. Involve a third reviewer if resolution is not achieved. It is important to critically appraise all studies in a review even if there is no variability in validity or results, since although the results may be consistent, all the studies may be flawed. Check for multiple publications based on the same data and cite only the most definitive results.7
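The two‐reviewer appraisal rule above can be expressed as a small decision function. This is a minimal sketch of the workflow only; the numeric ratings, the `discuss` callback and the `reviewer_c` tie‐breaker are illustrative assumptions, not a published scoring scheme.

```python
# Minimal sketch of the disagreement-resolution rule for one paper:
# agree if the two independent ratings match, otherwise discuss,
# otherwise involve a third reviewer.

def resolve_rating(reviewer_a: int, reviewer_b: int,
                   discuss=None, reviewer_c=None) -> int:
    """Return an agreed evidence rating for a single paper.

    discuss: optional callback modelling the reviewers' discussion;
             it returns a joint rating, or None if they still disagree.
    reviewer_c: optional third reviewer's rating, used as a tie-breaker.
    """
    if reviewer_a == reviewer_b:
        return reviewer_a  # independent ratings agree
    if discuss is not None:
        agreed = discuss(reviewer_a, reviewer_b)
        if agreed is not None:
            return agreed  # resolved by discussion
    if reviewer_c is not None:
        return reviewer_c  # third reviewer breaks the deadlock
    raise ValueError("unresolved disagreement: involve a third reviewer")
```

For example, `resolve_rating(2, 4, discuss=lambda a, b: None, reviewer_c=2)` falls through discussion to the third reviewer's rating.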
Many scales and checklists have been used to assess the validity and quality of studies, but there is no gold standard method. Scales with multiple items and complex scoring systems take more time to complete than simple approaches and they have not been shown to provide more reliable assessments of validity. It is preferable to use simple approaches for assessing validity that can be fully reported.10 Assess each study for clinical importance, applicability, validity, study design, selection bias, confounding, blinding, data collection and classification of outcomes, measurement error, follow‐up, attrition bias and analysis.11
Clinical importance and applicability are determined by the magnitude of the estimate of effect and relevance of the outcomes measured.11 Applicability (external validity or generalisability) is the extent to which results of studies provide a correct basis for generalisation to other circumstances. Validity (internal validity) is the extent to which the study design (judged by hierarchy of evidence), conduct and analysis are likely to prevent systematic errors or bias. It implies that the differences observed between comparison groups may, apart from random error, be attributed to the intervention or exposure under investigation.
Randomisation in experimental studies—that is, RCTs—minimises differences between groups by allocating subjects with particular characteristics randomly to exposed and non‐exposed groups. However, observational (analytical) studies—that is, case–referent, cohort and cross‐sectional studies, and case series—are more common than RCTs in occupational health. Four sources of bias should be assessed when appraising the validity of observational studies: selection, performance, attrition and detection biases (table 2).10
Although RCTs and observational case–referent and cohort studies are similar in that they compare outcomes in groups, observational studies do not allocate individuals by chance12 and hence can cause selection bias wherein the subjects being studied are not truly representative of the target population. Observational studies only control or adjust for confounders that are known and measured, whereas large RCTs additionally control for those that are unknown and unmeasured.10 Reviewers must determine the important confounders and how effectively these were controlled.
To assess performance bias in observational studies, one must check that there were no differences in the exposure of the comparison groups to other factors, confounders and modifying effects12 that could affect outcomes, and the way in which exposures and health effects were measured or identified. One must also determine whether exposure was measured in a similar and unbiased way in the groups being compared. Measurements of exposure or health effect are subject to measurement error due to variability in measurements of the same quantity on the same individual.13 Several measurements of the same health effect in the same subject or of the same occupational exposures may not give the same results from one day to another, in different hands, with different equipment or at different centres. This may be because of natural variations in the subject and observer or variations in the measurement or identification process, or both.
Concerns about attrition bias are common to all studies and relate to the extent that all subjects in a study are accounted for in the results. Concerns about detection bias relate to the effectiveness of blinding in cohort studies and the case definition that is used in case–referent studies, since people are entered into studies based on a knowledge of the outcome of interest.
The reviewers should not only grade each paper with respect to quality, but should also provide summaries of the key findings that will appear in evidence tables, which will be the working data for writing evidence‐based recommendations. The reviewers must not simply cut and paste abstracts from Medline as the only source of data in evidence tables, since abstracts often do not highlight all the important findings or key words of a study. Cutting and pasting an abstract is quicker than summarising a paper by hand, but ensure that the evidence tables contain what the review actually needs.
Avoid adhering rigidly to a hierarchy of evidence with the RCT at the pinnacle (see box), since this concept neglects methodological appropriateness.14 Adopt a balanced approach, recognising the strengths and limitations of well‐conducted observational studies.15 Hierarchy may be unhelpful when answers are needed to questions that, for scientific or practical reasons, cannot be studied by RCTs16 and when appraising the evidence for community health interventions.14 Other study designs and hierarchies are more appropriate to answer questions about aetiology, pathogenesis, disease frequency, diagnosis, prognosis and adverse effects,17,18 the areas most likely to be of interest in occupational health. Observational studies have proved essential in identifying unexpected and unpredictable adverse effects—for example, of smoking in lung cancer, and of asbestos in mesothelioma.18
Observational studies can be grouped hierarchically, with prospective cohort studies being more valid than retrospective studies using historical controls.7 Case–referent studies are more susceptible to bias and must be graded according to how well referents have been matched for confounding variables.
Since definitions of levels vary between hierarchies,17 avoid using numerical level alone to grade evidence. Hierarchies can produce anomalous rankings—for example, a statement about one intervention may be graded level 1 on the basis of a systematic review of a few small, poor‐quality RCTs, whereas a statement about another intervention may be graded level 2 on the basis of one large, well‐conducted, multicentre RCT.17 Use hierarchies that help the target reader understand the strength of evidence readily as well as those required as a matter of academic excellence.
Start by outlining the purpose and scope of the review. Describe the method including the search strategy—that is, inclusion and exclusion criteria, search words and phrases, search engines used, dates of the literature covered, the scoping questions asked, the occupational groups at risk, the number of abstracts discovered and papers reviewed, and so on. Focus on consistency and conciseness. If there are several authors, iron out style differences. Use print preview on the computer screen to check whether paragraphs are balanced. Never use a long word where a short one will do, and eliminate words that do not add to the meaning. Use active rather than passive verbs and avoid overusing noun forms of verbs. Omit redundant pairs—for example, “finish” implies “complete”, so the phrase “completely finish” is redundant.
Ensure that RWG members insert comments and track changes so that they are available for all to review. Use reference management software to simplify reordering and renumbering. Allow adequate time between writing and proofing to achieve distance from your work and have others from relevant stakeholder groups proof‐read the text.
Practitioners may not follow guidelines,19 therefore write recommendations based on the evidence in the review in a way that makes them more likely to be followed, providing solid take‐home messages. Make recommendations specific, unambiguous and easily identifiable and make an explicit link between the recommendations and the supporting evidence.1 The more precisely behaviours are specified, the more likely they are to be performed.20 Use precise behavioural terms—that is, specifying who, what, why, when, where and how to assist implementation.20 In a study of 10 national clinical guidelines, general practitioners followed guidelines on 67% of occasions when they were concrete and precise, and on 36% of occasions when they were vague and non‐specific.21 Recommendations were also adhered to more when the evidence was described clearly, in a straightforward and non‐conflicting manner.
Despite the high‐quality evidence underpinning the first clinical guideline from the National Institute for Health and Clinical Excellence, recommendations are not behaviourally specific and in the short form exceed 20 pages.20 For example, one recommendation states, “Cognitive behavioural therapy should be available as a treatment option for people with schizophrenia.”
To improve that specification of behaviour, Michie and Johnston recommend a more specific version:20
Include good practice points, based on the clinical experience of the RWG, legal requirements or other consensus, where there is no research evidence and none is likely.
In addition to the evidence review report and summaries of the evidence, publish in a peer‐reviewed journal so that the findings can be assessed by peers.3,4 Publish in a timely manner and present the results at scientific meetings,3 aiming for the results to appear in journals before they are reported in other media.6 Endeavour to ensure that the findings are reported in a balanced and understandable manner when presenting to the non‐medical press.4
Systematic reviews are sometimes criticised for not providing specific guidance, concluding that little evidence exists to answer the question,22 and for missing the opportunity to identify outstanding gaps in the evidence.23 Uncertainty often remains, and systematic reviews should acknowledge this, mapping the areas of doubt.14 Use the uncertainty to help clarify the options available to clinicians, workers and managers, and to stimulate more and better research to help resolve the uncertainty.24
Decide who will be credited with authorship early in the planning stages.3 This helps to avoid disputes over attribution of credit for the published review and for papers in peer‐reviewed journals. Authorship should be credited to those who made substantive intellectual contributions to the published study25—that is, (1) substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data; (2) drafting the article or revising it critically for important intellectual content; and (3) final approval of the version to be published.26
Contributors who do not meet the criteria for authorship (eg, a person who provided purely technical help, writing assistance or only general support) should be listed in the acknowledgments section and financial and material support should be acknowledged.3 Because readers may infer their endorsement of the data and conclusions, those cited must give written permission to be acknowledged.
There is moderate‐quality evidence that involving consumers in the development of patient information material results in material that is more relevant, readable and understandable to patients, without affecting their anxiety. This “consumer‐informed” material can also improve patients' knowledge.26 Write short executive summaries of the evidence involving the key target audiences. Start with the questions that each audience is likely to ask and summarise the evidence‐based answer for each question. Technical editing should be considered to ensure that the evidence statements from the report are properly reflected in the summaries.
Maintain complete and accurate records and retain them safely in a manner that permits a complete retrospective audit.3,4 Long‐term retention—for example, up to 15 years—of all records and primary outputs is recommended.6 Back up the data regularly, making duplicate copies on a disk and a hard copy of particularly important data,3 archiving older versions. Publication of the report does not negate the need to retain source data3 and notes of RWG meetings recording key decisions.
As with any product, it is important to be in touch with the specific consumers of each product—that is, the full systematic review and each of the associated guidelines for health professionals, employers, workers and their representatives—to determine whether it meets their needs and offers real practical assistance. Systematic reviews should be reviewed at least every five years, not only in light of new evidence but also to attend to consumer feedback as part of a programme of continual improvement. Questions that are useful to ask include
When using numerical scoring, it is useful to score between 1 and 6 rather than between 1 and 5, so that respondents cannot choose the easy option of a middle number rather than making a judgement.
We all rely on our colleagues volunteering to engage in undertaking systematic reviews to help us ensure that we practise in a way that is based on, though not dictated by, evidence. Few occupational health practitioners participate in systematic reviews. What can we do to engage, energise and enable more of our colleagues to add to our state of knowledge? Firstly, we must promote existing high‐quality practical reviews widely to demonstrate their value to key stakeholder groups, particularly employers and the government. This will help to win support, be it financial or other resources, for future projects. It is difficult to get employers to release talent in today's competitive world, unless there is a clear, tangible benefit beyond corporate social responsibility. Each one of us can be an ambassador for evidence‐based guidelines within our own spheres of influence, promoting the tangible benefits. We need better access to training in critical appraisal skills, ideally through accredited schemes.
Although evidence relating to the benefits and risks of healthcare interventions is essential for guiding practice, what is the true and wider impact of evidence‐based reviews? It would be beneficial to evaluate the impact of such reviews, including any cost benefit of implementing evidence‐based guidelines.
How can the process be made easier? An effective search requires the use of at least two search engines. Although this ensures that as much published work as possible is discovered, there is considerable overlap, which generates work in identifying and removing duplicates. A one‐stop shop, where one search engine could provide all the relevant published studies, would simplify and accelerate one important part of the process.
We need to conclude the debate regarding hierarchy of evidence. RCTs cannot be performed for many occupational health interventions. This means that there is very little level 1 evidence and therefore few grade A recommendations. Cohort, case–referent and cross‐sectional studies are natural experiments in which outcomes are measured in the real world.27 These are often more appropriate in the occupational setting, but they are scored as level 2 evidence and grade B or C recommendations. It would be helpful if current hierarchies of evidence were applied as they were intended—that is, to answer questions about the effectiveness of different treatments, and if consensus was reached for a different approach in domains such as public health and occupational health.
As more systematic reviews appear, and as the limitation of RCTs in an occupational setting is managed, practitioners both inside and outside the occupational field will become better informed and their practice will be more likely to provide better outcomes for the workers for whom they provide care.
I thank Mr Brian Kazer, Chief Executive, BOHRF, for his comments and for the SMART plan.
Competing interests: None declared.