Arch Dis Child. 2007 Apr;92(4):309–311. Published online 2006 May 31. doi: 10.1136/adc.2004.071381. PMCID: PMC2083691

An evaluation of Medline published paediatric audits from 1966 to 1999



Abstract

Aim

To evaluate the quality of paediatric audits from 1966 to 1999.


Methods

A Medline search was performed using the MeSH terms audit, child, paediatric (and pediatric). Predefined core elements of audit were used as inclusion criteria for entry of an article into this study. These criteria were as follows: (1) an article deals with a healthcare topic; (2) a standard is predefined; (3) actual practice is evaluated; (4) actual practice is compared with the standard. The fifth criterion of audit, dissemination of information and reaudit, was not an inclusion criterion, as it was not used in the early years covered by this study. Empirical grading of standards was used.


Results

The search yielded 442 articles, of which 303 (100%) were related to paediatric healthcare and were reviewed. Standards were defined in 115 (38%) articles. Audit against the standard was performed in 92 (30.4%) articles, of which 42 (45.6%) were published before, and 50 (54.3%) after, 1990. Re-audit was undertaken in 18 (5.9%) articles: 6 (14.3% of the 42 pre-1990 audits) and 12 (24% of the 50 post-1990 audits). Of the 188 paediatric studies rejected, 119 (63.3%) described practice observations.


Conclusions

Many articles in paediatrics are published as “audits”, but they do not contain the core elements of audit. Although audit is a potentially valuable tool in clinical medicine, the publication of poor-quality audits may lead to the decline of the audit concept. Suggestions on ways to improve the quality of published audits are made.

Introduction

Audit is a valuable tool in modern medicine and its process has been published widely.1 Its educational value is well recognised2 as it lends itself to self‐evaluation and, when used correctly, leads to improvements in clinical performance. It may be used for peer review and self‐evaluation; it is part of the everyday activity of healthcare workers3 and is a contractual requirement for doctors in hospitals.4 Audit measures the extent of implementation of best practice as defined by research or expert opinion.5 The utility of audit and feedback has been reviewed by the Cochrane Collaboration.6 Their findings indicate that audit can sometimes effectively improve the practices of healthcare professionals. The effects seem to be small or moderate but worthwhile. The review group concluded that anyone attempting to enhance or influence the behaviour of medical professionals should not rely solely on audit. Given the current importance of audits in clinical practice, we undertook a study to evaluate their quality.


Methods

A Medline literature search, restricted to English language articles, was performed for relevant articles from 1966 to 1999. This search used the MeSH terms audit, paediatric (and pediatric), infant, child, children and adolescent. Neonatal audits were excluded. Combined paediatric and adult audits were included if >50% of patients were <18 years of age.
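For illustration only, a comparable restricted PubMed search could be reproduced today with Biopython's Entrez interface; the query string below is an assumption for demonstration, as the exact search syntax used in this study is not reported.

    # Illustrative sketch: a comparable restricted PubMed search via Biopython.
    # The query string is an assumption; the study's exact syntax is not reported.
    from Bio import Entrez

    Entrez.email = "you@example.org"  # placeholder address, required by NCBI

    query = (
        '("medical audit"[MeSH Terms] OR audit[Title/Abstract]) AND '
        '(infant[MeSH Terms] OR child[MeSH Terms] OR adolescent[MeSH Terms] OR '
        'paediatric[Title/Abstract] OR pediatric[Title/Abstract]) AND '
        'english[Language]'
    )

    # Restrict to the 1966-1999 publication window used in the study.
    handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                            mindate="1966", maxdate="1999", retmax=1000)
    record = Entrez.read(handle)
    handle.close()

    print(record["Count"], "citations found")
    print(record["IdList"][:10])  # first ten PubMed identifiers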

Inclusion criteria for an audit were defined as: (1) addressing a healthcare topic; (2) developing an audit standard; (3) evaluating actual practice; and (4) comparing practice against the standard. In 1989, this structured audit process was endorsed by the Standing Committee on Postgraduate Medical Education.7 We believed that this standard should be adhered to by any audit published after 1990. Re-auditing was not made an inclusion criterion, to avoid disadvantaging audits published before the end of 1990.

Given the 34-year time span under investigation, and the limited library access and computer search facilities of the 1960s and 1970s, we thought that requiring a rigid evidence-based approach to standard development8 would disadvantage early studies. We therefore adopted a pragmatic approach to subdividing the audit standards, by asking what a concerned paediatrician would do if searching for evidence. This led to the decision to divide the standards empirically into three levels: (A) an expert consensus group or data from a literature review used to define the standard; (B) a local consensus group, for example paediatricians in a particular region together defining what the accepted standard should be; and (C) personal opinion.
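A minimal sketch of how this screening and grading scheme could be recorded per article is given below; the record structure and field names are illustrative assumptions, not the data-collection form actually used in the study.

    # Illustrative only: a possible per-article record for the screening and
    # grading scheme described above (field names are assumptions).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ArticleReview:
        year: int
        healthcare_topic: bool         # criterion 1: addresses a healthcare topic
        standard_defined: bool         # criterion 2: an audit standard is developed
        practice_evaluated: bool       # criterion 3: actual practice is evaluated
        compared_with_standard: bool   # criterion 4: practice is compared with the standard
        standard_level: Optional[str] = None  # "A", "B" or "C" as defined above
        reaudited: bool = False        # recorded, but not an inclusion criterion

        def is_accepted_audit(self) -> bool:
            """True only if all four core criteria are met."""
            return (self.healthcare_topic and self.standard_defined
                    and self.practice_evaluated and self.compared_with_standard)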

Each article was reviewed by two authors (CSOG and YZ) to ascertain (1) year of publication, (2) presence of audit inclusion criteria, (3) level of quality of the audit (if present) and (4) whether re‐audit had taken place. When disagreement occurred, the paper was reviewed by all three authors and a consensus reached.


Results

The Medline search identified 442 articles distributed across 192 different journals; 429 were retrieved and reviewed (fig 1). The remaining 13 articles were unavailable in Ireland or Great Britain. All 429 articles dealt with healthcare topics. Of these, 59 (13.8%) were reviewed by all three authors, because of uncertainty about the standards in 46 studies and about the study population in 13.

Figure 1 Flow diagram of results.

In all, 126 articles (29.4%) were excluded as their study populations were predominantly adult. The remaining 303 studies dealt with a paediatric healthcare topic and form the denominator (100%) for the percentages that follow.

Of the 92 accepted audits, 49 were published in North America, 30 in the UK and Ireland, 9 in Australia and New Zealand, 2 in Asia, and 1 in each of the rest of Europe and Africa.

Table 1 outlines the decade of publication, number of audits per decade, standard quality and occurrences of re‐audit.

Table 1 Summary of accepted audits (n = 92) and frequency of re‐audit

The issues considered in the audits were medical (74), healthcare delivery (6), public health (4), dental (4), nursing (3) and dietetics (1).

Eighteen re-audits were undertaken, only one of which was published before 1980. There was no statistically significant difference between the proportion of audits re-audited in the 1980s and in the 1990s (5/26 and 12/50, respectively; p = 0.78, Fisher's exact test).
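This comparison can be checked in a few lines; the sketch below uses scipy's implementation of Fisher's exact test (the software used for the original analysis is not stated) and should return a two-sided p-value close to the reported 0.78.

    # Sketch only: re-running the reported 2x2 comparison with scipy.
    from scipy.stats import fisher_exact

    # Rows: 1980s and 1990s; columns: re-audited, not re-audited.
    # Counts taken from the text: 5 of 26 audits in the 1980s, 12 of 50 in the 1990s.
    table = [[5, 26 - 5],
             [12, 50 - 12]]

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.2f}")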

In all, 23 (7.6%) other articles developed standards but did not compare practice against them, and thus were excluded. Their standard quality was level A in 15 (65%), level B in 3 (13%) and level C in 5 (22%).

A further 188 (62%) paediatric healthcare articles were excluded as they failed to develop a standard. Of these, 115 (61.2%) described, or observed, actual practice. The remaining articles comprised 14 dealing with healthcare improvements through databases and computers, 10 editorials, 9 items of personal correspondence, 7 articles dealing with nurse practitioners and stress in nursing, 5 questionnaires and 28 miscellaneous articles.

In all, 8 abstracts of the 13 articles that could not be retrieved were evaluated: 5 appeared to be observations of practice, 1 was a patient satisfaction survey, 1 a review article and 1 an adult study.


Discussion

The precision and validity of reviews are directly related to the comprehensiveness of the literature identification process. We experienced barriers to achieving this goal, which included (1) limited library resources, (2) difficulties accessing the Index to Irish Healthcare Library Journal Holdings (a list of journals held by hospitals around Ireland) and therefore (3) a requirement to obtain journal articles from abroad, with attendant costs and time delays. Owing to resource and time limitations, we did not access non-peer-reviewed journals and did not try to contact hospitals to survey the quality of audits performed. We made a pragmatic decision to evaluate the Medline database, with which the senior author (MBO'N) was familiar. Had we used other databases, for example Embase, we could have accessed citations that the National Library of Medicine's PubMed Medline searching system does not cover. The overlap between databases is variable, however, with the Cochrane Handbook for Systematic Reviews of Interventions9 citing figures of 10–75%, depending on the topic reviewed. Searching multiple databases would have made our conclusions more robust; consequently, our paper has limitations.

The difficulties we encountered in retrieving 13 of the articles have been described previously,10 and this problem should lessen as many journals are now electronically based.

This paper raises some concerns regarding the methodology of paediatric audits. Many studies are published as paediatric audits without developing a standard (116/303, 38.3%) but describe actual or observed practice. This description of practice is important and is an integral component of the medical process. However, in its own right, it does not indicate the quality of care, as actual practice needs to be evaluated against the desired standard. From the observation of practice, the process of standard development can progress.

What is already known on this topic

  • Audit is an important tool in paediatrics and is used as a method of quality assessment and quality improvement.
  • Audit has recently fallen into disrepute.

What this study adds

  • Published paediatric audits have generally been of poor quality over recent decades.
  • The conduct of well‐structured audits that satisfy all criteria of audit and re‐audit will improve the reputation and quality of published audits.

This study suggests that more care needs to be given to standard development. This core component of audit indicates what the clinician aspires to do and is thus a crucial part of the audit process. The process has been evolving over recent decades with the development of evidence-based medicine and the proliferation of groups that evaluate the quality of published studies. A recent report11 suggests that 71% of respondents (n = 337) involved in audit use the research literature when developing review criteria. Concern was expressed, however, that only 27% recorded the validity of the research. Deficiencies in standard development have major implications for the audit process: they can undermine the culture of quality improvement in the clinical setting and can bring the process into disrepute. It would be prudent for those reporting audits to indicate the quality of the standards used; several grading systems are available.8

Audit is an innovative process that allows healthcare practice to be evaluated and improved. People who respond to innovation can be categorised into five groups: (a) innovators (2.5%); (b) early adopters (13.5%); (c) early majority (34%); (d) late majority (34%); and (e) laggards (16%).12 If audit is truly an innovative process, we would have expected that, after the innovators of the 1970s and 1980s had published their data, the early adopters and early majority of the 1990s would have published increasing numbers of audits. As this did not occur, it seems that this innovative process may have stalled. The common practice of delegating audit to junior or middle-grade doctors, who are likely to be in post for only a short time, may have contributed to this.

To reverse this trend, we suggest that the following three areas can be used to improve audit quality. Firstly, in every paediatric department, a paediatrician should be designated as an expert in the process of audit, to counsel and advise those interested in audits. The development of a portfolio of audits of varying quality could be used to explain to trainees the common errors in audit performance. The development of a central resource for standards, with a designation of their quality level (either “level A” or evidence gradation based on the Oxford system8), would aid clinicians organising audits. Secondly, in the educational arena, the audit process can be taught at undergraduate and postgraduate levels. During specialist registrar training, each specialist registrar should be required to participate in two series of audits: firstly as a junior to conduct supervised audits and subsequently as a senior trainee to assist and supervise juniors conducting audits. Thirdly, journals should publish only true audits that contain the core elements of audit,7 or audits that develop either assessment tools or new standards that further the process of audit.

Re‐audit is the current goal for those involved in the audit process. We were surprised to find so few audits that closed the audit loop. This study gives an insight into the audit process and quality of the standards used from 1966 to 1999 and can serve as a comparison when current day audits are being evaluated.


Conclusions

Despite clear guidelines on audit in practice, poor-quality audits continue to be published. To deal with this issue, we suggest the following: (1) audit and quality improvement techniques should be incorporated into medical education at the undergraduate level; (2) published audits should indicate clearly the quality of the data analysed to produce the standards; (3) consideration should be given to establishing a central resource for clinicians who wish to have standards developed in specific areas; and (4) journals should publish only audits that close the audit loop; the term "practice observation" could be used where other audit-like articles are published. Should the quality of future published audits improve, the findings of a future Cochrane review6 will perhaps be more supportive of audit.


Acknowledgements

We thank Ms Julia Reynolds, Medical Information Specialist, Mayo General Hospital, Ireland, for her invaluable help.


Competing interests: None declared.

Appendices of articles included and excluded have not been included in this paper. However, these are available from the corresponding author on request.


References

1. Ellis BW, Sersky T. A clinical guide to setting up audit. BMJ 1991;302:704–707.
2. Batstone GF. Educational aspects of medical audit. BMJ 1990;301:26–28.
3. Stern M, Brennan S. Medical audit in the hospital and community health service. London: Department of Health, 1994.
4. Department of Health. Working Paper 6. Medical audit. London: HMSO, 1989.
5. Smith R. Audit and research. BMJ 1992;305:905–906.
6. Jamtvedt G, Young JM, Kristoffersen DT, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev 2003;(3):CD000259.
7. Standing Committee on Postgraduate Medical Education. Medical audit—the educational implications. London: SCOPME, 1989.
8. Phillips B, Ball C, Sackett D, et al. Levels of evidence and grades of recommendations. Oxford: Oxford Centre for Evidence-Based Medicine, May 2001.
9. Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions 4.2.6 (updated September 2006; section 5.1.1). In: The Cochrane Library, Issue 4, 2006. Chichester, UK: John Wiley & Sons Ltd.
10. Hopewell S, Clarke M, Lusher A, et al. A comparison of handsearching versus Medline to identify reports of randomised controlled trials. Stat Med 2002;21:1625–1634.
11. Hearnshaw H, Harker R, Cheater F, et al. A study of the methods used to select review criteria for clinical audit. Health Technol Assess 2002;6:1–78.
12. Rogers EM. Diffusion of innovations. New York: Simon and Schuster, 2003:281.
