"Readers should not have to infer what was probably done, they should be told explicitly" wrote Altman in support of the recommendations on better reporting of randomised controlled trials [13
]. Disappointingly, in this series of trials it was often not possible to tell what had been done to define or detect the reported ADRs. Most authors were content either to say that adverse effects were assessed routinely (leaving the reader to infer the exact methods), or gave no information at all, ignoring the SORT recommendation that sufficient detail should be given for readers to reproduce the results.
This leaves us in a quandary when we try to interpret rates of ADRs. Perceived differences in drug safety profiles may result simply from the use of different methods, rather than reflecting any genuine disparity. For example, trials that found relatively high rates of ADRs may have done so because patients' diaries, rather than spontaneous reporting, were used to monitor ADRs [1]. A similar lack of information on the criteria (or grading system) used to gauge the severity of ADRs makes it difficult to assess their clinical impact. The reported rate, and clinical significance, of an ADR such as hypokalaemia will differ if it was diagnosed at plasma potassium concentrations below 3.0 mmol/l in one trial and below 3.5 mmol/l in another. As these methodological details are seldom fully described in trial reports, any comparative evaluation of ADR rates between trials should be viewed as potentially unreliable.
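To make the point concrete, the following sketch (using entirely hypothetical potassium measurements, not data from any of the reviewed trials) shows how the same patients yield very different hypokalaemia rates depending solely on the diagnostic cutoff chosen:

```python
# Hypothetical plasma potassium values (mmol/l) from one treatment arm;
# illustrative numbers only.
potassium = [3.2, 3.4, 2.9, 3.6, 3.1, 3.8, 2.8, 3.3, 3.5, 4.0]

def hypokalaemia_rate(values, cutoff):
    """Proportion of patients classed as hypokalaemic at a given cutoff."""
    return sum(v < cutoff for v in values) / len(values)

# The stricter definition (<3.0 mmol/l) flags 2/10 patients (20%);
# the looser definition (<3.5 mmol/l) flags 6/10 (60%), from the same
# patients receiving the same drug.
print(hypokalaemia_rate(potassium, 3.0))  # 0.2
print(hypokalaemia_rate(potassium, 3.5))  # 0.6
```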
Although deviations from a precise definition of the term "ADR" may seem a minor issue, closer analysis reveals how misleading conclusions can result from the inappropriate use of terminology. A systematic review of postoperative analgesia found piroxicam to have a significantly better safety profile than placebo [14]. On closer examination, however, we found that symptoms such as fever and headache were recorded as adverse effects in the piroxicam trials. Piroxicam (a non-steroidal anti-inflammatory drug) is an effective treatment for fever and headache, so it is no surprise that placebo turned out to have a higher rate of "adverse effects" and was considered less safe. To avoid such confusion, authors should provide sufficient information in adverse event reports for readers to distinguish which events were defined as ADRs and which were considered adverse clinical events unrelated to therapy.
We recognise that trials may be insufficiently powered to detect rare adverse effects and cannot be expected to detect and report every adverse effect associated with a drug. Nor do we mean to imply that trials providing long lists of ADRs are better than those that report few. Our primary concern is that adverse effects data, when available, should be reported in a consistent manner. Specific steps for monitoring ADRs have already been laid down in the International Conference on Harmonisation – Good Clinical Practice guidelines for clinical trials, and it is regrettable that much of the information collected is not reported in a useful format [9].
Authors may argue that space constraints in journals prevent them from reporting ADRs in more detail. However, we found that many papers used only a small proportion of their space for safety data: 68% of the reviewed trials devoted less than 10% of the total area of the Results and Discussion sections to describing adverse effects. Greater clarity in the reporting of ADRs could be achieved if authors were prepared to reduce the emphasis on efficacy and devote a greater proportion of space to safety. In addition, journals that maintain a web presence could be asked to provide dedicated areas for the reporting of adverse effects data.
Systematic reviews summarising the effects of health care have recently been accused of bias because they emphasise efficacy rather than safety [15]. However, the disparate nature of ADR reporting does not lend itself to systematic analysis, and it is therefore no surprise that systematic reviews are unable to provide a balanced viewpoint. Patients and their physicians will not be able to make accurate judgements on the benefit:harm ratio unless safety reporting receives equal attention. Widespread adoption of the CONSORT recommendations on the reporting of adverse effects would substantially improve the quality of information without adding greatly to the burden of work for authors.
In summary, we believe that the absence of methodological detail and the lack of consistency in reporting ADR rates create significant problems for those trying to interpret the data. Authors need to take the following steps, illustrated in the sketch after the list:
• specify the methods used in detecting ADRs (incidental reporting, routine recording, or active seeking)
• define ADRs, and the scale of severity used
• report the frequencies of ADRs for each treatment arm
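As an illustration only, these three items could be captured in a simple structured record; the field names below are our own invention, not drawn from CONSORT, ICH-GCP, or any other standard, and the values are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical schema for one ADR entry in a trial report; illustrative only.
@dataclass
class AdrReport:
    adr_name: str                      # e.g. "hypokalaemia"
    definition: str                    # explicit diagnostic criterion
    severity_scale: str                # grading system used
    detection_method: str              # "incidental", "routine", or "active"
    frequency_by_arm: dict[str, tuple[int, int]]  # arm -> (cases, patients)

report = AdrReport(
    adr_name="hypokalaemia",
    definition="plasma potassium < 3.0 mmol/l",
    severity_scale="WHO toxicity grading",
    detection_method="routine recording (scheduled laboratory tests)",
    frequency_by_arm={"drug": (6, 150), "placebo": (2, 148)},
)
```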
Valuable evidence on ADRs will continue to be misinterpreted or lost unless these measures are adopted by authors and journal editors.