Australas Med J. 2012; 5(8): 462–467.
Published online Sep 9, 2012. doi: 10.4066/AMJ.2012.1491
PMCID: PMC3442191
Doctors and Medical Science
Moyez Jiwa
Editor AMJ
Corresponding Author: Moyez Jiwa. Email: editor@amj.net.au
Are practicing medical doctors up to date on the latest advances in their field? Is published research valid and reliable? Why are doctors seldom involved in research? The aim of this editorial is to explore some of these complex issues.
Patients may believe that their doctor is a source of impartial and up-to-date information in his or her field, but how do doctors keep up to date? Gabbay and le May reported the following:
‘…clinicians rarely accessed, appraised, and used explicit evidence directly from research or other formal sources; rare exceptions were where they might consult such sources after dealing with a case that had particularly challenged them. Instead, they relied on what we have called “mindlines,” collectively reinforced, internalised tacit guidelines, which were informed by brief reading, but mainly by their interactions with each other…opinion leaders, patients…pharmaceutical representatives and by other sources of largely tacit knowledge that built on their early training and their own and their colleagues' experience’.1
Are doctors who prescribe the latest drugs more likely to be up to date? General practitioners, for example, have been found to be reactive and opportunistic recipients of new drug information, and rarely report undertaking an active information search. The decision to initiate a new drug is heavily influenced by advertising and by endorsement from colleagues and hospital consultants.2 Furthermore, many new medications offer little, if any, incremental value over existing therapies. The combination of inadequate information about the potential side effects of new drugs and their limited incremental value argues strongly against their early use except in exceptional circumstances.3
It is telling that drug companies spend billions every year promoting their products.4 It is also notable that many new drugs are withdrawn within a very short time of their launch.5 Worryingly, there is sometimes a relative lack of urgency even when a drug is clearly shown to be harming patients. For example, 19.8 million patients were prescribed five questionable drugs before action was taken to remove them from the market. These included painkillers, anti-histamines, drugs used to treat obesity and anti-hypertensive drugs. Not one of these was lifesaving, nor, in many cases, were they the only drugs available for that indication. In another case physicians prescribed a new painkiller to 2.5 million patients with acute pain, even though many well-tested similar drugs were available and the drug was known to elevate liver enzymes. Similarly, the rationale for not withdrawing an anti-histamine from the market as soon as researchers clearly identified it as causing deaths has not been explained.6 It is surprising that the drug was removed from the market not when the adverse effects were identified, but only after the manufacturer had developed a new product to substitute for it.
For some relatively rare conditions, practicing doctors may know little more than they knew when they first qualified.7 Physicians who have been in practice for a long time may be at greatest risk of being out of date in their recommendations and practice, and this group may therefore need support to keep abreast of research.8 Older physicians also seem less likely to adopt newly proven therapies and may be less receptive to new standards of care.9,10 So what is the role of so-called peer-reviewed publication?
It is estimated that there are 1.29 papers published in the peer reviewed medical literature every minute.11 Even if a doctor were able to keep up with this volume of reading, it is said that much of what is published is flawed. Richard Smith, former editor of the British Medical Journal (BMJ), is quoted as saying that only 5% of published papers reached minimum standards of scientific soundness and clinical relevance, and in most journals the figure was less than 1%.12
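To put that rate in perspective, a back-of-envelope calculation (assuming, purely for illustration, that the quoted rate is sustained around the clock) gives the annual volume a conscientious reader would have to contend with:

\[ 1.29 \times 60 \times 24 \times 365 \approx 678{,}000 \ \text{papers per year}. \]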
In the period 2000–2010 a total of 788 papers were retracted, i.e. expunged from the public record.13 Approximately three-quarters of these papers were withdrawn because of a serious error; the rest of the retractions were attributed to fraud (data fabrication or falsification). The fakes were more likely to appear in leading publications with a high “impact factor”, a proxy measure of how often a journal’s research is cited in other peer-reviewed journals. More than half (53%) of the faked research papers had been written by a first author who was a “repeat offender”; this was the case for only one in five (18%) of the erroneous papers.13 At about the same time, it was estimated that the number of articles published between 1950 and 2004 that ought to have been retracted was at least 10,000 and possibly as many as 100,000.14 The authors further concluded that although high impact journals tend to have fewer undetected flawed articles than their lower-impact peers, even the most vigilant journals potentially host papers that should be retracted.14
Retraction or not, one would like to think that doctors are able to spot flawed papers and, better still, are unlikely to have their clinical practice misled by poor science or by glossy leaflets for new and untested treatments. Let us start with a more basic question: do doctors read research papers at all? Here is a quote from a doctor writing in the BMJ:
‘The volume of statistical argument [in research papers] also seems part of the same disingenuous process. How many doctors have a clue what it means? Of all the areas of mathematics, probability, and its inscrutable daughter statistics, are the most slippery to grasp. Yet authors routinely drop large chunks of this extremely difficult stuff into papers that are supposed to be there to illuminate practice for doctors. But most doctors, including myself, don’t understand it’.15
What is the point of publishing research papers that cannot be absorbed by the target audience? One author suggested a possible answer:
‘Authors are eager to get their names in print not because they are bursting to tell us something but for more solemn reasons. Another paper means another line on a curriculum vitae, another step towards a job or a research grant.’16
Journals rely on ‘peers’ to decide which papers merit publication and which should be jettisoned. The process of peer review is recognised to be flawed.17 The quality of reviews varies, divergent views may be expressed, and it is sometimes difficult to determine why an editor rejects or indeed accepts a submission without concluding that the editor’s biases have played a significant role in that decision. In many cases, especially in niche areas, a competitor who may or may not declare a conflict of interest may be invited to review the paper. If the identity of the reviewer is kept from the authors, in what is known as “blind” peer review, the reviewer is free to recommend rejection or publication without fear of recrimination; yet in very specialised topics the identity of the authors can be very hard to conceal from an expert in the field at the time of review. Publishing is also a powerful, prestigious and lucrative business. No journal has yet taken up a long-standing suggestion to remove the names of authors from published papers, which would ensure that papers are published only for the sake of disseminating information; to do so would make the journal much less attractive to authors, and therefore to advertisers and other cash cows.18
To fully appreciate the value of journal articles to their target audience, namely university researchers and their host institutions, one might consider the value of a paper in a highly rated journal (impact factor >40) compared with one in a more modestly rated journal (impact factor <2). A paper in the high impact journal may have an Eigenfactor score of 0.67; the Eigenfactor score is based on the number of times articles published in the journal over the past five years have been cited in the reporting year.19 A paper published in a ‘lesser’ journal has an Eigenfactor score of 0.003. Naturally a university dean would be more impressed with work cited frequently than with work cited seldom. What is even more likely is that the academic with the paper in a so-called high impact journal will be more successful in grant applications and more likely to be invited to speak at national and international conferences, all of which may attract postgraduate students, competitive grants and lucrative collaborations. In Australia, for example, universities that employ academics who publish in journals on a predetermined list are more likely to be rewarded with a larger share of government grants and subsidies.20
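As a rough illustration only, using the two scores quoted above (the Eigenfactor algorithm also weights citations by the influence of the citing journals, so a simple ratio is merely indicative of scale):

\[ \frac{0.67}{0.003} \approx 223, \]

so, by this metric, work in the highly rated journal is treated as roughly two hundred times more influential, whatever its intrinsic merit.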
That is not to say that publication in high impact journals means living happily ever after. The reputation of a top-rated medical journal was damaged by a controversy over its response to problems with research on a drug used to treat pain.21 A study published in the journal in 2000 noted an increase in myocardial infarction amongst those using the drug.21 Concerns about the robustness of that study were raised with the journal in August 2001, and at about the same time both the US Food and Drug Administration and another major journal cast doubt on the interpretation of the published data. However, it was not until 2005 that the journal published its concerns about the original study. During that five-year period, funded reprints of the original article were used to promote the offending drug.
Most journals are peer reviewed by an unpaid army of academics and editors, yet the journals are then sold to libraries; an annual subscription to some journals can exceed $20,000, and publishers make substantial profits. Here is a list of published subscription rates for various top-rated journals:
[Table: published annual subscription rates for selected top-rated journals]
A major publisher of medical journals is a global company based in Amsterdam, employing more than 7,000 people in 24 countries. It claims a global community of 7,000 journal editors, 70,000 editorial board members, 300,000 reviewers and 600,000 authors. In July 2010 the company posted interim results reporting revenue of almost £3 billion and adjusted profits of £758 million for the six months ending 30 June.22 This is also the company that was reported to have been paid an undisclosed sum by a pharmaceutical company to produce several volumes of a publication that had the look of a peer-reviewed medical journal but contained only reprinted or summarised articles, most of which presented data favourable to the sponsor’s products, with no disclosure of company sponsorship.23
Despite the fact that doctors are key to delivering health care, they are seldom involved in research and even less often cited as leaders of research teams. The relationship between research organisations and doctors is key to understanding this limited involvement in innovation. ‘Good’ research is a painstaking science in which clearly defined research questions are articulated, appropriate methods are applied, data are efficiently collected, and appropriate analysis is conducted to craft conclusions that take into account the limitations and strengths of the study. Seldom, if ever, does a single study, no matter how large, offer conclusions robust enough to change practice. The design and execution of high quality research requires expertise that takes many years of further training and experience to acquire, and acquiring these skills may take doctors out of the clinic at significant personal opportunity cost.
The subjects of clinical research, i.e. patients, must give informed consent before they can be included in a study. This is more complicated than working with uncomplaining rats in a sanitised laboratory. In practice, limited control over research subjects means that most clinical research cannot be generalised and is therefore less likely to be published in high impact journals. Most research is also conducted at universities, directly or indirectly, and universities and medical schools have to generate surplus income to grow in size and influence. Very little research in primary care or public health has commercial value, so to profit from clinical research universities rely on government funding, and the government agenda may be driven by political imperatives: a government minister unveiling shiny new machines makes for a far more voter-friendly photo opportunity than one launching a more efficient way to rehabilitate people with mental illness or to manage incontinence in general practice.
Funding is therefore heavily weighted towards the biomedical sciences, where the focus is on cure rather than on prevention or more efficient service delivery. Genetic research, nanoparticles and the study of prions are accordingly more likely to be generously funded than research on system design that would allow people to die in comfort in their own homes.
In 2010 the Australian National Health and Medical Research Council divided its research funding so that 39% of the funds were awarded to preventive medicine and public health. At the same time, the majority of government spending on health care in practice is on so-called primary care services.24 For universities the return on investment does not favour clinical research, so laboratory-based research on a cure for cancer makes a far more compelling case than research involving therapists in the community or models of disease self-management. And yet, in the scheme of things, research on how to deliver an equitable health service is going to make more of an impression on the community in the short term than research on a cure for cancer that may be 20 years away.
Academics understand that universities are financially rewarded for adopting this paradigm by a system driven by priorities related to return on investment. Given the competitive nature of those who enrol in medical school, this is a considerable disincentive and drives clinicians out of research. As if that were not disincentive enough, there are major challenges to recruiting participants in clinical practice.25 Patients do not seek help from doctors only to spend most of their consultation negotiating an opportunity to participate in research that may or may not benefit them directly. When the patient is paying for the doctor’s time, as is the case in many countries, doctors have no incentive to introduce distractions into that consultation. In reality, many of the patients seen in clinical practice are excluded by research designs, which usually favour young, articulate, English-speaking, literate, relatively healthy people rather than those living with the conditions for which the evidence has apparently been generated.26
Doctors are not generally actively involved in research; they may not critically appraise research articles, and their knowledge of recent advances in their field may be out of date. There are, for example, cases of doctors continuing to prescribe drugs that have been reported to cause harm. A vast number of research papers are published every year, most of them have significant limitations, and some poor science is published even in the most influential journals. Publishers and manufacturers of pharmaceuticals have sometimes colluded in ways that do not necessarily benefit patients. The need for specialist research skills, together with research funding structures, means that those most closely involved with patients neither lead nor participate in research projects. Much of the most generously funded research is aimed at long-term commercial goals rather than at direct benefit to patients.
Footnotes
PEER REVIEW
Not commissioned. Externally peer reviewed.
CONFLICTS OF INTEREST
The author is the Editor-in-Chief of the AMJ.
Please cite this paper as: Jiwa M. Doctors and Medical Science. AMJ 2012;5(8):462–467. http://dx.doi.org/10.4066/AMJ.2012.1491
References
1. Gabbay J, le May A. Evidence based guidelines or collectively constructed “mindlines”? Ethnographic study of knowledge management in primary care. BMJ. 2004;329(7473):1013.
2. Prosser H, Almond S, Walley T. Influences on GPs’ decision to prescribe new drugs - the importance of who says what. Fam Pract. 2003;20(1):61–8.
3. Lexchin J. Should doctors be prescribing new drugs? The International Journal of Risk and Safety in Medicine. 2002;15:213–222.
4. Lexchin J. Models for financing the regulation of pharmaceutical promotion. Global Health. 2012;8:24.
5. Wood AJ. The safety of new medicines: the importance of asking the right questions. JAMA. 1999;281(18):1753–1754.
6. Woosley RL, Chen Y, Friedman JP, Gillis RA. Mechanism of the cardiotoxic actions of terfenadine. JAMA. 1993;269:1532–1536.
7. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142(4):260–73.
8. Freiman MP. The rate of adoption of new procedures among physicians: the impact of specialty and practice characteristics. Med Care. 1985;23:939–45.
9. Hlatky MA, Cotugno H, O’Connor C, Mark DB, Pryor DB, Califf RM. Adoption of thrombolytic therapy in the management of acute myocardial infarction. Am J Cardiol. 1988;61:510–4.
10. Young MJ, Fried LS, Eisenberg J, Hershey J, Williams S. Do cardiologists have higher thresholds for recommending coronary arteriography than family physicians? Health Serv Res. 1987;22:623–35.
11. How many journal articles have been published (ever)? Available from: http://duncan.hull.name/2010/07/15/fifty-million/. Retrieved 9 Aug 2012.
12. O’Donnell M. Why doctors don’t read research papers: scientific papers are not written to disseminate information. BMJ. 2005;330(7485):256.
13. Steen RG. Retractions in the scientific literature: do authors deliberately commit research fraud? J Med Ethics. 2011;37(2):113–7.
14. Cokol M, Iossifov I, Rodriguez-Esteban R, Rzhetsky A. How many scientific papers should be retracted? EMBO Rep. 2007;8:422–423. doi:10.1038/sj.embor.7400970.
15. Barraclough K. Why doctors don’t read research papers. BMJ. 2004;329:1411. doi:10.1136/bmj.329.7479.1411-a.
16. O’Donnell M. Why doctors don’t read research papers: scientific papers are not written to disseminate information. BMJ. 2005;330:256. doi:10.1136/bmj.330.7485.256-a.
17. Smith R. The trouble with medical journals. J R Soc Med. 2006;99(3):115–119.
18. Healy JB. Why do you write? Lancet. 1976;1(7952):204.
19. Journal Citation Reports. Available from: http://admin-apps.isiknowledge.com/JCR/help/h_eigenfact.htm. Retrieved 11 Aug 2012.
20. Australian Research Council. Available from: http://www.arc.gov.au/era. Retrieved 8 Aug 2012.
21. Krumholz HM, Ross JS, Presler AH, Egilman DS. What have we learnt from Vioxx? BMJ. 2007;334(7585):120–3.
22. Reed Elsevier. Reed Elsevier 2010 Interim Results. Available from: http://www.reed-elsevier.com/mediacentre/pressreleases/2010/Pages/reed-elsevier-interim-results-2010.aspx. Retrieved 10 Aug 2012.
23. Collier R. Medical journal or marketing device? CMAJ. 2009;181(5):E83–E84. doi:10.1503/cmaj.091326.
24. National Health and Medical Research Council. Available from: http://www.nhmrc.gov.au/grants/rounds/projects/index.htm#2010. Retrieved 11 Aug 2012.
25. Ngune I, Jiwa M, Dadich A, Lotriet J, Sriram D. Effective recruitment strategies in primary care research: a systematic review. Qual Prim Care. 2012;20(2):115–23.
26. Herland K, Akselsen JP, Skjønsberg OH, Bjermer L. How representative are clinical study patients with asthma or COPD for a larger “real life” population of patients with obstructive lung disease? Respir Med. 2005;99(1):11–9.