Int J Obes (Lond). Author manuscript; available in PMC Jan 1, 2013.
PMCID: PMC3288675
NIHMSID: NIHMS325016
Is Funding Source Related to Study Reporting Quality in Obesity or Nutrition Randomized Control Trials (RCTs) in Top Tier Medical Journals?
Kathryn A. Kaiser, Ph.D.,1,2 Stacey S. Cofield, Ph.D.,1 Kevin R. Fontaine, Ph.D.,3 Stephen P. Glasser, M.D.,4 Lehana Thabane, Ph.D.,5 Rong Chu, M.Sc.,5 Samir Ambrale, M.D.,6 Ashish D. Dwary, M.D.,7 Ashish Kumar, MBBS,8 Gaurav Nayyar, M.D.,7 Olivia Affuso, Ph.D.,2,9 Mark Beasley, Ph.D.,1 and David B. Allison, Ph.D.1,2
1University of Alabama at Birmingham, School of Public Health, Dept. of Biostatistics, Birmingham, AL, USA
2University of Alabama at Birmingham, Nutrition Obesity Research Center, Birmingham, AL, USA
3Johns Hopkins University, Dept. of Health, Behavior and Society; School of Medicine, Baltimore, MD, USA
4University of Alabama at Birmingham, School of Medicine, Division of Preventive Medicine, Birmingham, AL, USA
5McMaster University, Dept. of Clinical Epidemiology and Biostatistics, Hamilton, ON, Canada
6New York Medical College, Dept. of Medicine, Valhalla, NY, USA
7University of Alabama at Birmingham, Dept. of Nutrition Sciences, Birmingham, AL, USA
8University of Alabama at Birmingham, Dept. of Behavioral Neurobiology, Birmingham, AL, USA
9University of Alabama at Birmingham, School of Public Health, Dept. of Epidemiology, Birmingham, AL, USA
Corresponding Author: David B. Allison, Ph.D., RPHB 327, 1530 3rd Avenue South, Birmingham, AL, 35294-0022, Tel.: 205-975-9169, Fax: 205-975-2541, dallison@uab.edu
Background
Faithful and complete reporting of trial results is essential to the validity of the scientific literature. An earlier systematic study of randomized controlled trials (RCTs) found that industry-funded RCTs appeared to be reported with greater quality than non-industry-funded RCTs. The aim of this study was to examine whether reporting quality differs systematically by funding status (i.e., industry funding vs. non-industry funding) among recent obesity and nutrition RCTs published in top-tier medical journals.
Methods
Thirty-eight obesity or nutrition intervention RCT articles were selected from high-profile, general medical journals (The Lancet, Annals of Internal Medicine, JAMA, and the British Medical Journal) published between 2000 and 2007. Paired papers were selected from the same journal and the same publication year, one with and the other without industry funding. The following identifying information was redacted: journal, title, authors, funding source, and institution(s). Three raters then independently and blindly rated each paper according to the Chalmers Method, and total reporting quality scores were calculated.
Findings
The inter-rater reliability (Cronbach’s alpha) was 0.82 [95% Confidence Interval (C.I.) = 0.80 – 0.84]. The mean (M) and standard deviation (SD) of the total Chalmers Index quality score (out of a possible 100) were M = 84.5, SD = 7.04 for industry-funded studies and M = 79.4, SD = 13.00 for non-industry-funded studies. A Wilcoxon matched-pairs signed-ranks test indicated no significant rank difference in the distributions of total quality scores between funding sources, Z = −0.966, p = .334 (two-tailed).
Interpretation
Recently published RCTs on nutrition and obesity appearing in top-tier journals seem to be of equivalent reporting quality regardless of funding source. This may be a result of recent reporting guideline statements and the efforts of journal editors to raise all papers to a common standard.
The scientific process, the value of the information it generates, and the ability to properly assess that value all depend fundamentally on the ability of ‘consumers’ (e.g., clinicians, patients, policy-makers and scientists) of research to fully understand the procedures used to generate the reported data and results. This is as true for controlled trials in nutrition and obesity as it is for other scientific reports. With this in mind, guidelines for reporting of trials, especially randomized controlled trials (RCTs), have been developed and disseminated (1), and subsequent reporting quality seems to have improved (2;3).
Nonetheless, concerns have been raised that financial ties between researchers and for-profit entities (industry) may lead to biases in research reporting (4). One way biases may enter the research record is through intentional or inadvertent omissions, or unclear or misleading statements, in research reporting (5–8). It is therefore important to assess both the quality of research reporting in general and whether reporting quality differs between industry-funded and non-industry-funded research. This type of examination promotes confidence in the conclusions reported.
We previously assessed the quality of research reporting in obesity RCTs and the extent to which such reporting quality varied by funding source (9). We found that, rather than industry-funded research being reported with lower quality, there was some evidence that industry funding was associated with higher quality reporting. However, we offered some caution about those findings because the RCT reports evaluated were all published prior to 2004, and industry-funded and non-industry-funded studies were not adequately matched on some variables. Therefore, in this study, we set out to provide a more focused and updated analysis of this question by collecting a matched sample of paired industry-funded and non-industry-funded obesity and nutrition RCTs and evaluating their relative reporting quality.
Sample Size and Power
Based on the observed means and standard deviations in our earlier paper with a similar focus on reporting quality (9), and conservatively estimating the correlation among paired observations in this study, 19 pairs of papers would achieve a power of .895 at the two-tailed, .05 alpha level.
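To make the calculation concrete, the sketch below shows how such a paired-design power estimate could be reproduced in Python. The summary statistics and the assumed correlation are placeholder values, not the figures from reference (9); statsmodels’ TTestPower is used because a paired comparison reduces to a one-sample t-test on the within-pair differences.

```python
# Hypothetical sketch of a paired-design power calculation (placeholder inputs).
from math import sqrt
from statsmodels.stats.power import TTestPower

# Placeholder summary statistics (NOT the values from the earlier study).
m1, sd1 = 84.0, 8.0    # assumed mean/SD, industry-funded group
m2, sd2 = 78.0, 12.0   # assumed mean/SD, non-industry-funded group
r = 0.5                # conservatively assumed correlation between paired papers

# Standard deviation of the within-pair differences, then the paired effect size d_z.
sd_diff = sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)
d_z = (m1 - m2) / sd_diff

# Power of a paired (one-sample-on-differences) t-test with 19 pairs, two-tailed alpha = .05.
power = TTestPower().power(effect_size=d_z, nobs=19, alpha=0.05, alternative='two-sided')
print(f"d_z = {d_z:.2f}, power with 19 pairs = {power:.3f}")
```

Solving the same model for sample size instead (TTestPower().solve_power with power = .895) would return the number of pairs required under whatever summary statistics and correlation are assumed.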
Literature Searched
Thirty-eight obesity or nutrition intervention RCT articles published between 2000 and 2007 that met the inclusion criteria were obtained from high-profile journals (Lancet, Annals of Internal Medicine, JAMA, and British Medical Journal). The New England Journal of Medicine was also searched, but too few papers matching the inclusion criteria were found. The inclusion criteria were that the article reported an RCT of an obesity intervention or nutritional supplement and that paired papers were available from the same journal in the same publication year, one reporting industry funding and the other not. We coded studies as having non-industry funding if no funding source was reported, even though they may have disclosed that some portion of the protocol supplies (e.g., supplement tablets) had been donated, as was the case with two papers. Papers reporting private foundation or governmental funding were coded as non-industry funded. In the case of mixed funding (four studies reported both industry monetary support and non-industry support), we coded those studies as industry funded.
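For illustration only, the coding rules above can be expressed as a small decision function; classify_funding and its arguments are hypothetical names introduced here, not part of the study’s actual data handling.

```python
# Hypothetical sketch of the funding-source coding rules described above.

def classify_funding(reported_sources, supplies_donated_by_industry=False):
    """Return 'industry' or 'non-industry' for a paper's reported funding.

    reported_sources: funding sources named in the acknowledgments,
    e.g. ['government'], ['foundation', 'industry'], or [] if none reported.
    supplies_donated_by_industry: True if only protocol supplies (e.g.,
    supplement tablets) were donated; this alone does not count as funding.
    """
    if 'industry' in reported_sources:
        # Mixed funding (industry plus foundation/government) is coded as industry funded.
        return 'industry'
    # No reported funding, or foundation/government funding only, is coded as
    # non-industry funded, even if supplies were donated.
    return 'non-industry'

# Examples mirroring the cases described in the text.
print(classify_funding([]))                                     # non-industry (no source reported)
print(classify_funding([], supplies_donated_by_industry=True))  # non-industry (donated supplies only)
print(classify_funding(['foundation', 'industry']))             # industry (mixed funding)
```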
Rating Procedure
For blinded adjudication, the following identifying information was redacted from each paper: journal, title, authors, funding source, and institution(s). Three raters (KF, SG, LT) then independently rated each paper according to the Chalmers Method (10). The modal score for each item was retained as the final score when at least two of the three raters were in agreement. In instances where at least two of the three raters did not agree on a score for the same item on the same paper, a second examination and resolution of scores was performed by a committee of three raters (AD, AK, GN) blinded to the prior raters’ scores.
Final ratings were weighted by adapting the Chalmers (10) point scheme. Because individual item choices were not assigned points in the original Chalmers paper, we assigned points (e.g., Yes = 3 points, Partial = 1.5 points, No = 0 points, and N/A = 3 points for an item assigned a maximum of 3 possible points per Chalmers) as agreed upon by three of the authors (KK, OA, SC). We awarded points according to the quality of reporting of the trial information, not the quality of the study design itself. Items rated as ‘not applicable’ (N/A) were given the maximum possible points for that rating item. We excluded two items specific to survival analysis reporting, as this type of analysis is not typical in the obesity or nutrition supplement trials that are the focus of the present study. Our resulting weighting scheme, like the original Chalmers scheme, provides a maximum of 100 points: 60 points are allocated to the study protocol, 30 points to the statistical analysis, and 10 points to the presentation of results (10). See the appendix for a list of included manuscript pairs and the detailed scoring scheme.
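A minimal sketch of the consensus and weighting logic is shown below, assuming a simple item structure; the item list, rating labels, and maximum point values are illustrative, not the actual Chalmers items or the study’s scoring code.

```python
# Hypothetical sketch of the rating-consensus and point-weighting scheme described above.
from collections import Counter
from typing import Optional

# Points awarded per rating, as a fraction of an item's maximum possible points.
RATING_WEIGHTS = {'Yes': 1.0, 'Partial': 0.5, 'No': 0.0, 'N/A': 1.0}

def consensus_rating(ratings) -> Optional[str]:
    """Return the modal rating if at least two of three raters agree, else None
    (None signals that the item goes to the blinded adjudication committee)."""
    label, count = Counter(ratings).most_common(1)[0]
    return label if count >= 2 else None

def item_score(rating: str, max_points: float) -> float:
    """Convert a consensus rating into points, e.g. Yes = 3, Partial = 1.5,
    No = 0, N/A = 3 for an item worth a maximum of 3 points."""
    return RATING_WEIGHTS[rating] * max_points

# Illustrative use: two hypothetical items for one paper.
paper_items = [
    {'ratings': ['Yes', 'Yes', 'Partial'], 'max_points': 3.0},
    {'ratings': ['No', 'Partial', 'Yes'], 'max_points': 5.0},  # no majority -> adjudicate
]
total = 0.0
for item in paper_items:
    label = consensus_rating(item['ratings'])
    if label is None:
        continue  # would be resolved by the second committee in the actual procedure
    total += item_score(label, item['max_points'])
print(f"Provisional total quality score: {total}")
```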
Statistical Analysis
Each paper was coded by funding source prior to consolidation of the raters’ scores. Inter-rater reliability was assessed using Cronbach’s alpha (11). Total quality scores were calculated by a researcher (KK) blinded to the raters’ identities and to the funding source of each paper. The distribution of the scores was examined using the Shapiro-Wilk test (12) to determine the appropriate comparison statistic, parametric (paired t-test) or non-parametric. A Wilcoxon matched-pairs signed-ranks test was performed for the paired comparison of total quality scores, and the distribution of intervention types between funding categories was also compared. Statistical significance was assessed at alpha = .05.
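The analysis steps can be sketched as follows with simulated scores; this is an illustrative reconstruction, not the study’s analysis script, and the cronbach_alpha helper implements the standard textbook formula rather than a named library routine.

```python
# Illustrative sketch of the analysis pipeline (simulated data, not the study's scores).
import numpy as np
from scipy.stats import shapiro, wilcoxon

rng = np.random.default_rng(0)

# Simulated total quality scores for 19 matched pairs (placeholder values).
industry = rng.normal(84, 7, size=19)
non_industry = rng.normal(79, 13, size=19)

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an observations-by-raters matrix:
    alpha = k/(k-1) * (1 - sum of rater variances / variance of summed scores)."""
    k = ratings.shape[1]
    rater_vars = ratings.var(axis=0, ddof=1)
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)

# Simulated scores from 3 raters on 38 papers (placeholder), treated as "items".
rater_scores = rng.normal(80, 10, size=(38, 1)) + rng.normal(0, 3, size=(38, 3))
print(f"Cronbach's alpha across raters: {cronbach_alpha(rater_scores):.2f}")

# Normality check on one group's total scores (Shapiro-Wilk).
w_stat, w_p = shapiro(non_industry)
print(f"Shapiro-Wilk for non-industry scores: W = {w_stat:.3f}, p = {w_p:.3f}")

# Non-parametric paired comparison of the matched total scores (Wilcoxon signed-rank).
stat, p_val = wilcoxon(industry, non_industry)
print(f"Wilcoxon signed-rank: statistic = {stat:.1f}, p = {p_val:.3f}")
```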
Of the 38 papers meeting the inclusion criteria (19 in each funding category), 13 were dietary interventions, 5 were drug studies, 2 were exercise studies, 3 were of mixed interventions, and 15 involved dietary supplements. There was no significant difference in the distribution of study types between funding categories, p = .568, two-tailed. Inter-rater reliability, assessed using Cronbach’s alpha (11) (average inter-item correlation), was 0.82 (95% C.I. = 0.80 – 0.84). Only 170 items out of a possible 4712 (3.6%) required adjudication (i.e., because at least two of the three raters did not agree) in the preliminary round of rating. The distribution of total quality scores for non-industry-funded studies exhibited significant skewness (Shapiro-Wilk statistic = .869, p = .014); therefore, the non-parametric Wilcoxon signed-ranks test was performed. Results indicated no significant difference in paired total quality rankings between the two types of studies (industry funded compared with non-industry funded) for the total score; see Table 1. Additionally, there was no significant difference between industry-funded and non-industry-funded reports for the sub-categories that make up the total score: study protocol, statistical analysis, and presentation of results.
Table 1. Descriptive and test statistics for total and subscale scores for each funding category.
In this study we found that, in a sample of top-tier general medical journals, recently published nutrition and obesity trials are reported at a similarly high level of quality across funding categories. Reporting quality in the areas of study protocol, statistical analysis, and presentation of results was also similar. One limitation of our study is that we categorized funding source based on what was reported in the manuscript acknowledgments section; it is therefore possible that some studies’ funding category assignments were incorrect due to incompletely reported information. Also, our assignment system did not include a third category for mixed funding, but this applied to only four of the 38 studies we examined (10%).
Another possible limitation is that we used a single method for assessing reporting quality. Many methods are available in the literature for assessing the quality of research reporting, but there is no consensus as to which is best and none has emerged as the clearly preferred method. Indeed, the Agency for Healthcare Research and Quality (AHRQ) evaluated many scoring systems and identified nineteen generic systems that they concluded fully address their key domains of quality, labeling these as “generally informative” (13). The Chalmers method (10) we used is among those cited by the AHRQ, although, as noted above, we made minor modifications based on the specific types of studies we examined. The paired-samples approach is commonly used and has been supported for comparing groups of studies (14). Additionally, we compared the papers on each Chalmers Index subcategory (10) and found no significant difference between funding categories in any subcategory.
In the earlier analysis, which examined obesity trial studies published prior to 2004 (9), the method used was the CONSORT reporting criteria as described by Thabane et al. (15). This could be one explanation for why our present results differ from those reported by Thomas et al. (9), which indicated that reports of industry-funded studies were of statistically significantly higher reporting quality. Another possible explanation is that the more widespread recent use of the CONSORT guidelines (1) by researchers and journal editors is having the desired effect of improving the reporting quality of all types of studies. Because our present sample of papers was published in what are generally considered “top-tier” general medical journals, diligence by authors and editorial staff in ensuring that reporting meets high standards could have been a key factor in the uniform quality we observed across funding categories.
Future studies of this type might benefit from comparing studies using two or more quality-rating methods best suited to the type of study being evaluated. While there is currently no accepted superior method for assessing reporting quality, cross-comparisons between methods may illuminate their respective strengths and weaknesses and may lead to the development of better rating systems. Because our question pertained to potential bias related to funding source, we opted for a system that allowed us to assess potential key omissions within the limits of the published record. Other future studies that would help resolve perceptions about the influence of funding sources and publication bias might examine the distribution of funding sources across a wide range of journals, to assess whether “top-tier” journals lean toward publishing studies with one type of funding versus another compared with less widely read journals. Our focus on selected, matched, “top-tier” journal articles did not address funding-type frequencies within and across these journals in general.
The importance of assessing the quality of reported science extends beyond the obesity literature to important issues of public safety and policy making, as in the case of bisphenol A (16). One source of bias that will likely always be part of the scientific dialogue is confirmation bias, the tendency to seek or selectively attend to information that supports current attitudes or beliefs (17;18). Recent work on the mechanisms of this common bias reveals that the “congeniality” of new information (how much it supports existing beliefs) affects the preferences of the receiver of the information, but that this bias is reduced when the information is of higher quality (19). Efforts to increase the overall quality of the scientific literature can be advanced through activities that support high-quality reporting and create transparency (4). Research funding is beneficial and necessary, and scientists should remain vigilant against bias of all types, including that which attempts to discredit information based on funding source.
Supplementary Material
Appendix
Acknowledgments
Funding: Supported in part by NIH grants R01DK078826, P30DK056336, and T32HL007457. The opinions expressed are those of the authors and not necessarily the NIH or any organization with which the authors are affiliated.
Footnotes
Disclosure: Dr. Allison has received grants, honoraria, donations, royalties, and consulting fees from numerous publishers, food, beverage, pharmaceutical companies, and other commercial and nonprofit entities with interests in obesity and randomized controlled trials.
Authors’ contributions: David Allison - conceived the project, supervised its execution, designed the analysis plan, and contributed to the drafting and editing of the manuscript. Kathryn Kaiser - managed the resolution of the coding, performed the analysis, and contributed to the writing and editing of the manuscript. Stacey Cofield - assisted with selection of the coding system, supervised data collection, and assisted with the coding scheme and analysis plan. Olivia Affuso - assisted with the coding scheme and analysis. Mark Beasley - assisted with the analysis and editing of the manuscript. All others - scored papers, entered data, and provided comments on the manuscript.
Conflict of interest: The authors declare no conflict of interest.
1. Moher D. CONSORT: an evolving tool to help improve the quality of reports of randomized controlled trials. Consolidated Standards of Reporting Trials. JAMA. 1998;279(18):1489–1491.
2. Mills EJ, Wu P, Gagnier J, Devereaux PJ. The quality of randomized trial reporting in leading medical journals since the revised CONSORT statement. Contemp Clin Trials. 2005;26(4):480–487.
3. Moher D, Jones A, Lepage L. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. JAMA. 2001;285(15):1992–1995.
4. Allison DB. The antidote to bias in research. Science. 2009;326(5952):522–523.
5. Allison DB, Cope MB. Randomized controlled trials with statistically nonsignificant results. JAMA. 2010;304(9):965.
6. Cope MB, Allison DB. White hat bias: a threat to the integrity of scientific reporting. Acta Paediatr. 2010;99(11):1615–1617.
7. Cope MB, Allison DB. White hat bias: examples of its presence in obesity research and a call for renewed commitment to faithfulness in research reporting. Int J Obes. 2010;34(1):84–88.
8. Rennie D. CONSORT revised--improving the reporting of randomized trials. JAMA. 2001;285(15):2006–2007.
9. Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–1536.
10. Chalmers TC, Smith H Jr, Blackburn B, Silverman B, Schroeder B, Reitman D, et al. A method for assessing the quality of a randomized control trial. Control Clin Trials. 1981;2(1):31–49.
11. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
12. Shapiro SS, Wilk MB. An analysis of variance test for normality (complete samples). Biometrika. 1965;52:591–611.
13. Agency for Healthcare Research and Quality. Systems to rate the strength of scientific evidence. Evidence Report No. 47 (Publication No. 02-E016). Washington, DC: U.S. Department of Health and Human Services; 2002. pp. 1–11.
14. Detsky AS, Naylor CD, O'Rourke K, McGeer AJ, L'Abbe KA. Incorporating variations in the quality of individual randomized trials into meta-analysis. J Clin Epidemiol. 1992;45(3):255–265.
15. Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–1559.
16. Myers JP, vom Saal FS, Akingbemi BT, Arizono K, Belcher S, Colborn T, et al. Why public health agencies cannot depend on good laboratory practices as a criterion for selecting data: the case of bisphenol A. Environ Health Perspect. 2009;117(3):309–315.
17. Klayman J, Ha Y-W. Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review. 1987;94(2):211–228.
18. Klaczynski PA, Narasimham G. Development of scientific reasoning biases: cognitive versus ego-protective explanations. Developmental Psychology. 1998;34(1):175–187.
19. Hart W, Albarracin D, Eagly AH, Brechan I, Lindberg MJ, Merrill L. Feeling validated versus being correct: a meta-analysis of selective exposure to information. Psychological Bulletin. 2009;135(4):555–588.