In 2003, 5,064 manuscripts (not including letters) were submitted to JAMA. These were distributed to the editors, almost all of them physicians, who reject at least half outright without obtaining comments from peer reviewers. When manuscripts are returned from the peer reviewers (typically two or three), editors can reject them directly or bring them to an editorial meeting. Some editors bring most of their papers to the table for a decision; others bring only papers they feel have a very good chance of acceptance. Because each manuscript is assigned to a single editor for review, typically only one editor has read a manuscript before the meeting. In 2003, approximately 940 manuscripts were brought to the meeting for discussion. Under JAMA's standing rules, all discussions at the meetings are kept in strict confidence.
One of us (KD) attended 12 twice-weekly editorial meetings as a visitor at the JAMA offices in Chicago, Illinois, USA, in January and February 2003 (some meetings were missed by KD because of scheduling conflicts), and took notes on the discussion surrounding 102 manuscripts (two related manuscripts were discussed as one, and so we counted them as a single manuscript). Her notes were not verbatim transcripts of the meetings' discussions. The meeting attendees varied somewhat from meeting to meeting but typically comprised about eight in-house editors, including editors with content, managing, and statistical responsibilities, and the Editor-in-Chief. Other editors (including DR, who attended 2 of the 12 meetings in person and 2 by phone) attended by teleconference if a manuscript for which they were responsible was being discussed. Editors volunteered one by one, in no particular order, to discuss the manuscripts for which they were responsible. Anecdotal reports from editors attending the manuscript meetings suggest that the meetings attended as part of this study were representative of other meetings.
The discussion of each manuscript began with a description of the paper topic and study characteristics, with details added as necessary. The presentation progressed to comments made by the peer reviewers. The vast majority of presented manuscripts had completed at least one round of peer review.
The note-taker (KD) recorded words and phrases spoken by the editors in the context of each manuscript discussed, the time each discussion took, and the comments and publication recommendation of each of the peer reviewers. In addition, at the end of discussion about each manuscript, editors attending the meeting completed a form on which they listed the reasons they were "inclined to proceed to the next step towards review and/or acceptance" and reasons they were not. The editors were asked not to record their names on these forms. Forms were not collected at meetings KD did not attend.
ES and CM extracted the phrases from KD's notes and from the completed forms and entered them into NVivo 2.0 qualitative analysis software (Qualitative Solutions and Research Pty. Ltd, Australia). Each manuscript was considered a separate document with an array of attributes such as date of discussion, number of positive and negative reviews by outside peer reviewers, time taken for discussion, its categorization by the editors as describing research or not, and final disposition of the manuscript.
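The per-manuscript document and its attributes, as entered into the qualitative analysis software, might be modeled as follows (a minimal illustrative sketch; the field names are ours, not NVivo's, and the attribute list mirrors only those named above):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ManuscriptDocument:
    """One document per manuscript, carrying the attributes described
    in the text. All names here are illustrative assumptions."""
    discussion_date: date       # date the manuscript was discussed
    positive_reviews: int       # favorable outside peer reviews
    negative_reviews: int       # unfavorable outside peer reviews
    discussion_minutes: float   # time taken for the discussion
    is_research: bool           # editors' categorization: research or not
    final_disposition: str      # e.g. accepted, rejected, revision requested
    phrases: list[str] = field(default_factory=list)  # extracted phrases
```

A structure of this kind makes it straightforward to tabulate phrases against manuscript attributes such as disposition or number of negative reviews.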
We next used an iterative process to develop a classification schema for the 2,463 spoken and written phrases. We considered several possible themes, including the JAMA objectives [see Table ], general journalistic goals, and ad hoc schema suggested by the phrases recorded.
JAMA's Key and Critical Objectives
For the first draft of the schema, CM and KD reviewed the phrases and documents from the editors' discussions and assigned them to 20 categories related to medical editorial decision-making and publication bias (as defined above). This schema was reviewed by two independent epidemiologists. Each performed an independent review followed by a group discussion with KD. We modified the schema further, categorizing phrases by whether they were related to science (eg, likelihood of bias relating to the study design), editorial beliefs or values (eg, likely interpretation by the public), or manuscript features (eg, short, well written). Using NVivo, CM re-sorted most of the 20 categories into the new schema, retiring some categories and merging others. In a one-hour meeting, CM and KD presented the revised schema to two independent social scientists, who suggested additional refinements.
Finally, we revised the concept of editorial beliefs and values to encompass what we called general journalistic goals, and developed a separate construct within the schema. We considered journalism in medicine to encompass a broad mission that includes educational, public health, and strategic goals such as timeliness, serving the readers' interests, presenting important medical issues, and addressing controversies. In the present instance, "journalism goals" were those that spoke to the mission of JAMA and other medical journals – meaning factors and values important to medical (or clinical) journals that publish new research (such as importance to medicine, strategic emphasis for the journal, and interest to the readership).
Thus, we classified each phrase as belonging to one of three mutually exclusive categories: science, journalism, and writing. Each phrase was further classified with a subcode within its category, so that each phrase was assigned a single code. All categories include phrases that are both favorable and unfavorable, although we describe each category using mainly positive terms [see Table ]. For instance, phrases used to note exemplary ethical processes, as well as phrases used to note suspected conflict of interest, were classified as part of the "Ethics/Conflict of Interest" category.
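The single-code rule described above can be sketched as a simple lookup that enforces mutual exclusivity (the subcode names below are hypothetical examples drawn loosely from the text, not the full schema):

```python
# Illustrative sketch of the three-category, single-code classification.
# Category names come from the text; subcodes are assumed examples only.
SCHEMA = {
    "science": {"study_design", "ethics_conflict_of_interest"},
    "journalism": {"importance_to_medicine", "reader_interest", "timeliness"},
    "writing": {"length", "clarity"},
}

def assign_code(category: str, subcode: str) -> str:
    """Return the single code for a phrase; reject any pairing that
    would cross category boundaries."""
    if category not in SCHEMA:
        raise ValueError(f"unknown category: {category!r}")
    if subcode not in SCHEMA[category]:
        raise ValueError(f"subcode {subcode!r} not in category {category!r}")
    return f"{category}/{subcode}"
```

Because every subcode belongs to exactly one category, the three top-level categories remain mutually exclusive by construction.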
Classification Schema for 2,376 Written and Spoken Phrases
We exported 2,463 coded phrases into Excel 2003 (Microsoft Corp) for counts. We excluded 87 phrases (76 of which were spoken) that were coded as describing the title or topic of the manuscript, leaving 2,376 coded phrases for analysis.
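The phrase counts reconcile as a simple check of the figures above:

```python
# Reconciliation of the phrase counts reported in the text.
total_phrases = 2463
excluded_title_topic = 87   # phrases coded as title/topic of the manuscript
excluded_spoken = 76        # of those excluded, the number that were spoken

analyzed = total_phrases - excluded_title_topic
assert analyzed == 2376
assert excluded_spoken <= excluded_title_topic
```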
Because our goal was to develop new ways to assess manuscript decision-making and publication bias, and because the project could in no way influence the fate of the manuscripts being discussed, our project did not involve written informed consent. Editors received written material describing the project, and the project was discussed thoroughly at the initial manuscript meeting, before note-taking and form completion began. We consulted officials at the Johns Hopkins Bloomberg School of Public Health Committee on Human Research, who requested that we not include identifying information about the editors, peer reviewers, or authors.