CRF standards can be conceptualized at several levels: form, group, section, and item. We summarize the areas of agreement and dispute at each level, and also consider aspects of CRF-design processes that impact consistent research data collection.
Little consensus exists on the choice and content of CRF standardization candidates. Few CRFs can be reused unchanged across all protocols. Even for seemingly common activities such as the physical exam and medical history, structured data capture—the explicit recording of findings—varies greatly by disease and protocol. For example, gastrointestinal bleeding or hepatic encephalopathy is recorded explicitly in a cirrhosis study, but not in a schizophrenia study.
Within a tightly defined disease domain, standard CRFs seem feasible and useful, though their content may change with future variations in study designs. For example, the venerable Hamilton Depression Rating Scale originated in 1960 as a 17-item questionnaire.34
Later, some researchers created different subsets, while others incorporated additional questions.35
Many proposed ‘standard’ CRFs may well meet a similar fate. Long-term content stability may be one measure of CRF-standard success.
The segregation of data items relevant to a research protocol into individual CRFs is often based on considerations other than logical grouping, and may vary with the study design. For example, in a one-time survey, one may well designate a single CRF to capture all items if these are not too numerous. In a longitudinal study, however, items recorded only once at the start of the study are placed in a CRF separate from items that are sampled repeatedly over multiple visits.
One concern about ‘standard’ CRF use is that users should not be pressured to collect parameters defined within the CRF that are not directly related to a given protocol's research objectives: such collection costs resources and violates Good Clinical Practice guidelines.36
Even instructing research staff to ignore specific parameters constitutes unnecessary information overload: presenting extraneous parameters onscreen is poor interface design. Dynamic CRF-rendering offers one way out of this dilemma: protocol-specific CRF customization allows individual investigators to specify, at design time, the subset of parameters that they consider relevant. Web-application software can read the customization metadata and render only applicable items.
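The dynamic-rendering approach above can be sketched briefly. The following is a minimal illustration, not an implementation from any particular EDC system; all names (`StandardItem`, `render_form`, the `VS.*` identifiers, the vital-signs items) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StandardItem:
    item_id: str   # stable identifier within the 'standard' CRF (hypothetical)
    prompt: str    # question text shown to research staff

# A 'standard' vital-signs CRF defining more parameters than any one study needs.
STANDARD_VITALS = [
    StandardItem("VS.SYSBP",  "Systolic blood pressure (mm Hg)"),
    StandardItem("VS.DIABP",  "Diastolic blood pressure (mm Hg)"),
    StandardItem("VS.HR",     "Heart rate (beats/min)"),
    StandardItem("VS.TEMP",   "Body temperature (deg C)"),
    StandardItem("VS.WEIGHT", "Weight (kg)"),
]

def render_form(standard_items, protocol_subset):
    """Return only the items the protocol's design-time customization
    metadata marked as relevant, preserving the standard CRF's order."""
    return [item for item in standard_items if item.item_id in protocol_subset]

# Design-time customization metadata for one hypothetical protocol:
protocol_items = {"VS.SYSBP", "VS.DIABP", "VS.HR"}

for item in render_form(STANDARD_VITALS, protocol_items):
    print(item.item_id, "-", item.prompt)
```

Only the three protocol-relevant items reach the screen; the extraneous parameters never appear, addressing the information-overload concern without altering the underlying standard CRF.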
A group is a set of semantically closely related parameters. For example, a Concomitant Medications group would include the medication name; how it was recorded (eg, generic or brand name); dosage details—numeric value, units, frequency, and duration; start and end dates; whether this was a continuation of previous therapy; therapeutic indications; and possibly compliance information.
Other parameter groupings, such as the components of a differential white-blood-cell count or a liver-function panel, occur naturally in medicine. Typically, a group is associated with a single time-stamp that records when the event (eg, a blood draw) related to its parameters occurred, or two time-stamps to record the start and end of events that have a duration (eg, a course of radiotherapy).
Explicit associations between related parameters within the group include skip logic and expressions for calculated elements. Both LOINC and PhenX standards consider groups (‘panels’) as a series of observations. OpenEHR archetypes can also be used as section building-blocks.
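The two kinds of intra-group association mentioned above, skip logic and calculated elements, can be sketched concretely. This is an illustrative fragment for a hypothetical Concomitant Medications group, not drawn from any named standard; the field names are assumptions.

```python
from datetime import date

def applicable_fields(record):
    """Skip logic: the end-date field applies only when the medication
    is not flagged as ongoing (field names are hypothetical)."""
    fields = ["med_name", "dose", "units", "start_date", "ongoing"]
    if not record.get("ongoing"):
        fields.append("end_date")
    return fields

def duration_days(record):
    """Calculated element: therapy duration derived from the group's two
    time-stamps; undefined while the medication is ongoing."""
    if record.get("ongoing") or "end_date" not in record:
        return None
    return (record["end_date"] - record["start_date"]).days

# One record within the group, with both time-stamps present:
rec = {"med_name": "metformin", "dose": 500, "units": "mg",
       "start_date": date(2024, 1, 1), "end_date": date(2024, 1, 31),
       "ongoing": False}
```

Representing such associations as metadata rather than hard-coded form logic is what allows the same group definition to be reused across CRFs.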
A section encompasses one or more groups. The division of CRFs into sections is often arbitrary. In paper-based data capture, CRFs consisting of a single, giant section are not unknown. For example, the 1989 revision of the Minnesota Multiphasic Personality Inventory for psychiatric assessment has 567 questions. In real-time EDC, by contrast, subdivision into smaller sections is generally preferred, allowing (or requiring) the subject to save data changes before moving to another section. This minimizes the risks of inadvertent data loss due to failure to save, timeouts, or service interruption. Section size is often determined by the number of items that can be presented on a single desktop-computer screen.
The requirement for CRF-content flexibility to accommodate disease and protocol variations also affects the constituent sections and groups, and it is doubtful whether section names/captions should be standardized. The wording of section headings, and of the explanatory text describing a section's purpose, is, we believe, best left to individual investigators.
Standardization of items is non-controversial: it is the linchpin of semantic interoperability. Survey design and measurement theory provide well-accepted best practices for designing good items, such as mutually exclusive and exhaustive answer choices,37
non-leading question text,7
and consistency of scale escalation in answer sets.6
A review of the literature, including the CDASH recommendations, gives useful general guidance on constructing yes/no questions, scale direction, date/time formats, scope of CRF data collection, prepopulated data, and collection of calculated or derived data.5–9
All the standards discussed earlier emphasize use of narrative definitions for items. Such definitions need to be made maximally granular—that is, divided into separate fields—because different parts of the definition such as explanatory text, scripts, instructions, and context of use serve different purposes.
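A granular item definition of the kind described above might be modeled as follows. This is a sketch only; the field names are our own illustrative assumptions, not taken from any of the standards discussed.

```python
from dataclasses import dataclass

@dataclass
class ItemDefinition:
    """Each part of an item's narrative definition in its own field,
    because each serves a different purpose (field names hypothetical)."""
    explanatory_text: str   # what the item means, for data consumers
    script: str             # exact wording read or shown to the subject
    instructions: str       # guidance for research staff completing the CRF
    context_of_use: str     # when and where the item applies

pain = ItemDefinition(
    explanatory_text="Subject-reported pain intensity on a 0-10 scale.",
    script="On a scale of 0 to 10, how would you rate your pain right now?",
    instructions="Record the integer stated; do not average a stated range.",
    context_of_use="Administered at every scheduled visit.",
)
```

Keeping the parts separate lets, for example, a rendering engine display only the script and instructions, while a data dictionary exports only the explanatory text.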
Certain items (especially questionnaire-based ones) have a discrete set of permissible values (also called ‘responses’ or ‘answers’). The set elements may be unordered (eg, ‘Yes, No, Don't Know’) or ordered (eg, severity grades such as ‘Absent, Mild, Moderate, Severe’ or Likert scales). One must record whether an enumeration is unordered or ordered, because this determines how data based on the item can be queried. Thus, one can ask for patients who had a severity greater than or equal to ‘Moderate,’ but data based on unordered enumerations can only be compared for equality or inequality to a value.
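The querying distinction can be made concrete with a small sketch. The enumeration values are taken from the examples above; the function and variable names are illustrative assumptions.

```python
# Ordered enumeration: position in the list encodes rank.
SEVERITY = ["Absent", "Mild", "Moderate", "Severe"]
# Unordered enumeration: a bare set, with no meaningful rank.
ANSWERS = {"Yes", "No", "Don't Know"}

def severity_at_least(value, threshold):
    """Order-based comparison, valid only because SEVERITY is ordered."""
    return SEVERITY.index(value) >= SEVERITY.index(threshold)

patients = {"p1": "Mild", "p2": "Severe", "p3": "Moderate"}

# Query supported by the ordered enumeration: severity >= 'Moderate'.
hits = [p for p, v in patients.items() if severity_at_least(v, "Moderate")]

# Unordered values support only equality/inequality tests, eg:
knows = "Don't Know" in ANSWERS
```

Recording the ordered/unordered distinction as item metadata is what allows a query engine to offer range comparisons for severity grades while restricting ‘Yes/No/Don't Know’ items to equality tests.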
CRF development process
The notion of process as vital to quality metrics and outcomes is reinforced through standards such as ISO 900038
and the health-outcomes research literature.40–42
While CRF content is necessarily variable, consensus on explicit processes for identifying or developing quality data elements is more readily reached.
The CDASH standards document, ‘Recommended Methodologies for Creating Data Collection Instruments,’ presents important and necessary features of the CRF development process. The techniques described include: adequate and ‘cross-functional’ team review, version control, and documented procedures for design, training, and form updates. The FDA also requires rigor in the development, validation, and use of data elements related to patient-reported outcomes as study endpoints in investigational new drug studies.43