CALAEGS intakes electronic laboratory data and provides grading results through a web-based user interface, web services, and/or a Java API (application programming interface). The user interface allows institutions to customize the system to their specific data source formats and coding. The system is installed behind an institution's firewall to avoid confidentiality issues. Laboratory data can be submitted as comma-separated values, Extensible Markup Language (XML), or Health Level Seven (HL7) version 3 messages. Grading results are returned in a machine-readable format compatible with the original input format, and as a human-consumable flowsheet rendered as a Portable Document Format (PDF) file (see figure 2).
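Because the comma-separated input schema is configured per institution, the following is only a minimal sketch of reading one delimited lab-result record; the column layout (patient ID, test code, value, UCUM unit, collection date) and the class name are hypothetical illustrations, not CALAEGS's actual format or API.

```java
// Hypothetical sketch: CALAEGS's CSV schema is institution-configured,
// so this column order and class are illustrative only.
public class LabResultCsvSketch {
    // Assumed record layout: patientId, testCode, value, unit, collectionDate
    static String[] parse(String csvLine) {
        return csvLine.split(",", -1); // -1 keeps empty trailing fields
    }

    public static void main(String[] args) {
        String row = "PT-001,HGB,9.4,g/dL,2008-03-15";
        String[] fields = parse(row);
        System.out.println("test=" + fields[1]
                + " value=" + fields[2] + " " + fields[3]);
    }
}
```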
Figure 2 CALAEGS screenshots showing the entry screen for assessing a single laboratory result, for example, from an outside laboratory with no electronic file available (left), and the flowsheet generated to grade multiple laboratory-based adverse events (AEs) (right).
CALAEGS incorporates national standards such as the Biomedical Research Information Domain Group (BRIDG) model18 and Unified Code for Units of Measure (UCUM),19 and is certified as bronze-level compatible with NCI's Cancer Biomedical Informatics Grid (caBIG®).
It runs on Java 1.5+ in a J2EE web container (eg, Tomcat 5.0+ or JBoss 4.0.5+) and requires a MySQL 5.0+ database.
CALAEGS assesses 39 laboratory-based AE terms based on NCI CTCAE version 3.09 (refer to ). The grading algorithms were thoroughly tested through unit, integration, system, and regression testing. Test cases covered a range of conditions, including grade boundaries, simple and complex assessments, and fail conditions. CALAEGS assessments are considered preliminary only, because some laboratory-based AE grades also depend on human judgment, such as knowledge of additional patient conditions (eg, concomitant life-threatening consequences).
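As an illustration of the grade-boundary conditions exercised in testing, the sketch below grades a hemoglobin value against the CTCAE v3.0 hemoglobin thresholds (grade 1: <LLN–10.0 g/dL; grade 2: <10.0–8.0 g/dL; grade 3: <8.0–6.5 g/dL; grade 4: <6.5 g/dL). The method name and the lower-limit-of-normal (LLN) parameter are illustrative assumptions, not the CALAEGS API.

```java
// Illustrative sketch only: grades hemoglobin (g/dL) against CTCAE v3.0
// "Hemoglobin" thresholds; method and parameter names are not CALAEGS's API.
public class HemoglobinGradeSketch {
    /** Returns CTCAE grade 0-4 given the lab's lower limit of normal (LLN). */
    static int grade(double gPerDl, double lln) {
        if (gPerDl >= lln) return 0;   // within normal limits
        if (gPerDl >= 10.0) return 1;  // <LLN - 10.0 g/dL
        if (gPerDl >= 8.0) return 2;   // <10.0 - 8.0 g/dL
        if (gPerDl >= 6.5) return 3;   // <8.0 - 6.5 g/dL
        return 4;                      // <6.5 g/dL
    }

    public static void main(String[] args) {
        double lln = 12.0; // assumed institutional LLN for this example
        // Boundary checks of the kind the test plan describes:
        System.out.println(grade(12.0, lln)); // 0: at LLN
        System.out.println(grade(10.0, lln)); // 1: boundary of grades 1/2
        System.out.println(grade(8.0, lln));  // 2: boundary of grades 2/3
        System.out.println(grade(6.4, lln));  // 4: below grade 4 threshold
    }
}
```

Boundary values such as exactly 10.0 or 8.0 g/dL fall into the higher (less severe) grade under the "<" convention of the CTCAE ranges, which is exactly the sort of edge case the boundary testing targets.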
Comparison of laboratory-based adverse events (AEs) detected by manual versus automated grading method by AE term for 643 true AEs*
In a paired retrospective study design, we compared the accuracy and completeness of AE data graded manually, prior to the availability of the automated tool, with results reassessed via CALAEGS. We evaluated 10 sequential in-house therapeutic trials of varying size, diagnoses, and phase, from the time frame just prior to implementing our automated grading service, to minimize confounding factors (eg, CRA expertise). These 10 trials encompassed 40 patients and 18 603 laboratory results.
Protocols for comparing manual versus automated laboratory-based adverse event (AE) grading
The 18 603 laboratory results were read into CALAEGS, and the automated results were compared with manually graded results recorded in our clinical trials system. Discrepancies were categorized as missed AEs (true AEs that were not identified) or misgraded AEs (AEs with an incorrect numeric grade or direction, ie, hypo- vs hyper-). All discordant results were reviewed by our QA experts to verify that each suspected discrepancy was a true error, eliminating any protocol-specific exceptions (eg, a study that requires recording only the highest grade per course).
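The discrepancy taxonomy above can be sketched as a simple classification; the enum, method, and field names below are illustrative assumptions, not part of CALAEGS or the study's actual review tooling.

```java
// Illustrative sketch of the missed/misgraded taxonomy; names are hypothetical.
public class DiscrepancySketch {
    enum Category { CONCORDANT, MISSED, MISGRADED }

    /** manualGrade == null models a true AE that was never recorded manually. */
    static Category classify(Integer manualGrade, String manualDirection,
                             int autoGrade, String autoDirection) {
        if (manualGrade == null) return Category.MISSED;      // AE not identified
        if (manualGrade != autoGrade                          // wrong numeric grade
                || !manualDirection.equals(autoDirection))    // or hypo- vs hyper-
            return Category.MISGRADED;
        return Category.CONCORDANT;
    }

    public static void main(String[] args) {
        System.out.println(classify(null, null, 3, "hyper")); // MISSED
        System.out.println(classify(2, "hypo", 3, "hypo"));   // MISGRADED
        System.out.println(classify(3, "hyper", 3, "hyper")); // CONCORDANT
    }
}
```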
To quantify AE grading efficiency, we conducted a prospective paired evaluation comparing the time required for manual versus automated AE grading. In timed sessions, four CRAs each graded five patients from their current protocol portfolios, first manually and then 2–4 weeks later using the CALAEGS tool, yielding 20 paired assessments. The assessment sequence was fixed (manual followed by automated) because, if CALAEGS were run first, familiarity with the resulting AEs might have increased CRA efficiency when re-grading the AEs manually.
A protocol specifying the design and regulatory processes for this evaluation was approved by the COH Institutional Review Board. The protocol stipulated that the Principal Investigator and biostatistician for studies evaluated were to be notified of any grading discrepancies identified; if any serious consequences were identified, the IRB and appropriate regulatory agencies would be notified as well. Analyses were conducted using SAS software version 9.1 (SAS Institute).