Structured reporting uses consistent ordering of results and standardized terminology to improve the quality and reduce the complexity of radiology reports. We sought to define a generalized approach for radiology reporting that produces flexible outline-style reports, accommodates structured information and named reporting elements, allows reporting terms to be linked to controlled vocabularies, uses existing informatics standards, and allows structured report data to be extracted readily. We applied the Regular Language for XML–Next Generation (RELAX NG) schema language to create schemas for 110 reporting templates developed as part of the Radiological Society of North America reporting initiative. We evaluated how well this approach addressed the project’s goals. The RELAX NG schema language expressed the cardinality and hierarchical relationships of reporting concepts, and allowed reporting elements to be mapped to terms in controlled medical vocabularies, such as RadLex®, Systematized Nomenclature of Medicine Clinical Terms®, and Logical Observation Identifiers Names and Codes®. The approach provided extensibility and accommodated the addition of new features. Overall, the approach has proven to be useful and will form the basis for a supplement to the Digital Imaging and Communications in Medicine Standard.
Radiology; Structured reporting; Standards; Knowledge representation; Extensible Markup Language (XML); RELAX NG; Grammar; Regular language
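The outline-style template structure described above can be illustrated with a small RELAX NG fragment. This is a hypothetical sketch, not the actual RSNA template schema: it shows only how a named report section, element cardinality (oneOrMore), and a vocabulary code attribute can be expressed in the RELAX NG XML syntax.

```xml
<!-- Hypothetical sketch of a reporting-template section in RELAX NG;
     element names and the "code" attribute are illustrative only. -->
<element name="section" xmlns="http://relaxng.org/ns/structure/1.0"
         datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
  <!-- The section is named, e.g., "Findings" -->
  <attribute name="name"><value>Findings</value></attribute>
  <!-- Cardinality: at least one reporting element is required -->
  <oneOrMore>
    <element name="field">
      <!-- A code linking the element to a controlled vocabulary term -->
      <attribute name="code"><data type="string"/></attribute>
      <text/>
    </element>
  </oneOrMore>
</element>
```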
Radiologists are critically interested in promoting best practices in medical imaging, and to that end, they are actively developing tools that will optimize terminology and reporting practices in radiology. The RadLex® vocabulary, developed by the Radiological Society of North America (RSNA), is intended to create a unifying source for the terminology that is used to describe medical imaging. The RSNA Reporting Initiative has developed a library of reporting templates to integrate reusable knowledge, or meaning, into the clinical reporting process. This report presents the initial analysis of the intersection of these two major efforts. From 70 published radiology reporting templates, we extracted the names of 6,489 reporting elements. These terms were reviewed in conjunction with the RadLex vocabulary and classified as an exact match, a partial match, or unmatched. Of 2,509 unique terms, 1,017 terms (41%) matched exactly to RadLex terms, 660 (26%) were partial matches, and 832 reporting terms (33%) were unmatched to RadLex. There is significant overlap between the terms used in the structured reporting templates and RadLex. The unmatched terms were analyzed using the multidimensional scaling (MDS) visualization technique to reveal semantic relationships among them. The co-occurrence analysis with the MDS visualization technique provided a semantic overview of the investigated reporting terms and gave a metric to determine the strength of association among these terms.
Radiology; Structured reporting; Reporting templates; Standardized terminology; RadLex; Mapping; Visualization; Multidimensional scaling
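The exact/partial/unmatched classification described above can be sketched with a simple term matcher. This is a minimal illustration under assumed rules (exact match on the normalized term, partial match on any shared word token), not the study's actual matching procedure, and the tiny vocabulary is invented; RadLex contains tens of thousands of terms.

```python
# Minimal sketch of classifying reporting terms against a controlled
# vocabulary as exact, partial, or unmatched. Matching rules and the
# vocabulary are illustrative, not the study's actual method.

def classify_terms(report_terms, vocabulary):
    vocab = {t.lower() for t in vocabulary}
    # All individual word tokens appearing in any vocabulary term
    vocab_tokens = {tok for term in vocab for tok in term.split()}
    results = {}
    for term in report_terms:
        t = term.lower()
        if t in vocab:
            results[term] = "exact"          # whole term is in the vocabulary
        elif any(tok in vocab_tokens for tok in t.split()):
            results[term] = "partial"        # shares at least one word token
        else:
            results[term] = "unmatched"
    return results
```

For example, against a toy vocabulary of {"Pleural Effusion", "lung"}, the term "pleural effusion" is an exact match, "effusion size" a partial match, and "technologist note" unmatched.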
Imaging signs form an important part of the language of radiology, but are not represented in established lexicons. We sought to incorporate imaging signs into RSNA's RadLex® ontology of radiology terms. Names of imaging signs and their definitions were culled from books, journal articles, dictionaries, and biomedical web sites. Imaging signs were added into RadLex as subclasses of the term “imaging sign,” which was defined in RadLex as a subclass of “imaging observation.” A total of 743 unique imaging signs were added to RadLex with their 392 synonyms to yield a total of 1,135 new terms. All entries included definitions and, when appropriate, related RadLex terms such as imaging modality, anatomy, and disorder. The information will allow RadLex users to identify imaging signs by modality (e.g., ultrasound signs) and to find all signs related to specific pathophysiology. The addition of imaging signs to RadLex augments its use to index the radiology literature, create and interpret clinical radiology reports, and retrieve relevant cases and images.
Knowledge representation; Information storage and retrieval; Image retrieval; RadLex; Imaging signs; Ontology
Imaging modality can aid retrieval of medical images for clinical practice, research, and education. We evaluated whether an ensemble classifier could outperform its constituent individual classifiers in determining the modality of figures from radiology journals. Seventeen automated classifiers analyzed 77,495 images from two radiology journals. Each classifier assigned one of eight imaging modalities—computed tomography, graphic, magnetic resonance imaging, nuclear medicine, positron emission tomography, photograph, ultrasound, or radiograph—to each image based on visual and/or textual information. Three physicians determined the modality of 5,000 randomly selected images as a reference standard. A “Simple Vote” ensemble classifier assigned each image to the modality that received the greatest number of individual classifiers’ votes. A “Weighted Vote” classifier weighted each individual classifier’s vote based on performance over a training set. For each image, this classifier’s output was the imaging modality that received the greatest weighted vote score. We measured precision, recall, and F score (the harmonic mean of precision and recall) for each classifier. Individual classifiers’ F scores ranged from 0.184 to 0.892. The simple vote and weighted vote classifiers correctly assigned 4,565 images (F score, 0.913; 95% confidence interval, 0.905–0.921) and 4,672 images (F score, 0.934; 95% confidence interval, 0.927–0.941), respectively. The weighted vote classifier performed significantly better than all individual classifiers. An ensemble classifier correctly determined the imaging modality of 93% of figures in our sample. The imaging modality of figures published in radiology journals can be determined with high accuracy, which will improve systems for image retrieval.
Computer vision; Content-based image retrieval; Digital libraries; Image analysis; Image retrieval; Classification; Data mining
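The "Simple Vote" and "Weighted Vote" schemes described above can be sketched in a few lines: each image is assigned the modality receiving the highest (optionally weighted) vote total across the individual classifiers. The classifier outputs and weights below are hypothetical; the study's seventeen classifiers and their training-set weights are not reproduced here.

```python
from collections import defaultdict

def simple_vote(predictions):
    """predictions: one modality label per individual classifier."""
    tally = defaultdict(int)
    for label in predictions:
        tally[label] += 1
    return max(tally, key=tally.get)

def weighted_vote(predictions, weights):
    """weights: per-classifier scores learned over a training set."""
    tally = defaultdict(float)
    for label, weight in zip(predictions, weights):
        tally[label] += weight
    return max(tally, key=tally.get)
```

With predictions ["CT", "MR", "MR"] and weights [0.9, 0.3, 0.2], the simple vote picks "MR" (two votes to one), but the weighted vote picks "CT" (0.9 vs. 0.5), illustrating how a strong classifier can overrule a weak majority.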
Radiologists frequently search the Web to find information they need to improve their practice, and knowing the types of information they seek could be useful for evaluating Web resources. Our goal was to develop an automated method to categorize unstructured user queries using a controlled terminology and to infer the type of information users seek. We obtained the query logs from two commonly used Web resources for radiology. We created a computer algorithm to associate RadLex-controlled vocabulary terms with the user queries. Using the RadLex hierarchy, we determined the high-level category associated with each RadLex term to infer the type of information users were seeking. To test the hypothesis that the term category assignments to user queries are non-random, we compared the distributions of the term categories in RadLex with those in user queries using the chi square test. Of the 29,669 unique search terms found in user queries, 15,445 (52%) could be mapped to one or more RadLex terms by our algorithm. Each query contained an average of one to two RadLex terms, and the dominant categories of RadLex terms in user queries were diseases and anatomy. While the same types of RadLex terms were predominant in both RadLex itself and user queries, the distribution of types of terms in user queries and RadLex were significantly different (p < 0.0001). We conclude that RadLex can enable processing and categorization of user queries of Web resources and enable understanding the types of information users seek from radiology knowledge resources on the Web.
Ontologies; terminologies; vocabularies; RadLex; software tools; controlled vocabulary; natural language processing; web technology
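The chi-square comparison described above can be sketched as follows: observed term-category counts in user queries are compared with the counts expected if queries followed RadLex's own category proportions. This is an illustrative computation of the statistic only (the counts are made up), not the study's analysis, which used the full set of RadLex categories.

```python
# Chi-square goodness-of-fit statistic comparing observed category counts
# against expected proportions. Category names and counts are invented.

def chi_square(observed, expected_proportions):
    total = sum(observed.values())
    stat = 0.0
    for category, obs in observed.items():
        exp = expected_proportions[category] * total
        stat += (obs - exp) ** 2 / exp
    return stat
```

For example, 60 anatomy and 40 disease queries against an expected 50/50 split give a statistic of (60−50)²/50 + (40−50)²/50 = 4.0, which would then be compared against the chi-square distribution with the appropriate degrees of freedom.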
We sought to demonstrate the effectiveness of techniques to index radiology images using metadata discovered in their free-text figure captions. The ARRS GoldMiner™ image library incorporated 94,256 figures from 11,712 articles published in peer-reviewed online radiology journals. Algorithms were developed to discover metadata—age, sex, and imaging modality—from the figures’ free-text captions. Age was recorded in years, and was classified as infant (less than 2 years), child (2 to 17 years), or adult (18+ years). Each figure was assigned to one of eight imaging modalities. A random sample of 1,000 images was examined to measure accuracy of the metadata. The patient’s age was identified in 58,994 cases (63%), and the patient’s sex was identified in 58,427 cases (62%). An imaging modality was assigned to 80,402 (85%) of the figures. Based on the 1,000 sampled cases, recall values for age, sex, and imaging modality were 97.2%, 99.7%, and 86.4%, respectively. Precision values for age, sex, and imaging modality were 100%, 100%, and 97.2%, respectively. Automated techniques can accurately discover age, sex, and imaging modality metadata from captions of figures published in radiology journals. The metadata can be used to dynamically filter queries for an image search engine.
Information retrieval; metadata; knowledge discovery; image library; information filtering; abstracting and indexing; search engine
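The caption-mining approach described above can be sketched with simple regular expressions for age and sex; the patterns below are illustrative stand-ins, far less extensive than the algorithms the study actually used, and apply the abstract's age groupings (infant under 2, child 2–17, adult 18 and over).

```python
import re

# Illustrative patterns only; real captions need broader coverage
# (e.g., "6-month-old", age ranges, abbreviations).
AGE_RE = re.compile(r"(\d{1,3})[- ]year[- ]old", re.IGNORECASE)
SEX_RE = re.compile(r"\b(man|male|boy|woman|female|girl)\b", re.IGNORECASE)

def extract_metadata(caption):
    meta = {}
    m = AGE_RE.search(caption)
    if m:
        age = int(m.group(1))
        meta["age"] = age
        meta["age_group"] = ("infant" if age < 2 else
                             "child" if age < 18 else "adult")
    s = SEX_RE.search(caption)
    if s:
        meta["sex"] = "M" if s.group(1).lower() in {"man", "male", "boy"} else "F"
    return meta
```

Applied to a caption such as "Axial CT image in a 64-year-old woman with pulmonary embolism", this yields age 64, age group "adult", and sex "F".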
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using “Web 2.0” technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner™ image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real-time for display as an “inline” image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the “More” link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to easily incorporate a context-sensitive image gallery into their documents.
Web technology; multimedia; internet technology
There is growing interest in bringing medical educational materials to the point of care. We sought to develop a system for just-in-time learning in radiology. A database of 34 learning modules was derived from previously published journal articles. Learning objectives were specified for each module, and multiple-choice test items were created. A web-based system—called TEMPO—was developed to allow radiologists to select and view the learning modules. Web services were used to exchange clinical context information between TEMPO and the simulated radiology work station. Preliminary evaluation was conducted using the System Usability Scale (SUS) questionnaire. TEMPO identified learning modules that were relevant to the age, sex, imaging modality, and body part or organ system of the patient being viewed by the radiologist on the simulated clinical work station. Users expressed a high degree of satisfaction with the system’s design and user interface. TEMPO enables just-in-time learning in radiology, and can be extended to create a fully functional learning management system for point-of-care learning in radiology.
Just-in-time learning; continuing medical education (CME); decision support; education; PACS; systems integration; radiology workflow
An ontology describes a set of classes and the relationships among them. We explored the use of an ontology to integrate picture archiving and communication systems (PACS) with other information systems in the clinical enterprise. We created an ontological model of thoracic radiology that contained knowledge of anatomy, imaging procedures, and performed procedure steps. We explored the use of the model in two use cases: (1) to determine examination completeness and (2) to identify reference (comparison) images obtained in the same imaging projection. The model incorporated a total of 138 classes, including radiology orderables, procedures, procedure steps, imaging modalities, patient positions, and imaging planes. Radiological knowledge was encoded as relationships among these classes. The ontology successfully met the information requirements of the two use-case scenarios. Ontologies can represent radiological and clinical knowledge to integrate PACS with the clinical enterprise and to support the radiology interpretation process.
Ontologies; semantic models; knowledge representation; knowledge sharing and reuse; PACS; systems integration; workflow; Protégé; Web Ontology Language (OWL); Transforming the Radiologic Interpretation Process (TRIP)
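The examination-completeness use case described above reduces to a simple idea: each imaging procedure is associated with a set of required procedure steps, and completeness is checked by comparing the performed steps against that set. The following toy sketch uses Python sets in place of the OWL ontology; the procedure definition is invented for illustration.

```python
# Toy sketch of the examination-completeness check. The real model
# encoded this knowledge as OWL class relationships, not a dictionary.
REQUIRED_STEPS = {
    "chest radiograph, 2 views": {"PA chest", "lateral chest"},
}

def missing_steps(procedure, performed_steps):
    """Return the required procedure steps not yet performed."""
    return REQUIRED_STEPS[procedure] - set(performed_steps)
```

For example, if only the PA view of a two-view chest radiograph has been acquired, the check reports the lateral view as missing, signaling an incomplete examination.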
Effective learning can occur at the point of care, when opportunities arise to acquire information and apply it to a clinical problem. To assess interest in point-of-care learning, we conducted a survey to explore radiologists' attitudes and preferences regarding the use of just-in-time learning (JITL) in radiology. Following Institutional Review Board approval, we invited 104 current radiology residents and 86 radiologists in practice to participate in a 12-item Internet-based survey to assess their attitudes toward just-in-time learning. Voluntary participation was solicited by e-mail, and respondents completed the survey on a web-based form. Seventy-nine physicians completed the questionnaire, including 47 radiology residents and 32 radiologists in practice, for an overall response rate of 42%. Respondents generally expressed a strong interest in JITL: 96% indicated a willingness to try such a system, and 38% indicated that they definitely would use a JITL system. They expressed a preference for learning interventions of 5–10 min in length. Current and recent radiology trainees have expressed a strong interest in just-in-time learning. The information from this survey should be useful in designing learning interventions and systems for delivering just-in-time learning to radiologists.
Continuing medical education (CME); radiology education; just-in-time learning; survey research; radiology workflow; systems integration
Objective. Collaborative filtering is a knowledge-discovery technique that can help guide readers to items of potential interest based on the experience of prior users. This study sought to determine the impact of collaborative filtering on navigation of a large, Web-based radiology knowledge resource. Materials and Methods. Collaborative filtering was applied to a collection of 1,168 radiology hypertext documents available via the Internet. An item-based collaborative filtering algorithm identified each document’s six most closely related documents based on 248,304 page views in an 18-day period. Documents were amended to include links to their related documents, and use was analyzed over the next 5 days. Results. The mean number of documents viewed per visit increased from 1.57 to 1.74 (P < 0.0001). Conclusions. Collaborative filtering can increase a radiology information resource’s utilization and can improve its usefulness and ease of navigation. The technique holds promise for improving navigation of large Internet-based radiology knowledge resources.
Collaborative filtering; World Wide Web; navigation; teaching files; digital libraries; image libraries; information resources; recommender systems; social filtering; word-of-mouth recommendation; mass customization; knowledge management
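The item-based collaborative filtering step described above can be sketched by counting co-occurrences: documents viewed in the same visit are counted as related, and each document's most closely related documents are those with the highest co-occurrence counts. The session data below are hypothetical page-view visits, and this co-occurrence count is one simple item-based similarity measure, not necessarily the exact algorithm the study used.

```python
from collections import defaultdict
from itertools import combinations

def related_documents(sessions, top_n=6):
    """sessions: list of visits, each a list of document IDs viewed."""
    co = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        # Count each unordered pair of documents viewed in the same visit
        for a, b in combinations(set(session), 2):
            co[a][b] += 1
            co[b][a] += 1
    # For each document, rank its neighbors by co-occurrence count
    return {doc: sorted(links, key=links.get, reverse=True)[:top_n]
            for doc, links in co.items()}
```

Given visits [["a", "b"], ["a", "b", "c"], ["a", "c"], ["a", "b"]], document "a" co-occurs three times with "b" and twice with "c", so "b" ranks first among its related documents.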
The authors developed a Bayesian network to differentiate among five benign and five malignant neoplasms of the appendicular skeleton using the patient’s age and sex and 17 radiographic characteristics. In a preliminary evaluation with physicians in training, the model identified the correct diagnosis in 19 cases (68%), and included the correct diagnosis among the two most probable diagnoses in 25 cases (89%). Bayesian networks can capture and apply knowledge of primary bone neoplasms. Further testing and refinement of the model are underway.
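The probabilistic reasoning behind such a model can be sketched with a naive Bayes ranking: a prior over diagnoses is combined with per-finding likelihoods to order the candidate diagnoses. The diagnoses and probabilities below are invented for illustration, and a true Bayesian network, unlike this sketch, also models dependencies among the findings.

```python
import math

def rank_diagnoses(priors, likelihoods, findings):
    """Rank diagnoses by log-posterior under a naive independence assumption.

    priors[dx] = P(dx); likelihoods[dx][f] = P(finding f present | dx);
    findings[f] = True/False for observed presence/absence.
    """
    scores = {}
    for dx, prior in priors.items():
        logp = math.log(prior)
        for f, present in findings.items():
            p = likelihoods[dx][f]
            logp += math.log(p if present else 1.0 - p)
        scores[dx] = logp
    return sorted(scores, key=scores.get, reverse=True)
```

With equal priors over two hypothetical diagnoses, a finding far more likely under one of them (e.g., periosteal reaction at 0.9 vs. 0.1) moves that diagnosis to the top of the ranked list.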