The rapid expansion of biomedical research has brought substantial scientific and administrative data management challenges to modern core facilities. Scientifically, a core facility must be able to manage experimental workflow and the corresponding set of large and complex scientific data. It must also disseminate experimental data to relevant researchers in a secure and expedient manner that facilitates collaboration and provides support for data interpretation and analysis. Administratively, a core facility must be able to manage the scheduling of its equipment and to maintain a flexible and effective billing system to track material, resource, and personnel costs and charge for services to sustain its operation. It must also have the ability to regularly monitor the usage and performance of its equipment and to provide summary statistics on resources spent on different categories of research. To address these informatics challenges, we introduce a comprehensive system called MIMI (multimodality, multiresource, information integration environment) that integrates the administrative and scientific support of a core facility into a single web-based environment. We report the design, development, and deployment experience of a baseline MIMI system at an imaging core facility and discuss the general applicability of such a system in other types of core facilities. These initial results suggest that MIMI will be a unique, cost-effective approach to addressing the informatics infrastructure needs of core facilities and similar research laboratories.
Computer system; data collection; data extraction; image data; image distribution; information management; information storage and retrieval; information system; internet; management information systems; open source; cost analysis; biomedical core facilities; unified modeling language
Tomosynthesis is a 3-dimensional mammography technique that generates thin slices, typically spaced 1 mm apart, from source data sets. The relatively high image noise in these thin slices raises the value of 1-cm thick slices, computed from the set of reconstructed slices, for image interpretation. In an initial evaluation, we investigated the potential of different algorithms for generating thick slices from tomosynthesis source data (maximum intensity projection, MIP; average algorithm, AV; and a new algorithm, so-called softMip). The three postprocessing techniques were evaluated using a homogeneous phantom with one textured slab, with a total thickness of about 5 cm, in which two 0.5-cm-thick slabs contained objects simulating microcalcifications, spiculated masses, and round masses. The phantom was examined by tomosynthesis (GE Healthcare). Microcalcifications were simulated by inclusion of calcium particles of four different sizes. The slabs containing the inclusions were examined in two different configurations: adjacent to each other and close to the detector, and separated from each other by two 1-cm thick breast-equivalent material slabs. The reconstructed tomosynthesis slices were postprocessed using MIP, AV, and softMip to generate 1-cm thick slices with a lower noise level. The three postprocessing algorithms were assessed by calculating the resulting contrast versus background for the simulated microcalcifications and contrast-to-noise ratios (CNR) for the other objects. The CNRs of the simulated round and spiculated masses were most favorable for the thick slices generated with the average algorithm, followed by softMip and MIP. Contrast of the simulated microcalcifications was best for MIP, followed by softMip and average projections. Our results suggest that the additional generation of thick slices may improve the visualization of objects in tomosynthesis.
The degree of this improvement varies among the different algorithms for microcalcifications, spiculated objects, and round masses. SoftMip is a new approach combining features of MIP and AV, yielding image properties intermediate between the two.
3D imaging (imaging, 3-dimensional); tomography, x-ray computed; digital mammography
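The two standard slab algorithms above reduce to simple per-pixel operations over a stack of thin slices; a minimal sketch follows. The softMip weighting is not described in the abstract, so the `"soft"` mode here is only a hypothetical MIP/average blend standing in for it.

```python
import numpy as np

def thick_slab(stack, start, n, mode="avg", alpha=0.5):
    """Collapse n adjacent thin slices (axis 0) into one thick slice.

    mode: "avg" (average projection), "mip" (maximum intensity
    projection), or "soft" (a hypothetical MIP/average blend standing
    in for softMip; the published softMip weighting is not reproduced).
    """
    slab = stack[start:start + n].astype(float)
    avg = slab.mean(axis=0)
    mip = slab.max(axis=0)
    if mode == "avg":
        return avg
    if mode == "mip":
        return mip
    if mode == "soft":
        return alpha * mip + (1.0 - alpha) * avg
    raise ValueError(mode)

# Ten 1-mm slices collapsed into one 1-cm thick slice
stack = np.random.default_rng(0).normal(100.0, 5.0, size=(10, 64, 64))
avg_slice = thick_slab(stack, 0, 10, "avg")
mip_slice = thick_slab(stack, 0, 10, "mip")

# Averaging suppresses noise; MIP preserves bright detail at higher noise
assert avg_slice.std() < stack[0].std()
assert (mip_slice >= avg_slice).all()
```

The noise-versus-contrast trade-off reported in the study falls directly out of these definitions: averaging lowers the noise floor (favoring mass CNR), while the maximum operator preserves small bright inclusions (favoring microcalcification contrast).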
Brain perfusion diseases such as acute ischemic stroke are detectable through computed tomography (CT)- and magnetic resonance imaging (MRI)-based methods. An alternative approach makes use of ultrasound imaging. In this low-cost bedside method, noise and artifacts degrade the imaging process. Stripe artifacts in particular show signal behavior similar to that of acute stroke or other brain perfusion diseases. This document describes how stripe artifacts can be detected and eliminated in ultrasound images obtained through harmonic imaging (HI). On the basis of this new method, both proper identification of areas with critically reduced brain tissue perfusion and discrimination between brain perfusion defects and ultrasound stripe artifacts become possible.
Ultrasound; stripe artifacts; moments; artifact segmentation; harmonic imaging; perfusion; stroke
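The abstract does not spell out the moment-based detection step, so the following is only a crude illustrative stand-in: flagging image columns whose mean intensity deviates strongly from the distribution of column means, as a vertical stripe would.

```python
import numpy as np

def stripe_columns(img, k=6.0):
    """Flag columns whose mean intensity deviates strongly (k robust
    deviations) from the median column mean -- a simple illustrative
    proxy for the moment-based stripe detection in the abstract."""
    col_means = img.mean(axis=0)
    med = np.median(col_means)
    mad = np.median(np.abs(col_means - med)) + 1e-9
    return np.nonzero(np.abs(col_means - med) > k * mad)[0]

rng = np.random.default_rng(1)
img = rng.normal(50.0, 2.0, size=(128, 128))
img[:, 40] -= 30.0          # simulated hypoechoic stripe artifact
cols = stripe_columns(img)
assert 40 in cols
```

A detected stripe column could then be masked or interpolated before perfusion analysis, so that its signal drop is not mistaken for reduced tissue perfusion.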
The measurement of temperature variation along the surface of the body, provided by digital infrared thermal imaging (DITI), is becoming a valuable auxiliary tool for the early detection of many diseases in medicine. DITI, however, is essentially a 2-D technique, and its images carry no associated anatomical information. Multimodal image registration and fusion may overcome this difficulty and provide additional information for diagnostic purposes. In this paper, a new method of registering and merging 2-D DITI and 3-D MRI is presented. Registration of the images acquired from the two modalities is necessary because they are acquired with different imaging systems. First, the body volume of interest is scanned by an MRI system, and a set of 2-D DITI images is acquired at orthogonal angles. Next, these two different sets of images must be registered. This is done by creating 2-D MRI projections from the reconstructed 3-D MRI volume and registering them with the DITI. Once registered, the DITI is projected over the 3-D MRI. The program developed to assess the proposed method resulted in a new tool for fusing two different image modalities that can assist physicians in diagnosis.
Infrared thermography; magnetic resonance; image fusion; image registration
Background: Orthopedic trauma care relies on two-dimensional radiograms both before and during the operation. Understanding the three-dimensional nature of complex fractures on plain radiograms is challenging. Modern fluoroscopes can acquire three-dimensional volume datasets even during an operation, but device limitations constrain the acquired volume to a cube with an edge of only 12 cm. However, viewing the surrounding intact structures is important to comprehend the fracture in its context. We suggest merging a fluoroscope’s volume scan into a generic bone model to form a composite full-length 3D bone model. Methods: Materials consisted of one cadaver bone and 20 three-dimensional surface models of human femora. Radiograms and computed tomography scans were taken before and after applying a controlled fracture to the bone. A 3D scan of the fracture was acquired using a mobile fluoroscope (Siemens Siremobil). The fracture was fitted into the generic bone models by rigid registration using a modified least-squares algorithm. Registration precision was determined, and a clinical appraisal of the composite models was obtained. Results: Twenty composite bone models were generated. Average registration precision was 2.0 mm (range 1.6 to 2.6). Average processing time on a laptop computer was 35 s (range 20 to 55). Comparing synthesized radiograms with the actual radiograms of the fractured bone yielded clinically satisfactory results. Conclusion: A three-dimensional full-length representation of a fractured bone can reliably be synthesized from a short scan of the patient’s fracture and a generic bone model. This patient-specific model can subsequently be used for teaching, surgical operation planning, and intraoperative visualization purposes.
3D Imaging (imaging, three-dimensional); bone and bones; fluoroscopy; teaching; tomography; x-ray computed; clinical application; orthopedic surgery; operation planning; composite bone model
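The paper's modified least-squares algorithm is not detailed in the abstract; the standard least-squares rigid registration it builds on (the Kabsch/SVD solution for corresponding point sets) can be sketched as follows.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid registration: find rotation R and translation
    t minimizing ||src @ R.T + t - dst||.  A generic stand-in for the
    paper's modified least-squares algorithm (details not in abstract)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Recover a known rotation + translation from noiseless correspondences
rng = np.random.default_rng(0)
pts = rng.normal(size=(30, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_fit(pts, moved)
assert np.allclose(R, R_true, atol=1e-8)
```

In the fracture-fitting setting, `src` and `dst` would be matched surface points from the fluoroscopic volume and the generic bone model; the residual after fitting corresponds to the ~2.0-mm registration precision reported above.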
Over roughly the last ten years, diffusion tensor imaging has been used in both research and clinical medical applications. To construct diffusion tensor images, a large set of direction-sensitive magnetic resonance image (MRI) acquisitions is required. These acquisitions generally have a lower signal-to-noise ratio than conventional MRI acquisitions. In this paper, we discuss computationally efficient algorithms for noise removal in diffusion tensor magnetic resonance imaging (DTI) using the framework of the 3-dimensional shape-adaptive discrete cosine transform. We use local polynomial approximations to select homogeneous regions in the DTI data. These regions are transformed to the frequency domain by a modified discrete cosine transform. In the frequency domain, the noise is removed by thresholding. We perform numerical experiments on synthetic 3D MRI and DTI data and on real 3D DTI brain data from a healthy volunteer. The experiments indicate good performance compared to current state-of-the-art methods. The proposed method is well suited for parallelization and could thus dramatically improve the computation speed of denoising schemes for large-scale 3D MRI and DTI.
Diffusion tensor; magnetic resonance; DTI; denoising; shape-adaptive DCT; regularization
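The transform-threshold-invert core of such schemes is compact; a minimal fixed-block sketch is shown below. The paper's method is shape-adaptive (regions selected by local polynomial approximation), which this illustration deliberately omits.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(vol, thresh):
    """Hard-threshold the 3-D DCT coefficients of a (homogeneous) block.
    A fixed-block illustration of the transform-threshold-invert step;
    the paper's shape-adaptive region selection is not reproduced."""
    coeffs = dctn(vol, norm="ortho")
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
clean = np.full((8, 8, 8), 10.0)           # homogeneous region
noisy = clean + rng.normal(0.0, 1.0, clean.shape)
den = dct_denoise(noisy, thresh=3.0)

# Thresholding should move the block closer to the clean signal
assert np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

Because each block is processed independently, this step parallelizes trivially across regions, which is the property the authors point to for large-scale 3D data.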
Virtual microscopy (i.e., the viewing of entire microscope specimens on a computer display) is becoming widely applied in microscopy teaching and clinical laboratory medicine. Despite rapidly increasing use, virtual microscopy currently lacks a universally accepted image format. A promising candidate is JPEG2000, which has potential advantages for handling gigabyte-sized virtual slides. To date, no JPEG2000-based software has been tailored specifically to virtual microscopy. To study the utility of JPEG2000 in virtual microscopy, we first optimized JPEG2000 code-stream parameters for virtual slide viewing (i.e., fast navigation, zooming, and use of an overview window). Compression ratios of 25:1 to 30:1 with the irreversible wavelet filter were found to provide the best compromise between file size and image quality. Optimal code-stream parameters also consisted of 10 wavelet decomposition levels, progression order Resolution-Position-Component-Layer (RPCL), a precinct size of 128 × 128, and a code-block size of 64 × 64. Tiling and the use of multiple quality layers were deemed unnecessary. A compression application (JVScomp) was developed for creating optimally parameterized JPEG2000 virtual slides. A viewing application (JVSview) was developed specifically for virtual microscopy, offering all of the basic viewing functions. JVSview also supports viewing of focus stacks, embedding of textual descriptions, and defining regions of interest as metadata. Combined with our server application (JVSserv), virtual slides can be viewed over networks by employing the JPEG2000 Interactive Protocol (JPIP). The software can be tested using virtual slide examples located on our public JPIP server (http://jvsmicroscope.uta.fi/). The software package is freely downloadable and usable for noncommercial purposes.
JPEG2000; JPIP; telepathology; digital pathology; virtual slide
Clinical data that may be used in a secondary capacity to support research activities are regularly stored in three significantly different formats: (1) structured, codified data elements; (2) semi-structured or unstructured narrative text; and (3) multi-modal images. In this manuscript, we will describe the design of a computational system that is intended to support the ontology-anchored query and integration of such data types from multiple source systems. Additional features of the described system include (1) the use of Grid services-based electronic data interchange models to enable the use of our system in multi-site settings and (2) the use of a software framework intended to address both potential security and patient confidentiality concerns that arise when transmitting or otherwise manipulating potentially privileged personal health information. We will frame our discussion within the specific experimental context of the concept-oriented query and integration of correlated structured data, narrative text, and images for cancer research.
Image retrieval; information retrieval; ontologies; text mining; Grid computing
The objective of this study was to analyze image quality of chest examinations in pediatric patients using computed radiography (CR) obtained with a wide range of doses, in order to suggest appropriate parameters for optimal image quality. A sample of 240 chest images in four age ranges was randomly selected from the examinations performed during 2004. Images were obtained using a CR system and were evaluated independently by three radiologists. Each image was scored using criteria proposed by the European Guidelines on Quality Criteria in Pediatrics. Mean global scoring and scoring of individual criteria more sensitive to noise were used to evaluate image quality. Agfa dose level (DL) was in the range 1.20 to 2.85. It was found that there was no significant correlation (R < 0.5) between image quality and DL for any of the age ranges, either for the global score or for the individual criteria more related to noise. The mean value of DL was in the range 1.9–2.1 for the four age bands. From this study, a DL value of 1.6 is proposed for pediatric CR chest imaging. This could yield a reduction of approximately a factor of 2.5 in mean patient entrance surface doses.
Radiation dose; image quality; chest radiography; computed radiography
The Quality Assurance Review Center (QARC) works to improve the standards of care in treating cancer by improving the quality of clinical trials medicine. QARC operates as a data management and review center providing quality assurance services for multiple external groups including cooperative groups and pharmaceutical companies. As the medical world migrates from analog film to digital files, QARC has developed an innovative and unique digital imaging management system to accommodate this trend. As QARC acquires electronic data from institutions across six continents, the system is continually developed to accommodate Digital Imaging and Communications in Medicine (DICOM) imaging originating from a wide variety of Picture Archival and Communications System (PACS) manufacturers, thus creating one of the largest and most diverse multi-institutional imaging archives in the cancer research community.
PACS; digital imaging and communications in medicine (DICOM); digital imaging; digital image management; workflow; archive; radiotherapy; quality assurance; clinical trial
Digital imaging and communication in medicine (DICOM) specifies that all DICOM objects have globally unique identifiers (UIDs). Creating these UIDs can be a difficult task due to the variety of techniques in use and the requirement to ensure global uniqueness. We present a simple technique that combines a root organization identifier, assigned descriptive identifiers, and Java-generated unique identifiers to construct DICOM-compliant UIDs.
Digital imaging and communications in medicine (DICOM); structured reporting; digital imaging
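The root + descriptive identifiers + generated suffix scheme described above can be sketched in a few lines. The abstract's implementation is in Java; this Python version only mirrors the structure, and the organization root below is a placeholder (a real deployment must use an OID actually registered to the organization). DICOM UIDs must contain only digits and dots and may not exceed 64 characters.

```python
import time
import uuid

# Placeholder root -- NOT a registered OID; substitute your organization's
ORG_ROOT = "1.2.3.4.5"

def make_uid(device_id, object_type):
    """Build a DICOM UID from an organization root, descriptive numeric
    components, and a time/UUID-derived unique suffix, mirroring the
    root + descriptive id + generated id scheme in the abstract."""
    suffix = uuid.uuid4().int % 10**12          # generated unique part
    uid = f"{ORG_ROOT}.{device_id}.{object_type}.{int(time.time())}.{suffix}"
    if len(uid) > 64:
        raise ValueError("DICOM UIDs are limited to 64 characters")
    return uid

uid = make_uid(1, 4)    # hypothetical codes: device 1, object type 4
assert len(uid) <= 64 and all(c.isdigit() or c == "." for c in uid)
```

Keeping the descriptive components short is what leaves room for the generated suffix within the 64-character limit, which is the main practical constraint of this construction.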
This paper presents a lossless watermarking scheme, in the sense that the original image can be exactly recovered from the watermarked one, with the purpose of verifying the integrity and authenticity of medical images. In addition, the scheme has the capability of not introducing any embedding-induced distortion in the region of interest (ROI) of a medical image. Difference expansion of adjacent pixel values is employed to embed several bits. A region of embedding, represented by a polygon, is chosen deliberately to prevent introducing embedding distortion in the ROI. Only the vertex information of the polygon is transmitted to the decoder for reconstructing the embedding region, which improves the embedding capacity considerably. The digital signature of the whole image is embedded for verifying the integrity of the image. An identifier present in the electronic patient record (EPR) is embedded for verifying authenticity by simultaneously processing the watermarked image and the EPR. Combined with a fingerprint system, the patient’s fingerprint information is embedded into several image slices and then extracted for verifying authenticity.
Watermarking; telemedicine; security; integrity; confidentiality; image authentication; PACS; ROI
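Difference expansion on a pixel pair is fully reversible, which is what makes the scheme lossless. A minimal sketch of the classic (Tian-style) pair operation follows; overflow handling (restricting embedding to "expandable" pairs so pixel values stay in range) is omitted here.

```python
def de_embed(x, y, bit):
    """Difference expansion on a pixel pair (x, y): double the
    difference and hide one bit in its new least-significant bit.
    Reversible; overflow/expandability checks are omitted."""
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the hidden bit and the original pair exactly."""
    h2 = x2 - y2
    l = (x2 + y2) // 2
    bit, h = h2 & 1, h2 >> 1
    return (l + (h + 1) // 2, l - h // 2), bit

pair, bit = (130, 127), 1
emb = de_embed(*pair, bit)
restored, b = de_extract(*emb)
assert restored == pair and b == bit    # exact, lossless recovery
```

Embedding the image's digital signature and an EPR identifier, as the scheme does, then amounts to serializing those payloads into a bitstream and feeding the bits to pairs inside the chosen embedding polygon.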
The transmission of patient and imaging data between imaging centers and other interested individuals is increasingly achieved by means of compact disc digital media (CD). These CDs typically contain, in addition to the patient images, a DICOM reader and information about the origin of the data. While equipment manufacturers attach disclaimers to these discs and specify the intended use of such media, they are often the only practical means of transmitting data for small medical, dental, or veterinary medical centers. Images transmitted by these means are used for clinical diagnosis. This has led to a heavy reliance on the integrity of the data. This report describes attempts to alter significant patient and study data on CD media and their outcome. The results show that data files are extremely vulnerable to alteration, and alterations are not detectable without detailed analysis of file structure. No alterations to the DICOM readers were required to achieve this; changes were applied only to the data files. CDs with altered data can be readily prepared and, from the point of view of individuals viewing the images, function identically to the original manufacturer’s CD. Such media should be considered unsafe where there is a potential for financial or other gain to be had from altering the data and the copy cannot be cross-checked with the original data.
Security; telemedicine; medical records
This article outlines the strategy used by our hospital to maximize the knowledge transfer to referring physicians on using a picture archiving and communication system (PACS). We developed an e-learning platform underpinned by cognitive load theory (CLT) so that in-depth knowledge of PACS’ abilities becomes attainable regardless of the user’s prior experience with computers. The application of the techniques proposed by CLT optimizes the learning of the new actions necessary to obtain and manipulate radiological images. The application of cognitive-load-reducing techniques is explained with several examples. We discuss the need to safeguard the physicians’ main mental processes to keep the patient’s interests in focus. A holistic adoption of CLT techniques, both in teaching and in the configuration of information systems, could be adopted to attain this goal. An overview of the advantages of this instruction method is given on both the individual and organizational levels.
Informatics training; computer-assisted instruction; PACS training; clinical image viewing; cognitive load theory; e-learning
Collaborations in biomedical research and clinical studies require that data, software, and computational resources be shared between geographically distant institutions. In radiology, there is a related issue of sharing remote DICOM data over the Internet. This paper focuses on the problem of federating multiple image data resources such that clients can interact with them as if they are stored in a centralized PACS. We present a toolkit, called VirtualPACS, to support this functionality. Using the toolkit, users can perform standard DICOM operations (query, retrieve, and submit) across distributed image databases. The key features of the toolkit are: (1) VirtualPACS makes it easy to use existing DICOM client applications for data access; (2) it can easily be incorporated into an imaging workflow as a DICOM source; (3) using VirtualPACS, heterogeneous collections of DICOM sources are exposed to clients through a uniform interface and common data model; and (4) DICOM image databases without DICOM messaging can be accessed.
Grid computing; teleradiology; PACS; computer networks; Digital Imaging and Communications in Medicine (DICOM); imaging informatics; PACS integration
The radiology community has recognized the need to create a standard terminology to improve the clarity of reports, to reduce radiologist variation, to enable access to imaging information, and to improve the quality of practice. This need has recently led to the development of RadLex, a controlled terminology for radiology. The creation of RadLex has proved challenging in several respects: It has been difficult for users to peruse the large RadLex taxonomies and for curators to navigate the complex terminology structure to check it for errors and omissions. In this work, we demonstrate that the RadLex terminology can be translated into an ontology, a representation of terminologies that is both human-browsable and machine-processable. We also show that creating this ontology permits computational analysis of RadLex and enables its use in a variety of computer applications. We believe that adopting an ontology representation of RadLex will permit more widespread use of the terminology and make it easier to collect feedback from the community that will ultimately lead to improving RadLex.
Ontologies; terminologies; vocabularies; RadLex; software tools
Introduction: To validate a preliminary version of a radiological lexicon (RadLex) against terms found in thoracic CT reports and to index report content in RadLex term categories. Material and Methods: Terms from a random sample of 200 thoracic CT reports were extracted using a text processor and matched against RadLex. Report content was manually indexed by two radiologists in consensus in the term categories Anatomic Location, Finding, Modifier, Relationship, Image Quality, and Uncertainty. Descriptive statistics were used, and differences between age groups and report types were tested for significance using Kruskal–Wallis and Mann–Whitney tests (significance level <0.05). Results: Of 363 terms extracted, 304 (84%) were found in RadLex and 59 (16%) were not. Report indexing showed a mean of 16.2 encoded items and 3.2 findings per report. The term categories most frequently encoded were Modifier (1,030 of 3,244, 31.8%), Anatomic Location (813, 25.1%), Relationship (702, 21.6%), and Finding (638, 19.7%). Frequency of indexed items per report was higher in older age groups, but no significant difference was found between first-study and follow-up study reports. Frequency of distinct findings per report increased with patient age (p < 0.05). Conclusion: RadLex already covers most terms present in thoracic CT reports, based on a small sample analysis from one institution. Applications for report encoding need to be developed to validate the lexicon against a larger sample of reports and to address the issue of automatic relationship encoding.
Reporting; classification; chest CT; terminology
The Integrating the Healthcare Enterprise (IHE) Teaching File and Clinical Trial Export (TCE) integration profile describes a standard workflow for exporting key images from an image manager/archive to a teaching file, clinical trial, or electronic publication application. Two specific digital imaging and communication in medicine (DICOM) structured reports (SR) reference the key images and contain associated case information. This paper presents step-by-step instructions for translating the TCE document templates into functional and complete DICOM SR objects. Others will benefit from these instructions in developing TCE compliant applications.
Digital imaging and communications in medicine (DICOM); integrating the healthcare enterprise (IHE); extensible markup language (XML); electronic teaching file; clinical trial; electronic publishing
Continuous voice recognition dictation systems for radiology reporting provide a viable alternative to conventional transcription services with the promise of shorter report turnaround times and increased cost savings. While these benefits may be realized in academic institutions, it is unclear how voice recognition dictation impacts the private practice radiologist, who is now faced with the additional task of transcription. In this article, we compare conventional transcription services with a commercially available voice recognition system, with the following results: (1) reports dictated with voice recognition took 50% longer to dictate despite being 24% shorter than those conventionally transcribed; (2) there were 5.1 errors per case, and 90% of all voice recognition dictations contained errors prior to report signoff, whereas 10% of transcribed reports contained errors; and (3) after signoff, 35% of VR reports still had errors. Additionally, cost savings using voice recognition systems in non-academic settings may not be realized. Based on average radiologist and transcriptionist salaries, the additional time spent dictating with voice recognition costs an additional $6.10 per case, or $76,250.00 yearly. The opportunity costs may be higher. Informally surveyed, all radiologists expressed dissatisfaction with voice recognition, reporting frustration and increased fatigue. In summary, in non-academic settings, utilizing radiologists as transcriptionists results in more error-ridden radiology reports and increased costs compared with conventional transcription services.
Voice recognition dictation; radiologist; transcriptionist
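The per-case and annual figures above jointly imply an assumed caseload, which is worth making explicit when scaling the estimate to a different practice; the quick check below derives it (the underlying salary assumptions are not given in the abstract).

```python
# Derive the caseload implied by the abstract's cost figures
extra_cost_per_case = 6.10        # USD per case, from the abstract
annual_extra_cost = 76_250.00     # USD per year, from the abstract

implied_cases_per_year = annual_extra_cost / extra_cost_per_case
# approximately 12,500 cases per year
assert abs(implied_cases_per_year - 12_500) < 1e-6
```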
Attenuation artifacts are the most common sources of error in myocardial single-photon emission computed tomography (SPECT) imaging. Breast artifacts are the most frequent causes of false positive planar images in female subjects. The purpose of this study was to predict breast adverse attenuation by measuring breast tissue thickness with digital x-ray. Sixty-five consecutive female patients with angina pectoris who were referred for myocardial perfusion scintigraphy were enrolled in this study. Eighteen patients with normal perfusion imaging and normal coronary angiography composed the first group, whereas the second group consisted of 28 patients with a positive exercise electrocardiogram, anterior ischemia on myocardial perfusion imaging, and greater than 50% left anterior descending artery stenosis on angiography. Nineteen patients in the third group had normal exercise electrocardiograms and normal coronary angiographies but anterior ischemia on perfusion imaging. Digital x-ray records were obtained for measuring breast tissue thickness and Hounsfield density. The rate of breast adverse attenuation was 40% (19/47) in patients with anterior ischemia. The sensitivity and specificity of the prediction of breast adverse attenuation (lateral density less than 550 Hounsfield units) were 79% and 11%, respectively. When a breast thickness greater than 6 cm measured in the left anterior oblique view was used to predict breast attenuation, the sensitivity and specificity were 79% and 93%, respectively. In conclusion, a breast thickness greater than 6 cm measured from the left anterior oblique view with digital x-ray can predict breast adverse attenuation in female patients and thereby may decrease the number of unnecessary invasive diagnostic procedures performed.
Breast thickness; tissue Hounsfield density; attenuation
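The sensitivity and specificity figures above follow the standard confusion-matrix definitions; a minimal sketch is below. The raw counts used here are illustrative only (the abstract reports rates, not the underlying counts).

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts chosen to reproduce rates close to the abstract's
# 79% sensitivity and 93% specificity for the >6 cm thickness criterion
sens, spec = sens_spec(tp=15, fn=4, tn=26, fp=2)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
assert abs(sens - 15 / 19) < 1e-12
assert abs(spec - 26 / 28) < 1e-12
```

The contrast between the two criteria in the study (11% versus 93% specificity at the same 79% sensitivity) is exactly the kind of difference these two ratios are designed to expose.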
Despite the potential to dominate radiology reporting, current speech recognition technology remains a weak and inconsistent alternative to traditional human transcription. This is attributable to poor accuracy rates, in spite of vendor claims, and to the resources wasted in correcting erroneous reports. A solution to this problem is post-speech-recognition error detection that will assist the radiologist in proofreading more efficiently. In this paper, we present a statistical method for error detection that can be applied after transcription. The results are encouraging, showing an error detection rate as high as 96% in some cases.
Speech recognition; error detection; radiology reporting; co-occurrence relations; statistical natural language processing; computer-assisted proofreading
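The co-occurrence idea behind such detectors can be illustrated with a toy bigram model: word pairs that (almost) never co-occur in a reference report corpus are flagged as likely recognition errors. The corpus and threshold below are illustrative assumptions; the paper's actual statistical model is only sketched here.

```python
from collections import Counter

# Toy corpus standing in for a large archive of signed reports
corpus = [
    "no acute intracranial hemorrhage",
    "no acute hemorrhage or mass effect",
    "acute intracranial hemorrhage is present",
]
bigrams = Counter()
for report in corpus:
    toks = report.split()
    bigrams.update(zip(toks, toks[1:]))

def suspicious_pairs(report, counts, min_count=1):
    """Flag adjacent word pairs seen fewer than min_count times in the
    reference corpus -- candidates for speech recognition errors."""
    toks = report.split()
    return [p for p in zip(toks, toks[1:]) if counts[p] < min_count]

flagged = suspicious_pairs("no acute intracranial hemorrhoid", bigrams)
assert ("intracranial", "hemorrhoid") in flagged
```

In practice the counts would come from a large report archive and the threshold from held-out error statistics, but the proofreading workflow is the same: highlight low-probability word sequences for the radiologist's attention before signoff.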
Segmentation of volumetric computed tomography (CT) datasets facilitates evaluation of 3D CT angiography renderings, particularly with maximum intensity projection displays. This manuscript describes a novel automated bone editing program that uses an interactive watershed transform (IWT) technique to rapidly extract the skeletal structures from the volume. Advantages of this tool include efficient segmentation of large datasets with minimal need for correction. In the first of this two-part series, the principles of the IWT technique are reviewed, followed by a discussion of clinical utility based on our experience.
3D segmentation; computed tomography; body imaging
The preceding manuscript describes the principles behind the Interactive Watershed Transform (IWT) segmentation tool. The purpose of this manuscript is to illustrate the clinical utility of this editing technique for body multidetector row computed tomography (MDCT) imaging. A series of cases demonstrates clinical applications where automated segmentation of skeletal structures with IWT is most useful. Both CT angiography and orthopedic applications are presented.
3D Segmentation; computed tomography; body imaging
We hypothesized that the summation or axial slab average intensity projection (AIP) techniques can substitute for primary reconstruction (PR) from raw projection data in abdominal applications. To compare with PR datasets (5-mm thick, 20% overlap) in 150 abdominal studies, corresponding summation and AIP datasets were calculated from 2-mm thick images (50% overlap). The root-mean-square error between PR and summation images was significantly greater than that between PR and AIP images (9.55 [median] vs. 7.12, p < 0.0001, Wilcoxon signed-ranks test). Four radiologists independently compared 2,000 test images (PR [as control], summation, or AIP) with their corresponding PR images to test whether the identicalness of summation or AIP images to PR images was no more than 1% lower than the assessed identicalness of PR images to themselves (Wald-type test for clustered matched-pair data in a non-inferiority design). For each reader, both summation and AIP images were non-inferior to PR images in terms of being rated identical to PR (p < 0.05). Although summation and AIP techniques produce images that differ from PR images, these differences are not easily perceived by radiologists. Thus, the summation or AIP techniques can substitute for PR for the primary interpretation of abdominal CT.
Tomography, spiral computed; image processing, computer-assisted; imaging, three-dimensional; image interpretation, computer-assisted; information storage and retrieval
The purpose of our study was to develop a user-independent computerized tool for the automated segmentation and quantitative assessment of in vivo-acquired digital subtraction angiography (DSA) images. Vessel enhancement was accomplished based on the concept of the image structural tensor. The developed software was tested on a series of DSA images acquired from one animal and two human angiogenesis models. Its performance was evaluated against manually segmented images. A receiver operating characteristic (ROC) curve was obtained for every image with regard to the different percentages of the image histogram. The area under the mean curve was 0.89 for the experimental angiogenesis model and 0.76 and 0.86 for the two clinical angiogenesis models. The coordinates of the operating point were 8.3% false positive rate and 92.8% true positive rate for the experimental model. Correspondingly, for the clinical angiogenesis models, the coordinates were 8.6% false positive rate with 89.2% true positive rate, and 9.8% false positive rate with 93.8% true positive rate, respectively. A new user-friendly tool for the analysis of vascular networks in DSA images was developed that can be easily used in either experimental or clinical studies. Its main characteristics are robustness and fast, automatic execution.
Electronic supplementary material
The online version of this article (doi: 10.1007/s10278-007-9047-2) contains supplementary material, which is available to authorized users.
DSA; image processing; quantification; angiogenesis; experimental
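Structure-tensor vessel enhancement rests on the eigenvalues of a smoothed gradient outer-product tensor: along an elongated structure one eigenvalue dominates, so the eigenvalue gap is large. A generic 2-D sketch follows; the paper's full enhancement and thresholding pipeline is not reproduced.

```python
import numpy as np
from scipy import ndimage

def tensor_anisotropy(img, sigma=1.5):
    """Eigenvalue anisotropy of the image structure tensor -- large
    along elongated (vessel-like) structures.  A generic sketch of the
    structural-tensor idea; not the paper's exact pipeline."""
    gy, gx = np.gradient(img.astype(float))
    Jxx = ndimage.gaussian_filter(gx * gx, sigma)
    Jyy = ndimage.gaussian_filter(gy * gy, sigma)
    Jxy = ndimage.gaussian_filter(gx * gy, sigma)
    # Closed-form eigenvalues of the 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]]
    tr = Jxx + Jyy
    root = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1, lam2 = (tr + root) / 2.0, (tr - root) / 2.0
    return lam1 - lam2      # large where one orientation dominates

img = np.zeros((64, 64))
img[30:33, 10:54] = 1.0     # synthetic horizontal "vessel"
aniso = tensor_anisotropy(img)

# Anisotropy is high along the vessel, negligible in flat background
assert aniso[28:35, 20:44].max() > aniso[5:15, 5:25].max()
```

Thresholding such an anisotropy (or vesselness) map, e.g. at percentiles of the image histogram as in the study's ROC analysis, yields the binary vascular network used for quantification.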