Digital Imaging and Communications in Medicine (DICOM) is the dominant standard for medical imaging data. DICOM-compliant devices and the data they produce are generally designed for clinical use and often do not match the needs of users in research or clinical trial settings. DicomBrowser is software designed to ease the transition between clinically oriented DICOM tools and the specialized workflows of research imaging. It supports interactive loading and viewing of DICOM images and metadata across multiple studies and provides a rich and flexible system for modifying DICOM metadata. Users can make ad hoc changes in a graphical user interface, write metadata modification scripts for batch operations, use partly automated methods that guide users to modify specific attributes, or combine any of these approaches. DicomBrowser can save modified objects as local files or send them to a DICOM storage service using the C-STORE network protocol. DicomBrowser is open-source software, available for download at http://nrg.wustl.edu/software/dicom-browser.
Digital imaging and communications in medicine (DICOM); Workflow; Image viewer; Imaging informatics
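DicomBrowser's scripted batch edits amount to mapping attribute names to new values (or deletions) uniformly across many objects. The idea can be sketched as follows, using a plain dictionary in place of a real DICOM dataset; the attribute names and the `apply_edits` helper are illustrative, not DicomBrowser's actual script syntax.

```python
def apply_edits(dataset, edits):
    """Apply a metadata-edit script to one dataset.

    dataset: dict mapping attribute name -> value (stand-in for a DICOM object)
    edits:   dict mapping attribute name -> new value, or None to delete
    Returns a modified copy, leaving the original untouched.
    """
    out = dict(dataset)
    for attr, value in edits.items():
        if value is None:
            out.pop(attr, None)   # remove the attribute entirely
        else:
            out[attr] = value     # assign or overwrite
    return out

# One script applied uniformly to a whole batch of objects.
script = {"PatientName": "SUBJECT_001", "InstitutionName": None}
batch = [
    {"PatientName": "Doe^Jane", "InstitutionName": "Hospital A", "Modality": "MR"},
    {"PatientName": "Doe^Jane", "InstitutionName": "Hospital A", "Modality": "CT"},
]
deidentified = [apply_edits(ds, script) for ds in batch]
```

The same edit structure supports both the ad hoc single-object case and the scripted batch case described in the abstract.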
Preclinical medical education; Digital teaching files; Web technology; MIRC; Radiology teaching files; Learning management systems; BlackBoard Learn
In many medical imaging applications, it is desirable and important to localize and remove the patient table from CT images. However, existing methods often require user interactions to define the table and sometimes make inaccurate assumptions about the table shape. Due to different patient table designs, shapes, and characteristics, these methods are not robust in identifying and removing the patient table. This paper proposes a new automatic approach which first identifies and locates the patient table in the sagittal planes and then removes it from the axial planes. The method has been tested successfully against different tables in different products from multiple vendors, showing it is both a versatile and robust technique for patient table removal.
Computed tomography; Patient table; Hough transform
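In sagittal slices the table edge appears as a nearly straight line, and the Hough transform named in the keyword list is the standard tool for finding it. A minimal sketch of a line-finding Hough accumulator over binary edge points (angular resolution and the toy edge set are illustrative):

```python
import math
from collections import Counter

def hough_peak(points, n_theta=180):
    """Return (theta_deg, rho) of the strongest line through the given
    (x, y) edge points, using the normal form rho = x*cos(t) + y*sin(t)."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):                  # theta in [0, 180) degrees
            theta = math.radians(t)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] += 1                    # vote in the accumulator
    (theta_deg, rho), _ = acc.most_common(1)[0]
    return theta_deg, rho

# A horizontal table edge at row y = 5 in a sagittal slice:
edge_points = [(x, 5) for x in range(40)]
print(hough_peak(edge_points))  # theta = 90 degrees, rho = 5
```

Once the table line is localized in the sagittal planes, its position fixes the rows to suppress in every axial plane, which is the two-step structure the abstract describes.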
The images generated in modern interventional cardiology (IC) laboratories are acquired at a high-quality standard (1,024 × 1,024 pixels and 10–12 bits/pixel), enabling cardiologists to perform interventions under the best conditions. In most cases, however, these images are archived at a basic quality standard (512 × 512 pixels and 8 bits/pixel). The purpose of this work is to complete the research developed in a previous paper and to analyze the influence of matrix size and bit depth reduction on the quality of images acquired on a polymethylmethacrylate (PMMA) phantom with a test object. The variations in contrast-to-noise ratio (CNR) and high-contrast spatial resolution (HCSR) were investigated when the matrix size and the bit depth were independently modified for different phantom thicknesses. Neither image quality parameter suffered noticeable alteration under bit depth reduction from 10 to 8 bits. This result suggests that bit depth reduction, with a suitable algorithm, could be used to reduce file sizes without losing perceptible image quality information. When the matrix size was reduced from 1,024 × 1,024 to 512 × 512 pixels, however, a reduction of 17% to 25% in HCSR was observed, depending on phantom thickness, along with an increase of 27% in CNR. These findings should be taken into account, and further investigation with clinical images would be wise.
Image quality; Test object; Matrix size; Bit depth; Image metrics; Cardiology
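The two operations studied can be stated compactly: one common CNR definition is (mean ROI − mean background)/σ background, and a 10-to-8-bit reduction can be done by dropping the two least significant bits. A sketch under those assumptions (the paper's exact CNR formula and rounding scheme may differ):

```python
from statistics import mean, stdev

def cnr(roi, background):
    """Contrast-to-noise ratio: signal difference over background noise."""
    return abs(mean(roi) - mean(background)) / stdev(background)

def reduce_10_to_8_bits(pixels):
    """Map 10-bit values (0-1023) to 8-bit (0-255) by discarding 2 LSBs."""
    return [p >> 2 for p in pixels]

roi = [520, 530, 525, 535]   # test-object region, 10-bit pixel values
bg = [400, 410, 405, 395]    # background region

# CNR before and after bit-depth reduction: the two values are close,
# mirroring the abstract's finding that 10-to-8-bit reduction barely
# alters this metric.
cnr_10bit = cnr(roi, bg)
cnr_8bit = cnr(reduce_10_to_8_bits(roi), reduce_10_to_8_bits(bg))
```

Matrix-size reduction, by contrast, resamples the image and changes spatial resolution directly, which is why the abstract finds it far more damaging than bit-depth reduction.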
Surgeons have to deal with many devices from different vendors within the operating room during surgery. Independent communication standards are necessary for the system integration of these devices. For implantations, three new extensions of the Digital Imaging and Communications in Medicine (DICOM) standard make use of a common communication standard that may optimise one of the surgeon's presently very time-consuming daily tasks. The paper provides a brief description of these DICOM Supplements and gives recommendations to their application in practice based on workflows that are proposed to be covered by the new standard extension. Two of the workflows are described in detail and separated into phases that are supported by the new data structures. Examples for the application of the standard within these phases give an impression of the potential usage. Even if the presented workflows are from different domains, we identified a generic core that may benefit from the surgical DICOM Supplements. In some steps of the workflows, the surgical DICOM Supplements are able to replace or optimise conventional methods. Standardisation can only be a means for integration and interoperability. Thus, it can be used as the basis for new applications and system architectures. The influence on current applications and communication processes is limited. Additionally, the supplements provide the basis for further applications, such as the support of surgical navigation systems. Given the support of all involved stakeholders, it is possible to provide a benefit for surgeons and patients.
Digital Imaging and Communications in Medicine (DICOM); Infrastructure; Medical devices; Navigation
Attending radiologists routinely edit radiology trainee dictated preliminary reports as part of standard workflow models. Time constraints, high volume, and spatial separation may not always facilitate clear discussion of these changes with trainees. However, these edits can represent significant teaching moments that are lost if they are not communicated back to trainees. We created an electronic method for retrieving and displaying changes made to resident written preliminary reports by attending radiologists during the process of radiology report finalization. The Radiology Information System is queried. Preliminary and final radiology reports, as well as report metadata, are extracted and stored in a database indexed by accession number and trainee/radiologist identity. A web application presents to trainees their 100 most recent preliminary and final report pairs both side by side and in a “track changes” mode. Web audits showed regular use by trainees. Surveyed residents stated they compared reports for educational value, to improve future reports, and to improve patient care. Residents stated that they compared reports more frequently after deployment of this software solution and that regular assessment of their work using the Report Comparator allowed them to routinely improve future report quality and improved radiological understanding. In an era of increasing workload demands, trainee work hour restrictions, and decentralization of department resources (e.g., faculty, PACS), this solution helps to retain an important part of the educational experience that would otherwise run the risk of being lost and provides it to the trainees in an efficient and highly consumable manner.
Communication; Computers in medicine; Continuing medical education; Databases; Medical education; Efficiency; Electronic medical record; Electronic teaching file; Internship and residency; Internet; Interpretation errors; Medical records systems; PACS support; Radiology reporting
Diabetic retinopathy has become an increasingly important cause of blindness. Nevertheless, vision loss can be prevented through early detection of diabetic retinopathy and monitoring with regular examinations. Automatic detection of retinal abnormalities commonly targets microaneurysms, hemorrhages, hard exudates, and cotton wool spots. However, a more serious retinal abnormality has received far less research attention: neovascularization, in which new blood vessels grow due to an extensive lack of oxygen in the retinal capillaries. This paper shows how a combination of techniques, including image normalization, a compactness classifier, morphology-based operators, Gaussian filtering, and thresholding, was used to develop neovascularization detection. A function matrix box was added in order to distinguish neovascularization from natural blood vessels. A region-based neovascularization classification was attempted to assess diagnostic accuracy. The developed method was tested on images from different database sources with varying quality and image resolution. Specificity and sensitivity were 89.4% and 63.9%, respectively. The proposed approach yields encouraging results for future development.
Biomedical Image Analysis; Digital Image Processing; Image Segmentation; Feature selection; Diabetic Retinopathy; Neovascularization
Clinical picture archiving and communications systems provide convenient, efficient access to digital medical images from multiple modalities but can prove challenging to deploy, configure and use. MRIdb is a self-contained image database, particularly suited to the storage and management of magnetic resonance imaging data sets for population phenotyping. It integrates a mature image archival system with an intuitive web-based user interface that provides visualisation and export functionality. In addition, utilities for auditing, data migration and system monitoring are included in a virtual machine image that is easily deployed with minimal configuration. The result is a freely available turnkey solution, designed to support epidemiological and imaging genetics research. It allows the management of patient data sets in a secure, scalable manner without requiring the installation of any bespoke software on end users’ workstations. MRIdb is open-source software, available for download at http://www3.imperial.ac.uk/bioinfsupport/resources/software/mridb.
PACS; Digital imaging and communications in medicine (DICOM); MR imaging
In this paper, we introduce an ontology-based technology that bridges the gap between MR images on the one hand and knowledge sources on the other hand. The proposed technology allows the user to express interest in a body region by selecting this region on the MR image he or she is viewing with a mouse device. The proposed technology infers the intended body structure from the manual selection and searches the external knowledge source for pertinent information. This technology can be used to bridge the gap between image data in the clinical workflow and (external) knowledge sources that help to assess the case with increased certainty, accuracy, and efficiency. We evaluate an instance of the proposed technology in the neurodomain by means of a user study in which three neuroradiologists participated. The user study shows that the technology has high recall (>95%) when it comes to inferring the intended brain region from the participant’s manual selection. We are confident that this helps to increase the experience of browsing external knowledge sources.
Human–computer interaction; Image navigation; Image segmentation; Natural language processing; Artificial intelligence
In this paper, we describe and evaluate a system that extracts clinical findings and body locations from radiology reports and correlates them. The system uses Medical Language Extraction and Encoding System (MedLEE) to map the reports’ free text to structured semantic representations of their content. A lightweight reasoning engine extracts the clinical findings and body locations from MedLEE’s semantic representation and correlates them. Our study is illustrative for research in which existing natural language processing software is embedded in a larger system. We manually created a standard reference based on a corpus of neuro and breast radiology reports. The standard reference was used to evaluate the precision and recall of the proposed system and its modules. Our results indicate that the precision of our system is considerably better than its recall (82.32–91.37% vs. 35.67–45.91%). We conducted an error analysis and discuss here the practical usability of the system given its recall and precision performance.
Natural language processing; Knowledge base; Data extraction; BI-RADS
We introduce the concept, benefits, and general architecture for acquiring, storing, and displaying digital photographs along with medical imaging examinations. We also discuss a specific implementation built around an Android-based system for simultaneously acquiring digital photographs along with portable radiographs. By an innovative application of radiofrequency identification technology to radiographic cassettes, the system is able to maintain a tight relationship between these photographs and the radiographs within the picture archiving and communications system (PACS) environment. We provide a cost analysis demonstrating the economic feasibility of this technology. Since our architecture naturally integrates with patient identification methods, we also address patient privacy issues.
Patient identification; Electronic medical records; Medical imaging; Medical errors; Digital camera; DICOM; PACS
Radiology conferences offer participants the opportunity to ask experts questions, either through question and answer (Q and A) sessions or individually. Given the time limitations and intimidating circumstances, we incorporated conference text messaging (confexting) as a method of increasing interactivity between the audience and speakers. During a 5-day radiology conference, text messaging was used for anonymous interactivity between the audience and speakers during Q and A sessions. There were 324 text messages; 76 of these were follow-up statements or questions related to earlier text messages. Forty-two questions were submitted via paper notes. There was a general trend of an increasing number of text messages and a decreasing number of paper notes. The anonymous text messaging system was found to be an effective method of interaction between the audience and the speakers. The questions and answers could be presented in PowerPoint format at the formal Q and A sessions. Questions texted to the authors during their talks could be answered immediately or addressed in subsequent talks. Although some individuals find new technology difficult to embrace, confexting allows for interactivity and prompts discussion. Confexting is an effective method of audience-speaker interaction not previously used in a conference setting. The anonymity and asynchronous communication enable conference participants to submit more questions than in the traditional setting. Speakers can explain difficult concepts more thoroughly with additional slides at Q and A sessions or answer texted questions immediately during their talks.
Teaching; Continuing medical education; Computer hardware; Communication; Education; Medical; Experiential; Imaging informatics; User interface
Successful adoption of new technology can be accelerated by learning and applying the scientific principles of innovation diffusion. This is of particular importance to areas within medical imaging practice that have lagged in innovation, perhaps the most notable of which is reporting, which has remained relatively stagnant for over a century. While the theoretical advantages of structured reporting have been well documented throughout the medical imaging community, adoption to date has been tepid and largely relegated to the academic and breast imaging communities. Widespread adoption will likely require an alternative approach to innovation, one that addresses the heterogeneity and diversity of the practicing radiologist community along with ever-changing expectations in service delivery. The challenges and strategies for reporting innovation and adoption are discussed, with the goal of adapting and customizing new technology to the preferences and needs of individual end users.
Innovation adoption; Structured reporting; Medical imaging
In this paper, we present an effective method to determine the reference point of the symphysis pubis (SP) in an axial stack of CT images to facilitate image registration for pelvic cancer treatment. To reduce computational time, the proposed method consists of two detection stages: a coarse detector and a fine detector. The detectors check whether each image patch contains the characteristic structure of the SP. The coarse detector roughly determines the location of the reference point of the SP using three types of information: the location and intensity of an image patch, the SP appearance, and the geometrical structure of the SP. The fine detector then examines the area around the location found by the coarse detector to refine the location of the reference point. In our experiment, the average location error of the proposed method was 2.23 mm, roughly the side length of two pixels. Considering that the average location error by a radiologist is 0.77 mm, the proposed method finds the reference point quite accurately. Since it takes about 10 s to locate the reference point in a stack of CT images, the method is fast enough to use in real time to facilitate image registration of CT images for pelvic cancer treatment.
Computer-aided diagnosis (CAD); Image registration; Pattern recognition; Computed tomography; Symphysis pubis; Haar-like features; Biased discriminant analysis
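The coarse/fine split described above is a generic two-stage search: scan at a large stride, then re-scan a small neighborhood around the best coarse hit at stride 1. A minimal 1-D sketch with a toy score function (the real detectors score 2-D patches with Haar-like features and biased discriminant analysis; the step and radius here are illustrative):

```python
def coarse_to_fine(score, n, coarse_step=8, radius=8):
    """Find the argmax of score(i) over 0..n-1 using a coarse scan at
    stride coarse_step, then a fine scan around the best coarse hit."""
    coarse_best = max(range(0, n, coarse_step), key=score)
    lo = max(0, coarse_best - radius)
    hi = min(n, coarse_best + radius + 1)
    return max(range(lo, hi), key=score)       # fine scan at stride 1

# Toy score peaked at position 37 (stand-in for an SP-likeness score).
score = lambda i: -abs(i - 37)
print(coarse_to_fine(score, 100))  # finds 37 without scoring all 100 positions
```

The speedup is what makes the method fast enough for the near-real-time use the abstract reports, since the expensive fine scoring runs only in a small window.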
We have developed a method to quantify the shape of liver lesions in CT images and evaluated its performance for retrieval of images with similarly shaped lesions. We employed a machine learning method to combine several shape descriptors and defined the similarity measure for a pair of shapes as a weighted combination of distances calculated from each feature. We created a dataset of 144 simulated shapes, established several reference standards for similarity, and computed the optimal weights so that the retrieval result agrees best with the reference standard. We then evaluated our method on a clinical database of 79 portal-venous-phase CT liver images, for which we derived a reference standard of similarity from radiologists’ visual evaluation. Normalized Discounted Cumulative Gain (NDCG) was calculated to compare the computed ordering with the expected ordering based on the reference standard. For the simulated lesions, the mean NDCG values ranged from 91% to 100%, indicating that our methods for combining features were very accurate in representing true similarity. For the clinical images, the mean NDCG values were still around 90%, suggesting a strong correlation between the computed similarity and the independent similarity reference derived from the radiologists.
Image retrieval; Image analysis; Image processing
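NDCG compares the ranking produced by the computed similarity against the ideal ranking under the reference standard. A sketch using one common formulation, DCG = Σ relᵢ / log₂(i + 1), normalized by the DCG of the ideal ordering (the paper's exact gain and discount functions may differ):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance scores."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = sorted(ranked_relevances, reverse=True)
    return dcg(ranked_relevances) / dcg(ideal)

# Reference-standard relevance of the retrieved images, in retrieval order:
print(ndcg([3, 3, 2, 1, 0]))            # 1.0: matches the ideal ordering
print(round(ndcg([3, 2, 3, 0, 1]), 3))  # < 1.0: ordering is imperfect
```

An NDCG near 90%, as reported for the clinical images, thus means the retrieval ordering places most of the truly similar lesions near the top.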
The aims of this study were to (1) investigate the repeatability of volumes measured with the atlas-based method in each area of the brain and (2) validate our hypothesis that the repeatability of the measured volumes is improved by using smoothed images. T1-weighted magnetic resonance images were obtained from five healthy subjects using a 1.5-T scanner. We used Statistical Parametric Mapping 5 and the WFU PickAtlas software (based on the Talairach brain atlas). Volumes inside the region of interest (ROI) were measured in ten sets (five subjects × right and left) on six ROIs. Each set comprises five images (one subject × five 3D-T1WIs). The percentage change was defined as 100 × (measured volume − mean volume in each set)/(mean volume in each set). The average percentage changes using non-smoothed images for each ROI were as follows: gray matter, 0.482%; white matter, 0.375%; cerebrospinal fluid images, 0.731%; hippocampus, 0.864%; orbital gyrus, 1.692%; cerebellum posterior lobe, 0.854%. Using smoothed images with a large FWHM improved repeatability on the orbital gyrus. This is the first report of repeatability in each brain structure and of improved repeatability with smoothed images using the atlas-based method.
Atlas-based method; Brain volumetry; Magnetic resonance imaging; Repeatability; WFU PickAtlas; Brain imaging; Brain mapping; Brain morphology; Clinical application; Computer analysis; Image analysis
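The repeatability metric defined above is straightforward to compute per set of repeated measurements. A short sketch (the sample volumes are illustrative):

```python
from statistics import mean

def percentage_changes(volumes):
    """Percentage change of each measured volume from its set mean:
    100 * (volume - mean) / mean, one value per measurement."""
    m = mean(volumes)
    return [100.0 * (v - m) / m for v in volumes]

# One set: five repeated volume measurements (mL) of the same ROI.
set_volumes = [101.0, 99.0, 100.5, 99.5, 100.0]
print(percentage_changes(set_volumes))
```

Smaller spreads of these values indicate better repeatability, which is how the study ranks the brain structures and compares smoothed against non-smoothed images.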
Image processing is essential in the planning and verification of radiotherapy treatments. Before a radiotherapy treatment is applied, dosimetry planning must be performed. Usually, the planning is done by means of an X-ray volumetric analysis using computerized tomography, in which the area to be irradiated is marked out. During the treatment phase, it is necessary to place the patient under the particle accelerator exactly as considered in the dosimetry stage. Coarse alignment is achieved using fiducial markers placed over the patient’s skin as external references. Fine alignment is then provided by comparing a digitally reconstructed radiograph (DRR) from the planning stage with a portal image captured by the accelerator in the treatment stage. The preprocessing of DRR and portal images, as well as the minimization of the non-shared information between the two kinds of images, is mandatory for the correct operation of the image registration algorithm. For this purpose, mathematical morphology and image processing techniques have been used. The present work describes a fully automatic method to calculate more accurately the displacement of the couch necessary to place the patient exactly at the planned position. The proposed method is based on advanced image registration techniques. Preliminary results show a perfect match with the displacement estimated by the physician.
Radiotherapy; Image registration; Image feature enhancement; Biomedical image analysis
This paper presents a novel method which reconstructs any desired 3D image resolution from raw cone-beam CT data. X-ray attenuation through the object is approximated using ridgelet basis functions, which allow us to have multiresolution representation levels. Since the Radon data have preferential orientations by nature, a spherical wavelet transform is used to compute the ridgelet coefficients from the Radon shell data. The whole method uses the classical Grangeat’s relation for computing derivatives of the Radon data, which are then integrated, projected to a spherical wavelet representation, and back-reconstructed using a modified version of the well-known back-projection algorithm. Unlike previous reconstruction methods, this proposal uses a multiscale representation of the Radon data and therefore allows fast display of low-resolution data levels.
Computed tomography; 3D ridgelet; Spherical wavelets
The current array of PACS products and 3D visualization tools presents a wide range of options for applying advanced visualization methods in clinical radiology. The emergence of server-based rendering techniques creates new opportunities for raising the level of clinical image review. However, best-of-breed implementations of core PACS technology, volumetric image navigation, and application-specific 3D packages will, in general, be supplied by different vendors. Integration issues should be carefully considered before deploying such systems. This work presents a classification scheme describing five tiers of PACS modularity and integration with advanced visualization tools, with the goals of characterizing current options for such integration, providing an approach for evaluating such systems, and discussing possible future architectures. These five levels of increasing PACS modularity begin with what was until recently the dominant model for integrating advanced visualization into the clinical radiologist's workflow, consisting of a dedicated stand-alone post-processing workstation in the reading room. Introduction of context-sharing, thin clients using server-based rendering, archive integration, and user-level application hosting at successive levels of the hierarchy lead to a modularized imaging architecture, which promotes user interface integration, resource efficiency, system performance, supportability, and flexibility. These technical factors and system metrics are discussed in the context of the proposed five-level classification scheme.
PACS; 3D imaging (imaging, three-dimensional); Computer systems; Advanced visualization; Server-based rendering; Application hosting
A browser displaying Rich Internet Application (RIA) Web pages can be a powerful user interface for handling sophisticated data and applications. RIA solutions are therefore a potential method for viewing and manipulating most of the data generated in clinical processes, and can accomplish the main functionalities of general picture archiving and communication system (PACS) viewing systems. The aim of this study is to apply RIA technology to present medical images. Both Digital Imaging and Communications in Medicine (DICOM) and non-DICOM data can be handled by our RIA solutions. Some clinical data that are especially difficult to present using PACS viewing systems, such as ECG waveforms, pathology virtual slide microscopic images, and radiotherapy plans, are also demonstrated. Consequently, clinicians can use the browser as a single interface for acquiring all the clinical data located in different departments and information systems, and the data can be presented appropriately and processed freely by adopting RIA technologies.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-011-9374-1) contains supplementary material, which is available to authorized users.
PACS; Clinical application; Clinical image viewing; DICOM; XML; RIA
The evaluation of the carotid artery wall is essential for the diagnosis of cardiovascular pathologies or for the assessment of a patient’s cardiovascular risk. This paper presents a completely user-independent algorithm, which automatically extracts the far double line (lumen–intima and media–adventitia) in the carotid artery using an Edge Flow technique based on directional probability maps using the attributes of intensity and texture. Specifically, the algorithm traces the boundaries between the lumen and intima layer (line one) and between the media and adventitia layer (line two). The Carotid Automated Ultrasound Double Line Extraction System based on Edge-Flow (CAUDLES-EF) is characterized and validated by comparing the output of the algorithm with the manual tracing boundaries carried out by three experts. We also benchmark our new technique against the two other completely automatic techniques (CALEXia and CULEXsa) we previously published. Our multi-institutional database consisted of 300 longitudinal B-mode carotid images with normal and pathologic arteries. We compared our new method with the previous methods; the mean ± standard deviation error for CALEXia, CULEXsa, and CAUDLES-EF was 0.134 ± 0.088, 0.074 ± 0.092, and 0.043 ± 0.097 mm, respectively. Our IMT was slightly underestimated with respect to the ground truth IMT, but showed a uniform behavior over the entire database. Regarding the Figure of Merit (FoM), CALEXia and CULEXsa showed values of 84.7% and 91.5%, respectively, while our new approach, CAUDLES-EF, performed the best at 94.8%, a good improvement over the previous methods.
Carotid artery; Ultrasound; Multiresolution; Edge flow; Localization; Intima–media thickness; Hausdorff distance; Polyline distance; Segmentation; Automated measurement; Carotid imaging; Intima-media thickness measurement; Edge-flow operator
Aortoiliac and lower extremity arterial atherosclerotic plaque burden is a risk factor for the development of visceral and peripheral ischemic and aneurysmal vascular disease. While prior research allows automated quantification of calcified plaque in these body regions using CT angiograms, no automated method exists to quantify soft plaque. We developed an automatic algorithm that defines the outer wall contour and wall thickness of vessels to quantify non-calcified plaque in CT angiograms of the chest, abdomen, pelvis, and lower extremities. The algorithm encodes the search space as a constrained graph and calculates the outer wall contour by deriving a minimum cost path through the graph, following the visible outer wall contour while minimizing path tortuosity. Our algorithm was statistically equivalent to a reference standard made by two reviewers. Absolute error was 1.9 ± 2.3% compared to the inter-observer variability of 3.9 ± 3.6%. Wall thickness in vessels with atherosclerosis was 3.4 ± 1.6 mm compared to 1.2 ± 0.4 mm in normal vessels. The algorithm shows promise as a tool for quantification of non-calcified plaque in CT angiography. When combined with previous research, our method has the potential to quantify both non-calcified and calcified plaque in all clinically significant systemic arteries, from the thoracic aorta to the arteries of the calf, over a wide range of diameters. This algorithm has the potential to enable risk stratification of patients and facilitate investigations into the relationships between asymptomatic atherosclerosis and a variety of behavioral, physiologic, pathologic, and genotypic conditions.
Soft plaque; Quantification; Algorithm; CT angiography; Arteries; Graph method
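The wall-contour step above reduces to a shortest-path problem: nodes are candidate wall positions, edge costs combine image evidence with a tortuosity penalty, and the contour is the minimum cost path through the constrained graph. A minimal sketch with Dijkstra's algorithm on a small weighted graph (the node names and cost model are illustrative):

```python
import heapq

def min_cost_path(graph, start, goal):
    """Dijkstra's algorithm. graph: node -> list of (neighbor, cost).
    Returns (total_cost, path) for the cheapest start -> goal path."""
    heap = [(0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (cost + w, nbr, path + [nbr]))
    return None  # goal unreachable

# Columns = positions along the vessel; rows = candidate wall radii.
# Edge cost = image term plus a penalty for jumping between radii (tortuosity).
graph = {
    "a": [("b1", 1), ("b2", 4)],
    "b1": [("c", 5)],
    "b2": [("c", 1)],
    "c": [],
}
print(min_cost_path(graph, "a", "c"))  # (5, ['a', 'b2', 'c'])
```

Because the path cost penalizes tortuosity, the recovered contour stays smooth even where the image evidence for the outer wall is locally weak.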
Glomerular filtration rate (GFR) is a commonly accepted standard estimation of renal function. Gamma camera-based methods for estimating renal uptake of 99mTc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used. Of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis performed by manually drawing a region of interest (ROI) over each kidney. The GFR value can then be computed automatically from the scintigraphic determination of 99mTc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time consuming, and highly dependent on operator skill. We therefore developed a fully automatic renal ROI estimation system, based on the temporal changes in intensity counts, an intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations, that can locate the kidney area and obtain the GFR value from a 99mTc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were used. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours and included in area error and boundary error analyses. There was high correlation between two physicians’ manual contours and the contours obtained by our approach. For the area error analysis, the mean true positive area overlap was 91%, the mean false negative rate was 13.4%, and the mean false positive rate was 9.3%. The boundary error was 1.6 pixels.
The GFR calculated using this automatic computer-aided approach is reproducible and may be applied to help nuclear medicine physicians in clinical practice.
Glomerular filtration rate; Diethylenetriaminepentaacetic acid; Image contrast enhancement; Adaptive thresholding; Morphology operation; Renogram
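Two of the building blocks named above, thresholding and morphological cleanup, can be sketched in a few lines on a 2-D intensity grid. This sketch uses a global-mean threshold as a stand-in for the paper's locally adaptive rule, and a 3×3 binary erosion; the image and structuring element are illustrative, and the full system adds contrast enhancement and temporal analysis.

```python
def adaptive_threshold(img, offset=0):
    """Binarize: a pixel is foreground if it exceeds the mean + offset.
    (Global mean here, a stand-in for a locally adaptive rule.)"""
    flat = [p for row in img for p in row]
    t = sum(flat) / len(flat) + offset
    return [[1 if p > t else 0 for p in row] for row in img]

def erode(mask):
    """Binary erosion with a 3x3 structuring element (removes speckle)."""
    h, w = len(mask), len(mask[0])
    def all_on(y, x):
        return all(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return [[1 if all_on(y, x) else 0 for x in range(w)] for y in range(h)]

# Bright "kidney" blob plus one stray bright pixel; erosion keeps only
# the blob core and discards the isolated noise.
img = [
    [10, 10, 10, 10, 10],
    [10, 90, 90, 90, 10],
    [10, 90, 90, 90, 10],
    [10, 90, 90, 90, 10],
    [10, 10, 10, 80, 10],
]
core = erode(adaptive_threshold(img))
```

In the full pipeline the cleaned mask, rather than a hand-drawn ROI, supplies the kidney counts from which the Gates GFR value is computed.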
One of the new challenges of information technology in the medical world is the protection and authentication of a variety of digital medical files, datasets, and images. In this work, the ability of magnetic resonance imaging (MRI) slice sequences to hide digital data is investigated, specifically the case in which the hidden data are the regions of interest (ROI) of the MRI slices. The regions of non-interest (RONI) are used as cover. The hiding capacity of the whole sequence is taken into account. Any ROI-targeted tampering attempt can be detected, and the original image can be self-restored (under certain conditions) by extracting the ROI from the RONI.
Medical imaging; MRI; ROI; Authentication; Self-correction; Integrity; JPEG2000
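The scheme above stores ROI data inside RONI pixels so the ROI can later be verified and restored. A minimal sketch of the underlying bit-hiding principle using least-significant-bit embedding with a round-trip check (the paper's actual scheme involves JPEG2000 and sequence-wide capacity accounting; the pixel values here are illustrative):

```python
def embed(cover_pixels, payload_bits):
    """Hide payload bits in the LSBs of the cover (RONI) pixels."""
    assert len(payload_bits) <= len(cover_pixels), "insufficient RONI capacity"
    stego = list(cover_pixels)
    for i, bit in enumerate(payload_bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite the LSB only
    return stego

def extract(stego_pixels, n_bits):
    """Recover the hidden bits from the stego pixels' LSBs."""
    return [p & 1 for p in stego_pixels[:n_bits]]

roi_bits = [1, 0, 1, 1, 0, 0, 1, 0]            # bits of the ROI to protect
roni = [200, 201, 198, 197, 202, 199, 196, 203, 205, 204]
stego = embed(roni, roi_bits)
assert extract(stego, len(roi_bits)) == roi_bits          # lossless recovery
assert all(abs(a - b) <= 1 for a, b in zip(stego, roni))  # RONI barely changed
```

Tamper detection follows from the same round trip: if the extracted ROI no longer matches the displayed ROI, the image has been altered, and the hidden copy supplies the restoration.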
The pioneering work performed in the social sciences on the diffusion of innovation can be applied to medical imaging and shed valuable light on how innovation is analyzed and adopted within the population of end users. Successful innovation must take into account unique stakeholder differences, changes in communication and social interactions, and shifting priorities in market economics. The dramatic changes currently underway in medical imaging practice provide unique innovation opportunities to those individuals and companies that can utilize this knowledge and effect change through objective and reproducible means. Successful innovation should rely upon data-driven objective analysis, which can scientifically validate the inherent strengths and weaknesses of the innovation when compared with the idea or technology it supersedes.
Innovation; Technology development; Data mining