Results 1–25 of 211 (search: issn:1618-727)
 

1.  Diagnosis of Solid Breast Tumors Using Vessel Analysis in Three-Dimensional Power Doppler Ultrasound Images 
Journal of Digital Imaging  2013;26(4):731-739.
This study aims to evaluate whether the distribution of vessels inside and adjacent to the tumor region in three-dimensional (3-D) power Doppler ultrasonography (US) images can be used to differentiate benign from malignant breast tumors. 3-D power Doppler US images of 113 solid breast masses (60 benign and 53 malignant) were used in this study. Blood vessels within and adjacent to the tumor were estimated individually in the 3-D power Doppler US images for differential diagnosis. Six features were evaluated: volume of vessels, vascularity index, volume of tumor, vascularity index in the tumor, vascularity index in normal tissue, and vascularity index in the surrounding region within 2 cm of the tumor. A neural network was then used to classify tumors using these vascular features. Receiver operating characteristic (ROC) curve analysis and Student’s t test were used to assess performance. All six proposed vascular features were statistically significant (p < 0.001) for classifying breast tumors as benign or malignant. The AZ value (area under the ROC curve) for the classification result was 0.9138. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the diagnosis based on all six proposed features were 82.30 % (93/113), 86.79 % (46/53), 78.33 % (47/60), 77.97 % (46/59), and 87.04 % (47/54), respectively. The p value for the difference in AZ between the proposed method and the conventional vascularity index method (z test) was 0.04.
doi:10.1007/s10278-012-9556-5
PMCID: PMC3705028  PMID: 23296913
3-D ultrasound; Power Doppler ultrasound; Breast tumor; Vascularity
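Below is a minimal Python sketch, for illustration only, of how a region-based vascularity index such as those described in entry 1 could be computed from binary vessel and tumor masks; the toy masks, the 10-voxel margin standing in for the 2-cm surrounding region, and the helper names are assumptions, not the authors' implementation.

import numpy as np
from scipy import ndimage

def vascularity_index(vessel_mask, region_mask):
    """Percentage of region voxels that are vessel voxels."""
    region_voxels = np.count_nonzero(region_mask)
    if region_voxels == 0:
        return 0.0
    return 100.0 * np.count_nonzero(vessel_mask & region_mask) / region_voxels

def surrounding_region(tumor_mask, margin_voxels):
    """Shell of voxels within `margin_voxels` of the tumor but outside it."""
    dilated = ndimage.binary_dilation(tumor_mask, iterations=margin_voxels)
    return dilated & ~tumor_mask

# Toy data standing in for segmented 3-D power Doppler volumes.
rng = np.random.default_rng(0)
tumor = np.zeros((64, 64, 64), dtype=bool)
tumor[20:40, 20:40, 20:40] = True
vessels = rng.random(tumor.shape) > 0.97             # sparse "vessel" voxels
shell = surrounding_region(tumor, margin_voxels=10)  # stand-in for the 2-cm margin

print("VI in tumor (%):", vascularity_index(vessels, tumor))
print("VI in surrounding shell (%):", vascularity_index(vessels, shell))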
2.  Automated Detection and Grading of Diabetic Maculopathy in Digital Retinal Images 
Journal of Digital Imaging  2013;26(4):803-812.
Diabetic maculopathy is a retinal abnormality in which a diabetic patient suffers severe vision loss because the macula is affected. It impairs the person's central vision and causes blindness in severe cases. In this article, we propose an automated medical system for grading diabetic maculopathy that will assist ophthalmologists in early detection of the disease. The proposed system extracts the macula from a digital retinal image using the vascular structure and the optic disc location. It creates a binary map of possible exudate regions using filter banks and formulates a detailed feature vector for all regions. The system applies a Gaussian mixture model-based classifier to grade the retinal image into the different stages of maculopathy using the macula coordinates and the exudate feature set. The proposed system is evaluated using publicly available standard retinal image databases, and its results are compared with other methods in the literature in terms of sensitivity, specificity, positive predictive value, and accuracy. Our system gives higher values than the others on the same databases, which makes it suitable as an automated medical system for grading diabetic maculopathy.
doi:10.1007/s10278-012-9549-4
PMCID: PMC3705025  PMID: 23325123
Diabetic maculopathy; Exudates; Macula; Feature extraction; Gaussian mixture model
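As a hypothetical illustration of the classification stage described in entry 2, the sketch below fits one Gaussian mixture model per class to feature vectors of candidate exudate regions and labels new regions by comparing class log-likelihoods; the feature values, two-class setup, and component count are invented assumptions, not the authors' trained system.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy feature vectors (e.g., mean intensity, area, edge strength) per region.
exudate_feats = rng.normal(loc=[0.8, 40.0, 0.6], scale=[0.1, 10.0, 0.1], size=(200, 3))
background_feats = rng.normal(loc=[0.4, 15.0, 0.2], scale=[0.1, 5.0, 0.1], size=(200, 3))

# Fit one GMM per class.
gmm_exudate = GaussianMixture(n_components=2, random_state=0).fit(exudate_feats)
gmm_background = GaussianMixture(n_components=2, random_state=0).fit(background_feats)

def classify(region_features):
    """Label 1 = exudate, 0 = background, by comparing class log-likelihoods."""
    ll_ex = gmm_exudate.score_samples(region_features)
    ll_bg = gmm_background.score_samples(region_features)
    return (ll_ex > ll_bg).astype(int)

test = np.vstack([exudate_feats[:5], background_feats[:5]])
print(classify(test))  # expected: mostly 1s followed by 0s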
3.  Semi-automatic Segmentation of Brain Tumors Using Population and Individual Information 
Journal of Digital Imaging  2013;26(4):786-796.
Efficient segmentation of tumors in medical images is of great practical importance for early diagnosis and radiation planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is used to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by a k-nearest-neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, morphological operations are performed to improve the segmentation results. Our dataset consists of 137 brain MR images, 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can achieve satisfactory results even when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
doi:10.1007/s10278-012-9568-1
PMCID: PMC3705006  PMID: 23319111
Neighborhood components analysis; k-nearest neighborhood; Graph cuts; Distance metric learning
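The pixel-probability step in entry 3 can be illustrated with the hypothetical sketch below: a k-nearest-neighbor classifier trained on labeled pixel features yields foreground probabilities whose negative log can serve as the data term of a graph-cut energy. The features, the value of k, and the identity metric used here are placeholder assumptions (the paper learns its metrics with neighborhood components analysis), and the graph-cut optimization itself is omitted.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
# Toy high-dimensional features for labeled pixels (1 = tumor, 0 = background).
X_train = np.vstack([rng.normal(1.0, 0.3, (300, 8)), rng.normal(0.0, 0.3, (300, 8))])
y_train = np.array([1] * 300 + [0] * 300)

# A learned metric could be folded in by linearly transforming the features;
# the identity transform is used here for simplicity.
knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)

X_pixels = rng.normal(0.5, 0.5, (10, 8))       # features of unlabeled pixels
p_fg = knn.predict_proba(X_pixels)[:, 1]       # P(pixel belongs to the tumor)
data_term = -np.log(np.clip(p_fg, 1e-6, 1.0))  # graph-cut data cost for the "tumor" label
print(np.round(p_fg, 3))
print(np.round(data_term, 3))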
4.  Automatic Retrieval of Bone Fracture Knowledge Using Natural Language Processing 
Journal of Digital Imaging  2012;26(4):709-713.
Natural language processing (NLP) techniques that extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is an opportunity to implement NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR, potentially facilitating evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system that extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implemented our NLP system in an HTML5 web application to demonstrate a proof-of-concept feedback system that retrieves bone fracture knowledge in real time.
doi:10.1007/s10278-012-9531-1
PMCID: PMC3705014  PMID: 23053906
Natural language processing; Decision support; Information retrieval of bone fractures
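The following sketch is a hypothetical, much-simplified illustration of the real-time concept extraction and knowledge retrieval described in entry 4: dictated text is matched against small fracture and anatomy vocabularies, and matching pairs look up knowledge entries. The vocabularies, knowledge entries, and matching rules are invented for illustration and are not the authors' validated NLP pipeline.

import re

FRACTURE_TERMS = {"fracture", "avulsion", "comminuted", "displaced"}
ANATOMY_TERMS = {"radius", "ulna", "scaphoid", "tibia", "femur"}
KNOWLEDGE = {
    ("fracture", "scaphoid"): "Scaphoid fracture: assess proximal pole involvement (AVN risk).",
    ("fracture", "radius"): "Distal radius fracture: note intra-articular extension for classification.",
}

def extract_concepts(dictated_text):
    tokens = set(re.findall(r"[a-z]+", dictated_text.lower()))
    return tokens & FRACTURE_TERMS, tokens & ANATOMY_TERMS

def retrieve_knowledge(dictated_text):
    fractures, anatomy = extract_concepts(dictated_text)
    hits = [KNOWLEDGE[(f, a)] for f in fractures for a in anatomy if (f, a) in KNOWLEDGE]
    return hits or ["No matching knowledge entry."]

print(retrieve_knowledge("There is a nondisplaced fracture of the scaphoid waist."))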
5.  Automatic Intracranial Space Segmentation for Computed Tomography Brain Images 
Journal of Digital Imaging  2012;26(3):563-571.
Craniofacial disorders are routinely diagnosed using computed tomography imaging. Corrective surgery is often performed early in life to restore the skull to a more normal shape. In order to quantitatively assess the shape change due to surgery, we present an automated method for intracranial space segmentation. The method uses a two-stage approach that first initializes the segmentation with a cascade of mathematical morphology operations. This segmentation is then refined with a level-set-based approach that ensures that low-contrast boundaries, where bone is absent, are completed smoothly. We demonstrate the method on a dataset of 43 images and show that it produces consistent and accurate results.
doi:10.1007/s10278-012-9529-8
PMCID: PMC3649046  PMID: 23129541
Intracranial space segmentation; Computed tomography; Level set methods; Mathematical morphology
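The first stage described in entry 5, initializing the intracranial space with a cascade of mathematical morphology operations, might look roughly like the sketch below. The HU threshold, structuring-element iterations, and the synthetic "skull" slice are illustrative assumptions rather than the authors' pipeline, and the level-set refinement stage is omitted.

import numpy as np
from scipy import ndimage

def initialize_intracranial_mask(ct_slice_hu):
    bone = ct_slice_hu > 300                                   # rough bone threshold in HU
    skull = ndimage.binary_closing(bone, iterations=3)         # close sutures and small gaps
    head = ndimage.binary_fill_holes(skull)                    # fill the cranial cavity
    interior = head & ~skull                                   # inside-the-skull region
    interior = ndimage.binary_opening(interior, iterations=2)  # remove small debris
    labels, n = ndimage.label(interior)                        # keep largest connected component
    if n == 0:
        return interior
    sizes = ndimage.sum(interior, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Toy example: a synthetic bone ring around a brain-like cavity.
slice_hu = np.full((128, 128), -1000.0)
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
slice_hu[(r > 40) & (r < 48)] = 1000.0   # "skull"
slice_hu[r <= 40] = 30.0                 # "brain"
mask = initialize_intracranial_mask(slice_hu)
print("intracranial pixels:", int(mask.sum()))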
6.  Automated, Foot-Bone Registration Using Subdivision-Embedded Atlases for Spatial Mapping of Bone Mineral Density 
Journal of Digital Imaging  2012;26(3):554-562.
We present an atlas-based registration method for bones segmented from quantitative computed tomography (QCT) scans, with the goal of mapping their interior bone mineral densities (BMDs) volumetrically. We introduce a new type of deformable atlas, called a subdivision-embedded atlas, which consists of a control grid represented as a tetrahedral subdivision mesh and a template bone surface embedded within the grid. Compared to a typical lattice-based deformation grid, the subdivision control grid has a relatively small number of degrees of freedom tailored to the shape of the bone, which allows efficient fitting onto subjects. Compared with previous subdivision atlases, the novelty of our atlas lies in the addition of the embedded template surface, which further increases the accuracy of the fitting. Using this new atlas representation, we developed an efficient and fully automated pipeline for registering atlases of 12 tarsal and metatarsal bones to a segmented QCT scan of a human foot. Our evaluation shows that the mapping of BMD enabled by the registration is consistent for bones in repeated scans, and that the regional BMD automatically computed from the mapping is not significantly different from expert annotations. The results suggest that our improved subdivision-based registration method is a reliable, efficient way to replace manual labor for measuring regional BMD in foot bones in QCT scans.
doi:10.1007/s10278-012-9524-0
PMCID: PMC3649048  PMID: 23090209
Bone mineral density; Registration; Atlas; Subdivision
7.  Spatial Distribution of Wall Shear Stress in Common Carotid Artery by Color Doppler Flow Imaging 
Journal of Digital Imaging  2012;26(3):466-471.
The purpose of this study is to provide a novel approach for measuring the spatial distribution of wall shear stress (WSS) in the common carotid artery in vivo. WSS distributions were determined by digital image processing of color Doppler flow imaging (CDFI) in 50 healthy volunteers. To evaluate the feasibility of the spatial distribution, the mean values of the WSS distribution were compared to the results of the conventional WSS calculation method (the Hagen–Poiseuille formula). In our study, the mean WSS from the spatial distribution in the 50 healthy volunteers was 6.91 ± 1.20 dyne/cm², while it was 7.13 ± 1.24 dyne/cm² by the Hagen–Poiseuille approach. The difference was not statistically significant (t = −0.864, p = 0.604), supporting the feasibility of the spatial distribution of WSS. Moreover, this approach provides a three-dimensional distribution of shear stress and a fusion image of shear stress with the ultrasonic image for each volunteer, which makes WSS “visible”. In conclusion, the spatial distribution of WSS can be used for WSS calculation in vivo and provides more detailed WSS values than the Hagen–Poiseuille formula.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-012-9505-3) contains supplementary material, which is available to authorized users.
doi:10.1007/s10278-012-9505-3
PMCID: PMC3649053  PMID: 22832893
Atherosclerosis; Common carotid artery; Wall shear stress; Color Doppler flow imaging; DICOM
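For reference, the conventional Hagen–Poiseuille estimate that entry 7 compares against can be worked through as in the short sketch below, under the usual assumption of steady, fully developed parabolic flow with WSS = 4·η·V_peak / D, where V_peak is the centerline (peak) velocity. The viscosity, velocity, and diameter values are illustrative assumptions, not data from the study.

eta = 0.035          # blood viscosity, dyne*s/cm^2 (~0.0035 Pa*s), assumed
v_peak_cm_s = 60.0   # peak centerline velocity in cm/s, illustrative
diameter_cm = 0.6    # common carotid artery diameter in cm, illustrative

wss = 4.0 * eta * v_peak_cm_s / diameter_cm
print(f"WSS = {wss:.2f} dyne/cm^2")   # 14.00 dyne/cm^2 for these illustrative inputs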
8.  Extracting Fuzzy Classification Rules from Texture Segmented HRCT Lung Images 
Journal of Digital Imaging  2012;26(2):227-238.
Automatic tools for detection and identification of lung and lesion regions in high-resolution CT (HRCT) are becoming increasingly important both for diagnosis and for delivering high-precision radiation therapy. However, developing robust and interpretable classifiers still presents a challenge, especially in the case of non-small cell lung carcinoma (NSCLC) patients. In this paper, we attempt to devise such a classifier by extracting fuzzy rules from texture-segmented regions of HRCT images of NSCLC patients. A fuzzy inference system (FIS) is constructed by applying a feature extraction procedure to overlapping regions of the same organs and deriving simple if–then rules, so that more linguistically interpretable decisions can be implemented. The proposed method has been tested on 138 regions extracted from CT scan images of patients with lung cancer. Taking the two tissue classes C1 (healthy tissue) and C2 (lesion) as negative and positive, respectively, preliminary results give an AUC = 0.98 for lesions and AUC = 0.93 for healthy tissue, with an optimal operating point of sensitivity = 0.96 and specificity = 0.98 for lesions, and sensitivity = 0.99 and specificity = 0.94 for healthy tissue. Finally, the following results were obtained: false-negative rate (FNR) = 6 % (C1), FNR = 2 % (C2), false-positive rate (FPR) = 4 % (C1), FPR = 3 % (C2), true-positive rate (TPR) = 94 % (C1), and TPR = 98 % (C2).
doi:10.1007/s10278-012-9514-2
PMCID: PMC3597946  PMID: 22890442
NSCLC; IGRT; FIS; Rule-based classification
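A tiny fuzzy inference step in the spirit of entry 8 is sketched below: triangular membership functions on two texture features and two if–then rules that score how lesion-like a region is. The feature names, membership shapes, rules, and defuzzification are illustrative assumptions, not the authors' FIS.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def lesion_score(contrast, homogeneity):
    # Fuzzify inputs into illustrative linguistic terms.
    contrast_high = tri(contrast, 0.4, 0.8, 1.2)
    homogeneity_low = tri(homogeneity, -0.2, 0.1, 0.5)
    contrast_low = tri(contrast, -0.2, 0.1, 0.5)
    homogeneity_high = tri(homogeneity, 0.4, 0.8, 1.2)
    # Rule 1: IF contrast is high AND homogeneity is low THEN lesion.
    r1 = min(contrast_high, homogeneity_low)
    # Rule 2: IF contrast is low AND homogeneity is high THEN healthy.
    r2 = min(contrast_low, homogeneity_high)
    # Crude defuzzification: relative strength of the "lesion" rule.
    return r1 / (r1 + r2) if (r1 + r2) > 0 else 0.5

print(lesion_score(contrast=0.75, homogeneity=0.15))  # close to 1 -> lesion-like
print(lesion_score(contrast=0.15, homogeneity=0.85))  # close to 0 -> healthy-like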
9.  UMMPerfusion: an Open Source Software Tool Towards Quantitative MRI Perfusion Analysis in Clinical Routine 
Journal of Digital Imaging  2012;26(2):344-352.
Our aim was to develop a generic open-source MRI perfusion analysis tool for quantitative parameter mapping that can be used in a clinical workflow, together with methods for quality management of perfusion data. We implemented a classic, pixel-by-pixel deconvolution approach to quantify T1-weighted contrast-enhanced dynamic MR imaging (DCE-MRI) perfusion data as an OsiriX plug-in. It features parallel computing capabilities and an automated reporting scheme for quality management, and by design it can easily be extended to other perfusion algorithms. The results obtained are saved as DICOM objects and added directly to the patient study. The plug-in was evaluated on ten MR perfusion data sets of the prostate and a calibration data set by comparing the obtained parametric maps (plasma flow, volume of distribution, and mean transit time) to a widely used reference implementation in IDL. For all data, the parametric maps could be calculated, and the plug-in worked correctly and stably. On average, deviations of 0.032 ± 0.02 ml/100 ml/min for plasma flow, 0.004 ± 0.0007 ml/100 ml for volume of distribution, and 0.037 ± 0.03 s for mean transit time were observed between our implementation and the reference implementation. By using computer hardware with eight CPU cores, the calculation time could be reduced by a factor of 2.5. We successfully developed an open-source OsiriX plug-in for T1-DCE-MRI perfusion analysis in a routine, quality-managed clinical environment. Using model-free deconvolution, it allows perfusion analysis in various clinical applications. With our plug-in, information about measured physiological processes can be obtained and transferred into clinical practice.
doi:10.1007/s10278-012-9510-6
PMCID: PMC3597952  PMID: 22832894
Algorithms; Perfusion; Image processing; Computer-assisted; Magnetic resonance imaging
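Model-free deconvolution of the kind referenced in entry 9 can be illustrated, very roughly, with the single-pixel sketch below: the tissue curve is modeled as plasma flow times the convolution of the arterial input function with a residue function, and the flow-scaled residue function is recovered by truncated-SVD inversion. All curves, numbers, and the truncation threshold are invented assumptions and do not reproduce the plug-in's actual algorithm or units.

import numpy as np

dt = 1.0                               # sampling interval in seconds
t = np.arange(0, 60, dt)
aif = (t / 8.0) * np.exp(-t / 8.0)     # toy arterial input function
true_pf = 0.8                          # "plasma flow" scale, arbitrary units
residue = np.exp(-t / 12.0)            # toy residue function R(t), R(0) = 1

# Forward model: tissue curve = PF * dt * (AIF convolved with R), plus noise.
tissue = true_pf * dt * np.convolve(aif, residue)[: len(t)]
tissue += np.random.default_rng(3).normal(0, 1e-3, len(t))

# Lower-triangular convolution matrix A with A[i, j] = AIF[i - j] * dt.
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(len(t))]
                   for i in range(len(t))])

# Truncated-SVD inversion regularizes against noise amplification.
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.1 * s.max(), 1.0 / s, 0.0)
pf_times_R = Vt.T @ (s_inv * (U.T @ tissue))

print("recovered PF*R(0):", round(float(pf_times_R.max()), 3), "(true PF:", true_pf, ")")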
10.  Segmentation and Visualization of a Large, High-Resolution Micro-CT Data of Mice 
Journal of Digital Imaging  2012;26(2):302-308.
High-resolution, large datasets were acquired to improve the understanding of murine bone physiology. The purpose of this work is to present the challenges and solutions in segmenting and visualizing bone in such large datasets acquired using micro-CT scans of mice. The analyzed dataset is more than 50 GB in size, with more than 6,000 slices of 2,048 × 2,048 pixels. The study was performed to automatically measure the bone mineral density (BMD) of the entire skeleton. A global Renyi entropy (GREP) method was initially used for bone segmentation, but it consistently oversegmented the skeletal region. A new method called adaptive local Renyi entropy (ALREP) is proposed to improve the segmentation results. To study the efficacy of ALREP, manual segmentation was also performed. Finally, a specialized high-end remote visualization system, together with the software VirtualGL, was used to perform remote rendering of this large dataset. GREP overestimated the bone cross-section by around 30 % compared with ALREP. The manual segmentation process took 6,300 min for 6,300 slices, while ALREP took only 150 min. Automatic image processing with the ALREP method may therefore facilitate BMD measurement of the entire skeleton in significantly reduced time compared with the manual process.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-012-9498-y) contains supplementary material, which is available to authorized users.
doi:10.1007/s10278-012-9498-y
PMCID: PMC3597961  PMID: 22766797
Segmentation; Visualization; High resolution; Bone mineral density
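Global Rényi-entropy thresholding in the spirit of entry 10 can be sketched as below: the gray-level threshold is chosen to maximize the sum of the Rényi entropies (order alpha) of the background and foreground histogram partitions. The alpha value, bin count, and toy intensity data are illustrative assumptions; the paper's adaptive local variant (ALREP) is not reproduced here.

import numpy as np

def renyi_entropy(p, alpha):
    p = p[p > 0]
    if p.size == 0:
        return 0.0
    p = p / p.sum()                      # renormalize the partition
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_threshold(image, alpha=2.0, bins=256):
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    best_t, best_score = edges[1], -np.inf
    for k in range(1, bins):
        score = renyi_entropy(p[:k], alpha) + renyi_entropy(p[k:], alpha)
        if score > best_score:
            best_t, best_score = edges[k], score
    return best_t

rng = np.random.default_rng(4)
soft_tissue = rng.normal(100, 15, 5000)   # toy "soft tissue" intensities
bone = rng.normal(220, 20, 1000)          # toy "bone" intensities
img = np.concatenate([soft_tissue, bone])
print("selected threshold:", round(renyi_threshold(img), 1))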
11.  Segmentation of the Common Carotid Artery Walls Based on a Frequency Implementation of Active Contours 
Journal of Digital Imaging  2012;26(1):129-139.
Atherosclerosis is one of the most widespread cardiovascular diseases today. Although it may go unnoticed for years, it may also suddenly trigger severe conditions such as stroke, embolism or ischemia. Early detection of atherosclerosis can therefore prevent the adult population from suffering more serious pathologies. The intima–media thickness (IMT) of the common carotid artery (CCA) has been used as an early and reliable indicator of atherosclerosis for years. The IMT is manually computed from ultrasound images, a process that can be repeated as many times as necessary (over different ultrasound images of the same patient) but that is also prone to errors. With the aim of reducing the inter-observer variability and the subjectivity of the measurement, a fully automatic computer-based method based on ultrasound image processing and a frequency-domain implementation of active contours is proposed. The images used in this work were obtained with the same ultrasound scanner (Philips iU22 Ultrasound System) but with different spatial resolutions. The proposed solution extracts not only the IMT but also the CCA diameter, which is less relevant than the IMT for predicting the evolution of atherosclerosis but is still a statistically interesting piece of information for doctors when determining cardiovascular risk. The results of the proposed method have been validated by doctors and are visually and numerically satisfactory when the medical measurements are taken as ground truth, with a maximum deviation of only 3.4 pixels (0.0248 mm) for the IMT.
doi:10.1007/s10278-012-9481-7
PMCID: PMC3553363  PMID: 22552539
Automated measurement; Image segmentation; Ultrasound; Intima–media thickness
12.  Skull Stripping of Neonatal Brain MRI: Using Prior Shape Information with Graph Cuts 
Journal of Digital Imaging  2012;25(6):802-814.
In this paper, we propose a novel technique for skull stripping of infant (neonatal) brain magnetic resonance images using prior shape information within a graph cut framework. Skull stripping plays an important role in brain image analysis and is a major challenge for neonatal brain images. Popular methods like the brain surface extractor (BSE) and brain extraction tool (BET) do not produce satisfactory results for neonatal images due to poor tissue contrast, weak boundaries between brain and non-brain regions, and low spatial resolution. Inclusion of prior shape information helps in accurate identification of brain and non-brain tissues. The prior shape information is obtained from a set of labeled training images. The probability of a pixel belonging to the brain is obtained from the prior shape mask and included in the penalty term of the cost function. An extra smoothness term based on gradient information helps identify the weak boundaries between the brain and non-brain regions. Experimental results on real neonatal brain images show that, compared to BET, BSE, and other methods, our method achieves superior segmentation performance for neonatal brain images and comparable performance for adult brain images.
doi:10.1007/s10278-012-9460-z
PMCID: PMC3491156  PMID: 22354704
Shape prior; Graph cuts; Neonatal; Brain; MRI; Segmentation; Gradient
13.  An Improved Tamper Detection and Localization Scheme for Volumetric DICOM Images 
Journal of Digital Imaging  2012;25(6):751-763.
The development of teleradiology brings the convenience of global medical record access, along with concerns over the security of medical images transmitted over open networks. With the prevailing adoption of three-dimensional (3-D) imaging modalities, it is vital to develop a security mechanism that provides large volumes of medical images with privacy and reliability. This paper presents a new and improved method of implementing tamper detection and localization, based on a fully reversible digital watermarking scheme, for the protection of volumetric DICOM images. The tamper detection and localization method exploits the 3-D property of volumetric data to achieve much faster processing at both the sender and receiver sides without compromising tamper localization accuracy. The performance of the proposed scheme was evaluated using sample volumetric DICOM images. Results show that the scheme achieved, on average, about 65 % and 72 % reductions in watermarking and dewatermarking processing time, respectively. For images that had been tampered with, the scheme can detect and localize the tampered areas with improved localization resolution.
doi:10.1007/s10278-012-9518-y
PMCID: PMC3491158  PMID: 22832896
Image authentication; Medical data security; Tamper detection; Watermarking
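The general idea of block-wise tamper localization behind entry 13 can be illustrated with the simplified stand-in below, which uses per-block hashes rather than the paper's reversible watermarking: digests of each 3-D block are recorded before transmission and recomputed afterwards, and mismatching blocks localize the modification. The block size, hashing scheme, and external storage of digests are illustrative assumptions.

import hashlib
import numpy as np

def block_digests(volume, block=16):
    """Map (z, y, x) block origins to SHA-256 digests of the voxel bytes."""
    digests = {}
    for z in range(0, volume.shape[0], block):
        for y in range(0, volume.shape[1], block):
            for x in range(0, volume.shape[2], block):
                chunk = volume[z:z + block, y:y + block, x:x + block]
                digests[(z, y, x)] = hashlib.sha256(chunk.tobytes()).hexdigest()
    return digests

rng = np.random.default_rng(5)
vol = rng.integers(0, 4096, size=(32, 64, 64), dtype=np.uint16)  # toy volume
reference = block_digests(vol)

tampered = vol.copy()
tampered[10, 20, 30] += 1                  # simulate a single-voxel modification
received = block_digests(tampered)

changed = [idx for idx in reference if reference[idx] != received[idx]]
print("tampered blocks at (z, y, x):", changed)   # expected: [(0, 16, 16)]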
14.  Quantitative Computed Tomography (QCT) as a Radiology Reporting Tool by Using Optical Character Recognition (OCR) and Macro Program 
Journal of Digital Imaging  2012;25(6):815-818.
The objectives are (1) to introduce a new concept for building a quantitative computed tomography (QCT) reporting system using optical character recognition (OCR) and a macro program and (2) to illustrate the practical use of the QCT reporting system in the radiology reading environment. The reporting system was created as a development tool using open-source OCR software and an open-source macro program. The main module was designed to perform OCR on QCT report images during the radiology reading process. The principal steps are as follows: (1) save a QCT report as a graphic file, (2) recognize the characters in the image as text, (3) extract the T scores from the text, (4) perform error correction, (5) reformat the values into the QCT radiology reporting template, and (6) paste the report into the electronic medical record (EMR) or picture archiving and communication system (PACS). The accuracy of the OCR was tested on randomly selected QCTs. The system successfully performed OCR of QCT reports and also determined the diagnosis of normal, osteopenia, or osteoporosis. Error correction of the OCR output is done with an AutoHotkey-coded module. The T scores of the femoral neck and lumbar vertebrae were recognized with accuracies of 100 % and 95.4 %, respectively. A convenient QCT reporting system can thus be established using open-source OCR software and an open-source macro program, and this method can be easily adapted to other QCT applications and PACS/EMR systems.
doi:10.1007/s10278-012-9464-8
PMCID: PMC3491163  PMID: 22399206
Computer in medicine; PACS; OCR; QCT; Reading room
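The post-OCR steps in entry 14 (extracting T scores from recognized text and assigning a diagnosis) can be sketched as below, using the WHO BMD categories: normal for T ≥ -1.0, osteopenia between -1.0 and -2.5, and osteoporosis for T ≤ -2.5. The sample OCR text and the regular expression are illustrative assumptions; the original system additionally used AutoHotkey macros and an error-correction module.

import re

ocr_text = """
Femoral neck  T-score: -1.8
L1-L4 mean    T-score: -2.7
"""

def classify(t_score):
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"

for site, value in re.findall(r"^(.+?)\s+T-score:\s*(-?\d+\.\d+)", ocr_text, re.M):
    t = float(value)
    print(f"{site.strip()}: T = {t:+.1f} -> {classify(t)}")
# Femoral neck: T = -1.8 -> osteopenia
# L1-L4 mean: T = -2.7 -> osteoporosis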
15.  DicomBrowser: Software for Viewing and Modifying DICOM Metadata 
Journal of Digital Imaging  2012;25(5):635-645.
Digital Imaging and Communications in Medicine (DICOM) is the dominant standard for medical imaging data. DICOM-compliant devices and the data they produce are generally designed for clinical use and often do not match the needs of users in research or clinical trial settings. DicomBrowser is software designed to ease the transition between clinically oriented DICOM tools and the specialized workflows of research imaging. It supports interactive loading and viewing of DICOM images and metadata across multiple studies and provides a rich and flexible system for modifying DICOM metadata. Users can make ad hoc changes in a graphical user interface, write metadata modification scripts for batch operations, use partly automated methods that guide users to modify specific attributes, or combine any of these approaches. DicomBrowser can save modified objects as local files or send them to a DICOM storage service using the C-STORE network protocol. DicomBrowser is open-source software, available for download at http://nrg.wustl.edu/software/dicom-browser.
doi:10.1007/s10278-012-9462-x
PMCID: PMC3447088  PMID: 22349992
Digital imaging and communications in medicine (DICOM); Workflow; Image viewer; Imaging informatics
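The kind of metadata edit that DicomBrowser (entry 15) supports interactively and via scripts can be sketched with the pydicom library, as below; this is not DicomBrowser itself, and the file names and chosen attributes are placeholders that assume a local DICOM file is available.

import pydicom

ds = pydicom.dcmread("input.dcm")            # read one DICOM object (placeholder file name)

ds.PatientName = "RESEARCH^SUBJECT01"        # replace identifying metadata
ds.PatientID = "SUBJ01"
ds.remove_private_tags()                     # drop vendor-private elements
ds.SeriesDescription = "T1w (de-identified for research)"

ds.save_as("output.dcm")                     # write the modified object
print(ds.PatientName, ds.PatientID)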
16.  USRC: A New Strategy for Adding Digital Images to the Medical School Curriculum 
Journal of Digital Imaging  2012;25(5):682-688.
Many medical schools use learning management systems (LMSs) to give students access to online lecture notes, assignments, quizzes, and other learning resources. LMSs can also be used to provide access to digital radiology images, potentially improving preclinical teaching in anatomy, physiology, and pathology while also allowing students to develop interpretation skills that are important in clinical practice. However, it is unclear how radiology images can best be stored, imported, and displayed in an LMS. We developed University of Saskatchewan Radiology Courseware (USRC), a new web application that allows course designers to import images into pages linked to BlackBoard Learn, a popular LMS. Page content, including images, annotations, captions, and supporting text, is stored as teaching cases on a MIRC (Medical Imaging Resource Center) server. Course designers create cases in MIRC and then create a corresponding page in BlackBoard by modifying an HTML template so that it holds the URL of a MIRC case. When a user visits the page in BlackBoard, the page requests content from the MIRC case, reformats the text for display in BlackBoard, and loads an image viewer plug-in that allows students to view and interact with the images stored in the case. The USRC technology can be used to reformat MIRC cases for presentation in any website or in any learning management system that supports custom pages written in HTML with embedded JavaScript.
doi:10.1007/s10278-012-9473-7
PMCID: PMC3447102  PMID: 22527988
Preclinical medical education; Digital teaching files; Web technology; MIRC; Radiology teaching files; Learning management systems; BlackBoard Learn
17.  Automatic Patient Table Removal in CT Images 
Journal of Digital Imaging  2012;25(4):480-485.
In many medical imaging applications, it is desirable and important to localize and remove the patient table from CT images. However, existing methods often require user interactions to define the table and sometimes make inaccurate assumptions about the table shape. Due to different patient table designs, shapes, and characteristics, these methods are not robust in identifying and removing the patient table. This paper proposes a new automatic approach which first identifies and locates the patient table in the sagittal planes and then removes it from the axial planes. The method has been tested successfully against different tables in different products from multiple vendors, showing it is both a versatile and robust technique for patient table removal.
doi:10.1007/s10278-012-9454-x
PMCID: PMC3389083  PMID: 22258731
Computed tomography; Patient table; Hough transform
18.  Influence of Image Metrics When Assessing Image Quality from a Test Object in Cardiac X-ray Systems: Part II 
Journal of Digital Imaging  2012;25(4):537-541.
The images generated in modern interventional cardiology (IC) laboratories are created to a high-quality standard (1,024 × 1,024 pixels and 10–12 bits/pixel), enabling cardiologists to perform interventions under the best conditions. In most cases, however, these images are archived at a basic quality standard (512 × 512 pixels and 8 bits/pixel). The purpose of this work is to complete the research developed in a previous paper and analyze the influence of matrix size and bit depth reduction on image quality, assessed on a polymethylmethacrylate (PMMA) phantom with a test object. The variations in contrast-to-noise ratio (CNR) and high-contrast spatial resolution (HCSR) were investigated when the matrix size and the bit depth were independently modified for different phantom thicknesses. These two image quality parameters showed no noticeable alteration when the bit depth was reduced from 10 to 8 bits. This result suggests that bit depth reduction, with a suitable algorithm, could be used to reduce file sizes without losing perceptible image quality information. However, when the matrix size was reduced from 1,024 × 1,024 to 512 × 512 pixels, a reduction of 17% to 25% in HCSR (depending on phantom thickness) and an increase of 27% in CNR were observed. These findings should be taken into account, and it would be wise to conduct further investigations with clinical images.
doi:10.1007/s10278-011-9448-0
PMCID: PMC3389093  PMID: 22223157
Image quality; Test object; Matrix size; Bits depth; Image metrics; Cardiology
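One common way to compute the CNR metric discussed in entry 18 is sketched below on a synthetic test-object image, using the definition CNR = |mean_ROI - mean_background| / std_background; the region positions, noise level, and this specific CNR definition are illustrative assumptions rather than the paper's measurement protocol.

import numpy as np

rng = np.random.default_rng(6)
image = rng.normal(100.0, 5.0, (256, 256))   # noisy uniform background
image[100:140, 100:140] += 12.0              # low-contrast test insert

roi = image[105:135, 105:135]                # inside the insert
background = image[10:60, 10:60]             # away from the insert

cnr = abs(roi.mean() - background.mean()) / background.std()
print(f"CNR = {cnr:.2f}")                    # roughly 12 / 5 = 2.4 for these values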
19.  DICOM for Implantations—Overview and Application 
Journal of Digital Imaging  2011;25(3):352-358.
Surgeons have to deal with many devices from different vendors within the operating room during surgery. Vendor-independent communication standards are necessary for the system integration of these devices. For implantations, three new extensions of the Digital Imaging and Communications in Medicine (DICOM) standard provide a common communication standard that may optimise one of the surgeon's presently very time-consuming daily tasks. The paper provides a brief description of these DICOM Supplements and gives recommendations for their application in practice, based on workflows that are proposed to be covered by the new standard extension. Two of the workflows are described in detail and separated into phases that are supported by the new data structures. Examples of the application of the standard within these phases give an impression of its potential usage. Even though the presented workflows are from different domains, we identified a generic core that may benefit from the surgical DICOM Supplements. In some steps of the workflows, the surgical DICOM Supplements are able to replace or optimise conventional methods. Standardisation can only be a means for integration and interoperability; thus, it can be used as the basis for new applications and system architectures, while its influence on current applications and communication processes is limited. Additionally, the supplements provide the basis for further applications, such as the support of surgical navigation systems. Given the support of all involved stakeholders, it is possible to provide a benefit for surgeons and patients.
doi:10.1007/s10278-011-9416-8
PMCID: PMC3348981  PMID: 21858592
Digital Imaging and Communications in Medicine (DICOM); Infrastructure; Medical devices; Navigation
20.  Radiology Report Comparator: A Novel Method to Augment Resident Education 
Journal of Digital Imaging  2011;25(3):330-336.
Attending radiologists routinely edit radiology trainees' dictated preliminary reports as part of standard workflow models. Time constraints, high volume, and spatial separation do not always allow clear discussion of these changes with trainees, yet these edits can represent significant teaching moments that are lost if they are not communicated back to trainees. We created an electronic method for retrieving and displaying the changes made to resident-written preliminary reports by attending radiologists during the process of radiology report finalization. The Radiology Information System is queried, and the preliminary and final radiology reports, as well as report metadata, are extracted and stored in a database indexed by accession number and trainee/radiologist identity. A web application presents to trainees their 100 most recent preliminary and final report pairs, both side by side and in a "track changes" mode. Web usage audits showed regular use by trainees. Surveyed residents stated that they compared reports for educational value, to improve future reports, and to improve patient care. Residents stated that they compared reports more frequently after deployment of this software solution and that regular assessment of their work using the Report Comparator allowed them to routinely improve future report quality and their radiological understanding. In an era of increasing workload demands, trainee work hour restrictions, and decentralization of department resources (e.g., faculty, PACS), this solution helps retain an important part of the educational experience that would otherwise risk being lost, and provides it to trainees in an efficient and highly consumable manner.
doi:10.1007/s10278-011-9419-5
PMCID: PMC3348990  PMID: 21956519
Communication; Computers in medicine; Continuing medical education; Databases; Medical education; Efficiency; Electronic medical record; Electronic teaching file; Internship and residency; Internet; Interpretation errors; Medical records systems; PACS support; Radiology reporting
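The core comparison in entry 20, showing a trainee the attending's edits, can be illustrated with Python's standard difflib as below; the sample report text is invented, and the actual Report Comparator is a web application backed by the Radiology Information System rather than this snippet.

import difflib

preliminary = "No acute intracranial hemorrhage. Mild chronic small vessel ischemic change."
final = ("No acute intracranial hemorrhage or mass effect. "
         "Moderate chronic small vessel ischemic change.")

# Word-level diff; '-' tokens were removed and '+' tokens were added by the attending.
diff = difflib.ndiff(preliminary.split(), final.split())
print("\n".join(token for token in diff if token.startswith(("- ", "+ "))))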
21.  Detection of Neovascularization in Diabetic Retinopathy 
Journal of Digital Imaging  2011;25(3):437-444.
Diabetic retinopathy has become an increasingly important cause of blindness. Nevertheless, vision loss can be prevented through early detection of diabetic retinopathy and monitoring with regular examinations. Automatic detection of retinal abnormalities commonly targets microaneurysms, hemorrhages, hard exudates, and cotton wool spots. However, a more serious retinal abnormality, neovascularization, in which new blood vessels grow because of an extensive lack of oxygen in the retinal capillaries, has received comparatively little attention. This paper shows how a combination of techniques, including image normalization, a compactness classifier, morphology-based operators, Gaussian filtering, and thresholding, was used to develop neovascularization detection. A function matrix box was added in order to distinguish neovascularization from normal blood vessels. A region-based neovascularization classification was attempted to assess diagnostic accuracy. The developed method was tested on images from different database sources with varying quality and image resolution, yielding a specificity of 89.4% and a sensitivity of 63.9%. The proposed approach gives encouraging results for future development.
doi:10.1007/s10278-011-9418-6
PMCID: PMC3348992  PMID: 21901535
Biomedical image analysis; Digital image processing; Image segmentation; Feature selection; Diabetic retinopathy; Neovascularization
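A compactness measure of the kind usable by the compactness classifier mentioned in entry 21 is sketched below: compactness = perimeter² / (4·π·area) is near 1 for a round blob and grows for thin, tortuous shapes such as new vessels. The toy masks and any decision threshold built on this value are illustrative assumptions, not the authors' classifier.

import numpy as np
from skimage import measure

def compactness(mask):
    props = measure.regionprops(mask.astype(int))[0]
    return props.perimeter ** 2 / (4.0 * np.pi * props.area)

disk = np.zeros((64, 64), dtype=bool)
yy, xx = np.mgrid[:64, :64]
disk[(yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2] = True   # compact round blob

vessel = np.zeros((64, 64), dtype=bool)
vessel[10:54, 30:33] = True                              # thin elongated strip

print("blob compactness:", round(compactness(disk), 2))     # near 1
print("strip compactness:", round(compactness(vessel), 2))  # considerably larger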
22.  MRIdb: Medical Image Management for Biobank Research 
Journal of Digital Imaging  2013;26(5):886-890.
Clinical picture archiving and communication systems provide convenient, efficient access to digital medical images from multiple modalities but can prove challenging to deploy, configure and use. MRIdb is a self-contained image database, particularly suited to the storage and management of magnetic resonance imaging data sets for population phenotyping. It integrates a mature image archival system with an intuitive web-based user interface that provides visualisation and export functionality. In addition, utilities for auditing, data migration and system monitoring are included in a virtual machine image that is easily deployed with minimal configuration. The result is a freely available turnkey solution, designed to support epidemiological and imaging genetics research. It allows the management of patient data sets in a secure, scalable manner without requiring the installation of any bespoke software on end users’ workstations. MRIdb is open-source software, available for download at http://www3.imperial.ac.uk/bioinfsupport/resources/software/mridb.
doi:10.1007/s10278-013-9604-9
PMCID: PMC3782593  PMID: 23619930
PACS; Digital imaging and communications in medicine (DICOM); MR imaging
23.  Bridging the Text-Image Gap: a Decision Support Tool for Real-Time PACS Browsing 
Journal of Digital Imaging  2011;25(2):227-239.
In this paper, we introduce an ontology-based technology that bridges the gap between MR images on the one hand and knowledge sources on the other. The proposed technology allows the user to express interest in a body region by selecting that region with the mouse on the MR image he or she is viewing. The technology infers the intended body structure from the manual selection and searches the external knowledge source for pertinent information. It can be used to bridge the gap between image data in the clinical workflow and (external) knowledge sources that help to assess the case with increased certainty, accuracy, and efficiency. We evaluate an instance of the proposed technology in the neuro domain by means of a user study in which three neuroradiologists participated. The user study shows that the technology has high recall (>95%) when inferring the intended brain region from the participant’s manual selection. We are confident that this helps to improve the experience of browsing external knowledge sources.
doi:10.1007/s10278-011-9414-x
PMCID: PMC3295965  PMID: 21809171
Human–computer interaction; Image navigation; Image segmentation; Natural language processing; Artificial intelligence
24.  Automatically Correlating Clinical Findings and Body Locations in Radiology Reports Using MedLEE 
Journal of Digital Imaging  2011;25(2):240-249.
In this paper, we describe and evaluate a system that extracts clinical findings and body locations from radiology reports and correlates them. The system uses the Medical Language Extraction and Encoding System (MedLEE) to map the reports’ free text to structured semantic representations of their content. A lightweight reasoning engine then extracts the clinical findings and body locations from MedLEE’s semantic representation and correlates them. Our study is illustrative of research in which existing natural language processing software is embedded in a larger system. We manually created a reference standard based on a corpus of neuro and breast radiology reports and used it to evaluate the precision and recall of the proposed system and its modules. Our results indicate that the precision of our system is considerably better than its recall (82.32–91.37% vs. 35.67–45.91%). We conducted an error analysis and discuss the practical usability of the system given its recall and precision performance.
doi:10.1007/s10278-011-9411-0
PMCID: PMC3295967  PMID: 21796490
Natural language processing; Knowledge base; Data extraction; BI-RADS
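The precision and recall metrics reported in entry 24 are computed from counts of extracted finding-location correlations against a reference standard, as in the short worked sketch below; the counts are invented for illustration, while the paper's figures come from its manually created reference standard.

true_positives = 82    # correlations found by the system and present in the reference
false_positives = 16   # found by the system but not in the reference
false_negatives = 121  # in the reference but missed by the system

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
print(f"precision = {precision:.2%}, recall = {recall:.2%}")
# precision = 83.67%, recall = 40.39% (same regime as the paper: precision well above recall)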
25.  Integrating Patient Digital Photographs with Medical Imaging Examinations 
Journal of Digital Imaging  2013;26(5):875-885.
We introduce the concept, benefits, and general architecture for acquiring, storing, and displaying digital photographs along with medical imaging examinations. We also discuss a specific implementation built around an Android-based system for simultaneously acquiring digital photographs along with portable radiographs. By an innovative application of radiofrequency identification technology to radiographic cassettes, the system is able to maintain a tight relationship between these photographs and the radiographs within the picture archiving and communications system (PACS) environment. We provide a cost analysis demonstrating the economic feasibility of this technology. Since our architecture naturally integrates with patient identification methods, we also address patient privacy issues.
doi:10.1007/s10278-013-9579-6
PMCID: PMC3782605  PMID: 23408010
Patient identification; Electronic medical records; Medical imaging; Medical errors; Digital camera; DICOM; PACS
