This study aims to evaluate whether the distribution of vessels inside and adjacent to the tumor region at three-dimensional (3-D) power Doppler ultrasonography (US) can be used to differentiate benign from malignant breast tumors. 3-D power Doppler US images of 113 solid breast masses (60 benign and 53 malignant) were used in this study. Blood vessels within and adjacent to each tumor were estimated individually in the 3-D power Doppler US images for differential diagnosis. Six features were evaluated: volume of vessels, vascularity index, volume of tumor, vascularity index in tumor, vascularity index in normal tissue, and vascularity index in the surrounding region of the tumor within 2 cm. A neural network was then used to classify tumors on the basis of these vascular features. Receiver operating characteristic (ROC) curve analysis and Student’s t test were used to estimate performance. All six proposed vascular features were statistically significant (p < 0.001) for classifying the breast tumors as benign or malignant. The AZ (area under the ROC curve) value for the classification result was 0.9138. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the diagnosis based on all six proposed features were 82.30 % (93/113), 86.79 % (46/53), 78.33 % (47/60), 77.97 % (46/59), and 87.04 % (47/54), respectively. The p value of the difference in AZ values between the proposed method and the conventional vascularity index method, computed with a z test, was 0.04.
3-D ultrasound; Power Doppler ultrasound; Breast tumor; Vascularity
Diabetic maculopathy is a retinal abnormality in which a diabetic patient suffers severe vision loss due to an affected macula. It impairs the person’s central vision and causes blindness in severe cases. In this article, we propose an automated medical system for the grading of diabetic maculopathy that will assist ophthalmologists in early detection of the disease. The proposed system extracts the macula from a digital retinal image using the vascular structure and optic disc location. It creates a binary map of possible exudate regions using filter banks and formulates a detailed feature vector for all regions. The system then applies a Gaussian mixture model-based classifier to grade the retinal image into the different stages of maculopathy using the macula coordinates and the exudate feature set. The proposed system is evaluated on publicly available standard retinal image databases. The results of our system have been compared with other methods in the literature in terms of sensitivity, specificity, positive predictive value, and accuracy. Our system achieves higher values than the others on the same databases, which makes it suitable as an automated medical system for grading of diabetic maculopathy.
Diabetic maculopathy; Exudates; Macula; Feature extraction; Gaussian mixture model
Efficient segmentation of tumors in medical images is of great practical importance for early diagnosis and radiation treatment planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is used to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by a k-nearest neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can obtain satisfactory results even when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
Neighborhood components analysis; k-nearest neighbor; Graph cuts; Distance metric learning
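The probability step described above, scoring each pixel by its k nearest training features under a learned metric, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear transform `A` stands in for the metric learned by neighborhood components analysis, and the feature layout and `k` are assumptions.

```python
import numpy as np

def knn_foreground_probability(x, train_feats, train_labels, A, k=5):
    """Estimate P(tumor | x) as the fraction of the k nearest training
    features, under the learned linear metric A, that are labeled tumor (1)."""
    # Distance in the transformed space: ||A x - A y|| for each training feature y.
    diffs = (train_feats - x) @ A.T
    dists = np.linalg.norm(diffs, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(train_labels[nearest] == 1))
```

The resulting per-pixel probabilities would then feed the graph cut cost function as unary terms.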
Natural language processing (NLP) techniques to extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is an opportunity to implement NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR to potentially facilitate evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system that extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implemented our NLP system in an HTML5 web application to demonstrate a proof-of-concept feedback NLP system that retrieves bone fracture knowledge in real time.
Natural language processing; Decision support; Information retrieval of bone fractures
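The concept-extraction idea above, pairing fracture findings with anatomy terms in dictated text, can be illustrated with a toy lexicon-matching sketch. The lexicons and pairing logic here are invented stand-ins; the validated system described in the abstract is far richer.

```python
import re

# Tiny illustrative lexicons -- a real system would use full anatomy and
# finding ontologies rather than these hand-picked examples.
ANATOMY = {"femur", "tibia", "radius", "scaphoid", "clavicle"}
FINDINGS = {"fracture", "dislocation"}

def extract_concepts(report):
    """Return (finding, anatomy) pairs found in a dictated sentence,
    to drive retrieval of the matching bone fracture knowledge."""
    tokens = re.findall(r"[a-z]+", report.lower())
    found_anatomy = [t for t in tokens if t in ANATOMY]
    found_findings = [t for t in tokens if t in FINDINGS]
    return [(f, a) for f in found_findings for a in found_anatomy]
```

Each extracted pair could then key a lookup into a fracture knowledge base as the radiologist dictates.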
Craniofacial disorders are routinely diagnosed using computed tomography imaging. Corrective surgery is often performed early in life to restore the skull to a more normal shape. To quantitatively assess the shape change due to surgery, we present an automated method for intracranial space segmentation. The method uses a two-stage approach that first initializes the segmentation with a cascade of mathematical morphology operations. This segmentation is then refined with a level-set-based approach that ensures that low-contrast boundaries, where bone is absent, are completed smoothly. We demonstrate this method on a dataset of 43 images and show that it produces consistent and accurate results.
Intracranial space segmentation; Computed tomography; Level set methods; Mathematical morphology
We present an atlas-based registration method for bones segmented from quantitative computed tomography (QCT) scans, with the goal of mapping their interior bone mineral densities (BMDs) volumetrically. We introduce a new type of deformable atlas, called subdivision-embedded atlas, which consists of a control grid represented as a tetrahedral subdivision mesh and a template bone surface embedded within the grid. Compared to a typical lattice-based deformation grid, the subdivision control grid possesses a relatively small degree of freedom tailored to the shape of the bone, which allows efficient fitting onto subjects. Compared with previous subdivision atlases, the novelty of our atlas lies in the addition of the embedded template surface, which further increases the accuracy of the fitting. Using this new atlas representation, we developed an efficient and fully automated pipeline for registering atlases of 12 tarsal and metatarsal bones to a segmented QCT scan of a human foot. Our evaluation shows that the mapping of BMD enabled by the registration is consistent for bones in repeated scans, and the regional BMD automatically computed from the mapping is not significantly different from expert annotations. The results suggest that our improved subdivision-based registration method is a reliable, efficient way to replace manual labor for measuring regional BMD in foot bones in QCT scans.
Bone mineral density; Registration; Atlas; Subdivision
The purpose of this study is to provide a novel approach for measuring the spatial distribution of wall shear stress (WSS) in the common carotid artery in vivo. WSS distributions were determined by digital image processing of color Doppler flow imaging (CDFI) in 50 healthy volunteers. To evaluate the feasibility of the spatial distribution, the mean values of the WSS distribution were compared with the results of the conventional WSS calculation method (the Hagen–Poiseuille formula). In our study, the mean value of the WSS distribution from the 50 healthy volunteers was 6.91 ± 1.20 dyne/cm², while it was 7.13 ± 1.24 dyne/cm² by the Hagen–Poiseuille approach. The difference was not statistically significant (t = −0.864, p = 0.604). Hence, the feasibility of the spatial distribution of WSS was demonstrated. Moreover, this novel approach provides a three-dimensional distribution of shear stress and a fusion image of shear stress with the ultrasonic image for each volunteer, which makes WSS “visible”. In conclusion, the spatial distribution of WSS can be used for WSS calculation in vivo, and it provides more detailed WSS values than the Hagen–Poiseuille formula.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-012-9505-3) contains supplementary material, which is available to authorized users.
Atherosclerosis; Common carotid artery; Wall shear stress; Color Doppler flow imaging; DICOM
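The conventional comparator above, the Hagen–Poiseuille formula, assumes a parabolic velocity profile and gives WSS as τ = 4μV_max/D. A minimal sketch follows; the default blood viscosity of 0.035 poise is an assumed typical value, not a figure from the abstract.

```python
def hagen_poiseuille_wss(v_max_cm_s, diameter_cm, viscosity_poise=0.035):
    """Wall shear stress (dyne/cm^2) under the Poiseuille assumption:
    tau = 4 * mu * V_max / D, with velocity in cm/s and diameter in cm."""
    return 4.0 * viscosity_poise * v_max_cm_s / diameter_cm
```

For instance, a peak velocity of 30 cm/s in a 0.6 cm vessel gives about 7 dyne/cm², on the order of the mean values reported in the study.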
Automatic tools for detection and identification of lung and lesions from high-resolution CT (HRCT) are becoming increasingly important both for diagnosis and for delivering high-precision radiation therapy. However, development of robust and interpretable classifiers still presents a challenge, especially in the case of non-small cell lung carcinoma (NSCLC) patients. In this paper, we have attempted to devise such a classifier by extracting fuzzy rules from texture-segmented regions of HRCT images of NSCLC patients. A fuzzy inference system (FIS) has been constructed starting from a feature extraction procedure applied to overlapping regions from the same organs, deriving simple if–then rules so that more linguistically interpretable decisions can be implemented. The proposed method has been tested on 138 regions extracted from CT scan images acquired from patients with lung cancer. Assuming two classes of tissue, C1 (healthy tissue) and C2 (lesion), as negative and positive, respectively, preliminary results report an AUC = 0.98 for lesions and AUC = 0.93 for healthy tissue, with an optimal operating condition of sensitivity = 0.96 and specificity = 0.98 for lesions and sensitivity = 0.99 and specificity = 0.94 for healthy tissue. Finally, the following results have been obtained: false-negative rate (FNR) = 6 % (C1), FNR = 2 % (C2), false-positive rate (FPR) = 4 % (C1), FPR = 3 % (C2), true-positive rate (TPR) = 94 % (C1), and TPR = 98 % (C2).
NSCLC; IGRT; FIS; Rule-based classification
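The if–then rule evaluation at the heart of a fuzzy inference system can be sketched as follows. The texture features, membership ranges, and the single rule here are invented for illustration; the paper's FIS derives its rules from the extracted features.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def lesion_score(contrast, homogeneity):
    """One hypothetical rule: IF contrast is HIGH AND homogeneity is LOW
    THEN lesion. Min is the usual fuzzy AND; the ranges are assumptions."""
    high_contrast = triangular(contrast, 0.4, 0.8, 1.2)
    low_homogeneity = triangular(homogeneity, -0.2, 0.1, 0.5)
    return min(high_contrast, low_homogeneity)
```

A full FIS would aggregate several such rule activations and defuzzify them into a class decision.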
Our purpose was to develop a generic Open Source MRI perfusion analysis tool for quantitative parameter mapping to be used in a clinical workflow, together with methods for quality management of perfusion data. We implemented a classic, pixel-by-pixel deconvolution approach to quantify T1-weighted contrast-enhanced dynamic MR imaging (DCE-MRI) perfusion data as an OsiriX plug-in. It features parallel computing capabilities and an automated reporting scheme for quality management. Furthermore, by our implementation design, it is easily extendable to other perfusion algorithms. Obtained results are saved as DICOM objects and directly added to the patient study. The plug-in was evaluated on ten MR perfusion data sets of the prostate and a calibration data set by comparing the obtained parametric maps (plasma flow, volume of distribution, and mean transit time) to a widely used reference implementation in IDL. For all data, parametric maps could be calculated, and the plug-in worked correctly and stably. On average, deviations of 0.032 ± 0.02 ml/100 ml/min for the plasma flow, 0.004 ± 0.0007 ml/100 ml for the volume of distribution, and 0.037 ± 0.03 s for the mean transit time were observed between our implementation and the reference implementation. By using computer hardware with eight CPU cores, calculation time was reduced by a factor of 2.5. We successfully developed an Open Source OsiriX plug-in for T1-DCE-MRI perfusion analysis in a routine, quality-managed clinical environment. Using model-free deconvolution, it allows perfusion analysis in various clinical applications. Through our plug-in, information about measured physiological processes can be obtained and transferred into clinical practice.
Algorithms; Perfusion; Image processing; Computer-assisted; Magnetic resonance imaging
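The pixel-by-pixel model-free deconvolution mentioned above can be sketched as solving the discrete convolution C(t) = dt · (A · r) for the tissue impulse response r, where A is built from the arterial input function (AIF). This is a simplified illustration, not the plug-in's code; the truncation level `rcond` stands in for whatever regularization the real implementation uses.

```python
import numpy as np

def deconvolve_irf(c_tissue, aif, dt, rcond=0.1):
    """Recover the impulse response r from a tissue concentration curve via a
    truncated pseudo-inverse of the AIF convolution matrix. Parameters such
    as plasma flow can then be derived from r (e.g., its peak value)."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1]   # A[i, j] = aif[i - j], lower triangular
    return np.linalg.pinv(dt * A, rcond=rcond) @ c_tissue
```

In a plug-in, this would be applied independently at every pixel's time curve, which is why the problem parallelizes so well across CPU cores.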
High-resolution large datasets were acquired to improve the understanding of murine bone physiology. The purpose of this work is to present the challenges and solutions in segmenting and visualizing bone in such large datasets acquired using micro-CT scans of mice. The analyzed dataset is more than 50 GB in size, with more than 6,000 slices of 2,048 × 2,048 pixels. The study was performed to automatically measure the bone mineral density (BMD) of the entire skeleton. A global Renyi entropy (GREP) method was initially used for bone segmentation, but it consistently oversegmented the skeletal region. A new method called adaptive local Renyi entropy (ALREP) is therefore proposed to improve the segmentation results. To study the efficacy of ALREP, manual segmentation was also performed. Finally, a specialized high-end remote visualization system, along with the software VirtualGL, was used to perform remote rendering of this large dataset. It was determined that GREP overestimated the bone cross-section by around 30 % compared with ALREP. The manual segmentation process took 6,300 min for 6,300 slices, while ALREP took only 150 min. Automatic image processing with the ALREP method may facilitate BMD measurement of the entire skeleton in significantly reduced time compared with the manual process.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-012-9498-y) contains supplementary material, which is available to authorized users.
Segmentation; Visualization; High resolution; Bone mineral density
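The global Renyi entropy thresholding used as the baseline above can be sketched as follows: for each candidate gray level, split the normalized histogram into background and foreground distributions and pick the split maximizing the sum of their Renyi entropies. This is a generic sketch of the technique, assuming α = 0.5; the paper's GREP/ALREP parameters are not given in the abstract.

```python
import numpy as np

def renyi_threshold(hist, alpha=0.5):
    """Select a gray-level threshold by maximizing the sum of the Renyi
    entropies H = ln(sum(p_i^alpha)) / (1 - alpha) of the background
    (bins [0, t)) and foreground (bins [t, end)) distributions."""
    hist = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(hist) - 1):
        p_bg, p_fg = hist[:t].sum(), hist[t:].sum()
        if p_bg <= 0 or p_fg <= 0:
            continue
        h_bg = np.log(((hist[:t] / p_bg) ** alpha).sum()) / (1 - alpha)
        h_fg = np.log(((hist[t:] / p_fg) ** alpha).sum()) / (1 - alpha)
        if h_bg + h_fg > best_h:
            best_t, best_h = t, h_bg + h_fg
    return best_t
```

The adaptive local variant (ALREP) would apply a threshold of this kind per local window rather than once globally, which is what corrects the oversegmentation of the global method.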
Atherosclerosis is one of the most widespread cardiovascular diseases today. Although it may go unnoticed for years, it may also suddenly trigger severe illnesses such as stroke, embolism, or ischemia. Therefore, early detection of atherosclerosis can prevent the adult population from suffering more serious pathologies. The intima–media thickness (IMT) of the common carotid artery (CCA) has been used as an early and reliable indicator of atherosclerosis for years. The IMT is usually computed manually from ultrasound images, a process that can be repeated as many times as necessary (over different ultrasound images of the same patient) but is prone to errors. With the aim of reducing inter-observer variability and the subjectivity of the measurement, a fully automatic computer-based method based on ultrasound image processing and a frequency-domain implementation of active contours is proposed. The images used in this work were obtained with the same ultrasound scanner (Philips iU22 Ultrasound System) but at different spatial resolutions. The proposed solution extracts not only the IMT but also the CCA diameter, which, while not as relevant as the IMT for predicting atherosclerosis evolution, is statistically interesting information for doctors in determining cardiovascular risk. The results of the proposed method have been validated by doctors and are visually and numerically satisfactory when the medical measurements are taken as ground truth, with a maximum deviation of only 3.4 pixels (0.0248 mm) for the IMT.
Automated measurement; Image segmentation; Ultrasound; Intima–media thickness
In this paper, we propose a novel technique for skull stripping of infant (neonatal) brain magnetic resonance images using prior shape information within a graph cut framework. Skull stripping plays an important role in brain image analysis and is a major challenge for neonatal brain images. Popular methods such as the brain surface extractor (BSE) and brain extraction tool (BET) do not produce satisfactory results for neonatal images due to poor tissue contrast, weak boundaries between brain and non-brain regions, and low spatial resolution. Inclusion of prior shape information helps in accurate identification of brain and non-brain tissues. Prior shape information is obtained from a set of labeled training images. The probability of a pixel belonging to the brain is obtained from the prior shape mask and included in the penalty term of the cost function. An extra smoothness term based on gradient information helps identify the weak boundaries between the brain and non-brain regions. Experimental results on real neonatal brain images show that, compared to BET, BSE, and other methods, our method achieves superior segmentation performance for neonatal brain images and comparable performance for adult brain images.
Shape prior; Graph cuts; Neonatal; Brain; MRI; Segmentation; Gradient
The development of teleradiology brings the convenience of global medical record access along with concerns over the security of medical images transmitted over the open network. With the prevailing adoption of three-dimensional (3-D) imaging modalities, it is vital to develop a security mechanism that provides large volumes of medical images with privacy and reliability. This paper presents the development of a new and improved method of implementing tamper detection and localization based on a fully reversible digital watermarking scheme for the protection of volumetric DICOM images. This tamper detection and localization method exploits the 3-D property of volumetric data to achieve much faster processing at both the sender and receiver sides without compromising tamper localization accuracy. The performance of the proposed scheme was evaluated using sample volumetric DICOM images. Results show that the scheme achieved, on average, about 65 % and 72 % reductions in watermarking and dewatermarking processing time, respectively. For images that had been tampered with, the scheme can detect and localize the tampered areas with improved localization resolution.
Image authentication; Medical data security; Tamper detection; Watermarking
The objectives are (1) to introduce a new concept for building a quantitative computed tomography (QCT) reporting system using optical character recognition (OCR) and a macro program and (2) to illustrate the practical use of the QCT reporting system in the radiology reading environment. The reporting system was created using open-source OCR software and an open-source macro program. The main module was designed to perform OCR on QCT report images during the radiology reading process. The principal processing steps are as follows: (1) save a QCT report as a graphic file, (2) recognize the characters in the image as text, (3) extract the T scores from the text, (4) perform error correction, (5) reformat the values into the QCT radiology reporting template, and (6) paste the report into the electronic medical record (EMR) or picture archiving and communication system (PACS). The accuracy of the OCR was tested on randomly selected QCTs. The reporting system successfully performed OCR on the QCT reports, and the diagnosis of normal, osteopenia, or osteoporosis was also determined. OCR error correction is handled by an AutoHotkey-coded module. The T scores of the femoral neck and lumbar vertebrae were extracted with accuracies of 100 % and 95.4 %, respectively. A convenient QCT reporting system could thus be established using open-source OCR software and an open-source macro program, and this method can be easily adapted to other QCT applications and PACS/EMR systems.
Computer in medicine; PACS; OCR; QCT; Reading room
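Steps (3) and the diagnosis determination above can be illustrated with a short sketch: pull T scores out of OCR'd text and classify them with the standard WHO cutoffs. The `T-score:` label format in the regex is an invented example, not the actual QCT report layout; the WHO thresholds (−1.0 and −2.5) are standard.

```python
import re

def extract_t_scores(report_text):
    """Find T scores in OCR'd report text; assumes a 'T-score: <value>' label."""
    return [float(m) for m in re.findall(r"T-score:\s*(-?\d+\.?\d*)", report_text)]

def classify_bmd(t_score):
    """WHO criteria: T >= -1.0 normal; -2.5 < T < -1.0 osteopenia;
    T <= -2.5 osteoporosis."""
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"
```

The extracted values would then be reformatted into the reporting template before being pasted into the EMR or PACS.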
Digital Imaging and Communications in Medicine (DICOM) is the dominant standard for medical imaging data. DICOM-compliant devices and the data they produce are generally designed for clinical use and often do not match the needs of users in research or clinical trial settings. DicomBrowser is software designed to ease the transition between clinically oriented DICOM tools and the specialized workflows of research imaging. It supports interactive loading and viewing of DICOM images and metadata across multiple studies and provides a rich and flexible system for modifying DICOM metadata. Users can make ad hoc changes in a graphical user interface, write metadata modification scripts for batch operations, use partly automated methods that guide users to modify specific attributes, or combine any of these approaches. DicomBrowser can save modified objects as local files or send them to a DICOM storage service using the C-STORE network protocol. DicomBrowser is open-source software, available for download at http://nrg.wustl.edu/software/dicom-browser.
Digital imaging and communications in medicine (DICOM); Workflow; Image viewer; Imaging informatics
Preclinical medical education; Digital teaching files; Web technology; MIRC; Radiology teaching files; Learning management systems; BlackBoard Learn
In many medical imaging applications, it is desirable and important to localize and remove the patient table from CT images. However, existing methods often require user interactions to define the table and sometimes make inaccurate assumptions about the table shape. Due to different patient table designs, shapes, and characteristics, these methods are not robust in identifying and removing the patient table. This paper proposes a new automatic approach which first identifies and locates the patient table in the sagittal planes and then removes it from the axial planes. The method has been tested successfully against different tables in different products from multiple vendors, showing it is both a versatile and robust technique for patient table removal.
Computed tomography; Patient table; Hough transform
The images generated in modern interventional cardiology (IC) laboratories are created at a high-quality standard (1,024 × 1,024 pixels and 10–12 bits/pixel), enabling cardiologists to perform interventions under the best conditions. However, these images are in most cases archived at a basic quality standard (512 × 512 pixels and 8 bits/pixel). The purpose of this work is to complete the research developed in a previous paper and to analyze the influence of matrix size and bit depth reduction on the quality of images acquired on a polymethylmethacrylate (PMMA) phantom with a test object. The variations in contrast-to-noise ratio (CNR) and high-contrast spatial resolution (HCSR) were investigated when the matrix size and the bit depth were independently modified for different phantom thicknesses. These two image quality parameters did not suffer noticeable alterations under bit depth reduction from 10 to 8 bits. This result implies that bit depth reduction, with a suitable algorithm, could be used to reduce file sizes without losing perceptible image quality information. However, when the matrix size was reduced from 1,024 × 1,024 to 512 × 512 pixels, a reduction of 17 % to 25 % in HCSR (depending on phantom thickness) and an increase of 27 % in CNR were observed. These findings should be taken into account, and it would be wise to conduct further investigations with clinical images.
Image quality; Test object; Matrix size; Bit depth; Image metrics; Cardiology
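The two operations studied above are simple to state concretely: CNR under one common definition, and bit depth reduction as a right shift. This sketch uses generic definitions for illustration; the paper's exact CNR formula and quantization algorithm may differ.

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: (mean_signal - mean_background) / std_background.
    One common definition among several in use."""
    return (signal_roi.mean() - background_roi.mean()) / background_roi.std()

def reduce_bit_depth(pixels, from_bits=10, to_bits=8):
    """Drop the least significant bits, e.g. 10-bit [0, 1023] -> 8-bit [0, 255]."""
    return pixels >> (from_bits - to_bits)
```

Discarding the two low-order bits quarters the gray-level resolution but, as the phantom study found, leaves CNR and HCSR essentially unchanged.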
Surgeons have to deal with many devices from different vendors within the operating room during surgery. Independent communication standards are necessary for the system integration of these devices. For implantations, three new extensions of the Digital Imaging and Communications in Medicine (DICOM) standard make use of a common communication standard that may optimise one of the surgeon's presently very time-consuming daily tasks. The paper provides a brief description of these DICOM Supplements and gives recommendations to their application in practice based on workflows that are proposed to be covered by the new standard extension. Two of the workflows are described in detail and separated into phases that are supported by the new data structures. Examples for the application of the standard within these phases give an impression of the potential usage. Even if the presented workflows are from different domains, we identified a generic core that may benefit from the surgical DICOM Supplements. In some steps of the workflows, the surgical DICOM Supplements are able to replace or optimise conventional methods. Standardisation can only be a means for integration and interoperability. Thus, it can be used as the basis for new applications and system architectures. The influence on current applications and communication processes is limited. Additionally, the supplements provide the basis for further applications, such as the support of surgical navigation systems. Given the support of all involved stakeholders, it is possible to provide a benefit for surgeons and patients.
Digital Imaging and Communications in Medicine (DICOM); Infrastructure; Medical devices; Navigation
Attending radiologists routinely edit trainee-dictated preliminary reports as part of standard workflow models. Time constraints, high volume, and spatial separation do not always allow clear discussion of these changes with trainees. However, these edits can represent significant teaching moments that are lost if they are not communicated back to trainees. We created an electronic method for retrieving and displaying the changes made by attending radiologists to resident-written preliminary reports during radiology report finalization. The Radiology Information System is queried; preliminary and final radiology reports, as well as report metadata, are extracted and stored in a database indexed by accession number and trainee/radiologist identity. A web application presents to trainees their 100 most recent preliminary and final report pairs, both side by side and in a “track changes” mode. Web utilization audits showed regular use by trainees. Surveyed residents stated that they compared reports for educational value, to improve future reports, and to improve patient care. Residents also stated that they compared reports more frequently after deployment of this software solution and that regular assessment of their work using the Report Comparator allowed them to routinely improve future report quality and their radiological understanding. In an era of increasing workload demands, trainee work hour restrictions, and decentralization of department resources (e.g., faculty, PACS), this solution retains an important part of the educational experience that would otherwise risk being lost and provides it to trainees in an efficient and highly consumable manner.
Communication; Computers in medicine; Continuing medical education; Databases; Medical education; Efficiency; Electronic medical record; Electronic teaching file; Internship and residency; Internet; Interpretation errors; Medical records systems; PACS support; Radiology reporting
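The "track changes" comparison of a preliminary and a final report can be sketched with Python's standard `difflib`. This is an illustrative stand-in for the Report Comparator's actual diff logic, which the abstract does not detail.

```python
import difflib

def report_changes(preliminary, final):
    """Return the attending's edits as (operation, before, after) tuples,
    comparing the two reports word by word."""
    pre_words, fin_words = preliminary.split(), final.split()
    sm = difflib.SequenceMatcher(None, pre_words, fin_words)
    changes = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":
            changes.append((op, " ".join(pre_words[i1:i2]),
                            " ".join(fin_words[j1:j2])))
    return changes
```

Rendering these tuples with strikethrough and highlight markup yields the side-by-side and track-changes views the trainees see.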
Diabetic retinopathy has become an increasingly important cause of blindness. Nevertheless, vision loss can be prevented through early detection of diabetic retinopathy and monitoring with regular examinations. Automatic detection of retinal abnormalities commonly targets microaneurysms, hemorrhages, hard exudates, and cotton wool spots. However, a more severe retinal abnormality, neovascularization, in which new blood vessels grow due to an extensive lack of oxygen in the retinal capillaries, has received much less research attention. This paper shows how a combination of techniques, including image normalization, a compactness classifier, morphology-based operators, Gaussian filtering, and thresholding, was used to develop a neovascularization detector. A function matrix box was added to distinguish neovascularization from natural blood vessels, and a region-based neovascularization classification was attempted to assess diagnostic accuracy. The developed method was tested on images from different database sources with varying quality and image resolution, yielding a specificity of 89.4 % and a sensitivity of 63.9 %. The proposed approach yields encouraging results for future development.
Biomedical image analysis; Digital image processing; Image segmentation; Feature selection; Diabetic retinopathy; Neovascularization
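The compactness classifier mentioned above rests on a standard shape descriptor: new vessels are tortuous and elongated, so their regions score far from the circular ideal. A minimal sketch of that descriptor, assuming the common P²/(4πA) definition (the paper's exact formulation is not given in the abstract):

```python
import numpy as np

def compactness(perimeter, area):
    """Shape compactness P^2 / (4 * pi * A): exactly 1 for a circle and
    increasingly large for elongated, tortuous regions such as new vessels."""
    return perimeter ** 2 / (4.0 * np.pi * area)
```

A region whose compactness greatly exceeds that of normal vessel segments would be flagged as a neovascularization candidate.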
Clinical picture archiving and communication systems provide convenient, efficient access to digital medical images from multiple modalities but can prove challenging to deploy, configure, and use. MRIdb is a self-contained image database, particularly suited to the storage and management of magnetic resonance imaging data sets for population phenotyping. It integrates a mature image archival system with an intuitive web-based user interface that provides visualisation and export functionality. In addition, utilities for auditing, data migration, and system monitoring are included in a virtual machine image that is easily deployed with minimal configuration. The result is a freely available turnkey solution, designed to support epidemiological and imaging genetics research. It allows the management of patient data sets in a secure, scalable manner without requiring the installation of any bespoke software on end users’ workstations. MRIdb is open-source software, available for download at http://www3.imperial.ac.uk/bioinfsupport/resources/software/mridb.
PACS; Digital imaging and communications in medicine (DICOM); MR imaging
In this paper, we introduce an ontology-based technology that bridges the gap between MR images on the one hand and knowledge sources on the other. The proposed technology allows the user to express interest in a body region by selecting this region with the mouse on the MR image he or she is viewing. The technology infers the intended body structure from the manual selection and searches the external knowledge source for pertinent information. It can thus be used to bridge the gap between image data in the clinical workflow and (external) knowledge sources that help to assess the case with increased certainty, accuracy, and efficiency. We evaluate an instance of the proposed technology in the neurodomain by means of a user study in which three neuroradiologists participated. The user study shows that the technology has high recall (>95 %) in inferring the intended brain region from the participant’s manual selection. We are confident that this helps to improve the experience of browsing external knowledge sources.
Human–computer interaction; Image navigation; Image segmentation; Natural language processing; Artificial intelligence
In this paper, we describe and evaluate a system that extracts clinical findings and body locations from radiology reports and correlates them. The system uses the Medical Language Extraction and Encoding System (MedLEE) to map the reports’ free text to structured semantic representations of their content. A lightweight reasoning engine then extracts the clinical findings and body locations from MedLEE’s semantic representation and correlates them. Our study is illustrative of research in which existing natural language processing software is embedded in a larger system. We manually created a reference standard based on a corpus of neuro and breast radiology reports, which was used to evaluate the precision and recall of the proposed system and its modules. Our results indicate that the precision of our system is considerably better than its recall (82.32–91.37 % vs. 35.67–45.91 %). We conducted an error analysis and discuss the practical usability of the system given its recall and precision performance.
Natural language processing; Knowledge base; Data extraction; BI-RADS
We introduce the concept, benefits, and general architecture for acquiring, storing, and displaying digital photographs along with medical imaging examinations. We also discuss a specific implementation built around an Android-based system for simultaneously acquiring digital photographs along with portable radiographs. By an innovative application of radiofrequency identification technology to radiographic cassettes, the system is able to maintain a tight relationship between these photographs and the radiographs within the picture archiving and communications system (PACS) environment. We provide a cost analysis demonstrating the economic feasibility of this technology. Since our architecture naturally integrates with patient identification methods, we also address patient privacy issues.
Patient identification; Electronic medical records; Medical imaging; Medical errors; Digital camera; DICOM; PACS