This study aimed to investigate a computer-aided system for detecting breast masses in dynamic contrast-enhanced magnetic resonance imaging for clinical use. Detection performance of the system was analyzed on 61 biopsy-confirmed lesions (21 benign and 40 malignant) in 34 women. The breast region was determined using the demons deformable algorithm. After suspicious tissues were identified by a kinetic feature (area under the curve) and the fuzzy c-means clustering method, breast masses were detected based on rotation-invariant and multi-scale blob characteristics. The masses were then further distinguished from other detected non-tumor regions (false positives). A free-response receiver operating characteristic (FROC) curve and the detection rate were used to evaluate detection performance. Using the combined features, including blob, enhancement, morphologic, and texture features with 10-fold cross-validation, the mass detection rate was 100 % (61/61) with 15.15 false positives per case and 91.80 % (56/61) with 4.56 false positives per case. In conclusion, the proposed computer-aided detection system can help radiologists reduce inter-observer variability and the cost associated with detecting suspicious lesions in a large number of images. Our results illustrate that breast masses can be detected efficiently and that enhancement and morphologic characteristics are useful for reducing non-tumor regions.
Breast; Magnetic resonance imaging; Detection; Morphologic; Hessian
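The fuzzy c-means step above assigns each voxel a soft membership to "suspicious" and "background" clusters based on its kinetic feature value. A minimal sketch of the standard FCM update rules on 1-D feature values follows; this is not the authors' implementation, and the cluster count and fuzzifier m = 2 are assumptions:

```python
def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=50):
    lo, hi = min(values), max(values)
    # spread initial centers across the data range
    centers = [lo + (hi - lo) * (i + 0.5) / c for i in range(c)]
    u = []
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in values:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        # center update: membership-weighted mean with weights u^m
        for i in range(c):
            den = sum(row[i] ** m for row in u)
            centers[i] = sum((row[i] ** m) * x for row, x in zip(u, values)) / den
    return centers, u
```

On well-separated kinetic values the centers converge to the two intensity modes, and the membership matrix gives the soft labeling from which suspicious regions are extracted.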
Radiologists come across interesting patient cases almost every day. This work proposes a novel case database server for quick and easy storage of such cases, including whole image series, patient data, and annotations. Cases can be added to the database by saving DICOM images into a predefined directory on the local network. The application automatically extracts patient and study data from the DICOM header and saves it in the database, while images are stored as anonymized JPEG files. Users can mark their cases as private or public (visible to all users). Different data fields for annotations and categorization of a case are available. The user frontend also provides several retrieval mechanisms allowing for browsing the cases and performing different kinds of search queries. The stored series can be scrolled interactively in the form of scrollable image stacks. The project is realized as a web-based application using a portable web and database server software package (XAMPP). This makes the system very lightweight and easy to run on almost any desktop computer, even from a USB flash drive, without the need for deeper IT knowledge or administrative rights.
Image libraries; Information storage and retrieval; Internet technology; Web technology
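The intake step described above — reading patient and study fields from the DICOM header into a database record while the images themselves are stored under anonymized names — can be sketched as below. The field names mirror standard DICOM attribute keywords, but the record layout and filename scheme are hypothetical simplifications of the application's actual schema; a real implementation would parse the header with a DICOM library such as pydicom:

```python
def intake_case(header, case_id):
    # Database record keeps the patient/study data extracted from the DICOM header
    record = {
        "patient_name": str(header.get("PatientName", "")),
        "study_date": header.get("StudyDate", ""),
        "modality": header.get("Modality", ""),
        "study_description": header.get("StudyDescription", ""),
    }
    # Images are stored separately under an anonymized name carrying no identifiers
    image_filename = "case_%06d.jpg" % case_id
    return record, image_filename
```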
Cancer screening with magnetic resonance imaging (MRI) is currently recommended for very high risk women. The high variability in the diagnostic accuracy of radiologists analyzing screening MRI examinations of the breast is due, at least in part, to the large amounts of data acquired. This has motivated substantial research towards the development of computer-aided diagnosis (CAD) systems for breast MRI which can assist in the diagnostic process by acting as a second reader of the examinations. This retrospective study was performed on 184 benign and 49 malignant lesions detected in a prospective MRI screening study of high risk women at Sunnybrook Health Sciences Centre. A method for performing semi-automatic lesion segmentation based on a supervised learning formulation was compared with the enhancement threshold based segmentation method in the context of a computer-aided diagnostic system. The results demonstrate that the proposed method can assist in providing increased separation between malignant and radiologically suspicious benign lesions. Separation between malignant and benign lesions based on margin measures improved from a receiver operating characteristic (ROC) curve area of 0.63 to 0.73 when the proposed segmentation method was compared with the enhancement threshold, representing a statistically significant improvement. Separation between malignant and benign lesions based on dynamic measures improved from a ROC curve area of 0.75 to 0.79 when the proposed segmentation method was compared to the enhancement threshold, also representing a statistically significant improvement. The proposed method has potential as a component of a computer-aided diagnostic system.
Computer-aided diagnosis; Magnetic resonance imaging; Breast; Cancer; Supervised learning; Pattern recognition
Picture archiving and communication systems (PACS) play a critical role in radiology. This paper presents the criteria important to PACS administrators for selecting a PACS. A set of criteria are identified and organized into an integrative hierarchical framework. Survey responses from 48 administrators are used to identify the relative weights of these criteria through an analytical hierarchy process. The five main dimensions for PACS selection, in order of importance, are system continuity and functionality, system performance and architecture, user interface for workflow management, user interface for image manipulation, and display quality. Among the subdimensions, the highest weights were assigned to security, backup, and continuity; tools for continuous performance monitoring; support for multispecialty images; and voice recognition/transcription. PACS administrators’ preferences were generally in line with previously reported results for radiologists. Both groups assigned the highest priority to ensuring business continuity and preventing loss of data through features such as security, backup, downtime prevention, and tools for continuous PACS performance monitoring. PACS administrators’ next highest priorities were support for multispecialty images, image retrieval speeds from short-term and long-term storage, real-time monitoring, and architectural issues of compatibility and integration with other products. Thus, next to ensuring business continuity, administrators’ focus was on issues that impact their ability to deliver services and support. On the other hand, radiologists gave high priorities to voice recognition, transcription, and reporting; structured reporting; and convenience and responsiveness in manipulation of images. Thus, radiologists’ focus appears to be on issues that may impact their productivity, effort, and accuracy.
Picture archiving and communication system; PACS; Analytical hierarchy process; AHP; RIS; Structured reporting; Voice recognition; Transcription; Open systems; Proprietary systems; Display quality; System continuity; Security; Backup; Recovery; Downtime prevention; PACS performance monitoring; Configuration; Upgrade; Cardiology images; Pathology images; System architecture and performance; User interface for image manipulation; User interface workflow management; Worklist management
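The analytical hierarchy process derives criterion weights from a reciprocal pairwise-comparison matrix. A common geometric-mean approximation of the principal-eigenvector weights is sketched below; this illustrates the AHP weighting step in general, not the survey's exact computation:

```python
def ahp_weights(pairwise):
    # pairwise[i][j]: how much more important criterion i is than j (reciprocal matrix)
    n = len(pairwise)
    geo = []
    for row in pairwise:
        prod = 1.0
        for v in row:
            prod *= v
        geo.append(prod ** (1.0 / n))  # row geometric mean
    total = sum(geo)
    return [g / total for g in geo]   # normalize so the weights sum to 1
```

For a perfectly consistent matrix the geometric-mean weights coincide with the eigenvector weights; for survey data they are a close, widely used approximation.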
We tested the accuracy and efficiency of a novel automated program capable of extracting 15 cardiac computed tomography angiography (CTA) parameters from clinical CTA reports. Five hundred cardiac CTA reports were retrospectively collected and processed. All reports were pre-populated with a structured template per guideline. The program extracted 15 parameters with high accuracy (97.3 %) and efficiency (84 s). This program may be used at other institutions with similar accuracy if its report format follows the Society of Cardiovascular Computed Tomography (SCCT) guideline recommendation.
Algorithm; Database management; Data extraction; Efficiency; Radiation dose
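Because the reports are pre-populated from a structured template, parameter extraction reduces largely to pattern matching against known field labels. A sketch of that idea is below; the two field names and patterns are hypothetical stand-ins, not the program's actual 15 SCCT-template parameters:

```python
import re

# Hypothetical field patterns -- the real program targets 15 SCCT-template fields
FIELD_PATTERNS = {
    "calcium_score": r"calcium score:\s*(\d+)",
    "lvef_percent": r"LVEF:\s*(\d+)\s*%",
}

def extract_parameters(report_text):
    # Return the first match for each field, or None if the field is absent
    out = {}
    for name, pattern in FIELD_PATTERNS.items():
        m = re.search(pattern, report_text, re.IGNORECASE)
        out[name] = m.group(1) if m else None
    return out
```

The 97.3 % accuracy reported above depends on reports following the template; free-text reports would defeat this kind of pattern-based extractor, which is why the authors condition portability on SCCT-conformant formatting.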
Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. The CBCF utilizes a content-based filtering (CBF) method to enhance existing trainee-case ratings data and then provides final predictions through a collaborative filtering (CF) algorithm. The CBCF algorithm incorporates the advantages of both CBF and CF, while not inheriting the disadvantages of either. The CBCF method is compared with the pure CBF and pure CF approaches using three datasets. The experimental data are then evaluated in terms of the mean absolute error (MAE). Our experimental results show that the CBCF outperforms the pure CBF and CF methods by 13.33 and 12.17 %, respectively, in terms of prediction precision. This also suggests that the CBCF can be used in the development of personalized training systems in radiology education.
Personalized radiology education; Making error prediction; Content-based predictor; Collaborative filtering
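The CBCF idea can be sketched in two steps: fill the sparse trainee-case rating matrix with content-based predictions, then run ordinary neighborhood-based CF on the resulting dense pseudo-ratings. The sketch below assumes a dictionary layout and Pearson-weighted, mean-centered prediction; both are illustrative choices, not the paper's exact formulation:

```python
def pearson(a, b):
    # Pearson correlation between two equal-length rating vectors
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def cbcf_predict(ratings, content, user, item):
    """ratings: trainee -> {case: observed difficulty rating} (sparse).
    content: trainee -> {case: content-based (CBF) prediction} (dense)."""
    items = sorted(content[user])
    # Step 1: densify -- keep observed ratings, fill gaps with CBF predictions
    pseudo = {u: [ratings[u].get(i, content[u][i]) for i in items] for u in ratings}
    # Step 2: mean-centered user-based CF over the pseudo-rating matrix
    idx = items.index(item)
    target = pseudo[user]
    mu_t = sum(target) / len(target)
    num = den = 0.0
    for u, row in pseudo.items():
        if u == user:
            continue
        w = pearson(target, row)
        num += w * (row[idx] - sum(row) / len(row))
        den += abs(w)
    return (mu_t + num / den) if den else content[user][item]
```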
The quantitative, multiparametric assessment of brain lesions requires coregistering parameters derived from different MRI sequences, followed by analysis of the voxel values of the region of interest (ROI) within the sequences and calculated parametric maps, and derivation of multiparametric models to classify the imaging data. There is a need for an intuitive, automated quantitative processing framework that is generalized and adaptable to different clinical and research questions. As such flexible frameworks have not been previously described, we proceeded to construct a quantitative post-processing framework from commonly available software components. Matlab was chosen as the programming/integration environment, and SPM was chosen as the coregistration component. Matlab routines were created to extract and concatenate the coregistration transforms, take the coregistered MRI sequences as inputs, allow specification of the ROI, and store the voxel values in a database for statistical analysis. The functionality of the framework was validated using brain tumor MRI cases. The implementation of this quantitative post-processing framework enables intuitive creation of multiple parameters for each voxel, facilitating near real-time, in-depth voxel-wise analysis. Our initial empirical evaluation of the framework showed increased usage of analyses requiring post-processing and an increased number of simultaneous research activities by clinicians and researchers with non-technical backgrounds. We show that common software components can be utilized to implement an intuitive real-time quantitative post-processing framework, resulting in improved scalability and increased adoption of the post-processing needed to answer important diagnostic questions.
Brain imaging; Computer-Aided Diagnoses (CAD); User interface; Algorithms; Biomedical Image Analysis; Brain Morphology; Digital Image Processing; Digital Imaging and Communications in Medicine (DICOM); Image analysis; MR imaging; Segmentation; Software design; Systems integration
Accurate quantification of bone morphology is important for monitoring the progress of bony deformation in patients with cerebral palsy. The purpose of the study was to develop an automatic bone morphology measurement method using one or two radiographs. The study focused on four morphologic measurements—neck-shaft angle, femoral anteversion, shaft bowing angle, and neck length. Fifty-four three-dimensional (3D) geometrical femur models were generated from the computed tomography (CT) of cerebral palsy patients. Principal component analysis was performed on the combined data of the geometrical femur models and manual measurements of the four morphologic quantities to generate a statistical femur model. 3D–2D registration of the statistical femur model to the radiographs then computes the four morphologic measurements of the femur automatically. The prediction performance was tested by means of leave-one-out cross-validation and was quantified by the intraclass correlation coefficient (ICC) and by measuring the absolute differences between automatic prediction from two radiographs and manual measurements using original CT images. For the neck-shaft angle, femoral anteversion, shaft bowing angle, and neck length, the ICCs were 0.812, 0.960, 0.834, and 0.750, respectively, and the mean absolute differences were 2.52°, 2.85°, 0.92°, and 1.88 mm, respectively. Four important dimensions of the femur could be predicted from two views with very good agreement with manual measurements from CT and hip radiographs. The proposed method can help young patients avoid large radiation exposure from CT, and their femoral deformities can be quantified robustly and effectively from one or two radiographs.
Femoral morphology; Cerebral palsy; Automatic morphology quantification; Statistical shape model
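The statistical femur model is built by applying principal component analysis to aligned shape (and measurement) vectors so that the dominant modes capture population variation. A toy sketch of extracting the first principal component via power iteration on 2-D points is below; a real statistical shape model operates on thousands of 3-D coordinates, so this only illustrates the PCA building block:

```python
def principal_component(data, iters=100):
    # data: list of equal-length shape vectors (here, 2-D points for illustration)
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - mean[j] for j in range(d)] for row in data]
    # power iteration on the (unnormalized) covariance: v <- X^T X v, renormalized
    v = [1.0] * d
    for _ in range(iters):
        w = [0.0] * d
        for row in X:
            s = sum(row[j] * v[j] for j in range(d))
            for j in range(d):
                w[j] += s * row[j]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v
```

Projecting a new shape onto the leading components gives the low-dimensional parameters that the 3D–2D registration optimizes against the radiographs.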
The color error in images taken by digital cameras is evaluated with respect to its sensitivity to the image capture conditions. A parametric study was conducted to investigate the dependence of image color error on camera technology, illumination spectra, and lighting uniformity. The measurement conditions were selected to simulate the variation that might be expected in typical telemedicine situations. Substantial color errors were observed, depending on the measurement conditions. Several image post-processing methods were also investigated for their effectiveness in reducing the color errors. The results of this study quantify the level of color error that may occur in the digital camera image capture process, and provide guidance for improving the color accuracy through appropriate changes in that process and in post-processing.
Color error; Camera; Color difference; Color correction; Telemedicine
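Color error between a captured patch and its reference is commonly quantified as a Euclidean distance in CIELAB space (the CIE76 ΔE*ab). The abstract does not state which color-difference formula was used, so the sketch below shows only the simplest variant:

```python
def delta_e_cie76(lab1, lab2):
    # Euclidean distance in CIELAB (L*, a*, b*); a delta-E around 2.3 is often
    # cited as a just-noticeable color difference
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```

Newer formulas such as CIEDE2000 weight lightness, chroma, and hue differences separately and correlate better with perception, at the cost of a more involved computation.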
The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into “blocks” within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert respective data types and allow intercommunication. As such, a SimITK “Virtual Block” has been developed that serves as a wrapper around an ITK class and is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information that needs to be refined from an initial state prior to being reflected within the final block representation. The primary result of the SimITK wrapping procedure is a set of Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.
Software design; Image processing; Image segmentation; Image registration; Visual programming
Picture Archive and Communication System (PACS) is a globally adopted concept and plays a fundamental role in patient care flow within healthcare institutions. However, the deployment of medical imaging repositories over multiple sites still brings several practical challenges, namely those related to operation and management (O&M). This paper describes a Web-based centralized console that provides remote monitoring, testing, and management over multiple geo-distributed PACS. The system allows the PACS administrator to define any kind of service or operation, reducing the need for local technicians and providing a 24/7 monitoring solution.
Medical Imaging; PACS Management; DICOM Monitoring; PACS Testing; Geo-Distributed PACS
This study aims to evaluate whether the distribution of vessels inside and adjacent to the tumor region at three-dimensional (3-D) power Doppler ultrasonography (US) can be used for the differentiation of benign and malignant breast tumors. 3-D power Doppler US images of 113 solid breast masses (60 benign and 53 malignant) were used in this study. Blood vessels within and adjacent to the tumor were estimated individually in the 3-D power Doppler US images for differential diagnosis. Six features were evaluated: volume of vessels, vascularity index, volume of tumor, vascularity index in tumor, vascularity index in normal tissue, and vascularity index in the surrounding region of the tumor within 2 cm. A neural network was then used to classify tumors by using these vascular features. Receiver operating characteristic (ROC) curve analysis and Student’s t test were used to estimate the performance. All six proposed vascular features were statistically significant (p < 0.001) for classifying the breast tumors as benign or malignant. The Az value (area under the ROC curve) for the classification result was 0.9138. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the diagnosis performance based on all six proposed features were 82.30 (93/113), 86.79 (46/53), 78.33 (47/60), 77.97 (46/59), and 87.04 % (47/54), respectively. The p value for the difference in Az between the proposed method and the conventional vascularity index method (z test) was 0.04.
3-D ultrasound; Power Doppler ultrasound; Breast tumor; Vascularity
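A vascularity index of the kind evaluated above is the fraction of voxels within a region of interest that power Doppler marks as vessel. A minimal sketch, with both masks represented as flat 0/1 voxel lists (the layout is an assumption; the study works on full 3-D volumes):

```python
def vascularity_index(vessel_mask, region_mask):
    # Fraction of region voxels that the power Doppler vessel mask covers
    region = sum(region_mask)
    vessel = sum(1 for v, r in zip(vessel_mask, region_mask) if v and r)
    return vessel / region if region else 0.0
```

Evaluating the same ratio over the tumor mask, a normal-tissue mask, and a 2-cm surrounding shell yields the separate per-region indices fed to the neural network.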
Diabetic maculopathy is one of the retinal abnormalities in which a diabetic patient suffers severe vision loss due to the affected macula. It affects the central vision of the person and causes blindness in severe cases. In this article, we propose an automated medical system for the grading of diabetic maculopathy that will assist ophthalmologists in early detection of the disease. The proposed system extracts the macula from the digital retinal image using the vascular structure and optic disc location. It creates a binary map of possible exudate regions using filter banks and formulates a detailed feature vector for all regions. The system applies a Gaussian mixture model-based classifier to grade the retinal image into different stages of maculopathy by using the macula coordinates and the exudate feature set. The evaluation of the proposed system was performed using publicly available standard retinal image databases. The results of our system were compared with other methods in the literature in terms of sensitivity, specificity, positive predictive value, and accuracy. Our system gives higher values than the others on the same databases, which makes it suitable as an automated medical system for grading of diabetic maculopathy.
Diabetic maculopathy; Exudates; Macula; Feature extraction; Gaussian mixture model
Efficient segmentation of tumors in medical images is of great practical importance in early diagnosis and radiation treatment planning. This paper proposes a novel semi-automatic segmentation method based on population and individual statistical information to segment brain tumors in magnetic resonance (MR) images. First, high-dimensional image features are extracted. Neighborhood components analysis is used to learn two optimal distance metrics, which contain population and patient-specific information, respectively. The probability of each pixel belonging to the foreground (tumor) or the background is estimated by the k-nearest neighbor classifier under the learned optimal distance metrics. A cost function for segmentation is constructed from these probabilities and optimized using graph cuts. Finally, some morphological operations are performed to improve the achieved segmentation results. Our dataset consists of 137 brain MR images, including 68 for training and 69 for testing. The proposed method overcomes segmentation difficulties caused by the uneven gray-level distribution of the tumors and can even achieve satisfactory results when the tumors have fuzzy edges. Experimental results demonstrate that the proposed method is robust for brain tumor segmentation.
Neighborhood components analysis; k-nearest neighborhood; Graph cuts; Distance metric learning
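The per-pixel probability step above can be sketched as a kNN vote under a pluggable distance metric. Plain squared Euclidean distance stands in here for the NCA-learned metric, and k and the data are illustrative; the point is only that the foreground probability is the fraction of tumor-labeled neighbors:

```python
def knn_probability(x, train, labels, k=3, metric=None):
    # metric defaults to squared Euclidean; NCA would plug in a learned metric here
    if metric is None:
        metric = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(range(len(train)), key=lambda i: metric(x, train[i]))[:k]
    # fraction of the k nearest training pixels labeled tumor (label 1)
    return sum(labels[i] for i in nearest) / k
```

These probabilities then enter the data term of the graph-cut energy, with the smoothness term handled by the cut itself.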
Natural language processing (NLP) techniques to extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is opportunity for implementing NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR to potentially facilitate evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system which extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implement our NLP in an HTML5 web application to demonstrate a proof-of-concept feedback NLP system which retrieves bone fracture knowledge in real time.
Natural language processing; Decision support; Information retrieval of bone fractures
Craniofacial disorders are routinely diagnosed using computed tomography imaging. Corrective surgery is often performed early in life to restore the skull to a more normal shape. In order to quantitatively assess the shape change due to surgery, we present an automated method for intracranial space segmentation. The method utilizes a two-stage approach which firstly initializes the segmentation with a cascade of mathematical morphology operations. This segmentation is then refined with a level-set-based approach that ensures that low-contrast boundaries, where bone is absent, are completed smoothly. We demonstrate this method on a dataset of 43 images and show that the method produces consistent and accurate results.
Intracranial space segmentation; Computed tomography; Level set methods; Mathematical morphology
We present an atlas-based registration method for bones segmented from quantitative computed tomography (QCT) scans, with the goal of mapping their interior bone mineral densities (BMDs) volumetrically. We introduce a new type of deformable atlas, called subdivision-embedded atlas, which consists of a control grid represented as a tetrahedral subdivision mesh and a template bone surface embedded within the grid. Compared to a typical lattice-based deformation grid, the subdivision control grid possesses a relatively small degree of freedom tailored to the shape of the bone, which allows efficient fitting onto subjects. Compared with previous subdivision atlases, the novelty of our atlas lies in the addition of the embedded template surface, which further increases the accuracy of the fitting. Using this new atlas representation, we developed an efficient and fully automated pipeline for registering atlases of 12 tarsal and metatarsal bones to a segmented QCT scan of a human foot. Our evaluation shows that the mapping of BMD enabled by the registration is consistent for bones in repeated scans, and the regional BMD automatically computed from the mapping is not significantly different from expert annotations. The results suggest that our improved subdivision-based registration method is a reliable, efficient way to replace manual labor for measuring regional BMD in foot bones in QCT scans.
Bone mineral density; Registration; Atlas; Subdivision
The purpose of this study is to provide a novel approach for measuring the spatial distribution of wall shear stress (WSS) in the common carotid artery in vivo. WSS distributions were determined by digital image processing of color Doppler flow imaging (CDFI) in 50 healthy volunteers. In order to evaluate the feasibility of the spatial distribution, the mean values of the WSS distribution were compared to the results of the conventional WSS calculation method (Hagen–Poiseuille formula). In our study, the mean value of the WSS distribution from the 50 healthy volunteers was (6.91 ± 1.20) dyne/cm2, while it was (7.13 ± 1.24) dyne/cm2 by the Hagen–Poiseuille approach. The difference was not statistically significant (t = −0.864, p = 0.604). Hence, the feasibility of the spatial distribution of WSS was demonstrated. Moreover, this novel approach could provide a three-dimensional distribution of shear stress and a fusion image of shear stress with the ultrasonic image for each volunteer, making WSS “visible”. In conclusion, the spatial distribution of WSS could be used for WSS calculation in vivo, and it provides more detailed values of the WSS distribution than the Hagen–Poiseuille formula.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-012-9505-3) contains supplementary material, which is available to authorized users.
Atherosclerosis; Common carotid artery; Wall shear stress; Color Doppler flow imaging; DICOM
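The reference Hagen–Poiseuille estimate used for comparison computes wall shear stress from blood viscosity, mean velocity, and vessel radius as τ_w = 4μV/R (valid for fully developed laminar flow with a parabolic profile). A direct transcription follows; the numeric values in the test are illustrative, not the study's measurements:

```python
def wss_poiseuille(viscosity_poise, mean_velocity_cm_s, radius_cm):
    # tau_w = 4 * mu * V / R; with mu in poise (dyne*s/cm^2), V in cm/s, and
    # R in cm, the result is in dyne/cm^2
    return 4.0 * viscosity_poise * mean_velocity_cm_s / radius_cm
```

The spatial-distribution method replaces the single (V, R) pair with per-voxel velocity gradients from CDFI, which is what makes a 3-D WSS map possible.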
Automatic tools for detection and identification of the lung and lesions from high-resolution CT (HRCT) are becoming increasingly important both for diagnosis and for delivering high-precision radiation therapy. However, development of robust and interpretable classifiers still presents a challenge, especially in the case of non-small cell lung carcinoma (NSCLC) patients. In this paper, we have attempted to devise such a classifier by extracting fuzzy rules from texture-segmented regions of HRCT images of NSCLC patients. A fuzzy inference system (FIS) was constructed starting from a feature extraction procedure applied to overlapping regions from the same organs, deriving simple if–then rules so that more linguistically interpretable decisions can be implemented. The proposed method was tested on 138 regions extracted from CT scan images acquired from patients with lung cancer. Assuming two classes of tissue, C1 (healthy) and C2 (lesion), as negative and positive, respectively, preliminary results show an AUC = 0.98 for lesions and AUC = 0.93 for healthy tissue, with an optimal operating condition of sensitivity = 0.96 and specificity = 0.98 for lesions, and sensitivity = 0.99 and specificity = 0.94 for healthy tissue. Finally, the following results were obtained: false-negative rate (FNR) = 6 % (C1), FNR = 2 % (C2), false-positive rate (FPR) = 4 % (C1), FPR = 3 % (C2), true-positive rate (TPR) = 94 % (C1), and TPR = 98 % (C2).
NSCLC; IGRT; FIS; Rule-based classification
The aim was to develop a generic open-source MRI perfusion analysis tool for quantitative parameter mapping to be used in a clinical workflow, along with methods for quality management of perfusion data. We implemented a classic, pixel-by-pixel deconvolution approach to quantify T1-weighted contrast-enhanced dynamic MR imaging (DCE-MRI) perfusion data as an OsiriX plug-in. It features parallel computing capabilities and an automated reporting scheme for quality management. Furthermore, by our implementation design, it is easily extendable to other perfusion algorithms. Obtained results are saved as DICOM objects and directly added to the patient study. The plug-in was evaluated on ten MR perfusion data sets of the prostate and a calibration data set by comparing the obtained parametric maps (plasma flow, volume of distribution, and mean transit time) to a widely used reference implementation in IDL. For all data, parametric maps could be calculated, and the plug-in worked correctly and stably. On average, deviations of 0.032 ± 0.02 ml/100 ml/min for the plasma flow, 0.004 ± 0.0007 ml/100 ml for the volume of distribution, and 0.037 ± 0.03 s for the mean transit time were observed between our implementation and the reference implementation. By using computer hardware with eight CPU cores, calculation time could be reduced by a factor of 2.5. We successfully developed an open-source OsiriX plug-in for T1-DCE-MRI perfusion analysis in a routine quality-managed clinical environment. Using model-free deconvolution, it allows for perfusion analysis in various clinical applications. With our plug-in, information about measured physiological processes can be obtained and transferred into clinical practice.
Algorithms; Perfusion; Image processing; Computer-assisted; Magnetic resonance imaging
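Model-free deconvolution recovers the tissue residue function R(t) from the measured tissue curve and the arterial input function (AIF), since tissue(t) is the discrete convolution of the AIF with R scaled by plasma flow. On noiseless data the lower-triangular Toeplitz system can be solved exactly by forward substitution, as sketched below; real implementations regularize (e.g. truncated SVD) because the problem is ill-posed under noise, so this is an illustration of the model, not the plug-in's algorithm:

```python
def deconvolve(aif, tissue, dt=1.0):
    # tissue[i] = dt * sum_{j<=i} aif[i-j] * r[j]  -> lower-triangular Toeplitz
    # system, solved by forward substitution (requires aif[0] != 0; unregularized)
    n = len(tissue)
    r = [0.0] * n
    for i in range(n):
        s = sum(dt * aif[i - j] * r[j] for j in range(i))
        r[i] = (tissue[i] - s) / (dt * aif[0])
    return r

def plasma_flow(residue):
    # In model-free analysis, plasma flow is taken as the peak of the
    # (flow-scaled) residue function
    return max(residue)
```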
High-resolution large datasets were acquired to improve the understanding of murine bone physiology. The purpose of this work is to present the challenges and solutions in segmenting and visualizing bone in such large datasets acquired using micro-CT scans of mice. The analyzed dataset is more than 50 GB in size, with more than 6,000 slices of 2,048 × 2,048 pixels. The study was performed to automatically measure the bone mineral density (BMD) of the entire skeleton. A global Renyi entropy (GREP) method was initially used for bone segmentation. This method consistently oversegmented the skeletal region. A new method called adaptive local Renyi entropy (ALREP) is proposed to improve the segmentation results. To study the efficacy of the ALREP, manual segmentation was performed. Finally, a specialized high-end remote visualization system along with the software VirtualGL was used to perform remote rendering of this large dataset. It was determined that GREP overestimated the bone cross-section by around 30 % compared with ALREP. The manual segmentation process took 6,300 min for 6,300 slices, while ALREP took only 150 min. Automatic image processing with the ALREP method may facilitate BMD measurement of the entire skeleton in a significantly reduced time compared with the manual process.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-012-9498-y) contains supplementary material, which is available to authorized users.
Segmentation; Visualization; High resolution; Bone mineral density
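Renyi entropy thresholding selects the gray level that maximizes the sum of the Renyi entropies of the foreground and background histogram partitions. A sketch of the global variant on a gray-level histogram follows (entropy order alpha = 2 is an assumption; the adaptive local variant would apply the same criterion per image block):

```python
import math

def renyi_threshold(hist, alpha=2.0):
    # hist: gray-level histogram; returns the bin index that best splits it
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        pa = sum(p[:t])
        pb = 1.0 - pa
        if pa <= 0 or pb <= 0:
            continue
        # Renyi entropy H_alpha of each partition, renormalized by its mass
        ha = math.log(sum((pi / pa) ** alpha for pi in p[:t] if pi > 0)) / (1 - alpha)
        hb = math.log(sum((pi / pb) ** alpha for pi in p[t:] if pi > 0)) / (1 - alpha)
        if ha + hb > best_h:
            best_h, best_t = ha + hb, t
    return best_t
```

On a bimodal histogram the criterion places the threshold in the valley between the bone and soft-tissue modes, which is exactly where a global method goes wrong when local contrast varies across a 50 GB volume.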
Atherosclerosis is one of the most widespread cardiovascular diseases today. Although it may go unnoticed for years, it can suddenly trigger severe illnesses such as stroke, embolism, or ischemia. Therefore, early detection of atherosclerosis can prevent the adult population from suffering more serious pathologies. The intima–media thickness (IMT) of the common carotid artery (CCA) has been used as an early and reliable indicator of atherosclerosis for years. The IMT is manually computed from ultrasound images, a process that can be repeated as many times as necessary (over different ultrasound images of the same patient) but that is also prone to errors. With the aim of reducing the inter-observer variability and the subjectivity of the measurement, a fully automatic computer-based method based on ultrasound image processing and a frequency-domain implementation of active contours is proposed. The images used in this work were obtained with the same ultrasound scanner (Philips iU22 Ultrasound System) but with different spatial resolutions. The proposed solution extracts not only the IMT but also the CCA diameter, which is less relevant than the IMT for predicting future atherosclerosis evolution but is a statistically interesting piece of information for doctors determining cardiovascular risk. The results of the proposed method have been validated by doctors and are visually and numerically satisfactory when the medical measurements are taken as ground truth, with a maximum deviation of only 3.4 pixels (0.0248 mm) for the IMT.
Automated measurement; Image segmentation; Ultrasound; Intima–media thickness
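For a closed contour, the internal-energy part of an active-contour (snake) update can be solved in the frequency domain: the stiffness matrix of a periodic snake is circulant, so the implicit linear solve becomes a per-frequency division after an FFT. The sketch below illustrates that general idea under our own parameter names; it is not the paper's specific formulation.

```python
import numpy as np

def snake_step_fft(x, y, fx, fy, alpha=0.1, beta=0.01, tau=1.0):
    """One implicit Euler step of a closed snake, solved via FFT.

    alpha: elasticity weight, beta: rigidity weight, tau: step size,
    (fx, fy): external forces sampled at the contour points. For a
    periodic contour the internal-energy matrix is circulant, so
    (I + tau*A)^{-1} reduces to dividing each Fourier coefficient by
    1 + tau*(alpha*g2 + beta*g2^2), where g2 is the symbol of the
    second-difference operator.
    """
    n = len(x)
    w = 2.0 * np.pi * np.fft.fftfreq(n)
    g2 = 2.0 - 2.0 * np.cos(w)                 # symbol of -d^2/ds^2
    denom = 1.0 + tau * (alpha * g2 + beta * g2 ** 2)
    xh = np.fft.fft(x + tau * fx) / denom
    yh = np.fft.fft(y + tau * fy) / denom
    return np.fft.ifft(xh).real, np.fft.ifft(yh).real
```

Iterating this step with zero external force smooths contour noise while barely moving the low-frequency shape, which is why the formulation suits regularizing a jagged intima–media interface.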
In this paper, we propose a novel technique for skull stripping of infant (neonatal) brain magnetic resonance images using prior shape information within a graph cut framework. Skull stripping plays an important role in brain image analysis and is a major challenge for neonatal brain images. Popular methods like the brain surface extractor (BSE) and brain extraction tool (BET) do not produce satisfactory results for neonatal images due to poor tissue contrast, weak boundaries between brain and non-brain regions, and low spatial resolution. Inclusion of prior shape information helps in accurate identification of brain and non-brain tissues. The prior shape information is obtained from a set of labeled training images. The probability of a pixel belonging to the brain is obtained from the prior shape mask and included in the penalty term of the cost function. An extra smoothness term based on gradient information helps identify the weak boundaries between the brain and non-brain regions. Experimental results on real neonatal brain images show that, compared to BET, BSE, and other methods, our method achieves superior segmentation performance for neonatal brain images and comparable performance for adult brain images.
Shape prior; Graph cuts; Neonatal; Brain; MRI; Segmentation; Gradient
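The two ingredients of such a cost function can be sketched concretely: a unary (penalty) term derived from the prior brain-probability mask, and a pairwise (smoothness) weight that decays with the intensity gradient so that cuts are cheap across strong edges. The helper below only builds these terms (solving the resulting min-cut is left to a max-flow library); the function name and parameters are ours, not the paper's.

```python
import numpy as np

def graphcut_terms(intensity, prior_prob, sigma=10.0, lam=1.0, eps=1e-6):
    """Build unary and pairwise costs for a shape-prior graph cut (sketch).

    Unary (t-link) costs penalize labels that disagree with the prior
    brain-probability mask via a negative log-likelihood; the pairwise
    (n-link) weight between horizontal neighbours decays with the
    intensity gradient, making boundaries cheap across strong edges
    and expensive in flat regions. Vertical links are analogous.
    """
    cost_brain = -np.log(prior_prob + eps)        # cost of labeling "brain"
    cost_bg = -np.log(1.0 - prior_prob + eps)     # cost of labeling "non-brain"
    grad = np.abs(np.diff(intensity.astype(float), axis=1))
    w_h = lam * np.exp(-(grad ** 2) / (2.0 * sigma ** 2))
    return cost_brain, cost_bg, w_h
```

Where the prior probability is high, the "brain" t-link is much cheaper than the background one, which is how the shape prior steers the cut even where tissue contrast is poor.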
The development of teleradiology brings the convenience of global medical record access, along with concerns over the security of medical images transmitted over an open network. With the prevailing adoption of three-dimensional (3-D) imaging modalities, it is vital to develop a security mechanism that provides large volumes of medical images with privacy and reliability. This paper presents a new and improved method of implementing tamper detection and localization based on a fully reversible digital watermarking scheme for the protection of volumetric DICOM images. The method utilizes the 3-D property of volumetric data to achieve much faster processing at both sender and receiver sides without compromising tamper localization accuracy. The performance of the proposed scheme was evaluated using sample volumetric DICOM images. Results show that the scheme achieved, on average, about 65 % and 72 % reductions in watermarking and dewatermarking processing time, respectively. For cases where the images had been tampered with, the scheme was able to detect and localize the tampered areas with improved localization resolution.
Image authentication; Medical data security; Tamper detection; Watermarking
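The localization idea can be illustrated independently of the embedding step: digest each 3-D block of the volume at the sender, and at the receiver compare recomputed digests against the reference, so any mismatch pinpoints tampering to block resolution. This is only a sketch of the comparison logic under assumed names; the paper's scheme additionally embeds the authentication data reversibly in the image itself.

```python
import hashlib
import numpy as np

def block_digests(volume, block=(8, 8, 8)):
    """Digest each 3-D block of a volume for tamper localization.

    Comparing digests computed at the sender and receiver localizes
    any modification to the resolution of one block.
    """
    bz, by, bx = block
    z, y, x = volume.shape
    digests = {}
    for i in range(0, z, bz):
        for j in range(0, y, by):
            for k in range(0, x, bx):
                blk = volume[i:i + bz, j:j + by, k:k + bx]
                digests[(i, j, k)] = hashlib.sha256(blk.tobytes()).hexdigest()
    return digests

def tampered_blocks(reference, received):
    """Return the origin coordinates of blocks whose digests differ."""
    return [key for key in reference if reference[key] != received.get(key)]
```

Because every block is hashed independently, the comparison parallelizes over the volume, which is consistent with the faster processing the 3-D scheme reports.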
The objectives are (1) to introduce a new concept for building a quantitative computed tomography (QCT) reporting system using optical character recognition (OCR) and a macro program and (2) to illustrate the practical use of the QCT reporting system in the radiology reading environment. The reporting system was created with open-source OCR software and an open-source macro program. The main module was designed to apply OCR to QCT reports during the radiology reading process. The principal steps are as follows: (1) save a QCT report as a graphic file, (2) recognize the characters in the image as text, (3) extract the T scores from the text, (4) perform error correction, (5) reformat the values into the QCT radiology reporting template, and (6) paste the report into the electronic medical record (EMR) or picture archiving and communication system (PACS). The accuracy of the OCR was tested on randomly selected QCT reports. The system successfully performed OCR-based reporting of QCT images, and the diagnosis of normal, osteopenia, or osteoporosis was also determined. OCR error correction was done with an AutoHotkey-coded module. The T scores of the femoral neck and lumbar vertebrae were recognized with accuracies of 100 % and 95.4 %, respectively. A convenient QCT reporting system can thus be established using open-source OCR software and an open-source macro program, and the method can be easily adapted for other QCT applications and PACS/EMR systems.
Computer in medicine; PACS; OCR; QCT; Reading room
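Step (3), extracting T scores from the OCR text, and the subsequent diagnosis can be sketched with a regular expression and the standard WHO-style T-score cutoffs. The pattern and function names below are illustrative assumptions; real OCR output would likely need site-specific variants and the error-correction step the abstract describes.

```python
import re

# Assumed pattern: matches strings like "T-score: -2.7" or "T score = 0.5"
T_SCORE = re.compile(r"T[- ]?score\s*[:=]?\s*(-?\d+\.?\d*)", re.IGNORECASE)

def extract_t_scores(ocr_text):
    """Pull T-score values out of OCR'd report text."""
    return [float(m) for m in T_SCORE.findall(ocr_text)]

def classify(t_score):
    """WHO-style T-score classification used for the report wording."""
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"
```

The extracted values would then be reformatted into the reporting template and pasted into the EMR or PACS by the macro program.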