Film-based radiographs are still used for teaching in a conference format, which presents viewing challenges among other problems. In the age of cloud computing, with online server storage readily available, this information could be used far more effectively if it were digitized. However, digitizing film-based radiographs and making them available in the cloud is not as easy as it seems. To address the digitization of the film-based radiograph libraries in our radiology department, we evaluated several options and chose a consumer-grade scanner based on price, resolution, shades of gray, built-in transparency function, and physical attributes. Our goal was to digitize the film-based teaching files so they could be stored in a digital file locker such as Google Picasa for organization and quick access later. These files would be continuously updated by residents in a Google document, called the "Living Document" for its continuous expandability. This method allows even the smallest radiology department to use modern technology to gain access to the valuable information stored in film-based radiographs and gives every resident the opportunity to benefit from it.
Teaching; Radiology teaching files; Radiography; Internship and residency; Internet technology; Image libraries; Digital radiography; Digital libraries; Digital imaging; Digital image processing; Digital image management; Clinical image viewing
The goal of this study was to develop and validate text-mining algorithms to automatically identify radiology reports containing critical results including tension or increasing/new large pneumothorax, acute pulmonary embolism, acute cholecystitis, acute appendicitis, ectopic pregnancy, scrotal torsion, unexplained free intraperitoneal air, new or increasing intracranial hemorrhage, and malpositioned tubes and lines. The algorithms were developed using rule-based approaches and designed to search for common words and phrases in radiology reports that indicate critical results. Certain text-mining features were utilized such as wildcards, stemming, negation detection, proximity matching, and expanded searches with applicable synonyms. To further improve accuracy, the algorithms utilized modality and exam-specific queries, searched under the “Impression” field of the radiology report, and excluded reports with a low level of diagnostic certainty. Algorithm accuracy was determined using precision, recall, and F-measure using human review as the reference standard. The overall accuracy (F-measure) of the algorithms ranged from 81% to 100%, with a mean precision and recall of 96% and 91%, respectively. These algorithms can be applied to radiology report databases for quality assurance and accreditation, integrated with existing dashboards for display and monitoring, and ported to other institutions for their own use.
Algorithms; Communication; Critical Results Reporting; Data Mining; Natural Language Processing; Quality Assurance; Quality Control
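A minimal sketch of the rule-based report search described in the abstract above: restrict matching to the "Impression" field, expand each critical result into synonym phrases, and apply a crude proximity-based negation check plus an uncertainty exclusion. The phrase lists and the 30-character negation window are illustrative assumptions, not the published queries.

```python
import re

# Hypothetical phrase lists: the published algorithms use curated,
# modality- and exam-specific synonym sets; these terms are illustrative only.
CRITICAL_TERMS = {
    "pneumothorax": ["pneumothorax"],
    "pulmonary_embolism": ["pulmonary embolism", "pulmonary emboli"],
    "free_air": ["free intraperitoneal air", "pneumoperitoneum"],
}
NEGATION_CUES = ["no ", "without ", "negative for ", "ruled out "]
UNCERTAINTY_CUES = ["cannot exclude", "possible", "questionable"]

def impression(report):
    """Restrict the search to the 'Impression' field, as the abstract describes."""
    m = re.search(r"impression:(.*)", report, re.IGNORECASE | re.DOTALL)
    return m.group(1).lower() if m else ""

def flag_critical(report):
    """Return the set of critical-result labels asserted in the impression."""
    text = impression(report)
    hits = set()
    for label, phrases in CRITICAL_TERMS.items():
        for p in phrases:
            m = re.search(rf"\b{re.escape(p)}\b", text)
            if not m:
                continue
            # crude proximity matching: look for a negation cue just before the phrase
            window = text[max(0, m.start() - 30):m.start()]
            if any(cue in window for cue in NEGATION_CUES):
                continue
            # exclude reports with a low level of diagnostic certainty
            if any(cue in text for cue in UNCERTAINTY_CUES):
                continue
            hits.add(label)
    return hits
```

A production version would add wildcards, stemming, and per-exam queries, as the abstract notes; this sketch only shows how impression-field restriction and negation scoping combine.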
The aim of this study was to determine the feasibility of automated detection of adrenal nodules, a common finding on CT, using a newly developed search engine that mines dictated radiology reports. To ensure Health Insurance Portability and Accountability Act compliance, we utilized a preexisting de-identified database of 32,974 CT reports from February 1, 2009 to February 28, 2010. Common adrenal descriptors from 29 staff radiologists were used to develop an automated rule-based algorithm targeting adrenal findings. Each sentence within the free text of reports was searched with an adapted NegEx negation algorithm. The algorithm was refined using a 2-week test period of reports and subsequently validated using a 6-week period. Manual review of the 3,693 CT reports in the validation period identified 222 positive reports while the algorithm detected 238 positive reports. The algorithm identified one true positive report missed on manual review for a total of 223 true positive reports. This resulted in a precision of 91% (217 of 238) and a recall of 97% (217 of 223). The sensitivity of the query was 97.3% (95% confidence interval (CI), 93.9–98.9%), and the specificity was 99.3% (95% CI, 99.1–99.6%). The positive predictive value of the algorithm was 91.0% (95% CI, 86.6–94.3%), and the negative predictive value was 99.8% (95% CI, 99.6–99.9%). The prevalence of true positive adrenal findings identified by the query (7.1%) was nearly identical to the true prevalence (7.2%). Automated detection of language describing common findings in imaging reports, such as adrenal nodules on CT, is feasible.
Data mining; Radiology information systems (RIS); Natural language; Processing; Computed tomography; Radiology reporting; Adrenal nodules; Negation algorithm; Unstructured radiology reports
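The sentence-by-sentence NegEx-style search described above can be sketched as follows. This is a much-simplified stand-in for the adapted NegEx algorithm: the trigger list is a tiny illustrative subset, and the five-token scope window is an assumption standing in for NegEx's real scope rules.

```python
import re

NEG_TRIGGERS = ["no", "without", "denies", "negative for"]  # tiny illustrative subset

def sentences(text):
    # split on sentence-ending punctuation followed by whitespace,
    # so decimals such as "1.2 cm" stay intact
    return [s for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def affirmed_mentions(report, term):
    """Return sentences mentioning `term` with no negation trigger among the
    five tokens preceding it in the same sentence (simplified NegEx scoping)."""
    term = term.lower()
    out = []
    for s in sentences(report.lower()):
        idx = s.find(term)
        if idx < 0:
            continue
        window = " ".join(s[:idx].split()[-5:])
        if any(re.search(rf"\b{re.escape(t)}\b", window) for t in NEG_TRIGGERS):
            continue
        out.append(s)
    return out
```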
Previous studies suggest that cone beam computerized tomography (CBCT) images could provide reliable information regarding the fate of bone grafts in the maxillofacial region, but no systematic information regarding the standardization of CBCT settings and properties is available, i.e., there is a lack of information on how the images were generated, exported, and analyzed when bone grafts were evaluated. The aim of this study was to (1) systematically review which types of CBCT-based DICOM images have been used for the evaluation of the fate of bone grafts in humans and (2) use software suggested in the literature to test DICOM-based data sets, exemplifying the effect of variation in selected parameters (windowing/contrast control, plane definition, slice thickness, and number of measured slices) on the final image characteristics. The review identified three publications that used CBCT to evaluate maxillofacial bone grafts in humans and in which the methodology/results comprised at least one of the expected outcomes (image acquisition protocol, image reconstruction, and image generation information). The experimental part shows how the information missing from the retrieved papers can influence the reproducibility and validity of image measurements. Although the use of CBCT-based images for the evaluation of bone grafts in humans has become more common, this is not reflected in better standardization of the resulting studies. Parameters regarding image acquisition and reconstruction, while important, are not properly addressed in the literature, compromising the reproducibility and scientific impact of the studies.
CBCT; Bone graft; Windowing; Plane definition; Slice thickness
Boundary extraction of carpal bone images is a critical operation of an automatic bone age assessment system, since the contrast between bony structure and soft tissue is very poor. In this paper, we present an edge following technique for boundary extraction in carpal bone images and apply it to assess bone age in young children. Our proposed technique can detect the boundaries of carpal bones in X-ray images by using information from the vector image model and the edge map. Feature analysis of the carpal bones can reveal important information for bone age assessment. Five features for bone age assessment are calculated from the boundary extraction result of each carpal bone. All features are taken as input into the support vector regression (SVR) that assesses the bone age. We compare the SVR with neural network regression (NNR). We use 180 carpal bone images from a digital hand atlas to assess the bone age of young children from 0 to 6 years old. Leave-one-out cross validation is used for testing the efficiency of the techniques. The opinions of the skilled radiologists provided in the atlas are used as the ground truth in bone age assessment. The SVR provides more accurate bone age assessment results than the NNR, and its results are very close to the bone age assessments of the skilled radiologists.
Boundary extraction; Bone age; Carpal bones; Edge following; Support vector machine
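The abstract above does not name the five features computed from each extracted boundary, so the sketch below uses generic shape descriptors (area by the shoelace formula, perimeter, and compactness) purely as illustrative stand-ins for the kind of per-bone features that could feed the SVR.

```python
import math

def polygon_area(pts):
    """Shoelace formula over a closed boundary given as (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def perimeter(pts):
    """Total edge length of the closed boundary."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def shape_features(pts):
    """Generic shape descriptors of one extracted carpal bone boundary
    (hypothetical stand-ins for the paper's five features)."""
    a, p = polygon_area(pts), perimeter(pts)
    compactness = 4 * math.pi * a / (p * p)  # 1.0 for a circle, smaller when irregular
    return {"area": a, "perimeter": p, "compactness": compactness}
```

Feature vectors of this kind, one per carpal bone, would then be regressed against radiologist-assessed ages.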
The use and benefits of a multimodality approach in the context of breast cancer imaging are discussed, along with fusion techniques that allow multiple images to be viewed simultaneously. Many of these fusion techniques rely on color tables. A genetic algorithm is introduced that generates color tables with desired properties: satisfying the order principle and the rows-and-columns principle, exhibiting perceptual uniformity, and maximizing contrast. The generated 2D color tables can be used for displaying fused datasets. The advantage the proposed method has over other techniques is its ability to consider a much larger set of possible color tables, ensuring that the best one is found. We asked radiologists to perform a set of tasks reading fused PET/MRI breast images obtained using eight different fusion techniques. This preliminary study demonstrates the need for and benefit of a joint display by estimating the inaccuracies incurred when using a side-by-side display. The study suggests that the color tables generated by the genetic algorithm are good choices for fusing MR and PET images. It is interesting to note that popular techniques such as Fire/Gray and techniques based on the HSV color space, which are prevalent in the literature and clinical practice, appear to give poorer performance.
Image fusion; Image display; Visual perception; Image perception; Image visualization; Clinical image viewing; Magnetic resonance imaging; Positron emission tomography; Breast; Genetic algorithm; Color mixing; Image enhancement; Artificial intelligence; Image analysis; Human visual system
It is difficult for radiologists to classify pneumoconiosis with small nodules on chest radiographs. Therefore, we have developed a computer-aided diagnosis (CAD) system based on a rule-based plus artificial neural network (ANN) method for distinguishing between normal and abnormal regions of interest (ROIs) selected from chest radiographs with and without pneumoconiosis. The image database consists of 11 normal and 12 abnormal chest radiographs. The abnormal cases included five silicoses, four asbestoses, and three other pneumoconioses. ROIs (matrix size, 32 × 32) were selected from normal and abnormal lungs. We obtained power spectra (PS) by Fourier transform for the frequency analysis. A rule-based method using PS values at 0.179 and 0.357 cycles per millimeter, corresponding to the spatial frequencies of nodular patterns, was employed for identification of obviously normal or obviously abnormal ROIs. The ANN was then applied for classification of the remaining normal and abnormal ROIs, which were not classified as obviously abnormal or normal by the rule-based method. The classification performance was evaluated by the area under the receiver operating characteristic curve (Az value). The Az value was 0.972 ± 0.012 for the rule-based plus ANN method, which was larger than that of 0.961 ± 0.016 for the ANN method alone (P ≤ 0.15) and that of 0.873 for the rule-based method alone. We have developed a rule-based plus pattern recognition technique based on the ANN for classification of pneumoconiosis on chest radiography. Our CAD system based on PS would be useful to assist radiologists in the classification of pneumoconiosis.
Computer-aided diagnosis (CAD); Pneumoconiosis; Chest radiography; Power spectra; Artificial neural network
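The two-stage logic above, where a power-spectrum rule settles the obvious cases and the ANN handles the rest, can be sketched as follows. The paper applies a 2-D Fourier transform to 32 × 32 ROIs; a naive 1-D DFT keeps the sketch short, and the triage thresholds are hypothetical stand-ins for the published cut-offs.

```python
import cmath

def power_spectrum(signal):
    """Naive DFT power spectrum |X[k]|^2 of a 1-D intensity profile.
    (The paper uses a 2-D Fourier transform of 32 x 32 ROIs.)"""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2 for k in range(n)]

def triage(ps_value, t_normal, t_abnormal):
    """Rule-based step: classify obviously normal or obviously abnormal ROIs
    from a PS value at a nodular-pattern frequency; defer the rest to the ANN.
    The thresholds here are hypothetical, not the published values."""
    if ps_value < t_normal:
        return "normal"
    if ps_value > t_abnormal:
        return "abnormal"
    return "defer_to_ann"
```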
This paper presents a fast and efficient method for classifying X-ray images using random forests with a proposed wavelet-based local binary pattern (LBP) descriptor, improving classification performance while reducing training and testing time. Most studies on local binary patterns and their modifications, including the centre-symmetric LBP (CS-LBP), use raw image pixels as descriptors. To classify X-ray images, we first extract local wavelet-based CS-LBP (WCS-LBP) descriptors from local parts of the images to describe their wavelet-domain texture characteristics. We then apply the extracted feature vectors to decision trees to construct random forests, which are ensembles of random decision trees. Using the random forests with local WCS-LBP, we classify each test image into the category with the maximum posterior probability. Compared with other feature descriptors and classifiers, the proposed method shows both improved performance and faster processing time.
X-ray image classification; Random forests; Local binary patterns; Image analysis; Pattern recognition, automated; Decision trees; Diagnostic imaging; Digital image management
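The centre-symmetric LBP mentioned above compares the four opposite neighbour pairs of a 3 × 3 neighbourhood rather than all eight neighbours against the centre, yielding a 4-bit code (16 histogram bins instead of LBP's 256). A minimal sketch on plain pixel values (the paper applies the same operator to wavelet coefficients):

```python
def cs_lbp(patch, t=0.01):
    """Centre-symmetric LBP code of a 3x3 patch (list of lists).
    Each of the 4 centre-symmetric neighbour pairs contributes one bit,
    set when their difference exceeds the small threshold t."""
    # the 8 neighbours in circular order around the centre pixel
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code
```

Histograms of these codes over local image parts form the descriptor fed to the random forest.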
Image processing is essential in the planning and verification of radiotherapy treatments. Before a radiotherapy treatment is applied, dosimetry planning must be performed. Usually, the planning is done by means of an X-ray volumetric analysis using computerized tomography, in which the area to be irradiated is marked out. During the treatment phase, it is necessary to place the patient under the particle accelerator exactly as considered at the dosimetry stage. Coarse alignment is achieved using fiducial markers placed over the patient's skin as external references. Fine alignment is then provided by comparing a digitally reconstructed radiograph (DRR) from the planning stage with a portal image captured by the accelerator at the treatment stage. The preprocessing of DRR and portal images, as well as the minimization of the non-shared information between the two kinds of images, is mandatory for the correct operation of the image registration algorithm. For this purpose, mathematical morphology and image processing techniques have been used. The present work describes a fully automatic method, based on advanced image registration techniques, to calculate more accurately the couch displacement needed to place the patient exactly at the planned position. Preliminary results show a perfect match with the displacement estimated by the physician.
Radiotherapy; Image registration; Image feature enhancement; Biomedical image analysis
This paper presents a novel method that reconstructs any desired 3D image resolution from raw cone-beam CT data. X-ray attenuation through the object is approximated using ridgelet basis functions, which provide multiresolution representation levels. Since the Radon data have preferential orientations by nature, a spherical wavelet transform is used to compute the ridgelet coefficients from the Radon shell data. The method uses the classical Grangeat relation to compute derivatives of the Radon data, which are then integrated, projected onto a spherical wavelet representation, and back-reconstructed using a modified version of the well-known back-projection algorithm. Unlike previous reconstruction methods, this proposal uses a multiscale representation of the Radon data and therefore allows fast display of low-resolution data levels.
Computed tomography; 3D ridgelet; Spherical wavelets
The aim of the study was to evaluate the effect of two lossy image compression methods on fractal dimension (FD) calculation. Ten periapical images of posterior teeth with no restorations or previous root canal therapy were obtained using storage phosphor plates and were saved in TIF format. All images were then compressed with the lossy JPEG and JPEG2000 compression methods at five compression levels, i.e., 90, 70, 50, 30, and 10. Compressed file sizes and compression ratios were calculated for all images. On each image, two regions of interest (ROIs) containing healthy trabecular bone in the posterior periapical area were selected. The FD of each ROI on the original and compressed images was calculated using the differential box counting method. Both image compression and analysis were performed with public-domain software. Altogether, the FD of 220 ROIs was calculated. FDs were compared using ANOVA and Dunnett tests. The FD decreased gradually with compression level. A statistically significant decrease of the FD values was found for the JPEG 10, JPEG2000 10, and JPEG2000 30 compression levels (p < 0.05). At comparable file sizes, JPEG induced a smaller FD difference. In conclusion, lossy compressed images at an appropriate compression level may be used for FD calculation.
Compression; Computer analysis; Computer-assisted detection
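Differential box counting, the FD estimator named above, covers the image with s × s blocks, counts for each block how many gray-level boxes of height h span its intensity range, and regresses log N(s) against log(1/s). A compact sketch for small square grayscale images (box sizes and the +1 offset in the box count are common conventions, assumed here rather than taken from the paper):

```python
import math

def dbc_fd(img, sizes=(2, 4)):
    """Fractal dimension of a square grayscale image (list of lists, 0-255)
    by differential box counting: the slope of log N(s) vs. log(1/s)."""
    m = len(img)
    g = 256                          # number of gray levels
    xs, ys = [], []
    for s in sizes:
        h = s * g / m                # gray-level box height for this scale
        n = 0
        for bi in range(0, m, s):
            for bj in range(0, m, s):
                block = [img[i][j] for i in range(bi, bi + s)
                                    for j in range(bj, bj + s)]
                n += (math.ceil((max(block) + 1) / h)
                      - math.ceil((min(block) + 1) / h) + 1)
        xs.append(math.log(1.0 / s))
        ys.append(math.log(n))
    # least-squares slope of the log-log points
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A perfectly flat image yields FD = 2, the dimension of a smooth surface; trabecular bone texture raises the estimate toward 3.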
The current array of PACS products and 3D visualization tools presents a wide range of options for applying advanced visualization methods in clinical radiology. The emergence of server-based rendering techniques creates new opportunities for raising the level of clinical image review. However, best-of-breed implementations of core PACS technology, volumetric image navigation, and application-specific 3D packages will, in general, be supplied by different vendors. Integration issues should be carefully considered before deploying such systems. This work presents a classification scheme describing five tiers of PACS modularity and integration with advanced visualization tools, with the goals of characterizing current options for such integration, providing an approach for evaluating such systems, and discussing possible future architectures. These five levels of increasing PACS modularity begin with what was until recently the dominant model for integrating advanced visualization into the clinical radiologist's workflow, consisting of a dedicated stand-alone post-processing workstation in the reading room. Introduction of context-sharing, thin clients using server-based rendering, archive integration, and user-level application hosting at successive levels of the hierarchy lead to a modularized imaging architecture, which promotes user interface integration, resource efficiency, system performance, supportability, and flexibility. These technical factors and system metrics are discussed in the context of the proposed five-level classification scheme.
PACS; 3D imaging (imaging, three-dimensional); Computer systems; Advanced visualization; Server-based rendering; Application hosting
We propose the use of a context-sensitive support vector machine (csSVM) to enhance the performance of a conventional support vector machine (SVM) for identifying diffuse interstitial lung disease (DILD) in high-resolution computerized tomography (HRCT) images. Nine hundred rectangular regions of interest (ROIs), each 20 × 20 pixels in size (150 ROIs for each of six regional disease patterns: normal, ground-glass opacity, reticular opacity, honeycombing, emphysema, and consolidation), were marked in consensus by two experienced radiologists on HRCT images of various DILD. Twenty-one textural and shape features were evaluated to characterize the ROIs. The csSVM classified an ROI by simultaneously using the decision value of each class and information from the neighboring ROIs, such as neighboring region feature distances and class differences. Sequential forward selection was used to select the relevant features. To validate our results, we used the 900 ROIs with fivefold cross-validation and 84 whole lung images categorized by a radiologist. The accuracy of the proposed method for ROI and whole lung classification (89.88 ± 0.02% and 60.30 ± 13.95%, respectively) was significantly higher than that provided by the conventional SVM classifier (87.39 ± 0.02% and 57.69 ± 13.31%, respectively; paired t test, p < 0.01 for both). We conclude that our csSVM provides better overall quantification of DILD.
Computed tomography; Computer-aided diagnosis; Image processing; Lung diseases
A browser rendering Rich Internet Application (RIA) Web pages can be a powerful user interface for handling sophisticated data and applications. RIA solutions are therefore a potential method for viewing and manipulating most of the data generated in clinical processes, and they can provide the main functionalities of a general picture archiving and communication system (PACS) viewing system. The aim of this study is to apply RIA technology to the presentation of medical images. Both Digital Imaging and Communications in Medicine (DICOM) and non-DICOM data can be handled by our RIA solutions. Clinical data that are especially difficult to present with conventional PACS viewing systems, such as ECG waveforms, pathology virtual-slide microscopic images, and radiotherapy plans, are demonstrated as well. Consequently, clinicians can use the browser as a single interface for accessing all the clinical data located in different departments and information systems, with the data presented appropriately and processed freely through RIA technologies.
Electronic supplementary material
The online version of this article (doi:10.1007/s10278-011-9374-1) contains supplementary material, which is available to authorized users.
PACS; Clinical application; Clinical image viewing; DICOM; XML; RIA
The objective of this work is to develop and implement a medical decision-making system for the automated diagnosis and classification of ultrasound carotid artery images. The proposed method categorizes subjects as normal or as having cerebrovascular or cardiovascular disease. Two contours are extracted from every preprocessed ultrasound carotid artery image. Two contour extraction techniques and a multilayer back propagation network (MBPN) system have been developed for classifying carotid artery categories. The results show that the MBPN system provides higher classification efficiency, with minimal training and testing time. The outputs of the decision support system were validated with medical experts to measure its actual efficiency. The MBPN system, together with the contour extraction algorithms and preprocessing scheme, supports the development of a medical decision-making system for ultrasound carotid artery images and can be used as a secondary observer in clinical decision making.
US carotid artery image analysis; Contour extraction; Multilayer back propagation network; Neural network classifier; Carotid artery classification; Medical decision-making system; Digital image processing; Image segmentation; Decision support techniques; Neural networks; Carotid artery
The evaluation of the carotid artery wall is essential for the diagnosis of cardiovascular pathologies or for the assessment of a patient’s cardiovascular risk. This paper presents a completely user-independent algorithm, which automatically extracts the far double line (lumen–intima and media–adventitia) in the carotid artery using an Edge Flow technique based on directional probability maps using the attributes of intensity and texture. Specifically, the algorithm traces the boundaries between the lumen and intima layer (line one) and between the media and adventitia layer (line two). The Carotid Automated Ultrasound Double Line Extraction System based on Edge-Flow (CAUDLES-EF) is characterized and validated by comparing the output of the algorithm with the manual tracing boundaries carried out by three experts. We also benchmark our new technique with the two other completely automatic techniques (CALEXia and CULEXsa) we previously published. Our multi-institutional database consisted of 300 longitudinal B-mode carotid images with normal and pathologic arteries. We compared our current new method with previous methods, and showed the mean and standard deviation for the three methods: CALEXia, CULEXsa, and CAUDLES-EF as 0.134 ± 0.088, 0.074 ± 0.092, and 0.043 ± 0.097 mm, respectively. Our IMT was slightly underestimated with respect to the ground truth IMT, but showed a uniform behavior over the entire database. Regarding the Figure of Merit (FoM), CALEXia and CULEXsa showed the values of 84.7% and 91.5%, respectively, while our new approach, CAUDLES-EF, performed the best at 94.8%, showing a good improvement compared to previous methods.
Carotid artery; Ultrasound; Multiresolution; Edge flow; Localization; Intima–media thickness; Hausdorff distance; Polyline distance; Segmentation; Automated measurement; Carotid imaging; Intima-media thickness measurement; Edge-flow operator
The purpose of this study was to measure users’ perceived benefits of a picture archiving and communication system (PACS) upgrade, and compare their responses to those predicted by developers. The Task–Technology Fit (TTF) model served as the theoretical framework to study the relation between TTF, utilization, and perceived benefits. A self-administered survey was distributed to radiologists working in a university hospital undergoing a PACS upgrade. Four variables were measured: impact, utilization, TTF, and perceived net benefits. The radiologists were divided into subgroups according to their utilization profiles. Analysis of variance was performed and the hypotheses were tested with regression analysis. Interviews were conducted with developers involved in the PACS upgrade who were asked to predict impact and TTF. Users identified only a moderate fit between the PACS enhancements and their tasks, while developers predicted a high level of TTF. The combination of a moderate fit and an underestimation of the potential impact of changes in the PACS led to a low score for perceived net benefits. Results varied significantly among user subgroups. Globally, the data support the hypotheses that TTF predicts utilization and perceived net benefits, but not that utilization predicts perceived net benefits. TTF is a valid tool to assess perceived benefits, but it is important to take into account the characteristics of users. In the context of a technology that is rapidly evolving, there needs to be an alignment of what users perceive as a good fit and the functionality developers incorporate into their products.
Picture archiving and communication system (PACS); Task–technology fit; Radiology information systems; Technology assessment; Questionnaires; Organizational innovation; Evaluation research; Models, theoretical
Aortoiliac and lower extremity arterial atherosclerotic plaque burden is a risk factor for the development of visceral and peripheral ischemic and aneurysmal vascular disease. While prior research allows automated quantification of calcified plaque in these body regions using CT angiograms, no automated method exists to quantify soft plaque. We developed an automatic algorithm that defines the outer wall contour and wall thickness of vessels to quantify non-calcified plaque in CT angiograms of the chest, abdomen, pelvis, and lower extremities. The algorithm encodes the search space as a constrained graph and calculates the outer wall contour by deriving a minimum cost path through the graph, following the visible outer wall contour while minimizing path tortuosity. Our algorithm was statistically equivalent to a reference standard made by two reviewers. Absolute error was 1.9 ± 2.3% compared to the inter-observer variability of 3.9 ± 3.6%. Wall thickness in vessels with atherosclerosis was 3.4 ± 1.6 mm compared to 1.2 ± 0.4 mm in normal vessels. The algorithm shows promise as a tool for quantification of non-calcified plaque in CT angiography. When combined with previous research, our method has the potential to quantify both non-calcified and calcified plaque in all clinically significant systemic arteries, from the thoracic aorta to the arteries of the calf, over a wide range of diameters. This algorithm has the potential to enable risk stratification of patients and facilitate investigations into the relationships between asymptomatic atherosclerosis and a variety of behavioral, physiologic, pathologic, and genotypic conditions.
Soft plaque; Quantification; Algorithm; CT angiography; Arteries; Graph method
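The minimum-cost-path step described above reduces to a shortest-path computation over the constrained graph. A generic Dijkstra sketch is shown below; in the paper's setting, nodes would be candidate wall positions, edge weights would come from image gradients, and extra penalty terms would discourage path tortuosity (none of which is modelled here).

```python
import heapq

def dijkstra(start, goal, neighbors, cost):
    """Minimum-cost path over a directed graph. `neighbors[u]` lists nodes
    reachable from u; `cost[(u, v)]` is the edge weight. Returns the total
    cost and the node sequence from start to goal."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            break
        for v in neighbors.get(u, ()):
            nd = d + cost[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # walk predecessors back from the goal to recover the path
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return dist[goal], path[::-1]
```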
Optical imaging using near-infrared light noninvasively probes tissues to recover the vascular and molecular status of healthy and diseased tissues, using hemoglobin contrast arising from the absorption of light. While multimodality optical techniques exist, visualization techniques in this area are limited. Addressing this issue, we present NIRViz, a simple, intuitive, and easily usable framework for overlaying optical images on magnetic resonance (MRI) or computerized tomographic images. NIRViz is a multimodality software platform for the display and navigation of Digital Imaging and Communications in Medicine (DICOM) MRI datasets and 3D optical image solutions, geared toward visualization and coregistration of optical contrast in diseased tissues such as cancer. We present the design decisions undertaken during the design of the software, the libraries used in the implementation, other implementation details, and preliminary results from the software package. Our implementation uses the Visualization Toolkit library for most of the work, with a Qt graphical user interface for the front end. Challenges encountered include reslicing DICOM image data and coregistering image space and mesh space. The resulting software provides a simple, customized platform to display surface and volume meshes with optical parameters such as hemoglobin concentration, overlay them on magnetic resonance images, let the user interactively change the transparency of different image sets, rotate geometries, clip through the resulting datasets, obtain mesh and optical solution information, and interact with both functional and structural medical image information.
MRI; Multimodality imaging; NIRViz; Visualization Toolkit; Insight Segmentation Toolkit
Glomerular filtration rate (GFR) is a commonly accepted standard estimation of renal function. Gamma camera-based methods for estimating renal uptake of 99mTc-diethylenetriaminepentaacetic acid (DTPA) without blood or urine sampling have been widely used; of these, the method introduced by Gates has been the most common. Currently, most gamma cameras are equipped with a commercial program for GFR determination, a semi-quantitative analysis based on manually drawing a region of interest (ROI) over each kidney, from which the GFR value is computed automatically via scintigraphic determination of 99mTc-DTPA uptake within the kidney. Delineating the kidney area is difficult when applying a fixed threshold value. Moreover, hand-drawn ROIs are tedious, time consuming, and highly dependent on operator skill. We therefore developed a fully automatic renal ROI estimation system, based on the temporal changes in intensity counts, the intensity-pair distribution image contrast enhancement method, adaptive thresholding, and morphological operations, that locates the kidney area and obtains the GFR value from a 99mTc-DTPA renogram. To evaluate the performance of the proposed approach, 30 clinical dynamic renograms were used. The fully automatic approach failed in one patient with very poor renal function. Four patients had a unilateral kidney, and the others had bilateral kidneys. The automatic contours from the remaining 54 kidneys were compared with manually drawn contours, and all 54 kidneys were included in the area error and boundary error analyses. There was high correlation between the two physicians' manual contours and the contours obtained by our approach. For the area error analysis, the mean true positive area overlap is 91%, the mean false negative is 13.4%, and the mean false positive is 9.3%. The boundary error is 1.6 pixels.
The GFR calculated using this automatic computer-aided approach is reproducible and may be applied to help nuclear medicine physicians in clinical practice.
Glomerular filtration rate; Diethylenetriaminepentaacetic acid; Image contrast enhancement; Adaptive thresholding; Morphology operation; Renogram
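Two of the pipeline stages named above, adaptive thresholding and a morphological operation, can be sketched on tiny images. The mean-of-neighbourhood threshold and the 3 × 3 structuring element are common generic choices, assumed here rather than taken from the paper (which additionally applies intensity-pair-distribution contrast enhancement first).

```python
def adaptive_threshold(img, block=3, c=0):
    """Binarize by comparing each pixel with the mean of its block x block
    neighbourhood minus an offset c (a standard local-mean adaptive scheme)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    r = block // 2
    for i in range(h):
        for j in range(w):
            vals = [img[y][x]
                    for y in range(max(0, i - r), min(h, i + r + 1))
                    for x in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = 1 if img[i][j] > sum(vals) / len(vals) - c else 0
    return out

def erode(mask):
    """Binary erosion with a 3x3 structuring element: one half of a
    morphological opening, which removes small speckle from the mask."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if all(mask[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)):
                out[i][j] = 1
    return out
```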
Picture Archiving and Communication Systems (PACS) have been widely deployed in healthcare institutions and now constitute a normal commodity for practitioners. However, their installation, maintenance, and utilization are still a burden due to their heavyweight structures, typically supported by centralized computational solutions. In this paper, we present Dicoogle, a PACS archive supported by a document-based indexing system and by peer-to-peer (P2P) protocols. Replacing traditional database storage (RDBMS) with a documental organization permits gathering and indexing data from file-based repositories, which allows searching the archive through free-text queries. As a direct result of this strategy, more information can be extracted from medical imaging repositories, which clearly increases flexibility when compared with current DICOM query and retrieval services. The inclusion of P2P features allows PACS internetworking without the need for a central management framework. Moreover, Dicoogle is easy to install, manage, and use, and it maintains full interoperability with standard DICOM services.
PACS; Digital Imaging and Communications in Medicine (DICOM); Medical imaging; Peer-to-peer; Computer communication networks; Open source; PACS implementation; Information storage and retrieval
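The advantage of document-based indexing over a fixed relational schema is that every DICOM attribute becomes searchable free text. The toy inverted index below illustrates the idea only; Dicoogle itself builds on a full-text engine, and the attribute names and IDs here are made-up examples.

```python
from collections import defaultdict

class DocumentIndex:
    """Toy inverted index over DICOM metadata 'documents'.

    Illustrative only: each study's attributes are flattened to tokens so
    that any field can be matched by a free-text query, instead of only the
    key attributes a relational schema exposes.
    """
    def __init__(self):
        self.postings = defaultdict(set)  # token -> set of document IDs
        self.docs = {}

    def add(self, doc_id, metadata):
        # metadata: dict of DICOM attribute name -> value
        self.docs[doc_id] = metadata
        for value in metadata.values():
            for token in str(value).lower().split():
                self.postings[token].add(doc_id)

    def search(self, query):
        # free-text AND query across all indexed attributes
        terms = query.lower().split()
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result
```

A query such as `search("ct chest")` then matches any study whose metadata mentions both terms, regardless of which DICOM attributes they came from.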
A novel medical image quality index using grey relational coefficient calculation is proposed in this study. Three medical modalities (DR, CT, and MRI) were examined, using 30 or 60 images per modality and 120 images in total. These images were first compressed at ten different compression ratios (10–100) using the JPEG2000 codec JJ2000. The quality of the reconstructed images was then evaluated using the grey relational coefficient calculation, and the results were shown to be consistent with popular objective quality metrics. The impact of different image aspects on four grey relational coefficient methods was further tested; the results showed that these grey relational coefficients have different slopes but very high consistency across various image areas. Nagai's grey relational coefficient was chosen in this study because of its higher calculation speed and sensitivity. A comparison was also made between this method and other window-based objective metrics at various window sizes, and the grey relational coefficient results proved less sensitive to changes in window size. The performance of this index is better than that of some window-based objective metrics, and it can be used as an image quality index.
Image compression; Image quality analysis; JPEG2000
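To show the general shape of a grey-relational quality score, here is a sketch using Deng's classic grey relational coefficient (the study above adopts Nagai's faster variant; the formulation below is the better-known one and is given only to convey the idea): each pixel's absolute difference from the reference is mapped to a coefficient in (0, 1], and the mean coefficient serves as the quality index.

```python
import numpy as np

def grey_relational_quality(reference, test, zeta=0.5):
    """Deng's grey relational coefficient averaged into one quality score.

    gamma_i = (d_min + zeta * d_max) / (d_i + zeta * d_max),
    where d_i is the absolute pixel difference and zeta is the
    distinguishing coefficient (commonly 0.5). Identical images score 1.0.
    Note: the study itself uses Nagai's variant, not this one.
    """
    ref = np.asarray(reference, dtype=float).ravel()
    tst = np.asarray(test, dtype=float).ravel()
    delta = np.abs(ref - tst)            # pointwise absolute difference
    dmin, dmax = delta.min(), delta.max()
    if dmax == 0:                        # identical images
        return 1.0
    gamma = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return float(gamma.mean())
```

Because the coefficient is normalized by the difference range, the score reflects the spatial distribution of distortion rather than its absolute magnitude, which is one reason window-size effects are mild.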
Our practice has long been concerned with the effects of display quality, including color accuracy and matching among paired color displays. Three years of data have been collected on the color stability of our clinical displays, permitting an analysis of their color-aging behavior over that time. The results of that analysis show that all displays tend to yellow over time, but that they do so together. That is, neither the intra- nor the inter-display color variance observed at initial deployment diverges over time, as measured by a mean radial distance metric in the CIE 1976 (L′, u′, v′) color space. The consequence of this result is that color displays that are matched at deployment tend to remain matched over their lifetime even as they collectively yellow.
Digital display; Diagnostic image quality; Diagnostic display; Monitors
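A mean radial distance in the CIE 1976 (u′, v′) chromaticity plane can be computed directly from colorimeter XYZ readings. The sketch below shows one plausible form of such a metric, assuming positionally paired measurements from two displays; the exact pairing and weighting used in the study are not specified here.

```python
import math

def uv_prime(X, Y, Z):
    """Convert CIE XYZ tristimulus values to CIE 1976 (u', v') chromaticity."""
    denom = X + 15 * Y + 3 * Z
    return 4 * X / denom, 9 * Y / denom

def mean_radial_distance(measurements_a, measurements_b):
    """Mean Euclidean (radial) distance in (u', v') between paired
    measurements from two displays -- one way to quantify how well a
    matched pair stays matched. Each measurement is an (X, Y, Z) triple;
    pairing is positional (an assumption of this sketch).
    """
    distances = []
    for (xa, ya, za), (xb, yb, zb) in zip(measurements_a, measurements_b):
        ua, va = uv_prime(xa, ya, za)
        ub, vb = uv_prime(xb, yb, zb)
        distances.append(math.hypot(ua - ub, va - vb))
    return sum(distances) / len(distances)
```

Tracking this value over repeated calibration sessions is what allows "yellowing together" to be distinguished from true divergence: collective drift moves both displays' chromaticities while leaving their pairwise distance small.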
Appearance changes resulting from breast cancer treatment impact the quality of life of breast cancer survivors, but current approaches to evaluating breast characteristics are very limited. It is challenging, even for experienced plastic surgeons, to describe how different aspects of breast morphology impact the overall assessment of esthetics. Moreover, it is difficult for them to describe what they are looking for in a manner that facilitates quantification. The goal of this study is to assess the potential of eye-tracking technology for understanding how plastic surgeons assess breast morphology, by recording their gaze paths while they rate physical characteristics of the breasts (e.g., symmetry) from clinical photographs. In this study, dwell time, transition frequency, dwell sequence conditional probabilities, and dwell sequence joint probabilities were analyzed across photographic poses and three observers. Dwell-time analysis showed that all three surgeons spent the majority of their time on the anterior–posterior (AP) views. Similarly, transition frequency analysis showed substantially more transitions between the breast regions in the AP view than between other views. Both the conditional and joint probability analyses likewise showed that the highest transition probabilities occurred between the breast regions in the AP view (APRB, APLB), followed by the oblique views and then the lateral views, which complete the evaluation of breast surgical outcomes.
Breast neoplasm; Eye movements; Biomedical image analysis; Decision support; Evaluation research
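The gaze-path statistics named above (transition counts, conditional probabilities, joint probabilities) follow directly from an ordered sequence of fixated areas of interest (AOIs). A minimal sketch, assuming the gaze data have already been reduced to AOI labels such as APRB and APLB:

```python
from collections import Counter

def transition_stats(gaze_sequence):
    """Transition counts, conditional probabilities P(next | current), and
    joint probabilities P(current, next) from an ordered AOI sequence.
    AOI labels (e.g. 'APRB', 'APLB') are illustrative.
    """
    # count each consecutive (current AOI, next AOI) pair
    transitions = Counter(zip(gaze_sequence, gaze_sequence[1:]))
    total = sum(transitions.values())
    out_counts = Counter()
    for (src, _dst), n in transitions.items():
        out_counts[src] += n
    cond = {(s, d): n / out_counts[s] for (s, d), n in transitions.items()}
    joint = {(s, d): n / total for (s, d), n in transitions.items()}
    return transitions, cond, joint
```

High joint probability for the (APRB, APLB) pair, for instance, corresponds to the finding that observers shuttled most often between the two breasts in the AP view.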
Online social networking is an immature but rapidly evolving industry of web-based technologies that allow individuals to develop online relationships. News stories populate the headlines about various websites that can facilitate patient and doctor interaction. Questions remain about protecting patient confidentiality and defining etiquette in order to preserve the doctor/patient relationship and protect physicians. To what extent is social networking-based communication, or other forms of e-communication, effective? What are the potential benefits and pitfalls of this form of communication? Physicians are exploring how social networking might provide a forum for interacting with their patients and advance collaborative patient care. Several organizations and institutions have set forth policies to address these questions and more. Though still in its infancy, this form of media has the power to revolutionize the way physicians interact with their patients and fellow health care workers. In the end, physicians must ask what value is added by engaging patients or other health care providers in a social networking format. Social networks may flourish in health care as a means of distributing information to patients, or they may serve mainly as support groups among patients. Physicians must tread a narrow path to bring value to interactions in these networks while limiting their exposure to unwanted liability.
E-communication; Doctor patient relationship; Facebook; Sermo