To evaluate the detectability of urinary stones on virtual nonenhanced images generated at pyelographic-phase dual-energy computed tomography (CT).
Materials and Methods
This retrospective HIPAA-compliant study was institutional review board approved. All included patients had previously consented to the use of their medical records for research. Sixty-two patients (38 men, 24 women; age range, 35–91 years) had undergone CT urography, which consisted of nonenhanced and pyelographic-phase dual-energy CT performed by using a dual-source scanner. Commercial software was used to create virtual non-enhanced images by suppressing the iodine signal from the pyelographic-phase dual-energy CT scans. Two radiologists, in consensus, evaluated the virtual nonenhanced images for the presence of stones. Sensitivity for detecting stones was calculated on a per-stone basis. Sensitivity, specificity, and accuracy were also calculated on a per–renal unit (defined as the intrarenal collecting system and ureter of one kidney) basis. The true nonenhanced scan was considered the reference standard. A jackknife method was used because any patient may have multiple stones.
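The jackknife was needed because stones cluster within patients, so stone-level detections are not independent. A minimal leave-one-patient-out sketch of this idea is shown below; the patient counts are invented for illustration and are not data from the study.

```python
# Minimal sketch (invented counts): per-stone sensitivity with a
# leave-one-patient-out jackknife standard error, so that clustering of
# multiple stones within a patient is respected.
def jackknife_sensitivity(patients):
    """patients: list of (detected, total) stone counts per patient."""
    det = sum(d for d, _ in patients)
    tot = sum(n for _, n in patients)
    sens = det / tot
    # leave-one-patient-out estimates
    loo = [(det - d) / (tot - n) for d, n in patients if tot > n]
    k = len(loo)
    mean_loo = sum(loo) / k
    se = ((k - 1) / k * sum((s - mean_loo) ** 2 for s in loo)) ** 0.5
    return sens, se

sens, se = jackknife_sensitivity([(2, 3), (1, 2), (0, 1), (4, 4)])  # sens = 0.7
```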
Of 62 patients with 122 renal units, 21 patients with 25 renal units had a total of 43 stones (maximal transverse diameter range, 1–24 mm; median, 3 mm). The overall sensitivity for detecting stones was 63% (27 of 43 stones) per stone. Sensitivities were 29% (four of 14 stones) for 1–2-mm stones, 64% (nine of 14 stones) for 3–4-mm stones, 83% (five of six stones) for 5–6-mm stones, and 100% (nine of nine stones) for 7-mm or larger (7, 7, 7, 8, 8, 9, 11, 15, and 24 mm) stones. All three ureteral stones (3, 4, and 8 mm) were correctly identified. The sensitivity, specificity, and accuracy for detecting stones on a per–renal unit basis were 65% (17 of 26 renal units), 92% (88 of 96 renal units), and 86% (105 of 122 renal units), respectively.
Virtual nonenhanced images generated at pyelographic-phase dual-energy CT enabled the detection of urinary stones with moderate accuracy. The detection of small (1–2-mm) stones was limited.
Hip prosthesis is one of the most common types of metal implants and can cause significant artifacts in computed tomography (CT) examinations. The purpose of this work was to develop a projection-based method for reducing metal artifacts caused by hip prostheses in multislice helical CT.
Method and Materials
The proposed method is based on a novel concept, reformatted projection, which is formed by combining the projection data at the same view angle over the full longitudinal scan range. Detection and segmentation of the metal were performed on each reformatted projection image. Two dimensional interpolation based on Delaunay triangulation was used to fill voids left after removal of the metal in the reformatted projection. The corrected data were then reconstructed using a commercially available algorithm. The main advantage of this method is that both the detection of the metal objects and the interpolations are performed on complete reformatted projections with the entire metal region present, which is particularly useful for long hip prostheses. Twenty clinical abdominal/pelvis exams with hip prostheses were corrected and clinically evaluated.
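The void-filling step can be sketched with SciPy's `griddata`, which performs 2D linear interpolation on a Delaunay triangulation of the known samples. This is not the authors' implementation; the array shape, the synthetic projection values, and the metal mask below are all invented for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

# Sketch of the interpolation step (shape and values invented): fill a
# segmented metal region of a reformatted projection by 2D linear
# interpolation, which griddata computes on a Delaunay triangulation
# of the surrounding known samples.
yy, xx = np.mgrid[0:64, 0:96]
proj = 0.01 * yy + 0.02 * xx            # stand-in reformatted projection
metal = np.zeros(proj.shape, dtype=bool)
metal[20:30, 40:60] = True              # stand-in segmented metal trace

known = ~metal
filled = proj.copy()
filled[metal] = griddata(
    np.column_stack(np.nonzero(known)), proj[known],
    np.column_stack(np.nonzero(metal)), method="linear")
```

Because the stand-in projection is linear in the coordinates, the interpolated values inside the mask reproduce it exactly, which makes the mechanics easy to verify.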
The overall image quality and the conspicuity of some critical organs were significantly improved compared with the uncorrected images: overall quality (P = 0.0024), bladder base (P = 0.0027), and rectum (P = 0.0078). The average noise level in the bladder base was reduced from 86.7 HU to 36.2 HU. In 17 of 20 cases, the radiologists preferred either coronal (13) or axial (4) views of the corrected images.
A novel method for reducing metal artifacts in multislice helical CT was developed. Initial clinical results showed that the proposed method can effectively reduce the artifacts caused by metal implants in cases of unilateral and bilateral hip prostheses.
computed tomography (CT); multi-slice helical CT; metal artifact reduction
Cathartic bowel preparation is a major barrier for colorectal cancer screening. We examined noncathartic CT colonography (CTC) quality and performance using four similar bowel-tagging regimens in an asymptomatic screening cohort.
SUBJECTS AND METHODS
This prospective study included 564 asymptomatic subjects who underwent noncathartic CTC without dietary modification but with 21 g of barium with or without iodinated oral contrast material (four regimens). The quality of tagging with oral agents was evaluated. A gastrointestinal radiologist evaluated examinations using primary 2D search supplemented by electronic cleansing (EC) and 3D problem solving. Results were compared with complete colonoscopy findings after bowel purgation and with retrospective unblinded evaluation in 556 of the 564 (99%) subjects.
Of the 556 subjects, 7% (37/556) had a total of 52 adenomatous polyps ≥ 6 mm and 3% (16/556) had 20 adenomatous polyps ≥ 10 mm. The addition of iodine significantly improved the percentage of labeled stool (p ≤ 0.0002) and specificity (80% vs 89–93%, respectively; p = 0.046). The overall sensitivity of noncathartic CTC for adenomatous polyps ≥ 6 mm was 76% (28/37; 95% CI, 59–88%), similar to that of the two iodinated regimens enrolling the most patients (231 patients: sensitivity 74% [14/19; 95% CI, 49–91%]; 229 patients: sensitivity 80% [12/15; 95% CI, 52–96%]). The negative predictive value was 98% (481/490), and the lone cancer was detected (0.2%, 1/556). EC was thought to improve the conspicuity of 10 of 21 visible polyps ≥ 10 mm.
In this prospective study of asymptomatic subjects, the per-patient sensitivity of noncathartic CTC for detecting adenomas ≥ 6 mm was approximately 76%. Inclusion of oral iodine contrast material improves examination specificity and the percentage of labeled stool. EC may improve polyp conspicuity.
bowel preparation; colon cancer screening; CT colonography; patient compliance
AIM: To evaluate the ability of contrast-enhanced computerized tomography (CECT) to characterize the nature of peripancreatic collections.
METHODS: Twenty-five patients with peripancreatic collections on CECT who underwent operative intervention for severe acute pancreatitis were retrospectively studied. The collections were classified into (1) necrosis without frank pus; (2) necrosis with pus; and (3) fluid without necrosis. A blinded radiologist assessed the preoperative CTs of each patient for necrosis and peripancreatic fluid collections. Peripancreatic collections were described in terms of volume, location, number, heterogeneity, fluid attenuation, wall perceptibility, wall enhancement, presence of extraluminal gas, and vascular compromise.
RESULTS: Fifty-four collections were identified at operation, of which 45 (83%) were identified on CECT. Of these, 25/26 (96%) had necrosis without pus, 16/19 (84%) had necrosis with pus, and 4/9 (44%) had fluid without necrosis. Among the study characteristics, fluid heterogeneity was seen in a greater proportion of collections in the group with necrosis and pus, compared to the other two groups (94% vs 48% and 25%, P = 0.002 and 0.003, respectively). Among the wall characteristics, irregularity was seen in a greater proportion of collections in the groups with necrosis with and without pus, when compared to the group with fluid without necrosis (88% and 71% vs 25%, P = 0.06 and P < 0.01, respectively). The combination of heterogeneity and presence of extraluminal gas had a specificity and positive likelihood ratio of 92% and 5.9, respectively, in detecting pus.
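As a reminder of how the reported indices relate, the positive likelihood ratio combines sensitivity and specificity as LR+ = sensitivity / (1 − specificity). The sensitivity value below is a back-calculation for illustration only, not a figure reported by the study.

```python
# LR+ = sensitivity / (1 - specificity); with the reported specificity
# of 92%, an LR+ of 5.9 corresponds to a sensitivity of roughly 47%
# (illustrative back-calculation, not a reported value).
def positive_lr(sensitivity, specificity):
    return sensitivity / (1.0 - specificity)

lr = positive_lr(0.47, 0.92)  # ≈ 5.9
```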
CONCLUSION: Most of the peripancreatic collections seen on CECT in patients with severe acute pancreatitis who require operative intervention contain necrotic tissue. CECT has a somewhat limited role in differentiating the different types of collections.
Contrast-enhanced computerized tomography; Correlation; Pancreatic necrosis; Pancreatitis; Peripancreatic fluid collection; Surgery
Little information exists concerning the frequency of clinically significant incidental findings (IFs) identified in the course of imaging research across a broad spectrum of imaging modalities and body regions.
To estimate the frequency with which research imaging IFs generate further clinical action, and the medical benefit/burden of identifying these IFs.
Design, Setting, and Participants
Retrospective review of subjects undergoing a research imaging exam that was interpreted by a radiologist for IFs in the first quarter of 2004, with 3-year clinical follow-up. An expert panel reviewed IFs generating clinical action to determine medical benefit/burden based on predefined criteria.
Main Outcome Measures
Frequency of (1) IFs that generated further clinical action by modality, body part, age, gender, and (2) IFs resulting in clear medical benefit or burden.
1376 patients underwent 1426 research imaging studies. 40% (567/1426) of exams had at least one IF (1055 IFs in total). The risk of an IF increased significantly with age (OR = 1.5 per decade increase; 95% CI, 1.4–1.7). Abdominopelvic CT generated more IFs than other exams (OR = 18.9 compared with ultrasound; 9.2% with subsequent clinical action), followed by chest CT and brain MR (OR = 11.9 and 5.9; 2.8% and 2.2% with action, respectively). Overall, 6.2% (35/567) of exams with an IF generated clinical action, resulting in clear medical benefit in 1.1% (6/567) and clear medical burden in 0.5% (3/567). In most instances, medical benefit/burden was unclear (4.6%; 26/567).

The frequency of IFs in imaging research exams varies significantly by imaging modality, body region, and age. Research imaging studies at high risk for generating IFs can be identified. Routine evaluation of research images by radiologists may result in the identification of IFs in a substantial number of cases, with subsequent clinical action to address them in a much smaller number. Such clinical action can result in medical benefit to a small number of patients.
Despite universal consensus that computed tomography (CT) overwhelmingly benefits patients when used for appropriate indications, concerns have been raised regarding the potential risk of cancer induction from CT due to the exponentially increased use of CT in medicine. Keeping radiation dose as low as reasonably achievable, consistent with the diagnostic task, remains the most important strategy for decreasing this potential risk. This article summarizes the general technical strategies that are commonly used for radiation dose management in CT. Dose-management strategies for pediatric CT, cardiac CT, dual-energy CT, CT perfusion and interventional CT are specifically discussed, and future perspectives on CT dose reduction are presented.
computed tomography; CT; CT technology; radiation dose reduction; radiation risk
Rationale and Objectives
To optimize and validate projection space denoising (PSDN) strategies for application to 80 kV computed tomography (CT) data to achieve 50% dose reduction.
Materials and Methods
This retrospective HIPAA-compliant study had IRB approval. We utilized 80 kV image data (mean CTDIvol 7.9 mGy) obtained from dual-source dual-energy CTE exams in 42 patients. For each exam, nine 80 kV image datasets were reconstructed using PSDN (3 levels of intensity) ± image-based denoising and compared to commercial reconstruction kernels. For optimization, qualitative analysis selected optimal denoising strategies, with quantitative analysis measuring image contrast, noise and sharpness (FWHM bowel wall thickness, maximum CT number gradient). For validation, two radiologists examined image quality, comparing low-dose 80 kV optimally denoised images to full dose mixed kV images.
PSDN algorithms generated the best 80 kV image quality (41/42 patients), while the commercial kernels produced the worst (39/42, p < 0.001). Overall, 80 kV PSDN approaches resulted in higher contrast (mean 332 HU vs. 290 HU) and slightly less noise (mean 20 HU vs. 26 HU), but slightly decreased image sharpness (relative bowel wall thickness, 1.069 vs. 1.000) compared with full-dose mixed kV images. The mean image quality score for full-dose CTE images was 4.9, compared with 4.5 for optimally denoised half-dose 80 kV CTE images and 3.1 for non-denoised 80 kV CTE images (p < 0.001).
Optimized denoising strategies improve the quality of 80 kV CT enterography images such that CT data obtained at 50% of routine dose levels approaches the image quality of full-dose exams.
radiation dose; CT enterography; low-energy CT; image quality; image noise; noise reduction; image denoising; projection-space algorithms; bilateral filtering
The objective of this article is to describe the experience of the National CT Colonography Trial with radiologist training and qualification testing at CT colonography (CTC) and to correlate this experience with subsequent performance in a prospective screening study.
SUBJECTS AND METHODS
Ten inexperienced radiologists participated in a 1-day educational course, during which partial CTC examinations of 27 cases with neoplasia and full CTC examinations of 15 cases were reviewed using primary 2D and 3D search. Subsequently, 15 radiologists took a qualification examination composed of 20 CTC cases. Radiologists who did not pass the first qualification examination attended a second day of focused retraining with 30 cases, which was followed by a second qualification examination. The results of the initial and subsequent qualification tests were compared with reader performance in a large prospective screening trial.
All radiologists took and passed the qualification examinations. Seven radiologists passed the qualification examination the first time it was offered, and eight radiologists passed after focused retraining. Significantly better sensitivities were obtained on the second versus the first examination for the retrained radiologists (difference = 16%, p < 0.001). In the prospective study, there was no significant difference in sensitivity between the group that passed the qualification examination the first time and the group that passed the second time (88% vs 92%, respectively; p = 0.612). In the prospective study, the odds of correctly identifying diseased cases increased 1.5-fold for every 50-case increase in reader experience or formal training (p < 0.025).
A significant difference in performance was observed among radiologists before formalized training, but testing and focused retraining improved radiologist performance, resulting in an overall high sensitivity across radiologists in a subsequent, prospective screening study.
ACRIN; CT; CT colonography; radiologist training; reader performance
To investigate the effect on radiation dose and image quality of the use of additional spectral filtration for dual-energy CT (DECT) imaging using dual-source CT (DSCT).
Materials and Methods
A commercial DSCT scanner was modified by adding tin filtration to the high-kV tube, and radiation output and noise were measured in water phantoms. Dose values for equivalent image noise were compared among DE modes with and without tin filtration and single-energy (SE) mode. To evaluate DECT material discrimination, the material-specific DE ratios for calcium and iodine were determined using images of anthropomorphic phantoms. Data were additionally acquired in 38 and 87 kg pigs, and noise for the linearly mixed and virtual non-contrast (VNC) images was compared between DE modes. Finally, abdominal DECT images from two patients of similar size undergoing clinically indicated CT were compared.
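The material-specific DE ratio used to separate materials is simply a material's CT number at the low-kV setting divided by its CT number at the high-kV setting; widening the spectral separation (e.g., with tin filtration) pushes the two ratios further apart. The HU values below are illustrative stand-ins, not measured data from the study.

```python
# Sketch (illustrative HU values, not measurements): the dual-energy ratio
# is the low-kV CT number divided by the high-kV CT number for a material.
def de_ratio(hu_low, hu_high):
    return hu_low / hu_high

iodine = de_ratio(400.0, 200.0)   # iodine attenuates much more at low kV
calcium = de_ratio(300.0, 200.0)  # calcium's ratio sits closer to 1
```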
Adding tin filtration to the high-kV tube improved the DE contrast between iodine and calcium by as much as 290%. Pig data showed that the tin filtration had no effect on noise in the DECT mixed images, but decreased noise by as much as 30% in the VNC images. Patient VNC images acquired using 100/140 kV with added tin filtration had improved image quality compared with those generated with 80/140 kV without tin filtration.
Tin filtration of the high-kV tube of a DSCT scanner increases the ability of DECT to discriminate between calcium and iodine, without increasing dose relative to SECT. Furthermore, use of 100/140 kV tube potentials allows improved DECT imaging of large patients.
Dual-energy CT; dual-source CT; material differentiation; beam filtration; CT image quality; CT radiation dose
Rapid technical developments and an expanding list of applications that have supplanted less accurate or more invasive diagnostic tests have led to a dramatic increase in the use of body CT imaging in medical practice since its introduction in 1975. Our purpose here is to discuss medical justification of the small risk associated with the ionizing radiation used in CT and to provide perspectives on practice-specific decisions that can maximize overall patient benefit. In addition, we review available dose management and optimization techniques.
For diagnosis, assessing disease activity, complications and extraintestinal manifestations, and monitoring response to therapy, patients with inflammatory bowel disease undergo many radiological studies employing ionizing radiation. However, the extent of radiation exposure in these patients is unknown.
A population-based inception cohort of 215 patients with inflammatory bowel disease from Olmsted County, Minnesota, diagnosed between 1990 and 2001, was identified. The total effective dose of diagnostic ionizing radiation was estimated for each patient. Linear regression was used to assess the median total effective dose since symptom onset.
The number of patients with Crohn's disease and ulcerative colitis was 103 and 112, with a mean age at diagnosis of 38.6 and 39.4 yr, respectively. Mean follow-up was 8.9 yr for Crohn's disease and 9.0 yr for ulcerative colitis. Median total effective dose for Crohn's disease was 26.6 millisieverts (mSv) (range, 0–279) versus 10.5 mSv (range, 0–251) for ulcerative colitis (P < 0.001). Computed tomography accounted for 51% and 40% of total effective dose, respectively. Patients with Crohn's disease had 2.46 times higher total effective dose than ulcerative colitis patients (P = 0.001), adjusting for duration of disease.
Annualizing our data, the radiation exposure in the inflammatory bowel disease population was equivalent to the average annual background radiation dose from naturally occurring sources in the U.S. (3.0 mSv). However, a subset of patients had substantially higher doses. The development of imaging management guidelines to minimize radiation dose, dose-reduction techniques in computed tomography, and faster, more robust magnetic resonance techniques are warranted.
To study the effect of motion velocity on image quality to determine the requirements for 4-dimensional (4D; ie, 3D + time) musculoskeletal computed tomographic (CT) imaging.
Materials and Methods
A phantom with resolution targets in both axial (x-y) and coronal (x-z) planes was attached to a motion device and scanned with 64-slice CT using a retrospectively gated CT protocol with pitch values of 0.1 and 0.2. Data were acquired with the phantom at rest and while moving periodically along the x axis at several velocities. Spatial resolution and motion artifacts were assessed both for the axial and coronal targets.
A linear relationship was found between motion artifact severity and phantom velocity. Spatial resolution was better preserved in the coronal target. However, coronal images displayed banding artifacts, with band displacements being linearly related to motion velocity.
The 4D CT imaging of periodically moving objects with velocities up to 20 mm/s is feasible using a pitch value of 0.1 and a motion frequency of 30 cycles per minute.
64-slice computed tomography; 4D imaging; motion artifacts; ECG gating
Rationale and Objectives
We sought to examine heart rate and heart rate variability during cardiac computed tomography (CT).
Materials and Methods
Ninety patients (59.0 ± 13.5 years) underwent coronary CT angiography (CTA), with 52 patients also undergoing coronary artery calcium scanning (CAC). Forty-two patients with heart rate greater than 70 bpm were pretreated with oral β-blockers (in five patients, use of β-blocker was not known). Sixty-four patients were given sublingual nitroglycerin. Mean heart rate and percentage of beats outside a ±5 bpm region about the mean were compared between baseline (free breathing), prescan hyperventilation, and scan acquisition (breath-hold).
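The variability metric compared above can be sketched as the fraction of beats whose rate falls outside a ±5 bpm band around the mean heart rate. The beat-to-beat rates below are invented for illustration.

```python
# Sketch of the variability metric (invented beat-to-beat rates): the
# percentage of beats whose rate lies outside a +/-5 bpm band about the mean.
def pct_outside_band(rates, band=5.0):
    mean = sum(rates) / len(rates)
    outside = sum(1 for r in rates if abs(r - mean) > band)
    return 100.0 * outside / len(rates)

pct = pct_outside_band([60, 61, 59, 58, 72, 60])  # one ectopic-like beat
```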
Mean scan acquisition time was 13.1 ± 1.5 seconds for CAC scanning and 14.2 ± 2.9 seconds for coronary CTA. Mean heart rate during scan acquisition was significantly lower than at baseline (CAC 58.2 ± 8.5 bpm; CTA 59.2 ± 8.8 bpm; baseline 62.8 ± 8.9 bpm; P < .001). The percentage of beats outside a ±5 bpm region about the mean was not different between baseline and CTA scanning (3.5% versus 3.3%, P = .87). The injection of contrast had no significant effect on heart rate (58.2 bpm versus 59.2 bpm, P = .24) or the percentage of beats outside a ±5 bpm region about the mean (3.0% versus 3.3%, P = .64). No significant difference was found between gender and age groups (P > .05).
Breath-holding during cardiac CT scan acquisition significantly lowers the mean heart rate by approximately 4 bpm, but heart rate variability is the same or less compared with normal breathing.
Heart rate; computed tomography; coronary angiography
Rationale and Objectives
To determine the accuracy and sensitivity for dual-energy computed tomography (DECT) discrimination of uric acid (UA) stones from other (non-UA) renal stones in a commercially implemented product.
Materials and Methods
Forty human renal stones comprising uric acid (n = 16), hydroxyapatite (n = 8), calcium oxalate (n = 8), and cystine (n = 8) were inserted in four porcine kidneys (10 each) and placed inside a 32-cm water tank anterior to a cadaver spine. Spiral dual-energy scans were obtained on a dual-source, 64-slice computed tomography (CT) system using a clinical protocol and automatic exposure control. Scanning was performed at two different collimations (0.6 mm and 1.2 mm) and within three phantom sizes (medium, large, and extra large) resulting in a total of six image datasets. These datasets were analyzed using the dual-energy software tool available on the CT system for both accuracy (number of stones correctly classified as either UA or non-UA) and sensitivity (for UA stones). Stone characterization was correlated with micro-CT.
For the medium and large phantom sizes, the DECT technique demonstrated 100% accuracy (40/40), regardless of collimation. For the extra large phantom size and the 0.6-mm collimation (resulting in the noisiest dataset), three (two cystine and one small UA) stones could not be classified (93% accuracy and 94% sensitivity). For the extra large phantom size and the 1.2-mm collimation, the dual-energy tool failed to identify two small UA stones (95% accuracy and 88% sensitivity).
In an anthropomorphic phantom model, dual-energy CT can accurately discriminate uric acid stones from other stone types.
Kidney stones; renal calculi; dual-energy computed tomography; uric acid; urolithiasis
Dual-energy CT scanning has significant potential for disease identification and classification. However, it dramatically increases the amount of data collected and therefore impacts the clinical workflow. One way to simplify image review is to fuse CT datasets of different tube energies into a unique blended dataset with desirable properties.
A non-linear blending method based on a modified sigmoid function was compared to a standard 0.3 linear blending method. The methods were evaluated in both a liver phantom and patient study. The liver phantom contained six syringes of known CT contrast which were placed in a bovine liver. After scanning at multiple tube currents (45, 55, 65, 75, 85, 95, 105, and 115 mAs for the 140-kV tube), the datasets were blended using both methods. A contrast-to-noise (CNR) measure was calculated for each syringe. In addition, all eight scans were normalized using the effective dose and statistically compared. In the patient study, 45 dual-energy CT scans were retrospectively mixed using the 0.3 linear blending and modified sigmoid blending functions. The scans were compared visually by two radiologists.
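A modified-sigmoid blend of this kind can be sketched as follows: the weight given to the low-kV value rises smoothly with attenuation, so iodine-rich voxels inherit the higher contrast of the low-kV image, whereas the 0.3 linear method applies a fixed weight everywhere. The center and width parameters below are hypothetical, not the values used in the study.

```python
import math

# Sketch only: the sigmoid's center/width are hypothetical parameters,
# not those used in the study.
def linear_blend(hu_low, hu_high, w=0.3):
    # fixed 0.3 weighting of the low-kV value
    return w * hu_low + (1 - w) * hu_high

def sigmoid_blend(hu_low, hu_high, center=150.0, width=50.0):
    # weight on the low-kV value rises smoothly with its attenuation
    w = 1.0 / (1.0 + math.exp(-(hu_low - center) / width))
    return w * hu_low + (1 - w) * hu_high
```

For a bright, iodine-rich voxel the sigmoid weight saturates toward 1 and the blended value tracks the contrast-rich low-kV image; for soft-tissue-level voxels it tracks the lower-noise high-kV image.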
For the 15, 45, and 64 HU syringes, the non-linear blended images exhibited similar CNR to the linear blended images; however, for the 79, 116, and 145 HU syringes, the non-linear blended images consistently had a higher CNR across dose settings. The radiologists qualitatively preferred the non-linear blended images of the phantom. In the patient study, the radiologists preferred non-linear blending in 31 of 45 cases with a strong preference in bowel and liver cases.
Non-linear blending of dual energy data can provide an improvement in CNR over linear blending and is accompanied by a visual preference for non-linear blended images. Further study on selection of blending parameters and lesion conspicuity in non-linear blended images is being pursued.
Dual-energy computed tomography; Image processing; Data fusion
The objective of our study was to evaluate the feasibility of virtual unenhanced images reconstructed from a dual-energy CT scan to depict urinary stones in an iodine solution in a phantom study.
MATERIALS AND METHODS
Twenty urinary stones of different sizes (1.4-4.2 mm in short-axis diameter) were placed in plastic containers. The containers were consecutively filled with different concentrations of iodine solution (21, 43, 64, 85, and 107 mg/dL; CT attenuation value range, 510-2,310 H at 120 kVp). Dual-energy CT was repeated with 80-140 and 100-140 kVp pairs, two collimation-slice thickness combinations, and the presence or absence of a 4-cm-thick oil gel around the phantom. The iodine-subtraction virtual unenhanced images were reconstructed using commercial software. The images were evaluated by three radiologists in consensus for the visibility of the stones and the presence of residual nonsubtracted iodine. Stone visibility rates were compared between the 80-140 and 100-140 kVp pairs and the five different iodine concentrations.
Stone visibility rates with the 80-140 kVp pair were 99%, 93%, 96%, 94%, and 3% and those with the 100-140 kVp pair were 98%, 95%, 99%, 94%, and 99% for an iodine concentration of 21, 43, 64, 85, and 107 mg/dL, respectively. The poor visibility rate with 80-140 kVp and 107 mg/dL iodine concentration was due to the failure of iodine subtraction.
Dual-energy CT iodine-subtraction virtual unenhanced technique is capable of depicting urinary stones in iodine solutions of a diverse range of concentrations in a phantom study.
dual-energy CT; genitourinary imaging; iodine-subtraction imaging technique; reconstructed images; urinary stones
Computer-aided diagnosis (CAD) systems must show sufficient versatility to produce robust analysis on a large variety of data. In the case of colonography, CAD has not been designed to cope with the presence of stool, although labeling the stool with high-contrast agents replaces the use of laxatives and reduces patient discomfort. This procedure introduces additional challenges for the diagnosis, such as poorly tagged stool, stool sticking to colonic walls, and heterogeneous stool (tagged stool mixed with air or untagged stool). Our study proposes a robust algorithm for heterogeneous stool removal to be employed as a preprocessing module for CAD systems in colonic cancer detection. Colonography data are automatically cleansed of residual stool to enhance the polyp appearance for improved diagnosis. The algorithm uses expectation-maximization, quadratic regression, level sets, and minimum variance. Results show stool removal accuracy on polyps that are partially or fully covered by stool. The results are robust on stool lining and large pools of heterogeneous and weakly tagged stool. The automatic detection of colon polyps using our CAD system on cathartic-free data improves considerably with the addition of the automatic stool removal module, from 74% to 86% true-positive (TP) rate at 6.4 false positives (FP)/case.
Crohn's disease (CD) is a chronic progressive destructive disease. Currently available instruments measure disease activity at a specific point in time. An instrument to measure cumulative structural damage to the bowel, which may predict long-term disability, is needed. The aim of this article is to outline the methods to develop an instrument that can measure cumulative bowel damage. The project is being conducted by the International Program to develop New Indexes in Crohn's disease (IPNIC) group. This instrument, called the Crohn's Disease Digestive Damage Score (the Lémann score), should take into account damage location, severity, extent, progression, and reversibility, as measured by diagnostic imaging modalities and the history of surgical resection. It should not be “diagnostic modality driven”: for each lesion and location, a modality appropriate for the anatomic site (for example: computed tomography or magnetic resonance imaging enterography, and colonoscopy) will be used. A total of 24 centers from 15 countries will be involved in a cross-sectional study, which will include up to 240 patients with stratification according to disease location and duration. At least 120 additional patients will be included in the study to validate the score. The Lémann score is expected to be able to portray a patient's disease course on a double-axis graph, with time as the x-axis, bowel damage severity as the y-axis, and the slope of the line connecting data points as a measure of disease progression. This instrument could be used to assess the effect of various medical therapies on the progression of bowel damage. (Inflamm Bowel Dis 2011)
Crohn's disease; illness index severity; magnetic resonance imaging