Requirements for data quantification
Although conventional T1- and T2-weighted images have been valuable tools for diagnosing gross brain injuries, more subtle or diffuse damage has often been difficult to study. To improve the accuracy of abnormality detection and to extract more objective findings, quantitative evaluation is required. If one can provide quantitative measures, such as volumes, shapes, and various MR parameters (e.g., T1, T2, anisotropy, and diffusivity) of various brain regions of a patient, together with the normal ranges and cut-off values, those measures can be used much as the results of blood tests are. This would not only draw attention to potentially abnormal areas but also provide new ways to evaluate MR images, expanding the reach of MR-based diagnosis by enabling the detection of previously hard-to-define abnormalities. Such numbers would allow a clinician to conduct statistical comparisons between diagnostic groups and could be used to estimate the impact on neurologic symptoms or future neurologic outcomes, which is important for neonatal brain studies. Such research will be an important foundation for creating diagnostic guidelines. Toward this end, a data quantification method needs to be established as an initial step.
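The blood-test analogy above can be made concrete with a small sketch: a patient's regional measure is converted to a z-score against a normative mean and standard deviation and flagged when it exceeds a cutoff. All numbers here (the T2 value, norm, and cutoff) are illustrative placeholders, not real normative data.

```python
# Sketch: comparing a patient's regional measure against a normative
# range, analogous to a blood-test reference interval. Normative mean,
# SD, and cutoff are hypothetical, for illustration only.

def flag_abnormal(value, norm_mean, norm_sd, z_cutoff=2.0):
    """Return the z-score and whether its magnitude exceeds the cutoff."""
    z = (value - norm_mean) / norm_sd
    return z, abs(z) > z_cutoff

# Hypothetical example: a regional T2 value (ms) against a made-up norm.
z, abnormal = flag_abnormal(value=145.0, norm_mean=130.0, norm_sd=5.0)
```

In practice the normative range would come from a control cohort, and the cutoff would be chosen to balance sensitivity against false positives.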
Strategy for data quantification
For data quantification, the areas from which to extract MR parameters (e.g., T2 values) need to be defined. The defined area is often called a region of interest (ROI). There are three axes along which to define an ROI: size (large or small); number (single or multiple); and the way the area is delineated (manual or automated). For example, the manual ROI method is a straightforward approach to measuring MR parameters of a specific brain structure with rich localized information. However, the number of ROIs is usually limited because drawing an ROI is labor-intensive and time-consuming. This limits the spatial specificity of the findings, because large brain areas remain unsurveyed. The ROIs are placed according to an a priori hypothesis, which means that this type of analysis is applicable only to hypothesis-driven studies. There is also an issue with reproducibility. When drawing ROIs, the corresponding image slice levels and the locations of brain structures in different subjects are judged from anatomic features. However, adjusting brain position and angle at the time of the scan is not easy in the neonate, and the ROI drawing itself requires anatomic knowledge of the neonatal brain; the reproducibility therefore depends on the operator's skill. To achieve high reproducibility, one can increase the size of the ROI. In the extreme case, one can define the entire brain as a single ROI; the ROI can then be defined within a subject or across subjects with almost perfect reproducibility, but there is no localized information. In general, the manual ROI method involves an inverse relationship between reproducibility and spatial information. The opposite extreme is to use a single voxel as the ROI, which carries the most localized information; however, matching voxels one-to-one across subjects manually is almost impossible.
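The basic ROI measurement described above can be sketched in a few lines: an ROI is a mask over the parameter map, and the reported value is a summary statistic (here the mean) over the masked voxels. The arrays below are tiny synthetic stand-ins for real image volumes.

```python
import numpy as np

# Sketch: extracting a mean MR parameter (e.g., from a T2 map) inside a
# manually drawn ROI, represented as a boolean mask over the image grid.
# Values are synthetic, for illustration only.
t2_map = np.array([[100.0, 120.0],
                   [140.0, 160.0]])          # toy 2x2 "T2 map" (ms)
roi_mask = np.array([[True,  True],
                     [False, False]])        # the "manually drawn" ROI

roi_mean = t2_map[roi_mask].mean()           # mean T2 inside the ROI
```

Enlarging the mask makes the measurement more reproducible but less localized; shrinking it toward a single voxel does the opposite, which is the trade-off discussed above.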
For the initial step toward clinical application, it is necessary to screen the whole brain with rich spatial information. To handle large amounts of image data, an automated method is preferable. Voxel-based analyses, which perform automated whole-brain voxel matching with computer software, are suitable for this purpose. Matching every voxel to its corresponding voxel in another brain means transforming the shape of one brain into that of the other; this procedure is often called "normalization." It provides the highest possible localized information. The reproducibility depends on the method used for image normalization, which will be discussed later.
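The core idea of normalization can be illustrated with a toy resampling sketch: each voxel of the target (atlas) grid is mapped through the inverse spatial transform into the subject image, so corresponding voxels line up across brains. A one-voxel integer shift stands in here for the nonlinear deformation a real normalization would estimate.

```python
import numpy as np

# Toy sketch of normalization as resampling: for every voxel of the
# target grid, apply the inverse transform to find the source voxel.
# A simple integer translation replaces a real deformation field.

def normalize_nn(subject, shift):
    """Nearest-neighbour resampling of a 2-D image under an integer shift."""
    out = np.zeros_like(subject)
    rows, cols = subject.shape
    for r in range(rows):
        for c in range(cols):
            sr, sc = r - shift[0], c - shift[1]   # inverse transform
            if 0 <= sr < rows and 0 <= sc < cols:
                out[r, c] = subject[sr, sc]
    return out

img = np.arange(9).reshape(3, 3)
warped = normalize_nn(img, shift=(1, 0))   # image shifted down one voxel
```

Real pipelines estimate affine and nonlinear transforms from the images themselves and interpolate rather than copy, but the voxel-to-voxel correspondence they establish is the same idea.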
Statistical analysis after normalization of individual brains to an atlas space is an effective quantification strategy for detecting differences between a target group and a control group without an a priori hypothesis. This strategy is also suitable for automated detection of pathology. One important drawback of this voxel-based statistical comparison is its low sensitivity for detecting widespread subtle abnormalities: because of the large number of voxels and the noise, it is not easy to achieve statistical significance, especially after multiple-comparison correction. To partially address this issue, isotropic "smoothing" or "filtering" of the image is often applied. Another way to increase the statistical power to detect widespread subtle abnormalities is atlas-based analysis. In this method, a pre-segmented set of ROIs covering the entire brain is overlaid on the normalized image, and MR parameters are measured inside the ROIs. The automatically placed ROIs act as a filter that groups voxels in an anatomically reasonable way. Therefore, to achieve higher statistical power, one can increase the size of each ROI and reduce the total number of ROIs; conversely, to localize an abnormality more precisely, one can reduce the size of each ROI and increase the total number of ROIs. Again, the most extreme case is to use each voxel as an ROI. In either case, accurate image normalization is key to quantitative image analysis.
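The atlas-based step above reduces to grouping the voxels of a normalized parameter map by their atlas label and reporting one summary value per label. The labels and parameter values below are synthetic; a real atlas would carry dozens to hundreds of anatomically defined labels.

```python
import numpy as np

# Sketch of atlas-based analysis: a pre-segmented label image (the
# "atlas") groups voxels of a normalized parameter map into ROIs, and
# one summary value is reported per ROI. All values are synthetic.

param_map = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # e.g., FA values
atlas     = np.array([1,   1,   2,   2,   2,   0])      # 0 = background

roi_means = {int(label): param_map[atlas == label].mean()
             for label in np.unique(atlas) if label != 0}
# Fewer, larger ROIs average over more voxels (higher statistical
# power); more, smaller ROIs localize better, down to the single-voxel
# limit, mirroring the trade-off described in the text.
```

Each per-ROI mean can then enter a group comparison, with far fewer tests to correct for than in a voxel-wise analysis.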