3D Variational Brain Tumor Segmentation using a High Dimensional Feature Set

Dana Cobzas, Neil Birkbeck, Martin Jagersand (Computer Science, University of Alberta); Mark Schmidt (Computer Science, University of British Columbia); Albert Murtha (Department of Oncology, University of Alberta)

Abstract

Tumor segmentation from MRI data is an important but time-consuming task performed manually by medical experts. Automating this process is challenging due to the high diversity in appearance of tumor tissue among different patients and, in many cases, the similarity between tumor and normal tissue. A further challenge is how to make use of prior information about the appearance of the normal brain. In this paper we propose a variational brain tumor segmentation algorithm that extends current approaches from texture segmentation by using a high dimensional feature set calculated from MRI data and registered atlases. Using manually segmented data we learn a statistical model for tumor and normal tissue. We show that using a conditional model to discriminate between normal and abnormal regions significantly improves the segmentation results compared to traditional generative models. Validation is performed by testing the method on several cancer patient MRI scans.

1. Introduction

Radiation oncologists, radiologists, and other medical experts spend a substantial portion of their time segmenting medical images. Accurately labeling brain tumors and associated edema in MRI (Magnetic Resonance Imaging) is a particularly time-consuming task, and considerable variation is observed between labelers. Furthermore, in most settings the task is performed on a 3D data set by labeling the tumor slice-by-slice in 2D, limiting the global perspective and potentially generating sub-optimal segmentations.
Consequently, over the last 15 years a large amount of research has focused on semi-automatic and fully automatic methods for detecting and/or segmenting brain tumors from MRI scans. Segmenting tumors in MRI images, as opposed to natural scenes, is particularly challenging [22]. Tumors vary greatly in size and position, have a variety of shape and appearance properties, have intensities overlapping with normal brain tissue, and an expanding tumor can often deflect and deform nearby structures, giving healthy tissue an abnormal geometry as well. Therefore, it is in general difficult to segment a tumor by simple unsupervised thresholding [12]. Other, more successful approaches consider fuzzy clustering [7, 19] or texture information [1, 6]. Often the information extracted from the MRI images is incorporated into a supervised approach that uses labeled data to automatically learn a model for segmentation. Different machine learning (ML) classification techniques have been investigated: Neural Networks [6], SVMs (Support Vector Machines) [8, 28], MRFs (Markov Random Fields) [11, 2], and most recently CRFs (Conditional Random Fields) [16]. But, as previously mentioned, statistical classification may not allow differentiation between non-enhancing tumor and normal tissue, due to the overlapping intensity distributions of healthy tissue, tumor, and surrounding edema. One major advantage when segmenting medical images, as opposed to natural scenes, is that structural and intensity characteristics are well known, up to natural biological variability or the presence of pathology. Therefore, a geometric prior can be exploited by atlas-based segmentation, in which a fully labeled template MR volume is registered to an unknown data set [15, 33, 17, 23]. Having extracted different types of information from the MRI data (e.g., texture, symmetry, atlas-based priors), one challenge is to formulate a segmentation process that accounts for all of it.
Although classification techniques have been widely explored in medical image segmentation, including for this problem, variational and level set techniques are a powerful alternative segmentation strategy that is now of substantial interest to the field [24, 19]. It has been shown that variational techniques can integrate different types of information in a principled way (e.g., boundary information [3], region information [4, 21], shape priors [5, 27], texture for vector-valued images [26, 25]). One advantage of using a level set representation is that, since the curve/surface is implicitly represented, topological changes are naturally possible; efficient numerical implementations also exist [29]. In this paper, we propose a variational MRI tumor segmentation method that incorporates both atlas-based priors and learned statistical models for tumor and healthy tissue. The formulation extends the Chan-Vese region-based segmentation model [4] in a manner similar to texture-based approaches [26, 25], but instead of using an unsupervised approach we learn a statistical model from a set of features specifically engineered for the MRI brain tumor segmentation task. The features are calculated from the original MRI data together with registered brain templates and atlases, similar to Schmidt [28]. We show the advantage of using a conditional model based on logistic regression over the generative model (e.g., Gaussian) usually used in variational segmentation. Our multimodal feature set uses specific anatomical priors fully integrated into a region-based variational segmentation, as opposed to Prastawa et al. [24], who use the level set only to smooth the segmentation results at the final stage. It also differs from Ho et al. [13], who proposed a region competition method implemented with a level set 'snake' but without considering template and atlas priors.
In summary, the main contributions of this paper are:

• We extract a high dimensional multi-scale feature set from brain MRI images using brain atlases and templates registered with the data. The multi-modal, multi-scale feature set incorporates both anatomical priors for the brain tissue and texture information.

• We incorporate this set of features into a 3D variational region-based segmentation method that uses a learned statistical model, defined on the same set of features, to differentiate between normal and pathological (tumor) tissue.

The remainder of this paper is organized as follows. The next section briefly presents the problem formulation. In Section 3 we introduce the multidimensional feature set and describe the methods used for registration and pre-processing. Section 4 describes the choice of statistical model used for characterizing the regions, and Section 5 gives an overview of the entire system. Finally, Section 6 presents the experimental results.

2. Problem formulation

This section presents the general formulation for the variational 3D segmentation problem without committing to a particular set of features or statistical model for the data. The next two sections give details on the specific choices for our MRI segmentation problem. Assume we have a multivariate M-dimensional feature volume V = {Vi | i = 1 . . . M} where Vi : Ω ⊂ ℝ³ → ℝ⁺ and the domain Ω is assumed open and bounded. The segmentation task consists of finding a surface S (assumed regular) that splits the domain Ω into two disjoint regions Ω1 and Ω2; S represents the interface between the regions, denoted ∂Ω. Following [21], we can segment by maximizing the a posteriori partitioning probability p(P(Ω) | V(x)) with P(Ω) = {Ω1, Ω2} and x ∈ Ω. This optimization is equivalent to an energy minimization [21]. Two assumptions are necessary: (i) all partitions are equally probable and (ii) the pixels within each region are independent.
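As a concrete illustration of assumption (ii), the log-probability of a candidate partition factorizes into a sum of per-voxel log-densities. The following is a minimal numpy sketch (the function name is our own, not from the paper) of this data term, given precomputed per-voxel log-densities for the two regions:

```python
import numpy as np

def partition_log_likelihood(logp1, logp2, mask):
    """Log-probability (up to a constant) of a candidate partition.

    Under the independence assumption the partition log-likelihood is a
    sum of per-voxel log-densities: voxels inside Omega_1 (mask True)
    use p1, the remaining voxels use p2.

    logp1, logp2 : volumes of log p1(V(x)) and log p2(V(x))
    mask         : boolean volume, True inside Omega_1
    """
    return logp1[mask].sum() + logp2[~mask].sum()
```

Maximizing this quantity over partitions is the data part of the MAP criterion; the boundary regularizer introduced next is omitted in this sketch.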
Denote the two probability density functions for the value V(x) to be in region Ω1 and Ω2 by p1(V(x)) and p2(V(x)), respectively. The optimal segmentation is then found by minimizing the energy:

E(Ω1, Ω2) = − ∫_{Ω1} log p1(V(x)) dx − ∫_{Ω2} log p2(V(x)) dx + α ∫_S dS    (1)

The first two terms are referred to as data terms and the last term represents the regularization on the area of S. One further challenge is defining a family of probability density functions (PDFs) p1, p2 that approximate the information of each region and are able to discriminate between the two regions. Section 4 gives details on the choice of statistical models used for the MRI segmentation. We now introduce the level set representation by extending the integrals in Equation 1 to the whole domain using the level set function:

Φ(x) = D(x, S),  if x ∈ Ω1
Φ(x) = −D(x, S), if x ∈ Ω2    (3)

where D(x, S) represents the distance from point x to the surface (interface) S. Further, let Hε(z) and δε(z) be the regularized Heaviside and Dirac functions, respectively. The energy function from Equation 1 can then be written as:

E(Ω1, Ω2) = ∫_Ω ( − Hε(Φ) log p1(V(x)) − (1 − Hε(Φ)) log p2(V(x)) + α |∇Hε(Φ)| ) dx    (4)

The Euler-Lagrange evolution equation for Φ is (see [26]):

Φt(x) = δε(Φ(x)) ( log p1(V(x)) − log p2(V(x)) + α div(∇Φ/|∇Φ|) )    (7)

This region segmentation strategy was proposed in [25, 26] using an unsupervised approach, where the parameters of the region probability distributions are updated at each step using the corresponding Euler-Lagrange equations derived from (4). We instead adopt a supervised approach where the parameters are learned a priori from labeled training data.

3. Feature Extraction

Our experimental MRI data consist of T1, T1c (T1 after injection with the contrast agent gadolinium), and T2 images. We use two types of features. The first type - image-based features - are extracted from image intensities alone. The second type - alignment-based features - use templates and tissue prior probability atlases registered with the data. The motivation for using the two types of features is that although carefully designed image-based features (such as textures) can improve segmentation results [1, 6], they do not take advantage of anatomical prior information that is known about the brain, and hence require a lot of training data. As recently shown [28, 15, 24, 11], spatially aligned templates overcome many image-based problems such as intensity overlap, intensity heterogeneity, and the lack of contrast at structure boundaries. They also allow accurate results to be obtained with a relatively small number of training images.

3.1. Data pre-processing

To define alignment-based features, the data and templates have to be registered both spatially and in intensity. We perform a pre-processing step that reduces the effect of noise and intensity variations within and between images, in addition to achieving spatial registration. This is done in a pre-processing pipeline as in [28, 34]. For the registration and resampling stages we used Statistical Parametric Mapping implementations [32]. The pipeline consists of:

1. Noise reduction (non-linear filtering).
2. Inter-slice intensity variation correction (weighted regression).
3. Intra-volume bias field correction (nonuniform intensity normalization, N3 [30]).
4. Alignment of different modalities (normalized mutual information).
5. Linear and non-linear alignment of the template with the data (we use the T1 template from [14]).
6. Resampling (B-splines) and intensity normalization of the template with the data (weighted regression).

For segmentation purposes, we chose to always align the templates with the original data in order to minimize the amount of distortion applied to the original data (e.g., to preserve texture properties). This is also the reason why we use the noise-free and intensity-corrected images (steps 1-3) only for registration and use the original images in the segmentation. Figure 1 illustrates the pre-processing step for two cases. The left image is the original T1, the middle image shows the effect of steps 1-3, and the right image shows the registered T1 template.

Figure 1. The effect of pre-processing: (left) original T1; (middle) noise-free and intensity-corrected (steps 1-3); (right) registered template for T1.

3.2. Features

The image-based features include the original data modalities and a multi-scale texture characterization. These types of features are well studied in the context of texture segmentation. In the framework of variational segmentation, the most common texture features are Gabor (wavelet) filters (e.g., [21]) and structure tensors [25]. Wavelet-based texture features have also been used previously for medical image segmentation [1]. Another approach that has been discussed in this field is to include the intensities of neighboring pixels as additional features and let a classifier learn how to combine them [6, 8]. In the present approach we use a multi-scale Gabor-type feature set [18]. Figure 2 shows 6 examples of texture features.

Figure 2. Example of texture-based features (MR8 [18]), from top/left to bottom/right: Gaussian, Laplacian of Gaussian, symmetric (3) and antisymmetric (3) features at 3 scales.

Alignment-based features were previously used by Kaus et al. [15], who define a 'distance transform' based on a labeled template. The abnormality metric from [10] could also be used as a feature of this type. Schmidt [28] extended the use of templates by defining features that use the template data directly. This approach is valid when using
machine learning on a pixel-by-pixel basis for segmentation, but is not appropriate for a variational energy-based segmentation. We use three types of alignment-based features. The first type is the spatial likelihoods of 3 normal tissues (WM - white matter, GM - gray matter, and CSF) obtained from [14]. The actual features are calculated by taking the differences between the registered priors and T1 (for GM) and T2 (for WM, CSF). Figure 3 shows the GM and WM priors (first row, last two images) and the corresponding features (second row, last two images). A second type is the average intensity maps from a set of individuals aligned with the template coordinate system (also obtained from [14]). Again, the features were calculated by taking the difference between the registered templates and the data (comparing images of corresponding modalities). Finally, a third type of alignment-based feature is a characterization of left-to-right symmetry. Tumors are typically asymmetric while normal areas are typically symmetric. This type of feature was originally proposed by Gering [11]. We characterize the symmetry by subtracting the intensity value of the pixel on the opposite side of the line of symmetry. Figure 3 shows the symmetry features corresponding to T1 and T2 (first two images on the bottom row). The line of symmetry is extracted from the template and registered with the data (along with the template). In summary, the original data is transformed into a high dimensional feature set, where each voxel in the brain volume is characterized by an M-dimensional vector x.

4. Region Statistics

The variational segmentation framework presented in Section 2 requires a statistical model (in the form of a probability density function) to describe the consistency of each region. The most common choice is the Gaussian distribution [25, 26]. Alternatively, the PDF can be estimated based on histogram(s). Rousson et al.
[25] proposed a continuous version of the Parzen density, while Malik et al. [20] used a more complex measure based on texton histograms. We implemented and evaluated two types of PDFs. The first one is based on a Gaussian distribution and the other on a discriminatively-trained Generalized Linear Model that accounts for the discrete nature of the labels (Logistic Regression).

Figure 3. Example of alignment-based features: (top) T1, T2, prior on gray matter GM and prior on white matter WM; (bottom) symmetry features for T1, T2, and features based on the GM and WM priors.

4.1. Gaussian approximation

First, a general Gaussian approximation is used to model the vector-valued intensity information for the 'tumor' (Ω1) and 'normal brain' (Ω2) regions. Since we are working with multivariate data volumes, the parameters of the Gaussian model are the M-dimensional mean vector µi and the M × M covariance matrix Σi (i = 1, 2; one set for each region: Ω1 'tumor', Ω2 'normal brain'). The probability of a voxel V(x) to be in Ωi is:

pi(V(x)) = g(V(x) | µi, Σi) = (2π)^{−M/2} |Σi|^{−1/2} exp( −(1/2) (V(x) − µi)^T Σi^{−1} (V(x) − µi) )    (9)

The parameters {µi, Σi} are estimated from the N labeled data volumes:

µi = (1/ni) Σ_{j=1}^{N} Σ_{x∈Ωi} V(x)    (11)

Σi = (1/(ni − 1)) Σ_{j=1}^{N} Σ_{x∈Ωi} (V(x) − µi)(V(x) − µi)^T    (12)

where ni is the total number of voxels labeled Ωi. Under the hypothesis that the channels are not correlated, the class-conditional PDF can be estimated using the joint density probabilities from each component. This is equivalent to having a diagonal covariance matrix for each class. For modeling the 'normal brain' area we also tried using a mixture of two Gaussians, as there are two major histogram peaks (corresponding to white and gray matter). However, we did not get better results with this strategy.

Figure 4. Three stages in the surface evolution: (top) 3D surface and T2 brain intensities; (bottom) horizontal section. The color code shows how far the surface is from the manual label.

4.2.
Logistic regression approximation

As a second choice for computing the PDF we used Logistic Regression, a discriminative (rather than generative) training strategy. The PDFs for 'tumor' and 'normal brain' voxels are given by:

p1(V(x)) = lr(V(x) | α, β) = 1 / (1 + exp(−α − β^T V(x)))    (13)

p2(V(x)) = 1 − p1(V(x))    (14)

The maximum (log-)likelihood parameters α (a scalar) and β (a vector of dimension M, the number of features) are estimated from the labeled data using a 2nd-order nonlinear optimization strategy:

{α, β} = arg max_{α,β} Σ_x log p_{l(x)}(V(x))    (15)

where l(x) ∈ {1, 2} is the label of voxel x.

5. Segmentation System and Implementation Details

We now present an overview of the segmentation system and some implementation-related details (see Figure 5 for an overview). In the training phase we used data manually labeled by radiation oncologists and radiologists. The training data is pre-processed as described in Section 3.1. Next, the image-based and alignment-based features presented in Section 3.2 are extracted. We also perform skull removal using FSL tools [31] and mask all the features such that we further process only the brain area (note that this remains fully automatic). The PDFs (probability density functions) for the 'tumor' and 'normal brain' regions are computed from all data using voxels inside and outside the labels, respectively. We used both models (Gaussian and Logistic Regression) outlined in Equations 11 and 15. For the actual segmentation, the data set is run through the same pipeline as the training data, including feature extraction and skull removal. The level set is initialized with the mask corresponding to the extracted brain area. The evolution uses Equation 7. The voxel probabilities for 'tumor'/'normal brain' are calculated using Equations 9 and 13 (using the pre-computed PDF parameters).
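To make the evolution concrete, the following is a minimal numpy sketch (our own naming, not the authors' code) of one explicit update of Equation 7 under the logistic model of Equation 13, with the curvature term computed by central differences via np.gradient; boundary handling, the time step, and the clipping constant are simplifications relative to the irregular-grid scheme described in Section 5:

```python
import numpy as np

def logistic_prob(V, alpha, beta):
    """p1 of Eq. (13) for a feature volume V of shape (..., M)."""
    return 1.0 / (1.0 + np.exp(-alpha - V @ beta))

def mean_curvature(phi, eps=1e-8):
    """div(grad(phi)/|grad(phi)|) via central differences."""
    grads = np.gradient(phi)
    norm = np.sqrt(sum(g ** 2 for g in grads)) + eps
    return sum(np.gradient(g / norm, axis=a) for a, g in enumerate(grads))

def evolve_step(phi, V, alpha_reg, lr_alpha, lr_beta, dt=0.1, eps=1.0):
    """One explicit update of Eq. (7) with a smoothed Dirac delta."""
    p1 = np.clip(logistic_prob(V, lr_alpha, lr_beta), 1e-6, 1 - 1e-6)
    data = np.log(p1) - np.log(1.0 - p1)       # log p1 - log p2, Eq. (14)
    delta = (eps / np.pi) / (eps ** 2 + phi ** 2)  # regularized delta_eps
    return phi + dt * delta * (data + alpha_reg * mean_curvature(phi))
```

Iterating `evolve_step` until the zero level set stops moving corresponds to the "evolve until convergence" stage of the system.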
The evolution PDE is solved using an explicit finite differences scheme on an irregular grid (the data has 1 mm resolution in the slice plane but about 5 mm resolution in the vertical dimension, due to the inter-slice distances used during acquisition). The discretization of the data term is straightforward, while for the parabolic term representing the mean curvature motion we used central differences. The only parameter that controls the evolution is α, the ratio between the regularization and data terms. The parameter was fixed during evolution and set so that the data and regularization terms were balanced in magnitude (about 1/8 for Logistic Regression and 1/300 for the Gaussian). The evolution is stopped when the surface reaches a stable position.

The level set can change topology during evolution. Although this is an advantage in other applications, it may not be desirable in the case of tumor segmentation (a tumor is in most cases localized in one spot). In a post-processing step, we therefore remove small surface pieces that are not part of the tumor area (we remove pieces that are smaller than half the size of the biggest piece). Figure 4 shows 3 stages in the surface evolution.

Training:
(1) preprocess labeled training data (Section 3.1)
(2) extract features (Section 3.2)
(3) compute PDFs: 'tumor' p1 and 'brain' p2 (Section 4)
Segmentation:
Initialization:
(1) preprocess (Section 3.1)
(2) extract features (Section 3.2)
(3) skull removal and level set initialization
Evolution:
(4) evolve level set until convergence (Equation 7)
Postprocess:
(5) remove small surface pieces

Figure 5. Overview of the segmentation method.

For visualization, we have created a user interface (see Figure 6) where the user can visualize the surface evolution in 3D and on 2D slices (one horizontal and two vertical, parallel to the three main planes). The three planes corresponding to the 2D slices can be moved along their axes. There are two modes of displaying the 3D brain information (as transparent voxels or using the slices). If a manual segmentation is provided, a color code shows the distance error between the level set and the manually segmented surface. Different 3D brain modalities can also be displayed (e.g., T1, T2, labeled data).

Figure 6. Visualization interface (top) and two other data modalities (bottom: left - slices; right - labeled data).

6. Experiments

We validated the proposed method using data from 9 patients having either a grade 2 astrocytoma, an anaplastic astrocytoma, or a glioblastoma multiforme. The tumor area was manually segmented slice-by-slice in each data set by an expert radiologist. We performed inter-patient training (training on 8 patients and testing on 1). For qualitative evaluation, Figure 7 presents the results of applying our technique with the two types of statistics (Gaussian and Logistic Regression, as described in Section 4) for three patients (each row corresponds to a patient). We present the segmented volumes as 3D surfaces as well as one slice corresponding to the same segmentation. The first two columns show one slice of the original T1 and T2 data, the third column shows the manual segmentation, the next two columns present the results of the automatic segmentation, and the last two columns illustrate the final segmented 3D surfaces. The color code on the volume shows the distance from the manual segmentation.

For quantitative evaluation we used the VALMET validation framework [9]. The results for the 9 tumor data sets are shown in Table 1. The volume overlap measures the normalized intersection in voxel space of the two segmentations (manual X and automatic Y), given by Overlap(X, Y) = (X ∩ Y)/(X ∪ Y) (equivalent to the Jaccard similarity measure). The Hausdorff distance from X to Y is Hausdorff(X, Y) = max_{x∈X} dist(x, Y). To make the Hausdorff distance symmetric, the greater of Hausdorff(X, Y) and Hausdorff(Y, X) is taken.
The last measure, MeanDist(X, Y), represents the mean absolute surface distance. We first compared the two types of statistics, Gaussian (G) and Logistic Regression (LR). As expected, Logistic Regression always performs better: the class-conditional distributions are complex and poorly captured by the generative model, while the discriminative logistic model directly models the binary labels. For one volume the Gaussian model completely failed. To show the importance of the feature set, we compared the method with conventional variational segmentation on T1, T2, and T1c alone. The results show that the method performs about 20% better when using the full feature set. Over the data sets, for Logistic Regression the highest mean distance was about 1 voxel (4 mm) and the highest Hausdorff distance 3.5 voxels (14 mm), which is good compared to the 20 mm margin commonly used in brain tumor surgery. All techniques tend to under-segment (the results are smaller than the manual labels), probably due to the influence of the regularization term.

7. Conclusions

We have presented a variational method for brain tumor segmentation. Existing region-based variational segmentation methods based on texture features are not suited for tumor segmentation, as they are not discriminative enough when the appearance of tumor and normal tissue overlap. Using priors on the appearance of anatomical structures in the normal brain, in the form of templates and atlases, we define a set of multidimensional features and use them to calculate statistics for the 'tumor' and 'normal brain' areas from labeled MRI data. We show that a discriminatively-trained conditional model based on Logistic Regression gives better results than traditional generative models. To further improve the results we plan to investigate more sophisticated probability models, including regularized (MAP) Logistic Regression to reduce the effects of noise, kernels to expand the feature representation, and Bayesian parameter estimation.
We are also interested in exploring multi-class scenarios, where anatomical prior information could be used to help initialization (as in [23]). An advantage of variational methods compared to discrete ones (e.g., MRFs) is that any type of regularization can easily be incorporated into the energy function. We plan to investigate anisotropic regularization that would preserve discontinuities at boundaries and encode the expected shape information of tumor volumes.

Figure 7. Results of the automatic segmentation compared to the manual segmentation (columns: original T1 and T2 data; manual labeling; 2D view for LogReg and Gauss; 3D surfaces with distance error for LogReg and Gauss; color bar 0-9 mm). Each row represents a patient data set. The color code on the volume shows the distance error from the manual segmentation.

Case  Overlap (LRNF / G / LR)   Hausdorff (LRNF / G / LR)   Mean Dist (LRNF / G / LR)
1     16% / 30% / 56%           4.83 / 3.36 / 2.91          1.83 / 1.09 / 1.02
2     76% / 72% / 82%           1.32 / 1.79 / 1.71          0.29 / 0.35 / 0.22
3     61% / 71% / 74%           2.90 / 2.41 / 1.75          0.72 / 0.51 / 0.43
4     64% / 37% / 47%           2.13 / 5.04 / 2.99          0.47 / 1.10 / 0.57
5     46% / 47% / 48%           2.69 / 8.37 / 2.67          0.73 / 0.76 / 0.57
6     53% / 61% / 68%           3.63 / 9.94 / 3.32          0.86 / 0.76 / 0.56
7     38% / 08% / 48%           3.84 / 8.00 / 3.49          0.90 / 1.54 / 0.67
8     68% / 51% / 72%           2.89 / 3.12 / 2.74          0.57 / 0.81 / 0.46
9     35% / 39% / 47%           3.09 / 2.29 / 2.61          0.98 / 0.78 / 0.71

Table 1. VALMET scores for the 9 patient data sets with the two types of statistics, G (Gaussian) and LR (Logistic Regression), with the full set of features, and Logistic Regression without the features (LRNF). The overlap score represents the percentage of overlap between the automatic and manual segmentation (with respect to tumor size). The Hausdorff and mean distances are in voxel units (about 3 mm).

References

[1] C. Busch. Wavelet based texture segmentation of multimodal tomographic images. Computers and Graphics, 21(3):347-358, 1997.
[2] A. Capelle, O. Colot, and C. Fernandez-Maloigne.
Evidential segmentation scheme of multi-echo MR images for the detection of brain tumors using neighborhood information. Information Fusion, 5(3):203-216, September 2004.
[3] V. Caselles, R. Kimmel, and G. Sapiro. Geodesic active contours. Int. J. Comput. Vision, 22(1):61-79, 1997.
[4] T. Chan and L. Vese. Active contours without edges. IEEE Trans. Image Processing, 10(2):266-277, 2001.
[5] D. Cremers, T. Kohlberger, and C. Schnorr. Nonlinear shape statistics in Mumford-Shah based segmentation. In ECCV, pages 93-108, 2002.
[6] S. Dickson and B. Thomas. Using neural networks to automatically detect brain tumours in MR images. International Journal of Neural Systems, 4(1):91-99, 1997.
[7] L. Fletcher-Heath, L. Hall, D. Goldgof, and F. R. Murtagh. Automatic segmentation of non-enhancing brain tumors in magnetic resonance images. Artificial Intelligence in Medicine, 21:43-63, 2001.
[8] C. Garcia and J. Moreno. Kernel based method for segmentation and modeling of magnetic resonance images. Lecture Notes in Computer Science, 3315:636-645, October 2004.
[9] G. Gerig, M. Jomier, and M. Chakos. Valmet: a new validation tool for assessing and improving 3D object segmentation. In MICCAI, pages 516-523, 2001.
[10] D. Gering. Diagonalized nearest neighbor pattern matching for brain tumor segmentation. In MICCAI, 2003.
[11] D. Gering. Recognizing Deviations from Normalcy for Brain Tumor Segmentation. PhD thesis, MIT, 2003.
[12] P. Gibbs, D. Buckley, S. Blackband, and A. Horsman. Tumour volume determination from MR images by morphological segmentation. Physics in Medicine and Biology, 41:2437-2446, 1996.
[13] S. Ho, E. Bullitt, and G. Gerig. Level set evolution with region competition: automatic 3D segmentation of brain tumors. In 16th International Conference on Pattern Recognition, pages 532-535, 2002.
[14] ICBM View: an interactive web visualization tool for stereotaxic data from the ICBM and other projects. http://www.bic.mni.mcgill.ca/icbmview/, Online.
[15] M. Kaus, S. Warfield, A.
Nabavi, P. Black, F. Jolesz, and R. Kikinis. Automated segmentation of MR images of brain tumors. Radiology, 218:586-591, 2001.
[16] C.-H. Lee, M. Schmidt, A. Murtha, A. Bistritz, J. Sander, and R. Greiner. Segmenting brain tumors with conditional random fields and support vector machines. In Workshop on Computer Vision for Biomedical Image Applications at ICCV, 2005.
[17] K. Leemput, F. Maes, D. Vandermeulen, and P. Suetens. Automated model-based tissue classification of MR images of the brain. IEEE Transactions on Medical Imaging, 18(10):897-908, October 1999.
[18] T. Leung and J. Malik. Representing and recognizing the visual appearance of materials using three-dimensional textons. International Journal of Computer Vision, 43(1):29-44, June 2001.
[19] J. Liu, J. Udupa, D. Odhner, D. Hackney, and G. Moonis. A system for brain tumor volume estimation via MR imaging and fuzzy connectedness. Comput. Medical Imaging Graph., 21(9):21-34, 2005.
[20] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. IJCV, 43(1):7-27, 2001.
[21] N. Paragios and R. Deriche. Geodesic active regions: A new paradigm to deal with frame partition problems in computer vision. Journal of Visual Communication and Image Representation, 13:249-268, 2002.
[22] M. R. Patel and V. Tse. Diagnosis and staging of brain tumors. Seminars in Roentgenology, 39(3):347-360, 2004.
[23] M. Prastawa, E. Bullitt, S. Ho, and G. Gerig. A brain tumor segmentation framework based on outlier detection. Medical Image Analysis, 8(3):275-283, September 2004.
[24] M. Prastawa, E. Bullitt, N. Moon, K. Leemput, and G. Gerig. Automatic brain tumor segmentation by subject specific modification of atlas priors. Academic Radiology, 10(12):1341-1348, December 2003.
[25] M. Rousson, T. Brox, and R. Deriche. Active unsupervised texture segmentation on a diffusion based feature space. In CVPR, 2003.
[26] M. Rousson and R. Deriche.
A variational framework for active and adaptative segmentation of vector valued images. In MOTION '02: Proceedings of the Workshop on Motion and Video Computing, page 56, 2002.
[27] M. Rousson and N. Paragios. Shape priors for level set representations. In ECCV (2), pages 78-92, 2002.
[28] M. Schmidt. Automatic brain tumor segmentation. Master's thesis, University of Alberta, 2005.
[29] J. Sethian. Level Set Methods. Cambridge University Press, 1996.
[30] J. Sled, A. Zijdenbos, and A. Evans. A nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Transactions on Medical Imaging, 17:87-97, February 1998.
[31] S. Smith, M. Jenkinson, M. Woolrich, C. Beckmann, T. Behrens, H. Johansen-Berg, P. Bannister, M. D. Luca, I. Drobnjak, D. Flitney, R. Niazy, J. Saunders, J. Vickers, Y. Zhang, N. D. Stefano, J. Brady, and P. Matthews. Advances in functional and structural MR image analysis and implementation as FSL. NeuroImage, 23(1):208-219, 2004.
[32] Statistical Parametric Mapping. http://www.fil.ion.bpmf.ac.uk/spm/, Online.
[33] W. Wells, P. Viola, and R. Kikinis. Multi-modal volume registration by maximization of mutual information. In Medical Robotics and Computer Assisted Surgery, pages 55-62. Wiley, 1995.
[34] A. Zijdenbos, R. Forghani, and A. Evans. Automatic "pipeline" analysis of 3-D MRI data for clinical trials: application to multiple sclerosis. IEEE Transactions on Medical Imaging, 21(10):1280-1291, October 2002.