
tumor segmentation
Recently Published Documents


TOTAL DOCUMENTS: 1797 (FIVE YEARS: 1119)

H-INDEX: 41 (FIVE YEARS: 17)

2022 ◽ Vol 73 ◽ pp. 103460
Author(s): Chi Zhang, Jingben Lu, Qianqian Hua, Chunguo Li, Pengwei Wang

2022 ◽ Vol 109 ◽ pp. 104649
Author(s): Junting Zhao, Meng Dang, Zhihao Chen, Liang Wan

2022 ◽ Vol 73 ◽ pp. 103438
Author(s): Weijin Xu, Huihua Yang, Mingying Zhang, Zhiwei Cao, Xipeng Pan, ...

2022 ◽ Vol 22 (1) ◽ pp. 1-30
Author(s): Rahul Kumar, Ankur Gupta, Harkirat Singh Arora, Balasubramanian Raman

Brain tumors are among the most critical malignant neurological cancers, with one of the highest numbers of deaths and injuries worldwide. They are categorized into two major classes, high-grade glioma (HGG) and low-grade glioma (LGG): HGG is more aggressive and malignant, whereas LGG tumors are less aggressive but can progress to HGG if left untreated. Classifying brain tumors into the corresponding grade is therefore a crucial task, especially for treatment decisions. Motivated by the importance of such critical threats to humans, we propose a novel framework for brain tumor classification using discrete wavelet transform (DWT)-based fusion of MRI sequences and Radiomics feature extraction. We used the Brain Tumor Segmentation (BraTS) 2018 challenge training dataset to evaluate our approach, extracting features from three regions of interest derived from combinations of several tumor regions. We applied wrapper-based feature selection to retain a significant subset of features and trained several machine learning classifiers: Random Forest, Decision Tree, and Extremely Randomized Trees. For proper validation of our approach, we adopted five-fold cross-validation. We achieved state-of-the-art performance across several metrics, ⟨Acc, Sens, Spec, F1-score, MCC, AUC⟩ ≡ ⟨98.60%, 99.05%, 97.33%, 99.05%, 96.42%, 98.19%⟩, where Acc, Sens, Spec, F1-score, MCC, and AUC denote accuracy, sensitivity, specificity, F1-score, Matthews correlation coefficient, and area under the curve, respectively. We believe the proposed approach can play a crucial role in clinical treatment planning and pre-surgical guidance.
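
The abstract does not include code; as a rough illustration only, the sketch below shows how DWT-based fusion of two MRI sequences and wrapper-style feature selection with five-fold cross-validation might be wired together with PyWavelets and scikit-learn. The toy arrays standing in for MRI slices, the radiomics feature matrix `X`, and the grade labels `y` are placeholders, not artifacts of the paper, and the forward sequential selector is only one possible wrapper method.

```python
# Hypothetical sketch: DWT fusion of two MRI slices, then wrapper-based
# feature selection and 5-fold cross-validation (not the authors' code).
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

def dwt_fuse(slice_a: np.ndarray, slice_b: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    """Fuse two co-registered 2-D slices: average the approximation
    coefficients, keep the max-magnitude detail coefficients."""
    a_lo, a_hi = pywt.dwt2(slice_a, wavelet)
    b_lo, b_hi = pywt.dwt2(slice_b, wavelet)
    fused_lo = (a_lo + b_lo) / 2.0
    fused_hi = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                     for da, db in zip(a_hi, b_hi))
    return pywt.idwt2((fused_lo, fused_hi), wavelet)

rng = np.random.default_rng(0)
t1_slice = rng.normal(size=(64, 64))      # stand-in for a T1 slice
flair_slice = rng.normal(size=(64, 64))   # stand-in for a FLAIR slice
fused = dwt_fuse(t1_slice, flair_slice)   # fused slice, same 64x64 shape

# Placeholder radiomics feature matrix (n_cases x n_features) and grade labels.
X = rng.normal(size=(60, 20))
y = rng.integers(0, 2, size=60)           # 0 = LGG, 1 = HGG (toy labels)

# Wrapper-style (forward sequential) feature selection inside each classifier.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for clf in (RandomForestClassifier(random_state=0),
            ExtraTreesClassifier(random_state=0)):
    model = make_pipeline(
        SequentialFeatureSelector(clf, n_features_to_select=5, cv=3),
        clf,
    )
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(type(clf).__name__, scores.mean())
```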


2022 ◽ Vol 73 ◽ pp. 103442
Author(s): Yujian Liu, Jie Du, Chi-Man Vong, Guanghui Yue, Juan Yu, ...

Author(s): Layth Kamil Adday Almajmaie, Ahmed Raad Raheem, Wisam Ali Mahmood, Saad Albawi

Segmenting brain tissues from magnetic resonance images (MRI) poses substantial challenges to the clinical research community, especially when precise estimation of such tissues is required. In recent years, advances in deep learning, more specifically in fully convolutional networks (FCNs), have yielded path-breaking results in segmenting brain tumour tissue with high accuracy and precision, much to the relief of clinical physicians and researchers alike. A new hybrid deep learning architecture combining SegNet and U-Net is proposed here to segment brain tissue, in which the skip connections of the U-Net network are suitably exploited. The results indicated that the multi-scale information generated by SegNet can be further exploited to obtain precise tissue boundaries from the brain images. Further, to ensure that the segmentation method performs well and produces precisely delineated contours, the output is incorporated as a level-set layer in the deep learning network. The proposed method was evaluated on the Brain Tumor Segmentation (BraTS) 2017 and BraTS 2018 datasets, dedicated MRI brain tumour benchmarks, and the results clearly indicate better performance in segmenting brain tumours than existing methods.
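
As a loose illustration of the kind of hybrid the abstract describes, the PyTorch sketch below combines a SegNet-style pooling-indices path (max-unpooling) with a U-Net-style skip connection (channel concatenation) in a single encoder-decoder stage. The layer sizes, the `HybridStage` name, and the omission of the level-set layer are simplifying assumptions for brevity, not the authors' architecture.

```python
# Hypothetical one-stage encoder-decoder mixing SegNet-style max-unpooling
# with a U-Net-style skip concatenation (illustrative only).
import torch
import torch.nn as nn

class HybridStage(nn.Module):
    def __init__(self, in_ch: int = 1, mid_ch: int = 16, out_ch: int = 2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, padding=1),
                                 nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True))
        # SegNet keeps the pooling indices so the decoder can unpool exactly.
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.bottleneck = nn.Sequential(nn.Conv2d(mid_ch, mid_ch, 3, padding=1),
                                        nn.ReLU(inplace=True))
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        # U-Net concatenates the encoder feature map with the upsampled one.
        self.dec = nn.Sequential(nn.Conv2d(mid_ch * 2, mid_ch, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(mid_ch, out_ch, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skip = self.enc(x)                    # encoder features (skip path)
        pooled, idx = self.pool(skip)         # SegNet: remember pooling indices
        up = self.unpool(self.bottleneck(pooled), idx, output_size=skip.shape)
        fused = torch.cat([up, skip], dim=1)  # U-Net: skip concatenation
        return self.dec(fused)                # per-pixel class logits

# Toy usage on a single-channel 64x64 "MRI" slice.
logits = HybridStage()(torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```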


Author(s): Nermeen Elmenabawy, Mervat El-Seddek, Hossam El-Din Moustafa, Ahmed Elnakib

A pipelined framework is proposed for accurate, automated, simultaneous segmentation of the liver and hepatic tumors from computed tomography (CT) images. The framework is composed of three pipelined stages. First, two different transfer-learned deep convolutional neural networks (CNNs) are applied to extract high-level compact features from the CT images. Second, a pixel-wise classifier is used to obtain two classified output maps, one for each CNN model. Finally, a fusion neural network (FNN) integrates the two maps. Experiments on the MICCAI 2017 database of the liver tumor segmentation (LITS) challenge yield a Dice similarity coefficient (DSC) of 93.5% for liver segmentation and 74.40% for lesion segmentation, using a 5-fold cross-validation scheme. Comparisons with state-of-the-art techniques on the same data show the competitive performance of the proposed framework for simultaneous liver and tumor segmentation.
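
The Dice similarity coefficient quoted above, DSC = 2|A ∩ B| / (|A| + |B|), is straightforward to compute from binary masks. The NumPy sketch below is a generic implementation with placeholder mask names, not the evaluation code used in the paper.

```python
# Generic Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2 * |pred ∩ truth| / (|pred| + |truth|), in [0, 1]."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping square "liver" masks on a 100x100 grid.
pred_mask = np.zeros((100, 100), dtype=bool)
truth_mask = np.zeros((100, 100), dtype=bool)
pred_mask[20:60, 20:60] = True
truth_mask[30:70, 30:70] = True
print(dice_coefficient(pred_mask, truth_mask))  # ≈ 0.5625
```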

