The work titled "Application and evaluation of artificial intelligence methods in medical diagnosis problems" by its creator, Spiliotis Georgios, is made available under the Creative Commons Attribution 4.0 International licence.
Bibliographic Citation
Georgios Spiliotis, "Application and evaluation of artificial intelligence methods in medical diagnosis problems", Diploma Thesis, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2018.
https://doi.org/10.26233/heallink.tuc.74271
Magnetic Resonance Imaging (MRI) is a widely used medical imaging modality that provides accurate information about human tissue, anatomy and pathology in a non-invasive manner. When used to scan the human brain, it produces images of high contrast, distinguishing the three major brain tissues: Cerebro-Spinal Fluid (CSF), Grey Matter (GM) and White Matter (WM). In this way MRI can greatly assist radiologists and doctors in providing a more precise diagnosis and therapy. Because of their unpredictable appearance and shape, segmenting brain tumors from multi-modal imaging data is one of the most challenging tasks in medical image analysis. Manual detection and classification of a brain tumor by an expert is still considered the most acceptable method, but it is too time-consuming, especially because of the large amount of data that has to be analysed manually.

In this thesis we examine, optimize and finally combine specific state-of-the-art methods, comprising four consistent Computer-Aided Diagnosis (CAD) processes for the detection of a brain tumor from T2-weighted MRI of the axial plane (T2 MRI). We denote the four proposed methodologies as "Method 1" to "Method 4". These methodologies are based on image pre-processing and on classification utilizing either neural networks (NN) or a hybrid combination of neural networks and fuzzy logic (ANFIS). In order to gauge the current state of the art in automated brain tumor segmentation and to compare the various methods proposed in the literature, we use a large dataset of brain tumor MR scans in which the relevant tumor structures have been delineated, provided freely by the Multimodal Brain Tumor Image Segmentation (BRATS) MICCAI 2015 challenge. Our training and testing dataset, referring to male and female adult subjects, includes 24 non-tumorous cases and 202 tumorous cases, all of which have been segmented visually by our experienced neurosurgeon partner, Dr. A. Krasoudakis.
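As a rough illustration of the neural-network classification stage mentioned above, a minimal feed-forward network trained with back-propagation might look like the following sketch. The layer sizes, learning rate and the synthetic 13-feature toy data are illustrative assumptions, not the thesis configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FeedForwardNN:
    """Minimal one-hidden-layer network with back-propagation (sketch)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)      # hidden activations
        return sigmoid(self.h @ self.W2 + self.b2)   # tumour probability

    def train_step(self, X, y, lr=0.5):
        out = self.forward(X)
        # Gradient of mean squared error through the sigmoid outputs
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= lr * self.h.T @ d_out / len(X)
        self.b2 -= lr * d_out.mean(axis=0)
        self.W1 -= lr * X.T @ d_h / len(X)
        self.b1 -= lr * d_h.mean(axis=0)
        return float(np.mean((out - y) ** 2))

# Toy feature vectors (e.g. 13 statistics per scan) with synthetic labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 13))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

net = FeedForwardNN(13, 8)
losses = [net.train_step(X, y) for _ in range(500)]
```

After 500 gradient steps the mean squared error on the toy set should be lower than at initialization, which is all this sketch is meant to demonstrate.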
The healthy MRI scans come from the "St. George" general hospital of Chania, Crete, and from the Harvard General Hospital database. Our dataset contains about 5% high-grade glioma cases, 82% low-grade glioma cases, 3% unhealthy but not recognizable cases and 10% healthy cases. At the pre-processing stage we apply a skull-stripping algorithm to isolate the brain region. Subsequently, we use a high-pass Gaussian filter for sharpening and a median filter for noise reduction. At the post-processing stage we use Otsu's threshold for image segmentation and implement morphological operators for region-of-interest (ROI) definition.

In our proposed CAD Method 1, feature extraction is performed using the Grey-Level Co-occurrence Matrix (GLCM), from which 13 statistical features are calculated. In Method 2, feature extraction is performed using the Discrete Wavelet Transform (DWT), and dimensionality reduction is implemented using Principal Component Analysis (PCA). In Method 3, the above methods are combined: the GLCM is applied after the DWT and PCA stages, so as to provide the necessary statistical features. In Method 4, the Mean-Shift algorithm is implemented at the post-processing stage for better segmentation results, and feature extraction follows Method 3.

The features extracted by every proposed CAD method are processed first with a feed-forward artificial neural network (ANN) trained with the back-propagation algorithm, and then, for the methods using the GLCM, with an adaptive neuro-fuzzy inference system (ANFIS). The experimental results of the proposed methods have been validated and evaluated over a testing set of images in terms of sensitivity, specificity and accuracy, with the best results reaching 98.8% sensitivity, 62.5% specificity and 95.6% accuracy.
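Two of the named pipeline steps, Otsu's threshold for the ROI mask and GLCM feature extraction, can be sketched in pure NumPy as below. The 8-level quantisation, the horizontal distance-1 offset and the three features shown are illustrative assumptions; the thesis computes 13 GLCM statistics:

```python
import numpy as np

def otsu_threshold(img):
    """Return the grey level maximising the between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class probability up to t
    mu = np.cumsum(p * np.arange(256))      # class mean numerator
    mu_t = mu[-1]
    # Between-class variance for every candidate threshold
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return int(np.argmax(sigma_b))

def glcm_features(img, levels=8):
    """GLCM for a horizontal (0 deg, distance-1) offset, plus 3 features."""
    q = (img.astype(float) * levels / 256).astype(int)  # quantise grey levels
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()                       # normalise to probabilities
    i, j = np.indices(glcm.shape)
    return {
        "contrast": float(np.sum(glcm * (i - j) ** 2)),
        "energy": float(np.sum(glcm ** 2)),
        "homogeneity": float(np.sum(glcm / (1.0 + np.abs(i - j)))),
    }

# Synthetic two-region "scan": dark background, bright central blob.
img = np.full((64, 64), 40, dtype=np.uint8)
img[20:44, 20:44] = 200
t = otsu_threshold(img)
mask = img > t            # candidate ROI for the morphological operators
feats = glcm_features(img)
```

On this synthetic two-level image the threshold separates the bright 24x24 blob from the background exactly; on a real T2 slice the resulting mask would then be refined by the morphological operators described above.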