Fig 1: working procedure flowchart (uploaded by Zarin Anjuman Sejuti)

Similar publications

Article
Full-text available
In brain tumor classification, diagnosis has traditionally depended on the ability and experience of the physician. An improvement over current methods is suggested for identifying brain tumors and selecting suitable treatments. It is recommended that radiologists and physicians classify the type of...
Article
Full-text available
In many medical vision applications, segmentation and marking remain the weakest steps. This paper presents a system based on watershed transformations, which is structured to solve common problems across a range of applications and is controllable through parameter adaptation. For lung cancer identification, a system for segmenting cancer regions...
Article
Full-text available
This paper proposes a methodology for detecting, extracting, and classifying brain tumours from a patient's MRI images. Medical image processing is a rapidly emerging field that has attracted research all over the globe. Several techniques have been developed so far to process the images efficiently...

Citations

... The training set is fed into the SVM algorithm, which takes some time to build correlations between the complex images in the dataset. Once the model is successfully generated, it displays the accuracy, confusion matrix and F1 score [11][12][13] of the current learning. The model is also deployed to interact with users who can input MRI images for prediction and the results will be shown accordingly. ...
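The train-then-report workflow described in this snippet can be sketched with scikit-learn. This is a minimal illustration, not the cited system: the data below is synthetic (a stand-in for MRI feature vectors), and all names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

rng = np.random.default_rng(0)
# Two synthetic classes of 64-dimensional "image" vectors
# standing in for real MRI feature data.
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(2.0, 1.0, (100, 64))])
y = np.repeat([0, 1], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fitting the SVM is the step that "takes some time" on complex datasets.
model = SVC(kernel="rbf").fit(X_tr, y_tr)

# Once trained, the model is scored exactly as the snippet describes.
pred = model.predict(X_te)
acc = accuracy_score(y_te, pred)
cm = confusion_matrix(y_te, pred)
f1 = f1_score(y_te, pred)
```

A deployed system would wrap `model.predict` behind a user-facing interface that accepts an uploaded MRI image and returns the predicted class.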
Chapter
The Computer Network & Communication book series provides a premier interdisciplinary platform for researchers, practitioners, and educators to publish not only the most recent innovations, trends, and concerns but also the practical challenges encountered and solutions adopted in the fields of networks and communication. The series offers an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of computer networks and communication, welcomes significant contributions to all major fields of networking and communication technology in both theoretical and practical aspects, and gives researchers and practitioners from academia and industry a place to meet and share cutting-edge developments in the field.
... Correctly mapping these pixel values, as demonstrated in our alternative experiment, resulted in improved performance. Sejuti et al. [5] presented two distinct approaches: i) a CNN model and ii) a CNN-SVM Hybrid model. Notably, they did not employ any preprocessing techniques. ...
Conference Paper
Full-text available
In the realm of image classification, traditional algorithms, encompassing both machine learning and deep learning, grapple with formidable challenges arising from uneven pixel ranges and dimensionality reduction, which significantly impedes accurate image categorization. Many such traditional methods, including KNN, Random Forest, SVM, DNN, and CNN, have encountered persistent issues such as inefficient feature engineering and limited accuracy. In response to these challenges, this paper introduces a novel image classification method that integrates pixel mapping, DWT, and CNN for improved efficiency and reliability. By resolving irregular pixel ranges through initial pixel mapping, our method establishes uniformity as a foundation for subsequent image analysis. Subsequently, DWT is employed to dissect and reduce image dimensionality, extracting essential features while lowering computational complexity. This two-step preprocessing approach forms a robust foundation for effective data classification. Within this framework, our proposed CNN architecture plays a pivotal role, utilizing both spectral and spatial information to address image categorization challenges. The network's capacity to learn complex patterns enhances classification accuracy. In extensive evaluations, our methodology surpasses conventional classification techniques: with an Overall Accuracy (OA) of 96.9% and a Kappa statistic of 95.16%, our method showcases excellence and practical potential. These compelling achievements underscore the significance of our approach in tackling image classification challenges, paving the way for enhanced precision and efficiency across various domains.
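The dimensionality-reduction step the abstract attributes to the DWT can be illustrated with a minimal NumPy sketch of the one-level Haar approximation (LL) subband, which halves each spatial dimension. The image here is a made-up example, and this is only the LL branch of a full DWT.

```python
import numpy as np

def haar_ll(img):
    """One-level Haar DWT approximation (LL) subband.

    For the orthonormal Haar basis, the LL coefficient of each 2x2 block
    is the block sum divided by 2; the output halves each dimension,
    which is the dimensionality reduction described in the abstract.
    """
    h, w = img.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    blocks = img.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3)) / 2.0

img = np.arange(16, dtype=float).reshape(4, 4)
ll = haar_ll(img)   # shape (2, 2): a quarter of the original pixels
```

A real pipeline would typically use a wavelet library (e.g. PyWavelets' `pywt.dwt2`), which also returns the detail subbands.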
... The pooling layer is used to reduce the spatial dimensions of the image without discarding the information it contains [32]. The fully connected layer performs the classification based on the information obtained from the previous layers, so that the appropriate class can be determined [33]. Finally, the output layer displays the results of the preceding process [34]. ...
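As a rough illustration of the pooling layer described above, here is a minimal NumPy sketch of 2x2 max pooling on a small hypothetical feature map: the spatial dimensions halve while the strongest activation in each window is kept.

```python
import numpy as np

def max_pool2x2(x):
    """2x2 max pooling with stride 2: halves each spatial dimension
    while retaining the strongest activation in every window."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A made-up 4x4 feature map, as might come out of a convolution layer.
feature_map = np.array([[1, 3, 2, 0],
                        [4, 2, 1, 1],
                        [0, 1, 5, 6],
                        [2, 2, 7, 8]], dtype=float)
pooled = max_pool2x2(feature_map)   # 2x2 result
```

In a real CNN the pooled maps would then be flattened and fed to the fully connected layer for classification.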
Article
Flowers come in many types and are often found all around us, and because of this variety it can be difficult to distinguish one type of flower from another. This study therefore discusses the identification and classification of five flower types: daisy, dandelion, rose, sunflower, and tulip. The data used in this research consist of 764 daisy images, 1052 dandelion images, 784 rose images, 733 sunflower images, and 984 tulip images. The total set is divided into 60% training data, 30% testing data, and 10% validation data, used to train and evaluate the CNN models. The classification process uses transfer learning with the DenseNet and NASNetLarge CNN architectures, which are then compared to find which is best for classifying flower types. After testing, the flower classification process achieved a test accuracy of 89% with the DenseNet architecture and 86% with the NASNetLarge architecture.
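The 60/30/10 split described above can be sketched in NumPy; the counts below simply follow the image totals given in the abstract, and the function name is illustrative.

```python
import numpy as np

def split_indices(n, seed=0):
    """Shuffle n sample indices and split them 60% train / 30% test /
    10% validation, matching the proportions in the abstract."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * 0.6)
    n_test = int(n * 0.3)
    return (idx[:n_train],
            idx[n_train:n_train + n_test],
            idx[n_train + n_test:])

# The flower dataset totals 764 + 1052 + 784 + 733 + 984 = 4317 images.
train, test, val = split_indices(4317)
```

The three index arrays are disjoint and cover every sample, so each image lands in exactly one of the splits.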
... The authors incorporated local and global features with lower weights in their framework by using small kernels, resulting in an accuracy score of 96%. Sejuti et al. [12] presented a CNN-SVM-based method to identify and classify brain tumors in MRI images and attained 97.1% accuracy, whereas Abiwinanda et al. [13] designed five different CNN frameworks to detect tumors in the brain. ...
... However, it is worth mentioning that relatively basic CNNs cannot extract complex high-level features, leading to mediocre overall accuracy. For this reason, the CNNs in [11][12][13] performed poorly because of their very simple architectural designs. ...
Article
Full-text available
One of the most severe types of cancer, caused by the uncontrollable proliferation of brain cells inside the skull, is the brain tumor. Hence, a fast and accurate tumor detection method is critical for the patient's health. Many automated artificial intelligence (AI) methods have recently been developed to diagnose tumors; these approaches, however, yield poor performance, so there is a need for an efficient technique that performs precise diagnoses. This paper suggests a novel approach for brain tumor detection via an ensemble of deep and hand-crafted feature vectors (FV). The novel FV is an ensemble of hand-crafted features based on the GLCM (gray-level co-occurrence matrix) and deep features based on VGG16. The novel FV contains more robust features than the independent vectors, which improves the suggested method's discriminating capabilities. The proposed FV is then classified using support vector machine (SVM) and k-nearest neighbor (KNN) classifiers. The framework achieved the highest accuracy of 99% on the ensemble FV. The results indicate the reliability and efficacy of the proposed methodology; hence, radiologists can use it to detect brain tumors through MRI (magnetic resonance imaging). They also show the robustness of the proposed method, which can be deployed in a real environment to detect brain tumors from MRI images accurately. In addition, the performance of our model was validated via cross-tabulated data.
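As a hedged sketch of the feature-ensemble idea, the snippet below computes two classic GLCM statistics (contrast and energy) from a horizontal co-occurrence matrix and concatenates them with a random stand-in for the VGG16 deep features. It illustrates the fusion step only, not the authors' implementation.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Hand-crafted texture features from a gray-level co-occurrence
    matrix built over horizontal neighbor pairs."""
    glcm = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()                       # normalize to probabilities
    r, c = np.indices((levels, levels))
    contrast = ((r - c) ** 2 * glcm).sum()   # local intensity variation
    energy = (glcm ** 2).sum()               # textural uniformity
    return np.array([contrast, energy])

rng = np.random.default_rng(0)
img = rng.integers(0, 8, (32, 32))    # stand-in grayscale MRI patch
deep = rng.normal(size=128)           # random stand-in for VGG16 features
fused = np.concatenate([glcm_features(img), deep])  # single ensemble FV
```

The fused vector would then be passed to an SVM or KNN classifier, as described in the abstract; in practice scikit-image's `graycomatrix`/`graycoprops` offer a fuller GLCM implementation.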
... SNN-based methods [19][20][21] use spiking neural networks [22] to process input event streams asynchronously; however, they are difficult to train because of the lack of efficient back-propagation [23] algorithms. Existing CNN-based methods [24,25] transform asynchronous event data into fixed-rate frame-like representations and feed them into standard deep neural networks [26]. The fixed time resolution of the event frames leads to a loss of information in other spatial or temporal dimensions. ...
Article
Full-text available
Sign language recognition has been utilized in human–machine interactions, improving the lives of people with speech impairments or who rely on nonverbal instructions. Thanks to its higher temporal resolution, lower visual redundancy and lower energy consumption, the use of an event camera with a new dynamic vision sensor (DVS) shows promise for sign language recognition with robot perception and intelligent control. Although previous work has focused on simple event-camera gesture datasets, such as DVS128Gesture, event-camera gesture datasets inspired by sign language are critical and remain scarce, which poses a great impediment to the development of event-camera-based sign language recognition; an effective method to extract spatio-temporal features from event data is also greatly desired. Firstly, event-based sign language gesture datasets are proposed, with data from two sources: traditional sign language videos converted to event streams (DVS_Sign_v2e) and DAVIS346 recordings (DVS_Sign). In the present dataset, data are divided into five classes (verbs, quantifiers, position, things and people), adapting to actual scenarios where robots provide instruction or assistance. Sign language classification is demonstrated in spiking neural networks with a spatio-temporal back-propagation training method, reaching a best recognition accuracy of 77%. This work paves the way for combining event-camera-based sign language gesture recognition with robotic perception in future intelligent systems.
... These issues highlight the importance of developing a fully automated, machine-learning-based brain tumor categorization. The CNN design is based on a deep learning model, a neural network that is particularly good at image recognition and classification [10,11]. The goal of this study is to create a fully self-contained PDCNN model for brain tumor categorization using publicly available Kaggle and Figshare datasets [9,12,13]. ...
Article
Full-text available
Convolutional neural networks (CNNs) are widely used to classify brain tumors with high accuracy. Since a CNN collects features without distinguishing local from global features and is prone to overfitting, this research proposes a novel parallel deep convolutional neural network (PDCNN) topology that extracts both global and local features from two parallel stages and addresses the overfitting problem by utilizing a dropout regularizer alongside batch normalization. To begin, input images are resized and grayscale transformation is applied, which helps to reduce complexity. After that, data augmentation is used to enlarge the dataset. The benefits of parallel pathways are obtained by combining two simultaneous deep convolutional neural networks with two different window sizes, allowing this model to learn both local and global information. Three MRI datasets are used to determine the effectiveness of the proposed method: the binary tumor identification dataset-I, the Figshare dataset-II, and the multiclass Kaggle dataset-III yield accuracies of 97.33%, 97.60%, and 98.12%, respectively. The proposed structure is not only accurate but also efficient, as the method extracts both low-level and high-level features, improving results compared to state-of-the-art techniques.
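The parallel two-window idea can be illustrated with a minimal NumPy sketch: the same input passes through a 3x3 and a 5x5 "valid" convolution (random kernels as stand-ins for learned filters) and the flattened outputs are concatenated. This is a single-filter toy, not the PDCNN architecture itself.

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))                       # toy grayscale input
small = conv2d_valid(img, rng.normal(size=(3, 3)))    # local-detail branch
large = conv2d_valid(img, rng.normal(size=(5, 5)))    # wider-context branch
merged = np.concatenate([small.ravel(), large.ravel()])
```

In the actual model, each branch would be a full deep CNN with dropout and batch normalization, and the merged representation would feed the classifier head.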
... The model described is known as ANN-SVM because it incorporates numerous ANNs and one SVM. The paper in [17] combines a CNN and an SVM for image classification of brain tumours, with an accuracy of 97.1%; the SVM is used to improve the accuracy of the proposed model using the features extracted from the CNN model. ...
Chapter
This work focuses on hybrid models for classifying ultrasound scan planes in the detection of congenital heart abnormalities. The key elements of both a deep learning model and a machine learning model are combined in this hybrid: the deep learning model serves as a feature extractor, while the machine learning model serves as a binary classifier. Classification of the fetal cardiac ultrasound scan plane plays an important role in the detection of CHD, as scan planes such as the 3 Vessel View (3VV), 4-Chamber View (4CV), and 3 Vessel Tracheal (3VT) are useful for detecting fetal heart abnormalities during the 18 to 24 week gestational period. In this paper, a Convolutional Neural Network (CNN) combined with eXtreme Gradient Boosting (XGBoost) is used to classify the fetal ultrasound scan planes. The proposed hybrid CNN + XGBoost model achieved a test accuracy of 98.65% on a custom dataset, showing that hybrid model strategies outperform more standard deep learning and machine learning techniques. Keywords: Convolutional neural network; XGBoost classifier; Ultrasound images
... [8] described a novel medical image ID approach based on a combination of high-level deep features and a few textural properties. [9] investigated and combined a CNN and an SVM to classify a big dataset of 3064 brain tumor images. The CNN had 19 layers and was used to extract the features, while the SVM was employed to classify the three classes of brain tumors; the maximum accuracy of this robust CNN-based MRI brain tumor classification approach reached 97.1%. ...
... The CNN-SVM hybrid model of [1] may also be used for classifying EEG signals, as shown in [3]. Transfer learning can be used to retrain pre-trained models on smaller datasets. ...
Conference Paper
A brain tumor is a cancerous growth that may occur in the brain, and early diagnosis of the disease is crucial for proper treatment. Diagnosis of brain tumors is usually done using images obtained through magnetic resonance imaging (MRI). MRI images can be classified using a Convolutional Neural Network (CNN), a deep learning technique well suited to classifying large image datasets. The Support Vector Machine (SVM) is a machine learning technique predominantly used for classification and various regression problems. In this paper, we classified brain MRI images using pre-trained models such as AlexNet, VGG16, InceptionV3, and ResNet50. Finally, a CNN model and an SVM model are trained on the same dataset, and using the results thus obtained, a hybrid CNN-SVM model is built to achieve better accuracy and prediction results.
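A hedged sketch of the hybrid pattern follows, with PCA standing in for the CNN's penultimate-layer features and scikit-learn's digits dataset standing in for brain MRIs; it shows only the extractor-plus-classifier structure, not the paper's actual models.

```python
# Hybrid idea: a feature extractor feeds a classical SVM classifier.
# PCA here is a stand-in for CNN penultimate-layer activations, and the
# digits dataset is a stand-in for MRI images.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Extractor (PCA) and classifier (SVM) chained into one model.
hybrid = make_pipeline(PCA(n_components=32), SVC(kernel="rbf"))
hybrid.fit(X_tr, y_tr)
score = hybrid.score(X_te, y_te)
```

Swapping the PCA step for activations taken from a pre-trained CNN (e.g. AlexNet or ResNet50, as named above) gives the CNN-SVM hybrid the paper describes.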
... Reported accuracies from the comparison table:
Anarkari et al. [30], CNN: 94.2%
Afshar et al. [32], CapsNet: 90.8%
Sejuti and Islam [50], CNN + SVM: 97.1%
Kang et al. [8], DenseNet169 + Inception-v3 + ResNeXt50: 98.5%
Ari et al. [23], AlexNet + VGG16: 96.6%
Proposed, AlexNet + GoogLeNet + ResNet18: 99.7% ...
Article
Full-text available
Brain tumors are difficult to treat and cause substantial fatalities worldwide. Medical professionals visually analyze the images and mark out the tumor regions to identify brain tumors, which is time-consuming and prone to error. Researchers have proposed automated methods in recent years to detect brain tumors early. These approaches, however, encounter difficulties due to their low accuracy and large false-positive values. An efficient tumor identification and classification approach is required to extract robust features and perform accurate disease classification. This paper proposes a novel multiclass brain tumor classification method based on deep feature fusion. The MR images are preprocessed using min-max normalization, and then extensive data augmentation is applied to MR images to overcome the lack of data problem. The deep CNN features obtained from transfer learned architectures such as AlexNet, GoogLeNet, and ResNet18 are fused to build a single feature vector and then loaded into Support Vector Machine (SVM) and K-nearest neighbor (KNN) to predict the final output. The novel feature vector contains more information than the independent vectors, boosting the proposed method’s classification performance. The proposed framework is trained and evaluated on 15,320 Magnetic Resonance Images (MRIs). The study shows that the fused feature vector performs better than the individual vectors. Moreover, the proposed technique performed better than the existing systems and achieved accuracy of 99.7%; hence, it can be used in clinical setup to classify brain tumors from MRIs.