Schematic diagram of artificial intelligence, machine learning, and deep learning.

Source publication
Article
Full-text available
Breast cancer is the most frequently diagnosed cancer in women; it poses a serious threat to women’s health. Thus, early detection and proper treatment can improve patient prognosis. Breast ultrasound is one of the most commonly used modalities for diagnosing and detecting breast cancer in clinical practice. Deep learning technology has made signif...

Contexts in source publication

Context 1
... is a broad area of computer science related to building smart machines that can perform tasks that normally require human intelligence [22]. Machine learning is a term introduced by Arthur Samuel in 1959 to describe one of the technologies of AI (Figure 1). Machine learning provides a system with the ability to automatically learn and improve from experience without explicit programming. ...
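The "learn from experience without explicit programming" idea can be shown in miniature: instead of hard-coding a decision rule, the rule's parameter is estimated from labeled examples. A toy Python sketch with made-up numbers:

```python
# Minimal illustration (hypothetical data): instead of hard-coding a decision
# rule, the learner estimates one from labeled examples.

def fit_threshold(values, labels):
    """Learn the midpoint between the two class means as a decision threshold."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(value, threshold):
    return 1 if value >= threshold else 0

# Toy "experience": feature values with known labels
values = [1.0, 1.2, 0.9, 3.0, 3.2, 2.9]
labels = [0, 0, 0, 1, 1, 1]
t = fit_threshold(values, labels)   # learned from data, not hand-coded
print(predict(2.8, t))              # -> 1
```

The decision boundary here comes entirely from the data; changing the examples changes the rule without touching the code.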
Context 3
... classic machine learning, expert humans discern and encode features that appear distinctive in the data, and statistical techniques are used to organize or segregate the data based on these features [22,23]. Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning (Figure 1). It can make flexible decisions according to the situation by learning a large amount of data and automatically extracting common feature quantities [23,24]. ...
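The classic-ML workflow described here (human-designed features, then a statistical rule that separates the classes) can be sketched as follows; the features and toy "images" are hypothetical, and in deep learning these features would instead be learned from data:

```python
# Sketch of the classic machine-learning workflow: an expert encodes the
# features, and a simple statistical rule (nearest centroid) separates classes.
# The toy 2x2 "images" and centroids below are hypothetical.

def handcrafted_features(image):
    """Features an expert might encode: mean intensity and intensity range."""
    flat = [p for row in image for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))

def nearest_centroid(feats, centroids):
    """Assign to the class whose feature centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(feats, centroids[c]))

bright = [[0.9, 0.8], [0.85, 0.95]]
dark = [[0.1, 0.2], [0.15, 0.05]]
centroids = {"lesion": (0.875, 0.15), "background": (0.125, 0.15)}
print(nearest_centroid(handcrafted_features(bright), centroids))  # -> lesion
```

A deep network would replace `handcrafted_features` with layers whose feature quantities are extracted automatically from a large amount of data.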
Context 4
... learning has dramatically improved research in areas such as speech recognition, visual image recognition, and object detection [24]. In image processing, the deep learning architecture called ...

Citations

... Deep learning methods, especially CNNs, have shown impressive achievements in diverse medical imaging tasks, such as image segmentation, classification, and detection [2,[21][22][23]. Researchers have applied CNNs to analyze breast ultrasound images to detect abnormalities and tumors [24][25][26]. Studies such as [27][28][29][30] explored different architectures and attention mechanisms to improve the performance of tumor segmentation in breast ultrasound images. ...
Article
Full-text available
Breast cancer remains a critical global concern, underscoring the urgent need for early detection and accurate diagnosis to improve survival rates among women. Recent developments in deep learning have shown promising potential for computer-aided detection (CAD) systems to address this challenge. In this study, a novel segmentation method based on deep learning is designed to detect tumors in breast ultrasound images. Our proposed approach combines two powerful attention mechanisms: the novel Positional Convolutional Block Attention Module (PCBAM) and Shifted Window Attention (SWA), integrated into a Residual U-Net model. The PCBAM enhances the Convolutional Block Attention Module (CBAM) by incorporating the Positional Attention Module (PAM), thereby improving the contextual information captured by CBAM and enhancing the model’s ability to capture spatial relationships within local features. Additionally, we employ SWA within the bottleneck layer of the Residual U-Net to further enhance the model’s performance. To evaluate our approach, we perform experiments using two widely used datasets of breast ultrasound images, and the obtained results demonstrate its capability to detect tumors accurately. Our approach achieves state-of-the-art performance, with Dice scores of 74.23% and 78.58% on the BUSI and UDIAT datasets, respectively, in segmenting the breast tumor region, showcasing its potential for precise tumor detection. By leveraging the power of deep learning and integrating innovative attention mechanisms, our study contributes to the ongoing efforts to improve breast cancer detection and ultimately enhance women’s survival rates. The source code of our work can be found here: https://github.com/AyushRoy2001/DAUNet.
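The Dice score reported above is a standard overlap measure between a predicted segmentation mask and the ground-truth mask; a minimal sketch over binary masks:

```python
# Dice coefficient: 2*|intersection| / (|prediction| + |truth|), computed here
# over flattened binary masks. The small eps guards the empty-mask case.

def dice_score(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks (flattened lists of 0/1)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * inter + eps) / (sum(pred) + sum(truth) + eps)

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
# intersection = 2, |pred| = 3, |truth| = 3 -> Dice = 4/6
print(round(dice_score(pred, truth), 3))  # -> 0.667
```

A Dice score of 74.23% thus means the predicted tumor region overlaps the annotated region to that degree, which is stricter than plain pixel accuracy on small lesions.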
... Over a decade ago, the first computer-aided diagnosis (CAD) systems were proposed to assist radiologists in the interpretation of breast ultrasound exams (12). AI can use deep learning algorithms to analyze and learn from a large number of breast ultrasound images, interpret structural observations and pathological features accurately, and assist physicians in diagnosis (13,14). This can further enable physicians to quickly and accurately determine the nature of tumors, thereby enhancing diagnostic efficiency, providing more accurate diagnostic results and aiding in the detection of early breast cancer lesions that might have gone unnoticed. ...
Article
Full-text available
Background Accurate classification of breast nodules into benign and malignant types is critical for the successful treatment of breast cancer. Traditional methods rely on subjective interpretation, which can potentially lead to diagnostic errors. Artificial intelligence (AI)-based methods using the quantitative morphological analysis of ultrasound images have been explored for the automated and reliable classification of breast cancer. This study aimed to investigate the effectiveness of AI-based approaches for improving diagnostic accuracy and patient outcomes. Methods In this study, a quantitative analysis approach was adopted, with a focus on five critical features for evaluation: degree of boundary regularity, clarity of boundaries, echo intensity, and uniformity of echoes. Furthermore, the classification results were assessed using five machine learning methods: logistic regression (LR), support vector machine (SVM), decision tree (DT), naive Bayes, and K-nearest neighbor (KNN). Based on these assessments, a multifeature combined prediction model was established. Results We evaluated the performance of our classification model by quantifying various features of the ultrasound images and using the area under the receiver operating characteristic (ROC) curve (AUC). The moment of inertia achieved an AUC value of 0.793, while the variance and mean of breast nodule areas achieved AUC values of 0.725 and 0.772, respectively. The convexity and concavity achieved AUC values of 0.988 and 0.987, respectively. Additionally, we conducted a joint analysis of multiple features after normalization, achieving a recall value of 0.98, which surpasses most medical evaluation indexes on the market. To ensure experimental rigor, we conducted cross-validation experiments, which yielded no significant differences among the classifiers under 5-, 8-, and 10-fold cross-validation (P>0.05). 
Conclusions The quantitative analysis can accurately differentiate between benign and malignant breast nodules.
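The AUC values reported above can be understood through the rank interpretation of AUC: the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign one. A minimal sketch with toy scores:

```python
# AUC via its rank interpretation: the fraction of (positive, negative) pairs
# that the scores order correctly, counting ties as half (equivalent to the
# Mann-Whitney U statistic). Toy scores and labels for illustration only.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.6, 0.2]
labels = [1, 1, 0, 1, 0]
print(auc(scores, labels))  # -> 1.0 (all positives score above all negatives)
```

Under this reading, the reported AUC of 0.988 for convexity means that feature almost always ranks a malignant nodule above a benign one.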
... While the text highlights challenges in selecting and tuning AI model architectures, considering technical constraints and multivariable data complexity, it emphasizes the need for enhanced collaboration among stakeholders. [81][82][83][84] The involvement of legal experts in these teams is pivotal in navigating the legal landscape, ensuring compliance, and offering valuable insights for risk management. Their presence significantly enhances the efficiency and robustness of the process, preempting and addressing legal challenges proactively. ...
... Furthermore, the training database can be reduced by focusing on transfer learning techniques, which minimize the time spent in the processing phase through knowledge transfer. Transfer learning in the medical context thus offers a pragmatic approach to overcoming the limitations of a medical dataset that is small, imbalanced, or lacking in diversity [8][9][10][11]. ...
... The selected architectures, VGG16, VGG19, and Inception V3, have demonstrated success in various computer vision applications, particularly on breast cancer data, providing a strong foundation for our study [7][8][9][10][11][12]. They are widely recognized, well-established CNN architectures with proven effectiveness in medical image classification, particularly for breast cancer [8][9][10][11][12][13][14]. Pre-trained on large datasets, these models are suitable candidates for transfer learning. ...
Article
Full-text available
Breast cancer is still a primary cause of women’s mortality. It is necessary to detect micro-neoplasm signs in order to assess the presence or absence of malignancy precisely. In this context, transfer learning is an efficient resolution to the difficulties of training deep networks on small medical datasets, which can be computationally expensive and time-consuming. By leveraging prior knowledge gained from generic data (ImageNet), transfer learning enables the formation of more performant, faster, and more generalizable models for the classification of ultrasound breast cancer data. Transfer learning overcomes the unsatisfactory performance of traditional approaches, benefitting from the knowledge encoded in pre-trained models and significantly reducing the training time and resource requirements for the explored task, resulting in an accurate convolutional neural network design. In this work, we adopt this paradigm to extend the abilities of three convolutional neural network architectures into six models to classify tumoral ultrasound breast data. This paper proposes an ultrasound breast tumor classification for computer-aided diagnosis using new adaptive pre-trained convolutional neural network models through fine-tuned VGG19, VGG16, and Inception V3, applied to two public datasets: the Egyptian ultrasound breast cancer dataset BUSI and the Spanish US dataset, to discriminate mammary neoplasms in two distinct populations. This adaptation is achieved by refining the cited neural architectures with partial and total fine-tuning in the training phase, substituting the fully connected layer with several new layers, and adding layers as regularization for the model’s stability.
The attained results show that Inception V3 is the most efficient model, with an accuracy of 98.73%, an F1-score of 98.54%, a global sensitivity of 98.22%, and a total specificity of 98.23% in distinguishing malignant versus benign pathologies and normal tissues in BUSI data, compared to VGG16 (accuracy of 97.38% and F1-score of 97%). The best scores obtained on US data with Inception V3 for binary classification (malignant and benign lesions) are an accuracy of 98%, a specificity of 98.2%, a sensitivity of 98%, and an F1-score of 98.24%.
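The partial fine-tuning described above (pretrained layers kept fixed while newly added layers are trained) can be illustrated in miniature; this toy regression is only a sketch of the freezing idea, not the paper's actual VGG/Inception code:

```python
# Miniature sketch of partial fine-tuning: a "pretrained" weight is frozen
# while only the newly added head weight is updated by gradient descent.
# All numbers are hypothetical; real fine-tuning freezes whole conv blocks.

def train_head(data, w_pretrained, epochs=200, lr=0.02):
    w_head = 0.0                       # new layer, freshly initialized
    for _ in range(epochs):
        for x, y in data:
            hidden = w_pretrained * x  # frozen feature extractor
            pred = w_head * hidden     # trainable head
            grad = 2 * (pred - y) * hidden
            w_head -= lr * grad        # only the head is updated
    return w_head

# Toy task: the target is 6*x and the frozen layer computes 2*x,
# so the head should learn a weight of roughly 3.
data = [(1.0, 6.0), (2.0, 12.0), (0.5, 3.0)]
w = train_head(data, w_pretrained=2.0)
print(round(w, 2))  # -> 3.0
```

Freezing the early layers preserves the generic ImageNet-style features while the small medical dataset only has to fit the new head, which is what keeps training cheap and stable.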
... These features captured vital patterns and characteristics within the segmented regions, facilitating classification in the later part of the problem. By synergizing the spatial information from segmentation with the semantic information from feature extraction, this approach produced more precise and meaningful insights from breast ultrasound data [32]. ...
Preprint
Full-text available
Background Breast cancer remains a significant global health challenge, demanding accurate and effective diagnostic methods for timely treatment. Ultrasound imaging stands out as a valuable diagnostic tool for breast cancer due to its affordability, accessibility, and non-ionizing radiation properties. Methods We evaluate the proposed method using a publicly available breast ultrasound image dataset. This paper introduces a novel approach to classifying breast ultrasound images based on a segmentation and feature extraction algorithm. The proposed methodology involves several key steps. First, breast ultrasound images undergo preprocessing to enhance image quality and eliminate potential noise. Subsequently, a U-Net++ is applied for segmentation. A classification model is then trained and validated after extracting features from the segmented images using MobileNetV2 and InceptionV3. This model utilizes modern machine learning and deep learning techniques to distinguish between malignant and benign breast masses. Classification performance is assessed using quantitative metrics, including recall, precision, and accuracy. Our results demonstrate improved precision and consistency compared to classification approaches that do not incorporate segmentation and feature extraction. Feature extraction using InceptionV3 and MobileNetV2 showed high accuracy, with MobileNetV2 outperforming InceptionV3 across various classifiers. Results The ANN classifier, when used with MobileNetV2, demonstrated a significant increase in test accuracy (0.9658) compared to InceptionV3 (0.7280). In summary, our findings suggest that the integration of segmentation techniques and feature extraction has the potential to enhance classification algorithms for breast cancer ultrasound images. Conclusion This approach holds promise for supporting radiologists, enhancing diagnostic accuracy, and ultimately improving outcomes for breast cancer patients. In future work, our focus will be on using comprehensive datasets to validate our methodology.
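The segment-then-extract-then-classify pipeline described above can be sketched with stand-in functions; the placeholders below are hypothetical substitutes for U-Net++, MobileNetV2/InceptionV3, and the trained classifier, using toy thresholds and data:

```python
# Sketch of the two-stage pipeline: segment the mass, extract features from
# the segmented region, then classify. Every function body here is a toy
# stand-in for the real models named in the abstract.

def segment(image, threshold=0.5):
    """Stand-in for U-Net++: returns a binary mask of the suspected mass."""
    return [[1 if p > threshold else 0 for p in row] for row in image]

def extract_features(image, mask):
    """Stand-in for a CNN feature extractor: statistics over the masked region."""
    region = [p for row_i, row_m in zip(image, mask)
              for p, m in zip(row_i, row_m) if m]
    if not region:
        return (0.0, 0.0)
    return (sum(region) / len(region), len(region) / sum(map(len, image)))

def classify(features, cutoff=0.3):
    """Stand-in for the trained classifier: benign vs malignant."""
    mean_intensity, area_fraction = features
    return "malignant" if area_fraction > cutoff else "benign"

image = [[0.1, 0.8, 0.9], [0.2, 0.7, 0.1], [0.1, 0.1, 0.1]]
mask = segment(image)
print(classify(extract_features(image, mask)))  # -> malignant
```

The design point the abstract makes is visible even in this sketch: the classifier only ever sees features computed inside the segmented region, so segmentation quality directly bounds classification quality.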
... Studies which have addressed abnormality classification on a single modality have often considered magnetic resonance imaging (MRI) 2 , digital mammography and ultrasound technology 3 , mammography 4-7 , contrast-enhanced mammography 8 , digital tomosynthesis 9 , sonography 10 , sonoelastography 11,12 , magnetic elastography, diffusion-weighted imaging 13 , magnetic spectroscopy, nuclear medicine 14,15 , image-guided breast biopsy [16][17][18] , optical imaging 19,20 , and microwave imaging 21 . The unimodal approach to breast cancer detection is limited by the insufficient information it uses for diagnosis. ...
Article
Full-text available
There is a wide application of deep learning techniques to unimodal medical image analysis, with significant classification accuracy performance observed. However, real-world diagnosis of some chronic diseases such as breast cancer often requires multimodal data streams with different modalities of visual and textual content. Mammography, magnetic resonance imaging (MRI), and image-guided breast biopsy represent a few of the multimodal visual streams considered by physicians in isolating cases of breast cancer. Unfortunately, most studies applying deep learning techniques to classification problems in digital breast images have narrowed their scope to unimodal samples. This is understandable considering the challenging nature of multimodal image abnormality classification, where the fusion of high-dimension heterogeneous learned features must be projected into a common representation space. This paper presents a novel deep learning approach combining a dual/twin convolutional neural network (TwinCNN) framework to address the challenge of breast cancer image classification from multiple modalities. First, modality-based feature learning is achieved by extracting both low- and high-level features using the networks embedded in TwinCNN. Secondly, to address the notorious problem of high dimensionality associated with the extracted features, a binary optimization method is adapted to effectively eliminate non-discriminant features in the search space. Furthermore, a novel method for feature fusion is applied to computationally leverage the ground-truth and predicted labels for each sample to enable multimodality classification. To evaluate the proposed method, digital mammography images and digital histopathology breast biopsy samples from the benchmark datasets MIAS and BreakHis, respectively, were used.
Experimental results showed that the classification accuracy and area under the curve (AUC) for the single modalities yielded 0.755 and 0.861871 for histology, and 0.791 and 0.638 for mammography. Furthermore, the study investigated the classification accuracy resulting from the fused-feature method, obtaining 0.977, 0.913, and 0.667 for histology, mammography, and multimodality, respectively. The findings confirmed that multimodal image classification based on the combination of image features and predicted labels improves performance. In addition, the study shows that feature dimensionality reduction based on a binary optimizer supports the elimination of non-discriminant features capable of bottlenecking the classifier.
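The binary-optimization feature selection described above amounts to searching over 0/1 masks of the fused feature vector and keeping only discriminant features; the brute-force search below is an illustrative stand-in for the paper's metaheuristic binary optimizer, with made-up data:

```python
# Sketch of binary feature selection: a 0/1 mask over the fused feature
# vector keeps discriminant features and drops the rest. Brute force over
# all masks replaces the paper's binary (metaheuristic) optimizer here.

from itertools import product

def separation(features, labels, mask):
    """Score a mask by class-mean separation over the selected features."""
    kept = [[f for f, m in zip(row, mask) if m] for row in features]
    if not kept[0]:
        return 0.0
    pos = [r for r, y in zip(kept, labels) if y == 1]
    neg = [r for r, y in zip(kept, labels) if y == 0]
    mean = lambda rows, j: sum(r[j] for r in rows) / len(rows)
    return sum(abs(mean(pos, j) - mean(neg, j)) for j in range(len(kept[0])))

# Fused feature vectors: column 1 is discriminant, columns 0 and 2 are noise.
features = [[0.5, 0.1, 0.4], [0.5, 0.2, 0.5], [0.5, 0.9, 0.4], [0.5, 0.8, 0.5]]
labels = [0, 0, 1, 1]
best = max(product([0, 1], repeat=3),
           key=lambda m: separation(features, labels, m) - 0.01 * sum(m))
print(best)  # -> (0, 1, 0): only the discriminant feature survives
```

The small penalty on mask size mirrors the dimensionality-reduction goal: among masks with equal separation, the one keeping fewer features wins.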
... Previous CAD systems generally relied on manually created visual information that posed difficulties in generalizing ultrasound images acquired from diverse techniques [42][43][44][45][46][47]. The development of artificial intelligence (AI) technology for the automated detection of breast cancers using ultrasound images has been aided by some recent breakthroughs [48][49][50]. Deep learning can have benefits for medical image analysis, including breast cancer [51]. CNN is a kind of deep learning that has several layer hierarchies and translates the pixels of an image into features. ...
Article
Full-text available
Introduction Breast cancer stands as the second most deadly form of cancer among women worldwide. Early diagnosis and treatment can significantly mitigate mortality rates. Purpose The study aims to classify breast ultrasound images into benign and malignant tumors. This approach involves segmenting the breast region of interest (ROI) employing an optimized UNet architecture and classifying the ROIs through an optimized shallow CNN model developed via an ablation study. Method Several image processing techniques are utilized to improve image quality by removing text, artifacts, and speckle noise, and statistical analysis is performed to verify that the enhanced image quality is satisfactory. With the processed dataset, segmentation of the breast tumor ROI is carried out, optimizing the UNet model through an ablation study in which the architectural configuration and hyperparameters are altered. After obtaining the tumor ROIs from the fine-tuned UNet model (RKO-UNet), an optimized CNN model is employed to classify the tumors into benign and malignant classes. To enhance the CNN model's performance, an ablation study is conducted, coupled with the integration of an attention unit. The model's performance is further assessed by classifying breast cancer with mammogram images. Result The proposed classification model (RKONet-13) achieves an accuracy of 98.41%. The performance of the proposed model is further compared with five transfer learning models for both pre-segmented and post-segmented datasets. K-fold cross-validation is done to assess the proposed RKONet-13 model's performance stability. Furthermore, the performance of the proposed model is compared with previous literature, where the proposed model outperforms existing methods, demonstrating its effectiveness in breast cancer diagnosis. Lastly, the model demonstrates its robustness for breast cancer classification, delivering an exceptional performance of 96.21% on a mammogram dataset.
Conclusion The efficacy of this study relies on image pre-processing, segmentation with hybrid attention UNet, and classification with fine-tuned robust CNN model. This comprehensive approach aims to determine an effective technique for detecting breast cancer within ultrasound images.
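The k-fold cross-validation used above to check performance stability can be sketched as a plain index-splitting routine: each sample serves in the test fold exactly once, and the folds differ in size by at most one.

```python
# K-fold cross-validation index splitter (toy sketch; libraries such as
# scikit-learn provide production versions of this routine).

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i not in set(test)]
        yield train, test
        start += size

folds = list(k_fold_indices(n=10, k=5))
print(len(folds))       # -> 5
print(folds[0][1])      # -> [0, 1]  (first test fold)
```

Averaging the model's score over the k held-out folds, as the study does, gives a stability estimate that a single train/test split cannot.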
... The development of efficient algorithms is more challenging due to the complex nature of multi-variable data essential for clinical reasoning. This data includes multi-modality imaging, multi-parametric protocols, clinical knowledge, and previous or contralateral examinations (n = 28) [29,44]. ...
Article
Full-text available
Objective Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. Method A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINHAL, Embase, IEEE, and ArXiv). The articles were included if some barriers and/or facilitators in the conception or implementation of AI in breast clinical imaging were described. We excluded research only focusing on performance, or with data not acquired in a clinical radiology setup and not involving real patients. Results A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). Conclusion This scoping review highlighted the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders to yield improvement in healthcare. Clinical relevance statement The identification of barriers and facilitators with suggested solutions can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. Key Points • Six major identified barriers were related to data; black-box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education. • Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education.
• Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
... In recent years, artificial intelligence (AI) has been gradually applied in breast imaging to improve workflows, perform automatic image segmentation, enable intelligent diagnosis, and predict ALN metastasis accurately (17)(18)(19). To avoid the complicated feature extraction process and extract more abundant information from the image, the deep learning (DL) method has been widely used in medical image research in recent years (20,21). ...
... The accurate detection of lymph node micro metastasis is essential for guiding surgical decision making, and adjuvant therapy. Recent advances in AI models, such as DL technology, have been applied to breast US imaging (18,39), and some studies have reported the effectiveness of DL models using a convolutional neural network (CNN) for the prediction of clinical ALN metastasis, with AUCs ranging from 0.72 to 0.89 (23,40). However, these studies were based solely on ultrasound static (single frame) images, which may have resulted in the loss of many subtle or important lesion features and even caused the neglect of small lesions, as reported previously. ...
Article
Full-text available
Objective To develop a deep learning (DL) model for predicting axillary lymph node (ALN) metastasis using dynamic ultrasound (US) videos in breast cancer patients. Methods A total of 271 US videos from 271 early breast cancer patients collected from Xiang’an Hospital of Xiamen University and Shantou Central Hospital between September 2019 and June 2021 were used as the training, validation, and internal testing set (testing set A). Additionally, an independent dataset of 49 US videos from 49 patients with breast cancer, collected from Shanghai 10th Hospital of Tongji University from July 2021 to May 2022, was used as an external testing set (testing set B). All ALN metastases were confirmed using pathological examination. Three different convolutional neural networks (CNNs) with R2 + 1D, TIN, and ResNet-3D architectures were used to build the models. The performance of the US video DL models was compared with that of US static image DL models and axillary US examination performed by ultrasonographers. The performances of the DL models and ultrasonographers were evaluated based on accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Additionally, gradient class activation mapping (Grad-CAM) technology was also used to enhance the interpretability of the models. Results Among the three US video DL models, TIN showed the best performance, achieving an AUC of 0.914 (95% CI: 0.843-0.985) in predicting ALN metastasis in testing set A. The model achieved an accuracy of 85.25% (52/61), with a sensitivity of 76.19% (16/21) and a specificity of 90.00% (36/40). The AUC of the US video DL model was superior to that of the US static image DL model (0.856, 95% CI: 0.753-0.959, P<0.05). The Grad-CAM technology confirmed the heatmap of the model, which highlighted important subregions of the keyframe for ultrasonographers’ review.
Conclusion A feasible and improved DL model to predict ALN metastasis from breast cancer US videos was developed. The DL model in this study, with reliable interpretability, would provide an early diagnostic strategy for the appropriate management of the axilla in early breast cancer patients.
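The accuracy, sensitivity, and specificity quoted in the abstract follow directly from the confusion counts it reports (52/61 correct, 16/21 true positives, 36/40 true negatives); a minimal sketch:

```python
# Classification metrics from confusion counts (TP, FN, TN, FP), using the
# counts given in the abstract above.

def metrics(tp, fn, tn, fp):
    return {
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

m = metrics(tp=16, fn=5, tn=36, fp=4)
print({k: round(v * 100, 2) for k, v in m.items()})
# -> {'accuracy': 85.25, 'sensitivity': 76.19, 'specificity': 90.0}
```

The trade-off is visible in the numbers: the model misses roughly a quarter of metastatic nodes (sensitivity 76.19%) while rarely flagging negative ones (specificity 90%).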
... Many studies concerning breast cancer detection, segmentation, and classification have emerged over the last decade [1,[8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][31][32][33][34][35][36][37][38][39][40][41][42]. Six samples are presented in the following six paragraphs. ...
... Fujioka T et al [18] have presented a review on using DL in breast ultrasound imaging. They discuss the current issues, applications, and future perspectives of DL technology in breast ultrasound imaging. ...