Figure 2 - uploaded by Gang Yu
Network configurations of the classifier. The dropout layer is denoted as dropout together with its probability setting. The kernel size of the average pooling layer is one-eighth the size of the feature map from the previous layer. The output of the average pooling layer is flattened and then connected to the fully connected layer for output.

Source publication
Article
Full-text available
Background: The successful recognition of benign and malignant breast nodules using ultrasound images is based mainly on supervised learning that requires a large number of labeled images. However, because high-quality labeling is expensive and time-consuming, we hypothesized that semi-supervised learning could provide a low-cost and powerful alte...

Contexts in source publication

Context 1
... bounding box's nodule image was scaled to 128×128 pixels and input to the subsequent classifier network. The classifier network was a multilayer convolutional neural network composed of three consecutive modules, where each module comprised several convolutional layers, a max pooling layer, and a dropout layer, as shown in Figure 2 and sketched below. The network output comprised the benign and malignant categories. ...
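As a rough illustration of this architecture, the following is a minimal PyTorch sketch. The number of convolutional layers per module, the channel widths, and the dropout probability are hypothetical placeholders, since the exact values appear only in Figure 2; the pooling head follows the caption (average pooling kernel of one-eighth the incoming feature map, flattened into a fully connected layer with two outputs).

import torch
import torch.nn as nn

class NoduleClassifier(nn.Module):
    # Hypothetical widths/dropout; the exact settings are those of Figure 2.
    def __init__(self, p_drop: float = 0.25):
        super().__init__()

        def module(c_in, c_out):
            # one module: convolution layers -> max pooling -> dropout
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Dropout2d(p_drop),
            )

        self.features = nn.Sequential(
            module(1, 32),    # 128x128 -> 64x64
            module(32, 64),   # 64x64  -> 32x32
            module(64, 128),  # 32x32  -> 16x16
        )
        # kernel is one-eighth of the incoming 16x16 feature map -> 2x2
        self.avgpool = nn.AvgPool2d(kernel_size=2)
        self.fc = nn.Linear(128 * 8 * 8, 2)  # benign vs. malignant

    def forward(self, x):
        x = self.features(x)           # (N, 128, 16, 16)
        x = self.avgpool(x)            # (N, 128, 8, 8)
        return self.fc(torch.flatten(x, 1))

logits = NoduleClassifier()(torch.randn(1, 1, 128, 128))  # shape (1, 2)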
Context 2
... SSL and SL were used to represent semi-supervised learning and supervised learning, respectively. Three versions of the classifier network (Figure 2) were obtained based on different training datasets and learning methods (SSL and SL). [Caption of Figure 1, interleaved in this excerpt: Network configurations of the Faster R-CNN, which was used for image preprocessing to find the breast nodule region in the ultrasound image. The convolutional layer is denoted as conv, kernel size, and the number of channels/stride size.] ...

Citations

... The unlabeled dataset is typically much larger than the labeled dataset, i.e., N ≫ n. Examples of this setting include medical applications requiring experts' annotations [33], [34] and engineering applications, such as wireless systems, in which labels may require the execution of costly optimizations on real-world data [2], [11], [16], [32]. ...
... In order to enable the training of the CKM function H = c(X), and thus of the model f(X) in (32), we assume access to a labeled dataset D_Z = {(X_i, Z_i)}_{i=1}^n containing pairs of UE locations X_i and the corresponding path information Z_i. Using the channel function (34) and the optimal beam index (28), one can recover the labeled data ...
Preprint
Full-text available
In many wireless application scenarios, acquiring labeled data can be prohibitively costly, requiring complex optimization processes or measurement campaigns. Semi-supervised learning leverages unlabeled samples to augment the available dataset by assigning synthetic labels obtained via machine learning (ML)-based predictions. However, treating the synthetic labels as true labels may yield worse-performing models as compared to models trained using only labeled data. Inspired by the recently developed prediction-powered inference (PPI) framework, this work investigates how to leverage the synthetic labels produced by an ML model, while accounting for the inherent bias with respect to true labels. To this end, we first review PPI and its recent extensions, namely tuned PPI and cross-prediction-powered inference (CPPI). Then, we introduce a novel variant of PPI, referred to as tuned CPPI, that provides CPPI with an additional degree of freedom in adapting to the quality of the ML-based labels. Finally, we showcase two applications of PPI-based techniques in wireless systems, namely beam alignment based on channel knowledge maps in millimeter-wave systems and received signal strength information-based indoor localization. Simulation results show the advantages of PPI-based techniques over conventional approaches that rely solely on labeled data or that apply standard pseudo-labeling strategies from semi-supervised learning. Furthermore, the proposed tuned CPPI method is observed to guarantee the best performance among all benchmark schemes, especially in the regime of limited labeled data.
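To make the debiasing idea concrete, here is a minimal NumPy sketch of the PPI estimator for the simplest case, mean estimation; the lam parameter plays the role of the tuning weight in the tuned variants (lam=1 recovers standard PPI, lam=0 the labeled-only estimate). This is an illustrative special case, not the paper's beam-alignment or localization pipeline.

import numpy as np

def ppi_mean(y_lab, f_lab, f_unlab, lam=1.0):
    # Prediction-powered estimate of E[Y]:
    # synthetic-label mean on the large unlabeled set, debiased by the
    # average residual (y - f) observed on the small labeled set.
    return lam * np.mean(f_unlab) + np.mean(np.asarray(y_lab) - lam * np.asarray(f_lab))

rng = np.random.default_rng(0)
y_unlab = rng.normal(1.0, 1.0, 10_000)   # true labels we never observe
y_lab = rng.normal(1.0, 1.0, 50)         # small labeled set
bias = 0.3                               # systematically biased predictor
est = ppi_mean(y_lab, y_lab + bias, y_unlab + bias)
print(est)  # close to 1.0: the labeled residuals cancel the prediction bias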
... The detection accuracy obtained was 86-88%. The detected nodule images were then used as input to a semi-supervised classifier model for better recognition [12]. ...
Conference Paper
Full-text available
The human brain, an unparalleled paradigm for efficient signal processing, fuels the quest to design brain-like architectures for addressing real-world challenges. Among brain-mimicking computational models, Spiking Neural Networks (SNNs) have emerged as notable in information processing due to their distinct resemblance to brain-inspired computing, with features that distinguish them from conventional methodologies. In this paper, we introduce an innovative surrogate gradient-based technique tailored for medical image classification. Specifically, our approach propels a gradient-based SNN framework with the explicit objective of achieving proficient classification of breast cancer images. Employing the SNN architecture, our results reached an accuracy level of 92.52%. The voltage-driven spike behavior and spike patterns were also examined to support the model's efficacy. This outcome highlights the latent potential of SNNs in achieving precise medical image classification. The framework we present signifies substantial advancement in comprehending the brain-inspired attributes of SNNs, paving the way for streamlined and accurate diagnostic solutions in medical imaging.
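The surrogate gradient idea from this abstract can be sketched briefly: the spike nonlinearity is a hard threshold in the forward pass, while the backward pass substitutes a smooth derivative so that gradients can flow. The specific surrogate the authors use is not stated in the abstract; the fast-sigmoid surrogate below is one common choice, shown as a PyTorch sketch.

import torch

class SurrogateSpike(torch.autograd.Function):
    # Heaviside spike in the forward pass, smooth surrogate in backward.
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid surrogate derivative: 1 / (1 + |v - th|)^2
        sg = 1.0 / (1.0 + torch.abs(v - ctx.threshold)) ** 2
        return grad_output * sg, None

v = torch.randn(8, requires_grad=True)   # membrane potentials
spikes = SurrogateSpike.apply(v)         # binary spikes, yet differentiable
spikes.sum().backward()                  # gradients flow via the surrogate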
... These articles also focused on data augmentation techniques and performance evaluation in the context of breast cancer dataset analysis. The achievements of the authors and a discussion comparing the findings are presented below: 3. Gao Y et al. [42] proposed a detection and recognition method for breast nodules based on SSL, which was trained on a small amount of labeled data. The proposed method's performance was as good as that of SL trained on a large number of nodules and was better than the accuracy of four out of five sonographers. ...
Preprint
Full-text available
Breast cancer is one of the leading causes of death among women worldwide, and early detection through medical imaging techniques is crucial for effective treatment. Deep learning models have shown promising results in medical image analysis tasks, but traditional data augmentation methods often do not preserve the accuracy of bounding box and segmentation mask annotations. To address this issue, a method is proposed for fine-tuning the coordinates of bounding boxes and segmentation masks under the cropping and rotation data augmentations applied to a breast cancer dataset. The method generates new images by applying cropping and rotation to the original images and adjusts the coordinates of the bounding box and segmentation mask to match each new image. Experiments conducted on a publicly available breast cancer dataset showed that the proposed method improved the accuracy of the bounding box and segmentation mask annotations while preserving the original information in the image. The proposed method is a promising approach to improving the accuracy of deep learning models for medical image analysis tasks. By dynamically adjusting the coordinates during augmentation, it can better preserve object shape and improve the accuracy of object detection and segmentation tasks. The approach can be easily integrated into existing data augmentation pipelines and has the potential to improve performance on a range of computer vision applications.
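The coordinate adjustment described above can be sketched for bounding boxes: under rotation, the four corners are rotated about the image center and the new axis-aligned box is their tight hull; under cropping, the box is clipped to the crop window and re-expressed in crop coordinates. This is a minimal NumPy sketch of that standard geometry, not the authors' exact fine-tuning procedure (segmentation masks can be handled analogously by transforming polygon vertices).

import numpy as np

def rotate_bbox(bbox, angle_deg, img_w, img_h):
    # Tight axis-aligned hull of the four corners after rotating the image
    # by angle_deg about its center. bbox = (x_min, y_min, x_max, y_max).
    x0, y0, x1, y1 = bbox
    cx, cy = img_w / 2.0, img_h / 2.0
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    corners = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]]) - (cx, cy)
    r = corners @ rot.T + (cx, cy)
    return (r[:, 0].min(), r[:, 1].min(), r[:, 0].max(), r[:, 1].max())

def crop_bbox(bbox, crop):
    # Clip to the crop window and shift into crop coordinates;
    # returns None when the box falls entirely outside the crop.
    x0, y0, x1, y1 = bbox
    cx0, cy0, cx1, cy1 = crop
    nx0, ny0 = max(x0, cx0) - cx0, max(y0, cy0) - cy0
    nx1, ny1 = min(x1, cx1) - cx0, min(y1, cy1) - cy0
    return (nx0, ny0, nx1, ny1) if nx0 < nx1 and ny0 < ny1 else None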
... Classifying the breast mass into benign and malignant, or BI-RADS categories, using ultrasound can help facilitate earlier decision-making in breast mass management. AlexNet [62][63][64], VGG [65,66], ResNet [62,63,65,[67][68][69][70], and Inception [62,65,69], including GoogleNet, Faster R-CNN [63,66,70], and generative adversarial networks [62,71], were mostly used for breast mass classification during this period (stated in Tables 1 and 2). AlexNet is composed of multiple convolutional layers, pooling layers, fully connected layers, and a softmax classifier [62][63][64]. ...
... AlexNet is composed of multiple convolutional layers, pooling layers, fully connected layers, and a softmax classifier [62][63][64]. VGG is composed of 16 or 19 weight layers, 3 × 3 convolutional filters, and max-pooling layers to extract features [65,66]. ResNet uses residual connections to learn residual mapping [62,63,65,[67][68][69][70]. ...
Article
Full-text available
Simple Summary: Breast cancer is one of the leading causes of cancer death among women. Ultrasound is a harmless imaging modality used to help make decisions about who should undergo biopsies and several aspects of breast cancer management. It shows high false positivity due to high operator dependency and has the potential to make overall breast mass management cost-effective. Deep learning, a variant of artificial intelligence, may be very useful for reducing the workload of ultrasound operators in resource-limited settings. These deep learning models have been tested for various aspects of the diagnosis of breast masses, but there is not enough research on their impact beyond diagnosis or on which methods of ultrasound have been used most. This article reviews current trends in research on various deep learning models for breast cancer management, including limitations and future directions for further research.

Abstract: Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but it shows increased false negativity due to its high operator dependency. Underserved areas do not have sufficient US expertise to diagnose breast lesions, resulting in delayed management of breast lesions. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing breast masses and monitoring their prognosis. This article reviews the recent research trends on neural networks for breast mass ultrasound, including and beyond diagnosis. We discuss original research recently conducted to analyze which modes of ultrasound and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance compared to other purposes. We also found that fewer studies were performed for prognosis than for diagnosis. Finally, we discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
... The combination of artificial intelligence (AI) with medical imaging had led to the development of a static AI ultrasound intelligent diagnosis system that not only ensures accuracy, but also improves diagnostic efficacy. This system has been widely used in clinical practice for prenatal examinations (10), cardiac ultrasound examinations (11), and the diagnosis of breast (12) and thyroid (13)(14)(15) nodules. However, only a single view of the nodule can be diagnosed by static AI, and the nature of the nodule cannot be judged in real time. ...
Article
Full-text available
Background: A dynamic artificial intelligence (AI) ultrasonic intelligent assistant diagnosis system (dynamic AI) is a joint application of AI technology and medical imaging that can conduct real-time, synchronous dynamic analysis of nodules from multiple sectional views at different angles. This study explored the diagnostic value of dynamic AI for benign and malignant thyroid nodules in patients with Hashimoto thyroiditis (HT) and its significance in guiding surgical treatment strategies. Methods: Data of 487 patients (154 with and 333 without HT) with 829 thyroid nodules who underwent surgery were collected. Differentiation of benign and malignant nodules was performed using dynamic AI, and the diagnostic effects (specificity, sensitivity, negative predictive value, positive predictive value, accuracy, misdiagnosis rate, and missed diagnosis rate) were assessed. Differences in diagnostic efficacy were compared among dynamic AI, preoperative ultrasound based on the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS), and fine needle aspiration cytology (FNAC) diagnoses. Results: The accuracy, specificity, and sensitivity of dynamic AI reached 88.06%, 80.19%, and 90.68%, respectively, and showed consistency with the postoperative pathological results (κ=0.690; P<0.001). The diagnostic efficacy of dynamic AI was equivalent between patients with and without HT, with no significant differences in sensitivity, specificity, accuracy, positive predictive value, negative predictive value, missed diagnosis rate, or misdiagnosis rate. In patients with HT, dynamic AI had significantly higher specificity and a lower misdiagnosis rate than preoperative ultrasound based on the ACR TI-RADS (P<0.05). Compared with FNAC diagnosis, dynamic AI had significantly higher sensitivity and a lower missed diagnosis rate (P<0.05). Conclusions: Dynamic AI demonstrated high diagnostic value for benign and malignant thyroid nodules in patients with HT and can provide a new method and valuable information for the diagnosis and management of these patients.
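For reference, the diagnostic indices reported above all derive from the 2x2 confusion matrix of AI calls against pathology. A short sketch of the standard definitions (the study's own computation is not shown; these formulas are the conventional ones):

def diagnostic_metrics(tp, fp, tn, fn):
    # tp/fn: malignant nodules called positive/negative;
    # tn/fp: benign nodules called negative/positive.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "missed_diagnosis_rate": fn / (tp + fn),  # 1 - sensitivity
        "misdiagnosis_rate": fp / (tn + fp),      # 1 - specificity
    }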
... Multiple Tasks. A complete CAD system often requires multiple functions, such as lesion detection and classification (Shin et al., 2018; Cao et al., 2019; Tanaka et al., 2019; Gao et al., 2021) or lesion segmentation and classification (Vigil et al., 2022; Ragab et al., 2022; Podda et al., 2022), either in a manner of sequential modeling or multi-task learning. ...
Preprint
Full-text available
Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in the deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. In this paper, we provide an extensive survey of deep learning-based breast cancer imaging research, covering studies on mammogram, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods, publicly available datasets, and applications on imaging-based screening, diagnosis, treatment response prediction, and prognosis are described in detail. Drawn from the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
... Numerous studies of artificial intelligence (AI) systems in breast US have reported high diagnostic performances, with area under the receiver operating characteristic (ROC) curve (AUROC) values of 0.84-0.98 (11)(12)(13)(14)(15)(16)(17)(18). ...
... (11)(12)(13)(14)(15). Recent studies using weakly supervised algorithms have reported noninferior or better diagnostic performances, with AUROC values of 0.86-0.98 (16)(17)(18). In a previous large study by Shen et al. (17), the AI system showed a higher diagnostic performance (AUROC 0.98) than did prior studies of AI systems. ...
... In this study, AI achieved a higher AUROC than the average of 10 breast radiologists and reduced radiologists' false-positive rates by 37.3%, while maintaining the same level of sensitivity. Gao et al. (18) reported that the semisupervised model can achieve similar performance to the fully supervised model for the detection of breast nodules on US. This semisupervised method could reduce the number of labeled images required for training, thereby alleviating the difficulty in data preparation of medical AI. ...
Article
Full-text available
Background: The aim of this study was to evaluate the diagnostic performance of a deep learning (DL) algorithm for breast masses smaller than 1 cm on ultrasonography (US). We also evaluated a hybrid model that combines the predictions of the DL algorithm from US images and a patient's clinical factors including age, family history of breast cancer, BRCA mutation, and mammographic breast density. Methods: A total of 1,041 US images (including 633 benign and 408 malignant masses) were obtained from 1,041 patients who underwent US between January 2014 and June 2021. All US images were randomly divided into training (513 benign and 288 malignant lesions), validation (60 benign and 60 malignant lesions), and test (60 benign and 60 malignant lesions) data sets. A mask region-based convolutional neural network (R-CNN) was used to generate a feature map of the input image with a CNN and a pre-trained ResNet101 structure. For the clinical model, the multilayer perceptron (MLP) structure was used to calculate the likelihood that the tumor was benign or malignant from the clinical risk factors. We compared the diagnostic performance of an image-based DL algorithm, a combined model with regression, and a combined model with the decision tree method. Results: Using the US images, the area under the receiver operating characteristics curve (AUROC) of the DL algorithm was 0.85 [95% confidence interval (CI), 0.78-0.92]. With the combined model using a regression model, the sensitivity was 78.3% (95% CI, 67.9-88.8%) and the specificity was 85% (95% CI, 76-94%). The sensitivity of the combined model using a regression model was significantly higher than that of the imaging model (P=0.003). The specificity values of the two models were not significantly different (P=0.083). The sensitivity and specificity of the combined model using a decision tree model were 75% (95% CI, 62.1-85.3%) and 91.7% (95% CI, 81.6-97.2%), respectively. The sensitivity of the combined model using the decision tree model was higher than that of the image model but the difference was not statistically significant (P=0.081). The specificity values of the two models were not significantly different (P=0.748). Conclusions: The DL model could feasibly be used to predict breast cancers smaller than 1 cm. The combined model using clinical factors outperformed the standalone US-based DL model.
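The abstract specifies the fusion only as "a regression model" and "a decision tree"; one plausible reading is a late-fusion model whose inputs are the image-based malignancy probability plus the clinical covariates. A hedged scikit-learn sketch of that reading (function and variable names are hypothetical, not the authors' code):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def fit_combined(dl_prob, clinical, y, method="regression"):
    # dl_prob: per-patient malignancy probability from the image model
    # clinical: columns such as age, family history, BRCA, breast density
    X = np.column_stack([dl_prob, clinical])
    model = (LogisticRegression(max_iter=1000) if method == "regression"
             else DecisionTreeClassifier(max_depth=3))
    return model.fit(X, y)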
... and 92.3% vs. 92%, respectively (29). Zhou et al. (33) used 3 different CNNs: Inception V3, Inception-ResNet V2, and ResNet-101 architectures pretrained on ImageNet, with 680 patients and 5 radiologists. ...
Article
Breast cancer is considered the most commonly diagnosed cancer among women worldwide. Several studies have shown that mammography screening can significantly decrease breast cancer mortality. Despite the availability of other screening modalities, such as MRI and ultrasound (US), mammography plays a vital role in detecting cancer and following it up, owing to its qualities and properties. The aim of this literature review is to examine recent studies that use AI with different medical imaging modalities, namely mammography, MRI, and US, in detecting breast lesions. A literature search was carried out using the Google Scholar, Semantic Scholar, medRxiv, and PubMed databases, covering the last four years. The search terms were "breast lesion," "breast imaging," and "breast cancer" combined with "machine learning," "deep learning," and "artificial intelligence." Among these studies, only medical imaging studies related to breast lesions with AI were selected. A total of 25 articles were extracted from the following databases: 4 from Google Scholar, 3 from Semantic Scholar, 4 from medRxiv, and 14 from PubMed. Only papers relating breast lesions to medical imaging modalities were extracted, and all duplications were removed. In this study, the papers were reviewed by medical imaging professionals. This literature review summarizes the most recent articles on utilizing AI to detect breast lesions across imaging modalities: mammography, ultrasound, and MRI. The reviewed studies showed that AI performance in detecting lesions was significant, with high accuracy, sensitivity, and specificity across these modalities.
... Moreover, an 8-gauge or 11-gauge VAE needle retrieves a larger volume of specimen, thus allowing a more accurate diagnosis of breast lesions, including atypical ductal hyperplasia or ductal carcinoma in situ, compared to a typical 14-gauge core needle biopsy (5)(6)(7). Given the important role that VAE plays in the diagnosis and resection of breast masses, acquisition of this skill requires appropriate training, as well as the accumulation of time and cases, as with stereotactic breast core biopsy or deep learning-based detection and diagnosis of breast masses (8)(9)(10). The varying experience of surgeons and ultrasound physicians, as well as their collaboration with physicians, could affect the operation time and efficiency. ...
Article
Full-text available
Background: The varying experience of surgeons and ultrasound physicians, and their collaboration with physicians, may affect operation time and efficiency. We evaluated the learning curve of ultrasound-guided vacuum-assisted excision (VAE) of breast lesions with collaboration between different physicians and assessed characteristics associated with operation time. Methods: The sample population of this retrospective study was divided into two groups: 49 consecutive patient surgeries completed by skilled surgeons and novice ultrasound physicians (U group); and 30 consecutive patient surgeries completed by skilled ultrasound physicians and novice surgeons (S group). Cumulative summation (CUSUM) graphs were used to evaluate operation time and calculate the turning point of the learning curve. Patients in the U and S groups were divided into an exploration stage and a proficiency stage according to the turning point, and the differences in influencing factors were compared. A total of 548 patients who underwent vacuum-assisted breast excision performed by a combination of skilled surgeons and skilled ultrasound physicians were selected as the reference group (R group). The differences among the three groups were compared. The relationship between operation time and other factors in the different groups was analyzed using linear regression. Results: The best-fitting learning curve of the sample population was a quadratic equation, with the turning point at the 19th case in the U group and the 14th case in the S group. The total operation times in the proficiency stage were significantly shorter than those in the exploration stage in both the U and S groups (P=0.012 and P=0.003, respectively). Patient age and the long diameter, short diameter, and depth of masses were related to the operation time. Conclusions: Our data suggest the existence of different learning curves in ultrasound-guided VAE for collaborations between surgeons and ultrasound physicians at different stages. Through the accumulation of experience, it is feasible to safely perform ultrasound-guided VAE of breast lesions.
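The CUSUM learning-curve analysis mentioned in the Methods can be sketched in a few lines: accumulate the deviations of each operation time from the overall mean, fit a quadratic to the resulting curve, and read the turning point off the vertex. A minimal NumPy sketch, assuming this standard construction (the authors' exact fitting code is not given):

import numpy as np

def cusum_turning_point(times):
    # CUSUM: C_k = sum_{i<=k} (t_i - mean(t)); rising segments mark the
    # exploration stage, falling segments the proficiency stage.
    t = np.asarray(times, dtype=float)
    cusum = np.cumsum(t - t.mean())
    k = np.arange(1, len(t) + 1)
    a, b, _ = np.polyfit(k, cusum, deg=2)  # quadratic fit
    return cusum, -b / (2 * a)             # vertex = estimated turning point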
... Deep learning (DL) is a powerful tool and, due to its successful application in a range of settings, it is expected that DL will be able to perform the simple but labor-intensive measurement component of radiologic examinations (10,11). In prior studies, DL was adopted for the automated and rapid measurement of LLD, including leg segmentationbased measurement (12) and anatomical landmark localization-based measurement (13). ...
Article
Full-text available
Background: Deep learning (DL) has been suggested for the automated measurement of leg length discrepancy (LLD) on radiographs, which could free up time for pediatric radiologists to focus on value-adding duties. The purpose of our study was to develop a unified DL solution for both automated LLD measurement and comprehensive assessment in a large radiographic dataset covering children at all stages, from infancy to adolescence, and with a wide range of diagnoses. Methods: The bilateral femurs and tibias were segmented by a cascaded convolutional neural network (CNN), referred to as LLDNet. Each LLDNet was constructed with residual blocks to learn richer features, a residual convolutional block attention module (Res-CBAM) to integrate both spatial and channel attention mechanisms, and an attention gate structure to alleviate the semantic gap. The leg length was calculated by localizing anatomical landmarks and computing the distances between them. A comprehensive assessment based on 9 indices (5 similarity indices and 4 stability indices) and the paired Wilcoxon signed-rank test was undertaken to demonstrate the superiority of the cascaded LLDNet for segmenting pediatric legs, through comparison with alternative DL models including ResUNet, TransUNet, and the single LLDNet. Furthermore, the consistency between the ground truth and the DL-calculated measurements of leg length was comprehensively evaluated based on 5 indices and a Bland-Altman analysis. The sensitivity and specificity for LLD >5 mm were also calculated. Results: A total of 976 children were identified (0-19 years old; male/female 522/454; 520 children between 0 and 2 years, 456 children older than 2 years, 4 children excluded). Experiments demonstrated that the proposed cascaded LLDNet achieved the best pediatric leg segmentation in both similarity indices (0.5-1% increase; P<0.05) and stability indices (13-47% percentage decrease; P<0.05) compared with the alternative DL methods. A high consistency of LLD measurements between DL and the ground truth was also observed using Bland-Altman analysis [Pearson correlation coefficient (PCC) =0.94; mean bias =0.003 cm]. The sensitivity and specificity established for LLD >5 mm were 0.792 and 0.962, respectively, while those for LLD >10 mm were 0.938 and 0.992, respectively. Conclusions: The cascaded LLDNet was able to achieve promising pediatric leg segmentation and LLD measurement on radiography. A comprehensive assessment in terms of similarity, stability, and measurement consistency is essential in computer-aided LLD measurement for pediatric patients.
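As a closing illustration of the measurement step described above, leg length follows from the localized landmarks as a Euclidean distance, and the reported Bland-Altman bias is the mean difference between DL and ground-truth measurements. A minimal sketch under those standard definitions (the landmark names are hypothetical; the paper's landmark set is not specified in the abstract):

import numpy as np

def leg_length_mm(hip_xy, ankle_xy, mm_per_pixel):
    # Euclidean distance between two localized landmarks, scaled to mm.
    return np.linalg.norm(np.subtract(hip_xy, ankle_xy)) * mm_per_pixel

def lld(left_mm, right_mm):
    # Leg length discrepancy: absolute left-right difference.
    return abs(left_mm - right_mm)

def bland_altman(ground_truth, predicted):
    # Mean bias and 95% limits of agreement between two methods.
    d = np.asarray(predicted, dtype=float) - np.asarray(ground_truth, dtype=float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)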