Figure - available from: OncoTargets and Therapy
Scheme of a restricted Boltzmann machine. The restricted Boltzmann machine is a fully connected bipartite graph.

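For readers unfamiliar with the model in the figure, the bipartite structure can be written out with the standard restricted Boltzmann machine energy function; the notation below (visible units v, hidden units h, weight matrix W, biases a and b) is the conventional textbook formulation rather than anything specific to the source publication.

E(\mathbf{v},\mathbf{h}) = -\mathbf{a}^{\top}\mathbf{v} - \mathbf{b}^{\top}\mathbf{h} - \mathbf{v}^{\top} W \mathbf{h},
\qquad
p(\mathbf{v},\mathbf{h}) = \frac{1}{Z}\, e^{-E(\mathbf{v},\mathbf{h})}

Because the graph has no visible-visible or hidden-hidden edges, the conditional distributions factorize over units:

p(h_j = 1 \mid \mathbf{v}) = \sigma\!\left(b_j + \sum_i W_{ij} v_i\right),
\qquad
p(v_i = 1 \mid \mathbf{h}) = \sigma\!\left(a_i + \sum_j W_{ij} h_j\right)

where \sigma is the logistic sigmoid; this factorization is what makes block Gibbs sampling and contrastive divergence training tractable.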

Source publication
Article
Full-text available
Lung cancer has a poor prognosis when it is not diagnosed early and unresectable lesions are present. The management of small lung nodules noted on computed tomography scans is controversial due to uncertain tumor characteristics. A conventional computer-aided diagnosis (CAD) scheme requires several image processing and pattern recognition steps to accomp...

Similar publications

Article
Full-text available
Background Histological feature representation is advantageous for computer-aided diagnosis (CAD) and disease classification when using predictive techniques based on machine learning. Explicit feature representations in computer tissue models can assist the explainability of machine learning predictions. Different approaches to feature representation...

Citations

... ML, a branch of artificial intelligence (AI), empowers computer systems to learn from data patterns and iteratively improve performance without explicit programming. By leveraging large datasets encompassing diverse patient profiles and imaging features, ML algorithms hold the potential to discern subtle patterns in SPN characteristics that elude human perception [13,14]. ...
Article
Full-text available
The study investigates the efficiency of integrating Machine Learning (ML) in clinical practice for diagnosing solitary pulmonary nodules’ (SPN) malignancy. Patient data were recorded at the Department of Nuclear Medicine, University Hospital of Patras, Greece. A dataset comprising 456 SPN characteristics extracted from CT scans, the SUVmax score from the PET examination, and the ultimate outcome (benign/malignant), determined by patient follow-up or biopsy, was used to build the ML classifier. Two medical experts provided their malignancy likelihood scores, taking into account the patient’s clinical condition and without prior knowledge of the true label of the SPN. Incorporating human assessments into ML model training improved diagnostic efficiency by approximately 3%, highlighting the synergistic role of human judgment alongside ML. Under the latter setup, the ML model had an accuracy score of 95.39% (95% CI: 95.29–95.49%). While ML exhibited swings in probability scores, human readers excelled in discerning ambiguous cases. ML outperformed the best human reader in challenging instances, particularly in SPNs with ambiguous probability grades, showcasing its utility in diagnostic grey zones. The best human reader reached an accuracy of 80% in the grey zone, whilst ML exhibited 89%. The findings underline the collaborative potential of ML and human expertise in enhancing SPN characterization accuracy and confidence, especially in cases where diagnostic certainty is elusive. This study contributes to understanding how integrating ML and human judgment can optimize SPN diagnostic outcomes, ultimately advancing clinical decision-making in PET/CT screenings.
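As a rough illustration of the setup described above (imaging-derived features plus an expert likelihood score fed to one classifier), the following Python sketch concatenates a human score column onto the feature matrix before cross-validation. The random placeholder data, the ten imaging features, and the RandomForestClassifier choice are assumptions for illustration only, not the study's actual pipeline.

# Illustrative sketch: augmenting imaging features with a human likelihood score.
# Feature columns, labels, and the classifier choice are placeholders, not study data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 456                                        # number of SPNs, as reported in the abstract
X_imaging = rng.normal(size=(n, 10))           # placeholder CT-derived features + SUVmax
human_score = rng.uniform(0, 1, size=(n, 1))   # expert malignancy likelihood (placeholder)
y = rng.integers(0, 2, size=n)                 # benign (0) / malignant (1), placeholder labels

# Concatenate the human assessment as one additional input feature.
X_combined = np.hstack([X_imaging, human_score])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_combined, y, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())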
... Synthesizing and analyzing large volumes of data, AI can incorporate “unstructured” information, including images or text, and identify a complex interplay among various data points [3,4]. In turn, the integration of AI into healthcare has the potential to yield numerous benefits, including tools to refine patient outcomes, streamline healthcare delivery processes, and inform medical research [5]. ...
Article
Full-text available
The rapid evolution of modern technology has made artificial intelligence (AI) an important emerging tool in healthcare. AI, which is a broad field of computer science, can be used to develop systems or machines equipped with the ability to tackle tasks that traditionally necessitate human intelligence. AI can be used to perform multifaceted tasks that involve the synthesis of large amounts of data with the generation of solutions, algorithms, and decision support tools. Various AI approaches, including machine learning (ML) and natural language processing (NLP), are increasingly being used to analyze vast healthcare datasets. In addition, visual AI has the potential to revolutionize surgery and the intraoperative experience for surgeons through augmented reality enhancing surgical navigation in real-time. Specific applications of AI in hepatobiliary tumors such as hepatocellular carcinoma and biliary tract cancer can improve patient diagnosis, prognostic risk stratification, as well as treatment allocation based on ML-based models. The integration of radiomics data and AI models can also improve clinical decision making. We herein review how AI may be of particular interest in the care of patients with complex cancers, such as hepatobiliary tumors, as these patients often require a multimodal treatment approach.
... However, the synthetic data framework introduced here can be used with other model architectures that use unique computation modules or multimodal data. While the utility of deep learning models in processing medical CT images has been widely explored (Ardakani et al., 2020; Cheng et al., 2016; Gozes et al., 2020; Hua et al., 2015; Serte & Demirel, 2021; Wang et al., 2019; Zhou et al., 2021), its application to AM is still in a nascent stage. AM CT images are intrinsically different from medical images, and the classification tasks are fundamentally different. ...
Article
Full-text available
Automated methods for defect detection are a major goal of intelligent manufacturing systems, and additively manufactured (AM) parts present unique challenges with complex internal features that are difficult to inspect. X-ray computed tomography (CT) is one of the only methods to inspect the interior of AM parts. This paper shows how deep machine learning (ML) models trained using computer-generated images of defects can automatically identify defects in CT images of real parts that were never previously seen by the model. To create an experimental dataset for testing, we designed a nozzle part having internal three-dimensional (3D) geometries, and for some parts we introduced intentional defects. Two different resin-based AM processes fabricated 227 parts, some of which were defect free and some of which included intentionally designed defects. CT scans were collected for each part, which generated 100,334 cross-section image slices that were labeled as defect free (86.4%) or having a defect (13.6%). To train an ML model for defect detection, we developed a novel method to create computer-generated images of defects from defect-free parts. More than 50,000 images of defective parts were generated and used to train a Vision Transformer (ViT) model. The model was tested on 572 defects in experimental parts. The defects that appear in the real parts used for testing do not appear in the computer-generated training dataset. The model accurately detects and classifies defective parts with over 90% accuracy. The research demonstrates the potential of synthetic data to train deep learning models capable of detecting previously unseen defects. Such methods could be generalized to many types of part designs and defect types while greatly reducing the time and cost of training ML models for defect detection.
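The train-on-synthetic, test-on-real pattern described above can be sketched as follows; the use of torchvision's ViT-B/16, the binary head, the input size, and the optimizer settings are illustrative assumptions rather than the paper's implementation details.

# Illustrative sketch of training on computer-generated defect images and evaluating
# on real CT slices. Images are assumed to be 3x224x224 float tensors; these choices
# and the hyperparameters are assumptions for illustration only.
import torch
from torch import nn
from torchvision import models

model = models.vit_b_16(weights=None)            # ViT backbone, trained from scratch here
model.heads = nn.Linear(model.hidden_dim, 2)     # binary head: defect-free vs defect

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of synthetic (computer-generated) defect images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def evaluate(images, labels):
    """Accuracy on real CT cross-section slices never seen during training."""
    preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()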
... To address these limitations in interpreting CT scan results, artificial intelligence (AI) models, particularly deep learning (DL) models, have been employed to enhance the accuracy of lung cancer diagnoses. DL models, especially convolutional neural networks, have been trained to identify subtle patterns in imaging data, with their accuracy potentially surpassing that of human experts [4,5]. DL models offer a consistent and rapid analysis, which is particularly beneficial in managing large volumes of imaging data [6]. ...
Article
Full-text available
Purpose To compare the diagnostic performance of standalone deep learning (DL) algorithms and human experts in lung cancer detection on chest computed tomography (CT) scans. Materials and methods This study searched for studies on PubMed, Embase, and Web of Science from their inception until November 2023. We focused on adult lung cancer patients and compared the efficacy of DL algorithms and expert radiologists in disease diagnosis on CT scans. Quality assessment was performed using QUADAS-2, QUADAS-C, and CLAIM. Bivariate random-effects and subgroup analyses were performed for tasks (malignancy classification vs invasiveness classification), imaging modalities (CT vs low-dose CT [LDCT] vs high-resolution CT), study region, software used, and publication year. Results We included 20 studies on various aspects of lung cancer diagnosis on CT scans. Quantitatively, DL algorithms exhibited superior sensitivity (82%) and specificity (75%) compared to human experts (sensitivity 81%, specificity 69%). However, the difference in specificity was statistically significant, whereas the difference in sensitivity was not. The DL algorithms’ performance varied across different imaging modalities and tasks, demonstrating the need for tailored optimization of DL algorithms. Notably, DL algorithms matched experts in sensitivity on standard CT, surpassing them in specificity, but showed higher sensitivity with lower specificity on LDCT scans. Conclusion DL algorithms demonstrated improved accuracy over human readers in malignancy and invasiveness classification on CT scans. However, their performance varies by imaging modality, underlining the importance of continued research to fully assess DL algorithms’ diagnostic effectiveness in lung cancer. Clinical relevance statement DL algorithms have the potential to refine lung cancer diagnosis on CT, matching human sensitivity and surpassing it in specificity. These findings call for further DL optimization across imaging modalities, aiming to advance clinical diagnostics and patient outcomes. Key Points Lung cancer diagnosis by CT is challenging and can be improved with AI integration. DL shows higher accuracy in lung cancer detection on CT than human experts. Enhanced DL accuracy could lead to improved lung cancer diagnosis and outcomes.
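As a reminder of the two quantities being pooled and compared above, sensitivity and specificity are simple ratios of confusion-matrix counts; the counts in this small Python sketch are made up and merely chosen to echo the pooled DL figures, not data from any included study.

# Per-study quantities that feed a meta-analysis like the one above (made-up counts).
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: detected cancers / all cancers."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: correctly cleared scans / all non-cancers."""
    return tn / (tn + fp)

print(sensitivity(tp=82, fn=18))   # 0.82, comparable to the pooled DL sensitivity
print(specificity(tn=75, fp=25))   # 0.75, comparable to the pooled DL specificity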
... Deep learning algorithms are extensively used in radiology imaging analysis. Various deep learning models have shown promising results in detecting and staging diseases, including breast cancer [25], lung cancer [26,27], and Alzheimer's disease [28,29,30]. ...
Article
Full-text available
Artificial Intelligence (AI) is the technology related to simulating human behaviour in machines. Machine intelligence is a subfield of AI in which available raw data is processed to learn inherent patterns and build a model that adapts to new data. Deep learning models utilize very large amounts of data to extract important features and classify the data. Multiagent systems, or distributed AI systems, are autonomous, proactive and reactive, and have the ability to interact with humans and other agents. Medicine includes all the processes involved in preventing, diagnosing and curing diseases. It includes medical staff and supporting staff records, drug information, decision support information for medical professionals, clinical lab tests, X-rays, magnetic resonance images, surgeries, and so on. AI has a number of applications in medicine, including expert systems, medical robots, medical image analysis and distributed medical agents. Expert systems can function as medical experts and are helpful for patients who are unable to reach a medical specialist due to cost or being in a remote area. The role of AI is significant in radiology, as abnormal data is labelled more accurately in medical images obtained from computed tomography, X-rays and magnetic resonance imaging. Medical robots assist in patient care, clinical settings, surgeries and in many other ways. Distributed medical agents enable the availability of a number of medical experts online to examine critical cases. In this paper the role of artificial intelligence in the above-mentioned medical applications is elaborated with relevant examples. It is concluded that AI is indispensable in medicine for effective and efficient healthcare. Introduction Computers were traditionally used for applications involving high-speed computations, and for storage and retrieval purposes. Medicine includes all the processes involved in preventing, diagnosing and curing diseases. Traditional applications of computers in medicine include hospital administration with information systems for patients, doctors, paramedical staff, drugs and clinical data. In medical imaging, computers were used for measurement, interpretation, reporting, filtering and retrieval of data with limited capabilities for decision support. Telemedicine allows telecommunication between doctors and patients, or between doctors, over the Internet. Public health organizations like the World Health Organization (WHO) and the centres for disease control have prepared huge databases of information related to diseases and health statistics. Computer networks and the Internet have increased the means of communication between medical professionals via emails, video chats and webinars. An electronic health record is the digital version of a patient's health record that is instantly available to authorized health providers and provides the health history of the patient. Computer-based patient monitoring machines allow heart rate, respiratory activity, blood pressure and other vital parameters to be collected automatically in digital form and notify
... The study by Yang et al. [3] showed that simple geometric features cannot capture the important characteristics of lung nodules, which supports classification using the original images and nodule masks, which contain rich nodule information. Hua et al. [4] introduced deep belief networks and CNNs in the context of nodule classification. Experimental results show that deep learning methods can achieve better recognition results and have broad application prospects in the field of CAD. ...
Article
Full-text available
Lung cancer has the highest morbidity and mortality rates worldwide. Pulmonary nodules are an early manifestation of lung cancer. Therefore, accurate classification of pulmonary nodules is of great significance for the early diagnosis and treatment of lung cancer. However, the classification of lung nodules is a complex and time-consuming task requiring extensive image reading and analysis by expert radiologists. Therefore, using deep learning technology to assist doctors in detecting and classifying pulmonary nodules has become a current research trend. A lightweight classification model named Res-VGG is proposed for classifying lung nodules as benign or malignant. The Res-VGG model improves on VGG16 by reducing the use of convolutional and fully connected layers. To reduce overfitting, residual connections are introduced. The training of the model was performed on the LUNA16 database, and a ten-fold cross-validation method was used to evaluate the performance of the model. In addition, the Res-VGG model was compared with three other common classification networks, and the results showed that the Res-VGG model outperformed the other models in terms of accuracy, sensitivity, and specificity.
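The abstract does not give the exact layer configuration, so the following is only a generic Python/PyTorch sketch of a VGG-style convolutional block with a residual (skip) connection, in the spirit of the described Res-VGG design; the channel sizes, the 1x1 projection on the skip path, and the 64x64 patch size are assumptions, not the authors' architecture.

# Generic VGG-style block with a residual connection (illustrative assumptions only).
import torch
from torch import nn

class ResVGGBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        # 1x1 projection so the skip path matches the output channel count.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.skip(x))

# Example: a single-channel 64x64 nodule patch passed through one block.
block = ResVGGBlock(1, 32)
out = block(torch.randn(1, 1, 64, 64))
print(out.shape)   # torch.Size([1, 32, 64, 64])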
... However, there are limitations. According to previous studies, although CT-based AI has higher positive predictive values and sensitivity, its specificity is not ideal, ranging from 70% to 80% (27)(28)(29)(30). Therefore, relying solely on radiological imaging to differentiate between benign and malignant lung nodules is insufficient. ...
Article
Full-text available
Background and objective Accurately predicting the extent of lung tumor infiltration is crucial for improving patient survival and cure rates. This study aims to evaluate the application value of an improved CT index combined with serum biomarkers, obtained through an artificial intelligence recognition system analyzing CT features of pulmonary nodules, in early prediction of lung cancer infiltration using machine learning models. Patients and methods A retrospective analysis was conducted on clinical data of 803 patients hospitalized for lung cancer treatment from January 2020 to December 2023 at two hospitals: Hospital 1 (Affiliated Changshu Hospital of Soochow University) and Hospital 2 (Nantong Eighth People’s Hospital). Data from Hospital 1 were used for internal training, while data from Hospital 2 were used for external validation. Five algorithms, including traditional logistic regression (LR) and machine learning techniques (generalized linear models [GLM], random forest [RF], gradient boosting machine [GBM], deep neural network [DL], and naive Bayes [NB]), were employed to construct models predicting early lung cancer infiltration and were analyzed. The models were comprehensively evaluated through receiver operating characteristic curve (AUC) analysis based on LR, calibration curves, decision curve analysis (DCA), as well as global and individual interpretative analyses using variable feature importance and SHapley additive explanations (SHAP) plots. Results A total of 560 patients were used for model development in the training dataset, while a dataset comprising 243 patients was used for external validation. The GBM model exhibited the best performance among the five algorithms, with AUCs of 0.931 and 0.99 in the validation and test sets, respectively, and accuracies of 0.857 and 0.955 in the validation and test groups, respectively, outperforming other models. Additionally, the study found that nodule diameter and average CT value were the most significant features for predicting lung cancer infiltration using machine learning models. Conclusion The GBM model established in this study can effectively predict the risk of infiltration in early-stage lung cancer patients, thereby improving the accuracy of lung cancer screening and facilitating timely intervention for infiltrative lung cancer patients by clinicians, leading to early diagnosis and treatment of lung cancer, and ultimately reducing lung cancer-related mortality.
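A minimal sketch of the kind of gradient-boosting workflow described above (train a GBM, report AUC, interpret with SHAP), using scikit-learn's GradientBoostingClassifier on placeholder data; the feature matrix, the train/test split, and the hyperparameters are assumptions rather than the study's actual models or cohort.

# Gradient boosting with AUC evaluation and optional SHAP interpretation (placeholder data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(560, 8))        # e.g. nodule diameter, average CT value, serum markers (placeholders)
y = rng.integers(0, 2, size=560)     # infiltrative (1) vs non-infiltrative (0), placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))

# SHAP values give per-feature contributions for each prediction (requires the shap package):
# import shap
# explainer = shap.TreeExplainer(gbm)
# shap_values = explainer.shap_values(X_te)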
... Most of the research work in the field of pneumonia and coronavirus identification is based on deep learning. To increase the accuracy of lung disease detection compared with conventional methods, Hua et al. [18] tackled the issue of lung disease diagnosis by integrating deep belief networks (DBNs) with convolutional neural networks. To train deep convolutional neural networks (DCNNs) for the classification of chest pathology images in the absence of medical image datasets, Salehinejad et al. [19] employed generative adversarial networks (GANs) to create artificial images. ...
... In [24], four CNN architectures were used to classify lung nodules. In [25], a deep belief network and a CNN were used to classify the nodules. In [26], a CNN was used to classify the nodules. ...
Article
Full-text available
Introduction: Lung cancer, a highly prevalent disease worldwide, poses a significant risk to individuals. Nodules, which manifest as minuscule masses in the lungs, serve as crucial indicators of the early stages of the disease and can be either benign or malignant. Prompt diagnosis plays a pivotal role in saving patients' lives, making computer-aided diagnosis methods exceedingly valuable in this domain. Material and Methods: The methodology employed is rooted in the principles of deep learning, a field that epitomizes the amalgamation of artificial intelligence and neuroscience. The initial phase of this process entails the preprocessing of data, wherein the lung area is isolated from computed tomography (CT) scan images. Subsequently, in the second stage, nodules are identified using the mask region convolutional neural network (Mask R-CNN) technique, which delineates masks and bounding boxes. The third and final step involves the classification of the identified nodules, achieved with a single convolutional neural network, ultimately segregating the nodules into three distinct categories: benign, malignant, and ambiguous. To evaluate the efficacy of the proposed method, the LIDC-IDRI dataset was employed for testing, furnishing tangible evidence that the presented method is on par with its counterparts in detecting and classifying lung nodules. Results: The proposed method yielded a remarkable accuracy rate of 95% in the nodule detection phase, bolstering its credibility and reliability. Furthermore, the accuracy rate achieved in the nodule classification step stands at an impressive 97.3%, cementing the efficacy of the proposed method. Conclusion: The purpose of this work is to provide an intelligent system that reduces the workload of physicians in this field. After examining several datasets, the LIDC-IDRI dataset was chosen for this work because it is suitable for both tasks of nodule detection and nodule classification, contains a large amount of data, and is reliable and has been used in previous reliable works, which allows the results to be compared.
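The two-stage design described above (Mask R-CNN for nodule detection followed by a CNN classifier) might be organized roughly as in the Python sketch below; the torchvision detector, the tiny classifier head, the score threshold, and the class ordering are all illustrative assumptions rather than the authors' implementation.

# Illustrative two-stage pipeline: (1) Mask R-CNN proposes nodule regions,
# (2) a small CNN classifies each cropped region into 3 classes.
import torch
from torch import nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

detector = maskrcnn_resnet50_fpn(weights=None, num_classes=2).eval()  # background vs nodule

classifier = nn.Sequential(                      # 3 classes: benign, malignant, ambiguous
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 3),
)

@torch.no_grad()
def classify_nodules(ct_slice: torch.Tensor, score_threshold: float = 0.5):
    """ct_slice: (1, H, W) grayscale slice, lung field already isolated, values in [0, 1]."""
    detections = detector([ct_slice.repeat(3, 1, 1)])[0]    # detector expects 3-channel input
    labels = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x0, y0, x1, y1 = box.int().tolist()
        crop = ct_slice[:, y0:y1, x0:x1].unsqueeze(0)        # (1, 1, h, w) nodule patch
        labels.append(classifier(crop).argmax(dim=1).item())
    return labels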
... In recent years, deep learning and other data-driven machine learning approaches have become increasingly popular in computed tomography. Deep neural networks have achieved strong results in X-ray CT applications by improving reconstruction quality [1], reducing metal artifacts [2], performing beam hardening correction [3], and performing classification [4][5][6]. The progress in deep learning has shown the power of data-driven end-to-end optimization using auto-differentiation software, often in combination with hardware acceleration using graphical processing units (GPUs). ...
Article
Full-text available
Many of the recent successes of deep learning-based approaches have been enabled by a framework of flexible, composable computational blocks with their parameters adjusted through an automatic differentiation mechanism to implement various data processing tasks. In this work, we explore how the same philosophy can be applied to existing “classical” (i.e., non-learning) algorithms, focusing on computed tomography (CT) as the application field. We apply four key design principles of this approach to CT workflow design: end-to-end optimization, explicit quality criteria, declarative algorithm construction by building the forward model, and use of existing classical algorithms as computational blocks. Through four case studies, we demonstrate that auto-differentiation is remarkably effective beyond the boundaries of neural-network training, extending to CT workflows containing varied combinations of classical and machine learning algorithms.
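A toy Python/PyTorch example of the core idea, auto-differentiation applied to a classical, non-learned processing block: a single smoothing parameter is tuned by gradient descent against an explicit quality criterion. The signal, the exponential-smoothing block, and the optimizer settings are made up for illustration and are unrelated to the CT case studies in the paper.

# Tune a parameter of a classical algorithm with autograd against an explicit quality criterion.
import torch

torch.manual_seed(0)
clean = torch.sin(torch.linspace(0, 6.28, 256))   # reference signal (quality target)
noisy = clean + 0.3 * torch.randn(256)            # simulated noisy measurement

# Classical block: exponential moving-average smoothing with tunable strength `alpha`.
def smooth(signal: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    out = []
    state = signal[0]
    for s in signal:
        state = alpha * state + (1 - alpha) * s
        out.append(state)
    return torch.stack(out)

alpha = torch.tensor(0.1, requires_grad=True)
optimizer = torch.optim.Adam([alpha], lr=0.05)
for _ in range(100):
    optimizer.zero_grad()
    loss = torch.mean((smooth(noisy, alpha) - clean) ** 2)  # explicit quality criterion
    loss.backward()
    optimizer.step()
print("tuned smoothing strength:", alpha.item())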