Fig 3 - uploaded by Jan Egger
The internal network of the DataPreparation macro module. https://doi.org/10.1371/journal.pone.0212550.g003


Source publication
Article
Full-text available
In this study, we present an approach for fully automatic urinary bladder segmentation in CT images using artificial neural networks. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Medical image segmentation in particular plays a vital role, since segmentation is often the initial step in an...

Similar publications

Conference Paper
Full-text available
Feature attribution methods are popular in Explainable AI as they give intuitive readings on relations between features and predictions. However, evaluation of feature attribution methods is lacking, due to the fact that typical datasets do not contain “explanation ground truth” - thus it is not possible to know “true” contributions of features a...
Preprint
Full-text available
Using knowledge graph embedding models (KGEMs) is a popular approach for predicting links in knowledge graphs (KGs). Traditionally, the performance of KGEMs for link prediction is assessed using rank-based metrics, which evaluate their ability to give high scores to ground-truth entities. However, the literature claims that the KGEM evaluation proc...
Preprint
Full-text available
Weakly-supervised Temporal Action Localization (W-TAL) aims to classify and localize all action instances in an untrimmed video under only video-level supervision. However, without frame-level annotations, it is challenging for W-TAL methods to identify false positive action proposals and generate action proposals with precise temporal boundaries....
Article
Full-text available
This work explores methodologies for dynamic trajectory generation for urban driving environments by utilizing coarse global plan representations. In contrast to state-of-the-art architectures for autonomous driving that often leverage lane-level high-definition (HD) maps, we focus on minimizing required map priors that are needed to navigate in dy...

Citations

... Gsaxner et al. [72] conducted research on semantic segmentation and used 18F-FDG accumulation in the urinary bladder in PET scans to produce ground truth labels. They applied data augmentation to enlarge a small dataset. ...
... Data Augmentation for MIA Most researchers implement DA in DL-based MIA [23], and the commonly employed methods are twofold. The former is general DA [24][25][26], which employs flip, rotation, contrast, scale, noise injection, etc., or several of their combinations to augment the input image. However, when implementing general DA, the selection of operations, the adjustment of their sequence, and the determination of their magnitude rely heavily on manual design guided by experience. ...
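The "general DA" operations listed above (flip, rotation, contrast adjustment, noise injection) can be sketched as a minimal NumPy pipeline. This is a generic illustration, not code from the cited work; the probabilities and magnitudes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, rng):
    """Apply a random combination of common 'general' DA operations
    (flip, rotation, contrast scaling, noise injection) to a 2-D image."""
    out = image.astype(np.float32)
    if rng.random() < 0.5:                        # horizontal flip
        out = np.fliplr(out)
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # rotate 0/90/180/270 deg
    if rng.random() < 0.5:                        # contrast adjustment around the mean
        out = (out - out.mean()) * rng.uniform(0.8, 1.2) + out.mean()
    if rng.random() < 0.5:                        # Gaussian noise injection
        out = out + rng.normal(0.0, 0.02, size=out.shape)
    return out

slice_ = rng.random((64, 64))   # stand-in for a CT slice
augmented = augment(slice_, rng)
```

The manual-design problem the excerpt describes is visible here: the flip probability, rotation set, contrast range, and noise level are all hand-picked hyperparameters.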
Preprint
Data Augmentation (DA) techniques have been widely implemented in the computer vision field to relieve data shortage, while DA in Medical Image Analysis (MIA) is still mostly experience-driven. Here, we develop a plug-and-use DA method, named MedAugment, to introduce automatic DA augmentation to the MIA field. To settle the difference between natural images and medical images, we divide the augmentation space into a pixel augmentation space and a spatial augmentation space. A novel operation sampling strategy is also proposed for sampling DA operations from the spaces. To demonstrate the performance and universality of MedAugment, we implement extensive experiments on four classification datasets and three segmentation datasets. The results show that our MedAugment outperforms most state-of-the-art DA methods. This work shows that the plug-and-use MedAugment may benefit the MIA community. Code is available at https://github.com/NUS-Tim/MedAugment_Pytorch.
... PET Images PET imaging is a relatively advanced clinical laboratory technology in the field of nuclear medicine. The general method is to label the short-lived radioactive elements with certain substances, which are generally necessary for the metabolism of biological life, such as glucose and protein [16][17][18]. After being injected into the human body, the metabolism of the substance is used to reflect the metabolism of life. ...
Article
Full-text available
Recently, deep learning, especially convolutional neural networks, has achieved remarkable results in natural image classification and segmentation. At the same time, in the field of medical image segmentation, researchers use deep learning techniques for tasks such as tumor segmentation, cell segmentation, and organ segmentation. Automatic tumor segmentation plays an important role in radiotherapy and clinical practice and is the basis for the implementation of follow-up treatment programs. This paper reviews tumor segmentation methods based on deep learning in recent years. We first introduce the common medical image types and the evaluation criteria for segmentation results in tumor segmentation. Then, we review deep-learning-based tumor segmentation methods from a technique view and a tumor view, respectively. The technique view organizes the research by deep learning architecture, while the tumor view organizes it by tumor type.
... Another topic that is currently being actively researched, also for medical applications, is (medical) deep learning [24][25][26]. For a reliable deep neural network, a massive quantity of training data is needed. ...
Article
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all anatomical structure and pathology types. In this context, we introduce Studierfenster ( www.studierfenster.at ): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and a facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), the inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved.
The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing these with two ground truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture, which is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields, in applying the already existing functionalities or future additional implementations of further image processing applications. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456, 68], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
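The two medical metrics named above, the Dice score and the Hausdorff distance, can be computed for binary segmentation masks in a few lines of NumPy. This is a generic sketch of the standard definitions, not Studierfenster's implementation; the example masks are made up for illustration:

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A∩B| / (|A|+|B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets
    of two binary masks (brute-force pairwise distances)."""
    pa = np.argwhere(a).astype(float)
    pb = np.argwhere(b).astype(float)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy example: a ground-truth square and a prediction shifted by 2 pixels.
gt = np.zeros((32, 32), dtype=bool);   gt[8:24, 8:24] = True
pred = np.zeros((32, 32), dtype=bool); pred[10:26, 8:24] = True
```

For the toy masks, the 2-pixel shift gives a Dice score of 0.875 and a Hausdorff distance of 2.0; real implementations typically use a KD-tree or distance transform instead of the O(n·m) brute force shown here.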
... Multimodal learning uses data of different modalities in a learning strategy. An example for data from different modalities are the acquisitions from positron emission tomography-computed tomography (PET-CT) scanners, where the tissue data from the CT and metabolically active regions from the PET are acquired from a patient (Gsaxner et al., 2019b). In their survey, Ramachandram & Taylor (2017) first classify the architectures for deep multimodal learning. ...
Article
Full-text available
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by drawing inspiration from the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field saw an exponential growth during the last years. This growth resulted in way over 10,000 publications per year in the last years. For example, the search engine PubMed alone, which covers only a sub-set of all publications in the medical field, already provides over 11,000 results for the search term ‘deep learning’ in Q3 2020, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of a subfield. However, there are several review articles about deep learning, which are focused on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-survey of selected reviews on deep learning across different scientific disciplines and outline the research impact that they already have during a short period of time.
The categories (computer vision, language processing, medical informatics and additional works) have been chosen according to the underlying data sources (image, language, medical, mixed). In addition, we review the common architectures, methods, pros, cons, evaluations, challenges and future directions for every sub-category.
... However, a fully automated image-to-patient registration eliminates the need for a complicated calibration and adjustment procedure [66]. The Construct3D AR system allows students to collaborate, operate, measure, and manipulate virtual 3D objects in a real-world setting, using handheld computers (mobile devices, wi-fi, location-registered technology) to enable collaborative and situated learning enhanced by digital simulations, games, models, and virtual 2D or 3D objects in a real environment. This offers the advantages of portability, social interactivity, context sensitivity, connectivity, and individuality. ...
Article
Full-text available
The last few decades have seen an exponential growth in the development and adoption of novel technologies in the medical and surgical training of residents globally. Simulation is an active and innovative teaching method, and can be achieved via physical or digital models. Simulation allows learners to practice repeatedly without the risk of causing any error in an actual patient and to enhance their surgical skills and efficiency. Simulation may also allow the clinical instructor to objectively test the trainee's ability to carry out a clinical procedure competently and independently prior to the trainee's completion of the program. This review aims to explore the role of emerging simulation technologies globally in the craniofacial training of students and residents in improving their surgical knowledge and skills. These technologies include 3D-printed biomodels, virtual and augmented reality, the use of Google Glass, HoloLens and haptic feedback, surgical boot camps, serious games and escape games, and how they can be implemented in low- and middle-income countries. Craniofacial surgical training methods will probably go through a sea change in the coming years, with the integration of these new technologies into the surgical curriculum, allowing learning in a safe environment with a virtual patient through repeated exercise. In the future, they may also be used as an assessment tool for performing a specific procedure without putting an actual patient at risk. Although these new technologies are being enthusiastically welcomed by young surgeons, they should only be used as an addition to the actual curriculum and not as a replacement for conventional tools, as the mentor-mentee relationship can never be replaced by any technology.
... Multimodal learning uses data of different modalities in a learning strategy. An example for data from different modalities are the acquisitions from positron emission tomography-computed tomography (PET-CT) scanners, where the tissue data from the CT and metabolically active regions from the PET are acquired from a patient [64]. In their survey, Ramachandram and Taylor [65] first classify the architectures for deep multimodal learning. ...
Preprint
Full-text available
Deep learning belongs to the field of artificial intelligence, where machines perform tasks that typically require some kind of human intelligence. Deep learning tries to achieve this by mimicking the learning of a human brain. Similar to the basic structure of a brain, which consists of (billions of) neurons and connections between them, a deep learning algorithm consists of an artificial neural network, which resembles the biological brain structure. Mimicking the learning process of humans with their senses, deep learning networks are fed with (sensory) data, like texts, images, videos or sounds. These networks outperform the state-of-the-art methods in different tasks and, because of this, the whole field saw an exponential growth during the last years. This growth resulted in way over 10,000 publications per year in the last years. For example, the search engine PubMed alone, which covers only a sub-set of all publications in the medical field, already provides over 11,000 results for the search term ‘deep learning’ in Q3 2020, and around 90% of these results are from the last three years. Consequently, a complete overview of the field of deep learning is already impossible to obtain and, in the near future, it will potentially become difficult to obtain an overview of a subfield. However, there are several review articles about deep learning, which are focused on specific scientific fields or applications, for example deep learning advances in computer vision or in specific tasks like object detection. With these surveys as a foundation, the aim of this contribution is to provide a first high-level, categorized meta-analysis of selected reviews on deep learning across different scientific disciplines and outline the research impact that they already have during a short period of time.
... ANNs can help with image reconstruction or to create standard-dose from low-dose images, as well as to improve scatter and attenuation correction (9)(10)(11)(12)(13)(14)(15)(16)(17)(18). ANNs can also assist with disease detection and segmentation (19)(20)(21)(22)(23)(24)(25)(26), disease diagnosis, and outcome predictions (27)(28)(29)(30)(31)(32). In this paper we have chosen to focus on a few applications with specific examples. ...
... Similar to denoising, input and output images are at the same resolution and training is usually supervised, using combinations of raw and segmented images. Several papers have been written on lesion detection and segmentation using neural networks (20)(21)(22)(23)(24)(25)(26) with differing architectural designs, although often a U-Net. ...
Article
This article is the second part in our machine learning series. Part 1 provided a general overview of machine learning in nuclear medicine. Part 2 focuses on neural networks. We start with an example illustrating how neural networks work and a discussion of potential applications. Recognizing there is a spectrum of applications, we focus on recent publications in the areas of image reconstruction, low-dose PET, disease detection and models used for diagnosis and outcome prediction. Finally, since the way machine learning algorithms are reported in the literature is extremely variable, we conclude with a call to arms regarding the need for standardized reporting of design and outcome metrics and we propose a basic checklist our community might follow going forward.
Article
Recent advances in Deep Learning have largely benefited from larger and more diverse training sets. However, collecting large datasets for medical imaging is still a challenge due to privacy concerns and labeling costs. Data augmentation makes it possible to greatly expand the amount and variety of data available for training without actually collecting new samples. Data augmentation techniques range from simple yet surprisingly effective transformations such as cropping, padding, and flipping, to complex generative models. Depending on the nature of the input and the visual task, different data augmentation strategies are likely to perform differently. For this reason, it is conceivable that medical imaging requires specific augmentation strategies that generate plausible data samples and enable effective regularization of deep neural networks. Data augmentation can also be used to augment specific classes that are underrepresented in the training set, e.g., to generate artificial lesions. The goal of this systematic literature review is to investigate which data augmentation strategies are used in the medical domain and how they affect the performance of clinical tasks such as classification, segmentation, and lesion detection. To this end, a comprehensive analysis of more than 300 articles published in recent years (2018–2022) was conducted. The results highlight the effectiveness of data augmentation across organs, modalities, tasks, and dataset sizes, and suggest potential avenues for future research.
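The class-balancing use of augmentation described above, generating extra samples for underrepresented classes rather than transforming the whole training set, can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, assuming a binary-labelled set where label 1 is the minority class; the flip-only augmentation and all names here are illustrative choices, not from the cited review:

```python
import numpy as np

rng = np.random.default_rng(1)

def oversample_minority(images, labels, target_count, rng):
    """Grow the minority class (label 1) to target_count samples by
    appending randomly flipped copies of existing minority images."""
    minority = [img for img, y in zip(images, labels) if y == 1]
    new_imgs, new_lbls = list(images), list(labels)
    while sum(new_lbls) < target_count:
        src = minority[rng.integers(len(minority))]
        aug = np.flip(src, axis=int(rng.integers(0, 2)))  # random H/V flip
        new_imgs.append(aug)
        new_lbls.append(1)
    return new_imgs, new_lbls

imgs = [rng.random((8, 8)) for _ in range(10)]
lbls = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # 2 positives out of 10
b_imgs, b_lbls = oversample_minority(imgs, lbls, 8, rng)
```

In practice the appended samples would come from richer transformations (or a generative model, as the review notes), but the balancing logic, augmenting only the rare class until class counts are acceptable, stays the same.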