Article
Full-text available
The identification of bacterial colonies is crucial in microbiology, as careful examination of colony morphology allows specific categories of bacteria to be recognised. Quantifying bacterial colonies on culture plates is a routine task in clinical microbiology laboratories, but it is time-consuming and susceptible to inaccuracies, so a dependable and cost-effective automated system is needed. Advances in deep learning have improved such processes by providing high accuracy with a negligible amount of error. This research proposes an automated technique that extracts bacterial colonies using SegNet, a semantic segmentation network, and then counts the segmented colonies with a blob counter. To further improve the segmentation network, its weights are optimized using a swarm optimizer. The proposed methodology is cost-effective and time-efficient, provides accurate and precise colony counts, and eliminates the human errors involved in traditional colony counting techniques. Evaluations were carried out on three distinct datasets: Microorganism, DIBaS, and a tailored dataset. The results show that the proposed framework, with the optimizer, attained an accuracy of 88.32%, surpassing other conventional methodologies.
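For illustration, the counting stage described above can be sketched as connected-component ("blob") counting on the binary mask produced by the segmentation network. The mask source and the minimum-area threshold below are assumptions for the example, not the paper's exact settings.

```python
# Illustrative sketch of the counting step: given a binary mask produced by a
# semantic segmentation network (e.g., SegNet), count colonies as connected blobs.
import numpy as np
from scipy import ndimage

def count_colonies(mask: np.ndarray, min_area: int = 20) -> int:
    """Count connected foreground blobs in a binary segmentation mask."""
    labeled, num_blobs = ndimage.label(mask > 0)                  # label connected components
    areas = ndimage.sum(mask > 0, labeled, range(1, num_blobs + 1))
    return int(np.sum(np.asarray(areas) >= min_area))             # ignore tiny spurious blobs

# Toy mask containing two colonies
mask = np.zeros((64, 64), dtype=np.uint8)
mask[5:15, 5:15] = 1
mask[30:45, 30:45] = 1
print(count_colonies(mask))   # -> 2
```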
Article
Full-text available
Magnetic resonance imaging (MRI) plays an important role in disease diagnosis. The noise that appears in MRI images is commonly governed by a Rician distribution. The bendlets system is a second-order shearlet transform with bent elements, and is therefore a powerful tool for sparsely representing images with curved contours, such as brain MRI images. Exploiting this property of bendlets, an adaptive method for denoising images corrupted by Rician noise is proposed. In this method, curved contours and textures are captured as low-frequency components, which is not the case with other transforms such as the wavelet and shearlet. Since Rician noise is concentrated in the high-frequency channels, it can be removed without blurring the contours. Compared with other algorithms, such as the shearlet transform, block matching 3D, bilateral filtering, and Wiener filtering, the proposed method achieves better Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) values.
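A minimal sketch of the evaluation setup only, not of the bendlet transform itself: Rician noise can be simulated as the magnitude of a complex signal with Gaussian noise in both channels, and a denoised result scored with PSNR and SSIM. The synthetic image and noise level are assumptions for the example.

```python
# Evaluation-setup sketch: add Rician noise to a clean image and score a denoised
# result with PSNR/SSIM. The bendlet denoiser is not reproduced here.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def add_rician_noise(img: np.ndarray, sigma: float) -> np.ndarray:
    """Rician model: magnitude of a complex signal with Gaussian noise in both channels."""
    n_re = np.random.normal(0.0, sigma, img.shape)
    n_im = np.random.normal(0.0, sigma, img.shape)
    return np.sqrt((img + n_re) ** 2 + n_im ** 2)

clean = np.random.rand(128, 128)            # stand-in for an MR slice in [0, 1]
noisy = add_rician_noise(clean, sigma=0.05)
denoised = noisy                            # replace with the denoiser under test
rng = denoised.max() - denoised.min()
print(peak_signal_noise_ratio(clean, denoised, data_range=rng))
print(structural_similarity(clean, denoised, data_range=rng))
```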
Article
Full-text available
This paper presents two newly developed high-precision CMOS proximity capacitance image sensors: Chip A, with 12 μm pitch pixels and a large detection area of 1.68 cm²; and Chip B, with 2.8 μm pitch and 1.8 M pixels for higher resolution. Both fabricated chips achieved a capacitance detection precision of less than 100 zF (10⁻¹⁹ F) at an input voltage of 20 V and less than 10 zF (10⁻²⁰ F) at 300 V thanks to a noise-cancelling technique. Furthermore, by using multiple input pulse amplitudes, a capacitance detection dynamic range of up to 123 dB was achieved. The spatial resolution improvement was confirmed by the experimentally obtained modulation transfer function for Chip B with various line-and-space patterns. Examples of capacitance imaging using the fabricated chips are also demonstrated.
Article
Full-text available
With the rapid advance of quantum machine learning, several proposals for the quantum analogue of the convolutional neural network (CNN) have emerged. In this work, we benchmark fully parameterized quantum convolutional neural networks (QCNNs) for classical data classification. In particular, we propose a quantum neural network model inspired by the CNN that only uses two-qubit interactions throughout the entire algorithm. We investigate the performance of various QCNN models differentiated by the structures of their parameterized quantum circuits, quantum data encoding methods, classical data pre-processing methods, cost functions and optimizers on the MNIST and Fashion MNIST datasets. In most instances, QCNN achieved excellent classification accuracy despite having a small number of free parameters, and the QCNN models performed noticeably better than CNN models under similar training conditions. Since the QCNN algorithm presented in this work utilizes fully parameterized and shallow-depth quantum circuits, it is suitable for Noisy Intermediate-Scale Quantum (NISQ) devices.
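As a rough illustration of the kind of circuit involved (not the paper's exact ansatz), a QCNN-style model can be built entirely from parameterized two-qubit blocks: a "convolution" layer of local rotations plus entanglers, followed by a pooled readout on one qubit. The 4-qubit layout, angle encoding and parameter shapes below are assumptions; PennyLane's simulator is used.

```python
# Minimal QCNN-style sketch built from two-qubit blocks (illustrative, not the paper's model).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def conv_block(params, wires):
    """Two-qubit convolution unit: local rotations followed by an entangler."""
    qml.RY(params[0], wires=wires[0])
    qml.RY(params[1], wires=wires[1])
    qml.CNOT(wires=wires)

@qml.qnode(dev)
def qcnn(features, params):
    qml.AngleEmbedding(features, wires=range(n_qubits))      # encode 4 classical features
    for pair, p in zip([(0, 1), (2, 3), (1, 2)], params):    # one convolution layer
        conv_block(p, wires=list(pair))
    return qml.expval(qml.PauliZ(0))                          # "pooled" readout

features = np.array([0.1, 0.5, 0.2, 0.9])
params = np.random.uniform(0, np.pi, (3, 2))
print(qcnn(features, params))
```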
Article
Full-text available
Recently, demand for high-resolution complementary metal-oxide semiconductor (CMOS) image sensors has increased dramatically. As the pixel size shrinks to the submicron range, however, the quality of the sensor image decreases; in particular, dark current can act as a large noise source that degrades image quality. Fluorine ion implantation is commonly used to reduce dark current by lowering the trap state density, but the implanted fluorine diffuses out of the silicon surface and disappears after the annealing process. In this paper, we analyze the effects of carbon implantation on fluorine diffusion and on the dark current characteristics of a CMOS image sensor. When carbon was implanted at doses of 5.0 × 10¹⁴ and 1.0 × 10¹⁵ ions/cm² in the N+ area of the FD region, the retained dose of fluorine improved by more than 131% and 242%, respectively, compared with no carbon implantation, indicating that a higher carbon implantation concentration leads to a higher retained dose of fluorine after annealing. As the retained fluorine concentration increased, the number of minority carriers (electrons or holes) decreased owing to greater Si-F bond formation, which increased the sheet resistance. When carbon was implanted at 1.0 × 10¹⁵ ions/cm², the defective pixel count, dark current, transient noise, and flicker improved by 25%, 9.4%, 1%, and 28%, respectively, compared with no carbon implantation. Therefore, carbon implantation suppresses the diffusion of fluorine after annealing and thereby improves the dark current characteristics.
Article
Full-text available
For visually impaired people (VIPs), the ability to convert text to sound can mean a new level of independence or the simple joy of a good book. With significant advances in optical character recognition (OCR) in recent years, a number of reading aids are appearing on the market. These reading aids convert images captured by a camera to text which can then be read aloud. However, all of these reading aids suffer from a key issue—the user must be able to visually target the text and capture an image of sufficient quality for the OCR algorithm to function—no small task for VIPs. In this work, a sound-emitting document image quality assessment metric (SEDIQA) is proposed which allows the user to hear the quality of the text image and automatically captures the best image for OCR accuracy. This work also includes testing of OCR performance against image degradations, to identify the most significant contributors to accuracy reduction. The proposed no-reference image quality assessor (NR-IQA) is validated alongside established NR-IQAs and this work includes insights into the performance of these NR-IQAs on document images. SEDIQA is found to consistently select the best image for OCR accuracy. The full system includes a document image enhancement technique which introduces improvements in OCR accuracy with an average increase of 22% and a maximum increase of 68%.
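SEDIQA's actual quality metric is not reproduced here, but the idea of scoring candidate captures and keeping the best one for OCR can be sketched with a common no-reference sharpness proxy, the variance of the Laplacian. The function names and the use of OpenCV are assumptions for illustration.

```python
# Illustration only: score candidate frames with a simple no-reference sharpness
# proxy (variance of the Laplacian) and keep the sharpest one for OCR.
import cv2

def sharpness_score(gray_image) -> float:
    """Higher value = sharper image (variance of the Laplacian response)."""
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

def best_frame_for_ocr(frames):
    """Pick the candidate frame with the highest sharpness score."""
    return max(frames, key=sharpness_score)

# frames = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in captured_frames]
# best = best_frame_for_ocr(frames)
```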
Article
Full-text available
Healthcare professionals have been increasingly viewing medical images and videos in their routine clinical practice, and this in a wide variety of environments. Both the perception and interpretation of medical visual information, across all branches of practice or medical specialties (e.g., diagnostic, therapeutic, or surgical medicine), career stages, and practice settings (e.g., emergency care), appear to be critical for patient care. However, medical images and videos are not self-explanatory and, therefore, need to be interpreted by humans, i.e., medical experts. In addition, various types of degradations and artifacts may appear during image acquisition or processing, and consequently affect medical imaging data. Such distortions tend to impact viewers' quality of experience, as well as their clinical practice. It is accordingly essential to better understand how medical experts perceive the quality of visual content. Thankfully, progress has been made in the recent literature towards such understanding. In this article, we present an up-to-date state of the art of relatively recent (i.e., not older than ten years old) existing studies on the subjective quality assessment of medical images and videos, as well as research works using task-based approaches. Furthermore, we discuss the merits and drawbacks of the methodologies used, and we provide recommendations about experimental designs and statistical processes to evaluate the perception of medical images and videos for future studies, which could then be used to optimise the visual experience of image readers in real clinical practice. Finally, we tackle the issue of the lack of available annotated medical image and video quality databases, which appear to be indispensable for the development of new dedicated objective metrics.
Article
Full-text available
This paper presents a model based on a Convolutional Neural Network (CNN) to identify and classify the fungi that cause disease in apple plant leaves. Apple scab, rust, black rot, and healthy leaves are studied and classified. The publicly available plant pathology dataset, consisting of 9164 images, is used for experimentation. The proposed CNN model identifies and classifies the apple leaves into these four categories and can detect and classify diseases with an accuracy of 88.9%.
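A minimal sketch of a four-class CNN classifier of this kind is shown below. The layer sizes, input resolution and optimizer are assumptions for the example and are not the paper's exact architecture.

```python
# Minimal CNN sketch for the four leaf classes (scab, rust, black rot, healthy).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),   # apple scab, rust, black rot, healthy
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```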
Article
Full-text available
It is unknown whether near-term quantum computers are advantageous for machine learning tasks. In this work we address this question by trying to understand how powerful and trainable quantum machine learning models are in relation to popular classical neural networks. We propose the effective dimension—a measure that captures these qualities—and prove that it can be used to assess any statistical model’s ability to generalize on new data. Crucially, the effective dimension is a data-dependent measure that depends on the Fisher information, which allows us to gauge the ability of a model to train. We demonstrate numerically that a class of quantum neural networks is able to achieve a considerably better effective dimension than comparable feedforward networks and train faster, suggesting an advantage for quantum machine learning, which we verify on real quantum hardware. A class of quantum neural networks is presented that outperforms comparable classical feedforward networks. They achieve a higher capacity in terms of effective dimension and at the same time train faster, suggesting a quantum advantage.
Article
Full-text available
The use of sketches for offender recognition has become a typical practice of law enforcement agencies and defense systems. The usual procedure involves producing a sketch of the convict from the crime witness's description. However, research has shown the shortcomings of this customary practice, as it introduces a high level of discrepancy into the identification process. The advent of computer vision techniques has allowed this traditional procedure to be replaced with intelligent machines capable of ruling out such discrepancies and thereby assisting the investigation process. This paper investigates an adversarial network for generating colour photograph images from sketches, which are then classified using pre-trained transfer learning models to accomplish identification. Further, to enhance the adversarial network's photo-generation performance, we also employ a novel sketch generator based on a gamma adjustment technique. Experimental trials were conducted on image datasets open to the research community. The outcomes show that the proposed system achieved a similarity score of at least 91% and an average identification accuracy of more than 70% on all datasets. The comparative analysis presented in this work also attests that the proposed technique performs better than other state-of-the-art techniques.
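The paper's exact gamma-adjustment sketch generator is not detailed in the abstract; as a hedged illustration, a common pencil-sketch approximation combines an inverted-blur "dodge blend" with a gamma adjustment step. All parameter values below are assumptions.

```python
# Hedged illustration of a gamma-adjusted sketch generator (not the paper's exact procedure):
# classic dodge-blend pencil sketch followed by gamma correction to control stroke darkness.
import cv2
import numpy as np

def photo_to_sketch(bgr_image, gamma: float = 1.5):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(255 - gray, (21, 21), 0)
    sketch = cv2.divide(gray, 255 - blurred, scale=256)       # dodge blend
    normalized = sketch.astype(np.float32) / 255.0
    adjusted = np.power(normalized, gamma)                    # gamma adjustment
    return (adjusted * 255).astype(np.uint8)

# sketch = photo_to_sketch(cv2.imread("face.jpg"), gamma=1.5)
```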
Conference Paper
Full-text available
The Aspergillus genus is relevant for distinction and classification in the fields of food, agriculture and medicine. Since the genus contains both harmful and useful species, correct classification is essential. Categorization of these conidial fungi is usually done through manual microscopical procedures, which involve a degree of subjectivity. In order to classify Aspergillus samples faster and more accurately, image processing and machine learning are incorporated in this study. Pre-trained deep learning models are employed to classify 9 kinds of Aspergillus. The methodology comprises preprocessing, deep learning (training) and performance evaluation, where performance evaluation covers the validation accuracy and running times of the system after training, reported through graphs and tabulated data. This study achieved a testing accuracy of 93.3333%, showing that the transferred knowledge is accurate, compatible and reliable.
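A typical transfer-learning setup of the kind described consists of a pre-trained backbone whose final layer is replaced by a new 9-class head. The choice of ResNet-18, the frozen backbone and the hyperparameters below are assumptions, not the paper's exact configuration (and the torchvision weights API may differ by version).

```python
# Sketch of a transfer-learning classifier with a 9-class head for Aspergillus species.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                     # freeze the transferred feature extractor
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 9)    # new trainable 9-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward(); optimizer.step()
```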
Article
Full-text available
We extend the concept of transfer learning, widely applied in modern machine learning algorithms, to the emerging context of hybrid neural networks composed of classical and quantum elements. We propose different implementations of hybrid transfer learning, but we focus mainly on the paradigm in which a pre-trained classical network is modified and augmented by a final variational quantum circuit. This approach is particularly attractive in the current era of intermediate-scale quantum technology since it allows one to optimally pre-process high-dimensional data (e.g., images) with any state-of-the-art classical network and to embed a select set of highly informative features into a quantum processor. We present several proof-of-concept examples of the convenient application of quantum transfer learning for image recognition and quantum state classification. We use the cross-platform software library PennyLane to experimentally test a high-resolution image classifier with two different quantum computers, respectively provided by IBM and Rigetti.
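The paradigm can be sketched schematically as a frozen classical feature extractor feeding a small variational quantum circuit that acts as the trainable head. The layer widths, the 4-qubit circuit and the use of PennyLane's TorchLayer are illustrative assumptions and do not reproduce the paper's exact integration.

```python
# Schematic hybrid classical-quantum transfer learning sketch (illustrative assumptions).
import torch
import torch.nn as nn
from torchvision import models
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def quantum_head(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (2, n_qubits)}            # 2 entangling layers
qlayer = qml.qnn.TorchLayer(quantum_head, weight_shapes)

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():                       # freeze the classical extractor
    p.requires_grad = False
backbone.fc = nn.Sequential(
    nn.Linear(512, n_qubits),   # compress classical features to 4 circuit inputs
    qlayer,                     # variational quantum circuit as trainable head
    nn.Linear(n_qubits, 2),     # 2-class output
)
```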
Chapter
Full-text available
Noise reduction is a challenging task for researchers in digital image processing and has a wide range of applications in automation, the IoT (Internet of Things), medicine, etc. Noise causes critical disturbances and degrades the quality of medical images, including ultrasound images in biomedical imaging. An image can be regarded as a collection of data, and the presence of noise degrades its quality, so it is vital to restore the original image content in order to extract as much information as possible. Digital images are degraded by noise during transmission and acquisition; noise reduces image contrast, edges, textures, object details, and resolution, thereby decreasing the performance of postprocessing algorithms. This paper mainly focuses on Gaussian noise, salt-and-pepper noise, uniform noise, and speckle noise. Different filtering techniques can be adopted for noise reduction to improve the visual quality and recognition of images. Here, these four types of noise are applied to test images, and linear and nonlinear filtering methods, namely the Gaussian filter, median filter, mean filter and Wiener filter, are applied for noise reduction. The performance of each filter is estimated through parameters such as the mean square error (MSE), peak signal-to-noise ratio (PSNR), average difference (AD) and maximum difference (MD), with the aim of suppressing the noise without corrupting the medical image data.
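The comparison described above can be sketched as follows: corrupt an image with one noise type, apply the four filters, and score each result with MSE and PSNR. The synthetic image, noise level and filter window sizes are assumptions for the example.

```python
# Sketch of the filter comparison: Gaussian noise plus Gaussian/median/mean/Wiener filtering,
# scored with MSE and PSNR.
import numpy as np
from scipy import ndimage
from scipy.signal import wiener

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=1.0):
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

clean = np.random.rand(128, 128)                                          # stand-in test image
noisy = np.clip(clean + np.random.normal(0, 0.05, clean.shape), 0, 1)     # Gaussian noise

filtered = {
    "gaussian": ndimage.gaussian_filter(noisy, sigma=1),
    "median":   ndimage.median_filter(noisy, size=3),
    "mean":     ndimage.uniform_filter(noisy, size=3),
    "wiener":   wiener(noisy, mysize=3),
}
for name, img in filtered.items():
    print(f"{name}: MSE={mse(clean, img):.5f}, PSNR={psnr(clean, img):.2f} dB")
```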
Article
Full-text available
In facial expression recognition applications, classification accuracy decreases because of blur, illumination and localization problems in the images, so a robust emotion recognition technique is needed. In this work, a Multi-scale and Rotation-Invariant Phase Pattern (MRIPP) is proposed. The MRIPP extracts features from facial images, and the extracted patterns are blur-insensitive, rotation-invariant and robust. The performance of classification algorithms such as Fisher faces, Support Vector Machine (SVM), Extreme Learning Machine (ELM), Convolutional Neural Network (CNN) and Deep Neural Network (DNN) is analyzed. In order to reduce classification time, an OPTICS-based pre-processing of the features is proposed that creates a non-redundant and compressed training set used to classify the test set. Ten-fold cross-validation is used in the experimental analysis, with classification accuracy as the performance metric. The proposed approach has been evaluated on six datasets, namely the Japanese Female Facial Expression (JAFFE), Cohn-Kanade (CK+), Multimedia Understanding Group (MUG), Static Facial Expressions in the Wild (SFEW), Oulu-Chinese Academy of Science, Institute of Automation (Oulu-CASIA) and Man-Machine Interaction (MMI) datasets, achieving classification accuracies of 98.2%, 97.5%, 95.6%, 35.5%, 87.7% and 82.4%, respectively, for seven-class emotion detection using a stack of Restricted Boltzmann Machines (RBM), which is high when compared to other recent methods.
Article
Full-text available
Convolutional neural networks (CNNs) have rapidly risen in popularity for many machine learning applications, particularly in the field of image recognition. Much of the benefit generated from these networks comes from their ability to extract features from the data in a hierarchical manner. These features are extracted using various transformational layers, notably the convolutional layer which gives the model its name. In this work, we introduce a new type of transformational layer called a quantum convolution, or quanvolutional layer. Quanvolutional layers operate on input data by locally transforming the data using a number of random quantum circuits, in a way that is similar to the transformations performed by random convolutional filter layers. Provided these quantum transformations produce meaningful features for classification purposes, then this algorithm could be of practical use for near-term quantum computers as it requires small quantum circuits with little to no error correction. In this work, we empirically evaluated the potential benefit of these quantum transformations by comparing three types of models built on the MNIST dataset: CNNs, quantum convolutional neural networks (QNNs), and CNNs with additional non-linearities introduced. Our results showed that the QNN models had both higher test set accuracy as well as faster training compared with the purely classical CNNs.
Article
Full-text available
Brain tumor is one of the deadliest diseases nowadays. A tumor consists of a cluster of abnormal cells grouped around the inner portion of the human brain, and it affects the brain by squeezing and damaging healthy tissue. It also raises intracranial pressure, and as a result the tumor cells grow rapidly, which may lead to death. It is therefore desirable to diagnose and detect brain tumors at an early stage, which may increase the patient's survival rate. The major objective of this research work is to present a new technique for tumor detection. The proposed architecture accurately segments and classifies benign and malignant tumor cases. Different spatial-domain methods are applied to enhance and accurately segment the input images. Moreover, AlexNet and GoogLeNet are utilized for classification, with two score vectors obtained after the softmax layer. Both score vectors are then fused and supplied to multiple classifiers along with the softmax layer. The proposed model is evaluated on top medical image computing and computer-assisted intervention (MICCAI) challenge datasets, i.e., multimodal brain tumor segmentation (BRATS) 2013, 2014, 2015, 2016 and ischemic stroke lesion segmentation (ISLES) 2018.
Article
Full-text available
The aim of this work is to develop a Computer-Aided-Brain-Diagnosis (CABD) system that can determine if a brain scan shows signs of Alzheimer’s disease. The method utilizes Magnetic Resonance Imaging (MRI) for classification with several feature extraction techniques. MRI is a non-invasive procedure, widely adopted in hospitals to examine cognitive abnormalities. Images are acquired using the T2 imaging sequence. The paradigm consists of a series of quantitative techniques: filtering, feature extraction, Student’s t-test based feature selection, and k-Nearest Neighbor (KNN) based classification. Additionally, a comparative analysis is done by implementing other feature extraction procedures that are described in the literature. Our findings suggest that the Shearlet Transform (ST) feature extraction technique offers improved results for Alzheimer’s diagnosis as compared to alternative methods. The proposed CABD tool with the ST + KNN technique provided accuracy of 94.54%, precision of 88.33%, sensitivity of 96.30% and specificity of 93.64%. Furthermore, this tool also offered an accuracy, precision, sensitivity and specificity of 98.48%, 100%, 96.97% and 100%, respectively, with the benchmark MRI database.
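The selection and classification stage of such a pipeline can be sketched as keeping features whose two-class Student's t-test p-value is small and then classifying with k-Nearest Neighbors. The shearlet feature extraction is not reproduced; the feature matrix, significance threshold and k below are assumptions for the example.

```python
# Sketch of t-test based feature selection followed by KNN classification.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import KNeighborsClassifier

def select_features(X, y, alpha=0.05):
    """Return column indices whose class-wise means differ significantly (Student's t-test)."""
    _, p_values = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.where(p_values < alpha)[0]

X_train = np.random.rand(100, 50)          # stand-in for extracted feature vectors
y_train = np.random.randint(0, 2, 100)     # 0 = normal, 1 = Alzheimer's
selected = select_features(X_train, y_train)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train[:, selected], y_train)
# predictions = knn.predict(X_test[:, selected])
```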
Article
Full-text available
Fungi have diverse biotechnological applications in, among others, agriculture, bioenergy generation, or remediation of polluted soil and water. In this context, culture media based on colour change in response to degradation of dyes are particularly relevant, but measuring dye decolourisation of fungal strains mainly relies on a visual and semiquantitative classification of colour intensity changes. Such a classification is a subjective, time-consuming, and difficult to reproduce process. In order to deal with these problems, we have performed a systematic evaluation of different image-classification approaches considering ad hoc expert features, traditional computer vision features, and transfer-learning features obtained from deep neural networks. Our results favour the transfer learning approach reaching an accuracy of 96.5% in the evaluated dataset. In this paper, we provide the first, at least up to the best of our knowledge, method to automatically characterise dye decolourisation level of fungal strains from images of inoculated plates.
Article
Full-text available
Aspergillus flavus is a saprophytic fungus that infects corn, peanuts, tree nuts and other agriculturally important crops. Once the crop is infected, the fungus has the potential to secrete one or more mycotoxins, the most carcinogenic of which is aflatoxin. Aflatoxin-contaminated crops are deemed unfit for human or animal consumption, which results in both food and economic losses. Within A. flavus, two morphotypes exist: the S strains (small sclerotia) and L strains (large sclerotia). Significant morphological and physiological differences exist between the two morphotypes. For example, the S-morphotypes produce sclerotia that are smaller (< 400 μm), greater in quantity, and contain higher concentrations of aflatoxin than the L-morphotypes (> 400 μm). The morphotypes also differ in pigmentation, pH homeostasis in culture and the number of spores produced. Here we report the first full genome sequence of an A. flavus S morphotype, strain AF70. We provide a comprehensive comparison of the A. flavus S-morphotype genome sequence with a previously sequenced genome of an L-morphotype strain (NRRL 3357), including an in-depth analysis of secondary metabolic clusters and the identification of SNPs within their aflatoxin gene clusters.
Article
Full-text available
The quantum neural network (QNN) is a useful tool that has seen increasing development, mainly after the twentieth century. Analogous to the artificial neural network (ANN), the QNN is a recently proposed concept that combines the basics of ANNs with the quantum computation paradigm, making it more powerful than the traditional ANN. QNNs are being used in computer games, function approximation, handling big data, and so on; QNN algorithms are also used in modelling social networks, associative memory devices, and automated control systems. Different QNN models have been proposed by researchers throughout the world, but a systematic study of these models has not been carried out to date, and applications of QNNs appear only in some of the related research papers. This paper therefore surveys the different models that have been developed and their implementation in various applications. To illustrate the power of QNNs, a few results and arguments are included to show that these new models are more useful and efficient than traditional ANNs.
Article
Full-text available
Visible/near-infrared (Vis/NIR) hyperspectral imaging (400-1000 nm) was applied to identify the growth process of Aspergillus flavus and Aspergillus parasiticus. The hyperspectral images of the two fungi growing on rose bengal medium were recorded daily for 6 days. A band ratio using the two bands at 446 nm and 460 nm separated A. flavus and A. parasiticus on day 1 from the other days, and the image at the 520 nm band classified A. parasiticus on day 6. Principal component analysis (PCA) was performed on the cleaned hyperspectral images. The score plot of the second to sixth principal components (PC2 to PC6) gave a rough clustering of fungi with the same incubation time; however, in the plot, A. flavus on day 3 and day 4 and A. parasiticus on day 2 and day 3 overlapped. The average spectra of each fungus for each growth day were extracted, and then PCA and a support vector machine (SVM) classifier were applied to the full spectral range. SVM models built on PC2 to PC6 could identify fungal growth days with accuracies of 92.59% and 100% for A. flavus and A. parasiticus, respectively. In order to simplify the prediction models, competitive adaptive reweighted sampling (CARS) was employed to choose optimal wavelengths. As a result, nine (402, 442, 487, 502, 524, 553, 646, 671, 760 nm) and seven (461, 538, 542, 742, 753, 756, 919 nm) wavelengths were selected for A. flavus and A. parasiticus, respectively. New SVM models based on the optimal wavelengths were built, and the identification accuracies were 83.33% and 98.15% for A. flavus and A. parasiticus, respectively. Finally, visualized prediction images for A. flavus and A. parasiticus on different growth days were produced by applying the optimal-wavelength SVM models to every pixel of the hyperspectral image.
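The spectral classification stage (PCA followed by an SVM on the averaged spectra) can be sketched as below. The synthetic spectra, component count and SVM hyperparameters are assumptions standing in for the paper's PC2-PC6 features; the CARS wavelength selection is not reproduced.

```python
# Sketch of PCA + SVM classification of averaged spectra by growth day.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

n_bands = 200
X = np.random.rand(120, n_bands)          # stand-in for averaged Vis/NIR spectra
y = np.repeat(np.arange(6), 20)           # growth day label (day 1..6)

model = make_pipeline(StandardScaler(), PCA(n_components=6), SVC(kernel="rbf", C=10))
model.fit(X, y)
# accuracy = model.score(X_test, y_test)
```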
Article
Full-text available
Quantum information technologies, on the one side, and intelligent learning systems, on the other, are both emergent technologies that will likely have a transforming impact on our society in the future. The respective underlying fields of basic research, quantum information (QI) versus machine learning and artificial intelligence (AI), have their own specific questions and challenges, which have hitherto been investigated largely independently. However, in a growing body of recent work, researchers have been probing the question to what extent these fields can indeed learn and benefit from each other. QML explores the interaction between quantum computing and machine learning, investigating how results and techniques from one field can be used to solve the problems of the other. In recent time, we have witnessed significant breakthroughs in both directions of influence. For instance, quantum computing is finding a vital application in providing speed-ups for machine learning problems, critical in our "big data" world. Conversely, machine learning already permeates many cutting-edge technologies, and may become instrumental in advanced quantum technologies. Aside from quantum speed-up in data analysis, or classical machine learning optimization used in quantum experiments, quantum enhancements have also been (theoretically) demonstrated for interactive learning tasks, highlighting the potential of quantum-enhanced learning agents. Finally, works exploring the use of artificial intelligence for the very design of quantum experiments, and for performing parts of genuine research autonomously, have reported their first successes. Beyond the topics of mutual enhancement (exploring what ML/AI can do for quantum physics, and vice versa), researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. In this review, we describe the main ideas, recent developments, and progress in a broad spectrum of research investigating machine learning and artificial intelligence in the quantum domain.
Article
Partial least squares regression (PLSR) is an essential multivariate correlation analysis method in the machine learning field. In this paper, we propose a variational quantum algorithm for partial least squares regression (VQPLSR). By exploring the relationship between standard basis states and optimization, we design a cost function that can train regression parameters and weight vectors simultaneously. The VQPLSR requires only one copy of the variables as input, which reduces the complexity of the quantum circuit implementation. Compared with PLSR, the VQPLSR achieves an exponential speed-up in the independent variable dimension n and the dependent variable dimension w. Simulation results show that regression parameters and weight vectors can be constructed with an error of ~10⁻⁵ for a 4 × 2 dimensional variable matrix. This algorithm inspires us to explore more quantum applications in machine learning.
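The variational quantum algorithm itself is not reproduced here; as a classical point of reference, the PLSR that it targets can be fit in a few lines, e.g. for a 4 × 2 variable setting similar to the one simulated in the paper. The synthetic data below is an assumption for illustration.

```python
# Classical PLSR baseline for a 4 independent-variable, 2 dependent-variable problem.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

X = np.random.rand(100, 4)                       # independent variables (n = 4)
B = np.array([[1.0, -0.5],
              [0.3,  0.8],
              [-0.2, 0.1],
              [0.5,  0.4]])
Y = X @ B + 0.01 * np.random.randn(100, 2)       # dependent variables (w = 2)

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print(pls.x_weights_.shape)     # weight vectors
print(pls.coef_.shape)          # regression coefficients
```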
Article
Quality assessment of natural images is influenced by perceptual mechanisms, e.g., attention and contrast sensitivity, and quality perception can be generated in a hierarchical process. This paper proposes an architecture of Attention Integrated Hierarchical Image Quality networks (AIHIQnet) for no-reference quality assessment. AIHIQnet consists of three components: general backbone network, perceptually guided neck network, and head network. Multi-scale features extracted from the backbone network are fused to simulate image quality perception in a hierarchical manner. The attention and contrast sensitivity mechanisms modelled by an attention module capture essential information for quality perception. Considering that image rescaling potentially affects perceived quality, appropriate pooling methods in the non-convolution layers in AIHIQnet are employed to accept images with arbitrary resolutions. Comprehensive experiments on publicly available databases demonstrate outstanding performance of AIHIQnet compared to state-of-the-art models. Ablation experiments were performed to investigate the variants of the proposed architecture and reveal importance of individual components. (The complete source code is published at https://github.com/junyongyou/aihiqnet)
Article
Crowdsourcing makes it much faster and cheaper to obtain labels for a large amount of data used in supervised learning. In the crowdsourcing scenario, an integrated label is inferred from a multiple noisy label set for each instance using ground truth inference algorithms, which is called label integration. However, a certain level of label noise remains in the integrated dataset, which degrades the performance of the models trained on it. To the best of our knowledge, existing label noise correction algorithms only use the original attribute space and do not use the information contained in the multiple noisy label sets for building models. To solve these problems, we propose a novel integrated label noise correction algorithm called co-training-based noise correction (CTNC). In CTNC, the weight is first calculated from the information provided by the multiple noisy label set for each instance. Subsequently, a label noise filter is used to identify noisy instances; a clean set and a noisy set are thus obtained. Another attribute view of each instance in both the clean and noisy sets is then generated by the classifiers trained on the original attribute view of the clean set. Finally, a co-training framework is used to train two classifiers to relabel the integrated instances. The performance on 34 simulated datasets and 2 real-world datasets demonstrates that our proposed CTNC outperforms all state-of-the-art label noise correction algorithms used for comparison.
Article
Microbial electrosynthesis (MES) is an electrochemical reduction technology that converts carbon dioxide (CO2) efficiently into chemicals by electrically driving microorganisms attached to electrodes. However, due to the limited solubility of CO2 and hydrogen (H2), low mass transfer efficiency affects the performance of MES. In this study, fillers were introduced into the MES system and combined with a vertical or horizontal cathode. Through the barrier and shearing effect of fillers, it realized the reduction of bubbles volume, the increase of gas residence time and the optimization of mass transfer rate, thereby providing a sufficient substrate supply for the biocatalyst. The results showed that MES with horizontal cathode and 4 series of fillers generated the highest acetate production rate (0.18 gL⁻¹ day⁻¹), which was 1.6 times that of the control group. Furthermore, the acetate concentration reached 5.28 ± 0.2 g L⁻¹ within 30 days. Scanning electron microscope and microbial community analyses showed that the filler was beneficial to the growth of biofilm on cathodes and fillers, and improved the enrichment of Acetobacterium. The presence of fillers significantly enhanced the performance of MES and demonstrated the potential as a new and simple strategy for the MES reactor improvement.
Article
Facial alignment is an essential task for many higher-level facial analysis applications, such as animation, human activity recognition and human-computer interaction. Although the recent availability of big datasets and powerful deep-learning approaches have enabled major improvements in state-of-the-art accuracy, the performance of current approaches can severely deteriorate when dealing with images in highly unconstrained conditions, which limits the real-life applicability of such models. In this paper, we propose a composite recurrent tracker with internal denoising that jointly addresses both single-image facial alignment and deformable facial tracking in the wild. Specifically, we incorporate multilayer LSTMs to model temporal dependencies of variable length and introduce an internal denoiser which selectively enhances the input images to improve the robustness of our overall model. We achieve this by combining 4 different sub-networks that specialize in each of the key tasks that are required, namely face detection, bounding-box tracking, facial region validation and facial alignment with internal denoising. These blocks are endowed with novel algorithms resulting in a facial tracker that is accurate, robust to in-the-wild settings and resilient against drifting. We demonstrate this by testing our model on the 300-W and Menpo datasets for single-image facial alignment, and the 300-VW dataset for deformable facial tracking. Comparison against 20 other state-of-the-art methods demonstrates the excellent performance of the proposed approach.
Article
Three new pyrazine derivatives, named talaropyrazines A–C (1–3), were isolated from the chemical investigation of the fungus Talaromyces minioluteus. Their planar structures were established by extensive 1D and 2D NMR and HRESIMS spectroscopic data analyses. The absolute configurations of 2 and 3 were defined by the synthesis of all the four isomers, respectively. The detailed comparison of the NMR and HPLC spectra of the natural product and the synthetic ones allows us to propose a general mechanism for the formation of diastereomers of 2 and 3. Moreover, the in vitro bioassay showed the synthetic congener (2′R,3′R)-3 exhibited potential inhibitory activity in the murine splenocytes stimulated by anti-CD3/anti-CD28 mAbs, with IC50 values of 5.85 μM.
Article
Objective Employing transfer learning (TL) with convolutional neural networks (CNNs), well-trained on the non-medical ImageNet dataset, has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of the problem description, input, methodology, and outcome. Materials and methods To identify relevant studies, MEDLINE, IEEE, and the ACM digital library were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. Results After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of the feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%) and tooth (57%) studies. AlexNet for brain (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used for studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). 35% of the studies compared their model with other well-trained CNN models and 33% of them provided visualization for interpretation. Discussion This study identified the most prevalent tracks of implementation in the literature for data preparation, methodology selection and output evaluation for various medical image analysis tasks. We also identified several critical research gaps in the TL studies on medical image analysis. The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as to identify research gaps and opportunities for innovation.
Conference Paper
Recognition of bacterial species is essential, since biological information on microorganisms is critical in medicine, veterinary science, biochemistry, the food industry and agriculture. Although a large portion of these organisms have a positive effect on everyday life, they can also be the cause of numerous illnesses; automating the recognition procedure can therefore find application in preventive medicine and treatment. One of the most significant features that can be perceived in the images is the shape of a bacterial cell. In this research article, the principal shapes, namely cylindrical (rod), spherical and spiral, are categorized using a deep learning approach. Nevertheless, recognizing microbes based on shape alone is difficult, because many bacteria share essentially the same forms; the most discriminating features are the shape and the size of the microorganisms, and without an automated approach it is difficult to classify the bacterial colonies. Experimental outcomes of the proposed methodology are obtained using three different models, namely VGG, MobileNet and Inception, of which VGG achieved a remarkable accuracy of 99%.
Book
Quantum machine learning investigates how quantum computers can be used for data-driven prediction and decision making. The book summarises and conceptualises ideas of this relatively young discipline for an audience of computer scientists and physicists from a graduate level upwards. It aims at providing a starting point for those new to the field, showcasing a toy example of a quantum machine learning algorithm and providing a detailed introduction to the two parent disciplines. For more advanced readers, the book discusses topics such as data encoding into quantum states, quantum algorithms and routines for inference and optimisation, as well as the construction and analysis of genuine "quantum learning models". A special focus lies on supervised learning, and applications for near-term quantum devices.
Chapter
The facial sketch of an offender may be one of the essential pieces of evidence in arresting the offender. Usually, a sketch artist or software package is used to generate the facial sketch on the basis of the eyewitness's description, which covers discrete areas of the face such as the nose, mouth and eyebrows. Most of the time, the eyewitness cannot describe all the facial cues of a criminal in terms of length and width, so the process can be time-consuming for both the artist and the eyewitness. To overcome this difficulty, our work aims to make the eyewitness's job easier by describing the criminal's facial parts in natural language, such as 'long nose' and 'big eyes'. To this end, we have adopted standard linguistic descriptors in the English language that provide an intuitive human perception of facial regions. In this paper, we train on more than 200 sketches to detect faces and extract specific corner points. These are then classified hierarchically in terms of linguistic hedges based on the Euclidean distance, and finally the sketches are retrieved based on their scores. Experimental results are obtained using benchmark datasets such as PRIP_HDC and CUHK.
Article
One of the most significant and widely used methods for identifying a culprit in the field of forensic science is generating a sketch of the suspect from descriptions given by an eyewitness to the crime. However, there is a high level of uncertainty in recognizing an individual solely from a sketch. Recognition of sketches based on photos of an individual is non-trivial, as there are differences between the domain features of a sketch and a photo. To streamline this process, this article presents a methodology for generating a colored photo from a sketch, which can then be used for identification using a variety of classification techniques. Implementation of the proposed method involves a trained Convolution Neural Network for sketch generation paired with a conditional Generative Adversarial Network's pix2pix model for color photo generation. Experimental results of the work are validated using standard datasets, and the proposed model achieved a minimum average similarity index value of 65% on all employed datasets with a training efficiency of more than 98% at every epoch level.
Article
Emotions are important cues for understanding the intentions of other people during communication. However, identifying the different emotional states of an individual, such as joy, sadness, and anger, from facial expressions or vocal tone is less effective because of the variation in facial and vocal outputs. Analysing body posture can therefore provide more reliable recognition of emotions. In this work, posture is analysed by mapping the body joints with a Reeb graph, which arranges all the joints in a curve structure for learning the posture, while the angles between the joints are computed with the law of cosines, which captures the expressive character of the posture. In addition, more reliable recognition is obtained by preprocessing the input image with a fusion of Median and Wiener filters, which suppresses the five types of noise considered, so that detailed features such as invariant features, depth sequential silhouettes and spatiotemporal body joints can be extracted to support efficient posture analysis. These features are then used to identify the different emotions with a tree-based classifier, giving better performance in terms of execution time and accuracy.
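The law-of-cosines step can be illustrated as follows: the angle at a joint B formed by segments BA and BC is recovered from the three joint coordinates using cos(B) = (|BA|² + |BC|² - |AC|²) / (2·|BA|·|BC|). The coordinates below are toy values, not data from the paper.

```python
# Sketch of joint-angle computation with the law of cosines.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle (degrees) at joint b formed by points a-b-c."""
    a, b, c = map(np.asarray, (a, b, c))
    ab, bc, ac = np.linalg.norm(a - b), np.linalg.norm(c - b), np.linalg.norm(a - c)
    cos_b = (ab ** 2 + bc ** 2 - ac ** 2) / (2 * ab * bc)         # law of cosines
    return float(np.degrees(np.arccos(np.clip(cos_b, -1.0, 1.0))))

shoulder, elbow, wrist = (0.0, 1.0), (0.0, 0.0), (1.0, 0.0)
print(joint_angle(shoulder, elbow, wrist))   # -> 90.0
```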
Article
Medical image assessment is an essential practice in most disease identification tasks. A recent imaging procedure, infrared thermal imaging, has attracted wide interest owing to its noninvasive nature, low cost, and accuracy. This paper considers the inspection of breast malignancy and presents a hybrid framework with a heuristic-algorithm-driven preprocessing stage and a semi-/fully automated postprocessing stage. The result of the proposed technique is also validated against other existing segmentation methods.
Chapter
An improved deep learning algorithm for convolutional neural networks is proposed in this paper to automatically extract features of fungal images. Firstly, connected-area detection is applied to locate the targets in the fungal image, yielding several small images of conidia from the original image. Secondly, the small images are augmented by several operations, and the augmented images are divided proportionally into training and validation sets, from which the training accuracy and validation accuracy are obtained. Finally, unknown test images are input into the model and the test accuracy is obtained. Experimental results show that the data augmentation and fine-tuning measures not only effectively avoid over-fitting of the deep learning algorithm on small samples, but also improve accuracy. The training accuracy of the algorithm reaches 95%, the validation accuracy reaches 96%, and the test accuracy reaches 69.23%, showing good robustness and generalization.
Article
Brain tumor detection is an active area of research in brain image processing. In this work, a methodology is proposed to segment and classify brain tumors using magnetic resonance images (MRI). A Deep Neural Network (DNN) based architecture is employed for tumor segmentation. In the proposed model, seven layers are used for classification: three convolutional, three ReLU and a softmax layer. First the input MR image is divided into multiple patches, and then the center pixel value of each patch is supplied to the DNN, which assigns labels according to the center pixels and performs the segmentation. Extensive experiments are performed using eight large-scale benchmark datasets, including BRATS 2012 (image and synthetic datasets), 2013 (image and synthetic datasets), 2014, 2015 and ISLES (Ischemic Stroke Lesion Segmentation) 2015 and 2017. The results are validated on accuracy (ACC), sensitivity (SE), specificity (SP), Dice Similarity Coefficient (DSC), precision, false positive rate (FPR), true positive rate (TPR) and Jaccard similarity index (JSI).
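The patch-wise labelling described above can be sketched as extracting fixed-size patches from an MR slice and assigning each patch the ground-truth label of its centre pixel before training. The 33 x 33 patch size, stride and synthetic data are assumptions for illustration, not the paper's exact settings.

```python
# Sketch of patch extraction with centre-pixel labelling for patch-wise segmentation training.
import numpy as np

def extract_patches(image, label_map, patch_size=33, stride=8):
    half = patch_size // 2
    patches, labels = [], []
    for r in range(half, image.shape[0] - half, stride):
        for c in range(half, image.shape[1] - half, stride):
            patches.append(image[r - half:r + half + 1, c - half:c + half + 1])
            labels.append(label_map[r, c])            # label of the centre pixel
    return np.stack(patches), np.array(labels)

slice_img = np.random.rand(240, 240)                  # stand-in for an MR slice
gt = np.random.randint(0, 2, (240, 240))              # 0 = healthy, 1 = tumour
X, y = extract_patches(slice_img, gt)
print(X.shape, y.shape)
```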
Article
Aspergillus fumigatus is the most common Aspergillus species worldwide; however, A. flavus has also been shown to be prevalent in North India. Herein, we investigate the prevalence of sensitization to A. flavus in subjects with allergic bronchopulmonary aspergillosis (ABPA). We also evaluate the occurrence of allergic bronchopulmonary mycosis (ABPM) due to A. flavus. Treatment-naive subjects with ABPA underwent sputum culture; and, skin testing, fungal-specific immunoglobulin E (IgE) and serum precipitation tests for A. fumigatus and A. flavus. Sensitization to A. flavus was diagnosed if any immunological test for A. flavus was positive in subjects with ABPA. ABPM was labelled as probable if sputum cultures grew A. flavus and A. flavus-specific IgE was greater than A. fumigatus-specific IgE; and, possible if only A. flavus-specific IgE was greater than A. fumigatus-specific IgE. Fifty-three subjects with a mean (SD) age of 34.2 (12.8) years were included. Sensitization to A. flavus was seen in 51 (96.2%) subjects, with overlap occurring in 49 (92.5%), 21 (39.6%), and 12 (22.6%) instances on fungal-specific IgE, skin prick test and precipitins, respectively. Sputum culture was positive in 18 (33.9%; A. flavus [n = 12], A. fumigatus [n = 6]) subjects. ABPM due to A. flavus was diagnosed in 16 (30.2%) subjects (10 probable, 6 possible). They were more likely to have high-attenuation mucus and a trend towards higher occurrence of sinusitis, compared to ABPA. We found a high occurrence of sensitization to A. flavus in subjects with ABPA. Subjects with A. flavus-related ABPM had a higher likelihood of high-attenuation mucus and probability of sinusitis. More studies are required to confirm this observation.
Chapter
Contamination of certain crops with the toxic and carcinogenic aflatoxins is a serious concern for agriculture and for animal and human health. The predominant species associated with this crop contamination is Aspergillus flavus. The ability of A. flavus to produce other toxins could be an additional concern. Phylogenetic evidence suggests that this species has a history of recombination and has relatively recently adapted to growth on plants from a normally saprophytic terrestrial existence. Efforts to control aflatoxin contamination involve preharvest introduction of nonaflatoxigenic competitors, as well as enhancement of the resistance of affected crops against fungal infection.