Fig 12 - uploaded by Felicitas Hopf
Fig. 12.3 Representative images of the 17 pollen types. Images from different angles are presented for selected pollen types. a: Asteraceae (AS), b: Coprosma (CO), c: Echium (EC), d, e: Geniostoma (GE), f: Griselinia (GR), g: Ixerba brexioides (IX), h, i: Knightia excelsa (KN), j, k: Leptospermum/Kunzea type (LE), l: Lotus (LO), m, n: Lycopodium (LY), o, p: Metrosideros (ME), q: Poaceae (PO), r: Quintinia (QU), s: Salix (SA), t: Taraxacum (TA), u: Trifolium (TR), v: Weinmannia (WE)

Source publication
Article
Full-text available
We describe an investigation into how Massey University’s Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide based pollen samples. Given the laboriousness of purely manual image acquisition and identification it is vital to...

Contexts in source publication

Context 1
... are the most similar amongst the species in the data set. Similar observations can be made from the confusion matrices for the SVM and RF models, so their confusion matrices are not displayed here. Table 12.4 shows the full confusion matrix for the LDA model on the COMB data. The two species with the lowest classification accuracy are PA with 69 % and IP with 72 %. The high number of IP observations accounts for most of the model’s overall classification error. Another feature of the confusion matrix is that all species other than AR are confused with the PA species. Despite this, the error is more balanced between species, which would appear to be a desirable result. In practice, the classification results need not be the end point of an investigation. Typically a palynologist would review and adjust the classification results. The LDA model presents a simple means of assisting the review stage. LDA works by transforming the data into an optimal space for discrimination. For each species, a mean value for each discriminant is then calculated. The model then measures the Mahalanobis distance between a sample and the set of mean discriminants for each species, with the lowest distance informing the choice of species for classification. This distance itself provides a measure of how far from the training data a particular sample is. Taking the ratio of the lowest over the second-lowest distance score gives a measure of how ‘borderline’ a classification decision is: a ratio close to 0 indicates a strong decision, while a value close to 1 indicates possible confusion. Table 12.5 shows the top 20 classification results for the LDA model on the COMB FF data, ranked on decreasing values of this ratio. It shows that out of the 20 results only 4 are correctly classified. 
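The distance-ratio idea can be sketched as follows. This is a minimal illustration, not the original pipeline: it assumes scikit-learn's LDA and uses toy data in place of the pollen feature set, and it measures Euclidean distance in the discriminant space, which plays the role of the Mahalanobis distance in the text.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in for the pollen feature data (3 classes, 4 features each).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=3 * i, size=(50, 4)) for i in range(3)])
y = np.repeat([0, 1, 2], 50)

lda = LinearDiscriminantAnalysis().fit(X, y)

# Project samples and class means into the discriminant space, then take
# each sample's distance to every class mean.
Z = lda.transform(X)
means = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
dists = np.linalg.norm(Z[:, None, :] - means[None, :, :], axis=2)

d = np.sort(dists, axis=1)
ratio = d[:, 0] / d[:, 1]   # near 0: strong decision; near 1: borderline

# Sorting on decreasing ratio surfaces the most borderline cases first.
review_order = np.argsort(-ratio)
```

A reviewer would then walk `review_order` from the top, exactly as the assisted-review scenario in the text describes.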
So in the context of reviewing the data, if a user were to sort their observations by this ratio, they could easily and efficiently adjust decisions for the most borderline cases. The ratio is similar in spirit to the posterior probability, which can be calculated for the LDA and RF models. For each classification, a posterior probability is assigned to each class. It is then possible to rank the data in a similar fashion, noting that a posterior probability close to 1 corresponds to a strong decision. This would allow the user to use the RF model to perform a similar type of assisted review. Another, more automated approach is to simply exclude a proportion of the classification results based on the ratio. For example, after ranking the results on decreasing values of the ratio, one can exclude the worst N percent. For the LDA model on the COMB FF data, excluding 20 % of the results by this strategy increases the overall performance of the classifier to 0.94, a significant increase over the 0.84 achieved when all data is used. Excluding 50 % of the data increases the performance to 0.99. However, when one examines the confusion matrix corresponding to only 50 % of the data, the relative proportions of the species are modified in a reasonably substantial way. For example, the IP, NP and AP species account for 18, 15 and 10 % of the species present in the COMB FF data, respectively. When we exclude the worst 20 % of the data, the relative percentages are 16, 16 and 11, which is not too dissimilar. However, when we exclude 50 % of the data, the relative percentages are 5, 23 and 13, which is very different to the known abundances. So if the goal of a palynologist is to study the relative abundance of species in a sample, one would need to find a percentage for exclusion which preserves relative abundance. 
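The exclusion strategy can be sketched as below. The function name, data, and the rule making borderline calls wrong are all illustrative assumptions; the point is only that dropping the highest-ratio fraction raises accuracy while also shifting the predicted class proportions, which is the trade-off the text warns about.

```python
import numpy as np

def exclude_worst(y_true, y_pred, ratio, frac):
    """Drop the fraction of results with the largest (most borderline)
    ratios; return accuracy and predicted-class proportions on the rest."""
    order = np.argsort(ratio)                       # most confident first
    keep = order[: int(round(len(ratio) * (1 - frac)))]
    acc = float(np.mean(y_true[keep] == y_pred[keep]))
    classes, counts = np.unique(y_pred[keep], return_counts=True)
    props = dict(zip(classes.tolist(), (counts / counts.sum()).tolist()))
    return acc, props

# Illustrative data in which the most borderline calls are the wrong ones.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, 1000)
ratio = rng.uniform(0, 1, 1000)
y_pred = y_true.copy()
wrong = ratio > 0.9                  # pretend borderline decisions are errors
y_pred[wrong] = (y_true[wrong] + 1) % 3

acc_all, _ = exclude_worst(y_true, y_pred, ratio, 0.0)
acc_80, props_80 = exclude_worst(y_true, y_pred, ratio, 0.2)
```

Mirroring the text, one would compare `props_80` against the known abundances before committing to any particular exclusion percentage.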
In this section we work through a real-world case study to demonstrate the use of the Classifynder and how various classification strategies employing neural networks and linear discriminant analysis can be incorporated into the workflow. The data source for this section is a suite of slides containing pollen extracted from 45 different New Zealand honeys. The pollen composition of these slides has already been determined through manual counting, with the results presented (as proportions) by Mildenhall and Tremain [23]. Across the 45 samples, Mildenhall and Tremain recognize 21 common pollen types; however, the individual classifications at the image level are unavailable. The slides also contain Lycopodium marker spores, which are added to determine pollen concentration per gram of honey [31]. A subset of slides was selected to supply images for a library. These slides were scanned on the Classifynder and the resulting images manually classified and sorted into folders to create a master set (Table 12.6). Insufficient numbers of images were obtained for 5 of the 21 common honey pollen types, preventing the generation of a library class for these types. The final library thus contained images of 17 taxa (Table 12.6 and Fig. 12.3). The image set for each taxon was divided into two, with one half used for training the classifiers and the other half for testing. This training and testing set was used to train and test both the Classifynder neural network classifier and the LDA classifier. The Classifynder software trains the neural network by splitting the supplied training set in half, training itself on one half and testing its performance on the other. It then adds any misclassified images to its training set file and retests itself in an attempt to improve the performance of the neural network. 
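The split-and-retrain loop described above can be sketched in outline. This is a hypothetical reconstruction under stated assumptions: the actual Classifynder software and its network architecture are not public here, so a small scikit-learn MLP and synthetic features stand in for both.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for labelled pollen images (400 samples, 10 features).
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(int)

half = len(X) // 2
X_a, y_a = X[:half], y[:half]       # half used for the first training pass
X_b, y_b = X[half:], y[half:]       # internal check half

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_a, y_a)

# Fold the misclassified images back into the training file and retrain,
# as the Classifynder's automated procedure does.
miss = clf.predict(X_b) != y_b
clf.fit(np.vstack([X_a, X_b[miss]]), np.concatenate([y_a, y_b[miss]]))
acc = clf.score(X_b, y_b)
```

Note that after retraining, `X_b` is no longer a clean held-out set, which is why the text's separate manually generated test set matters for the accuracy figures that follow.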
Once this automated training and testing was complete, the neural network was tested on the separate, manually generated test set. Classification accuracy on this test set was 89 %. The LDA classifier was also applied to the training and test sets, likewise achieving an accuracy of 89 %. Once trained, both classifiers were applied to image sets gathered from three different slides from the original 45-slide set, none of which had been used as a source of images for training and testing. The three slides contain different combinations and numbers of the 17 taxa in the training set. These three slides were scanned on the Classifynder. The Classifynder software sourced preliminary morphological limits from the neural network for use in identifying pollen grains on the slides at low magnification. These were then imaged using the high-resolution camera. The resulting image sets were then classified by the two different classifiers (Classifynder NN and LDA). In assessing the performance of the classifiers, allowances had to be made for the fact that these ‘real world’ datasets contained numerous images of objects not present in the training set. These images fell into one of several categories described in Table 12.7. When assessing the accuracy of the classifications (i.e., the proportion of images correctly classified), images assigned to these categories by the analyst were omitted from the calculations. Because both classifiers are forced, they must classify every image as one of the classes in the training set; since these types of image were not included in the training set, it is not reasonable (or in many cases possible) to expect correct classification. That said, in some cases images of clumps comprising grains of the same taxon were correctly classified as that taxon. Likewise, images of clumps of different taxa were sometimes classified as one of the taxa in the clump. 
Because at this stage there is no mechanism to adequately deal with clumps (i.e., to recognize multiple grains in one image), we have chosen to omit these altogether. As discussed earlier, in contrast to the NN, the LDA provides an indication of the strength of the classification of each image, and these scores potentially provide a basis to further improve the results of the classification. The majority of incorrectly classified images have very high (i.e., weak) classification strength scores. On examination, many of these are images belonging to the clumps, unknown, other or junk categories. One possible way to improve the classification is to automatically disregard a portion of images with the highest (weakest) classification strength scores. We have assessed this approach with two subsets of the LDA classification of the initial complete image set, taking the top 80 % (LDA80) and top 50 % (LDA50) of the classified images based on their classification strength scores, and then calculating the proportions for these in the same way as for the previous classifications. Results are presented in Table 12.8. After assessing classification accuracy, any incorrectly classified images were sorted into their correct categories manually by a human palynologist, and the proportions of the different taxa as detected by the Classifynder’s imaging system were calculated. These are presented in Table 12.9 and compared with the original human counts of the samples/slides, as presented in [23]. These data are a measure of how well the Classifynder can find and image the pollen using the basic morphological information extracted from the neural network, but they still include an element of manual classification through the correcting of misclassifications. 
These proportions are calculated by taking the sum of all images of single pollen grains (i.e., no clump images) of a taxon (whether correctly classified or not) and dividing it by the sum of all images of single pollen grains. Images of clumps, unknowns, other pollen and junk are excluded for the reasons mentioned earlier; Lycopodium is excluded because the number of Lycopodium grains in each slide was not originally presented by Mildenhall and Tremain [23]. Lycopodium spores are added to honey samples to aid in determination ...
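As a toy illustration of that proportion calculation, with made-up category labels standing in for the analyst's categories:

```python
from collections import Counter

# Assumed analyst category labels for excluded images (illustrative names).
EXCLUDE = {"clump", "unknown", "other", "junk", "lycopodium"}

def taxon_proportions(labels):
    """Proportion of single-grain images per taxon, after dropping the
    excluded categories from both numerator and denominator."""
    kept = [t for t in labels if t not in EXCLUDE]
    counts = Counter(kept)
    total = sum(counts.values())
    return {taxon: n / total for taxon, n in counts.items()}

labels = ["ME"] * 7 + ["TR"] * 2 + ["clump", "junk", "lycopodium"]
props = taxon_proportions(labels)   # ME: 7/9, TR: 2/9
```

Because the excluded images drop out of both numerator and denominator, the retained taxon proportions still sum to 1.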
Context 2
... intervals are calculated for the human counts (following the method of [21], and assuming a total count of 500 grains), the Classifynder counts fall within these limits. The degree of difference between the original human counts and the Classifynder counts is much greater for slide 2, with the Classifynder values lying outside the 95 % confidence limits. The raw proportions produced by the classifiers (i.e., with any misclassifications left uncorrected) are also compared with the two sets of ‘true proportions’ for each slide in Table 12.10. These proportions are calculated by taking the total number of images classified as a taxon, whether correctly or not, and dividing it by two different totals to give two sets of proportions. The first total includes all images captured, including clumps, non-pollen, other and unknown pollen, but excludes images of Lycopodium spores (for the reasons discussed previously). The second total excludes images of clumps, non-pollen, other pollen, unknown pollen, and Lycopodium. The purpose of these two different totals is to investigate what impact the presence of junk, clumps and unrecognizable pollen has on the raw results, and to determine whether there are any taxa which these types of images are more likely to be classified as. The calculated root mean square error (RMSE) is also presented to provide an additional indication of the success of each classification. The first point of interest from Table 12.8 is that the LDA performs slightly better than the NN on 2 out of the 3 slides. When the best 80 and 50 % sets are considered, the LDA performance is significantly better than that of the NN. However, this is not true for slide 2, where the performance of the LDA is only comparable to the NN when the best 50 % of the data is retained. It is unclear why this difference occurs. 
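The two denominators and the RMSE comparison can be sketched as follows. The label names and the tiny example are illustrative assumptions; the sketch only reproduces the bookkeeping described in the text, where the forced classifier assigns every image a taxon while the analyst's categories decide what enters each denominator.

```python
import numpy as np

# Assumed analyst category labels (illustrative); Lycopodium always excluded.
NON_TAXON = {"clump", "non-pollen", "other", "unknown"}

def raw_proportions(pred, analyst, taxa, exclude_non_taxon):
    """Count every image the classifier called each taxon, then divide by
    either all non-Lycopodium images (first total) or only images the
    analyst did not flag as clump/non-pollen/other/unknown (second total)."""
    not_lyco = [a != "lycopodium" for a in analyst]
    num = {t: sum(p == t and k for p, k in zip(pred, not_lyco)) for t in taxa}
    if exclude_non_taxon:
        denom = sum(k and a not in NON_TAXON for k, a in zip(not_lyco, analyst))
    else:
        denom = sum(not_lyco)
    return {t: num[t] / denom for t in taxa}

def rmse(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(np.mean((p - q) ** 2)))

pred = ["ME", "ME", "TR", "ME", "TR"]               # forced classifier output
analyst = ["ME", "ME", "TR", "clump", "lycopodium"]  # analyst's verdicts
p1 = raw_proportions(pred, analyst, ["ME", "TR"], exclude_non_taxon=False)
p2 = raw_proportions(pred, analyst, ["ME", "TR"], exclude_non_taxon=True)
```

Note that under the second total the proportions can sum to more than 1 (here the clump image was classified as ME), which is exactly how junk and clump images inflate the apparent abundance of the taxa they get classified as.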
One possibility is that the number of taxa present in slide 2, based on human counts, is larger (10 for slide 2, compared with 5 and 3 for slides 12 and 26) and has an impact on classification, although our initial assessments of classifier performance (in Table 12.2, for example) suggest comparable performance for a relatively similar number of classes. Another notable issue is highlighted in Table 12.10. When proportions are compared with the manual classifications, the majority are outside the 95 % confidence limits (for the human counts). Further to this, there are many instances where images have been classified as taxa which are not actually present in the sample (i.e., ‘false positives’). In some cases these false positives comprise in excess of 20 % of the total (e.g., the ‘LDA’ classification for slide 2 in Table 12.10). False positives are particularly problematic for the ideal goal of applying automated classification to routine/commercial honey pollen analysis. Obviously, it is far from satisfactory for the proportions of the taxa present in the sample to be incorrect. But reporting the presence of species in a sample which aren’t actually there (or completely omitting types which are present) adds an additional layer of complication. Often, certain pollen types are specific to a particular geographic region, and presence/absence information (regardless of abundance) can be used to trace where a honey sample has come from. Therefore false positives/negatives can further influence the interpretation of the results, beyond just determining the nectar contributions. Even though reducing the image sets based on classification strength scores improves classification accuracy overall, it does not solve the problem of false positives, with 3.7 % FP remaining in slide 26 (L5*). 
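The confidence-limit check can be sketched as below. The exact interval method of [21] is not reproduced here; a normal-approximation binomial interval on a total count of 500 grains serves as a stand-in, and the numbers are illustrative.

```python
import math

def ci95(p, n=500):
    """Normal-approximation 95 % interval for a proportion counted from
    n grains; a stand-in for the interval method of [21]."""
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)

lo, hi = ci95(0.20)                    # taxon counted at 20 % of 500 grains
outside = not (lo <= 0.27 <= hi)       # flag a classifier proportion of 27 %
```

A taxon with a human-count proportion of exactly zero is a degenerate case of this check: any nonzero classifier proportion for it is a false positive, which is why presence/absence errors are flagged separately in the text.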
In all cases, ignoring the images of clumps, unknowns, other and junk has resulted in proportions closer to actual (i.e., lower RMSE), indicating that these are in part responsible for inaccurate results. However, when the proportions for the reduced-image classifications are calculated against the sum excluding clumps, unknowns, other and junk, the proportions are still outside the 95 % confidence limits in most cases. This indicates that not all error can be explained by the presence of these types of images, and that some of the error still results from confusion. Some of the most common confusions in both the NN and LDA classifications (excluding clumps, junk, etc.) are between the following:
• ‘LE’, ‘EC’, ‘LO’ and ‘ME’ (e.g. slides 12 and 26)
• ‘TR’, ‘GE’, ‘GR’ and ‘ME’ (e.g. slides 2 and 26)
Confusions of images as EC, GE or GR are responsible for many of the false-positive instances discussed earlier. Confusions between LE, EC and LO seem logical, due to the similarities in size (Fig. 12.1). Likewise, confusions between LE and ME are likely related to overall morphological similarity, even though there is a reasonable size difference. Also, in the case of slide 12, ME was by far the dominant pollen type, so it is likely to be best represented in misclassifications. TR and GR pollen types are also morphologically similar (Fig. 12.3), although less so with GE. Many of these more common confusables have relatively low numbers of images in their training sets (Table 12.6). Perhaps notably, EC, GE and GR, none of which are actually present in any of the samples, all have fewer than 50 images in their training sets. Likewise, TR, which is a dominant pollen type in 2 out of the 3 slides, has only 92 images in its training set. Therefore, confusions between TR, GR and GE may be explained by undertraining. 
LE, LO and ME have higher numbers of images in their training sets (greater than 120), so undertraining is likely not the problem for confusions among these three, but it can explain confusions between them and ...

Similar publications

Article
Full-text available
Text classification is a vital process due to the large volume of electronic articles. One of the drawbacks of text classification is the high dimensionality of feature space. Scholars developed several algorithms to choose relevant features from article text such as Chi-square (x 2), Information Gain (IG), and Correlation (CFS). These algorithms h...
Article
Full-text available
This article explains a decision support model (DSM) construction for employee recruitment in a particular IT consultant company using the conception of data mining classification. The created model addresses the company's need in recruiting employees objectively by utilizing historical data to find patterns of potential employees for the company....
Article
Full-text available
Freedom of opinion through social media is frequently affect a negative impact that spreads hatred. This study aims to automatically detect Indonesian tweets that contain hate speech on Twitter social media. The data used amounted to 4,002 tweets related to politics, religion, ethnicity and race in Indonesia. The application model uses classificati...
Article
Full-text available
Content based retrieval and recognition of objects represented in images is a challenging problem making it an active research topic. Shape analysis is one of the main approaches to the problem. In this paper we propose the use of a reduced set of features to describe 2D shapes in images. The design of the proposed technique aims to result in a sho...
Article
Full-text available
Recently, a precise and stable machine learning algorithm, i.e. eigenvalue classification method (EigenClass), has been developed by using the concept of generalised eigenvalues in contrast to common approaches, such as k-nearest neighbours, support vector machines, and decision trees. In this paper, we offer a new classification algorithm called f...

Citations

... The discriminatory features, once properly weighted, are then used to train a classifier. Numerous papers proposed such pollen classification models featuring varying degrees of user input in the feature selection and extraction phases [15][16][17][18]. In recent years, the advent of convolutional neural networks (CNNs), a type of deep learning network, has permitted the automatic extraction and selection of relevant features without human supervision [19]. ...
Article
Full-text available
The automation of pollen identification has seen vast improvements in the past years, with Convolutional Neural Networks coming out as the preferred tool to train models. Still, only a small portion of works published on the matter address the identification of fossil pollen. Fossil pollen is commonly extracted from organic sediment cores and is used by paleoecologists to reconstruct past environments, flora, vegetation, and their evolution through time. The automation of fossil pollen identification would allow paleoecologists to save both time and money while reducing bias and uncertainty. However, Convolutional Neural Networks require a large amount of data for training and databases of fossilized pollen are rare and often incomplete. Since machine learning models are usually trained using labelled fresh pollen associated with many different species, there exists a gap between the training data and target data. We propose a method for a large-scale fossil pollen identification workflow. Our proposed method employs an accelerated fossil pollen extraction protocol and Convolutional Neural Networks trained on the labelled fresh pollen of the species most commonly found in Northeastern American organic sediments. We first test our model on fresh pollen and then on a full fossil pollen sequence totalling 196,526 images. Our model achieved an average per class accuracy of 91.2% when tested against fresh pollen. However, we find that our model does not perform as well when tested on fossil data. While our model is overconfident in its predictions, the general abundance patterns remain consistent with the traditional palynologist IDs. Although not yet capable of accurately classifying a whole fossil pollen sequence, our model serves as a proof of concept towards creating a full large-scale identification workflow.
... A number of approaches have been taken in an attempt to address this, including the Pollen Identification and Geolocation Technology (PIGLT) system. PIGLT is a standardized digital database of pollen images and software to augment taxonomic geographic distribution information to reduce the amount of expertise required (12,13) and an automated system for pollen microscopic imaging and characterization (14). DNA barcoding has been used for taxonomic identification for over two decades in environmental tracking, biodiversity studies, and product authentication (15)(16)(17), and has more recently been applied to pollen classification (18). ...
Preprint
Full-text available
Information obtained from the analysis of dust, particularly biological particles such as pollen, plant parts, and fungal spores, has great utility in forensic geolocation. As an alternative to manual microscopic analysis, we developed a pipeline that utilizes the environmental DNA (eDNA) from plants in dust samples to estimate previous sample location(s). The species of plant-derived eDNA within dust samples were identified using metabarcoding and their geographic distributions were then derived from occurrence records in the USGS Biodiversity in Service of Our Nation (BISON) database. The distributions for all plant species identified in a sample were used to generate a probabilistic estimate of the sample source. With settled dust collected at four U.S. sites over a 15-month period, we demonstrated positive regional geolocation (within 600 km² of the collection point) with 47.6% (20 of 42) of the samples analyzed. Attribution accuracy and resolution was dependent on the number of plant species identified in a dust sample, which was greatly affected by the season of collection. In dust samples that yielded a minimum of 20 identified plant species, positive regional attribution improved to 66.7% (16 of 24 samples). Using dust samples collected from 31 different U.S. sites, trace plant eDNA provided relevant regional attribution information on provenance in 32.2% of cases. This demonstrated that analysis of plant eDNA in dust can provide an accurate estimate of regional provenance within the U.S., and relevant forensic information, for a substantial fraction of samples analyzed.
... Traditionally, image-based methods typically involve defining and extracting discriminating features from pollen images, followed by sorting via statistical or machine learning-based classifiers. An example which uses the same type of images as this research is [16]. As per [10], these image-based methods fall into three different categories based on the type of features being used: (1) discriminant features are largely visual/geometrical (e.g. ...
... In its routine operation, the Classifynder extracts values for 43 different image features (both geometrical and textural, see [16] for a list) and tags them on to each image as metadata. Images are classified using a simple neural network (feed-forward with a single hidden layer), which compares the feature values of the unknown pollen types with the feature values of known 'library' images. ...
... Our images were automatically collected from pollen reference slides. Images gathered from each reference slide were manually sorted, and were chosen with the intention of creating a set that was fully representative of the pollen type in question, i.e. accounting for variability in appearance resulting from [16]. ...
Article
Full-text available
In palynology, the visual classification of pollen grains from different species is a hard task which is usually tackled by human operators using microscopes. Many industries, including medical and pharmaceutical, rely on the accuracy of this manual classification process, which is reported to be around 67%. In this paper, we propose a new method to automatically classify pollen grains using deep learning techniques that improve the correct classification rates in images not previously seen by the models. Our proposal manages to properly classify up to 98% of the examples from a dataset with 46 different classes of pollen grains, produced by the Classifynder classification system. This is an unprecedented result which surpasses all previous attempts both in accuracy and number and difficulty of taxa under consideration, which include types previously considered as indistinguishable.
... Traditionally, image-based methods typically involve defining and extracting discriminating features from pollen images, followed by sorting via statistical or machine learning-based classifiers. An example which uses the same type of images as this research is [14]. As per [10], these image-based methods fall into three different categories based on the type of features being used: (1) discriminant features are largely visual/geometrical (e.g. ...
... low numbers of taxa [20], performance of the neural network classifier declines with greater numbers of taxa [14]. The image dataset we use comprises a total of 19,500 images, from 46 different pollen types, representing 37 different families. ...
... Samples of seven pollen types found in the dataset. By rows, pollen types usually considered as indistinguishable. In its routine operation, the Classifynder extracts values for 43 different image features (both geometrical and textural, see [14] for a list) and tags them on to each image as metadata. Images are classified using a simple neural network (feed-forward with a single hidden layer), which compares the feature values of the unknown pollen types with the feature values of known 'library' images. ...
Preprint
Full-text available
In palynology, the visual classification of pollen grains from different species is a hard task which is usually tackled by human operators using microscopes. Many industries, including medical and pharmaceutical, rely on the accuracy of this manual classification process, which is reported to be around 67%. In this paper, we propose a new method to automatically classify pollen grains using deep learning techniques that improve the correct classification rates in images not previously seen by the models. Our proposal manages to properly classify up to 98% of the examples from a dataset with 46 different classes of pollen grains, produced by the Classifynder classification system. This is an unprecedented result which surpasses all previous attempts both in accuracy and number and difficulty of taxa under consideration, which include types previously considered as indistinguishable.
... In order to make rapid phytolith measurement practical for routine use, methods must be developed that will allow software to automatically define, or segment, object boundaries on slide images containing many phytolith and non-phytolith particles. Automatic image segmentation has been successfully used in a number of scientific and medical fields and has recently been applied to microfossils (Du Buf and Bayer, 2002; Kloster et al., 2014; Han et al., 2014; Lagerstrom et al., 2013). Particularly promising is SHERPA, a program developed for digitally processing images of diatom frustules (Kloster et al., 2014), which are also composed of silica and similar to phytoliths regarding problems with segmentation. ...
... 2013); curvature scale-space (Kumar et al., 2012); bounding-box splitting (Bauckhage and Tsotsos 2006); wavelet analysis (Essendelft, 2013; Gui et al., 2014); shock graphs (Macrini, 2003; Sebastian et al., 2004; Siddiqi et al., 1999); and boundary-line analysis (Liang et al., 2011). Several of these approaches have been tested and compared with each other on non-phytolith data sets, including: artificial image databases (Selvarajah and Kodituwakku, 2011); feathers (Sheets et al., 2006); wheat seeds (Williams et al., 2013); plant roots (Lootens et al., 2007); and pollen (Lagerstrom et al., 2013). ...
... Bolano et al., 2011; Tsolakidis et al., 2014); neural networks for coffee beans and regions in satellite images (Mahi and Kaouadji, 2014; Oyama, 2014); decision trees for vegetation types (Tooke et al., 2009); and random forests for diatoms (Dimitrovski et al., 2011 ). Efficacy of some of these approaches has been demonstrated for pollen images by Lagerstrom et al. (2013). ...
... Despite this, the error is more balanced between species which would appear to be a desirable result. [19]. ...
... Five sample outlying images from different image features. (a) Geometry, (b) Histogram, (c) Moments, (d) Grey Level Co-occurrence Matrix, and (e) Gabor (Reprinted with permission from [19]. Copyright 2013, AIP Publishing LLC) ...
... Column three is an abbreviation for species. The final column is the number of image samples (Reprinted with permission from [19]. Copyright 2013, AIP Publishing LLC) ...
Article
Full-text available
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide based pollen samples. Given the laboriousness of purely manual image acquisition and identification it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real world case study where we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
Conference Paper
The identification of pollen grains is a task needed in many scientific and industrial applications, ranging from climate research to petroleum exploration. It is also a time-consuming task. To produce data, pollen experts spend hours, sometimes months, visually counting thousands of pollen grains from hundreds of images acquired by microscopes. Most current automation of pollen identification rely on single-focus images. While this type of image contains characteristic texture and shape, it lacks information about how these visual cues vary across the grain’s surface. In this paper, we propose a method that recognizes pollen species from stacks of multi-focal images. Here, each pollen grain is represented by a multi-focal stack. Our method matches unknown stacks to pre-learned ones using the Longest-Common Sub-Sequence (LCSS) algorithm. The matching process relies on the variations of visual texture and contour that occur along the image stack, which are captured by a low-rank and sparse decomposition technique. We tested our method on 392 image stacks from 10 species of pollen grains. The proposed method achieves a remarkable recognition rate of 99.23%.
Book
With an emphasis on applications of computational models for solving modern challenging problems in biomedical and life sciences, this book aims to bring collections of articles from biologists, medical/biomedical and health science researchers together with computational scientists to focus on problems at the frontier of biomedical and life sciences. The goals of this book are to build interactions of scientists across several disciplines and to help industrial users apply advanced computational techniques for solving practical biomedical and life science problems. This book is for users in the fields of biomedical and life sciences who wish to keep abreast of the latest techniques in signal and image analysis. The book presents a detailed description of each of the applications. It can be used by those at both graduate and specialist levels.