February 2018, Volume 5, Issue 2 JETIR (ISSN-2349-5162)
JETIR1802014
Journal of Emerging Technologies and Innovative Research (JETIR) www.jetir.org
100
Deep Learning Techniques for Automatic
Classification and Analysis of Human in Vitro
Fertilized (IVF) embryos.
Prof. Sujata N Patil 1, Dr. Uday Wali 2, Dr. M K Swamy 3, Nagaraj S P 4 & Dr. Nandeshwar Patil 5
1 Research Scholar, KLE Academy of Higher Education & Research, Belagavi, India
2 Dept. of Electronics & Communication, KLE Dr MSS College of Engineering & Technology, Belagavi, India
3 J N Medical College, KLE Academy of Higher Education, Belagavi, India
4 LIVFC & SSFC, Jayanagar, Bangalore, India; Email: nagarajbgm@gmail.com
5 KLE Dr Prabhakar Kore Hospital & MRC, Belagavi, India
Abstract. Automated classification of human In Vitro Fertilized (IVF) embryos using Convolutional Neural Networks is presented in this paper. Embryos comprise small-radius cell structures that differentiate quickly in the first days after fertilization, making it difficult to algorithmically track cells and assess embryo viability. Machine learning algorithms yield better results than alternative methods such as Hough circle transforms and modified vesselness filters. The method is useful in increasing implantation efficiency.
Keywords: In vitro fertilization, Machine Learning, Convolutional Neural Networks, Cloud Services
I. INTRODUCTION
Recognizing the viability of human embryos from microscopic images is an extremely tedious process that is susceptible to error and subject to intra- and inter-individual variability [1,2]. Automating the classification of these embryo images reduces time and cost, minimizes errors, and improves outcomes and the consistency of results between individuals and clinics. Several techniques have been discussed in the literature to ease automation, considering day-2 as well as day-3 embryo images. However, grading embryos based on cell division, cell size, and the fragments present is difficult because of constraints in the imaging process: the exposure time (embryos are sensitive to temperature), variation in light intensity, and the transparency of the specimen all cause variations in the image. Embryo quality assessment based on blastomere circle fitting and grading does not yield sufficiently reliable classification.
Convolutional Neural Networks (CNNs) are a type of deep learning model that can act directly on raw inputs, thus automating the process of feature construction. Machine learning algorithms are proving to be better at checking embryo viability and predicting the outcome of implantation.
The convolutional neural network APIs used for recognition of microscopic images detect growth, and grading is done based on a previously trained dataset. This framework allows us to automatically classify and grade human embryos based on previous training. The framework employs a deep convolutional neural network model trained to count cells from raw microscopy images. We demonstrate the effectiveness of the proposed approach on a data set of 350 human embryos. The results show that the deep CNN APIs provide strong assessments, with a training precision of about 87.5% and a recall of 86% for embryos at the initial, day-1, day-2, day-3, and day-5 stages.
Fig. 1. Examples of developing embryos: (a) one-cell stage, (b) two-cell stage, (c) three-cell stage, (d) four-cell stage, and (e) 5-or-more-cell stage.
Fig. 2. Stages of embryo development: (a) initial stage, (b) day 1, (c) day 2, (d) day 3, and (e) day 4.
The intensity variation between cells and boundaries reduces image quality and results in faint cell boundaries. Classification and analysis of human embryo cells is made challenging by the fact that the cells vary in appearance, size, and shape. Each embryo also grows (its cells undergo divisions) in a compact manner in which cells severely overlap with each other, e.g., dividing cells. Moreover, cells are surrounded by distracting noise such as extracellular material (fragments) attached to the growing embryo and the surrounding gel material (see Fig. 1 and Fig. 2 (a)-(e) for examples). All these difficulties make hand-crafted algorithms for automated cell counting fragile. Here, we focus on the literature related to human embryo image analysis.
In this paper, the network is trained using preprocessed microscopic images, and we use the APIs to test the images with a convolutional neural network (CNN). The datasets are trained first on day-1 images, then on day-2, day-3, and day-5 images. The available features of the embryos give the precise grading and the size of the blastomeres. When these images are fed to the neural network, precision and recall improve, and the system is able to recognize and validate the grade of a selected embryo based on previous training. Our goal is to automatically count the number of cells in developing embryos up to the 5-cell stage (with higher cardinality grouped into a "5-or-more" category). We do so using a convolutional neural network (CNN) that can learn from vast amounts of training data to overcome the difficulties presented above.
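The paper does not specify its network architecture in detail. Purely as an illustration of the pipeline just described (convolution, non-linearity, pooling, and a softmax over the five count classes {1, 2, 3, 4, 5-or-more}), a minimal untrained forward pass can be sketched in NumPy; the layer sizes and filter counts below are assumptions, not the authors' design:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernels):
    """Valid 2D convolution of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    out = np.empty((kernels.shape[0], h - kh + 1, w - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def forward(img, kernels, w_fc, b_fc):
    feat = np.maximum(conv2d(img, kernels), 0.0)           # ReLU activation
    pooled = feat.reshape(feat.shape[0], -1).mean(axis=1)  # global average pool
    logits = pooled @ w_fc + b_fc                          # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                                     # softmax over 5 classes

n_classes, n_filters = 5, 8
kernels = rng.normal(size=(n_filters, 3, 3))   # random (untrained) filters
w_fc = rng.normal(size=(n_filters, n_classes))
b_fc = np.zeros(n_classes)

img = rng.random((32, 32))                     # stand-in for an embryo image
probs = forward(img, kernels, w_fc, b_fc)      # probability per count class
```

A real classifier would learn `kernels` and `w_fc` by backpropagation on labelled embryo images rather than drawing them at random.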
II. RELATED WORK
Human embryo detection, cell detection, segmentation, and tracking are well-studied problems in computer vision. Most research has developed techniques to deal with difficulties such as deformable objects, groups of moving objects, missing data, and occlusions [2, 3]. However, the combination of these complexities within a single application presents significant difficulties, which is the case for many medical image analysis tasks. These difficulties are even more acute for cell images, which are often noisy, feature-poor, and undergoing topology changes; human embryo development is also more fragile than that of many other species [5]. The automation of human embryonic cell monitoring, in addition to the above-mentioned difficulties, is further challenged by the fact that development varies substantially between embryos on different days and that embryos exhibit a wide range of behavior during cell divisions. In many cases embryo morphology also varies within a cell stage and is similar between cell stages (for examples, see Fig. 2 (a)-(e)). This variability makes it difficult to design feature descriptors that reliably express the information we want to obtain from the images under analysis. Human embryonic cell analysis must be non-invasive, which makes invasive techniques inappropriate. In practice, and in particular for human embryos, the image background contains a lot of noise, such as fragments, and the cells of the growing embryo greatly overlap each other.
In this paper, we therefore address this challenging automated cell counting and grading scenario. To this end, we introduce a CNN-based counting approach that requires minimal annotations, i.e., only the number of cells in each image. Additionally, a trained sequence of images is fed to the CNN, which in turn smooths the individual CNN predictions across the entire sequence. Our results show that our approach outperforms, by a large margin, the TensorFlow-based method on the challenging task of counting cells in early-stage human embryo development.
III. CONVOLUTIONAL NEURAL NETWORK
A convolutional neural network (CNN or ConvNet) is a type of feed-forward artificial neural network made up of neurons with learnable weights and biases, very similar to an ordinary multi-layer perceptron (MLP) network. CNNs take advantage of the spatial nature of the data. In nature, we perceive different objects by their shape, size, and color. These primitives are often identified using different detectors (e.g., edge detectors, color detectors) or combinations of detectors interacting to facilitate image interpretation (object classification, region-of-interest detection, scene description, etc.) in real-world vision tasks. These detectors are also known as filters. Convolution is a mathematical operator that takes an image and a filter as input and produces a filtered output (representing, say, edges, corners, or colors in the input image). Historically, these filters were sets of weights that were often hand-crafted or modeled with mathematical functions. The filter outputs are mapped through non-linear activation functions mimicking human brain cells called neurons. Convolutional networks provide machinery to learn these filters directly from the data instead of from explicit mathematical models, and have been found to be superior (in real-world tasks) to historically crafted
filters. With convolutional networks, the focus is on learning the filter weights instead of learning individual fully connected pairwise weights between inputs and outputs. In this way, the number of weights to learn is reduced compared with traditional MLP networks. In a convolutional network, one learns anywhere from a handful of filters to a few thousand, depending on the network complexity.
We have compared the results of both algorithms, i.e., the vesselness-filter algorithm and the CNN algorithm. The CNN algorithm gives better results for grading embryos and for mapping circles in the case of blastomeres. Further, the different days of the cells are clearly identified by the trained network with respect to radius and the number of cell divisions, i.e., 2PN, 2-cell, 4-cell, 8-cell, etc.
A carefully designed RESTful web API defines the resources, relationships, and navigation schemes that are accessible to client applications. When implementing and deploying this web API, we consider embryo features such as blastomere size, division, the radii of all cells, alignment, grade, and day, and the way the web API has been developed for our data. The implementation considers the data fed to the neural network and focuses on best practices for training the system so that it can grade embryos automatically.
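The paper does not publish its API schema. As a hedged sketch, an embryo-feature resource of the kind just described (blastomere radii, division, grade, day) might be serialized as JSON in a request body; every field name below is hypothetical:

```python
import json

# Hypothetical resource representation for one embryo record; the field
# names are illustrative, not the paper's actual web-API schema.
embryo = {
    "id": "embryo-042",
    "day": 3,
    "cell_count": 8,
    "blastomere_radii_um": [28.5, 27.9, 29.1, 28.0, 27.2, 28.8, 27.5, 28.3],
    "fragmentation_percent": 5,
    "grade": "grade2",
}

payload = json.dumps(embryo)    # body of a POST/PUT request to the web API
restored = json.loads(payload)  # what the service would parse back
```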
IV. DEEP LEARNING TECHNIQUES
Microsoft Azure APIs for the classification of human IVF images:
Custom Vision Service is a tool for building custom image classifiers. It makes it easy and fast to build, deploy, and improve an image classifier. Microsoft provides a REST API and a web interface to upload images and train. The underlying machine learning system operates at large scale and in heterogeneous environments: it uses dataflow graphs to represent computation, shared state, and the operations that mutate that state, and it maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom-designed ASICs. Custom Vision Service accepts training images in JPG/JPEG, PNG, and BMP format, up to 6 MB per image (prediction images can be up to 4 MB per image). Images are recommended to be at least 256 pixels on the shortest edge; any image shorter than that will be scaled up by Custom Vision Service. Trained images are assigned tags based on the features selected for the embryos.
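The shortest-edge scaling rule just quoted is simple arithmetic; a small sketch (our own helper, not part of any Custom Vision SDK) computes the upscaled dimensions while preserving aspect ratio:

```python
def scaled_size(width, height, min_short_edge=256):
    """Return (w, h) scaled up so the shortest edge reaches min_short_edge,
    preserving aspect ratio; images already large enough are left alone."""
    short = min(width, height)
    if short >= min_short_edge:
        return width, height
    scale = min_short_edge / short
    return round(width * scale), round(height * scale)

# A 200x150 microscope crop would be upscaled before training:
print(scaled_size(200, 150))   # -> (341, 256)
```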
These tags are the embryo day, radius, and grade as measured by the previous algorithm. Embryo features to be considered for the training sequences are then uploaded to train the network. The CNN takes some time to train because the images are large and numerous. The trained system is evaluated using precision and recall indicators. Custom Vision Service uses K-fold cross-validation on the set of images given to the network to calculate the precision and recall percentages. These indicators tell how good the classifier is, based on automatic testing, so that the network can be retrained to improve precision.
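K-fold cross-validation as described here can be sketched in a few lines; this is a generic illustration of the procedure, not Custom Vision's internal implementation:

```python
def k_fold_splits(items, k=5):
    """Yield (train, validation) partitions for K-fold cross-validation:
    each item appears in the validation set of exactly one fold."""
    folds = [items[i::k] for i in range(k)]   # k roughly equal folds
    for i in range(k):
        validation = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, validation

images = [f"embryo_{n:03d}.jpg" for n in range(10)]
for train, val in k_fold_splits(images, k=5):
    pass  # train the classifier on `train`, score precision/recall on `val`
```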
Fig 3: Precision and recall percentage
Note: each time we hit the "Train" button, we create a new iteration of the trained classifier. All previous iterations can be viewed in the Performance tab, and any that are obsolete can be deleted.
The classifier uses all the images to create a model that identifies each tag. To test the quality of the model, the classifier then tries each image against the model to see what it finds.
After training is completed, any embryo image can be tested using the Quick Test: the image is uploaded to the classifier, and if the training is good, the test results identify the day, the grade, and the precision percentage for that particular embryo.
Feature Extraction:
The Convolutional Neural Network is trained using different tags that represent the features of the human embryos. These features are supplied to the network during training so that the precision of the system is as high as possible. Both normal and abnormal embryo images are used to train the network. Based on features such as the radius of the cell and the division stage, an embryo is identified as day-2, day-3, day-4, or day-5.
Fig 4: Features to be extracted from embryos, i.e., day, cell division, radius of cell.
Classification:
This work uses an ensemble of neural networks to perform the classification task. A convolutional neural network is a layered network of simple processing elements connected to form a network of nodes that uses a mathematical model for information processing. Different neural network classifiers can be obtained by varying the network architecture and the choice of the algorithm used to infer the strengths (weights) of the connections in the network that produce the desired signal flow. The network can also be trained on a larger number of sample images so as to train the classifier efficiently.
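An ensemble of classifiers as described above is typically combined by majority vote; the sketch below is illustrative, with hypothetical stand-in classifiers in place of trained networks:

```python
from collections import Counter

def ensemble_predict(classifiers, image):
    """Majority vote across an ensemble: each classifier votes for a grade
    label, and the most common vote wins (ties broken by first occurrence)."""
    votes = [clf(image) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical classifiers mapping an image to a grade label.
clf_a = lambda img: "grade2"
clf_b = lambda img: "grade2"
clf_c = lambda img: "grade3"

print(ensemble_predict([clf_a, clf_b, clf_c], image=None))  # -> grade2
```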
Statistical Analysis:
The accuracy of the classifier depends on how well the network is trained and how well it separates the group of images being tested (embryos) into the four classes, i.e., D1, D2, D3, and D4, so as to support a healthy birth. Accuracy is measured by comparing the cells identified by the software with the results obtained by the embryologists. A rough measurement of the images graded for the respective days is shown below.
In these 500 samples of embryo images, drawn from Day 1, Day 2, Day 3, Day 4, and Day 5, there are 700 blastomeres in total, and almost all of the detected cells were true cells: all but a few (15 cells) are correct detections.
Here, we define a true positive (TP) as a correct detection, a false alarm/false positive (FP) as a detected circle/ellipse that does not correspond to any real cell, and a misdetection/false negative (FN) as a real cell that is not detected. Precision and recall are then used to measure performance.
The total precision over all blastomeres is 0.9172: for a given detection, the probability that it is a correct detection is 0.9172. The total recall is 0.9332: for a given real cell, the probability that it is correctly detected is 0.9332. The average precision, i.e., the mean of the precisions over all embryo images, is 0.8755; similarly, the average recall is 0.7562.
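With TP, FP, and FN defined as above, precision and recall follow directly; the counts below are illustrative stand-ins, since the paper reports only the resulting ratios:

```python
def precision(tp, fp):
    """Fraction of detections that correspond to a real cell: TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of real cells that were detected: TP / (TP + FN)."""
    return tp / (tp + fn)

# Illustrative counts only -- not the paper's raw totals.
tp, fp, fn = 90, 8, 6
print(round(precision(tp, fp), 4), round(recall(tp, fn), 4))
```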
Fig5: Analysis of the embryo data for 100 images for each of the Day1, Day2, Day3, Day4.
Unpaired t-test results:
P value and statistical significance: the two-tailed P value is less than 0.0001; by conventional criteria, this difference is considered extremely statistically significant.
Confidence interval: the mean of Group One minus Group Two equals -0.92. The 95% confidence interval of this difference runs from -0.98 to -0.87.
Intermediate values used in calculations: t = 35.5106, df = 208, standard error of difference = 0.026.
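The reported interval can be reconstructed from the quoted intermediate values (mean difference, standard error, and the two-tailed 95% critical t value for df = 208, which is about 1.9714 from standard tables); small discrepancies with the printed bounds come from rounding of the mean and SE:

```python
# Reconstructing the confidence interval from the quoted values.
mean_diff = -0.92      # mean of Group One minus Group Two
se = 0.026             # standard error of the difference
t_crit = 1.9714        # two-tailed 95% critical t for df = 208 (from tables)

lo = mean_diff - t_crit * se   # ~ -0.971 (paper prints -0.98)
hi = mean_diff + t_crit * se   # ~ -0.869 (paper prints -0.87)
t_stat = abs(mean_diff) / se   # ~ 35.38 (paper: 35.5106, computed pre-rounding)
```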
V. RESULTS:
Using Custom Vision APIs, the system has been trained with a sufficient number of embryo images for day 1, day 2, day 3, and day 5, with parameters such as shape, cell division, and size. Initially, only normal images, i.e., images without fragmentation, are used to train the system. When all the initial-period embryo images were trained, the recall rate was 89.2% and the precision 85.7%.
The response is clearly better during the training period. The images are tagged according to day and grade; once this is done, any embryo image can be taken for the Quick Test, and the observed results were good enough to classify these images as grade 1, grade 2, grade 3, grade 4, or grade 5. Some of the performance results are shown below.
Table 1: Automated Embryo Classification Results
VI. DISCUSSION:
This paper presents new techniques for human embryo image classification based on a training sequence for a convolutional neural network system. The results clearly show how the APIs classify these embryos and are encouraging, particularly considering that they were obtained using a "small" training set with very few positive samples. For pronucleated oocytes (the initial stage of embryo development) and embryos, it is probable that other types of descriptors, not specifically designed for prevalent textural images, might be used; indeed, the alignment and number of nucleoli and the number and size of blastomeres are not textural features. Time-lapse imaging is certainly a significant advance over a static assessment scheme, but it does not rule out adding valuable information from large databases of stored images, especially when combined with new technologies such as pattern recognition and artificial intelligence. The two possibilities (dynamic and static observation) might be used together and integrated into a more thorough analysis for practical aims in a normal IVF clinical setting.
An important application of viable embryo selection might be optimizing the cryopreservation strategy and avoiding embryo selection in countries where it is not permitted. The most practical and original perspective of this study is the possibility of obtaining a reliable method to help physicians and biologists select embryos or oocytes. This study group is planning to test the proposed method on a larger data set, using an automated segmentation procedure, and to combine information from the oocytes, pronuclei, and embryos. The proposed approach might become a tool shared among several IVF laboratories for objective, automatic, and non-invasive oocyte or embryo assessment.
VII. REFERENCES:
[1] A. Khan, S. Gould, and M. Salzmann. Detecting abnormal cell division patterns in early stage human embryo development. In MLMI, 2015.
[2] A. Khan, S. Gould, and M. Salzmann. Automated monitoring of human embryonic cells up to the 5-cell stage in time-lapse microscopy images. In ISBI, 2015.
[3] A. Khan, S. Gould, and M. Salzmann. A linear chain Markov model for detection and localization of cells in early stage embryo development. In WACV, 2015.
[4] V. Burruel, K. Klooster, C. M. Barker, R. R. Pera, and S. Meyers. Abnormal early cleavage events predict early embryo demise: Sperm oxidative stress and early abnormal cleavage. Scientific Reports, 2014.
[5] F. Moussavi, W. Yu, P. Lorenzen, J. Oakley, D. Russakoff, and S. Gould. A unified graphical model framework for automated human embryo tracking. In ISBI, 2014.
[6] M. Schiegg, P. Hanslovsky, B. X. Kausler, L. Hufnagel, and F. A. Hamprecht. Conservation tracking. In ICCV, 2013.
[7] Y. Wang, F. Moussavi, and P. Lorenzen. Automated embryo stage classification in TLM video of early human embryo development. In MICCAI, 2013.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[9] D. C. Cireşan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Convolutional neural network committees for handwritten character classification. In ICDAR, 2011.
[10] X. Lou, F. Kaster, M. Lindner, B. Kausler, U. Köthe, B. Höckendorf, J. Wittbrodt, H. Jänicke, and F. A. Hamprecht. DELTR: Digital Embryo Lineage Tree Reconstructor. In Eighth IEEE International Symposium on Biomedical Imaging (ISBI) Proceedings, 2011.
[11] D. C. Cireşan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In IJCAI, 2011.
[12] D. Strigl, K. Kofler, and S. Podlipnig. Performance and scalability of GPU-based convolutional neural networks. In PDP, 2010.
[13] R. Szeliski. Computer Vision: Algorithms and Applications. Springer Science and Business Media, 2010.
[14] C. Wong, K. Loewke, N. Bossert, B. Behr, C. De Jonge, T. Baer, and R. R. Pera. Non-invasive imaging of human embryos before embryonic genome activation predicts development to the blastocyst stage. Nature Biotechnology, 2010.
[15] K. Li, E. Miller, M. Chen, T. Kanade, L. Weiss, and P. Campbell. Computer vision tracking of stemness. In ISBI, 2008.
[16] F. Aguet, C. Vonesch, J.-L. Vonesch, and M. Unser. An introduction to fluorescence microscopy: Basic principles, challenges, and opportunities. In Microscopic Image Analysis for Life Science Applications. Artech House, Boston, MA, USA, 2008.
[17] H. Peng. Bioimage informatics: a new area of engineering biology. Bioinformatics, 2008.
[18] J. Zhou and H. Peng. Automatic recognition and annotation of gene expression patterns of fly embryos. Bioinformatics, 2007.
[19] F. Ning, D. Delhomme, Y. LeCun, F. Piano, L. Bottou, and P. E. Barbano. Toward automatic phenotyping of developing embryos from videos. IEEE Transactions on Image Processing, 2005.
[20] A. Yilmaz, O. Javed, and M. Shah. Object tracking: A survey. ACM Computing Surveys (CSUR), 2006.
[21] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.