Deep-Learning-Based Polar-Body Detection for Automatic Cell Manipulation
Figure 1. Three-dimensional illustration of oocyte rotation.

Figure 2. Three typical problems for polar-body detection in oocyte-rotation processes. Polar bodies are indicated by red circles. (a) Polar body in a defocused situation. (b) Polar-body deformation caused by the micropipette. (c) Polar bodies of different sizes. (Throughout the paper, image height and width are 256 pixels, with 16 pixels representing 10 μm; a scale bar is shown in (a).)

Figure 3. Segmentation-network architecture.

Figure 4. Three data-augmentation methods.

Figure 5. Flowchart of the polar-body detection process.

Figure 6. Examples of training data. (a) Polar-body images of different oocytes. (b) Polar-body images of the same oocyte in the first rotation stage. (c) Polar-body images of the same oocyte in the second rotation stage. (d) Oocyte images without a polar body, used as negative samples. Each group contains two rows: first row, oocyte images; second row, corresponding masks. White regions mark the polar bodies.

Figure 7. Polar-body detection results in (a) the defocused situation, (b) the deformation situation, and (c) different oocytes. For each result, the top row shows the original oocyte image and the bottom row the corresponding segmented map. Polar bodies are marked by red rectangles for clarity.

Figure 8. Comparison with traditional image-segmentation methods. Each column shows the segmentation results of (a) the original image, (b) our method, (c) the Otsu thresholding algorithm [22], (d) the Gibbs algorithm [23], and (e) the GrabCut algorithm [24].

Figure 9. Polar-body detection results during cell rotation. (a) In the first rotation stage, the polar body goes from invisible to visible and from blurred to clear. (b) In the second rotation stage, the polar body is rotated to the desired position in the focal plane. (c) In the second rotation stage, the polar body is often deformed by contact with the injection micropipette. For each process, the first row shows the original oocyte images, with the polar body in different positions along the time series of the rotation; the second row shows the corresponding segmented maps, in which the brightness of a white pixel represents the confidence of the prediction; the third row shows the polar body in a 3D view, its position marked by a blue circle.

Figure 10. Detection results for two different oocytes during the rotation process. (a) Polar body from invisible to visible. (b) Polar body rotated around the cell circle.
Abstract
1. Introduction
2. Deep-Learning-Based Polar-Body Detection
2.1. Segmentation Based on Convolutional Network
2.1.1. Network Architecture
2.1.2. Loss Function for Network Training
2.2. Data Collection and Augmentation for CNN Training
- Generate random displacement fields for each pixel of the image.
- Convolve the displacement fields with a Gaussian kernel of standard deviation σ and mean value μ.
- Generate the deformed image by resampling the original image according to the new displacements.
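The elastic-deformation steps above can be sketched as follows; this follows the Simard et al. recipe [19], and the parameter values (`alpha` for displacement scale, `sigma` for smoothing) are illustrative placeholders, not the settings used in the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=34.0, sigma=4.0, seed=None):
    """Elastic deformation of a 2D grayscale image.

    alpha scales the displacement magnitude; sigma is the standard
    deviation of the Gaussian used to smooth the displacement fields.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Step 1: random per-pixel displacement fields, uniform in [-1, 1].
    dx = rng.uniform(-1, 1, (h, w))
    dy = rng.uniform(-1, 1, (h, w))
    # Step 2: smooth the fields with a Gaussian, then scale by alpha.
    dx = gaussian_filter(dx, sigma) * alpha
    dy = gaussian_filter(dy, sigma) * alpha
    # Step 3: resample the original image at the displaced coordinates.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.vstack([(ys + dy).ravel(), (xs + dx).ravel()])
    warped = map_coordinates(image, coords, order=1, mode="reflect")
    return warped.reshape(h, w)
```

The same displacement fields would also be applied to the ground-truth mask so that image and label stay aligned.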
2.3. Polar-Body Detection Process
- Feed the oocyte image (the first picture on the right) into the trained network to obtain a black-and-white segmented map (the second picture).
- Perform non-maximum suppression to keep the most likely polar-body area. The segmented map usually contains multiple connected regions, each a candidate polar-body area, and non-maximum suppression discards the less likely ones using two constraints. The first is the maximum value in a region: since pixel values in the segmented map represent probabilities, regions whose maximum value is below the threshold of 0.5 are discarded. The second is a threshold on the number of pixels in a region. The areas of the surviving regions are then calculated.
- Check whether a satisfying region exists. If so, take the mean of all pixel locations in that region as the center of the polar body; otherwise, report that there is no polar body in the image.
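The suppression-and-localization steps above can be sketched as follows. The 0.5 peak-probability cutoff comes from the text; the binarization level (0.1) and the pixel-count threshold `min_area` are illustrative assumptions, since the paper does not state their values:

```python
import numpy as np
from scipy.ndimage import label

def locate_polar_body(prob_map, peak_thresh=0.5, min_area=50):
    """Pick the polar-body center from the network's probability map.

    Returns (row, col) of the region center, or None if no region
    satisfies both constraints.
    """
    # Binarize at a low level to collect all candidate regions,
    # then label the connected components.
    labeled, n_regions = label(prob_map > 0.1)
    best = None
    for i in range(1, n_regions + 1):
        mask = labeled == i
        # Constraint 1: the region's peak probability must reach the threshold.
        if prob_map[mask].max() < peak_thresh:
            continue
        # Constraint 2: the region must contain enough pixels.
        area = int(mask.sum())
        if area < min_area:
            continue
        if best is None or area > best[0]:
            best = (area, mask)
    if best is None:
        return None  # no polar body in the image
    # Center = mean of all pixel locations in the winning region.
    ys, xs = np.nonzero(best[1])
    return float(ys.mean()), float(xs.mean())
```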
3. Experimental Results
3.1. Dataset and Platform
3.2. Polar-Body Detection Results
3.3. Method Comparison
3.4. Application of Polar-Body Detection in Cell Rotation
4. Conclusions
Supplementary Materials
Author Contributions
Funding
Conflicts of Interest
References
- Gianaroli, L. Preimplantation genetic diagnosis: Polar body and embryo biopsy. Hum. Reprod. 2000, 15, 69–75. [Google Scholar] [CrossRef] [PubMed]
- Stein, P.; Schultz, R. ICSI in the mouse. In Methods in Enzymology; Elsevier: Amsterdam, The Netherlands, 2010; Volume 476, pp. 251–262. [Google Scholar]
- Leung, C.; Lu, Z.; Zhang, X.; Sun, Y. Three-dimensional rotation of mouse embryos. IEEE Trans. Biomed. Eng. 2012, 59, 1049–1056. [Google Scholar] [CrossRef] [PubMed]
- Wang, Y.L.; Zhao, X.; Zhao, Q.L.; Lu, G.Z. Illumination intensity evaluation of microscopic image based on texture information and application on locating polar body in oocytes. In Proceedings of the China Automation Conference, Beijing, China, 7–10 August 2011; pp. 7–10. [Google Scholar]
- Wang, Z.; Feng, C.; Ang, W.; Tan, S.Y.; Latt, W. Autofocusing and polar body detection in automated cell manipulation. IEEE Trans. Biomed. Eng. 2017, 64, 1099–1105. [Google Scholar] [CrossRef] [PubMed]
- Chen, D.; Sun, M.; Zhao, X. Oocytes polar body detection for automatic enucleation. Micromachines 2016, 7, 27. [Google Scholar] [CrossRef] [PubMed]
- Hearst, M.; Dumais, S.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G. Imagenet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 91–99. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv, 2014; arXiv:1409.1556. [Google Scholar]
- Xu, Y.; Mo, T.; Feng, Q.; Zhong, P.; Lai, M.; Chang, E.I.C. Deep learning of feature representation with multiple instance learning for medical image analysis. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 1626–1630. [Google Scholar]
- Albarqouni, S.; Baur, C.; Achilles, F.; Belagiannis, V.; Demirci, S.; Navab, N. Aggnet: Deep learning from crowds for mitosis detection in breast cancer histology images. IEEE Trans. Med. Imaging 2016, 35, 1313–1321. [Google Scholar] [CrossRef] [PubMed]
- Shen, W.; Zhou, M.; Yang, F.; Yang, C.; Tian, J. Multi-scale convolutional neural networks for lung nodule classification. In Proceedings of the International Conference on Information Processing in Medical Imaging, Sabhal Mor Ostaig, UK, 28 June–3 July 2015; pp. 588–599. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing And Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Simard, P.; Steinkraus, D.; Platt, J. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, Edinburgh, UK, 6 August 2003; p. 958. [Google Scholar]
- Chollet, F. Keras. Available online: https://github.com/fchollet/keras (accessed on 13 February 2019).
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. Tensorflow: A system for large-scale machine learning. OSDI 2016, 16, 265–283. [Google Scholar]
- Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
- Derin, H.; Elliott, H. Modeling and segmentation of noisy and textured images using Gibbs random fields. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 39–55. [Google Scholar] [CrossRef]
- Rother, C.; Kolmogorov, V.; Blake, A. Grabcut: Interactive foreground extraction using iterated graph cuts. ACM Trans. Graph. 2004, 23, 309–314. [Google Scholar] [CrossRef]
| Class | Test Samples | Correct Predictions | ACC |
|---|---|---|---|
| Positive | 800 | 787 | 98.3% |
| Negative | 200 | 200 | 100% |
| Summary | 1000 | 987 | 98.7% |
| Situations | Test Samples | Correct Predictions | ACC |
|---|---|---|---|
| Defocused | 128 | 126 | 98.4% |
| Deformation | 103 | 100 | 97.1% |
| Different oocytes | 74 | 73 | 98.6% |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, Y.; Liu, Y.; Sun, M.; Zhao, X. Deep-Learning-Based Polar-Body Detection for Automatic Cell Manipulation. Micromachines 2019, 10, 120. https://doi.org/10.3390/mi10020120