A Multi-Feature Fusion Based on Transfer Learning for Chicken Embryo Eggs Classification
Figure 1. Specific Pathogen Free (SPF) chicken embryo egg samples at five to seven days of hatching, captured by a 1.2-megapixel (1292 × 964) industrial color camera produced by China Daheng Group Inc., with LED white-light candling directly from the top: (a) fertile embryo, a normally hatching egg with a rich vascular net; (b) weak embryo, a late-hatching egg with a sparser vascular net; (c) hemolytic embryo, a dying egg with hemolysis that gradually becomes an infected embryo; (d) cracked embryo, an egg with cracks or breakages in the eggshell; (e) infected embryo, a fully dead egg that has been infected by viruses and has become inconsistent in shape and opaque; and (f) infertile embryo, an unfertilized egg, which is the most transparent.
Figure 2. The proposed multi-feature fusion architecture with transfer learning for SPF chicken embryo egg classification.
Figure 3. The cropped regions of interest (ROI) of a chicken embryo.
Figure 4. The transfer learning process: loading the pre-trained AlexNet model, removing its last three layers, adding three new layers (fc6, fc7, and the Softmax layer), then fine-tuning the earlier layers and training the new layers on the chicken embryo dataset (a code sketch of this step follows the figure list).
Figure 5. Illustration of the inception5b structure.
Figure 6. Training loss of AlexNet.
Figure 7. Training loss of GoogLeNet.
Figure 8. Speeded Up Robust Feature (SURF) interest points of a fertile embryo.
Figure 9. Histogram of Oriented Gradients (HOG) and its visualization for a cracked embryo.
Figure 10. Data augmentation: (a) original image; (b) 90° rotation; (c) 180° rotation; (d) 270° rotation; (e) horizontal flipping; (f) Gaussian noise; (g) random cropping; and (h) vertical flipping.
Figure 11. Confusion matrix of the proposed Alex-Google-HS method, with an average accuracy of 98.4% on the test dataset.
Figure 12. The effect of the initial learning rate on the training of the AlexNet and GoogLeNet models.
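Figure 4 above outlines the head-replacement and fine-tuning procedure. Purely as an illustration, here is a minimal PyTorch sketch of that step, assuming torchvision's ImageNet-pretrained AlexNet; the authors' actual framework, layer names, and hyperparameters (the learning rates and momentum below are assumptions) are not specified in this excerpt.

```python
# Minimal transfer-learning sketch (PyTorch/torchvision assumed; this is an
# analogous illustration, not the authors' implementation).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # fertile, weak, hemolytic, cracked, infected, infertile

# Load AlexNet pre-trained on ImageNet.
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the final classifier layer so the network outputs six classes;
# earlier layers keep their pre-trained weights and are fine-tuned.
net.classifier[6] = nn.Linear(net.classifier[6].in_features, NUM_CLASSES)

# Fine-tune with SGD: a smaller learning rate for the transferred layers and
# a larger one for the freshly initialized head (a common heuristic).
optimizer = torch.optim.SGD(
    [
        {"params": net.features.parameters(), "lr": 1e-4},
        {"params": net.classifier.parameters(), "lr": 1e-3},
    ],
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()  # standard softmax cross-entropy
```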
Abstract
1. Introduction
- In order to resolve the issues of insufficient embryo samples and overfitting of the DCNNs during training, this paper employed data augmentation to greatly expand the dataset. Original images were preprocessed to obtain the Region of Interest (ROI), and the ROI was then augmented with image-processing techniques (see the augmentation sketch after this list). Two pre-trained state-of-the-art image classification models, AlexNet and GoogLeNet, were fine-tuned with transfer learning to further prevent overfitting and to learn deep features. The verified results show that the accuracy of the proposed model is higher than that of training from scratch.
- Multi-feature fusion of local SURF and HOG features with deep features provided complementary information, giving better generalization against the illumination changes and random spots of the eggshell (a fusion sketch follows this list). A comparative analysis of classification accuracy between different DCNNs on the same embryo sample dataset is provided. The experimental results show that the accuracy of this proposal is higher than that of other popular learning methods based on the color and texture of chicken embryos.
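For the augmentation step above (and in Figure 10), a minimal NumPy sketch follows; the noise level and crop ratio are illustrative assumptions, not values from the paper.

```python
# Sketch of the augmentations listed in Figure 10, using plain NumPy
# (the paper's actual tooling is not given in this excerpt).
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> list[np.ndarray]:
    """Return augmented variants of an H x W x 3 uint8 ROI image."""
    h, w = img.shape[:2]
    noisy = np.clip(
        img.astype(np.float32) + rng.normal(0.0, 10.0, img.shape), 0, 255
    ).astype(np.uint8)  # Gaussian noise (sigma = 10 is an assumed value)
    top = int(rng.integers(0, h // 10 + 1))
    left = int(rng.integers(0, w // 10 + 1))
    # Random crop at an assumed 90% ratio of the original size.
    crop = img[top : top + int(0.9 * h), left : left + int(0.9 * w)]
    return [
        np.rot90(img, 1),  # 90 degrees
        np.rot90(img, 2),  # 180 degrees
        np.rot90(img, 3),  # 270 degrees
        np.fliplr(img),    # horizontal flipping
        np.flipud(img),    # vertical flipping
        noisy,
        crop,
    ]

rng = np.random.default_rng(0)
variants = augment(np.zeros((964, 1292, 3), dtype=np.uint8), rng)
```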
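The fusion itself reduces to concatenating feature vectors and classifying them with an SVM (Table 1 names SVM as the classifier). The hedged sketch below uses scikit-image's HOG and scikit-learn; SURF requires an OpenCV contrib build (cv2.xfeatures2d), so the SURF and DCNN vectors are passed in as precomputed placeholders, and the HOG parameters shown are conventional defaults rather than the paper's settings.

```python
# Feature-fusion sketch: concatenate HOG, SURF, and deep features, then
# train an SVM. All dimensions and parameters here are assumptions.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def fused_vector(gray_roi: np.ndarray, surf_feat: np.ndarray,
                 deep_feat: np.ndarray) -> np.ndarray:
    """Concatenate HOG, SURF, and DCNN features for one grayscale ROI."""
    hog_feat = hog(
        gray_roi,                  # 2-D grayscale image
        orientations=9,            # conventional HOG settings (assumed)
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
    )
    return np.concatenate([hog_feat, surf_feat, deep_feat])

# Purely synthetic data to show the API; rows would be fused vectors and
# labels the six embryo classes in practice.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 512))
y = rng.integers(0, 6, size=60)
svm = LinearSVC().fit(X, y)
print(svm.score(X, y))
```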
2. Preprocessing
3. Methodology
3.1. Transfer Learning
3.2. Network Training and Deep Feature Extraction
3.3. SURF and HOG Feature Extraction
4. Experimentation and Discussions
4.1. Datasets
4.2. Performance Comparison
4.3. The Influence of Learning Rate on Learning Model
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
DCNN | Deep Convolutional Neural Network |
CNN | Convolutional Neural Network |
ROI | Region of Interest |
SURF | Speeded Up Robust Feature |
HOG | Histogram of Oriented Gradients |
SVM | Support Vector Machine |
SPF | Specific Pathogen Free |
BPNN | Back Propagation Neural Network |
ILSVRC | ImageNet Large Scale Visual Recognition Challenge |
PCA | Principal Component Analysis |
LVQNN | Learning Vector Quantization Neural Network |
PLSR | Partial Least Squares Regression |
CLAHE | Contrast Limited Adaptive Histogram Equalization |
LSSVM | Least Squares Support Vector Machine |
fc/FC | fully connected |
SGD | Stochastic Gradient Descent |
Alex-DF | AlexNet deep feature |
Google-DF | GoogLeNet deep feature |
Alex-HS | fusion of AlexNet deep feature, HOG, and SURF |
Google-HS | fusion of GoogLeNet deep feature, HOG, and SURF |
Alex-Google-HS | fusion of AlexNet deep feature, GoogLeNet deep feature, HOG, and SURF |
MLP | Multi-Layer Perceptron |
GPU | Graphics Processing Unit |
References
- Xu, Q.; Cui, F. Non-destructive detection on the fertility of injected SPF eggs in vaccine manufacture. In Proceedings of the 26th Chinese Control and Decision Conference, CCDC 2014, Changsha, China, 31 May–2 June 2014; pp. 1574–1579. [Google Scholar]
- Liu, L.; Ngadi, M.O. Detecting Fertility and Early Embryo Development of Chicken Eggs Using Near-Infrared Hyperspectral Imaging. Food Bioprocess Technol. 2013, 6, 2503–2513. [Google Scholar] [CrossRef]
- Hashemzadeh, M.; Farajzadeh, N. A Machine Vision System for Detecting Fertile Eggs in the Incubation Industry. Int. J. Comput. Intell. Syst. 2016, 9, 850–862. [Google Scholar] [CrossRef]
- Geng, L.; Liu, H.; Xiao, Z.; Yan, T.; Zhang, F.; Li, Y. Hatching egg classification based on CNN with channel weighting and joint supervision. Multimed. Tools Appl. 2018, 78. [Google Scholar] [CrossRef]
- Shan, B. Fertility detection of middle-stage hatching egg in vaccine production using machine vision. In Proceedings of the 2nd International Workshop on Education Technology and Computer Science, ETCS 2010, Wuhan, China, 6–7 March 2010; pp. 95–98. [Google Scholar]
- Zhang, W.; Pan, L.; Tu, K.; Zhang, Q.; Liu, M. Comparison of Spectral and Image Morphological Analysis for Egg Early Hatching Property Detection Based on Hyperspectral Imaging. PLoS ONE 2014, 9, e88659. [Google Scholar] [CrossRef]
- Geng, L.; Yan, T.; Xiao, Z.; Xi, J.; Li, Y. Hatching eggs classification based on deep learning. Multimed. Tools Appl. 2018, 77, 22071–22082. [Google Scholar] [CrossRef]
- Xu, M.; Papageorgiou, D.P.; Abidi, S.Z.; Dao, M.; Zhao, H.; Karniadakis, G.E. A deep convolutional neural network for classification of red blood cells in sickle cell anemia. PLoS Comput. Biol. 2017, 13, e1005746. [Google Scholar] [CrossRef]
- Shi, W.; Gong, Y.; Tao, X.; Cheng, D.; Zheng, N. Fine-Grained Image Classification Using Modified DCNNs Trained by Cascaded Softmax and Generalized Large-Margin Losses. IEEE Trans. Neural Netw. Learn. 2019, 30, 683–694. [Google Scholar] [CrossRef]
- Hu, K.; Zhang, Z.; Niu, X.; Zhang, Y.; Cao, C.; Xiao, F.; Gao, X. Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function. Neurocomputing 2018, 309, 179–191. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
- Galea, C.; Farrugia, R.A. Forensic Face Photo-Sketch Recognition Using a Deep Learning-Based Architecture. IEEE Signal Proc. Lett. 2017, 24, 1586–1590. [Google Scholar] [CrossRef]
- Barbedo, J.G.A. Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 2018, 153, 46–53. [Google Scholar] [CrossRef]
- Shao, L.; Zhu, F.; Li, X. Transfer Learning for Visual Categorization: A Survey. IEEE Trans. Neural Netw. Learn. 2015, 26, 1019–1034. [Google Scholar] [CrossRef] [PubMed]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations (ICLR) 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Zuo, Z.; Wang, G. Learning Discriminative Hierarchical Features for Object Recognition. IEEE Signal Proc. Lett. 2014, 21, 1159–1163. [Google Scholar] [CrossRef]
- Hu, J.; Chen, Z.; Yang, M.; Zhang, R.; Cui, Y. A Multiscale Fusion Convolutional Neural Network for Plant Leaf Recognition. IEEE Signal Proc. Lett. 2018, 25, 853–857. [Google Scholar] [CrossRef]
- Kumar, S.; Pandey, A.; Satwik, K.S.R.; Kumar, S.; Singh, S.K.; Singh, A.K.; Mohan, A. Deep learning framework for recognition of cattle using muzzle point image pattern. Measurement 2018, 116, 1–17. [Google Scholar] [CrossRef]
- Cai, L.; Zhu, J.; Zeng, H.; Chen, J.; Cai, C.; Ma, K. HOG-assisted deep feature learning for pedestrian gender recognition. J. Frankl. Inst. 2018, 355, 1991–2008. [Google Scholar] [CrossRef]
- Ferreira, A.D.S.; Freitas, D.M.; Da Silva, G.G.; Pistori, H.; Folhes, M.T. Weed detection in soybean crops using ConvNets. Comput. Electron. Agric. 2017, 143, 314–324. [Google Scholar] [CrossRef]
- Zhang, S.; Yang, H.; Yin, Z. Transferred Deep Convolutional Neural Network Features for Extensive Facial Landmark Localization. IEEE Signal Proc. Lett. 2016, 23, 478–482. [Google Scholar] [CrossRef]
- Liu, B.; Zhang, Y.; He, D.; Li, Y. Identification of Apple Leaf Diseases Based on Deep Convolutional Neural Networks. Symmetry 2018, 10, 111. [Google Scholar] [CrossRef]
- Lin, C.; Yeh, P.T.; Chen, D.; Chiou, Y.; Lee, C. The identification and filtering of fertilized eggs with a thermal imaging system. Comput. Electron. Agric. 2013, 91, 94–105. [Google Scholar] [CrossRef]
- Lawrence, K.C.; Smith, D.P.; Windham, W.R.; Heitschmidt, G.W.; Park, B. Egg embryo development detection with hyperspectral imaging. In Optics for Natural Resources, Agriculture, and Foods; SPIE: Boston, MA, USA, 2006. [Google Scholar]
- Zhu, Z.; Ma, M. The identification of white fertile eggs prior to incubation based on machine vision and least square support vector machine. Afr. J. Agric. Res. 2011, 6, 2699–2704. [Google Scholar]
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Proceedings of the 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 818–833. [Google Scholar]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, San Diego, CA, USA, 20–25 June 2005; IEEE Computer Society; pp. 886–893. [Google Scholar]
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Proceedings of the 28th Annual Conference on Neural Information Processing Systems 2014 (NIPS 2014), Montreal, QC, Canada, 8–13 December 2014; pp. 3320–3328. [Google Scholar]
- Sawada, Y.; Sato, Y.; Nakada, T.; Yamaguchi, S.; Ujimoto, K.; Hayashi, N. Improvement in Classification Performance Based on Target Vector Modification for All-Transfer Deep Learning. Appl. Sci. 2019, 9, 1281. [Google Scholar] [CrossRef]
Imaging Type | Categories | Methods | Objectives | Accuracy | Ref.
---|---|---|---|---|---
infrared thermal imager | 1800 fertile eggs | Sobel operator; fuzzy rules; gray-level co-occurrence matrix | identification of fertilized and dead eggs | 96% | [26]
near-infrared hyperspectral imaging with tungsten halogen lamp candling | 156 fertile and 18 infertile eggs | Gabor filter; ROI; Principal Component Analysis (PCA); K-means classifier; 1066–1076 nm wavelength | fertility detection of early-stage embryo hatching eggs | over 74.1% | [2]
hyperspectral imaging with halogen tungsten lamp at bottom | 150 green-shell embryos from day 0 to day 4, 30 eggs per day | PCA; morphological feature extraction; Learning Vector Quantization Neural Network (LVQNN) | non-destructive detection of egg hatching properties | over 97% overall; over 81% for weak embryos | [6]
hyperspectral imaging | 96 images of 6-day embryos including 48 trial samples | leave-one-out cross-validation with Partial Least Squares Regression (PLSR) model | egg embryo development detection | over 91.7% | [27]
CCD imaging with LED candling | 360 SPF eggs including 282 fertile and 78 infertile eggs | ROI; SUSAN for speckle elimination; Bottom-Hat filter for blood-vessel features | classification of fertile and infertile SPF eggs | 97.78% | [1,5]
CCD imaging with LED candling | 240 5-day eggs including 190 fertile and 50 infertile eggs | ROI; Contrast Limited Adaptive Histogram Equalization (CLAHE); binarization; BPNN | detecting fertile and infertile eggs in the incubation industry | 98.25% | [3]
CCD imaging with LED candling | 22,000 images of 9-day embryos including 20,000 for the training set | CNN with squeeze-and-excitation weighting module | classification of fertile and dead eggs | 98.7% | [4]
CCD imaging with LED candling | 2,000 images of 5-day hatching eggs including 1,200 for training | CNN features | classification of fertile, infertile, and dead hatching eggs | 99.5% | [7]
CCD imaging with LED candling | 60 fertile and 40 infertile eggs | Least Squares Support Vector Machine (LSSVM) | identification of white fertile eggs | 92.5% | [28]
CCD imaging with LED candling | 10,000 SPF eggs covering 6 kinds of embryos | ROI; feature fusion of AlexNet and GoogLeNet deep features, SURF, and HOG; Support Vector Machine (SVM) | classification of 6 kinds (fertile, weak, hemolytic, cracked, infected, and infertile) | over 98.4% | Ours
Region | Red Channel | Green Channel | Blue Channel
---|---|---|---
Air cell | 200 | 200 | 15
Excretory region | 113 | 30 | 5
Uninteresting region | 30 | 10 | 4
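The table above gives representative RGB values for the three regions. Purely as an illustration of how such values could drive ROI cropping, the sketch below labels each pixel by its nearest reference color; the paper's actual segmentation procedure is not detailed in this excerpt.

```python
# Nearest-reference-color labeling: a simple stand-in for the ROI-cropping
# step, using the channel values from the table above.
import numpy as np

REFERENCE = {
    "air_cell": (200, 200, 15),
    "excretory_region": (113, 30, 5),
    "uninteresting_region": (30, 10, 4),
}

def label_pixels(img: np.ndarray) -> np.ndarray:
    """Return an H x W array of region indices for an H x W x 3 image."""
    refs = np.array(list(REFERENCE.values()), dtype=np.float32)  # 3 x 3
    # Distance from every pixel to every reference color, then argmin.
    dists = np.linalg.norm(
        img.astype(np.float32)[:, :, None, :] - refs[None, None, :, :],
        axis=-1,
    )
    return dists.argmin(axis=-1)  # 0 = air cell, 1 = excretory, 2 = other
```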
Categories | Training Images | Test Images |
---|---|---|
fertile embryo | 5000 | 500 |
weak embryo | 1000 | 100 |
hemolytic embryo | 1000 | 100 |
cracked embryo | 1000 | 100 |
infected embryo | 1000 | 100 |
infertile embryo | 1000 | 100 |
total | 10,000 | 1000 |
Feature Fusions | MLP | HS | Alex-DF | Google-DF | Alex-HS | Google-HS | Alex-Google-HS
---|---|---|---|---|---|---|---
Accuracy (%) | 65.0 | 74.0 | 90.4 | 91.2 | 92.8 | 94.8 | 98.4
AlexNet Transferred Layers | Accuracy (%)
---|---|
conv1-conv4 | 92.6 |
conv1-conv5 | 98.4 |
conv1-conv5+fc6 | 98.1 |
conv1-conv5+fc6+fc7 | 97.9 |
Training Method | AlexNet Training Time | AlexNet Accuracy (%) | GoogLeNet Training Time | GoogLeNet Accuracy (%)
---|---|---|---|---
transfer learning | 16 m 37 s | 90.4 | 20 m 47 s | 91.2
training from scratch | 39 m 42 s | 87.5 | 45 m 34 s | 84.6
Method | Accuracy (%)
---|---|
VGG16-HS | 97.2 |
ResNet50-HS | 97.6 |
ResNet101-HS | 97.8 |
Ours | 98.4 |
Chicken Embryos | Fertile | Weak | Hemolytic | Cracked | Infected | Infertile
---|---|---|---|---|---|---
Accuracy (%) | 98.8 | 96.0 | 97.0 | 99.0 | 99.0 | 99.0
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Huang, L.; He, A.; Zhai, M.; Wang, Y.; Bai, R.; Nie, X. A Multi-Feature Fusion Based on Transfer Learning for Chicken Embryo Eggs Classification. Symmetry 2019, 11, 606. https://doi.org/10.3390/sym11050606