A Fast Segmentation Method for Fire Forest Images Based on Multiscale Transform and PCA
Figure 1. Overview of the proposed segmentation framework. I_fi is the i-th image of Gabor features.
Figure 2. Spatial localization of the 2D sinusoid (left), the Gaussian function (middle), and the corresponding 2D Gabor filter (right).
Figure 3. Example of the receptive field of the 2D Gabor filter.
Figure 4. Example of object boundary extraction using Gabor filters of (f1, θ3). First row: original images. Second row: boundary images.
Figure 5. Convolution outputs of a synthetic image of sinusoids with various properties (orientations, frequencies, and magnitudes) by Gabor filters of different frequencies and orientations (fk, θl).
Figure 6. MMGR-WT robustness test. First row: original images from the MSRC dataset. Second row: superpixel extraction results on the original images. Third to fifth rows: results obtained for images corrupted by 5%, 10%, and 15% salt-and-pepper noise.
Figure 7. MMGR-WT superpixel extraction: test on uniform and textured regions. First row: original images from the MSRC dataset. Second row: obtained results.
Figure 8. Example of the end-to-end segmentation pipeline with our proposed method.
Figure 9. Synthetic test images of 256 × 256 pixels with manually created ground truth (GT). (a) Images with different regions of real content. (b) The corresponding desired segmentation (GT).
Figure 10. Robustness comparison of SFFCM and our proposed method (G-WT) on the synthetic images SI1–SI6 (Figure 9) corrupted by 10%, 20%, 30%, and 40% Gaussian noise.
Figure 11. Robustness comparison of SFFCM and our proposed method (G-WT) on the synthetic images SI1–SI6 (Figure 9) corrupted by 10%, 20%, 30%, and 40% salt-and-pepper noise.
Figure 12. A set of real test fire forest images [21].
Figure 13. Observations of 30 human subjects about the number of clusters in the real images of Figure 12.
Figure 14. Comparison of segmentation results for the original image Ima 1 (a) and its version corrupted by 10% salt-and-pepper noise (b), obtained by SFFCM (first row), the proposed method based on Simple Linear Iterative Clustering (SLIC) (second row), and the proposed method based on the Watershed Transform (WT) (third row).
Figure 15. Comparison of segmentation results for the original image Ima 2 (a) and its version corrupted by 10% salt-and-pepper noise (b), obtained by SFFCM (first row), the proposed method based on SLIC (second row), and the proposed method based on WT (third row).
Figure 16. Comparison of segmentation results for the original image Ima 3 (a) and its version corrupted by 10% salt-and-pepper noise (b), obtained by SFFCM (first row), the proposed method based on SLIC (second row), and the proposed method based on WT (third row).
Figure 17. Comparison of segmentation results for the original image Ima 4 (a) and its version corrupted by 10% salt-and-pepper noise (b), obtained by SFFCM (first row), the proposed method based on SLIC (second row), and the proposed method based on WT (third row).
Figure 18. Comparison of segmentation results for the original image Ima 5 (a) and its version corrupted by 10% salt-and-pepper noise (b), obtained by SFFCM (first row), the proposed method based on SLIC (second row), and the proposed method based on WT (third row).
Figure 19. Comparison of segmentation results for the original image Ima 6 (a) and its version corrupted by 10% salt-and-pepper noise (b), obtained by SFFCM (first row), the proposed method based on SLIC (second row), and the proposed method based on WT (third row).
Figure 20. Comparison of SFFCM and our proposed method with WT and SLIC pre-segmentation (G-WT, G-SLIC), based on sensitivity (a,c) and specificity (b,d) averaged over 10 experiments. First row: test on the natural images listed in Table 2. Second row: test on images corrupted with 10% salt-and-pepper noise.
Abstract
1. Introduction
- (1) A multiresolution image transformation based on 2D Gabor filtering, combined with a morphological gradient construction, to generate a superpixel image with accurate boundaries. This proposition integrates a multiscale neighborhood system to address rotation, illumination, scale, and translation variance, which is especially useful for high-resolution images.
- (2) We introduce Principal Component Analysis (PCA) to reduce the number of extracted Gabor features. From the obtained regions, we compute a simple color histogram to reduce the number of distinct intensities and achieve fast clustering for color image segmentation. A code sketch of this pre-segmentation idea follows this list.
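For illustration only, the following minimal Python sketch outlines the pre-segmentation idea described in contributions (1) and (2): a small 2D Gabor filter bank produces feature images, PCA reduces them, and a watershed of the morphological gradient yields superpixels. It is not the authors' implementation; the input file name, the filter frequencies and orientations, the number of PCA components, and the structuring-element radius are illustrative assumptions, and standard scikit-image/scikit-learn routines are used as building blocks.

```python
# Minimal sketch (not the paper's exact pipeline): Gabor feature bank -> PCA ->
# morphological gradient -> watershed superpixels. All parameter values are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color
from skimage.filters import gabor_kernel
from skimage.morphology import disk, dilation, erosion
from skimage.segmentation import watershed
from sklearn.decomposition import PCA

img = color.rgb2gray(io.imread("fire_scene.jpg"))     # hypothetical input image

# 1) Gabor feature images I_f1 ... I_fn for a few frequencies and orientations.
features = []
for frequency in (0.1, 0.2, 0.4):                     # example frequencies f_k
    for theta in np.arange(4) * np.pi / 4:            # example orientations theta_l
        kernel = gabor_kernel(frequency, theta=theta)
        # Magnitude of the complex Gabor response.
        real = ndi.convolve(img, np.real(kernel), mode="wrap")
        imag = ndi.convolve(img, np.imag(kernel), mode="wrap")
        features.append(np.hypot(real, imag))
feature_stack = np.stack(features, axis=-1)           # H x W x n_filters

# 2) PCA keeps the leading component(s) of the Gabor responses (feature reduction).
h, w, n = feature_stack.shape
reduced = PCA(n_components=1).fit_transform(feature_stack.reshape(-1, n))
reduced = reduced.reshape(h, w)

# 3) Morphological gradient of the reduced feature image, then watershed.
se = disk(2)                                           # structuring element (example radius)
gradient = dilation(reduced, se) - erosion(reduced, se)
superpixels = watershed(gradient)                      # local minima used as markers

print("number of superpixels:", superpixels.max())
```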
2. Motivation
2.1. Motivation for Using Superpixels with Gabor Filtering
2.2. Motivation for Using Color Images Histograms
2.3. Fire Forest Image Application
3. Methodology
- The pre-segmentation, also called superpixel extraction;
- The clustering of the previously extracted superpixels.
3.1. Superpixels Based on Gabor Filtering and Morphological Operations
3.1.1. Superpixels Extraction: An Overview
3.1.2. Gabor Filters and Their Characteristics
3.1.3. Gabor Feature Reduction Based on PCA
3.1.4. Pre-segmentation Based on Gabor-WT
3.2. Fuzzy Superpixels Clustering
3.2.1. Overview
Algorithm 1. Traditional FCM algorithm.
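The body of Algorithm 1 is not reproduced here. As a reminder of the traditional FCM scheme it refers to, a minimal NumPy sketch of the textbook update equations (membership update and center update, alternated until convergence) is given below; the variable names, initialization, and stopping rule are ours.

```python
# Minimal NumPy sketch of the traditional FCM update loop (textbook form, not the
# paper's listing). x: (N, d) pixel features, c: number of clusters, m: fuzzifier.
import numpy as np

def fcm(x, c=3, m=2.0, max_iter=100, tol=1e-5, rng=np.random.default_rng(0)):
    n = x.shape[0]
    u = rng.random((n, c))
    u /= u.sum(axis=1, keepdims=True)                    # random fuzzy partition matrix
    for _ in range(max_iter):
        um = u ** m
        v = (um.T @ x) / um.sum(axis=0)[:, None]         # cluster centers v_k
        d = np.linalg.norm(x[:, None, :] - v[None, :, :], axis=2) + 1e-10
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(u_new - u).max() < tol:                # stop when memberships stabilize
            u = u_new
            break
        u = u_new
    return u, v
```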
3.2.2. The Proposed Clustering Method
Algorithm 2. SFFCM algorithm.
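The body of Algorithm 2 is likewise not reproduced. The sketch below illustrates the superpixel-level fuzzy clustering idea behind SFFCM [9] as we understand it: each superpixel is summarized by its mean color and weighted by its pixel count, so the FCM iterations run over a few hundred entries instead of every pixel. The function name, the mapping back to the pixel grid, and the parameter defaults are illustrative assumptions, and `labels` is assumed to come from a pre-segmentation step such as Gabor-WT.

```python
# Sketch of superpixel-weighted fuzzy clustering in the spirit of SFFCM [9]:
# each superpixel is represented by its mean color and weighted by its pixel count.
import numpy as np

def superpixel_fcm(image, labels, c=3, m=2.0, max_iter=100, tol=1e-5,
                   rng=np.random.default_rng(0)):
    ids = np.unique(labels)
    # Mean color x_bar_l and size S_l of every superpixel (a compact color "histogram").
    x_bar = np.array([image[labels == l].mean(axis=0) for l in ids])
    s = np.array([(labels == l).sum() for l in ids], dtype=float)

    u = rng.random((len(ids), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        um = (u ** m) * s[:, None]                             # size-weighted memberships
        v = (um.T @ x_bar) / um.sum(axis=0)[:, None]           # weighted cluster centers
        d = np.linalg.norm(x_bar[:, None, :] - v[None, :, :], axis=2) + 1e-10
        u_new = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    # Hard label per superpixel, mapped back to the pixel grid.
    sp_class = u.argmax(axis=1)
    lut = dict(zip(ids.tolist(), sp_class.tolist()))
    return np.vectorize(lut.get)(labels), v
```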
3.3. Evaluation Criteria
- a. Segmentation accuracy
- b. Sensitivity and Specificity (a worked sketch of these criteria follows this list)
- True Positives (TP): the intersection between the segmentation and the ground truth
- True Negatives (TN): the pixels belonging neither to the segmentation nor to the ground truth
- False Positives (FP): segmented parts that do not overlap the ground truth
- False Negatives (FN): missed parts of the ground truth
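For binary (fire/non-fire) masks these criteria reduce to simple pixel-count ratios; a minimal sketch follows. It assumes the usual definitions Sensitivity = TP/(TP + FN) and Specificity = TN/(TN + FP); the segmentation accuracy shown here is the plain (TP + TN)/N ratio, which may differ from the per-class accuracy score used in the paper.

```python
# Sketch of the evaluation criteria on binary masks (seg, gt: boolean arrays of equal shape).
import numpy as np

def evaluation_scores(seg, gt):
    tp = np.logical_and(seg, gt).sum()        # true positives
    tn = np.logical_and(~seg, ~gt).sum()      # true negatives
    fp = np.logical_and(seg, ~gt).sum()       # false positives
    fn = np.logical_and(~seg, gt).sum()       # false negatives
    accuracy = (tp + tn) / seg.size           # pixel-wise segmentation accuracy
    sensitivity = tp / (tp + fn)              # fraction of ground truth recovered
    specificity = tn / (tn + fp)              # fraction of background correctly rejected
    return accuracy, sensitivity, specificity
```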
4. Experimental Results
5. Application on Fire Forest Images
5.1. Results on Synthetic Images
5.2. Results on Real Images
5.3. Application on Other Natural Images
6. Conclusions and Future Work
Author Contributions
Funding
Conflicts of Interest
References
1. Nemalidinne, S.M.; Gupta, D. Nonsubsampled contourlet domain visible and infrared image fusion framework for fire detection using pulse coupled neural network and spatial fuzzy clustering. Fire Saf. J. 2018, 101, 84–101.
2. Al-Dhief, F.T.; Sabri, N.; Fouad, S.; Latiff, N.M.A.; Albader, M.A.A. A review of forest fire surveillance technologies: Mobile ad-hoc network routing protocols perspective. J. King Saud Univ. Comput. Inf. Sci. 2019, 31, 135–146.
3. Ajith, M.; Martinez-Ramon, M. Unsupervised Segmentation of Fire and Smoke from Infra-Red Videos. IEEE Access 2019, 7, 182381–182394.
4. Yuan, F.; Shi, J.; Xia, X.; Zhang, L.; Li, S. Encoding pairwise Hamming distances of Local Binary Patterns for visual smoke recognition. Comput. Vis. Image Underst. 2019, 178, 43–53.
5. Gonçalves, W.N.; Machado, B.B.; Bruno, O.M. Spatiotemporal Gabor filters: A new method for dynamic texture recognition. arXiv 2012, arXiv:1201.3612.
6. Dileep, R.; Appana, K.; Kim, J. Smoke Detection Approach Using Wavelet Energy and Gabor Directional Orientations. In Proceedings of the 12th IRF International Conference, Hyderabad, India, 26 June 2016.
7. Yuan, F. Video-based smoke detection with histogram sequence of LBP and LBPV pyramids. Fire Saf. J. 2011, 46, 132–139.
8. Xu, G.; Zhang, Y.; Zhang, Q.; Lin, G.; Wang, Z.; Jia, Y.; Wang, J. Video smoke detection based on deep saliency network. Fire Saf. J. 2019, 105, 277–285.
9. Lei, T.; Jia, X.; Zhang, Y.; Liu, S.; Meng, H.; Nandi, A.K. Superpixel-Based Fast Fuzzy C-Means Clustering for Color Image Segmentation. IEEE Trans. Fuzzy Syst. 2019, 27, 1753–1766.
10. Guo, L.; Chen, L.; Chen, C.L.P.; Zhou, J. Integrating guided filter into fuzzy clustering for noisy image segmentation. Digit. Signal Process. 2018, 83, 235–248.
11. Miao, J.; Zhou, X.; Huang, Z.T. Local segmentation of images using an improved fuzzy C-means clustering algorithm based on self-adaptive dictionary learning. Appl. Soft Comput. 2020, 91, 106200.
12. Li, C.; Huang, Y.; Zhu, L. Color texture image retrieval based on Gaussian copula models of Gabor wavelets. Pattern Recognit. 2017, 64, 118–129.
13. Dios, J.R.M.; Arrue, B.C.; Ollero, A.; Merino, L.; Gómez-Rodríguez, F. Computer vision techniques for forest fire perception. Image Vis. Comput. 2008, 26, 550–562.
14. Wang, Y.; Chua, C.S. Face recognition from 2D and 3D images using 3D Gabor filters. Image Vis. Comput. 2005, 23, 1018–1028.
15. Kaljahi, M.A.; Shivakumara, P.; Idris, M.Y.I.; Anisi, M.H.; Lu, T.; Blumenstein, M.; Noor, N.M. An automatic zone detection system for safe landing of UAVs. Expert Syst. Appl. 2019, 122, 319–333.
16. Parida, P.; Bhoi, N. 2-D Gabor filter based transition region extraction and morphological operation for image segmentation. Comput. Electr. Eng. 2017, 62, 119–134.
17. Riabchenko, E.; Kämäräinen, J.K. Generative part-based Gabor object detector. Pattern Recognit. Lett. 2015, 68, 1–8.
18. Wong, A.K.K.; Fong, N.K. Experimental study of video fire detection and its applications. Procedia Eng. 2014, 71, 316–327.
19. Çetin, A.E.; Dimitropoulos, K.; Gouverneur, B.; Grammalidis, N.; Günay, O.; Habiboǧlu, Y.H.; Verstockt, S. Video fire detection—Review. Digit. Signal Process. 2013, 23, 1827–1843.
20. Sudhakar, S.; Vijayakumar, V.; Kumar, C.S.; Priya, V.; Ravi, L.; Subramaniyaswamy, V. Unmanned Aerial Vehicle (UAV) based forest fire detection and monitoring for reducing false alarms in forest fires. Comput. Commun. 2020, 149, 1–16.
21. Toulouse, T.; Rossi, L.; Akhloufi, M.; Celik, T.; Maldague, X. Benchmarking of wildland fire colour segmentation algorithms. IET Image Process. 2015, 9, 1064–1072.
22. Ganesan, P.; Sathish, B.S.; Sajiv, G. A comparative approach of identification and segmentation of forest fire region in high resolution satellite images. In Proceedings of the 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), Coimbatore, India, 29 February–1 March 2016; pp. 3–8.
23. Ko, B.; Jung, J.H.; Nam, J.Y. Fire detection and 3D surface reconstruction based on stereoscopic pictures and probabilistic fuzzy logic. Fire Saf. J. 2014, 68, 61–70.
24. Wang, M.; Liu, X.; Gao, Y.; Ma, X.; Soomro, N.Q. Superpixel segmentation: A benchmark. Signal Process. Image Commun. 2017, 56, 28–39.
25. Ma, J.; Li, S.; Qin, H.; Hao, A. Unsupervised Multi-Class Co-Segmentation via Joint-Cut over L1-Manifold Hyper-Graph of Discriminative Image Regions. IEEE Trans. Image Process. 2017, 26, 1216–1230.
26. Shang, R.; Tian, P.; Jiao, L.; Stolkin, R.; Feng, J.; Hou, B.; Zhang, X. Metric Based on Immune Clone for SAR Image Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 1–13.
27. Neubert, P.; Protzel, P. Compact Watershed and Preemptive SLIC: On Improving Trade-Offs of Superpixel Segmentation Algorithms. In Proceedings of the 2014 22nd International Conference on Pattern Recognition, Stockholm, Sweden, 24–28 August 2014; pp. 996–1001.
28. Shang, R.; Chen, C.; Wang, G.; Jiao, L.; Okoth, M.A.; Stolkin, R. A thumbnail-based hierarchical fuzzy clustering algorithm for SAR image segmentation. Signal Process. 2020, 171, 107518.
29. Tlig, L.; Sayadi, M.; Fnaiech, F. A new fuzzy segmentation approach based on S-FCM type 2 using LBP-GCO features. Signal Process. Image Commun. 2012, 27, 694–708.
30. Zhu, Z.; Jia, S.; He, S.; Sun, Y.; Ji, Z.; Shen, L. Three-dimensional Gabor feature extraction for hyperspectral imagery classification using a memetic framework. Inf. Sci. 2015, 298, 274–287.
31. Tadic, V.; Popovic, M.; Odry, P. Fuzzified Gabor filter for license plate detection. Eng. Appl. Artif. Intell. 2016, 48, 40–58.
32. Tan, X.; Triggs, B. Fusing Gabor and LBP feature sets for kernel-based face recognition. Lect. Notes Comput. Sci. 2007, 4778, 235–249.
33. Kim, S.; Yoo, C.D.; Nowozin, S.; Kohli, P. Higher-Order Correlation Clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1761–1774.
34. Zanaty, E.A. Determining the number of clusters for kernelized fuzzy C-means algorithms for automatic medical image segmentation. Egypt. Inform. J. 2012, 13, 39–58.
35. Qu, F.; Hu, Y.; Xue, Y.; Yang, Y. A modified possibilistic fuzzy c-means clustering algorithm. Proc. Int. Conf. Nat. Comput. 2013, 13, 858–862.
36. Zheng, Y.; Jeon, B.; Xu, D.; Wu, Q.M.J.; Zhang, H. Image segmentation by generalized hierarchical fuzzy C-means algorithm. J. Intell. Fuzzy Syst. 2015, 28, 961–973.
37. Gu, J.; Jiao, L.; Yang, S.; Zhao, J. Sparse learning based fuzzy c-means clustering. Knowl.-Based Syst. 2017, 119, 113–125.
38. Stutz, D.; Hermans, A.; Leibe, B. Superpixels: An evaluation of the state-of-the-art. Comput. Vis. Image Underst. 2018, 166, 1–27.
39. Gamino-Sánchez, F.; Hernández-Gutiérrez, I.V.; Rosales-Silva, A.J.; Gallegos-Funes, F.J.; Mújica-Vargas, D.; Ramos-Díaz, E.; Kinani, J.M.V. Block-Matching Fuzzy C-Means clustering algorithm for segmentation of color images degraded with Gaussian noise. Eng. Appl. Artif. Intell. 2018, 73, 31–49.
40. Xu, G.; Li, X.; Lei, B.; Lv, K. Unsupervised color image segmentation with color-alone feature using region growing pulse coupled neural network. Neurocomputing 2018, 306, 1–16.
Table 1. Parameter settings of the compared methods (values not recovered from the source are left blank).

| Method | Pre-segmentation | Classification |
|---|---|---|
| SFFCM [9] | Min, Max radius: | , |
| Gabor-SLIC | Number of desired superpixels: ; weighting factor: ; threshold for region merging: | , |
| Gabor-WT | Structuring element (SE): a disk; radius: | , |
Table 2. Natural test images from the BSDS500 and MSRC datasets.

| Image | Name | Dataset |
|---|---|---|
| I1 | “55067” | BSDS500 |
| I2 | “41004” | BSDS500 |
| I3 | “311068” | BSDS500 |
| I4 | “3096” | BSDS500 |
| I5 | “66075” | BSDS500 |
| I6 | “5_26_s” | MSRC |
| I7 | “9_10_s” | MSRC |
| I8 | “10_1_s” | MSRC |
| I9 | “2_21_s” | MSRC |
| I10 | “2_22_s” | MSRC |
| I11 | “2_27_s” | MSRC |
| I12 | “4_13_s” | MSRC |
| I13 | “2_8_s” | MSRC |
| I14 | “4_26_s” | MSRC |
| I15 | “3_20_s” | MSRC |
| I16 | “2_17_s” | MSRC |
| I17 | “2_3_s” | MSRC |
| I18 | “3_24_s” | MSRC |
| I19 | “2_20_s” | MSRC |
| I20 | “5_22_s” | MSRC |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Tlig, L.; Bouchouicha, M.; Tlig, M.; Sayadi, M.; Moreau, E. A Fast Segmentation Method for Fire Forest Images Based on Multiscale Transform and PCA. Sensors 2020, 20, 6429. https://doi.org/10.3390/s20226429