Robust Feature Matching Method for SAR and Optical Images by Using Gaussian-Gamma-Shaped Bi-Windows-Based Descriptor and Geometric Constraint
Figure 1. Interest point detection test. Point features are marked with red crosses. (a) Results of the original DoG detector in the SIFT method [8]; (b) results of the improved DoG detector [14,15,16]; (c) results of the proposed method.
Figure 2. Comparison of gradient computation. (a) A pair of registered TerraSAR-X (X band, HH polarization) and Google Earth (panchromatic band) images in the Netherlands; each image is 330 × 330 pixels with a spatial resolution of 1.85 m. (b) Gradient magnitude and orientation of the SAR and optical images computed by the grayscale-difference-based gradient operator [8]. (c) Magnitude and orientation computed by the gradient operator of the SAR-SIFT matching method [17]. (d) Magnitude and orientation computed by the proposed gradient operator. Orientation values are colored with a standard jet color map, where blue and red denote the minimum and maximum values, respectively; intermediate values are interpolated through blue, green, yellow, and red.
Figure 3. Flowchart of descriptor computation.
Figure 4. Descriptor evaluation.
Figure 5. Geometric constraint based on feature distance and orientation. (F_j^sar, F_j^optical), j = 1, …, J, is a pair of matches in S_h, and (C_i^sar, C_i^optical), i = 1, …, I, is a pair of candidate matches. θ_1 is the angle between the vertical axis and the line from F_j^sar to C_i^sar, and θ_2 is the angle between the vertical axis and the line from F_j^optical to C_i^optical. l_1 is the Euclidean distance between F_j^sar and C_i^sar, and l_2 is the Euclidean distance between F_j^optical and C_i^optical. The spatial resolutions (unit: meter) of the reference image and the searching image are λ_1 and λ_2, respectively.
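The distance-and-orientation constraint illustrated in the Figure 5 caption can be sketched as follows. This is a minimal illustration assuming (x, y) pixel coordinates and metric resolutions λ_1 and λ_2; the function name, threshold names, and default values are assumptions for the example, not the paper's tuned settings.

```python
import math

def passes_geometric_constraint(f_sar, c_sar, f_opt, c_opt,
                                lam1, lam2,
                                ratio_thresh=0.2, angle_thresh_deg=10.0):
    """Distance-and-orientation check of a candidate match (C_i^sar, C_i^optical)
    against a seed match (F_j^sar, F_j^optical), as illustrated in Figure 5.

    Points are (x, y) pixel coordinates; lam1/lam2 are the spatial resolutions
    (m/pixel). Threshold names and defaults here are illustrative assumptions.
    """
    # Ground distances l1 * lambda1 and l2 * lambda2 (meters)
    d1 = math.hypot(c_sar[0] - f_sar[0], c_sar[1] - f_sar[1]) * lam1
    d2 = math.hypot(c_opt[0] - f_opt[0], c_opt[1] - f_opt[1]) * lam2
    if max(d1, d2) == 0.0:
        return False
    # Relative difference of the resolution-corrected distances
    if abs(d1 - d2) / max(d1, d2) > ratio_thresh:
        return False
    # Angles theta1 and theta2 measured from the vertical (+y) axis
    theta1 = math.degrees(math.atan2(c_sar[0] - f_sar[0], c_sar[1] - f_sar[1]))
    theta2 = math.degrees(math.atan2(c_opt[0] - f_opt[0], c_opt[1] - f_opt[1]))
    diff = abs(theta1 - theta2) % 360.0
    return min(diff, 360.0 - diff) <= angle_thresh_deg
```

Multiplying each pixel distance by the image's resolution makes the comparison meaningful even when the two images have different ground sampling distances.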
Figure 6. Outlier elimination based on the local geometric constraint. Extended Delaunay triangles are constructed in each image based on the matches in S_p, dividing the images into many pairs of corresponding sub-regions. R_i and R_i′ are a pair of corresponding sub-regions. P_1Q_1, P_2Q_2, and P_3Q_3 are the three pairs of matches constructing R_i and R_i′. T_i denotes the affine transformation between R_i and R_i′ computed from the three pairs of matches.
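The local check illustrated in the Figure 6 caption (fit T_i from the three vertex matches of corresponding triangles, then test each match against a pixel tolerance) can be sketched as below. This is a minimal NumPy illustration; the function names are assumptions for the example.

```python
import numpy as np

def fit_local_affine(src_tri, dst_tri):
    """Fit the affine transformation T_i mapping triangle R_i to R_i' from its
    three vertex correspondences (P_k -> Q_k). Returns a 2x3 matrix."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_tri, dst_tri):
        A.append([x, y, 1.0, 0.0, 0.0, 0.0]); b.append(u)  # u = a*x + b*y + c
        A.append([0.0, 0.0, 0.0, x, y, 1.0]); b.append(v)  # v = d*x + e*y + f
    # Six equations for the six affine coefficients
    p = np.linalg.solve(np.array(A), np.array(b))
    return p.reshape(2, 3)

def is_inlier(T, src_pt, dst_pt, tol=2.0):
    """A match is kept if its transformation error under T is within tol pixels."""
    u, v = T @ np.array([src_pt[0], src_pt[1], 1.0])
    return float(np.hypot(u - dst_pt[0], v - dst_pt[1])) <= tol
```

In the outlier elimination of Section 2.4, each initial match falling inside a sub-region would be tested this way with a two-pixel tolerance.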
Figure 7. Experimental images. (a–d) COSMO-Skymed, RADARSAT-2, TerraSAR-X, and ZY-3 visible images of a building area in China, respectively. (e–h) COSMO-Skymed, RADARSAT-2, TerraSAR-X, and ZY-3 visible images of a rural area in China, respectively. (i,j) A pair of TerraSAR-X and Google Earth images in the Netherlands. (k,l) A pair of TerraSAR-X and Google Earth images in Canada. (m,n) A pair of TerraSAR-X and Google Earth images in China. (p,q) A pair of HJ-1C and ZY-3 images in China.
Figure 8. Experimental results of feature local support region radius tuning. (a) Test results of the NCM criterion for image pairs with different spatial resolutions, where r_1 (unit: pixel) is the local support region radius of features in the image with resolution λ_1, and the radius of features in the other image with resolution λ_2 is computed as r_1 · λ_1/λ_2. (b) Test results of the MP criterion for image pairs with different spatial resolutions.
Figure 9. Experimental results of GGS bi-windows thresholds tuning. (a,b) Test results of the NCM and MP criteria for different values of σ; (c,d) test results for different values of α; (e,f) test results for different values of β.
Figure 10. Matching results of the proposed method between SAR and SAR images. Correct matches are marked as red points and false matches as blue points. (a) Matching results for the first pair of images described in Table 2; (b) matching results for the second pair of images described in Table 2.
Figure 11. Matching results of the proposed method between SAR and optical images. Correct matches are marked as red points and false matches as blue points. (a–j) Matching results for the third to twelfth pairs of images described in Table 2, respectively.
Figure 12. Registration results of the proposed method. (a) Registration result of image pair 9; (b) registration result of image pair 12. The yellow dots in the top-left image are the control points detected by the proposed method on the SAR image, and the yellow dots in the top-middle image are the control points detected on the optical image. The top-right image shows the registration result. Four small sub-regions marked by red, green, yellow, and blue boxes are magnified at the bottom of the corresponding images to show the registration details.
Abstract
1. Introduction
2. Methodology
2.1. Detection of Logarithmic Phase Congruency Interest Points
2.2. GGS Bi-Windows-Based Feature Descriptor Computation
2.2.1. Local Support Region Size and Orientation Determination for Interest Points
2.2.2. GGS Bi-Windows-Based Gradient Operator
2.2.3. Feature Descriptor Construction
2.3. Image Matching Based on Descriptor Similarity and Geometric Constraint
2.4. Outlier Elimination
- Step 1: Randomly select six correspondences to construct a sample set.
- Step 2: Compute the polynomials of degree two with the current sample set.
- Step 3: Test all matches using the polynomials, and save the correspondences with residuals smaller than a threshold (e.g., 1 pixel) into a consensus set.
- Step 4: Iterate. If the current consensus set contains more correspondences than the previous one, the current polynomials and consensus set are preserved and the previous ones are discarded.
- Step 5: After the iterations, the matches in the final consensus set are regarded as correct matches.
- Step 6: Extended Delaunay triangles are constructed in each image based on the points in the final consensus set, and the images are divided into sub-regions.
- Step 7: A local transformation is fitted for each pair of corresponding sub-regions based on the matches falling in those sub-regions from the initial match set. Each initial match whose position transformation error under the local transformation is larger than two pixels is regarded as a false match.
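Steps 1–5 follow a RANSAC-style consensus scheme with degree-two polynomials (six coefficients per coordinate, hence the six-correspondence samples). The sketch below is a minimal NumPy illustration; the function names, iteration count, and fixed random seed are choices made for this example, not the paper's implementation.

```python
import numpy as np

def poly2_design(pts):
    """Degree-two polynomial basis [1, x, y, x^2, x*y, y^2] for each point."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def ransac_poly2(src, dst, n_iter=1000, tol=1.0, seed=0):
    """Steps 1-5: repeatedly fit degree-two polynomials to six random
    correspondences and keep the coefficients with the largest consensus
    (residual < tol pixels). Returns (coefficients, inlier_mask)."""
    rng = np.random.default_rng(seed)
    Phi = poly2_design(src)
    best_mask = np.zeros(len(src), bool)
    best_coef = None
    for _ in range(n_iter):
        idx = rng.choice(len(src), 6, replace=False)   # Step 1: sample set
        try:
            coef = np.linalg.solve(Phi[idx], dst[idx])  # Step 2: 6x2 coefficients
        except np.linalg.LinAlgError:
            continue  # degenerate sample, resample
        resid = np.linalg.norm(Phi @ coef - dst, axis=1)
        mask = resid < tol                              # Step 3: consensus set
        if mask.sum() > best_mask.sum():                # Step 4: keep the best
            best_mask, best_coef = mask, coef
    return best_coef, best_mask                         # Step 5: final inliers
```

Steps 6–7 would then triangulate the surviving points and re-test all initial matches with per-triangle local transformations, as in Figure 6.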
3. Experimental Results and Discussion
3.1. Experimental Datasets
3.2. Evaluation Criteria
3.3. Thresholds Setting
3.3.1. Tuning of Feature Local Support Region Radius
3.3.2. Tuning of GGS Bi-Windows Thresholds
3.4. Descriptors Comparison
3.5. Matching Strategies Comparison
3.6. Matching Performance of Different Frameworks between SAR and SAR Images
3.7. Matching Performance of Different Frameworks between SAR and Optical Images
3.8. Registration Accuracy
4. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Liesenberg, V.; de Souza Filho, C.R.; Gloaguen, R. Evaluating Moisture and Geometry Effects on L-Band SAR Classification Performance over a Tropical Rain Forest Environment. IEEE J-STARS 2016, 9, 5357–5368. [Google Scholar] [CrossRef]
- Pereira, L.O.; Freitas, C.C.; Sant’Anna, S.J.S.; Reis, M.S. ALOS/PALSAR Data Evaluation for Land Use and Land Cover Mapping in the Amazon Region. IEEE J-STARS 2016, 9, 5413–5423. [Google Scholar] [CrossRef]
- Corbane, C.; Faure, J.F.; Baghdadi, N.; Villeneuve, N.; Petit, M. Rapid Urban Mapping Using SAR/Optical Imagery Synergy. Sensors 2008, 8, 7125–7143. [Google Scholar] [CrossRef] [PubMed]
- Werner, A.; Storie, C.D.; Storie, J. Evaluating SAR-Optical Image Fusions for Urban LULC Classification in Vancouver Canada. Can. J. Remote Sens. 2014, 40, 278–290. [Google Scholar] [CrossRef]
- Errico, A.; Angelino, C.V.; Cicala, L.; Persechino, G.; Ferrara, C.; Lega, M.; Vallario, A.; Parente, C.; Masi, G.; Gaetano, R.; et al. Detection of environmental hazards through the feature-based fusion of optical and SAR data: A case study in southern Italy. Int. J. Remote Sens. 2015, 36, 3345–3367. [Google Scholar] [CrossRef]
- Xiong, Z.; Zhang, Y. A Novel Interest-Point-Matching Algorithm for High-Resolution Satellite Images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4189–4200. [Google Scholar] [CrossRef]
- Gruen, A. Development and Status of Image Matching in Photogrammetry. Photogramm. Rec. 2012, 27, 36–57. [Google Scholar] [CrossRef]
- Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- Morel, J.; Yu, G. ASIFT: A New Framework for Fully Affine Invariant Image Comparison. SIAM J. Imaging Sci. 2009, 2, 1–31. [Google Scholar] [CrossRef]
- Chen, M.; Shao, Z.; Li, D.; Liu, J. Invariant Matching Method for Different Viewpoint Angle Images. Appl. Opt. 2013, 52, 96–104. [Google Scholar] [CrossRef] [PubMed]
- Liu, J.; Yu, X. Research on SAR image matching technology based on SIFT. In Proceedings of the International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 3–11 July 2008; pp. 403–408. [Google Scholar]
- Suri, S.; Schwind, P.; Reinartz, P.; Uhl, J. Combining mutual information and scale invariant feature transform for fast and robust multisensor SAR image registration. In Proceedings of the American Society of Photogrammetry and Remote Sensing (ASPRS) Annual Conference, Baltimore, MD, USA, 9–13 March 2009. [Google Scholar]
- Schwind, P.; Suri, S.; Reinartz, P.; Siebert, A. Applicability of the SIFT operator to geometric SAR image registration. Int. J. Remote Sens. 2010, 31, 1959–1980. [Google Scholar] [CrossRef]
- Suri, S.; Schwind, P.; Uhl, J.; Reinartz, P. Modifications in the SIFT operator for effective SAR image matching. Int. J. Image Data Fusion 2010, 1, 243–256. [Google Scholar] [CrossRef] [Green Version]
- Fan, B.; Huo, C.; Pan, C.; Kong, Q. Registration of optical and SAR satellite images by exploring the spatial relationship of the improved SIFT. IEEE Geosci. Remote Sens. 2013, 10, 657–661. [Google Scholar] [CrossRef]
- Dellinger, F.; Delon, J.; Gousseau, Y.; Michel, J.; Tupin, F. SAR-SIFT: A SIFT-Like Algorithm for SAR Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 453–466. [Google Scholar] [CrossRef] [Green Version]
- Ye, Y.; Shen, L.; Hao, M.; Wang, J.; Xu, Z. Robust optical-to-SAR image matching based on shape properties. IEEE Geosci. Remote Sens. 2017, 14, 564–568. [Google Scholar] [CrossRef]
- Ye, Y.; Shen, L. HOPC: A novel similarity metric based on geometric structural properties for multi-modal remote sensing image matching. In Proceedings of the ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; pp. 9–16. [Google Scholar]
- Ye, Y.; Shan, J. A local descriptor based registration method for multispectral remote sensing images with non-linear intensity differences. ISPRS J. Photogramm. Remote Sens. 2014, 90, 83–95. [Google Scholar] [CrossRef]
- Shechtman, E.; Irani, M. Matching local self-similarities across images and videos. In Proceedings of the IEEE Conference on CVPR, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
- Kovesi, P. Image Features from Phase Congruency. Videre J. Comput. Vis. Res. 1999, 1, 1–26. [Google Scholar]
- Kovesi, P. Phase Congruency Detects Corners and Edges. In Proceedings of the Australian Pattern Recognition Society Conference, Sydney, Australia, 10–12 December 2003; pp. 309–318. [Google Scholar]
- Sui, H.; Xu, C.; Liu, J.; Hua, F. Automatic optical-to-SAR image registration by iterative line extraction and voronoi integrated spectral point matching. IEEE Trans. Geosci. Remote Sens. 2015, 53, 6058–6072. [Google Scholar] [CrossRef]
- Shui, P.; Cheng, D. Edge Detector of SAR Images Using Gaussian-Gamma-Shaped Bi-Windows. IEEE Geosci. Remote Sens. 2012, 9, 846–850. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the IEEE Conference on CVPR, San Diego, CA, USA, 20–25 June 2005; pp. 886–893. [Google Scholar]
- Chen, M.; Zhu, Q.; Zhu, J. GGSOR: A Gaussian-Gamma-Shaped bi-windows based descriptor for optical and SAR images matching. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; pp. 3683–3686. [Google Scholar]
- Xie, H.; Pierce, L.E.; Ulaby, F.T. Statistical Properties of Logarithmically Transformed Speckle. IEEE Trans. Geosci. Remote Sens. 2002, 40, 721–727. [Google Scholar] [CrossRef]
- Wang, C.; Su, W.; Gu, H.; Shao, H. Edge detection of SAR images using incorporate shift-invariant DWT and binarization method. In Proceedings of the IEEE International Conference on Signal Processing, Beijing, China, 21–25 October 2012; pp. 745–748. [Google Scholar]
- Germain, O.; Refregier, P. On the bias of the likelihood ratio edge detector for SAR images. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1455–1457. [Google Scholar] [CrossRef]
- Schou, J.; Skriver, H.; Nielsen, A.; Conradsen, K. CFAR edge detector for polarimetric SAR images. IEEE Trans. Geosci. Remote Sens. 2003, 41, 20–32. [Google Scholar] [CrossRef] [Green Version]
- Germain, O.; Refregier, P. Edge location in SAR images: Performance of the likelihood ratio filter and accuracy improvement with an active contour approach. IEEE Trans. Image Process. 2001, 10, 72–78. [Google Scholar] [CrossRef] [PubMed]
- Goshtasby, A. Piecewise linear mapping functions for image registration. Pattern Recognit. 1986, 19, 459–466. [Google Scholar] [CrossRef]
- Wu, Y.; Ma, W.; Gong, M. A Novel Point-Matching Algorithm Based on Fast Sample Consensus for Image Registration. IEEE Geosci. Remote Sens. 2015, 12, 43–47. [Google Scholar] [CrossRef]
- Ma, W.; Wen, Z.; Wu, Y.; Jiao, L.; Gong, M.; Zheng, Y.; Liu, L. Remote Sensing Image Registration With Modified SIFT and Enhanced Feature Matching. IEEE Geosci. Remote Sens. 2017, 14, 3–7. [Google Scholar] [CrossRef]
Interest Point Detectors
DoG | Improved DoG | Proposed
---|---|---
53.2% | 36.3% | 57.8%
Category | Pair Number | Image Number | Image Source | Image Size (Unit: Pixel) | Spatial Resolution (Unit: Meter)
---|---|---|---|---|---
SAR & SAR | Pair 1 | (a) | COSMO-Skymed | 724 × 951 | 2.70
 | | (b) | RADARSAT-2 | 848 × 1114 | 2.30
 | Pair 2 | (f) | RADARSAT-2 | 497 × 642 | 2.30
 | | (g) | TerraSAR-X | 592 × 765 | 1.85
SAR & Optical | Pair 3 | (a) | COSMO-Skymed | 724 × 951 | 2.70
 | | (d) | ZY-3 | 934 × 1227 | 2.10
 | Pair 4 | (b) | RADARSAT-2 | 848 × 1114 | 2.30
 | | (d) | ZY-3 | 934 × 1227 | 2.10
 | Pair 5 | (c) | TerraSAR-X | 1011 × 1328 | 1.85
 | | (d) | ZY-3 | 934 × 1227 | 2.10
 | Pair 6 | (e) | COSMO-Skymed | 424 × 584 | 2.70
 | | (h) | ZY-3 | 547 × 707 | 2.10
 | Pair 7 | (f) | RADARSAT-2 | 497 × 642 | 2.30
 | | (h) | ZY-3 | 547 × 707 | 2.10
 | Pair 8 | (g) | TerraSAR-X | 592 × 765 | 1.85
 | | (h) | ZY-3 | 547 × 707 | 2.10
 | Pair 9 | (i) | TerraSAR-X | 715 × 1073 | 1.85
 | | (j) | Google Earth | 715 × 1073 | 1.85
 | Pair 10 | (k) | TerraSAR-X | 1165 × 1315 | 1.75
 | | (l) | Google Earth | 1699 × 1918 | 1.19
 | Pair 11 | (m) | TerraSAR-X | 1751 × 2030 | 2.75
 | | (n) | Google Earth | 986 × 1351 | 4.78
 | Pair 12 | (p) | HJ-1C | 2450 × 3575 | 5.00
 | | (q) | ZY-3 | 1600 × 2000 | 4.60
Thresholds | Description
---|---
r_1 | local support region radius of features in the image with spatial resolution λ_1
 | controls the length of the GGS bi-windows
 | controls the width of the GGS bi-windows
 | controls the spacing between the two windows of the GGS bi-windows
 | number of initial candidate correspondences for each interest point in the SAR image
 | number of seed matches
 | distance ratio threshold of the geometric constraint in the matching strategy
 | orientation difference threshold of the geometric constraint in the matching strategy
Number of Simulated Image Pair | Spatial Resolution (Unit: Meter) | Image Size (Unit: Pixel) |
---|---|---|
1 | 3.00 | 1000 × 1000 |
2 | 5.00 | 689 × 681 |
3 | 7.00 | 654 × 670 |
4 | 9.00 | 664 × 681 |
5 | 11.00 | 515 × 538 |
6 | 13.00 | 468 × 463 |
7 | 15.00 | 403 × 405 |
Thresholds | |||||||
---|---|---|---|---|---|---|---|
64 | 3.4 | 3.2 | 1.5 | 25 | 10 | 0.2 |
Image Pair | SIFT NCM | SIFT MP | SAR-SIFT NCM | SAR-SIFT MP | Proposed NCM | Proposed MP
---|---|---|---|---|---|---
Pair 1 | 0 | 0.0% | 0 | 0.0% | 2 | 11.1% |
Pair 2 | 1 | 10.0% | 1 | 50.0% | 2 | 50.0% |
Pair 3 | 0 | 0.0% | 0 | 0.0% | 1 | 20.0% |
Pair 4 | 0 | 0.0% | 0 | 0.0% | 2 | 33.3% |
Pair 5 | 0 | 0.0% | 1 | 20.0% | 1 | 5.3% |
Pair 6 | 0 | 0.0% | 1 | 33.3% | 3 | 50.0% |
Pair 7 | 0 | 0.0% | 1 | 25.0% | 4 | 44.4% |
Pair 8 | 0 | 0.0% | 1 | 33.3% | 3 | 37.5% |
Pair 9 | 51 | 67.8% | 21 | 75.0% | 58 | 69.0% |
Pair 10 | 0 | 0.0% | 0 | 0.0% | 1 | 7.7% |
Pair 11 | 1 | 16.7% | 1 | 33.3% | 5 | 41.7% |
Pair 12 | 1 | 20.0% | 2 | 28.6% | 5 | 22.7% |
Image Pair | NNDR NCM | NNDR MP | FSC NCM | FSC MP | EM NCM | EM MP | GC NCM | GC MP
---|---|---|---|---|---|---|---|---
Pair 1 | 2 | 11.1% | 0 | 0.0% | 0 | 0.0% | 75 | 21.4% |
Pair 2 | 2 | 50.0% | 0 | 0.0% | 0 | 0.0% | 76 | 30.6% |
Pair 3 | 1 | 20.0% | 0 | 0.0% | 0 | 0.0% | 46 | 51.7% |
Pair 4 | 2 | 33.3% | 0 | 0.0% | 0 | 0.0% | 113 | 24.6% |
Pair 5 | 1 | 5.3% | 0 | 0.0% | 0 | 0.0% | 137 | 30.4% |
Pair 6 | 3 | 50.0% | 6 | 66.7% | 8 | 72.7% | 33 | 42.3% |
Pair 7 | 4 | 44.4% | 5 | 62.5% | 9 | 75.0% | 25 | 41.7% |
Pair 8 | 3 | 37.5% | 7 | 63.6% | 13 | 76.5% | 60 | 49.2% |
Pair 9 | 58 | 69.0% | 73 | 96.0% | 98 | 100.0% | 489 | 69.7% |
Pair 10 | 1 | 7.7% | 0 | 0.0% | 0 | 0.0% | 118 | 39.2% |
Pair 11 | 5 | 41.7% | 12 | 63.2% | 26 | 65.0% | 155 | 33.6% |
Pair 12 | 5 | 22.7% | 16 | 64.0% | 37 | 74.0% | 432 | 38.5% |
Image Pair | SIFT | Schwind | Suri | Fan | SAR-SIFT | Proposed
---|---|---|---|---|---|---
Pair 1 | 0 | 0 | 2 | 5 | 4 | 75 |
Pair 2 | 1 | 2 | 3 | 7 | 5 | 76 |
Image Pair | SIFT | Schwind | Suri | Fan | SAR-SIFT | Proposed
---|---|---|---|---|---|---
Pair 1 | 0.0% | 0.0% | 16.7% | 6.1% | 1.8% | 46.8% |
Pair 2 | 4.8% | 20.0% | 33.3% | 36.8% | 31.3% | 50.7% |
Image Pair | SIFT | Schwind | Suri | Fan | SAR-SIFT | Proposed
---|---|---|---|---|---|---
Pair 3 | 0 | 0 | 2 | 0 | 0 | 46 |
Pair 4 | 0 | 0 | 1 | 0 | 4 | 113 |
Pair 5 | 0 | 0 | 0 | 0 | 2 | 137 |
Pair 6 | 0 | 0 | 0 | 4 | 1 | 33 |
Pair 7 | 0 | 0 | 0 | 13 | 4 | 25 |
Pair 8 | 1 | 1 | 0 | 1 | 5 | 60 |
Pair 9 | 29 | 20 | 39 | 14 | 58 | 489 |
Pair 10 | 0 | 0 | 0 | 0 | 0 | 118 |
Pair 11 | 1 | 1 | 2 | 0 | 1 | 155 |
Pair 12 | 5 | 10 | 5 | 174 | 9 | 432 |
Image Pair | SIFT | Schwind | Suri | Fan | SAR-SIFT | Proposed
---|---|---|---|---|---|---
Pair 3 | 0.0% | 0.0% | 9.1% | 0.0% | 0.0% | 57.5% |
Pair 4 | 0.0% | 0.0% | 3.7% | 0.0% | 4.2% | 59.2% |
Pair 5 | 0.0% | 0.0% | 0.0% | 0.0% | 2.2% | 72.1% |
Pair 6 | 0.0% | 0.0% | 0.0% | 36.4% | 25.0% | 68.8% |
Pair 7 | 0.0% | 0.0% | 0.0% | 43.3% | 40.0% | 62.5% |
Pair 8 | 2.3% | 8.3% | 0.0% | 5.9% | 41.7% | 76.9% |
Pair 9 | 25.9% | 43.5% | 47.0% | 19.2% | 55.2% | 75.2% |
Pair 10 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 59.0% |
Pair 11 | 0.1% | 0.7% | 1.1% | 0.0% | 6.7% | 67.4% |
Pair 12 | 1.0% | 5.1% | 2.1% | 19.8% | 21.4% | 71.4% |
Image Pair | SIFT | Schwind | Suri | Fan | SAR-SIFT | Proposed |
---|---|---|---|---|---|---|
Pair 9 | 2.15 | 2.53 | 2.39 | 2.27 | 1.47 | 1.01 |
Pair 12 | 3.43 | 3.19 | 2.03 | 1.74 | 1.22 | 0.88 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Chen, M.; Habib, A.; He, H.; Zhu, Q.; Zhang, W. Robust Feature Matching Method for SAR and Optical Images by Using Gaussian-Gamma-Shaped Bi-Windows-Based Descriptor and Geometric Constraint. Remote Sens. 2017, 9, 882. https://doi.org/10.3390/rs9090882