Search Results (81)

Search Parameters:
Keywords = IKONOS

21 pages, 57724 KiB  
Article
MDSCNN: Remote Sensing Image Spatial–Spectral Fusion Method via Multi-Scale Dual-Stream Convolutional Neural Network
by Wenqing Wang, Fei Jia, Yifei Yang, Kunpeng Mu and Han Liu
Remote Sens. 2024, 16(19), 3583; https://doi.org/10.3390/rs16193583 - 26 Sep 2024
Viewed by 477
Abstract
Pansharpening refers to enhancing the spatial resolution of multispectral images through panchromatic images while preserving their spectral features. However, existing traditional and deep learning methods often introduce distortions in the spatial or spectral dimensions. This paper proposes a remote sensing spatial–spectral fusion method based on a multi-scale dual-stream convolutional neural network, which includes feature extraction, feature fusion, and image reconstruction modules at each scale. For feature fusion, we propose a multi-cascade module to better fuse image features. We also design a new loss function aimed at enforcing a high degree of consistency between fused and reference images in terms of spatial details and spectral information. To validate its effectiveness, we conduct thorough experimental analyses on two widely used remote sensing datasets: GeoEye-1 and Ikonos. Compared with nine leading pansharpening techniques, the proposed method demonstrates superior performance on multiple key evaluation metrics.
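The abstract's pairing of a spatial-detail term with a spectral-consistency term can be illustrated with a minimal sketch. The L1 spatial term, the spectral-angle term, and the 0.5/0.5 weights are assumptions for illustration, not the authors' exact loss:

```python
import numpy as np

def spectral_angle(a, b, eps=1e-12):
    # Mean per-pixel spectral angle (radians) between two (H, W, C) images.
    dot = np.sum(a * b, axis=-1)
    norms = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

def fusion_loss(fused, reference, w_spatial=0.5, w_spectral=0.5):
    # Weighted sum of a spatial (L1) term and a spectral-angle term.
    spatial = float(np.mean(np.abs(fused - reference)))
    return w_spatial * spatial + w_spectral * spectral_angle(fused, reference)

rng = np.random.default_rng(0)
ref = rng.random((8, 8, 4)) + 0.1       # offset avoids zero-norm spectra
print(fusion_loss(ref, ref) < 1e-4)      # near-zero for identical images
print(fusion_loss(ref + 0.2, ref) > 0.0) # positive once details differ
```

The spectral-angle term is scale-invariant, so it penalizes changes in band ratios (spectral distortion) rather than overall brightness, which the L1 term already covers.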
Figures:
Figure 1: The multi-scale dual-stream fusion network framework.
Figure 2: The structure of the LCMs. (a) A diagram of the LCM modules; (b) a diagram of the residual module.
Figure 3: Decoder module. (a) Overall structure of the decoder; (b) a diagram of the residual module.
Figure 4: The fused images of simulated experiments on the GeoEye-1 dataset.
Figure 5: The residual images corresponding to the fusion results in Figure 4.
Figure 6: The fused images of simulated experiments on the Ikonos dataset.
Figure 7: The residual images corresponding to the fusion results in Figure 6.
Figure 8: The fused images of real experiments on the Ikonos dataset.
Figure 9: The fused images of real experiments on the GeoEye-1 dataset.
16 pages, 4099 KiB  
Article
Multi-Frequency Spectral–Spatial Interactive Enhancement Fusion Network for Pan-Sharpening
by Yunxuan Tang, Huaguang Li, Guangxu Xie, Peng Liu and Tong Li
Electronics 2024, 13(14), 2802; https://doi.org/10.3390/electronics13142802 - 16 Jul 2024
Viewed by 515
Abstract
The objective of pan-sharpening is to effectively fuse high-resolution panchromatic (PAN) images with limited spectral information and low-resolution multispectral (LR-MS) images, thereby generating a fused image with a high spatial resolution and rich spectral information. However, current fusion techniques face significant challenges, including insufficient edge detail, spectral distortion, increased noise, and limited robustness. To address these challenges, we propose a multi-frequency spectral–spatial interaction enhancement network (MFSINet) that comprises the spectral–spatial interactive fusion (SSIF) and multi-frequency feature enhancement (MFFE) subnetworks. The SSIF enhances both spatial and spectral fusion features by optimizing the characteristics of each spectral band through band-aware processing. The MFFE employs a variant of wavelet transform to perform multiresolution analyses on remote sensing scenes, enhancing the spatial resolution, spectral fidelity, and the texture and structural features of the fused images by optimizing directional and spatial properties. Moreover, qualitative analysis and quantitative comparative experiments using the IKONOS and WorldView-2 datasets indicate that this method significantly improves the fidelity and accuracy of the fused images.
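The multiresolution split that the MFFE subnetwork builds on can be illustrated with a one-level 2-D Haar transform. This is a generic stand-in: the paper uses a variant of the wavelet transform whose exact filters are not given here:

```python
import numpy as np

def haar_dwt2(img):
    # One level of a 2-D Haar wavelet transform on an (H, W) image with
    # even H and W: returns the low-frequency approximation (LL) plus
    # horizontal (LH), vertical (HL), and diagonal (HH) detail bands,
    # each at half resolution.
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
print(ll.shape)  # (2, 2): half resolution in each dimension
```

The detail bands isolate edges by orientation, which is what lets a network enhance texture and structure separately from the low-frequency spectral content.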
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)
Figures:
Figure 1: Framework of the proposed MFSINet.
Figure 2: Framework of the proposed SSIF.
Figure 3: Framework of the proposed MFFE.
Figure 4: Resulting images of the nine methods and GT on the IKONOS simulated dataset, along with absolute-error images.
Figure 5: Resulting images of the nine methods and GT on the WV-2 simulated dataset, along with absolute-error images.
Figure 6: Resulting images of the nine methods on the IKONOS real dataset; the lower part shows magnified details of the fused results (red and blue boxes).
Figure 7: Resulting images of the nine methods on the WV-2 real dataset; the lower part shows magnified details of the fused results (red and blue boxes).
Figure 8: Resulting images of the ablation experiments on the IKONOS (top) and WV-2 (bottom) simulated datasets, along with absolute-error images.
22 pages, 2925 KiB  
Review
Review of Applications of Remote Sensing towards Sustainable Agriculture in the Northern Savannah Regions of Ghana
by Abdul-Wadood Moomen, Lily Lisa Yevugah, Louvis Boakye, Jeff Dacosta Osei and Francis Muthoni
Agriculture 2024, 14(4), 546; https://doi.org/10.3390/agriculture14040546 - 29 Mar 2024
Viewed by 1648
Abstract
This paper assesses evidence-based applications of remote sensing for sustainable and precision agriculture in the Northern Savannah Regions of Ghana over three decades (1990–2023). During this period, there have been several government policy intervention schemes and pragmatic support actions from development agencies towards improving agriculture in this area, with differing levels of success. Over the same period, there have been dramatic advances in remote sensing (RS) technologies with tailored applications to sustainable agriculture globally. However, the extent to which intervention schemes have harnessed the incipient potential of RS for achieving sustainable agriculture in the study area is unknown. To the best of our knowledge, no previous study has investigated the synergy between agriculture policy interventions and applications of RS towards optimizing results. Thus, this study used a systematic literature review and desk analysis to identify previous and current projects and studies that have applied RS tools and techniques to all aspects of agriculture in the study area. Databases searched include Web of Science, Google Scholar, Scopus, AoJ, and PubMed. To consolidate the gaps identified in the literature, ground-truthing was carried out. Of the 26 focused publications found on the subject, only 13 (50%) were found to employ RS in various aspects of agricultural observation in the study area. Of the 13, 5 studies focused on mapping the extents of irrigation areas; 2 mapped the size of crop and pasturelands; 1 focused on soil water and nutrient retention; 1 on crop health monitoring; and another on weed/pest infestations and yield estimation.
As for data types, 1 (7%) study used MODIS, 2 (15%) used ASTER imagery, 1 used Sentinel-2 data, 1 used PlanetScope, 1 used IKONOS, 5 used Landsat images, 1 used Unmanned Aerial Vehicles (UAVs), and 1 used RADAR for mapping and monitoring agricultural activities in the study area. There is no evidence of the use of LiDAR data in the area. These results support the hypothesis that failing agriculture in the study area is due to a paucity of high-quality spatial data and monitoring to support informed farm decision-making.
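The sensor tallies above can be checked with a quick recomputation (counts as listed in the abstract; percentages rounded to the nearest integer):

```python
# Sensor usage among the 13 RS-based studies identified by the review.
counts = {
    "MODIS": 1, "ASTER": 2, "Sentinel-2": 1, "PlanetScope": 1,
    "IKONOS": 1, "Landsat": 5, "UAV": 1, "RADAR": 1,
}
total = sum(counts.values())
shares = {name: round(100 * n / total) for name, n in counts.items()}
print(total)              # 13 studies in all
print(shares["Landsat"])  # Landsat dominates at 38%
```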
(This article belongs to the Section Agricultural Technology)
Figures:
Figure 1: Map of the study area.
Figure 2: Studies applying RS to agriculture in the Northern Savannah of Ghana.
Figure 3: Pie chart of the remote sensing sensors used in the study area.
Figure 4: Emergence of a natural dam at Karaga in the Northern Region.
Figure 5: (a–c) "Burkina" invasive grass species, which dominates most farmlands in the Northern Region.
24 pages, 15242 KiB  
Article
Pan-Sharpening Network of Multi-Spectral Remote Sensing Images Using Two-Stream Attention Feature Extractor and Multi-Detail Injection (TAMINet)
by Jing Wang, Jiaqing Miao, Gaoping Li, Ying Tan, Shicheng Yu, Xiaoguang Liu, Li Zeng and Guibing Li
Remote Sens. 2024, 16(1), 75; https://doi.org/10.3390/rs16010075 - 24 Dec 2023
Viewed by 1306
Abstract
Achieving a balance between spectral resolution and spatial resolution in multi-spectral remote sensing images is challenging due to physical constraints; pan-sharpening technology was developed to address this challenge. Although significant progress has recently been made in deep-learning-based pan-sharpening, most existing approaches face two primary limitations: (1) convolutional neural networks (CNNs) struggle with long-range dependencies, and (2) significant detail is lost during deep network training. Moreover, despite their pan-sharpening capabilities, these methods generalize poorly to full-sized raw images because of scaling disparities, rendering them less practical. To tackle these issues, we introduce a multi-spectral remote sensing image fusion network, termed TAMINet, which leverages a two-stream coordinate attention mechanism and multi-detail injection. First, a two-stream feature extractor augmented with a coordinate attention (CA) block derives modality-specific features from low-resolution multi-spectral (LRMS) images and panchromatic (PAN) images. This is followed by feature-domain fusion and pan-sharpened image reconstruction. Crucially, a multi-detail injection approach is incorporated during fusion and reconstruction, reintroducing details lost earlier in the process and minimizing high-frequency detail loss. Finally, a novel hybrid loss function is proposed that combines spatial loss, spectral loss, and an additional loss component to enhance performance. The method's effectiveness was validated through experiments on WorldView-2, IKONOS, and QuickBird satellite images, benchmarked against current state-of-the-art techniques. Experimental findings reveal that TAMINet significantly improves pan-sharpening performance for large-scale images, underscoring its potential to enhance multi-spectral remote sensing image quality.
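The hybrid loss is governed by three weights constrained so that α + β + μ = 1, and the paper sweeps them to find a good balance. A minimal sketch of enumerating such weight triples (the 0.1 step size is an assumption for illustration):

```python
def weight_triples(alpha, step=0.1):
    # Enumerate (alpha, beta, mu) with alpha fixed and alpha + beta + mu = 1.
    k = round((1.0 - alpha) / step)
    triples = []
    for i in range(k + 1):
        beta = round(i * step, 10)
        mu = round(1.0 - alpha - beta, 10)
        triples.append((alpha, beta, mu))
    return triples

grid = weight_triples(0.5)
print(len(grid))  # 6 triples: beta runs over 0.0, 0.1, ..., 0.5
print(all(abs(sum(t) - 1.0) < 1e-9 for t in grid))  # True
```

Fixing one weight and sweeping the other two along the constraint turns a 3-D search into a family of 1-D sweeps, which is what a panelled plot like the paper's Figure 5 visualizes.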
Graphical abstract and figures:
Figure 1: Detailed architecture of TAMINet. FEN consists of FEN-1 and FEN-2; FN consists of FN-1 and FN-2; REC consists of REC-1 through REC-4.
Figure 2: Visualization on the IKONOS dataset: (I) the LRMS image, the PAN image, and the image sharpened by TAMINet; (II) pan-sharpening results; (III) difference from the ground-truth image in the blue band; (IV) histogram of the ground-truth image and the results in the blue band. (a) LRMS image; (b) PAN image; (c) TAMINet; (d–m) comparison methods: (d) GS, (e) IHS, (f) Brovey, (g) PRACS, (h) PNN, (i) PanNet, (j) TFNet, (k) MSDCNN, (l) SRPPNN, (m) λ-PNN.
Figure 3: As Figure 2, for the QuickBird dataset (difference and histogram in the NIR band).
Figure 4: As Figure 2, for the WorldView-2 dataset (difference and histogram in the NIR band).
Figure 5: The values of the weights α, β, and μ: (a) α = 0.1; (b) α = 0.3; (c) α = 0.5; (d) α = 0.7; in each case β and μ vary subject to α + β + μ = 1.
23 pages, 31350 KiB  
Article
Evaluating the Impact of Seismic Activity on the Slope Stability of the Western Coast of Lefkada Island Using Remote Sensing Techniques, Geographical Information Systems, and Field Data
by Konstantinos G. Nikolakopoulos, Ioannis K. Koukouvelas, Aggeliki Kyriou, Dionysios Apostolopoulos and George Pappas
Appl. Sci. 2023, 13(16), 9434; https://doi.org/10.3390/app13169434 - 20 Aug 2023
Cited by 3 | Viewed by 1662
Abstract
The current research examines the long-term evolution of the western cliffs of Lefkada Island following the last two strong earthquakes, on 14 August 2003 and 17 November 2015. Medium-resolution satellite data (Landsat) and very high-resolution data (Ikonos, Pleiades, and airphotos) were processed in Google Earth Engine and ERDAS IMAGINE software, respectively. The study area covers a 20 km-long stretch of the western cliffs of Lefkada Island, extending from Egremni beach in the south to Komilio beach in the north. Relief, vegetation, and inclination changes were detected in the ArcGIS environment. The results were combined with in situ data provided through the installation of a sediment trap. The analysis proved that seismicity is the main factor that formed the western coastline of Lefkada Island, affecting the integrity of the cliffs. Specifically, large earthquakes cause immediate vegetation and topographic modifications (inclination changes, mass movements) in the western cliffs of the island, while small earthquakes (magnitudes < 4.1) contribute to the cliffs' evolution during the inter-seismic era. The intensity of these changes was closely related to the seismic activity in the vicinity of the study area. In addition, precipitation and wind were found not to exert a similar influence on the cliffs' evolution.
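The vegetation change detection here rests on the standard NDVI definition, (NIR − Red) / (NIR + Red), differenced between two acquisition dates. A minimal sketch (band values are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Change between two acquisitions: negative differences flag vegetation
# loss, as mapped on the cliffs after the 2015 earthquake.
before = ndvi([[0.5]], [[0.1]])   # healthy vegetation
after = ndvi([[0.3]], [[0.2]])    # sparser cover a year later
print(float((after - before)[0, 0]) < 0)  # True: vegetation loss
```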
(This article belongs to the Special Issue GIS and Spatial Planning for Natural Hazards Mitigation)
Figures:
Figure 1: Location of the study area within Greece and the respective Pleiades orthophoto.
Figure 2: Outcrop of screes in the study area, showing that angular mixtures of sediments are prevalent despite wide grain-size distributions concentrated primarily in cobbles and sands; boulders are rare in this assemblage. The width of the photo is 4 m.
Figure 3: (a) Panoramic view of the Egremni coastal area from north to south; mass-wasting materials and rocks detached from the cliff are lying on the beach. (b) A close-up photo of the coast facing east; several rocks detached from the upper cliff have accumulated on the coast.
Figure 4: The sediment trap placed on the overhanging cliff at Egremni beach.
Figure 5: Photos taken during sieving, showing the maximum clast collected from the trap in each of the four sampling semesters.
Figure 6: Diagram illustrating the applied methodology.
Figure 7: NDVI before (a) and after (b) the 2015 earthquake, derived from Landsat imagery processed in the GEE; (c) NDVI difference between 2015 and 2016.
Figure 8: Pleiades orthophotos of the Egremni coast in 2016 (left) and 2015 (right). The yellow line indicates the landslide extent; the backwards movement of the cliff is evident in the left image. The orange line marks the landslide crown, and the black line the old road to the beach that was destroyed.
Figure 9: Slope maps of the Egremni coast in 2016 (left) and 2015 (middle); color changes indicate changes in cliff inclination before and after the 2015 earthquake. Slope difference values are shown in the right image.
Figure 10: Pleiades orthophoto of the Egremni coast in 2016 (left) and the NDVI difference between 2015 and 2016 (right). The black dotted line indicates the landslide extent; the great loss of vegetation caused by the 2015 earthquake is evident.
Figure 11: Pleiades orthophotos of the Okeanos coast in 2016 (left) and 2015 (right). The yellow line indicates the landslide extent; the backwards movement of the cliff is evident in the left image.
Figure 12: Enlargement of the previous figure: Pleiades orthophotos of the southern Okeanos coast in 2016 (top) and 2015 (bottom). The backwards movement of the cliff is evident in the upper image; the landslide affected the buildings in the right part of the area. The orange line indicates the landslide crown.
Figure 13: Slope maps of the Okeanos coast in 2016 (left) and 2015 (middle), with slope difference values in the right image.
Figure 14: Pleiades orthophotos of the Gialos coast in 2016 (left) and 2015 (right). The yellow line indicates the landslide extent; the backwards movement of the cliff is evident in the left image. The green dashed line indicates areas of deep erosion.
Figure 15: Slope maps of the Gialos coast in 2016 (left) and 2015 (middle), with slope difference values in the right image.
Figure 16: Pleiades orthophotos of the Komilio coast in 2016 (left) and 2015 (right). The green dashed line indicates areas of deep erosion; the blue line shows areas of debris accumulation.
Figure 17: Slope maps of the Komilio coast in 2016 (left) and 2015 (middle), with slope difference values in the right image.
Figures 18–21: Analysis of the slope classes between 2015 and 2016 for the Egremni, Okeanos, Gialos, and Komilio cliffs, respectively.
Figure 22: Sieves showing angular and subangular gravels with a diameter > 8 mm and the coarse sand of the first-semester sample.
Figure 23: Relief changes on the Egremni cliff after the 2003 earthquake: orthophoto mosaics of the Egremni coast after the 2003 earthquake (left) and in 2000 (right); the slope is steeper in the left image.
Figure 24: Trap sediment concentration over the four semesters of observation, in relation to recorded precipitation and seismicity.
19 pages, 11700 KiB  
Article
The First Rock Glacier Inventory for the Greater Caucasus
by Levan G. Tielidze, Alessandro Cicoira, Gennady A. Nosenko and Shaun R. Eaves
Geosciences 2023, 13(4), 117; https://doi.org/10.3390/geosciences13040117 - 13 Apr 2023
Cited by 6 | Viewed by 3607
Abstract
Rock glaciers are an integral part of the periglacial environment. At the regional scale in the Greater Caucasus, there have been no comprehensive systematic efforts to assess the distribution of rock glaciers, although some individual ranges have been mapped before. In this study we produce the first rock glacier inventory for the entire Greater Caucasus region—Russia, Georgia, and Azerbaijan. A remote sensing survey was conducted using Geo-Information System (GIS) and Google Earth Pro software based on high-resolution satellite imagery—SPOT, WorldView, QuickBird, and IKONOS—acquired during the period 2004–2021. Sentinel-2 imagery from 2020 was used as a supplementary source. The ASTER GDEM (2011) was used to determine location, elevation, and slope for all rock glaciers. By digitizing rock glaciers manually, we found that the mountain range contains 1461 rock glaciers with a total area of 297.8 ± 23.0 km2. Visual inspection of the morphology suggests that 1018 rock glaciers with a total area of 199.6 ± 15.9 km2 (67% of the total rock glacier area) are active, while the remaining rock glaciers appear to be relict. The average maximum altitude of all rock glaciers is 3152 ± 96 m above sea level (a.s.l.), while the mean and minimum altitudes are 3009 ± 91 m and 2882 ± 87 m a.s.l., respectively. The average minimum altitude of active rock glaciers (2955 ± 98 m a.s.l.) is higher than that of relict rock glaciers (2716 ± 83 m a.s.l.). No clear difference is discernible between the surface slopes of active (41.4 ± 3°) and relict (38.8 ± 4°) rock glaciers across the mountain region. This inventory provides a database for understanding the extent of permafrost in the Greater Caucasus and is an important basis for further geomorphological and palaeoglaciological research in this region. The inventory will be submitted to the Global Land Ice Measurements from Space (GLIMS) database and can be used for future studies.
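The inventory's headline figures are internally consistent and can be recomputed directly (values taken from the abstract):

```python
# Totals and active subset from the inventory.
total_count, total_area_km2 = 1461, 297.8
active_count, active_area_km2 = 1018, 199.6

relict_count = total_count - active_count
relict_area_km2 = total_area_km2 - active_area_km2
active_area_share = round(100 * active_area_km2 / total_area_km2)

print(relict_count)               # 443 relict rock glaciers
print(round(relict_area_km2, 1))  # 98.2 km2
print(active_area_share)          # 67 (%), matching the abstract
```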
(This article belongs to the Special Issue Mountain Glaciers, Permafrost, and Snow)
Show Figures

Figure 1

Figure 1
<p>The extent of rock glaciers relative to alpine glaciers in the Greater Caucasus. The location of the Caucasus region is shown in the inset map at the top right. Peak elevations are given in meters above sea level [<a href="#B57-geosciences-13-00117" class="html-bibr">57</a>].</p>
Full article ">Figure 2
<p>An example of rock glacier mapping and classification: (<b>a</b>)—relict rock glaciers (43°38′21.36″ N 41°5′53.19″ E); (<b>b</b>)—active rock glaciers (42°32′54.05″ N 44°46′31.21″ E). The yellow line corresponds to the boundaries of the rock glacier. The white dashed line indicates the width of the rock glacier. Google Earth imagery 16/08/2019 and 16/08/2022 is used as the background. © Google Earth 2022.</p>
Full article ">Figure 3
<p>Rock glacier area and count comparison for the different sections and slopes of the Greater Caucasus.</p>
Full article ">Figure 4
<p>Color-coded map of active and relict rock glaciers for the Greater Caucasus.</p>
Full article ">Figure 5
<p>(<b>a</b>)—Individual rock glacier area versus maximum and minimum elevation for the entire Greater Caucasus. (<b>b</b>)—Individual rock glacier area versus minimum elevation (snout position) of relict and active rock glaciers for the entire Greater Caucasus.</p>
Full article ">Figure 5 Cont.
<p>(<b>a</b>)—Individual rock glacier area versus maximum and minimum elevation for the entire Greater Caucasus. (<b>b</b>)—Individual rock glacier area versus minimum elevation (snout position) of relict and active rock glaciers for the entire Greater Caucasus.</p>
Full article ">Figure 6
<p>Mean average elevation of all rock glaciers inventoried across the Greater Caucasus according to the different sections (western, central, and eastern) and slopes (northern and southern). Error bars are based on standard deviation (1σ).</p>
Full article ">Figure 7
<p>Color-coded map of mean elevation for all rock glaciers larger than 0.01 km<sup>2</sup> in the Greater Caucasus.</p>
Full article ">Figure 8
<p>(<b>a</b>)—Mean average slope of all rock glaciers inventoried across the Greater Caucasus according to the different sections (western, central, and eastern) and slopes (northern and southern). Error bars are based on standard deviation (1σ). (<b>b</b>)—Spatial distribution of mean elevation versus average slope for all rock glaciers larger than 0.01 km<sup>2</sup> in the Greater Caucasus.</p>
Full article ">Figure 8 Cont.
<p>(<b>a</b>)—Mean average slope of all rock glaciers inventoried across the Greater Caucasus according to the different sections (western, central, and eastern) and slopes (northern and southern). Error bars are based on standard deviation (1σ). (<b>b</b>)—Spatial distribution of mean elevation versus average slope for all rock glaciers larger than 0.01 km<sup>2</sup> in the Greater Caucasus.</p>
Full article ">Figure 9
<p>Examples of complex rock glacier terrain with hard-defined boundaries. (<b>a</b>)—active rock glaciers (upper part) versus relict rock glaciers (lower part) (43°30′6.58″ N 41°6′46.16″ E). (<b>b</b>)—debris-covered glacier (upper part) versus active rock glacier (lower part) (42°42′3.67″ N 44°50′44.61″ E). Google Earth imagery 26/10/2020 (<b>a</b>) and 22/09/2011 (<b>b</b>) is used as the background. © Google Earth 2022.</p>
19 pages, 6439 KiB  
Article
Monitoring Coastal Changes and Assessing Protection Structures at the Damietta Promontory, Nile Delta, Egypt, to Secure Sustainability in the Context of Climate Changes
by Hesham M. El-Asmar and Maysa M. N. Taha
Sustainability 2022, 14(22), 15415; https://doi.org/10.3390/su142215415 - 20 Nov 2022
Cited by 4 | Viewed by 2412
Abstract
The Damietta Promontory is a distinct coastal region in the Nile Delta, Egypt, which comprises several communities with strategic economic projects. The promontory has experienced numerous inundation crises due to anthropogenic intervention and/or sea level rise (SLR). The recorded rate of erosion detected [...] Read more.
The Damietta Promontory is a distinct coastal region in the Nile Delta, Egypt, which comprises several communities with strategic economic projects. The promontory has experienced numerous inundation crises due to anthropogenic intervention and/or sea level rise (SLR). The recorded rates of erosion are from −18 to −53 m/yr. and from −28 to −210 m/yr. along the promontory’s western and eastern coasts, respectively, with a total loss of about 3 km during the past century. It is critical to ensure the sustainability of this coastal region under future climate change and the expected SLR; accordingly, the state has implemented a long-term plan of coastal protection. The current study updates the coastal changes and assesses the efficiency of the protection structures. For this study, Ikonos satellite images of 1 m high resolution acquired on 30 July 2014 and 10 August 2022 were compared to multitemporal Landsat images dated 30 June 2015, 29 September 1987, and 15 October 1984, and to Landsat MSS images dated 20 October 1972. The results confirm the presence of accretion along the western jetty of the Damietta Harbor with an average of +10.91 m/yr., while erosion of −4.7 m/yr. was detected east of the eastern harbor jetty. At the detached breakwaters along Ras El-Bar, an accretion of +4 m/yr. was detected, and erosion was then measured westward to the tip of the detached breakwaters with an average of −1.77 m/yr. At the eastern coast of the promontory, eastward erosion was recorded with rates of −44.16, −34.33, and −20.33 m/yr., respectively; the erosion stopped after the construction of the seawall. The current study confirms the efficiency of the detached breakwaters and seawalls as coastal protection structures. However, the seawalls lack the swimming-friendly long, wide beaches found at the detached breakwaters. The groins seem ineffective, with rips and reversed currents like those at Ras El-Bar. To develop a fishing community at the Manzala triangle similar in nature to Venice, it is recommended to extend the seawall to 12 km and then construct detached breakwaters eastward to the El-Diba inlet. To secure the sustainability of the coast, continuous maintenance of the protection structures to keep their elevations between 4 and 6 m above sea level (a.s.l.) is a critical task in order to reduce the potential risks that could arise from a tsunami, with sand nourishment as a preferred strategy. Full article
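The erosion and accretion figures quoted above (in m/yr.) are, at heart, shoreline displacements divided by the elapsed time between image acquisitions. As a rough illustration only (not the authors' actual change-detection workflow, and with a made-up transect displacement), an end-point rate between the two Ikonos acquisition dates could be computed like this:

```python
from datetime import date

def end_point_rate(pos_old_m, pos_new_m, date_old, date_new):
    """End-point rate of shoreline change in m/yr.

    Positions are cross-shore distances (m) of the shoreline from a
    fixed baseline along one transect; seaward is positive, so a
    negative rate means erosion (shoreline retreat).
    """
    years = (date_new - date_old).days / 365.25
    return (pos_new_m - pos_old_m) / years

# Hypothetical transect east of the eastern harbor jetty: assume the
# shoreline retreated 37.6 m between the two Ikonos scenes.
rate = end_point_rate(100.0, 62.4, date(2014, 7, 30), date(2022, 8, 10))
print(round(rate, 2))  # -4.68, comparable to the -4.7 m/yr. reported
```

The sign convention and the 37.6 m displacement are assumptions for illustration; real rates come from many transects cast against digitized shorelines.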
Show Figures

Figure 1
<p>Satellite image from 1972 of the Damietta Promontory, with the western coast including the Ras El-Bar resort and Damietta Harbor, and the eastern coast comprising the Ezbet El-Borg fishing city and the Manzala lagoon triangle (<b>a</b>,<b>b</b>). The black rectangle refers to the location of Figure 4c. The situation in 2015 compared with the 1984 satellite image, showing segments of accretion in red and erosion in yellow (<b>c</b>). W.J. and E.J. are the western and eastern jetties of the Damietta Harbor. The yellow rectangle refers to the location of Figure 5f–h, while the red rectangle refers to the location of Figure 5a–e (<b>c</b>).</p>
Figure 2
<p>Flow chart showing the procedures applied on the selected satellite images used in the current study.</p>
Figure 3
<p>Summary of the recorded database of shoreline changes along the Damietta Promontory (references are listed in the last column) compared with the current results (first row). Erosion is given in negative values, while accretion rates are expressed in positive values [<a href="#B1-sustainability-14-15415" class="html-bibr">1</a>,<a href="#B4-sustainability-14-15415" class="html-bibr">4</a>,<a href="#B5-sustainability-14-15415" class="html-bibr">5</a>,<a href="#B8-sustainability-14-15415" class="html-bibr">8</a>,<a href="#B16-sustainability-14-15415" class="html-bibr">16</a>,<a href="#B22-sustainability-14-15415" class="html-bibr">22</a>,<a href="#B38-sustainability-14-15415" class="html-bibr">38</a>,<a href="#B39-sustainability-14-15415" class="html-bibr">39</a>,<a href="#B40-sustainability-14-15415" class="html-bibr">40</a>,<a href="#B41-sustainability-14-15415" class="html-bibr">41</a>,<a href="#B42-sustainability-14-15415" class="html-bibr">42</a>,<a href="#B43-sustainability-14-15415" class="html-bibr">43</a>,<a href="#B44-sustainability-14-15415" class="html-bibr">44</a>,<a href="#B45-sustainability-14-15415" class="html-bibr">45</a>,<a href="#B46-sustainability-14-15415" class="html-bibr">46</a>,<a href="#B47-sustainability-14-15415" class="html-bibr">47</a>,<a href="#B48-sustainability-14-15415" class="html-bibr">48</a>].</p>
Figure 4
<p>Comparison of satellite images of 1984 and 2015 showing the values of eroded and accreted coast in meters. The yellow bar shows the location of the seawall at “b” (<b>a</b>). The seawall was constructed in 2000; A–A´ is the location of a cross section along the seawall (<b>b</b>), illustrated in (<b>c</b>) [<a href="#B14-sustainability-14-15415" class="html-bibr">14</a>], with basalt, dolomite, and tetrapod dolos of 3 tons (<b>c</b>). The two jetties at the Damietta branch river mouth (<b>d</b>). The red circle and yellow rectangle at (<b>b</b>,<b>d</b>) refer to the eastern and western jetties at the Damietta branch, respectively.</p>
Figure 5
<p>The coast west of the promontory at Ras El-Bar showing the concrete seawall. (<b>a</b>) The yellow rectangle refers to the lighthouse; the modified 1200 m seawall extends westward of the Damietta Nile branch. (<b>b</b>) One of the three 120 m long concrete groins that were constructed to the west of the seawall at “b” and (<b>c</b>) later renewed. (<b>d</b>) The successful detached breakwaters that create shadow zones of 25% of the waves’ energy. (<b>e</b>) Eastward evidence of erosion including shell accumulation, cuspate beach “red arrows” (<b>f</b>) at the coast east of the harbor eastern jetty (see <a href="#sustainability-14-15415-f001" class="html-fig">Figure 1</a>c yellow area), with developed rip currents (yellow arrows) (<b>g</b>) and waves rolled mud balls along the eroded coast (pink arrows) (<b>g</b>), [<a href="#B4-sustainability-14-15415" class="html-bibr">4</a>]. The new Y groins east of the harbor jetty (blue arrows) (<b>h</b>) with accretion along the groins and erosion along the spaces in between.</p>
Figure 6
<p>Comparison of summer houses at Ras El-Bar during the inundation of 1986, when sea water attacked the houses and they lost their swimming beaches, and later in 2000, after the construction of the detached breakwaters, when the same houses had regained a wide sandy beach (<b>a</b>–<b>d</b>). Partial collapse “red arrows” and sedimentation “yellow arrow” at the protection structures; such collapse lowered the elevation of the breakwaters to less than 1 m a.s.l. (<b>e</b>,<b>f</b>).</p>
Figure 7
<p>GeoEye (Ikonos) satellite images captured on 30 July 2014 and 10 August 2022, showing the changes in three coastal zones, A, B, and C: the erosion and accretion around the Y-groins (<b>a</b>) and at the detached breakwaters of Area B (<b>b</b>). The changes after the construction of the seawall (“the yellow bar location”), with erosion downdrift (“pink shadow”) and at the sand spit (<b>c</b>).</p>
Figure 8
<p>Two photos showing the unsuccessful detached breakwaters at Port Said (<b>a</b>) and at Baltim (<b>b</b>), due to the short length of the shadow area. Field photos showing unsuccessful sand nourishment along the El-Gamiel tourist village at Port Said, where the waves attacked the coast and removed all the sand because of insufficient nourishment quantities (m<sup>3</sup>/m) and grain size (<b>c</b>). Finally, the coastal dunes, a natural defense measure, are subjected to erosion at Baltim (<b>d</b>), threatening the International Coastal Road and the Burullus coast.</p>
20 pages, 11207 KiB  
Article
Optimization of Remote Sensing Image Segmentation by a Customized Parallel Sine Cosine Algorithm Based on the Taguchi Method
by Fang Fan, Gaoyuan Liu, Jiarong Geng, Huiqi Zhao and Gang Liu
Remote Sens. 2022, 14(19), 4875; https://doi.org/10.3390/rs14194875 - 29 Sep 2022
Cited by 6 | Viewed by 1965
Abstract
Affected by solar radiation, atmospheric windows, radiation aberrations, and other atmospheric environmental factors, remote sensing images usually contain a large amount of noise and suffer from problems such as non-uniform image feature density. These problems make high-precision segmentation [...] Read more.
Affected by solar radiation, atmospheric windows, radiation aberrations, and other atmospheric environmental factors, remote sensing images usually contain a large amount of noise and suffer from problems such as non-uniform image feature density. These problems make high-precision segmentation of remote sensing images difficult. To improve the segmentation of remote sensing images, this study adopted an improved metaheuristic algorithm to optimize the parameter settings of pulse-coupled neural networks (PCNNs). Using the Taguchi method, the optimal parallelism scheme of the algorithm was effectively tailored to the specific target problem, avoiding blindness in the design of the algorithm's parallel structure. The superiority of the customized parallel SCA based on the Taguchi method (TPSCA) was demonstrated in tests with different types of benchmark functions. In this study, simulations were performed using IKONOS, GeoEye-1, and WorldView-2 satellite remote sensing images. The results showed that the accuracy of the proposed remote sensing image segmentation model was significantly improved. Full article
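For context, the sine cosine algorithm (SCA) that TPSCA parallelizes moves each candidate solution toward the best solution found so far using sine- and cosine-weighted steps. The sketch below is only the standard serial SCA core; the paper's actual contributions (the Taguchi-tuned parallel structure and the PCNN parameter optimization) are not reproduced here, and the bounds and agent counts are illustrative:

```python
import math
import random

def sca_minimize(f, dim, bounds, n_agents=20, iters=200, a=2.0, seed=0):
    """Minimal serial Sine Cosine Algorithm (SCA) sketch."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=f)[:]
    for t in range(iters):
        r1 = a - t * (a / iters)          # linearly decreasing step amplitude
        for x in X:
            for j in range(dim):
                r2 = rng.uniform(0, 2 * math.pi)
                r3 = rng.uniform(0, 2)
                r4 = rng.random()
                step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
                x[j] += step * abs(r3 * best[j] - x[j])
                x[j] = min(max(x[j], lo), hi)  # clamp to the search bounds
            if f(x) < f(best):
                best = x[:]
    return best, f(best)

# Sphere benchmark function: the optimum is 0 at the origin.
best, val = sca_minimize(lambda v: sum(c * c for c in v), dim=3, bounds=(-5, 5))
```

In the paper, the quantity being minimized would be a PCNN segmentation criterion rather than a benchmark function.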
Show Figures

Graphical abstract
Figure 1
<p>Working principle diagram of the PCNN.</p>
Figure 2
<p>Execution flow of Strategy 1.</p>
Figure 3
<p>Execution flow of Strategy 2.</p>
Figure 4
<p>Optimization results of the benchmark: (<b>a</b>) 2D position distribution; (<b>b</b>) Trajectory in the first dimension; (<b>c</b>) Average fitness; (<b>d</b>) Convergence curves.</p>
Figure 5
<p>Signal-to-noise ratio main effect map: (<b>a</b>) the SNR main effect graph of unimodal function <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </semantics></math>; (<b>b</b>) the SNR main effect graph of multimodal function <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mrow> <mn>12</mn> </mrow> </msub> </mrow> </semantics></math>; (<b>c</b>) the SNR main effect graph of complex function <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mrow> <mn>18</mn> </mrow> </msub> </mrow> </semantics></math>.</p>
Figure 6
<p>Flowchart of TPSCA–PCNN.</p>
Figure 7
<p>Comparison of image segmentation effects before and after preprocessing: (<b>a</b>) before image preprocessing; (<b>b</b>) after image preprocessing.</p>
Figure 8
<p>Segmentation results of IKONOS satellite remote sensing image 1: (<b>a</b>) original remote sensing image; (<b>b</b>) TPSCA–PCNN segmentation result; (<b>c</b>) PCNN segmentation result; (<b>d</b>) ELM segmentation result.</p>
Figure 9
<p>Segmentation results of IKONOS satellite remote sensing image 2: (<b>a</b>) original remote sensing image; (<b>b</b>) TPSCA–PCNN segmentation result; (<b>c</b>) PCNN segmentation result; (<b>d</b>) ELM segmentation result.</p>
Figure 10
<p>Segmentation results of GeoEye-1 satellite remote sensing image 3: (<b>a</b>) original remote sensing image; (<b>b</b>) TPSCA–PCNN segmentation result; (<b>c</b>) PCNN segmentation result; (<b>d</b>) ELM segmentation result.</p>
Figure 11
<p>Segmentation results of GeoEye-1 satellite remote sensing image 4: (<b>a</b>) original remote sensing image; (<b>b</b>) TPSCA–PCNN segmentation result; (<b>c</b>) PCNN segmentation result; (<b>d</b>) ELM segmentation result.</p>
Figure 12
<p>Segmentation results of WorldView-2 satellite remote sensing image 5: (<b>a</b>) original remote sensing image; (<b>b</b>) TPSCA–PCNN segmentation result; (<b>c</b>) PCNN segmentation result; (<b>d</b>) ELM segmentation result.</p>
Figure 13
<p>Segmentation results of WorldView-2 satellite remote sensing image 6: (<b>a</b>) original remote sensing image; (<b>b</b>) TPSCA–PCNN segmentation result; (<b>c</b>) PCNN segmentation result; (<b>d</b>) ELM segmentation result.</p>
23 pages, 8425 KiB  
Article
An Impervious Surface Spectral Index on Multispectral Imagery Using Visible and Near-Infrared Bands
by Shanshan Su, Jia Tian, Xinyu Dong, Qingjiu Tian, Ning Wang and Yanbiao Xi
Remote Sens. 2022, 14(14), 3391; https://doi.org/10.3390/rs14143391 - 14 Jul 2022
Cited by 14 | Viewed by 2849
Abstract
The accurate mapping of urban impervious surfaces from remote sensing images is crucial for understanding urban land-cover change and addressing impervious-surface-change-related environmental issues. To date, the authors of most studies have built indices to map impervious surfaces based on shortwave infrared (SWIR) or [...] Read more.
The accurate mapping of urban impervious surfaces from remote sensing images is crucial for understanding urban land-cover change and addressing impervious-surface-change-related environmental issues. To date, the authors of most studies have built indices to map impervious surfaces based on shortwave infrared (SWIR) or thermal infrared (TIR) bands from middle- to low-spatial-resolution remote sensing images. However, this limits the use of high-spatial-resolution remote sensing data (e.g., GaoFen-2, QuickBird, and IKONOS). In addition, the separation of bare soil and impervious surfaces has not been effectively solved. In this article, on the basis of spectral analysis of impervious-surface and non-impervious-surface (vegetation, water, soil, and non-photosynthetic vegetation (NPV)) data acquired from widely recognized spectral libraries and Sentinel-2 MSI images in different regions and seasons, a novel spectral index named the Normalized Impervious Surface Index (NISI) was proposed for extracting impervious-area information using the blue, green, red, and near-infrared (NIR) bands. We performed comprehensive assessments of the NISI, and the results demonstrated that it provided the best performance among the studied methods in separating soil and impervious surfaces in Sentinel-2 MSI images. Furthermore, regarding impervious surface mapping accuracy, the NISI had an overall accuracy (OA) of 89.28% (±0.258), a producer’s accuracy (PA) of 89.76% (±1.754), and a user’s accuracy (UA) of 90.68% (±1.309), which were higher than those of machine learning algorithms, thus supporting the NISI as an effective tool for urban impervious surface mapping and analysis. The results indicate that the NISI has high robustness and good applicability. Full article
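Normalized spectral indices of this family share the general shape (A − B)/(A + B) over band combinations. The abstract does not state the NISI formula itself, so the snippet below only illustrates the pattern, using NDVI as the classic instance and a clearly labeled placeholder for the four-band combination:

```python
import numpy as np

def normalized_index(num, den, eps=1e-9):
    """Generic normalized ratio of two band combinations, bounded in [-1, 1]."""
    return (num - den) / (num + den + eps)

# Toy 2x2 surface-reflectance "image" with blue/green/red/NIR bands.
blue  = np.array([[0.10, 0.12], [0.08, 0.20]])
green = np.array([[0.12, 0.14], [0.10, 0.22]])
red   = np.array([[0.11, 0.15], [0.09, 0.25]])
nir   = np.array([[0.45, 0.18], [0.50, 0.28]])

# NDVI is the classic example of this form: vegetation (high NIR,
# low red) scores high, while impervious surfaces score low.
ndvi = normalized_index(nir, red)

# The NISI combines the blue, green, red, and NIR bands; its exact
# formula is given in the paper, not in this abstract, so the
# combination below is only a placeholder illustrating the pattern.
placeholder = normalized_index(blue + red, nir + green)
```

With real Sentinel-2 data, `blue`/`green`/`red`/`nir` would be the B2/B3/B4/B8 reflectance arrays.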
Show Figures

Figure 1
<p>(<b>a</b>) The map of China. (<b>b</b>) False-color composite Sentinel-2 image of Beijing. (<b>c</b>) False-color composite Sentinel-2 image of Guangzhou. (<b>d</b>) False-color composite Sentinel-2 image of Nanjing.</p>
Figure 2
<p>(<b>a</b>) Spectral curves of typical ground objects in the USGS spectral library. (<b>b</b>) Spectral curves of typical ground objects in the USGS spectral library resampled to the resolution of Sentinel-2 spectra. (<b>c</b>) The spectral reflectance in the NIR and visible bands (red, green, and blue) based on the resampled spectral curves of typical ground objects in the USGS spectral library. The blue, green, red, and black dotted lines in <a href="#remotesensing-14-03391-f002" class="html-fig">Figure 2</a> indicate the central wavelengths of the blue, green, red, and NIR bands of the Sentinel-2 reflectance spectra, respectively.</p>
Figure 3
<p>Surface reflectance comparison of ground object classes (mean values and standard deviation values) in multitemporal Sentinel-2 images. (<b>a</b>–<b>c</b>) Reflectance of the same ground classes covering four seasons in Beijing, Nanjing, and Guangzhou, China, respectively.</p>
Figure 4
<p>The NISI results based on multitemporal Sentinel-2 MSI images. (<b>a</b>–<b>c</b>) NISI gray-scale images in downtown of Beijing, Nanjing, and Guangzhou, respectively. White surfaces are water, high-brightness surfaces are impervious surfaces, and low-brightness surfaces are non-impervious surfaces (vegetation, bare soil and NPV).</p>
Figure 5
<p>Comparison of mixture degree between impervious surfaces and bare soil. Yellow ellipses and red rectangles represent bare soil and impervious surfaces in (<b>a</b>–<b>c</b>), respectively. (<b>a</b>) False-color composite Sentinel-2 image of Beijing International airport. (<b>b</b>) True-color composite Sentinel-2 image of Beijing International airport. (<b>c</b>) NISI image of Beijing International airport. (<b>d</b>–<b>i</b>) Spectral confusion or index-value-overlapping histograms of bare soil and NISI derived from original images, NISI images, and other typical indices images. The horizontal axis is the index value, and the vertical axis is the frequency of index value occurrence in the index images.</p>
Figure 6
<p>The impervious extraction results of different machine learning methods. Row P1 shows Sentinel-2 false-color images. Rows P2–P5 show the results of RF, SVM, CART, and our method (NISI), respectively. (<b>a-1</b>–<b>a-4</b>,<b>b-1</b>–<b>b-4</b>,<b>c-1</b>–<b>c-4</b>) show the results of RF, SVM, CART, and NISI in Nanjing, Beijing, and Guangzhou, respectively. The yellow polygons denote some examples where our method outperformed RF, SVM, and CART. The black polygons denote some examples where RF, SVM, and CART outperformed our method.</p>
23 pages, 2551 KiB  
Article
MSAC-Net: 3D Multi-Scale Attention Convolutional Network for Multi-Spectral Imagery Pansharpening
by Erlei Zhang, Yihao Fu, Jun Wang, Lu Liu, Kai Yu and Jinye Peng
Remote Sens. 2022, 14(12), 2761; https://doi.org/10.3390/rs14122761 - 8 Jun 2022
Cited by 5 | Viewed by 2201
Abstract
Pansharpening fuses spectral information from the multi-spectral image and spatial information from the panchromatic image, generating super-resolution multi-spectral images with high spatial resolution. In this paper, we proposed a novel 3D multi-scale attention convolutional network (MSAC-Net) based on the typical U-Net framework for [...] Read more.
Pansharpening fuses spectral information from the multi-spectral image and spatial information from the panchromatic image, generating super-resolution multi-spectral images with high spatial resolution. In this paper, we propose a novel 3D multi-scale attention convolutional network (MSAC-Net), based on the typical U-Net framework, for multi-spectral imagery pansharpening. MSAC-Net is designed via 3D convolution, and an attention mechanism replaces the skip connections between the contraction and expansion pathways. Multiple pansharpening layers in the expansion pathway calculate the reconstruction results to preserve multi-scale spatial information. MSAC-Net's performance is verified on the IKONOS and QuickBird satellite datasets, showing that it achieves performance comparable or superior to state-of-the-art methods. Additionally, 2D and 3D convolution are compared, and the influences of the number of convolutions in each convolution block, the weight of multi-scale information, and the network's depth on performance are analyzed. Full article
(This article belongs to the Special Issue Deep Reinforcement Learning in Remote Sensing Image Processing)
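The attention gate (AG) that replaces the skip connections can be sketched, in generic form, as an additive attention module: the skip features and a gating signal are projected to a common channel space, combined, and squashed into a per-pixel coefficient that rescales the skip features. The NumPy sketch below uses 2D features and 1×1-style channel projections; the 3D convolutions, learned weights, and multi-scale loss of the actual MSAC-Net are omitted, and all shapes are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate (AG) sketch.

    x:   skip-connection features, shape (H, W, Cx)
    g:   gating features from the coarser level, shape (H, W, Cg)
    w_x, w_g: channel projections mapping both inputs to Ci channels
    psi: channel projection mapping Ci channels to a 1-channel map
    Returns x scaled per pixel by attention coefficients in (0, 1),
    so irrelevant skip activations are suppressed before fusion.
    """
    inter = np.maximum(x @ w_x + g @ w_g, 0.0)   # ReLU(W_x x + W_g g)
    alpha = sigmoid(inter @ psi)                 # (H, W, 1) attention map
    return x * alpha, alpha

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8, 4))      # skip features
g = rng.normal(size=(8, 8, 6))      # gating signal (already upsampled)
w_x = rng.normal(size=(4, 3))
w_g = rng.normal(size=(6, 3))
psi = rng.normal(size=(3, 1))
gated, alpha = attention_gate(x, g, w_x, w_g, psi)
```

In a trained network, `w_x`, `w_g`, and `psi` would be learned convolution kernels rather than random matrices.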
Show Figures

Graphical abstract
Figure 1
<p>Classic U-Net design.</p>
Figure 2
<p>The proposed MSAC-Net architecture.</p>
Figure 3
<p>The attention gate (AG) module.</p>
Figure 4
<p>Display images in datasets. (<b>a</b>–<b>d</b>) are the IKONOS dataset’s images. (<b>e</b>–<b>h</b>) are the QuickBird dataset’s images.</p>
Figure 5
<p>The reference image and pansharpening image are compared at different scales. The first line is the reference image for each scale, and the second line is pansharpening image for each scale. (<b>a</b>–<b>e</b>) are the reference images of the first to fifth layers respectively, and corresponding, (<b>f</b>–<b>j</b>) are the reconstructed images of the first to fifth layers respectively.</p>
Figure 6
<p>The comparison of feature maps with and without the AG module. (<b>a</b>) The ground truth. (<b>b</b>) The 53rd feature map in <math display="inline"><semantics> <msub> <mi>F</mi> <msub> <mi>L</mi> <mn>1</mn> </msub> </msub> </semantics></math>. (<b>c</b>) The 53rd feature map in <math display="inline"><semantics> <msub> <mi>F</mi> <msub> <mi>H</mi> <mn>1</mn> </msub> </msub> </semantics></math>. (<b>d</b>) the ground truth. (<b>e</b>) The 23rd feature map in <math display="inline"><semantics> <msub> <mi>F</mi> <msub> <mi>L</mi> <mn>1</mn> </msub> </msub> </semantics></math>. (<b>f</b>) The 23rd feature map in <math display="inline"><semantics> <msub> <mi>F</mi> <msub> <mi>H</mi> <mn>1</mn> </msub> </msub> </semantics></math>.</p>
Figure 7
<p>The comparison of the results for different structures. (<b>a</b>) The ground truth. (<b>b</b>) U-net. (<b>c</b>) U-net + AG: U-net with AG on the skip connections. (<b>d</b>) U-net + scale: U-Net with multi-scale cost function. (<b>e</b>) MSAC-Net. (<b>f</b>–<b>i</b>) are the spectral distortion maps corresponding to (<b>b</b>–<b>e</b>).</p>
Figure 8
<p>The comparison of 2D and 3D convolution results on the QuickBird dataset. (<b>a</b>) PAN. (<b>b</b>) The ground truth. (<b>c</b>) The 2D convolutional method. (<b>d</b>) The 3D convolutional method.</p>
Figure 9
<p>The fused results on the simulated IKONOS dataset. (<b>a</b>) LR-MS. (<b>b</b>) PAN. (<b>c</b>) The ground truth. (<b>d</b>) GS. (<b>e</b>) Indusion. (<b>f</b>) SR. (<b>g</b>) PNN. (<b>h</b>) PanNet. (<b>i</b>) MSDCNN. (<b>j</b>) MIPSM. (<b>k</b>) GTP-PNet. (<b>l</b>) MSAC-Net.</p>
Figure 10
<p>The fused results on the simulated QuickBird dataset. (<b>a</b>) LR-MS. (<b>b</b>) PAN. (<b>c</b>) The ground truth. (<b>d</b>) GS. (<b>e</b>) Indusion. (<b>f</b>) SR. (<b>g</b>) PNN. (<b>h</b>) PanNet. (<b>i</b>) MSDCNN. (<b>j</b>) MIPSM. (<b>k</b>) GTP-PNet. (<b>l</b>) MSAC-Net.</p>
Figure 11
<p>The real results on the IKONOS dataset. (<b>a</b>) GS. (<b>b</b>) Indusion. (<b>c</b>) SR. (<b>d</b>) PNN. (<b>e</b>) PanNet. (<b>f</b>) MSDCNN. (<b>g</b>) MIPSM. (<b>h</b>) GTP-PNet. (<b>i</b>) MSAC-Net. (<b>j</b>) PAN.</p>
Figure 12
<p>The real results on the QuickBird dataset. (<b>a</b>) GS. (<b>b</b>) Indusion. (<b>c</b>) SR. (<b>d</b>) PNN. (<b>e</b>) PanNet. (<b>f</b>) MSDCNN. (<b>g</b>) MIPSM. (<b>h</b>) GTP-PNet. (<b>i</b>) MSAC-Net. (<b>j</b>) PAN.</p>
Figure 13
<p>Discussion of the effects of parameters: (<b>a</b>) The effect of the number of convolutions. (<b>b</b>) The effect of the multi-scale information weight <math display="inline"><semantics> <mi>λ</mi> </semantics></math>. (<b>c</b>) The effect of network depth.</p>
20 pages, 2401 KiB  
Article
From Regression Based on Dynamic Filter Network to Pansharpening by Pixel-Dependent Spatial-Detail Injection
by Xuan Liu, Ping Tang, Xing Jin and Zheng Zhang
Remote Sens. 2022, 14(5), 1242; https://doi.org/10.3390/rs14051242 - 3 Mar 2022
Cited by 3 | Viewed by 1765
Abstract
Compared with hardware upgrading, pansharpening is a low-cost way to acquire high-quality images, which usually combines multispectral images (MS) in low spatial resolution with panchromatic images (PAN) in high spatial resolution. This paper proposes a pixel-dependent spatial-detail injection network (PDSDNet). Based on a [...] Read more.
Compared with hardware upgrading, pansharpening is a low-cost way to acquire high-quality images; it usually combines multispectral images (MS) of low spatial resolution with panchromatic images (PAN) of high spatial resolution. This paper proposes a pixel-dependent spatial-detail injection network (PDSDNet). Based on a dynamic filter network, PDSDNet constructs a nonlinear mapping of the simulated panchromatic band from low-resolution multispectral bands through filtering convolution regression. PDSDNet reduces the possibility of spectral distortion and enriches spatial details by improving the similarity between the simulated panchromatic band and the real panchromatic band. Moreover, PDSDNet assumes that if an ideal multispectral image with the same resolution as the panchromatic image existed, each of its bands would have the same spatial details as the panchromatic image. Thus, the details we fill into each multispectral band are the same, and they can be extracted effectively in one pass. Experimental results demonstrate that PDSDNet can generate high-quality fusion images from multispectral and panchromatic images. Compared with the widely applied BDSD, MTF-GLP-HPM-PP, and PanNet methods on IKONOS, QuickBird, and WorldView-3 datasets, the pansharpened images of the proposed method have rich spatial details and present superior visual effects without noticeable spectral or spatial distortion. Full article
(This article belongs to the Section Remote Sensing Image Processing)
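The detail-injection idea, that every band of the ideal high-resolution MS image shares the panchromatic image's spatial details, can be sketched as follows. Note that PDSDNet predicts pixel-dependent filters with a dynamic filter network; the global least-squares weights below are only a crude stand-in for that per-pixel regression, and the toy arrays are illustrative:

```python
import numpy as np

def inject_details(ms_up, pan, weights):
    """Spatial-detail injection sketch.

    ms_up:   upsampled MS image, shape (H, W, B)
    pan:     panchromatic band, shape (H, W)
    weights: per-band regression weights approximating PAN from MS.
    The extracted details are identical for every band, matching the
    paper's assumption that each ideal MS band shares PAN's details.
    """
    sim_pan = ms_up @ weights                  # simulated panchromatic band
    details = pan - sim_pan                    # spatial details to inject
    return ms_up + details[..., None]          # same details in each band

rng = np.random.default_rng(1)
ms_up = rng.uniform(0, 1, size=(16, 16, 4))
true_w = np.array([0.2, 0.3, 0.3, 0.2])
pan = ms_up @ true_w + 0.05 * rng.normal(size=(16, 16))  # toy PAN image

# Global least-squares fit of the weights (stand-in for the learned,
# pixel-dependent filters of the actual network).
A = ms_up.reshape(-1, 4)
w, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
fused = inject_details(ms_up, pan, w)
```

The better `sim_pan` matches the real PAN, the smaller the injected residual and the lower the risk of spectral distortion, which is exactly what the network's regression targets.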
Show Figures

Figure 1
<p>Spectral response of the QuickBird panchromatic and multispectral imagery.</p>
Figure 2
<p>The structure of PDSDNet, which inputs four bands MS and outputs filters for simulating PAN.</p>
Figure 3
<p>The process of our fusion method.</p>
Figure 4
<p>Visualization of IKONOS test image: (<b>a</b>) PAN; (<b>b</b>) MS; (<b>c</b>) BDSD; (<b>d</b>) MTF-GLP-HPM-PP; (<b>e</b>) PanNet; (<b>f</b>) PDSDNet.</p>
Figure 5
<p>Visualization of local details of IKONOS test image: (<b>a</b>) PAN; (<b>b</b>) MS; (<b>c</b>) BDSD; (<b>d</b>) MTF-GLP-HPM-PP; (<b>e</b>) PanNet; (<b>f</b>) PDSDNet.</p>
Figure 6
<p>Quality indexes of results of BDSD, MTF-GLP-HPM-PP, PanNet, and PDSDNet.</p>
Figure 7
<p>Visualization of QuickBird test images: (<b>a</b>) PAN; (<b>b</b>) MS; (<b>c</b>) BDSD; (<b>d</b>) MTF-GLP-HPM-PP; (<b>e</b>) PanNet; (<b>f</b>) PDSDNet.</p>
Figure 8
<p>Visualization of local details of QuickBird test images: (<b>a</b>) PAN; (<b>b</b>) MS; (<b>c</b>) BDSD; (<b>d</b>) MTF-GLP-HPM-PP; (<b>e</b>) PanNet; (<b>f</b>) PDSDNet.</p>
Figure 9
<p>Quality indexes of results of BDSD, MTF-GLP-HPM-PP, PanNet, and PDSDNet.</p>
Figure 10
<p>Visualization of WorldView-3 test images: (<b>a</b>) PAN; (<b>b</b>) MS; (<b>c</b>) BDSD; (<b>d</b>) MTF-GLP-HPM-PP; (<b>e</b>) PanNet; (<b>f</b>) PDSDNet.</p>
Figure 11
<p>Visualization of local details of WorldView-3 test images: (<b>a</b>) PAN; (<b>b</b>) MS; (<b>c</b>) BDSD; (<b>d</b>) MTF-GLP-HPM-PP; (<b>e</b>) PanNet; (<b>f</b>) PDSDNet.</p>
Figure 12
<p>Quality indexes of results of BDSD, MTF-GLP-HPM-PP, PanNet, and PDSDNet.</p>
Figure 13
<p>(<b>a</b>) PAN; (<b>b</b>,<b>c</b>) false-color composites of the eighth, third, and first bands; (<b>b</b>) MS; (<b>c</b>) PDSDNet.</p>
Figure 14
<p>Local details: (<b>a</b>) PAN; (<b>b</b>,<b>c</b>) false-color composites of the eighth, third, and first bands; (<b>b</b>) MS; (<b>c</b>) PDSDNet.</p>
Figure 15
<p>The heat map of the PDSDNet filters corresponding to the first row test image in <a href="#remotesensing-14-01242-f008" class="html-fig">Figure 8</a>.</p>
20 pages, 1989 KiB  
Article
An Improved Version of the Generalized Laplacian Pyramid Algorithm for Pansharpening
by Paolo Addesso, Rocco Restaino and Gemine Vivone
Remote Sens. 2021, 13(17), 3386; https://doi.org/10.3390/rs13173386 - 26 Aug 2021
Cited by 8 | Viewed by 2096
Abstract
The spatial resolution of multispectral data can be synthetically improved by exploiting the spatial content of a companion panchromatic image. This process, named pansharpening, is widely employed by data providers to augment the quality of images made available for many applications. The huge [...] Read more.
The spatial resolution of multispectral data can be synthetically improved by exploiting the spatial content of a companion panchromatic image. This process, named pansharpening, is widely employed by data providers to augment the quality of images made available for many applications. The huge demand requires the utilization of efficient fusion algorithms that do not require specific training phases, but rather exploit physical considerations to combine the available data. For this reason, classical model-based approaches are still widely used in practice. We created and assessed a method for improving a widespread approach, based on the generalized Laplacian pyramid decomposition, by combining two different cost-effective upgrades: the estimation of the detail-extraction filter from data and the utilization of an improved injection scheme based on multilinear regression. The proposed method was compared with several existing efficient pansharpening algorithms, employing the most credited performance evaluation protocols. The capability of achieving optimal results in very different scenarios was demonstrated by employing data acquired by the IKONOS and WorldView-3 satellites. Full article
(This article belongs to the Section Remote Sensing Image Processing)
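The underlying GLP machinery can be sketched as follows: one Laplacian-pyramid level of the PAN image (PAN minus its low-pass version) supplies the details, which are injected into each MS band with a per-band gain. The binomial filter and fixed additive gains below are placeholders only; the paper's actual contributions, estimating the detail-extraction filter from the data and an MLR-based injection scheme, are not reproduced:

```python
import numpy as np

KERNEL = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # binomial low-pass

def lowpass(img):
    """Separable low-pass filtering, a stand-in for an MTF-matched filter."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, KERNEL, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, KERNEL, mode="same"), 0, tmp)

def glp_fuse(ms_up, pan, gains):
    """Additive GLP-style pansharpening sketch.

    The details are one Laplacian-pyramid level of PAN (the difference
    between PAN and its low-pass version); each MS band receives the
    details scaled by its injection gain.
    """
    details = pan - lowpass(pan)
    return ms_up + details[..., None] * gains[None, None, :]

rng = np.random.default_rng(2)
pan = rng.uniform(0, 1, size=(32, 32))
ms_up = rng.uniform(0, 1, size=(32, 32, 4))
gains = np.array([0.9, 1.0, 1.1, 0.8])     # illustrative per-band gains
fused = glp_fuse(ms_up, pan, gains)
```

In the proposed method, both the low-pass filter and the gains would instead be estimated from the data (the latter via multilinear regression).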
Show Figures

Graphical abstract
Figure 1
<p>An example of pansharpening: (<b>a</b>) MS (interpolated) image; (<b>b</b>) PAN image; (<b>c</b>) fused image (namely, via the MBFE-BDSD-MLR method, presented in <a href="#sec5-remotesensing-13-03386" class="html-sec">Section 5</a>).</p>
Figure 2
<p>Block scheme of the pansharpening framework.</p>
Figure 3
<p>Reduced resolution datasets: <span class="html-italic">China dataset</span> (on the left), <span class="html-italic">Tripoli dataset</span> (on the right). The first row reports the original MS image (used as the ground-truth) and the second row reports the degraded MS image, upsampled to the ground-truth size.</p>
Figure 4
<p>Close-ups of the fused results using the reduced resolution China dataset: (<b>a</b>) GT; (<b>b</b>) GSA; (<b>c</b>) MF-HG; (<b>d</b>) MBFE-BDSD-HPM; (<b>e</b>) MBFE-GSA-HPM; (<b>f</b>) FE-HPM; (<b>g</b>) GLP-HPM; (<b>h</b>) MBFE-BDSD-CBD; (<b>i</b>) MBFE-GSA-CBD; (<b>j</b>) FE-CBD; (<b>k</b>) GLP-CBD; (<b>l</b>) MBFE-BDSD-MLR; (<b>m</b>) MBFE-GSA-MLR; (<b>n</b>) FE-MLR; (<b>o</b>) GLP-MLR.</p>
Full article ">Figure 4 Cont.
<p>Close-ups of the fused results using the reduced resolution China dataset: (<b>a</b>) GT; (<b>b</b>) GSA; (<b>c</b>) MF-HG; (<b>d</b>) MBFEBDSD-HPM; (<b>e</b>) MBFE-GSA-HPM; (<b>f</b>) FE-HPM; (<b>g</b>) GLP-HPM; (<b>h</b>) MBFE-BDSD-CBD; (<b>i</b>) MBFE-GSA-CBD; (<b>j</b>) FE-CBD; (<b>k</b>) GLP-CBD; (<b>l</b>) MBFE-BDSD-MLR; (<b>m</b>) MBFE-GSA-MLR; (<b>n</b>) FE-MLR; (<b>o</b>) GLP-MLR.</p>
Full article ">Figure 5
<p>Close-ups of the fused results using the reduced resolution Tripoli dataset: (<b>a</b>) GT; (<b>b</b>) GSA; (<b>c</b>) MF-HG; (<b>d</b>) MBFE-BDSD-HPM; (<b>e</b>) MBFE-GSA-HPM; (<b>f</b>) FE-HPM; (<b>g</b>) GLP-HPM; (<b>h</b>) MBFE-BDSD-CBD; (<b>i</b>) MBFE-GSA-CBD; (<b>j</b>) FE-CBD; (<b>k</b>) GLP-CBD; (<b>l</b>) MBFE-BDSD-MLR; (<b>m</b>) MBFE-GSA-MLR; (<b>n</b>) FE-MLR; (<b>o</b>) GLP-MLR.</p>
Full article ">Figure 6
<p>Tripoli dataset (RR): differences between the Q<math display="inline"><semantics> <msup> <mn>2</mn> <mi>n</mi> </msup> </semantics></math> maps computed for the best algorithm, i.e., MBFE-GSA-MLR, and the baseline methods, i.e., (<b>a</b>) GSA, (<b>b</b>) GLP-MLR, (<b>c</b>) GLP-CBD, (<b>d</b>) MBFE-GSA-CBD, (<b>e</b>) GLP-HPM and (<b>f</b>) MBFE-GSA-HPM. Green values: better results obtained by MBFE-GSA-MLR; red values: better results obtained by the other algorithm.</p>
Full article ">Figure 7
<p>Close-ups of the details of the fused results using the full resolution Tripoli dataset: (<b>a</b>) PAN; (<b>b</b>) EXP; (<b>c</b>) details for GSA; (<b>d</b>) details for MF-HG; (<b>e</b>) details for MBFE-BDSD-HPM; (<b>f</b>) details for MBFE-GSA-HPM; (<b>g</b>) details for FE-HPM; (<b>h</b>) details for GLP-HPM; (<b>i</b>) details for MBFE-BDSD-CBD; (<b>j</b>) details for MBFE-GSA-CBD; (<b>k</b>) details for FE-CBD; (<b>l</b>) details for GLP-CBD; (<b>m</b>) details for MBFE-BDSD-MLR; (<b>n</b>) details for MBFE-GSA-MLR; (<b>o</b>) details for FE-MLR; (<b>p</b>) details for GLP-MLR.</p>
Full article ">
20 pages, 15379 KiB  
Review
Forest Aboveground Biomass Estimation and Mapping through High-Resolution Optical Satellite Imagery—A Literature Review
by Adeel Ahmad, Hammad Gilani and Sajid Rashid Ahmad
Forests 2021, 12(7), 914; https://doi.org/10.3390/f12070914 - 14 Jul 2021
Cited by 22 | Viewed by 8798
Abstract
This paper provides a comprehensive literature review on forest aboveground biomass (AGB) estimation and mapping through high-resolution optical satellite imagery (≤5 m spatial resolution). Based on the literature review, 44 peer-reviewed journal articles were published over 15 years (2004–2019). Twenty-one studies were conducted in Asia, eight in North America and Africa, five in South America, and four in Europe. This review article gives an overview of the published methodologies for AGB prediction modeling and validation. The literature review suggested that, along with the integration of other sensors, QuickBird, WorldView-2, and IKONOS satellite images were the most widely used for AGB estimation, with higher estimation accuracies. All studies were grouped by six satellite-derived independent variables: tree crown, image textures, tree shadow fraction, canopy height, vegetation indices, and multiple variables. Using these satellite-derived independent variables, most studies used linear regression (41%), 30% used linear multiple regression, 18% used non-linear (machine learning) regression, and very few (11%) used non-linear (multiple and exponential) regression for estimating AGB. In the context of global forest AGB estimation and monitoring, the advantages, strengths, and limitations were discussed to achieve better accuracy and transparency towards the performance-based payment mechanism of the REDD+ program. Apart from technical limitations, we found that very few studies addressed real-time monitoring of AGB or quantifying AGB change, a dimension that needs exploration. Full article
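The dominant approach the review identifies, simple linear regression of field-measured AGB against a single satellite-derived predictor (crown area, a texture metric, canopy height, or a vegetation index), can be sketched in a few lines. The function names and the synthetic calibration data below are illustrative, not taken from any of the reviewed studies.

```python
import numpy as np

def fit_agb_linear(predictor, agb):
    """Ordinary least squares fit of AGB = a + b * predictor.

    predictor : 1-D array of a satellite-derived variable per plot
    agb       : 1-D array of field-measured AGB (e.g. t/ha) per plot
    Returns [intercept, slope].
    """
    X = np.column_stack([np.ones_like(predictor), predictor])
    coef, *_ = np.linalg.lstsq(X, agb, rcond=None)
    return coef

def predict_agb(coef, predictor):
    """Apply the fitted model to map AGB from the predictor raster/array."""
    return coef[0] + coef[1] * predictor
```

Linear multiple regression, the second most common choice in the review, only differs in stacking several predictor columns into `X`.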
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
16 pages, 6970 KiB  
Article
Unsupervised Multistep Deformable Registration of Remote Sensing Imagery Based on Deep Learning
by Maria Papadomanolaki, Stergios Christodoulidis, Konstantinos Karantzalos and Maria Vakalopoulou
Remote Sens. 2021, 13(7), 1294; https://doi.org/10.3390/rs13071294 - 29 Mar 2021
Cited by 16 | Viewed by 3158
Abstract
Image registration is among the most popular and important problems in remote sensing. In this paper we propose a fully unsupervised, deep-learning-based multistep deformable registration scheme for aligning pairs of satellite images. The presented method builds on the expressive power of deep fully convolutional networks, directly regressing the spatial gradients of the deformation and employing a 2D transformer layer to efficiently warp one image to the other in an end-to-end fashion. The displacements are calculated iteratively, using different time steps to refine and regress them. Our formulation can be integrated into any fully convolutional architecture while providing fast inference. The developed methodology has been evaluated on two different datasets depicting urban and periurban areas, i.e., the very high-resolution dataset of the East Prefecture of Attica, Greece, as well as the high-resolution ISPRS Ikonos dataset. Quantitative and qualitative results demonstrated the high potential of our method. Full article
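The role of the 2D transformer layer, resampling the source image at positions displaced by a dense deformation field, can be illustrated with a plain NumPy bilinear warp. This is only a sketch of the sampling operation (the paper's layer is differentiable and sits inside a network); the function name and the (2, H, W) flow convention are assumptions for the example.

```python
import numpy as np

def warp_bilinear(image, flow):
    """Warp a single-channel image by per-pixel displacements.

    image : (H, W) array
    flow  : (2, H, W) array of (dy, dx) displacements in pixels;
            output[i, j] samples image at (i + dy, j + dx), clamped
            to the image border, with bilinear interpolation.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sy = np.clip(ys + flow[0], 0, h - 1)
    sx = np.clip(xs + flow[1], 0, w - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = sy - y0
    wx = sx - x0
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

In the multistep scheme, the flow at each time step is added to the accumulated flow of the previous step before warping, which is what lets the network progressively refine the deformation grid.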
(This article belongs to the Special Issue Signal and Image Processing for Remote Sensing)
3613 KiB  
Proceeding Paper
Refining IKONOS DEM for Dehradun Region Using Photogrammetry Based DEM Editing Methods, Orthoimage Generation and Quality Assessment of Cartosat-1 DEM
by Ashutosh Bhardwaj, Kamal Jain and Rajat Subhra Chatterjee
Environ. Sci. Proc. 2021, 5(1), 3; https://doi.org/10.3390/IECG2020-06966 - 2 Dec 2020
Cited by 1 | Viewed by 1314
Abstract
The correct representation of the terrain topography is an important requirement for generating photogrammetric products such as orthoimages and maps from high-resolution (HR) or very high-resolution (VHR) satellite datasets. Refining the digital elevation model (DEM) for the generation of an orthoimage is a vital step with a direct effect on the final accuracy achieved in the orthoimages. The refined DEM has potential applications in various domains of earth sciences, such as geomorphological analysis, flood inundation mapping, hydrological analysis, and large-scale mapping in an urban environment, impacting the resulting output accuracy. In the presented study, manual editing is applied to the DEM generated automatically from IKONOS data after satellite triangulation with a root mean square error (RMSE) of 0.46, using the rational function model (RFM) and an optimal number of ground control points (GCPs). The RFM uses rational polynomial coefficients (RPCs) to build the relation between image space and ground space. The automatically generated DEM initially represents the digital surface model (DSM), which is used in this study to generate a digital terrain model (DTM) for improving orthoimages over an area of approximately 100 km2. The DSM frequently contains errors due to mass points hanging (floating) above or digging below the terrain, which need correction while generating the DTM. The DTM assists in removing the geometric effects (errors) of ground relief present in the DEM (i.e., the DSM here) while generating the orthoimages, thus improving their quality, especially in areas such as Dehradun that have highly undulating terrain with a large number of natural drainages. The difference image between the reference, i.e., the edited IKONOS DEM (now representing the DTM), and the automatically generated IKONOS DEM, i.e., the DSM, has a mean difference of 1.421 m. The difference DEM (dDEM) between the reference IKONOS DEM and the generated Cartosat-1 DEM at a 10 m posting interval (referred to as Carto10 DEM) results in a mean difference of 8.74 m. Full article
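The difference-DEM comparison reported in the abstract amounts to subtracting the two elevation rasters and summarizing the residuals. A minimal sketch, assuming both DEMs are already co-registered arrays on the same grid (the function name and the RMSE summary are illustrative additions; the abstract reports only mean differences):

```python
import numpy as np

def ddem_stats(reference_dem, test_dem):
    """Difference DEM (dDEM) between a reference surface (e.g. the edited
    IKONOS DTM) and a test surface (e.g. the Carto10 DEM).

    Both inputs are (H, W) elevation arrays in metres on the same grid.
    Returns the dDEM raster plus its mean difference and RMSE.
    """
    ddem = reference_dem - test_dem
    mean_diff = float(ddem.mean())
    rmse = float(np.sqrt(np.mean(ddem**2)))
    return ddem, mean_diff, rmse
```

A nonzero mean difference indicates a systematic vertical bias between the two surfaces, while the RMSE also captures the spread of local disagreements.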