Search Results (38)

Search Parameters:
Keywords = acoustic remote sensing technique

22 pages, 29294 KiB  
Article
Ghost Removal from Forward-Scan Sonar Views near the Sea Surface for Image Enhancement and 3-D Object Modeling
by Yuhan Liu and Shahriar Negahdaripour
Remote Sens. 2024, 16(20), 3814; https://doi.org/10.3390/rs16203814 - 14 Oct 2024
Viewed by 316
Abstract
Underwater sonar is the primary remote sensing and imaging modality within turbid environments with poor visibility. The two-dimensional (2-D) images of a target near the air–sea interface (or resting on a hard seabed), acquired by forward-scan sonar (FSS), are generally corrupted by the ghost and sometimes mirror components, formed by the multipath propagation of transmitted acoustic beams. In the processing of the 2-D FSS views to generate an accurate three-dimensional (3-D) object model, the corrupted regions have to be discarded. The sonar tilt angle and distance from the sea surface are two important parameters for the accurate localization of the ghost and mirror components. We propose a unified optimization technique for improving both the measurements of these two parameters from inexpensive sensors and the accuracy of a 3-D object model using 2-D FSS images at known poses. The solution is obtained by the recursive updating of the sonar parameters and the 3-D object model. Utilizing the 3-D object model, we can enhance the original images and generate synthetic views for arbitrary sonar poses. We demonstrate the performance of our method in experiments with synthetic and real images of three targets: two dominantly convex coral rocks and a highly concave toy wood table.
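The ghost and mirror components described above arise from surface-bounce multipath. The core geometry can be sketched as a reflection across an assumed flat air–sea interface; the coordinates, positions, and flat-surface model below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: locating the virtual mirror point of a scatterer by reflecting it
# across an assumed flat air-sea interface (the plane z = 0, with z measured
# downward from the surface). Simplified illustration only; the paper's full
# model also optimizes the sonar tilt and depth, which are not reproduced here.
import math

def mirror_point(p):
    """Reflect a scatterer at p = (x, y, z) across the surface plane z = 0."""
    x, y, z = p
    return (x, y, -z)

def slant_range(a, b):
    return math.dist(a, b)

sonar = (0.0, 0.0, 2.0)      # hypothetical sonar position, 2 m below surface
scatterer = (5.0, 0.0, 3.0)  # hypothetical target point, 3 m deep

direct = slant_range(sonar, scatterer)
# The shortest sonar -> surface -> scatterer bounce path has the same length
# as the straight line from the sonar to the scatterer's mirror image.
bounced = slant_range(sonar, mirror_point(scatterer))
# bounced > direct, so the ghost return appears at a longer range along the
# beam than the direct return, as in the multipath geometry of Figure 2.
```

This reflection principle is why the ghost component sits at a longer range than the object itself, and why the sonar depth and tilt must be known accurately to localize it.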
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
Figure 1. The ghost component overlaps with, and is indistinguishable from, the object region in every view. The mirror component at the reference position (elevation axis pointing upward) overlaps with both the object and ghost regions (a). As the sonar rotates about the viewing direction (from 0° to 67.5° in increments of 22.5° here), the mirror component separates from the object (b) and forms a distinct blob (c,d).
Figure 2. (a) For a sonar beam in the θ direction, the image intensity I of pixel (x, y) depends on the cumulative echoes from an unknown number of surface patches within the volume V_ϕ arriving at the sonar receiver simultaneously; V_ϕ covers the elevation-angle interval [−W_ϕ, W_ϕ] and the range interval [ℜ, ℜ + δℜ] along the beam covering the azimuthal-angle interval [θ, θ + δθ]. (b) A coral rock with the voxelated volume and triangular surface mesh of the SC solution. (c) Virtual mirror object geometry: transmitted sound waves in direction R1 are scattered by the surface at P_s. The reflected portion along the “unique direction” R2 towards P_W on the water surface (with surface normal n) is specularly reflected towards the sonar along R3, leading to the appearance of a virtual mirror object point at P_m. (d) Virtual ghost object geometry: considering the reverse direction of the mirror-point pathway, sound waves traveling along −R3 are specularly reflected towards the object along −R2 and are scattered at P_s; the components along −R1 are captured by the sonar. This leads to the appearance of a ghost point P_g along the sonar beam directed at P_s (at a longer range R_g).
Figure 3. (a) Block diagram of the entire algorithm; (b) steps in the 3-D shape optimization by displacement of model vertices, computed from 3-D vertex motions that are estimated from the 2-D image motions aligning the object regions in the data and synthetic views.
Figure 4. (a) The 2-D vectors {v_mi^O, v_mj^M} align the frontal contours {C_m^O, C_m^M} of the object and mirror regions in the real images with their counterparts {C̃_m^O, C̃_m^M} in the synthetic views; (b) magnified view of the relevant regions.
Figure 5. Processing steps in the decomposition of sonar data into object and ghost components. (a) Generation of the synthetic object image from the image formation model, and localization of the ghost and mirror components to identify regions overlapping with the object image. (b) Segmentation of the real and synthetic object regions into overlapping and non-overlapping parts; the non-overlapping region is used to generate the LUT for the synthetic-to-real object transformation, and the LUT is applied to reconstruct the overlapping object region, which is fused with the non-overlapping part to complete the object image. (c) Segmentation of the ghost area into overlapping and non-overlapping regions, producing the non-overlapping part. (d) Discounting of the object image within the overlap area to generate the ghost component. (e) Generation of the ghost image from the overlapping and non-overlapping components.
Figure 6. Three targets (two dominantly convex coral rocks with mild local concavities and a highly concave wood table), with height, maximum width, and imaging conditions.
Figure 7. Coral-one experiment: (a–d) synthetic and (a’–d’) real data. (a,b,a’,b’) Optimization of the sonar depth and tilt parameters. (c,c’) The image error E_I and volumetric error E_V moving in tandem confirms that the 3-D model improves as the image error is reduced. (d) The initialized SC solution (top) and the optimized 3-D model (bottom), shown by blue surface meshes, superimposed on the Kinect model (black mesh); (d’) the optimized SC (blue mesh) and Kinect (red mesh) models.
Figure 8. Coral-two experiment: (a–d) synthetic and (a’–d’) real data. (a,b,a’,b’) Optimization of the sonar depth and tilt parameters. (c,c’) Improving the 3-D model leads to smaller volumetric E_V and image E_I errors. (d) The Kinect model (black mesh) superimposed on the initialized SC solution (top) and the optimized 3-D model (bottom), shown by blue surface meshes. (d’) The optimized SC (blue mesh) and Kinect (red mesh) models.
Figure 9. Wood table experiment: (a–d) synthetic and (a’–d’) real data. (a,b,a’,b’) Optimization of the sonar depth and tilt parameters. (c,c’) Improving the 3-D model reduces both the volumetric E_V and image E_I errors. (d) The Kinect model (black mesh) superimposed on the initialized SC solution (top) and the optimized 3-D model (bottom), shown by blue surface meshes. (d’) The optimized SC (blue mesh) and Kinect (red mesh) models.
Figure 10. Coral-one experiment: (a) data; (b) data over the image region only; (c) initial and (d) optimized synthetic views generated by the 3-D model.
Figure 11. Coral-two experiment: (a) data; (b) data over the image region only; (c) initial and (d) optimized synthetic views generated by the 3-D model.
Figure 12. Wood table experiment: (a) data; (b) data within the object region only; (c) initial and (d) optimized synthetic views generated by the 3-D model.
Figure 13. Sets of images as in the previous experiments for (a1–d1) coral-one and (a2–d2) coral-two views in which the object, ghost, and mirror components overlap (not used in the optimization). (a1,a2) Data; (b1,b2) data within the object region only; (c1,c2) initial and (d1,d2) optimized synthetic views generated by the 3-D model.
Figure 14. Sets of wood table images as in the previous figures for views in which the object, ghost, and mirror components overlap (not used in the optimization process). (a) Data; (b) data within the object region only; (c) initial and (d) optimized synthetic views generated by the 3-D model.
19 pages, 5934 KiB  
Article
Detection of Typical Transient Signals in Water by XGBoost Classifier Based on Shape Statistical Features: Application to the Call of Southern Right Whale
by Zemin Zhou, Yanrui Qu, Boqing Zhu and Bingbing Zhang
J. Mar. Sci. Eng. 2024, 12(9), 1596; https://doi.org/10.3390/jmse12091596 - 9 Sep 2024
Viewed by 642
Abstract
Whale sound is a typical transient signal. The escalating demands of ecological research and marine conservation necessitate advanced technologies for the automatic detection and classification of underwater acoustic signals. Traditional energy detection methods, which focus primarily on amplitude, often perform poorly in the non-Gaussian noise conditions typical of oceanic environments. This study introduces a classified-before-detect approach that overcomes the limitations of amplitude-focused techniques. We also address the challenges posed by deep learning models, such as high data labeling costs and extensive computational requirements. By extracting shape statistical features from audio and using the XGBoost classifier, our method not only outperforms the traditional convolutional neural network (CNN) method in accuracy but also reduces the dependence on labeled data, thus improving the detection efficiency. The integration of these features significantly enhances model performance, promoting the broader application of marine acoustic remote sensing technologies. This research contributes to the advancement of marine bioacoustic monitoring, offering a reliable, rapid, and training-efficient method suitable for practical deployment.
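The shape statistics behind this classified-before-detect scheme can be illustrated with a minimal frame-level feature extractor. The frame length and the choice of standard deviation and excess kurtosis are illustrative assumptions, not the authors' exact feature set.

```python
# Sketch: two shape statistics (standard deviation and excess kurtosis)
# computed over short non-overlapping audio frames. In the paper such features
# feed an XGBoost classifier; the configuration here is illustrative only.
import math

def frame_features(samples, frame_len=256):
    """Return a list of (std, excess_kurtosis) per frame of the signal."""
    feats = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        n = len(frame)
        mean = sum(frame) / n
        var = sum((x - mean) ** 2 for x in frame) / n
        std = math.sqrt(var)
        # Excess kurtosis: ~0 for a Gaussian frame, strongly positive for
        # impulsive transients such as clicks and calls, negative for tones.
        kurt = sum((x - mean) ** 4 for x in frame) / (n * var ** 2) - 3.0 if var > 0 else 0.0
        feats.append((std, kurt))
    return feats
```

Impulsive transients yield strongly positive excess kurtosis while stationary Gaussian noise stays near zero, which is what makes these low-cost features separable by a gradient-boosted classifier such as XGBoost without large labeled datasets.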
(This article belongs to the Section Ocean Engineering)
Figure 1. The workflow for whale sound detection and classification using XGBoost.
Figure 2. SNR comparison among different noise reduction methods.
Figure 3. Detection results on dataset 1.
Figure 4. The XGBoost optimization process.
Figure 5. Spectrogram images: (a) dataset 1; (b) dataset 2 (deployment NRS_01_2020-2022); (c) dataset 2 (deployment NRS_01_2014-2015); (d) dataset 2 (deployment NRS_08_2016-2018); (e) dataset 2 (deployment NRS_04_2015-2016); (f) dataset 2 (deployment NRS_03_2017-2019).
Figure 6. Precision–recall curves: (a) the four features and the XGBoost model on dataset 1; (b) the XGBoost model on dataset 2.
Figure 7. Precision–recall curves with 45 min of training data.
Figure 8. Precision–recall curves: (a) using Std; (b) using kurtosis.
Figure 9. SHAP summary diagram.
34 pages, 3684 KiB  
Review
Artificial Intelligence-Based Underwater Acoustic Target Recognition: A Survey
by Sheng Feng, Shuqing Ma, Xiaoqian Zhu and Ming Yan
Remote Sens. 2024, 16(17), 3333; https://doi.org/10.3390/rs16173333 - 8 Sep 2024
Viewed by 1332
Abstract
Underwater acoustic target recognition has always played a pivotal role in ocean remote sensing. By analyzing and processing ship-radiated signals, it is possible to determine the type and nature of a target. Historically, traditional signal processing techniques have been employed for target recognition in underwater environments, but they often exhibit limitations in accuracy and efficiency. In response to these limitations, the integration of artificial intelligence (AI) methods, particularly those leveraging machine learning and deep learning, has attracted increasing attention in recent years. Compared to traditional methods, these intelligent recognition techniques can autonomously, efficiently, and accurately identify underwater targets. This paper comprehensively reviews the contributions of intelligent techniques in underwater acoustic target recognition and outlines potential future directions, offering a forward-looking perspective on how ongoing advancements in AI can further revolutionize underwater acoustic target recognition in ocean remote sensing.
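Several of the input features surveyed here (e.g., the Mel triangular filters of Figure 6) rest on the Mel frequency mapping. A minimal sketch follows, using the HTK-style formula; this is one common convention and an assumption on our part, since Librosa defaults to the Slaney variant.

```python
# Sketch: the Mel frequency mapping underlying triangular Mel filter banks.
# HTK-style formula (an assumption here; other variants exist).
import math

def hz_to_mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Center frequencies for a small bank of triangular filters: equally spaced
# on the Mel scale, then mapped back to Hz. The band edges (0-4000 Hz) and
# filter count are illustrative values.
lo, hi, n_filters = hz_to_mel(0.0), hz_to_mel(4000.0), 8
centers_hz = [mel_to_hz(lo + (hi - lo) * i / (n_filters + 1))
              for i in range(1, n_filters + 1)]
```

Because the mapping is logarithmic above a few hundred Hz, the filter centers crowd together at low frequencies, where ship-radiated line spectra carry most of their information, and spread out at high frequencies.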
(This article belongs to the Special Issue Ocean Remote Sensing Based on Radar, Sonar and Optical Techniques)
Figure 1. Schematic illustration of the separability between underwater signals from the perspective of pattern recognition.
Figure 2. Schematic diagram of a typical underwater signal propagation channel.
Figure 3. The workflow of a typical intelligent UATR system.
Figure 4. Physical-significance features of an underwater signal sample, including its LOFAR and DEMON [26] spectrograms.
Figure 5. An example of a fused LOFAR and DEMON spectrogram with comb filtering [31]. A1–A4 and B1–B3 represent the primary spectral lines.
Figure 6. The Mel triangular filters implemented by the Librosa package.
Figure 7. Two aspects of the multidimensional features.
Figure 8. Accuracy comparison using various feature extraction methods with ResNet18 on the Shipsear dataset, adapted from Wu et al. [64].
Figure 9. The basic architecture of the AE.
Figure 10. A standard prediction framework of DL-based UATR methods.
Figure 11. Representative DL neural networks in the field of intelligent UATR. The advantages and disadvantages of each method are marked in red and green, respectively.
Figure 12. Accuracy comparison with different DNNs on the Deepship dataset: (a) random partition, adapted from Zhou et al. [119]; (b) causal partition, adapted from Irfan et al. [2] and Xu et al. [117].
Figure 13. The computational units in LSTM.
Figure 14. The UATR framework based on ResNet18, which commonly accepts acoustic spectrograms as model input.
Figure 15. The MHSA mechanism in the Transformer. The left side shows the self-attention mechanism, and the right side shows the MHSA. W_q, W_k, W_v comprise the learnable projection matrices.
Figure 16. Contrastive and generative SSL methods for UATR.
Figure 17. Accuracy comparison on three few-shot tasks based on the Shipsear dataset, adapted from Cui et al. [179].
Figure 18. Interpretable methods in intelligent UATR.
Figure 19. Misclassification caused by adversarial attacks on intelligent UATR systems.
18 pages, 4081 KiB  
Article
A Dual-Stream Deep Learning-Based Acoustic Denoising Model to Enhance Underwater Information Perception
by Wei Gao, Yining Liu and Desheng Chen
Remote Sens. 2024, 16(17), 3325; https://doi.org/10.3390/rs16173325 - 8 Sep 2024
Viewed by 1280
Abstract
Estimating the line spectra of ship-radiated noise is a crucial remote sensing technique for detecting and recognizing underwater acoustic targets. Improving the signal-to-noise ratio (SNR) makes the low-frequency components of the target signal more prominent, which aids the detection of underwater acoustic signals using sonar. Based on the characteristics of the low-frequency narrow-band line spectra in underwater target radiated noise, we propose a dual-stream deep learning network with frequency characteristics transformation (DS_FCTNet) for line spectra estimation. The dual streams predict amplitude and phase masks separately and use an information exchange module to exchange learned features between the amplitude and phase spectra, aiding better phase reconstruction and signal denoising. Additionally, a frequency characteristics transformation module is employed to extract convolutional features between channels, obtaining global correlations of the amplitude spectrum and enhancing the ability to learn target signal features. Through experimental analysis on ShipsEar, a dataset of underwater acoustic signals recorded by hydrophones deployed in shallow water, the effectiveness and rationality of the different modules within DS_FCTNet are verified. Under low-SNR conditions and with unknown ship types, the proposed DS_FCTNet model exhibits the best line spectrum enhancement compared to methods such as SEGAN and DPT_FSNet. Specifically, the SDR and SSNR are improved by 14.77 dB and 13.58 dB, respectively, enabling the detection of weaker target signals and laying the foundation for target localization and recognition applications.
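The SDR figure quoted above can be computed in its simplest energy-ratio form. The toy signals below are illustrative assumptions; this is a generic metric sketch, not the paper's evaluation code.

```python
# Sketch: signal-to-distortion ratio in its basic energy-ratio form,
#   SDR = 10 * log10(||s||^2 / ||s - s_hat||^2),
# used to score how closely a denoised estimate s_hat matches the clean s.
import math

def sdr_db(clean, estimate):
    num = sum(s * s for s in clean)
    den = sum((s - e) ** 2 for s, e in zip(clean, estimate))
    return float("inf") if den == 0 else 10.0 * math.log10(num / den)

clean = [math.sin(0.1 * k) for k in range(1000)]
noisy = [s + 0.1 for s in clean]        # constant offset as a toy "noise"
denoised = [s + 0.01 for s in clean]    # residual error 10x smaller

# Shrinking the residual amplitude by a factor of 10 raises the SDR by 20 dB,
# the same kind of gain (in dB) that the paper reports for DS_FCTNet.
improvement = sdr_db(clean, denoised) - sdr_db(clean, noisy)
```

A higher SDR means the enhanced line spectra sit closer to the clean ship signature, which is what ultimately enables weaker targets to be detected.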
Figure 1. The T-F domain of the clean signal and its mixed signal after adding noise.
Figure 2. The PSD of the clean signal and its mixed signal after adding noise.
Figure 3. The architecture of the proposed DS_FCTNet model.
Figure 4. Diagram of the encoder. Above is the amplitude encoding layer, and below is the phase encoding layer.
Figure 5. Diagram of the DSB, including the amplitude-stream block, the phase-stream block, and the communication.
Figure 6. Diagram of the FCT.
Figure 7. Diagram of the decoder layer. Above is the amplitude layer, and below is the phase layer.
Figure 8. The arrangement of the three DSB modules.
Figure 9. Comparison of the clean signal with the denoised signals from the ablation experiments in the T-F domain.
Figure 10. Comparison of the clean signal with the denoised signals from the ablation experiments in PSD.
Figure 11. Comparison of the clean signal with the denoised signals of the different methods in the T-F domain on Dataset-I.
Figure 12. Comparison of the clean signal with the denoised signals of the different methods in PSD on Dataset-I.
Figure 13. Comparison of the clean signal with the denoised signals from the ablation experiments in the time domain.
Figure 14. Comparison of the clean signal with the denoised signals of the different methods in the T-F domain on Dataset-II.
Figure 15. Comparison of the clean signal with the denoised signals of the different methods in PSD on Dataset-II.
Figure 16. Comparison of the clean signal with the denoised signals of the different methods for unknown ship types in PSD on Dataset-III.
Figure 17. Comparison of the clean signal with the denoised signals of the different methods for unknown ship types in the T-F domain on Dataset-III.
23 pages, 5232 KiB  
Article
Continual Monitoring of Respiratory Disorders to Enhance Therapy via Real-Time Lung Sound Imaging in Telemedicine
by Murdifi Muhammad, Minghui Li, Yaolong Lou and Chang-Sheng Lee
Electronics 2024, 13(9), 1669; https://doi.org/10.3390/electronics13091669 - 26 Apr 2024
Viewed by 2675
Abstract
This work presents a configurable Internet of Things architecture for acoustical sensing and analysis for frequent remote respiratory assessments. The proposed system creates a foundation for enabling real-time therapy and patient feedback adjustment in a telemedicine setting. By allowing continuous remote respiratory monitoring, the system has the potential to give clinicians access to assessments from which they could make decisions about modifying therapy in real time and communicate changes directly to patients. The system comprises a wearable wireless microphone array interfaced with a programmable microcontroller with embedded signal conditioning. Experiments on a phantom model demonstrated the feasibility of reconstructing acoustic lung images for detecting obstructions in the airway and provided controlled validation of noise resilience and imaging capabilities. An optimized denoising technique and design innovations provided 7 dB more SNR and 7% more imaging accuracy for the proposed system, benchmarked against digital stethoscopes. While further clinical studies are warranted, initial results suggest potential benefits over single-point digital stethoscopes for internet-enabled remote lung monitoring requiring noise immunity and regional specificity. The flexible architecture aims to bridge critical technical gaps in frequent, connected respiratory function monitoring at home or in busy clinical settings challenged by ambient noise interference.
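The SNR advantage of a microphone array over a single-point stethoscope can be illustrated with the idealized array-gain model: coherently averaging N channels with independent noise improves the SNR by up to 10·log10(N) dB. The synthetic signal, noise level, and channel count below are assumptions for illustration, and real wearable arrays fall short of this bound.

```python
# Sketch: why an array of microphones helps. Averaging N channels whose noise
# is independent leaves the common lung signal intact but cuts the noise power
# by a factor of N, an SNR gain of 10*log10(N) dB (about 9 dB for N = 8).
import math
import random

random.seed(0)
N_MICS, N_SAMPLES = 8, 20000
signal = [math.sin(0.05 * k) for k in range(N_SAMPLES)]

# Each microphone sees the same signal plus its own independent Gaussian noise.
channels = [[s + random.gauss(0.0, 1.0) for s in signal] for _ in range(N_MICS)]
averaged = [sum(ch[k] for ch in channels) / N_MICS for k in range(N_SAMPLES)]

def noise_power(observed):
    """Mean squared deviation of the observation from the known clean signal."""
    return sum((o - s) ** 2 for o, s in zip(observed, signal)) / N_SAMPLES

# SNR gain of the averaged array output over a single microphone, in dB.
gain_db = 10.0 * math.log10(noise_power(channels[0]) / noise_power(averaged))
```

The measured gain lands near the theoretical 10·log10(8) ≈ 9 dB; delays, sensor mismatch, and correlated ambient noise reduce this in practice, which is why the paper's 7 dB benchmarked improvement is plausible for a real array.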
(This article belongs to the Special Issue Smart Communication and Networking in the 6G Era)
Figure 1. Proposed IoT system architecture: acoustic lung signals are transmitted from a wearable microphone array via Bluetooth to the patient’s computer for processing, then over a 5G/6G network to the doctor’s computer for analysis and real-time therapy adjustment through cloud connectivity.
Figure 2. The process flow of the proposed system for remote and continual lung function monitoring.
Figure 3. Overview of the proposed system hardware components. (a) Digital pin connections between the nRF52832, the MEMS microphone, and the Teensy 3.6 microcontroller for capturing lung sound signals; (b) the ICS-52000 MEMS microphone digital pins and modules; (c) the proposed system overview; (d) the interconnection between the array of MEMS microphones and the flexible printed cable.
Figure 4. Overview of the connections between the daisy-chained MEMS microphones and the microcontroller. (a) System block diagram of the digital pin connections for an array of MEMS microphones; (b) Teensy 3.6 board connections for multiple arrays of at most 16 MEMS microphones each, with the first Teensy 3.6 board as the control (master) with an activation switch and the subsequent Teensy 3.6 boards as slaves. Grey represents the ground connection, and blue represents the interconnection of digital pin 4 (SDA2).
Figure 5. The proposed system’s input and output flow of the acoustic signal. (a) The software process flow chart; (b) the acoustic signal processing flow chart.
Figure 6. The setup for acquiring lung sound signals. (a) The lung sound simulator and its block diagram; (b) the flow chart for capturing resampled lung sound signals.
Figure 7. The experimental setup for the acquisition of lung signals and imaging. (a) Schematic diagram of the experimental setup for capturing lung sound signals and nidus detection in the airways with waterbags; x denotes the positions of the acoustic sensors, such as MEMS microphones and digital stethoscopes, and the blue circular block represents an obstruction in the airways. (b) Binarized acoustic imaging used to analyze the experimental results, with the control (healthy) shown for comparison.
Figure 8. Synchronization of an array of lung signals captured at different times via the breathing phase. Blue denotes the asynchronous lung signals captured as single-point data; red represents the lung signals synchronized via the breathing phase.
Figure 9. The captured signal quality with commercial digital stethoscopes as a benchmark. (a) The mean RMSE between the three sensors capturing lung sound signals in a noisy environment; RMSE is unitless, as all three sensors output normalized digital amplitudes. (b) The mean SNR performance of the various sensors capturing lung sound signals in a noisy environment.
Figure 10. Recorded digital amplitude in relation to the respiratory sound signals and the frequency spectrum of the recorded lung signals. (a) Thinklabs One time-domain respiratory signal output; (b) Littmann 3200 time-domain respiratory signal output; (c) the proposed system’s time-domain respiratory signal output; (d) the frequency range of interest for the three devices.
Figure 11. Acoustic imaging of an obstructed airway translated from the acquired lung signals with a 50 mm nidus length via the waterbag simulation, where the encircled dotted line indicates the actual waterbag size. (a) Thinklabs One; (b) Littmann 3200; (c) the proposed system.
Figure 12. Comparison between the proposed system and the digital stethoscopes in detecting a nidus through acoustic imaging with (a) 18 and (b) 30 sensors.
16 pages, 8323 KiB  
Technical Note
Prediction of Water Temperature Based on Graph Neural Network in a Small-Scale Observation via Coastal Acoustic Tomography
by Pan Xu, Shijie Xu, Kequan Shi, Mingyu Ou, Hongna Zhu, Guojun Xu, Dongbao Gao, Guangming Li and Yun Zhao
Remote Sens. 2024, 16(4), 646; https://doi.org/10.3390/rs16040646 - 9 Feb 2024
Cited by 1 | Viewed by 918
Abstract
Coastal acoustic tomography (CAT) is a remote sensing technique that utilizes acoustic methodologies to measure the dynamic characteristics of the ocean over expansive marine domains. This approach leverages the speed of sound propagation to derive vital ocean parameters such as temperature and salinity by inversely estimating the sound speed along acoustic rays traversing the aquatic medium. Concurrently, analyzing the speeds of acoustic waves in their round-trip propagation enables the inverse estimation of dynamic hydrographic features, including flow velocity and direction. Accurate forecasting of inversion results in CAT contributes to a rapid, comprehensive analysis of the evolving ocean environment and its inherent characteristics. The graph neural network (GNN) is a network architecture with strong spatial modeling capability and generalization. We propose a novel method that employs GraphSAGE to predict CAT inversion results, using experimental datasets collected at the Huangcai Reservoir. The results show an average error of 0.01% for sound speed predictions and 0.29% for temperature predictions for each station pair. This adequately fulfills the real-time and stringent requirements for practical deployment. Full article
(This article belongs to the Special Issue Recent Advances in Underwater and Terrestrial Remote Sensing)
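The paper does not spell out its network configuration here, but the core GraphSAGE operation (sample a neighbourhood, mean-aggregate its features, update the node) can be sketched in plain NumPy. The three-node station graph, feature dimensions and random weights below are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

def sage_layer(features, adjacency, weight):
    """One GraphSAGE layer with mean aggregation.

    features:  (n, d_in) node feature matrix
    adjacency: (n, n) 0/1 matrix, 1 = neighbour
    weight:    (2 * d_in, d_out) learned projection
    """
    # Mean of each node's neighbour features (guard against isolated nodes).
    deg = adjacency.sum(axis=1, keepdims=True)
    neigh_mean = adjacency @ features / np.maximum(deg, 1)
    # Concatenate self and neighbourhood representations, project, ReLU.
    h = np.concatenate([features, neigh_mean], axis=1) @ weight
    return np.maximum(h, 0.0)

# Three fully connected nodes, standing in for station pairs S1-S3.
feats = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
adj = np.ones((3, 3)) - np.eye(3)
rng = np.random.default_rng(0)
out = sage_layer(feats, adj, rng.normal(size=(4, 2)))
print(out.shape)  # (3, 2)
```

Stacking such layers and regressing the final node embeddings onto the inverted temperature and sound speed would give a predictor of the kind the abstract describes.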
Figure 1
<p>Geographical layout of the Huangcai Reservoir and its surrounding areas. The left panel provides an overview, while the right panel zooms in on the southeast portion of the reservoir. This detailed view highlights the locations of CAT stations (S1, S2, and S3), the TD point, and the trajectory of ADCP sailings. The green solid lines represent the projection of acoustic ray paths in the horizontal slice. Additionally, the upper right corner of the magnified figure features a photograph of the CAT system utilized at station S1.</p>
Full article ">Figure 2
<p>Layer-averaged water temperature between the three sound stations. (<b>a</b>–<b>c</b>) Temperature inversion results for vertical slices between stations (blue lines indicate the temperature of each layer (7.5 m, 20 m, 29 m); red lines indicate the results after a 1 h weighted moving average).</p>
Full article ">Figure 3
<p>Sound propagation structure and layer division. Two sound transceivers are deployed in the water and transmit sound reciprocally. Sound waves are reflected by the interface, where multi-path sound propagation is achieved. (<b>a</b>) Sound speed profile. (<b>b</b>) Corresponding acoustic rays.</p>
Full article ">Figure 4
<p>The vertical grid division. Vertical grid division aims to intricately partition the vertical slice into grids, facilitating acoustic ray propagation and conveying water parameter information. This introduces added complexity with more unknown parameters.</p>
Full article ">Figure 5
<p>Visualization of GraphSAGE. (<b>a</b>) Sample neighborhood. Pink represents all features associated with red dots. (<b>b</b>) Aggregate feature information from neighbors. Colored points are associated with their respective neighboring points. (<b>c</b>) Predict graph feature with aggregated information. Save the relationships as labels and attempt to predict the relationships in the vicinity of the red point.</p>
Full article ">Figure 6
<p>Prediction flow chart of CAT data inversion results.</p>
Full article ">Figure 7
<p>Temperature and sound speed prediction results of S1→S2. The blue line represents the predicted value and the red line represents the true value. (<b>a</b>) Results of the initial temperature prediction. (<b>b</b>) Results of the initial sound speed prediction. (<b>c</b>) Results of the second temperature prediction. (<b>d</b>) Results of the second sound speed prediction. (<b>e</b>) Results of the third temperature prediction. (<b>f</b>) Results of the third sound speed prediction.</p>
Full article ">Figure 8
<p>Temperature and sound speed prediction results of S1→S3. The blue line represents the predicted value and the red line represents the true value. (<b>a</b>) Results of the initial temperature prediction. (<b>b</b>) Results of the initial sound speed prediction. (<b>c</b>) Results of the second temperature prediction. (<b>d</b>) Results of the second sound speed prediction. (<b>e</b>) Results of the third temperature prediction. (<b>f</b>) Results of the third sound speed prediction.</p>
Full article ">Figure 9
<p>Temperature and sound speed prediction results of S2→S3. The blue line represents the predicted value and the red line represents the true value. (<b>a</b>) Results of the initial temperature prediction. (<b>b</b>) Results of the initial sound speed prediction. (<b>c</b>) Results of the second temperature prediction. (<b>d</b>) Results of the second sound speed prediction. (<b>e</b>) Results of the third temperature prediction. (<b>f</b>) Results of the third sound speed prediction.</p>
Full article ">Figure 10
<p>Temperature and sound speed prediction results of S1→S2. (<b>a</b>) True data of the initial temperature. (<b>b</b>) Results of the initial temperature prediction. (<b>c</b>) True data of the initial sound speed. (<b>d</b>) Results of the initial sound speed prediction. (<b>e</b>) True data of the second temperature. (<b>f</b>) Results of the second temperature prediction. (<b>g</b>) True data of the second sound speed. (<b>h</b>) Results of the second sound speed prediction. (<b>i</b>) True data of the third temperature. (<b>j</b>) Results of the third temperature prediction. (<b>k</b>) True data of the third sound speed. (<b>l</b>) Results of the third sound speed prediction.</p>
Full article ">Figure 11
<p>Temperature and sound speed prediction results of S1→S3. (<b>a</b>) True data of the initial temperature. (<b>b</b>) Results of the initial temperature prediction. (<b>c</b>) True data of the initial sound speed. (<b>d</b>) Results of the initial sound speed prediction. (<b>e</b>) True data of the second temperature. (<b>f</b>) Results of the second temperature prediction. (<b>g</b>) True data of the second sound speed. (<b>h</b>) Results of the second sound speed prediction. (<b>i</b>) True data of the third temperature. (<b>j</b>) Results of the third temperature prediction. (<b>k</b>) True data of the third sound speed. (<b>l</b>) Results of the third sound speed prediction.</p>
Full article ">Figure 12
<p>Temperature and sound speed prediction results of S2→S3. (<b>a</b>) True data of the initial temperature. (<b>b</b>) Results of the initial temperature prediction. (<b>c</b>) True data of the initial sound speed. (<b>d</b>) Results of the initial sound speed prediction. (<b>e</b>) True data of the second temperature. (<b>f</b>) Results of the second temperature prediction. (<b>g</b>) True data of the second sound speed. (<b>h</b>) Results of the second sound speed prediction. (<b>i</b>) True data of the third temperature. (<b>j</b>) Results of the third temperature prediction. (<b>k</b>) True data of the third sound speed. (<b>l</b>) Results of the third sound speed prediction.</p>
Full article ">
19 pages, 2802 KiB  
Article
Remote Multi-Person Heart Rate Monitoring with Smart Speakers: Overcoming Separation Constraint
by Thu Tran, Dong Ma and Rajesh Balan
Sensors 2024, 24(2), 382; https://doi.org/10.3390/s24020382 - 8 Jan 2024
Cited by 1 | Viewed by 2212
Abstract
Heart rate is a key vital sign that can be used to understand an individual’s health condition. Recently, remote sensing techniques, especially acoustic-based sensing, have received increasing attention for their ability to non-invasively detect heart rate via commercial mobile devices such as smartphones and smart speakers. However, due to signal interference, existing methods have primarily focused on monitoring a single user and required a large separation between them when monitoring multiple people. These limitations hinder many common use cases such as couples sharing the same bed or two or more people located in close proximity. In this paper, we present an approach that can minimize interference and thereby enable simultaneous heart rate monitoring of multiple individuals in close proximity using a commonly available smart speaker prototype. Our user study, conducted under various real-life scenarios, demonstrates the system’s accuracy in sensing two users’ heart rates when they are seated next to each other with a median error of 0.66 beats per minute (bpm). Moreover, the system can successfully monitor up to four people in close proximity. Full article
(This article belongs to the Special Issue Smart Mobile and Sensing Applications)
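As a concrete illustration of the final extraction step described above (reading a heart rate off the frequency spectrum of the motion signal at a range bin), here is a minimal sketch with a synthetic heartbeat. The 50 Hz frame rate, 30 s window and 0.8–3 Hz search band are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

fs = 50.0                              # frames per second of the range profile (assumed)
t = np.arange(0, 30, 1 / fs)           # 30 s observation window
signal = np.sin(2 * np.pi * 1.2 * t)   # synthetic heartbeat: 1.2 Hz = 72 bpm

# Magnitude spectrum of the extracted signal.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Search only the plausible heart-rate band (0.8-3 Hz, i.e. 48-180 bpm).
band = (freqs >= 0.8) & (freqs <= 3.0)
peak_hz = freqs[band][np.argmax(spectrum[band])]
print(round(peak_hz * 60))             # 72 bpm
```

With real reflections, the interference-removal steps in Figure 6 would precede this spectral peak-picking.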
Figure 1
<p>Practical scenarios of multi-person heart rate monitoring: (<b>a</b>) two people sitting in a line and (<b>b</b>) two people sharing a bed.</p>
Full article ">Figure 2
<p>Transmitted and reflected FMCW signals.</p>
Full article ">Figure 3
<p>Heart rate–distance heatmap showing heart rates and interference from the reflected signals. The <span class="html-italic">x</span>-axis represents the distance <span class="html-italic">D</span> from the users to the device. In the figure, the two people with heart rates of 72 and 67 bpm located in front of the device cannot be distinguished visually from the heatmap.</p>
Full article ">Figure 4
<p>Overview of system for detecting the heart rates of <span class="html-italic">k</span> users.</p>
Full article ">Figure 5
<p>Amplitude changes and frequency domain of distances: (<b>a</b>) FFT amplitude changes by distance and (<b>b</b>) breathing and heart rate obtained by applying FFT at 1.08 m.</p>
Full article ">Figure 6
<p>Interference removal: (<b>a</b>) original heatmap S; (<b>b</b>) heatmap S after step 1; (<b>c</b>) heatmap S after step 2; (<b>d</b>) smoothed heatmap S.</p>
Full article ">Figure 7
<p>Three heart rates of 69 bpm, 83 bpm, and 98 bpm: (<b>a</b>) heatmap with three people and (<b>b</b>) the three brightest blobs.</p>
Full article ">Figure 8
<p>Source and circular microphone array with L = 6.</p>
Full article ">Figure 9
<p>Device and example showing the experimental setup: (<b>a</b>) the device and (<b>b</b>) one of the experimental setups.</p>
Full article ">Figure 10
<p>Extracted heartbeats of an individual located at 1 m and ground truth from ECG. The signal extracted from our system then undergoes FFT to obtain the heart rate in bpm.</p>
Full article ">Figure 11
<p>Overall evaluation of the system: (<b>a</b>) detected and ground truth heart rates in bpm and (<b>b</b>) cumulative distribution function of the error.</p>
Full article ">Figure 12
<p>Impact of distance.</p>
Full article ">Figure 13
<p>Results for users sitting at different angles; P1 and P2 refer to the two participants.</p>
Full article ">Figure 14
<p>Impact of angle.</p>
Full article ">Figure 15
<p>Impact of noise.</p>
Full article ">Figure 16
<p>Impact of posture.</p>
Full article ">Figure 17
<p>Impact of blanket.</p>
Full article ">Figure 18
<p>Impact of movement.</p>
Full article ">Figure 19
<p>Impact of number of targets.</p>
Full article ">Figure 20
<p>Impact of number of microphones.</p>
Full article ">Figure 21
<p>Heart rate detection by smartphone.</p>
Full article ">
22 pages, 9907 KiB  
Article
An Automatic Deep Learning Bowhead Whale Whistle Recognizing Method Based on Adaptive SWT: Applying to the Beaufort Sea
by Rui Feng, Jian Xu, Kangkang Jin, Luochuan Xu, Yi Liu, Dan Chen and Linglong Chen
Remote Sens. 2023, 15(22), 5346; https://doi.org/10.3390/rs15225346 - 13 Nov 2023
Viewed by 1216
Abstract
The bowhead whale is a vital component of the marine environment. Using deep learning techniques to recognize bowhead whales accurately and efficiently is crucial for their protection, and marine acoustic remote sensing is currently an important method for doing so. In this work, adaptive SWT is used to extract the acoustic features of bowhead whales, and a CNN-LSTM deep learning model is constructed to recognize their calls. Compared to the STFT, the adaptive SWT used in this study raises the SCR of stationary and nonstationary bowhead whale whistles by 88.20% and 92.05%, respectively. Ten-fold cross-validation yields an average recognition accuracy of 92.85%. The efficiency of the method is further confirmed by the consistency between the Beaufort Sea recognition results and fisheries ecological studies. These results help promote the application of marine acoustic remote sensing technology and the conservation of bowhead whales. Full article
(This article belongs to the Special Issue Advanced Techniques for Water-Related Remote Sensing)
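Robustness tests of this kind evaluate the recognizer on whistles mixed with noise at prescribed signal-to-noise ratios; a minimal sketch of that noise-injection step follows. The toy frequency upsweep stands in for a recorded whistle, and all parameters are illustrative assumptions.

```python
import numpy as np

def add_noise(signal, snr_db, rng):
    """Scale white noise so that 10*log10(P_signal/P_noise) equals snr_db."""
    p_signal = np.mean(signal ** 2)
    noise = rng.normal(size=signal.shape)
    scale = np.sqrt(p_signal / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
    return signal + scale * noise

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 8000)
whistle = np.sin(2 * np.pi * (200 + 300 * t) * t)  # toy upsweep "whistle"
noisy = add_noise(whistle, snr_db=0.0, rng=rng)

# Verify the achieved SNR from the injected noise component.
measured = 10 * np.log10(np.mean(whistle ** 2) / np.mean((noisy - whistle) ** 2))
print(f"{measured:.2f} dB")
```

Sweeping `snr_db` over a range of values and re-running recognition produces curves like those in the figures below zero dB and above.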
Figure 1
<p>Distribution of PAM sites in the Beaufort Sea. Sites selected for this paper include Nrs_01_2014-2015 and Nrs_01_2015-2017.</p>
Full article ">Figure 2
<p>Diagrams of whale whistles from several species.</p>
Full article ">Figure 3
<p>The structure of the data preprocessing.</p>
Full article ">Figure 4
<p>Schematic diagram of the three steps.</p>
Full article ">Figure 5
<p>The structure of the CNN-LSTM neural network.</p>
Full article ">Figure 6
<p>Comparison of time-frequency diagrams of the whistle based on STFT, fixed-parameter SWT and adaptive SWT. Yellow indicates bowhead whale whistles and blue indicates background sounds. The red box shows an enlargement of one of the signals. (<b>a1</b>) Time-frequency diagram of a bowhead whale’s stationary whistle based on STFT; (<b>a2</b>) time-frequency diagram of a bowhead whale’s stationary whistle based on fixed-parameter SWT; (<b>a3</b>) time-frequency diagram of a bowhead whale’s stationary whistle based on adaptive SWT; (<b>b1</b>) time-frequency diagram of a bowhead whale’s nonstationary whistle based on STFT; (<b>b2</b>) time-frequency diagram of a bowhead whale’s nonstationary whistle based on fixed-parameter SWT; (<b>b3</b>) time-frequency diagram of a bowhead whale’s nonstationary whistle based on adaptive SWT.</p>
Full article ">Figure 7
<p>Boxplot distribution of the train and test set’s recognition accuracy and loss value. (<b>a</b>) Boxplot distribution of the train and test set’s recognition accuracy; (<b>b</b>) boxplot distribution of the train and test set’s loss.</p>
Full article ">Figure 8
<p>Comparison of bowhead whale stationary whistle signals added to various signal-to-noise ratios.</p>
Full article ">Figure 9
<p>Comparison of bowhead whale nonstationary whistle signals added to various signal-to-noise ratios.</p>
Full article ">Figure 10
<p>Ten-fold cross-validation diagram.</p>
Full article ">Figure 11
<p>Recognition results of measured recordings. (<b>a</b>) Statistical chart of the average accuracy of the measured recording recognition; (<b>b</b>) statistical chart of the average loss value of the measured recording recognition; (<b>c</b>) statistical chart of the average precision of the measured recording recognition; (<b>d</b>) statistical chart of the average recall of the measured recording recognition.</p>
Full article ">Figure 12
<p>Recognition accuracy of LDA, KNN and CNN-LSTM.</p>
Full article ">Figure 13
<p>Percentage of hours containing bowhead whale whistles each season.</p>
Full article ">
30 pages, 11936 KiB  
Article
The Potential of Multibeam Sonars as 3D Turbidity and SPM Monitoring Tool in the North Sea
by Nore Praet, Tim Collart, Anouk Ollevier, Marc Roche, Koen Degrendele, Maarten De Rijcke, Peter Urban and Thomas Vandorpe
Remote Sens. 2023, 15(20), 4918; https://doi.org/10.3390/rs15204918 - 11 Oct 2023
Viewed by 1825
Abstract
Monitoring turbidity is essential for sustainable coastal management because an increase in turbidity leading to diminishing water clarity has a detrimental ecological impact. Turbidity in coastal waters is strongly dependent on the concentration and physical properties of particles in the water column. In the Belgian part of the North Sea, turbidity and suspended particulate matter (SPM) concentrations have been monitored for decades by satellite remote sensing, but this technique only focuses on the surface layer of the water column. Within the water column, turbidity and SPM concentrations are measured in stations or transects with a suite of optical and acoustic sensors. However, the dynamic nature of SPM variability in coastal areas and the recent construction of offshore windmill parks and dredging and dumping activities justifies the need to monitor natural and human-induced SPM variability in 3D instead. A possible solution lies in modern multibeam echosounders (MBES), which, in addition to seafloor bathymetry data, are also able to deliver acoustic backscatter data from the water column. This study investigates the potential of MBES as a 3D turbidity and SPM monitoring tool. For this purpose, a novel empirical approach is developed, in which 3D MBES water column and in-situ optical sensor datasets were collected during ship transects to yield an empirical relation using linear regression modeling. This relationship was then used to predict SPM volume concentrations from the 3D acoustic measurements, which were further converted to SPM mass concentrations using calculated densities. Our results show that these converted mean mass concentrations at the Kwinte and Westdiep swale areas are within the limits of the reported yearly averages. Moreover, they are in the same order of magnitude as the measured mass concentrations from Niskin water samples during each campaign. 
While there is still a need for further improvement of the acquisition and processing workflows, this study presents a promising approach for converting MBES water column data into turbidity and SPM measurements. This opens up possibilities for improving future monitoring tools in both the scientific and industrial sectors. Full article
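The empirical core of the abstract, a linear model between volume backscatter Sv (in dB) and a log-transformed optical quantity (turbidity or TVC), inverted to predict concentrations from new acoustic samples, can be sketched as follows. The synthetic slope, intercept and value ranges are illustrative assumptions, not the study's fitted relationship.

```python
import numpy as np

rng = np.random.default_rng(42)
sv = rng.uniform(-75, -55, size=200)                  # Sv in dB (assumed range)
log_tvc = 0.05 * sv + 4.0 + rng.normal(0, 0.05, 200)  # assumed linear relation

# Fit log10(TVC) = slope * Sv + intercept by least squares.
slope, intercept = np.polyfit(sv, log_tvc, 1)

# Predict SPM volume concentration (µL/L) for a new backscatter sample,
# undoing the log transform.
sv_new = -60.0
tvc_pred = 10 ** (slope * sv_new + intercept)
print(round(slope, 3), round(tvc_pred, 1))
```

Applied voxel-wise to the gridded water column data, this prediction step yields the 3D concentration volumes shown below; the further conversion to mass concentration uses calculated particle densities.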
Figure 1
<p>(<b>A</b>) The Belgian part of the North Sea: 20 m bathymetric grid from Maritime and coastal services-Flemish hydrography [<a href="#B64-remotesensing-15-04918" class="html-bibr">64</a>], with indication of stations W05 (51°25.000′N, 2°48.500′E) and W08 (51°27.610′N, 2°20.910′E) and the 20 g/m<sup>3</sup> SPM concentration contours in winter (black) and summer (grey) [<a href="#B10-remotesensing-15-04918" class="html-bibr">10</a>]. (<b>B</b>) Detail of the Kwinte and Westdiep study areas with indications of the July 2021 survey lines. (<b>C</b>–<b>F</b>) Detail of the Kwinte study area with the survey lines for the October 2020 (<b>C</b>), February 2021 (<b>D</b>), March 2021 (<b>E</b>) and May 2021 (<b>F</b>) campaigns. The location of the Kwinte acoustic reference area (KARA) and water sampling stations LW215 and Timbers 15 are indicated.</p>
Full article ">Figure 2
<p>(<b>Top</b>)—the sensor toolbox used within this study comprises optical (LISST-200X, Eco FLNTU OBS) and acoustic (EM2040 MBES hull-mounted on RV Simon Stevin) sensors, as well as (Niskin) water samples. (<b>Bottom</b>)—overview of turbidity and SPM measurement techniques based on different principles: acoustic backscatter (MBES), optical backscatter (OBS), laser diffraction (LISST) and filtration of discrete water samples (Niskin).</p>
Full article ">Figure 3
<p>Overview of the sampling strategy: continuous recording of MBES water column data, yoyo-movement of the in-situ sensor frame (with attached turbidity sensors), and Niskin bottle deployments.</p>
Full article ">Figure 4
<p>Processing steps of water column MBES data in SonarScope. The polar echograms were modified after Urban et al. [<a href="#B58-remotesensing-15-04918" class="html-bibr">58</a>].</p>
Full article ">Figure 5
<p>Overview of the workflow of the empirical approach used in this study to acquire and process acoustic and optical data in order to model Turbidity and TVC.</p>
Full article ">Figure 6
<p>Overview of the main steps of the python code with specification of the extract and model, and grid and predict pipelines. CV = cross validation.</p>
Full article ">Figure 7
<p>(<b>left</b>): 3D water column MBES point cloud visualization in Potree Viewer. (<b>right</b>): Extraction MBES spheres with predefined radius (here 0.5 m) around each in-situ sensor measurement. Data from October 2020 campaign.</p>
Full article ">Figure 8
<p>Linear regression models between S<sub>v</sub> (in dB), (<b>A</b>) log optical turbidity (in log NTU) and (<b>B</b>–<b>F</b>) different size ranges of log TVC (in log µL/L) (TVC<sub>1–500 µm</sub>, TVC<sub>1–3 µm</sub>, TVC<sub>3–20 µm</sub>, TVC<sub>20–200 µm</sub>, TVC<sub>200–500 µm</sub>) using data from multiple campaigns combined.</p>
Full article ">Figure 9
<p>Horizontal slice at −14 m LAT depth (<b>top</b>) and a vertical cutaway (<b>bottom</b>) through a 3D volume of the converted mean mass concentration of total suspended particulate matter (SPMC<sub>1–500 µm</sub>) showing clear temporal variability of SPM within the water column. Data from the March 2021 campaign. Coordinates are given in UTM (zone 31 N). The red dotted line indicates the location of the vertical cutaway (top) and the depth of the horizontal slice (bottom).</p>
Full article ">Figure 10
<p>In situ sensor summary box plots of the ranges of turbidity (derived from OBS), total volume concentration and D50 (derived from LISST), averaged over all depths. OBS measurements were only available for the “autumn/winter” months (October 2020, February 2021, March 2021), while LISST measurements were only retained for the “summer/autumn” months (October 2020, May 2021, July 2021). Box plot parameters: outliers, Q1 − 1.5 × IQR, Q1 (25%), Q2 (50%), Q3 (75%), Q3 + 1.5 × IQR, outliers.</p>
Full article ">Figure 11
<p>PSD plots (derived from the LISST-200X), binned every 2 m, for the “summer/autumn” (October 2020, May 2021, July 2021) dataset. The volume concentration is normalized by dividing the values with the total volume concentration.</p>
Full article ">Figure 12
<p>Overview of the probabilistic densities of the converted mean SPMC<sub>1–500 µm</sub> for different depths (in m LAT), showing the SPMC ranges with depth for different seasons and areas. The measured SPMC from Niskin water samples (at a fixed location) was provided for comparison. The depth was cut off at 1 m LAT because no co-located MBES and in situ sensor data were collected above this depth.</p>
Full article ">Figure A1
<p>Optical misalignment of the LISST 200X laser beam in the February 2021 campaign (<b>bottom</b>) enhanced out-of-range effects in the particle size distribution data, which is clear when comparing to the previous campaign in October 2020 (<b>top</b>). Particle size distribution plots are shown for two size ranges (0–500 µm and 0–100 µm) and different binned depths (every 2 m; see color bar).</p>
Full article ">Figure A2
<p>Figure showing how the apparent density and error margins exponentially increase with decreasing particle size using the fractal floc model [<a href="#B96-remotesensing-15-04918" class="html-bibr">96</a>,<a href="#B97-remotesensing-15-04918" class="html-bibr">97</a>].</p>
Full article ">
30 pages, 33713 KiB  
Article
Geophysical and Geochemical Exploration of the Pockmark Field in the Gulf of Patras: New Insights on Formation, Growth and Activity
by Dimitris Christodoulou, George Papatheodorou, Maria Geraga, Giuseppe Etiope, Nikos Giannopoulos, Sotiris Kokkalas, Xenophon Dimas, Elias Fakiris, Spyros Sergiou, Nikos Georgiou, Efthimios Sokos and George Ferentinos
Appl. Sci. 2023, 13(18), 10449; https://doi.org/10.3390/app131810449 - 19 Sep 2023
Cited by 2 | Viewed by 1591
Abstract
The Patras Gulf Pockmark field is located in shallow waters offshore Patras City (Greece) and is considered one of the most spectacular and best-documented fluid seepage sites in the Ionian Sea. The field has been under investigation since 1996, though surveying has been sparse and fragmentary. This paper provides a complete mapping of the field and generates new knowledge regarding the fluid escape structures, the fluid pathways, their origin and the link with seismic activity. For this, data sets were acquired utilising high-resolution marine remote sensing techniques, including multibeam echosounders, side-scan sonars, sub-bottom profilers and remotely operated vehicles, together with laboratory techniques focusing on the chemical composition of the escaping fluids. The examined morphometric parameters and spatial distribution patterns of the pockmarks are directly linked to tectonic structures. Acoustic anomalies related to the presence of gas in the sediments and the water column document the activity of the field at present and in the past. Methane is the main component of the fluids and is of microbial origin. Regional and local tectonism, together with the Holocene sedimentary deposits, appear to be the main contributors to the growth of the field. The field preserves evidence that earthquake activity prompts its reactivation. Full article
Figure 1
<p>(<b>a</b>) Simplified geological map of Patras Gulf showing the major faults in the area and the location of the pockmark field; index map of Greece showing study area. (<b>b</b>) Tectonic map of Western Greece, (Geological data from [<a href="#B44-applsci-13-10449" class="html-bibr">44</a>]; Fault data from [<a href="#B45-applsci-13-10449" class="html-bibr">45</a>]) (AT.F: Agia Triada Fault; RPTF: Rio-Patras Transfer Fault; P.G.: Patras Gulf; A.G.: Amvrakikos Gulf; T.L.: Trichonida Lake; A.R.: Alfios River). (<b>c</b>) Seismicity map of Patras Gulf (Earthquake location data were retrieved from NOA-IG [<a href="#B46-applsci-13-10449" class="html-bibr">46</a>]. The color scale depicts depth distribution of the epicentre of the events).</p>
Full article ">Figure 2
<p>Map of the survey area showing the geophysical survey tracklines, ROV transects and the position of the well where gas samples were collected; yellow thick lines indicate the location of seismic profiles presented in the paper.</p>
Full article ">Figure 3
<p>(<b>a</b>) Bathymetric map of the PGPF. The map also shows the location of the Agia Triada Fault (AT.F) and the location of the largest pockmark ‘P4’. Dotted red lines separate the two sectors (North and South) of the pockmark field and delimit areas of high pockmark density (HD1 and HD2); red arrows indicate the location of the pockmarks north of AT.F discovered in the present study. (<b>b</b>) The bathymetry of the PGPF before the construction of the harbour, derived from single-beam data (cell size 2.5 m). The dashed polygons delineate the areas of the PGPF covered by the harbour installations. (<b>c</b>–<b>f</b>) Detailed bathymetry showing (<b>c</b>) unit pockmarks; (<b>d</b>) a cluster of normal and small composite pockmarks delimited by a black line; (<b>e</b>) the largest composite pockmark of the field; (<b>f</b>) a pockmark ‘chain’ of linearly arranged pockmarks delimited by a black line.</p>
Full article ">Figure 4
<p>Diagrams of (<b>a</b>) water depth vs. pockmark depth; (<b>b</b>) diameter vs. pockmark depth; (<b>c</b>) slope vs. pockmark depth; (<b>d</b>) sediment thickness vs. pockmark depth; (<b>e</b>) Rose diagram displaying the orientation of the pockmarks’ long axes ((<b>i</b>): all pockmarks; (<b>ii</b>): normal pockmarks and (<b>iii</b>): composite pockmarks). The orientation of the axes has been grouped by 30 degrees.</p>
Full article ">Figure 5
<p>(<b>a</b>) The 100 kHz side-scan sonar mosaic of the PGPF area. Sonographs showing (<b>b</b>) pockmarks near Agia Triada Fault (AT.F); (<b>c</b>) a string of circular and composite pockmarks; (<b>d</b>) the largest composite pockmark of the field.</p>
Full article ">Figure 6
<p>R.O.V. photos recovered inside selected pockmarks, showing the presence of small holes (diameter about 10–20 cm) in the muddy cover of the pockmarked floor.</p>
Full article ">Figure 7
<p>High-resolution seismic profile showing a typical stratigraphic pattern of the area consisting of two seismic sequences (upper SS I and lower SS II). The sedimentological interpretation of the well drilling [<a href="#B58-applsci-13-10449" class="html-bibr">58</a>] is superposed on the seismic profile (ER: Enhanced Reflector; ATZ: Acoustic Turbid Zone; HOL/PL: Holocene Pleistocene boundary; TWTT two-way travel time).</p>
Full article ">Figure 8
<p>Holocene sediment thickness map of the PGPF. The thickness was derived under the assumption that the bottom is smooth without the presence of pockmarks.</p>
Full article ">Figure 9
<p>(<b>a</b>) Map of PGPF showing the spatial distribution of the acoustic types. (<b>b</b>) Representative seismic profiles of each Acoustic Type (AT) recognised by a high-resolution sub-bottom profiler (TWTT two-way travel time).</p>
Full article ">Figure 10
<p>Representative high-resolution seismic profiles (<b>a</b>,<b>b</b>) showing the presence of Enhanced Reflectors, Acoustic Turbid zones, Intrasedimentary Plumes and Seabed Doming due to gas migration to the upper seismic sequence (I). (ER: Enhanced Reflector; ATZ: Acoustic Turbid Zone; IGP: Intrasedimentary Gas Plumes; D: Seabed Doming; F: Fault; HOL/PL: Holocene Pleistocene boundary; TWTT two-way travel time; SSI: upper seismic sequence; SSII: lower seismic sequence).</p>
Full article ">Figure 11
<p>(<b>a</b>) Map showing the spatial distribution of Acoustic Turbid Zones (ATZ) and Enhanced Reflectors (ER) and their relationship with the presence of pockmarks. (<b>b</b>) Map showing the depth below the seabed where the top of ATZ and ER has been recorded. A and B delimitate areas without ATZ and ER.</p>
Full article ">Figure 12
<p>High-resolution seismic profile showing numerous normal and composite pockmarks, as well as the biggest pockmark in the field (right), located near the southeastern part of the study area. ATZs are found immediately beneath the majority of medium- to large-sized pockmarks. Above one of the pockmarks, a gas flare is recorded. The HOL/PL boundary is partially visible when the gas-related reflections in the upper seismic sequence disappear (PM: pockmark; ER: Enhanced Reflector; ATZ: Acoustic Turbid Zone; IGP: Intrasedimentary Gas Plumes; GFl: Gas Flare; DB: Detached Block; HOL/PL: Holocene Pleistocene boundary; TWTT two-way travel time).</p>
Full article ">Figure 13
<p>High-resolution seismic profiles showing (<b>a</b>) pockmarks with narrow floor on the north sector of the field near AT.F; (<b>b</b>) two pockmarks, a composite (right) in which the Acoustic Turbid Zone is close to the seabed and a Detached Block is recorded at the sidewall and a normal one (left) where ER and ATZ are interrupted below it; (<b>c</b>) a large pockmark (left) where a transparent acoustic character (indicated by red arrows) is observed under its floor and; an Intrasedimentary Gas Plume (right) located between two minor synthetic normal faults and a small pockmark (centre). (PM: pockmark; ER: Enhanced Reflector; DB: Detached Block; ATZ: Acoustic Turbid Zone; AT.F: Agia Triada Fault; HOL/PL: Holocene Pleistocene boundary; TWTT two-way travel time).</p>
Full article ">Figure 14
<p>(<b>a</b>) High-resolution seismic profile showing a gas flare over a pockmark (ER: Enhanced Reflector; ATZ: Acoustic Turbid Zone; GFl: Gas Flare; HOL/PL: Holocene Pleistocene boundary; TWTT two-way travel time); (<b>b</b>,<b>c</b>) slant-range-uncorrected 400 kHz side-scan sonar sonographs showing gas flares (white arrows indicate their position) in the water column rising from the centre of the pockmarks (horizontal lines every 20 m).</p>
Full article ">Figure 15
<p>High-resolution seismic profile showing six normal faults that affect the structure of the layers. Enhanced Reflectors and Acoustic Turbid Zones are present in the upper sequence. (F: Fault; AT.F: Agia Triada Fault; ER: Enhanced Reflector; ATZ: Acoustic Turbid Zone; HOL/PL: Holocene Pleistocene boundary).</p>
Full article ">Figure 16
<p>(<b>a</b>) Tectonic map of the pockmark field area. (<b>b</b>) Three seismic profiles parallel to the shoreline from shallow (<b>i</b>); medium (<b>ii</b>) to deep water (<b>iii</b>) showing the highest displacement, with a characteristic reverse drag on the hanging-wall block in the medium-depth seismic profile (AT.F: Agia Triada Fault; F: fault).</p>
Full article ">Figure 17
<p>(<b>a</b>) Genetic diagram of δ<sup>13</sup>C<sub>CH4</sub> versus molecular composition of hydrocarbon gases (C1/ (C2 + C3)) (VPDB = Vienna Peedee Belemnite Standard) after [<a href="#B80-applsci-13-10449" class="html-bibr">80</a>]. (<b>b</b>) Genetic diagram of δ<sup>13</sup>C<sub>CH4</sub> versus δ<sup>2</sup>H<sub>CH4</sub> (CR: CO<sub>2</sub> reduction, F: fermentation, EMT: early mature thermogenic gas; LMT: late mature thermogenic gas) after [<a href="#B80-applsci-13-10449" class="html-bibr">80</a>].</p>
Full article ">Figure 18
<p>(<b>a</b>) Bathymetric and tectonic map of the PGPF showing the locations of seepages after the 1993 and 2008 major earthquakes which affected the field; (<b>b</b>) map showing the location of the epicentres of the two major earthquakes, their focal mechanism and the distribution of the post-earthquake events (earthquake data of 1993 from [<a href="#B54-applsci-13-10449" class="html-bibr">54</a>,<a href="#B113-applsci-13-10449" class="html-bibr">113</a>] and 2008 from [<a href="#B114-applsci-13-10449" class="html-bibr">114</a>]).</p>
Full article ">
33 pages, 10500 KiB  
Article
A Package of Script Codes, POSIBIOM for Vegetation Acoustics: POSIdonia BIOMass
by Erhan Mutlu
J. Mar. Sci. Eng. 2023, 11(9), 1790; https://doi.org/10.3390/jmse11091790 - 13 Sep 2023
Cited by 2 | Viewed by 1065
Abstract
Macrophytes and seagrasses play a crucial role in a variety of functions in marine ecosystems and respond in a synchronized manner to a changing climate and the subsequent ecological status. The monitoring of seagrasses is one of the most important issues in the [...] Read more.
Macrophytes and seagrasses play a crucial role in a variety of functions in marine ecosystems and respond in a synchronized manner to a changing climate and the subsequent ecological status. The monitoring of seagrasses is one of the most important issues in the marine environment. One rapidly emerging monitoring technique is the use of acoustics, which has advantages compared to other remote sensing techniques. However, the acoustic method alone is ambiguous regarding the identities of backscatterers. Therefore, a computer program package was developed to identify and estimate the leaf biometrics (leaf length and biomass) of one of the most common seagrasses, Posidonia oceanica. Several problems in the acoustic data, shared with fisheries and plankton acoustics, had to be resolved to obtain vegetation estimates. The first was the "lost" bottom that occurred during data collection and postprocessing due to acoustic noise, reverberation, interference and intense scatterers such as fish shoals. The second was the detection of near-bottom echoes belonging to submerged vegetation, such as seagrasses, and the removal of spurious echoes recorded during the survey. The last was the recognition of the seagrass itself: estimating leaf length and biomass, calibrating on the sheaths/vertical rhizomes of the seagrass and establishing relationships between acoustic units and biometrics. As a result, an autonomous package of code written in MATLAB, named "POSIBIOM" (an acronym for POSIdonia BIOMass), was developed to perform all these processes. This study presents the algorithms, methodology, acoustic–biometric relationships and biometric mapping for the first time, and discusses the advantages and disadvantages of the package compared to software dedicated to bottom-type, habitat and vegetation acoustics. Future studies are recommended to improve the package. Full article
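The "lost bottom" recovery described in this abstract — bottom echoes masked by noise, reverberation or fish shoals — can be illustrated with a per-ping re-detection followed by outlier rejection across pings. A minimal sketch of that general idea (illustrative thresholds and array shapes; not the actual POSIBIOM algorithm):

```python
import numpy as np

def recover_bottom(sv, depths, sv_threshold=-45.0, win=5, max_jump=1.0):
    """Re-detect the bottom line on an echogram.

    sv      : 2-D volume backscatter (dB), shape (n_depth, n_ping)
    depths  : 1-D depth (m) for each sample row
    Per ping, the bottom candidate is the deepest strong echo; candidates
    jumping more than `max_jump` m from the median over `win` neighbouring
    pings are treated as "lost" and replaced by that median.
    """
    n_ping = sv.shape[1]
    bottom = np.full(n_ping, np.nan)
    for p in range(n_ping):
        strong = np.nonzero(sv[:, p] >= sv_threshold)[0]
        if strong.size:
            bottom[p] = depths[strong[-1]]  # deepest strong sample
    # reject outliers (e.g. a fish shoal detected as bottom) with a running median
    fixed = bottom.copy()
    for p in range(n_ping):
        lo, hi = max(0, p - win), min(n_ping, p + win + 1)
        med = np.nanmedian(bottom[lo:hi])
        if np.isnan(bottom[p]) or abs(bottom[p] - med) > max_jump:
            fixed[p] = med
    return fixed
```

A ping whose strongest echo comes from a mid-water shoal is thus snapped back to the bottom depth interpolated from its neighbours.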
(This article belongs to the Section Marine Environmental Science)
Figure 1
<p>Schematic configuration (solid line) and projection (grey dashed line) of successive pings (Rp: previous range; Rc: current range; Rn: next range) by the transducers to estimate the dead zone. θ, beam angle; τ, pulse width; β, angle of bottom slope; Hd, horizontal distance to previous or next ping; Vd, vertical distance to bottom depth referring to previous or next ping. DZ is the dead zone height of the flat bottom and DZ1 is the height of the angled bottom referred to as Rc. The shortest distance here was Rn for the present study and TSD in Equation (4) for the EchoView method.</p>
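The dead-zone height sketched in this figure can be approximated from the caption's geometric terms alone. A minimal illustration using common textbook approximations (the c·τ/4 pulse term and the first-order slope term are generic assumptions, not the paper's exact formulation):

```python
import math

def dead_zone_height(r, beam_deg, pulse_s, slope_deg=0.0, c=1500.0):
    """Approximate acoustic dead-zone height (m) above the detected bottom.

    r         : range to the bottom along the beam axis (m)
    beam_deg  : full beam angle theta (degrees)
    pulse_s   : pulse width tau (s)
    slope_deg : bottom slope beta (degrees), adding a first-order term
    c         : sound speed (m/s)
    """
    half = math.radians(beam_deg / 2.0)
    geom = r * (1.0 / math.cos(half) - 1.0)   # beam-opening term
    pulse = c * pulse_s / 4.0                 # pulse-length term (c*tau/4)
    slope = r * math.tan(half) * math.tan(math.radians(slope_deg))
    return geom + pulse + slope
```

For typical survey values (20 m range, 7° beam, 0.512 ms pulse), the flat-bottom dead zone is on the order of a few decimetres, and a sloping bottom enlarges it.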
Full article ">Figure 2
<p>Acoustic data profile versus depth (see <a href="#app1-jmse-11-01790" class="html-app">Figure S1</a>) for estimates of expected SNRb (<b>a</b>) and SNRir (<b>b</b>). The red line indicates Sv without TVG, the blue line indicates Sv with TVG and the green line indicates the regression curve.</p>
Full article ">Figure 3
<p>An example file or data for the calibration of the meadow; raw data (<b>a</b>), removal of weak scatterers (<b>b</b>), removal of strong scatterers (<b>c</b>), removal and filtering of leaves (red continuous line denotes the ground and green the dead zone) (<b>d</b>) and vertical rhizomes and sheaths (<b>e</b>) on the enhanced echogram.</p>
Full article ">Figure 4
<p>Sequence of estimation of leaf length and biomass of the meadow in the main menu, POSIBIOM (<b>a</b>) and detailed flowchart of POSIBIOM (<b>b</b>). Numbers in the chart denote sequence of the processing. Dashed line denotes subcalls (algorithms) executed in linkage to main menu, POSIBIOM.</p>
Full article ">Figure 5
<p>Graphical menu for control, configuration and information of “Lost Bottom” (see <a href="#app1-jmse-11-01790" class="html-app">Table S1</a> for the actions of each coded option and function with numbers).</p>
Full article ">Figure 6
<p>Graphical menu for control, configuration and information of “Noise &amp; Reverberation” (see <a href="#app1-jmse-11-01790" class="html-app">Table S2</a> for actions of each coded option and function with numbers).</p>
Full article ">Figure 7
<p>Graphical outputs of the “Noise &amp; Reverberation” analysis. (<b>a</b>) Enhanced echogram before the start of the analysis, (<b>b</b>) an algorithm to figure out the detection of the total noise, spurious echoes (optional), (<b>c</b>) the SNR for each background natural and artificial noise (optional) and (<b>d</b>) removed noise and reverberations; in this example, the configuration needed to be optimized to remove the total reverberations.</p>
Full article ">Figure 8
<p>Graphical menu for control, settings and info menu of “SheathFinder &amp; Leaf and Biomass” (see <a href="#app1-jmse-11-01790" class="html-app">Table S3</a> for actions of each coded option and function with numbers).</p>
Full article ">Figure 9
<p>Fixation and estimation of seagrass leaves during the analysis of acoustic data with “SheathFinder&amp; Leaf and Biomass”; (<b>a</b>) scanning of sheaths or vertical rhizomes with green vertical line to determine false or real seagrass, (<b>b</b>) estimation of seagrass after the analysis of the current file was finished (green line denotes the canopy height) and (<b>c</b>) results table when checking the option (option no. 25 in <a href="#jmse-11-01790-f008" class="html-fig">Figure 8</a>).</p>
Full article ">Figure 10
<p>Three-dimensional demonstration (latitude, longitude and depth) of the efficiency of the algorithm “Lost Bottom and Dead Zone” in recovering the real bottom and depth with the dead zone estimates in July 2011 (<b>a</b>), September 2011 (<b>b</b>), January 2012 (<b>c</b>) and April 2012 (<b>d</b>). The black line shows the bottom depth estimated with the Visual Analyzer program, and the green line shows the bottom depth corrected with the algorithm of this study. Vertical lines indicate errors with overestimates and underestimates on the acoustic trackline.</p>
Full article ">Figure 11
<p>Removal of background noise; different patterns of interferences. (<b>a</b>–<b>c</b>) Original data and (<b>a’</b>–<b>c’</b>) removal of interferences, respectively. Removal of background noise and reverberations; different patterns of surface and volume reverberations produced by SCUBA divers, (<b>d</b>) original data and (<b>d’</b>) removal of reverberations.</p>
Full article ">Figure 12
<p>Interpolated estimate of leaf biomass distribution (g/m<sup>2</sup>) in July 2011 (<b>a</b>), December 2011 (<b>b</b>), January 2012 (<b>d</b>) and April 2012 (<b>e</b>), and meters of leaf length in December 2011 (<b>c</b>) and April 2012 (<b>f</b>). X label (longitude); Y label (latitude).</p>
Full article ">Figure A1
<p>Coastline file downloaded from Marine Region [<a href="#B76-jmse-11-01790" class="html-bibr">76</a>] and then converted to SURFER (Golden Software) and MATLAB (MathWorks Inc.) formats. Each colored area is a large sector described with polygons (13 polygons) consisting of polygons of the islands to be used for mapping and blanking data outside the islands, as formatted by Marine Region [<a href="#B76-jmse-11-01790" class="html-bibr">76</a>]. 1. Gibraltar Straits; 2. Alboran Sea; 3. Balearic Sea; 4. West Mediterranean Sea; 5. Ligurian Sea; 6. Tyrrhenian Sea; 7. Adriatic Sea; 8. Ionian Sea; 9. East Mediterranean Sea; 10. Aegean Sea; 11. Sea of Marmara; 12. Black Sea; 13. Azov Sea.</p>
Full article ">Figure A2
<p>Automatic appearance of the option in the case of the existence of a false bottom at the first ping due to volume or surface reverberations or interferences.</p>
Full article ">Figure A3
<p>Screenshot of the Lost Bottom analysis during processing. Two numbers in the center of the left panel are optional if the “Figure on/off” function is enabled.</p>
Full article ">Figure A4
<p>Tools menu to load files (<b>a</b>) and to configure the plot of the map, e.g., here biomass (g/m<sup>2</sup>) as a contour (<b>b</b>,<b>c</b>) or leaf length (m) along the trackline (<b>d</b>), depending on the maximum value set by the user. When the contour plot is selected, entries appear to configure the interpolation used to create the grid (Xspace (longitude) versus Yspace (latitude)), the interpolation type and the saving of the figure (<b>b</b>,<b>c</b>); when the line plot is selected (<b>b</b>,<b>d</b>), a single entry appears for the line width. The data are an example from July 2011 (<b>c</b>,<b>d</b>).</p>
Full article ">Figure A5
<p><span class="html-italic">Posidonia oceanica</span>: leaf biomass (kg/m<sup>2</sup>) (<b>left panel</b>) and mean leaf length (cm, maximum circle is 46 cm of leaf length) (<b>right panel</b>) estimated from SCUBA sampling in December 2011 (<b>a</b>), January 2012 (<b>b</b>), April 2012 (<b>c</b>) and July 2011 (<b>d</b>) for comparison with acoustic estimates (see <a href="#jmse-11-01790-f012" class="html-fig">Figure 12</a> for comparison). (+: SCUBA sampling stations.).</p>
Full article ">
34 pages, 62588 KiB  
Technical Note
Use of ICEsat-2 and Sentinel-2 Open Data for the Derivation of Bathymetry in Shallow Waters: Case Studies in Sardinia and in the Venice Lagoon
by Massimo Bernardis, Roberto Nardini, Lorenza Apicella, Maurizio Demarte, Matteo Guideri, Bianca Federici, Alfonso Quarati and Monica De Martino
Remote Sens. 2023, 15(11), 2944; https://doi.org/10.3390/rs15112944 - 5 Jun 2023
Cited by 4 | Viewed by 2677
Abstract
Despite the high accuracy of conventional acoustic hydrographic systems, measurement of the seabed along coastal belts is still a complex problem due to the limitations arising from shallow water. In addition to traditional echo sounders, airborne LiDAR also suffers from high application costs, [...] Read more.
Despite the high accuracy of conventional acoustic hydrographic systems, measurement of the seabed along coastal belts is still a complex problem due to the limitations arising from shallow water. In addition to traditional echo sounders, airborne LiDAR also suffers from high application costs, low efficiency, and limited coverage. On the other hand, remote sensing offers a practical alternative for the extraction of depth information, providing fast, reproducible, low-cost mapping over large areas to optimize and minimize fieldwork. Satellite-derived bathymetry (SDB) techniques have proven to be a promising alternative to supply shallow-water bathymetry data. However, this methodology is still limited since it usually requires in situ observations as control points for multispectral imagery calibration and bathymetric validation. In this context, this paper illustrates the potential for bathymetric derivation conducted entirely from open satellite data, without relying on in situ data collected using traditional methods. The SDB was performed using multispectral images from Sentinel-2 and bathymetric data collected by NASA’s ICESat-2 on two areas of relevant interest. To assess outcomes’ reliability, bathymetries extracted from ICESat-2 and derived from Sentinel-2 were compared with the updated and reliable data from the BathyDataBase of the Italian Hydrographic Institute. Full article
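Band-ratio calibration against ICESat-2 depths, as described above, is commonly implemented with the Stumpf log-ratio model. A minimal sketch under that assumption (the per-seabed-class, per-depth-range regressions of the paper are not reproduced; `n = 1000` is a conventional scaling constant):

```python
import numpy as np

def stumpf_ratio(band_num, band_den, n=1000.0):
    """Stumpf-style log-ratio of two reflectance bands (e.g. Blue/Green)."""
    return np.log(n * np.asarray(band_num)) / np.log(n * np.asarray(band_den))

def calibrate_sdb(band_num, band_den, icesat2_depths):
    """Fit depth = m1 * ratio + m0 against ICESat-2 calibration points."""
    ratio = stumpf_ratio(band_num, band_den)
    m1, m0 = np.polyfit(ratio, icesat2_depths, 1)
    return m1, m0

def derive_sdb(band_num, band_den, m1, m0):
    """Apply the calibrated linear model to a whole image's band values."""
    return m1 * stumpf_ratio(band_num, band_den) + m0
```

Calibration points come from the lidar (ICESat-2) profile; the fitted (m1, m0) pair is then applied pixel-wise to the Sentinel-2 scene, so no in situ sounding is needed.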
Graphical abstract
Full article ">Figure 1
<p>Representation of the two areas of operation (AOOs): (<b>a</b>) Gulf of Congianus, (<b>b</b>) Venice Lagoon and the open sea outside it. (Picture generated using Google Earth, 5 February 2023).</p>
Full article ">Figure 2
<p>Representation of S-2 images selected and cropped according to areas of interest: (<b>a</b>) Gulf of Congianus and (<b>b</b>) Venice Lagoon.</p>
Full article ">Figure 3
<p>Projection of soundings contained in the trajectories of ICESat-2 beams selected and cleaned for the study areas: (<b>a</b>) Gulf of Congianus and (<b>b</b>) Venice Lagoon.</p>
Full article ">Figure 4
<p>MBES bathymetric surveys of (<b>a</b>) Venice Lagoon and (<b>b</b>) Gulf of Congianus. To the left of each image is the graduated scale reflecting the reference depth.</p>
Full article ">Figure 5
<p>(<b>a</b>) Geographical distribution of tide gauges inside the Venice Lagoon and (<b>b</b>) the geographical division in two macro areas inside and outside the lagoon, with further subdivisions inside them.</p>
Full article ">Figure 6
<p>Example of a profile obtained from the first phase of ICESat-2 data processing, with the photons dataset referenced to ellipsoid WGS84 (<b>blue</b>) and the photons shifted and referenced to the geoid EGM-2008 (<b>green</b>).</p>
Full article ">Figure 7
<p>The measured height (blue dots) relative to the geoid was transformed into the water column depth (fuchsia dots) relative to a zero level (yellow line), which represents the shift of the water surface (red line) as a reference level. Zoom-in on the 16–17 m along-track distance of the profile in <a href="#remotesensing-15-02944-f006" class="html-fig">Figure 6</a>.</p>
Full article ">Figure 8
<p>Bathymetry points (in purple) corrected by refraction (green dots) and their tide level correction (red dots) relative to a specific date and time, using local tide gauge measurements. The ICESat-2 data were acquired during a lower tide than the reference time, causing an upward shift.</p>
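The refraction correction shown in this figure follows from the air/water refractive-index contrast: for a near-nadir beam the apparent photon depth is scaled by roughly n_air/n_water (≈0.746), then shifted by the tide level. A minimal sketch (index values, flat-surface geometry and the tide sign convention are typical simplifications, not the authors' exact processing):

```python
def correct_depth(apparent_depth, tide_m=0.0, n_air=1.00029, n_water=1.34116):
    """Correct a lidar sub-surface depth (m, positive down).

    apparent_depth : depth below the water surface before refraction correction
    tide_m         : tide level at acquisition relative to the reference datum
                     (sign convention assumed: positive tide shallows the datum depth)
    Near nadir, refraction shortens the true depth to about n_air/n_water
    (~0.746) of the apparent depth.
    """
    return apparent_depth * (n_air / n_water) - tide_m
```

An apparent 10 m return thus corresponds to roughly 7.46 m of true water column before the tide shift.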
Full article ">Figure 9
<p>Example of tide correction applied to the ICESat-2 and MBES data to reference them to the time of S-2 image acquisition.</p>
Full article ">Figure 10
<p>The profile obtained from the ATL03_20200114165545_02900602_005_01_gt2r beam at the end of the <span class="html-italic">Data automatic download and preparation</span> phase and after referencing to the geoid EGM-2008. The diagram shows the profile of the emerged land, the waterline, and the seabed, which remains well distinct from the noise points, especially in the more superficial layers. The profile section in rectangle A is zoomed in <a href="#remotesensing-15-02944-f011" class="html-fig">Figure 11</a>.</p>
Full article ">Figure 11
<p>The bathymetric points (green dots) extracted automatically, and their resulting depths after the refraction and tide level correction (red dots) relative to a specific date and time, using local tide gauge measurement and local temperature and salinity data (Section A from <a href="#remotesensing-15-02944-f010" class="html-fig">Figure 10</a>).</p>
Full article ">Figure 12
<p>Result of the seabed classification process in the Gulf of Congianus. Seabed classes are sand (yellow), rock (grey), and vegetation and other seabed cover (green).</p>
Full article ">Figure 13
<p>Calibration results, showing the relationship between the Blue/Green ratio (in green) or the Blue/Red ratio (in red) and the depth of the ICESat-2 calibration point set. (<b>a</b>) Sand, 0–5 m; (<b>b</b>) Rocks, 0–5 m; (<b>c</b>) Sand, 5–10 m; (<b>d</b>) Rocks, 5–10 m. Gulf of Congianus.</p>
Full article ">Figure 14
<p>SDB validation: error scatter plots in different depth ranges, showing the relationship between the depth of the ICESat-2 bathymetric points and the estimated SDB. N is the number of ICESat-2 points used. (<b>a</b>) Sand, 0–5 m; (<b>b</b>) Rocks, 0–5 m; (<b>c</b>) Sand, 5–10 m; (<b>d</b>) Rocks, 5–10 m. Gulf of Congianus.</p>
Full article ">Figure 15
<p>SDBs obtained down to 5 m depth using the Blue/Red ratio for sand and rocks and using Blue/Green ratio in the 5–10 m range for sand. No bathymetries were derived for rocky areas in the 5–10 m range.</p>
Full article ">Figure 16
<p>BIAS distribution for the Gulf of Congianus.</p>
Full article ">Figure 17
<p>Profile obtained from beam ATL03_20200101053635_00840606_005_01_gt3l at the end of phase 1 <span class="html-italic">Data automatic download and preparation</span>, with a sub-section, and after the referencing to the geoid EGM-2008. The main elements of this image are the profile of the hinterland of Venice partially below the current sea level on the left, the Venice Lagoon in the middle, and the open sea with the seabed on the right. The lagoon shows a complex structure of water layers. The profile sections in rectangles A and B are zoomed in <a href="#remotesensing-15-02944-f018" class="html-fig">Figure 18</a> and <a href="#remotesensing-15-02944-f019" class="html-fig">Figure 19</a>.</p>
Full article ">Figure 18
<p>(<b>a</b>) Automatically extracted bathymetric points (red dots). The highlighted points (red dots) are not representative of the seabed. (<b>b</b>) Manually extracted bathymetric points (green dots) and their resulting depths after refraction and tidal level correction (red dots) relative to a specific date and time, using local tide gauge measurements and local temperature and salinity data. Section A of <a href="#remotesensing-15-02944-f017" class="html-fig">Figure 17</a> relating to the lagoon area.</p>
Full article ">Figure 19
<p>Automatically extracted bathymetric points (green dots) and their resulting depths after refraction and tide level correction (red dots) relative to a specific date and time, using local tide gauge measurements and local temperature and salinity data. Section B from the outer sea in <a href="#remotesensing-15-02944-f017" class="html-fig">Figure 17</a>.</p>
Full article ">Figure 20
<p>Result of the seabed classification process in the Venice Lagoon. Classes of the seabed are identified as sand in yellow areas and marine vegetation in green.</p>
Full article ">Figure 21
<p>Results of the calibration phase showing the relationship between the Blue/Green ratio (green color) or the Blue/Red ratio (red color) and the depth of the ICESat-2 calibration point sets. (<b>a</b>) Lagoon; (<b>b</b>) Open sea, 0–5 m; (<b>c</b>) Open sea, 5–10 m. Venice Lagoon.</p>
Full article ">Figure 22
<p>SDB validation: error scatter plots of the relationship between the depth of the ICESat-2 bathymetric points and the estimated depth (SDB). The black line is the regression line. (<b>a</b>) Lagoon; (<b>b</b>) Open sea, 0–5 m; (<b>c</b>) Open sea, 5–10 m. Venice Lagoon.</p>
Full article ">Figure 23
<p>Sentinel-derived bathymetry (SDB) for the Venice Lagoon and the open sea area in front of Venice.</p>
Full article ">Figure 24
<p>BIAS distribution for the areas inside and outside the Venice Lagoon.</p>
Full article ">Figure 25
<p>Overlapping of ICESat-2 bathymetric points (yellow points) and MBES bathymetric data (area colored from red to blue) and zoom-in of the coastal area with more ICESat-2 points. Sardinia.</p>
Full article ">Figure 26
<p>Histogram of the differences in the depth values measured by ICESat-2 and MBES in the Gulf of Congianus.</p>
Full article ">Figure 27
<p>ICESat-2 (red points) and MBES (area colored from red to blue) data. Zoom-in of two characteristic areas. Venice Lagoon.</p>
Full article ">Figure 28
<p>Bar charts of the differences in the depth values measured with MBES surveys. The red dashed line represents the ±0.5 m range, the Total Vertical Uncertainty for the Order 1 Standard of the IHO S-44 Publication. (<b>a</b>) MBES-SDB, Gulf of Congianus, Sand; (<b>b</b>) MBES-SDB, Gulf of Congianus, Rocks; (<b>c</b>) MBES-SDB, Venice, Lagoon; (<b>d</b>) MBES-SDB, Venice, Sea.</p>
Full article ">Figure 29
<p>SDB results in the Gulf of Congianus area after a 5 m depth filter (applied only to rocky seabed areas) is used to meet the range of acceptability of vertical uncertainty of the IHO standards. On the left are the S-2 images and on the right are the SDB results. (<b>a</b>) Cugnana Gulf, the shallower part of the case study area; (<b>b</b>) a jagged coastal area from the Marinella and Aranci Gulfs.</p>
Full article ">Figure 30
<p>SDB results from the Venice Lagoon area, after a 3.5 m depth filter is applied for the range of acceptability of vertical uncertainty of the IHO standards.</p>
Full article ">Figure 31
<p>SDB results from the Venice Lagoon area, after a 3.5 m depth filter is applied for the range of acceptability of vertical uncertainty of IHO standards. On the left are the S-2 images, and on the right are the SDB results: (<b>a</b>) the northern part of the Venice Lagoon, (<b>b</b>) the lagoon area near Venice Town, (<b>c</b>) the lagoon area near Malamocco, (<b>d</b>) the lagoon area near Chioggia Town.</p>
Full article ">
25 pages, 5934 KiB  
Article
Sedimentation and Erosion Patterns of the Lena River Anabranching Channel
by Sergey Chalov and Kristina Prokopeva
Water 2022, 14(23), 3845; https://doi.org/10.3390/w14233845 - 26 Nov 2022
Cited by 5 | Viewed by 2836
Abstract
Lena River is one of the largest “pristine” undammed river systems in the World. In the middle and low (including delta) 1500 km course of the Lena main stem river forms complex anabranching patterns which are affected by continuous permafrost, degradation of the [...] Read more.
The Lena River is one of the largest "pristine" undammed river systems in the world. Along its middle and lower (including delta) 1500 km course, the Lena main stem forms complex anabranching patterns which are affected by continuous permafrost, degradation of the frozen ground and changes in vegetation (taiga and tundra). This study provides a high-resolution assessment of sediment behavior along this reach. Comprehensive hydrological field studies along the anabranching channels located in the middle, lower and delta courses of the Lena River were performed from 2016 to 2022, including acoustic Doppler current profiler (ADCP) discharge measurements and sediment transport estimates by gravimetric analyses of sediment concentration data and surrogate measurements (optical, by turbidity meters, and acoustic, by ADCP techniques). These data were used to construct regional relationships between suspended sediment concentration (SSC, mg/L), turbidity (T, NTU) and backscatter intensity (BI, dB) values applicable to the conditions of the Lena River. Further, the field data sets were used to calibrate seasonal relationships between Landsat reflectance intensities and field surface sediment concentration data. Robust empirical models were derived between the field surface sediment concentration and surface reflectance data for various hydrological seasons. Based on the integration of in situ monitoring and remote sensing data, we revealed significant discrepancies in the spatial and seasonal patterns of suspended sediment transport between the various anabranching reaches of the river system. In the middle course of the Lena River, due to inundation of vegetated banks and islands, a downward decrease in sediment concentrations is observed along the anabranching channel during peak flows. Bed and lateral scour during low-water seasons drives an average increase in sediment load along the anabranching channels, even though a significant (up to 30%) decline in SSC occurs within particular reaches of the main channel. Deposition patterns are typical for the secondary channels. The anabranching channel influenced by the largest tributaries (Aldan and Viluy) is characterized by sediment plumes which dominate the spatial and temporal sediment distribution. Finally, in the distributary system of the Lena delta, sediment load mostly increases downstream, predominantly under higher discharges and along the main distributary channels, due to permafrost-dominated bank degradation. Full article
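Rating curves such as the SSC = f(T) relationships used in this study are usually fitted as power laws in log–log space. A minimal sketch of such a surrogate calibration (the coefficients below are illustrative, not the Lena models):

```python
import numpy as np

def fit_ssc_turbidity(turbidity, ssc):
    """Fit SSC = a * T**b by least squares in log-log space.

    turbidity : NTU readings paired with gravimetric samples
    ssc       : suspended sediment concentration (mg/L)
    Returns (a, b).
    """
    b, log_a = np.polyfit(np.log(turbidity), np.log(ssc), 1)
    return np.exp(log_a), b

def predict_ssc(turbidity, a, b):
    """Apply the fitted rating curve to continuous turbidity records."""
    return a * np.asarray(turbidity) ** b
```

Once (a, b) are fitted against the gravimetric samples, the same functional form can be reused for the backscatter-intensity and Landsat-reflectance surrogates, refitted per season.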
(This article belongs to the Special Issue Sediment Transport, Budgets and Quality in Riverine Environments)
Figure 1
<p>Lena River basin and the locations of case studies.</p>
Full article ">Figure 2
<p>Schematic map of the sections and profiles, where SSC changes were calculated: (<b>a</b>) numbers represent the profile locations of the SSC sections in case study 1; (<b>b</b>) numbers represent the profile locations of the SSC sections in case study 2; (<b>c</b>) numbers represent channels in the Lena River delta (case study 3): 1—main channel; 2—Bykovskaya, 3—Trofimovskaya, 4—Tumatskaya, 5—Olenekskaya branches; grey boxes are polygons for calculating average sediment concentrations.</p>
Full article ">Figure 3
<p>Locations of the field measurements of the SSC in the middle reach of the Lena River: 9 July 2016 (<b>a</b>), 20 June till 9 July 2020 (<b>b</b>) and in the Lena River delta: 13–15 August 2022 (<b>c</b>) used for regression model construction.</p>
Full article ">Figure 4
<p>The <span class="html-italic">SSC = f(T)</span> relationship for the middle and low courses of the Lena River. 2016-model: 82 measurements from 20 to 29 June and from 8 to 10 July 2016 (Lena River Yakutsk anabranching system). 2020-model: 58 measurements from 25 June to 10 July (Lena River from Pokrovsk to Saham anabranching system). 2021-models: 106 measurements from 2 expeditions: from 3 to 5 July (Saham anabranching system) and from 23 to 25 September (Yakutsk anabranching system). 2022-model: 38 measurements from 10 to 19 June (Lena River Yakutsk anabranching system) and 22 measurements from 10 to 16 August (Lena River delta).</p>
Full article ">Figure 5
<p>A regression model derived for the Lena River. <span class="html-italic">SSC</span>—suspended sediment concentration of water, <span class="html-italic">ρ</span>—the reflectance coefficient of the Landsat 8 images in the red band. The lines show the regression equation lines from <a href="#water-14-03845-t003" class="html-table">Table 3</a>.</p>
Full article ">Figure 6
<p>SSC maps for the Lena case study 1 demonstrating examples of longitudinal sediment concentration increase: (<b>a</b>) 15 June 2007 under a Lena discharge of 36,300 m<sup>3</sup>s<sup>−1</sup>; (<b>b</b>) 21 June 2021 under a Lena discharge of 24,200 m<sup>3</sup>s<sup>−1</sup>; and a decline: (<b>c</b>) 19 September 2013 under a water discharge of 8680 m<sup>3</sup>s<sup>−1</sup>; (<b>d</b>) 30 July 2021 under a Lena discharge of 9480 m<sup>3</sup>s<sup>−1</sup>.</p>
Full article ">Figure 7
<p>Longitudinal surface SSC changes (<span class="html-italic">ΔS</span>, %) and sediment load budget (<span class="html-italic">ΔW<sub>R</sub>,</span> kg s<sup>-1</sup>) in case study 1 of the Lena River (non-outlier range, mean, STD and 25–75%-interval) (profiles in <a href="#water-14-03845-f002" class="html-fig">Figure 2</a>). Blue circles represent <span class="html-italic">W<sub>R</sub></span> values during the flood-water period, and yellow circles during low-water period.</p>
Full article ">Figure 8
<p>The surface SSC changes <span class="html-italic">ΔS</span> along case study 1 of the Lena River (62 situations in the period from 1992 to 2018).</p>
Full article ">Figure 9
<p>Examples of SSC maps along case study 2 demonstrating significant domination of the Lena River water discharge over the Aldan river (<b>a</b>)—28 August 2009, Q<sub>L</sub>/Q<sub>A</sub> = 3.53 (Q<sub>L</sub> = 17,700 m<sup>3</sup>s<sup>−1</sup>, Q<sub>A</sub> = 5 100 m<sup>3</sup>s<sup>−1</sup><span class="html-italic">)</span>, discharge ratio group—1) and domination of the Aldan River discharge over the Lena river (<b>b</b>)—31 August 2016, Q<sub>L</sub>/Q<sub>A</sub> = 0.74 (Q<sub>L</sub> = 13,200 m<sup>3</sup>s<sup>−1</sup>, Q<sub>A</sub> = 9 730 m<sup>3</sup>s<sup>−1</sup>), discharge ratio group—4).</p>
Full article ">Figure 10
<p>The suspended sediment changes ΔS (%) along the middle reach of the Lena River by relative discharges of the particular distributaries.</p>
Full article ">Figure 11
<p>Relationship between the suspended sediment concentration (<span class="html-italic">SSC</span>) and (<b>a</b>) velocity (transport capacity—<span class="html-italic">R<sub>tr</sub></span>) from the measurements from 9 July 2016; (<b>b</b>)– mean sediment diameter (<span class="html-italic">D50</span>, μm) from the measurement from 23 to 24 September 2021.</p>
Full article ">Figure 12
<p>Changes in <span class="html-italic">S<sub>l</sub>/S<sub>r</sub></span> ratio along the Aldan–Viluy reach of the Lena River under various hydrological conditions <span class="html-italic">Q<sub>L</sub>/Q<sub>A</sub></span>.</p>
Full article ">Figure 13
<p>Relationship between the surface suspended sediment changes (<span class="html-italic">ΔS</span>) and air temperature (<span class="html-italic"><sup>0</sup>C</span>) (Tiksi water station) in the Lena delta.</p>
Full article ">
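The captions above describe a regression relating Landsat 8 red-band reflectance to suspended sediment concentration. A minimal sketch of that kind of fit is shown below; the data and the linear model form are synthetic assumptions for illustration only (the article's actual equations are in its Table 3 and are not reproduced here).

```python
import numpy as np

# Synthetic stand-in for paired field/satellite observations:
# rho is red-band reflectance, ssc_obs is measured SSC in mg/L.
rng = np.random.default_rng(42)
rho = rng.uniform(0.02, 0.15, 60)                        # reflectance (unitless)
ssc_obs = 5.0 + 900.0 * rho + rng.normal(0.0, 5.0, 60)   # assumed linear relation + noise

# Least-squares fit: SSC = b0 + b1 * rho
b1, b0 = np.polyfit(rho, ssc_obs, 1)

# Coefficient of determination to judge the fit
pred = b0 + b1 * rho
r2 = 1.0 - np.sum((ssc_obs - pred) ** 2) / np.sum((ssc_obs - ssc_obs.mean()) ** 2)
print(f"SSC = {b0:.1f} + {b1:.1f} * rho, R^2 = {r2:.3f}")
```

Once such a relationship is calibrated against field samples, it can be applied pixel-by-pixel to satellite reflectance to produce SSC maps like those in Figures 6 and 9.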
17 pages, 11818 KiB  
Article
Application of Remote Sensing Techniques to Identification of Underwater Airplane Wreck in Shallow Water Environment: Case Study of the Baltic Sea, Poland
by Artur Grządziel
Remote Sens. 2022, 14(20), 5195; https://doi.org/10.3390/rs14205195 - 17 Oct 2022
Cited by 7 | Viewed by 2827
Abstract
Multibeam echo sounders (MBES), side-scan sonars (SSS), and remotely operated vehicles (ROVs) are irreplaceable devices in contemporary hydrographic works. However, a highly reliable method of identifying detected wrecks is visual inspection through diving surveys. During underwater research, it is sometimes hard to obtain images in turbid water. Moreover, on-site diving operations are time-consuming and expensive. This article presents the results of the remote sensing surveys that were carried out at the site of a newly discovered wreck, in the southern part of the Baltic Sea (Poland). Remote sensing techniques can quickly provide a detailed overview of the wreckage area and thus considerably reduce the time required for ground truthing. The goal of this paper is to demonstrate the process of identification of a wreck based on acoustic data, without involving a team of divers. The findings, in conjunction with the collected archival documentation, allowed for the identification of the wreck of a Junkers Ju-88, a bomber from World War II.
(This article belongs to the Special Issue Remote Sensing for Shallow and Deep Waters Mapping and Monitoring)
Figures

Figure 1: Location of the study area where the survey was conducted by Arctowski.
Figure 2: The moment of detecting the wreck by means of the MBES on survey line number 70.
Figure 3: Instruments used for surveying the airplane wreck: (a) multibeam echo sounder EM-3002D; (b) side-scan sonar Klein 3900 system; (c) scanning sonar ver. 1071; (d) remotely operated vehicle Falcon SAAB SeaEye.
Figure 4: Grid of bathymetric survey lines planned at the wreck site.
Figure 5: Grid of bathymetric data and location of the wreck, prepared based on multibeam echo sounder measurements.
Figure 6: Digital models of the plane wreck: (a,b) models developed from data recorded during one pass over the wreck; (c,d) models developed from the increased density of bathymetric data.
Figure 7: Side-scan sonar data of little utility and value: (a) sonar flying too low and too close to the wreckage; (b) sonar flying too high above the bottom; (c) sonar flying too far from the wreckage.
Figure 8: Side-scan sonar data of considerable utility and value: (a) sonar flying 20 m from the wreckage, range R = 40 m; (b) sonar flying 15 m from the wreckage, range R = 30 m.
Figure 9: Side-scan sonar data recorded with the Klein 3900 system: (a) sonar track line parallel to the axis of the airplane wings; (b) sonar track line at an angle of 45 degrees to the axis of the airplane wings.
Figure 10: Video data recorded with the ROV camera: (a) part of the right wing; (b) part of the cockpit; (c) part of the left wing; (d) engine propeller; (e) detached part of the tail; (f) engine cover on the right wing.
Figure 11: Four types of aircraft selected in the first phase of identification: (a) Szcze-2; (b) B-25 Mitchell; (c) Junkers Ju-88; (d) Dornier Do 217.
Figure 12: Comparison of the sonar image of the wreck with the B-25 Mitchell bomber: (a) sonar imagery of the wreck; (b) floor plan of the B-25 Mitchell bomber; (c) identifying the wing feature in the sonar image; (d) identification of the engine on the construction plans.
Figure 13: Comparison of the distance between the wing-mounted engines based on sonar data and technical data of the Szcze-2 aircraft.
Figure 14: Comparison of the shape and arrangement of the wing ailerons: (a) side-scan sonar image of the airplane wreck's wing; (b) plane projection of the Dornier Do 217.
Figure 15: Comparison of the wing arrangement and wingspan: (a) wing arrangement of the Do 217; (b) wing arrangement of the discovered plane wreck, vertical cross-section of the multibeam echo sounder data.
Figure 16: Comparison of the wingspan and distance between engines: (a) data recorded with the MBES; (b) sonar imagery and dimensioning; (c) actual dimensions of the Junkers Ju-88.
Figure 17: Comparison of the wing elevation angle of the wreck: (a) wing arrangement of the airplane wreck, vertical cross-section of the MBES data; (b) actual model of the Junkers Ju-88.
Figure 18: Identification of selected parts of the bomber wreck on the basis of data recorded with the ROV and side-scan sonar.
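The identification figures above compare dimensions measured from sonar and MBES imagery (wingspan, engine spacing) against published specifications of the four candidate aircraft. A hypothetical sketch of that comparison as a nearest-match search follows; every number below is an illustrative placeholder, not a measurement or specification taken from the article.

```python
# Published dimensions of the candidate aircraft (placeholder values).
candidates = {
    "Szcze-2":        {"wingspan_m": 20.5, "engine_spacing_m": 4.7},
    "B-25 Mitchell":  {"wingspan_m": 20.6, "engine_spacing_m": 5.4},
    "Junkers Ju-88":  {"wingspan_m": 20.0, "engine_spacing_m": 5.0},
    "Dornier Do 217": {"wingspan_m": 19.0, "engine_spacing_m": 4.4},
}

# Dimensions read off the sonar/MBES imagery (placeholder values).
measured = {"wingspan_m": 20.1, "engine_spacing_m": 5.05}

def mismatch(spec, meas):
    """Sum of relative errors over every dimension measured from the imagery."""
    return sum(abs(spec[k] - meas[k]) / meas[k] for k in meas)

scores = {name: mismatch(spec, measured) for name, spec in candidates.items()}
best = min(scores, key=scores.get)
print(best)  # candidate whose published dimensions best match the imagery
```

In practice, as the figures show, such dimensional screening is combined with qualitative features (aileron shape, wing dihedral, engine placement) before a final identification is made.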
15 pages, 1839 KiB  
Article
Geotechnical Measurements for the Investigation and Assessment of Arctic Coastal Erosion—A Review and Outlook
by Nina Stark, Brendan Green, Nick Brilli, Emily Eidam, Kevin W. Franke and Kaleb Markert
J. Mar. Sci. Eng. 2022, 10(7), 914; https://doi.org/10.3390/jmse10070914 - 1 Jul 2022
Cited by 6 | Viewed by 3962
Abstract
Geotechnical data are increasingly utilized to aid investigations of coastal erosion and the development of coastal morphological models; however, measurement techniques are still challenged by environmental conditions and accessibility in coastal areas, and particularly by nearshore conditions. These challenges are exacerbated for Arctic coastal environments. This article reviews existing and emerging data collection methods in the context of geotechnical investigations of Arctic coastal erosion and nearshore change. Specifically, the use of cone penetration testing (CPT), which can provide key data for the mapping of soil and ice layers as well as for the assessment of slope and block failures, and the use of free-fall penetrometers (FFPs) for rapid mapping of seabed surface conditions, are discussed. Because of limitations in the spatial coverage and number of available in situ point measurements by penetrometers, data fusion with geophysical and remotely sensed data is considered. Offshore and nearshore, the combination of acoustic surveying with geotechnical testing can optimize large-scale seabed characterization, while onshore the most recent developments in satellite-based and unmanned-aerial-vehicle-based data collection offer new opportunities to enhance spatial coverage and collect information on bathymetry and topography, amongst others. Emphasis is given to easily deployable and rugged techniques and strategies that can offer near-term opportunities to fill current gaps in data availability.
This review suggests that data fusion of geotechnical in situ testing, using CPT to provide soil information at deeper depths and even in the presence of ice and using FFPs to offer rapid and large-coverage geotechnical testing of surface sediments (i.e., in the upper tens of centimeters to meters of sediment depth), combined with acoustic seabed surveying and emerging remote sensing tools, has the potential to provide essential data to improve the prediction of Arctic coastal erosion, particularly where climate-driven changes in soil conditions may bias the use of historic observations of erosion for future prediction.
(This article belongs to the Section Coastal Engineering)
Figures

Figure 1: Simplified conceptual sketch of some processes affecting geotechnical Arctic coastal and nearshore sediment properties.
Figure 2: Portable free-fall penetrometer BlueDrop during YUKON14 deployments.
Figure 3: UAV-based 3D SfM reconstruction of an Arctic coastal slope near Anchorage that was impacted by an earthquake and landslides in 2018 (localized landslides are circled in yellow).
Figure 4: Conceptual sketch of combined data collection strategies for optimized coastal geotechnical site characterization in Arctic environments.