Search Results (25,763)

Search Parameters:
Keywords = noise

20 pages, 7824 KiB  
Article
Research on a Feature Point Detection Algorithm for Weld Images Based on Deep Learning
by Shaopeng Kang, Hongbin Qiang, Jing Yang, Kailei Liu, Wenbin Qian, Wenpeng Li and Yanfei Pan
Electronics 2024, 13(20), 4117; https://doi.org/10.3390/electronics13204117 (registering DOI) - 18 Oct 2024
Abstract
Laser vision seam tracking enhances robotic welding by enabling external information acquisition, thus improving the overall intelligence of the welding process. However, camera images captured during welding often suffer from distortion due to strong noise sources, including arcs, splashes, and smoke, which adversely affect the accuracy and robustness of feature point detection. To mitigate these issues, we propose a feature point extraction algorithm tailored for weld images, utilizing an improved Deeplabv3+ semantic segmentation network combined with EfficientDet. By replacing Deeplabv3+'s backbone with MobileNetV2, we enhance prediction efficiency. The DenseASPP structure and attention mechanism are implemented to focus on laser stripe edge extraction, resulting in cleaner laser stripe images and minimizing noise interference. Subsequently, EfficientDet extracts feature point positions from these cleaned images. Experimental results demonstrate that, across four typical weld types, the average feature point extraction error remains below 1 pixel, with over 99% of errors below 3 pixels, indicating both high detection accuracy and reliability. Full article
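The accuracy claim in this abstract (mean error below 1 pixel, over 99% of errors below 3 pixels) reduces to two statistics over predicted versus ground-truth feature points. A minimal sketch of that evaluation; the function name and (x, y) array layout are illustrative, not taken from the paper:

```python
import numpy as np

def feature_point_errors(pred, gt):
    """Euclidean pixel error between predicted and ground-truth feature points.

    pred, gt: (N, 2) arrays of (x, y) pixel coordinates.
    Returns (mean_error, fraction_below_3px), the two statistics the
    abstract reports for the weld feature point detector.
    """
    err = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1)
    return err.mean(), (err < 3.0).mean()
```

Feeding it a batch of detections against labeled corner points yields the same summary numbers the paper tabulates per weld type.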
Figures:

Figure 1: Test platform.
Figure 2: Welding noise image. 1—Arc, 2—diffuse reflection, 3—smoke.
Figure 3: Image of welding characteristic points. (a) Corner joint. (b) Butt joint. (c) Lap joint. (d) Groove.
Figure 4: Process of extracting image features.
Figure 5: Deep learning network model based on DeeplabV3+.
Figure 6: DV3p-Weld network model.
Figure 7: DenseASPP module.
Figure 8: CSE channel attention module.
Figure 9: CoordAttention model.
Figure 10: EfficientDet network architecture diagram.
Figure 11: BiFPN feature fusion network.
Figure 12: Collected partial welding images. (a) Fillet joint. (b) Butt joint. (c) Lap joint. (d) Groove.
Figure 13: Part of the DV3p-Weld network dataset. (a) Original image. (b) Labeled image.
Figure 14: Part of the EfficientDet network model dataset. (a) Original image. (b) Labeled image.
Figure 15: Attenuation of the loss function and MIoU curve during segmentation training.
Figure 16: Laser fringe image segmentation results. (a) Original image. (b) Prediction image.
Figure 17: Attenuation of the loss function and MIoU curve during network training.
Figure 18: Image feature point detection results. (a) Original image, (b) DV3p-Weld denoised image, and (c) EfficientDet feature point extraction image.
Figure 19: Error diagram of the feature point extraction algorithm based on DV3p-Weld-E.
Figure 20: Welding seam tracking results based on DV3p-Weld-E.
23 pages, 8565 KiB  
Article
Anomaly Detection in Embryo Development and Morphology Using Medical Computer Vision-Aided Swin Transformer with Boosted Dipper-Throated Optimization Algorithm
by Alanoud Al Mazroa, Mashael Maashi, Yahia Said, Mohammed Maray, Ahmad A. Alzahrani, Abdulwhab Alkharashi and Ali M. Al-Sharafi
Bioengineering 2024, 11(10), 1044; https://doi.org/10.3390/bioengineering11101044 (registering DOI) - 18 Oct 2024
Abstract
Infertility affects a significant number of people. Assisted reproduction technology has been verified to ease infertility problems, and in vitro fertilization (IVF) is one of the best choices; its success relies on the selection of a higher-quality embryo for transfer. This has normally been done manually by examining embryos under a microscope. The traditional morphological assessment of embryos has predictable disadvantages: it is effort- and time-consuming and carries risks of bias related to the individual estimations of specific embryologists. Different computer vision (CV) and artificial intelligence (AI) techniques and devices have recently been applied in fertility hospitals to improve efficacy. AI addresses the imitation of intellectual performance and the capability of technologies to simulate cognitive learning, thinking, and problem-solving typically associated with humans. Deep learning (DL) and machine learning (ML) are advanced AI algorithms used in various fields and are considered the main algorithms for future human assistant technology. This study presents an Embryo Development and Morphology Using a Computer Vision-Aided Swin Transformer with a Boosted Dipper-Throated Optimization (EDMCV-STBDTO) technique. The EDMCV-STBDTO technique aims to accurately and efficiently detect embryo development, which is critical for improving fertility treatments and advancing developmental biology using medical CV techniques. First, the EDMCV-STBDTO method performs image preprocessing using a bilateral filter (BF) model to remove noise. Next, the Swin transformer method is implemented for feature extraction. The EDMCV-STBDTO model employs the variational autoencoder (VAE) method to classify human embryo development. Finally, the hyperparameter selection of the VAE method is implemented using the boosted dipper-throated optimization (BDTO) technique.
The efficiency of the EDMCV-STBDTO method is validated by comprehensive studies using a benchmark dataset. The experimental result shows that the EDMCV-STBDTO method performs better than the recent techniques. Full article
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)
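The preprocessing step named in this abstract, bilateral filtering, smooths noise while preserving edges by weighting each neighbour with both a spatial Gaussian and an intensity (range) Gaussian. A brute-force sketch on a grayscale array; the parameter names and values are illustrative, since the paper's exact settings are not given here:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter on a 2-D grayscale image (float array).

    Each output pixel is a weighted mean of its neighbours, where weights
    fall off with spatial distance (sigma_s) and with intensity
    difference (sigma_r), so edges are preserved while noise is smoothed.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

A sharp 0-to-100 step survives the filter almost unchanged because cross-edge range weights are tiny, which is exactly why the method suits embryo micrographs where boundaries carry the signal.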
Figures:

Figure 1: Overall process of EDMCV-STBDTO model.
Figure 2: Structure of BF model.
Figure 3: Framework of ST model.
Figure 4: Architecture of VAE technique.
Figure 5: Workflow of BDTO approach.
Figure 6: Sample images: (a) Good and (b) Not-Good.
Figure 7: Confusion matrices of EDMCV-STBDTO technique: (a–f) Epochs 500–3000.
Figure 8: Average outcome of EDMCV-STBDTO technique: (a–f) Epochs 500–3000.
Figure 9: Accu_y curve of EDMCV-STBDTO technique: (a–f) Epochs 500–3000.
Figure 10: Loss curve of EDMCV-STBDTO technique: (a–f) Epochs 500–3000.
Figure 11: PR curve of EDMCV-STBDTO technique: (a–f) Epochs 500–3000.
Figure 12: ROC curve of EDMCV-STBDTO technique: (a–f) Epochs 500–3000.
Figure 13: Comparative analysis of EDMCV-STBDTO technique with recent methods.
Figure 14: PT outcome of EDMCV-STBDTO technique with recent models.
18 pages, 1212 KiB  
Article
To Sustainably Ride or Not to Ride: Examining the Green Consumption Intention of Ride-Hailing Services in the Sharing Economy by University Students
by Muhammad Ishfaq Khan, Syed Afzal Moshadi Shah, Mudassar Ali and Abdullah Faisal Al Naim
Sustainability 2024, 16(20), 9047; https://doi.org/10.3390/su16209047 (registering DOI) - 18 Oct 2024
Abstract
An increase in ride-hailing services in the sharing economy can help reduce the number of vehicles on the road, which will lead to a decrease in air and noise pollution, an improvement in environmental conditions, a decrease in travel costs, and an increase in social benefits to travelers. Hence, there is a great need to examine consumers' intention to use ride-hailing services in the sharing economy. The current study aims to examine the green consumption intention of eco-friendly services as an outcome of environmental responsibility and environmental knowledge. It also attempts to examine the serial mediation of green concern and value co-creation, and the mediated moderation of social support, as an explanatory mechanism of the green consumption intention of eco-friendly services. The research design was cross-sectional and deductive. The respondents were registered university students in Islamabad who were active consumers of major ride-hailing services in Pakistan, i.e., Uber, Careem, Uplift, InDriver, B4U Cabes, and SUVL. A total of 402 responses were gathered using purposive sampling. Partial Least Squares Structural Equation Modeling (PLS-SEM) in SmartPLS was used to evaluate the reliability of the measurement instruments and the validity of the research model. The results showed that environmental responsibility and knowledge positively and significantly affect motivation to engage in green consumption. Furthermore, environmental concern and value co-creation partially mediate the proposed relationship. In addition, social support moderates the association between green concern and value co-creation such that it strengthens the connection. The findings add to the existing literature and have managerial applications, along with limitations and future research directions. Full article
(This article belongs to the Special Issue Behavioural Approaches to Promoting Sustainable Transport Systems)
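The mediation logic this study tests (e.g. environmental knowledge acting on intention through green concern) can be illustrated with the textbook single-mediator decomposition: the indirect effect is the product of the X→M slope and the M→Y slope controlling for X. This toy OLS version is only a sketch of the idea; the study itself uses PLS-SEM with serial mediators and a moderator:

```python
import numpy as np

def simple_mediation(x, m, y):
    """Estimate a single-mediator model with ordinary least squares.

    a: effect of X on the mediator M;  b: effect of M on Y controlling
    for X.  The indirect (mediated) effect is a * b.  This is a toy
    stand-in for the serial mediation the study estimates with PLS-SEM.
    """
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]      # slope of M ~ X
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]      # slope of Y ~ M | X
    return a * b
```

With simulated data where M carries half of X and Y responds only to M, the estimated indirect effect recovers the product of the generating slopes.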
Figures:

Figure 1: Theoretical framework.
Figure 2: Direct relationship effect analysis.
Figure 3: Indirect relationship effect analysis.
25 pages, 15710 KiB  
Article
TG-PGAT: An AIS Data-Driven Dynamic Spatiotemporal Prediction Model for Ship Traffic Flow in the Port
by Jianwen Ma, Yue Zhou, Yumiao Chang, Zhaoxin Zhu, Guoxin Liu and Zhaojun Chen
J. Mar. Sci. Eng. 2024, 12(10), 1875; https://doi.org/10.3390/jmse12101875 (registering DOI) - 18 Oct 2024
Abstract
Accurate prediction of ship traffic flow is essential for developing intelligent maritime transportation systems. To address the complexity of ship traffic flow data in the port and the challenges of capturing its dynamic spatiotemporal dependencies, a dynamic spatiotemporal model called Temporal convolutional network-bidirectional Gated recurrent unit-Pearson correlation coefficient-Graph Attention Network (TG-PGAT) is proposed for predicting traffic flow in port waters. This model extracts spatial features of traffic flow by combining the adjacency matrix and spatial dynamic coefficient correlation matrix within the Graph Attention Network (GAT) and captures temporal features through the concatenation of the Temporal Convolutional Network (TCN) and Bidirectional Gated Recurrent Unit (BiGRU). The proposed TG-PGAT model demonstrates higher prediction accuracy and stability than other classic traffic flow prediction methods. Experimental results from multiple angles, such as ablation experiments and robustness tests, further validate the critical role and strong noise resistance of the different modules in the TG-PGAT model. Visualization results demonstrate that this model not only exhibits significant predictive advantages in densely trafficked areas of the port but also outperforms other models in surrounding areas with sparse traffic flow data. Full article
(This article belongs to the Special Issue Management and Control of Ship Traffic Behaviours)
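The spatial side of TG-PGAT combines a static adjacency matrix with a data-driven Pearson correlation matrix, so the graph attention layer can attend to node pairs whose flow series co-vary even when they are not physically adjacent. A minimal sketch of that combination; the threshold and the max-merge rule are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

def dynamic_spatial_matrix(flows, adj, threshold=0.5):
    """Merge a static adjacency matrix with a Pearson-correlation matrix.

    flows: (T, N) traffic-flow series for N grid nodes; adj: (N, N) 0/1
    adjacency.  Node pairs whose flow series correlate above `threshold`
    in absolute value are linked even if not spatially adjacent,
    mirroring the idea of TG-PGAT's spatial dynamic coefficient
    correlation matrix.
    """
    corr = np.corrcoef(np.asarray(flows, float).T)   # (N, N) Pearson matrix
    dyn = (np.abs(corr) >= threshold).astype(float)  # thresholded links
    np.fill_diagonal(dyn, 0.0)                       # no self-loops
    return np.maximum(np.asarray(adj, float), dyn)   # union of both graphs
```

The resulting matrix can then mask or weight the attention coefficients inside a GAT layer.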
Figures:

Figure 1: Gridded results of the study waters.
Figure 2: Spatial influencing factors of ship traffic flow in port waters.
Figure 3: VMD time-series decomposition of ship traffic flow in the port.
Figure 4: Thermal map of spatial nodes related to ship traffic flow in the port.
Figure 5: Architecture of the TG-PGAT model.
Figure 6: The calculation of the spatial attention coefficient of ship traffic flow in the port using GAT.
Figure 7: Fusion strategy for spatial features of ship traffic flow in the port.
Figure 8: The architecture of the TCN model for extracting time-series features of ship traffic flow in the port (a represents the TCN neural network architecture, b the residual block, and c the dilated causal convolution).
Figure 9: Extraction steps of temporal features of ship traffic flow in the port using BiGRU.
Figure 10: Loss values of different loss functions.
Figure 11: Loss values of different optimizers.
Figure 12: Error values of different random dropout parameters. (a) MAE; (b) RMSE.
Figure 13: Training set effect of the TG-PGAT model.
Figure 14: Testing set effect of the TG-PGAT model.
Figure 15: Distribution of error indicators for each model under different prediction durations. (a) 1 h; (b) 2 h; (c) 3 h.
Figure 16: The training process of the ablation experiment.
Figure 17: Comparison of error indicators for prediction effects in ablation experiments. (a) MAE error; (b) RMSE error; (c) MAPE error.
Figure 18: Variations in evaluation indicators after adding Gaussian noise for different prediction durations.
Figure 19: Comparison of traffic flow prediction by different models at various temporal nodes within a day. (a) node x5y5; (b) node x10y4.
Figure 20: Distribution of traffic flow prediction error values by different models at various spatial nodes in port waters. (a) CNN-LSTM; (b) SDSTGNN; (c) STA-BiLSTM; (d) TG-PGAT.
23 pages, 3210 KiB  
Article
Limb Temperature Observations in the Stratosphere and Mesosphere Derived from the OMPS Sensor
by Pedro Da Costa Louro, Philippe Keckhut, Alain Hauchecorne, Mustapha Meftah, Glen Jaross and Antoine Mangin
Remote Sens. 2024, 16(20), 3878; https://doi.org/10.3390/rs16203878 (registering DOI) - 18 Oct 2024
Abstract
Molecular scattering (Rayleigh scattering) has been extensively used from the ground with lidars and from space to observe the limb, thereby deriving vertical temperature profiles between 30 and 80 km. In this study, we investigate how temperature can be measured using the new Ozone Mapping and Profiler Suite (OMPS) sensor, aboard the Suomi NPP and NOAA-21 satellites. The OMPS consists of three instruments whose main purpose is to study the composition of the stratosphere. One of these, the Limb Profiler (LP), measures the radiance of the limb of the middle atmosphere (stratosphere and mesosphere, 12 to 90 km altitude) at wavelengths from 290 to 1020 nm. This new data set has been used with a New Simplified Radiative Transfer Model (NSRTM) to derive temperature profiles with a vertical resolution of 1 km. To validate the method, the OMPS-derived temperature profiles were compared with data from four ground-based lidars and the ERA5 and MSIS models. The results show that OMPS and the lidars are in agreement within a range of about 5 K from 30 to 80 km. Comparisons with the models also show similar results, except for ERA5 beyond 50 km. We investigated various sources of bias, such as different attenuation sources, which can produce errors of up to 120 K in the UV range, instrumental errors around 0.8 K and noise problems of up to 150 K in the visible range for OMPS. This study also highlighted the interest in developing a new miniaturised instrument that could provide real-time observation of atmospheric vertical temperature profiles using a constellation of CubeSats with our NSRTM. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
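The Rayleigh technique this abstract builds on recovers temperature from a relative density profile by seeding a temperature at the top of the profile (e.g. from MSIS) and integrating the hydrostatic equation downward; the absolute radiance calibration cancels out. A simplified sketch of that integration with rounded constants and an illustrative function name (the NSRTM itself additionally models attenuation by O3, NO2, and aerosols):

```python
import numpy as np

# Hydrostatic equilibrium plus the ideal gas law give
#   d(rho * T)/dz = -(M g / R) * rho,
# so T(z) * rho(z) = T(z_top) * rho(z_top) + (M g / R) * integral_z^{z_top} rho dz'.
R = 8.314   # J mol^-1 K^-1, gas constant
M = 0.029   # kg mol^-1, mean molar mass of air
g = 9.5     # m s^-2, rounded mid-atmosphere gravity (an assumption here)

def temperature_from_density(z, rho, T_top):
    """z: altitudes in m (increasing); rho: relative density; T_top: seed in K."""
    T = np.empty_like(rho)
    T[-1] = T_top
    for i in range(len(z) - 2, -1, -1):           # integrate top -> bottom
        dz = z[i + 1] - z[i]
        # trapezoidal estimate of the density integral over one layer
        T[i] = (T[i + 1] * rho[i + 1]
                + M * g / R * 0.5 * (rho[i] + rho[i + 1]) * dz) / rho[i]
    return T
```

A quick consistency check: for an isothermal atmosphere rho = exp(-z/H) with scale height H = R*T/(M*g), the retrieval returns the seed temperature at every altitude.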
Figures:

Figure 1: In our case, the green part of the profile is unused, the red part is the profile measured by OMPS, and the violet part is simulated using an inverse exponential based on the red part. The initialisation altitude marks the 'boundary' between these two parts.
Figure 2: Example of the effect of noise correction on a daily profile in relation to the position of the site on La Réunion where the lidar is located. Temperature inversions are performed using several wavelength bands available on the OMPS instrument. On the left are the profiles with noise estimated from the latest channels; on the right are the profiles with noise estimated using the MSIS model as described in Section 2.
Figure 3: Representation of the horizontal resolution for three tangent heights; each point represents a layer measured by OMPS. As an example with the blue curve, the first point at 113 km represents the distance observed by OMPS between the layer observed, here 30.5 km, and the next layer at 31.5 km; the second point at 43 km still represents the distance observed by OMPS at 30.5 km but this time between the layers at 31.5 and 32.5 km; and so on.
Figure 4: Diagram of the path taken by the radiance in each layer observed by OMPS. At each layer, the radiance is scattered by air molecules and absorbed by ozone and nitrogen dioxide molecules. R_Couche_i represents the radius from the Earth to the given layer, and D_c-Sat_i represents the distance that the radiance in layer i crosses to reach the satellite. Similarly, D_sol-c_i represents the distance travelled by the radiance arriving from the Sun to the layer.
Figure 5: The left figure shows the correction applied to the radiance profile per cm at different wavelengths, while the right figure shows the effect of this correction in kelvin on the temperature profiles at the same wavelengths.
Figure 6: Share of Rayleigh scattering in total signal attenuation (%) at different wavelengths. This figure should be read in conjunction with Figures 5, 8 and 10, and provides a better understanding of the roles of Rayleigh scattering and O3 and NO2 absorption in the corrections applied to the radiance profile and, by extension, to the temperature profiles.
Figure 7: Example of an O3 profile measured by OMPS in the middle atmosphere.
Figure 8: Share of O3 absorption in total signal attenuation (%) at different wavelengths.
Figure 9: Example of a NO2 profile in the middle atmosphere. WACCM gives an average profile per month for each year.
Figure 10: Share of NO2 absorption in total signal attenuation (%) at different wavelengths.
Figure 11: The upper figure shows a temperature profile obtained by OMPS without correction of the radiance profile; the lower figure is the same temperature profile with correction of the radiance profile by our NSRTM. The temperature profiles of the ERA5 and MSIS 2.0 models and the lidar profile (in this case, the Réunion lidar) obtained on the same day show the extent of the correction.
Figure 12: Example of Earth limb radiances measured by OMPS on 13 August 2012 at 45°N latitude [41].
Figure 13: Annual temperature difference between OMPS and the lidar on Réunion Island at different wavelengths.
Figure 14: Annual temperature difference between OMPS and the MSIS 2.0 model at different wavelengths. The effect of aerosols on temperature profiles can be seen between 20 and 30 km. As the wavelength increases, aerosol scattering takes precedence over molecular scattering. Lidars do not provide temperature data below 30 km, so this phenomenon is shown using MSIS differences.
Figure 15: Scatterplot and mean standard deviation of the temperature inversion method with OMPS obtained with each wavelength as a function of altitude.
Figure 16: Comparisons of OMPS temperature profiles with ERA5, MSIS 2.0 and HOH. On the left are the differences between OMPS and the various sources compared; in the centre, the standard deviation; and on the right, the uncertainty on the standard deviation. In order from first to last line, the study sites are OHP, RUN, MLO and HOH.
Figure 17: Deviation in temperature between OMPS and OHP. The red zone represents the calculated expected differences. The blue and yellow curves represent the temperature differences between OMPS and OHP at 2 and 3 standard deviations.
19 pages, 4338 KiB  
Article
Discovering Electric Vehicle Charging Locations Based on Clustering Techniques Applied to Vehicular Mobility Datasets
by Elmer Magsino, Francis Miguel M. Espiritu and Kerwin D. Go
ISPRS Int. J. Geo-Inf. 2024, 13(10), 368; https://doi.org/10.3390/ijgi13100368 (registering DOI) - 18 Oct 2024
Abstract
With the proliferation of vehicular mobility traces from inexpensive on-board sensors and smartphones, using them to further understand road movements has become easy. These huge numbers of vehicular traces can be used to determine where to enhance road infrastructure, such as the deployment of electric vehicle (EV) charging stations. As more EVs ply today's roads, driving anxiety is minimized by the presence of sufficient charging stations. By correctly extracting the various transportation parameters from a given dataset, one can design an adequate and adaptive EV charging network that provides comfort and convenience for the movement of people and goods from one point to another. In this study, we determined possible EV charging station locations based on an urban city's vehicular capacity distribution obtained from taxi and ride-hailing GPS mobility traces. To achieve this, we first transformed the dynamic vehicular environment, based on vehicular capacity, into its equivalent urban single snapshot. We then obtained the various traffic zone distributions by initially utilizing k-means clustering to allow flexibility in the total number of desired traffic zones in each dataset. In each traffic zone, iterative clustering techniques employing Density-Based Spatial Clustering of Applications with Noise (DBSCAN) or clustering by fast search and find of density peaks (CFS) revealed the area separations where EV chargers were needed. Finally, to find the exact locations of the EV charging stations, we ran k-means a final time to locate centroids, depending on the constraint of how many EV chargers were needed. Extensive simulations revealed the strengths and weaknesses of the clustering methods when applied to our datasets. We utilized the silhouette and Calinski–Harabasz indices to measure the validity of cluster formations. We also measured the inter-station distances to understand the closeness of the EV charger locations. Our study shows how CFS + k-means clustering techniques are able to pinpoint EV charger locations. However, when utilizing DBSCAN initially, the results did not present any notable outcome. Full article
(This article belongs to the Topic Spatial Decision Support Systems for Urban Sustainability)
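The pipeline described above uses k-means twice: once to carve the city into traffic zones, and once more, after DBSCAN or CFS refinement, to place the final charger centroids. A bare-bones sketch of Lloyd's k-means algorithm; the study's actual k values, datasets, and the intermediate density-based step are not reproduced here:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    centroid update.  Used here both to form traffic zones and to place
    final EV-charger locations; DBSCAN/CFS refinement is omitted."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    centers = pts[rng.choice(len(pts), k, replace=False)]  # init from data
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((pts[:, None] - centers) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned points
        for c in range(k):
            if (labels == c).any():
                centers[c] = pts[labels == c].mean(axis=0)
    return centers, labels
```

On two well-separated GPS blobs the returned centers settle on the blob means, which is what makes the final pass usable for pinpointing charger coordinates.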
Figures:

Figure 1: Block diagram of the study.
Figure 2: The urban map is uniformly partitioned to reveal different g_{p,q} and its utility network parameters at sampling time t = i*T_S. Vehicles of the same color represent their respective trajectories.
Figure 3: Using vehicular capacity, the dynamic urban vehicular map is transformed into a snapshot where darker colors represent low vehicular capacity and lighter colors show places with high vehicular capacity [41].
Figure 4: Example of k-means clustering applied to randomly generated GPS data (a). When k = 4, the four traffic zones are shown in (b). Z_1 is represented by the magenta color (upper left), Z_2 by the red color (upper right), Z_3 by the green color (lower left), and Z_4 by the blue color (lower right).
Figure 5: (a) Example in Figure 4a clustered using DBSCAN, producing three clusters with many outliers represented by −1. (b–d) k-means clustering results with k = 4 for each DBSCAN cluster, superimposed onto the original data to show how DBSCAN performed reduction of the original dataset. Colors represent the cluster determined by DBSCAN in (a) and k-means clustering in (b)–(d).
Figure 6: Example in Figure 4a clustered using CFS, having the two outliers as the main traffic zone within the four clusters derived from k-means clustering. (a) Original data, (b) distance vs. density plot, and (c) two chosen outliers from (d) as the cluster center, represented by the blue diamond.
Figure 7: The hourly vehicular capacity of (a) BJG, (b) JKT, and (c) SIN.
Figure 8: The hourly vehicular speed of (a) BJG, (b) JKT, and (c) SIN.
Figure 9: The spatiotemporal stable vehicular capacity network characteristic snapshot of (a) BJG, (b) JKT, and (c) SIN. Lighter colors depict high values compared to dark colors.
Figure 10: The spatiotemporal stable vehicular speed network characteristic snapshot of (a) BJG, (b) JKT, and (c) SIN. Lighter colors depict high values compared to dark colors.
Figure 11: k-means clustering results for k = 50, 300, and 500 for BJG, JKT, and SIN. The first row is k = 50, the second row k = 300, and the third row k = 500. The first column clusters BJG, the second JKT, and the third SIN. '×' denotes the cluster center of the colored cluster formed by k-means.
Figure 12: Silhouette evaluation after performing k-means clustering for k = 50, 300, and 500 for BJG, JKT, and SIN. The first row is k = 50, the second row k = 300, and the third row k = 500. The first column is BJG, the second JKT, and the third SIN.
Figure 13: Inter-cluster distances for k = 50 (first row) and k = 500 (second row) for (a) BJG (first column), (b) JKT (second column), and (c) SIN (third column).
Full article ">Figure 14
<p>Determining the locations of EV chargers from partition 30 of the SIN dataset calculated by CFS. The upper left shows the decision graph, the upper right shows the clusters when two outliers were chosen, represented by cyan and red cluster groups, and the second row shows clusters 1 and 2 further divided into four subareas, represented by four different colors, using <span class="html-italic">k</span>-means. “×” denotes EV charger locations.</p>
Full article ">Figure 15
<p>Determining the most number of allowable EV charging stations in clusters (<b>a</b>) 1 and (<b>b</b>) 2 of partition 30.</p>
Full article ">Figure 16
<p>Using a secondary <span class="html-italic">k</span>-means partitioning with <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math> (<b>upper left</b>, first row) and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> (<b>upper right</b>, first row) on an initial <span class="html-italic">k</span>-means cluster. Colors represent the cluster to which each mobility trace belongs. The second row shows the silhouette and Calinski–Harabasz indices to determine the optimal number of clusters in this partition.</p>
Full article ">
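The captions above (Figures 4, 11, 14, and 16) repeatedly apply <span class="html-italic">k</span>-means to partition GPS traces into traffic zones. As a rough, self-contained illustration of the underlying algorithm, not the authors' implementation, the following pure-Python sketch runs Lloyd's iteration on invented 2-D points:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm for 2-D points: returns (centers, labels)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize centers from the data
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        for i, (x, y) in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2,
            )
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centers[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centers, labels

# Toy example: two well-separated blobs should map to two clusters.
pts = [(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1), (5.0, 5.0), (5.2, 4.9), (4.8, 5.1)]
centers, labels = kmeans(pts, k=2)
```

DBSCAN (Figure 5) differs in that it is density-based and needs no preset <span class="html-italic">k</span>, instead labelling sparse points as outliers (−1).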
34 pages, 8862 KiB  
Article
A Novel Detection Transformer Framework for Ship Detection in Synthetic Aperture Radar Imagery Using Advanced Feature Fusion and Polarimetric Techniques
by Mahmoud Ahmed, Naser El-Sheimy and Henry Leung
Remote Sens. 2024, 16(20), 3877; https://doi.org/10.3390/rs16203877 - 18 Oct 2024
Abstract
Ship detection in synthetic aperture radar (SAR) imagery faces significant challenges due to the limitations of traditional methods, such as convolutional neural network (CNN) and anchor-based matching approaches, which struggle with accurately detecting smaller targets as well as adapting to varying environmental conditions. [...] Read more.
Ship detection in synthetic aperture radar (SAR) imagery faces significant challenges due to the limitations of traditional methods, such as convolutional neural network (CNN) and anchor-based matching approaches, which struggle with accurately detecting smaller targets as well as adapting to varying environmental conditions. These methods, relying on either intensity values or single-target characteristics, often fail to enhance the signal-to-clutter ratio (SCR) and are prone to false detections due to environmental factors. To address these issues, a novel framework is introduced that leverages the detection transformer (DETR) model along with advanced feature fusion techniques to enhance ship detection. This feature enhancement DETR (FEDETR) module manages clutter and improves feature extraction through preprocessing techniques such as filtering, denoising, and applying maximum and median pooling with various kernel sizes. Furthermore, it combines metrics like the line spread function (LSF), peak signal-to-noise ratio (PSNR), and F1 score to predict optimal pooling configurations and thus enhance edge sharpness, image fidelity, and detection accuracy. Complementing this, the weighted feature fusion (WFF) module integrates polarimetric SAR (PolSAR) methods such as Pauli decomposition, coherence matrix analysis, and feature volume and helix scattering (Fvh) components decomposition, along with FEDETR attention maps, to provide detailed radar scattering insights that enhance ship response characterization. Finally, by integrating wave polarization properties, the ability to distinguish and characterize targets is augmented, thereby improving SCR and facilitating the detection of weakly scattered targets in SAR imagery. Overall, this new framework significantly boosts DETR’s performance, offering a robust solution for maritime surveillance and security. Full article
(This article belongs to the Special Issue Target Detection with Fully-Polarized Radar)
Show Figures

Figure 1
<p>Flowchart of the proposed ship detection in SAR imagery.</p>
Full article ">Figure 2
<p>CNN preprocessing model.</p>
Full article ">Figure 3
<p>DETR pipeline overview [<a href="#B52-remotesensing-16-03877" class="html-bibr">52</a>].</p>
Full article ">Figure 4
<p>Performance of FEDETR for two images from the test datasets SSDD and SAR Ship, including Gaofen-3 (<b>a1</b>–<b>a8</b>) and Sentinel-1 images (<b>b1</b>–<b>b8</b>) with different polarizations and resolutions. Ground truths, detection results, false detections, and missed detections are indicated with green, red, yellow, and blue boxes, respectively.</p>
Full article ">Figure 5
<p>Experimental results for ship detection in SAR images across four distinct regions: Onshore1, Onshore2, Offshore1, and Offshore2. (<b>a</b>) are the ground truth images; (<b>b</b>–<b>e</b>) are the detection results for DETR using VV and VH (DETR_VV, DETR_VH) as well as FEDETR using VV and VH (FEDETR_VV, FEDETR_VH) polarizations, respectively. Ground truths, detection results, false detection results, and missed detection results are marked with green, red, yellow, and blue boxes.</p>
Full article ">Figure 6
<p>Experimental results for ship detection in SAR images across four regions: Onshore1, Onshore2, Offshore1, and Offshore2. (<b>a</b>) are the ground truth images and (<b>b</b>,<b>c</b>) are the predicted results from FEDETR with optimal pooling and kernel size and the WFF method, respectively. Ground truths, detection results, false detections, and missed detections are marked with green, red, yellow, and blue boxes, respectively.</p>
Full article ">Figure 7
<p>Correlation matrix analyzing the relationship between kernel Size, LSF, and PSNR for max pooling (<b>a</b>) and median pooling (<b>b</b>) on SSD and SAR Ship datasets. Validation of FEDETR module effectiveness.</p>
Full article ">Figure 8
<p>LSF of images with different pooling types and kernel sizes. Panels (<b>a1</b>–<b>a4</b>) depict LSF images after max pooling, while panels (<b>a5</b>–<b>a8</b>) show LSF images after median pooling with kernel sizes 3, 5, 7, and 9, respectively, for Gaofen-3 HH images from the SAR Ship dataset. Panels (<b>b1</b>–<b>b4</b>) illustrate LSF images after max pooling and panels (<b>b5</b>–<b>b8</b>) show LSF images after median pooling for images from the SSD dataset.</p>
Full article ">Figure 9
<p>Backscattering intensity in VV and VH polarizations and ship presence across four regions. (<b>a1</b>,<b>a2</b>) Backscattering intensity in VV and VH polarizations for Onshore1; (<b>a3</b>,<b>a4</b>) backscattering intensity for ships in Onshore1; (<b>b1</b>,<b>b2</b>) backscattering intensity in VV and VH polarizations for Onshore2; (<b>b3</b>,<b>b4</b>) backscattering intensity for ships in Onshore2; (<b>c1</b>,<b>c2</b>) backscattering intensity in VV and VH polarizations for Offshore1; (<b>c3</b>,<b>c4</b>) backscattering intensity for ships in Offshore1; (<b>d1</b>,<b>d2</b>) backscattering intensity in VV and VH polarizations for Offshore2; and (<b>d3</b>,<b>d4</b>) backscattering intensity for ships in Offshore2. In each subfigure, the x-axis represents pixel intensity, and the y-axis represents frequency.</p>
Full article ">Figure 10
<p>LSF and PSNR Comparisons for Onshore and Offshore Areas (Onshore1 (<b>a</b>,<b>b</b>), Onshore2 (<b>c</b>,<b>d</b>), Offshore1 (<b>e</b>,<b>f</b>), Offshore2 (<b>g</b>,<b>h</b>)) Using VV and VH Polarization with Median and Max Pooling.</p>
Full article ">Figure 11
<p>Visual comparison of max and median pooling with different kernel sizes on onshore and offshore SAR imagery for VV and VH polarizations: (<b>a1</b>,<b>a2</b>) Onshore1 VV (max kernel size 3; median kernel size 3); (<b>a3</b>,<b>a4</b>) Onshore1 VV (median kernel size 5); (<b>b1</b>,<b>b2</b>) Onshore2 VV (max kernel size 3); (<b>b3</b>,<b>b4</b>) Onshore2 VH (median kernel size 5); (<b>c1</b>,<b>c2</b>) Offshore1 VV (max kernel size 7; median kernel size 7); (<b>c3</b>,<b>c4</b>) Offshore1 VH (max kernel size 3; median kernel size 3); (<b>d1</b>,<b>d2</b>) Offshore2 VV (max kernel size 5; median kernel size 5); (<b>d3</b>,<b>d4</b>) Offshore2 VH (max kernel size 5; median kernel size 5).</p>
Full article ">Figure 12
<p>Experimental results for ship detection in SAR images across four regions: (<b>a</b>) Onshore1, (<b>b</b>) Onshore2, (<b>c</b>) Offshore1, and (<b>d</b>) Offshore2. The figure illustrates the effectiveness of the Pauli decomposition method in reducing noise and distinguishing ships from the background. Ships are marked in pink, while noise clutter is shown in green.</p>
Full article ">Figure 13
<p>Signal-to-clutter ratio (SCR) comparisons for different polarizations across various scenarios. VV polarization is in blue, VH polarization in orange, and Fvh in green.</p>
Full article ">Figure 14
<p>Otsu’s thresholding on four regions for Pauli and FVH images: (<b>a1</b>–<b>a4</b>) thresholding for Onshore1, Onshore2, Offshore1, and Offshore2 for Pauli images; (<b>b1</b>–<b>b4</b>) thresholding for the same regions for Fvh images.</p>
Full article ">Figure 15
<p>Visualization of FEDETR attention maps, Pauli decomposition, Fvh feature maps, and WFF results for Onshore1 (<b>a1</b>–<b>a4</b>), Onshore2 (<b>b1</b>–<b>b4</b>), Offshore1 (<b>c1</b>–<b>c4</b>), and Offshore2 (<b>d1</b>–<b>d4</b>).</p>
Full article ">
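The abstract above states that FEDETR selects pooling configurations using metrics including PSNR. As a hedged sketch of just that metric (the paper's full pipeline, with its LSF and F1 terms, is not reproduced here), PSNR between a reference image and a processed one is:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    (given here as flat lists of pixel values)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 4-pixel "images": a uniform error of 5 grey levels gives MSE = 25.
ref = [100, 120, 140, 160]
noisy = [105, 125, 145, 165]
value = psnr(ref, noisy)  # 10*log10(255^2 / 25), just over 34 dB
```

Higher PSNR after pooling/denoising indicates better image fidelity, which is how the module ranks kernel sizes.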
14 pages, 316 KiB  
Article
Noise Transfer Approach to GKP Quantum Circuits
by Timothy C. Ralph, Matthew S. Winnel, S. Nibedita Swain and Ryan J. Marshman
Entropy 2024, 26(10), 874; https://doi.org/10.3390/e26100874 - 18 Oct 2024
Abstract
The choice between the Schrödinger and Heisenberg pictures can significantly impact the computational resources needed to solve a problem, even though they are equivalent formulations of quantum mechanics. Here, we present a method for analysing Bosonic quantum circuits based on the Heisenberg picture [...] Read more.
The choice between the Schrödinger and Heisenberg pictures can significantly impact the computational resources needed to solve a problem, even though they are equivalent formulations of quantum mechanics. Here, we present a method for analysing Bosonic quantum circuits based on the Heisenberg picture which allows, under certain conditions, a useful factoring of the evolution into signal and noise contributions, in a similar way to what can be achieved with classical communication systems. We provide examples which suggest that this approach may be particularly useful in analysing quantum computing systems based on Gottesman–Kitaev–Preskill (GKP) qubits. Full article
(This article belongs to the Special Issue Quantum Optics: Trends and Challenges)
Show Figures

Figure 1
<p>Example <span class="html-italic">q</span> quadrature probability distribution for the cat state in Equation (<a href="#FD13-entropy-26-00874" class="html-disp-formula">13</a>) with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>Example <span class="html-italic">q</span> quadrature probability distribution for the GKP state in Equation (20) with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mo>Δ</mo> <mn>2</mn> </msup> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Example <span class="html-italic">q</span> quadrature probability distribution for the GKP state in Equation (20) with <math display="inline"><semantics> <mrow> <mi>μ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mo>Δ</mo> <mn>2</mn> </msup> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> but rotated through a quadrature angle of <math display="inline"><semantics> <mrow> <mi>π</mi> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math>. This is equal to the “−” GKP state or, equivalently, the <span class="html-italic">p</span> quadrature probability distribution of the “1” state.</p>
Full article ">Figure 4
<p>Average position quadrature variance <math display="inline"><semantics> <msub> <mi>V</mi> <mi>q</mi> </msub> </semantics></math> as a function of the parameter <math display="inline"><semantics> <mi>α</mi> </semantics></math> for the cat state defined in Equation (<a href="#FD13-entropy-26-00874" class="html-disp-formula">13</a>). Notably, <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mi>q</mi> </msub> <mo>&lt;</mo> <mn>1</mn> </mrow> </semantics></math> for small values of <math display="inline"><semantics> <mi>α</mi> </semantics></math>, which can be attributed to clipping effects.</p>
Full article ">Figure 5
<p>Average position quadrature variance <math display="inline"><semantics> <msub> <mi>V</mi> <mi>q</mi> </msub> </semantics></math> as a function of the squeezing parameter <math display="inline"><semantics> <msup> <mo>Δ</mo> <mn>2</mn> </msup> </semantics></math> for GKP logical states. The computational-basis states are defined in Equation (20), and the dual-basis states are simply rotated versions of the computational-basis states. The dashed line represents <math display="inline"><semantics> <msup> <mo>Δ</mo> <mn>2</mn> </msup> </semantics></math>. <math display="inline"><semantics> <msub> <mi>V</mi> <mi>q</mi> </msub> </semantics></math> matches <math display="inline"><semantics> <msup> <mo>Δ</mo> <mn>2</mn> </msup> </semantics></math> for small values of <math display="inline"><semantics> <mo>Δ</mo> </semantics></math> but deviates in a state-dependent way for larger values. Plotting <math display="inline"><semantics> <msub> <mi>V</mi> <mi>p</mi> </msub> </semantics></math> follows a similar approach, as the <span class="html-italic">p</span> quadrature is simply a rotation, with the computational and dual-basis states switching roles.</p>
Full article ">Figure 6
<p>Simple teleportation circuit with CZ gates to interact with the modes and feedforward of momentum measurements of mode 1 as imaginary displacements of mode 3 and momentum measurements of mode 2 as real displacements of mode 3. The measurement of mode 1 is represented by the operator <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mn>1</mn> <mi>o</mi> </mrow> </msub> </semantics></math>, but if error correction is being implemented, then it is <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mi>c</mi> <mn>1</mn> <mi>o</mi> </mrow> </msub> </semantics></math>, which is fed forward. Similarly, the measurement of mode 2 is represented by the operator <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mn>2</mn> <mi>o</mi> </mrow> </msub> </semantics></math>, but if error correction is being implemented, then it is <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mi>c</mi> <mn>2</mn> <mi>o</mi> </mrow> </msub> </semantics></math>, which is fed forward.</p>
Full article ">Figure 7
<p>The simple teleportation error correction circuit of <a href="#entropy-26-00874-f003" class="html-fig">Figure 3</a> but with loss errors included for all components. The loss is modelled with beamsplitters, where the transmission of the beamsplitters represents the efficiency of the corresponding components. Additional components (loss and linear amplification of mode 3) are indicated in blue. These components, along with tailored feedforward gains, allow the circuit to still implement error correction. The measurement of mode 1 is represented by the operator <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mn>1</mn> <mi>o</mi> </mrow> </msub> </semantics></math>, but if error correction is being implemented, then it is <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mi>c</mi> <mn>1</mn> <mi>o</mi> </mrow> </msub> </semantics></math> which is fed forward. Similarly, the measurement of mode 2 is represented by the operator <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mn>2</mn> <mi>o</mi> </mrow> </msub> </semantics></math>, but if error correction is being implemented, then it is <math display="inline"><semantics> <msub> <mover accent="true"> <mi>p</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mi>c</mi> <mn>2</mn> <mi>o</mi> </mrow> </msub> </semantics></math> which is fed forward.</p>
Full article ">
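The captions above track quadrature variances such as <math display="inline"><semantics> <msub> <mi>V</mi> <mi>q</mi> </msub> </semantics></math> through the teleportation circuits. In the Heisenberg-picture factoring the abstract describes, the output quadrature of a Gaussian element can, under the stated conditions, be split into a signal term and an added-noise term; schematically (a generic form for illustration, not the paper's exact expressions):

```latex
\hat{q}_{\mathrm{out}} = G\,\hat{q}_{\mathrm{in}} + \hat{N}_q,
\qquad
V_{q,\mathrm{out}} = G^{2}\,V_{q,\mathrm{in}} + V_{N},
```

where \(G\) is the effective gain and \(\hat{N}_q\) collects the noise operators contributed by ancillae, loss, and measurement feedforward. Roughly speaking, the tailored feedforward gains and the amplification of mode 3 in Figure 7 are chosen so that the signal term passes through undistorted while \(V_N\) remains small enough for the GKP code to correct.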
22 pages, 14012 KiB  
Article
Towards Advancing Real-Time Railroad Inspection Using a Directional Eddy Current Probe
by Meirbek Mussatayev, Ruby Kempka and Mohammed Alanesi
Sensors 2024, 24(20), 6702; https://doi.org/10.3390/s24206702 - 18 Oct 2024
Abstract
In the field of railroad safety, the effective detection of surface cracks is critical, necessitating reliable, high-speed, non-destructive testing (NDT) methods. This study introduces a hybrid Eddy Current Testing (ECT) probe, specifically engineered for railroad inspection, to address the common issue of “lift-off [...] Read more.
In the field of railroad safety, the effective detection of surface cracks is critical, necessitating reliable, high-speed, non-destructive testing (NDT) methods. This study introduces a hybrid Eddy Current Testing (ECT) probe, specifically engineered for railroad inspection, to address the common issue of “lift-off noise” due to varying distances between the probe and the test material. Unlike traditional ECT methods, this probe integrates transmit and differential receiver (Tx-dRx) coils, aiming to enhance detection sensitivity and minimise the lift-off impact. The study optimises ECT probes employing different transmitter coils, emphasising three main objectives: (a) quantitatively evaluating each probe using signal-to-noise ratio (SNR) and outlining a real-time data-processing algorithm based on SNR methodology; (b) exploring the frequency range proximal to the electrical resonance of the receiver coil; and (c) examining sensitivity variations across varying lift-off distances. The experimental outcomes indicate that the newly designed probe with a figure-8 shaped transmitter coil significantly improves sensitivity in detecting surface cracks on railroads. It achieves an impressive SNR exceeding 100 for defects with minimal dimensions of 1 mm in width and depth. The simulation results closely align with experimental findings, validating the investigation of the optimal operational frequency and lift-off distance for selected probe performance, which are determined to be 0.3 MHz and 1 mm, respectively. The realisation of this project would lead to notable advancements in enhancing railroad safety by improving the efficiency of crack detection. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>Improved efficiency of current railroad inspection.</p>
Full article ">Figure 2
<p>Three eddy current probe configurations: (<b>a</b>) singular; (<b>b</b>) +point; and (<b>c</b>) figure-8 shaped.</p>
Full article ">Figure 3
<p>Directional EC probe design: (<b>a</b>) winding method of the +Point, (<b>b</b>) figure-8 shaped, (<b>c</b>) rectangular single transmitter, (<b>d</b>) rectangular (RCT) receiver coil dimensions (top-down view) and (<b>e</b>) resonant frequencies of different transmitter coils.</p>
Full article ">Figure 4
<p>Simplified diagram of amplifier with tuning caps and drive coils.</p>
Full article ">Figure 5
<p>Exemplar plot of identification process of proposed system.</p>
Full article ">Figure 6
<p>Schematic of the virtual scanning model.</p>
Full article ">Figure 7
<p>(<b>i</b>) Simulated mesh overview: (<b>a</b>,<b>c</b>,<b>e</b>) show the mesh with the air domain, while (<b>b</b>,<b>d</b>,<b>f</b>) provide a zoomed-in view of the transmitter coils’ positioning near the rail for the “Singular”, “+Point”, and “Figure-8” configurations, respectively. (<b>ii</b>) FEM simulations of induced eddy current flow patterns for the (<b>a</b>) singular, (<b>b</b>) plus-point, and (<b>c</b>) figure-8-shaped probes.</p>
Full article ">Figure 8
<p>(<b>a</b>) Rail track sample; (<b>b</b>) experimental set-up.</p>
Full article ">Figure 9
<p>Optimal frequency selection study results depicted in rows: (<b>i</b>) experimental raw EC data; (<b>ii</b>) FEM simulations for EC probes with transmitter coils in configurations: (<b>a</b>) single, (<b>b</b>) plus-point, and (<b>c</b>) figure-8 shaped.</p>
Full article ">Figure 10
<p>Comparative analysis of probe sensitivity across various.</p>
Full article ">Figure 11
<p>Eddy current scan data and FEM simulation results are depicted as follows: (<b>i</b>) experimental raw EC data obtained using a figure-8 shaped transmitter at lift-offs of (<b>a</b>) 0.25 mm, (<b>b</b>) 0.5 mm, and (<b>c</b>) 1 mm; (<b>ii</b>) corresponding FEM simulation outputs for the same lift-offs shown in (<b>d</b>–<b>f</b>), respectively.</p>
Full article ">Figure 12
<p>Performance assessment of EC probe with a figure-8 shaped transmitter across lift-off distances of 0.25 mm, 0.5 mm, and 1.0 mm: (<b>a</b>) separate contributions of signal (S) and noise (N) and (<b>b</b>) SNR across various frequencies; (<b>c</b>) average voltage and structural noise at optimum 0.3 MHz.</p>
Full article ">Figure 13
<p>Lift-off noise insensitivity results using the initial EC probe prototype with a +Point transmitter and two differentially hand-wound meander receiver coils: (<b>a</b>) top-down view of the probe, and (<b>b</b>) side view of the probe on the sample with varying depths and EC scan results at 0.3 MHz.</p>
Full article ">Figure 14
<p>Dimensions of the specimen and artificially drilled hole: (<b>a</b>) schematic diagram; (<b>b</b>) real photo of the specimen.</p>
Full article ">Figure 15
<p>Raw EC data with lift-off distances of 0.25, 0.5, and 1 mm at (<b>a</b>) 0.25 MHz, (<b>b</b>) 0.3 MHz, and (<b>c</b>) 0.35 MHz operational frequencies.</p>
Full article ">Figure 16
<p>Selected probe signal-to-noise ratio over defect 1 and defects 2–3, with separate noise contributions as a function of lift-off distance.</p>
Full article ">Figure 17
<p>Eddy current scan results over studs using the selected probe: (<b>a</b>) inspected sample; (<b>b</b>) raw eddy current scan results from two individual scans.</p>
Full article ">
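The probes above are ranked by SNR, with values above 100 reported for the figure-8 transmitter. Assuming the common peak-defect-signal over RMS-background-noise convention (the excerpt does not spell out the exact definition used), the ratio and its dB form are:

```python
import math

def snr_linear(defect_peak, noise_rms):
    """Linear amplitude SNR: peak defect response over RMS background noise."""
    return defect_peak / noise_rms

def snr_db(defect_peak, noise_rms):
    """The same ratio expressed in decibels (amplitude convention, 20*log10)."""
    return 20.0 * math.log10(snr_linear(defect_peak, noise_rms))

# An SNR of 100 corresponds to 40 dB.
ratio_db = snr_db(100.0, 1.0)
```

In a real-time scan this comparison would be made per defect indication, with the structural noise floor estimated from defect-free stretches of rail.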
12 pages, 753 KiB  
Review
Cardiac Troponin Levels in Patients with Chronic Kidney Disease: “Markers of High Risk or Just Noise’’?
by Eleni V. Geladari, Natalia G. Vallianou, Angelos Evangelopoulos, Petros Koufopoulos, Fotis Panagopoulos, Evangelia Margellou, Maria Dalamaga, Vassilios Sevastianos and Charalampia V. Geladari
Diagnostics 2024, 14(20), 2316; https://doi.org/10.3390/diagnostics14202316 - 18 Oct 2024
Viewed by 182
Abstract
Kidney disease is linked to the development of cardiovascular disorders, further increasing morbidity and mortality in this high-risk population. Thus, early detection of myocardial damage is imperative in order to prevent devastating cardiovascular complications within this patient group. Over the years, cardiac biomarkers [...] Read more.
Kidney disease is linked to the development of cardiovascular disorders, further increasing morbidity and mortality in this high-risk population. Thus, early detection of myocardial damage is imperative in order to prevent devastating cardiovascular complications within this patient group. Over the years, cardiac biomarkers have been identified and are now widely used in everyday clinical practice. More specifically, available data suggest that cardiac troponin and its regulatory subunits (TnT, TnI, and TnC) reflect the injury and necrosis of myocardial tissue. While cTnC is identical in cardiac and skeletal muscle, TnT and TnI constitute cardiac-specific forms of troponin, and, as such, they have been established by international societies as biomarkers of cardiac damage and diagnostic indicators for acute myocardial infarction. Elevations in the levels of both cardiac troponins (cTnT and cTnI) have been also reported in asymptomatic patients suffering from chronic kidney disease. Therefore, if abnormal, they often generate confusion among clinicians regarding the interpretation and clinical significance of their numerical values in emergency settings. The aim of this review is to explore the reasons behind elevated troponin levels in patients with chronic kidney disease and identify when these elevated levels of biomarkers indicate the need for urgent intervention, considering the high cardiovascular risk in this patient group. Full article
Show Figures

Figure 1
<p>Cardiac muscle depends on its muscle cells for contraction. Myofilament proteins, such as tropomyosin, the troponin complex, myosin, and actin, interact when an influx of calcium into the cardiomyocyte triggers contraction. When necrosis of cardiomyocytes occurs, especially involving a large number of cardiac muscle cells, troponins are released into the blood circulation, accounting for a significant rise in their blood levels. This absolute increase in the blood levels of cTnI and cTnT underlies their use as biomarkers of myocardial damage.</p>
Full article ">
21 pages, 12855 KiB  
Article
Noise Study Auralization of an Open-Rotor Engine
by Qing Zhang, Siyi Jiang, Xiaojun Yang, Yongjia Xu and Maosheng Zhu
Aerospace 2024, 11(10), 857; https://doi.org/10.3390/aerospace11100857 - 17 Oct 2024
Viewed by 274
Abstract
Based on the performance and acoustic data files of reduced-size open-rotor engines in low-speed wind tunnels, the static sound pressure level was derived by converting the 1-foot lossless spectral density into sound-pressure-level data, the background noise was removed, and the results were corrected [...] Read more.
Based on the performance and acoustic data files of reduced-size open-rotor engines in low-speed wind tunnels, the static sound pressure level was derived by converting the 1-foot lossless spectral density into sound-pressure-level data, the background noise was removed, and the results were corrected according to the environmental parameters of the low-speed wind tunnels. In accordance with the noise measurement procedures required by Annex 16 to the Convention on International Civil Aviation and Part 36 of the Civil Aviation Regulations of China, the takeoff trajectory was physically modeled; the static noise source was mapped onto the takeoff trajectory to simulate the propagation of the noise during takeoff; and the 24 one-third-octave center frequencies corresponding to the SPL data were corrected for geometrical dispersion, atmospheric absorption, and Doppler effects, so that the takeoff noise could be corrected to represent a real environment. In addition, auralization of the noise data at a 110° source pointing angle was achieved, which allows practical observers to analyze the noise characteristics. Full article
(This article belongs to the Section Aeronautics)
Show Figures

Figure 1
<p>The new-generation open-rotor engine configuration.</p>
Full article ">Figure 2
<p>Aircraft-noise-monitoring points for noise airworthiness requirements.</p>
Full article ">Figure 3
<p>Full- and reduced-thrust takeoff trajectories.</p>
Full article ">Figure 4
<p>Example of reduced-thrust takeoff trajectory noise source localization.</p>
Full article ">Figure 5
<p>Attenuation curve of the geometric dispersion effect.</p>
Full article ">Figure 6
<p>Atmospheric absorption sound attenuation curve.</p>
Full article ">Figure 7
<p>Doppler effect curve (The intersection of the blue dotted line and the Doppler effect curve is the angle at which the sound pressure level attenuation is 0).</p>
Full article ">Figure 8
<p>Source synthesis, propagation path and receiver setting.</p>
Full article ">Figure 9
<p>Broadband and monophonic filtering results.</p>
Full article ">Figure 10
<p>Broadband synthesis.</p>
Full article ">Figure 11
<p>Flight path simulated in a 3D virtual environment.</p>
Full article ">Figure 12
<p>The 110° angle noise-data audio realization point (at the red dot).</p>
Full article ">Figure 13
<p>Wind tunnel environment simulation.</p>
Full article ">Figure 14
<p>Acoustic measurement position.</p>
Full article ">Figure 15
<p>Mixed-reality environment visualization.</p>
Full article ">Figure 16
<p>Labeling of the test.</p>
Full article ">Figure 17
<p>Dynamic bar graph of the mixed-reality environment.</p>
Full article ">
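Figures 5-7 above correct the one-third-octave SPL data for geometrical dispersion, atmospheric absorption, and the Doppler effect. A minimal sketch of those corrections, using the textbook free-field forms rather than the paper's exact correction chain, is:

```python
import math

def spl_at_distance(spl_ref, r_ref, r, alpha_db_per_m=0.0):
    """Propagate an SPL measured at distance r_ref to distance r:
    spherical (geometric) spreading loses 20*log10(r/r_ref) dB,
    and atmospheric absorption loses alpha * (r - r_ref) dB."""
    spreading = 20.0 * math.log10(r / r_ref)
    absorption = alpha_db_per_m * (r - r_ref)
    return spl_ref - spreading - absorption

def doppler_shift(f_source_hz, mach, angle_deg):
    """Observed frequency for a source at Mach `mach`, with `angle_deg`
    between the flight path and the source-observer line:
    f_obs = f_src / (1 - M*cos(theta))."""
    return f_source_hz / (1.0 - mach * math.cos(math.radians(angle_deg)))

# Doubling the distance costs about 6.02 dB from spreading alone,
# and at 90 degrees (broadside) there is no Doppler shift.
drop = 100.0 - spl_at_distance(100.0, 1.0, 2.0)
```

In practice the absorption coefficient alpha is frequency-dependent, which is why the paper applies the correction per one-third-octave band.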
8 pages, 3225 KiB  
Communication
Generation of High-Quality Cylindrical Vector Beams from All-Few-Mode Fiber Laser
by Pingping Xiao, Zhen Tang, Fei Wang, Yaqiong Lu and Zuxing Zhang
Photonics 2024, 11(10), 975; https://doi.org/10.3390/photonics11100975 - 17 Oct 2024
Viewed by 167
Abstract
Transverse mode control of laser intracavity oscillation is crucial for generating high-purity cylindrical vector beams (CVBs). We utilized the mode conversion and mode selection properties of two-mode long-period fiber gratings (TM-LPFGs) and two-mode fiber Bragg gratings (TM-FBGs) to achieve intracavity hybrid-mode oscillations of [...] Read more.
Transverse mode control of laser intracavity oscillation is crucial for generating high-purity cylindrical vector beams (CVBs). We utilized the mode conversion and mode selection properties of two-mode long-period fiber gratings (TM-LPFGs) and two-mode fiber Bragg gratings (TM-FBGs) to achieve intracavity hybrid-mode oscillations of LP01 and LP11 from an all-few-mode fiber laser. A mode-locked pulse output with a repetition rate of 12.46 MHz and a signal-to-noise ratio of 53 dB was achieved with a semiconductor saturable absorber mirror (SESAM) for mode-locking, at a wavelength of 1550.32 nm. The 30 dB spectrum bandwidth of the mode-locked pulse was 0.13 nm. Furthermore, a high-purity CVB containing radially polarized and azimuthally polarized LP11 modes was generated. The purity of the obtained CVB was greater than 99%. The high-purity CVB pulses have great potential for applications in optical tweezers, high-speed mode-division multiplexing communication, and more. Full article
(This article belongs to the Special Issue Single Frequency Fiber Lasers and Their Applications)
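As a quick sanity check on the reported figures, the 12.46 MHz repetition rate implies a fiber cavity roughly 17 m long. This is a back-of-envelope sketch only: the ring-cavity geometry and the effective index of about 1.444 are assumptions, not values stated in the abstract.

```python
# Back-of-envelope checks on the reported laser figures (a sketch; the
# ring-cavity geometry and n_eff = 1.444 are assumptions, not from the paper).
C = 299_792_458.0  # speed of light in vacuum, m/s

def ring_cavity_length(f_rep_hz, n_eff=1.444):
    """Fundamental repetition rate of a ring cavity: f_rep = c / (n_eff * L)."""
    return C / (n_eff * f_rep_hz)

def db_to_power_ratio(db):
    """Convert a decibel value to a linear power ratio."""
    return 10 ** (db / 10)

cavity_m = ring_cavity_length(12.46e6)  # ~17 m of fiber for 12.46 MHz
snr_ratio = db_to_power_ratio(53)       # 53 dB SNR ~ 2e5 in linear terms
print(f"cavity ~ {cavity_m:.1f} m, SNR ratio ~ {snr_ratio:.2e}")
```

A higher effective index or a linear (standing-wave) cavity would change the length estimate, so treat the number as an order-of-magnitude check.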
Figures

Figure 1: TM-LPFG transmission spectrum.
Figure 2: TM-LPFG transmission spectra under different injection modes (black curve for LP01 injection; red curve for LP11 injection).
Figure 3: Schematic diagram of the all-few-mode fiber laser.
Figure 4: (a) Output spectrum and (b) pulse train of the laser.
Figure 5: Radio-frequency spectra of the mode-locked pulse train in the range of (a) 0.1 kHz and (b) 2 GHz.
Figure 6: The relationship between output power and pump power for the two outputs.
Figure 7: Optical intensity distribution monitored at output 1: (a–e) RPL and (f–j) APL.
Figure 8: Light field distribution measured at output 2 under different pump powers: (a) 700 mW, (b) 600 mW.
Figure 9: Output spectra of the mode-locked laser from output 1 and output 2 at the same time.
12 pages, 6298 KiB  
Article
A CMOS Optoelectronic Transimpedance Amplifier Using Concurrent Automatic Gain Control for LiDAR Sensors
by Yeojin Chon, Shinhae Choi and Sung-Min Park
Photonics 2024, 11(10), 974; https://doi.org/10.3390/photonics11100974 - 17 Oct 2024
Abstract
This paper presents a novel optoelectronic transimpedance amplifier (OTA) for short-range LiDAR sensors, implemented in 180 nm CMOS technology. It consists of a main transimpedance amplifier (m-TIA) with an on-chip P+/N-well/Deep N-well avalanche photodiode (P+/NW/DNW APD) and a replica TIA with another on-chip APD, not only to preserve circuit symmetry but also to provide a concurrent automatic gain control (AGC) function within a narrow single pulse-width duration. In particular, for concurrent AGC operation, 3-bit PMOS switches with series resistors are added in parallel with the passive feedback resistor in the m-TIA. The PMOS switches can then be turned on or off according to the DC output voltage amplitude of the replica TIA. Post-layout simulations reveal that the OTA extends the dynamic range up to 74.8 dB (i.e., 1 µApp~5.5 mApp) and achieves a 67 dBΩ transimpedance gain, an 830 MHz bandwidth, a 16 pA/√Hz noise current spectral density, a −31 dBm optical sensitivity for a 10⁻¹² bit error rate, and a 6 mW power dissipation from a single 1.8 V supply. The chip occupies a core area of 200 × 120 µm². Full article
(This article belongs to the Section Optoelectronics and Optical Materials)
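The quoted dynamic range and gain are plain decibel arithmetic: 20·log10(5.5 mA / 1 µA) ≈ 74.8 dB, and 67 dBΩ corresponds to roughly 2.2 kΩ of transimpedance. A minimal sketch of the conversions:

```python
import math

def db_current_range(i_min_a, i_max_a):
    """Dynamic range of a current input expressed in dB (20 * log10 of the ratio)."""
    return 20 * math.log10(i_max_a / i_min_a)

def dbohm_to_ohm(gain_dbohm):
    """Convert a transimpedance gain in dB-ohms back to ohms."""
    return 10 ** (gain_dbohm / 20)

print(db_current_range(1e-6, 5.5e-3))  # ~74.8 dB, matching the abstract
print(dbohm_to_ohm(67))                # ~2.2 kOhm of transimpedance
```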
Figures

Figure 1: Block diagram of (a) a traditional optical receiver with AGC and (b) a previously reported MCC-TIA with a single-pulse AGC [11].
Figure 2: Block diagram of the proposed optoelectronic transimpedance amplifier (OTA).
Figure 3: Schematic diagrams of (a) the m-TIA and (b) the replica TIA.
Figure 4: (a) Cross-sectional view of the P+/NW/DNW APD and (b) its layout and measured results.
Figure 5: (a) A block diagram of the test circuitry for the proposed OTA and (b) a schematic diagram of the utilized PDH circuit.
Figure 6: (a) A block diagram of the T2V converter and (b) a schematic diagram of the latch comparator.
Figure 7: Layout of the proposed LiDAR receiver.
Figure 8: (a) The simulated frequency response of the proposed OTA and (b) the gain variation with the proposed concurrent AGC.
Figure 9: Simulated pulse response of the proposed OTA for different input currents.
Figure 10: Simulated pulse responses of (a) the A2V converter with different input currents and (b) the T2V converter for various time intervals.
20 pages, 16040 KiB  
Article
Unveiling Anomalies in Terrain Elevation Products from Spaceborne Full-Waveform LiDAR over Forested Areas
by Hailan Jiang, Yi Li, Guangjian Yan, Weihua Li, Linyuan Li, Feng Yang, Anxin Ding, Donghui Xie, Xihan Mu, Jing Li, Kaijian Xu, Ping Zhao, Jun Geng and Felix Morsdorf
Forests 2024, 15(10), 1821; https://doi.org/10.3390/f15101821 - 17 Oct 2024
Abstract
Anomalies displaying significant deviations between terrain elevation products acquired from spaceborne full-waveform LiDAR and reference elevations are frequently observed in assessment studies. While the predominant focus is on “normal” data, recognizing anomalies within datasets obtained from the Geoscience Laser Altimeter System (GLAS) and the Global Ecosystem Dynamics Investigation (GEDI) is essential for a comprehensive understanding of widely used spaceborne full-waveform data, which not only facilitates optimal data utilization but also enhances the exploration of potential applications. Nevertheless, our comprehension of anomalies remains limited as they have received scant specific attention. Diverging from prevalent practices of directly eliminating outliers, we conducted a targeted exploration of anomalies in forested areas using both transmitted and return waveforms from the GLAS and the GEDI in conjunction with airborne LiDAR point cloud data. We unveiled that elevation anomalies stem not from the transmitted pulses or product algorithms, but rather from scattering sources. We further observed similarities between the GLAS and the GEDI despite their considerable disparities in sensor parameters, with the waveforms characterized by a low signal-to-noise ratio and a near exponential decay in return energy; specifically, return signals of anomalies originated from clouds rather than the land surface. This discovery underscores the potential of deriving cloud-top height from spaceborne full-waveform LiDAR missions, particularly the GEDI, suggesting promising prospects for applying GEDI data in atmospheric science—an area that has received scant attention thus far. To mitigate the impact of abnormal return waveforms on diverse land surface studies, we strongly recommend incorporating spaceborne LiDAR-offered terrain elevation in data filtering by establishing an elevation-difference threshold against a reference elevation. 
This is especially vital for studies concerning forest parameters due to potential cloud interference, yet a consensus has not been reached within the community. Full article
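The filtering step the authors recommend can be sketched as a simple elevation-difference test against a reference surface. The 100 m threshold, the record layout, and the function names below are illustrative assumptions, not values taken from the paper:

```python
# Sketch of an elevation-difference filter against a reference DEM.
# The 100 m threshold and the footprint record layout are assumptions.
def filter_footprints(footprints, reference_elev, threshold_m=100.0):
    """Keep only footprints whose product elevation lies within
    threshold_m of the reference elevation at that location."""
    kept = []
    for fp in footprints:
        ref = reference_elev(fp["lat"], fp["lon"])
        if abs(fp["elev_m"] - ref) <= threshold_m:
            kept.append(fp)
    return kept

# Toy usage: a flat 500 m reference surface and one cloud-contaminated shot.
shots = [
    {"lat": 0.0, "lon": 0.0, "elev_m": 502.0},   # plausible ground return
    {"lat": 0.1, "lon": 0.1, "elev_m": 1900.0},  # cloud-top return, dropped
]
print(filter_footprints(shots, lambda lat, lon: 500.0))
```

In practice the reference would come from a DEM lookup rather than a constant, and the threshold would be tuned to the terrain relief and DEM accuracy.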
Figures

Figure 1: Study area and the geolocation of GLAS and GEDI data.
Figure 2: Flowchart of this study.
Figure 3: Spatial distribution of terrain elevation anomalies in the GLAS and GEDI datasets.
Figure 4: Scatter plot of terrain elevation estimates obtained from GLAS (a) and GEDI (b) vs. the terrain elevation derived from airborne laser scanning (ALS) as a reference. A0 denotes the default algorithm of the GEDI L2A product.
Figure 5: Details of terrain elevation outliers from GLAS: scatter plots of terrain elevation from data acquired during nighttime (a) and daytime (b) before removing outliers; and the scatter plot (c), transmitted waveforms (d), histogram of data acquisition time (e), and histogram of signal-to-noise ratio (SNR) (f) of the source laser shots of the outliers.
Figure 6: Details of terrain elevation outliers from GEDI: scatter plots of terrain elevation from data acquired during nighttime (a) and daytime (b) before removing outliers; and the scatter plot (c), transmitted waveforms (d), histograms (e) of beam type (e1) and data acquisition time (e2), and histogram of sensitivity (f) of the source laser shots of the outliers.
Figure 7: Examples with small (upper panel) and large (lower panel) terrain elevation error: the three-dimensional scene (left), the transmitted waveform (middle), and the return waveform (right) of GLAS and GEDI, with the terrain elevation from the product and from airborne laser scanning (ALS) data illustrated. In the lower panel (right), the ALS terrain elevation is not indicated since the GLAS or GEDI terrain elevation exceeds ALS by more than 330 m (see outlier-27 and outlier-12 indicated in green circles in Figure 5 and Figure 6c). A<n> (n: 1–6) denotes the terrain elevation from six different algorithm groups of GEDI.
Figure 8: Scatter plot of canopy height estimates for laser shots of terrain elevation anomalies obtained from the GEDI L2A product versus the canopy height derived from airborne laser scanning (ALS) as a reference (the legend of the point density applies to all the figures). A0 denotes the default algorithm (a), and A<n> (n: 1–6) denotes the other six algorithm groups (b–g).
Figure A1: Probability density of "sensitivity" of "power" and "coverage" beams estimated by different algorithms (a–f) of GEDI, using the data with "sensitivity > 0.90" in all footprints. A0 denotes the default algorithm setting (a), and A<n> (n: 1–6) denotes the other six algorithm groups (b–g).
Figure A2: Original GLAS (upper panel) and GEDI (lower panel) waveform examples of terrain elevation anomalies, with the terrain elevation provided by the GLAS and GEDI products indicated. A0 denotes the default algorithm, and A<n> (n: 1–6) denotes the other six algorithm groups of GEDI.
17 pages, 3205 KiB  
Article
New Method for Tomato Disease Detection Based on Image Segmentation and Cycle-GAN Enhancement
by Anjun Yu, Yonghua Xiong, Zirong Lv, Peng Wang, Jinhua She and Longsheng Wei
Sensors 2024, 24(20), 6692; https://doi.org/10.3390/s24206692 - 17 Oct 2024
Abstract
A major concern in data-driven deep learning (DL) is how to maximize the capability of a model on limited datasets. The lack of high-quality datasets limits the development of intelligent agriculture. Recent studies have shown that image enhancement techniques can alleviate the limitations that datasets impose on model performance. Existing image enhancement algorithms, however, mainly operate within a single category and generate highly correlated samples, and when authentic images are used directly to expand the dataset, environmental noise in the images seriously degrades the model's accuracy. Hence, this paper designs an automatic leaf segmentation algorithm (AISG) based on the EISeg segmentation method, separating leaf information bearing disease-spot characteristics from the background noise in the picture. This algorithm enhances the network model's ability to extract disease features. In addition, the Cycle-GAN network is used to augment minority-sample classes, realizing cross-category image transformation. MobileNet was then trained by transfer learning on the enhanced dataset. The experimental results reveal that the proposed method achieves a classification accuracy of 98.61% for the ten types of tomato diseases, surpassing the performance of other existing methods. Our method helps address the problems of low accuracy and insufficient training data in tomato disease detection and can also serve as a reference for detecting other types of plant diseases. Full article
(This article belongs to the Section Smart Agriculture)