Search Results (4,730)

Search Parameters:
Keywords = cloud computing

26 pages, 810 KiB  
Article
Does Government Digital Transformation Drive High-Quality Urban Economic Development? Evidence from E-Government Platform Construction
by Li Xiong, Xiaoyu Wang, Zijie Liu and Xiaoliang Long
Systems 2024, 12(9), 372; https://doi.org/10.3390/systems12090372 - 15 Sep 2024
Abstract
Digitalization is a pivotal global development trend and a significant force propelling economic and social transformation. This manuscript uses the global Malmquist–Luenberger (GML) model to estimate green total factor productivity (GTFP) across 284 Chinese cities from 2003 to 2018. Taking the pilot policy of “construction and application of e-government public platforms based on cloud computing” as a natural experiment, it applies a difference-in-differences model to assess the impact of government digital transformation on high-quality economic development and to explore its transmission paths and driving mechanisms. The results reveal that government digital transformation promotes high-quality urban economic development, with a driving effect that increases at the margin. Moreover, government digital transformation can help lagging regions form a “latecomer advantage”, creating a “catch-up effect” relative to regions with favorable development foundations, excellent geographical conditions, high urban rank, and high education quality. Additionally, government digital transformation boosts the quality of economic and social development through both innovation spillover and structural optimization. Full article
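The difference-in-differences design described in this abstract can be illustrated with a minimal sketch. All numbers below are synthetic, and the simple 2×2 group-mean estimator stands in for the paper's full regression specification:

```python
# Minimal 2x2 difference-in-differences estimate on synthetic city-level data.
# Group values are illustrative, not from the paper.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DID = (treated post - treated pre) - (control post - control pre)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical GTFP values for pilot (treated) and non-pilot (control) cities
treat_pre  = [1.00, 1.02, 0.98]
treat_post = [1.12, 1.15, 1.10]
ctrl_pre   = [1.01, 0.99, 1.00]
ctrl_post  = [1.04, 1.03, 1.05]

print(round(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post), 3))
```

The common-trend differencing nets out shocks shared by both groups, which is why the estimate isolates the policy effect under the parallel-trends assumption.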
24 pages, 5994 KiB  
Article
Mapping Natural Populus euphratica Forests in the Mainstream of the Tarim River Using Spaceborne Imagery and Google Earth Engine
by Jiawei Zou, Hao Li, Chao Ding, Suhong Liu and Qingdong Shi
Remote Sens. 2024, 16(18), 3429; https://doi.org/10.3390/rs16183429 - 15 Sep 2024
Abstract
Populus euphratica is a unique constructive tree species within riparian desert areas that is essential for maintaining oasis ecosystem stability. The Tarim River Basin contains the most densely distributed population of P. euphratica forests in the world, and obtaining accurate distribution data in the mainstream of the Tarim River would provide important support for its protection and restoration. We propose a new method for automatically extracting P. euphratica using Sentinel-1 and 2 and Landsat-8 images based on the Google Earth Engine cloud platform and the random forest algorithm. A mask of the potential distribution area of P. euphratica was created based on prior knowledge to save computational resources. The NDVI (Normalized Difference Vegetation Index) time series was then reconstructed using the preferred filtering method to obtain phenological parameter features, and the random forest model was input by combining the phenological parameter, spectral index, textural, and backscattering features. An active learning method was employed to optimize the model and obtain the best model for extracting P. euphratica. Finally, the map of natural P. euphratica forests with a resolution of 10 m in the mainstream of the Tarim River was obtained. The overall accuracy, producer’s accuracy, user’s accuracy, kappa coefficient, and F1-score of the map were 0.96, 0.98, 0.95, 0.93, and 0.96, respectively. The comparison experiments showed that simultaneously adding backscattering and textural features improved the P. euphratica extraction accuracy, while textural features alone resulted in a poor extraction effect. The method developed in this study fully considered the prior and posteriori information and determined the feature set suitable for the P. euphratica identification task, which can be used to quickly obtain accurate large-area distribution data of P. euphratica. The method can also provide a reference for identifying other typical desert vegetation. 
Full article
Show Figures

Figure 1: Geographical location of the study area and the distribution of sample points. (a): location of the study area in Xinjiang province in China; (b): training dataset distribution; (c): detailed sample area showing P. euphratica and non–P. euphratica in a Sentinel-2 false-color image.
Figure 2: Distribution of validation dataset. The black solid line represents the range of the study area; the red and yellow points represent P. euphratica and non–P. euphratica, respectively.
Figure 3: Workflow of the research.
Figure 4: Threshold segmentation effect of MNDWI and NDVI. (a): false color image of Jieran Lik Reservoir in Xinjiang Province; (b): statistical result of the corresponding frequency distribution of MNDWI values of water and other ground objects in area (a); (c): false color image of Pazili Tamu in Xinjiang; (d): statistical result for the corresponding frequency distribution of NDVI values of desert bare land and other ground objects in region (c).
Figure 5: Comparison of NDVI data before and after spatiotemporal fusion: (a) NDVI data derived from Sentinel-2 before fusion, (b) NDVI data after fusion.
Figure 6: Comparison of the effects of different filter functions for: (a) P. euphratica; (b) Tamarix; (c) allee tree; (d) farmland; (e) wetland; (f) urban tree.
Figure 7: Comparison between phenological curves of six typical vegetation species. Phenology parameters of (a) P. euphratica, (b) Tamarix, (c) allee tree, (d) farmland, (e) wetland, and (f) urban tree.
Figure 8: Importance of different features in the RF classification.
Figure 9: Natural P. euphratica forest maps extracted using four feature combinations: (a) PS, (b) PSB, (c) PST, and (d) PSBT.
Figure 10: Comparison of P. euphratica extraction results using different feature combinations on Sentinel-2 standard false color images. Rows 1 to 4 show the identification of P. euphratica in desert areas, P. euphratica-dense areas, agricultural areas, and large river areas, respectively. The green area represents the classification result of P. euphratica. The yellow circle corresponding to each row is the area where the extraction results of different feature combinations are quite different.
Figure 11: (a) Distribution of natural P. euphratica forest in the mainstream of the Tarim River. (b): UAV image of healthy P. euphratica, (c): classification result of healthy P. euphratica, (d): UAV image of unhealthy P. euphratica, (e): classification result of unhealthy P. euphratica, (f): UAV image of dense P. euphratica, (g): classification result of dense P. euphratica, (h): UAV image of sparse P. euphratica, (i): classification result of sparse P. euphratica. The green area represents the classification results of P. euphratica.
Figure 12: Mixed pixel problems associated with P. euphratica: (a) P. euphratica occupying less than one pixel; (b) sandy soil interfering with the reflected signal of P. euphratica. The red box represents a pixel on the images for clearer observation. Basemaps of rows 1–2 are UAV images while row 3 shows Sentinel-2 standard false color images.
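The masking step described in this abstract (screening out water and bare desert before classification) can be sketched with spectral indices alone. The band values and thresholds below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

# Sketch of the NDVI / MNDWI masking step: compute indices from band arrays
# and exclude water and bare desert before classification. Reflectances and
# thresholds here are illustrative, not the paper's calibration.

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def mndwi(green, swir):
    return (green - swir) / (green + swir + 1e-9)

# Tiny synthetic 2x2 scene (surface reflectance, 0-1)
nir   = np.array([[0.45, 0.05], [0.40, 0.30]])
red   = np.array([[0.10, 0.04], [0.12, 0.28]])
green = np.array([[0.08, 0.20], [0.09, 0.25]])
swir  = np.array([[0.20, 0.02], [0.22, 0.30]])

vegetated = ndvi(nir, red) > 0.2       # assumed vegetation threshold
water     = mndwi(green, swir) > 0.0   # assumed water threshold
mask      = vegetated & ~water         # candidate P. euphratica pixels
print(mask)
```

In the paper this mask constrains the random forest to a potential distribution area, saving compute on Google Earth Engine; the same index logic applies per pixel there.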
19 pages, 601 KiB  
Opinion
Challenges and Solutions for Sustainable ICT: The Role of File Storage
by Luigi Mersico, Hossein Abroshan, Erika Sanchez-Velazquez, Lakshmi Babu Saheer, Sarinova Simandjuntak, Sunrita Dhar-Bhattacharjee, Ronak Al-Haddad, Nagham Saeed and Anisha Saxena
Sustainability 2024, 16(18), 8043; https://doi.org/10.3390/su16188043 - 14 Sep 2024
Abstract
Digitalization has been increasingly recognized for its role in addressing numerous societal and environmental challenges. However, the rapid surge in data production and the widespread adoption of cloud computing has resulted in an explosion of redundant, obsolete, and trivial (ROT) data within organizations’ data estates. This issue adversely affects energy consumption and carbon footprint, leading to inefficiencies and a higher environmental impact. Thus, this opinion paper aims to discuss the challenges and potential solutions related to the environmental impact of file storage on the cloud, aiming to address the research gap in “digital sustainability” and the Green IT literature. The key findings reveal that technological issues dominate cloud computing and sustainability research. Key challenges in achieving sustainable practices include the widespread lack of awareness about the environmental impacts of digital activities, the complexity of implementing accurate carbon accounting systems compliant with existing regulatory frameworks, and the role of public–private partnerships in developing novel solutions in emerging areas such as 6G technology. Full article
Show Figures

Figure 1: Amount of data created, consumed, and stored 2010–2020, with forecasts to 2025. Source: [29]. Note: * Estimated.
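A back-of-envelope calculation shows why ROT data matters for carbon accounting. Both coefficients below are assumed placeholders, not figures from the paper; real storage energy intensity and grid carbon intensity vary widely by provider, medium, and region:

```python
# Rough annual carbon estimate for stored cloud data. Coefficients are
# assumed for illustration only.

KWH_PER_TB_YEAR = 30.0   # assumed storage energy intensity (kWh per TB per year)
KG_CO2_PER_KWH  = 0.5    # assumed grid carbon intensity (kg CO2eq per kWh)

def storage_emissions_kg(terabytes, rot_fraction):
    """Return (total annual emissions, share attributable to ROT data)."""
    total = terabytes * KWH_PER_TB_YEAR * KG_CO2_PER_KWH
    return total, total * rot_fraction

total, rot = storage_emissions_kg(terabytes=100, rot_fraction=0.25)
print(total, rot)   # 1500.0 kg/year total, 375.0 kg of it from ROT data
```

The point of the sketch is the structure, not the numbers: deleting a ROT fraction scales emissions down linearly, which is why ROT cleanup is a direct Green IT lever.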
26 pages, 3492 KiB  
Article
Image Processing for Smart Agriculture Applications Using Cloud-Fog Computing
by Dušan Marković, Zoran Stamenković, Borislav Đorđević and Siniša Ranđić
Sensors 2024, 24(18), 5965; https://doi.org/10.3390/s24185965 - 14 Sep 2024
Abstract
The widespread use of IoT devices has led to the generation of a huge amount of data and driven the need for analytical solutions in many areas of human activities, such as the field of smart agriculture. Continuous monitoring of crop growth stages enables timely interventions, such as control of weeds and plant diseases, as well as pest control, ensuring optimal development. Decision-making systems in smart agriculture involve image analysis with the potential to increase productivity, efficiency and sustainability. By applying Convolutional Neural Networks (CNNs), state recognition and classification can be performed based on images from specific locations. Thus, we have developed a solution for early problem detection and resource management optimization. The main concept of the proposed solution relies on a direct connection between Cloud and Edge devices, which is achieved through Fog computing. The goal of our work is creation of a deep learning model for image classification that can be optimized and adapted for implementation on devices with limited hardware resources at the level of Fog computing. This could increase the importance of image processing in the reduction of agricultural operating costs and manual labor. As a result of the off-load data processing at Edge and Fog devices, the system responsiveness can be improved, the costs associated with data transmission and storage can be reduced, and the overall system reliability and security can be increased. The proposed solution can choose classification algorithms to find a trade-off between size and accuracy of the model optimized for devices with limited hardware resources. After testing our model for tomato disease classification compiled for execution on FPGA, it was found that the decrease in test accuracy is as small as 0.83% (from 96.29% to 95.46%). Full article
(This article belongs to the Special Issue Smart Decision Systems for Digital Farming: 2nd Edition)
Show Figures

Figure 1: Cloud-Fog computing structure.
Figure 2: PYNQ Z2 board.
Figure 3: Training CNN models and preparing for image classification on the server and PYNQ Z2.
Figure 4: Preparation of CNN models to run on PYNQ Z2.
Figure 5: Preparing an acceleration model for image classification on FPGA.
Figure 6: Test accuracy for CNN models run on server and PYNQ Z2.
Figure 7: Latency in image classification on the server for different application settings.
Figure 8: Latency in image classification on the server running all three applications.
Figure 9: Time elapsed in receiving result of image classification.
Figure 10: Network data transfer to the server.
Figure 11: Energy consumption on the server during application testing.
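The small accuracy drop reported when compiling the model for FPGA execution (96.29% to 95.46%) is the typical cost of weight compression. A generic symmetric int8 quantization sketch (not the paper's actual toolchain) illustrates the size/accuracy trade-off:

```python
import numpy as np

# Symmetric per-tensor int8 quantization, as commonly applied before deploying
# a CNN to resource-limited hardware. Generic sketch, not the paper's flow.

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0                       # one scale per tensor
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64,)).astype(np.float32)     # fake weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes, w.nbytes)                        # int8 is 4x smaller: 64 vs 256
print(float(np.max(np.abs(w - w_hat))) < scale)  # per-weight error under 1 step
```

The bounded per-weight error is why test accuracy usually degrades by well under a percentage point, matching the 0.83% drop the authors observe.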
25 pages, 2612 KiB  
Article
Measuring the Effectiveness of Carbon-Aware AI Training Strategies in Cloud Instances: A Confirmation Study
by Roberto Vergallo and Luca Mainetti
Future Internet 2024, 16(9), 334; https://doi.org/10.3390/fi16090334 - 13 Sep 2024
Abstract
While the massive adoption of Artificial Intelligence (AI) is threatening the environment, new research efforts begin to be employed to measure and mitigate the carbon footprint of both training and inference phases. In this domain, two carbon-aware training strategies have been proposed in the literature: Flexible Start and Pause & Resume. Such strategies—natively Cloud-based—use the time resource to postpone or pause the training algorithm when the carbon intensity reaches a threshold. While such strategies have proved to achieve interesting results on a benchmark of modern models covering Natural Language Processing (NLP) and computer vision applications and a wide range of model sizes (up to 6.1B parameters), it is still unclear whether such results may hold also with different algorithms and in different geographical regions. In this confirmation study, we use the same methodology as the state-of-the-art strategies to recompute the saving in carbon emissions of Flexible Start and Pause & Resume in the Anomaly Detection (AD) domain. Results confirm their effectiveness in two specific conditions, but the percentage reduction behaves differently compared with what is stated in the existing literature. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence in Smart Societies)
Show Figures

Figure 1: Number of publications per type of Green AI definition (source [11]).
Figure 2: Research methodology adopted in this paper.
Figure 3: An example of a dataset provided by WattTime. In this case, it is the .csv file for the Italian region.
Figure 4: Average emissions of the trainings during the year 2021 across different regions. These emissions exceed 1.6 kg CO₂eq, comparable to the CO₂ emitted per litre of fuel consumed by a car.
Figure 5: The carbon emissions from training HF-SCA (AD on 8 × A100 GPUs for 16 h) in seven different regions (one region per line) at various times throughout the year. Each line is relatively flat, indicating that emissions in a single region are consistent across different months. However, there is significant variation between the least carbon-intensive regions (the lowest lines) and the most carbon-intensive regions (the top lines). This confirms that selecting the region in which experiments are conducted can have a substantial impact on emissions, with differences ranging from 0.25 kg in the most efficient regions to 2.5 kg in the least efficient regions.
Figure 6: Emission reduction percentage for the four AI workloads using the Flexible Start strategy and the hours set in North Carolina. The x axis represents the time extension (6, 12, 18, 24 h) assigned to the workload to complete the job. The y axis represents the checking time (15, 30, 60, 120 min) for carbon intensity. The z axis represents the emission reduction percentages for each specific combination of time extension and checking time. (a) Isolation Forest workload; (b) SVM workload; (c) HF-SCA workload; (d) autoencoder workload.
Figure 7: Emission reduction percentage for the four AI workloads using the Flexible Start strategy and the percentage set in North Carolina. The x axis represents the time extension (+25%, 50%, 75%, 100% of the original training time) assigned to the workload to complete the job. The y axis represents the checking time (15, 30, 60, 120 min) for carbon intensity. The z axis represents the emission reduction percentages for each specific combination of time extension and checking time: (a) Isolation Forest workload; (b) SVM workload; (c) HF-SCA workload; (d) autoencoder workload.
Figure 8: Emission reduction percentage for the four AI workloads using the Pause & Resume strategy and the hours set in North Carolina. The x axis represents the time extension (6, 12, 18, 24 h) assigned to the workload to complete the job. The y axis represents the checking time (15, 30, 60, 120 min) for carbon intensity. The z axis represents the emission reduction percentages for each specific combination of time extension and checking time: (a) Isolation Forest workload; (b) SVM workload; (c) HF-SCA workload; (d) autoencoder workload.
Figure 9: Emission reduction percentage for the four AI workloads using the Pause & Resume strategy and the percentage set in North Carolina. The x axis represents the time extension (+25%, 50%, 75%, 100% of the original training time) assigned to the workload to complete the job. The y axis represents the checking time (15, 30, 60, 120 min) for carbon intensity. The z axis represents the emission reduction percentages for each specific combination of time extension and checking time: (a) Isolation Forest workload; (b) SVM workload; (c) HF-SCA workload; (d) autoencoder workload.
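The two strategies under study can be sketched as scheduling rules over an hourly carbon-intensity series. The series below is synthetic; the paper uses real WattTime data and much larger workloads:

```python
# Toy versions of the two carbon-aware training strategies, over an hourly
# carbon-intensity series (gCO2/kWh). Values are synthetic illustrations.

def flexible_start(intensity, duration, deadline):
    """Delay the job: pick the contiguous start minimizing total carbon."""
    best = min(range(deadline - duration + 1),
               key=lambda s: sum(intensity[s:s + duration]))
    return sum(intensity[best:best + duration])

def pause_and_resume(intensity, duration, deadline):
    """Pause freely: run only in the `duration` greenest hours of the window."""
    return sum(sorted(intensity[:deadline])[:duration])

intensity = [300, 280, 120, 110, 115, 290, 310, 100]  # synthetic hourly series
naive = sum(intensity[:3])                            # start immediately, 3 h job
fs = flexible_start(intensity, duration=3, deadline=8)
pr = pause_and_resume(intensity, duration=3, deadline=8)
print(naive, fs, pr)
```

Because Pause & Resume optimizes over any (possibly non-contiguous) set of hours, its emissions are never higher than Flexible Start's for the same deadline, which is the ordering the confirmation study probes.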
27 pages, 6340 KiB  
Article
Design and Evaluation of Real-Time Data Storage and Signal Processing in a Long-Range Distributed Acoustic Sensing (DAS) Using Cloud-Based Services
by Abdusomad Nur and Yonas Muanenda
Sensors 2024, 24(18), 5948; https://doi.org/10.3390/s24185948 - 13 Sep 2024
Abstract
In cloud-based Distributed Acoustic Sensing (DAS) sensor data management, we are confronted with two primary challenges. First, the development of efficient storage mechanisms capable of handling the enormous volume of data generated by these sensors poses a challenge. To solve this issue, we propose a method to address the issue of handling the large amount of data involved in DAS by designing and implementing a pipeline system to efficiently send the big data to DynamoDB in order to fully use the low latency of the DynamoDB data storage system for a benchmark DAS scheme for performing continuous monitoring over a 100 km range at a meter-scale spatial resolution. We employ the DynamoDB functionality of Amazon Web Services (AWS), which allows highly expandable storage capacity with latency of access of a few tens of milliseconds. The different stages of DAS data handling are performed in a pipeline, and the scheme is optimized for high overall throughput with reduced latency suitable for concurrent, real-time event extraction as well as the minimal storage of raw and intermediate data. In addition, the scalability of the DynamoDB-based data storage scheme is evaluated for linear and nonlinear variations of number of batches of access and a wide range of data sample sizes corresponding to sensing ranges of 1–110 km. The results show latencies of 40 ms per batch of access with low standard deviations of a few milliseconds, and latency per sample decreases for increasing the sample size, paving the way toward the development of scalable, cloud-based data storage services integrating additional post-processing for more precise feature extraction. The technique greatly simplifies DAS data handling in key application areas requiring continuous, large-scale measurement schemes. In addition, the processing of raw traces in a long-distance DAS for real-time monitoring requires the careful design of computational resources to guarantee requisite dynamic performance. 
Now, we will focus on the design of a system for the performance evaluation of cloud computing systems for diverse computations on DAS data. This system is aimed at unveiling valuable insights into the performance metrics and operational efficiencies of computations on the data in the cloud, providing a deeper understanding of the system’s performance, identifying potential bottlenecks, and suggesting areas for improvement. To achieve this, we employ the CloudSim framework. The analysis reveals that more capable virtual machines (VMs) significantly decrease the processing time, with performance governed by the number of Processing Elements (PEs) and Million Instructions Per Second (MIPS). The results also reflect that, although a larger number of computations is required as the fiber length increases, with a subsequent increase in processing time, the overall speed of computation is still suitable for continuous real-time monitoring. We also see that VMs with lower processing speed and fewer CPUs have more inconsistent processing times than those with higher performance, while the latter do not incur significantly higher prices. Additionally, the impact of VM parameters on computation time is explored, highlighting the importance of resource optimization in DAS system design for efficient performance. The study also observes a notable trend in processing time, showing a significant decrease for every additional 50,000 columns processed as the length of the fiber increases. This finding underscores the efficiency gains achieved with larger computational loads, indicating improved system performance and capacity utilization as the DAS system processes more extensive datasets. Full article
Show Figures

Figure 1: Experimental setup of a distributed vibration sensor using a ϕ-OTDR scheme in direct detection [8].
Figure 2: Intrusion detection using a ϕ-OTDR sensor [16].
Figure 3: Block diagram of the developed system.
Figure 4: Schematic representation of the connection of the DAS sensor system to DynamoDB.
Figure 5: Steps to use CloudSim.
Figure 6: Block diagram of simulation flow for the basic scenario.
Figure 7: Schematic representation of the implementation of processing of DAS data in CloudSim.
Figure 8: Latency per batch of DynamoDB access for sample number of batches used to write trace samples.
Figure 9: Latency per batch of DynamoDB access used to write trace samples with number of batches scaling with 2^n for each index n.
Figure 10: (a) Total latency of DynamoDB access; (b) latency per sample for varying trace sample sizes in the range of 5,000–550,000 samples corresponding to 1–110 km sensing distances.
Figure 11: Analysis of processing time and cloudlet utilization for differential operations in the DAS sensing system: a study on single cycle versus multiple cycles. The study focuses on two distinct scenarios: (a) a single cycle of measurement, and (b) a series of 10 consecutive cycles of measurement, conducted in a 110 km long optical sensing fiber. Note that the number of cloudlets increases for each cloudlet ID in the horizontal axis.
Figure 12: Processing time versus cloudlets for FFT operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber. Note that the number of cloudlets increases for each cloudlet ID in the horizontal axis.
Figure 13: Evaluation of the mean processing time for each virtual machine in differential operations: a comparative study on a single cycle versus multiple cycles in a 110 km optical fiber. The investigation is conducted under two distinct conditions: (a) a single cycle of measurement, and (b) a series of 10 consecutive cycles of measurement. This research aims to understand the computational efficiency of cloud services in DAS sensing systems.
Figure 14: Mean processing time for each VM for FFT operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
Figure 15: Statistical analysis of processing time for virtual machines in differential operations: an examination of standard deviation and variance across single and multiple cycles in a 110 km optical fiber. The analysis is conducted under two different scenarios: (a) a single cycle of measurement, and (b) a sequence of 10 cycles of measurement. This study provides a deeper understanding of the variability and consistency in the performance of VMs during differential operations in DAS sensing systems.
Figure 16: Standard deviation and variance for VMs based on processing time for differential operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
Figure 17: Evaluation of processing time for incremental data in optical fiber measurements (for each additional 50,000 rows) during two distinct operations: (a) the differential operation, and (b) the Fast Fourier Transform (FFT) operation, conducted in a 110 km long optical fiber. This examination aims to understand the computational scalability of these operations in the context of increasing data volume.
Figure 18: Analysis of processing time and cloudlet utilization for differential operations in optical fiber measurements with a specific focus on two distinct scenarios: (a) varying only the Million Instructions Per Second (MIPS) of the virtual machines (VMs), and (b) varying only the Processing Elements (PE) of the VMs. The measurements are conducted during a single cycle in a 110 km long optical fiber. This study aims to understand the influence of MIPS and PE variations on the performance and efficiency of VMs during differential operations in DAS sensing systems.
Figure 19: Processing time versus cloudlets for differential operation for (a) varying only the MIPS of the VMs, and (b) varying only the PE of the VMs, for 10 cycles of measurement in a 110 km fiber.
Figure 20: Processing time versus cost for (a) differential, and (b) FFT operation for 10 cycles of measurement in a 110 km fiber.
Figure 21: Cost of processing versus cloudlets for differential operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
Figure 22: Cost of processing versus cloudlets for FFT operation for (a) a single cycle, and (b) 10 cycles, of measurement in a 110 km fiber.
35 pages, 6364 KiB  
Article
Mapping the Influence of Olympic Games’ Urban Planning on the Land Surface Temperatures: An Estimation Using Landsat Series and Google Earth Engine
by Joan-Cristian Padró, Valerio Della Sala, Marc Castelló-Bueno and Rafael Vicente-Salar
Remote Sens. 2024, 16(18), 3405; https://doi.org/10.3390/rs16183405 - 13 Sep 2024
Viewed by 326
Abstract
The Olympic Games are a sporting event and a catalyst for urban development in their host city. In this study, we utilized remote sensing and GIS techniques to examine the impact of the Olympic infrastructure on the surface temperature of urban areas. Using [...] Read more.
The Olympic Games are a sporting event and a catalyst for urban development in their host city. In this study, we utilized remote sensing and GIS techniques to examine the impact of the Olympic infrastructure on the surface temperature of urban areas. Using Landsat Series Collection 2 Tier 1 Level 2 data and cloud computing provided by Google Earth Engine (GEE), this study examines the effects of various forms of Olympic Games facility urban planning in different historical moments and location typologies, as follows: monocentric, polycentric, peripheric and clustered Olympic ring. The GEE code applies to the Olympic Games that occurred from Paris 2024 to Montreal 1976. However, this paper focuses specifically on the representative cases of Paris 2024, Tokyo 2020, Rio 2016, Beijing 2008, Sydney 2000, Barcelona 1992, Seoul 1988, and Montreal 1976. The study is not only concerned with obtaining absolute land surface temperatures (LST), but rather with the relative influence of mega-event infrastructures on mitigating or increasing the urban heat. As such, the locally normalized land surface temperature (NLST) was utilized for this purpose. In some cities (Paris, Tokyo, Beijing, and Barcelona), it has been determined that Olympic planning has resulted in the development of green spaces, creating “green spots” that contribute to lower-than-average temperatures. However, it should be noted that there is significant variation in temperature within intensely built-up areas, such as Olympic villages and the surrounding areas of the Olympic stadium, which can become “hotspots.” Therefore, it is important to acknowledge that different planning typologies of Olympic infrastructure can have varying impacts on city heat islands, with the polycentric and clustered Olympic ring typologies displaying a mitigating effect. 
This research contributes to a cloud computing method that can be updated for future Olympic Games or adapted for other mega-events and utilizes a widely available remote sensing data source to study a specific urban planning context. Full article
(This article belongs to the Special Issue Urban Planning Supported by Remote Sensing Technology II)
Show Figures

Figure 1

Figure 1
<p>Location of the Olympic Game cities from 1972 to 2024, which are included in the Google Earth Engine code. The cities used as examples in this paper, representing four Olympic urban planning patterns, are highlighted in yellow. Source: Author’s own elaboration based on data from Open Street Map (@OpenStreetMap contributors) and International Olympic Committee (IOC) information [<a href="#B59-remotesensing-16-03405" class="html-bibr">59</a>].</p>
Full article ">Figure 2
<p>Area of interest (city AOI) of the eight cities analysed (red outline), and its corresponding area of interest (Olympic AOI) of the Olympic facilities (yellow outline). (<b>a</b>) In the Paris case, the city AOI is defined by the Ille de France administrative boundaries. (<b>b</b>) In the Tokyo case, the city AOI is defined by some municipalities of the Tokyo Metropolitan Area administrative boundaries. (<b>c</b>) In the Rio case, the city AOI is defined by the Rio de Janeiro Municipality administrative boundaries. (<b>d</b>) In the Beijing case, the city AOI is defined by Beijing’s central urban area. (<b>e</b>) In the Sydney case, the city AOI is defined by some municipalities of the North South Wales administrative boundaries. (<b>f</b>) In the Barcelona case, the city AOI is defined by the administrative boundaries of Barcelonès. (<b>g</b>) In the Seoul case, the city AOI is defined by Keijo Teukbyeolsi administrative boundaries. (<b>h</b>) In the Montreal case, the city AOI is defined by the Champlain, Communauté Urbaine de Montréal and Laval administrative boundaries.</p>
Full article ">Figure 3
<p>Overall methodology and processing chain.</p>
Full article ">Figure 4
<p>Area of interest (city AOI) of the eight cities analysed (red outline), and the corresponding areas of interest (Olympic AOI) of the Olympic facilities (yellow outline). (<b>a</b>) In the Paris case, the city AOI is defined by the Ille de France administrative boundaries and the Olympic AOI is clustered. (<b>b</b>) In the Tokyo case, the city AOI is defined by some municipalities of the Tokyo Metropolitan Area administrative boundaries and the Olympic AOI is polycentric. (<b>c</b>) In the Rio case, the city AOI is defined by the Rio de Janeiro Municipality administrative boundaries and the Olympic AOI is peripheric. (<b>d</b>) In the Beijing case, the city AOI is defined by Beijing’s central urban area and the Olympic AOI is polycentric. (<b>e</b>) In the Sydney case, the city AOI is defined by some municipalities of North South Wales administrative boundaries and the Olympic AOI is peripheric. (<b>f</b>) In the Barcelona case, the city AOI is defined by Barcelonès administrative boundaries and the Olympic AOI is clustered. (<b>g</b>) In the Seoul case, the city AOI is defined by Keijo Teukbyeolsi administrative boundaries and the Olympic AOI is monocentric. (<b>h</b>) In the Montreal case, the city AOI is defined by the Champlain, Communauté Urbaine de Montréal and Laval administrative boundaries and the Olympic AOI is monocentric.</p>
Full article ">Figure 5
<p>Normalized difference vegetation index (NDVI) maps created using the median synthetic image over a 5-year period, for each of the 8 cities analysed in this study. NDVI was calculated using the NIR and red bands of the synthetic image (see Equation (3)). Additionally, there is a focus on the main Olympic facilities. The real data range is [−1 to 1] but this was stretched to [−0.25 to 0.25] in all of the maps for better understanding and visualization. (<b>a</b>) In the Paris case, higher NDVI levels are in the periphery, but some Olympic facilities (i.e., Champs-de-Mars) take advantage of inner green areas. (<b>b</b>) In the Tokyo case, there are sparse but important green spaces in the central area. (<b>c</b>) In the Rio case, there are elevated NDVI levels for the entire urban area, but not in the main Olympic facilities. (<b>d</b>) In the Beijing case, the central urban area has sparse green spaces, with overall moderate NDVI levels, where the Olympic facilities were placed. (<b>e</b>) In the Sydney case, elevated NDVI levels suggest that the urban area has a considerable amount of green space, including some parts of the Olympic Park. (<b>f</b>) In the Barcelona case, sparse green spaces can be found, highlighting the Montjuïc Olympic ring area. (<b>g</b>) In the Seoul case, higher NDVI levels are located in the north and the south periphery, not where the Olympic Park was placed. (<b>h</b>) In the Montreal case, the overall urban area presents high NDVI levels, and some of the Olympic Park area was also located in a green area.</p>
Full article ">Figure 6
<p>Normalized difference built-up index (NDBI) maps created using the median synthetic image over a 5-year period, for each of the 8 cities analysed in this study. NDBI was calculated using the NIR and SWIR1 bands of the synthetic image (see Equation (4)). Additionally, there is a focus on the main Olympic facilities. The real data range is [−1 to 1] but this was stretched to [−0.25 to 0.25] in all the maps for better understanding and visualization. (<b>a</b>) In the Paris case, the radial urban configuration shows a dense urbanised centre with high NDBI levels, such as the Stade de France. (<b>b</b>) In the Tokyo case, the area is densely urbanised, such as the Tokyo Dome complex, but with interstitial green spaces. (<b>c</b>) In the Rio case, there are several densely urbanised focuses with high NDBI levels, such as the Barra Olímpica complex, limited by densely vegetated areas. (<b>d</b>) In the Beijing case, the concentric pattern leads to a densely urbanised city with high NDBI levels, such as the National Stadium. (<b>e</b>) In the Sydney case, the extensive urbanization sprawl is combined with green spaces, such as the Olympic Park. (<b>f</b>) In the Barcelona case, the gridded configuration shows urban continuity and density with very high NDBI levels, such as the Olympic Village. (<b>g</b>) In the Seoul case, the urbanisation is dense around the Han River, with generally high NDBI levels, such as the Jasmil Sports Complex. (<b>h</b>) In the Montreal case, the gridded pattern presents dense built-up areas combined with inner green spaces, such as the Olympic Park and the adjacent Botanical Garden.</p>
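Both indices in Figures 5 and 6 are standard normalized band ratios; a minimal sketch with synthetic reflectance values (the band values below are placeholders, not the Landsat synthetic image):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (nir - red) / (nir + red)

def ndbi(swir1, nir):
    """Normalized Difference Built-up Index: (SWIR1 - NIR) / (SWIR1 + NIR)."""
    return (swir1 - nir) / (swir1 + nir)

# Synthetic surface-reflectance values in [0, 1]:
# a vegetated pixel, a built-up pixel, and a mixed pixel.
nir = np.array([0.45, 0.30, 0.25])
red = np.array([0.10, 0.20, 0.22])
swir1 = np.array([0.20, 0.35, 0.30])

v = ndvi(nir, red)    # high for vegetation, near zero for built-up surfaces
b = ndbi(swir1, nir)  # positive for built-up surfaces
```

By construction both indices lie in [−1, 1], which is why the maps stretch the display range to [−0.25, 0.25] for contrast.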
Full article ">Figure 7
<p>Normalized land surface temperature (NLST) maps created using the median synthetic image over a 5-year period, for each of the 8 cities analysed in this study. NLST was calculated using the thermal band of the synthetic image and the minimum and maximum LST values in each city (see Equation (4)). Additionally, there is a focus on the main Olympic facilities. The transect used to calculate the thermal profile of the NLST in each city is included. (<b>a</b>) In the Paris case, the relatively high-temperature focuses are in the central, north and south areas, as are some Olympic facilities such as the Stade de France. (<b>b</b>) In the Tokyo case, the relatively high-temperature focuses are on the port and around the centre of the SUHI, while the Olympic facilities are relatively low-temperature zones. (<b>c</b>) In the Rio case, the relatively low-temperature focuses of the forested areas can be seen in the centre of the AOI, and the Olympic venues act as relative hotspots. (<b>d</b>) In the Beijing case, the relatively high-temperature focuses are in the south-west, south and east areas, with the Olympic facilities located in relatively low-temperature areas. (<b>e</b>) In the Sydney case, the Olympic venues act as hotspots in relation to their surroundings. (<b>f</b>) In the Barcelona case, the Olympic ring is a relative green spot in comparison with the urban area. (<b>g</b>) In the Seoul case, the Olympic facilities act as relatively high-temperature areas in the SUHI. (<b>h</b>) In the Montreal case, the location of the Stade Olympique is part of the relatively higher-temperature areas.</p>
Full article ">Figure 8
<p>Transects created using the NLST, the NDVI and the NDBI images. A segment was digitized that crossed the city and the Olympic facilities to obtain the thermal profile from the NLST. Additionally, to compare results, the NDVI and the NDBI profiles were added. This was undertaken with the Profile Tool v.4.2.6 QGIS plugin, which essentially intersects the segment with the target raster and extracts the values of the overlapped pixels. The result is a table of values that can be plotted in the GIS v.3.32 software or exported to another software to edit the graph. (<b>a</b>) In the Paris case, there is a peak in the Stade de France NLST transect graph, indicating a hotspot in this location in relation to the Paris UHI. (<b>b</b>) In the Tokyo case, the thermal peak is located over the Stadium and the Dome. (<b>c</b>) In the Rio case, the thermal peak is located over Barra Olímpica and a secondary peak is found over Maracaná. (<b>d</b>) In the Beijing case, the hottest location is the Beijing National Stadium. (<b>e</b>) In the Sydney case, the extensive and low-density neighbourhoods, with many green spaces, contrast with the Olympic Stadium and the central, dense downtown, where the thermal peaks are located. (<b>f</b>) In the Barcelona case, the highest surface temperature is in the industrial area, and the Olympic Ring has low relative temperatures due to its vegetated park areas. (<b>g</b>) In the Seoul case, the Han River presents the lowest relative surface temperatures, with the higher temperatures located in dense urban areas and over the Olympic Stadium. (<b>h</b>) In the Montreal case, the higher relative surface temperatures are found in the dense residential areas, and a peak is observed just over the Olympic Park.</p>
Full article ">Figure 9
<p>Boxplots relating the normalized land surface temperature (NLST) within each urban area and within its Olympic facilities. (<b>a</b>) In the Paris case, the boxplot indicates that the Olympic facilities contribute to a slight increase in the relative LST in Paris’s urban area. (<b>b</b>) In the Tokyo case, the boxplot indicates a strong contribution of the Olympic facilities to reducing the overall LST in the Tokyo urban area. (<b>c</b>) In the Rio case, the boxplot indicates that the Olympic facilities contribute to increasing the overall SUHI LST. (<b>d</b>) In the Beijing case, the boxplot shows that the median and average LST are lower in the Olympic facilities, and thus a strong contribution to reducing the overall LST in the Beijing urban area. (<b>e</b>) In the Sydney case, the NLST median and average values within the Olympic area are much higher than in the overall urban area; thus, the Olympic facilities contribute to the overall increase in LST of the resulting urban area after the games. (<b>f</b>) In the Barcelona case, the median and average LST are lower in the Olympic facilities, which significantly contributes to the overall reduction of LST in the urban area of Barcelona. (<b>g</b>) In the Seoul case, the boxplot suggests that the Olympic facilities have led to a relative rise of the LST in the Seoul urban area. (<b>h</b>) In the Montreal case, the average and median values, as well as the higher position of the 1st and 3rd quartiles, suggest that the Olympic venues have contributed to an overall relative increase of the LST in the resulting Montreal urban area after the games.</p>
Full article ">Figure 10
<p>Linear simple regressions relating the LST and the NLST pixels overlapped by the thermal transect defined in each city. The LST(K) and the NLST are expected to perfectly correlate in a simple linear regression because they are simply converted using the scaling method in all the cases (<b>a</b>–<b>h</b>) (see <a href="#sec2dot3dot1-remotesensing-16-03405" class="html-sec">Section 2.3.1</a>).</p>
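The perfect LST–NLST correlation in Figure 10 follows from the normalization being an affine min–max scaling within each city; a sketch under that assumption (the LST values are synthetic):

```python
import numpy as np

def normalize_lst(lst):
    """Locally normalized LST: min-max scaling within one city AOI,
    mapping the coolest pixel to 0 and the hottest to 1."""
    return (lst - lst.min()) / (lst.max() - lst.min())

lst_k = np.array([290.0, 295.0, 301.0, 310.0])  # synthetic LST in Kelvin
nlst = normalize_lst(lst_k)

# An affine transform preserves linear correlation exactly (r = 1),
# which is why every panel of the figure shows a perfect fit.
r = np.corrcoef(lst_k, nlst)[0, 1]
```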
Full article ">
18 pages, 12186 KiB  
Article
Cloud-Edge Collaborative Defect Detection Based on Efficient Yolo Networks and Incremental Learning
by Zhenwu Lei, Yue Zhang, Jing Wang and Meng Zhou
Sensors 2024, 24(18), 5921; https://doi.org/10.3390/s24185921 - 12 Sep 2024
Viewed by 224
Abstract
Defect detection constitutes one of the most crucial processes in industrial production. With a continuous increase in the number of defect categories and samples, the defect detection model underpinned by deep learning finds it challenging to expand to new categories, and the accuracy [...] Read more.
Defect detection constitutes one of the most crucial processes in industrial production. With a continuous increase in the number of defect categories and samples, defect detection models underpinned by deep learning find it challenging to expand to new categories, and the accuracy and real-time performance of product defect detection are also confronted with severe challenges. This paper addresses the problem of insufficient detection accuracy of existing lightweight models on resource-constrained edge devices by presenting a new lightweight YoloV5 model that integrates four modules, SCDown, GhostConv, RepNCSPELAN4, and ScalSeq, abbreviated here as SGRS-YoloV5n. Through the incorporation of these modules, the model notably enhances feature extraction and computational efficiency while reducing the model size and computational load, making it more conducive to deployment on edge devices. Furthermore, a cloud-edge collaborative defect detection system is constructed to improve detection accuracy and efficiency through initial detection by edge devices, followed by additional inspection by cloud servers. An incremental learning mechanism is also introduced, enabling the model to adapt promptly to new defect categories and update its parameters accordingly. Experimental results reveal that the SGRS-YoloV5n model exhibits superior detection accuracy and real-time performance, validating its value and stability for deployment in resource-constrained environments. This system presents a novel solution for achieving efficient and accurate real-time defect detection. Full article
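The edge-first, cloud-second inspection flow described above can be sketched as a confidence-threshold router; the detector stubs, defect labels, and the 0.6 threshold below are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def edge_detect(image_id: str) -> list[Detection]:
    # Stand-in for the lightweight SGRS-YoloV5n model on the edge device.
    return [Detection("spur", 0.84), Detection("missing_hole", 0.42)]

def cloud_detect(image_id: str) -> list[Detection]:
    # Stand-in for the larger cloud-side model re-inspecting uncertain images.
    return [Detection("spur", 0.84), Detection("missing_hole", 0.71)]

def inspect(image_id: str, threshold: float = 0.6) -> list[Detection]:
    """Accept confident edge detections; escalate the image to the cloud
    server when any detection falls below the confidence threshold."""
    edge = edge_detect(image_id)
    if all(d.confidence >= threshold for d in edge):
        return edge
    return cloud_detect(image_id)

results = inspect("pcb_001")
```

Routing only uncertain images to the cloud keeps bandwidth and cloud load low while recovering accuracy on hard cases, which is the trade-off the system targets.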
Show Figures

Figure 1

Figure 1
<p>Cloud-edge collaborative defect inspection system.</p>
Full article ">Figure 2
<p>Structure of SGRS-YoloV5n (The model proposed in the article is abbreviated as SGRS-YoloV5n because it is based on the YoloV5n model and combines SCDown, GhostConv, RepNCSPELAN4, and ScalSeq in the backbone and neck parts.).</p>
Full article ">Figure 3
<p>Structure of GhostConv.</p>
Full article ">Figure 4
<p>Structure of C3Ghost.</p>
Full article ">Figure 5
<p>Structure of SCDown.</p>
Full article ">Figure 6
<p>Structure of RepNCSPELAN4.</p>
Full article ">Figure 7
<p>Structure of ScalSeq.</p>
Full article ">Figure 8
<p>Cloud-edge collaborative defect detection platform.</p>
Full article ">Figure 9
<p>Examples of PCB defect types: (<b>a</b>) A defect where a necessary hole is missing. (<b>b</b>) Small indentations or nibbles on the PCB edge. (<b>c</b>) A break in the circuit where continuity is lost. (<b>d</b>) A defect caused by unintended connections between conductive parts. (<b>e</b>) An extraneous copper connection leading to an undesired short. (<b>f</b>) Unwanted copper residues left on the PCB.</p>
Full article ">Figure 10
<p>SGRS-YoloV5n and YoloV5n training comparison.</p>
Full article ">Figure 11
<p>Confusion matrix comparison. (<b>a</b>) Training confusion matrix of YoloV5n; (<b>b</b>) training confusion matrix of SGRS-YoloV5n.</p>
Full article ">Figure 11 Cont.
<p>Confusion matrix comparison. (<b>a</b>) Training confusion matrix of YoloV5n; (<b>b</b>) training confusion matrix of SGRS-YoloV5n.</p>
Full article ">Figure 12
<p>Detection of defect results. (<b>a</b>) Detected missing holes with confidence scores of 0.77 and 0.86. (<b>b</b>) Detected mouse bites with confidence scores of 0.77 and 0.72. (<b>c</b>) Detected open circuits with confidence scores of 0.81 and 0.78. (<b>d</b>) Detected shorts with confidence scores of 0.85 and 0.89. (<b>e</b>) Detected spurs with confidence scores of 0.82 and 0.60. (<b>f</b>) Detected spurious copper with confidence scores of 0.74 and 0.80.</p>
Full article ">Figure 13
<p>Real-time detection results for edge devices.</p>
Full article ">Figure 14
<p>Before and after cloud detecting. (<b>a</b>) Original image taken by edge device before cloud processing. (<b>b</b>) Detection results after cloud-based processing. The system accurately detects missing holes with confidence scores of 0.71 and 0.69, and a spur with a confidence score of 0.84.</p>
Full article ">Figure 15
<p>Before and after cloud detecting. (<b>a</b>) Original defect characteristics; (<b>b</b>) new defective features.</p>
Full article ">
15 pages, 3271 KiB  
Article
Spiking PointCNN: An Efficient Converted Spiking Neural Network under a Flexible Framework
by Yingzhi Tao and Qiaoyun Wu
Electronics 2024, 13(18), 3626; https://doi.org/10.3390/electronics13183626 - 12 Sep 2024
Viewed by 261
Abstract
Spiking neural networks (SNNs) are generating wide attention due to their brain-like simulation capabilities and low energy consumption. Converting artificial neural networks (ANNs) to SNNs provides great advantages, combining the high accuracy of ANNs with the robustness and energy efficiency of SNNs. Existing [...] Read more.
Spiking neural networks (SNNs) are attracting wide attention due to their brain-like simulation capabilities and low energy consumption. Converting artificial neural networks (ANNs) to SNNs provides great advantages, combining the high accuracy of ANNs with the robustness and energy efficiency of SNNs. Existing point cloud processing SNNs have two issues to be solved: first, they lack a specialized surrogate gradient function; second, they are not robust enough to process real-world datasets. In this work, we present a high-accuracy converted SNN for 3D point cloud processing. Specifically, we first revise and redesign the Spiking X-Convolution module based on the X-transformation. To address the problem of the non-differentiable activation function arising from the binary signal of spiking neurons, we propose an effective adjustable surrogate gradient function, which can fit various models well by tuning its parameters. Additionally, we introduce a versatile ANN-to-SNN conversion framework enabling modular transformations. Based on this framework and the spiking X-Convolution module, we design the Spiking PointCNN, a highly efficient converted SNN for processing 3D point clouds. We conduct experiments on the public 3D point cloud datasets ModelNet40 and ScanObjectNN, on which our proposed model achieves excellent accuracy. Code will be available on GitHub. Full article
(This article belongs to the Special Issue Artificial Intelligence in Image Processing and Computer Vision)
Show Figures

Figure 1

Figure 1
<p>Structure of X-transformation convolution.</p>
Full article ">Figure 2
<p>Structure of spiking X-transformation convolution module.</p>
Full article ">Figure 3
<p>Comparison of spiking signals and sigmoid and surrogate functions with different k. The left panel shows the functions, while the right one shows their gradients.</p>
Full article ">Figure 4
<p>Comparison of proposed adjustable surrogate gradient function with traditional activation functions.</p>
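The adjustable surrogate gradient compared in Figures 3 and 4 replaces the non-differentiable spike step in the backward pass with a smooth stand-in tuned by a steepness parameter k; the sigmoid-derivative form below is one common choice and an assumption, not necessarily the paper's exact formula.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: binary spike (Heaviside step), non-differentiable."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, k=5.0):
    """Backward pass: derivative of a sigmoid with steepness k, used as a
    smooth stand-in for the step's gradient; larger k -> sharper peak."""
    s = 1.0 / (1.0 + np.exp(-k * (v - threshold)))
    return k * s * (1.0 - s)

v = np.linspace(0.0, 2.0, 201)   # membrane potentials around the threshold
g_soft = surrogate_grad(v, k=2.0)
g_sharp = surrogate_grad(v, k=10.0)
# The gradient peaks at the firing threshold and sharpens as k grows,
# which is what tuning k per model exploits.
```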
Full article ">Figure 5
<p>The process of a numeric matrix being converted to spiking matrix sets.</p>
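One common way to realize the numeric-to-spiking conversion of Figure 5 is rate coding over T time steps; the Bernoulli-sampling scheme below is an assumed illustration, not necessarily the paper's encoder.

```python
import numpy as np

def rate_encode(x, timesteps=8, rng=None):
    """Encode values in [0, 1] as a set of binary spike matrices: at each
    time step a neuron fires with probability equal to its input value."""
    if rng is None:
        rng = np.random.default_rng(0)
    return (rng.random((timesteps, *x.shape)) < x).astype(np.uint8)

x = np.array([[0.9, 0.1],
              [0.5, 0.0]])
spikes = rate_encode(x, timesteps=100)

# The mean firing rate over time approximates the original matrix values.
rates = spikes.mean(axis=0)
```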
Full article ">Figure 6
<p>The complete structure of Spiking PointCNN.</p>
Full article ">
18 pages, 9000 KiB  
Article
Multilevel Geometric Feature Embedding in Transformer Network for ALS Point Cloud Semantic Segmentation
by Zhuanxin Liang and Xudong Lai
Remote Sens. 2024, 16(18), 3386; https://doi.org/10.3390/rs16183386 - 12 Sep 2024
Viewed by 283
Abstract
Effective semantic segmentation of Airborne Laser Scanning (ALS) point clouds is a crucial field of study and influences subsequent point cloud application tasks. Transformer networks have made significant progress in 2D/3D computer vision tasks, exhibiting superior performance. We propose a multilevel geometric feature [...] Read more.
Effective semantic segmentation of Airborne Laser Scanning (ALS) point clouds is a crucial field of study and influences subsequent point cloud application tasks. Transformer networks have made significant progress in 2D/3D computer vision tasks, exhibiting superior performance. We propose a multilevel geometric feature embedding transformer network (MGFE-T), which aims to fully utilize the three-dimensional structural information carried by point clouds and enhance transformer performance in ALS point cloud semantic segmentation. In the encoding stage, we compute the geometric features surrounding the sampling points at each layer and embed them into the transformer workflow. To ensure that the receptive field of the self-attention mechanism and the geometric computation domain maintain a consistent scale at each layer, we propose a fixed-radius dilated KNN (FR-DKNN) search method to address the limitation of traditional KNN search methods in considering the domain radius. In the decoding stage, we aggregate prediction deviations at each level into a unified loss value, enabling multilevel supervision to improve the network’s feature learning ability at different levels. The MGFE-T network can predict the class label of each point in an end-to-end manner. Experiments were conducted on three widely used benchmark datasets. The results indicate that the MGFE-T network achieves superior OA and mF1 scores on the LASDU and DFC2019 datasets and performs well on the ISPRS dataset with imbalanced classes. Full article
Show Figures

Figure 1

Figure 1
<p>MGFE-T Semantic Segmentation Network Architecture.</p>
Full article ">Figure 2
<p>GFE-T/Transformer Block with Residual Connection.</p>
Full article ">Figure 3
<p>GFE-T Module Architecture.</p>
Full article ">Figure 4
<p>Comparison of FR-DKNN with other methods (<span class="html-italic">k</span> = 4, <span class="html-italic">d</span> = 2).</p>
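Figure 4's fixed-radius dilated KNN (FR-DKNN) can be read as dilated neighbour sampling with a hard radius cap, keeping the self-attention receptive field and the geometric computation domain on the same scale; the brute-force sketch below is an assumed illustration of that idea, not the paper's implementation.

```python
import numpy as np

def fr_dknn(points, query, k=4, d=2, radius=1.0):
    """Fixed-radius dilated KNN: sort candidates by distance to the query,
    take every d-th point (dilated sampling) up to k neighbours, and keep
    only those within the fixed radius."""
    dists = np.linalg.norm(points - query, axis=1)
    order = np.argsort(dists)
    dilated = order[::d][:k]                # every d-th neighbour, up to k
    return [i for i in dilated if dists[i] <= radius]

rng = np.random.default_rng(42)
pts = rng.random((50, 3))                   # synthetic point cloud in a unit cube
neighbours = fr_dknn(pts, query=pts[0], k=4, d=2, radius=0.5)
```

Unlike plain KNN, the neighbour set here can shrink in sparse regions instead of stretching beyond the radius, which is the limitation of traditional KNN that the method addresses.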
Full article ">Figure 5
<p>Preview of the LASDU dataset.</p>
Full article ">Figure 6
<p>Preview of the DFC2019 dataset (3 of 110 files).</p>
Full article ">Figure 7
<p>Preview of the ISPRS dataset.</p>
Full article ">Figure 8
<p>Visualization of semantic segmentation results for some regions of the LASDU dataset (the first, second, and third columns are the ground truth, the results of the baseline, and the results of MGFE-T, respectively).</p>
Full article ">Figure 9
<p>Visualization of semantic segmentation results for some regions of the DFC2019 dataset (the first, second, and third columns are the ground truth, the results of the baseline, and the results of MGFE-T, respectively).</p>
Full article ">Figure 10
<p>Visualization of semantic segmentation results for some regions of the ISPRS dataset (the first, second, and third columns are the ground truth, the results of the baseline, and the results of MGFE-T, respectively).</p>
Full article ">Figure 11
<p>Comparison of experimental results for different radius percentiles.</p>
Full article ">
20 pages, 6757 KiB  
Article
A Task Offloading and Resource Allocation Strategy Based on Multi-Agent Reinforcement Learning in Mobile Edge Computing
by Guiwen Jiang, Rongxi Huang, Zhiming Bao and Gaocai Wang
Future Internet 2024, 16(9), 333; https://doi.org/10.3390/fi16090333 - 11 Sep 2024
Viewed by 409
Abstract
Task offloading and resource allocation is a research hotspot in cloud-edge collaborative computing. Many existing pieces of research adopted single-agent reinforcement learning to solve this problem, which has some defects such as low robustness, large decision space, and ignoring delayed rewards. In view [...] Read more.
Task offloading and resource allocation is a research hotspot in cloud-edge collaborative computing. Much existing research has adopted single-agent reinforcement learning to solve this problem, which has defects such as low robustness, a large decision space, and ignoring delayed rewards. In view of the above deficiencies, this paper constructs a cloud-edge collaborative computing model and related task queue, delay, and energy consumption models, and formulates a joint optimization problem for task offloading and resource allocation with multiple constraints. Then, to solve the joint optimization problem, this paper designs a decentralized offloading and scheduling scheme based on “task-oriented” multi-agent reinforcement learning. In this scheme, we present information synchronization protocols and offloading scheduling rules and use edge servers as agents to construct a multi-agent system based on the Actor–Critic framework. To handle delayed rewards, this paper models the offloading and scheduling problem as a “task-oriented” Markov decision process. This process abandons the commonly used equidistant time slot model and instead uses dynamic and parallel slots stepped by task processing time. Finally, an offloading decision algorithm, TOMAC-PPO, is proposed. The algorithm applies proximal policy optimization to the multi-agent system and combines it with the Transformer neural network model to realize the memory and prediction of network state information. Experimental results show that this algorithm has better convergence speed and can effectively reduce the service cost, energy consumption, and task drop rate under high load and high failure rates. For example, the proposed TOMAC-PPO can reduce the average cost by 19.4% to 66.6% compared to other offloading schemes under the same network load. 
In addition, the drop rate of some baseline algorithms with 50 users can reach 62.5% for critical tasks, while the proposed TOMAC-PPO has only 5.5%. Full article
(This article belongs to the Special Issue Convergence of Edge Computing and Next Generation Networking)
Show Figures

Figure 1

Figure 1
<p>Cloud-edge collaboration model for MEC network.</p>
Full article ">Figure 2
<p>Task computing and transmission queues.</p>
Full article ">Figure 3
<p>Synchronization timeout due to disconnection (The blue arrow is the direction of information transmission, and the red cross indicates that the sender and receiver are not connected).</p>
Full article ">Figure 4
<p>Offloading and scheduling when user moves across cell (The numbers ①–⑥ represent the order in which the scheduling rules are executed).</p>
Full article ">Figure 5
<p>Multi-agent system based on the Actor–Critic framework.</p>
Full article ">Figure 6
<p>Task-oriented Markov decision process.</p>
Full article ">Figure 7
<p>Policy network structure.</p>
Full article ">Figure 8
<p>Value network structure.</p>
Full article ">Figure 9
<p>Clipped surrogate objective schematic diagram.</p>
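The objective sketched in Figure 9 is the standard PPO clipped surrogate, L(θ) = E[min(r·A, clip(r, 1 − ε, 1 + ε)·A)], where r is the new-to-old policy probability ratio and A the advantage; a minimal NumPy sketch:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: pessimistic minimum of the unclipped and
    clipped policy-ratio terms, preventing overly large policy updates."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

ratio = np.array([0.5, 1.0, 1.5])      # pi_new / pi_old per sample
advantage = np.array([1.0, 1.0, 1.0])  # positive advantages
obj = ppo_clip_objective(ratio, advantage)
# With A > 0, ratios above 1 + eps are clipped, capping the incentive
# to push the new policy far from the old one.
```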
Full article ">Figure 10
<p>Convergence of the offloading and allocation algorithms: cumulative rewards versus episodes.</p>
Full article ">Figure 11
<p>Average task cost versus user number.</p>
Full article ">Figure 12
<p>Average task cost versus failure probability.</p>
Full article ">Figure 13
<p>Optimization effect of schemes on the key performance metrics of different types of tasks.</p>
Full article ">
25 pages, 1154 KiB  
Article
Digital Transformation and Innovation: The Influence of Digital Technologies on Turnover from Innovation Activities and Types of Innovation
by Anca Antoaneta Vărzaru and Claudiu George Bocean
Systems 2024, 12(9), 359; https://doi.org/10.3390/systems12090359 - 11 Sep 2024
Viewed by 747
Abstract
In today’s competitive and globalized world, innovation is essential for organizational survival, offering a means for companies to address environmental impacts and social challenges. As innovation processes accelerate, managers need to rethink the entire value-creation chain, with digital transformation emerging as a continuous [...] Read more.
In today’s competitive and globalized world, innovation is essential for organizational survival, offering a means for companies to address environmental impacts and social challenges. As innovation processes accelerate, managers need to rethink the entire value-creation chain, with digital transformation emerging as a continuous process of organizational adaptation to the evolving societal landscape. The research question focuses on how digital technologies—such as artificial intelligence, Big Data, cloud computing, industrial and service robots, and the Internet of Things—influence innovation-driven revenues among enterprises within the European Union (EU). The paper examines, using neural network analysis, the specific impact of each digital technology on innovation revenues while exploring how these technologies affect various types of social innovation within organizations. Through cluster analysis, the study identifies patterns among EU countries based on their digital technology adoption, innovation expenditures, and revenues and the proportion of enterprises engaged in innovation activities. The findings highlight the central role of digital technologies in enhancing innovation and competitiveness, with significant implications for managers and policymakers. These results underscore the necessity for companies to strategically integrate digital technologies to sustain long-term competitiveness in the rapidly evolving digital landscape of the EU. Full article
(This article belongs to the Special Issue Digital Transformation and Processes Innovation)
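The cluster analysis described in the abstract groups EU countries by their digital-technology adoption and innovation indicators (see the dendrogram in Figure 4). As a rough illustration only, a single-linkage agglomerative grouping can be sketched in plain Python; the country profiles below (shares of enterprises using AI, cloud computing, and IoT) are invented for illustration and are not Eurostat figures:

```python
def single_linkage_clusters(points, k):
    """Greedy single-linkage agglomerative clustering into k groups.

    Repeatedly merges the two clusters whose closest members are
    nearest (squared Euclidean distance) until k clusters remain.
    """
    clusters = [[i] for i in range(len(points))]

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist2(points[p], points[q])
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

# Hypothetical country profiles: (AI %, cloud %, IoT %) adoption rates.
profiles = [(8, 40, 25), (9, 42, 27), (30, 75, 60), (28, 70, 58), (15, 50, 35)]
groups = single_linkage_clusters(profiles, k=2)
```

On this toy data the sketch separates the low-adoption profiles (indices 0, 1, 4) from the high-adoption ones (2, 3), which is the kind of country grouping a dendrogram cut produces.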
Figure 1: Research stages.
Figure 2: MLP model of the influences of digital technologies on turnover from innovation activities. Source: authors’ design using SPSS v.27.
Figure 3: MLP model of the influences of digital technologies on types of innovation. Source: authors’ design using SPSS v.27.
Figure 4: Dendrogram. Source: authors’ design using SPSS v.27.
17 pages, 1343 KiB  
Review
The State of the Art of Digital Twins in Health—A Quick Review of the Literature
by Leonardo El-Warrak and Claudio M. de Farias
Computers 2024, 13(9), 228; https://doi.org/10.3390/computers13090228 - 11 Sep 2024
Viewed by 707
Abstract
A digital twin can be understood as a representation of a real asset: a virtual replica of a physical object, process, or even a system. Virtual models can integrate with all the latest technologies, such as the Internet of Things (IoT), cloud computing, and artificial intelligence (AI). Digital twins have applications in a wide range of sectors, from manufacturing and engineering to healthcare. They have been used in managing healthcare facilities, streamlining care processes, personalizing treatments, and enhancing patient recovery. By analysing data from sensors and other sources, healthcare professionals can develop virtual models of patients, organs, and human systems, experimenting with various strategies to identify the most effective approach. This approach can lead to more targeted and efficient therapies while reducing the risk of collateral effects. Digital twin technology can also be used to generate a virtual replica of a hospital to review operational strategies, capabilities, personnel, and care models, identifying areas for improvement, predicting future challenges, and optimizing organizational strategies. The potential impact of this tool on our society and its well-being is quite significant. This article explores how digital twins are being used in healthcare, discusses the impact of this use, and outlines projections for future research and technology development in the sector.
(This article belongs to the Topic eHealth and mHealth: Challenges and Prospects, 2nd Volume)
Figure 1: PICO strategy.
Figure 2: PRISMA flow diagram of the selection process.
Figure 3: Main axes.
24 pages, 4921 KiB  
Article
DuCFF: A Dual-Channel Feature-Fusion Network for Workload Prediction in a Cloud Infrastructure
by Kai Jia, Jun Xiang and Baoxia Li
Electronics 2024, 13(18), 3588; https://doi.org/10.3390/electronics13183588 - 10 Sep 2024
Viewed by 263
Abstract
Cloud infrastructures are designed to provide highly scalable, pay-as-per-use services to meet the performance requirements of users. Workload prediction in the cloud plays a crucial role in proactive auto-scaling and the dynamic management of resources, enabling fine-grained load balancing and job scheduling through its ability to estimate upcoming workloads. However, due to users’ diverse usage demands, the changing characteristics of workloads have become more and more complex, including not only short-term irregular fluctuations but also long-term dynamic variations. This prevents existing workload-prediction methods from fully capturing these characteristics, degrading prediction accuracy. To deal with these problems, this paper proposes a framework based on a dual-channel temporal convolutional network and transformer (referred to as DuCFF) to perform workload prediction. Firstly, DuCFF introduces data preprocessing technology to decouple the different components implied by the workload data and combines them with the original workload to form new model inputs. Then, in a parallel manner, DuCFF adopts the temporal convolutional network (TCN) channel to capture local irregular fluctuations in the workload time series and the transformer channel to capture long-term dynamic variations. Finally, the features extracted from the two channels are fused, and workload prediction is achieved. The performance of the proposed DuCFF was verified on various workload benchmark datasets (i.e., ClarkNet and Google) and compared to nine competitors. Experimental results show that DuCFF achieves average performance improvements of 65.2%, 70%, 64.37%, and 15%, respectively, in terms of Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and R-squared (R2), compared to the baseline model CNN-LSTM.
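The abstract evaluates forecasts with MAE, RMSE, MAPE, and R². As a minimal sketch (not the paper's code), the four metrics can be computed as follows; the request-rate series used here are illustrative, not taken from the ClarkNet or Google traces:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute MAE, RMSE, MAPE (%), and R^2 for a workload forecast."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # MAPE assumes no zero values in the actual series.
    mape = 100.0 * sum(abs(e) / abs(t) for e, t in zip(errors, y_true)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)              # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, mape, r2

# Illustrative request-rate series (invented, not trace data):
actual    = [100.0, 120.0, 90.0, 110.0]
predicted = [ 98.0, 125.0, 85.0, 112.0]
mae, rmse, mape, r2 = regression_metrics(actual, predicted)
```

Lower MAE/RMSE/MAPE and higher R² indicate a better fit, which is the direction of the improvements the paper reports over CNN-LSTM.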
Figure 1: The architecture of DuCFF: (a) main structure, (b) TCN channel, and (c) Transformer channel.
Figure 2: The decomposition effects for three workload datasets using VMD: (a) ClarkNet–HTTP trace data (requests); (b) Google trace data 1 (CPU utilization); (c) Google trace data 2 (CPU utilization).
Figure 3: The implementation process of the moving-window sampling.
Figure 4: A comparison of traditional CNN and TCN.
Figure 5: The calculation process of MSA.
Figure 6: The variation characteristics of workload data collected from ClarkNet and Google traces: (a) CNH; (b) GC1; (c) GC2.
Figure 7: The fitting effects on GC2 for the proposed DuCFF and compared models.
Figure 8: The parameter sensitivity analysis on CNH for the proposed DuCFF in terms of two selected evaluation metrics.
Figure 9: The performance overhead comparisons for all models (time is recorded in seconds).
21 pages, 10483 KiB  
Article
Evading Cyber-Attacks on Hadoop Ecosystem: A Novel Machine Learning-Based Security-Centric Approach towards Big Data Cloud
by Neeraj A. Sharma, Kunal Kumar, Tanzim Khorshed, A B M Shawkat Ali, Haris M. Khalid, S. M. Muyeen and Linju Jose
Information 2024, 15(9), 558; https://doi.org/10.3390/info15090558 - 10 Sep 2024
Viewed by 263
Abstract
The growing industry and its complex and large information sets require Big Data (BD) technology and its open-source frameworks (Apache Hadoop) to (1) collect, (2) analyze, and (3) process the information. This information usually ranges in size from gigabytes to petabytes of data. However, processing this data involves web consoles and communication channels which are prone to intrusion from hackers. To resolve this issue, a novel machine learning (ML)-based security-centric approach has been proposed to evade cyber-attacks on the Hadoop ecosystem while considering the complexity of Big Data in Cloud (BDC). An Apache Hadoop-based management interface, “Ambari”, was implemented to address the variation and distinguish between attacks and normal activities. Experimental results show that the proposed scheme effectively blocked the interface communication and retrieved the measured performance data from both the Ambari-based virtual machine (VM) and the BDC hypervisor. Moreover, the proposed architecture reduced false alarms while improving cyber-attack detection.
(This article belongs to the Special Issue Cybersecurity, Cybercrimes, and Smart Emerging Technologies)
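The paper's pipeline trains ML classifiers (e.g., the PART algorithm compared in Figure 13) on Ambari performance counters to separate attacks from normal activity. As a deliberately simplified, hypothetical stand-in for that pipeline, the sketch below flags sliding windows of a VM's CPU-utilization trace whose average exceeds a threshold, mimicking the saturation a LOIC-style flood produces; the window size and threshold are invented parameters, not values from the paper:

```python
def detect_dos_windows(cpu_series, window=3, threshold=90.0):
    """Flag sliding windows whose average CPU utilization exceeds a threshold.

    Returns one boolean per window: True marks a suspected DoS interval.
    This is a toy rule, not the paper's trained classifier.
    """
    flags = []
    for i in range(len(cpu_series) - window + 1):
        avg = sum(cpu_series[i:i + window]) / window
        flags.append(avg > threshold)
    return flags

# Illustrative trace (invented): idle VM, then a flood saturates the CPU.
trace = [12.0, 15.0, 11.0, 96.0, 98.0, 97.0, 95.0]
flags = detect_dos_windows(trace, window=3, threshold=90.0)
```

A real detector would replace the threshold rule with a classifier trained on labeled pre-attack and during-attack counters, which is what lets the paper's approach cut false alarms.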
Figure 1: BD gaps and loopholes. Here, MapReduce and HDFS refer to the big data analysis model that processes data sets using a parallel algorithm on computer clusters and the Hadoop Distributed File System, respectively.
Figure 2: Graphical abstract of BDC and security vulnerabilities.
Figure 3: BDC—ingredients and basis. In this figure, SaaS, PaaS, and IaaS are the acronyms of software as a service, platform as a service, and infrastructure as a service, respectively.
Figure 4: Hadoop ecosystem—an infrastructure. Here, HDFS is the acronym of Hadoop Distributed File System.
Figure 5: Experimental design.
Figure 6: Ambari-based web interface pre-attack.
Figure 7: Ambari-based web interface during an attack.
Figure 8: Attack performed on VM port 8080 with Java LOIC.
Figure 9: Hadoop VM performance graph—generated attack using Java LOIC [28].
Figure 10: Hadoop VM attack—running RTDoS (Rixer) on default HTTP port 80.
Figure 11: Hadoop VM during RTDoS attack (Rixer)—CPU performance and trends [24].
Figure 12: Graphical presentation—an ML-driven workflow.
Figure 13: Percentage-based comparative analysis. From left to right, the comparison is made between references [77,78,79,80,81] and the proposed PART algorithm, respectively.