
 
 

Advances in Edge Computing for Internet of Things

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 April 2025 | Viewed by 7405

Special Issue Editors


Guest Editor
Electrical and Electronics Engineering, Chung-Ang University, Seoul, Republic of Korea
Interests: system virtualization; data center networking; fog/edge/cloud computing; machine learning
Special Issues, Collections and Topics in MDPI journals

Guest Editor
School of Electronics Engineering, Kyungpook National University, Daegu, Republic of Korea
Interests: server virtualization; cloud computing; Internet of Things

Guest Editor
Information and Telecommunication Engineering, Incheon National University, Incheon, Republic of Korea
Interests: operating systems; cloud computing; microservices and failure issues; Internet of Things and mobile system security

Special Issue Information

Dear Colleagues,

The prevailing approach for most Internet-based applications is to use remote cloud data center resources: data from Internet of Things (IoT) devices, such as smartphones, wearables, and sensors, is sent to distant cloud servers for processing and storage. However, as billions of IoT devices come online, the resulting communication delays can negatively impact Quality of Service (QoS). An alternative is to place computing resources closer to IoT devices, reducing the amount of data sent to the cloud and minimizing communication delays. Current research therefore focuses on decentralizing some data center resources and relocating them to the network edge, near users and sensors. This approach is commonly referred to as edge computing.

This Special Issue invites submissions encompassing a broad array of topics related to edge computing for IoT, including, but not limited to:

  • Intelligent real-time data analytics for IoT in edge computing.
  • Machine learning methodologies to enhance IoT within edge computing.
  • Communication protocols and network architectures tailored to IoT with edge computing.
  • The cohesive design of communication and computing for IoT based on edge computing.
  • Theoretical frameworks and models for IoT rooted in edge computing.
  • Security and privacy within IoT systems leveraging edge computing.
  • Data management, decision support, and innovative services in IoT enabled by edge computing.
  • The design and implementation of QoS-aware IoT frameworks in edge computing.
  • Innovative microservice-based architectures and supporting mechanisms in IoT-powered edge computing environments.

Dr. Cheol-Ho Hong
Dr. Kyungwoon Lee
Dr. Youngpil Kim
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things (IoT)
  • Quality of Service (QoS)
  • edge computing
  • machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

18 pages, 3504 KiB  
Article
Optimizing V2X Communication: Spectrum Resource Allocation and Power Control Strategies for Next-Generation Wireless Technologies
by Ali M. A. Ibrahim, Zhigang Chen, Yijie Wang, Hala A. Eljailany and Aridegbe A. Ipaye
Appl. Sci. 2024, 14(2), 531; https://doi.org/10.3390/app14020531 - 8 Jan 2024
Cited by 4 | Viewed by 3666
Abstract
The upcoming wireless technology developments in the next generations are expected to substantially transform the vehicle-to-everything (V2X) communication network. The challenge of limited spectrum resources in V2X communication, caused by the need for high data rates, necessitates a thorough analysis of spectrum resource allocation and power control. This complex problem falls under the domain of mixed-integer nonlinear programming; to make it tractable, the main challenge is divided into two sub-problems. The resource-allocation issue is addressed with a multiaccess spectrum allocation method designed to optimize the utilization of the currently accessible spectrum resources. Concurrently, the power control issue is resolved by employing a continuous convex approximation technique, which converts the non-convex power-allocation problem into a convex equivalent and helps to alleviate interference between users. Finally, the simulation results show that the proposed approaches can improve vehicle performance. The algorithms proposed in this article significantly improve the system throughput and access rate of vehicular user equipment (VUEs) while ensuring the data rate of cellular user equipment (CUEs).
(This article belongs to the Special Issue Advances in Edge Computing for Internet of Things)
Figures

  • Figure 1: System model.
  • Figure 2: Flowchart of the resource-allocation algorithm.
  • Figure 3: Comparison of VUEs' un-access rate.
  • Figure 4: Comparison of the VUEs' throughput.
  • Figure 5: Relationship between the CUEs quantity Y and VUE throughput.
  • Figure 6: Relationship of throughput between Ψ^size (data rate) and VUE systems.
  • Figure 7: Relationship between vehicle speed and VUEs' throughput.
  • Figure 8: Relationship between the number of subcarriers V and the VUEs' throughput.
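The abstract above describes splitting the problem into a spectrum-allocation step and a power-control step. As a loose, hypothetical illustration of the allocation step only (the function names, the greedy heuristic, and the toy rate model below are our own simplifications, not the paper's algorithm):

```python
import math

def greedy_spectrum_allocation(gains, max_vues_per_channel):
    """Assign each VUE to the channel with the best gain, subject to a
    per-channel access limit. gains[v][c] is the channel gain of VUE v
    on channel c. Returns a dict: channel -> list of VUE indices."""
    allocation = {c: [] for c in range(len(gains[0]))}
    # Consider VUEs in order of their best achievable gain (strongest first).
    order = sorted(range(len(gains)), key=lambda v: -max(gains[v]))
    for v in order:
        # Channels this VUE may still join, best gain first.
        candidates = sorted(
            (c for c in allocation if len(allocation[c]) < max_vues_per_channel),
            key=lambda c: -gains[v][c],
        )
        if candidates:
            allocation[candidates[0]].append(v)
    return allocation

def sum_rate(gains, allocation, power=1.0, noise=1e-3):
    """Shannon sum rate (bit/s/Hz) with co-channel VUEs interfering."""
    total = 0.0
    for c, vues in allocation.items():
        for v in vues:
            interference = sum(power * gains[u][c] for u in vues if u != v)
            sinr = power * gains[v][c] / (noise + interference)
            total += math.log2(1.0 + sinr)
    return total

# Toy usage: three VUEs, two channels, at most two VUEs per channel.
gains = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.4]]
alloc = greedy_spectrum_allocation(gains, max_vues_per_channel=2)
```

The paper's actual formulation also jointly controls transmit power via successive convex approximation; this sketch fixes the power to a constant purely to show the shape of the allocation sub-problem.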
16 pages, 768 KiB  
Article
Impact of Secure Container Runtimes on File I/O Performance in Edge Computing
by Kyungwoon Lee, Jeongsu Kim, Ik-Hyeon Kwon, Hyunchan Park and Cheol-Ho Hong
Appl. Sci. 2023, 13(24), 13329; https://doi.org/10.3390/app132413329 - 18 Dec 2023
Cited by 1 | Viewed by 1854
Abstract
Containers enable high performance and easy deployment due to their lightweight architecture, thus facilitating resource utilization in edge computing nodes. Secure container runtimes have attracted significant attention because of the necessity of overcoming the security vulnerabilities of containers. As these runtimes adopt an additional layer, such as virtual machines or user-space kernels, to enforce isolation, container performance can be degraded. Even though previous studies presented experimental results on performance evaluations of secure container runtimes, they lack a detailed analysis of the root causes that affect the performance of the runtimes. This paper explores the architecture of three secure container runtimes in detail: Kata containers, gVisor, and Firecracker. We focus on file I/O, which is one of the key aspects of container performance. In addition, we present the results of user- and kernel-level profiling and reveal the major factors that impact the file I/O performance of the runtimes. As a result, we observe three key findings: (1) Firecracker shows the highest file I/O performance, as it allows for utilizing the page cache inside VMs; (2) Kata containers offer the lowest file I/O performance, consuming the largest amount of CPU resources; and (3) gVisor scales well as the block size increases, because its file I/O requests are mainly handled by the host OS, similar to native applications.
(This article belongs to the Special Issue Advances in Edge Computing for Internet of Things)
Figures

  • Figure 1: Different architectures of representative secure container runtimes: (a) Kata containers, (b) gVisor, and (c) Firecracker.
  • Figure 2: File operations of Kata containers: (a) overview, and (b) symbol-level analysis of the file I/O stack.
  • Figure 3: File operations of gVisor: (a) overview, and (b) symbol-level analysis of the file I/O stack.
  • Figure 4: File operations of Firecracker: (a) overview, and (b) symbol-level analysis of the file I/O stack.
  • Figure 5: Sequential file I/O performance of runc, Kata containers (Kata), gVisor, and Firecracker (FC) with different block sizes: (a) sequential read, and (b) sequential write.
  • Figure 6: CPU usage in processing sequential file I/O operations under runc (R), Kata containers (K), gVisor (G), and Firecracker (F): (a) sequential read, and (b) sequential write.
  • Figure 7: Random file I/O performance of runc, Kata containers (Kata), gVisor, and Firecracker (FC) with different block sizes: (a) random read, and (b) random write.
  • Figure 8: CPU usage in processing random file I/O operations under runc (R), Kata containers (K), gVisor (G), and Firecracker (F): (a) random read, and (b) random write.
  • Figure 9: Symbol-level profiling of I/O processing in Kata containers.
  • Figure 10: Symbol-level profiling of I/O processing in gVisor.
  • Figure 11: Symbol-level profiling of I/O processing in Firecracker.
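Studies of this kind typically run a dedicated benchmark inside each runtime and sweep the I/O block size. As a minimal, hypothetical stand-in for such a sweep (the helper name, sizes, and file layout below are illustrative, not the authors' harness), a sketch of a sequential-write measurement:

```python
import os
import tempfile
import time

def sequential_write_throughput(path, total_bytes, block_size):
    """Write total_bytes to path in block_size chunks and return MiB/s.
    No attempt is made to bypass the page cache, so results largely
    reflect page-cache behavior, one of the effects the paper profiles."""
    buf = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_bytes // block_size):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force dirty pages to the device
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

# Sweep a few block sizes, mirroring the shape of the paper's experiment.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "testfile")
    for bs in (4096, 65536, 1048576):
        mib_s = sequential_write_throughput(target, 8 * 1048576, bs)
        print(f"block size {bs:>7}: {mib_s:8.1f} MiB/s")
```

Running the same script under runc, Kata containers, gVisor, and Firecracker would expose the per-runtime differences the paper analyzes; a production benchmark would use a tool such as fio with direct I/O options instead.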
24 pages, 2617 KiB  
Article
An Approach for Deployment of Service-Oriented Simulation Run-Time Resources
by Zekun Zhang, Yong Peng, Miao Zhang, Quanjun Yin and Qun Li
Appl. Sci. 2023, 13(20), 11341; https://doi.org/10.3390/app132011341 - 16 Oct 2023
Viewed by 1037
Abstract
The requirements for low latency and high stability in large-scale geo-distributed training simulations have made cloud-edge collaborative simulation an emerging trend. However, there is currently limited research on how to deploy simulation run-time resources (SRR), including edge servers, simulation services, and simulation members. On one hand, the deployment schemes of these resources are coupled and affect one another, so deploying them separately makes it difficult to ensure an overall optimum. On the other hand, low latency and high system stability are often difficult to achieve simultaneously, because high stability implies low server load, while a small number of simulation services implies high response latency. We formulate this problem as a multi-objective optimization problem for the joint deployment of SRR, considering the complex combinatorial relationship between simulation services. Our objective is to minimize the system time cost and resource usage rate of edge servers under constraints such as server resource capacity and the relationship between edge servers and base stations. To address this problem, we propose a learnable genetic algorithm for SRR deployment (LGASRD), in which the population can learn from elites and adaptively select evolution operators that perform well. Extensive experiments with different settings based on real-world data sets demonstrate that LGASRD outperforms the baseline policies in terms of optimality, feasibility, and convergence rate, verifying its effectiveness when deploying SRR.
(This article belongs to the Special Issue Advances in Edge Computing for Internet of Things)
Figures

  • Figure 1: System architecture.
  • Figure 2: Deployment schemes for different objects: (a) edge server deployment; (b) SRSR deployment; (c) joint SRR deployment.
  • Figure 3: Algorithm performance comparison and iteration process of three evolutionary algorithms under experimental settings A, B and C.
  • Figure 4: Algorithm performance comparison and iteration process of three evolutionary algorithms under experimental settings D, E and F.
  • Figure 5: Algorithm performance comparison and iteration process of three evolutionary algorithms under experimental settings G, H and I.
  • Figure 6: Algorithm performance comparison under experimental settings J, K and L.
  • Figure 7: Algorithm performance comparison of three evolutionary algorithms under the resource-constrained scenario.
  • Figure 8: Three types of time cost defined in Section 4.2.
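LGASRD itself is not reproduced here; as a generic, hypothetical analogue of its two core ideas (children learning from elites, and adaptively favoring evolution operators that have recently performed well), a minimal genetic algorithm over bit-string genomes:

```python
import random

def adaptive_ga(fitness, genome_len, pop_size=30, generations=100, seed=0):
    """Toy GA: keeps an elite, does elite-biased crossover, and picks one
    of two mutation operators with probability proportional to how often
    that operator has improved a child. Maximizes fitness. This is an
    illustration of the adaptive-operator idea, not the paper's LGASRD."""
    rng = random.Random(seed)
    ops = {"flip_one": 1.0, "flip_many": 1.0}  # operator -> success score

    def mutate(genome, op):
        g = genome[:]
        k = 1 if op == "flip_one" else max(1, genome_len // 4)
        for i in rng.sample(range(genome_len), k):
            g[i] ^= 1
        return g

    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[0]
        children = pop[: pop_size // 2]  # survival of the better half
        while len(children) < pop_size:
            # Roulette-pick an operator proportionally to its success score.
            total = sum(ops.values())
            r, acc = rng.uniform(0, total), 0.0
            for op, score in ops.items():
                acc += score
                if r <= acc:
                    break
            # Children learn from the elite: crossover with it, then mutate.
            parent = rng.choice(pop[: pop_size // 2])
            child = [e if rng.random() < 0.5 else p for e, p in zip(elite, parent)]
            child = mutate(child, op)
            if fitness(child) > fitness(parent):
                ops[op] += 1.0  # reward operators that produce improvements
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Usage: maximize the number of ones, a stand-in for the deployment objective.
best = adaptive_ga(fitness=sum, genome_len=20)
```

In the paper's setting the genome would encode a joint SRR deployment (server sites, service placement, member assignment) and the fitness would combine time cost and resource usage under the capacity constraints; the bit-string objective above is only a placeholder.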