Feature Papers in the Internet of Things Section 2022

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 81274

Special Issue Editors


Guest Editor
Institute for Informatics and Telematics (IIT), National Research Council of Italy (CNR), Via G. Moruzzi, 1, I-56124 Pisa, Italy
Interests: MAC protocols for wireless networks; architectures and protocols for the Internet of Things; vehicular networks; 5G networks; smart transportation; smart grids and smart buildings

Guest Editor
Dipartimento di Ingegneria Elettrica e delle Tecnologie dell’Informazione, Università degli Studi di Napoli Federico II, 80125 Naples, Italy
Interests: communication systems and networks test and measurement; measurements for Internet of Things applications; compressive sampling based measurements; measurements for Industry 4.0; measurement uncertainty

Guest Editor
Mobile Multimedia Laboratory, Department of Informatics, School of Information Sciences and Technology, Athens University of Economics and Business, 104 34 Athens, Greece
Interests: access control; blockchain technologies; cryptography; information-centric networking; IoT; privacy; security; web technologies

Guest Editor
Division of Network and Systems Engineering, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
Interests: security of IoT; IIoT; cyber-physical systems and smart grids, especially LoRaWAN networks

Special Issue Information

Dear Colleagues,

We are pleased to announce that the Section Internet of Things is now compiling a collection of papers submitted by the Editorial Board Members (EBMs) of our section and outstanding scholars in this research field. We welcome contributions as well as recommendations from EBMs.

We expect original papers and review articles showing state-of-the-art theoretical and applicative advances, new experimental discoveries, and novel technological improvements regarding the Internet of Things. We expect these papers to be widely read and highly influential within the field. All papers in this Special Issue will be collected into a printed edition book after the deadline and will be actively promoted.

We would also like to take this opportunity to call on more excellent scholars to join the Section Internet of Things so that we can work together to further develop this exciting field of research.

Dr. Raffaele Bruno
Prof. Dr. Leopoldo Angrisani
Dr. Nikos Fotiou
Dr. Ismail Butun
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (21 papers)


Research


20 pages, 8326 KiB  
Article
Power Efficient Machine Learning Models Deployment on Edge IoT Devices
by Anastasios Fanariotis, Theofanis Orphanoudakis, Konstantinos Kotrotsios, Vassilis Fotopoulos, George Keramidas and Panagiotis Karkazis
Sensors 2023, 23(3), 1595; https://doi.org/10.3390/s23031595 - 1 Feb 2023
Cited by 10 | Viewed by 4572
Abstract
Computing has undergone a significant transformation over the past two decades, shifting from a machine-based approach to a human-centric, virtually invisible service known as ubiquitous or pervasive computing. This change has been achieved by incorporating small embedded devices into a larger computational system, connected through networking and referred to as edge devices. When these devices are also connected to the Internet, they are generally named Internet-of-Things (IoT) devices. Developing Machine Learning (ML) algorithms on these types of devices allows them to provide Artificial Intelligence (AI) inference functions such as computer vision and pattern recognition. However, this capability is severely limited by the device’s resource scarcity: embedded devices have limited computational and power resources available while they must maintain a high degree of autonomy. While there are several published studies that address the computational weakness of these small systems, mostly through optimization and compression of neural networks, they often neglect the power consumption and efficiency implications of these techniques. This study presents power-efficiency experimental results from the application of well-known and proven optimization methods to a set of well-known ML models. The results are presented in a meaningful manner considering the “real-world” functionality of the devices, and they are compared with the basic “idle” power consumption of each of the selected systems. Two systems with completely different architectures and capabilities were used, providing results that led to interesting conclusions about the power efficiency of each architecture. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
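The study's comparison against each board's idle draw amounts to a net energy-per-inference calculation; a minimal sketch under hypothetical readings (the function name and all numbers are illustrative, not the paper's measurements):

```python
def net_energy_per_inference(p_active_mw, p_idle_mw, t_batch_s, n_inferences):
    """Energy attributable to inference alone, excluding the board's idle draw.

    p_active_mw:  average power while the batch of inferences runs (mW)
    p_idle_mw:    baseline power of the idle board (mW)
    t_batch_s:    wall-clock time of the whole batch (s)
    n_inferences: number of inferences in the batch
    Returns millijoules per single inference.
    """
    net_mw = p_active_mw - p_idle_mw        # power the workload itself adds
    return net_mw * t_batch_s / n_inferences

# Hypothetical readings for two boards running the same quantized model:
esp32_mj = net_energy_per_inference(180.0, 40.0, t_batch_s=2.0, n_inferences=100)
stm32_mj = net_energy_per_inference(90.0, 25.0, t_batch_s=3.5, n_inferences=100)
```

Subtracting the idle baseline matters because a board with a higher absolute draw may still spend less marginal energy per inference.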
Show Figures
  • Figure 1: Keysight 34465A DMM and Bench PSU.
  • Figure 2: DMM Connection Diagram for Test Board: (a) ESP32; (b) STM32H7.
  • Figure 3: ESP32 Development Board.
  • Figure 4: STM32H743-Nucleo Development Board.
  • Figure 5: The selected Models: (a) LeNet-5; (b) Sine Calculator Model; (c) MobileNet-025; (d) IPS Model.
  • Figure 6: STM32H7 Idle consumption on Arduino Core.
  • Figure 7: Uncompressed LeNet-5 Model Inference consumption: (a) for ESP32; (b) for STM32H7.
  • Figure 8: Quantized LeNet-5 Model Inference consumption on STM32H7.
  • Figure 9: Sine Wave Prediction Model: (a) Unoptimized; (b) Post Quantized; (c) Quantization Aware Trained.
  • Figure 10: MobileNet-025 Inference Power consumption.
  • Figure 11: Uncompressed IPS Model Inference consumption: (a) for ESP32; (b) for STM32H7.
18 pages, 6913 KiB  
Article
Inertial Sensor-Based Sport Activity Advisory System Using Machine Learning Algorithms
by Justyna Patalas-Maliszewska, Iwona Pajak, Pascal Krutz, Grzegorz Pajak, Matthias Rehm, Holger Schlegel and Martin Dix
Sensors 2023, 23(3), 1137; https://doi.org/10.3390/s23031137 - 19 Jan 2023
Cited by 12 | Viewed by 2186
Abstract
The aim of this study was to develop a physical activity advisory system supporting the correct implementation of sport exercises using inertial sensors and machine learning algorithms. Specifically, three mobile sensors (tags), six stationary anchors and a system-controlling server (gateway) were employed for 15 scenarios of series of subsequent activities, namely squats, pull-ups and dips. The proposed solution consists of two modules: an activity recognition module (ARM) and a repetition-counting module (RCM). The former is responsible for extracting the series of subsequent activities (the so-called scenario), and the latter determines the number of repetitions of a given activity in a single series. The data used in this study contained 488 occurrences of the three defined sport activities. Data processing was conducted to enhance performance, comparing overlapping and non-overlapping windows, raw and normalized data, a convolutional neural network (CNN) with an additional post-processing block (PPB), and repetition counting. The developed system achieved satisfactory accuracy: CNN + PPB: non-overlapping window and raw data, 0.88; non-overlapping window and normalized data, 0.78; overlapping window and raw data, 0.92; overlapping window and normalized data, 0.87. For repetition counting, the achieved accuracies were 0.93 and 0.97 within an error of ±1 and ±2 repetitions, respectively. The achieved results indicate that the proposed system could be a helpful tool to support the correct implementation of sport exercises and could be successfully implemented in further work in the form of a web application detecting the user’s sport activity. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
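The overlapping versus non-overlapping segmentation the authors compare can be sketched as a plain sliding-window split (window size and overlap here are illustrative, not the paper's settings):

```python
def make_windows(signal, size, overlap):
    """Split a 1-D sensor signal into fixed-size windows.

    overlap = 0 yields non-overlapping windows; overlap > 0 makes
    consecutive windows share that many samples, as in the
    overlapping-window variant of the recognition module.
    """
    step = size - overlap                      # stride between window starts
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

samples = list(range(10))
non_overlapping = make_windows(samples, size=4, overlap=0)   # 2 windows
overlapping = make_windows(samples, size=4, overlap=2)       # 4 windows
```

Overlapping windows multiply the number of training and inference examples from the same signal, which is consistent with the higher accuracies the abstract reports for the overlapping variant.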
Show Figures
  • Figure 1: Schematic representation of the advisory system supporting the correct implementation of sport exercises, namely squats, pull-ups and dips.
  • Figure 2: Raw acceleration signals from a sensor placed on the chest of a participant performing three pull-ups, five dips and five squats.
  • Figure 3: (a) Distribution of total time; (b) distribution of time per repetition of a given activity.
  • Figure 4: The scheme of the validation module.
  • Figure 5: The general structure of the CNN classifier.
  • Figure 6: An exemplary result of live signal processing.
  • Figure 7: CNN output signal transformed by PPB filters.
  • Figure 8: Acceleration signals from chest and hand sensors before and after filtering.
16 pages, 1100 KiB  
Article
Technological Transformation of Telco Operators towards Seamless IoT Edge-Cloud Continuum
by Kasim Oztoprak, Yusuf Kursat Tuncel and Ismail Butun
Sensors 2023, 23(2), 1004; https://doi.org/10.3390/s23021004 - 15 Jan 2023
Cited by 16 | Viewed by 3077
Abstract
This article investigates and discusses challenges in the telecommunication field from multiple perspectives, catering to both academia and industry, and surveys the main points of technological transformation toward the edge-cloud continuum from the view of a telco operator to show the complete picture, including the evolution of cloud-native computing, Software-Defined Networking (SDN), and network automation platforms. The cultural shift in software development and management with DevOps enabled the development of significant technologies in the telecommunication world, including network equipment, application development, and system orchestration. The effect of this cultural shift on the application area, especially from the IoT point of view, is investigated. The enormous change in service diversity and delivery capabilities to mass devices is also discussed. During the last two decades, desktop and server virtualization has played an active role in the Information Technology (IT) world. With the use of OpenFlow, SDN, and Network Functions Virtualization (NFV), the network revolution got underway. The shift from monolithic application development and deployment to micro-services changed the whole picture. Meanwhile, data centers have evolved over several generations to the point where the control plane cannot cope with the networks without an intelligent decision-making process benefiting from AI/ML techniques. AI also enables operators to forecast demand more accurately, anticipate network load, and adjust capacity and throughput automatically. Going one step further, zero-touch networking and service management (ZSM) is proposed to turn high-level human intents into low-level configurations for network elements with validated results, minimizing the ratio of faults caused by human intervention. Harmonizing all this progress in different communication technologies has enabled the successful use of edge computing. Low-powered (from both energy and processing perspectives) IoT networks have disrupted the customer and end-point demands within the sector and, as such, paved the path towards devising the edge computing concept, which completes the whole picture of the edge-cloud continuum. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures
  • Figure 1: The word cloud of the cloud-edge continuum concept.
  • Figure 2: The change in capacity demand, latency, and services in the telecommunication world [8].
  • Figure 3: VM Architecture vs. Cloud-Native Functions.
  • Figure 4: Blockchain for Telco Operators (inspired by [27]).
  • Figure 5: Mind map for the Transformation of the Networks [8].
17 pages, 5928 KiB  
Article
A Low-Cost Hardware Architecture for EV Battery Cell Characterization Using an IoT-Based Platform
by Rafael Martínez-Sánchez, Ángel Molina-García, Alfonso P. Ramallo-González, Juan Sánchez-Valverde and Benito Úbeda-Miñarro
Sensors 2023, 23(2), 816; https://doi.org/10.3390/s23020816 - 10 Jan 2023
Cited by 5 | Viewed by 2515
Abstract
Since 1997, when the first hybrid vehicle was launched on the market, the number of NiMH batteries discarded due to obsolescence has not stopped increasing, at an even faster rate more recently due to the progressive disappearance of thermal vehicles from the market. The battery technologies used are mostly NiMH for hybrid vehicles and Li-ion for pure electric vehicles, making recycling difficult due to the hazardous materials they contain. For this reason, and with the aim of extending the life of batteries, even including a second life within electric vehicle applications, this paper describes and evaluates a low-cost system to characterize individual cells of commercial electric vehicle batteries by identifying abnormally performing cells that are out of use, minimizing regeneration costs in a more sustainable manner. A platform based on IoT technology is developed, allowing the automation of charging and discharging cycles of each independent cell according to parameters given by the user, and monitoring the real-time data of such battery cells. A case study based on a commercial Toyota Prius battery is also included in the paper. The results show the suitability of the proposed solution as an alternative way to characterize individual cells for subsequent electric vehicle applications, decreasing operating costs and providing an autonomous, flexible, and reliable system. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
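The automated per-cell cycling the platform performs can be sketched as a threshold-driven control loop. All names, the millivolt-based interface, and the fake cell model below are hypothetical stand-ins; the real system reads current/voltage via an INA219 and drives the cell through a MCP4725:

```python
def run_charge_cycle(read_mv, set_current_ma, threshold_mv, current_ma,
                     dt_s=1.0, max_steps=100000):
    """Charge one cell at constant current until its voltage reaches the
    user-supplied threshold, logging (time, voltage) samples."""
    log = []
    set_current_ma(current_ma)              # start charging
    for step in range(max_steps):
        mv = read_mv()
        log.append((step * dt_s, mv))
        if mv >= threshold_mv:
            break                           # threshold reached: stop the cycle
    set_current_ma(0)                       # always cut the current afterwards
    return log

# Fake cell standing in for the hardware: voltage rises 50 mV per sample.
cell = {"mv": 7200}
def fake_read():
    cell["mv"] += 50
    return cell["mv"]

applied = []
log = run_charge_cycle(fake_read, applied.append, threshold_mv=8000, current_ma=500)
```

The same loop with the comparison inverted (stop when voltage falls below a floor) covers the discharging cycles shown in the figures.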
Show Figures
  • Figure 1: System architecture and its components.
  • Figure 2: INA219. Source: [34].
  • Figure 3: MCP4725. Source: [35].
  • Figure 4: Cell control flow chart.
  • Figure 5: System control flow chart.
  • Figure 6: Designed PCB to control the charging and discharging cycles of each cell.
  • Figure 7: Integrated circuit assembly in a rack with the Arduino card. General overview.
  • Figure 8: Detailed view of the assembly of a designed PCB mounted in the rack.
  • Figure 9: Details of two commercial cells tested in this case study: Toyota Prius battery cells.
  • Figure 10: Toyota Prius battery and proposed system prototype. General overview.
  • Figure 11: Toyota Prius battery and prototype connections to the battery cells.
  • Figure 12: Detailed thermograph of an integrated circuit before the installation of the heat sinks.
  • Figure 13: Results of a discharging cycle at a constant intensity of 500 mA. Individual cell.
  • Figure 14: Results of a charging cycle at a constant intensity of 500 mA. Individual cell with a threshold voltage of 8 V.
  • Figure 15: Results of a charging cycle at a constant intensity of 600 mA. Individual cell with a threshold voltage of 8.1 V.
  • Figure 16: Results of a discharging cycle at different constant intensities: 500 mA, 300 mA, and 200 mA.
  • Figure 17: Results of a discharging cycle at different constant intensities: 600 mA, 400 mA, and 300 mA.
  • Figure 18: Results of a charging cycle at a constant intensity of 600 mA with a threshold voltage of 10 V.
  • Figure 19: Comparison of two cells; the one below was deformed by an over-charge process.
22 pages, 1884 KiB  
Article
Delay-Packet-Loss-Optimized Distributed Routing Using Spiking Neural Network in Delay-Tolerant Networking
by Gandhimathi Velusamy and Ricardo Lent
Sensors 2023, 23(1), 310; https://doi.org/10.3390/s23010310 - 28 Dec 2022
Viewed by 3229
Abstract
Satellite communication is inevitable due to the Internet of Everything and the exponential increase in the usage of smart devices. Satellites have been used in many applications to make human life safe, secure, sophisticated, and more productive. The applications that benefit from satellite communication are Earth observation (EO), military missions, disaster management, and 5G/6G integration, to name a few. These applications rely on the timely and accurate delivery of space data to ground stations. However, the channels between satellites and ground stations suffer attenuation caused by uncertain weather conditions and long delays due to line-of-sight constraints, congestion, and physical distance. Though inter-satellite links (ISLs) and inter-orbital links (IOLs) create multiple paths between satellite nodes, both ISLs and IOLs have the same issues. Some essential applications, such as EO, depend on time-sensitive and error-free data delivery, which needs better throughput connections. It is challenging to route space data to ground stations with better QoS by leveraging the ISLs and IOLs. Routing approaches that use the shortest path to optimize latency may cause packet losses and reduced throughput based on the channel conditions, while routing methods that try to avoid packet losses may end up delivering data with long delays. Existing routing algorithms that use multi-optimization goals tend to use priority-based optimization to optimize either of the metrics. However, critical satellite missions that depend on high-throughput and low-latency data delivery need routing approaches that optimize both metrics concurrently. We used a modified version of Kleinrock’s power metric to reduce delay and packet losses and verified it with experimental evaluations. We used a cognitive space routing approach, which uses a reinforcement-learning-based spiking neural network to implement routing strategies in NASA’s High Rate Delay Tolerant Networking (HDTN) project. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
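Kleinrock's power metric, in its classical form, is throughput divided by delay; the authors modify it to also account for packet loss. A hedged sketch of link selection with a plausible loss-discounted variant (this is an illustrative stand-in, not the paper's exact formula, and the link figures are made up):

```python
def route_power(throughput_bps, delay_s, loss_ratio):
    """Kleinrock-style power, here additionally discounted by the delivery
    ratio. With loss_ratio = 0 this reduces to classical throughput/delay."""
    return throughput_bps * (1.0 - loss_ratio) / delay_s

def best_link(links):
    """Pick the candidate link that maximizes the power metric."""
    return max(links, key=lambda l: route_power(l["tput"], l["delay"], l["loss"]))

candidates = [
    {"name": "ISL-A", "tput": 2.0e6, "delay": 0.30, "loss": 0.05},
    {"name": "IOL-B", "tput": 1.5e6, "delay": 0.10, "loss": 0.02},
]
chosen = best_link(candidates)   # the low-delay IOL wins despite lower throughput
```

A single scalar like this lets the router trade delay against loss concurrently, rather than prioritizing one metric and tie-breaking on the other.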
Show Figures
  • Figure 1: Cognizant controller running on satellites autonomously selects links at each satellite to optimize latency and packet loss in a possible interplanetary network.
  • Figure 2: Space-based information network of satellites and a GS. (a) Topology 1; (b) Topology 2.
  • Figure 3: Routing performance: Scenario 1. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 4: Link selection at Node h26: Scenario 1.
  • Figure 5: Routing performance: Scenario 2. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 6: Link selection at Node h26: Scenario 2.
  • Figure 7: Routing performance: Scenario 3. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 8: Link selection at Node h26: Scenario 3.
  • Figure 9: Routing performance: Scenario 4. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 10: Link selection at Node h26: Scenario 4.
  • Figure 11: Routing performance: Scenario 5. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 12: Link selection at Node h26: Scenario 5.
  • Figure 13: Routing performance: Scenario 6. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 14: Link selection at Node h26: Scenario 6.
  • Figure 15: Routing performance: Scenario 7. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 16: Routing performance: Scenario 8. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 17: Routing performance: Scenario 9. (a) Average response time; (b) packet loss ratio (%); (c) throughput.
  • Figure 18: Path lengths traveled by bundles in Topology 2. (a) Scenario 7; (b) Scenario 8; (c) Scenario 9.
14 pages, 3100 KiB  
Article
Water Meter Reading for Smart Grid Monitoring
by Fabio Martinelli, Francesco Mercaldo and Antonella Santone
Sensors 2023, 23(1), 75; https://doi.org/10.3390/s23010075 - 21 Dec 2022
Cited by 11 | Viewed by 4500
Abstract
Many tasks that require a large workforce are being automated. In many areas of the world, the consumption of utilities, such as electricity, gas and water, is monitored by meters that need to be read by humans. The reading of such meters requires the presence of an employee or a representative of the utility provider. Automatic meter reading is crucial in the implementation of smart grids. For this reason, with the aim of boosting the implementation of the smart grid paradigm, in this paper we propose a method to automatically read digits from a dial meter. In detail, the proposed method aims to localise the dial meter in an image, to detect the digits and to classify them. Deep learning is exploited and, in particular, the YOLOv5s model is considered for the localisation of digits and for their recognition. An experimental real-world case study is presented to confirm the effectiveness of the proposed method for automatic digit localisation and recognition from dial meters. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
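After YOLOv5s localizes and classifies individual digits, a meter reading can be assembled by ordering the detections left to right. A minimal post-processing sketch (the tuple format and the confidence threshold are assumptions for illustration, not the paper's code):

```python
def assemble_reading(detections, min_conf=0.5):
    """Turn per-digit detections into a reading string.

    Each detection is (x_center, confidence, digit_class): boxes below the
    confidence threshold are discarded, and the rest are read left to right.
    """
    kept = [d for d in detections if d[1] >= min_conf]
    kept.sort(key=lambda d: d[0])           # left-to-right order on the dial
    return "".join(str(d[2]) for d in kept)

detections = [(0.72, 0.91, 3), (0.12, 0.88, 0), (0.45, 0.95, 7), (0.90, 0.40, 9)]
reading = assemble_reading(detections)      # the low-confidence 9 is dropped
```

Sorting by horizontal position is what turns an unordered set of per-digit boxes into the positional number shown on the dial.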
Show Figures
  • Figure 1: The workflow of the proposed method for automatic dial meter reading from images.
  • Figure 2: An example of a dial meter with details for the cubic meter quantity (in the black square) and the consumed water litres (in the red square).
  • Figure 3: The results obtained from the experimental analysis.
  • Figure 4: The precision–recall graph.
  • Figure 5: Normalised confusion matrix.
  • Figure 6: Four different examples of water meter detection performed by the proposed method: the first column shows the original image; the second column presents the water meter digits, the related prediction and the time employed for the detection; and the third column shows the image generated by the proposed method, consisting of the overlay of the original image with the details for the counter detection, the litre detection and the digit identification with the related detection percentage.
22 pages, 470 KiB  
Article
Recent Advances in Artificial Intelligence and Tactical Autonomy: Current Status, Challenges, and Perspectives
by Desta Haileselassie Hagos and Danda B. Rawat
Sensors 2022, 22(24), 9916; https://doi.org/10.3390/s22249916 - 16 Dec 2022
Cited by 14 | Viewed by 6284
Abstract
This paper presents the findings of a detailed and comprehensive technical literature review aimed at identifying the current and future research challenges of tactical autonomy. It discusses in great detail the current state-of-the-art powerful artificial intelligence (AI), machine learning (ML), and robot technologies, and their potential for developing safe and robust autonomous systems in the context of future military and defense applications. Additionally, we discuss some of the technical and operational critical challenges that arise when attempting to practically build fully autonomous systems for advanced military and defense applications. Our paper provides the state-of-the-art advanced AI methods available for tactical autonomy. To the best of our knowledge, this is the first work that addresses the important current trends, strategies, critical challenges, tactical complexities, and future research directions of tactical autonomy. We believe this work will greatly interest researchers and scientists from academia and industry working in the field of robotics and the autonomous systems community. We hope this work encourages researchers across multiple disciplines of AI to explore the broader tactical autonomy domain. We also hope that our work serves as an essential step toward designing advanced AI and ML models with practical implications for real-world military and defense settings. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures
  • Figure 1: Brief history and milestones of tactical autonomy.
  • Figure 2: Explainable AI. As presented in Section 8, developing advanced ML techniques to produce explainable models is one direction of our future work. In addition, integrating state-of-the-art explanation interfaces that produce efficient explanations of the underlying models is a challenge we plan to explore in our future work.
  • Figure 3: Requirements and elements of a trustworthy AI [108].
12 pages, 1045 KiB  
Article
Modeling Driver Behavior in Road Traffic Simulation
by Teodora Mecheva, Radoslav Furnadzhiev and Nikolay Kakanakov
Sensors 2022, 22(24), 9801; https://doi.org/10.3390/s22249801 - 14 Dec 2022
Cited by 3 | Viewed by 2350
Abstract
Driver behavior models are an important part of road traffic simulation modeling. They encompass characteristics such as mood, fatigue, and response to distracting conditions. The relationships between external factors and the way drivers perform tasks can also be represented in models. This article proposes a methodology for establishing parameters of driver behavior models. The methodology is based on road traffic data and determines the car-following model and routing algorithm, and their parameters, that best describe driving habits. Sequential and parallel implementations of the methodology, using the urban mobility simulator SUMO and Python, are proposed. Four car-following models and three routing algorithms and their parameters are investigated. The results of the performed simulations prove the applicability of the methodology. Based on more than 7000 simulations performed, it is concluded that in future experiments on the traffic in Plovdiv it is appropriate to use the Contraction Hierarchies routing algorithm with the default routing step and the Krauss car-following model with the default configuration parameters. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
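The calibration loop at the core of such a methodology can be sketched as a grid search over car-following model and routing algorithm combinations, scoring each candidate against observed traffic counts. This is an illustrative sketch only, not the authors' implementation: apart from Krauss and Contraction Hierarchies (named in the abstract), the model and algorithm names, the error metric, and the `fake_run` stand-in are assumptions.

```python
from itertools import product

# Candidate configurations: the abstract names Krauss and Contraction
# Hierarchies (CH); the remaining entries are illustrative placeholders.
CAR_FOLLOWING = ["Krauss", "IDM", "W99", "EIDM"]
ROUTING = ["dijkstra", "astar", "CH"]

def calibration_error(simulated, observed):
    """Mean absolute error between simulated and observed traffic counts."""
    return sum(abs(s - o) for s, o in zip(simulated, observed)) / len(observed)

def calibrate(observed, run_simulation):
    """Grid-search every (car-following, routing) pair and return the
    (error, car-following model, routing algorithm) triple that best
    matches the observed data."""
    best = None
    for cf, rt in product(CAR_FOLLOWING, ROUTING):
        err = calibration_error(run_simulation(cf, rt), observed)
        if best is None or err < best[0]:
            best = (err, cf, rt)
    return best

def fake_run(cf, rt):
    """Stand-in for a real SUMO run; returns fabricated link counts."""
    bias = 0 if (cf, rt) == ("Krauss", "CH") else 5
    return [100 + bias, 120 + bias, 90 + bias]

best_err, best_cf, best_rt = calibrate([100, 120, 90], fake_run)
```

In a real deployment, `run_simulation` would launch SUMO for each candidate configuration (e.g. via TraCI) and compare simulated detector counts against the measured road traffic data.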
Show Figures

Figure 1
<p>Methodology diagram.</p>
Figure 2
<p>Map being used.</p>
27 pages, 1819 KiB  
Article
Applied Machine Learning for IIoT and Smart Production—Methods to Improve Production Quality, Safety and Sustainability
by Attila Frankó, Gergely Hollósi, Dániel Ficzere and Pal Varga
Sensors 2022, 22(23), 9148; https://doi.org/10.3390/s22239148 - 25 Nov 2022
Cited by 9 | Viewed by 3895
Abstract
Industrial IoT (IIoT) has revolutionized production by making data available to stakeholders at many levels much faster, and with much greater granularity, than ever before. When it comes to smart production, the aim of analyzing the collected data is usually to achieve greater efficiency in general, which includes increasing production while decreasing waste and using less energy. Furthermore, the boost in communication provided by IIoT requires special attention to increased levels of safety and security. The growth in machine learning (ML) capabilities in the last few years has affected smart production in many ways. The current paper provides an overview of applying various machine learning techniques for IIoT, smart production, and maintenance, especially in terms of safety, security, asset localization, quality assurance and sustainability aspects. The approach of the paper is to provide a comprehensive overview of the ML methods from an application point of view; hence each domain (security and safety, asset localization, quality control, and maintenance) has a dedicated section, with a concluding table on the typical ML techniques and the related references. The paper summarizes lessons learned and identifies research gaps and directions for future work. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures

Figure 1
<p>The architectural layers of IIoT systems [<a href="#B1-sensors-22-09148" class="html-bibr">1</a>].</p>
Figure 2
<p>Production stages typically covered by the manufacturing process. While machine learning can be applied in all areas of manufacturing, only a small subset of them use machine learning techniques extensively (shown in bold typeface).</p>
Figure 3
<p>Trustworthiness of an IIoT system as specified by the Industrial IoT Consortium [<a href="#B20-sensors-22-09148" class="html-bibr">20</a>]. The key characteristics of a trustworthy IoT system are security, privacy, reliability, safety and resilience.</p>
Figure 4
<p>Typical use-cases for industrial indoor and outdoor asset tracking and localization. Beyond the classical indoor and outdoor use-cases, there are a couple of less-known topics, e.g., tracing the food chain or tracking disposable items (icons from <a href="http://Flaticon.com" target="_blank">Flaticon.com</a>).</p>
Figure 5
<p>General architecture model for vision-based product quality inspection [<a href="#B104-sensors-22-09148" class="html-bibr">104</a>].</p>
Figure 6
<p>Difference between maintenance approaches in terms of condition and time.</p>
Figure 7
<p>Overview of the MANTIS reference architecture [<a href="#B188-sensors-22-09148" class="html-bibr">188</a>].</p>
16 pages, 765 KiB  
Article
Selective Content Retrieval in Information-Centric Networking
by José Quevedo and Daniel Corujo
Sensors 2022, 22(22), 8742; https://doi.org/10.3390/s22228742 - 12 Nov 2022
Cited by 5 | Viewed by 1680
Abstract
Recently, novel networking architectures have emerged to cope with fast-evolving and new Internet utilisation patterns. Information-Centric Networking (ICN) is a prominent example of these architectures. By perceiving content as the core element of the networking functionalities, ICN opens up a whole new avenue of information exchange optimisation possibilities. This paper presents an approach that extends the base operation of ICN and leverages content identification right at the network layer, allowing partial pieces of information to be selectively retrieved from content already present in ICN in-network caches. Additionally, this proposal enables information producers to seamlessly offload some content processing tasks into the network. The concept is discussed and demonstrated through a proof-of-concept prototype targeting an Internet of Things (IoT) scenario, where consumers retrieve specific pieces of the whole information generated by sensors. The obtained results showcase reduced traffic and storage consumption at the core of the network. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
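The selective-retrieval idea can be illustrated with a toy cache node. This is not the paper's CiC protocol: the name scheme, the JSON payloads, and the field-suffix convention are invented for the sketch. On receiving an Interest whose last name component names a field of a cached object, the node derives the partial content locally instead of forwarding the Interest upstream.

```python
import json

# In-network cache: full content objects indexed by their base name.
cache = {
    "/train/telemetry/42": json.dumps(
        {"speed": 81.5, "temp": 23.1, "door": "closed"}
    )
}

def retrieve(interest_name):
    """Serve an Interest; a trailing '/field' component selects one piece
    of a cached content object (selective retrieval)."""
    if interest_name in cache:                       # exact match: whole object
        return cache[interest_name]
    base, _, field = interest_name.rpartition("/")   # e.g. .../42/speed
    if base in cache:                                # derive partial content
        payload = json.loads(cache[base])
        if field in payload:
            return json.dumps({field: payload[field]})
    return None                                      # miss: forward upstream

partial = retrieve("/train/telemetry/42/speed")
```

A consumer asking only for `speed` thus receives a small derived object from the cache, which is the traffic- and storage-saving effect the evaluation measures.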
Show Figures

Figure 1
<p>Paper organization.</p>
Figure 2
<p>Motivational scenario: IoT data (dis)aggregation.</p>
Figure 3
<p>NDN strategies for selective content retrieval: (<b>a</b>) Simple Request; (<b>b</b>) Advanced Request; (<b>c</b>) Simple Response; (<b>d</b>) Advanced Response.</p>
Figure 4
<p>(Dis)aggregation strategies.</p>
Figure 5
<p>CiC forwarding: (<b>a</b>) Interest processing pipeline; (<b>b</b>) Data processing pipeline.</p>
Figure 6
<p>CiC forwarding: message sequence diagram.</p>
Figure 7
<p>Simulation details: (<b>a</b>) Simulation topology (depth = 3); (<b>b</b>) JSON Train Data content.</p>
Figure 8
<p>Evaluation results for different metrics and freshness values: (<b>a</b>) Average cache utilisation; (<b>b</b>) Average number of hops per data packet; (<b>c</b>) Average application-level delay.</p>
19 pages, 674 KiB  
Article
ECO6G: Energy and Cost Analysis for Network Slicing Deployment in Beyond 5G Networks
by Anurag Thantharate, Ankita Vijay Tondwalkar, Cory Beard and Andres Kwasinski
Sensors 2022, 22(22), 8614; https://doi.org/10.3390/s22228614 - 8 Nov 2022
Cited by 9 | Viewed by 2083
Abstract
Fifth-generation (5G) wireless technology promises to be the critical enabler of use cases far beyond smartphones and other connected devices. This next-generation 5G wireless standard represents the changing face of connectivity by enabling elevated levels of automation through continuous optimization of several Key Performance Indicators (KPIs) such as latency, reliability, connection density, and energy efficiency. Mobile Network Operators (MNOs) must promote and implement innovative technologies and solutions to reduce network energy consumption while delivering the high-speed, low-latency services needed to deploy energy-efficient 5G networks with a reduced carbon footprint. This research evaluates an energy-saving method that uses data-driven learning through load estimation for Beyond 5G (B5G) networks. The proposed ‘ECO6G’ model utilizes a supervised Machine Learning (ML) approach to forecast traffic load and uses the estimated load to evaluate the energy efficiency and OPEX savings. The simulation results provide a comparative analysis between traditional time-series forecasting methods and the proposed ML model that utilizes learned parameters. Our ECO6G dataset is captured from measurements on a real-world operational 5G base station (BS). We showcase simulations using our ECO6G model for a given dataset and demonstrate that the proposed model is accurate to within $4.3 million in OPEX over 100,000 BSs over 5 years, whereas the other three data-driven and statistical learning models would increase OPEX cost by $370 million to $1.87 billion under varying network load scenarios. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
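The load-forecasting idea can be reduced to a minimal supervised sketch: fit a one-lag linear predictor to past load samples and use the forecast to decide whether spare capacity can sleep. This is a deliberately simplified stand-in (the ECO6G model is a richer ML forecaster); the toy data and the threshold value are invented.

```python
def fit_lag1(series):
    """Least-squares fit of load[t+1] ~ a*load[t] + b; a minimal stand-in
    for the supervised traffic-load forecaster."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def plan_sleep(series, threshold):
    """Forecast the next load sample and recommend putting spare base
    station capacity to sleep when the forecast stays below the threshold."""
    a, b = fit_lag1(series)
    forecast = a * series[-1] + b
    return forecast, forecast < threshold

load = [40, 42, 44, 46, 48, 50]              # toy hourly load samples (%)
forecast, sleep = plan_sleep(load, threshold=30)
```

Aggregated over many base stations, the gap between such forecasts and actual load is what drives the energy and OPEX comparisons in the abstract.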
Show Figures

Figure 1
<p>ECO6G Framework.</p>
Figure 2
<p>Model Evaluation and Metrics.</p>
Figure 3
<p>Simulation results of Network Load Prediction using Neural Network and Statistical Modeling.</p>
Figure 4
<p>Base station’s typical daily energy usage.</p>
9 pages, 2389 KiB  
Communication
Multiple Fingerprinting Localization by an Artificial Neural Network
by Jaehyun Yoo
Sensors 2022, 22(19), 7505; https://doi.org/10.3390/s22197505 - 3 Oct 2022
Cited by 7 | Viewed by 1792
Abstract
Fingerprinting localization is a promising indoor positioning method thanks to its advantage of using preinstalled infrastructure. For example, WiFi signal strength can be measured by pre-existing WiFi routers. In the offline phase, the fingerprinting localization method first stores position and RSSI measurement pairs in a dataset. Second, it predicts a target’s location by comparing the stored fingerprint database to the current measurement. The database size is normally huge, and the data patterns are complicated; thus, an artificial neural network is used to model the relationship between fingerprints and locations. Existing fingerprinting localization methods, however, have been developed to predict only single locations. In practice, many users may require positioning services simultaneously, and as such, the core algorithm should be capable of multiple localizations, which is the main contribution of this paper. In this paper, multiple fingerprinting localization is developed based on an artificial neural network, and an analysis of the number of targets that can be estimated without loss of accuracy is conducted through experiments. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
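For context, the classical lookup that the neural network generalises can be sketched in a few lines: match a live RSSI vector against the offline survey database and return the closest surveyed position. The access-point count, positions and RSSI values below are invented for the sketch.

```python
def nearest_fingerprint(db, rssi):
    """Classical fingerprint matching: return the surveyed position whose
    stored RSSI vector is closest (squared Euclidean distance) to the
    live measurement. The paper replaces this lookup with a neural
    network, extended to output multiple positions at once."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(db, key=lambda pos: dist2(db[pos], rssi))

# Offline phase: surveyed position -> RSSI from three APs (dBm, invented).
db = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-55, -55, -75],
    (10.0, 0.0): [-75, -45, -60],
}

# Online phase: a live measurement taken near the middle reference point.
estimate = nearest_fingerprint(db, [-54, -57, -74])
```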
Show Figures

Figure 1
<p>Multiple fingerprinting indoor localization.</p>
Figure 2
<p>Deep neural network structure for single-position learning.</p>
Figure 3
<p>Extended deep neural network structure for multi-position learning.</p>
Figure 4
<p>Experimental WiFi fingerprint data distribution along the hallway of a multi-story building.</p>
Figure 5
<p>Positioning test error by deep neural network according to the number of estimated positions: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>15</mn> </mrow> </semantics></math>, (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>, (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>25</mn> </mrow> </semantics></math>.</p>
">
18 pages, 2906 KiB  
Article
Motion Shield: An Automatic Notifications System for Vehicular Communications
by Petros Balios, Philotas Kyriakidis, Stelios Zimeras, Petros S. Bithas and Lambros Sarakis
Sensors 2022, 22(6), 2419; https://doi.org/10.3390/s22062419 - 21 Mar 2022
Cited by 1 | Viewed by 2508
Abstract
Motion Shield is an automatic crash notification system that uses a mobile phone to generate automatic alerts related to the safety of a user aboard a means of transportation. The objective of Motion Shield is to improve road safety by considering a moving vehicle’s risk, estimating the probability of an emergency, and assessing the likelihood of an accident. The system processes a plethora of parameters drawn from multiple sources of external information, namely the mobile phone sensors’ readings, geolocated information, weather data, and historical evidence of traffic accidents, in order to predict the onset of an accident and act preventively. All the collected data are forwarded to a decision support system which dynamically calculates mobility risk and driving behavior aspects in order to proactively send personalized notifications and alerts to the user and to a public safety answering point (PSAP) (112). Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
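A decision support system of this kind can be caricatured as a weighted risk score with notification thresholds. Everything below (factor names, weights, thresholds) is invented for illustration and is not the paper's model:

```python
# Factor names, weights and thresholds are invented for illustration.
WEIGHTS = {"speed": 0.4, "weather": 0.3, "accident_history": 0.2, "fatigue": 0.1}

def mobility_risk(factors):
    """Combine normalised risk factors (each in [0, 1]) into one score."""
    return sum(WEIGHTS[name] * value for name, value in factors.items())

def decide(factors, warn_at=0.5, alert_at=0.8):
    """Map the score to an action: no action, a personalised warning to
    the user, or an alert escalated to the PSAP."""
    risk = mobility_risk(factors)
    if risk >= alert_at:
        return "alert-psap"
    if risk >= warn_at:
        return "warn-user"
    return "ok"

action = decide({"speed": 0.9, "weather": 0.8,
                 "accident_history": 0.5, "fatigue": 0.2})
```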
Show Figures

Figure 1
<p>Existing systems vs. Motion Shield—a comparative study.</p>
Figure 2
<p>Description of the structure of the proposed system. It consists of three functioning levels: the application, the cloud, and the control level, with subsystems operating at their corresponding level.</p>
Figure 3
<p>Description of how the mobile app operates at the application level.</p>
Figure 4
<p>Description of what happens when the system detects an emergency (emergency signals management introduced in <a href="#sensors-22-02419-f003" class="html-fig">Figure 3</a>).</p>
Figure 5
<p>Description of the crash management process introduced in <a href="#sensors-22-02419-f004" class="html-fig">Figure 4</a>.</p>
Figure 6
<p>Decision process methodology.</p>
Figure 7
<p>System simulation.</p>
Figure 8
<p>Motion Shield dashboard.</p>
Figure 9
<p>Description of what happens when the system loses contact with the MD.</p>
16 pages, 5832 KiB  
Article
SVIoT: A Secure Visual-IoT Framework for Smart Healthcare
by Javaid A. Kaw, Solihah Gull and Shabir A. Parah
Sensors 2022, 22(5), 1773; https://doi.org/10.3390/s22051773 - 24 Feb 2022
Cited by 6 | Viewed by 2731
Abstract
The advancement of the Internet of Things (IoT) has transfigured the overlay of the physical world by superimposing digital information in various sectors, including smart cities, industry, healthcare, etc. Among the various kinds of shared information, visual data are an indispensable part of smart cities, especially in healthcare. As a result, visual-IoT research is gathering momentum. In visual-IoT, visual sensors, such as cameras, collect critical multimedia information about industries, healthcare, shopping, autonomous vehicles, crowd management, etc. In healthcare, patient-related data are captured and then transmitted via insecure transmission lines. The security of these data is of paramount importance. Besides the fact that visual data require a large bandwidth, the gap between communication and computation is an additional challenge for visual-IoT system development. In this paper, we present SVIoT, a Secure Visual-IoT framework, which addresses the issues of both data security and resource constraints in IoT-based healthcare. This was achieved by proposing a novel reversible data hiding (RDH) scheme based on One Dimensional Neighborhood Mean Interpolation (ODNMI). The use of ODNMI reduces the computational complexity and storage/bandwidth requirements by 50 percent. We upscaled the original image from M × N to M × 2N, unlike conventional interpolation methods, wherein images are upscaled to 2M × 2N. We made use of an innovative mechanism, Left Data Shifting (LDS), before embedding data in the cover image. Before embedding the data, we encrypted it using the AES-128 encryption algorithm to offer additional security. The use of LDS ensures better perceptual quality at a relatively high payload. We achieved an average PSNR of 43 dB for a payload of 1.5 bpp (bits per pixel). In addition, we embedded a fragile watermark in the cover image to ensure authentication of the received content. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
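One plausible reading of one-dimensional neighbourhood mean interpolation can be sketched as follows. This is a sketch of the general technique, not necessarily the paper's exact ODNMI formula: each image row grows from N to 2N pixels, with originals in even columns and neighbour means in odd ones, and the interpolated pixels are the ones that later carry the reversibly embedded payload.

```python
def odnmi_upscale(img):
    """Sketch of one-dimensional neighbourhood mean interpolation: each
    row of N pixels becomes 2N pixels, with originals in even columns and
    the mean of the two horizontal neighbours in odd columns (the last
    column repeats the final pixel)."""
    out = []
    for row in img:
        new = []
        for j, p in enumerate(row):
            new.append(p)                              # original pixel
            if j + 1 < len(row):
                new.append((p + row[j + 1]) // 2)      # interpolated pixel
            else:
                new.append(p)                          # edge: duplicate
        out.append(new)
    return out

cover = odnmi_upscale([[10, 20, 30]])
```

Because only one new pixel is created per original pixel (2MN in total rather than the 4MN of a 2M × 2N upscale), the 50 percent storage/bandwidth saving claimed in the abstract follows directly.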
Show Figures

Figure 1
<p>A typical smart-health system.</p>
Figure 2
<p>Block diagram of the proposed scheme.</p>
Figure 3
<p>(<b>a</b>) Original image block; (<b>b</b>) Generation of the cover image using ODNMI.</p>
Figure 4
<p>Various test images.</p>
Figure 5
<p>Watermark.</p>
Figure 6
<p>Subjective analysis for reversibility.</p>
Figure 7
<p>Authentication and fragility analysis (Technique-1).</p>
Figure 8
<p>Authentication and fragility analysis (Technique-2).</p>
23 pages, 3614 KiB  
Article
Joint Communications and Sensing Employing Multi- or Single-Carrier OFDM Communication Signals: A Tutorial on Sensing Methods, Recent Progress and a Novel Design
by Kai Wu, Jian Andrew Zhang, Xiaojing Huang and Yingjie Jay Guo
Sensors 2022, 22(4), 1613; https://doi.org/10.3390/s22041613 - 18 Feb 2022
Cited by 12 | Viewed by 4093
Abstract
Joint communications and sensing (JCAS) has recently attracted extensive attention due to its potential to substantially improve the cost, energy and spectral efficiency of Internet of Things (IoT) systems that need both radio frequency functions. Given the wide applicability of orthogonal frequency division multiplexing (OFDM) in modern communications, OFDM sensing has become one of the major research topics of JCAS. To raise awareness of some critical yet long-overlooked issues that restrict OFDM sensing capability, this paper first provides a comprehensive overview of OFDM sensing and then presents a tutorial on the issues. Moreover, some recent research efforts for addressing the issues are reviewed, with interesting designs and results highlighted. In addition, the redundancy in OFDM sensing signals is unveiled, based on which a novel method is developed to remove the redundancy by introducing efficient signal decimation. Corroborated by analysis and simulation results, the new method further reduces the sensing complexity compared with one of the most efficient methods to date, with a minimal impact on the sensing performance. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
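For orientation, the point-wise division (PWD) processing referred to in the figures follows the classical OFDM sensing chain, which can be written compactly as follows. The notation here is generic and need not match the paper's symbols exactly: $S_m(n)$ is the data symbol on subcarrier $n$ of OFDM symbol $m$, $\Delta f$ the subcarrier spacing, $T_o$ the total symbol duration, $\tau$ the round-trip delay and $f_D$ the Doppler shift of a single target.

```latex
% Frequency-domain echo of a single target (noise term W_m(n)):
Y_m(n) = \alpha\, S_m(n)\, e^{-j 2\pi n \Delta f \tau}\, e^{j 2\pi f_D m T_o} + W_m(n)

% Point-wise division (PWD) strips the communication data:
G_m(n) = \frac{Y_m(n)}{S_m(n)}
       = \alpha\, e^{-j 2\pi n \Delta f \tau}\, e^{j 2\pi f_D m T_o}
         + \frac{W_m(n)}{S_m(n)}

% An IDFT across subcarriers and a DFT across symbols yield the
% range--Doppler map, peaking at b \approx N \Delta f \tau and
% d \approx M f_D T_o:
\mathrm{RDM}(b, d) = \sum_{m=0}^{M-1}
  \Bigl[ \sum_{n=0}^{N-1} G_m(n)\, e^{\,j 2\pi n b / N} \Bigr]
  e^{-j 2\pi m d / M}
```

This also makes the constellation caveat in the figures concrete: with non-constant-modulus symbols, the division term $W_m(n)/S_m(n)$ amplifies noise on weak symbols, which is one standard motivation for the point-wise product (PWP) alternative.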
Show Figures

Figure 1
<p>Illustrating the changes in signal timing in OFDM sensing, where CP is short for cyclic prefix and <span class="html-italic">Q</span> is the number of samples in a CP. The top signal, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> given in (1), is the essential part of OFDM symbols. The middle signal, <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>x</mi> <mo stretchy="false">˜</mo> </mover> <mi>m</mi> </msub> <mrow> <mo stretchy="false">(</mo> <mover accent="true"> <mi>k</mi> <mo stretchy="false">˜</mo> </mover> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math> given in (3), illustrates the CP-OFDM symbols to be emitted. The bottom signal, <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>y</mi> <mo stretchy="false">˜</mo> </mover> <mi>m</mi> </msub> <mrow> <mo stretchy="false">(</mo> <mover accent="true"> <mi>k</mi> <mo stretchy="false">˜</mo> </mover> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math> given in (4), is the baseband echo at the sensing Rx, where the delay of <math display="inline"><semantics> <msub> <mi>k</mi> <mi>r</mi> </msub> </semantics></math> samples account for the round-trip traveling from Tx to Rx.</p>
">
Figure 2
<p>Illustrating the signal timing in RCP-OTFS sensing, where, different from OFDM shown in <a href="#sensors-22-01613-f001" class="html-fig">Figure 1</a>, only a single CP is added to a whole block of symbols.</p>
">
Figure 3
<p>Illustrating the processing diagram of COS and C-COS, where C-Tx stands for communication transmitter, S-Rx for sensing receiver, PWD for point-wise division, PWP for point-wise product and RDM for range–Doppler map.</p>
">
Figure 4
<p>Illustrating RDMs, where “-” in the color bar is the negative sign. Note that <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> </mrow> <msub> <mi>Y</mi> <mi>m</mi> </msub> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math> with constant modulus <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> is plotted in (<b>a</b>), demonstrating OFDM under PSK constellations processed by either PWD or PWP. Moreover, <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> </mrow> <msub> <mi>Y</mi> <mi>m</mi> </msub> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math> (obtained under PWD) with noise-like <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> is plotted in (<b>b</b>). In addition, <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> </mrow> <msub> <mi>Z</mi> <mi>m</mi> </msub> <mrow> <mrow> <mo>(</mo> <mi>b</mi> <mo>)</mo> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math> (using PWP) with noise-like <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> is plotted in (<b>c</b>). According to Remark 1, DFT-s-OFDM and OTFS have their frequency-domain signals, i.e., <math display="inline"><semantics> <mrow> <msub> <mi>s</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, conform to normal distribution. Thus, subfigures (<b>b</b>,<b>c</b>) can represent either DFT-s-OFDM or OTFS. Here, <span class="html-italic">R</span> and <span class="html-italic">D</span> stand for range and Doppler grids, respectively. 
When generating the RDMs as performed in (8) and (11), the DFT sizes in both dimensions are increased by 16 times to make the grids denser.</p>
">
Figure 5
<p>A novel sensing framework that suits OFDM, DFT-s-OFDM and OTFS, where SB stands for sub-block and VCP for virtual CP. The left sub-figure shows the sensing diagram, where the DFT results will go through the last three steps in <a href="#sensors-22-01613-f003" class="html-fig">Figure 3</a> to generate RDMs. The right sub-figure is a novel signal segmentation proposed in [<a href="#B39-sensors-22-01613" class="html-bibr">39</a>], where <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>(</mo> <mi>i</mi> <mo>)</mo> </mrow> </semantics></math> can be the middle signal in <a href="#sensors-22-01613-f001" class="html-fig">Figure 1</a> or <a href="#sensors-22-01613-f002" class="html-fig">Figure 2</a>. <span class="html-italic">That is, the sensing framework suits OFDM or DFT-s-OFDM with regular CPs (one per symbol), as well as the OTFS with a reduced CP (i.e., a single CP for a long block of symbols)</span>.</p>
">
Figure 6
<p>Comparing RDMs of C-COS and the novel sensing framework (NSF) illustrated in Algorithm 1, where simulation parameters are summarized in <a href="#sensors-22-01613-t002" class="html-table">Table 2</a>, the results in the first row are for C-COS and the results in the second row are for NSF. More specifically, the RDM of C-COS is given in subfigure (<b>a</b>), while that of NSF is in subfigure (<b>e</b>). Subfigures (<b>b</b>) and (<b>f</b>) are the zoomed in versions of subfigures (<b>a</b>,<b>e</b>), respectively, where the zoom-in centers are the true target range and Doppler bins. Subfigures (<b>c</b>) and (<b>d</b>) illustrate the range and Doppler cuts of the RDM given in subfigure (<b>a</b>); similarly, subfigures (<b>g</b>) and (<b>h</b>) give those of the RDM in (<b>e</b>). <span class="html-italic">Note that COS and NSF are performed with the same communication-transmitted and sensing echo signals</span>.</p>
">
Figure 7
<p>(<b>a</b>) Illustration of general steps for decimation; (<b>b</b>) spectrum features at different stages of decimation; (<b>c</b>) decomposing the anti-aliasing filter in (<b>a</b>); (<b>d</b>) the polyphase structure-based decimation specifically tailored for OFDM sensing.</p>
">
Figure 8
<p>Illustration of target detection, where COS-RDM is given in (<b>a</b>), DCOS-RDM in (<b>b</b>), the range cuts at <math display="inline"><semantics> <mrow> <mi>v</mi> <mo>=</mo> <mo>−</mo> <mn>10</mn> </mrow> </semantics></math> m/s are shown in (<b>c</b>) and the velocity cuts at <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>56</mn> </mrow> </semantics></math> m in (<b>d</b>). Most settings in <a href="#sensors-22-01613-t003" class="html-table">Table 3</a> are again used here, except that the number of OFDM symbols is <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>256</mn> </mrow> </semantics></math> and the hamming window is used in (8) and (18) for both range and velocity measurements. In addition, three targets are set here. Their ranges and velocities are <math display="inline"><semantics> <mrow> <mo>[</mo> <mn>50</mn> <mo>,</mo> <mn>56</mn> <mo>,</mo> <mn>56</mn> <mo>]</mo> </mrow> </semantics></math> m and <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mn>10</mn> <mo>,</mo> <mo>−</mo> <mn>10</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> </mrow> </semantics></math> m/s, respectively. Note that the symbol “-” in the axes of all subfigures is the minus sign (not hyphen).</p>
">
Figure 9
<p>Illustration of SNR in DCOS-RDM versus <span class="html-italic">P</span> in (<b>a</b>,<b>b</b>); and (<b>c</b>) a comparative illustration of the SNR in RDM of both COS and DCOS versus <math display="inline"><semantics> <mi>γ</mi> </semantics></math>, the SNR in (7). Parameter settings are summarized in <a href="#sensors-22-01613-t003" class="html-table">Table 3</a>.</p>
">
Figure 10
<p>Comparing C-COS and DCOS in terms of detection and estimation performances, where the OFDM parameters are given in <a href="#sensors-22-01613-t003" class="html-table">Table 3</a>, and a single unit-power target is set here with range and velocity randomly generated over <math display="inline"><semantics> <msup> <mn>10</mn> <mn>4</mn> </msup> </semantics></math> independent trials. (<b>a</b>) illustrates the detection probability of the two methods under <math display="inline"><semantics> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </semantics></math> false-alarm rate and <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mo>−</mo> <mn>60</mn> </mrow> </semantics></math> dB. (<b>b</b>,<b>c</b>) illustrates the range and velocity estimation performance, respectively, where the estimation method [<a href="#B48-sensors-22-01613" class="html-bibr">48</a>] is employed for both parameters. (<b>d</b>) compares the wall-clock time per run, including RDM generation, detection and estimation, for the two methods, as averaged over <math display="inline"><semantics> <msup> <mn>10</mn> <mn>4</mn> </msup> </semantics></math> trials.</p>
">
29 pages, 19618 KiB  
Article
Automated License Plate Recognition for Resource-Constrained Environments
by Heshan Padmasiri, Jithmi Shashirangana, Dulani Meedeniya, Omer Rana and Charith Perera
Sensors 2022, 22(4), 1434; https://doi.org/10.3390/s22041434 - 13 Feb 2022
Cited by 31 | Viewed by 8420
Abstract
The incorporation of deep-learning techniques in embedded systems has greatly enhanced the capabilities of edge computing. However, most of these solutions rely on high-end hardware and often require a high processing capacity, which cannot be achieved with resource-constrained edge computing. This study presents a novel approach and a proof of concept for a hardware-efficient automated license plate recognition system for a constrained environment with limited resources. The proposed solution is implemented purely for low-resource edge devices and performs well under extreme illumination changes such as daytime and nighttime conditions. The generalisability of the proposed models is achieved using a novel set of neural networks for different hardware configurations, chosen according to their computational capabilities and low cost. The accuracy, energy efficiency, communication, and computational latency of the proposed models are validated using different license plate datasets, in the daytime and nighttime and in real time. The results obtained also show performance competitive with state-of-the-art server-grade hardware solutions. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures

Figure 1
<p>Overview of the proposed model.</p>
Figure 2
<p>Hardware stack of the proposed solution.</p>
Figure 3
<p>Two-stage license plate recognition pipeline.</p>
Figure 4
<p>High-tier model (<b>left</b>): Internal view, (<b>right</b>): Exterior deployment view.</p>
Figure 5
<p>Circuit diagram of the design.</p>
Figure 6
<p>Data flow of the proposed system.</p>
Figure 7
<p>Pix2Pix for nighttime image generation.</p>
Figure 8
<p>Stochastic super-network (<b>left</b>): PC-DARTS, (<b>right</b>): FB-Net.</p>
Figure 9
<p>Model Architectures (<b>left</b>): hardware-optimized detection, (<b>middle</b>): hardware-agnostic detection, (<b>right</b>): recognition subnetworks.</p>
Figure 10
<p>Model accuracy on the synthetically generated dataset (<b>left</b>): detection, (<b>right</b>): recognition.</p>
Figure 11
<p>Camera positions (<b>left</b>) and sample deployed image (<b>right</b>).</p>
Figure 12
<p>Model accuracies of each experiment.</p>
20 pages, 1093 KiB  
Article
Vehicle Localization Using Doppler Shift and Time of Arrival Measurements in a Tunnel Environment
by Rreze Halili, Noori BniLam, Marwan Yusuf, Emmeric Tanghe, Wout Joseph, Maarten Weyn and Rafael Berkvens
Sensors 2022, 22(3), 847; https://doi.org/10.3390/s22030847 - 22 Jan 2022
Cited by 11 | Viewed by 3863
Abstract
Most applications and services of Cooperative Intelligent Transport Systems (C-ITS) rely on accurate and continuous vehicle location information. The traditional localization method, based on the Global Navigation Satellite System (GNSS), is the most commonly used, but it does not provide reliable, continuous, and accurate positioning in all scenarios, such as tunnels. In this work, we therefore present an algorithm that exploits the existing Vehicle-to-Infrastructure (V2I) communication channel, operating within the LTE-V frequency band, to acquire in-tunnel vehicle location information. We propose a novel solution for vehicle localization based on Doppler shift and Time of Arrival measurements. Measurements performed in the Beveren tunnel in Antwerp, Belgium, are used to obtain results. A comparison between positions estimated using an Extended Kalman Filter (EKF) on Doppler shift measurements and individual Kalman Filters (KF) on Doppler shift and Time of Arrival measurements is carried out to analyze the performance of the filtering methods. Findings show that the EKF performs better than the KF, reducing the average estimation error by 10 m, while the algorithm's accuracy depends on the relevant RF channel propagation conditions and on the in-tunnel environment knowledge included in the estimation. The proposed solution can be used for monitoring the position and speed of vehicles driving in tunnel environments. Full article
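The core idea, an EKF whose nonlinear measurement is the Doppler shift observed from a roadside transmitter at a known position, can be sketched in one dimension along the tunnel axis. The carrier frequency, transmitter geometry, state model, and noise values below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

F_C, C = 5.9e9, 3e8          # assumed LTE-V carrier frequency (Hz) and speed of light (m/s)
TX_X, TX_OFF = 0.0, 5.0      # hypothetical transmitter position and lateral offset (m)

def h_doppler(x, v):
    """Predicted Doppler shift (Hz) for along-road position x (m) and speed v (m/s)."""
    r = np.hypot(x - TX_X, TX_OFF)
    return -F_C / C * v * (x - TX_X) / r

def jacobian(x, v):
    """Partial derivatives of h_doppler w.r.t. the state [x, v]."""
    r = np.hypot(x - TX_X, TX_OFF)
    dh_dx = -F_C / C * v * TX_OFF**2 / r**3
    dh_dv = -F_C / C * (x - TX_X) / r
    return np.array([[dh_dx, dh_dv]])

def ekf_doppler(z_seq, dt, x0, P0, q=(0.01, 0.1), r_var=25.0):
    """Run an EKF over a sequence of Doppler measurements; return position estimates."""
    x = np.array(x0, float)
    P = np.array(P0, float)
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    Q = np.diag(q)
    est = []
    for z in z_seq:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the nonlinear Doppler measurement
        H = jacobian(*x)
        y = z - h_doppler(*x)
        S = H @ P @ H.T + r_var
        K = P @ H.T / S
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

Far from the transmitter the Doppler shift is nearly insensitive to position, so most of the position correction happens around the zero-crossing as the vehicle passes the transmitter; this is why in-tunnel geometry knowledge matters for accuracy.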
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures

Figure 1: The Beveren tunnel in Antwerp, Belgium. The vehicle travels from point R2 with a speed of 90 km/h toward the end of the tunnel, marked with a teardrop-shaped location symbol. ©2021 Google.
Figure 2: Time-varying (a) Power Delay Profile (PDP) and (b) Doppler Power Profile (DPP) for the scenario of crossing Tx at the known position with a constant speed of 90 km/h [10].
Figure 3: (a) True and estimated vehicle trajectory, (b) distance between transmitter and receiver, and (c) estimation error, when using the Extended Kalman Filter on Doppler measurements (EKF DS), the Kalman Filter on individual time and Doppler measurements (KF DS and ToA), and the combination of the two previous approaches (EKF DS and KF ToA), while the vehicle is driving in the tunnel environment.
Figure 4: CDF of the location estimation errors for different filtering approaches (a) EKF DS, (b) KF DS and ToA, and (c) EKF DS and KF ToA, while using various standard deviation values σ_i.
14 pages, 434 KiB  
Article
Privacy-Preserving Human Action Recognition with a Many-Objective Evolutionary Algorithm
by Pau Climent-Pérez and Francisco Florez-Revuelta
Sensors 2022, 22(3), 764; https://doi.org/10.3390/s22030764 - 20 Jan 2022
Cited by 6 | Viewed by 2625
Abstract
Wrist-worn devices equipped with accelerometers constitute a non-intrusive way to achieve active and assisted living (AAL) goals, such as automatic journaling for self-reflection, i.e., lifelogging, as well as to provide other services, such as general health and wellbeing monitoring and personal autonomy assessment. Human action recognition (HAR), and in particular the recognition of activities of daily living (ADLs), can be used for these types of assessment or journaling. In this paper, a many-objective evolutionary algorithm (MaOEA) is used to maximise action recognition from individuals while concealing (minimising recognition of) gender and age. To validate the proposed method, the PAAL accelerometer signal ADL dataset (v2.0) is used, which includes data from 52 participants (26 men and 26 women) and 24 activity class labels. The results show a drop in gender recognition accuracy to 58% (from 89%, a 31% drop) and in age recognition to 39% (from 83%, a 44% drop), while action recognition stays closer to its initial value, at 68% (from 87%, a 19% drop). Full article
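The optimisation can be pictured as searching feature-weight vectors whose objective triple trades action accuracy against gender and age recognisability, keeping only the Pareto-optimal (non-dominated) candidates. A minimal sketch of the dominance bookkeeping follows; the `evaluate` callback and the three-objective encoding are illustrative assumptions, not the paper's NSGA-III implementation:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives to be minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    """Return the non-dominated subset of a population of (solution, objectives) pairs."""
    return [pop[i] for i, (_, fi) in enumerate(pop)
            if not any(dominates(fj, fi)
                       for j, (_, fj) in enumerate(pop) if j != i)]

def objectives(weights, evaluate):
    """Map a feature-weight vector to a minimisation triple:
    (1 - action accuracy, gender accuracy, age accuracy).
    `evaluate` is a hypothetical callback that trains/scores the three classifiers
    on the weighted features and returns their accuracies."""
    action, gender, age = evaluate(weights)
    return (1.0 - action, gender, age)
```

A full MaOEA adds variation (crossover, mutation) and a selection scheme such as NSGA-III's reference-point niching on top of this dominance test.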
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures

Figure 1: A matrix plot summarising the collected dataset [22] used in this paper, showing the number of instances recorded per participant and activity (reproduced from [23]).
Figure 2: Histogram showing the distribution of participants among different age groups and genders (reproduced from [23]).
Figure 3: Confusion matrices for gender and age recognition, showing the results for the initial individual. The features used are quite good at revealing undesired, sensitive identity traits. Values are given as a % of the total number of samples per category. (a) Gender classification, initial individual. (b) Age classification, initial individual.
Figure 4: Confusion matrix for human action recognition, showing the initial individual (all features equally weighted to 1.0). Values are given as a % of the total number of samples per category.
Figure 5: Final set of solutions (individuals) after running the NSGA-III MOEA algorithm. The best-performing (overall) solution is shown in green; the initial individual (all features equally weighted to 1.0) in red. Black crosses represent individuals of the population in the latest generation. Gender accuracy is in the range [0.5, 1] (two bins), whereas age accuracy is in the range [1/7, 1] (seven bins).
Figure 6: Contributions (weights) assigned to each feature in the best-performing individual (green dot in Figure 5).
Figure 7: Confusion matrices for gender and age recognition, showing the results for the best (i.e., after optimisation) individual. Optimisation obfuscates age and gender recognition, as desired. Values are given as a % of the total number of samples per category. (a) Gender classification, best individual. (b) Age classification, best individual.
Figure 8: Confusion matrix for human action recognition, showing the best individual after optimisation (features weighted as per Figure 6). Values are given as a % of the total number of samples per category.

Review


33 pages, 917 KiB  
Review
Vehicular Platoon Communication: Architecture, Security Threats and Open Challenges
by Sean Joe Taylor, Farhan Ahmad, Hoang Nga Nguyen and Siraj Ahmed Shaikh
Sensors 2023, 23(1), 134; https://doi.org/10.3390/s23010134 - 23 Dec 2022
Cited by 11 | Viewed by 4111
Abstract
Vehicular platooning is an exciting emerging technology. It promises to save space on congested roadways, improve safety, and use less fuel for transporting goods, reducing greenhouse gas emissions. However, the technology has already been shown to be vulnerable to attack and exploitation, with several attack surfaces available to attackers pursuing personal or financial goals. The goal of this paper, and its contribution to the area of research, is to present the attacks and defence mechanisms for vehicular platoons and to put forward the risks of the identified attacks. The variety of attacks identified in the literature is presented, along with how they compromise the wireless communications of vehicle platoons. As part of this, a risk assessment is presented to assess the risk factor of the attacks. Finally, this paper presents the range of defences and countermeasures to vehicle platooning attacks and how they protect the safe operation of vehicular platoons. Full article
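A risk matrix of the kind used in such assessments scores each attack by likelihood and impact and maps their product to a risk band. The sketch below illustrates the idea only; the 1-5 scales and band thresholds are assumed for illustration and are not taken from the paper:

```python
def risk_level(likelihood, impact, thresholds=(4, 9)):
    """Classify an attack by likelihood x impact, both on assumed 1-5 scales.
    `thresholds` gives the upper bounds of the low and medium bands (illustrative)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    score = likelihood * impact
    lo, hi = thresholds
    if score <= lo:
        return "low"
    if score <= hi:
        return "medium"
    return "high"
```

Ranking attacks this way lets countermeasure effort be prioritised toward the high-band cells of the matrix.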
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures

Figure 1: A visual representation of the relationship between CAV, VANET, and vehicular platoon.
Figure 2: Current example of vehicular platooning within a smart city.
Figure 3: Platoon inter-vehicle space compared to non-platooning vehicles.
Figure 4: Centralised topology of platooning communications.
Figure 5: Decentralised topology of platooning communications.
Figure 6: Predecessor-leader following topology of platooning communications.
Figure 7: Predecessor-leader following topology of platooning communications.
Figure 8: Bidirectional-leader topology of platooning communications.
Figure 9: Two-predecessors following topology of platooning communications.
Figure 10: WAVE network stack.
Figure 11: A detailed taxonomy of vehicular platoons.
Figure 12: Attacks on platoons sorted in accordance with the intended outcome of the attack.
Figure 13: A risk matrix example.

Other


27 pages, 643 KiB  
Systematic Review
Model-Driven Engineering Techniques and Tools for Machine Learning-Enabled IoT Applications: A Scoping Review
by Zahra Mardani Korani, Armin Moin, Alberto Rodrigues da Silva and João Carlos Ferreira
Sensors 2023, 23(3), 1458; https://doi.org/10.3390/s23031458 - 28 Jan 2023
Cited by 5 | Viewed by 4419
Abstract
This paper reviews the literature on model-driven engineering (MDE) tools and languages for the internet of things (IoT). Due to the abundance of big data in the IoT, data analytics and machine learning (DAML) techniques play a key role in providing smart IoT applications. In particular, since a significant portion of the IoT data is sequential time series data, such as sensor data, time series analysis techniques are required. Therefore, IoT modeling languages and tools are expected to support DAML methods, including time series analysis techniques, out of the box. In this paper, we study and classify prior work in the literature through the mentioned lens and following the scoping review approach. Hence, the key underlying research questions are what MDE approaches, tools, and languages have been proposed and which ones have supported DAML techniques at the modeling level and in the scope of smart IoT services. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures

Figure 1: Followed stages of the scoping review framework.
Figure 2: PRISMA flow diagram for paper selection.
Figure 3: Distribution of papers by year and publication type.
Figure 4: Distribution of papers by year and the number of authors.
Figure 5: Distribution of papers by country.
Figure 6: Distribution of papers by year and journal.
Figure 7: Distribution of papers by year and digital database.
20 pages, 977 KiB  
Systematic Review
Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review
by Maurício Pasetto de Freitas, Vinícius Aquino Piai, Ricardo Heffel Farias, Anita M. R. Fernandes, Anubis Graciela de Moraes Rossetto and Valderi Reis Quietinho Leithardt
Sensors 2022, 22(21), 8531; https://doi.org/10.3390/s22218531 - 5 Nov 2022
Cited by 22 | Viewed by 7869
Abstract
According to the World Health Organization, about 15% of the world's population has some form of disability. Assistive Technology, in this context, contributes directly to overcoming the difficulties people with disabilities encounter in their daily lives, allowing them to receive an education and become part of the labor market and society in a worthy manner. Assistive Technology has made great advances in its integration with Artificial Intelligence of Things (AIoT) devices. AIoT processes and analyzes the large amount of data generated by Internet of Things (IoT) devices and applies Artificial Intelligence models, specifically machine learning, to discover patterns that generate insights and assist decision making. Based on a systematic literature review, this article aims to identify the machine-learning models used across different research on Artificial Intelligence of Things applied to Assistive Technology. The survey also highlights the context of such research, its applications, the IoT devices used, and gaps and opportunities for further development. The results show that 50% of the analyzed research addresses visual impairment and, for this reason, most of the topics cover issues related to computer vision. Portable devices, wearables, and smartphones constitute the majority of IoT devices. Deep neural networks represent 81% of the machine-learning models applied in the reviewed research. Full article
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)
Show Figures

Figure 1: The systematic review steps (guidelines) [80].
Figure 2: The systematic review model [80].
Figure 3: Word cloud.