Search Results (1,810)

Search Parameters:
Keywords = automated vehicles

21 pages, 5148 KiB  
Article
Model Optimization and Application of Straw Mulch Quantity Using Remote Sensing
by Yuanyuan Liu, Yu Sun, Yueyong Wang, Jun Wang, Xuebing Gao, Libin Wang and Mengqi Liu
Agronomy 2024, 14(10), 2352; https://doi.org/10.3390/agronomy14102352 - 12 Oct 2024
Viewed by 186
Abstract
Straw mulch quantity is an important indicator in the detection of straw returned to the field in conservation tillage, but there is a lack of large-scale automated measurement methods. In this study, we estimated global straw mulch quantity and completed the detection of straw returned to the field. We used an unmanned aerial vehicle (UAV) carrying a multispectral camera to acquire remote sensing images of straw in the field. First, the spectral index was selected using the Elastic-net (ENET) algorithm. Then, we used the Genetic Algorithm Hybrid Particle Swarm Optimization (GA-HPSO) algorithm, which embeds crossover and mutation operators from the Genetic Algorithm (GA) into the improved Particle Swarm Optimization (PSO) algorithm to solve the problem of machine learning model prediction performance being greatly affected by parameters. Finally, we used the Monte Carlo method to achieve a global estimation of straw mulch quantity and complete the rapid detection of field plots. The results indicate that the inversion model optimized using the GA-HPSO algorithm performed the best, with the coefficient of determination (R2) reaching 0.75 and the root mean square error (RMSE) only being 0.044. At the same time, the Monte Carlo estimation method achieved an average accuracy of 88.69% for the estimation of global straw mulch quantity, which was effective and applicable in the detection of global mulch quantity. This study provides a scientific reference for the detection of straw mulch quantity in conservation tillage and also provides a reliable model inversion estimation method for the estimation of straw mulch quantity in other crops. Full article
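To make the optimization idea concrete, the following is a minimal sketch of a GA-hybrid particle swarm optimizer: standard PSO velocity and position updates, with GA-style crossover and mutation applied to the worse half of the swarm each generation. It tunes a generic parameter vector against one of the benchmark functions named in the abstract (Sphere); the function, bounds, and rates are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical GA-HPSO sketch: PSO updates plus GA crossover/mutation on weak particles.
import numpy as np

def ga_hpso(fitness, dim=2, n_particles=30, n_iter=100, bounds=(-5.0, 5.0),
            w=0.7, c1=1.5, c2=1.5, crossover_rate=0.6, mutation_rate=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))            # particle positions
    v = np.zeros_like(x)                                   # particle velocities
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iter):
        # --- PSO update ---
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)

        # --- GA operators embedded into the swarm: act on the worse half ---
        vals = np.array([fitness(p) for p in x])
        order = vals.argsort()
        for i in order[n_particles // 2:]:
            if rng.random() < crossover_rate:
                mate = x[rng.choice(order[:n_particles // 2])]   # cross with a good particle
                alpha = rng.random(dim)
                x[i] = alpha * x[i] + (1 - alpha) * mate
            if rng.random() < mutation_rate:
                x[i] += rng.normal(0.0, 0.1 * (hi - lo), dim)    # Gaussian mutation
            x[i] = np.clip(x[i], lo, hi)

        # --- bookkeeping ---
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    sphere = lambda p: float(np.sum(p ** 2))    # Sphere benchmark, as referenced in the paper
    best, best_val = ga_hpso(sphere)
    print("best parameters:", best, "fitness:", best_val)
```

In the paper the optimized parameters would be those of the spectral-index inversion model rather than a benchmark function; the hybrid loop structure is what the sketch is meant to show.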
Figures: (1) Locations of the study areas; (2) Sampling and measurement process; (3) Spectral reflectance acquisition; (4) GA-HPSO algorithm process; (5) Comparison of evolutionary curves for study areas 1 and 2 (Autumn 22 and Spring 23); (6) Comparison of evolutionary curves for the Sphere, Griewank, and Rastrigin functions; (7) Scatterplots of measured versus predicted values for study areas 1 and 2 (Spring/Autumn 22 and 23); (8) Experience distribution validation chart; (9) Validation experiment process; (10) Predicted versus actual values in the validation areas (study areas 3–8).
24 pages, 6852 KiB  
Article
Automatic Landing Control for Fixed-Wing UAV in Longitudinal Channel Based on Deep Reinforcement Learning
by Jinghang Li, Shuting Xu, Yu Wu and Zhe Zhang
Drones 2024, 8(10), 568; https://doi.org/10.3390/drones8100568 - 10 Oct 2024
Viewed by 358
Abstract
The objective is to address the control problem associated with the landing process of unmanned aerial vehicles (UAVs), with a particular focus on fixed-wing UAVs. The Proportional–Integral–Derivative (PID) controller is a widely used control method, which requires the tuning of its parameters to account for the specific characteristics of the landing environment and the potential for external disturbances. In contrast, neural networks can be modeled to operate under given inputs, allowing for a more precise control strategy. In light of these considerations, a control system based on reinforcement learning is put forth, which is integrated with the conventional PID guidance law to facilitate the autonomous landing of fixed-wing UAVs and the automated tuning of PID parameters through the use of a Deep Q-learning Network (DQN). A traditional PID control system is constructed based on a fixed-wing UAV dynamics model, with the flight state being discretized. The landing problem is transformed into a Markov Decision Process (MDP), and the reward function is designed in accordance with the landing conditions and the UAV’s attitude, respectively. The state vectors are fed into the neural network framework, and the optimized PID parameters are output by the reinforcement learning algorithm. The optimal policy is obtained through the training of the network, which enables the automatic adjustment of parameters and the optimization of the traditional PID control system. Furthermore, the efficacy of the control algorithms in actual scenarios is validated through the simulation of UAV state vector perturbations and ideal gliding curves. The results demonstrate that the controller modified by the DQN network exhibits a markedly superior convergence effect and maneuverability compared to the unmodified traditional controller. Full article
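The following is a minimal sketch of the core idea: a Q-network maps the flight state to Q-values over a discrete set of candidate PID gain triples, and a one-step temporal-difference update trains it while the selected gains drive a toy plant. The gain sets, the 1-D "altitude" plant, and the reward weights are assumptions for illustration; the paper's full DQN (replay buffer, target network, fixed-wing dynamics) is omitted.

```python
# Hypothetical sketch: DQN-style selection of PID gains from the flight state.
import torch
import torch.nn as nn

PID_CANDIDATES = [(0.5, 0.01, 0.1), (1.0, 0.02, 0.2), (2.0, 0.05, 0.4)]  # assumed gain sets

class QNet(nn.Module):
    def __init__(self, state_dim=4, n_actions=len(PID_CANDIDATES)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, s):
        return self.net(s)

def pid(gains, err, integ, prev_err, dt=0.02):
    kp, ki, kd = gains
    integ = integ + err * dt
    return kp * err + ki * integ + kd * (err - prev_err) / dt, integ

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1

# toy double-integrator "altitude" plant standing in for the longitudinal dynamics
h, v, integ, prev_err, h_ref = 50.0, 0.0, 0.0, 0.0, 0.0
for step in range(500):
    state = torch.tensor([h, v, h_ref - h, integ], dtype=torch.float32)
    q = qnet(state)
    a = int(torch.randint(len(PID_CANDIDATES), (1,))) if torch.rand(()) < eps else int(q.argmax())
    err = h_ref - h
    u, integ = pid(PID_CANDIDATES[a], err, integ, prev_err)
    prev_err = err
    v += 0.02 * u                                   # crude plant response
    h += 0.02 * v
    reward = -abs(h_ref - h) - 0.01 * abs(u)        # penalize altitude error and control effort
    next_state = torch.tensor([h, v, h_ref - h, integ], dtype=torch.float32)
    with torch.no_grad():
        target = reward + gamma * qnet(next_state).max()
    loss = (q[a] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final altitude error: {abs(h_ref - h):.2f} m")
```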
Figures: (1) Six-degrees-of-freedom fixed-wing UAV; (2) Control structure of UAV landing with PID-based guidance law; (3) Enhanced learning systems framework; (4) Architecture of the DQN algorithm; (5) RL-optimized PID parametric landing controller framework; (6) Conventional landing control system; (7) Neural network framework based on reinforcement learning; (8) Simulation results for uniform control operation input (flat-tail deflection angle and throttle stick deflection angle); (9) Control outputs of the fixed-wing UAV (velocity, pitch angle velocity, pitch angle, angle of attack, altitude); (10) Simulation results for uniform control operation input; (11) Motion generating method; (12) White-noise generation (sequence and power spectral density); (13) Control outputs with external perturbation; (14) Fixed-wing UAV landing environment modeling; (15) DQN-PID controller landing experiment; (16) Control outputs of the fixed-wing UAV in the Gazebo simulator.
16 pages, 528 KiB  
Article
Preparing for Connected and Automated Vehicles: Insights from North Carolina Transportation Professionals
by Thanh Schado, Elizabeth Shay, Bhuwan Thapa and Tabitha S. Combs
Sustainability 2024, 16(20), 8747; https://doi.org/10.3390/su16208747 - 10 Oct 2024
Viewed by 495
Abstract
The connected and automated vehicles (CAVs) that are expected to be increasingly common on U.S. roads in the coming decades offer potential benefits in safety, efficiency, and mobility; they also raise concerns related to equity, access, and impacts on land use and travel behavior, as well as questions about extensive data requirements for CAVs to communicate with other vehicles and the environment in order to operate safely and efficiently. We report on interviews with North Carolina transportation experts about CAVs and their implications for sustainable transportation that serves all travelers with affordable, safe, and dignified mobility that also produces fewer environment impacts (emissions to air, water, and land; resource consumption; land use changes). The data reveal great interest among transportation professionals about a CAV transition, but a lack of consensus on the state of play and necessary next steps. Concerns include impacts on planning practice; implications for land use, equity, and safety; and data security and privacy. The findings suggest that local, regional, and state agencies would benefit from clear technical guidance on how to prepare for CAVs and to engage with the public, given high interest about a coming CAV transition. Intense data requirements for CAVs and associated infrastructure, as well as the regulatory and policy tools that will be required, raise concerns about threats to data safety and security and argue for proactive action. Full article
Figures: (1) Co-occurrence network derived from the co-occurrence matrix of the top 12 codes (Transit Demand and Street Infrastructure excluded because they did not co-occur with other codes; Public Sector included).
17 pages, 5756 KiB  
Article
Physics-Informed Neural Network-Based Nonlinear Model Predictive Control for Automated Guided Vehicle Trajectory Tracking
by Yinping Li and Li Liu
World Electr. Veh. J. 2024, 15(10), 460; https://doi.org/10.3390/wevj15100460 - 10 Oct 2024
Viewed by 665
Abstract
This paper proposes a nonlinear Model Predictive Control (MPC) method based on Physics-Informed Neural Networks (PINNs), aimed at enhancing the trajectory tracking performance of Automated Guided Vehicles (AGVs) in complex dynamic environments. Traditional physical models often face the challenges of computational inefficiency and insufficient control precision when dealing with complex dynamic systems. However, by integrating physical laws directly into the training process of neural networks, PINNs can effectively learn and capture the kinematic characteristics of vehicles, replacing traditional nonlinear ordinary differential equation models and thus significantly enhancing computational efficiency and control performance. During the model-training phase, this study further incorporates the Theory of Functional Connections (TFC) and adaptive loss balancing strategies to efficiently solve ODE problems without relying on numerical integration and optimize the control strategy. This combined approach not only reduces computational complexity, but also improves the robustness and precision of the control strategy in varying environments. Numerical simulations demonstrate that this method offers significant advantages in AGV trajectory-tracking tasks, manifested in higher computational efficiency and precise control performance. The proposal of the PINN-MPC method provides new theoretical support and innovative methods for real-time complex system control, with important research and application potential, and is expected to play a key role in future intelligent control systems. Full article
(This article belongs to the Special Issue Intelligent Electric Vehicle Control, Testing and Evaluation)
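The following is a minimal sketch of the physics-informed idea behind the approach: a small network maps time to the AGV pose, and the training loss penalizes the residual of the unicycle kinematics for given control inputs, alongside an initial-condition term. The constant control inputs, network size, and loss weighting are assumptions; the paper's TFC construction and adaptive loss balancing are omitted.

```python
# Hypothetical PINN sketch: network t -> (x, y, theta) trained to satisfy
# x' = v cos(theta), y' = v sin(theta), theta' = omega plus an initial condition.
import torch
import torch.nn as nn

v_cmd, w_cmd = 0.5, 0.2                       # assumed constant control inputs
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 3))         # t -> (x, y, theta)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t_col = torch.linspace(0, 2, 100).reshape(-1, 1).requires_grad_(True)   # collocation points
x0 = torch.tensor([[0.0, 0.0, 0.0]])                                    # initial pose

for epoch in range(2000):
    pose = net(t_col)
    # time derivatives of each pose component via autograd
    dx, dy, dth = [torch.autograd.grad(pose[:, i].sum(), t_col, create_graph=True)[0]
                   for i in range(3)]
    theta = pose[:, 2:3]
    res = ((dx - v_cmd * torch.cos(theta)) ** 2 +
           (dy - v_cmd * torch.sin(theta)) ** 2 +
           (dth - w_cmd) ** 2).mean()                       # ODE residual loss
    ic = ((net(torch.zeros(1, 1)) - x0) ** 2).mean()        # initial-condition loss
    loss = res + 10.0 * ic
    opt.zero_grad(); loss.backward(); opt.step()
```

Inside the MPC loop, such a trained surrogate would replace numerical integration of the kinematic ODE when evaluating candidate control sequences.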
Figures: (1) Configuration of the AGV platform; (2) Schematic of the LB-TFC-PINN framework; (3) LB-TFC-PINN-based nonlinear MPC scheme; (4) Training performance; (5) Simulation results from LB-TFC-PINN, PINN, and RK4; (6) Trajectory tracking results (Task 1); (7) Control variables over time (Task 1); (8) Tracking error over time of the LB-TFC-PINN-based nonlinear MPC (Task 1); (9) Trajectory tracking results (Task 2); (10) Control variables over time (Task 2); (11) Tracking error over time (Task 2).
29 pages, 6572 KiB  
Article
Robust Parking Space Recognition Approach Based on Tightly Coupled Polarized Lidar and Pre-Integration IMU
by Jialiang Chen, Fei Li, Xiaohui Liu and Yuelin Yuan
Appl. Sci. 2024, 14(20), 9181; https://doi.org/10.3390/app14209181 - 10 Oct 2024
Viewed by 316
Abstract
Improving the accuracy of parking space recognition is crucial in the fields for Automated Valet Parking (AVP) of autonomous driving. In AVP, accurate free space recognition significantly impacts the safety and comfort of both the vehicles and drivers. To enhance parking space recognition and annotation in unknown environments, this paper proposes an automatic parking space annotation approach with tight coupling of Lidar and Inertial Measurement Unit (IMU). First, the pose of the Lidar frame was tightly coupled with high-frequency IMU data to compensate for vehicle motion, reducing its impact on the pose transformation of the Lidar point cloud. Next, simultaneous localization and mapping (SLAM) were performed using the compensated Lidar frame. By extracting two-dimensional polarized edge features and planar features from the three-dimensional Lidar point cloud, a polarized Lidar odometry was constructed. The polarized Lidar odometry factor and loop closure factor were jointly optimized in the iSAM2. Finally, the pitch angle of the constructed local map was evaluated to filter out ground points, and the regions of interest (ROI) were projected onto a grid map. The free space between adjacent vehicle point clouds was assessed on the grid map using convex hull detection and straight-line fitting. The experiments were conducted on both local and open datasets. The proposed method achieved an average precision and recall of 98.89% and 98.79% on the local dataset, respectively; it also achieved 97.08% and 99.40% on the nuScenes dataset. And it reduced storage usage by 48.38% while ensuring running time. Comparative experiments on open datasets show that the proposed method can adapt to various scenarios and exhibits strong robustness. Full article
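The geometric step at the end of the pipeline can be illustrated with a minimal sketch: given the occupied cells of two adjacent parked-vehicle clusters on the grid map, compute each cluster's convex hull, fit a straight line for the parking-row direction, and compare the gap between the hulls with the space the ego vehicle needs. The cluster data, required slot length, and line-fitting choice below are assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of convex-hull-based free-space checking between parked vehicles.
import numpy as np
from scipy.spatial import ConvexHull

def free_space_between(cluster_a, cluster_b, required_length=5.5):
    """cluster_a/b: (N, 2) arrays of occupied grid cells (metres) for two vehicles."""
    hull_a = cluster_a[ConvexHull(cluster_a).vertices]
    hull_b = cluster_b[ConvexHull(cluster_b).vertices]
    # fit the parking-row direction as a straight line through both hulls
    pts = np.vstack([hull_a, hull_b])
    slope, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)
    axis = np.array([1.0, slope])
    axis /= np.linalg.norm(axis)
    # project hull points onto the row axis and measure the gap between the clusters
    proj_a, proj_b = hull_a @ axis, hull_b @ axis
    gap = max(proj_b.min() - proj_a.max(), proj_a.min() - proj_b.max())
    return gap, gap >= required_length

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    car1 = rng.uniform([0, 0], [4.5, 1.8], (200, 2))      # synthetic vehicle footprints
    car2 = rng.uniform([11, 0], [15.5, 1.8], (200, 2))
    gap, is_free = free_space_between(car1, car2)
    print(f"gap = {gap:.2f} m, free space: {is_free}")
```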
Figures: (1) Examples of parking space recognition (corner-point detection and feature extraction; Lidar-based recognition and parking-lot mapping); (2) Comparison between image vision and Lidar vision in a night scene (nuScenes); (3) Modern ecological parking lots where parking lines cannot be detected; (4) Flow chart of the automatic parking space recognition method; (5) Posture compensation with tightly coupled Lidar and IMU; (6) Polarized Lidar odometry and loop closure; (7) Flow chart of polarized Lidar odometry; (8) Flow chart of loop closure; (9) Vertical angle evaluation method; (10) Barrier grid marking based on erosion and dilation; (11) Flow chart of obstacle vehicle detection and vacancy annotation; (12) Parking slot classification based on grid projection (perpendicular, angled, parallel); (13) Parking space recognition and annotation diagrams; (14) Parking slot types and actual arrangements in the local data; (15) Examples of parking scenes in nuScenes (a parking lot in Queenstown, Singapore; an on-street parking area at Boston Seaport); (16) 3D point cloud map of a test scene and ground filtering of the ROI; (17) Annotation of obstacle vehicles and parking spaces (perpendicular and parallel modes); (18) Free-space detection visualizations comparing the proposed algorithm with reference methods under weak texture and overexposure; (19) nuScenes parking spaces with adjacent curbstones; (20) nuScenes parking spaces without line corners or with unclear parking lines.
24 pages, 4416 KiB  
Article
Cybersecurity Certification Requirements for Distributed Energy Resources: A Survey of SunSpec Alliance Standards
by Sean Tsikteris, Odyssefs Diamantopoulos Pantaleon and Eirini Eleni Tsiropoulou
Energies 2024, 17(19), 5017; https://doi.org/10.3390/en17195017 - 9 Oct 2024
Viewed by 469
Abstract
This survey paper explores the cybersecurity certification requirements defined by the SunSpec Alliance for Distributed Energy Resource (DER) devices, focusing on aspects such as software updates, device communications, authentication mechanisms, device security, logging, and test procedures. The SunSpec cybersecurity standards mandate support for remote and automated software updates, secure communication protocols, stringent authentication practices, and robust logging mechanisms to ensure operational integrity. Furthermore, the paper discusses the implementation of the SAE J3072 standard using the IEEE 2030.5 protocol, emphasizing the secure interactions between electric vehicle supply equipment (EVSE) and plug-in electric vehicles (PEVs) for functionalities like vehicle-to-grid (V2G) capabilities. This research also examines the SunSpec Modbus standard, which enhances the interoperability among DER system components, facilitating compliance with grid interconnection standards. This paper also analyzes the existing SunSpec Device Information Models, which standardize data exchange formats for DER systems across communication interfaces. Finally, this paper concludes with a detailed discussion of the energy storage cybersecurity specification and the blockchain cybersecurity requirements as proposed by SunSpec Alliance. Full article
(This article belongs to the Section F2: Distributed Energy System)
Figures: (1) Overview of SunSpec cybersecurity certification requirements; (2–3) Superset test cases to be validated and their purpose (parts 1 and 2); (4) PEV and EVSE interaction following the IEEE 2030.5 protocol; (5) National Institute of Standards and Technology Cybersecurity Framework core structure.
17 pages, 3232 KiB  
Article
Impact of Mixed-Vehicle Environment on Speed Disparity as a Measure of Safety on Horizontal Curves
by Tahmina Sultana and Yasser Hassan
World Electr. Veh. J. 2024, 15(10), 456; https://doi.org/10.3390/wevj15100456 - 9 Oct 2024
Viewed by 361
Abstract
Due to the transition of vehicle fleets from conventional driver-operated vehicles (DVs) to connected vehicles (CVs) and/or automated vehicles (AVs), vehicles with different technologies will soon operate on the same roads in a mixed-vehicle environment. Although a major goal of vehicle connectivity and automation is to improve traffic safety, negative safety impacts may persist in the mixed-vehicle environment. Speed disparity measures have been shown in the literature to be related to safety performance. Therefore, speed disparity measures are derived from the expected speed distributions of different vehicle technologies and are used as surrogate measures to assess the safety of mixed-vehicle environments and identify the efficacy of prospective countermeasures. This paper builds on speed models in the literature to predict the speed behavior of CVs, AVs, and DVs on horizontal curves on freeways and major arterials. The paper first proposes a methodology to determine speed disparity measures on horizontal curves without any control in terms of speed limit. The impact of speed limit or advisory speed, as a safety countermeasure, is modeled and assessed using different strategies to set the speed limit. The results indicated that the standard deviation of the speeds of all vehicles (σc) in a mixed environment would increase on arterial roads under no control compared to the case of DV-only traffic. This speed disparity can be reduced using an advisory speed as a safety countermeasure to decrease the adverse safety impacts in this environment. Moreover, it was shown that compared to the practice of a constant speed limit based on road classification, the advisory speed is more effective when it is based on the speed behavior of various vehicle types. Full article
(This article belongs to the Special Issue Vehicle Safe Motion in Mixed Vehicle Technologies Environment)
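The central speed-disparity measure can be illustrated with a small worked example: the standard deviation of all vehicle speeds in a mixed DV/CV/AV stream follows from each technology's share, mean speed, and speed standard deviation via the variance of a mixture distribution. The shares and speed statistics below are illustrative numbers, not values from the paper.

```python
# Sketch: combined mean and standard deviation (sigma_c) of speeds in a mixed stream.
import math

def mixed_speed_std(shares, means, stds):
    """shares sum to 1; means/stds are per-technology speed statistics (km/h)."""
    mu_c = sum(p * m for p, m in zip(shares, means))                       # combined mean
    var_c = sum(p * (s ** 2 + (m - mu_c) ** 2)
                for p, m, s in zip(shares, means, stds))                   # mixture variance
    return mu_c, math.sqrt(var_c)

# Example: 50% DVs, 30% CVs, 20% AVs on a horizontal curve (assumed values)
mu_c, sigma_c = mixed_speed_std(shares=[0.5, 0.3, 0.2],
                                means=[92.0, 88.0, 85.0],
                                stds=[8.0, 5.0, 2.0])
print(f"combined mean = {mu_c:.1f} km/h, sigma_c = {sigma_c:.1f} km/h")
```

The example shows why a mixed fleet can exhibit a larger sigma_c than any single subpopulation: differences between the subpopulation means add to the within-group variances.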
Figures: (1) Schematic of AV speed distributions in scenarios 2 and 3; (2) Schematic of compliant and non-compliant vehicles of a specific subpopulation; (3) Mean and 85th percentile speeds for each vehicle type (no CM); (4) Combined speed and speed disparity measures (no CM) for a sample of vehicle share combinations; (5) Maximum values of speed disparity measures (no CM) for different vehicle share combinations; (6) Maximum values of speed disparity measures for two countermeasures; (7) Maximum σc on arterial roads for no CM and all countermeasures; (8) Heatmap of σc with CR_CV and CR_DV in CM4b; (9) Heatmap of σc with CR_CV and CR_DV in CM6 (all reprinted or adapted from [16]).
16 pages, 1921 KiB  
Article
Investigation of Traffic System with Traffic Restriction Scheme in the Presence of Automated and Human-Driven Vehicles
by Dong Ding, Yadi Hou, Fulong Shen, Pengyun Chong and Yifeng Niu
Systems 2024, 12(10), 417; https://doi.org/10.3390/systems12100417 - 8 Oct 2024
Viewed by 423
Abstract
In the context of transportation development, the simultaneous emergence of automated vehicles (AVs) and human-driven vehicles (HDVs) can lead to varied traffic system performance. For the purpose of improving traffic systems, this paper proposes a traffic restriction scheme only for HDVs. We develop a variational inequality (VI) model to describe travel mode and route choices under this restriction scheme and design an algorithm to solve the model. The proposed model and algorithm are applied to a Sioux Falls network example to evaluate the effects of the traffic restriction scheme. Our findings indicate that the scheme improves overall social welfare, with a higher proportion of restricted travelers leading to greater social welfare as well as increased travel demand due to changes in capacity. However, some lower exogenous monetary factors lead to negative social welfare, as the presence of AVs may exacerbate road congestion. Additionally, advancements in technology are needed to adjust the weightings of travel time and congestion level estimates to further enhance social welfare. These results offer valuable insights for traffic demand management in traffic systems with a mix of AVs and HDVs. Full article
Figures: (1) Relationships among the demands; (2) Topology of the Sioux Falls network; (3) Weights versus social welfare; (4) Amortized costs versus social welfare; (5) Social welfare versus transit fare; (6) Social welfare versus various headways.
27 pages, 3881 KiB  
Article
Neuroergonomic Attention Assessment in Safety-Critical Tasks: EEG Indices and Subjective Metrics Validation in a Novel Task-Embedded Reaction Time Paradigm
by Bojana Bjegojević, Miloš Pušica, Gabriele Gianini, Ivan Gligorijević, Sam Cromie and Maria Chiara Leva
Brain Sci. 2024, 14(10), 1009; https://doi.org/10.3390/brainsci14101009 - 7 Oct 2024
Viewed by 846
Abstract
Background/Objectives: This study addresses the gap in methodological guidelines for neuroergonomic attention assessment in safety-critical tasks, focusing on validating EEG indices, including the engagement index (EI) and beta/alpha ratio, alongside subjective ratings. Methods: A novel task-embedded reaction time paradigm was developed to evaluate the sensitivity of these metrics to dynamic attentional demands in a more naturalistic multitasking context. By manipulating attention levels through varying secondary tasks in the NASA MATB-II task while maintaining a consistent primary reaction-time task, this study successfully demonstrated the effectiveness of the paradigm. Results: Results indicate that both the beta/alpha ratio and EI are sensitive to changes in attentional demands, with beta/alpha being more responsive to dynamic variations in attention, and EI reflecting more the overall effort required to sustain performance, especially in conditions where maintaining attention is challenging. Conclusions: The potential for predicting the attention lapses through integration of performance metrics, EEG measures, and subjective assessments was demonstrated, providing a more nuanced understanding of dynamic fluctuations of attention in multitasking scenarios, mimicking those in real-world safety-critical tasks. These findings provide a foundation for advancing methods to monitor attention fluctuations accurately and mitigate risks in critical scenarios, such as train-driving or automated vehicle operation, where maintaining a high attention level is crucial. Full article
(This article belongs to the Special Issue Computational Intelligence and Brain Plasticity)
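The two EEG indices discussed in the abstract can be sketched as follows: band powers are estimated from a channel with Welch's method and combined into the beta/alpha ratio and an engagement index of the common Pope-style form EI = beta / (alpha + theta). The channel selection, frequency-band limits, and EI definition here are assumptions; the study's exact montage and computation may differ.

```python
# Sketch: beta/alpha ratio and engagement index from a single EEG channel.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def attention_indices(eeg, fs=256):
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return beta / alpha, beta / (alpha + theta)

if __name__ == "__main__":
    fs = 256
    t = np.arange(0, 10, 1 / fs)
    # synthetic signal with alpha (10 Hz) and beta (20 Hz) components plus noise
    eeg = (1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
           + 0.2 * np.random.default_rng(0).standard_normal(t.size))
    ba, ei = attention_indices(eeg, fs)
    print(f"beta/alpha = {ba:.2f}, engagement index = {ei:.2f}")
```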
Figures: (1) Methodological steps followed in the study; (2) Experimental protocol and timeline; (3) MATB-II subtasks and task conditions, with their presentation sequences for the two participant groups (SysMon as primary task; Comm and ResMan as secondary tasks); (4) Mean reaction times per experimental condition; (5) Mean coefficient of variation in reaction times across attentional-demand levels for the increasing- and decreasing-load groups; (6) Proportion of omissions per condition; (7) Subjective fatigue ratings and fatigue as a function of time on task; (8) Mean mental workload index and reported task difficulty across attentional demands; (9) Mean engagement and beta/alpha indices in response to varying attentional demands; (10) Subjective level of attention as a function of time on task; (11) Trends in the coefficient of variation of reaction time compared with beta/alpha and EI; (12) Correlation heatmap of the variables considered for the regression model; (13) Shapley values of the input features for the best model selected by the pycaret AutoML library.
19 pages, 6565 KiB  
Article
Research on AGV Path Planning Based on Improved Directed Weighted Graph Theory and ROS Fusion
by Yinping Li and Li Liu
Actuators 2024, 13(10), 404; https://doi.org/10.3390/act13100404 - 7 Oct 2024
Viewed by 576
Abstract
This article addresses the common issues of insufficient computing power and path congestion for automated guided vehicles (AGVs) in real-world production environments, as well as the shortcomings of traditional path-planning algorithms that mainly consider the shortest path while ignoring vehicle turning time and stability. We propose a secondary path-planning method based on an improved directed weighted graph theory integrated with an ROS. Firstly, the production environment is modeled in detail to identify the initial position of the AGV. Secondly, the operational area is systematically divided, key nodes are selected and optimized, and a directed weighted graph is constructed with optimized weights. It is integrated with the ROS for path planning, using the Floyd algorithm to find the optimal path. The effectiveness and superiority of this method have been demonstrated through simulation verification and actual AGV operation testing. The path planning strategy and fusion algorithm proposed in this article that comprehensively considers distance and angle steering are simple and practical, effectively reducing production costs for enterprises. This method is suitable for logistics sorting and small transport AGVs with a shorter overall path-planning time, higher stability, and limited computing power, and it has reference significance and practical value. Full article
(This article belongs to the Section Actuators for Robotics)
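The core planning step named in the abstract is the Floyd algorithm on a directed weighted graph. The following is a minimal sketch of Floyd–Warshall with path reconstruction on a small example graph whose edge weights are assumed to combine travel distance with a steering penalty at turns; the nodes and weights are illustrative, not the paper's map.

```python
# Sketch: Floyd-Warshall all-pairs shortest paths with path reconstruction.
import math

def floyd_warshall(weights):
    n = len(weights)
    dist = [row[:] for row in weights]
    nxt = [[j if weights[i][j] < math.inf else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, i, j):
    if nxt[i][j] is None:
        return []
    p = [i]
    while i != j:
        i = nxt[i][j]
        p.append(i)
    return p

INF = math.inf
# 4-node example; assumed weights = distance plus an extra cost per turn on that edge
W = [[0,   2.0, INF, 7.0],
     [INF, 0,   2.5, INF],
     [INF, INF, 0,   1.5],
     [INF, INF, INF, 0]]
dist, nxt = floyd_warshall(W)
print("shortest cost 0->3:", dist[0][3], "via nodes", path(nxt, 0, 3))
```

Folding the steering penalty into the edge weights is what lets the same shortest-path machinery trade distance against turning time, which is the comparison the paper draws against purely distance-based planning.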
Figures: (1) Vertex selection and path planning effects in convex polygons; (2) Triangulation method for decomposing multi-connected domain maps; (3) Basic process of the path planning algorithm based on the improved directed weighted graph theory; (4) Directed weighted graph of the vehicle operation scenario; (5) Path planning results of the Floyd algorithm; (6) Directed weighted graph with increased steering weights; (7) Path planning results after adding steering weights; (8–9) Comparison of path planning results under different directed weighted graphs (P0–P12, P0–P13); (10–11) Comparison with A* path planning under different directed weighted graphs; (12) Directed weighted graph of vehicle operation scenarios; (13–14) Gazebo simulation experiment environment and map; (15–16) Path planning based on ROS and directed weighted graph theory (P0–P12, P0–P13); (17) Real vehicle validation environment; (18–19) Comparison between simulation experiments 1 and 2 and real vehicle verification.
17 pages, 4996 KiB  
Article
Safeguarding Personal Identifiable Information (PII) after Smartphone Pairing with a Connected Vehicle
by Jason Carlton and Hafiz Malik
J. Sens. Actuator Netw. 2024, 13(5), 63; https://doi.org/10.3390/jsan13050063 - 6 Oct 2024
Viewed by 498
Abstract
The integration of connected autonomous vehicles (CAVs) has significantly enhanced driving convenience, but it has also raised serious privacy concerns, particularly regarding the personal identifiable information (PII) stored on infotainment systems. Recent advances in connected and autonomous vehicle control, such as multi-agent system (MAS)-based hierarchical architectures and privacy-preserving strategies for mixed-autonomy platoon control, underscore the increasing complexity of privacy management within these environments. Rental cars with infotainment systems pose substantial challenges, as renters often fail to delete their data, leaving it accessible to subsequent renters. This study investigates the risks associated with PII in connected vehicles and emphasizes the necessity of automated solutions to ensure data privacy. We introduce the Vehicle Inactive Profile Remover (VIPR), an innovative automated solution designed to identify and delete PII left on infotainment systems. The efficacy of VIPR is evaluated through surveys, hands-on experiments with rental vehicles, and a controlled laboratory environment. VIPR achieved a 99.5% success rate in removing user profiles, with an average deletion time of 4.8 s or less, demonstrating its effectiveness in mitigating privacy risks. This solution highlights VIPR as a critical tool for enhancing privacy in connected vehicle environments, promoting a safer, more responsible use of connected vehicle technology in society. Full article
(This article belongs to the Special Issue Feature Papers in the Section of Network Security and Privacy)
Figures: (1) Vehicle-to-Everything (V2X) communications model; (2) Modern in-vehicle network architecture; (3) High-level illustration of PII leakage through Bluetooth pairing and profile removal using VIPR; (4) VIPR state diagram for rental vehicle depot return and subsequent rentals; (5) VIPR state diagram for ride-sharing; (6) Replicated vehicle infotainment system; (7) Infotainment system display showing current and previously paired devices (active or inactive); (8) Menu illustration of current and previous paired devices; (9) VIPR automatic removal of inactive profiles; (10) Menu illustration of active profiles after VIPR executes.
24 pages, 3579 KiB  
Article
Prototype for Multi-UAV Monitoring–Control System Using WebRTC
by Fatih Kilic, Mainul Hassan and Wolfram Hardt
Drones 2024, 8(10), 551; https://doi.org/10.3390/drones8100551 - 5 Oct 2024
Viewed by 585
Abstract
Most unmanned aerial vehicle (UAV) ground control station (GCS) solutions today are either web-based or native applications, primarily designed to support a single UAV. In this paper, our research aims to provide an open, universal framework intended for rapid prototyping, addressing these objectives by developing a Web Real-Time Communication (WebRTC)-based multi-UAV monitoring and control system for applications such as automated power line inspection (APOLI). The APOLI project focuses on identifying damage and faults in power line insulators through real-time image processing, video streaming, and flight data monitoring. The implementation is divided into three main parts. First, we configure UAVs for hardware-accelerated streaming using the GStreamer framework on the NVIDIA Jetson Nano companion board. Second, we develop the server-side application to receive hardware-encoded video feeds from the UAVs by utilizing a WebRTC media server. Lastly, we develop a web application that facilitates communication between clients and the server, allowing users with different authorization levels to access video feeds and control the UAVs. The system supports three user types: pilot/admin, inspector, and customer. Our research aims to leverage the WebRTC media server framework to develop a web-based GCS solution capable of managing multiple UAVs with low latency. The proposed solution enables real-time video streaming and flight data collection from multiple UAVs to a server, which is displayed in a web application interface hosted on the GCS. This approach ensures efficient inspection for applications like APOLI while prioritizing UAV safety during critical scenarios. Another advantage of the solution is its integration compatibility with platforms such as cloud services and native applications, as well as the modularity of the plugin-based architecture offered by the Janus WebRTC server for future development. Full article
(This article belongs to the Special Issue Conceptual Design, Modeling, and Control Strategies of Drones-II)
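As a rough illustration of the UAV-side streaming step, the sketch below launches a hardware-accelerated GStreamer pipeline of the kind typically used on a Jetson Nano: camera frames are encoded with the onboard H.264 encoder and sent as RTP over UDP toward the media server. The element names, properties, server address, and port are assumptions based on common JetPack setups, not the project's exact pipeline or Janus configuration.

```python
# Hypothetical sketch of a hardware-encoded RTP stream from a Jetson Nano.
import subprocess

SERVER_IP = "192.168.1.10"   # placeholder: host running the WebRTC media server
RTP_PORT = 5004              # placeholder: RTP port expected by the streaming plugin

pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
    "nvv4l2h264enc bitrate=2000000 insert-sps-pps=true ! "
    "h264parse ! rtph264pay config-interval=1 pt=96 ! "
    f"udpsink host={SERVER_IP} port={RTP_PORT}"
)

# build the argument list so each element, property, and "!" is its own token
args = ["gst-launch-1.0", "-e"]
for i, element in enumerate(pipeline.split(" ! ")):
    if i:
        args.append("!")
    args.extend(element.split())
subprocess.run(args, check=False)
```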
Figures: (1) Vision subsystem test on the AREIOM platform; (2) Streaming plugin CPU and memory usage; (3) Proposed architecture; (4) Hardware and software components with the technology used; (5) Flowchart of the UAV-side development; (6) Flowchart of the GCS-side development; (7) Testbed for latency measurements; (8) Application interface of the GCS; (9) Testbed for Test 5 measurements; (10) Network bandwidth during multiple streaming; (11) Janus WebRTC benchmark results (9 streams); (12) Janus WebRTC benchmark results from the webrtc-internals dump (9 streams).
20 pages, 816 KiB  
Article
A Multimodal Recurrent Model for Driver Distraction Detection
by Marcel Ciesla and Gerald Ostermayer
Appl. Sci. 2024, 14(19), 8935; https://doi.org/10.3390/app14198935 - 4 Oct 2024
Viewed by 373
Abstract
Distracted driving is a significant threat to road safety, causing numerous accidents every year. Driver distraction detection systems offer a promising solution by alerting the driver to refocus on the primary driving task. Even with increasing vehicle automation, human drivers must remain alert, especially in partially automated vehicles where they may need to take control in critical situations. In this work, an AI-based distraction detection model is developed that focuses on improving classification performance using a long short-term memory (LSTM) network. Unlike traditional approaches that evaluate individual frames independently, the LSTM network captures temporal dependencies across multiple time steps. In addition, this study investigated the integration of vehicle sensor data and an inertial measurement unit (IMU) to further improve detection accuracy. The results show that the recurrent LSTM network significantly improved the average F1 score from 71.3% to 87.0% compared to a traditional vision-based approach using a single image convolutional neural network (CNN). Incorporating sensor data further increased the score to 90.1%. These results highlight the benefits of integrating temporal dependencies and multimodal inputs and demonstrate the potential for more effective driver distraction detection systems that can improve road safety. Full article
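The multimodal recurrent idea can be sketched compactly: per-frame image features from a small CNN are concatenated with vehicle/IMU sensor features at each time step and fed to an LSTM, whose last hidden state is classified into distraction classes. The layer sizes, sequence length, number of sensor channels, and number of classes below are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical CNN-LSTM-S sketch: frame features + sensor features -> LSTM -> classes.
import torch
import torch.nn as nn

class CnnLstmS(nn.Module):
    def __init__(self, n_sensor=6, n_classes=5, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # tiny per-frame feature extractor
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU())
        self.lstm = nn.LSTM(feat_dim + n_sensor, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames, sensors):
        # frames: (B, T, 3, H, W); sensors: (B, T, n_sensor)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([feats, sensors], dim=-1)      # fuse vision and sensor streams
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])                   # classify from the last time step

model = CnnLstmS()
logits = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 8, 6))
print(logits.shape)   # (2, 5)
```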
Figures: (1) Correlation matrix heatmap of the numerical features; (2) Class distribution of the dataset among the drivers; (3) Boxplots of the durations of distracting activities; (4) CNN model used for vision-based driver distraction detection; (5) CNN-LSTM model used for recurrent vision-based detection; (6) CNN-LSTM-S model used for multimodal recurrent detection; (7) Confusion matrices for the individual models (median-F1 training runs); (8) Leave-one-driver-out cross-validation results; (A1) Histogram of durations of distracting activities; (A2) Histograms of distraction durations separated by driver.
18 pages, 7522 KiB  
Article
Trajectory Tracking Control of Unmanned Vehicles via Front-Wheel Driving
by Jie Zhou, Can Zhao, Yunpei Chen, Kaibo Shi, Eryang Chen and Ziqi Luo
Drones 2024, 8(10), 543; https://doi.org/10.3390/drones8100543 - 1 Oct 2024
Viewed by 358
Abstract
Automated Guided Vehicles (AGVs) are the fastest commercially available application of unmanned driving technology, and the research significance of unmanned vehicle technology remains substantial. This paper investigates the driving mode of AGVs and proposes a method to extend the kinematic model of center-driven unmanned vehicles to front-wheel drive. This change in driving force enables unmanned vehicles to achieve faster tracking and higher consistency, solving the problems of long tracking time and insufficient accuracy in complex environments and reducing production costs. By analyzing the posture relationship of the unmanned vehicle system during movement, we established a posture error system to analyze the trajectory tracking problem. Utilizing Lyapunov stability theory and the concept of backstepping, we designed a control scheme that uses linear velocity and heading angular velocity as variables for the posture error system. This control scheme aims to stabilize the system and achieve synchronized trajectory tracking control of the unmanned vehicle. The impact of control parameters in the controller on tracking performance is also discussed. The final experimental simulation results show that the system error stabilizes, and the unmanned vehicle accurately follows the predetermined trajectory, verifying the feasibility of our proposed method and control scheme. Full article
(This article belongs to the Special Issue UAV Trajectory Generation, Optimization and Cooperative Control)
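For intuition, the following is a minimal sketch of Lyapunov-based posture-error tracking for a unicycle-type vehicle: the pose error is rotated into the body frame and a Kanayama-style control law produces linear and angular velocity commands. This illustrates the general scheme described in the abstract, not the paper's exact front-wheel-drive kinematics, controller structure, or gains.

```python
# Sketch: posture-error trajectory tracking with a Kanayama-style control law.
import numpy as np

def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    x, y, th = pose
    xr, yr, thr = ref_pose
    # posture error expressed in the vehicle frame
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = thr - th
    v = v_ref * np.cos(eth) + kx * ex
    w = w_ref + v_ref * (ky * ey + kth * np.sin(eth))
    return v, w

# simulate tracking of a circular reference trajectory (radius v_ref / w_ref = 2 m)
dt, pose = 0.02, np.array([0.5, -0.5, 0.0])
for k in range(2000):
    t = k * dt
    v_ref, w_ref = 1.0, 0.5
    ref = np.array([2.0 * np.sin(0.5 * t), 2.0 * (1 - np.cos(0.5 * t)), 0.5 * t])
    v, w = tracking_control(pose, ref, v_ref, w_ref)
    pose += dt * np.array([v * np.cos(pose[2]), v * np.sin(pose[2]), w])
print("final position tracking error:", np.linalg.norm(pose[:2] - ref[:2]))
```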
Figures: (1) Analysis of precursor angle parameters; (2) Connection between the rear wheel and the center; (3) Parameters and analytic diagram of unmanned vehicles; (4) Block diagram of the unmanned vehicle control system; (5) System error of the unmanned vehicle in example 1; (6) Speed tracking of the unmanned vehicle in example 1; (7) Spiral synchronized trajectory tracking control in example 1; (8) Dynamic errors of the two parameter sets in examples 1 and 2 under a circular trajectory; (9) Speed changes of the two parameter sets under the circular trajectory; (10) The two parameter sets applied to circular trajectory tracking; (11–13) Comparison of practical trajectories 1–4; (14–15) Comparative analysis of trajectory tracking under different driving forces in example 2, with partial details; (16) Cartesian heart-shaped trajectory tracking path; (17) Complex curve trajectory tracking path.
18 pages, 5626 KiB  
Article
An Eco-Driving Strategy at Multiple Fixed-Time Signalized Intersections Considering Traffic Flow Effects
by Huinian Wang, Junbin Guo, Jingyao Wang and Jinghua Guo
Sensors 2024, 24(19), 6356; https://doi.org/10.3390/s24196356 - 30 Sep 2024
Viewed by 459
Abstract
To encourage energy saving and emission reduction and improve traffic efficiency in the multiple signalized intersections area, an eco-driving strategy for connected and automated vehicles (CAVs) considering the effects of traffic flow is proposed for the mixed traffic environment. Firstly, the formation and dissipation process of signalized intersection queues are analyzed based on traffic wave theory, and a traffic flow situation estimation model is constructed, which can estimate intersection queue length and rear obstructed fleet length. Secondly, a feasible speed set calculation method for multiple signalized intersections is proposed to enable vehicles to pass through intersections without stopping and obstructing the following vehicles, adopting a trigonometric profile to generate smooth speed trajectory to ensure good riding comfort, and the speed trajectory is optimized with comprehensive consideration of fuel consumption, emissions, and traffic efficiency costs. Finally, the effectiveness of the strategy is verified. The results show that traffic performance and fuel consumption benefits increase as the penetration rate of CAVs increases. When all vehicles on the road are CAVs, the proposed strategy can increase the average speed by 9.5%, reduce the number of stops by 78.2%, reduce the stopped delay by 82.0%, and reduce the fuel consumption, NOx, and HC emissions by 20.4%, 39.4%, and 46.6%, respectively. Full article
(This article belongs to the Section Vehicular Sensing)
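The trigonometric speed profile mentioned in the abstract can be sketched as a half-cosine ramp from the current speed to a target speed, so acceleration starts and ends at zero for comfort, with the resulting arrival time at the stop line checked against the green window of a fixed-time signal. The distance, speeds, ramp duration, and green window below are illustrative assumptions.

```python
# Sketch: half-cosine speed ramp and green-window arrival check for eco-driving.
import numpy as np

def trig_speed_profile(v0, vf, T, n=200):
    t = np.linspace(0.0, T, n)
    v = v0 + (vf - v0) * (1 - np.cos(np.pi * t / T)) / 2      # smooth half-cosine ramp
    a = (vf - v0) * np.pi / (2 * T) * np.sin(np.pi * t / T)   # analytic acceleration
    return t, v, a

def arrival_time(distance, v0, vf, T):
    t, v, _ = trig_speed_profile(v0, vf, T)
    s = np.concatenate([[0.0], np.cumsum(np.diff(t) * (v[1:] + v[:-1]) / 2)])
    if s[-1] >= distance:                        # stop line reached during the ramp
        return float(np.interp(distance, s, t))
    return float(t[-1] + (distance - s[-1]) / vf)   # otherwise cruise at vf afterwards

green_start, green_end = 12.0, 35.0              # assumed green window (s)
t_arr = arrival_time(distance=250.0, v0=10.0, vf=14.0, T=8.0)
print(f"arrival at stop line: {t_arr:.1f} s, "
      f"within green: {green_start <= t_arr <= green_end}")
```

In the full strategy the target speed would be chosen from the feasible speed set derived from queue dissipation and the obstructed-fleet estimate, then refined by the cost optimization over fuel, emissions, and travel time.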
Figures: (1) Basic relationship between traffic wave flow and concentration; (2) Queuing at signalized intersections; (3) Vehicle obstruction schematic; (4) Eco-driving control system architecture; (5) Eco-driving vehicle trajectory schematic; (6) Trigonometric-function speed trajectory (acceleration and deceleration processes); (7) Connected eco-driving vehicle passing through the intersection; (8) Stochastic crossover schematic; (9) Best and mean fitness values of each generation; (10) Flowchart of the proposed strategy; (11) MATLAB and VISSIM joint simulation platform architecture; (12) Simulation parameters; (13) Fuel consumption and emission rates of CAV flow versus HDV flow at medium traffic volume; (14) Traffic performance, fuel consumption, and emissions of CAV flow versus manual driving flow under different traffic volumes; (15) Benefits at different penetration rates.