Search Results (883)

Search Parameters:
Keywords = algorithmic information theory

20 pages, 6023 KiB  
Article
Automatic Recognition of Multiple Emotional Classes from EEG Signals through the Use of Graph Theory and Convolutional Neural Networks
by Fatemeh Mohajelin, Sobhan Sheykhivand, Abbas Shabani, Morad Danishvar, Sebelan Danishvar and Lida Zare Lahijan
Sensors 2024, 24(18), 5883; https://doi.org/10.3390/s24185883 - 10 Sep 2024
Viewed by 522
Abstract
Emotion is a complex state arising from the functioning of the human brain in relation to various events, and it has no precise scientific definition. Emotion recognition has traditionally been performed by psychologists and experts on the basis of facial expressions, an approach that is limited and error-prone. This study presents a new automatic method for emotion recognition from electroencephalogram (EEG) signals that combines graph theory with convolutional networks. In the proposed model, a comprehensive database based on musical stimuli is first collected to induce two- and three-class emotional states comprising positive, negative, and neutral emotions. Generative adversarial networks (GANs) are used to augment the recorded data, which are then fed into the suggested deep network for feature extraction and classification. The network, built from four GConv layers, extracts the dynamic information in the EEG data in an optimal manner. With the suggested strategy, classification accuracy is 99% for two classes and 98% for three classes. The model compares favorably with recent research and algorithms and yields promising results. The proposed method can serve as a component of brain–computer interface (BCI) systems. Full article
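The listing does not give the exact GConv layer design, but the underlying idea, treating EEG channels as graph nodes and filtering features with Chebyshev polynomials of the graph Laplacian, can be sketched in a few lines. A minimal NumPy sketch (the filter order K = 3 and all shapes are illustrative assumptions, not the authors' architecture):

import numpy as np

def normalized_laplacian(A):
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def cheb_gconv(X, A, W, K=3):
    # One Chebyshev graph convolution: sum_k T_k(L_tilde) X W_k, with
    # X: (n_channels, f_in) per-channel EEG features,
    # A: (n_channels, n_channels) adjacency, W: (K, f_in, f_out) weights.
    L = normalized_laplacian(A)
    L_t = L - np.eye(len(A))  # crude eigenvalue rescaling, assumes lambda_max ~ 2
    Tx = [X, L_t @ X]         # T_0 X and T_1 X
    for _ in range(2, K):
        Tx.append(2 * L_t @ Tx[-1] - Tx[-2])  # Chebyshev recurrence
    return sum(Tx[k] @ W[k] for k in range(K))

Stacking four such layers with nonlinearities in between gives the kind of GConv backbone the abstract describes.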
Figures:
Figure 1. Emotional classes based on Russell's theory.
Figure 2. The main pipeline for automatic recognition of emotions using EEG signals.
Figure 3. Recording of EEG signals from one of the participants in the experiment.
Figure 4. The sequence and timing of the different pieces of music played to arouse different emotions in the participants.
Figure 5. Graph design stage (the sign * indicates multiplication).
Figure 6. General pipeline of the proposed architecture.
Figure 7. Details of the pipeline of the proposed architecture.
Figure 8. Results for different numbers of layers in the proposed pipeline.
Figure 9. Results for different polynomial coefficients in the proposed pipeline.
Figure 10. Accuracy and error of the proposed pipeline over 200 repetitions of the network.
Figure 11. ROC analysis for classification of two and three classes of emotions.
Figure 12. t-SNE chart for the classification of two and three classes of emotions.
Figure 13. Performance of the proposed model using 5-fold cross-validation.
Figure 14. Comparison of the suggested method's performance with several algorithms.
Figure 15. Robustness of different algorithms versus the suggested method in the presence of measurement noise.
29 pages, 11129 KiB  
Article
A Bio-Inspired Sliding Mode Method for Autonomous Cooperative Formation Control of Underactuated USVs with Ocean Environment Disturbances
by Zaopeng Dong, Fei Tan, Min Yu, Yuyang Xiong and Zhihao Li
J. Mar. Sci. Eng. 2024, 12(9), 1607; https://doi.org/10.3390/jmse12091607 - 10 Sep 2024
Viewed by 266
Abstract
In this paper, a bio-inspired sliding mode control (bio-SMC) scheme with minimal learning parameter (MLP) adaptation is proposed to achieve cooperative formation control of underactuated unmanned surface vehicles (USVs) under external environmental disturbances and model uncertainties. Firstly, the desired trajectory of each follower USV is generated from the leader USV's position information within the leader–follower framework, transforming the cooperative formation control problem into a trajectory tracking error stabilization problem. The USV position errors are then stabilized by a backstepping approach, from which the virtual longitudinal and lateral velocities are designed. To alleviate system oscillation and reduce the computational complexity of the controller, a sliding mode control law with a bio-inspired model is designed to avoid the differential explosion caused by repeated derivation. A radial basis function neural network (RBFNN) is adopted to estimate and compensate for the environmental disturbances and model uncertainties, with the MLP algorithm substituting a single-parameter form for online weight learning. Finally, the closed-loop errors are proved to be uniformly ultimately bounded via Lyapunov stability theory, and the validity of the method is verified by simulation experiments. Full article
(This article belongs to the Special Issue Autonomous Marine Vehicle Operations—2nd Edition)
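The listing does not spell out the adaptive law, but the MLP idea, replacing online learning of the full RBFNN weight vector with a single adapted scalar, can be sketched roughly as follows (the sigma-modification law, gains, and compensation form are generic textbook choices, not the authors' design):

import numpy as np

def rbf_features(x, centers, width):
    # Gaussian RBF regressor vector h(x); centers: (m, n), x: (n,)
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

class MLPApproximator:
    # Minimal-learning-parameter compensation: adapt one scalar
    # phi ~ ||W||^2 instead of the full RBFNN weight vector W.
    def __init__(self, centers, width, gamma=5.0, sigma=0.01):
        self.centers, self.width = centers, width
        self.gamma, self.sigma = gamma, sigma  # adaptation gain, leakage
        self.phi = 0.0                         # the single learned parameter

    def update(self, x, s, dt):
        # x: state vector; s: sliding-surface value; returns compensation term
        h = rbf_features(x, self.centers, self.width)
        hh = float(h @ h)
        # sigma-modified adaptive law for the single parameter
        self.phi += dt * (self.gamma * s * s * hh - self.sigma * self.phi)
        return 0.5 * self.phi * s * hh  # one common MLP compensation form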
Figures:
Figure 1. Formation trajectory tracking diagram.
Figure 2. Leader–follower framework.
Figure 3. Flowchart of cooperative formation control for underactuated USVs.
Figure 4. Structure diagram of the RBF neural network.
Figures 5-11 (Case 1), 12-18 (Case 2), 19-25 (Case 3), and 26-32 (Case 4). For each case: USV trajectories with (a) SMC, (b) SMC and RBF, and (c) bio-SMC and RBF; longitudinal and lateral tracking errors; control input signals (surge force, yaw moment); velocity variables for the leader USV and follower USV1/USV2; longitudinal and lateral virtual velocities; longitudinal and lateral velocity errors; and approximation results for surge and yaw dynamic damping.
21 pages, 3253 KiB  
Article
Probing Asymmetric Interactions with Time-Separated Mutual Information: A Case Study Using Golden Shiners
by Katherine Daftari, Michael L. Mayo, Bertrand H. Lemasson, James M. Biedenbach and Kevin R. Pilkiewicz
Entropy 2024, 26(9), 775; https://doi.org/10.3390/e26090775 - 10 Sep 2024
Viewed by 263
Abstract
Leader–follower modalities and other asymmetric interactions that drive the collective motion of organisms are often quantified using information-theoretic metrics like transfer or causation entropy. These metrics are difficult to evaluate accurately without far more data than is typically available from a time series of animal trajectories collected in the field or from experiments. In this paper, we use a generalized leader–follower model to argue that the time-separated mutual information between two organism positions can serve as an alternative metric for capturing asymmetric correlations, one that is much less data intensive and more accurately estimated by popular k-nearest-neighbor algorithms than transfer entropy. Our model predicts a local maximum of this mutual information at a time separation corresponding to the fundamental reaction timescale of the follower organism. We confirm this prediction by analyzing time series trajectories recorded for a pair of golden shiner fish circling an annular tank. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
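The time-separated mutual information the paper advocates is typically computed with the KSG k-nearest-neighbor estimator it mentions. A minimal SciPy sketch (assumes a recent SciPy with return_length support; k = 4 and the lag handling are illustrative choices):

import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mutual_info(x, y, k=4):
    # KSG (algorithm 1) k-nearest-neighbor estimate of MI(x; y) in nats
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)
    n = len(x)
    joint = np.hstack([x, y])
    # distance to the k-th neighbor in the joint space (Chebyshev metric)
    eps = cKDTree(joint).query(joint, k=k + 1, p=np.inf)[0][:, -1]
    # marginal neighbor counts strictly inside eps (subtract the point itself)
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf, return_length=True) - 1
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf, return_length=True) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

# Time-separated MI at lag tau (tau > 0 probes fish-0 leading fish-1):
# mi = ksg_mutual_info(theta0[:-tau], theta1[tau:])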
Figures:
Figure 1. (A) Time-separated self mutual information (fixed separation τ = 10 time steps) of the position of a standard one-dimensional Gaussian random walker versus absolute time t (red), compared with the same quantity for a walker confined to the perimeter of a unit ring (black); the latter is computed numerically from an ensemble of replicate simulations using the KSG algorithm, with jackknifed standard error bars (Section 3.2 describes this procedure). (B) Mutual information for the walker on a ring at steady state versus time separation τ: the exact ensemble result (black); a single-trajectory estimate ignoring correlations between individually sampled pairs of time points (blue); and a single-trajectory estimate with each sampled pair at least 300 time steps apart from every other sample (red).
Figure 2. (A) A 10 s sample trajectory from one of the agitated-condition experiments, in which fish-0 (red) and fish-1 (blue) swim smooth laps about the outer wall of the annular tank. (B) A 25 s sample trajectory from the same experimental replicate, in which the two fish swim more erratic laps.
Figure 3. (A) Fish-0 and fish-1 facing each other with similar alignment angles. (B) Fish-0 leading fish-1, with fish-0 at an angle close to π and fish-1 close to 0. (C) Alignment angles averaged over a rolling 15 s window for both fish over one agitated-condition experiment; the alignments are strongly polarized over most of the experiment and fish-1 predominantly takes the lead (A10 >> A01 over most of the time series).
Figure 4. Top panels: time-separated mutual information MI({θ1(t)}; {θ0(t − τ)}) in the one-dimensional leader–follower model for no change in leadership (α = 0), occasional change in leadership (α = 0.00021), and independently behaving fish (no following). Red segments correspond to positive time separations (fish-0 presumed leader), blue segments to negative ones (fish-1). In all cases Ω = 0.0067 rad per time step; σ = 0.0039 rad per time step in the first two cases and σ = 0.00039 in the third, chosen so the trajectory qualitatively matches the other two cases. Bottom panels: representative trajectory segments for the same three cases (red: fish-0; blue: fish-1); a dashed box highlights a change of leadership in the middle panel. The mutual information metric deftly distinguishes qualitatively similar path data.
Figure 5. Ensemble-estimated mutual information MI({θ1(t)}; {θ0(t − τ)}) for the schematic model (black) and single-trajectory estimates for window sizes from W = 5 to W = 100. Increasing the window between consecutively sampled pairs converges to the ensemble result, with surprisingly good convergence near the local maximum, the principal feature of interest, even for small windows.
Figure 6. Time-separated mutual information between the angular positions of the two golden shiners, computed from a single experimental time series with window size W = 0.5 s. A dashed vertical line marks the estimated peak; its location at τ < 0 implies that fish-1 leads during most of the series (red and blue coloring as in the top panels of Figure 4).
Figure 7. (A) LOESS smoothing of the τ ≤ 0 branch of the mutual information from Figure 6 (blue) for data fractions f = 0.1 (orange), f = 0.2 (green), and f = 0.5 (magenta). (B) Peak location versus f (blue): over a broad range of data fractions the peak falls within a narrow band of time separations, while the positive-τ branch (red) has no peak for any data fraction.
Figure A1. Top: raw video frame from one of the golden shiner experiments, showing the annular tank and the fish dyad. Bottom: 15 s rolling-window alignment angles of fish-0 (red) and fish-1 (blue) over an agitated-condition replicate different from that used in the rest of the paper; although the lapping is less smooth and consistent than in Figure 3, long intervals of apparent leader–follower behavior are still observed.
18 pages, 10246 KiB  
Article
Hypergraph-Based Influence Maximization in Online Social Networks
by Chuangchuang Zhang, Wenlin Cheng, Fuliang Li and Xingwei Wang
Mathematics 2024, 12(17), 2769; https://doi.org/10.3390/math12172769 - 7 Sep 2024
Viewed by 284
Abstract
Influence maximization in online social networks selects a set of influential seed nodes to maximize the influence spread under a given diffusion model. However, most existing proposals have huge computational costs and consider only the dyadic influence relationship between two nodes, ignoring the higher-order influence relationships among multiple nodes. This limits the applicability and accuracy of existing influence diffusion models in real, complex online social networks. To this end, in this paper, we present a novel information diffusion model that introduces hypergraph theory to determine the most influential nodes, jointly considering adjacent influence and higher-order influence relationships to improve diffusion efficiency. We mathematically formulate the influence maximization problem under higher-order influence relationships in online social networks and propose a hypergraph sampling greedy algorithm (HSGA) to effectively select the most influential seed nodes. In the HSGA, a random-walk-based influence diffusion method and a Monte Carlo-based influence approximation method are devised for fast approximation and calculation of node influences. We conduct simulation experiments on six real datasets; the results demonstrate the effectiveness and efficiency of the HSGA, which achieves lower computational cost and higher seed selection accuracy than comparison mechanisms. Full article
(This article belongs to the Special Issue Deep Representation Learning for Social Network Analysis)
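The HSGA's random-walk and sampling accelerations are not detailed in the listing, but the greedy Monte Carlo skeleton they speed up looks roughly like this (a pairwise-graph sketch under the independent cascade model; hyperedge-level activation would replace the inner loop):

import random

def simulate_spread(graph, seeds, p=0.1, trials=200):
    # Monte Carlo estimate of expected spread under independent cascade;
    # graph: dict mapping node -> list of neighbor nodes
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, **kw):
    # add, one at a time, the node with the largest marginal spread gain
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds + [n], **kw))
        seeds.append(best)
    return seeds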
Figures:
Figure 1. An illustrative example of a social network hypergraph.
Figure 2. Influence of the hyperedge influence ratio on the improvement ratio.
Figure 3. Comparison results of different algorithms.
Figure 4. Comparison of running time.
Figure 5. Overlap of seed nodes.
25 pages, 1873 KiB  
Article
Fixed-Time Distributed Event-Triggered Cooperative Guidance Methods for Multiple Vehicles with Limited Communications to Achieve Simultaneous Arrival
by Zhenzhen Gu, Xugang Wang and Zhongyuan Wang
Aerospace 2024, 11(9), 709; https://doi.org/10.3390/aerospace11090709 - 31 Aug 2024
Viewed by 243
Abstract
Aiming at the salvo-attack problem of multiple missiles, a distributed cooperative guidance law based on an event-triggered mechanism is proposed, enabling missiles with large differences in spatial location and velocity to achieve simultaneous attacks with only a few dozen information exchanges. It effectively reduces the generation of control commands and the communication frequency, thereby reducing channel load and improving communication efficiency and reliability; compared to traditional periodic sampling communication, the number of communications is reduced by over 90%. The guidance process is divided into two stages. The first is the cooperative guidance stage, in which the missiles reach consensus on their time-to-go estimates through information exchange. In this stage, each missile is equipped with an event-triggered function based on its own state error, and a missile updates and transmits its information over the communication network only when the error exceeds the set threshold, effectively reducing the occupancy of missile-borne resources during cooperation. The second is the independent guidance stage, in which the missiles hit the target simultaneously while the communication network remains silent; this is achieved by ensuring that the time-to-go estimates represent the real time-to-go once consensus is reached. Through the design of the two-stage guidance law and the replacement of the event-triggered function, the cooperative guidance system remains stable both when the leader missile is present and when it is destroyed, and Zeno behavior is excluded. The stability of the cooperative guidance law is rigorously proved using algebraic graph theory, matrix theory, and the Lyapunov method. Finally, numerical simulation results demonstrate the validity of the algorithm and the correctness of the stability analysis. Full article
(This article belongs to the Section Aeronautics)
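As a hypothetical illustration of the state-error trigger described, each missile might rebroadcast its time-to-go estimate only when the error since its last broadcast crosses a threshold with a positive floor (the threshold form and gains below are assumptions, not the paper's law; the positive floor is what rules out Zeno behavior, since a minimum inter-event error is always required):

import math

def event_trigger(t_go_hat, t_go_last, t, c0=0.05, c1=0.1, lam=0.02):
    # t_go_hat: current time-to-go estimate; t_go_last: value at last broadcast
    err = abs(t_go_hat - t_go_last)
    threshold = c0 + c1 * math.exp(-lam * t)  # decays toward the floor c0
    return err >= threshold  # True -> broadcast to neighbors, reset t_go_last

In the guidance loop, each missile checks its own trigger; between events the network stays silent.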
Figures:
Figure 1. Schematic diagram of the missile swarm cooperative operation.
Figure 2. Planar engagement geometry.
Figure 3. Principle of the event-triggered mechanism.
Figure 4. Communication topology.
Figures 5-7. Simulation results under the fixed-time distributed event-triggered cooperative guidance law for the swarm with the leader (Figure 5), without the leader (Figure 6), and with a bidirectional-communication leader (Figure 7): (a) missile trajectories; (b) missile-target distance; (c) time-to-go estimates; (d) time-to-go estimates at event-triggered moments; (e) navigation ratio; (f) normal acceleration; (g) tangential acceleration; (h) event-triggered moments.
Figure 8. Simulation results of the swarm with the leader under the fixed-time distributed cooperative guidance law: (a) missile trajectories; (b) missile-target distance; (c) time-to-go estimates; (d) navigation ratio; (e) normal acceleration; (f) tangential acceleration.
25 pages, 4182 KiB  
Article
W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots
by Dingji Luo, Yucan Huang, Xuchao Huang, Mingda Miao and Xueshan Gao
Sensors 2024, 24(17), 5662; https://doi.org/10.3390/s24175662 - 30 Aug 2024
Viewed by 424
Abstract
In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing the inaccuracies of visual–inertial estimation caused by redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot's body pose is parameterized in SE(2) and the corresponding camera pose in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model, employ the concept of pre-integration to derive the pose-constraint residuals and their Jacobians, and use marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. The resulting nonlinear optimization problem is solved to obtain the optimal poses and landmark points of the ground-moving robot. A comparison with the ORB-SLAM3 algorithm on recorded indoor datasets shows significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results. Full article
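The core parameterization idea, lifting a planar SE(2) body pose to an SE(3) camera pose and forming a reprojection residual, can be sketched as follows (a pinhole model; the body-to-camera extrinsic T_bc and intrinsics K are assumed given; the Jacobians, pre-integration, and marginalization steps are omitted):

import numpy as np

def se2_to_SE3(x, y, yaw):
    # Lift the planar body pose (SE(2)) to a homogeneous SE(3) transform
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [x, y, 0.0]
    return T

def reprojection_residual(body_pose, T_bc, p_world, uv, K):
    # Residual between observed pixel uv and the projection of landmark
    # p_world through the camera pose T_wc = T_wb(SE(2)) * T_bc
    T_wc = se2_to_SE3(*body_pose) @ T_bc
    p_cam = np.linalg.inv(T_wc) @ np.append(p_world, 1.0)
    proj = K @ p_cam[:3]          # assumes the landmark is in front (z > 0)
    return uv - proj[:2] / proj[2]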
Figures:
Figure 1. Block diagram of mobile robot visual SLAM with integrated wheel speed.
Figure 2. Schematic diagram of the mobile robot coordinate system.
Figure 3. Camera projection model.
Figure 4. Schematic diagram of wheel speed information pre-integration.
Figure 5. Schematic diagram of the square movement in the comparative experiment.
Figure 6. Previous tracking: (a) reference keyframe; (b) current frame.
Figure 7. Comparative Experiment One: robot poses and environmental map points obtained by W-VSLAM.
Figure 8. Comparative Experiment Two: robot poses and environmental map points obtained by W-VSLAM.
Figure 9. Comparative Experiment One: trajectory comparison of different algorithms.
Figure 10. Comparative Experiment Two: trajectory comparison of different algorithms.
Figure 11. Comparative Experiment One: translational components of the trajectories from different algorithms.
Figures 12-14. Perception results of robot poses and map points (a) and comparison against the reference trajectory (b) for Experiments One, Two, and Three.
Figure 15. Indoor long-corridor environment trajectory; RViz result diagram.
Figure 16. Trajectory comparison in the indoor long-corridor environment.
Figure 17. Translational components of the trajectories in the indoor long-corridor environment.
Figure 18. Absolute accuracy of rotational estimation in the indoor long-corridor environment.
Figure 19. Relative accuracy of rotational estimation in the indoor long-corridor environment (1° increments).
16 pages, 3639 KiB  
Article
Time-of-Flight Camera Intensity Image Reconstruction Based on an Untrained Convolutional Neural Network
by Tian-Long Wang, Lin Ao, Na Han, Fu Zheng, Yan-Qiu Wang and Zhi-Bin Sun
Photonics 2024, 11(9), 821; https://doi.org/10.3390/photonics11090821 - 30 Aug 2024
Viewed by 606
Abstract
With the continuous development of science and technology, laser ranging is becoming more efficient, convenient, and widespread; it is already used extensively in medicine, engineering, video games, and three-dimensional imaging. A time-of-flight (ToF) camera is a three-dimensional stereo imaging device with the advantages of small size, small measurement error, and strong anti-interference ability. However, compared to traditional sensors, ToF cameras typically exhibit lower resolution and signal-to-noise ratio due to inevitable noise from multipath interference and mixed pixels during use. Additionally, in environments with scattering media, light from objects is scattered multiple times, making it challenging for ToF cameras to obtain effective object information. To address these issues, we propose a solution that combines ToF cameras with single-pixel imaging theory. Leveraging the intensity information acquired by ToF cameras, we apply various reconstruction algorithms to reconstruct the object's image. Under undersampling conditions, our reconstruction approach yields a higher peak signal-to-noise ratio than the raw camera image, significantly improving the quality of the target object's image. Furthermore, where ToF cameras fail in environments with scattering media, our approach successfully reconstructs the object's image when the camera images through the scattering medium. This experimental demonstration effectively reduces the noise and direct ambient light generated by the ToF camera itself, while opening up potential applications of ToF cameras in challenging environments, such as scattering media or underwater. Full article
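Among the reconstruction algorithms compared in the paper, correlation ghost imaging (CGI) has the simplest measurement model; a minimal NumPy sketch (pattern shapes and normalization are illustrative assumptions):

import numpy as np

def cgi_reconstruct(patterns, intensities):
    # Correlation (ghost-imaging) reconstruction from single-pixel data:
    # patterns: (M, H, W) illumination/modulation patterns,
    # intensities: (M,) total intensity recorded per pattern.
    # Returns the correlation <(I - <I>) * P> over the M measurements.
    I = np.asarray(intensities, dtype=float)
    dI = I - I.mean()
    return np.tensordot(dI, np.asarray(patterns, float), axes=1) / len(I)

With M < H*W measurements (undersampling) this estimate is noisy; the TVAL3 and untrained-network reconstructions mentioned in the figures regularize the same inverse problem.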
Figures:
Figure 1. Flight time measurement in continuous sinusoidal-wave modulation mode.
Figure 2. Schematic diagram of image reconstruction using a neural network: (a) network operation; (b) images reconstructed by the neural network at different sampling rates and numbers of iterations.
Figure 3. Schematic diagram of SPI.
Figure 4. Schematic diagram of SPI based on a ToF camera.
Figure 5. Reconstruction from intensity images at different SRs: (a) target object; (b) ToF image; (c-f) images recovered by CGI, BP, TVAL3, and DL. SRs from left to right: 6.25%, 12.5%, 18.75%, 25%, 31.25%, and 37.5%.
Figure 6. PSNR of the reconstructed intensity images versus SR for the different algorithms (black: CGI; red: BP; blue: TVAL3; green: DL).
Figure 7. Reconstruction from intensity images through the scattering media at different SRs: (a) ToF image; (b-e) images recovered by CGI, BP, TVAL3, and DL (same SRs as Figure 5).
Figure 8. PSNR versus SR for reconstruction of intensity images through scattering media with different algorithms.
Figure 9. Reconstruction through the scattering media at different SRs: (a) ToF image; (b) ToF image with added Gaussian noise; (c-f) images recovered by CGI, BP, TVAL3, and DL (same SRs as Figure 5).
Figure 10. PSNR versus SR for reconstruction of intensity images through scattering media with different algorithms.
27 pages, 15545 KiB  
Article
Multi-Level Behavioral Mechanisms and Kinematic Modeling Research of Cellular Space Robot
by Xiaomeng Liu, Haiyu Gu, Xiangyu Zhang, Jianyu Duan, Zhaoxu Liu, Zhichao Li, Siyu Wang and Bindi You
Machines 2024, 12(9), 598; https://doi.org/10.3390/machines12090598 - 27 Aug 2024
Viewed by 267
Abstract
The cellular space robot (CSR) is a new type of self-reconfigurable robot. Through multi-level reconfiguration mechanisms, it can adapt to a wide variety of on-orbit service tasks spanning large spatial scales. Because the CSR has a large configuration space, kinematic solving becomes a key problem affecting on-orbit operation capability, and automatic kinematic solving must be studied. To solve this problem, firstly, a cellular space robot system capable of multi-level self-reconfiguration is proposed to meet the demands of on-orbit service, and the kinematic equations of the modules are constructed from the function of a single module using screw theory. Secondly, the kinematics of the cellular space robot are encapsulated and divided into multiple levels, and a multilevel-assembly relationship-description method for robotic systems is proposed based on graph theory. On this basis, a pathway-solving algorithm is proposed to express the reachability information of the robot organization. Finally, a module–organ–robot multilevel kinematics solving algorithm is proposed in combination with screw theory. To verify the effectiveness of the algorithm, it is compared against numerical simulation. The results show that, compared with the traditional algorithm, the proposed method only needs to update part of the assembly relations after organ migration, which simplifies kinematic modeling and improves the efficiency of kinematic computation. Full article
(This article belongs to the Section Automation and Control Systems)
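Module kinematics built with screw theory follow the standard product-of-exponentials form; a minimal sketch (unit rotation axes are assumed; how the twists are assembled along a connection pathway is the paper's contribution and is not reproduced here):

import numpy as np

def hat(w):
    # Skew-symmetric matrix of a 3-vector
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_twist(xi, theta):
    # Matrix exponential of a unit twist xi = (w, v) scaled by angle theta;
    # w = 0 (pure translation) is handled by the same formula.
    w, v = xi[:3], xi[3:]
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * (W @ W)
    p = (np.eye(3) * theta + (1 - np.cos(theta)) * W
         + (theta - np.sin(theta)) * (W @ W)) @ v
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def fk_product_of_exponentials(twists, thetas, M):
    # Pose of a pathway's end module: product of exponentials times the
    # home configuration M, multiplying along the connection pathway
    T = np.eye(4)
    for xi, th in zip(twists, thetas):
        T = T @ exp_twist(np.asarray(xi, float), th)
    return T @ M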
Figures:
Figure 1. Schematic diagram of the cellular space robot module.
Figure 2. Schematic diagram of the CSR multi-level reconfiguration mechanism.
Figure 3. Dimension schematic of the cellular space robot structure.
Figure 4. Schematic of the kinematic model of the module.
Figure 5. CSR interface coordinate system.
Figure 6. CSR module connection method.
Figure 7. Description of the CSR organ-level topology.
Figure 8. Description of the CSR robot-level topology.
Figure 9. Description of the CSR organ-level 3D assembly information.
Figure 10. Description of the CSR robot-level 3D assembly information.
Figure 11. Schematic diagram of the CSR organ pathway.
Figure 12. Schematic diagram of the CSR robot pathway.
Figure 13. Description of the 3D assembly information for organ-1.
Figure 14. Displacement of the end of the organ-1 pathway.
Figure 15. Schematic of the 3D assembly information for organ-2 and organ-3.
Figure 16. 3D assembly information for robot-1 (before organ migration).
Figure 17. 3D assembly information for robot-1 (after organ migration).
Figure 18. Displacement of the end of the robot pathway.
16 pages, 323 KiB  
Article
An Innovative Algorithm Based on Octahedron Sets via Multi-Criteria Decision Making
by Güzide Şenel
Symmetry 2024, 16(9), 1107; https://doi.org/10.3390/sym16091107 - 26 Aug 2024
Viewed by 630
Abstract
Octahedron sets, which extend beyond the previously defined fuzzy set and soft set concepts to address uncertainty, represent a hybrid set theory that incorporates three distinct systems: interval-valued fuzzy sets, intuitionistic fuzzy sets, and traditional fuzzy set components. This comprehensive set theory is designed to express all information provided by decision makers as interval-valued intuitionistic fuzzy decision matrices, addressing a broader range of demands than conventional fuzzy decision-making methods. Multi-criteria decision-making (MCDM) methods are essential tools for analyzing and evaluating alternatives across multiple dimensions, enabling informed decision making aligned with strategic objectives. In this study, we applied MCDM methods to octahedron sets for the first time, optimizing decision results by considering various constraints and preferences. By employing an MCDM algorithm, this study demonstrated how the integration of MCDM into octahedron sets can significantly enhance decision-making processes. The algorithm allowed for the systematic evaluation of alternatives, showcasing the practical utility and effectiveness of octahedron sets in real-world scenarios. This approach was validated through influential examples, underscoring the value of algorithms in leveraging the full potential of octahedron sets. Furthermore, the application of MCDM to octahedron sets revealed that this hybrid structure could handle a wider range of decision-making problems more effectively than traditional fuzzy set approaches. This study not only highlights the theoretical advancements brought by octahedron sets but also provides practical evidence of their application, proving their importance and usefulness in complex decision-making environments. Overall, the integration of octahedron sets and MCDM methods marks a significant step forward in decision science, offering a robust framework for addressing uncertainty and optimizing decision outcomes. This research paves the way for future studies to explore the full capabilities of octahedron sets, potentially transforming decision-making practices across various fields. Full article
(This article belongs to the Special Issue Recent Developments on Fuzzy Sets Extensions)
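As a rough illustration of scoring the interval-valued intuitionistic fuzzy evaluations that an MCDM step over such decision matrices involves, a common score function can be sketched as follows (the score form and the toy numbers are illustrative, not the paper's algorithm):

import numpy as np

def ivif_score(a, b, c, d):
    # Score of an interval-valued intuitionistic fuzzy value with
    # membership interval [a, b] and non-membership [c, d]; higher is better
    return 0.5 * (a + b - c - d)

def rank_alternatives(matrix, weights):
    # matrix[i][j] = IVIF evaluation (a, b, c, d) of alternative i under
    # criterion j; weights sum to 1. Returns alternative indices, best first.
    scores = np.array([[ivif_score(*cell) for cell in row] for row in matrix])
    overall = scores @ np.asarray(weights)
    return list(np.argsort(-overall))

# Two alternatives, two criteria (toy numbers):
M = [[(0.5, 0.6, 0.2, 0.3), (0.4, 0.5, 0.3, 0.4)],
     [(0.3, 0.4, 0.4, 0.5), (0.6, 0.7, 0.1, 0.2)]]
print(rank_alternatives(M, [0.6, 0.4]))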
20 pages, 16980 KiB  
Article
A Dempster–Shafer Enhanced Framework for Urban Road Planning Using a Model-Based Digital Twin and MCDM Techniques
by Zahra Maserrat, Ali Asghar Alesheikh, Ali Jafari, Neda Kaffash Charandabi and Javad Shahidinejad
ISPRS Int. J. Geo-Inf. 2024, 13(9), 302; https://doi.org/10.3390/ijgi13090302 - 25 Aug 2024
Viewed by 715
Abstract
Rapid urbanization in developing countries presents a critical challenge: the need for extensive and appropriate road expansion, which in turn contributes to traffic congestion and air pollution. Urban areas are economic engines, but their efficiency and livability rely on well-designed road networks. This study proposes a novel approach to urban road planning that leverages several innovative techniques. The cornerstone of the approach is a digital twin model of the urban environment, which facilitates the evaluation and comparison of road development proposals. To support informed decision-making, a multi-criteria decision-making (MCDM) framework is used, enabling planners to consider factors such as traffic flow, environmental impact, and economic considerations. Spatial data and 3D visualizations are also provided to enrich the analysis. Finally, the Dempster–Shafer theory (DST) provides a robust mathematical framework for addressing the uncertainties inherent in the weighting process. The proposed approach was applied to planning both new road constructions and existing road expansions. By combining these elements, the model offers a sustainable and knowledge-based approach to optimizing urban road planning. Integrating the weights obtained through two weighting methods, the Analytic Hierarchy Process (AHP) and the Bayesian best–worst method (B-BWM), yielded a very high weight for the "worn-out urban texture" criterion and a meager weight for "noise pollution". Finally, the cost path algorithm was used to evaluate the results from all three methods (AHP, B-BWM, and DST); the high degree of similarity among their results suggests a stable outcome for the proposed approach. Analysis of the study area revealed a significant challenge for road planning: 35% of the area was deemed unsuitable, with only a tiny portion (4%) suitable for road development based on the selected criteria. This highlights the need to explore alternative approaches or significantly adjust the current planning process. Full article
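Dempster's combination rule, the DST step used to fuse evidence such as the AHP- and B-BWM-derived weights, can be sketched as follows (the focal elements and masses below are toy values, not the paper's data):

from itertools import product

def dempster_combine(m1, m2):
    # Dempster's rule for two mass functions over frozenset focal elements:
    # conflict K is the total mass on empty intersections; remaining masses
    # are renormalized by 1 - K.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Toy fusion of two sources' beliefs about a criterion's importance:
m_ahp = {frozenset({"important"}): 0.7, frozenset({"important", "minor"}): 0.3}
m_bwm = {frozenset({"important"}): 0.6, frozenset({"important", "minor"}): 0.4}
print(dempster_combine(m_ahp, m_bwm))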
Figures:
Figure 1. Criteria maps for urban road planning: (a) worn-out texture; (b) land use; (c) traffic congestion; (d) air pollution; (e) noise distribution; (f) demolition cost; (g) forbidden zone.
Figure 2. Digital twin of buildings.
Figure 3. Overall workflow for urban road planning.
Figure 4. Study area.
Figure 5. Weights combined from the AHP, B-BWM, and the DST.
Figure 6. Overall suitability maps for urban road planning using (a) the AHP, (b) B-BWM, and (c) the DST.
Figure 7. Comparative analysis of alignment maps generated by the cost path algorithm for (a) the AHP, (b) B-BWM, and (c) the DST, and (d) discrepancies in alignments across the three models.
Figure 8. Comparison of cost path outputs generated by the AHP, B-BWM, and DST methods.
Figure 9. Buildings marked for relocation (orange) and demolition (red) due to the proposed road alignment.
Figure 10. Demolition costs of buildings for road planning, in Iranian rial.
22 pages, 8850 KiB  
Article
Analysis of Fractal Properties of Atmospheric Turbulence and the Practical Applications
by Zihan Liu, Hongsheng Zhang, Zuntao Fu, Xuhui Cai and Yu Song
Fractal Fract. 2024, 8(8), 483; https://doi.org/10.3390/fractalfract8080483 - 19 Aug 2024
Viewed by 469
Abstract
Atmospheric turbulence, recognized as a quintessential space–time chaotic system, can be characterized by its fractal properties. The characteristics of the time series of multiple orders of fractal dimensions, together with their relationships with stability parameters, are examined using the data from an observational station in Horqin Sandy Land to explore how the diurnal variation, synoptic process, and stratification conditions can affect the fractal characteristics. The findings reveal that different stratification conditions can disrupt the quasi-three-dimensional state of atmospheric turbulence in different manners within different scales of motion. Two aspects of practical applications of fractal dimensions are explored. Firstly, fractal properties can be employed to refine similarity relationships, thereby offering prospects for revealing more information and expanding the scope of application of similarity theories. Secondly, utilizing different orders of fractal dimensions, a systematic algorithm is developed. This algorithm distinguishes and eliminates non-turbulent motions from observational data, which are shown to exhibit slow-changing features and result in a universal overestimation of turbulent fluxes. This overestimation correlates positively with the boundary frequency between turbulent and non-turbulent motions. The evaluation of these two aspects of applications confirms that fractal properties hold promise for practical studies on atmospheric turbulence. Full article
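The multiple orders of fractal dimension (D0, D1, D2) can be estimated by box counting of the Rényi partition sums; a minimal sketch (the embedding, box sizes, and fitting range are illustrative assumptions):

import numpy as np

def generalized_dimension(points, q, epsilons):
    # Order-q Renyi dimension D_q by box counting. points: (N, d) samples,
    # e.g., a delay embedding of a velocity series. For q = 1 the limit
    # form sum(p log p) is used. Returns the slope of the partition-sum
    # curve against log(epsilon).
    logs_eps, logs_sum = [], []
    for eps in epsilons:
        # assign each point to a box and estimate box probabilities
        idx = np.floor(points / eps).astype(np.int64)
        _, counts = np.unique(idx, axis=0, return_counts=True)
        p = counts / counts.sum()
        if q == 1:
            logs_sum.append(np.sum(p * np.log(p)))
        else:
            logs_sum.append(np.log(np.sum(p ** q)) / (q - 1))
        logs_eps.append(np.log(eps))
    return np.polyfit(logs_eps, logs_sum, 1)[0]

# Capacity, information, and correlation dimensions from the same cloud:
# D0, D1, D2 = (generalized_dimension(emb, q, epss) for q in (0, 1, 2))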
Figures:
Figure 1. Location and layout of the instruments (left panel from Google Maps); blue circles denote the two observation towers, arrows mark sensor orientations, and the sector shows the direction range of accepted data.
Figure 2. Relationship between sample size and the fractal dimensions of different orders for 20 random cases. Black dashed lines mark the stages of the increased dimensions, the red dashed line denotes the sample size chosen in this study, and the gray area denotes the standard deviation.
Figure 3. Flow chart of the Hilbert–Huang transform and of reconstructing turbulence data with the help of fractal dimensions.
Figure 4. Diurnal pattern and time series of fractal dimensions in July (left panels) and December 2022 (right panels). Error bars in the top panels denote 95% confidence intervals, and vertical dashed lines denote sunrise and sunset. Blue, red, green, and purple points/lines in the bottom panels denote data from Tower 2 (height 2.2 m) and Tower 1 at heights of 4 m, 8 m, and 16 m, respectively.
Figure 5. Similarity relationship based on fractal dimensions: left panels for unstable stratification, right for stable. The top four panels show the relationship between D0, D2, and the stability parameter in July 2022 (colors as in Figure 4); the third row shows the ratio D2/D0, with solid lines and points for July and dashed lines and hollow points for December.
Figure 6. Comparison of the similarity relationships of the normalized u-wind standard deviation with the fractal dimension-refined relationships in July (blue) and December (red) 2022; semi-transparent points denote the original data, and lines are fitted curves.
Figure 7. Comparison of the similarity relationships of the normalized cross-correlation coefficients of u- and w-wind (top panels) with the fractal dimension-refined relationships (bottom panels) in July and December 2022 (styles as in Figure 6).
Figure 8. As Figure 7, for the normalized cross-correlation coefficients of w-wind and potential temperature.
Figure 9. Relationship between the average fractal dimensions and the frequency of superimposed IMFs in July 2022: left panels for multivariate EMD, remaining panels for univariate EMD; top panels for stable and bottom for unstable stratification. Blue, red, yellow, and purple lines denote D0, D1, and D2; error bars show 95% confidence intervals.
Figure 10. Fractal dimensions versus the frequency of superimposed IMFs for individual cases in July 2022 (colors as in Figure 9): top panels for a stable case (13 July 20:00-20:30 LT) and bottom for an unstable case (10 July 09:00-09:30 LT); LT = UTC + 8.
Figure 11. Comparison of reconstructed and original turbulent u-wind time series and their energy spectra: left panels for the unstable case (10 July 09:00-09:30 LT), right for the stable case (13 July 20:00-20:30 LT).
Figure 12. Boundary frequency versus sample size (left) and the correction of turbulent momentum and sensible heat fluxes with respect to boundary frequency (right); black dashed lines mark the division of stages, and the gray area and error bars denote 95% confidence intervals.
Figure 13. Comparison of reconstructed and unprocessed fluxes in July and December 2022; red points show fluxes estimated with a cut-off filter at the boundary frequency, blue points the original fluxes.
32 pages, 28406 KiB  
Article
Infrared and Harsh Light Visible Image Fusion Using an Environmental Light Perception Network
by Aiyun Yan, Shang Gao, Zhenlin Lu, Shuowei Jin and Jingrong Chen
Entropy 2024, 26(8), 696; https://doi.org/10.3390/e26080696 - 16 Aug 2024
Viewed by 499
Abstract
The complementary combination of the emphasized target objects in infrared images and the rich texture details in visible images can effectively enhance the information entropy of fused images, thereby providing substantial assistance for downstream high-level vision tasks such as nighttime intelligent driving. However, mainstream fusion algorithms have not specifically addressed the contradiction between the low information entropy and high pixel intensity of visible images in harsh-light nighttime road environments. As a result, fusion algorithms that perform well under normal conditions can, under harsh light interference, only produce fused images of low information entropy whose information distribution resembles that of the visible images. In response to these problems, we designed an image fusion network resilient to harsh light interference, incorporating entropy and information-theoretic principles to enhance robustness and information retention. Specifically, an edge feature extraction module was designed to extract key edge features of salient targets and thus optimize the information entropy of the fusion. Additionally, a harsh light environment aware (HLEA) module was proposed to avoid the degradation of fusion quality caused by the contradiction between low information entropy and high pixel intensity, based on the information distribution characteristics of harsh-light visible images. Finally, an edge-guided hierarchical fusion (EGHF) module was designed to achieve robust feature fusion, minimizing irrelevant noise entropy and maximizing useful information entropy. Extensive experiments demonstrate that, compared to other advanced algorithms, the fusion results of the proposed method contain more useful information and offer significant advantages in high-level vision tasks under harsh nighttime lighting conditions. Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing II)
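The abstract's central quantity, the information entropy of a fused image, has a standard definition in the fusion literature: the Shannon entropy of the gray-level histogram. The sketch below is our own minimal illustration of that metric, not the paper's code; the function name is ours.

```python
import numpy as np

def image_entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy (in bits) of a grayscale image's gray-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    p = hist / hist.sum()              # empirical gray-level distribution
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

# A saturated harsh-light patch carries almost no information, while a
# textured patch approaches the 8-bit maximum of 8 bits:
rng = np.random.default_rng(0)
flat = np.full((64, 64), 250, dtype=np.uint8)          # blown-out highlight
textured = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print(image_entropy(flat), image_entropy(textured))    # ~0.0 vs ~8.0
```

On this metric, the harsh-light failure mode the authors describe is directly visible: high pixel intensity paired with low entropy.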
Show Figures

Figure 1: The architecture of the harsh light environment-aware visible and infrared image fusion network. The network backbone is composed of four components: multi-modal feature extraction, edge feature extraction, harsh light environment awareness, and image decoding and reconstruction.
Figure 2: The architecture of the high-level semantic feature interactive fusion (HSIF) module. The channel attention mechanism reallocates channel weights for edge features E4 and E5 and then constructs an interactive spatial attention mechanism to obtain more comprehensive global semantic information.
Figure 3: The architecture of the edge feature enhancement block (EFEB). Detail features are fused with deep features and then enhanced with the residual Sobel operator to strengthen the edges of salient objects.
Figure 4: The architecture of the harsh light environment aware (HLEA) module. The module predicts the ambient light intensity level for the input infrared and visible images through three parts: an illuminance decomposition sub-network, an illuminance classification sub-network, and image quality assessment.
Figure 5: The architecture of the edge-guided hierarchical fusion (EGHF) module. In the edge guidance part, the infrared and visible features are fused according to the perceptual weights, and the attention weights are then redistributed under the guidance of edge features. In the feature enhancement part, the infrared and visible features are added to the edge-guided features to achieve feature enhancement.
Figure 6: Qualitative results for the TNO dataset images. The red boxes show texture details in the original images or magnified details of important targets.
Figure 7: VIF metric results for fused images in normal and harsh light conditions. The first and second rows correspond to the fusion results in harsh light and normal environments, respectively.
Figure 8: Qualitative results for the MFNet dataset images. The red boxes show texture details in the original images or magnified details of important targets.
Figure 9: Qualitative results for the M3FD dataset images. The red boxes show texture details in the original images or magnified details of important targets.
Figure 10: Qualitative results of target detection in harsh light and high-exposure scenes for infrared images, visible images, and fusion images generated by various algorithms. Each pair of rows represents the detection results for one scene.
Figure 11: Qualitative results of target detection in normal light scenes for infrared images, visible images, and fusion images generated by various algorithms. Each pair of rows represents the detection results for one scene.
Figure 12: Results of edge feature detection using different algorithms. The red boxes show texture details in the original images or magnified details of important targets.
Figure 13: The intermediate and final results of the HLEA module under three typical scenarios in normal and harsh light environments. The "Class Results" column shows the illumination classification results, "Intermediate Weight" presents the illumination decomposition and image quality estimation results, and "Perceptual Weight" displays the final outputs.
Figure 14: Ablation experiments for the corresponding modules. The red boxes show texture details in the original images or magnified details of important targets.
19 pages, 4899 KiB  
Article
The Many Shades of the Vegetation–Climate Causality: A Multimodel Causal Appreciation
by Yuhao Shao, Daniel Fiifi Tawia Hagan, Shijie Li, Feihong Zhou, Xiao Zou and Pedro Cabral
Forests 2024, 15(8), 1430; https://doi.org/10.3390/f15081430 - 14 Aug 2024
Viewed by 506
Abstract
The causal relationship between vegetation and temperature is a driving factor for global warming in the climate system. However, causal relationships typically have complex facets, particularly in natural systems, necessitating the ongoing development of robust approaches capable of addressing the challenges inherent in causality analysis. Different causality approaches offer distinct perspectives on a causal structure, even when experiments are meticulously designed with a specific target. Here, we use the complex vegetation–climate interaction to demonstrate some of the many facets of causality analysis by applying three causality frameworks: (i) the kernel Granger causality (KGC), a nonlinear extension of Granger causality (GC), to understand the nonlinearity in the vegetation–climate causal relationship; (ii) the Peter and Clark momentary conditional independence (PCMCI), which combines the Peter and Clark (PC) algorithm with the momentary conditional independence (MCI) test, to distinguish the feedback and coupling signs in the vegetation–climate interaction; and (iii) the Liang–Kleeman information flow (L-K IF), a rigorously formulated causality formalism based on the Liang–Kleeman information flow theory, to reveal the causal influence of vegetation on the evolution of temperature variability. The results capture a fuller picture of the causal influence of the leaf area index (LAI) on air temperature (T) during 1981–2018, revealing the characteristics of and differences between distinct climatic tipping-point regions, particularly in terms of nonlinearity, feedback signals, and variability sources. This study demonstrates that a more holistic causal structure of complex problems such as the vegetation–climate interaction benefits from the combined use of multiple models that shed light on different aspects of that structure, revealing novel insights that are missed when relying on any single approach. This prompts the need to move toward multimodel causality analysis, which could reduce biases and limitations in causal interpretations. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Forestry)
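Of the three frameworks, the L-K IF admits a compact closed form in the bivariate linear case: Liang's (2014) maximum-likelihood estimator T_{2→1} = (C11·C12·C2,d1 − C12²·C1,d1) / (C11²·C22 − C11·C12²), where Cij are sample covariances and Ci,d1 is the covariance of series i with the forward-difference of series 1. The numpy sketch below is our own illustrative implementation of that estimator under a unit time step, not the authors' code.

```python
import numpy as np

def liang_info_flow(x1: np.ndarray, x2: np.ndarray, dt: float = 1.0) -> float:
    """Liang (2014) bivariate ML estimate of the information flow rate
    from x2 to x1 (nats per unit time). Assumes evenly sampled series."""
    dx1 = (x1[1:] - x1[:-1]) / dt                   # forward-difference dX1/dt
    C = np.cov(np.vstack([x1[:-1], x2[:-1], dx1]))  # joint sample covariances
    c11, c22, c12 = C[0, 0], C[1, 1], C[0, 1]
    c1d1, c2d1 = C[0, 2], C[1, 2]
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / (c11**2 * c22 - c11 * c12**2)

# Toy check: x2 drives x1 but not vice versa, so |T_{2->1}| >> |T_{1->2}|.
rng = np.random.default_rng(1)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    x2[t + 1] = 0.7 * x2[t] + 0.3 * rng.standard_normal()
    x1[t + 1] = 0.5 * x1[t] + 0.4 * x2[t] + 0.3 * rng.standard_normal()
print(liang_info_flow(x1, x2))   # driven direction: clearly nonzero
print(liang_info_flow(x2, x1))   # reverse direction: near zero
```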
Show Figures

Figure 1: The many shades of the vegetation–climate causality: the vegetation–temperature interaction mechanism is revealed from different causal aspects, where LAI is the leaf area index, T is the temperature, and KGC (kernel Granger causality), PCMCI (Peter and Clark momentary conditional independence), and L-K IF (Liang–Kleeman information flow) are three different causal analysis methods.
Figure 2: The research outline of this study, where GLASS LAI is the leaf area index from the Global Land Surface Satellite (GLASS) dataset, CRU TS is the temperature data from the Climatic Research Unit (CRU) dataset, ET is the evapotranspiration data from the Global Land Evaporation Amsterdam Model (GLEAM) dataset, and KGC, PCMCI, and L-K IF are the three causality analysis methods.
Figure 3: The kernel Granger causality (KGC) results from leaf area index (LAI) to temperature (T), indicated as LAI→T, are shown in (a–c) for P equal to (a) 1, (b) 3, and (c) 5. Spatial results for P equal to 4 and 5 are shown in Supplementary Figure S1. Statistical boxplots of the KGC results for LAI→T across degrees of nonlinearity P from 1 to 5 are shown in (d); the star in each boxplot marks the median, the dashed line the mean, and outliers are not displayed. Results are computed at 5% statistical significance.
Figure 4: The global distribution of the Pearson correlation between LAI and T is shown in (a), and the influence of LAI on T with a one-month time lag, conditioned on the influence of ET, is shown in (b). Both are computed at a 1% statistical significance level; warm colors (red-orange) indicate positive values and cool colors (blue) negative values.
Figure 5: Time series statistics and causal analysis results for selected typical regions. The first column shows the scatter plot between LAI (x-axis) and T (y-axis) for each region, and the second column shows contour plots of the kernel densities of the scatter plots, for (a–c) the boreal forest (60–65° N, 90–95° E), (d–f) the East Asian monsoon region (26–31° N, 110–115° E), (g–i) the Sahel (5–10° N, 30–35° E), and (j–l) the Amazon rainforest (0–10° S, 55–65° W). The third column (c,f,i,l) shows the causal structure of LAI and T in these regions: unidirectional curved arrows represent lag-1 causal relationships computed with PCMCI, and bidirectional straight arrows represent zero-lag results computed with PCMCI Plus. Blue arrows denote negative and red arrows positive causality.
Figure 6: The global information flow from LAI to T. Red colors indicate positive IF rates and blue colors negative IF rates. All results are computed at 5% statistical significance; white regions are statistically insignificant or masked out due to the absence of vegetation.
29 pages, 3895 KiB  
Article
Wind Speed Forecasting Based on Phase Space Reconstruction and a Novel Optimization Algorithm
by Zhaoshuang He, Yanhua Chen and Yale Zang
Sustainability 2024, 16(16), 6945; https://doi.org/10.3390/su16166945 - 13 Aug 2024
Viewed by 564
Abstract
The wind power generation capacity is increasing rapidly every year, and the management of wind power must develop correspondingly. Accurate wind speed forecasting is essential for a wind power management system, yet forecasting wind speed precisely is difficult because wind speed time series are usually nonlinear and fluctuating. This paper proposes a novel combined wind speed forecasting model based on PSR (phase space reconstruction), NNCT (no negative constraint theory), and GPSOGA, a novel hybrid optimization algorithm that combines a global elite opposition-based learning strategy, particle swarm optimization, and the genetic algorithm. SSA (singular spectrum analysis) is first applied to decompose the original wind speed time series into IMFs (intrinsic mode functions). PSR is then employed to reconstruct the intrinsic mode functions into input and output vectors of the forecasting model. A combined forecasting model is proposed comprising a CBP (cascade back propagation) network, an RNN (recurrent neural network), a GRU (gated recurrent unit), and a CNNRNN (convolutional neural network combined with a recurrent neural network). The NNCT strategy combines the outputs of the four predictors, and the proposed optimization algorithm finds the optimal combination parameters. To validate the performance of the proposed algorithm, we compare its forecasting results with those of different models on four datasets. The experimental results demonstrate that the proposed algorithm outperforms the comparison models across the different indicators, and the DM (Diebold–Mariano) test, Akaike's information criterion, and the Nash–Sutcliffe efficiency coefficient confirm this. Full article
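Phase space reconstruction here means Takens-style time-delay embedding: each decomposed series s(t) is mapped to state vectors (s(t), s(t+τ), ..., s(t+(m−1)τ)), with the next sample serving as the forecast target. The sketch below is our own illustration of how such input/output pairs are built, with hypothetical parameter choices; it is not the paper's code.

```python
import numpy as np

def phase_space_reconstruct(s: np.ndarray, m: int = 4, tau: int = 2):
    """Time-delay embedding of a scalar series s: returns (X, y), where
    row t of X is (s[t], s[t+tau], ..., s[t+(m-1)*tau]) and
    y[t] = s[t + (m-1)*tau + 1] is the one-step-ahead target."""
    span = (m - 1) * tau
    n = len(s) - span - 1
    X = np.column_stack([s[i:i + n] for i in range(0, span + 1, tau)])
    y = s[span + 1: span + 1 + n]
    return X, y

# Toy "wind speed" series: each row of X would feed a predictor
# (CBP, RNN, GRU, or CNNRNN in the paper's setup); y holds the
# value to forecast.
rng = np.random.default_rng(0)
s = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
X, y = phase_space_reconstruct(s, m=4, tau=2)
print(X.shape, y.shape)   # (493, 4) (493,)
```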
Show Figures

Figure 1: The flowchart of CNNRNN.
Figure 2: The flowchart of the proposed SSA-GPSOGA-NNCT algorithm.
Figure 3: Statistical information for the four datasets used in this paper.
Figure 4: Forecasting results of the proposed algorithm and four single forecasting models on dataset 1.
Figure 5: Forecasting errors of all models for three-step forecasting, and the performance indicators for one-step, three-step, and five-step forecasting on dataset 1.
Figure 6: Forecasting results of all models for one-step forecasting, and the performance indicators for one-step, three-step, and five-step forecasting on dataset 2.
Figure 7: Results of different forecasting models with different decomposition methods on dataset 1.
24 pages, 3934 KiB  
Article
The Use of Computational Algorithms to Display the Aircraft Instruments That Work with Gyroscopic and Magnetic Physics (Theory for Programming an Elementary Flight Simulator)
by Adan Ramirez-Lopez
Appl. Sci. 2024, 14(16), 7099; https://doi.org/10.3390/app14167099 - 13 Aug 2024
Viewed by 447
Abstract
The present study develops computational algorithms to represent aircraft instruments such as the attitude indicator and the turn-and-slip indicator; the algorithms also represent a magnetic compass and other instruments that function according to other physical principles. These instruments work on gyroscopic and magnetic principles and assist the pilot in navigation. They are considered the basic instruments required to provide location-related and positional information about the actual aircraft attitude. The algorithms developed in this study are capable of working in concordance with other instruments and the physical conditions established. The programming language used was C++, and the algorithms were compiled in independent files and subroutines for computational efficiency, eliminating unnecessary code. The display options were successfully tested. Additionally, an analysis evaluating the error and the approximation of the flight simulation as a function of the step time (Δt) is also described. Full article
(This article belongs to the Section Aerospace Science and Engineering)
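The step-time analysis the abstract mentions can be illustrated with a toy dead-reckoning integrator: under Euler updates of heading and position, the accumulated displacement error of a constant-rate turn shrinks as Δt is reduced. The sketch below is our own Python illustration of this convergence idea (the paper's simulator is written in C++), and all parameter values are hypothetical.

```python
import numpy as np

def integrate_turn(v: float, omega: float, t_end: float, dt: float):
    """Euler dead reckoning of a constant-speed, constant-rate turn:
    heading psi and position (x, y) are advanced step by step."""
    x = y = psi = 0.0
    for _ in range(round(t_end / dt)):
        x += v * np.cos(psi) * dt
        y += v * np.sin(psi) * dt
        psi += omega * dt
    return x, y

# Closed-form end point of the circular arc, for comparison:
v, omega, T = 100.0, np.radians(3.0), 60.0      # 100 m/s, 3 deg/s, 60 s
exact = (v / omega * np.sin(omega * T), v / omega * (1.0 - np.cos(omega * T)))
for dt in (1.0, 0.1, 0.01):
    sim = integrate_turn(v, omega, T, dt)
    err = np.hypot(sim[0] - exact[0], sim[1] - exact[1])
    print(f"dt = {dt:5.2f} s, end-point error = {err:8.2f} m")
```

The error falls roughly linearly with Δt, which is the expected first-order behavior of the Euler scheme.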
Show Figures

Figure 1: Magnetic compass representation: (a) segmentation of the dial and movement areas; (b) the instrument simulated computationally.
Figure 2: Concordance between the simulated terrain and the assumptions for the magnetic compass used to represent heading angles.
Figure 3: Computational representation of the spherical coordinates used to divide the attitude indicator into segments representing aircraft movements: (a) rolling; (b) pitching.
Figure 4: Movements of the attitude indicator (a) as a function of the aircraft rolling angle and (b) as a function of the aircraft pitching angle.
Figure 5: Calculations for representing aircraft movements in the attitude indicator: (a) rolling; (b) pitching; (c) both movements combined simultaneously (pitching + rolling).
Figure 6: Computational representation of the attitude indicator.
Figure 7: Turn-and-slip indicator: (a) segmentation of the dial and areas for movement of the indicators; (b) the instrument simulated computationally.
Figure 8: RPM indicator: (a) segmentation of the dial and areas for movement of the indicators; (b) the instrument simulated computationally.
Figure 9: Flowcharts of the procedures for the simulator developed: (a) the information required to execute the simulation; (b) the computer animation of the flight instruments; (c) obtaining the minimum and maximum values.
Figure 10: Flight instruments displayed on the computer screen with the computational algorithms developed: (a) the conditions at the beginning of the simulation; (b,c) different conditions for the simulated flights (1).
Figure 11: Simulation of the aircraft speed calculated using the algorithms developed: (a) for flight (1); (b) for flight (2); (c) for flight (3).
Figure 12: Simulation of the aircraft displacement calculated using the algorithms developed: (a) for flight (1); (b) for flight (2); (c) for flight (3).
Figure 13: Computer simulation of the aircraft flight showing the 2D and 3D aircraft path according to the data in Table 1 for each flight: (a) flight (1); (b) flight (2); (c) flight (3).
Figure 14: Simulation of the aircraft displacement calculated using the algorithms developed during maneuvers (1) and (2), showing the reduction in step time (Δt) as a function of the steps used for the calculation.
Figure 15: Calculation of the approximation based on the total aircraft displacement along the (x, y, z) axes for flight (1).
Figure 16: Calculation of the approximation based on the total aircraft displacement along the (x, y, z) axes for flight (2).
Figure 17: Calculation of the approximation based on the total aircraft displacement along the (x, y, z) axes for flight (3).