
Entropy, Volume 26, Issue 11 (November 2024) – 94 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
12 pages, 514 KiB  
Article
New Variable-Weight Optical Orthogonal Codes with Weights 3 to 5
by Si-Yeon Pak, Hyo-Won Kim, DaeHan Ahn and Jin-Ho Chung
Entropy 2024, 26(11), 982; https://doi.org/10.3390/e26110982 - 15 Nov 2024
Abstract
In optical networks, designing optical orthogonal codes (OOCs) with appropriate parameters is essential for enhancing overall system performance. OOCs are divided into two categories, constant-weight OOCs (CW-OOCs) and variable-weight OOCs (VW-OOCs), based on the number of distinct Hamming weights present in their codewords. This paper introduces a method for constructing VW-OOCs by using the structure of an integer ring and the Chinese Remainder Theorem. In particular, we present some specific VW-OOCs with weights of 3, 4, or 5. The results demonstrate that certain optimal VW-OOCs can be obtained with parameters that are not covered in the existing literature. Full article
(This article belongs to the Special Issue New Advances in Error-Correcting Codes)
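The abstract does not spell out the construction, but the defining correlation constraints that any (variable-weight) OOC must satisfy can be sketched directly. The code below is an illustrative check, not the paper's CRT-based construction, and the two length-13 codewords (of weights 3 and 4) are hypothetical examples:

```python
from itertools import combinations

# Illustrative check of the OOC correlation constraints (not the paper's
# CRT-based construction). A codeword is the set of positions of its 1s in a
# length-n binary sequence; la bounds the cyclic autocorrelation at nonzero
# shifts, lc bounds the cyclic cross-correlation at every shift.
def max_autocorrelation(word, n):
    return max(len(word & {(t + tau) % n for t in word}) for tau in range(1, n))

def max_crosscorrelation(w1, w2, n):
    return max(len(w1 & {(t + tau) % n for t in w2}) for tau in range(n))

def is_ooc(codewords, n, la, lc):
    """Do all codewords satisfy the (n, W, la, lc) correlation constraints?"""
    if any(max_autocorrelation(w, n) > la for w in codewords):
        return False
    return all(max_crosscorrelation(a, b, n) <= lc
               for a, b in combinations(codewords, 2))

# Hypothetical length-13 codewords of weights 3 and 4
code = [{0, 1, 4}, {0, 1, 3, 9}]
print(is_ooc(code, 13, la=1, lc=2))  # → True
print(is_ooc(code, 13, la=1, lc=1))  # → False
```

The second check fails because the cross-correlation of the two words reaches 2; CRT-based constructions like the paper's are designed to keep such collision counts within the target bounds by arranging codeword supports across residue classes.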
50 pages, 729 KiB  
Article
Non-Equilibrium Quantum Brain Dynamics: Water Coupled with Phonons and Photons
by Akihiro Nishiyama, Shigenori Tanaka and Jack Adam Tuszynski
Entropy 2024, 26(11), 981; https://doi.org/10.3390/e26110981 - 15 Nov 2024
Abstract
We investigate the Quantum Electrodynamics (QED) of water coupled with sound and light, namely the Quantum Brain Dynamics (QBD) of water, phonons, and photons. In this paper, we introduce phonon degrees of freedom as additional quanta in the framework of QBD. We begin with the Lagrangian density of QED with non-relativistic charged bosons, photons, and phonons, and derive time-evolution equations for coherent fields and Kadanoff–Baym (KB) equations for incoherent particles. We then show an acoustic super-radiance solution in our model. We also introduce a kinetic entropy current in the KB equations at first order in the gradient expansion and show the H-theorem for the self-energy in the Hartree–Fock approximation. Finally, we derive the conserved number density of charged bosons and the conserved energy density in a spatially homogeneous system. Full article
(This article belongs to the Section Quantum Information)
15 pages, 27235 KiB  
Article
Dynamics of Aggregation in Systems of Self-Propelled Rods
by Richard J. G. Löffler and Jerzy Gorecki
Entropy 2024, 26(11), 980; https://doi.org/10.3390/e26110980 - 15 Nov 2024
Abstract
We highlight camphene–camphor–polypropylene plastic as a useful material for self-propelled objects that show aggregation while floating on a water surface. We consider self-propelled rods as an example of the aggregation of objects characterized by non-trivial individual shapes with low-symmetry interactions between them. The motion of rods made of the camphene–camphor–polypropylene plastic is supported by the dissipation of surface-active molecules. The physical processes leading to aggregation and a mathematical model of the process are discussed. We analyze experimental data on aggregate formation dynamics and relate them to the system's properties. We speculate that the aggregate structure can be represented as a string of symbols, which opens up the potential applicability of the phenomenon for information processing if objects floating on a water surface are regarded as reservoir computers. Full article
(This article belongs to the Special Issue Matter-Aggregating Systems at a Classical vs. Quantum Interface)
Show Figures

Figure 1. Aggregation in a system of polylactic acid rods driven by capillary forces. The rods are l = 10 mm long and have diameter d = 1.5 mm. They were placed on the water surface of a Petri dish with 10 cm diameter. Subfigures (a–d) correspond to times t = 0 s (the initial distribution of rods on the water), t = 30 s, t = 90 s, and t = 180 s, respectively.

Figure 2. An example of aggregation in a system of 20 rods inside a 12 cm Petri dish. Each rod is l = 10 mm long and has d = 2 mm diameter. Subfigures show the positions of rods at different times: (a) shortly after the rods were placed on the water surface (t = 0); (b–e) correspond to times t_b = 6 s, t_c = 26 s, t_d = 73 s, and t_e = 90 s. The surprising cave-art-style figure of a hunter chasing an animal (f), here shown for t_f = 25 min, was generated by self-aggregated rods, not by a human hand. Movies illustrating the most important fragments of the evolution are included in the Supplementary Information as 140-first-30s.mp4 (the first 30 s of evolution after all rods are placed on the water surface), 140-second-1m30s.mp4 (the interval [30 s, 2 min]), and 140-start12m-end14-30.mp4 (the interval [12 min, 14 min 30 s]). Scale bars are 10 mm.

Figure 3. Qualitative analysis of the number of fragments as a function of time in systems of 20 self-propelled rods (l = 10 mm, d = 1.5 mm) aggregating inside a 12 cm Petri dish: (a) the total number of fragments; (b) the number of birods; (c) the number of monorods; (d) the number of trirods. The results were obtained from the time evolutions leading to the metastable structures illustrated in Figure 4b,c,e (red, blue, and black curves, respectively).

Figure 4. The metastable aggregates observed at the end of experiments. Subfigures (a–c,e) correspond to times t_a = 63 min, t_b = 60 min, t_c = 65 min, and t_e = 56 min. The structure shown in (d) (t_d = 12 min) aggregated after dispersing an existing structure into single rods, i.e., 12 min after a concluded 60 min experiment. The structure in (f) (t_f = 16 min) was formed in a smaller Petri dish with diameter d = 10 cm. The freeze frames were cropped to show the full extent of the aggregates; all scale bars are 10 mm.

Figure 5. Simple fragments of an aggregate that can be used as an alphabet to code a large structure of rods. End-to-end connections between two rods: (a) I junction, (b) V junction, (c) Γ junction (there is also a rotated version ℸ). End-to-center connections between two rods: (d) T junction; (e,f) complex branching following the T junction, corresponding to T{{1},{},{V}} and T{{1},{1},{}}; (h) λ connection. (g) End-to-end connection between three rods: the Y junction. (i) A complex connection of rods needing a separate symbol.

Figure 6. Aggregation of PacMan [54] characters made of camphene–camphor–polypropylene plastic floating on a water surface inside a square-shaped 15 × 15 cm area. Subfigures (a–d) correspond to times 8, 17, 23, and 26 s. A movie illustrating the time evolution can be watched on YouTube [56].
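The proposed string-of-symbols representation can be illustrated with a toy serializer over the junction alphabet of Figure 5 (I, V, Γ, T, Y, ...). The nested-brace notation such as T{{1},{},{V}} follows the Figure 5 caption, while the tuple encoding and the ";" separator for multi-element branches below are assumptions made purely for illustration:

```python
# Hypothetical serialization of an aggregate as a string over the junction
# alphabet of Figure 5. A junction is a symbol followed by the list of
# sub-aggregates attached to each of its free ends; an empty branch list means
# a bare symbol. The ";" separator inside a branch is our own convention.
def serialize(junction):
    symbol, *branches = junction
    if not branches:
        return symbol
    parts = ",".join("{" + ";".join(serialize(b) for b in branch) + "}"
                     for branch in branches)
    return symbol + "{" + parts + "}"

# T junction with one extra rod on the first arm, nothing on the second,
# and a V junction on the third -- written T{{1},{},{V}} in the Figure 5 caption
aggregate = ("T", [("1",)], [], [("V",)])
print(serialize(aggregate))  # → T{{1},{},{V}}
```

Such a flattening would let distinct metastable aggregates be compared as strings, in the spirit of the reservoir-computing speculation in the abstract.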
23 pages, 324 KiB  
Article
Bowen’s Formula for a Dynamical Solenoid
by Andrzej Biś, Wojciech Kozłowski and Agnieszka Marczuk
Entropy 2024, 26(11), 979; https://doi.org/10.3390/e26110979 - 15 Nov 2024
Abstract
More than 50 years ago, Rufus Bowen noticed a natural relation between the ergodic theory and the dimension theory of dynamical systems. He proved a formula, known today as Bowen's formula, that relates the Hausdorff dimension of a conformal repeller to the zero of a pressure function defined by a single conformal map. In this paper, we extend Bowen's result to a sequence of conformal maps. We present a dynamical solenoid, i.e., a generalized dynamical system obtained by backward compositions of a sequence of continuous surjections (f_n : X → X), n ∈ ℕ, defined on a compact metric space (X, d). Under mild assumptions, we provide a self-contained proof that Bowen's formula holds for dynamical conformal solenoids. As a corollary, we obtain that Bowen's formula holds for a conformal surjection f : X → X of a compact metric space X. Full article
(This article belongs to the Section Statistical Physics)
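For intuition, Bowen's formula can be tested in the simplest classical setting of a self-similar conformal repeller with N full branches of constant slope L, where the pressure function is P(s) = log N − s log L. This is a minimal sketch of the single-map case the paper generalizes, not of the solenoid construction itself:

```python
import math

# For a conformal repeller generated by N full branches of constant slope
# L > 1, the topological pressure of the potential -s*log|f'| is
# P(s) = log N - s*log L, and Bowen's formula identifies the Hausdorff
# dimension of the repeller with the unique zero of P.
def pressure(s, n_branches, slope):
    return math.log(n_branches) - s * math.log(slope)

def bowen_dimension(n_branches, slope, lo=0.0, hi=1.0, tol=1e-12):
    """Bisection for the zero of the strictly decreasing pressure function."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pressure(mid, n_branches, slope) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Middle-thirds Cantor set: 2 branches of slope 3, dimension log 2 / log 3
print(round(bowen_dimension(2, 3), 5))  # → 0.63093
```

The paper's contribution is, roughly, that this zero-of-pressure characterization survives when the single map is replaced by a backward-composed sequence of conformal surjections.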
13 pages, 284 KiB  
Article
Quantum Control Design by Lyapunov Trajectory Tracking and Optimal Control
by Hongli Yang, Guohui Yu and Ivan Ganchev Ivanov
Entropy 2024, 26(11), 978; https://doi.org/10.3390/e26110978 - 15 Nov 2024
Abstract
In this paper, we investigate a Lyapunov trajectory tracking design method that incorporates a Schrödinger equation with a dipole subterm and polarizability. Our findings suggest that the proposed control law can overcome the limitations of certain existing control laws that do not converge. By integrating a quadratic performance index, we introduce an optimal control law, which we subsequently analyze for stability and optimality. These findings are validated through numerical illustrations involving 3D and 5D systems and a spin-1/2 particle system. Full article
(This article belongs to the Special Issue Information Theory in Control Systems, 2nd Edition)
Show Figures

Figure 1. The evolution of the control law u_i of (7) under system (8).
Figure 2. The evolution of the Lyapunov function V with time under system (8) and control law (7).
Figure 3. The evolution of the control law u_i of (7) under system (9).
Figure 4. The evolution of the Lyapunov function V with time under system (9) and control law (7).
Figure 5. The evolution of the control law u_i of (7) under system (10).
Figure 6. The evolution of the Lyapunov function V with time under system (10) and control law (7).
Figure 7. The evolution of the control law u_i of (7) under system (11).
Figure 8. The evolution of the Lyapunov function V with time under system (11) and control law (7).
Figure 9. The evolution of the optimal control u_i* under r_1 = 2 and r_2 = 1.
Figure 10. The evolution of the optimal control u_i* under r_1 = 2 and r_2 = 2.
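A minimal sketch of Lyapunov trajectory tracking for a spin-1/2 system, assuming the standard feedback law u = K·Im(⟨ψ_d|ψ⟩* ⟨ψ_d|H1|ψ⟩), which makes V = 1 − |⟨ψ_d|ψ⟩|² non-increasing when ψ_d is an eigenstate of H0. The paper's model additionally includes a polarizability term and an optimal control law with a quadratic performance index, both omitted here:

```python
import numpy as np

# Illustrative sketch only: i*dpsi/dt = (H0 + u*H1)*psi with a dipole control
# term, driven by the Lyapunov feedback u = K*Im(<psi_d|psi>^* <psi_d|H1|psi>),
# for which dV/dt = -2*K*Im(...)^2 <= 0 with V = 1 - |<psi_d|psi>|^2.
H0 = 0.5 * np.diag([1.0, -1.0]).astype(complex)   # sigma_z / 2 (free Hamiltonian)
H1 = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x (dipole coupling)
psi_d = np.array([0, 1], dtype=complex)           # target: ground state of H0
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

K, dt, steps = 2.0, 1e-3, 40000
for _ in range(steps):
    c = np.vdot(psi_d, psi)                       # overlap <psi_d|psi>
    u = K * np.imag(np.conj(c) * np.vdot(psi_d, H1 @ psi))
    H = H0 + u * H1
    # one explicit-midpoint step of the Schrodinger equation
    k1 = -1j * (H @ psi)
    psi = psi + dt * (-1j * (H @ (psi + 0.5 * dt * k1)))
    psi /= np.linalg.norm(psi)                    # renormalize against drift

V = 1.0 - abs(np.vdot(psi_d, psi)) ** 2
print(f"final V = {V:.4f}")                       # decreases from V(0) = 0.5
```

Note that u vanishes whenever the overlap term is real, which is the kind of stalling that motivates the modified control laws studied in the paper.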
12 pages, 359 KiB  
Article
Statistical Properties of Superpositions of Coherent Phase States with Opposite Arguments
by Miguel Citeli de Freitas and Viktor V. Dodonov
Entropy 2024, 26(11), 977; https://doi.org/10.3390/e26110977 - 15 Nov 2024
Abstract
We calculate the second-order moments, the Robertson–Schrödinger uncertainty product, and the Mandel factor for various superpositions of coherent phase states with opposite arguments, comparing the results with similar superpositions of the usual (Klauder–Glauber–Sudarshan) coherent states. We discover that the coordinate variance in the analog of even coherent states can show the strongest squeezing effect, close to the maximal possible squeezing for the given mean photon number. On the other hand, the Robertson–Schrödinger (RS) uncertainty product in superpositions of coherent phase states increases much more slowly (as a function of the mean photon number) than in superpositions of the usual coherent states. A nontrivial behavior of the Mandel factor for small mean photon numbers is discovered in superpositions with unequal weights of the two components. The exceptional nature of the even and odd superpositions is demonstrated. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness V)
Show Figures

Figure 1. Variances σ_x as functions of |α|² in the even and YS superpositions of the usual coherent states with φ = π/2.

Figure 2. (Left) Variances of x̂ as functions of |ε|² in the superpositions of the coherent phase states with φ = π/2. (Right) Variances of x̂ as functions of ⟨n̂⟩ in the even superposition of the coherent phase states with φ = π/2, compared with the variances (50) in the ideal vacuum squeezed state. All numeric results were obtained taking into account 10,000 terms in the series S_{±1} and S_{±2}.

Figure 3. The Robertson–Schrödinger uncertainty product D for superpositions of coherent states (left) and coherent phase states (right) as functions of the mean number of quanta n_0 in the states with r = 0. All numeric results were obtained taking into account 10,000 terms in the series S_{±1} and S_{±2}.

Figure 4. The Mandel factors of superpositions of coherent states (left) and coherent phase states (right) as functions of the mean number of quanta n_0 = |α|² in the original coherent state (1) and n_0 = |ε|²/(1 − |ε|²) in the original coherent phase state (3), for different values of the parameter r.
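The Mandel factor Q = (⟨n̂²⟩ − ⟨n̂⟩²)/⟨n̂⟩ − 1 discussed above is straightforward to evaluate numerically. The sketch below does so for the even superposition of ordinary coherent states, whose even-photon Fock weights are proportional to |α|^{2n}/n!, not for the coherent phase states studied in the paper:

```python
import numpy as np
from math import lgamma, log

def mandel_q_even_cat(alpha2, nmax=400):
    """Mandel Q for the even superposition (|a> + |-a>)/norm of ordinary
    coherent states, with alpha2 = |a|^2, from its Fock-state weights."""
    n = np.arange(0, nmax, 2, dtype=float)     # only even photon numbers survive
    logp = n * log(alpha2) - np.array([lgamma(k + 1) for k in n])
    p = np.exp(logp - logp.max())
    p /= p.sum()                               # normalized photon-number distribution
    mean = (p * n).sum()
    var = (p * n ** 2).sum() - mean ** 2
    return var / mean - 1.0

# Small |a|^2: strongly super-Poissonian (Q near 1); large |a|^2: Q -> 0
for a2 in (0.1, 1.0, 25.0):
    print(a2, round(mandel_q_even_cat(a2), 4))
```

The log-weight trick avoids factorial overflow; the results match the closed forms ⟨n̂⟩ = |α|² tanh |α|² and ⟨n̂(n̂ − 1)⟩ = |α|⁴ for the even coherent state.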
31 pages, 1865 KiB  
Article
Robustness Analysis of Multilayer Infrastructure Networks Based on Incomplete Information Stackelberg Game: Considering Cascading Failures
by Haitao Li, Lixin Ji, Yingle Li and Shuxin Liu
Entropy 2024, 26(11), 976; https://doi.org/10.3390/e26110976 - 14 Nov 2024
Abstract
The growing importance of critical infrastructure systems (CIS) makes maintaining their normal operation against deliberate attacks such as terrorism a significant challenge. Combining game theory and complex network theory provides a framework for analyzing CIS robustness in adversarial scenarios. Most existing studies focus on single-layer networks, while CIS are better modeled as multilayer networks. Research on multilayer network games remains limited: it lacks methods for constructing incomplete information through link hiding and neglects the impact of cascading failures. We propose a multilayer network Stackelberg game model with incomplete information considering cascading failures (MSGM-IICF). First, we describe the multilayer network model and define the multilayer node-weighted degree. Then, we present link hiding rules and a cascading failure model. Finally, we construct MSGM-IICF, providing methods for calculating payoff functions from the different perspectives of attackers and defenders. Experiments on synthetic and real-world networks demonstrate that link hiding improves network robustness when cascading failures are not considered. However, when cascading failures are considered, they become the primary factor determining network robustness. Dynamic capacity allocation enhances network robustness, while changes in dynamic costs make the network more vulnerable. The proposed method provides a new way of analyzing the robustness of diverse CIS, supporting resilient CIS design. Full article
(This article belongs to the Special Issue Robustness and Resilience of Complex Networks)
Show Figures

Figure 1

Figure 1
<p>Single-layer networks are coupled into multilayer networks and a false network is constructed through active link hiding: (<b>a</b>) two single-layer networks, each with six nodes; (<b>b</b>) the multilayer networks formed by coupling the single-layer networks, representing the actual network (AN); (<b>c</b>) rule-based link hiding in the multilayer networks; (<b>d</b>) the generated multilayer false network (FN). In subfigures (<b>a</b>–<b>c</b>), the blue circles represent nodes of layer 1, and the orange circles represent nodes of layer 2; gray solid lines represent intra-layer links, green solid lines represent inter-layer links, and red dashed lines represent hiding links. In subfigure (<b>d</b>), the nodes in FN are painted gray.</p>
Full article ">Figure 2
<p>Construction of the payoff matrix for the attacker and defender in MSGM-IICF. Let <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mi>n</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>. (<b>a</b>) Shows the construction of the defender’s payoff matrix and (<b>b</b>) the construction of the attacker’s payoff matrix. In Step 1, the set of nodes for typical defense and attack strategies is identified. The blue and orange nodes represent the defender’s node selection based on the AN, with blue nodes belonging to layer 1 of the multilayer network and orange nodes belonging to layer 2. The dark gray nodes represent the attacker’s node selection based on the FN. In Step 2, nodes in the AN and FN are removed according to the deletion rule, considering the impact of link hiding. Dashed lines indicate nodes where the attack failed due to link hiding. In Step 3, we consider the set of removed nodes in the AN and FN after accounting for cascading failure effects. In Step 4, the set of nodes identified in Step 3 is removed, obtaining the network after the attack.</p>
Full article ">Figure 3
<p>The defender’s equilibrium payoff under different <math display="inline"><semantics> <mi>ε</mi> </semantics></math> for various <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> </mrow> </semantics></math> along with the difference in equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>: (<b>a</b>) the defender’s equilibrium payoff under different <math display="inline"><semantics> <mi>ε</mi> </semantics></math> when <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>; (<b>b</b>) the defender’s equilibrium payoff under different <math display="inline"><semantics> <mi>ε</mi> </semantics></math> when <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>; (<b>c</b>) difference in equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>; (<b>d</b>) difference in equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>. In subfigures (<b>c</b>,<b>d</b>), dark blue indicates a small difference, and dark red indicates a large difference.</p>
Full article ">Figure 4
<p>Comparison of the attacker’s expected equilibrium payoff and actual equilibrium payoff for <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math> when <math display="inline"><semantics> <mi>ε</mi> </semantics></math> takes values of 0, 0.15, 0.3, and 0.45. Blue represents the expected equilibrium payoff, while yellow represents the actual equilibrium payoff.</p>
Full article ">Figure 5
<p>For <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>q</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>, the probability of the defender choosing the HDS and the attacker’s equilibrium strategy choice when <math display="inline"><semantics> <mi>ε</mi> </semantics></math> takes values of 0, 0.15, 0.3, and 0.45. The first row shows the probability of choosing the HDS, where lighter colors indicate higher probabilities of choice, while the second row shows the attacker’s equilibrium strategy choice, with light red representing a high-property attack strategy, light blue a low-property attack strategy, and light green #HAS that the defender’s payoff is the same for both the HAS and LAS.</p>
Full article ">Figure 6
<p>Relationship between the actual multilayer node-weighted degree in the AN network and the false multilayer node-weighted degree in the FN network. The <span class="html-italic">x</span>-axis represents the multilayer node-weighted degree in the AN and the <span class="html-italic">y</span>-axis represents the multilayer node-weighted degree in the FN. The blue circles represent the change in multilayer node-weighted degree when <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, green diamonds represent the change when <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>, red triangles represent the change when <math display="inline"><semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math>, and the dashed line is the bisector of the coordinate axes.</p>
Full article ">Figure 7
<p>Comparison of the defender’s payoff under various combinations of cascading failures and link hiding factors. Here, <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mi>ε</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> represents the defender’s equilibrium payoff without considering cascading failures and link hiding, <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mi>ε</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math> represents the defender’s equilibrium payoff without considering cascading failures and with a link hiding coefficient of 0.3, and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.5</mn> <mo>,</mo> <mi>ε</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math> represents the defender’s equilibrium payoff considering cascading failures with a tolerance coefficient of 1.5 and a link hiding coefficient of 0.3.</p>
Full article ">Figure 8
<p>Tendency graph of the defender’s equilibrium payoff changes when <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.5</mn> <mo>,</mo> <mi>ε</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math> under cascading failures. Lighter surface colors indicate higher payoff. The red line on the surface represents the change in the defender’s payoff when attack and defense budget resources are equal <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>m</mi> <mo>=</mo> <mi>n</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>The defender’s equilibrium payoff under different tolerance coefficients <math display="inline"><semantics> <mi>λ</mi> </semantics></math> with cascading failures and the difference in defense equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.1</mn> </mrow> </semantics></math>: (<b>a</b>) the defender’s equilibrium payoff when <math display="inline"><semantics> <mi>λ</mi> </semantics></math> takes values of 1.1, 1.5, and 1.9; (<b>b</b>) the difference in defense equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.1</mn> </mrow> </semantics></math>. Dark blue indicates a small difference, and dark red indicates a large difference.</p>
Full article ">Figure 10
<p>Comparison of the attacker’s expected equilibrium payoff and the actual equilibrium payoff when the cascading failure tolerance coefficient <math display="inline"><semantics> <mi>λ</mi> </semantics></math> takes values of 1.1, 1.5, and 1.9. Blue represents the attacker’s expected equilibrium payoff, while yellow represents the attacker’s actual equilibrium payoff.</p>
Full article ">Figure 11
<p>Probability of the defender choosing the HDS and attacker’s equilibrium strategy choice when the cascading failure tolerance coefficient <math display="inline"><semantics> <mi>λ</mi> </semantics></math> takes values of 1.1, 1.5, and 1.9. The first row shows the probability of choosing HDS, with lighter colors indicating higher probabilities of making that choice; the second row shows the attacker’s equilibrium strategy choice, with light red representing the high-property attack strategy, light blue the low-property attack strategy, and light green #HAS indicating that the defender’s payoff is the same for both the HAS and LAS.</p>
Full article ">Figure 12
<p>The effect of the cost sensitivity coefficient <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>/</mo> <mi>q</mi> </mrow> </semantics></math> on the defender’s probability of choosing HDS and the attacker’s choice of equilibrium strategy in SSE: (<b>a</b>) probability of choosing HDS under different <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>/</mo> <mi>q</mi> </mrow> </semantics></math> values without considering cascading failures; (<b>b</b>) probability of choosing HDS under different <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>/</mo> <mi>q</mi> </mrow> </semantics></math> values considering cascading failures (in the grids, colors from dark to light represent increasing probabilities of choosing HDS); (<b>c</b>) attacker’s equilibrium strategy choice under different <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>/</mo> <mi>q</mi> </mrow> </semantics></math> values without considering cascading failures; (<b>d</b>) attacker’s equilibrium strategy choice under different <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>/</mo> <mi>q</mi> </mrow> </semantics></math> values considering cascading failures. Light red represents the high-property attack strategy, light blue represents the low-property attack strategy, and light green #HAS indicates that the defender’s payoff is the same for both the HAS and LAS.</p>
Full article ">Figure 13
<p>Effect of the load exponent <math display="inline"><semantics> <mi>θ</mi> </semantics></math> on the defender’s probability of choosing the HDS and the attacker’s choice of equilibrium strategy in SSE: (<b>a</b>) probability of choosing the HDS under different <math display="inline"><semantics> <mi>θ</mi> </semantics></math> values without considering link hiding; (<b>b</b>) probability of choosing the HDS under different <math display="inline"><semantics> <mi>θ</mi> </semantics></math> values considering link hiding (in the grids, colors from dark to light represent increasing probabilities of choosing the HDS); (<b>c</b>) attacker’s equilibrium strategy choice under different <math display="inline"><semantics> <mi>θ</mi> </semantics></math> values without considering link hiding; (<b>d</b>) attacker’s equilibrium strategy choice under different <math display="inline"><semantics> <mi>θ</mi> </semantics></math> values considering link hiding. Light red represents the high-property attack strategy choice, light blue represents the low-property attack strategy choice, and light green #HAS indicates cases where both the defender’s payoff and the attacker’s payoff are equal for choosing either the HAS or LAS.</p>
Full article ">Figure 14
<p>The defender’s equilibrium payoff under different <math display="inline"><semantics> <mi>ε</mi> </semantics></math> in the US air transportation network: (<b>a</b>) American–United network, (<b>b</b>) American–Delta network; (<b>c</b>) United–Delta network.</p>
Full article ">Figure 15
<p>Probability of the defender choosing HDS and the attacker’s equilibrium strategy choice in the American–United network when <math display="inline"><semantics> <mi>ε</mi> </semantics></math> takes values of 0, 0.15, 0.3, and 0.45. The first row shows the probability of choosing HDS, where lighter color indicates a higher probability of that choice; the second row shows the attacker’s equilibrium strategy choice, where light red represents the high-property attack strategy, light blue represents the low-property attack strategy, and light green #HAS indicates that the defender’s payoff is the same for both the HAS and LAS.</p>
Full article ">Figure 16
<p>Defender’s equilibrium payoffs under different link hiding methods in the American–United network: (<b>a</b>) represents link hiding in different layers, where MLH-Am+Un denotes simultaneous hiding in both layers, MLH-Am denotes hiding only in the American network layer, MLH-Un denotes hiding only in the United network layer, and MLH_NO denotes no link hiding; (<b>b</b>) represents link hiding methods based on different rules, where MLH denotes rule-based link hiding, MREC denotes random hiding plus random reconnection, and MLH_NO denotes no link hiding.</p>
Full article ">Figure 17
<p>Defender’s equilibrium payoff under different tolerance coefficients <math display="inline"><semantics> <mi>λ</mi> </semantics></math> with cascading failures and the differences in defense equilibrium payoff between different values of <math display="inline"><semantics> <mi>λ</mi> </semantics></math> in the American–United network: (<b>a</b>) defender’s equilibrium payoff when <math display="inline"><semantics> <mi>λ</mi> </semantics></math> takes values of 1.1, 1.5, and 1.9; (<b>b</b>) difference in defense equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.1</mn> </mrow> </semantics></math>; (<b>c</b>) difference in defense equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 18
<p>Defender’s equilibrium payoff under different <math display="inline"><semantics> <mi>α</mi> </semantics></math> and differences in defense payoff between different values of <math display="inline"><semantics> <mi>α</mi> </semantics></math> in the American–United network for <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>: (<b>a</b>) defender’s equilibrium payoff when <math display="inline"><semantics> <mi>α</mi> </semantics></math> takes values of 0.1, 0.5, and 0.9; (<b>b</b>) difference in defense equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>; (<b>c</b>) difference in defense equilibrium payoff between <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 19
<p>Probability of the defender choosing the HDS and the attacker’s equilibrium strategy choice when the cascading failure tolerance coefficient <math display="inline"><semantics> <mi>λ</mi> </semantics></math> takes values of 1.1, 1.5, and 1.9 in the American–United network. The first row shows the probability of choosing the HDS, where lighter colors indicate a higher probability of that choice. The second row shows the attacker’s equilibrium strategy choice, where light red represents the high-property attack strategy, light blue represents the low-property attack strategy, and light green #HAS indicates that the defender’s payoff is the same for both the HAS and LAS.</p>
Full article ">Figure 20
<p>Probability of the defender choosing the HDS and the attacker’s equilibrium strategy choice when <math display="inline"><semantics> <mi>α</mi> </semantics></math> takes values of 0.1, 0.5, and 0.9 in the American–United network. The first row shows the probability of choosing the HDS, where lighter colors indicate a higher probability of that choice. The second row shows the attacker’s equilibrium strategy choice, where light red represents the high-property attack strategy, light blue represents the low-property attack strategy, and light green #HAS indicates that the defender’s payoff is the same for both the HAS and LAS.</p>
Full article ">Figure 21
<p>Impact of different edge weights <span class="html-italic">w</span> on network robustness in the American–United network, measured using the size of LMCC: (<b>a</b>) with link hiding and (<b>b</b>) with link hiding and cascading failures.</p>
Full article ">Figure 22
<p>Defender’s payoff under different cost adjustment factors <math display="inline"><semantics> <mi>μ</mi> </semantics></math>. The time step for the dynamic cost model is 10.</p>
Full article ">Figure 23
<p>Strategy choices of attacker and defender under different cost adjustment factors <math display="inline"><semantics> <mi>μ</mi> </semantics></math>. The time step for the dynamic cost model is 10. The first row shows the probability of choosing the HDS, while the second row shows the attacker’s equilibrium strategy choice.</p>
Full article ">Figure 24
<p>Relationship between tolerance coefficients and network robustness: (<b>a</b>) impact of <math display="inline"><semantics> <mi>λ</mi> </semantics></math> on network robustness in SCA and (<b>b</b>) impact of <math display="inline"><semantics> <mi>τ</mi> </semantics></math> on network robustness under different <math display="inline"><semantics> <mi>α</mi> </semantics></math> values in DCA.</p>
Full article ">
13 pages, 668 KiB  
Article
Sensitivity of Bayesian Networks to Errors in Their Structure
by Agnieszka Onisko and Marek J. Druzdzel
Entropy 2024, 26(11), 975; https://doi.org/10.3390/e26110975 - 14 Nov 2024
Viewed by 219
Abstract
There is a widespread belief in the Bayesian network (BN) community that while the overall accuracy of the results of BN inference is not sensitive to the precision of parameters, it is sensitive to the structure. We report on the results of a [...] Read more.
There is a widespread belief in the Bayesian network (BN) community that while the overall accuracy of the results of BN inference is not sensitive to the precision of parameters, it is sensitive to the structure. The part of our study focusing on the parameters is reported in a companion paper, while this paper focuses on the BN graphical structure. We present the results of several experiments in which we test the impact of errors in the BN structure on its accuracy in the context of medical diagnostic models. We study the deterioration in model accuracy under structural changes that systematically modify the original gold-standard model, notably node removal, edge removal, and edge reversal. Our results confirm the popular belief that the BN structure is important, and we show that structural errors may lead to a serious deterioration in diagnostic accuracy. At the same time, most BN models are forgiving of single errors. In light of these results and those of the companion paper, we recommend that knowledge engineers focus their efforts on obtaining a correct model structure and worry less about the overall precision of parameters. Full article
(This article belongs to the Special Issue Bayesian Network Modelling in Data Sparse Environments)
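The structural perturbations studied in this article are simple graph operations. As a minimal, hypothetical sketch (plain Python, not the authors' code), this is the kind of modification applied to a gold-standard model: removing a fixed fraction of nodes, taken in a given ranking (in the paper, by cross-entropy with the class node, in ascending, random, or descending order), together with all incident edges:

```python
def remove_fraction_of_nodes(edges, order, fraction):
    """Drop the first `fraction` of nodes in `order` and every incident edge.

    `edges` is a list of (parent, child) pairs representing a DAG;
    `order` is a ranking of all nodes (illustrative stand-in for the
    paper's cross-entropy-based ranking).
    """
    k = int(round(fraction * len(order)))
    gone = set(order[:k])
    kept_nodes = [n for n in order if n not in gone]
    kept_edges = [(u, v) for (u, v) in edges if u not in gone and v not in gone]
    return kept_nodes, kept_edges


# Toy 4-node diagnostic structure; removing 50% of the ranked nodes.
edges = [("A", "C"), ("B", "C"), ("C", "D")]
nodes, kept = remove_fraction_of_nodes(edges, ["A", "B", "C", "D"], 0.5)
print(nodes, kept)  # A and B gone, only C -> D survives
```

In the paper, each such perturbed structure is re-parameterized from data and its diagnostic accuracy (ACC/AUC) is re-measured; the sketch covers only the graph side of that loop.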
Show Figures

Figure 1
<p>The BN models learned from the <span class="html-italic">Breast Cancer</span> data set: using the BSA algorithm (<b>left</b>) and using the ANB algorithm (<b>right</b>).</p>
Full article ">Figure 2
<p>The ACC of the <span class="html-small-caps">Breast Cancer</span>, <span class="html-small-caps">Cardiotocography</span>, <span class="html-small-caps">Dermatology</span>, <span class="html-small-caps">HCV</span>, <span class="html-small-caps">Hepatitis</span>, <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">SPECT Heart</span> models as a function of the percentage of nodes removed. The colors indicate the ascending, random, and descending order of the cross-entropy between the class node and the removed nodes.</p>
Full article ">Figure 3
<p>The ROC curves for the <span class="html-small-caps">Cardiotocography</span> model when 0%, 20%, and 40% of the nodes have been removed in a descending order.</p>
Full article ">Figure 4
<p>The AUC of the <span class="html-small-caps">Breast Cancer</span>, <span class="html-small-caps">Cardiotocography</span>, <span class="html-small-caps">Dermatology</span>, <span class="html-small-caps">HCV</span>, <span class="html-small-caps">Hepatitis</span>, <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">SPECT Heart</span> models as a function of the percentage of nodes removed. The colors indicate the ascending, random, and descending order of the cross-entropy between the class node and the removed nodes.</p>
Full article ">Figure 5
<p>The AUC of the <span class="html-small-caps">Breast Cancer</span>, <span class="html-small-caps">Cardiotocography</span>, <span class="html-small-caps">Dermatology</span>, <span class="html-small-caps">HCV</span>, <span class="html-small-caps">Hepatitis</span>, <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">SPECT Heart</span> models as a function of the percentage of edges removed. The colors indicate the ascending, random, and descending order of the strengths of the removed edges.</p>
Full article ">Figure 6
<p>The AUC of the <span class="html-small-caps">Breast Cancer</span>, <span class="html-small-caps">Cardiotocography</span>, <span class="html-small-caps">Dermatology</span>, <span class="html-small-caps">HCV</span>, <span class="html-small-caps">Hepatitis</span>, <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">SPECT Heart</span> models as a function of the percentage of edges reversed. The colors indicate the ascending, random, and descending order of the strengths of the reversed edges.</p>
Full article ">Figure 6 Cont.
<p>The AUC of the <span class="html-small-caps">Breast Cancer</span>, <span class="html-small-caps">Cardiotocography</span>, <span class="html-small-caps">Dermatology</span>, <span class="html-small-caps">HCV</span>, <span class="html-small-caps">Hepatitis</span>, <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">SPECT Heart</span> models as a function of the percentage of edges reversed. The colors indicate the ascending, random, and descending order of the strengths of the reversed edges.</p>
Full article ">
27 pages, 12110 KiB  
Article
Exploring the Impact of Additive Shortcuts in Neural Networks via Information Bottleneck-like Dynamics: From ResNet to Transformer
by Zhaoyan Lyu and Miguel R. D. Rodrigues
Entropy 2024, 26(11), 974; https://doi.org/10.3390/e26110974 - 14 Nov 2024
Viewed by 277
Abstract
Deep learning has made significant strides, driving advances in areas like computer vision, natural language processing, and autonomous systems. In this paper, we further investigate the implications of the role of additive shortcut connections, focusing on models such as ResNet, Vision Transformers (ViTs), [...] Read more.
Deep learning has made significant strides, driving advances in areas like computer vision, natural language processing, and autonomous systems. In this paper, we further investigate the role of additive shortcut connections, focusing on models such as ResNet, Vision Transformers (ViTs), and MLP-Mixers, given that they are essential in enabling efficient information flow and mitigating optimization challenges such as vanishing gradients. In particular, capitalizing on our recent information bottleneck approach, we analyze how additive shortcuts influence the fitting and compression phases of training, which are crucial for generalization. We leverage Z-X and Z-Y measures as practical alternatives to mutual information for observing these dynamics in high-dimensional spaces. Our empirical results demonstrate that models with identity shortcuts (ISs) often skip the initial fitting phase and move directly into the compression phase, while non-identity shortcut (NIS) models follow the conventional two-phase process. Furthermore, we explore how IS models are still able to compress effectively, maintaining their generalization capacity despite bypassing the early fitting stages. These findings offer new insights into the dynamics of shortcut connections in neural networks, contributing to the optimization of modern deep learning architectures. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Figure 1
<p>The framework for estimating the Z-X and Z-Y measures. This figure is adapted from <a href="#entropy-26-00974-f001" class="html-fig">Figure 1</a> in [<a href="#B17-entropy-26-00974" class="html-bibr">17</a>]. <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mrow> <mi>S</mi> <mi>E</mi> </mrow> </msub> </semantics></math> refers to the squared loss, and <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mrow> <mi>C</mi> <mi>E</mi> </mrow> </msub> </semantics></math> represents the cross-entropy loss.</p>
Full article ">Figure 2
<p>An illustration of identity and non-identity shortcuts. The representations at different stages are labeled in pink.</p>
Full article ">Figure 3
<p>Architecture of the CNN (<b>left</b>), ResCNN (<b>middle</b>), and iResCNN (<b>right</b>). “Conv” refers to convolutional layers, “ReLU” to rectified linear unit activation, and “FC” to fully connected layers. The convolutional kernel and weight matrix shapes are noted in gray, and the tensor/matrix/vector shapes are labeled in blue.</p>
Full article ">Figure 4
<p>Z-X estimator design for convolutional networks. “TConv” stands for transposed convolution, used to upscale feature maps, and “tanh” represents the hyperbolic tangent activation function.</p>
Full article ">Figure 5
<p>The Z-X dynamics of the CNN (<b>left</b>), ResCNN (<b>middle</b>), and iResCNN (<b>right</b>). The Z-X measures are estimated at corresponding modules in <a href="#entropy-26-00974-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 6
<p>Architecture of ViT (<b>left</b>) and MLP-Mixer (<b>right</b>). “MHSA” represents multi-head self attention modules, “FF” indicates feed-forward modules, and “GAP” represents global average pooling layers.</p>
Full article ">Figure 7
<p>Architecture of Z-X estimators for token-based models. For the tokenized representation of the ViT and MLP-Mixer in this paper, <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>64</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>512</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>The Z-X dynamics of the ViT (<b>top</b>) and MLP-Mixer (<b>bottom</b>). The Z-X measures are estimated at the corresponding modules in <a href="#entropy-26-00974-f006" class="html-fig">Figure 6</a>.</p>
Full article ">Figure 9
<p>Dynamics of averaged element-wise correlation coefficients and averaged element-wise variance in the iResCNN: In the upper row, the curves in a darker color and the left axis show the dynamics of <math display="inline"><semantics> <mover> <mrow> <mi>Corr</mi> <mo>(</mo> <msub> <mi>Z</mi> <mrow> <mi>l</mi> <mo>;</mo> <mi>F</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Z</mi> <mi>I</mi> </msub> <mo>)</mo> </mrow> <mo>¯</mo> </mover> </semantics></math>, while the lighter curves and the right axis show the Z-X measure (<math display="inline"><semantics> <msub> <mi>m</mi> <mrow> <msub> <mi>Z</mi> <mi>l</mi> </msub> <mo>;</mo> <mi>X</mi> </mrow> </msub> </semantics></math>) obtained from <a href="#sec4dot2-entropy-26-00974" class="html-sec">Section 4.2</a>. In the lower row, the dynamics of <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mi>l</mi> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math>, <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mrow> <mi>l</mi> <mo>;</mo> <mi>F</mi> </mrow> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math>, and <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mi>I</mi> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math> are visualized. The left, middle, and right panels show the representations of different modules. <span class="html-italic">l</span> is the index for the modules in the iResCNN shown in the right panel of <a href="#entropy-26-00974-f003" class="html-fig">Figure 3</a>. Panels in the same row or column share the same axes.</p>
Full article ">Figure 10
<p>Histograms of element-wise correlation coefficients in the iResCNN: These histograms summarize the element-wise coefficients <math display="inline"><semantics> <mrow> <mi>Corr</mi> <mo>(</mo> <msub> <mi>Z</mi> <mrow> <mi>l</mi> <mo>;</mo> <mi>F</mi> <mo>;</mo> <mi>i</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Z</mi> <mrow> <mi>I</mi> <mo>;</mo> <mi>i</mi> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math>, where <span class="html-italic">i</span> indexes the entries of the representation components.</p>
Full article ">Figure 11
<p>Dynamics of the averaged element-wise correlation coefficients and averaged element-wise variance in the MLP-Mixer: In the upper row, the curves in a darker color and the left axis show the dynamics of <math display="inline"><semantics> <mover> <mrow> <mi>Corr</mi> <mo>(</mo> <msub> <mi>Z</mi> <mrow> <mi>l</mi> <mo>;</mo> <mi>F</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Z</mi> <mi>I</mi> </msub> <mo>)</mo> </mrow> <mo>¯</mo> </mover> </semantics></math>, while the lighter curves and the right axis show the Z-X measure (<math display="inline"><semantics> <msub> <mi>m</mi> <mrow> <msub> <mi>Z</mi> <mi>l</mi> </msub> <mo>;</mo> <mi>X</mi> </mrow> </msub> </semantics></math>) obtained from <a href="#sec4dot4-entropy-26-00974" class="html-sec">Section 4.4</a>. In the lower row, the dynamics of <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mi>l</mi> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math>, <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mrow> <mi>l</mi> <mo>;</mo> <mi>F</mi> </mrow> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math>, and <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mi>I</mi> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math> are visualized. The left, middle, and right panels show the representations of different modules. <span class="html-italic">l</span> is the index for the modules in the MLP-Mixer shown in the right panel in <a href="#entropy-26-00974-f006" class="html-fig">Figure 6</a>. Panels in the same row or column share the same axes.</p>
Full article ">Figure 12
<p>Dynamics of averaged statistics for the ViT: In the upper row, the curves in a darker color and the left axis show the dynamics of <math display="inline"><semantics> <mover> <mrow> <mi>Corr</mi> <mo>(</mo> <msub> <mi>Z</mi> <mrow> <mi>l</mi> <mo>;</mo> <mi>F</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Z</mi> <mi>I</mi> </msub> <mo>)</mo> </mrow> <mo>¯</mo> </mover> </semantics></math>, while the lighter curves and the right axis show the Z-X measure (<math display="inline"><semantics> <msub> <mi>m</mi> <mrow> <msub> <mi>Z</mi> <mi>l</mi> </msub> <mo>;</mo> <mi>X</mi> </mrow> </msub> </semantics></math>) obtained from <a href="#sec4dot4-entropy-26-00974" class="html-sec">Section 4.4</a>. In the lower row, the dynamics of <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mi>l</mi> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math>, <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mrow> <mi>l</mi> <mo>;</mo> <mi>F</mi> </mrow> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math>, and <math display="inline"><semantics> <mover> <msubsup> <mi>σ</mi> <msub> <mi>Z</mi> <mi>I</mi> </msub> <mn>2</mn> </msubsup> <mo>¯</mo> </mover> </semantics></math> are visualized. The left, middle, and right panels show the representations of different modules. <span class="html-italic">l</span> is the index for the modules in the ViT shown in the left panel in <a href="#entropy-26-00974-f006" class="html-fig">Figure 6</a>. Panels in the same row or column share the same axes.</p>
Full article ">
20 pages, 752 KiB  
Article
DUS Topp–Leone-G Family of Distributions: Baseline Extension, Properties, Estimation, Simulation and Useful Applications
by Divine-Favour N. Ekemezie, Kizito E. Anyiam, Mohammed Kayid, Oluwafemi Samson Balogun and Okechukwu J. Obulezi
Entropy 2024, 26(11), 973; https://doi.org/10.3390/e26110973 - 13 Nov 2024
Viewed by 376
Abstract
This study introduces the DUS Topp–Leone family of distributions, a novel extension of the Topp–Leone distribution enhanced by the DUS transformer. We derive the cumulative distribution function (CDF) and probability density function (PDF), demonstrating the distribution’s flexibility in modeling various lifetime phenomena. The [...] Read more.
This study introduces the DUS Topp–Leone family of distributions, a novel extension of the Topp–Leone distribution enhanced by the DUS transformer. We derive the cumulative distribution function (CDF) and probability density function (PDF), demonstrating the distribution’s flexibility in modeling various lifetime phenomena. The DUS-TL exponential distribution was studied as a sub-model, with analytical and graphical evidence revealing that it exhibits a unique unimodal shape, along with fat-tail characteristics, making it suitable for time-to-event data analysis. We evaluate parameter estimation methods, revealing that non-Bayesian approaches, particularly Maximum Likelihood and Least Squares, outperform Bayesian techniques in terms of bias and root mean square error. Additionally, the distribution effectively models datasets with varying skewness and kurtosis values, as illustrated by its application to total factor productivity data across African countries and the mortality rate of people who injected drugs. Overall, the DUS Topp–Leone family represents a significant advancement in statistical modeling, offering robust tools for researchers in diverse fields. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
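The family's CDF follows from composing two standard constructions. Assuming the usual DUS transform, F(x) = (e^G(x) − 1)/(e − 1), applied to the Topp–Leone-G baseline, which for an exponential baseline G(x) = 1 − e^(−ρx) reduces to (1 − e^(−2ρx))^ν, a sketch of the resulting DUS-TLE CDF looks like this (the paper's exact parameterisation may differ):

```python
import math


def dus_tl_exponential_cdf(x, nu, rho):
    """CDF of a DUS Topp-Leone exponential model (sketch).

    Assumes the standard DUS transform F(x) = (e**G(x) - 1)/(e - 1)
    applied to the Topp-Leone exponential baseline
    G(x) = (1 - e**(-2*rho*x))**nu; parameter names are illustrative.
    """
    if x <= 0:
        return 0.0
    g = (1.0 - math.exp(-2.0 * rho * x)) ** nu  # Topp-Leone exponential CDF
    return (math.exp(g) - 1.0) / (math.e - 1.0)  # DUS transform


# Sanity checks: a valid CDF starts at 0, increases, and tends to 1.
print(dus_tl_exponential_cdf(0.0, 2.0, 1.0))   # 0 at the origin
print(dus_tl_exponential_cdf(50.0, 2.0, 1.0))  # approaches 1 in the tail
```

The PDF follows by differentiation; because the outer DUS map is increasing, the transform preserves monotonicity of the baseline CDF while reshaping the density's mode and tail.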
Show Figures

Figure 1
<p>Outline of the remaining sections of this study.</p>
Full article ">Figure 2
<p>PDF plots of DUS-TLE <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>ν</mi> <mo>,</mo> <mi>ρ</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>h(x) of DUS-TLE.</p>
Full article ">Figure 4
<p>(<b>a</b>) Mean of DUS-TLE. (<b>b</b>) Variance of DUS-TLE.</p>
Full article ">Figure 5
<p>(<b>a</b>) Skewness of DUS-TLE. (<b>b</b>) Kurtosis of DUS-TLE.</p>
Full article ">Figure 6
<p>Parametric and non-parametric plots for Data-I.</p>
Full article ">Figure 7
<p>Parametric and non-parametric plots for Data-II.</p>
Full article ">
24 pages, 916 KiB  
Article
An Instructive CO2 Adsorption Model for DAC: Wave Solutions and Optimal Processes
by Emily Kay-Leighton and Henning Struchtrup
Entropy 2024, 26(11), 972; https://doi.org/10.3390/e26110972 - 13 Nov 2024
Viewed by 312
Abstract
We present and investigate a simple yet instructive model for the adsorption of CO2 from air in porous media as used in direct air capture (DAC) processes. Mathematical analysis and non-dimensionalization reveal that the sorbent is characterized by the sorption timescale and [...] Read more.
We present and investigate a simple yet instructive model for the adsorption of CO2 from air in porous media as used in direct air capture (DAC) processes. Mathematical analysis and non-dimensionalization reveal that the sorbent is characterized by the sorption timescale and capacity, while the adsorption process is effectively wavelike. The systematic evaluation shows that the overall adsorption rate and the recommended charging duration depend only on the wave parameter that is found as the ratio of capacity and dimensionless air flow velocity. Specifically, smaller wave parameters yield a larger overall charging rate, while larger wave parameters reduce the work required to move air through the sorbent. Thus, optimal process conditions must compromise between a large overall adsorption rate and low work requirements. Full article
Show Figures

Figure 1
<p>Wave solution centered at <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> for wave parameters <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mrow> <mi>ϕ</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>υ</mi> </mfrac> </mstyle> </mrow> </semantics></math> in <math display="inline"><semantics> <mfenced separators="" open="[" close="]"> <mn>0.5</mn> <mo>,</mo> <mspace width="4pt"/> <mn>20</mn> </mfenced> </semantics></math>.</p>
Full article ">Figure 2
<p>Solution of adsorption process with <math display="inline"><semantics> <mrow> <mi>υ</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, that is <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mrow> <mi>ϕ</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>υ</mi> </mfrac> </mstyle> <mo>=</mo> <mn>10.1</mn> </mrow> </semantics></math>, for times <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mn>14</mn> </mrow> </semantics></math>; green: <math display="inline"><semantics> <mrow> <mi>β</mi> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>,</mo> <mi>t</mi> </mfenced> </mrow> </semantics></math>; orange: <math display="inline"><semantics> <mrow> <mi>χ</mi> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>,</mo> <mi>t</mi> </mfenced> </mrow> </semantics></math>; blue: wave solution for <math display="inline"><semantics> <mrow> <mi>χ</mi> <mo>,</mo> <mi>β</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Solution of adsorption process with <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mrow> <mi>ϕ</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>υ</mi> </mfrac> </mstyle> <mo>=</mo> <mn>1.01</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>υ</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, for times <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mspace width="4pt"/> <mn>1.5</mn> <mo>,</mo> <mspace width="4pt"/> <mn>3</mn> <mo>,</mo> <mspace width="4pt"/> <mn>4.5</mn> <mo>,</mo> <mspace width="4pt"/> <mn>6</mn> </mrow> </semantics></math>; green: <math display="inline"><semantics> <mrow> <mi>β</mi> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>,</mo> <mi>t</mi> </mfenced> </mrow> </semantics></math>; orange: <math display="inline"><semantics> <mrow> <mi>χ</mi> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>,</mo> <mi>t</mi> </mfenced> </mrow> </semantics></math>; blue: wave solution for <math display="inline"><semantics> <mrow> <mi>χ</mi> <mo>,</mo> <mi>β</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Accumulation <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>β</mi> <mo stretchy="false">¯</mo> </mover> <mfenced open="(" close=")"> <mi>t</mi> </mfenced> </mrow> </semantics></math> over time for <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mrow> <mi>ϕ</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>v</mi> </mfrac> </mstyle> <mo>=</mo> <mn>0.01</mn> <mo>,</mo> <mspace width="4pt"/> <mn>1</mn> <mo>,</mo> <mspace width="4pt"/> <mn>5</mn> <mo>,</mo> <mspace width="4pt"/> <mn>10</mn> <mo>;</mo> </mrow> </semantics></math> continuous: numerical solution; dashed: wave solution.</p>
Full article ">Figure 5
<p>Charging duration <math display="inline"><semantics> <msub> <mi>t</mi> <mi>ch</mi> </msub> </semantics></math> in dependence of <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mrow> <mi>ϕ</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>υ</mi> </mfrac> </mstyle> </mrow> </semantics></math> for accumulations <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>β</mi> <mo stretchy="false">¯</mo> </mover> <mi>ch</mi> </msub> <mo>=</mo> <mn>0.4</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mn>0.99</mn> </mrow> </semantics></math> for wave (left) and numerical (right) solutions.</p>
Full article ">Figure 6
<p>Charging rates <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>β</mi> <mo>˙</mo> </mover> <mo>=</mo> <msub> <mover accent="true"> <mi>β</mi> <mo stretchy="false">¯</mo> </mover> <mi>ch</mi> </msub> </mrow> </semantics></math>/<math display="inline"><semantics> <msub> <mi>t</mi> <mi>ch</mi> </msub> </semantics></math> as functions of charge duration <math display="inline"><semantics> <msub> <mi>t</mi> <mi>ch</mi> </msub> </semantics></math> for several <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mrow> <mi>ϕ</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>υ</mi> </mfrac> </mstyle> </mrow> </semantics></math> (numerical solution). The dots indicate the values at <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>ch</mi> </msub> <mo>=</mo> <mi>λ</mi> </mrow> </semantics></math>. Note the logarithmic scale.</p>
Full article ">
9 pages, 238 KiB  
Article
Dirac Equation and Fisher Information
by Asher Yahalom
Entropy 2024, 26(11), 971; https://doi.org/10.3390/e26110971 - 12 Nov 2024
Viewed by 352
Abstract
Previously, it was shown that Schrödinger’s theory can be derived from a potential flow Lagrangian provided a Fisher information term is added. This approach was later expanded to Pauli’s theory of an electron with spin, which required a Clebsch flow Lagrangian with non-zero vorticity. Here, we use the recent relativistic flow Lagrangian to represent Dirac’s theory with the addition of a Lorentz invariant Fisher information term as is required by quantum mechanics. Full article
(This article belongs to the Special Issue Applications of Fisher Information in Sciences II)
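As background for the abstract above: in the non-relativistic precursor it mentions, the Fisher information term that must be added to a potential-flow Lagrangian to recover Schrödinger's theory takes, in one standard textbook convention (Madelung variables, ρ = |ψ|²; stated here as a reminder, not necessarily the paper's notation), the form:

```latex
% Fisher-information (quantum potential) term in Madelung variables;
% standard convention, given as background rather than the paper's expression.
\mathcal{L}_{F} = -\frac{\hbar^{2}}{8m}\,\frac{(\nabla\rho)^{2}}{\rho},
\qquad
\int \mathcal{L}_{F}\, d^{3}x \;\propto\; -\frac{\hbar^{2}}{8m}\, I_{F}[\rho],
\qquad
I_{F}[\rho] = \int \frac{(\nabla\rho)^{2}}{\rho}\, d^{3}x .
```

The paper's contribution is a Lorentz-invariant analogue of such a term for the relativistic flow Lagrangian underlying Dirac's theory.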
28 pages, 13144 KiB  
Article
Complexity and Variation in Infectious Disease Birth Cohorts: Findings from HIV+ Medicare and Medicaid Beneficiaries, 1999–2020
by Nick Williams
Entropy 2024, 26(11), 970; https://doi.org/10.3390/e26110970 - 12 Nov 2024
Viewed by 380
Abstract
The impact of uncertainty in information systems is difficult to assess, especially when drawing conclusions from human observation records. In this study, we investigate survival variation in a population experiencing infectious disease as a proxy for uncertainty problems. Using Centers for Medicare and Medicaid Services claims, we discovered 1,543,041 HIV+ persons, 363,425 of whom were observed dying from all-cause mortality. After aggregating by HIV status, year of birth and year of death, we constructed Age-Period-Cohort disambiguation and regression models to explain variance in survival. We used Age-Period-Cohort as an alternative method to work around under-observed features of uncertainty, such as infection transmission, receiver host dynamics, or comorbidity noise, that impact survival variation. We detected ages that have a consistent, disproportionate share of deaths independent of study year or year of birth. Variation in seasonality of mortality appeared stable in regression models; in turn, HIV+ cases in the United States show no survival gain when uncertainty is left uncontrolled. Given the information complexity issues of under-observed exposure and transmission, studies of infectious diseases should either include robust decedent cases, observe transmission physics, or avoid drawing conclusions about survival from human observation records. Full article
(This article belongs to the Special Issue Stability and Flexibility in Dynamic Systems: Novel Research Pathways)
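The Age-Period-Cohort disambiguation mentioned above is needed because the three effects are linearly confounded: cohort = period − age, so a naive linear model cannot separate them. A small pure-Python sketch (toy ages and periods of our own choosing, not the study's data) makes the rank deficiency explicit:

```python
# Why APC models need disambiguation: with cohort = period - age, the
# linear design [1, age, period, cohort] is rank-deficient (toy example).

def matrix_rank(rows, tol=1e-6):
    """Rank via Gaussian elimination with partial pivoting (pure Python)."""
    m = [row[:] for row in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = max(range(rank, len(m)), key=lambda r: abs(m[r][col]), default=None)
        if pivot is None or abs(m[pivot][col]) < tol:
            continue  # column has no usable pivot: linearly dependent
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

ages = [30, 40, 50, 60, 70]
periods = [1999, 2005, 2010, 2015, 2020]
# One row per age/period cell: intercept, age, period, cohort = period - age.
design = [[1.0, a, p, p - a] for a in ages for p in periods]
rank = matrix_rank(design)  # 3, not 4: the three linear effects are confounded
```

Because the design has rank 3 rather than 4, at least one effect must be constrained or re-parameterized before age, period, and cohort contributions can be estimated separately.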
Figure 1: CMS case volumes over time by death with mortality rates and mortality relative rate (OR) by HIV status.
Figure 2: Tile plot of CMS HIV+ cases and deaths by number of years old at observation (YOAO).
Figure 3: Inflow candidates, outflow (death) and overflow candidate case volumes by program and across study years by number of years old at observation (YOAO).
Figure 4: Inflow among CMS HIV+ cases: (a) YOAO in period; (b) YOAO in cohort; (c) cohort per period; (d) period in YOAO; (e) cohort in YOAO; (f) period in cohort.
Figure 5: Overflow in CMS HIV+ cases: (a) YOAO in period; (b) YOAO in cohort; (c) cohort in period; (d) period in YOAO; (e) cohort in YOAO; (f) period in cohort.
Figure 6: Outflow (deaths) among CMS HIV+ cases: (a) YOAO in period; (b) YOAO in cohort; (c) cohort in period; (d) period in YOAO; (e) cohort in YOAO; (f) period in cohort.
Figure 7: CMS HIV+ cases: (a) YOAO in period; (b) YOAO in cohort; (c) cohort in period; (d) period in YOAO; (e) cohort in YOAO; (f) period in cohort.
Figure 8: Age, period and cohort estimates from Poisson linear models for Medicare, Medicaid and CMS HIV+ decedent cases.
13 pages, 4232 KiB  
Article
Universality of Dynamical Symmetries in Chaotic Maps
by Marcos Acero, Sean Lyons, Andrés Aragoneses and Arjendu K. Pattanayak
Entropy 2024, 26(11), 969; https://doi.org/10.3390/e26110969 - 12 Nov 2024
Viewed by 354
Abstract
Identifying signs of regularity and uncovering dynamical symmetries in complex and chaotic systems is crucial both for practical applications and for enhancing our understanding of complex dynamics. Recent approaches have quantified temporal correlations in time series, revealing hidden, approximate dynamical symmetries that provide insight into the systems under study. In this paper, we explore universality patterns in the dynamics of chaotic maps using combinations of complexity quantifiers. We also apply a recently introduced technique that projects dynamical symmetries into a “symmetry space”, providing an intuitive and visual depiction of these symmetries. Our approach unifies and extends previous results and, more importantly, offers a meaningful interpretation of universality by linking it with dynamical symmetries and their transitions. Full article
(This article belongs to the Special Issue Ordinal Patterns-Based Tools and Their Applications)
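The ordinal-pattern machinery behind the quantifiers in this abstract (PE in particular) can be sketched in a few lines. Below is a minimal Bandt–Pompe permutation entropy applied to the logistic map x_{n+1} = r·x_n·(1 − x_n); the pattern length d = 3 and the parameter values are our choices for illustration, not the paper's settings:

```python
from math import log

def ordinal_pattern(window):
    """Bandt-Pompe ordinal pattern: indices sorted by ascending value."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def permutation_entropy(series, d=3):
    """Permutation entropy of order d, normalized by log(d!)."""
    counts = {}
    for i in range(len(series) - d + 1):
        pat = ordinal_pattern(series[i:i + d])
        counts[pat] = counts.get(pat, 0) + 1
    n = sum(counts.values())
    h = -sum(c / n * log(c / n) for c in counts.values())
    d_fact = 1
    for k in range(2, d + 1):
        d_fact *= k
    return h / log(d_fact)

def logistic_series(r, n, x0=0.4, burn=1000):
    """Iterate the logistic map, discarding a transient."""
    x, out = x0, []
    for i in range(burn + n):
        x = r * x * (1.0 - x)
        if i >= burn:
            out.append(x)
    return out

pe_chaotic = permutation_entropy(logistic_series(4.0, 5000))   # fully chaotic
pe_periodic = permutation_entropy(logistic_series(3.5, 5000))  # period-4 window
```

In the period-4 window only four of the six possible length-3 patterns occur (normalized PE ≈ 0.77), while the fully chaotic regime populates five patterns unevenly and yields a noticeably higher PE, exactly the kind of contrast the paper's Figure 1b exploits to locate periodic windows.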
Figure 1: (a) Word populations as a function of r for the logistic map. (b) PE as a function of r for the logistic map. Both panels identify the windows of periodicity in this chaotic map; the word probabilities and their constraints also identify different families of chaos through their distribution and hierarchies.
Figure 2: (a) Fisher Information Measure (FIM) versus PE. (b) Reversibility T_ρ versus PE. Both combinations show a clear signature that tracks the different dynamics performed by the map as the control parameter is varied. FIM computed using the SA* sorting array [P2-P4-P1-P5-P3-P6] (see Ref. [14]). Dashed ellipses point to some families of chaos; the circled regions range from [A] laminar with more dynamical constraints to [F] more chaotic (details in main text).
Figure 3: For the logistic (3.5 ≤ r ≤ 4.0), tent (1.1 ≤ r ≤ 2), cusp (1 ≤ r ≤ 2), Ricker's (15 ≤ r ≤ 20), sine (0.85 ≤ r ≤ 1.00), and cubic (2.3 ≤ r ≤ 2.6) maps: (a) FIM versus PE, with sorting array SA† = [P4-P3-P1-P6-P5-P2]; (b) reversibility T_ρ versus PE.
Figure 4: For the logistic, tent, cusp, Ricker's, sine, and cubic maps: (a) Rotational Hierarchy versus Rotational Variance [12]; (b) Mirror Hierarchy versus Mirror Variance. Legend as in Figure 3.
Figure 5: (a) Symmetry vector plotted in symmetry space for the logistic map; the color code indicates the control parameter 3.5 ≤ r ≤ 4.0, and the pink dot refers to random dynamics. (b) Symmetry vector for several chaotic iterative maps; colors as in Figure 1.
Figure 6: (a) Rotational Hierarchy versus Rotational Variance for several 1D iterative maps for words of length 4. (b) Mirror Hierarchy versus Mirror Variance for words of length 4.
35 pages, 2268 KiB  
Article
Efficient Search Algorithms for Identifying Synergistic Associations in High-Dimensional Datasets
by Cillian Hourican, Jie Li, Pashupati P. Mishra, Terho Lehtimäki, Binisha H. Mishra, Mika Kähönen, Olli T. Raitakari, Reijo Laaksonen, Liisa Keltikangas-Järvinen, Markus Juonala and Rick Quax
Entropy 2024, 26(11), 968; https://doi.org/10.3390/e26110968 - 11 Nov 2024
Viewed by 400
Abstract
In recent years, there has been notably increased interest in the study of multivariate interactions and emergent higher-order dependencies. This is particularly evident in the context of identifying synergistic sets, defined as combinations of elements whose joint interactions give rise to information that is not present in any individual subset of those elements. The scalability of frameworks such as partial information decomposition (PID) and those based on multivariate extensions of mutual information, such as O-information, is limited by the combinatorial explosion in the number of sets that must be assessed. To address these challenges, we propose a novel approach that uses stochastic search strategies to identify synergistic triplets within datasets. The methodology extends to larger sets and various synergy measures. By employing stochastic search, our approach circumvents the constraints of exhaustive enumeration, offering a scalable and efficient means to uncover intricate dependencies. The flexibility of our method is illustrated through its application to two epidemiological datasets: the Young Finns Study and the UK Biobank Nuclear Magnetic Resonance (NMR) data. Additionally, we present a heuristic for reducing the number of synergistic sets to analyse in large datasets by excluding sets with overlapping information. We also illustrate the risks of performing feature selection before assessing synergistic information in the system. Full article
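For intuition about what a "synergistic triplet" is, the textbook example is XOR: neither input alone carries information about the output, yet the pair jointly determines it. A minimal pure-Python O-information calculation (our own illustration, not the paper's pipeline) makes the negative, synergy-dominated balance explicit for this case:

```python
from math import log2
from itertools import product

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of tuples."""
    counts = {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

def o_information(triples):
    """O-information for three variables; negative => synergy-dominated.

    For n = 3, Omega = (n-2) H(XYZ) + sum_i H(X_i) - sum_i H(X_{-i})
                     = H(XYZ) + H(X)+H(Y)+H(Z) - H(YZ) - H(XZ) - H(XY).
    """
    x = [t[0] for t in triples]
    y = [t[1] for t in triples]
    z = [t[2] for t in triples]
    h_xyz = entropy(triples)
    h_singles = (entropy([(a,) for a in x]) + entropy([(a,) for a in y])
                 + entropy([(a,) for a in z]))
    h_pairs = (entropy(list(zip(x, y))) + entropy(list(zip(x, z)))
               + entropy(list(zip(y, z))))
    return h_xyz + h_singles - h_pairs

# Z = X XOR Y over the uniform input distribution: a purely synergistic triplet.
xor_triples = [(a, b, a ^ b) for a, b in product([0, 1], repeat=2)]
print(o_information(xor_triples))  # -1.0: one full bit of synergy
```

For comparison, three identical copies of one bit give O-information +1.0 (pure redundancy), which is why the sign of the synergy-redundancy balance is used to rank triplets.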
Show Figures

Figure 1

Figure 1
<p>Distribution of synergy scores across the YFS dataset and the NMR variables in the UKB data, using two different synergistic metrics and different data binning. The PID-based approach in (<b>a</b>,<b>b</b>) shows a notable skew towards lower values. The data exhibits a heavy right tail, indicating that higher synergy scores are less common. Similarly, in (<b>c</b>,<b>d</b>), the majority of triplets have a very low synergy score (near zero). In the UKB NMR data, only 0.4% have a strongly synergistic score below −0.7, while the minimum synergy score in the YFS data is −0.56. (<b>a</b>) UKB NMR PID-based synergy scores using data discretized into five states via quantile binning. (<b>b</b>) YFS PID-based synergy scores using data discretized into four states via quantile binning. (<b>c</b>) UKB NMR O-information synergy-redundancy balance scores using data discretized into 12 states via quantile binning. (<b>d</b>) YFS NMR O-information synergy-redundancy balance scores using data discretized into 12 states via quantile binning.</p>
Full article ">Figure 2
<p>RMSE scores of the LightGBM model for predicting O-information scores based on different combinations of pairwise features. This plot illustrates the relationship between the number of features and the model’s prediction accuracy for O-information on the UKB NMR data. As expected, increasing the number of features generally leads to improved model accuracy, resulting in lower RMSE scores. A similar trend was found for both synergy metrics and both datasets.</p>
Full article ">Figure 3
<p>Prevalence of echoes in UKB NMR. If we select the top N synergistic sets in the data (<span class="html-italic">y</span>-axis) and then set an MI threshold for similarity (<span class="html-italic">x</span>-axis), the plot shows the proportion of synergistic sets that would remain after filtering. Most applications would lie in the bottom right of this figure.</p>
Full article ">Figure 4
<p>Example of a non-echo. <span class="html-italic">Total Free Cholesterol</span> and <span class="html-italic">Cholesterol in Medium LDL</span> have a strong MI score (red line). <span class="html-italic">Cholesterol in Medium LDL</span> forms a high synergy score with two other variables (green triangle). However, the triplet containing <span class="html-italic">Total Free Cholesterol</span> instead of <span class="html-italic">Cholesterol in Medium LDL</span> does not form a highly synergistic triplet (dashed purple triangle). This shows that if <span class="html-italic">Total Free Cholesterol</span> was used in the search for synergy, but <span class="html-italic">Cholesterol in Medium LDL</span> was not, then this synergistic association would be missed.</p>
Full article ">Figure 5
<p>Examples of highly correlated sets of variables in the UKB NMR data. For each highly synergistic triplet (defined as having less than twice the mean O-information synergy score) containing each variable, we substitute an element from the triplet with a highly correlated variable from this set, and record the percentage of triplets that remain highly synergistic, as indicated by the percentages in the curved blue lines. The arrow on the curved blue lines indicates that the variable (one of the highly correlated variables) in the highly synergistic triplets is replaced with the other variable in the highly correlated pair. Normalized mutual information (nMI) scores are depicted by the red straight lines. (<b>a</b>) illustrates a case where three variables exhibit strong pairwise associations and share similar synergies, while (<b>b</b>) depicts variables with strong pairwise associations but very different synergies, as indicated by the low R scores. (<b>a</b>) Set of triplets with mostly the same synergies. (<b>b</b>) Set of triplets with differing synergies.</p>
Full article ">Figure 6
<p>CDF of PID Scores for different pairwise metrics with the clique search strategy on UKB NMR and YFS datasets. (<b>a</b>) CDF of PID Scores for different pairwise metrics with the clique search strategy on UKB NMR. (<b>b</b>) CDF of PID Scores for different pairwise metrics with the clique search strategy on YFS.</p>
Full article ">Figure 7
<p>CDF of negated O-information Scores for different pairwise metrics with the clique search strategy on UKB NMR and YFS datasets. These plots compare the performance of each strategy against the ground truth synergy scores, with the ground truth CDF highlighted by a thicker blue line for distinction. Percentile thresholds are indicated by dashed lines. Better performance is represented by curves that are initially below and to the right of the solid blue line. (<b>a</b>) CDF of negated O-information scores for different pairwise metrics with the clique search strategy on UKB NMR. (<b>b</b>) CDF of negated O-information scores for different pairwise metrics with the clique search strategy on YFS.</p>
Full article ">Figure 8
<p>Distribution of best synergy scores found in the YFS and UKB NMR data for 100 runs of the SA algorithm using different strategies to guide particle movement. Synergy was computed via the O-information. The SA algorithm used a geometric cooling strategy with random initializations for each run. Lower values (to the left) indicate triplets with more synergistic information were found. Each particle movement strategy converged multiple times to triplets in the overall top 100 synergistic triplets in each dataset. (<b>a</b>) Distribution of best synergy scores found in the YFS data for 100 runs of the SA algorithm. (<b>b</b>) Distribution of best synergy scores found in the UKB NMR data for 100 runs of the SA algorithm.</p>
Full article ">Figure 9
<p>Heatmap of pairwise and higher-order mutual information (MI) for lipid metabolism triplets, with biological groupings. The heatmap displays the pairwise MI (columns 1–3) and the higher-order MI (columns 4–6) for each triplet, where variables are coloured based on their biological cluster. The strong synergies (higher-order MI) between variables from different clusters, such as HDL Cholesterol and Phospholipids in VLDL, highlight cross-functional interactions in lipid metabolism.</p>
Full article ">Figure 10
<p>Network visualization of synergistic interactions between metabolite categories. Each node represents a category of metabolites, and the size of each node reflects the number of variables in the UKB NMR data assigned to that category. Edge thickness represents the number of synergistic triplets between pairs of categories, with thicker edges indicating more synergistic associations between those variable categories. Node colours correspond to distinct categories: Amino Acids, Lipoprotein Particle Sizes, Other Metabolites, Derived Lipid Measures, Fatty Acids, Triglycerides, Lipids and Phospholipids, Lipoprotein Particles and Concentration, and Cholesterol and Esterified Cholesterol. Self-loops, indicating within-category synergy, are positioned outside the nodes to enhance clarity.</p>
Full article ">Figure A1
<p>Distribution of improvement factors for triplet interactions. The improvement factor quantifies the ratio between the joint mutual information of two variables about the third and the sum of their pairwise mutual information. The histogram reveals that most triplets exhibit improvement factors between 3 and 5, with a peak around 4, indicating that synergistic interactions are common. The presence of a long tail with higher values suggests that some triplets exhibit particularly strong synergies, highlighting the importance of higher-order interactions in lipid metabolism.</p>
Full article ">Figure A2
<p>Heatmap of pairwise and higher-order mutual information (MI) for lipid metabolism triplets with significant synergy and large improvement factors, categorized by biological function. The heatmap presents both the pairwise MI (columns 1–3) and the higher-order MI (columns 4–6) for each triplet. Variables are colour-coded according to their biological cluster, showing that these triplets exhibit strong synergistic interactions and large improvement factors. The analysis reveals that interactions between variables from different clusters, such as lipoprotein particle concentration and cholesterol transport, exhibit stronger synergies, suggesting coordinated regulation of lipid transport and metabolism.</p>
Full article ">Figure A3
<p>Initial and final scores for each particle in the PSO algorithm for one parameter setting. The plot shows that the particles generally improve their positions, moving towards lower O-information scores, indicating that the PSO algorithm effectively prevents particles from getting stuck in local minima compared to the SA approach.</p>
Full article ">Figure A4
<p>Training trajectories of three different runs of the SA algorithm. These plots illustrate the performance and challenges faced by the SA in navigating the optimization landscape. The variability in the trajectories indicates the algorithm’s sensitivity to initial conditions and potential difficulties in escaping local minima. (<b>a</b>) No significant improvement: In this plot, the SA algorithm remains trapped in a region of suboptimal solutions throughout the run, failing to find any highly synergistic triplets. The scores fluctuate around the same range, indicating that the algorithm is unable to escape local minima in this particular run. (<b>b</b>) Slow initial improvement: This plot demonstrates a scenario where the SA algorithm initially struggles to find a good solution, resulting in high variability in scores. However, after approximately 4000 steps, the algorithm starts to improve steadily, eventually finding a highly synergistic triplet. (<b>c</b>) Quick improvement: this plot shows a trajectory where the SA algorithm rapidly finds a highly synergistic triplet within the first few thousand steps and maintains the low score with minor fluctuations.</p>
Full article ">Figure A5
<p>Convergence rates of the PSO algorithm with nudge types none and constant. The plots illustrate how the global best score evolves over time for various weight types used in the SA algorithm.</p>
Full article ">Figure A5 Cont.
<p>Convergence rates of the PSO algorithm with nudge types none and constant. The plots illustrate how the global best score evolves over time for various weight types used in the SA algorithm.</p>
Figure A6">
Figure A6
<p>Convergence rates of the PSO algorithm with nudge types exponential decay and adaptive. The plots illustrate how the global best score evolves over time for various weight types used in the SA algorithm.</p>
">
Full article
13 pages, 464 KiB  
Review
Entropy of Neuronal Spike Patterns
by Artur Luczak
Entropy 2024, 26(11), 967; https://doi.org/10.3390/e26110967 - 11 Nov 2024
Abstract
Neuronal spike patterns are the fundamental units of neural communication in the brain, which is still not fully understood. Entropy measures offer a quantitative framework to assess the variability and information content of these spike patterns. By quantifying the uncertainty and informational content [...] Read more.
Neuronal spike patterns are the fundamental units of neural communication in the brain, which is still not fully understood. Entropy measures offer a quantitative framework to assess the variability and information content of these spike patterns. By quantifying the uncertainty and informational content of neuronal patterns, entropy measures provide insights into neural coding strategies, synaptic plasticity, network dynamics, and cognitive processes. Here, we review basic entropy metrics and then provide examples of recent advancements in using entropy as a tool to improve our understanding of neuronal processing, focusing especially on studies of critical dynamics in neural networks and the relation of entropy to predictive coding and cortical communication. We highlight the necessity of expanding entropy measures from single neurons to encompass multi-neuronal activity patterns, as cortical circuits communicate through coordinated spatiotemporal activity patterns, called neuronal packets. We discuss how the sequential and partially stereotypical nature of neuronal packets influences the entropy of cortical communication. Stereotypy reduces entropy by enhancing reliability and predictability in neural signaling, while variability within packets increases entropy, allowing for greater information capacity. This balance between stereotypy and variability supports both robustness and flexibility in cortical information processing. We also review challenges in applying entropy to analyze such spatiotemporal neuronal spike patterns, notably, the “curse of dimensionality” in estimating entropy for high-dimensional neuronal data. Finally, we discuss strategies to overcome these challenges, including dimensionality reduction techniques, advanced entropy estimators, sparse coding schemes, and the integration of machine learning approaches.
Thus, this work summarizes the most recent developments on how entropy measures contribute to our understanding of principles underlying neural coding. Full article
(This article belongs to the Section Multidisciplinary Applications)
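The basic entropy metrics the review starts from can be made concrete with the plug-in (maximum-likelihood) estimator over discretized spike words; a minimal Python sketch on hypothetical binarized data, not data from any study cited here:

```python
import math
from collections import Counter

def pattern_entropy(patterns):
    """Plug-in (maximum-likelihood) Shannon entropy, in bits, of a
    sequence of discrete spike patterns."""
    counts = Counter(patterns)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical binarized spike "words" across 3 neurons in 8 time bins
words = ["101", "101", "011", "101", "000", "011", "101", "000"]
H = pattern_entropy(words)  # 1.5 bits for this toy sequence
```

With N neurons there are 2^N possible words, so this naive estimator is quickly overwhelmed by the curse of dimensionality discussed in the review, which is what motivates the bias-corrected estimators and dimensionality-reduction strategies it surveys.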
Figure 1
<p>Cartoon illustration of neuronal activity packets. (<b>A</b>) Sequential activity patterns (called packets) during deep sleep where activity occurs sporadically. Within each packet, neurons fire with a stereotyped sequential pattern (each neuron marked with different color). (<b>B</b>) In an awake state, when more information is transmitted, packets occur right after each other, without long periods of silence, but temporal relationships between neurons are similar to those in the sleep state. (<b>C</b>) Consistency and variability in neuronal packets (geometrical interpretation). The gray area illustrates the space of all spiking patterns theoretically possible for a packet. The left-side panels show a cartoon of sample packets, each corresponding to a single point in gray space. The white area inside represents the space of packets experimentally observed in the brain. Packets evoked by different sensory stimuli occupy smaller subspaces (colored blobs). The right-side panels illustrate stimulus-evoked packets. The overall structure of evoked packets is similar, with differences in the firing rate and in the spike timing of neurons encoding information about different stimuli (figure modified from [<a href="#B18-entropy-26-00967" class="html-bibr">18</a>]).</p>
">
Full article
7 pages, 220 KiB  
Article
An Information-Theoretic Proof of a Hypercontractive Inequality
by Ehud Friedgut
Entropy 2024, 26(11), 966; https://doi.org/10.3390/e26110966 - 11 Nov 2024
Abstract
The famous hypercontractive estimate discovered independently by Gross, Bonami and Beckner has had a great impact on combinatorics and theoretical computer science since it was first used in this setting in a seminal paper by Kahn, Kalai and Linial. The usual proofs of [...] Read more.
The famous hypercontractive estimate discovered independently by Gross, Bonami and Beckner has had a great impact on combinatorics and theoretical computer science since it was first used in this setting in a seminal paper by Kahn, Kalai and Linial. The usual proofs of this inequality begin with the two-point space, where some elementary calculus is used, and the result is then generalised immediately by introducing another dimension using submultiplicativity (Minkowski’s integral inequality). In this paper, we prove this inequality using information theory. We compare the entropy of a pair of correlated vectors in {0,1}^n to their separate entropies, analysing them bit by bit (not as a figure of speech, but as the bits are revealed) using the chain rule of entropy. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
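The bit-by-bit analysis relies on the chain rule of entropy, H(X, Y) = H(X) + H(Y | X), applied as successive bits are revealed. A quick numerical check of the chain rule on a hypothetical pair of correlated bits (the distribution below is illustrative, not from the paper):

```python
import math

def H(dist):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical joint distribution of two correlated bits
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distribution of the first bit
px = {}
for (x, y), p in joint.items():
    px[x] = px.get(x, 0.0) + p

# Conditional entropy H(Y|X) = sum_x p(x) * H(Y | X = x)
H_cond = sum(
    px[x] * H({y: joint[(x, y)] / px[x] for y in (0, 1)}) for x in px
)

chain_rule_gap = H(joint) - (H(px) + H_cond)  # ~0 by the chain rule
```

Here each conditional distribution is Bernoulli(0.8), so H(Y | X) is the binary entropy h(0.8) ≈ 0.722 bits, and the joint entropy decomposes exactly as the chain rule predicts.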
3 pages, 529 KiB  
Correction
Correction: Herrera Romero, R.; Bastarrachea-Magnani, M.A. Phase and Amplitude Modes in the Anisotropic Dicke Model with Matter Interactions. Entropy 2024, 26, 574
by Ricardo Herrera Romero and Miguel Angel Bastarrachea-Magnani
Entropy 2024, 26(11), 965; https://doi.org/10.3390/e26110965 - 11 Nov 2024
Abstract
The authors wish to make the following correction to this published paper [...] Full article
(This article belongs to the Special Issue Current Trends in Quantum Phase Transitions II)
Figure 3
<p>Polariton modes of the anisotropic Dicke model as a function of the coupling for (<b>a1</b>) TC limit (<math display="inline"><semantics> <mrow> <mi>ξ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>), (<b>b1</b>) anisotropic case (<math display="inline"><semantics> <mrow> <mi>ξ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>), and (<b>c1</b>) Dicke limit (<math display="inline"><semantics> <mrow> <mi>ξ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>) with material collective interactions at <math display="inline"><semantics> <mrow> <msub> <mi>η</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.9</mn> <msub> <mi>ω</mi> <mn>0</mn> </msub> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>η</mi> <mrow> <mi>z</mi> <mi>y</mi> </mrow> </msub> <mo>=</mo> <mn>0.0</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>η</mi> <mrow> <mi>z</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mo>−</mo> <mn>0.9</mn> <msub> <mi>ω</mi> <mn>0</mn> </msub> </mrow> </semantics></math>). The critical coupling <math display="inline"><semantics> <msubsup> <mi>γ</mi> <mrow> <mi>ξ</mi> <mi>x</mi> </mrow> <mi>c</mi> </msubsup> </semantics></math> (<math display="inline"><semantics> <msubsup> <mi>γ</mi> <mrow> <mi>ξ</mi> <mi>y</mi> </mrow> <mi>c</mi> </msubsup> </semantics></math>) is indicated by the vertical solid black (dotted red) line. (<b>a2</b>–<b>a4</b>,<b>b2</b>–<b>b4</b>,<b>c2</b>–<b>c4</b>) depict the corresponding energy surfaces for the respective cases. The vertical dashed purple line shows the position of energy surfaces in the energy spectrum in the normal phases (<b>a2</b>–<b>c2</b>). The yellow one indicates the location of energy surfaces in the superradiant phase (<b>a3</b>–<b>c3</b>), while the blue line represents higher values of light–matter couplings (<b>a4</b>–<b>c4</b>). 
Green points in the energy surfaces represent energy minima, red ones indicate maxima and yellow points denote saddle points. Tilde variables are scaled to <math display="inline"><semantics> <msub> <mi>ω</mi> <mn>0</mn> </msub> </semantics></math>. All cases are calculated in resonance (<math display="inline"><semantics> <mrow> <mi>ω</mi> <mo>=</mo> <msub> <mi>ω</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
">
Full article
25 pages, 1557 KiB  
Article
Evidential Analysis: An Alternative to Hypothesis Testing in Normal Linear Models
by Brian Dennis, Mark L. Taper and José M. Ponciano
Entropy 2024, 26(11), 964; https://doi.org/10.3390/e26110964 - 10 Nov 2024
Abstract
Statistical hypothesis testing, as formalized by 20th century statisticians and taught in college statistics courses, has been a cornerstone of 100 years of scientific progress. Nevertheless, the methodology is increasingly questioned in many scientific disciplines. We demonstrate in this paper how many of [...] Read more.
Statistical hypothesis testing, as formalized by 20th century statisticians and taught in college statistics courses, has been a cornerstone of 100 years of scientific progress. Nevertheless, the methodology is increasingly questioned in many scientific disciplines. We demonstrate in this paper how many of the worrisome aspects of statistical hypothesis testing can be ameliorated with concepts and methods from evidential analysis. The model family we treat is the familiar normal linear model with fixed effects, embracing multiple regression and analysis of variance, a warhorse of everyday science in labs and field stations. Questions about study design, the applicability of the null hypothesis, the effect size, error probabilities, evidence strength, and model misspecification become more naturally housed in an evidential setting. We provide a completely worked example featuring a two-way analysis of variance. Full article
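The evidence comparison in the worked example is based on differences in the Schwarz information criterion (ΔSIC) between the no-interaction and interaction models. A minimal sketch of that computation for a normal linear model, assuming NumPy and using simulated data rather than the paper's citrus-yield data:

```python
import numpy as np

def sic(y, X):
    """Schwarz information criterion for the normal linear model
    y = X b + e, up to an additive constant shared across models."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ b) ** 2))
    # n*log(sigma_hat^2) is -2x the profiled log-likelihood (+ const)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=60), rng.normal(size=60)
y = 1.0 + 0.5 * x1 - 0.3 * x2 + rng.normal(scale=0.5, size=60)  # no true interaction

X1 = np.column_stack([np.ones(60), x1, x2])           # model 1: no interaction
X2 = np.column_stack([np.ones(60), x1, x2, x1 * x2])  # model 2: with interaction
delta_sic = sic(y, X2) - sic(y, X1)  # positive values favor model 1
```

Bootstrapping this ΔSIC over resampled data sets, as in the paper's Figure 2, turns the point estimate of evidence into an interval.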
Figure 1
<p>Probability density functions (solid curves) of the noncentral F(<math display="inline"><semantics> <mrow> <mi>q</mi> <mrow> <mo>,</mo> <mo> </mo> </mrow> <mi>n</mi> <mo>−</mo> <mi>r</mi> <mrow> <mo>,</mo> <mo> </mo> </mrow> <mi>λ</mi> </mrow> </semantics></math>) distribution for various values of sample size <math display="inline"><semantics> <mi>n</mi> </semantics></math> and the noncentrality parameter <math display="inline"><semantics> <mi>λ</mi> </semantics></math>, as represented in the formula for <math display="inline"><semantics> <mrow> <mi>f</mi> <mfenced> <mi>u</mi> </mfenced> </mrow> </semantics></math> in the text, Equation (30). Here, <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mi>n</mi> <msup> <mi>δ</mi> <mn>2</mn> </msup> </mrow> </semantics></math>, which is in the common form of a simple experimental design, where <math display="inline"><semantics> <mi>n</mi> </semantics></math> is the number of observations and <math display="inline"><semantics> <mrow> <msup> <mi>δ</mi> <mn>2</mn> </msup> </mrow> </semantics></math> is a generalized squared per-observation effect size. The cumulative distribution function of the noncentral F distribution, exemplified here as the area under each density curve to the left of the dashed vertical line, is a monotone decreasing function of <math display="inline"><semantics> <mi>n</mi> </semantics></math>. 
Here, <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>r</mi> <mo>=</mo> <mn>12</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msup> <mi>δ</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mi>n</mi> </semantics></math> has the values <math display="inline"><semantics> <mrow> <mn>24</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mn>36</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mn>48</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mn>60</mn> </mrow> </semantics></math>. Dashed curve is the density function for the F(<math display="inline"><semantics> <mrow> <mi>q</mi> <mrow> <mo>,</mo> <mo> </mo> </mrow> <mi>n</mi> <mo>−</mo> <mi>r</mi> <mrow> <mo>,</mo> <mo> </mo> </mrow> <mi>λ</mi> </mrow> </semantics></math>) distribution with <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>24</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>δ</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (central F distribution). Notice that for a given effect size, the noncentral distribution increasingly diverges from the central distribution as sample size increases.</p>
Figure 2">
Figure 2
<p>Curves: Estimated cdf of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>SIC</mi> </mrow> </semantics></math> for the citrus tree example (two-factor analysis of variance, <a href="#entropy-26-00964-t001" class="html-table">Table 1</a>, with model 1 representing no interactions, model 2 representing interactions) using parametric (solid) and nonparametric (dashed) bootstrap with <math display="inline"><semantics> <mrow> <mn>1024</mn> </mrow> </semantics></math> bootstrap samples. Dotted horizontal lines depict 0.05 and 0.95 levels.</p>
Figure 3">
Figure 3
<p>The effect of sample size on the uncertainty of an evidential estimation. The data are simulated from the estimated model 2 (representing interactions). For each data set, confidence intervals were generated with 1024 bootstraps. To depict the expected behavior of such intervals the confidence points from 1024 simulated data sets are averaged. The vertical lines indicate the average 90% confidence intervals. The open circles and the dashes indicate the average location of the 50% confidence point. The solid horizontal line indicates equal evidence for model 1 and model 2. The dotted horizontal line indicates the pseudo-true difference of Kullback–Leibler divergences in the simulations.</p>
Figure 4">
Figure 4
<p>Interaction plot. An interaction plot is a graphical display of the potential magnitude and location of interaction in a linear model. For a two-factor ANOVA, a basic interaction plot displays a central measure for each cell (generally mean or median) on the <span class="html-italic">Y</span>-axis plotted against a categorical factor indicated on the <span class="html-italic">X</span>-axis. The second factor is indicated by lines joining cells that share a factor level. If there is no interaction, these lines will be parallel. The stronger an interaction, the greater the deviation from parallelism will be. Of course, some deviation may result from error in the estimation of cell central values. As a consequence, interaction plots often include a display, such as a boxplot or confidence interval, of the uncertainty in the estimate of cell central value. In this figure, we plot 95% confidence intervals of cell means. Because replication is low (2 observations per cell), we calculate these intervals using a pooled estimate of the standard error. We further enhance this plot by including confidence intervals on the slope of the lines. If one considers any value within an interval for a central value a plausible value, a line from any plausible central value to any plausible value in the next interval represents a plausible slope. The maximum plausible slope runs from the lower bound on the left to the upper bound on the right. Similarly, the minimum plausible slope runs from the upper bound on the left to the lower bound on the right. If the intervals on central values are confidence intervals, then these maximum and minimum plausible slopes are themselves a pair of confidence bounds on the slopes whose confidence level is equal to the square of the central value interval confidence level. Since in the figure we are using 95% intervals on the cell means, the confidence level on slopes is 90.5%. 
In the case study of citrus yields, the interaction plot readily shows that small changes in the cell mean yields, well within the uncertainties in the cell means, could make all lines parallel. This interpretation matches the quantitative estimate of very low evidence for interactions.</p>
">
Full article
17 pages, 610 KiB  
Article
Sensitivity of Bayesian Networks to Noise in Their Parameters
by Agnieszka Onisko and Marek J. Druzdzel
Entropy 2024, 26(11), 963; https://doi.org/10.3390/e26110963 - 9 Nov 2024
Abstract
There is a widespread belief in the Bayesian network (BN) community that the overall accuracy of the results of BN inference is not too sensitive to the precision of their parameters. We present the results of several experiments in which we put this [...] Read more.
There is a widespread belief in the Bayesian network (BN) community that the overall accuracy of the results of BN inference is not too sensitive to the precision of their parameters. We present the results of several experiments in which we put this belief to the test in the context of medical diagnostic models. We study the deterioration of accuracy under random symmetric noise as well as biased noise that represents the overconfidence and underconfidence of human experts. Our results demonstrate consistently, across all models studied, that while noise leads to a deterioration of accuracy, small amounts of noise have minimal effect on the diagnostic accuracy of BN models. Overconfidence, common among human experts, appears to be safer than symmetric noise and much safer than underconfidence in terms of the resulting accuracy. Noise in medical laboratory results and disease nodes, as well as in nodes forming the Markov blanket of the disease nodes, has the largest effect on accuracy. In light of these results, knowledge engineers need worry only moderately about the overall quality of the numerical parameters of BNs and should direct their effort where it is most needed, as indicated by sensitivity analysis. Full article
(This article belongs to the Special Issue Bayesian Network Modelling in Data Sparse Environments)
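A sketch of one plausible way to generate such noise (the paper's exact transform may differ): perturb each conditional probability on the log-odds scale, which keeps the result inside (0, 1). A bias term pushing probabilities away from 0.5 models overconfidence; one pulling them toward 0.5 models underconfidence.

```python
import math
import random

def perturb(p, sigma, bias=0.0, rng=random):
    """Perturb a probability on the log-odds scale with Gaussian noise.

    sigma sets the noise level; a positive bias pushes p away from 0.5
    (overconfidence), a negative bias pulls it toward 0.5
    (underconfidence). Illustrative only -- not necessarily the
    paper's transform.
    """
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)
    logodds = math.log(p / (1.0 - p))
    shift = bias if logodds >= 0 else -bias  # bias acts away from 0.5
    noisy = logodds + shift + rng.gauss(0.0, sigma)
    return 1.0 / (1.0 + math.exp(-noisy))

# Symmetric noise at sigma = 0.1 on a hypothetical CPT entry of 0.85:
noisy_p = perturb(0.85, 0.1)
```

Applying this map to every parameter of a model and re-running inference reproduces the kind of noise-versus-accuracy curves shown in the figures.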
Figure 1
<p>The <span class="html-small-caps">Hepar II</span> network. Colors represent the role of each node: yellow are disorder nodes, blue are risk factors, history, and demographic data, and green are symptoms, signs, and laboratory tests.</p>
Figure 2">
Figure 2
<p>Example BN models learned from data: <span class="html-small-caps">Hepatitis</span> (<b>left</b>) and <span class="html-small-caps">Breast Cancer</span> (<b>right</b>). The yellow nodes (<span class="html-italic">Class</span> and <span class="html-italic">recurrence</span>) represent class variables.</p>
Figure 3">
Figure 3
<p>Scatterplots of the original (horizontal axis) vs. transformed (vertical axis) probabilities for the <span class="html-small-caps">Hepar II</span> model.</p>
Figure 4">
Figure 4
<p>Scatterplots of the original (horizontal axis) vs. transformed (vertical axis) probabilities for <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>. The top two plots show symmetric noise, the middle two plots show overconfidence, the bottom two plots show underconfidence.</p>
Figure 5">
Figure 5
<p>The average posteriors for the true diagnoses as a function of unbiased (symmetric) noise.</p>
Figure 6">
Figure 6
<p>The posterior probabilities of <span class="html-small-caps">Hepar II</span> disorders as a function of <math display="inline"><semantics> <mi>σ</mi> </semantics></math> on a single patient case. The lines represent posterior probabilities of the 11 disorders.</p>
Figure 7">
Figure 7
<p>The diagnostic accuracy of the eight models (clock-wise: <span class="html-small-caps">Hepar II</span>, <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Primary Tumor</span>, <span class="html-small-caps">Acute Inflammation</span>, <span class="html-small-caps">Cardiotocography</span>, <span class="html-small-caps">Breast Cancer</span>, <span class="html-small-caps">Hepatitis</span>, and <span class="html-small-caps">Spect Heart</span>) as a function of the amount of unbiased (symmetric) and biased (overconfidence and underconfidence) noise.</p>
Figure 8">
Figure 8
<p>The diagnostic accuracy of various semantic parts (physical examinations, laboratory results, history, and disorders) of the <span class="html-small-caps">Hepar II</span> model as a function of the amount of unbiased (symmetric) noise.</p>
Figure 9">
Figure 9
<p>An example of a Bayesian network model with a Markov blanket of the node <span class="html-italic">X</span>. The nodes in green depict the Markov blanket of the node in red.</p>
Figure 10">
Figure 10
<p>The diagnostic accuracy of the four models (clock-wise: <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Spect Heart</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">Hepatitis</span>) as a function of the amount of unbiased (symmetric) noise.</p>
Figure 11">
Figure 11
<p>The diagnostic accuracy of the four models (clock-wise: <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Spect Heart</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">Hepatitis</span>) as a function of the amount of biased noise (overconfidence).</p>
Figure 12">
Figure 12
<p>The diagnostic accuracy of the four models (clock-wise: <span class="html-small-caps">Lymphography</span>, <span class="html-small-caps">Spect Heart</span>, <span class="html-small-caps">Primary Tumor</span>, and <span class="html-small-caps">Hepatitis</span>) as a function of the amount of biased noise (underconfidence).</p>
">
Full article
14 pages, 2887 KiB  
Article
Machine Learning-Assisted Hartree–Fock Approach for Energy Level Calculations in the Neutral Ytterbium Atom
by Kaichen Ma, Chen Yang, Junyao Zhang, Yunfei Li, Gang Jiang and Junjie Chai
Entropy 2024, 26(11), 962; https://doi.org/10.3390/e26110962 - 8 Nov 2024
Abstract
Data-driven machine learning approaches with precise predictive capabilities are proposed to address the long-standing challenges in the calculation of complex many-electron atomic systems, including high computational costs and limited accuracy. In this work, we develop a general workflow for machine learning-assisted atomic structure [...] Read more.
Data-driven machine learning approaches with precise predictive capabilities are proposed to address the long-standing challenges in the calculation of complex many-electron atomic systems, including high computational costs and limited accuracy. In this work, we develop a general workflow for machine learning-assisted atomic structure calculations based on the Cowan code’s Hartree–Fock with relativistic corrections (HFR) theory. The workflow incorporates enhanced ElasticNet and XGBoost algorithms, refined using entropy weight methodology to optimize performance. This semi-empirical framework is applied to calculate and analyze the excited state energy levels of the 4f closed-shell Yb I atom, providing insights into the applicability of different algorithms under various conditions. The reliability and advantages of this innovative approach are demonstrated through comprehensive comparisons with ab initio calculations, experimental data, and other theoretical results. Full article
(This article belongs to the Section Multidisciplinary Applications)
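The entropy weight methodology referred to here is a standard multi-criteria scheme: criteria whose values are more evenly spread across samples have higher normalized entropy and therefore carry less weight. A generic sketch, not tied to the paper's actual feature set (assumes non-negative criterion values):

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: each column of `matrix` is a criterion
    evaluated on the rows (samples). Criteria with more dispersion
    across samples (lower normalized entropy) receive larger weights."""
    n, m = len(matrix), len(matrix[0])
    raw = []
    for j in range(m):
        col = [row[j] for row in matrix]  # non-negative values assumed
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        raw.append(1.0 - e)  # degree of divergence of criterion j
    total = sum(raw)
    return [w / total for w in raw]
```

A perfectly uniform criterion (same value for every sample) gets weight zero, since it cannot distinguish the samples at all.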
Figure 1
<p>Step-by-step diagram of the machine learning-assisted atomic structure calculation workflow. Step 1: Initial calculation; Step 2: data extraction; Step 3: data preparation; Step 4: machine learning fitting calculations; Step 5: parameter refinement; Step 6: results evaluation.</p>
Figure 2">
Figure 2
<p>Errors of even-parity excited state energy levels for Yb I calculated using ab initio method, RAEN, XGB-R, XGB-B, and XGB-C. Top marginal histograms show error distributions for each method. The vertical axis is broken (indicated by //) to accommodate the large range of errors.</p>
Figure 3">
Figure 3
<p>Errors in even-parity excited state energy levels of Yb I calculated by XGB-R, XGB-B, and XGB-C models with an expanded training set. Top marginal histograms show error distributions for each method. The vertical axis is broken (indicated by //) to accommodate the large range of errors.</p>
Figure 4">
Figure 4
<p>Errors in small-sample even-parity excited state energy levels of Yb I calculated using Cowan fit, RAEN, XGB-R, XGB-B, and XGB-C methods. Top marginal histograms show error distributions for each method. The vertical axis is broken (indicated by //) to accommodate the large range of errors.</p>
">
Full article
17 pages, 5532 KiB  
Article
Permutation Entropy: An Ordinal Pattern-Based Resilience Indicator for Industrial Equipment
by Christian Salas, Orlando Durán, José Ignacio Vergara and Adolfo Arata
Entropy 2024, 26(11), 961; https://doi.org/10.3390/e26110961 - 8 Nov 2024
Abstract
In a highly dynamic and complex environment where risks and uncertainties are inevitable, the ability of a system to quickly recover from disturbances and maintain optimal performance is crucial for ensuring operational continuity and efficiency. In this context, resilience has become an increasingly [...] Read more.
In a highly dynamic and complex environment where risks and uncertainties are inevitable, the ability of a system to quickly recover from disturbances and maintain optimal performance is crucial for ensuring operational continuity and efficiency. In this context, resilience has become an increasingly important topic in the field of engineering and the management of productive systems. However, there is no single quantitative indicator of resilience that allows for the measurement of this characteristic in a productive system. This study proposes the use of permutation entropy of ordinal patterns in time series as an indicator of resilience in industrial equipment and systems. Based on the definition of resilience, the developed method enables precise and efficient assessment of a system’s ability to withstand and recover from disturbances. The methodology includes the identification of ordinal patterns and their analysis through the calculation of a permutation entropy indicator to characterize the dynamics of industrial systems. Case studies are presented and the results are compared with other resilience models existing in the literature, aiming to demonstrate the effectiveness of the proposed approach. The results are promising and highlight a highly applicable and simple indicator for resilience in industrial systems. Full article
(This article belongs to the Special Issue Ordinal Pattern-Based Entropies: New Ideas and Challenges)
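Permutation entropy in the Bandt–Pompe sense is simple to compute; a minimal sketch with the embedding dimension m and delay τ as parameters, matching the τ = 1, 10-point lookback windows of the case study:

```python
import math
from collections import Counter

def permutation_entropy(x, m=3, tau=1, normalize=True):
    """Permutation entropy (Bandt-Pompe): Shannon entropy of the
    distribution of ordinal patterns of length m with delay tau."""
    patterns = [
        tuple(sorted(range(m), key=lambda k: x[i + k * tau]))
        for i in range(len(x) - (m - 1) * tau)
    ]
    n = len(patterns)
    h = -sum((c / n) * math.log(c / n) for c in Counter(patterns).values())
    return h / math.log(math.factorial(m)) if normalize else h

# A perfectly monotone segment has a single ordinal pattern -> entropy 0;
# an irregular segment mixes many patterns -> normalized entropy near 1.
```

Evaluating this over successive 10-point windows of a performance time series would produce per-segment values like those shown in the case-study figures.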
Figure 1
<p>Geometry of ordinal patterns. (<b>a</b>) Delay equal to one in the detection of ordinal patterns. (<b>b</b>) Delay equal to two.</p>
Figure 2">
Figure 2
<p>Pattern combinations.</p>
Figure 3">
Figure 3
<p>System performance after a shock.</p>
Figure 4">
Figure 4
<p>Transition diagrams. Nodes represent the specific states of the system; arrows indicate the direction and type of transition between states. The numbers next to the arrows indicate the type of transition according to the classification in the analysis.</p>
Figure 5">
Figure 5
<p>Resilience, ranges, and mean per segment, <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <span class="html-italic">lookback</span> = 10.</p>
Figure 6">
Figure 6
<p>Identified patterns for the 10 data points in Segment 1.</p>
Figure 7">
Figure 7
<p>Identified patterns for the 10 data points in Segment 2.</p>
Figure 8">
Figure 8
<p>Identified patterns for the 10 data points in Segment 3.</p>
Figure 9">
Figure 9
<p>Identified patterns for the 10 data points in Segment 4.</p>
Figure 10">
Figure 10
<p>Identified patterns for the 10 data points in Segment 5.</p>
Figure 11">
Figure 11
<p>Resilience, ranges, and mean per segment of the actual time series.</p>
Figure 12">
Figure 12
<p>Correlation and Bland–Altman plots with values obtained by Methods A and B.</p>
Figure 13">
Figure 13
<p>Correlation and Bland–Altman plots with values obtained by Methods A and C.</p>
Figure 14">
Figure 14
<p>Resilience values obtained by the three methods under comparison.</p>
">
Full article
23 pages, 5276 KiB  
Article
Generalized Gaussian Distribution Improved Permutation Entropy: A New Measure for Complex Time Series Analysis
by Kun Zheng, Hong-Seng Gan, Jun Kit Chaw, Sze-Hong Teh and Zhe Chen
Entropy 2024, 26(11), 960; https://doi.org/10.3390/e26110960 - 7 Nov 2024
Abstract
To enhance the performance of entropy algorithms in analyzing complex time series, generalized Gaussian distribution improved permutation entropy (GGDIPE) and its multiscale variant (MGGDIPE) are proposed in this paper. First, the generalized Gaussian distribution cumulative distribution function is employed for data normalization to [...] Read more.
To enhance the performance of entropy algorithms in analyzing complex time series, generalized Gaussian distribution improved permutation entropy (GGDIPE) and its multiscale variant (MGGDIPE) are proposed in this paper. First, the generalized Gaussian distribution cumulative distribution function is employed for data normalization to enhance the algorithm’s applicability across time series with diverse distributions. The algorithm further processes the normalized data using improved permutation entropy, which maintains both the absolute magnitude and temporal correlations of the signals, overcoming the equal value issue found in traditional permutation entropy (PE). Simulation results indicate that GGDIPE is less sensitive to parameter variations, exhibits strong noise resistance, accurately reveals the dynamic behavior of chaotic systems, and operates significantly faster than PE. Real-world data analysis shows that MGGDIPE provides markedly better separability for RR interval signals, EEG signals, bearing fault signals, and underwater acoustic signals compared to multiscale PE (MPE) and multiscale dispersion entropy (MDE). Notably, in underwater target recognition tasks, MGGDIPE achieves a classification accuracy of 97.5% across four types of acoustic signals, substantially surpassing the performance of MDE (70.5%) and MPE (62.5%). Thus, the proposed method demonstrates exceptional capability in processing complex time series. Full article
(This article belongs to the Special Issue Ordinal Pattern-Based Entropies: New Ideas and Challenges)
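The first step, mapping samples through the generalized Gaussian CDF, has a closed form in terms of the regularized lower incomplete gamma function. A sketch assuming SciPy is available (the moment-based scale below is a simplification standing in for a proper GGD parameter fit):

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def ggd_cdf(x, mu=0.0, alpha=1.0, beta=2.0):
    """CDF of the generalized Gaussian distribution with location mu,
    scale alpha, and shape beta (beta = 2 is Gaussian, beta = 1 Laplacian)."""
    x = np.asarray(x, dtype=float)
    z = np.abs(x - mu) / alpha
    return 0.5 + 0.5 * np.sign(x - mu) * gammainc(1.0 / beta, z ** beta)

# Map a signal into (0, 1) before the ordinal-pattern step; rough
# moment-based location/scale estimates, illustrative only.
signal = np.array([0.3, -1.2, 0.8, 2.1, -0.4])
u = ggd_cdf(signal, mu=signal.mean(), alpha=signal.std(), beta=1.5)
```

Because the CDF is monotone, this normalization preserves the ordinal structure of the series while standardizing its amplitude distribution, which is why the subsequent improved-permutation-entropy step can retain magnitude information.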
Figure 1
<p>Calculation flow chart of generalized Gaussian distribution improved permutation entropy (GGDIPE).</p>
Figure 2">
Figure 2
<p>The GGDIPE analysis for three types of noise across varying embedding dimensions. (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>.</p>
Figure 3">
Figure 3
<p>Analysis results of PE and DispEn for three types of noise across various embedding dimensions. (<b>a</b>) PE analysis result; (<b>b</b>) DispEn analysis result.</p>
Figure 4">
Figure 4
<p>The GGDIPE analysis for three types of noise across varying <math display="inline"><semantics> <mi>L</mi> </semantics></math> (<b>a</b>) <span class="html-italic">L</span> = 2; (<b>b</b>) <span class="html-italic">L</span> = 4; (<b>c</b>) <span class="html-italic">L</span> = 6; (<b>d</b>) <span class="html-italic">L</span> = 8.</p>
Full article ">Figure 5
<p>The GGDIPE analysis for three types of noise across varying data length. (<b>a</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>0.9</mn> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>1.5</mn> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.1</mn> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.9</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Analysis results of PE and DispEn for three types of noise across various data length. (<b>a</b>) PE analysis result; (<b>b</b>) DispEn analysis result.</p>
Full article ">Figure 7
<p>The GGDIPE analysis for noisy Lorenz signals. (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Analysis results of PE and DispEn for noisy Lorenz signals. (<b>a</b>) PE analysis result; (<b>b</b>) DispEn analysis result.</p>
Full article ">Figure 9
<p>Analysis results of applying various entropy algorithms to the Logistic model.</p>
Full article ">Figure 10
<p>GGDIPE analysis results for the RR intervals of healthy young and healthy elderly subjects.</p>
Full article ">Figure 11
<p>Analysis results of PE and DispEn for the RR intervals of healthy young and healthy elderly subjects. (<b>a</b>) PE analysis result; (<b>b</b>) DispEn analysis result.</p>
Full article ">Figure 12
<p>The MGGDIPE analysis for EEG signals. (<b>a</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>0.9</mn> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>1.5</mn> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.1</mn> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.9</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 13
<p>The MPE and MDE analysis for EEG signals. (<b>a</b>) MPE analysis result; (<b>b</b>) MDE analysis result.</p>
Full article ">Figure 14
<p>The MGGDIPE analysis for bearing fault signals. (<b>a</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>0.9</mn> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>1.5</mn> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.1</mn> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.9</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 15
<p>The MPE and MDE analysis for bearing fault signals. (<b>a</b>) MPE analysis result; (<b>b</b>) MDE analysis result.</p>
Full article ">Figure 16
<p>The MGGDIPE analysis for underwater acoustic signals. (<b>a</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>0.9</mn> </mrow> </semantics></math> (<b>b</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>1.5</mn> </mrow> </semantics></math> (<b>c</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.1</mn> </mrow> </semantics></math> (<b>d</b>) <math display="inline"><semantics> <mrow> <mi mathvariant="normal">β</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.9</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 17
<p>The MPE and MDE analysis for underwater acoustic signals. (<b>a</b>) MPE analysis result; (<b>b</b>) MDE analysis result.</p>
Full article ">Figure A1
<p>Probability density function of generalized Gaussian distribution.</p>
Full article ">
15 pages, 355 KiB  
Article
Exact Expressions for Kullback–Leibler Divergence for Univariate Distributions
by Victor Nawa and Saralees Nadarajah
Entropy 2024, 26(11), 959; https://doi.org/10.3390/e26110959 - 7 Nov 2024
Viewed by 465
Abstract
The Kullback–Leibler divergence (KL divergence) is a statistical measure that quantifies the difference between two probability distributions. Specifically, it assesses the amount of information that is lost when one distribution is used to approximate another. This concept is crucial in various fields, including [...] Read more.
The Kullback–Leibler divergence (KL divergence) is a statistical measure that quantifies the difference between two probability distributions. Specifically, it assesses the amount of information that is lost when one distribution is used to approximate another. This concept is crucial in various fields, including information theory, statistics, and machine learning, as it helps in understanding how well a model represents the underlying data. In a recent study, Nawa and Nadarajah derived a comprehensive collection of exact expressions for the Kullback–Leibler divergence for multivariate and matrix-variate distributions. The present work complements that study by providing precise formulations for over sixty univariate distributions. The authors also ensured the accuracy of these expressions through numerical checks, which adds a layer of validation to their findings. The derived expressions incorporate various special functions, highlighting the mathematical complexity and richness of the topic. This research contributes to a deeper understanding of KL divergence and its applications in statistical analysis and modeling. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
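The numerical checks described in the abstract can be illustrated for one of the simplest cases. For exponential densities p(x) = a·e^(−ax) and q(x) = b·e^(−bx), the closed form KL(p‖q) = log(a/b) + b/a − 1 is standard; the sketch below (not the authors' code — the integration bounds and step count are illustrative) compares it against a crude midpoint-rule evaluation of ∫ p log(p/q) dx:

```python
import math

def kl_exponential(a, b):
    """Closed-form KL(p || q) for exponential rates a (of p) and b (of q)."""
    return math.log(a / b) + b / a - 1.0

def kl_numeric(a, b, upper=100.0, steps=200_000):
    """Midpoint-rule approximation of the KL integral on [0, upper]."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        p = a * math.exp(-a * x)
        q = b * math.exp(-b * x)
        total += p * math.log(p / q) * h
    return total
```

For a = 1, b = 3 both routes give ≈ 0.9014, matching 2 − log 3.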
Figure 1: Differences between the values of (1) obtained by numerical integration and using the derived expressions, for the chi distribution with m = 1 and n = 1, 2, …, 100 (top left); the exponential distribution with a = 1 and b = 1, 2, …, 100 (top right); the generalized logistic distribution of type I with a = 1 and b = 1, 2, …, 100 (bottom left); and the power function distribution of type I with a = 1 and b = 1, 2, …, 100 (bottom right).
Figure 2: Differences between the values of (1) obtained by numerical integration and using the derived expressions, for the chi distribution with n = 1 and m = 1, 2, …, 100 (top left); the exponential distribution with b = 1 and a = 1, 2, …, 100 (top right); the generalized logistic distribution of type I with b = 1 and a = 1, 2, …, 100 (bottom left); and the power function distribution of type I with b = 1 and a = 1, 2, …, 100 (bottom right).
14 pages, 321 KiB  
Article
On the Negative Result Experiments in Quantum Mechanics
by Kenichi Konishi
Entropy 2024, 26(11), 958; https://doi.org/10.3390/e26110958 - 7 Nov 2024
Viewed by 314
Abstract
We comment on the so-called negative result experiments (also known as null measurements, interaction-free measurements, and so on) in quantum mechanics (QM), in the light of the new general understanding of the quantum-measurement processes, proposed recently. All experiments of this kind (null measurements) [...] Read more.
We comment on the so-called negative result experiments (also known as null measurements, interaction-free measurements, and so on) in quantum mechanics (QM), in the light of the new general understanding of quantum-measurement processes proposed recently. All experiments of this kind (null measurements) can be understood as improper measurements with an intentionally biased detector set-up, which introduces exclusion or selection of certain events. The prediction about the state of a microscopic system under study based on a null measurement is sometimes dramatically described as “wave-function collapse without any microsystem-detector interactions”. Though certainly correct, such a prediction is just a consequence of the standard QM laws, no different from the situation in the so-called state-preparation procedure. Another closely related concept is that of (first-class, or) repeatable measurements. The verification of the prediction made by a null measurement eventually requires a standard unbiased measurement involving microsystem–macroscopic detector interactions, which are nonadiabatic, irreversible processes of signal amplification. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness V)
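The Elitzur–Vaidman statistics mentioned in Figure 1 follow from a two-amplitude calculation. The sketch below uses one common 50/50 beamsplitter convention and assumes the bomb sits in arm b; the port labels are chosen so that, without a bomb, interference sends every photon to the bright port D1, making a click at the dark port D2 an interaction-free certification of a live bomb:

```python
def beamsplitter(a, b):
    """One common 50/50 beamsplitter convention acting on two path amplitudes."""
    s = 1.0 / 2.0 ** 0.5
    return s * (a + 1j * b), s * (1j * a + b)

def bomb_tester(bomb_present):
    """Return (P_explode, P_D1, P_D2) for the Mach-Zehnder bomb tester.

    D1 is the bright port (all photons arrive there when no bomb is present);
    D2 is the dark port, so a D2 click certifies a live bomb without interaction.
    """
    a, b = beamsplitter(1.0, 0.0)   # first beamsplitter splits the photon
    p_explode = 0.0
    if bomb_present:
        p_explode = abs(b) ** 2     # the bomb absorbs the amplitude in arm b
        b = 0.0
    a, b = beamsplitter(a, b)       # second beamsplitter recombines the paths
    return p_explode, abs(b) ** 2, abs(a) ** 2
```

With a live bomb this reproduces the textbook outcome: explosion with probability 1/2, and detection at D1 or D2 with probability 1/4 each.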
Figure 1: Elitzur–Vaidman bomb-tester experiment. The photon enters from the lower left corner into a Mach–Zehnder interferometer. Detection of the photon at detector D2 implies that the bomb is real, but that the photon has not interacted with the bomb.
Figure 2: The standard SG set-up.
Figure 3: The modified SG set-up.
19 pages, 4252 KiB  
Article
Information Propagation in Hypergraph-Based Social Networks
by Hai-Bing Xiao, Feng Hu, Peng-Yue Li, Yu-Rong Song and Zi-Ke Zhang
Entropy 2024, 26(11), 957; https://doi.org/10.3390/e26110957 - 6 Nov 2024
Viewed by 327
Abstract
Social networks, functioning as core platforms for modern information dissemination, manifest distinctive user clustering behaviors and state transition mechanisms, thereby presenting new challenges to traditional information propagation models. Based on hypergraph theory, this paper augments the traditional SEIR model by introducing a novel [...] Read more.
Social networks, functioning as core platforms for modern information dissemination, manifest distinctive user clustering behaviors and state transition mechanisms, thereby presenting new challenges to traditional information propagation models. Based on hypergraph theory, this paper augments the traditional SEIR model by introducing a novel hypernetwork information dissemination SSEIR model specifically designed for online social networks. This model accurately represents complex, multi-user, high-order interactions. It transforms the traditional single susceptible state (S) into active (Sa) and inactive (Si) states. Additionally, it enhances traditional information dissemination mechanisms through reaction process strategies (RP strategies) and formulates refined differential dynamical equations, effectively simulating the dissemination and diffusion processes in online social networks. Employing mean field theory, this paper conducts a comprehensive theoretical derivation of the dissemination mechanisms within the SSEIR model. The effectiveness of the model in various network structures was verified through simulation experiments, and its practicality was further validated by its application on real network datasets. The results show that the SSEIR model excels in data fitting and illustrating the internal mechanisms of information dissemination within hypernetwork structures, further clarifying the dynamic evolutionary patterns of information dissemination in online social hypernetworks. This study not only enriches the theoretical framework of information dissemination but also provides a scientific theoretical foundation for practical applications such as news dissemination, public opinion management, and rumor monitoring in online social networks. Full article
(This article belongs to the Special Issue Spreading Dynamics in Complex Networks)
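The paper's refined differential dynamical equations are not given in the abstract. As a hypothetical baseline, the classical mean-field SEIR dynamics that SSEIR extends — before the susceptible state is split into active (Sa) and inactive (Si) and the RP strategies are added — can be integrated with a simple forward-Euler scheme (parameter values below are illustrative):

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=0.1):
    """One forward-Euler step of the mean-field SEIR equations (population fractions).

    beta: transmission rate; sigma: E -> I transition rate; gamma: recovery rate.
    """
    ds = -beta * s * i
    de = beta * s * i - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

def simulate(beta=0.4, sigma=0.2, gamma=0.1, steps=2000):
    """Integrate from a small seed of spreaders; returns the final state fractions."""
    s, e, i, r = 0.99, 0.0, 0.01, 0.0
    for _ in range(steps):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma)
    return s, e, i, r
```

Because the four derivatives sum to zero, the population fraction is conserved at every step, a useful sanity check for any extension such as SSEIR.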
Figure 1: Evolutionary schematic of the hypernetwork model (m = 2; m1 = 3). Blue solid lines indicate existing hyperedges, green nodes denote existing nodes, red dashed lines depict new hyperedges added in the current time step, and blue nodes signify new nodes added during the current time step.
Figure 2: SEIR model state transition diagram. In the context of information dissemination, the green section represents the S-state, indicating unawareness of the information; the dark blue section is the E-state, where individuals are aware of but not spreading the information; the purple section denotes the I-state, where individuals actively spread the information; and the light blue section represents the R-state, indicating immunity to the information.
Figure 3: SSEIR model state transition diagram. Dark green denotes the Si-state, light green the Sa-state, dark blue the E-state, purple the I-state, and light blue the R-state.
Figure 4: Comparison chart of theoretical and simulation trends in information dissemination. The green, red, and light blue dashed lines represent theoretical values for the R-, I-, and E-states; green, red, and light blue star-shaped markers denote the corresponding simulation results.
Figure 5: Trends of information dissemination across different network models. Deep blue denotes the Si-state, black the Sa-state, light blue the E-state, red the I-state, and green the R-state. (A) displays the theoretical curves of the model; (B) applies the model to a hypernetwork, (C) to a BA scale-free network, and (D) to an NW small-world network.
Figure 6: Impact of different θ on the quantities of I-state and E-state. (A) displays the effect on the I-state; (B) shows the effect on the E-state. The green curve corresponds to a spreading rate of 0.005, the red to 0.03, and the blue to 0.05.
Figure 7: Effects of different ε on the quantities of I-state and R-state. (A) shows the effect of the recovering rate on the I-state; (B) details the effect on the R-state. The green curve indicates ε = 0.04, the red 0.02, and the blue 0.01.
Figure 8: Impact of varying average numbers of adjacent nodes on the quantities of I-state and R-state. (A) details the effects on the I-state; (B) details the effects on the R-state. The green curve denotes m = 3, m1 = 7; the red curve m = 2, m1 = 5; the blue curve m = 1, m1 = 3.
Figure 9: Impact of different ratios of active (Sa) to inactive (Si) nodes on the quantities of I-state and R-state. (A) details the effect on the R-state; (B) details the effect on the I-state. The green curve indicates Sa:Si = 4:6, the red Sa:Si = 3:7, and the blue Sa:Si = 2:8.
Figure 10: Time-dependent curves of active users in different information dissemination models at a fixed transmission rate. The blue curve represents the trend in the number of I-state users in the SIR model, the red curve the E-state and I-state users in the SEIR model, and the green curve the E-state and I-state users in the SSEIR model.
Figure 11: Change curves of different states of the SSEIR model under various real networks. (A) shows validation of the SSEIR model in a scientific collaboration network; (B) depicts validation in a Twitter social network. Green, red, and blue curves represent the R-state, I-state, and E-state, respectively.
15 pages, 8736 KiB  
Article
Research on Classification and Identification of Crack Faults in Steam Turbine Blades Based on Supervised Contrastive Learning
by Qinglei Zhang, Laifeng Tang, Jiyun Qin, Jianguo Duan and Ying Zhou
Entropy 2024, 26(11), 956; https://doi.org/10.3390/e26110956 - 6 Nov 2024
Viewed by 376
Abstract
Steam turbine blades may crack, break, or suffer other failures due to high temperatures, high pressures, and high-speed rotation, which seriously threatens the safety and reliability of the equipment. The signal characteristics of different fault types are slightly different, making it difficult to [...] Read more.
Steam turbine blades may crack, break, or suffer other failures due to high temperatures, high pressures, and high-speed rotation, which seriously threatens the safety and reliability of the equipment. Because the signal characteristics of different fault types differ only slightly, it is difficult to accurately classify the faults of rotating blades directly from vibration signals. The method proposed here combines a one-dimensional convolutional neural network (1DCNN) with a channel attention mechanism (CAM): the 1DCNN effectively extracts local features of time-series data, while the CAM assigns different weights to each channel to highlight key features. To further enhance feature extraction and classification accuracy, a projection head is introduced that systematically maps all sample features into a normalized space, thereby improving the model’s capacity to distinguish between distinct fault types. Finally, through the optimization of a supervised contrastive learning (SCL) strategy, the model can better capture the subtle differences between fault types. Experimental results show that the proposed method achieves accuracies of 99.61%, 97.48%, and 96.22% in the classification of multiple crack fault types at three speeds, significantly better than Multilayer Perceptron (MLP), Residual Network (ResNet), Momentum Contrast (MoCo), and Transformer methods. Full article
(This article belongs to the Section Multidisciplinary Applications)
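Supervised contrastive learning pulls projections of same-class samples together and pushes different-class ones apart in the normalized space. A pure-Python sketch of the SupCon loss (in the form introduced by Khosla et al.) is shown below; the temperature value and the toy features are illustrative, and this is not the authors' implementation:

```python
import math

def supcon_loss(features, labels, tau=0.1):
    """Supervised contrastive loss on L2-normalized feature vectors."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    n = len(features)
    # L2-normalize, mirroring a projection head that maps onto the unit sphere
    z = []
    for f in features:
        norm = math.sqrt(dot(f, f))
        z.append([x / norm for x in f])
    total, count = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        for p in positives:
            total += -math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
            count += 1
    return total / count if count else 0.0
```

Features whose class structure matches the labels yield a much lower loss than the same features with mismatched labels, which is exactly the gradient signal the classifier benefits from.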
Figure 1: System framework diagram. The figure illustrates a fault diagnosis framework based on vibration signal data: vibration signals of blade crack faults are collected, a fault dataset is constructed and Gaussian noise is added; the 1DCNN is combined with CAM to extract fault features; a projection head maps all sample features into a normalized space, enhancing the model’s ability to distinguish between fault types; and contrastive loss and cross-entropy loss are calculated through supervised contrastive learning to complete the fault classification.
Figure 2: Vibration signals before and after adding Gaussian noise. (a,c,e,g,i) show the “normal” signal and the “crack1”, “crack2”, “crack3”, and “fracture” fault signals without Gaussian noise; (b,d,f,h,j) show the same five signals after adding Gaussian noise.
Figure 3: Dynamic test bench of rotor system with integral shroud blade.
Figure 4: (a–c) show the accuracy of MLP, ResNet, MoCo, Transformer, and the proposed method at 1400 r/min, 1800 r/min, and 2200 r/min, respectively.
Figure 5: (a–c) show the confusion matrix results at 1400 r/min, 1800 r/min, and 2200 r/min. Diagonal values give the number of correctly predicted samples; off-diagonal values give the number of misclassified samples.
Figure 6: t-SNE feature visualization: (a) 1400 r/min, (b) 1800 r/min, (c) 2200 r/min.
13 pages, 577 KiB  
Article
Identifying Key Nodes for the Influence Spread Using a Machine Learning Approach
by Mateusz Stolarski, Adam Piróg and Piotr Bródka
Entropy 2024, 26(11), 955; https://doi.org/10.3390/e26110955 - 6 Nov 2024
Viewed by 360
Abstract
The identification of key nodes in complex networks is an important topic in many network science areas. It is vital to a variety of real-world applications, including viral marketing, epidemic spreading and influence maximization. In recent years, machine learning algorithms have proven to [...] Read more.
The identification of key nodes in complex networks is an important topic in many network science areas. It is vital to a variety of real-world applications, including viral marketing, epidemic spreading and influence maximization. In recent years, machine learning algorithms have proven to outperform the conventional, centrality-based methods in accuracy and consistency, but this approach still requires further refinement. What information about the influencers can be extracted from the network? How can we precisely obtain the labels required for training? Can these models generalize well? In this paper, we answer these questions by presenting an enhanced machine learning-based framework for the influence spread problem. We focus on identifying key nodes for the Independent Cascade model, which is a popular reference method. Our main contribution is an improved process of obtaining the labels required for training by introducing “Smart Bins” and proving their advantage over known methods. Next, we show that our methodology allows ML models to not only predict the influence of a given node, but to also determine other characteristics of the spreading process—another novelty in the relevant literature. Finally, we extensively test our framework and its ability to generalize across complex networks of different types and sizes, gaining important insight into the properties of these methods. Full article
(This article belongs to the Section Multidisciplinary Applications)
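Labels in this setting come from simulating the Independent Cascade model: each newly activated node gets exactly one chance to activate each inactive neighbour, with probability p. A minimal Monte Carlo sketch is below; the graph layout, parameter values, and the averaging helper are illustrative, not taken from the paper:

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=random):
    """One Monte Carlo run of the Independent Cascade model.

    graph: dict mapping node -> list of neighbours; seeds: initially active nodes.
    Returns the set of all nodes activated by the end of the cascade.
    """
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in graph.get(u, []):
                # each edge gets a single activation attempt
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return active

def influence(graph, seeds, p=0.1, runs=1000, seed=0):
    """Average cascade size over repeated runs - the quantity used as a label."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(graph, seeds, p, rng)) for _ in range(runs)) / runs
```

Averaging over many runs gives the expected spread of a seed set, which can then be discretized (e.g., with the paper's Smart Bins) into training labels.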
Figure 1: The framework for identifying influential nodes.
Figure 2: Smart bins—clustering discretization on the Facebook network. The dashed line marks the top 5% of the nodes. Nodes are colored according to the class they fit into, with the blue ones being the top class (most influential nodes).
Figure 3: Machine learning algorithm comparison with mean values aggregated across all the experiments.
Figure 4: LightGBM performance in various tasks.
Figure 5: Model generalization.
Figure 6: Comparison of Smart Bins (clustering discretization) and Fixed Bins (an arbitrary choice of the top 5% of nodes).
Figure 7: Feature importance.
19 pages, 2580 KiB  
Article
A Hybrid Quantum-Classical Model for Stock Price Prediction Using Quantum-Enhanced Long Short-Term Memory
by Kimleang Kea, Dongmin Kim, Chansreynich Huot, Tae-Kyung Kim and Youngsun Han
Entropy 2024, 26(11), 954; https://doi.org/10.3390/e26110954 - 6 Nov 2024
Viewed by 428
Abstract
The stock markets have become a popular topic within machine learning (ML) communities, with one particular application being stock price prediction. However, accurately predicting the stock market is a challenging task due to the various factors within financial markets. With the introduction of [...] Read more.
The stock markets have become a popular topic within machine learning (ML) communities, with one particular application being stock price prediction. However, accurately predicting the stock market is a challenging task due to the various factors within financial markets. With the introduction of ML, prediction techniques have become more efficient but computationally demanding for classical computers. Given the rise of quantum computing (QC), which holds great promise for being exponentially faster than current classical computers, it is natural to explore ML within the QC domain. In this study, we leverage a hybrid quantum-classical ML approach to predict a company’s stock price. We integrate classical long short-term memory (LSTM) with QC, resulting in a new variant called QLSTM. We initially validate the proposed QLSTM model by leveraging an IBM quantum simulator running on a classical computer, after which we conduct predictions using an IBM real quantum computer. Thereafter, we evaluate the performance of our model using the root mean square error (RMSE) and prediction accuracy. Additionally, we perform a comparative analysis, evaluating the prediction performance of the QLSTM model against several other classical models. Further, we explore the impacts of hyperparameters on the QLSTM model to determine the best configuration. Our experimental results demonstrate that while the classical LSTM model achieved an RMSE of 0.0693 and a prediction accuracy of 0.8815, the QLSTM model exhibited superior performance, achieving values of 0.0602 and 0.9736, respectively. Furthermore, the QLSTM outperformed other classical models in both metrics. Full article
(This article belongs to the Special Issue The Future of Quantum Machine Learning and Quantum AI)
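The data encoding step mentioned in the abstract (Figure 3's angle encoding) maps classical inputs onto qubit rotations. A minimal sketch follows; the RY-only encoding and the product-state construction are illustrative assumptions, since the paper's circuit also uses H and RZ gates and adds entangling variational layers afterwards.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(features):
    """Map each classical feature x_i onto its own qubit as RY(x_i)|0>,
    returning the joint statevector (a product state; entanglement would
    only appear in the variational layers that follow)."""
    state = np.array([1.0])
    zero = np.array([1.0, 0.0])
    for x in features:
        state = np.kron(state, ry(x) @ zero)
    return state
```

For n features this yields a normalized 2^n-dimensional statevector, e.g. a feature of 0 leaves the qubit in |0> and a feature of pi flips it to |1>.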
Show Figures

Figure 1: Schematic of the internal structure of a classical LSTM cell.
Figure 2: Four different methods for integrating ML with QC.
Figure 3: Angle encoding quantum circuit.
Figure 4: The overall architecture of the proposed QLSTM for stock closing price prediction.
Figure 5: A QLSTM cell consists of VQCs as replacements for LSTM gates.
Figure 6: The general architecture of a single variational quantum circuit (VQC): U(x) represents the quantum operations for encoding classical data (x), and U(θ) represents the repetition of variational layers, from 1 to i, each with tunable parameters θ. The final layer is a measurement layer employed to obtain the VQC probability distribution.
Figure 7: Variational quantum circuit in the QLSTM architecture, as utilized in [12,38]. H, R_y, and R_z denote quantum gates, while x represents the classical input data vector, functioning as a data encoding layer. Parameters (α_i, β_i, γ_i) are adjustable and require optimization. The line connecting • and ⊗ represents a CNOT gate. The circuits conclude with a measurement layer.
Figure 8: Efficient gradient computation through the parameter-shift rule on a VQC.
Figure 9: Selected stock price data from 1 January 2022 to 1 January 2023. The training data are depicted on the left side of the blue dashed line, whereas the testing data are on the right side.
Figure 10: The training losses of the Noiseless and Noisy QLSTM, classical LSTM, and other models on the stock price dataset over 50 epochs, where the loss function descends to its lowest point near zero.
Figure 11: Comparison of the prediction performance of QLSTM in various quantum environments with classical LSTM and other models using 20 stock price data points.
Figure 12: Training accuracy and loss of QLSTM models across different numbers of qubits, from 4 to 15. Green bars represent accuracy, while red bars denote losses in RMSE.
71 pages, 3052 KiB  
Perspective
The Algorithmic Agent Perspective and Computational Neuropsychiatry: From Etiology to Advanced Therapy in Major Depressive Disorder
by Giulio Ruffini, Francesca Castaldo, Edmundo Lopez-Sola, Roser Sanchez-Todo and Jakub Vohryzek
Entropy 2024, 26(11), 953; https://doi.org/10.3390/e26110953 - 6 Nov 2024
Abstract
Major Depressive Disorder (MDD) is a complex, heterogeneous condition affecting millions worldwide. Computational neuropsychiatry offers potential breakthroughs through the mechanistic modeling of this disorder. Using the Kolmogorov theory (KT) of consciousness, we developed a foundational model where algorithmic agents interact with the world [...] Read more.
Major Depressive Disorder (MDD) is a complex, heterogeneous condition affecting millions worldwide. Computational neuropsychiatry offers potential breakthroughs through the mechanistic modeling of this disorder. Using the Kolmogorov theory (KT) of consciousness, we developed a foundational model where algorithmic agents interact with the world to maximize an Objective Function evaluating affective valence. Depression, defined in this context by a state of persistently low valence, may arise from various factors—including inaccurate world models (cognitive biases), a dysfunctional Objective Function (anhedonia, anxiety), deficient planning (executive deficits), or unfavorable environments. Integrating algorithmic, dynamical systems, and neurobiological concepts, we map the agent model to brain circuits and functional networks, framing potential etiological routes and linking with depression biotypes. Finally, we explore how brain stimulation, psychotherapy, and plasticity-enhancing compounds such as psychedelics can synergistically repair neural circuits and optimize therapies using personalized computational models. Full article
Show Figures

Figure 1: Conceptual Roadmap—From agent theory to treatment. The algorithmic agent framework provides conceptual links between first-person experience and information processing and guides the search for the causes of MDD (see Section 2). The translational framework represents the various theoretical functional modules from the perspective of circuits and dynamics, with special emphasis on the clinical biotypes pertaining to MDD. Based on brain circuits and individualized biotypes, different MDD interventions are proposed for treatment.
Figure 2: (a) Generic agent model. The agent interacts dynamically with its environment and is involved in continuously exchanging data with the external world. The Modeling Engine runs the model and makes predictions of future data (both from external interfaces and the agent's own actions). Then, the prediction error (from the Comparator) is evaluated to update the model. The Updater receives prediction errors from the Comparator to improve the model. The Simulator is a shared infrastructure used to run simulations for planning or valence evaluation. The Planning Engine runs counterfactual simulations and selects plans for the next actions (agent outputs). A key agent element is the Objective Function, which the agent aims to optimize through a choice of afferent actions selected by the Planning Engine. (b) MDD agent. Non-exclusive dysfunction across the main agent components (b1–b3) or hostile world inputs (b4) can result in sustained low output values from the Objective Function (low valence, red arrow). Identifying those can help us identify the routes that lead the agent to a depressed state.
Figure 3: From the agent model to brain circuits, functional networks, and biotypes. To highlight the goals of the proposed framework, we provide a tentative map of agent modules to brain regions, circuits, and candidate biotypes based on the current literature. Proposed mapping of the agent model and agent elements (a) to high-level brain circuits (b)—Modeling Engine in pink, Planning Engine in blue, and Objective Function (valence evaluation) in yellow. The high-level circuital model maps onto structural/anatomical circuits (c) and, finally, onto observed functional networks that can be derived from fMRI (d). Features from these networks can be used as biomarkers for the definition of patient clusters or FN biotypes (e) [10,11,12,65,100]. The characteristics of each biotype (alterations in the activity of regions or their connectivity) will reflect alterations of the main agent modules involved. As an example, we display a particular FN biotype (NS_{A+}P_{A+} in [12]), with altered activity in the Objective Function (AMY, BG, and ACC). Most FN biotypes present alterations in more than one agent module.
Figure 4: The agent and dynamical landscape frameworks provide a theoretical basis for the development and interpretation of data analysis methods and mechanistic computational models of the brain. This can be employed for a model-based design of therapeutic interventions such as pharmacology, brain stimulation, or even psychotherapy, as well as their combination. In the figure, we display the process of data assimilation from a group of reference healthy subjects/controls (HCs) and from MDD patients of a particular subtype. The resulting sets of models can guide the design of an intervention for the normalization of the dynamics and connectivity profile of the patients. The treatment can be designed at the group [289] or individual level.
Figure A1: KT and AIF agent models and etiology of MDD. Schematic view of the agent models in KT and FEP/AIF. (a) In KT, the Objective Function is explicit, while in AIF, it is implicit in the loss function (free energy). (b) In AIF, the Planning Engine seeks to minimize free energy or surprise (negative valence). All the agent elements, in either case, can be affected and lead to depression in KT, i.e., persistent low valence: (i) the Modeling Engine creates models leading to low valence; (ii) the Objective Function returns low valence regardless of the models and displays no plasticity to “recalibrate”; (iii) the Planning Engine delivers ineffective or detrimental plans for valence; and (iv) the world provides inputs with an implicit stasis threat that cascades down to low valence. Further details: The agent interacts dynamically with its environment and is involved in a continuous exchange of data with the external world. The Modeling Engine runs the model and makes predictions of future data (both from external interfaces as well as the agent's own actions) and then evaluates the prediction error (from the Comparator C) to update the model. The Simulator (not shown for simplicity) is a shared infrastructure used to run simulations for planning or valence evaluation. The Planning Engine (P) runs counterfactual simulations and selects plans for the next actions (agent outputs). The Updater receives as inputs prediction errors from the Comparator to improve the model. The other agent element is the Objective Function (which may also be plastic), which the agent aims to optimize through a choice of afferent actions selected by the Planning Engine.
Figure A2: The hierarchical agent: Agent diagram displaying hierarchical processing of inputs (as in the predictive coding framework) and with a segregated evaluation of valence, which is also hierarchical, since it relies on the same hierarchical simulation engine.