Entropy, Volume 26, Issue 8 (August 2024) – 75 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 10566 KiB  
Article
The Area Law of Molecular Entropy: Moving beyond Harmonic Approximation
by Amitava Roy, Tibra Ali and Vishwesh Venkatraman
Entropy 2024, 26(8), 688; https://doi.org/10.3390/e26080688 - 14 Aug 2024
Abstract
This article shows that the gas-phase entropy of molecules is proportional to the area of the molecules, with corrections for the different curvatures of the molecular surface. The ability to estimate gas-phase entropy by the area law also allows us to calculate molecular entropy faster and more accurately than the currently popular harmonic oscillator approximation. The speed and accuracy of our method will open up new possibilities for the explicit inclusion of entropy in various computational biology methods. Full article
(This article belongs to the Section Multidisciplinary Applications)
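The fit statistics reported in Figure 1 (RMSE, MAPE, and distance correlation) are standard and easy to reproduce. The sketch below is a generic implementation of those metrics, not the authors' code; variable names are ours.

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predicted and observed values."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mape(pred, obs):
    """Mean absolute percentage error, in percent."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(100.0 * np.mean(np.abs((pred - obs) / obs)))

def distance_correlation(x, y):
    """Empirical distance correlation (Szekely et al.) for 1-D samples."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[:, None]
    a = np.abs(x - x.T)          # pairwise distance matrices
    b = np.abs(y - y.T)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centring
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    return float(np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean())))
```

A perfectly linear relationship gives a distance correlation of 1, so the reported DCOR of 0.97 indicates a near-linear area–entropy relationship.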
Figure 1
<p>(<b>A</b>) <math display="inline"><semantics> <msubsup> <mi>S</mi> <mi>th</mi> <mi>area</mi> </msubsup> </semantics></math> values, the thermodynamical entropies calculated from the area law (Equation (<a href="#FD9-entropy-26-00688" class="html-disp-formula">9</a>)), are plotted against experimental gas-phase entropies for 1942 molecules. The root mean square error (RMSE) between the calculated and experimental entropy is 21.34 J/mol·K. The correlation (DCOR) between the values, calculated using distance correlation (see <a href="#sec2-entropy-26-00688" class="html-sec">Section 2</a>), is 0.97, and the mean average percentage error (MAPE) is 3.94%. The dotted line represents the line where the values of the X and Y axes are equal. (<b>B</b>) Thermodynamic entropies were calculated using G4 quantum chemical calculations with the SHM approximation and plotted against experimental gas-phase entropies for 1529 molecules (blue dots). For the remaining 413 molecules, mainly the larger molecules, the G4 calculations did not converge. The orange dots represent the positional and orientational entropy as a fraction of the calculated total entropy. The error in the calculated entropy increases as the positional and orientational entropy falls below 60% of the calculated total entropy. (<b>C</b>) Thermodynamic entropies were calculated using normal mode analysis (NMA) and plotted against experimental gas-phase entropies for 1665 molecules (blue dots). The parameters could not be generated for the remaining 277 molecules (see Methods). The orange dots represent the positional and orientational entropy as a fraction of the calculated total entropy. (<b>D</b>) The differences in calculated (Y-axis) and experimental (X-axis) entropy of all possible pairs of molecules are plotted as histograms. The area of the circles is proportional to the number of molecule pairs the circle represents. 
Plots (<b>E</b>,<b>F</b>) represent the entropy differences calculated using G4 and NMA, respectively.</p>
Figure 2
<p>(<b>A</b>) shows the shape index (see Equation (<a href="#FD11-entropy-26-00688" class="html-disp-formula">11</a>)) mapped to the molecular surface of benzene for <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. Shape index scales (ranging from −1 to 1) are divided into 9 categories as defined by Koenderink and van Doorn [<a href="#B52-entropy-26-00688" class="html-bibr">52</a>]: (i) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mo>−</mo> <mn>1</mn> <mo>,</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>7</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> shown in green, (ii) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>7</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>5</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> in cyan, (iii) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>5</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>3</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> in blue, (iv) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>3</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>1</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> in pale blue (v) <math 
display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mo>−</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>1</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>1</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> in white, (vi) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>1</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>3</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> in pale yellow, (vii) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>3</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>5</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> in yellow, (viii) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>5</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>7</mn> <mn>8</mn> </mfrac> </mstyle> <mo stretchy="false">)</mo> </mrow> </semantics></math> in orange, and (ix) <math display="inline"><semantics> <mrow> <mi mathvariant="script">S</mi> <mo>∈</mo> <mo stretchy="false">[</mo> <mstyle scriptlevel="0" displaystyle="true"> <mfrac> <mn>7</mn> <mn>8</mn> </mfrac> </mstyle> <mo>,</mo> <mn>1</mn> <mo stretchy="false">)</mo> </mrow> </semantics></math> in red. (<b>B</b>) shows a transparent version of (<b>A</b>), along with the embedded model of benzene represented by sticks.</p>
12 pages, 815 KiB  
Article
Dose Finding in Oncology Trials Guided by Ordinal Toxicity Grades Using Continuous Dose Levels
by Mourad Tighiouart and André Rogatko
Entropy 2024, 26(8), 687; https://doi.org/10.3390/e26080687 - 14 Aug 2024
Abstract
We present a Bayesian adaptive design for dose finding in oncology trials with application to a first-in-human trial. The design is based on the escalation with overdose control principle and uses an intermediate grade 2 toxicity in addition to the traditional binary indicator of dose-limiting toxicity (DLT) to guide dose escalation and de-escalation. We model the dose–toxicity relationship using the proportional odds model. This assumption addresses an important ethical concern when a potentially toxic drug is first introduced in the clinic: if a patient experiences at most grade 2 toxicity, the subsequent dose escalation is smaller than it would be had the patient experienced at most grade 1 toxicity. This results in a more careful dose escalation. The performance of the design was assessed by deriving the operating characteristics under several scenarios for the true MTD and expected proportions of grade 2 toxicities. In general, the trial design is safe and achieves acceptable efficiency of the estimated MTD for a planned sample size of twenty patients. At the time of writing this manuscript, twelve patients have been enrolled in the trial. Full article
(This article belongs to the Special Issue Bayesianism)
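The proportional odds (cumulative logit) model used for the dose–toxicity relationship can be sketched as below; the intercepts and slope are illustrative placeholders, not the trial's fitted parameters.

```python
import math

def prop_odds_probs(dose, alphas, beta):
    """Toxicity-grade probabilities under a proportional odds model.

    alphas: decreasing intercepts, one per cumulative threshold;
    all thresholds P(grade >= j) share the single slope beta.
    Returns [P(lowest grade), ..., P(top grade)].
    """
    # cumulative probabilities P(grade >= j), j = 1..len(alphas)
    cum = [1.0 / (1.0 + math.exp(-(a + beta * dose))) for a in alphas]
    probs = [1.0 - cum[0]]
    probs += [cum[j - 1] - cum[j] for j in range(1, len(cum))]
    probs.append(cum[-1])
    return probs

# e.g. three ordered outcomes: <= grade 1, grade 2, DLT (grades 3-4)
p = prop_odds_probs(dose=2.0, alphas=[1.0, -1.0], beta=0.5)
```

Sharing one slope across thresholds is what makes the odds "proportional": a grade 2 event shifts the whole cumulative curve, so the resulting escalation is automatically more cautious than after a grade 1 event at most.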
Figure 1
<p>Possible dose allocation for patients 1 and 2 and selected situations for patients 3 and 4. G = 3, 4 corresponds to DLT.</p>
Figure 2
<p>Summary statistics for trial safety and efficiency under ordinal toxicity and binary toxicity models under all scenarios.</p>
Figure A1
<p>Dose–toxicity relationship under the proportional odds model (black solid curve) and non-proportional odds models <math display="inline"><semantics> <msub> <mi>M</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>M</mi> <mn>2</mn> </msub> </semantics></math> when the true MTD is <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>7.76</mn> </mrow> </semantics></math> mg/kg. The black dashed curve corresponds to the probability of grade 2 or more toxicity.</p>
16 pages, 2616 KiB  
Article
Wandering Drunkards Walk after Fibonacci Rabbits: How the Presence of Shared Market Opinions Modifies the Outcome of Uncertainty
by Nicolas Maloumian
Entropy 2024, 26(8), 686; https://doi.org/10.3390/e26080686 - 13 Aug 2024
Abstract
Shared market opinions and beliefs among market participants generate a set of constraints that mediate information through a not-so-unstable system of expected target prices. Price trajectories, within these sets of constraints, confirm or disprove the likelihood of participant expectations and cannot, de facto, be considered permutable, as the literature has shown, since their inner structure is dynamically affected by their own progress, suggesting per se the presence of both heat and cycles. This study describes and discusses how trajectories are built using different alphabets and suggests that prices follow an ergodic course within structurally similar tessellation classes. It is reported that the courses of price moves are self-similar due to their a priori structure, and they do not need to be complete in order to create the conditions, in resembling conditions, for the appearance of the well-known and commonly used Fibonacci ratios between price trajectories. To date, financial models and engineering are mostly based on the mathematics of randomness. While these theoretical findings still need empirical validation, such a potential infrastructure of ratios would suggest the possibility for a superstructure to exist, in other words, the emergence of exploitable patterns. Full article
(This article belongs to the Special Issue Complexity in Financial Networks)
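Fibonacci counts arise whenever chains over a small alphabet are built under an adjacency constraint. As one illustrative construction (our assumption, not necessarily the paper's exact alphabet), binary chains with no two consecutive 1s are counted by the Fibonacci numbers:

```python
from itertools import product

def count_constrained_chains(n):
    """Count binary chains of length n containing no '11' substring."""
    return sum(1 for bits in product('01', repeat=n) if '11' not in ''.join(bits))

def fib(k):
    """Fibonacci numbers with F(1) = F(2) = 1."""
    a, b = 1, 1
    for _ in range(k - 1):
        a, b = b, a + b
    return a

# count_constrained_chains(n) equals fib(n + 2) for every n
```

Ratios of consecutive chain counts therefore converge to the golden ratio, which is one generic route by which Fibonacci ratios can surface in constrained trajectory classes.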
Figure 1
<p>Different possible combinations after six trades, ‘a’ marking any price change, ‘H’ marking no change.</p>
Figure 2
<p>Different possible combinations posting a six-letter chain.</p>
Figure 3
<p>Different ways to see the same rise in price from a bottom to a top as depicted in ‘a’, with ‘b’ presenting an l-composition (l standing for left) of ‘a’, and with ‘c’ presenting an r-composition (r standing for right) of ‘a’. On both ‘b’ and ‘c’, each horizontal line is an ‘H’.</p>
Figure 4
<p>Grouping different ‘tiles’ of an eight-letter chain composed with ‘1’s’ (one tick up or down) and ‘0’s’ (no change). (<b>A</b>) shows the set of tiles for moves going up, and (<b>B</b>) shows the set of tiles for moves going down.</p>
Figure 5
<p>Probability structure of Fibonacci <span class="html-italic">n</span>-letter chains, or classes, probability levels on the y-axis.</p>
Figure 6
<p>How the tile sets T{9}, T{8}, and T{7} are related. Black squares indicate no price change (‘H’ or ‘0’), and all other squares indicate a price change of one unit of price (‘1’).</p>
14 pages, 723 KiB  
Article
Dynamic Injection and Permutation Coding for Enhanced Data Transmission
by Kehinde Ogunyanda, Opeyemi O. Ogunyanda and Thokozani Shongwe
Entropy 2024, 26(8), 685; https://doi.org/10.3390/e26080685 - 13 Aug 2024
Abstract
In this paper, we propose a novel approach to enhance spectral efficiency in communication systems by dynamically adjusting the mapping between cyclic permutation coding (CPC) and its injected form. By monitoring channel conditions such as interference levels and impulsive noise strength, the system optimises the coding scheme to maximise data transmission reliability and efficiency. The CPC method employed in this work maps information bits onto non-binary symbols in a cyclic manner, aiming to improve the Hamming distance between mapped symbols. To address challenges such as low data rates inherent in permutation coding, injection techniques are introduced by removing δ column(s) from the CPC codebook. Comparative analyses demonstrate that the proposed dynamic adaptation scheme outperforms conventional permutation coding and injection schemes. Additionally, we present a generalised mathematical expression to describe the relationship between the spectral efficiencies of both coding schemes. This dynamic approach ensures efficient and reliable communication in environments with varying levels of interference and impulsive noise, highlighting its potential applicability to systems like power line communications. Full article
(This article belongs to the Special Issue New Advances in Error-Correcting Codes)
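A minimal sketch of the two code constructions in the abstract: cyclic permutation coding maps the M information symbols to cyclic shifts of one base permutation, and injection removes δ column(s) from the resulting codebook to raise the rate. This is a schematic reading of the scheme, not the authors' implementation.

```python
def cpc_codebook(M):
    """Row i is the base permutation (0, ..., M-1) cyclically shifted by i."""
    base = list(range(M))
    return [base[i:] + base[:i] for i in range(M)]

def inject(codebook, delta):
    """Injection: drop the last delta column(s) of every codeword."""
    return [row[:len(row) - delta] for row in codebook]

def min_hamming(codebook):
    """Minimum pairwise Hamming distance of a codebook."""
    n = len(codebook)
    return min(sum(a != b for a, b in zip(codebook[i], codebook[j]))
               for i in range(n) for j in range(i + 1, n))
```

Two distinct cyclic shifts of a permutation disagree in every position, so the full codebook has minimum distance M; injecting δ columns trades that margin (distance M − δ) for shorter, higher-rate codewords, which is the rate/reliability dial the dynamic adaptation scheme turns.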
Figure 1
<p>Rate gain vs. <span class="html-italic">M</span> at various <math display="inline"><semantics> <mi>δ</mi> </semantics></math> values.</p>
Figure 2
<p>Logical topology for dynamic adaptation.</p>
Figure 3
<p>Interference level <math display="inline"><semantics> <mi>γ</mi> </semantics></math> vs. BER for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
Figure 4
<p><math display="inline"><semantics> <mrow> <msub> <mi>E</mi> <mi>b</mi> </msub> <mo>/</mo> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics></math> vs. BER at <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Γ</mo> <mo>→</mo> <mo>∞</mo> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
Figure 5
<p><math display="inline"><semantics> <mrow> <msub> <mi>E</mi> <mi>b</mi> </msub> <mo>/</mo> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics></math> vs. BER at <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Γ</mo> <mo>=</mo> <mn>0.99</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
Figure 6
<p><math display="inline"><semantics> <mrow> <msub> <mi>E</mi> <mi>b</mi> </msub> <mo>/</mo> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics></math> vs. BER at <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Γ</mo> <mo>=</mo> <mn>0.99</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
Figure 7
<p>Interference level <math display="inline"><semantics> <mi>γ</mi> </semantics></math> vs. BER for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
Figure 8
<p><math display="inline"><semantics> <mrow> <msub> <mi>E</mi> <mi>b</mi> </msub> <mo>/</mo> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics></math> vs. BER at <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Γ</mo> <mo>→</mo> <mo>∞</mo> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
Figure 9
<p><math display="inline"><semantics> <mrow> <msub> <mi>E</mi> <mi>b</mi> </msub> <mo>/</mo> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics></math> vs. BER at <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Γ</mo> <mo>=</mo> <mn>0.99</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
Figure 10
<p><math display="inline"><semantics> <mrow> <msub> <mi>E</mi> <mi>b</mi> </msub> <mo>/</mo> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics></math> vs. BER at <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Γ</mo> <mo>=</mo> <mn>0.99</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>M</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> codebooks.</p>
21 pages, 422 KiB  
Review
Domain-Agnostic Representation of Side-Channels
by Aaron Spence and Shaun Bangay
Entropy 2024, 26(8), 684; https://doi.org/10.3390/e26080684 - 13 Aug 2024
Abstract
Side channels are unintended pathways within target systems that leak internal target information. Side-channel sensing (SCS) is the process of exploiting side channels to extract embedded target information. SCS is well established within the cybersecurity (CYB) domain, and has recently been proposed for medical diagnostics and monitoring (MDM). Its applicability to human–computer interaction (HCI), among other domains (Misc), remains unrecognised. This article analyses literature demonstrating SCS examples across the MDM, HCI, Misc, and CYB domains. Despite their diversity, established fields of advanced sensing and signal processing underlie each example, enabling the unification of these currently isolated domains. Identified themes are collated under a proposed domain-agnostic SCS framework. This SCS framework enables a formalised and systematic approach to studying, detecting, and exploiting side channels both within and between domains. Opportunities exist for modelling SCS as data structures, allowing for computation irrespective of domain. Future methodologies can take such data structures to enable cross- and intra-domain transferability of extraction techniques, perform side-channel leakage detection, and discover new side channels within target systems. Full article
(This article belongs to the Special Issue An Information-Theoretic Approach to Side-Channel Analysis)
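The abstract's suggestion of modelling SCS as data structures invites a simple sketch. The record type below is our own illustrative schema (the field names are assumptions, not the paper's formal components), populated with the heart-rate example from Figure 4.

```python
from dataclasses import dataclass

@dataclass
class SideChannel:
    """One side channel, described independently of its home domain."""
    target_system: str        # the system leaking information
    emission: str             # the unintended pathway carrying the leak
    sensor: str               # how the emission is captured
    target_information: str   # what is inferred from the captured signal
    domain: str               # MDM, HCI, CYB, or Misc

# Figure 4's example: heart rate inferred from skin colour variance
heart_rate = SideChannel(
    target_system="human body",
    emission="skin colour variance on the forehead",
    sensor="smartphone camera",
    target_information="heart rate",
    domain="MDM",
)
```

A shared record shape like this is what would let extraction techniques be compared, and potentially transferred, across domains that currently never cite each other.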
Figure 1
<p>An illustration of the domain-agnostic SCS process.</p>
Figure 2
<p>Side-channel properties describe the structure and behaviour of side channels and their embedded signals.</p>
Figure 3
<p>The SCS framework unifies the themes from each individual domain.</p>
Figure 4
<p>A side channel connecting heartbeats and skin colour variance. A smartphone quantifies this colour variance on a person’s forehead to infer heart rate (the target information). Side channels can be understood following the SCS framework components.</p>
30 pages, 2097 KiB  
Article
Incoherence: A Generalized Measure of Complexity to Quantify Ensemble Divergence in Multi-Trial Experiments and Simulations
by Timothy Davey
Entropy 2024, 26(8), 683; https://doi.org/10.3390/e26080683 - 13 Aug 2024
Abstract
Complex systems confound the typical scientific process. Their non-linear relationships mean they cannot be broken into smaller, more manageable parts. They are also highly sensitive to initial conditions, making reproducibility a challenge. Both make it crucial that we can easily identify when a system is acting as complex. Here we propose an information-theory-based measure which quantifies the uncertainty of any ensemble model arising from complex dynamics. We first compare this measure (named incoherence) to commonly used statistical tests across both continuous and discrete data. Then, we briefly investigate how incoherence can be used to quantify key characteristics of complexity such as criticality and perturbation. Full article
(This article belongs to the Special Issue Information and Self-Organization III)
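Several of the comparisons below pit incoherence against the Jensen–Shannon divergence and its normalized form. A minimal sketch of JS over an ensemble of discrete distributions follows; normalizing by log2 of the ensemble size is one standard upper bound, used here for illustration (the figure captions normalize by the maximum entropy, JS/H_max).

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def js_divergence(dists):
    """Jensen-Shannon divergence of an ensemble: H(mixture) - mean H."""
    n, k = len(dists), len(dists[0])
    mixture = [sum(d[i] for d in dists) / n for i in range(k)]
    return entropy(mixture) - sum(entropy(d) for d in dists) / n

def normalized_js(dists):
    """JS divided by its upper bound log2(#distributions), giving [0, 1]."""
    return js_divergence(dists) / math.log2(len(dists))
```

Identical distributions give JS = 0, while fully disjoint ones reach the bound, which is why only the normalized variants stay within [0, 1] across the extremes compared in Figure 1.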
Figure 1
<p>Here, we compare <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>[</mo> <mo>(</mo> <mn>1</mn> <mo>−</mo> <mi>x</mi> <mo>)</mo> <mo>,</mo> <mi>x</mi> <mo>]</mo> <mo>,</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> <mo>]</mo> </mrow> </semantics></math> to demonstrate the shape of these key measures across a full range of difference. The first thing to note is that the variants of incoherence and Jensen–Shannon divergence are the only measures that are normalized within these extremes. We can also see based on the difference in incoherence and T (<a href="#FD11-entropy-26-00683" class="html-disp-formula">11</a>) how dividing by <math display="inline"><semantics> <mover accent="true"> <mi>H</mi> <mo stretchy="false">˜</mo> </mover> </semantics></math> affects the lower values.</p>
Figure 2
<p>This figure looks at an identity matrix of size <span class="html-italic">x</span>. Incoherence and the normalized Jensen–Shannon divergence (<math display="inline"><semantics> <mrow> <mi>J</mi> <mi>S</mi> <mo>/</mo> <msub> <mi>H</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </semantics></math>) are the only measures to be consistent and bounded at 1.</p>
Figure 3
<p>This figure compares a single high-entropy distribution <math display="inline"><semantics> <mrow> <mo>[</mo> <mn>0.5</mn> <mo>,</mo> <mn>0.5</mn> <mo>]</mo> </mrow> </semantics></math> with <span class="html-italic">x</span> low-entropy <math display="inline"><semantics> <mrow> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> </mrow> </semantics></math> distributions. What we see is incoherence (and all measures) reduce as the ensemble becomes more homogeneous. But, we should also note how incoherence reduces much more slowly as it emphasizes outliers more. This is in part due to squaring the divergence (known as taking the second moment) and dividing by <math display="inline"><semantics> <mover accent="true"> <mi>H</mi> <mo stretchy="false">˜</mo> </mover> </semantics></math> (as can be seen compared to <span class="html-italic">T</span> (<a href="#FD11-entropy-26-00683" class="html-disp-formula">11</a>)).</p>
Figure 4
<p>This figure looks at <span class="html-italic">x</span>-many duplicated highest-entropy and lowest-entropy distributions. The first thing to note is how all the divergences are consistent to the additional distributions, which is to be expected. The only exception to this is the <math display="inline"><semantics> <msup> <mi>χ</mi> <mn>2</mn> </msup> </semantics></math> value, which counter-intuitively increases (i.e., becomes less confident) with more distributions. This is because more distributions require more data in the <math display="inline"><semantics> <msup> <mi>χ</mi> <mn>2</mn> </msup> </semantics></math> interpretation.</p>
Figure 5
<p>This figure compares one minimum entropy to one maximum entropy distribution, each of length <span class="html-italic">x</span>, so that when <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>, we have <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mn>1</mn> <mo>,</mo> <mn>1</mn> <mo>]</mo> <mo>,</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> <mo>]</mo> </mrow> </semantics></math>. In this case, incoherence is the only measure that sees this comparison as equivalent and so is invariant.</p>
Figure 6
<p>This figure compares two minimum entropy distributions of length <span class="html-italic">x</span>, such that when <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>, we have <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> <mo>,</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> <mo>]</mo> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> gives <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>[</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> <mo>,</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> <mo>]</mo> </mrow> </semantics></math>. Here, incoherence decreases. Unlike Jensen–Shannon divergence, it sees the consistency of the additional zeros as a measure of coherence.</p>
Figure 7
<p>The <b>top row</b> shows how all standard measures are good at detecting a difference in <math display="inline"><semantics> <mi>μ</mi> </semantics></math> of multiple Gaussian distributions. The <b>middle row</b>, however, shows that only incoherence and the standard deviation of standard deviations <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>(</mo> <mi>σ</mi> <mo>)</mo> </mrow> </semantics></math> can detect a difference in <math display="inline"><semantics> <mi>σ</mi> </semantics></math>. Even then, the <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>(</mo> <mi>σ</mi> <mo>)</mo> </mrow> </semantics></math> value only works in this case because it is comparative, as the value is relative to the change; therefore, it is hard to obtain a baseline using real-world data. Meanwhile, the (<b>bottom</b>) graph shows how incoherence behaves between its bounds <math display="inline"><semantics> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>]</mo> </mrow> </semantics></math>.</p>
Figure 8
<p>In this Figure, we look at cases where incoherence should be consistent. The <b>top row</b> shows that low volumes of points taken from a random distribution cause incoherence to rise. The <b>middle row</b> shows how incoherence is bounded by 1 when approximating an identity matrix in a continuous format. However, this bound is lower when analyzing too few distributions, as this approximation breaks down. Meanwhile, the <b>bottom row</b> shows how consistent the values are, even with increasing numbers of distributions. Also, the value of incoherence is the same as in its equivalent discrete case in <a href="#entropy-26-00683-f005" class="html-fig">Figure 5</a>, showing the consistency of the measure across discrete and continuous cases.</p>
Figure 9
<p>Rule 218 cellular automata. There is a heatmap of all 10 trial results on the left, with two of those individual trials to the right. This rule is Wolfram class 2, which is typically considered not complex. However, the Wolfram classes look at the state of an individual sample as it evolves over time (i.e., comparing rows of cells downward), whereas here, we have been comparing the state across time (the entire 2D grid) against other samples. For the other examples (rules 220, 0, and 30), there is a high correlation between these two perspectives. However, this rule highlights where these views diverge and the versatility of incoherence. For instance, we can calculate the incoherence from the Wolfram perspective by treating downward rows as an ensemble of trials. In this case, we find that in most trials, there is a low incoherence, since over time, the patterns are the same and highly predictable. This means that there <span class="html-italic">can</span> be a strong agreement with the Wolfram classes and incoherence. However, taking an ensemble perspective over multiple samples, we can see that each trial is highly sensitive to initial conditions and behaves very differently, to the extreme in the difference between the central and right graphs, leading to a high incoherence of <math display="inline"><semantics> <mrow> <mn>0.59</mn> </mrow> </semantics></math> and an unpredictability that was not measured based on previous analysis.</p>
Full article ">Figure 10
<p>Rule 0 cellular automata. This is Wolfram class 1, where the results are highly ordered and uniform. We see that no matter the initial state of the cells, every iteration leads to the same outcome, meaning this is very coherent and ordered, with an incoherence value of <math display="inline"><semantics> <mrow> <mn>0.00</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Rule 30 cellular automata. This is Wolfram class 3, where the results are highly disordered and unpredictable, in this case creating a high level of entropy. Here, each iteration is not exactly the same; however, there is no clear, distinct, or ordered pattern, leading to a low incoherence value of <math display="inline"><semantics> <mrow> <mn>0.01</mn> </mrow> </semantics></math>.</p>
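The elementary cellular automata in these captions (rules 0, 30, 218, 226) follow Wolfram's standard rule numbering, where bit <i>k</i> of the rule number gives the next state for the 3-cell neighborhood encoding <i>k</i>. A minimal generic sketch with periodic boundaries (function names are ours, not from the paper):

```python
def step(cells, rule):
    """Advance one generation of an elementary cellular automaton.

    `cells` is a list of 0/1 values; `rule` is the Wolfram rule number
    (0-255). Boundaries wrap around (periodic).
    """
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the 3-cell neighborhood as an integer 0-7.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        # Bit `idx` of the rule number is the next state.
        out.append((rule >> idx) & 1)
    return out

def evolve(cells, rule, steps):
    """Return the full space-time grid: initial row plus `steps` updates."""
    rows = [cells]
    for _ in range(steps):
        rows.append(step(rows[-1], rule))
    return rows
```

Comparing rows of `evolve(...)` downward corresponds to the Wolfram-class view; comparing whole grids across random initial rows corresponds to the ensemble view the captions describe.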
Full article ">Figure 12
<p>The vertical red line is the theoretically determined critical point of <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mi>l</mi> <mi>n</mi> <mo>(</mo> <mi>n</mi> <mo>)</mo> <mo>/</mo> <mi>n</mi> </mrow> </semantics></math> for an <span class="html-italic">n</span>-node graph. The left-hand side is for <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math> and the right-hand side is for <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>. We see that in both cases, incoherence largely agrees with this theoretical tipping point, but it does have a spread of values that correlates with the percentage of graphs that were found to be connected. Interestingly, although incoherence peaks close to the theoretical tipping point, its maximum occurs slightly later, where the percentage connected is <math display="inline"><semantics> <mrow> <mn>0.5</mn> </mrow> </semantics></math> (dashed red horizontal line).</p>
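The connectivity threshold ln(n)/n for Erdős–Rényi G(n, p) graphs referenced in this caption can be checked empirically with a small simulation. A stdlib-only sketch (trial counts and seed are illustrative, not from the paper):

```python
import math
import random

def er_graph(n, p, rng):
    """Adjacency sets of one G(n, p) Erdos-Renyi random graph."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def is_connected(adj):
    """Depth-first reachability from node 0."""
    seen, stack = {0}, [0]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return len(seen) == len(adj)

def frac_connected(n, p, trials=200, seed=0):
    """Fraction of sampled G(n, p) graphs that are connected."""
    rng = random.Random(seed)
    return sum(is_connected(er_graph(n, p, rng)) for _ in range(trials)) / trials
```

Sweeping `p` around `math.log(n) / n` (about 0.15 for n = 20) reproduces the sharp rise in the percentage-connected curve against which the incoherence peak is compared.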
Full article ">Figure 13
<p>An ER graph with 50 nodes and 100 trials compared at multiple values of <span class="html-italic">p</span>. On the left, we look at standard ER graphs, while on the right, we compare this to our modified version. On the <b>top left</b>, we see that, because the Jensen–Shannon divergence is an absolute measure relative to the pooled entropy, it follows the curve of the pooled entropy. Meanwhile, incoherence is approximately flat across the entire spread <math display="inline"><semantics> <mrow> <mi>I</mi> <mo>≈</mo> <mn>0.07</mn> </mrow> </semantics></math>, recording a low inconsistency from the minor randomness of each trial, similar to an ideal gas, and dropping to zero only when <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0</mn> <mo>|</mo> <mn>1</mn> </mrow> </semantics></math>, where the graphs are truly identical. On the <b>right</b>, we can see the case where the <span class="html-italic">p</span> value is slightly randomized on a per-instance (system) level. Here, we see how incoherence is markedly higher at <math display="inline"><semantics> <mrow> <mi>I</mi> <mo>≈</mo> <mn>0.28</mn> </mrow> </semantics></math>. In the <b>bottom right</b>, we now see how the entropy of the individual trials is much lower than the pooled entropy compared to the standard case (shown in the <b>bottom left</b>).</p>
Full article ">Figure 13 Cont.
<p>An ER graph with 50 nodes and 100 trials compared at multiple values of <span class="html-italic">p</span>. On the left, we look at standard ER graphs, while on the right, we compare this to our modified version. On the <b>top left</b>, we see that, because the Jensen–Shannon divergence is an absolute measure relative to the pooled entropy, it follows the curve of the pooled entropy. Meanwhile, incoherence is approximately flat across the entire spread <math display="inline"><semantics> <mrow> <mi>I</mi> <mo>≈</mo> <mn>0.07</mn> </mrow> </semantics></math>, recording a low inconsistency from the minor randomness of each trial, similar to an ideal gas, and dropping to zero only when <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0</mn> <mo>|</mo> <mn>1</mn> </mrow> </semantics></math>, where the graphs are truly identical. On the <b>right</b>, we can see the case where the <span class="html-italic">p</span> value is slightly randomized on a per-instance (system) level. Here, we see how incoherence is markedly higher at <math display="inline"><semantics> <mrow> <mi>I</mi> <mo>≈</mo> <mn>0.28</mn> </mrow> </semantics></math>. In the <b>bottom right</b>, we now see how the entropy of the individual trials is much lower than the pooled entropy compared to the standard case (shown in the <b>bottom left</b>).</p>
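The paper's incoherence measure itself is not defined in this excerpt, but the Jensen–Shannon divergence and pooled entropy the caption contrasts are standard quantities. A sketch for an ensemble of equally weighted discrete trial distributions (our formulation, assumed from the caption's description):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def js_divergence(dists):
    """Jensen-Shannon divergence of equally weighted discrete distributions:
    H(pooled mixture) minus the mean of the individual entropies."""
    k = len(dists)
    pooled = [sum(d[i] for d in dists) / k for i in range(len(dists[0]))]
    return entropy(pooled) - sum(entropy(d) for d in dists) / k
```

Because the JSD is the gap between the pooled entropy and the mean trial entropy, it necessarily tracks the pooled-entropy curve, which is the behavior the top-left panel illustrates.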
Full article ">Figure 14
<p>Cellular automata rule 226 with <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mover> <mn>50</mn> <mo stretchy="true">¯</mo> </mover> </mrow> </semantics></math>. Here, we see a critical point that causes the higher central incoherence values in <a href="#entropy-26-00683-f016" class="html-fig">Figure 16</a>: depending on the initial conditions (of this entirely deterministic ruleset), some samples produce diagonals striping in one direction and others in the opposite direction. The left-hand graph shows a heatmap of all the samples pooled together (creating a crosshatch), while the <b>central</b> and <b>right</b> graphs show single samples at this specific <span class="html-italic">p</span> value striping in opposite directions depending on the initial state.</p>
Full article ">Figure 15
<p>Cellular automata rule 226, with <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0.73</mn> </mrow> </semantics></math>. We see that this incoherent rule creates interesting stripe patterns, which are unique per sample (hence its large incoherence value) but uniform in direction. For <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>&lt;</mo> <mn>0.4</mn> </mrow> </semantics></math>, the pattern is reversed.</p>
Full article ">Figure 16
<p>Cellular automata rule 226, testing two types of sensitivity to initial conditions for incoherence <span class="html-italic">I</span>. Firstly, <span class="html-italic">p</span> represents the proportion of starting cells set to 1 rather than 0. Secondly, for each of the 20 samples created at each <span class="html-italic">p</span> increment, the placement of those 1 cells was random. We first see that sensitivity to the initial placement applies only to the complex case (rule 226, blue) and not to the ordered (rule 0, orange) or disordered (rule 30, green) cases. We also see that this sensitivity in the complex case varies with the initial proportion <span class="html-italic">p</span>, meaning that incoherence is parameter-dependent and not a feature of an overall system per se.</p>
Full article ">Figure 17
<p>Daisy world model. The x-axis varies the luminosity of the sun in the world. For each luminosity value, the simulation was run 20 times, varying just the initial positions of the daisies randomly. Here, we are looking at the distribution of temperatures across the world at the final step. In the top graph, the blue dots represent the mean global temperature for each trial. The dashed red line is the mean of the blue dots (i.e., the mean of means). The orange line is the mean of the pooled distribution. The second graph shows the incoherence of the temperature distributions. The third graph shows the standard deviation of the means and the standard deviation of the standard deviations for each trial distribution. The fourth shows the Kruskal H and <span class="html-italic">p</span>-values, while the bottom graph shows the ANOVA F and <span class="html-italic">p</span>-values. This graph illustrates a few notable things in a reasonably sophisticated use case. The first is how the Kruskal <span class="html-italic">p</span>-value and the ANOVA values fail to offer any useful information in this case. For the ANOVA, this is because it assumes consistent variance across test distributions, which is a poor assumption in real-world situations or even mildly complex scenarios. The next is how incoherence picks out phase changes in the system. For instance, around a luminosity value of 0.55, trials start to fall into two predictable scenarios, as the system goes from one in which no daisies can survive to one in which some black daisies survive. The Kruskal H and the standard deviations also detect these points of criticality. Incoherence, however, is the only measure that accurately identifies all the parameter values that lead to inconsistent results. 
Moreover, although the standard deviations and the Kruskal H-statistic are able to identify some of the inconsistencies, their actual values are relative to the temperature values and are hard to interpret against other systems.</p>
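For reference, the one-way ANOVA F-statistic the caption criticizes is simple to compute across trial samples; `scipy.stats.f_oneway` and `scipy.stats.kruskal` provide the production implementations, but a pure-Python sketch makes its equal-variance assumption explicit:

```python
def anova_f(groups):
    """One-way ANOVA F-statistic across several trial samples.

    F = between-group mean square / within-group mean square. Note that
    the within-group term pools all groups, i.e., it assumes comparable
    variances - the assumption the Daisyworld caption flags as poor for
    complex systems with long tails.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Two heavy-tailed outlier trials among twenty barely move `ss_between`, which is one way to see why F (and similarly the rank-based Kruskal H) can miss the inconsistency that an outlier-weighted measure is built to catch.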
Full article ">Figure 18
<p>Temperature distributions for each of the twenty trials at a luminosity of 0.83 for the Daisyworld model. The x-axes are categorical, showing each trial run in a different color. The added x-axis scatter within each trial is present so that each point can be seen more easily. This figure is highlighted in <a href="#entropy-26-00683-f017" class="html-fig">Figure 17</a> by a vertical grey dotted line. We single out this particular instance because it demonstrates a clear difference between incoherence and the standard continuous statistical tests. The ANOVA, Kruskal, and standard deviation tests do not see these distributions as inconsistent because only two out of the twenty are outliers. Incoherence, by contrast, is designed to overweight the significance of outliers. In this specific case, the distributions show varying long tails, which are typically poorly identified by standard techniques yet are a common occurrence in complex systems.</p>
Full article ">Figure 19
<p>Illustrative results for visualizing the idealized use case. These represent the results of a digital twin of a manufacturing plant. The distributions represent the steady-state times it takes for goods to be produced end-to-end. Each graph shows the individual probability density functions of five trial simulations in blue. The pooled distribution is shown in orange. The title displays the value of the pooled mean <math display="inline"><semantics> <mi>μ</mi> </semantics></math> (dashed red line) and the Incoherence <span class="html-italic">I</span>. The idealized use case section (<a href="#sec6-entropy-26-00683" class="html-sec">Section 6</a>) describes how to interpret these values in the context of real-world limitations.</p>
Full article ">
13 pages, 2783 KiB  
Article
New Quantum Private Comparison Using Bell States
by Min Hou and Yue Wu
Entropy 2024, 26(8), 682; https://doi.org/10.3390/e26080682 (registering DOI) - 13 Aug 2024
Viewed by 180
Abstract
Quantum private comparison (QPC) represents a cryptographic approach that enables two parties to determine whether their confidential data are equivalent, without disclosing the actual values. Most existing QPC protocols utilizing single photons or Bell states are considered highly feasible, but they suffer from [...] Read more.
Quantum private comparison (QPC) represents a cryptographic approach that enables two parties to determine whether their confidential data are equivalent, without disclosing the actual values. Most existing QPC protocols utilizing single photons or Bell states are considered highly feasible, but they suffer from inefficiency. To address this issue, we present a novel QPC protocol that capitalizes on the entanglement property of Bell states and local operations to meet the requirements of efficiency. In the proposed protocol, two participants with private inputs perform local operations on shared Bell states received from a semi-honest third party (STP). Afterward, the modified qubits are returned to the STP, who can then determine the equality of the private inputs and relay the results to the participants. A simulation on the IBM Quantum Cloud Platform confirmed the feasibility of our protocol, and a security analysis further demonstrated that the STP and both participants were unable to learn anything about the individual private inputs. In comparison to other QPC protocols, our proposed solution offers superior performance in terms of efficiency. Full article
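The Bell-state resource this protocol relies on is prepared by the textbook circuit of a Hadamard gate followed by a CNOT (the paper's Figure 2 shows this preparation). A stdlib-only state-vector sketch of the Φ⁺ state; the matrix and function names are ours, and this is a generic illustration rather than the paper's simulation code:

```python
import math

# Two-qubit state vector in the basis order |00>, |01>, |10>, |11>.
def apply(gate, state):
    """Multiply a 4x4 gate matrix by a length-4 state vector."""
    return [sum(gate[r][c] * state[c] for c in range(4)) for r in range(4)]

h = 1 / math.sqrt(2)
# Hadamard on the first qubit, identity on the second (H tensor I).
H1 = [[h, 0, h, 0],
      [0, h, 0, h],
      [h, 0, -h, 0],
      [0, h, 0, -h]]
# CNOT with the first qubit as control, second as target.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def bell_phi_plus():
    """(|00> + |11>)/sqrt(2): start from |00>, apply H on qubit 0, then CNOT."""
    return apply(CNOT, apply(H1, [1, 0, 0, 0]))
```

Measuring this state yields 00 or 11 with equal probability and never 01 or 10, which is the correlation the participants' local operations exploit.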
(This article belongs to the Special Issue Quantum Entanglement—Second Edition)
Show Figures

Figure 1

Figure 1
<p>The diagram of the QPC protocol.</p>
Full article ">Figure 2
<p>Preparation of Bell states.</p>
Full article ">Figure 3
<p>Measurement results of the Bell states.</p>
Full article ">Figure 4
<p>Quantum circuit for comparing <span class="html-italic">A</span> and <span class="html-italic">B.</span></p>
Full article ">Figure 5
<p>The measurement results of <a href="#entropy-26-00682-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 6
<p>Quantum circuit for comparing <math display="inline"><semantics> <mrow> <msup> <mi>A</mi> <mo>′</mo> </msup> <mo> </mo> <mi mathvariant="normal">a</mi> <mi mathvariant="normal">n</mi> <mi mathvariant="normal">d</mi> <mo> </mo> <msup> <mi>B</mi> <mo>′</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>The measurement result of <a href="#entropy-26-00682-f006" class="html-fig">Figure 6</a>.</p>
Full article ">
14 pages, 5380 KiB  
Article
Cross-Modality Person Re-Identification Method with Joint-Modality Generation and Feature Enhancement
by Yihan Bi, Rong Wang, Qianli Zhou, Zhaolong Zeng, Ronghui Lin and Mingjie Wang
Entropy 2024, 26(8), 681; https://doi.org/10.3390/e26080681 (registering DOI) - 13 Aug 2024
Viewed by 185
Abstract
In order to minimize the disparity between visible and infrared modalities and enhance pedestrian feature representation, a cross-modality person re-identification method is proposed, which integrates modality generation and feature enhancement. Specifically, a lightweight network is used for dimension reduction and augmentation of visible [...] Read more.
In order to minimize the disparity between visible and infrared modalities and enhance pedestrian feature representation, a cross-modality person re-identification method is proposed, which integrates modality generation and feature enhancement. Specifically, a lightweight network is used for dimension reduction and augmentation of visible images, and intermediate modalities are generated to bridge the gap between visible images and infrared images. The Convolutional Block Attention Module is embedded into the ResNet50 backbone network to selectively emphasize key features sequentially from both channel and spatial dimensions. Additionally, the Gradient Centralization algorithm is introduced into the Stochastic Gradient Descent optimizer to accelerate convergence speed and improve the generalization capability of the network model. Experimental results on SYSU-MM01 and RegDB datasets demonstrate that our improved network model achieves significant performance gains, with an increase in Rank-1 accuracy of 7.12% and 6.34%, as well as an improvement in mAP of 4.00% and 6.05%, respectively. Full article
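Gradient Centralization, as added to the SGD optimizer here, simply removes the mean from each output channel's weight gradient before the update. A minimal sketch for 2-D (fully connected) weights; for conv kernels the mean is taken over all non-output dimensions analogously, and the function names are ours, not the paper's:

```python
def centralize_gradient(grad):
    """Gradient Centralization for a 2-D weight gradient (out_dim x in_dim):
    subtract each row's mean so every output channel's gradient sums to zero."""
    out = []
    for row in grad:
        mu = sum(row) / len(row)
        out.append([g - mu for g in row])
    return out

def sgd_gc_step(weight, grad, lr=0.1):
    """One plain SGD step using the centralized gradient."""
    cg = centralize_gradient(grad)
    return [[w - lr * g for w, g in zip(wr, gr)]
            for wr, gr in zip(weight, cg)]
```

Constraining each channel's gradient to zero mean acts as a projected update, which is the mechanism credited with the faster convergence and better generalization reported in the abstract.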
(This article belongs to the Section Multidisciplinary Applications)
Show Figures

Figure 1

Figure 1
<p>Overall architecture of the proposed method. Among them, the initial Stage 0 includes the initial convolutional layer, batch normalization (BN) layer, ReLU layer, and Max pooling layer.</p>
Full article ">Figure 2
<p>Structure of the lightweight modality generator.</p>
Full article ">Figure 3
<p>Overview of Convolutional Block Attention Module.</p>
Full article ">Figure 4
<p>Residual structure of CBAM-ResNet.</p>
Full article ">Figure 5
<p>Diagram of Gradient Centralization.</p>
Full article ">Figure 6
<p>CMC curves of five methods on RegDB dataset.</p>
Full article ">Figure 7
<p>Loss curves before and after introducing Gradient Centralization on RegDB dataset.</p>
Full article ">Figure 8
<p>Visualization results output on the SYSU-MM01 dataset.</p>
Full article ">Figure 9
<p>Visualization results output on the RegDB dataset.</p>
Full article ">
9 pages, 280 KiB  
Article
Multipartite Correlations in Parikh–Wilczek Non-Thermal Spectrum
by Xi Ming
Entropy 2024, 26(8), 680; https://doi.org/10.3390/e26080680 - 12 Aug 2024
Viewed by 230
Abstract
In this study, we systematically investigate the multipartite correlations in the process of black hole radiation via the Parikh–Wilczek tunneling model. We examine not only the correlations among Hawking radiation quanta but also the correlations between the emissions and the remainder of the black [...] Read more.
In this study, we systematically investigate the multipartite correlations in the process of black hole radiation via the Parikh–Wilczek tunneling model. We examine not only the correlations among Hawking radiation quanta but also the correlations between the emissions and the remainder of the black hole. Our findings indicate that the total correlation among emitted particles continues to increase as the black hole evaporates. Additionally, we observe that the bipartite correlation between the emissions and the remainder of the black hole initially increases and then decreases, while the total correlation of the entire system monotonically increases. Finally, we extend our analysis to include quantum corrections and observe similar phenomena. Through this research, we aim to elucidate the mechanism of information conservation in the black hole information paradox. Full article
(This article belongs to the Special Issue Black Hole Information Problem: Challenges and Perspectives)
Show Figures

Figure 1

Figure 1
<p>The evolution of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msubsup> <mi mathvariant="script">C</mi> <mrow> <msub> <mi>E</mi> <mi>i</mi> </msub> </mrow> <mrow> <mi>t</mi> <mi>o</mi> <mi>t</mi> <mi>a</mi> <mi>l</mi> </mrow> </msubsup> </mrow> </semantics></math> with <math display="inline"><semantics> <msub> <mi>E</mi> <mrow> <mi>n</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math> when <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>−</mo> <msub> <mi>E</mi> <mi>T</mi> </msub> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>The evolution of <math display="inline"><semantics> <msubsup> <mi mathvariant="script">C</mi> <mrow> <mi mathvariant="script">E</mi> <mi mathvariant="script">R</mi> </mrow> <mn>2</mn> </msubsup> </semantics></math> with <math display="inline"><semantics> <msub> <mi>E</mi> <mi>T</mi> </msub> </semantics></math> when <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>.</p>
Full article ">
45 pages, 4378 KiB  
Article
GAD-PVI: A General Accelerated Dynamic-Weight Particle-Based Variational Inference Framework
by Fangyikang Wang, Huminhao Zhu, Chao Zhang, Hanbin Zhao and Hui Qian
Entropy 2024, 26(8), 679; https://doi.org/10.3390/e26080679 - 11 Aug 2024
Viewed by 340
Abstract
Particle-based Variational Inference (ParVI) methods have been widely adopted in deep Bayesian inference tasks such as Bayesian neural networks or Gaussian Processes, owing to their efficiency in generating high-quality samples given the score of the target distribution. Typically, ParVI methods evolve a weighted-particle [...] Read more.
Particle-based Variational Inference (ParVI) methods have been widely adopted in deep Bayesian inference tasks such as Bayesian neural networks or Gaussian Processes, owing to their efficiency in generating high-quality samples given the score of the target distribution. Typically, ParVI methods evolve a weighted-particle system by approximating the first-order Wasserstein gradient flow to reduce the dissimilarity between the particle system’s empirical distribution and the target distribution. Recent advancements in ParVI have explored sophisticated gradient flows to obtain refined particle systems with either accelerated position updates or dynamic weight adjustments. In this paper, we introduce the semi-Hamiltonian gradient flow on a novel Information–Fisher–Rao space, known as the SHIFR flow, and propose the first ParVI framework that possesses both accelerated position update and dynamic weight adjustment simultaneously, named the General Accelerated Dynamic-Weight Particle-based Variational Inference (GAD-PVI) framework. GAD-PVI is compatible with different dissimilarities between the empirical distribution and the target distribution, as well as different approximation approaches to gradient flow. Moreover, when the appropriate dissimilarity is selected, GAD-PVI is also suitable for obtaining high-quality samples even when analytical scores cannot be obtained. Experiments conducted under both the score-based tasks and sample-based tasks demonstrate the faster convergence and reduced approximation error of GAD-PVI methods over the state-of-the-art. Full article
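GAD-PVI itself is not specified in this abstract; for orientation, a vanilla first-order ParVI update of the SVGD family, which the accelerated dynamic-weight variants build upon, can be sketched in one dimension (kernel bandwidth, step size, and function names are illustrative, not the paper's):

```python
import math

def svgd_step(particles, score, h=1.0, lr=0.1):
    """One vanilla SVGD update for 1-D particles.

    Each particle moves along a kernel-smoothed score of the target
    (drift toward high density) plus a kernel repulsion term that keeps
    the particle system spread out:
        phi(x_i) = (1/n) * sum_j [ k(x_j, x_i) * score(x_j) + d/dx_j k(x_j, x_i) ]
    with the RBF kernel k(x, y) = exp(-(x - y)^2 / (2 h^2)).
    """
    n = len(particles)
    new = []
    for xi in particles:
        drift = 0.0
        for xj in particles:
            k = math.exp(-((xi - xj) ** 2) / (2 * h * h))
            drift += k * score(xj) + k * (xi - xj) / (h * h)  # score term + repulsion
        new.append(xi + lr * drift / n)
    return new
```

For a standard normal target the score is `score(x) = -x`; iterating `svgd_step` drives the particles toward the target while the repulsion term prevents collapse. The paper's contribution layers acceleration and dynamic particle weights on top of this kind of position-only update.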
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Figure 1

Figure 1
<p><inline-formula><mml:math id="mm1726"><mml:semantics><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:semantics></mml:math></inline-formula> to the target with respect to iterations in the GMM task.</p>
Full article ">Figure 2
<p>The contour lines of the log posterior in the Gaussian Process task (all variants with BLOB strategy).</p>
Full article ">Figure 3
<p><inline-formula><mml:math id="mm1727"><mml:semantics><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:semantics></mml:math></inline-formula> distance to the target with respect to iterations in the shape morphing task.</p>
Full article ">Figure 4
<p><inline-formula><mml:math id="mm1728"><mml:semantics><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:semantics></mml:math></inline-formula> distance to the target with respect to iterations in the sketching task.</p>
Full article ">Figure 5
<p>The sketching task from random noise to cheetah, <inline-formula><mml:math id="mm1729"><mml:semantics><mml:mrow><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn>1000</mml:mn></mml:mrow></mml:semantics></mml:math></inline-formula>.</p>
Full article ">Figure A1
<p>The shape morphing of the source shape CAT to the target SPIRAL.</p>
Full article ">Figure A2
<p>Averaged test <inline-formula><mml:math id="mm1722"><mml:semantics><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:semantics></mml:math></inline-formula> distance to the target with respect to iterations in the SG task for algorithms (W-Type).</p>
Full article ">Figure A3
<p>Averaged test <inline-formula><mml:math id="mm1723"><mml:semantics><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:semantics></mml:math></inline-formula> distance to the target with respect to iterations in the SG task for algorithms (KW/S-Type).</p>
Full article ">Figure A4
<p>Averaged test <inline-formula><mml:math id="mm1724"><mml:semantics><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:semantics></mml:math></inline-formula> distance to the target with respect to iterations in the GMM task for algorithms (KW/S-Type).</p>
Full article ">Figure A5
<p>Averaged test <inline-formula><mml:math id="mm1725"><mml:semantics><mml:msub><mml:mi>W</mml:mi><mml:mn>2</mml:mn></mml:msub></mml:semantics></mml:math></inline-formula> distance to the target with respect to iterations in the SG task for algorithms (WGAD-KSDD-Type).</p>
Full article ">
22 pages, 3992 KiB  
Article
Bayesian Modeling for Nonstationary Spatial Point Process via Spatial Deformations
by Dani Gamerman, Marcel de Souza Borges Quintana and Mariane Branco Alves
Entropy 2024, 26(8), 678; https://doi.org/10.3390/e26080678 - 11 Aug 2024
Viewed by 271
Abstract
Many techniques have been proposed to model space-varying observation processes with a nonstationary spatial covariance structure and/or anisotropy, usually in a geostatistical framework. Nevertheless, there is an increasing interest in point process applications, and methodologies that take nonstationarity into account are welcomed. In [...] Read more.
Many techniques have been proposed to model space-varying observation processes with a nonstationary spatial covariance structure and/or anisotropy, usually in a geostatistical framework. Nevertheless, there is an increasing interest in point process applications, and methodologies that take nonstationarity into account are welcomed. In this sense, this work proposes an extension of a class of spatial Cox process using spatial deformation. The proposed method enables the deformation behavior to be data-driven, through a multivariate latent Gaussian process. Inference leads to intractable posterior distributions that are approximated via MCMC. The convergence of algorithms based on Metropolis–Hastings steps proved to be slow, and the computational efficiency of the Bayesian updating scheme was improved by adopting Hamiltonian Monte Carlo (HMC) methods. Our proposal was also compared against an alternative anisotropic formulation. Studies based on synthetic data provided empirical evidence of the benefit brought by the adoption of nonstationarity through our anisotropic structure. A real data application was conducted on the spatial spread of the Spodoptera frugiperda pest in a corn-producing agricultural area in southern Brazil. Once again, the proposed method demonstrated its benefit over alternatives. Full article
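The paper's deformation-based Cox model is not reproduced here, but a point pattern with a given intensity surface, such as those estimated in its figures, is conventionally simulated by Lewis–Shedler thinning. A minimal sketch on a rectangle (function names and defaults are ours):

```python
import random

def thin_poisson(intensity, lam_max, area=(1.0, 1.0), rng=None):
    """Simulate an inhomogeneous Poisson point process by thinning.

    Draw a homogeneous process at rate `lam_max` on the rectangle `area`,
    then keep each candidate point (x, y) with probability
    intensity(x, y) / lam_max. Requires intensity <= lam_max everywhere.
    """
    rng = rng or random.Random(0)
    w, h = area
    # Candidate count ~ Poisson(lam_max * w * h), via exponential gaps.
    n, t = 0, rng.expovariate(lam_max * w * h)
    while t < 1.0:
        n += 1
        t += rng.expovariate(lam_max * w * h)
    pts = []
    for _ in range(n):
        x, y = rng.uniform(0, w), rng.uniform(0, h)
        if rng.random() < intensity(x, y) / lam_max:
            pts.append((x, y))
    return pts
```

A Cox process adds one layer on top: the `intensity` function is itself a realization of a random field (here, one warped through the latent deformation), after which the pattern is Poisson given that realization.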
(This article belongs to the Special Issue Bayesianism)
Show Figures

Figure 1

Figure 1
<p>Intensity function given by expression (<a href="#FD12-entropy-26-00678" class="html-disp-formula">12</a>) shown from different angles.</p>
Full article ">Figure 2
<p>Estimation of the intensity function of expression (<a href="#FD12-entropy-26-00678" class="html-disp-formula">12</a>): (<b>a</b>) True intensity function on the defined grid along with generated point process (dots), (<b>b</b>) estimated intensity function not considering and (<b>c</b>) considering the deformation for <span class="html-italic">probit</span> link function, (<b>d</b>) estimated intensity function not considering and (<b>e</b>) considering the deformation for <span class="html-italic">log</span> link function.</p>
Full article ">Figure 3
<p>Further details of estimation of the intensity function (<a href="#FD12-entropy-26-00678" class="html-disp-formula">12</a>). Scatter plot of true and estimated intensity function for different estimating scenarios: (<b>a</b>) intensity function not considering and (<b>b</b>) considering the deformation for <span class="html-italic">probit</span> link function, (<b>c</b>) intensity function not considering and (<b>d</b>) considering the deformation for <span class="html-italic">log</span> link function. The dots represent the pair of true and estimated intensity function values for all pixels. Full and dashed lines represent the identity and the exploratory regression lines, respectively. The latter is obtained by performing the fit of the estimated values based on the assumed true values as a covariate.</p>
Full article ">Figure 4
<p>For the scenario of expression (<a href="#FD12-entropy-26-00678" class="html-disp-formula">12</a>): (<b>a</b>) IS for the model not considering and (<b>b</b>) considering the deformation for <span class="html-italic">probit</span> link function, (<b>c</b>) IS for the model not considering and (<b>d</b>) considering the deformation for <span class="html-italic">log</span> link function.</p>
Full article ">Figure 5
<p>Intensity function estimation for the four selected locations: true intensity function (<b>top</b>) and density plot for the selected locations with their respective credibility interval of 95% (<b>bottom</b>) for the four models considered.</p>
Full article ">Figure 6
<p>Estimation of the deformation <span class="html-italic">d</span>: (<b>a</b>) estimated mean deformation for the <span class="html-italic">probit</span> link function; (<b>b</b>) mean and (<b>c</b>) standard deviation for the posterior distribution of the distance between each centroid <math display="inline"><semantics> <msub> <mi>s</mi> <mi>z</mi> </msub> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>(</mo> <msub> <mi>s</mi> <mi>z</mi> </msub> <mo>)</mo> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>∈</mo> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mspace width="0.166667em"/> <mo>…</mo> <mo>,</mo> <mspace width="0.166667em"/> <mi>Z</mi> <mo>}</mo> </mrow> </semantics></math> for the <span class="html-italic">probit</span> link function.</p>
Full article ">Figure 7
<p>Point pattern for the corn plants infected by the <span class="html-italic">S. frugiperda</span> with (<b>a</b>) original and (<b>b</b>) rotated latitude and longitude and (<b>c</b>) a satellite image of the experimental area with the polygon of the study area. Adapted from Nava et al. [<a href="#B37-entropy-26-00678" class="html-bibr">37</a>].</p>
Full article ">Figure 8
<p>Estimation of the intensity function for <span class="html-italic">Spodoptera frugiperda</span> pest data: (<b>a</b>) not considering and (<b>b</b>) considering the deformation for probit link function, (<b>c</b>) not considering and (<b>d</b>) considering the deformation for log link function.</p>
Full article ">Figure 9
<p>Standard deviation for the intensity function for the model for the <span class="html-italic">Spodoptera frugiperda</span> pest data: (<b>a</b>) not considering and (<b>b</b>) considering the deformation for <span class="html-italic">probit</span> link function, (<b>c</b>) not considering and (<b>d</b>) considering the deformation for <span class="html-italic">log</span> link function.</p>
Full article ">Figure 10
<p>Estimated intensity function map for the <span class="html-italic">Spodoptera frugiperda</span> pest data showing the positions of intensity function selected for posterior density comparison across models (<b>top</b>) and density plot for the selected positions with their respective credibility interval of 95% (<b>bottom</b>).</p>
Full article ">Figure 11
<p>Estimation of the deformation <span class="html-italic">d</span> for the <span class="html-italic">Spodoptera frugiperda</span> pest data: (<b>a</b>) estimated mean deformation for the <span class="html-italic">probit</span> link function; (<b>b</b>) mean and (<b>c</b>) standard deviation for the posterior distribution of the distance between each centroid <math display="inline"><semantics> <msub> <mi>s</mi> <mi>z</mi> </msub> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>(</mo> <msub> <mi>s</mi> <mi>z</mi> </msub> <mo>)</mo> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>∈</mo> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mspace width="0.166667em"/> <mo>…</mo> <mo>,</mo> <mspace width="0.166667em"/> <mi>Z</mi> <mo>}</mo> </mrow> </semantics></math> for the <span class="html-italic">probit</span> link function.</p>
Full article ">Figure 12
<p>Estimated (<b>a</b>) intensity function and (<b>b</b>) DP for the model considering the <span class="html-italic">probit</span> link function for unrotated and rescaled <span class="html-italic">Spodoptera frugiperda</span> pest data.</p>
Full article ">
18 pages, 264 KiB  
Article
Hacking the Predictive Mind
by Andy Clark
Entropy 2024, 26(8), 677; https://doi.org/10.3390/e26080677 - 10 Aug 2024
Viewed by 547
Abstract
According to active inference, constantly running prediction engines in our brain play a large role in delivering all human experience. These predictions help deliver everything we see, hear, touch, and feel. In this paper, I pursue one apparent consequence of this increasingly well-supported [...] Read more.
According to active inference, constantly running prediction engines in our brain play a large role in delivering all human experience. These predictions help deliver everything we see, hear, touch, and feel. In this paper, I pursue one apparent consequence of this increasingly well-supported view. Given the constant influence of hidden predictions on human experience, can we leverage the power of prediction in the service of human flourishing? Can we learn to hack our own predictive regimes in ways that better serve our needs and purposes? Asking this question rapidly reveals a landscape that is at once familiar and new. It is also challenging, suggesting important questions about scope and dangers while casting further doubt (as if any was needed) on old assumptions about a firm mind/body divide. I review a range of possible hacks, starting with the careful use of placebos, moving on to look at chronic pain and functional disorders, and ending with some speculations concerning the complex role of genetic influences on the predictive brain. Full article
19 pages, 6004 KiB  
Article
An Evaluation Model for Node Influence Based on Heuristic Spatiotemporal Features
by Sheng Jin, Yuzhi Xiao, Jiaxin Han and Tao Huang
Entropy 2024, 26(8), 676; https://doi.org/10.3390/e26080676 - 10 Aug 2024
Viewed by 314
Abstract
The accurate assessment of node influence is of vital significance for enhancing system stability. Given the structural redundancy problem triggered by the network topology deviation when an empirical network is copied, as well as the dynamic characteristics of the empirical network itself, it is difficult for traditional static assessment methods to effectively capture the dynamic evolution of node influence. Therefore, we propose a heuristic-based spatiotemporal feature node influence assessment model (HEIST). First, the null-model method is applied to optimize the network-copying process and reduce the noise interference caused by network structure redundancy. Second, the copied network is divided into subnets, and feature modeling is performed to enhance the node influence differentiation. Third, node influence is quantified based on the spatiotemporal depth-perception module, which has a built-in local and global two-layer structure. At the local level, a graph convolutional neural network (GCN) is used to improve the spatial perception of node influence; it fuses the feature changes of the nodes across subnetwork variations and is combined with a long short-term memory network (LSTM) to enhance its ability to capture the deep evolution of node influence and improve the robustness of the assessment. Finally, a heuristic assessment algorithm is used to jointly optimize the influence strength of the nodes at different stages and quantify the node influence via a nonlinear optimization function. The experiments show that the Kendall coefficients exceed 0.9 on multiple datasets, showing that the model generalizes well to empirical networks. Full article
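The Kendall coefficient used above to validate HEIST measures rank agreement between a model's node-influence ordering and a reference ordering (e.g., from spreading simulations). The following is a minimal, self-contained sketch of that metric only, not the authors' code; the node scores are made-up illustrative values:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two equal-length score lists.

    Counts concordant vs. discordant pairs; +1 means identical ordering,
    -1 means fully reversed ordering (ties are ignored for simplicity).
    """
    assert len(rank_a) == len(rank_b)
    concordant = discordant = 0
    for i, j in combinations(range(len(rank_a)), 2):
        sign = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(rank_a) * (len(rank_a) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical example: model influence scores vs. simulated spreading
# influence for five nodes, both in the same order.
model_scores = [0.9, 0.7, 0.5, 0.3, 0.1]
true_influence = [0.8, 0.75, 0.4, 0.35, 0.05]
print(kendall_tau(model_scores, true_influence))  # → 1.0 (identical ordering)
```

A coefficient above 0.9, as reported for HEIST, means the two orderings agree on almost every pair of nodes.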
Show Figures

Figure 1
<p>Study overview.</p>
Full article ">Figure 2
<p>Node influence assessment process diagram.</p>
Full article ">Figure 3
<p>Nodal spatiotemporal feature construction maps.</p>
Full article ">Figure 4
<p>Plot of the scale of impact on the network when the HEIST model is compared to other models with high-impact nodes selected as propagation sources.</p>
Full article ">Figure 5
<p>Analysis of propagation in a small network.</p>
Full article ">Figure 6
<p>Visualization of different network structures.</p>
Full article ">Figure 7
<p>Effect of different training networks on test performance.</p>
Full article ">
17 pages, 6945 KiB  
Article
Intelligent Fault Diagnosis Method for Rotating Machinery Based on Recurrence Binary Plot and DSD-CNN
by Yuxin Shi, Hongwei Wang, Wenlei Sun and Ruoyang Bai
Entropy 2024, 26(8), 675; https://doi.org/10.3390/e26080675 - 9 Aug 2024
Viewed by 285
Abstract
To tackle the issue of the traditional intelligent diagnostic algorithm’s insufficient utilization of correlation characteristics within the time series of fault signals and to meet the challenges of accuracy and computational complexity in rotating machinery fault diagnosis, a novel approach based on a recurrence binary plot (RBP) and a lightweight deep separable dilated convolutional neural network (DSD-CNN) is proposed. First, a recurrence encoding method is used to convert the fault vibration signals of rotating machinery into two-dimensional texture images, extracting feature information from the internal structure of the fault signals as the input for the model. Subsequently, leveraging the excellent feature extraction capabilities of a lightweight convolutional neural network embedded with attention modules, the fault diagnosis of rotating machinery is carried out. The experimental results using different datasets demonstrate that the proposed model achieves excellent diagnostic accuracy and computational efficiency. Additionally, compared with other representative fault diagnosis methods, this model shows better anti-noise performance under different noise levels in the test data, and it provides a reliable and efficient reference solution for rotating machinery fault-classification tasks. Full article
(This article belongs to the Section Signal and Data Analysis)
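The recurrence encoding described above can be sketched directly: delay-embed the signal and threshold pairwise distances into a binary matrix. This is a hedged illustration of the general recurrence-plot technique, not the paper's exact pipeline; the embedding dimension, delay, and threshold `eps` below are illustrative choices, not the paper's tuned values:

```python
import math

def recurrence_binary_plot(signal, dim=2, delay=1, eps=0.1):
    """Binary recurrence matrix: R[i][j] = 1 when the delay-embedded
    points i and j lie within Euclidean distance eps of each other."""
    n = len(signal) - (dim - 1) * delay  # number of embedded points
    points = [[signal[i + k * delay] for k in range(dim)] for i in range(n)]
    rbp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            dist = math.dist(points[i], points[j])
            rbp[i][j] = 1 if dist <= eps else 0
    return rbp

# A sinusoid produces the periodic diagonal-line texture typical of RBPs.
x = [math.sin(2 * math.pi * t / 20) for t in range(100)]
r = recurrence_binary_plot(x, dim=2, delay=3, eps=0.2)
```

Periodic signals yield diagonal-line textures while noise yields scattered points, which is why such images carry discriminative structure for a CNN classifier.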
Show Figures

Figure 1
<p>The flow chart of RBP.</p>
Full article ">Figure 2
<p>An RBP of diverse signals. (<b>a</b>) Sinusoidal signal. (<b>b</b>) X component of the Lorentz curve. (<b>c</b>) Gaussian white noise.</p>
Full article ">Figure 3
<p>Determination of the delay time and embedding dimension. (<b>a</b>) Delay time. (<b>b</b>) Embedding dimension.</p>
Full article ">Figure 4
<p>The structure of a CA.</p>
Full article ">Figure 5
<p>The structure of the DSD-CNN.</p>
Full article ">Figure 6
<p>The proposed fault diagnosis framework.</p>
Full article ">Figure 7
<p>The RBP of the CWRU dataset.</p>
Full article ">Figure 8
<p>The RBP of the WTDS dataset.</p>
Full article ">Figure 9
<p>The determination of the model’s parameters. (<b>a</b>) Initial learning rate. (<b>b</b>) Batch size.</p>
Full article ">Figure 10
<p>A visualization of the results of the ablation experiment. (<b>a</b>) Removal of the SeparableConv block. (<b>b</b>) Removal of the DepthwiseConv block. (<b>c</b>) Removal of the CA module. (<b>d</b>) The DSD-CNN.</p>
Full article ">Figure 11
<p>Loss and accuracy curves of the DSD-CNN based on the WTDS dataset. (<b>a</b>) Training set. (<b>b</b>) Test set.</p>
Full article ">Figure 12
<p>Classification results of the WTDS dataset. (<b>a</b>) Confusion matrix. (<b>b</b>) Dimensionality reduction visualization by the T-SNE.</p>
Full article ">Figure 13
<p>The ROC curves of the DSD-CNN based on the WTDS dataset.</p>
Full article ">Figure 14
<p>Comparison of the different methods on the WTDS test rig. (<b>a</b>) Test accuracy curve. (<b>b</b>) Boxplots.</p>
Full article ">Figure 15
<p>Loss and accuracy curves of the DSD-CNN based on the CWRU dataset. (<b>a</b>) Training set. (<b>b</b>) Test set.</p>
Full article ">Figure 16
<p>Classification results of the CWRU dataset. (<b>a</b>) Confusion matrix. (<b>b</b>) Dimensionality reduction visualization by the T-SNE.</p>
Full article ">Figure 17
<p>Comparison of the different methods on the CWRU test rig. (<b>a</b>) Test accuracy curve. (<b>b</b>) Boxplots.</p>
Full article ">
11 pages, 252 KiB  
Article
Furstenberg Family and Chaos for Time-Varying Discrete Dynamical Systems
by Risong Li, Yongjiang Li, Tianxiu Lu, Jiazheng Zhao and Jing Su
Entropy 2024, 26(8), 674; https://doi.org/10.3390/e26080674 - 9 Aug 2024
Viewed by 248
Abstract
Assume that (Y, ρ) is a nontrivial complete metric space, and that (Y, g_{1,∞}) is a time-varying discrete dynamical system (T-VDDS), which is given by a sequence (g_l)_{l=1}^{∞} of continuous selfmaps g_l : Y → Y. In this paper, for a given Furstenberg family G and a given T-VDDS (Y, g_{1,∞}), G-scrambled pairs of points of the system (Y, g_{1,∞}) (which contain the well-known scrambled pairs) are provided. Some properties of the set of G-scrambled pairs of a given T-VDDS (Y, g_{1,∞}) are studied. Moreover, the generically G-chaotic T-VDDS and the generically strongly G-chaotic T-VDDS are defined. A sufficient condition for a given T-VDDS to be generically strongly G-chaotic is also presented. Full article
(This article belongs to the Section Complexity)
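Although the paper's results are analytic, the objects it studies can be illustrated numerically: iterate a time-varying sequence of maps g_l and track the distance between two orbits. For a Li–Yorke-type scrambled pair, that distance has liminf zero and positive limsup. The alternating logistic maps below are an illustrative assumption, not taken from the paper:

```python
def tvdds_orbit_distance(y0, z0, n_steps):
    """Iterate a time-varying system (g_l) on [0, 1] and record the
    distance between the two orbits started at y0 and z0."""
    def g(l, y):
        # The selfmaps g_l differ over time: the logistic parameter alternates.
        r = 4.0 if l % 2 == 0 else 3.9
        return r * y * (1.0 - y)
    dists = []
    y, z = y0, z0
    for l in range(n_steps):
        y, z = g(l, y), g(l, z)
        dists.append(abs(y - z))
    return dists

# Nearby initial points: the orbit distance repeatedly collapses toward 0
# and spreads back out, the numerical signature of a scrambled pair.
d = tvdds_orbit_distance(0.4, 0.4001, 500)
print(min(d), max(d))
```

Numerical runs of this kind only suggest scrambledness; verifying it for a given Furstenberg family requires the analytic conditions developed in the paper.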
18 pages, 4026 KiB  
Article
Generalized Kinetic Equations with Fractional Time-Derivative and Nonlinear Diffusion: H-Theorem and Entropy
by Ervin K. Lenzi, Michely P. Rosseto, Derik W. Gryczak, Luiz R. Evangelista, Luciano R. da Silva, Marcelo K. Lenzi and Rafael S. Zola
Entropy 2024, 26(8), 673; https://doi.org/10.3390/e26080673 - 8 Aug 2024
Viewed by 335
Abstract
We investigate the H-theorem for a class of generalized kinetic equations with fractional time-derivative, hyperbolic term, and nonlinear diffusion. When the H-theorem is satisfied, we demonstrate that different entropic forms may emerge due to the equation’s nonlinearity. We obtain the entropy production related to these entropies and show that its form remains invariant. Furthermore, we investigate some behaviors for these equations from both numerical and analytical perspectives, showing a large class of behaviors connected with anomalous diffusion and their effects on entropy. Full article
(This article belongs to the Special Issue Theory and Applications of Hyperbolic Diffusion and Shannon Entropy)
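The nonlinear diffusion discussed above, with D(ρ) = νDρ^{ν−1}, corresponds to a porous-medium-type equation ∂ρ/∂t = D ∂²(ρ^ν)/∂x². A minimal explicit finite-difference sketch (omitting the fractional-time and hyperbolic terms, with illustrative grid parameters rather than the paper's) reproduces the anomalous growth of the mean square displacement σ²(t):

```python
def porous_medium_msd(nu=1.3, diff=0.5, hx=0.15, ht=0.001, nx=201, n_steps=2000):
    """Explicit scheme for d(rho)/dt = diff * d^2(rho^nu)/dx^2 on a 1D grid,
    starting from a narrow pulse; returns the final density and sigma^2(t)."""
    xs = [(i - nx // 2) * hx for i in range(nx)]
    rho = [0.0] * nx
    rho[nx // 2] = 1.0 / hx  # approximate delta-function initial condition
    msd = []
    for _ in range(n_steps):
        u = [r ** nu for r in rho]  # diffuse the nonlinear variable rho^nu
        new = rho[:]
        for i in range(1, nx - 1):
            new[i] = rho[i] + diff * ht / hx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        rho = new
        mean = sum(x * r for x, r in zip(xs, rho)) * hx
        msd.append(sum((x - mean) ** 2 * r for x, r in zip(xs, rho)) * hx)
    return rho, msd

rho_final, sigma2 = porous_medium_msd()
print(sigma2[0], sigma2[-1])
```

For ν > 1 the resulting σ²(t) grows slower than linearly in t, the subdiffusive regime highlighted in the paper's figures; ν < 1 (fast diffusion) would require smaller time steps for stability.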
Show Figures

Figure 1
<p>(<b>a</b>,<b>b</b>) show the behavior of Equation (<a href="#FD15-entropy-26-00673" class="html-disp-formula">15</a>) with the probability density distribution for <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>4.9</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, in the absence of external forces, for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.02</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>1.3</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>. (<b>c</b>,<b>d</b>) show the mean square displacement <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>〈</mo> <msup> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>−</mo> <mo>〈</mo> <mi>x</mi> <mo>〉</mo> </mfenced> <mn>2</mn> </msup> <mo>〉</mo> </mrow> </mrow> </semantics></math>. 
We consider <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>k</mi> <mi>γ</mi> </msub> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mi>γ</mi> </mrow> </msup> <mo>/</mo> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>−</mo> <mi>γ</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, and different values of <math display="inline"><semantics> <msub> <mi>k</mi> <mi>γ</mi> </msub> </semantics></math>. We also added straight lines to highlight the different behaviors present in the system during the time evolution.</p>
Full article ">Figure 2
<p>(<b>a</b>,<b>b</b>) show the behavior of Equation (<a href="#FD15-entropy-26-00673" class="html-disp-formula">15</a>) with the probability density distribution at <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>4.9</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, in the absence of external forces, for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.02</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>1.3</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>. (<b>c</b>,<b>d</b>) show the mean square displacement <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>〈</mo> <msup> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>−</mo> <mo>〈</mo> <mi>x</mi> <mo>〉</mo> </mfenced> <mn>2</mn> </msup> <mo>〉</mo> </mrow> </mrow> </semantics></math>. 
We consider <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msubsup> <mi>k</mi> <mi>γ</mi> <mo>′</mo> </msubsup> <msup> <mi>e</mi> <mrow> <mo>−</mo> <msup> <mi>γ</mi> <mo>′</mo> </msup> <mi>t</mi> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> and different values of <math display="inline"><semantics> <msub> <mi>k</mi> <mi>γ</mi> </msub> </semantics></math>. We also added straight lines to highlight the different behaviors present in the system during the time evolution.</p>
Full article ">Figure 3
<p>Probability density maps for a pair of initial conditions with <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.10</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>. In (<b>a</b>,<b>b</b>), the kernel <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mo>=</mo> <msub> <mi>k</mi> <mi>γ</mi> </msub> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mi>γ</mi> </mrow> </msup> <mo>/</mo> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>−</mo> <mi>γ</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> was used and in (<b>c</b>,<b>d</b>), the kernel <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mo>=</mo> <msubsup> <mi>k</mi> <mi>γ</mi> <mo>′</mo> </msubsup> <msup> <mi>e</mi> <mrow> <mo>−</mo> <msup> <mi>γ</mi> <mo>′</mo> </msup> <mi>t</mi> </mrow> </msup> </mrow> </semantics></math> was used. For simplicity, we consider <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>k</mi> <mi>f</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> for all systems. 
Note that <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> governed by a power-law is less diffusive than <math display="inline"><semantics> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> </semantics></math> governed by an exponential.</p>
Full article ">Figure 4
<p>(<b>a</b>,<b>b</b>) show the behavior of the entropy and (<b>c</b>,<b>d</b>) show the behavior of Equation (<a href="#FD44-entropy-26-00673" class="html-disp-formula">44</a>) for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>1.3</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, and different values of <math display="inline"><semantics> <msub> <mi>k</mi> <mi>γ</mi> </msub> </semantics></math>. We considered, for simplicity, <math display="inline"><semantics> <mrow> <mi>ρ</mi> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mn>0</mn> <mo>)</mo> <mo>=</mo> <mi>δ</mi> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> </semantics></math> for the initial condition.</p>
Full article ">Figure 5
<p>(<b>a</b>,<b>b</b>) show the behavior of entropy for the power-law kernel, i.e., <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>k</mi> <mi>γ</mi> </msub> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mi>γ</mi> </mrow> </msup> <mo>/</mo> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>−</mo> <mi>γ</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, in the absence of external forces, for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>1.3</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.02</mn> </mrow> </semantics></math>. (<b>c</b>,<b>d</b>) show the behavior of Equation (<a href="#FD44-entropy-26-00673" class="html-disp-formula">44</a>) for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>1.3</mn> </mrow> </semantics></math>. 
We consider <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, and different values of <math display="inline"><semantics> <msub> <mi>k</mi> <mi>γ</mi> </msub> </semantics></math>. We considered, for simplicity, <math display="inline"><semantics> <mrow> <mi>ρ</mi> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mn>0</mn> <mo>)</mo> <mo>=</mo> <mi>δ</mi> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> </semantics></math> for the initial condition.</p>
Full article ">Figure 6
<p>This figure shows the behavior of entropy and Equation (<a href="#FD44-entropy-26-00673" class="html-disp-formula">44</a>) for the exponential <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msubsup> <mi>k</mi> <mi>γ</mi> <mo>′</mo> </msubsup> <msup> <mi>e</mi> <mrow> <mo>−</mo> <msup> <mi>γ</mi> <mo>′</mo> </msup> <mi>t</mi> </mrow> </msup> </mrow> </semantics></math> (<b>a</b>,<b>c</b>) and power-law <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>k</mi> <mi>γ</mi> </msub> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mi>γ</mi> </mrow> </msup> <mo>/</mo> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>−</mo> <mi>γ</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, (<b>b</b>,<b>d</b>) kernels in the absence of external forces. We consider <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>0.25</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, and different values of <math display="inline"><semantics> <msub> <mi>τ</mi> <mi>c</mi> </msub> </semantics></math>. 
We considered, for simplicity, <math display="inline"><semantics> <mrow> <mi>ρ</mi> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mn>0</mn> <mo>)</mo> <mo>=</mo> <mi>δ</mi> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> </semantics></math> for the initial condition.</p>
Full article ">Figure A1
<p>(<b>a</b>,<b>b</b>) illustrate the behavior of Equation (<a href="#FD15-entropy-26-00673" class="html-disp-formula">15</a>) in the absence of external forces. (<b>c</b>,<b>d</b>) show the mean square displacement <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>〈</mo> <msup> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>−</mo> <mo>〈</mo> <mi>x</mi> <mo>〉</mo> </mfenced> <mn>2</mn> </msup> <mo>〉</mo> </mrow> </mrow> </semantics></math>, for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>1.3</mn> </mrow> </semantics></math>. We consider <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msubsup> <mi>k</mi> <mi>γ</mi> <mo>′</mo> </msubsup> <msup> <mi>e</mi> <mrow> <mo>−</mo> <msup> <mi>γ</mi> <mo>′</mo> </msup> <mi>t</mi> </mrow> </msup> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, and different values of <math display="inline"><semantics> <msub> <mi>k</mi> <mi>γ</mi> </msub> </semantics></math>. 
We also added straight lines to highlight the different behaviors present in the system during the time evolution.</p>
Full article ">Figure A2
<p>(<b>a</b>,<b>b</b>) illustrate the behavior of Equation (<a href="#FD15-entropy-26-00673" class="html-disp-formula">15</a>) in the absence of external forces. (<b>c</b>,<b>d</b>) show the mean square displacement <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>〈</mo> <msup> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>−</mo> <mo>〈</mo> <mi>x</mi> <mo>〉</mo> </mfenced> <mn>2</mn> </msup> <mo>〉</mo> </mrow> </mrow> </semantics></math>, for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>1.3</mn> </mrow> </semantics></math>. We consider <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>k</mi> <mi>γ</mi> </msub> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mi>γ</mi> </mrow> </msup> <mo>/</mo> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>−</mo> <mi>γ</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, and different values of <math display="inline"><semantics> <msub> 
<mi>k</mi> <mi>γ</mi> </msub> </semantics></math>. We also added straight lines to highlight the different behaviors present in the system during the time evolution.</p>
Full article ">Figure A3
<p>This figure shows the behavior of Equation (<a href="#FD15-entropy-26-00673" class="html-disp-formula">15</a>) in the absence of external forces, and the mean square displacement <math display="inline"><semantics> <mrow> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <mrow> <mo>〈</mo> <msup> <mfenced separators="" open="(" close=")"> <mi>x</mi> <mo>−</mo> <mo>〈</mo> <mi>x</mi> <mo>〉</mo> </mfenced> <mn>2</mn> </msup> <mo>〉</mo> </mrow> </mrow> </semantics></math>, for <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>k</mi> <mi>γ</mi> </msub> <mo>=</mo> <mn>2.5</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.02</mn> </mrow> </semantics></math>. 
We consider <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>h</mi> <mi>t</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msubsup> <mi>k</mi> <mi>γ</mi> <mo>′</mo> </msubsup> <msup> <mi>e</mi> <mrow> <mo>−</mo> <msup> <mi>γ</mi> <mo>′</mo> </msup> <mi>t</mi> </mrow> </msup> </mrow> </semantics></math> in (<b>a</b>,<b>c</b>) and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="script">K</mi> <mi>γ</mi> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>k</mi> <mi>γ</mi> </msub> <msup> <mi>t</mi> <mrow> <mo>−</mo> <mi>γ</mi> </mrow> </msup> <mo>/</mo> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>−</mo> <mi>γ</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> in (<b>b</b>,<b>d</b>), <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mrow> <mo>(</mo> <mi>ρ</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>ν</mi> <mi>D</mi> <msup> <mi>ρ</mi> <mrow> <mi>ν</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>, and different values of <math display="inline"><semantics> <mi>γ</mi> </semantics></math>.</p>
Full article ">
23 pages, 1900 KiB  
Review
Nonlinear Charge Transport and Excitable Phenomena in Semiconductor Superlattices
by Luis L. Bonilla, Manuel Carretero and Emanuel Mompó
Entropy 2024, 26(8), 672; https://doi.org/10.3390/e26080672 - 8 Aug 2024
Viewed by 263
Abstract
Semiconductor superlattices are periodic nanostructures consisting of epitaxially grown quantum wells and barriers. For thick barriers, the quantum wells are weakly coupled and the main transport mechanism is sequential resonant tunneling of electrons between wells. We review quantum transport in these materials, and the rate equations for electron densities, currents, and the self-consistent electric potential or field. Depending on superlattice configuration, doping density, temperature, voltage bias, and other parameters, superlattices behave as excitable systems, and can respond to abrupt dc bias changes by large transients involving charge density waves before arriving at a stable stationary state. For other parameters, the superlattices may have self-sustained oscillations of the current through them. These oscillations are due to repeated triggering and recycling of charge density waves, and can be periodic in time, quasiperiodic, or chaotic. By modifying the superlattice configuration, it is possible to attain robust chaos due to wave dynamics. External noise of appropriate strength can generate time-periodic current oscillations when the superlattice is in a stable stationary state without noise, a phenomenon called coherence resonance. In turn, these oscillations can resonate with a periodic signal in the presence of sufficient noise, thereby displaying stochastic resonance. These properties can be exploited to design and build many devices. Here, we describe detectors of weak signals by using coherence and stochastic resonance and fast generators of true random sequences useful for secure communications and storage. Full article
(This article belongs to the Special Issue Quantum Transport in Molecular Nanostructures)
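The rate-equation description summarized above (Ampère's law in each superlattice period, a discrete Poisson equation for the well charges, and a tunneling current with negative differential velocity) can be put in a minimal dimensionless form. The drift-velocity shape `v(E)`, the parameter values, and the crude emitter boundary below are illustrative assumptions for a sketch, not the calibrated model used in the reviewed work:

```python
import numpy as np

def v(E):
    # Illustrative drift velocity with a region of negative differential conductivity
    return E / (1.0 + E**2)

def simulate(N=40, phi=1.2, nu=0.1, dt=1e-3, steps=20000, seed=0):
    """Dimensionless sequential-tunneling sketch: Ampere's law per period,
    discrete Poisson equation for the well charge, fixed dc voltage bias."""
    rng = np.random.default_rng(seed)
    E = phi + 0.01 * rng.standard_normal(N)  # field in each period, near the bias average
    E += phi - E.mean()                      # enforce the voltage constraint exactly
    J_hist = []
    for _ in range(steps):
        Em1 = np.concatenate(([E[0]], E[:-1]))  # crude emitter boundary: no charge in well 1
        n = 1.0 + nu * (E - Em1)                # discrete Poisson equation
        tunnel = v(E) * n                       # tunneling current density per barrier
        J = tunnel.mean()                       # fixed bias => total current = spatial average
        E += dt * (J - tunnel)                  # Ampere's law in each period
        J_hist.append(J)
    return E, np.array(J_hist)

E, J = simulate()
```

With a fixed dc bias, the mean field is conserved exactly by this update, mimicking the voltage constraint; in models of this family, stronger coupling and a tuned contact conductivity can produce the recycling charge-density-wave oscillations discussed in the review.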
Show Figures

Figure 1
<p>(<b>a</b>) Sketch of a voltage biased semiconductor superlattice. An epitaxially grown succession of alternate layers of two semiconductors is cut into a mesa, whose cross section is a square (or a circle) with sides measuring tens of microns. The semiconductor with smaller (larger) bandgap forms the QWs (QBs) of the superlattice conduction band. Here, QWs are 10 nm layers of GaAs negatively doped in their central part, and QBs are 4 nm undoped layers of AlGaAs. (<b>b</b>) Sketch of a stationary electric potential profile in the SSL conduction band, comprising a LFD followed by a charge accumulation domain wall and a HFD. In the LFD, sequential resonant tunneling of electrons is from the lowest subband to the lowest subband of the adjacent QW across the QB. In the HFD, electrons tunnel from lowest subband to first excited subband of the adjacent QW, followed by a fast scattering event that transfers electrons to the lowest subband of the same QW.</p>
Full article ">Figure 2
<p>(<b>a</b>) Tunneling current density versus field for a homogeneous field <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mi>i</mi> </msub> <mo>=</mo> <mi>F</mi> </mrow> </semantics></math> and density <math display="inline"><semantics> <mrow> <msub> <mi>n</mi> <mi>i</mi> </msub> <mo>=</mo> <msub> <mi>N</mi> <mi>D</mi> </msub> </mrow> </semantics></math> showing constant solutions <math display="inline"><semantics> <mrow> <msup> <mi>F</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msup> <mrow> <mo>(</mo> <mi>J</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mn>3</mn> </mrow> </semantics></math> of <math display="inline"><semantics> <mrow> <msub> <mi>J</mi> <mrow> <mi>i</mi> <mo>→</mo> <mi>i</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>=</mo> <mi>J</mi> </mrow> </semantics></math>. (<b>b</b>) Tunneling current density versus voltage (<math display="inline"><semantics> <mrow> <msub> <mi>n</mi> <mi>i</mi> </msub> <mo>=</mo> <msub> <mi>N</mi> <mi>D</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mi>i</mi> </msub> <mo>=</mo> <mi>V</mi> </mrow> </semantics></math>) comparing the reference configuration (ref.) <math display="inline"><semantics> <mrow> <msub> <mi>d</mi> <mi>B</mi> </msub> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> nm, <math display="inline"><semantics> <mrow> <msub> <mi>d</mi> <mi>W</mi> </msub> <mo>=</mo> <mn>7</mn> </mrow> </semantics></math> nm to <math display="inline"><semantics> <msub> <mi>J</mi> <mrow> <mn>0</mn> <mo>→</mo> <mn>1</mn> </mrow> </msub> </semantics></math> in Equation (<a href="#FD8-entropy-26-00672" class="html-disp-formula">8</a>) (dot-dashed straight line) and to configurations having QWs with more or less monolayers (m.l.). 
The rhombus marks the critical current <math display="inline"><semantics> <msub> <mi>J</mi> <mi>cr</mi> </msub> </semantics></math> and voltage <math display="inline"><semantics> <msub> <mi>V</mi> <mi>cr</mi> </msub> </semantics></math> at which the contact Ohm’s law intersects the reference configuration. When the current surpasses <math display="inline"><semantics> <msub> <mi>J</mi> <mi>cr</mi> </msub> </semantics></math>, a new HFD is created at the emitter. Reprinted from E. Mompó, M. Carretero, L. L. Bonilla, Designing hyperchaos and intermittency in semiconductor superlattices, <span class="html-italic">Physical Review Letters</span> 127, 096601 (2021); <a href="https://doi.org/10.1103/PhysRevLett.127.096601" target="_blank">https://doi.org/10.1103/PhysRevLett.127.096601</a> [<a href="#B28-entropy-26-00672" class="html-bibr">28</a>].</p>
Full article ">Figure 3
<p>Velocities of wave fronts shown in the inset versus current bias, <span class="html-italic">I</span>. GaAs/AlAs SSL parameters are <math display="inline"><semantics> <mrow> <msub> <mi>d</mi> <mi>W</mi> </msub> <mo>=</mo> <mn>9</mn> </mrow> </semantics></math> nm, <math display="inline"><semantics> <mrow> <msub> <mi>d</mi> <mi>B</mi> </msub> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> nm, <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>D</mi> </msub> <mo>=</mo> <mn>1.5</mn> <mo>×</mo> <msup> <mn>10</mn> <mn>11</mn> </msup> </mrow> </semantics></math> cm<sup>−2</sup>, and cross section <math display="inline"><semantics> <mrow> <mn>1.13</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math> cm<sup>2</sup>. Courtesy of Andreas Wacker, appeared in [<a href="#B3-entropy-26-00672" class="html-bibr">3</a>].</p>
Full article ">Figure 4
<p>Numerically simulated sawtooth current–voltage characteristic and current response vs. time of a 40-well AlAs/GaAs superlattice (<math display="inline"><semantics> <mrow> <msub> <mi>d</mi> <mi>B</mi> </msub> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> nm, <math display="inline"><semantics> <mrow> <msub> <mi>d</mi> <mi>W</mi> </msub> <mo>=</mo> <mn>9</mn> </mrow> </semantics></math> nm). Upper branches correspond to voltage up-sweep, lower branches to down-sweep. The arrows in (<b>a</b>) indicate the starting and end points of imposed voltage steps. (<b>b</b>) gives an enlarged view of the initial operating point <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mi>i</mi> </msub> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math> V (box), as well as of different final points (circles) below and above the voltage threshold for triggering a large excursion of the current. (<b>c</b>) Current vs. time for different initial positive voltage steps. (<b>d</b>) Same for negative voltage steps. For clarity, the curves are shifted vertically in units of 20 μA in (<b>c</b>) and 30 μA in (<b>d</b>). Reprinted from [<a href="#B54-entropy-26-00672" class="html-bibr">54</a>].</p>
Full article ">Figure 5
<p>Response of the current (<b>a</b>) and evolution of electron densities (<b>b</b>,<b>c</b>) for different values of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>V</mi> </mrow> </semantics></math>. Reprinted from [<a href="#B54-entropy-26-00672" class="html-bibr">54</a>].</p>
Full article ">Figure 6
<p>Evolution of the current (<b>a</b>,<b>b</b>) and of the electric field profile (<b>c</b>,<b>d</b>) during typical high- and low-frequency SSOCs on the left and right panels, respectively. High frequency oscillations correspond to a supercritical Hopf bifurcation (left panel), whereas low frequency oscillations appear near the SNIPER bifurcation (right panel). Reprinted from the supplementary material of [<a href="#B29-entropy-26-00672" class="html-bibr">29</a>].</p>
Full article ">Figure 7
<p>(<b>a</b>) Current–voltage diagram of the numerically simulated AlGaAs/GaAs SSL. Maximum, minimum and time-averaged values of the current are shown for voltages on the interval of SSOC. (<b>b</b>) Frequency of the SSOC near the voltage <math display="inline"><semantics> <msub> <mi>V</mi> <mi>SNIPER</mi> </msub> </semantics></math> that shows the square root dependence characterizing a SNIPER bifurcation.</p>
Full article ">Figure 8
<p>Coherence resonance: (<b>a</b>–<b>e</b>) <span class="html-italic">ac</span> components of the time dependent current, and (<b>f</b>–<b>j</b>) the corresponding frequency spectra (the triangle marks the interspike average frequency) for different noise amplitudes at <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mi>dc</mi> </msub> <mo>=</mo> <mn>0.373</mn> </mrow> </semantics></math> V. Values of <math display="inline"><semantics> <msubsup> <mi>V</mi> <mi>noise</mi> <mi>rms</mi> </msubsup> </semantics></math> are 3, 4, 6, 8, and 10 mV. Current traces have been shifted to have zero current at the stationary state. (<b>k</b>) Fourier spectra of the current traces <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math> for different values of the bandlimited white noise RMS amplitude. Darker (brighter) colors represent higher (lower) frequency amplitudes (in arbitrary units). The frequency associated with the mean interspike time is represented by a dashed red line. (<b>l</b>) Normalized standard deviation versus <math display="inline"><semantics> <msubsup> <mi>V</mi> <mi>noise</mi> <mi>rms</mi> </msubsup> </semantics></math> noise (<math display="inline"><semantics> <mrow> <msub> <mi>η</mi> <mi>th</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>). Inset: mean interspike interval versus <math display="inline"><semantics> <msubsup> <mi>V</mi> <mi>noise</mi> <mi>rms</mi> </msubsup> </semantics></math>. The vertical asymptotes (dashed lines) occur at <math display="inline"><semantics> <mrow> <msubsup> <mi>V</mi> <mi>noise</mi> <mi>rms</mi> </msubsup> <mo>=</mo> <mn>2.494</mn> </mrow> </semantics></math> mV.</p>
Full article ">Figure 9
<p>Stochastic resonance: (<b>a</b>–<b>j</b>) are as in <a href="#entropy-26-00672-f008" class="html-fig">Figure 8</a>, but now a 15 MHz <span class="html-italic">ac</span> signal with <math display="inline"><semantics> <mrow> <msubsup> <mi>V</mi> <mi>sin</mi> <mi>rms</mi> </msubsup> <mo>=</mo> <mn>1.8</mn> </mrow> </semantics></math> mV has been added. The values of <math display="inline"><semantics> <msubsup> <mi>V</mi> <mi>noise</mi> <mi>rms</mi> </msubsup> </semantics></math> are 4, 6, 7, 8, and 10 mV. (<b>k</b>) Fourier spectra of <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math> for different values of <math display="inline"><semantics> <msubsup> <mi>V</mi> <mi>noise</mi> <mi>rms</mi> </msubsup> </semantics></math>. Darker (brighter) colors represent higher (lower) frequency amplitudes (in arbitrary units). The frequency associated with the mean interspike time is represented by a dashed red line and the CR frequency is indicated by a dashed black line. (<b>l</b>) Values of <math display="inline"><semantics> <msubsup> <mi>V</mi> <mi>noise</mi> <mi>rms</mi> </msubsup> </semantics></math> needed to trigger periodic SSOC versus <math display="inline"><semantics> <msubsup> <mi>V</mi> <mi>sin</mi> <mi>rms</mi> </msubsup> </semantics></math>.</p>
Full article ">Figure 10
<p>Poincaré maps from (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mn>42</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>V</mi> <mo>˙</mo> </mover> <mn>42</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, (<b>c</b>) Lyapunov exponents, and (<b>d</b>) Fourier spectrum as functions of <math display="inline"><semantics> <msub> <mi>V</mi> <mi>dc</mi> </msub> </semantics></math> for the modified SSL with <math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>i</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>30</mn> </mrow> </semantics></math>. There are jumps between periodic attractors at <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mrow> <mi>d</mi> <mi>c</mi> </mrow> </msub> <mo>=</mo> <mn>1.3</mn> <mo> </mo> <mi mathvariant="normal">V</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mrow> <mi>d</mi> <mi>c</mi> </mrow> </msub> <mo>=</mo> <mn>1.43</mn> <mo> </mo> <mi mathvariant="normal">V</mi> </mrow> </semantics></math> (Poincaré map) corresponding to quasi-periodic attractors with different incommensurate frequencies (Fourier spectrum). 
There is hyperchaos (2 positive Lyapunov exponents of comparable magnitude) for <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mrow> <mi>d</mi> <mi>c</mi> </mrow> </msub> <mo>&lt;</mo> <mn>1.08</mn> <mo> </mo> <mi mathvariant="normal">V</mi> </mrow> </semantics></math> and intermittent chaos for <math display="inline"><semantics> <mrow> <msub> <mi>V</mi> <mrow> <mi>d</mi> <mi>c</mi> </mrow> </msub> <mo>&gt;</mo> <mn>1.08</mn> <mo> </mo> <mi mathvariant="normal">V</mi> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mn>1</mn> </msub> <mo>≫</mo> <msub> <mi>λ</mi> <mn>2</mn> </msub> <mo>≈</mo> <mn>0</mn> </mrow> </semantics></math>). Reprinted from E. Mompó, M. Carretero, L. L. Bonilla, Designing hyperchaos and intermittency in semiconductor superlattices, <span class="html-italic">Physical Review Letters</span> 127, 096601 (2021); <a href="https://doi.org/10.1103/PhysRevLett.127.096601" target="_blank">https://doi.org/10.1103/PhysRevLett.127.096601</a> [<a href="#B28-entropy-26-00672" class="html-bibr">28</a>].</p>
Full article ">
18 pages, 22304 KiB  
Article
A High-Performance FPGA PRNG Based on Multiple Deep-Dynamic Transformations
by Shouliang Li, Zichen Lin, Yi Yang and Ruixuan Ning
Entropy 2024, 26(8), 671; https://doi.org/10.3390/e26080671 - 7 Aug 2024
Viewed by 287
Abstract
Pseudo-random number generators (PRNGs) are important cornerstones of many fields, such as statistical analysis and cryptography, and the need for PRNGs for information security (in fields such as blockchain, big data, and artificial intelligence) is becoming increasingly prominent, resulting in a steadily growing [...] Read more.
Pseudo-random number generators (PRNGs) are important cornerstones of many fields, such as statistical analysis and cryptography, and the need for PRNGs for information security (in fields such as blockchain, big data, and artificial intelligence) is becoming increasingly prominent, resulting in a steadily growing demand for high-speed, high-quality random number generators. To meet this demand, the multiple deep-dynamic transformation (MDDT) algorithm is innovatively developed. This algorithm is incorporated into the skewed tent map, endowing it with more complex dynamical properties. The improved one-dimensional discrete chaotic mapping method is effectively realized on a field-programmable gate array (FPGA), specifically the Xilinx xc7k325tffg900-2 model. The proposed pseudo-random number generator (PRNG) successfully passes all evaluations of the National Institute of Standards and Technology (NIST) SP800-22, diehard, and TestU01 test suites. Additional experimental results show that the proposed PRNG operates efficiently at a clock frequency of 150 MHz, achieving a maximum throughput of 14.4 Gbps. This performance not only surpasses that of most related studies but also makes it exceptionally suitable for embedded applications. Full article
(This article belongs to the Section Multidisciplinary Applications)
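For readers unfamiliar with tent-map PRNGs, the baseline skew tent iteration is easy to sketch in software. This is a floating-point illustration only: the paper's MDDT transformation, fixed-point arithmetic, and FPGA pipeline are not reproduced, and the byte-extraction rule is an assumption:

```python
def skew_tent(x, p):
    # Skew tent map: piecewise-linear, expanding, uniform invariant density on [0, 1)
    return x / p if x < p else (1.0 - x) / (1.0 - p)

def prng_bytes(n, x0=0.123456789, p=0.499):
    """Return n bytes from the skew tent trajectory (floating-point sketch;
    the paper's MDDT transformation and FPGA fixed-point pipeline are omitted)."""
    x = x0
    out = bytearray()
    for _ in range(n):
        x = skew_tent(x, p)
        if not 0.0 < x < 1.0:        # guard against degenerate endpoints in floats
            x = 0.1234321
        out.append(int(x * 256) & 0xFF)
    return bytes(out)
```

Because the skew tent map's invariant density is uniform, the byte stream from this sketch is roughly uniform on average, though a plain floating-point iteration like this would not pass the statistical batteries the paper targets.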
Show Figures

Figure 1
<p>Lyapunov exponents of the MDDT system (3D), where the color of the figure represents the value of the angle.</p>
Full article ">Figure 2
<p>Bifurcation Diagram of the MDDT System (3D).</p>
Full article ">Figure 3
<p>(<b>a</b>) Chaotic trajectories of the MDDT system (<math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0.45</mn> <mo>,</mo> <mi>p</mi> <mo>=</mo> <mn>0.61</mn> </mrow> </semantics></math>); (<b>b</b>) chaotic trajectories of the skew tent system (<math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0.499</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 4
<p>(<b>a</b>) Sample entropy: MDDT (<math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math>) vs. skew tent map; (<b>b</b>) permutation entropy: MDDT (<math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0.45</mn> </mrow> </semantics></math>) vs. skew tent map.</p>
Full article ">Figure 5
<p>Schematic diagram of the data exchange of the random number part of the FPGA.</p>
Full article ">Figure 6
<p>Hardware design structure of the <span class="html-italic">X</span> update logic implemented on FPGA.</p>
Full article ">Figure 7
<p>Implementation of <span class="html-italic">P</span> update logic on FPGA.</p>
Full article ">Figure 8
<p>Calculation logic for <math display="inline"><semantics> <mrow> <mi>o</mi> <mi>u</mi> <mi>t</mi> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>o</mi> <mi>u</mi> <mi>t</mi> <mn>2</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Hardware resource utilization for various random number generators [<a href="#B1-entropy-26-00671" class="html-bibr">1</a>,<a href="#B16-entropy-26-00671" class="html-bibr">16</a>,<a href="#B19-entropy-26-00671" class="html-bibr">19</a>,<a href="#B21-entropy-26-00671" class="html-bibr">21</a>,<a href="#B23-entropy-26-00671" class="html-bibr">23</a>,<a href="#B24-entropy-26-00671" class="html-bibr">24</a>,<a href="#B26-entropy-26-00671" class="html-bibr">26</a>,<a href="#B27-entropy-26-00671" class="html-bibr">27</a>,<a href="#B28-entropy-26-00671" class="html-bibr">28</a>].</p>
Full article ">Figure 10
<p>(<b>a</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>a</mi> <mi>n</mi> <mi>d</mi> <mi>o</mi> <msub> <mi>m</mi> <mi>n</mi> </msub> <mo>−</mo> <mi>R</mi> <mi>a</mi> <mi>n</mi> <mi>d</mi> <mi>o</mi> <msub> <mi>m</mi> <mrow> <mi>n</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math> trajectory points. (<b>b</b>) Histogram of random bitstreams.</p>
Full article ">
21 pages, 359 KiB  
Article
Revisiting Possibilistic Fuzzy C-Means Clustering Using the Majorization-Minimization Method
by Yuxue Chen and Shuisheng Zhou
Entropy 2024, 26(8), 670; https://doi.org/10.3390/e26080670 - 6 Aug 2024
Viewed by 315
Abstract
Possibilistic fuzzy c-means (PFCM) clustering is a kind of hybrid clustering method based on fuzzy c-means (FCM) and possibilistic c-means (PCM), which not only has the stability of FCM but also partly inherits the robustness of PCM. However, as an extension of FCM [...] Read more.
Possibilistic fuzzy c-means (PFCM) clustering is a kind of hybrid clustering method based on fuzzy c-means (FCM) and possibilistic c-means (PCM), which not only has the stability of FCM but also partly inherits the robustness of PCM. However, as an extension of FCM on the objective function, PFCM tends to find a suboptimal local minimum, which affects its performance. In this paper, we rederive PFCM using the majorization-minimization (MM) method, which is a new derivation approach not seen in other studies. In addition, we propose an effective optimization method to solve the above problem, called MMPFCM. Firstly, by eliminating the variable V ∈ ℝ^(p×c), the original optimization problem is transformed into a simplified model with fewer variables but a proportional term. We therefore introduce a new intermediate variable s ∈ ℝ^c to convert the model with the proportional term into an easily solvable equivalent form. Subsequently, we design an iterative sub-problem using the MM method. The complexity analysis indicates that MMPFCM and PFCM share the same computational complexity. However, MMPFCM requires less memory per iteration. Extensive experiments, including objective function value comparison and clustering performance comparison, demonstrate that MMPFCM converges to a better local minimum compared to PFCM. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
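As background for the objective being optimized, plain FCM (the fuzzy half of PFCM) fits in a few lines. This sketch omits the possibilistic typicality term and the paper's MM surrogate; the function name and parameters are illustrative:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means (the FCM half of PFCM); the possibilistic term
    and the MM surrogate of MMPFCM are intentionally omitted."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                                   # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        V = Um @ X / Um.sum(axis=1, keepdims=True)       # weighted cluster centers
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1) + 1e-12
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0)                               # standard FCM membership update
    return U, V
```

On two well-separated blobs, this alternating scheme recovers the blob centers; PFCM augments the same objective with typicalities, which is where the suboptimal-local-minimum issue the paper addresses arises.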
Show Figures

Figure 1
<p>Mean and standard deviation of F*, as well as Purity, for different values of <span class="html-italic">K</span> on five real-world datasets, where <span class="html-italic">K</span> is the number of iterations in the inner loop. (<b>a</b>) Mean and standard deviation of F*. (<b>b</b>) Mean and standard deviation of Purity.</p>
Full article ">Figure 2
<p>Box plot of the differences in objective function values across twelve real-world datasets under the same initialization conditions.</p>
Full article ">Figure 3
<p>Convergence curves of PFCM and MMPFCM on twelve real-world datasets, where the two methods share the same initialization.</p>
Full article ">Figure 4
<p>Plot of the corresponding times of the different algorithms on twelve real-world datasets.</p>
Full article ">
18 pages, 662 KiB  
Article
Bilateral Matching Method for Business Resources Based on Synergy Effects and Incomplete Data
by Shuhai Wang, Linfu Sun and Yang Yu
Entropy 2024, 26(8), 669; https://doi.org/10.3390/e26080669 - 6 Aug 2024
Viewed by 298
Abstract
On the third-party cloud platform, to help enterprises accurately obtain high-quality and valuable business resources from the massive information resources, a bilateral matching method for business resources, based on synergy effects and incomplete data, is proposed. The method first utilizes a k-nearest neighbor [...] Read more.
To help enterprises on third-party cloud platforms accurately obtain high-quality and valuable business resources from massive information resources, a bilateral matching method for business resources, based on synergy effects and incomplete data, is proposed. The method first utilizes a k-nearest neighbor imputation algorithm, based on comprehensive similarity, to fill in missing values. Then, it constructs a satisfaction evaluation index system for business resource suppliers and demanders, and the weights of the satisfaction evaluation indices are determined based on the fuzzy analytic hierarchy process (FAHP) and the entropy weighting method (EWM). On this basis, a bilateral matching model is constructed with the objectives of maximizing the satisfaction of both the supplier and the demander, as well as achieving the synergy effect. Finally, the model is solved using the linear weighting method to obtain the most satisfactory business resources for both supply and demand. The effectiveness of the method is verified through a practical application and comparative experiments. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
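The entropy weighting method (EWM) used in the index-weighting step has a standard closed form. Below is a minimal sketch for benefit-type criteria; it does not include the FAHP combination used in the paper:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an n x m decision matrix of benefit criteria
    (textbook form; the paper's FAHP combination is not included)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0, keepdims=True)        # column-normalized proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    E = -plogp.sum(axis=0) / np.log(n)          # per-criterion entropy in [0, 1]
    d = 1.0 - E                                 # divergence: low entropy => informative
    return d / d.sum()                          # weights sum to 1
```

A criterion whose values are identical across alternatives has maximal entropy and receives (near-)zero weight, which is exactly the information-content rationale for combining EWM with subjective FAHP weights.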
Show Figures

Figure 1
<p>The structure of the proposed method.</p>
Full article ">Figure 2
<p>The results of <math display="inline"><semantics> <mrow> <mi>S</mi> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>S</mi> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>F</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Comparison of different algorithms on different datasets.</p>
Full article ">Figure 4
<p>Comparative analysis of business resource matching quality.</p>
Full article ">
19 pages, 991 KiB  
Article
Effect of Pure Dephasing Quantum Noise in the Quantum Search Algorithm Using Atos Quantum Assembly
by Maria Heloísa Fraga da Silva, Gleydson Fernandes de Jesus and Clebson Cruz
Entropy 2024, 26(8), 668; https://doi.org/10.3390/e26080668 - 6 Aug 2024
Viewed by 368
Abstract
Quantum computing is tipped to lead the future of global technological progress. However, the obstacles related to quantum software development are an actual challenge to overcome. In this scenario, this work presents an implementation of the quantum search algorithm in Atos Quantum Assembly [...] Read more.
Quantum computing is tipped to lead the future of global technological progress. However, the obstacles related to quantum software development remain a real challenge to overcome. In this scenario, this work presents an implementation of the quantum search algorithm in Atos Quantum Assembly Language (AQASM) using the myQLM (my Quantum Learning Machine) quantum software stack and the Quantum Learning Machine (QLM) programming development platform. We present the creation of a virtual quantum processor whose configurable architecture allows the analysis of induced quantum noise effects on quantum algorithms. The codes are available throughout the manuscript so that readers can replicate them and apply the methods discussed in this article to solve their own quantum computing projects. The presented results are consistent with theoretical predictions and demonstrate that AQASM and QLM are powerful tools for building, implementing, and simulating quantum hardware. Full article
(This article belongs to the Special Issue Quantum Computing in the NISQ Era)
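Grover's amplitude amplification can be checked without quantum hardware by evolving the state vector directly. The NumPy sketch below is a stand-in for the AQASM/myQLM circuit, with the oracle and diffusion operator applied as vector operations:

```python
import numpy as np

def grover_success_prob(n_qubits, marked=1, iterations=None):
    """State-vector Grover search over N = 2**n_qubits items
    (plain NumPy stand-in for the AQASM/myQLM circuit)."""
    N = 2 ** n_qubits
    if iterations is None:
        iterations = int(np.floor(np.pi / 4.0 * np.sqrt(N)))  # optimal iteration count
    psi = np.full(N, 1.0 / np.sqrt(N))   # uniform superposition after the Hadamards
    for _ in range(iterations):
        psi[marked] *= -1.0              # oracle: phase-flip the marked item
        psi = 2.0 * psi.mean() - psi     # diffusion: inversion about the mean
    return float(psi[marked] ** 2)       # probability of measuring the target

p = grover_success_prob(4)  # 16-item database, 3 Grover iterations
```

For a 16-item database, the optimal 3 iterations give a success probability of about 96%, in line with the noise-free simulation reported in the abstract.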
Show Figures

Figure 1
<p>Sketch of the four steps of Grover’s algorithm along with the evolution of the probability amplitudes of each element of the 4-qubit computational basis.</p>
Full article ">Figure 2
<p>Grover’s circuit using myQLM (abbreviated).</p>
Full article ">Figure 3
<p>Probability distribution for the 16 items of the database. The searched item is found with 95.86% probability, while no other item individually reaches 0.5% probability.</p>
Full article ">Figure 4
<p>Sketch of the 5-qubit simulated topology. All qubits are coupled for simplicity since they can all be concurrently controlled or targeted by controlled quantum operations.</p>
Full article ">Figure 5
<p>The decomposition of (<b>a</b>) multi-controlled <math display="inline"><semantics> <mi mathvariant="monospace">Z</mi> </semantics></math> gate (<math display="inline"><semantics> <mi mathvariant="monospace">CCCZ</mi> </semantics></math>) using (<b>b</b>) <math display="inline"><semantics> <mi mathvariant="monospace">Hadamard</mi> </semantics></math> and <math display="inline"><semantics> <mi mathvariant="monospace">CCCNOT</mi> </semantics></math> gates or (<b>c</b>) <math display="inline"><semantics> <mi mathvariant="monospace">Hadamard</mi> </semantics></math> and <math display="inline"><semantics> <mi mathvariant="monospace">CCNOT</mi> </semantics></math> gates.</p>
Full article ">Figure 6
<p>Simulation with noise using <math display="inline"><semantics> <msub> <mi>T</mi> <mi>ϕ</mi> </msub> </semantics></math> when <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math> is equal to 1000 ns and <math display="inline"><semantics> <msub> <mi>T</mi> <mn>2</mn> </msub> </semantics></math> is equal to (<b>a</b>) 1000 ns, (<b>b</b>) 750 ns, and (<b>c</b>) 500 ns.</p>
Full article ">Figure 7
<p>Probability distribution for item <math display="inline"><semantics> <mrow> <mo>|</mo> <mrow> <mi>B</mi> <mi>l</mi> <mi>u</mi> <mi>e</mi> </mrow> <mo>〉</mo> </mrow> </semantics></math> as a function of pure dephasing times, <math display="inline"><semantics> <msub> <mi>T</mi> <mi>ϕ</mi> </msub> </semantics></math>. The dashed blue line highlights the probability of the sought state (95.86%) obtained in the noise-free simulation presented in <a href="#entropy-26-00668-f003" class="html-fig">Figure 3</a>.</p>
Full article ">
27 pages, 456 KiB  
Article
A Higher Performance Data Backup Scheme Based on Multi-Factor Authentication
by Lingfeng Wu, Yunhua Wen and Jinghai Yi
Entropy 2024, 26(8), 667; https://doi.org/10.3390/e26080667 - 5 Aug 2024
Viewed by 305
Abstract
Remote data backup technology avoids the risk of data loss and tampering, and has higher security compared to local data backup solutions. However, the data transmission channel for remote data backup is not secure, and the backup server cannot be fully trusted, so [...] Read more.
Remote data backup technology avoids the risk of data loss and tampering, and offers higher security than local data backup solutions. However, the data transmission channel for remote data backup is not secure, and the backup server cannot be fully trusted, so users usually encrypt the data before uploading it to the remote server. As a result, protecting this encryption key is crucial. We design a User-Centric Design (UCD) data backup scheme based on multi-factor authentication to protect this encryption key. Our scheme utilizes a secret sharing scheme to divide the encryption key into three parts, which are stored in the laptop, the smart card, and the server. The encryption key can be easily reconstructed from any two parts together with the user's private information: password, identity, and biometrics. As long as the biometrics has enough entropy, our scheme can resist replay attacks, user impersonation attacks, server impersonation attacks, malicious servers, and offline password guessing attacks. Full article
(This article belongs to the Special Issue Information Security and Data Privacy)
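A (2, 3)-threshold secret sharing of the encryption key, the core primitive of the scheme, can be sketched with a degree-1 Shamir polynomial. The field size and share points below are illustrative assumptions, and the binding to password, identity, and biometrics described in the abstract is omitted:

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for the key sizes sketched here

def split_2_of_3(key_int):
    """Split an integer secret (0 <= key_int < P) into 3 Shamir shares,
    any 2 of which reconstruct it (degree-1 polynomial over GF(P)).
    The paper's binding to password/identity/biometrics is not modeled."""
    a1 = secrets.randbelow(P)                      # random slope of the polynomial
    return [(x, (key_int + a1 * x) % P) for x in (1, 2, 3)]

def reconstruct(share_a, share_b):
    (x1, y1), (x2, y2) = share_a, share_b
    inv = pow(x2 - x1, -1, P)                      # modular inverse (Python 3.8+)
    a1 = ((y2 - y1) * inv) % P                     # recover the slope
    return (y1 - a1 * x1) % P                      # evaluate the polynomial at 0
```

Any single share reveals nothing about the key (the slope is uniformly random), which is why losing the laptop, the smart card, or the server copy alone does not compromise the backup.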
Show Figures

Figure 1
<p>Model of our data backup scheme.</p>
Full article ">
24 pages, 4949 KiB  
Article
Assessment of Fractal Synchronization during an Epileptic Seizure
by Oleg Gorshkov and Hernando Ombao
Entropy 2024, 26(8), 666; https://doi.org/10.3390/e26080666 - 5 Aug 2024
Viewed by 288
Abstract
In this paper, we define fractal synchronization (FS) based on the idea of stochastic synchronization and propose a mathematical apparatus for estimating FS. One major advantage of our proposed approach is that fractal synchronization makes it possible to estimate the [...] Read more.
In this paper, we define fractal synchronization (FS) based on the idea of stochastic synchronization and propose a mathematical apparatus for estimating FS. One major advantage of our proposed approach is that fractal synchronization makes it possible to estimate the aggregate strength of the connection on multiple time scales between two projections of the attractor, which are time series with a fractal structure. We believe that one of the promising uses of FS is the assessment of the interdependence of electroencephalograms (EEGs). To demonstrate this approach to evaluating the cross-dependence between channels in an EEG network, we estimated the FS of EEG signals during an epileptic seizure. Fractal synchronization reveals the presence of desynchronization during an epileptic seizure. Full article
(This article belongs to the Special Issue Fractal and Multifractal Analysis of Complex Networks II)
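A quick way to see what "time series with a fractal structure" means operationally is to estimate a Hurst exponent from increment scaling. The simple estimator below is an illustrative assumption; it is not the FS measure defined in the paper, which compares such scaling across channels:

```python
import numpy as np

def hurst_from_increments(x, lags=range(2, 64)):
    """Estimate the Hurst exponent H from std(x[t+l] - x[t]) ~ l**H.
    A common quick estimator, not the FS measure defined in the paper."""
    lags = np.asarray(list(lags))
    tau = np.array([np.std(x[l:] - x[:-l]) for l in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(tau), 1)  # log-log regression slope
    return float(slope)

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(20000))  # Brownian motion: expect H near 0.5
```

Persistent signals (H > 0.5) and anti-persistent ones (H < 0.5) separate cleanly under this estimator, which is the kind of per-channel fractal characterization that FS then compares between EEG channels.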
Show Figures

Figure 1
<p>The projections of the Lorenz attractor <span class="html-italic">x</span>(<span class="html-italic">t</span>) (red), <span class="html-italic">y</span>(<span class="html-italic">t</span>) (green), <span class="html-italic">z</span>(<span class="html-italic">t</span>) (blue) on the <span class="html-italic">x</span>-subspace, <span class="html-italic">y</span>-subspace, and <span class="html-italic">z</span>-subspace and a point on the attractor with coordinates (<math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mn>0</mn> </msub> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>y</mi> <mn>0</mn> </msub> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <msub> <mi>z</mi> <mn>0</mn> </msub> </mrow> </semantics></math>).</p>
Full article ">Figure 2
<p>The time series generated by the random midpoint displacement algorithm for the Hurst exponents <span class="html-italic">H</span> = 0.1, <span class="html-italic">H</span> = 0.5, <span class="html-italic">H</span> = 0.7.</p>
Full article ">Figure 3
<p>The dependence of the power spectrum on the Hurst exponent for synthetic time series. The corresponding time series were generated using the random midpoint displacement algorithm for different Hurst exponents with a step of 0.1.</p>
Full article ">Figure 4
<p>The degree of fractal synchronization <span class="html-italic">FS</span> versus the coupling strength <span class="html-italic">C</span> for the system (18).</p>
Full article ">Figure 5
<p>The significance <span class="html-italic">S</span> of fractal synchronization <span class="html-italic">FS</span> versus the coupling strength <span class="html-italic">C</span> for the fractal synchronization <span class="html-italic">FS</span>, presented in <a href="#entropy-26-00666-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 6
<p>The disposition of the scalp electrodes. The abbreviations designate the disposition on the scalp: frontal polar (Fp), frontal (F), central (C), temporal (T), parietal (P), and occipital (O). Channel T3, which is highlighted in red, is the focus of the epileptic seizure.</p>
Full article ">Figure 7
<p>(<b>a</b>) The dynamic change in the average <span class="html-italic">FS</span> between the EEG signal of the T3 channel and the EEG signals of the right and left hemisphere brain channels during the epileptic seizure. (<b>b</b>) The dynamic change in the average <span class="html-italic">H</span> of the EEG signals of the right and left hemisphere brain channels during the epileptic seizure.</p>
Full article ">Figure 8
<p>Dynamic change in the average significance S between the EEG signal of the T3 channel and the EEG signals of the right and left hemisphere brain channels.</p>
Full article ">Figure 9
<p>The power spectrum during the epileptic seizure for channels (<b>a</b>) T3, (<b>b</b>) T4, (<b>c</b>) C3, (<b>d</b>) C4.</p>
Full article ">Figure 10
<p>Subject 34: seizure interval: (0–451) seconds. (<b>a</b>) The records of EEG signals in the considered channels during the interval (0, 3600) seconds; (<b>b</b>) dynamic change in the average <span class="html-italic">FS</span> between the EEG signals of the considered channels; (<b>c</b>) dynamic change in the average significance <span class="html-italic">S</span> between the EEG signals of the considered channels.</p>
Full article ">Figure 11
<p>Subject 36: seizure interval 1: (0–109) seconds; seizure interval 2: (187–528) seconds.(<b>a</b>) The records of EEG signals in the considered channels during the interval (0, 3600) seconds; (<b>b</b>) the dynamic change in the average <span class="html-italic">FS</span> between the EEG signals of the considered channels; (<b>c</b>) dynamic change in the average significance <span class="html-italic">S</span> between the EEG signals of the considered channels.</p>
Full article ">Figure 12
<p>Subject 13: seizure interval 1: (0–291) seconds; seizure interval 2: (799–1294) seconds.(<b>a</b>) The records of EEG signals in the considered channels during the interval (0, 3600) seconds; (<b>b</b>) the dynamic change in the average <span class="html-italic">FS</span> between the EEG signals of the considered channels; (<b>c</b>) the dynamic change in the average significance <span class="html-italic">S</span> between the EEG signals of the considered channels.</p>
Full article ">Figure 13
<p>Subject 78: seizure interval: (300–420) seconds;seizure interval: (840–900) seconds; seizure interval: (2220–2280) seconds; seizure interval: (2520–2580) seconds.(<b>a</b>) The records of EEG signals in the considered channels during the interval (0, 3600) seconds; (<b>b</b>) the dynamic change in the average <span class="html-italic">FS</span> between the EEG signals of the considered channels; (<b>c</b>) the dynamic change in the average significance <span class="html-italic">S</span> between the EEG signals of the considered channels.</p>
Full article ">
19 pages, 2444 KiB  
Article
Fractional Telegrapher’s Equation under Resetting: Non-Equilibrium Stationary States and First-Passage Times
by Katarzyna Górska, Francisco J. Sevilla, Guillermo Chacón-Acosta and Trifce Sandev
Entropy 2024, 26(8), 665; https://doi.org/10.3390/e26080665 - 5 Aug 2024
Viewed by 342
Abstract
We consider two different time fractional telegrapher’s equations under stochastic resetting. Using the integral decomposition method, we found the probability density functions and the mean squared displacements. In the long-time limit, the system approaches non-equilibrium stationary states, while the mean squared displacement saturates [...] Read more.
We consider two different time fractional telegrapher’s equations under stochastic resetting. Using the integral decomposition method, we find the probability density functions and the mean squared displacements. In the long-time limit, the system approaches non-equilibrium stationary states, while the mean squared displacement saturates due to the resetting mechanism. We also obtain the fractional telegraph process as a subordinated telegraph process by introducing an operational time such that the physical time is considered as a Lévy stable process whose characteristic function is the Lévy stable distribution. We also analyze the survival probability for the first-passage time problem and find the optimal resetting rate for which the corresponding mean first-passage time is minimal. Full article
(This article belongs to the Special Issue Theory and Applications of Hyperbolic Diffusion and Shannon Entropy)
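The saturation of the MSD under stochastic resetting noted in the abstract can be illustrated with a minimal Monte Carlo sketch for ordinary Brownian diffusion (the non-fractional limit); the diffusivity K, resetting rate r, and discretization below are illustrative assumptions, not the paper's parameters.

```python
import math
import random

def msd_with_resetting(r, K=1.0, dt=0.01, t_max=10.0, n_paths=1000, seed=1):
    """Monte Carlo mean squared displacement of 1-D Brownian motion with
    diffusivity K, reset to the origin at Poissonian rate r."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * K * dt)      # std of one Gaussian increment
    total = 0.0
    for _ in range(n_paths):
        x = 0.0
        for _ in range(int(t_max / dt)):
            if r > 0.0 and rng.random() < r * dt:
                x = 0.0                  # resetting event
            x += rng.gauss(0.0, sigma)
        total += x * x
    return total / n_paths
```

Without resetting the MSD grows as 2Kt, while for any r > 0 it saturates, here near the stationary value 2K/r of diffusion with resetting.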
Show Figures

Figure 1: MSD (25) under the effects of stochastic resetting for different values of the resetting rate, r = {0, 0.1, 1.0, 10.0}. The case r = 0 corresponds to the MSD (22).
Figure 2: MFPT (31) for x0 = 2, τ = 1, K = 1, with μ = 1/2 (blue solid line), μ = 3/4 (red dashed line), μ = 1 (black dotted line).
Figure 3: Optimal resetting rate r* versus μ, obtained by numerically solving Equation (32).
Figure 4: MSD (38) under the effects of stochastic resetting for different values of μ and of the resetting rate, r = {0, 0.1, 1.0, 10.0}. The case r = 0 corresponds to the MSD (36).
Figure 5: MFPT (40) for x0 = 2, τ = 1, K = 1, with μ = 1/2 (blue solid line), μ = 3/4 (red dashed line), μ = 1 (black dotted line).
Figure 6: Optimal resetting rate r* versus μ, obtained by numerically solving Equation (41).
Figure A1: Comparison of the efficiencies for FTE-I and FTE-II from Equations (30) and (39), for K = 1, x0 = 2, τ = 1.
14 pages, 605 KiB  
Article
A Hierarchical Multi-Task Learning Framework for Semantic Annotation in Tabular Data
by Jie Wu and Mengshu Hou
Entropy 2024, 26(8), 664; https://doi.org/10.3390/e26080664 - 4 Aug 2024
Viewed by 388
Abstract
To optimize the utilization and analysis of tables, it is essential to recognize and understand their semantics comprehensively. This requirement is especially critical given that many tables lack explicit annotations, necessitating the identification of column types and inter-column relationships. Such identification can significantly [...] Read more.
To optimize the utilization and analysis of tables, it is essential to recognize and understand their semantics comprehensively. This requirement is especially critical given that many tables lack explicit annotations, necessitating the identification of column types and inter-column relationships. Such identification can significantly augment data quality, streamline data integration, and support data analysis and mining. Current table annotation models often address each subtask independently, which may result in the neglect of constraints and contextual information, causing relational ambiguities and inference errors. To address this issue, we propose a unified multi-task learning framework capable of concurrently handling multiple tasks within a single model, including column named entity recognition, column type identification, and inter-column relationship detection. By integrating these tasks, the framework exploits their interrelations, facilitating the exchange of shallow features and the sharing of representations. Their cooperation enables each task to leverage insights from the others, thereby improving the performance of individual subtasks and enhancing the model’s overall generalization capabilities. Notably, our model is designed to employ only the internal information of tabular data, avoiding reliance on external context or knowledge graphs. This design ensures robust performance even with limited input information. Extensive experiments demonstrate the superior performance of our model across various tasks, validating the effectiveness of the unified multi-task learning framework in the recognition and comprehension of table semantics. Full article
(This article belongs to the Special Issue Natural Language Processing and Data Mining)
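As a rough illustration of the task hierarchy the abstract describes (a shared representation feeding NER, then column type annotation conditioned on NER, then inter-column relations conditioned on the types), here is a toy rule-based sketch; the "encoder", labels, and rules are invented stand-ins, not the paper's learned model.

```python
from collections import Counter

def encode(column_values):
    """Toy shared "encoder": character counts pooled over a column's cells
    (a stand-in for the paper's serialized-table text encoder)."""
    return Counter("".join(column_values).lower())

def ner_head(rep):
    """Column named entity recognition (NER): coarse entity class."""
    digits = sum(v for ch, v in rep.items() if ch.isdigit())
    alpha = sum(v for ch, v in rep.items() if ch.isalpha())
    return "NUMBER" if digits > alpha else "TEXT"

def cta_head(rep, ner_label):
    """Column type annotation (CTA), conditioned on the NER prediction."""
    if ner_label == "NUMBER":
        # toy rule: recent years mostly start with '1' or '2'
        return "year" if rep.get("1", 0) + rep.get("2", 0) >= 2 else "quantity"
    return "person" if " " in rep else "other"

def cra_head(cta_a, cta_b):
    """Inter-column relationship annotation (CRA), conditioned on the two
    CTA predictions."""
    return "publishedIn" if {cta_a, cta_b} == {"person", "year"} else "related"
```

Running the three heads in sequence over, say, an author column and a year column propagates each prediction downstream, which is the cascading structure the framework exploits.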
Show Figures

Figure 1: An example of table semantic annotation. The goal is to assign semantic tags to columns and column pairs within the table.
Figure 2: The architecture of the multi-task learning framework: a shared bottom encoder coupled with multiple associated classifiers. Once the table, serialized into text, is encoded, the representations of all columns are forwarded to the upper classifiers, and information is processed sequentially according to the task hierarchy of column named entity recognition (NER), column type annotation (CTA), and inter-column relationship annotation (CRA).
Figure 3: The training losses of the three subtasks.
Figure 4: Performance of training under different proportions of the dataset.
Figure 5: Case study on the HardTables2022 dataset, including the prediction results of different models for column types and inter-column relationships.
29 pages, 374 KiB  
Article
Exact Expressions for Kullback–Leibler Divergence for Multivariate and Matrix-Variate Distributions
by Victor Nawa and Saralees Nadarajah
Entropy 2024, 26(8), 663; https://doi.org/10.3390/e26080663 - 4 Aug 2024
Viewed by 321
Abstract
The Kullback–Leibler divergence is a measure of the divergence between two probability distributions, often used in statistics and information theory. However, exact expressions for it are not known for multivariate or matrix-variate distributions apart from a few cases. In this paper, exact expressions [...] Read more.
The Kullback–Leibler divergence is a measure of the divergence between two probability distributions, often used in statistics and information theory. However, exact expressions for it are not known for multivariate or matrix-variate distributions apart from a few cases. In this paper, exact expressions for the Kullback–Leibler divergence are derived for over twenty multivariate and matrix-variate distributions. The expressions involve various special functions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
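The Gaussian is one of the few multivariate cases where the Kullback–Leibler divergence already has a known closed form; for diagonal covariances it reduces to a per-coordinate sum, sketched below. This is the textbook formula, not one of the paper's new expressions.

```python
import math

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """Exact KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ) in nats,
    summed coordinate by coordinate."""
    return sum(
        0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + math.log(v1 / v0))
        for m0, v0, m1, v1 in zip(mu0, var0, mu1, var1)
    )
```

For identical distributions the divergence is zero, and shifting one unit-variance mean by 1 gives exactly 1/2 nat, which is a handy sanity check for any of the exact expressions derived in the paper.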
10 pages, 3306 KiB  
Article
Modified Gravity in the Presence of Matter Creation: Scenario for the Late Universe
by Giovanni Montani, Nakia Carlevaro and Mariaveronica De Angelis
Entropy 2024, 26(8), 662; https://doi.org/10.3390/e26080662 - 4 Aug 2024
Viewed by 339
Abstract
We consider a dynamic scenario for characterizing the late Universe evolution, aiming to mitigate the Hubble tension. Specifically, we consider a metric f(R) gravity in the Jordan frame which is implemented to the dynamics of a flat isotropic Universe. This [...] Read more.
We consider a dynamic scenario for characterizing the late Universe evolution, aiming to mitigate the Hubble tension. Specifically, we consider a metric f(R) gravity in the Jordan frame which is implemented to the dynamics of a flat isotropic Universe. This cosmological model incorporates a matter creation process, due to the time variation of the cosmological gravitational field. We model particle creation by representing the isotropic Universe (specifically, a given fiducial volume) as an open thermodynamic system. The resulting dynamical model involves four unknowns: the Hubble parameter, the non-minimally coupled scalar field, its potential, and the energy density of the matter component. We impose suitable conditions to derive a closed system for these functions of the redshift. In this model, the vacuum energy density of the present Universe is determined by the scalar field potential, in line with the modified gravity scenario. Hence, we construct a viable model, determining the form of the f(R) theory a posteriori and appropriately constraining the phenomenological parameters of the matter creation process to eliminate tachyon modes. Finally, by analyzing the allowed parameter space, we demonstrate that the Planck evolution of the Hubble parameter can be reconciled with the late Universe dynamics, thus alleviating the Hubble tension. Full article
(This article belongs to the Special Issue Modified Gravity: From Black Holes Entropy to Current Cosmology IV)
Show Figures

Figure 1: Density plot for all the model’s parameters. The yellow region indicates the most frequent values preferred by the model.
Figure 2: Plot of H(z) (black) using the parameters in Equation (27) and the profiles (with the corresponding errors) of H_Pl(z) (blue) and H_SN(z) (red). Grey squares represent the SH0ES prior and six relevant measurements for BAO sources for 0.3 &lt; z &lt; 3.
Figure 3: Plot of φ(z) (green) and Ω_r(z) (orange) integrated from Equations (21) and (22), implementing the parameters in Equation (27).
16 pages, 1081 KiB  
Article
Optimizing Contact Network Topological Parameters of Urban Populations Using the Genetic Algorithm
by Abimael R. Sergio and Pedro H. T. Schimit
Entropy 2024, 26(8), 661; https://doi.org/10.3390/e26080661 - 3 Aug 2024
Viewed by 433
Abstract
This paper explores the application of complex network models and genetic algorithms in epidemiological modeling. By considering the small-world and Barabási–Albert network models, we aim to replicate the dynamics of disease spread in urban environments. This study emphasizes the importance of accurately mapping [...] Read more.
This paper explores the application of complex network models and genetic algorithms in epidemiological modeling. By considering the small-world and Barabási–Albert network models, we aim to replicate the dynamics of disease spread in urban environments. This study emphasizes the importance of accurately mapping individual contacts and social networks to forecast disease progression. Using a genetic algorithm, we estimate the input parameters for network construction, thereby simulating disease transmission within these networks. Our results demonstrate the networks’ resemblance to real social interactions, highlighting their potential in predicting disease spread. This study underscores the significance of complex network models and genetic algorithms in understanding and managing public health crises. Full article
(This article belongs to the Special Issue Dynamics in Biological and Social Networks)
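The Barabási–Albert model named in the abstract grows a network by preferential attachment. A minimal stdlib sketch is given below; the parameters n and m are illustrative, and a study like this one would normally use a library such as networkx.

```python
import random

def barabasi_albert_edges(n, m, seed=0):
    """Edge list of a Barabási–Albert graph: start from m seed nodes, then
    each new node attaches to m distinct existing nodes chosen with
    probability proportional to their degree (preferential attachment)."""
    rng = random.Random(seed)
    edges = []
    # every edge endpoint recorded once; uniform sampling from this list
    # picks nodes proportionally to their degree
    ends = []
    targets = list(range(m))         # the first new node links to all seeds
    for new in range(m, n):
        edges.extend((new, t) for t in targets)
        ends.extend(targets)
        ends.extend([new] * m)
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(ends))
        targets = list(chosen)
    return edges
```

The resulting degree distribution is heavy-tailed, so a few hub nodes accumulate many contacts, the heterogeneity such models use to mimic urban contact networks.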
Show Figures

Figure 1: Cumulative evolution of COVID-19 cases over 35 days in selected cities during GA training. Actual data are depicted in red and simulation results in blue, with each simulation identified by the network model used and the city’s population. (a) BA, Águas de Santa Bárbara (5931); (b) BA, Bernardino de Campos (10,787); (c) BA, Pirajú (28,574); (d) SW, Santa Cruz do Rio Pardo (46,110); (e) SW, Avaré (87,538); (f) BA, Ourinhos (110,489); (g) SW, Itapetininga (160,150); (h) BA, Presidente Prudente (221,073); (i) BA, Jundiaí (407,016).
Figure 2: Accumulated COVID-19 case trends over the 10 days following the training period in selected cities. Simulations, labeled by network model and city population, are compared against actual data: real cases in red, simulated results in blue. (a) BA, Águas de Santa Bárbara (5931); (b) BA, Pirajú (28,574); (c) BA, Bauru (364,225); (d) SW, Boituva (57,292); (e) SW, Bragança Paulista (163,980); (f) BA, Cerqueira César (19,213); (g) SW, Itapetininga (160,150); (h) BA, Jundiaí (407,016); (i) BA, Presidente Prudente (221,073).
Figure 3: Comparison between the clustering coefficient, entropy, and the mean number of edges per node divided by network size, for both network models considered here.
40 pages, 1418 KiB  
Hypothesis
Unification of Mind and Matter through Hierarchical Extension of Cognition: A New Framework for Adaptation of Living Systems
by Toshiyuki Nakajima
Entropy 2024, 26(8), 660; https://doi.org/10.3390/e26080660 - 2 Aug 2024
Viewed by 385
Abstract
Living systems (LSs) must solve the problem of adapting to their environment by identifying external states and acting appropriately to maintain external relationships and internal order for survival and reproduction. This challenge is akin to the philosophical enigma of how the self can [...] Read more.
Living systems (LSs) must solve the problem of adapting to their environment by identifying external states and acting appropriately to maintain external relationships and internal order for survival and reproduction. This challenge is akin to the philosophical enigma of how the self can escape solipsism. In this study, a comprehensive model is developed to address the adaptation problem. LSs are composed of material entities capable of detecting their external states. This detection is conceptualized as “cognition”, a state change in relation to its external states. This study extends the concept of cognition to include three hierarchical levels of the world: physical, chemical, and semiotic cognitions, with semiotic cognition being closest to the conventional meaning of cognition. This radical extension of the cognition concept to all levels of the world provides a monistic model named the cognizers system model, in which mind and matter are unified as a single entity, the “cognizer”. During evolution, LSs invented semiotic cognition based on physical and chemical cognitions to manage the probability distribution of events that occur to them. This study proposes a theoretical model in which semiotic cognition is an adaptive process wherein the inverse causality operation produces particular internal states as symbols that signify hidden external states. This operation makes LSs aware of the external world. Full article
(This article belongs to the Special Issue Probability, Entropy, Information, and Semiosis in Living Systems)
Show Figures

Figure 1: (A) An isolated state change occurs independently of the environment. (B) A related state change occurs depending on the environment and is called cognition.
Figure 2: A person draws one ball from the box. What is the probability of drawing a particular ball? (A) The box is opaque and contains ten balls: two orange, three red, and five blue. (B) As (A), but the box is transparent. (C) As (A), but the box is semitransparent. (D) The box is transparent and contains ten balls numbered from 1 to 10. (E) The conditions are unknown from the player’s perspective.
Figure 3: Framework of the cognizers system (CS) model (see text for the difference between “cognizers system” and “cognizer system”). The meta-observer describes the world, which is deterministic and comprises cognizers; a system of cognizers is referred to as a cognizers system. Dots and gray circles indicate cognizers that perform physical and chemical cognitions, respectively. Internal and external observers are cognizers (living systems, not restricted to humans) that perform semiotic cognitions, denoted by blue circles. Systems 1, 2, and 3 show subsets of cognizers that the meta-observer can demarcate as a “system”.
Figure 4: (A) A cognizers system comprising only two cognizers, C1 and C2, in the world. (B) Related state changes in C1 and C2 through cognition. (C) The determination (selection) of a succeeding state narrows down the relation to the other cognizer. (D) If C1 cannot discriminate between states of C2, namely c21 and c2i′, then C1 will have uncertainty about the states of C2 that occur following the cognition c1i ⟼ c1j.
Figure 5: (A) From an external perspective, A in state a0 can discriminate between the external states ei and ej. (B) From an internal perspective, the external states ei and ej are hidden. If A has changed from a0 (a baseline state) to ai in some instances and from a0 to aj in others, different symbols, bi and bj, are introduced behind a0 to fulfil inverse causality (IC) by the IC operation; the introduced symbols signify the hidden external states ei and ej, respectively. (C) B changes from a baseline state (b0) to bi or bj, which violates the principle of inverse causality, so different symbols, ci and cj, are introduced behind b0 by the IC operation. The states of C (ci and cj) signify hidden external states that A cannot detect.
Figure 6: (A) An IC operation system composed of measurers A^M, B^M, C^M, and D^M, focusing on a short process; baseline states are denoted by the subscript “0”. The state changes from t0 to t6 are (a0, b0, c0, d0), (a0, b0, c0, d0), (a1, b0, c0, d0), (a0, b1, c0, d0), (a0, b0, c1, d0), (a0, b0, c0, d1), (a0, b0, c0, d0), … The A^M sequence includes a0 ⟼ a0 or a1, to which inverse causality is operated. This process transforms the backward-in-time IC algorithm (Figure 5B,C) into a measurement process forward in time, in which the distinctions made by each measurer are transmitted downstream in time. Modified from Figure 9 in [16]. (B) A^M, B^M, C^M, and D^M are composed of sub-measurers.
Figure 7: (A) The population of molecules described as particles that behave in the state space of position and velocity; the entire population includes subpopulations of molecule types a (a1, a2, …), b (b1, b2, …), and c (c1, c2, …). (B) A subpopulation of molecules of the same type described as a field entity in which individual molecules (particles) are generated and degraded through chemical reactions; molecules as fields (a, b, and c) behave in a density-state space. (C) Metabolic closure, comprising molecular fields (a, b, c, d, and e, linked with red arrows), emerges as a molecular system in the density space. Molecular relationships in the chemical reactions are indicated by the red and black arrows; their spatial distributions, as in (B), are not presented here.
23 pages, 351 KiB  
Article
Average Entropy of Gaussian Mixtures
by Basheer Joudeh and Boris Škorić
Entropy 2024, 26(8), 659; https://doi.org/10.3390/e26080659 - 1 Aug 2024
Viewed by 296
Abstract
We calculate the average differential entropy of a q-component Gaussian mixture in ℝⁿ. For simplicity, all components have covariance matrix σ²𝟙, while the means {W_i}_{i=1}^q are i.i.d. Gaussian vectors with [...] Read more.
We calculate the average differential entropy of a q-component Gaussian mixture in ℝⁿ. For simplicity, all components have covariance matrix σ²𝟙, while the means {W_i}_{i=1}^q are i.i.d. Gaussian vectors with zero mean and covariance s²𝟙. We obtain a series expansion in μ = s²/σ² for the average differential entropy up to order O(μ²), and we provide a recipe to calculate higher-order terms. Our result provides an analytic approximation with a quantifiable order of magnitude for the error, which is not achieved in previous literature. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
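For intuition, the differential entropy of a Gaussian mixture can be estimated by Monte Carlo as h = -E[log p(X)] with X drawn from the mixture itself. The sketch below is 1-D with equal weights, common σ, and fixed means (rather than the paper's random means), and all parameter values are illustrative.

```python
import math
import random

def mixture_entropy_mc(means, sigma=1.0, n=20000, seed=0):
    """Monte Carlo estimate of the differential entropy (in nats) of an
    equal-weight 1-D Gaussian mixture with common standard deviation."""
    rng = random.Random(seed)
    q = len(means)
    norm = q * sigma * math.sqrt(2.0 * math.pi)

    def logpdf(x):
        # log of the mixture density: average of q Gaussian components
        return math.log(
            sum(math.exp(-((x - m) ** 2) / (2.0 * sigma ** 2)) for m in means)
            / norm
        )

    # sample from the mixture: pick a component, then draw a Gaussian
    return -sum(logpdf(rng.gauss(rng.choice(means), sigma))
                for _ in range(n)) / n
```

A single component recovers the Gaussian value (1/2) ln(2πeσ²), and two well-separated components add roughly ln 2, the mixing entropy of the component label.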