Entropy, Volume 25, Issue 5 (May 2023) – 129 articles

Cover Story: MscS is a bacterial tension-operated membrane valve that regulates turgor and rescues cells from lysis. Its fast action is critical for reducing osmotic gradients in the race against water influx. While MscS requires no external chemical energy for activation, the opening rate is steeply tension-dependent and exceeds 10^4 s^-1 at near-lytic tensions. How dissipative is this process? We present MscS as a two-state switch and measure the dissipated heat using a patch clamp in different kinetic regimes. We find that MscS works as a frictionless switch when the characteristic time of the transition is 5 s or longer. In this regime, the dissipated heat approaches the Landauer bound of kT ln 2.
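For context, the Landauer bound quoted above is easy to evaluate numerically. A minimal sketch (the 298 K temperature is an assumed illustrative value, not taken from the paper):

```python
import math

# Boltzmann constant, exact since the 2019 SI redefinition
K_B = 1.380649e-23  # J/K

def landauer_bound(temperature_k):
    """Minimum heat dissipated when erasing one bit: kT ln 2, in joules."""
    return K_B * temperature_k * math.log(2)

# At an assumed room temperature of 298 K the bound is a few zeptojoules.
q = landauer_bound(298.0)
print(f"kT ln 2 at 298 K = {q:.3e} J")
```

The bound scales linearly with absolute temperature, which is why the kinetic regime (and hence the effective temperature of the transition) matters for how close a switch can get to it.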
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
49 pages, 10680 KiB  
Article
Multivariate Time Series Information Bottleneck
by Denis Ullmann, Olga Taran and Slava Voloshynovskiy
Entropy 2023, 25(5), 831; https://doi.org/10.3390/e25050831 - 22 May 2023
Cited by 2 | Viewed by 3233
Abstract
Time series (TS) and multiple time series (MTS) predictions have historically paved the way for distinct families of deep learning models. The temporal dimension, distinguished by its evolutionary sequential aspect, is usually modeled by decomposition into the trio of “trend, seasonality, noise”, by attempts to mimic the functioning of human synapses, and more recently, by transformer models with self-attention on the temporal dimension. These models may find applications in finance and e-commerce, where any performance gain of even less than 1% has large monetary repercussions; they also have potential applications in natural language processing (NLP), medicine, and physics. To the best of our knowledge, the information bottleneck (IB) framework has not received significant attention in the context of TS or MTS analyses. One can demonstrate that a compression of the temporal dimension is key in the context of MTS. We propose a new approach with partial convolution, where a time sequence is encoded into a two-dimensional representation resembling an image. Accordingly, we use recent advances in image extension to predict the unseen part of an image from a given one. We show that our model compares well with traditional TS models, has information-theoretic foundations, and can easily be extended to more dimensions than only time and space. An evaluation of our multiple time series-information bottleneck (MTS-IB) model demonstrates its efficiency on electricity production, road traffic, and astronomical data representing solar activity, as recorded by NASA’s Interface Region Imaging Spectrograph (IRIS) satellite. Full article
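The abstract's core idea, encoding an MTS as a two-dimensional, image-like array whose future region is filled in by image extension with partial convolutions, can be sketched minimally. The layout, masking scheme, and single box-kernel partial-convolution pass below are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

def mts_to_image(series):
    """Stack an MTS of shape (channels, time) as a 2D 'image':
    rows are variables, columns are time steps."""
    return np.asarray(series, dtype=float)

def mask_future(image, horizon):
    """Zero out the last `horizon` columns (the region to be 'extended')
    and return the masked image plus the binary validity mask."""
    masked = image.copy()
    mask = np.ones_like(image)
    masked[:, -horizon:] = 0.0
    mask[:, -horizon:] = 0.0
    return masked, mask

def partial_conv_1d(masked, mask, k=3):
    """One partial-convolution pass along time with a box kernel:
    each output pixel averages only over *known* neighbours, and the
    validity mask advances one step into the masked hole."""
    num = np.zeros_like(masked)
    den = np.zeros_like(masked)
    T = masked.shape[1]
    for t in range(T):
        lo, hi = max(0, t - k // 2), min(T, t + k // 2 + 1)
        num[:, t] = (masked[:, lo:hi] * mask[:, lo:hi]).sum(axis=1)
        den[:, t] = mask[:, lo:hi].sum(axis=1)
    out = np.where(den > 0, num / np.maximum(den, 1.0), 0.0)
    new_mask = (den > 0).astype(float)
    return out, new_mask

rng = np.random.default_rng(0)
img = mts_to_image(rng.normal(size=(4, 32)))   # 4 variables, 32 time steps
masked, mask = mask_future(img, horizon=8)     # hide the last 8 steps
out, new_mask = partial_conv_1d(masked, mask)
```

Stacking such passes (with learned kernels, an encoder-decoder, and skip connections, as in the U-Net variant the paper describes) grows the known region step by step until the whole future window is predicted.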
Show Figures

Figure 1: Comparison of Markov chains for a selection of deep TS predictors; the blue parts correspond to the compressed representations of the time dimension. Some of these models may accept additional inputs (correlated context), but we did not include them in these diagrams because they would overload the global picture, and the time dimension is compressed in the same way. A bold X is used when the model accepts vectors as input.

Figure 2: Schematic analogy between the IB principle and image extension. (Left) The time prediction under the IB principle, with compression and decoding, using PConv and DPConv and skip connections to form a variant of U-Net. (Right) An equivalent representation seen as image extension, where the skip layers connect X_{1:T} from the input to the output, and the bottleneck principle allows predicting X_{T+1:T+F} from X_{1:T}.

Figure 3: Problem formulation: (x, y) are the spatial coordinates; λ and t are the spectral and time coordinates. NASA's IRIS satellite integrates a mirror from which the Sun images or videos are captured by a sensor paired with a wavelength filter chosen among 1330 Å, 1400 Å, 2796 Å, and 2832 Å. The mirror holds a vertical slit at which diffraction occurs. The x position of the slit can vary in time and is chosen before the observation. A sensor behind the mirror captures the Sun spectra for each vertical position y of the Sun's image, but only at the x position of the slit. We only consider the Mg II h/k data, which lie between λ_1 = 2793.8401 Å and λ_2 = 2806.02 Å, and we consider all available time sequences.

Figure 4: Histogram of the cadences in seconds per time step.

Figure 5: Structure of the ED-LSTM and ED-GRU models used for comparison. C_t^i represents the hidden state vectors for GRU cells, combined with the cell state vectors for LSTM cells.

Figure 6: Evaluations performed on the proposed time predictor: center assignments, activity classification, and physical features. Classical MTS and CV evaluations were also performed but are omitted from this diagram for readability.

Figure 7: Evaluation of predictions for one flaring (FL) sample performed by the proposed IB-MTS model. The first row contains, respectively, the masked input, the predicted output, the genuine data, and the magnified pixel-wise error between the predicted and genuine. Second row: spectral center distributions for the prior, the predicted, and the genuine MTS. Third row: MTS evaluation on the prediction. Last twelve plots: astrophysical feature evaluations; the dotted blue lines represent the genuine data and the green lines the prediction.

Figure 8: Prediction results: the first column presents the direct predictions (blue part) and the second column the iterated predictions (violet part). A masked sample is given from the original sequence (first row); the prediction (second row) and the magnified (×5) differences (third row) are shown.

Figure 9: MTS metrics averaged over the test set for the direct prediction setups on QS, AR, and FL IRIS data.

Figure 10: MTS metrics averaged over the test set for the iterated prediction setups on QS, AR, and FL IRIS data.

Figure 11: Histogram of the event durations from IRIS data.

Figure 12: CV evaluation (over time) of the forecast for the direct and iterated predictions on IRIS data.

Figure 13: Average distributions of centroids with their standard deviations as vertical gray error bars: the average prior central data (left), the average genuine target (middle), and the average distribution of predictions performed with IB-MTS (right).

Figure 14: IRIS center assignment evaluation (over time) of the forecasts for direct predictions.

Figure 15: IRIS center assignment evaluation (over time) of the forecasts for iterated predictions.

Figure 16: Relative prediction errors for physical features over time of the forecasts for IRIS data in the direct setup; the lower, the better.

Figure 17: Relative prediction errors for physical features over time of the forecasts for IRIS data in the iterated setup.

Figure A1: Detailed MTS metrics evaluation on the test set for the direct prediction setup, given for each solar activity: the first row of results is for QS activity, the second row for AR, and the last row for FL.

Figure A2: Detailed MTS metrics evaluation on the test set for the iterated prediction setup, given for each solar activity: the first row of results is for QS activity, the second row for AR, and the last row for FL.

Figure A3: Confusion matrices for the prediction of centroids on IRIS data, for the direct procedure, using the 53 centroids from [55]. Each row of results corresponds to a model. Columns are organized by data label: global aggregate results for QS, AR, and FL data, followed by the results for each label taken separately. Each confusion matrix gives the joint probability distribution between the genuine and the predicted centroids; probability values are displayed with a color map from violet (lowest) to yellow (highest).

Figure A4: Confusion matrices for the prediction of centroids on IRIS data, for the iterated procedure, using the 53 centroids from [55]. Each row of results corresponds to a model. Columns are organized by data label: global aggregate results for QS, AR, and FL data, followed by the results for each label taken separately. Each confusion matrix gives the joint probability distribution between the genuine and the predicted centroids; probability values are displayed with a color map from violet (lowest) to yellow (highest).
49 pages, 4386 KiB  
Article
Free Choice in Quantum Theory: A p-adic View
by Vladimir Anashin
Entropy 2023, 25(5), 830; https://doi.org/10.3390/e25050830 - 22 May 2023
Cited by 6 | Viewed by 1917
Abstract
In this paper, it is rigorously proven that since observational data (i.e., numerical values of physical quantities) are rational numbers only due to inevitably nonzero measurement errors, the conclusion about whether Nature at the smallest scales is discrete or continuous, random and chaotic, or strictly deterministic depends solely on the experimentalist's free choice of the metric (real or p-adic) used to process the observational data. The main mathematical tools are p-adic 1-Lipschitz maps (which are therefore continuous with respect to the p-adic metric). These maps are exactly the ones defined by sequential Mealy machines (rather than by cellular automata) and are therefore causal functions over discrete time. A wide class of these maps can naturally be expanded to continuous real functions, so they may serve as mathematical models of open physical systems both over discrete and over continuous time. For these models, wave functions are constructed, an entropic uncertainty relation is proven, and no hidden parameters are assumed. The paper is motivated by the ideas of I. Volovich on p-adic mathematical physics, by G. ‘t Hooft’s cellular automaton interpretation of quantum mechanics, and to some extent, by recent papers on superdeterminism by J. Hance, S. Hossenfelder, and T. Palmer. Full article
(This article belongs to the Special Issue New Trends in Theoretical and Mathematical Physics)
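The paper's central objects, 1-Lipschitz maps of the 2-adic integers computed by sequential Mealy machines, can be illustrated with the 2-adic odometer z ↦ z + 1 processed bit by bit (least-significant bit first). This transliteration is our sketch of the general idea, not the author's formalism:

```python
def odometer_bits(bits):
    """Mealy machine computing z -> z + 1 on 2-adic integers, reading
    input bits least-significant-first.  The two states are 'carry' and
    'no carry'; output bit k depends only on input bits 0..k, which is
    exactly the 2-adic 1-Lipschitz (causality) property."""
    carry = 1                       # initial state: we are adding 1
    out = []
    for b in bits:
        out.append(b ^ carry)       # output symbol
        carry = b & carry           # state transition
    return out

def to_bits(n, width):
    return [(n >> k) & 1 for k in range(width)]

def from_bits(bits):
    return sum(b << k for k, b in enumerate(bits))

# Digit-by-digit computation agrees with ordinary addition mod 2^8:
assert all(from_bits(odometer_bits(to_bits(n, 8))) == (n + 1) % 256
           for n in range(256))

# 1-Lipschitz check: inputs agreeing on their 7 low bits give outputs
# agreeing on their 7 low bits (|f(x) - f(y)|_2 <= |x - y|_2).
fa = odometer_bits(to_bits(0b01101011, 8))
fb = odometer_bits(to_bits(0b11101011, 8))
print(fa[:7] == fb[:7])  # True
```

Because each output symbol is emitted before later input symbols are read, any such machine is automatically continuous in the 2-adic metric, which is the bridge between automata and p-adic analysis the paper builds on.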
Show Figures

Figure 1: State transition diagram of a 2-adic automaton. The label α|β on the arrow from state s_i to state s_j means that if the automaton is in state s_i and obtains α as the input symbol, it changes its state to s_j and produces β as the output symbol.

Figure 2: Reduced state transition diagram of the 2-adic odometer.

Figure 3: The p-adic clock.

Figure 4: A point in the unit square I^2 ⊂ R^2 produced by the automaton A.

Figure 5: Limit plot in R^2 of an automaton having two affine subautomata.

Figure 6: Limit plot of the same automaton on the torus T^2 in R^3.

Figure 7: The automaton function is z ↦ −(1/3)z; the minimal subautomaton function is z ↦ −(1/3)z − 2/3 (z ∈ Z_2); s_0 and s_1 are the respective initial states.

Figure 8: Limit plots of the automaton and of its minimal subautomaton coincide.

Figure 9: Limit plot of the function f(z) = 2/7 (z ∈ Z_2) in I^2.

Figure 10: Limit plot of the same function on the torus T^2.

Figure 11: State transition diagram of the autonomous automaton whose automaton function f: Z_2 → Z_2 is the constant f(z) = 2/7 (z ∈ Z_2). State 1 is initial.

Figure 12: Limit plot of the automaton having two subautomata whose functions are z ↦ 3z and z ↦ 5z (z ∈ Z_2).

Figure 13: Limit plot of the same automaton on the torus T^2 ⊂ R^3. The surface of the torus is made visible by cross-hatching.

Figure 14: State transition diagram of the automaton having two minimal subautomata whose automaton functions are z ↦ 3z and z ↦ 5z (z ∈ Z_2). The initial state is 0.

Figure 15: Plot of a finite automaton that approximates a measure-1 automaton whose automaton function is z ↦ 1 + 3z + 2z^2 (z ∈ Z_2).

Figure 16: Plot of a measure-0 automaton having a single minimal subautomaton, whose automaton function is z ↦ 5z (z ∈ Z_2).

Figure 17: Limit plot of a finite automaton whose automaton function is z ↦ ((z AND 1) − ((NOT z) AND 1)) · z (z ∈ Z_2) on the (horn) torus.

Figure 18: Solenoid that is the limit plot of the automaton having the same automaton function f(z) = ((z AND 1) − ((NOT z) AND 1)) · z (z ∈ Z_2).

Figure 19: General automaton whose minimal subautomata are all finite and affine.

Figure 20: Example state transition diagram of a 2-adic automaton having minimal subautomata (output symbols in the arrow labels are omitted). s_0 is the initial state. The respective probabilities of reaching subautomata S_1, S_2, and S_3 are 1/2, 1/4, and 11/64 = 1/8 + 1/32 + 1/64.
12 pages, 314 KiB  
Article
Orthogonal Polynomials with Singularly Perturbed Freud Weights
by Chao Min and Liwei Wang
Entropy 2023, 25(5), 829; https://doi.org/10.3390/e25050829 - 22 May 2023
Cited by 2 | Viewed by 1334
Abstract
In this paper, we are concerned with polynomials that are orthogonal with respect to the singularly perturbed Freud weight functions. By using Chen and Ismail’s ladder operator approach, we derive the difference equations and differential-difference equations satisfied by the recurrence coefficients. We also obtain the differential-difference equations and the second-order differential equations for the orthogonal polynomials, with the coefficients all expressed in terms of the recurrence coefficients. Full article
(This article belongs to the Special Issue Random Matrices: Theory and Applications)
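The recurrence coefficients studied in the paper can be approximated numerically for any given weight via the discretized Stieltjes procedure. The generic sketch below is validated on the Gaussian weight e^{-x^2}, where the monic coefficients are known to be α_n = 0 and β_n = n/2; the specific singularly perturbed Freud weight is left as a parameter for the reader to supply:

```python
import numpy as np

def stieltjes(weight, a, b, n, npts=2000):
    """Discretized Stieltjes procedure: recurrence coefficients
    (alpha_k, beta_k) of the monic orthogonal polynomials
        p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x)
    for `weight` on [a, b], using Gauss-Legendre quadrature."""
    xq, wq = np.polynomial.legendre.leggauss(npts)
    x = 0.5 * (b - a) * xq + 0.5 * (b + a)     # map nodes to [a, b]
    w = 0.5 * (b - a) * wq * weight(x)         # quadrature * weight
    p_prev = np.zeros_like(x)                  # p_{-1} = 0
    p = np.ones_like(x)                        # p_0 = 1
    alphas, betas = [], []
    norm_prev = None
    for k in range(n):
        norm = np.sum(w * p * p)               # <p_k, p_k>
        alpha = np.sum(w * x * p * p) / norm
        beta = norm if k == 0 else norm / norm_prev
        alphas.append(alpha)
        betas.append(beta)
        # three-term recurrence (beta * p_{-1} = 0 at k = 0)
        p, p_prev = (x - alpha) * p - beta * p_prev, p
        norm_prev = norm
    return np.array(alphas), np.array(betas)

# Sanity check on the Gaussian weight e^{-x^2} (Hermite case):
alphas, betas = stieltjes(lambda x: np.exp(-x**2), -10.0, 10.0, 6)
print(np.round(alphas, 6))     # all ~0 by symmetry
print(np.round(betas[1:], 6))  # ~ n/2: 0.5, 1.0, 1.5, 2.0, 2.5
```

For a symmetric perturbed weight the α_n likewise vanish, and the interest lies entirely in how β_n depends on the perturbation parameter, which is what the paper's difference equations describe in closed form.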
35 pages, 1600 KiB  
Article
Local Phase Transitions in a Model of Multiplex Networks with Heterogeneous Degrees and Inter-Layer Coupling
by Nedim Bayrakdar, Valerio Gemmetto and Diego Garlaschelli
Entropy 2023, 25(5), 828; https://doi.org/10.3390/e25050828 - 22 May 2023
Viewed by 1576
Abstract
Multilayer networks represent multiple types of connections between the same set of nodes. Clearly, a multilayer description of a system adds value only if the multiplex does not merely consist of independent layers. In real-world multiplexes, it is expected that the observed inter-layer overlap may result partly from spurious correlations arising from the heterogeneity of nodes, and partly from true inter-layer dependencies. It is therefore important to consider rigorous ways to disentangle these two effects. In this paper, we introduce an unbiased maximum entropy model of multiplexes with controllable intra-layer node degrees and controllable inter-layer overlap. The model can be mapped to a generalized Ising model, where the combination of node heterogeneity and inter-layer coupling leads to the possibility of local phase transitions. In particular, we find that node heterogeneity favors the splitting of critical points characterizing different pairs of nodes, leading to link-specific phase transitions that may, in turn, increase the overlap. By quantifying how the overlap can be increased by increasing either the intra-layer node heterogeneity (spurious correlation) or the strength of the inter-layer coupling (true correlation), the model allows us to disentangle the two effects. As an application, we show that the empirical overlap observed in the International Trade Multiplex genuinely requires a nonzero inter-layer coupling in its modeling, as it is not merely a spurious result of the correlation between node degrees across different layers. Full article
(This article belongs to the Special Issue Recent Trends and Developments in Econophysics)
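The mapping to a generalized Ising model and the overlap measurements in the paper's figures can be mimicked in miniature. The sketch below samples a toy two-layer multiplex with a per-link field θ and inter-layer coupling J via Metropolis-Hastings; the energy form is a simplified assumption patterned on the abstract, not the paper's exact Hamiltonian:

```python
import math
import random

def sample_multiplex(n_pairs, theta, J, sweeps=500, seed=0):
    """Metropolis-Hastings sampler for a toy two-layer multiplex:
    every node pair carries one link variable per layer, a, b in {0, 1},
    with an assumed simplified pair energy
        E(a, b) = theta * (a + b) - 2 * J * a * b,
    so a large J rewards overlapping links and a large theta suppresses
    links.  Returns the total link count L and the overlap O."""
    rng = random.Random(seed)
    a = [0] * n_pairs
    b = [0] * n_pairs

    def energy(x, y):
        return theta * (x + y) - 2.0 * J * x * y

    for _ in range(sweeps):
        for i in range(n_pairs):
            for layer in (0, 1):
                if layer == 0:
                    new = (1 - a[i], b[i])
                else:
                    new = (a[i], 1 - b[i])
                d_e = energy(*new) - energy(a[i], b[i])
                if d_e <= 0 or rng.random() < math.exp(-d_e):
                    a[i], b[i] = new
    links = sum(a) + sum(b)
    overlap = sum(ai * bi for ai, bi in zip(a, b))
    return links, overlap

# Strong coupling, zero field: the two layers overlap almost perfectly.
L_hi, O_hi = sample_multiplex(200, theta=0.0, J=3.0)
# Strong field, no coupling: sparse layers with negligible overlap.
L_lo, O_lo = sample_multiplex(200, theta=3.0, J=0.0)
```

Sweeping θ at fixed J, or J at fixed θ, reproduces qualitatively the density and overlap transitions the paper analyzes; disentangling spurious from true overlap then amounts to making θ node-dependent while varying J, as the abstract describes.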
Show Figures

Figure 1

Figure 1
<p>A graphical illustration of the solution(s) of Equation (<a href="#FD53-entropy-25-00828" class="html-disp-formula">53</a>). The solid lines show the RHS of Equation (<a href="#FD53-entropy-25-00828" class="html-disp-formula">53</a>) as a function of <math display="inline"><semantics> <msub> <mi>u</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> </semantics></math> for the different parameters <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mo>∈</mo> <mrow> <mo>{</mo> <mo>−</mo> <mn>12</mn> <mo>,</mo> <mo>−</mo> <mn>8</mn> <mo>,</mo> <mo>−</mo> <mn>4</mn> <mo>,</mo> <mo>−</mo> <mn>2</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>8</mn> <mo>,</mo> <mn>12</mn> <mo>}</mo> </mrow> </mrow> </semantics></math>, while the dashed line shows the LHS, which equals <math display="inline"><semantics> <msub> <mi>u</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> </semantics></math> itself. For a given parameter value, the solutions of Equation (<a href="#FD53-entropy-25-00828" class="html-disp-formula">53</a>) are the intersection between the dashed and the corresponding solid line. Each panel corresponds to a different value of <span class="html-italic">J</span> (in the rest of the paper, we will consider only <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 2
<p>The upper (blue) and lower (red) curves correspond to Equations (<a href="#FD65-entropy-25-00828" class="html-disp-formula">65</a>) and (<a href="#FD66-entropy-25-00828" class="html-disp-formula">66</a>), respectively, which delimit the region of phase space (yellow area), for which Equation (<a href="#FD53-entropy-25-00828" class="html-disp-formula">53</a>) has three solutions. Note that the ‘zero-field’ condition <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <mn>2</mn> <mi>J</mi> </mrow> </semantics></math> is always in the yellow area when <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>, so the condition <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math> is sufficient to ensure that the system in zero field is in the magnetized (symmetry-broken) phase.</p>
Full article ">Figure 3
<p>Solutions for <math display="inline"><semantics> <msub> <mi>u</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> </semantics></math> as a function of <math display="inline"><semantics> <msub> <mi>θ</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> </semantics></math> for different parameter values. The blue and red segments of the curve(s) correspond to the stable and unstable solutions of Equation (<a href="#FD53-entropy-25-00828" class="html-disp-formula">53</a>), respectively. <b>Left</b> panel: <math display="inline"><semantics> <mrow> <msub> <mi>B</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (with <span class="html-italic">J</span> varying accordingly). <b>Middle</b> panel: <math display="inline"><semantics> <mrow> <msub> <mi>B</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (with <span class="html-italic">J</span> varying accordingly). <b>Right</b> panel: constant value of <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, which translates to a non-constant <math display="inline"><semantics> <msub> <mi>B</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 4
<p>Total number of links <span class="html-italic">L</span> (top panels) and inter-layer overlap <span class="html-italic">O</span> (bottom panels) as a function of simulation time using the Metropolis–Hastings algorithm for <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>. <b>Left</b> panels: <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mn>1.4</mn> </mrow> </semantics></math>. <b>Middle</b> panels: <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mn>1.5</mn> <mo>=</mo> <mi>J</mi> </mrow> </semantics></math> (symmetry-broken case). <b>Right</b> panels: <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mn>1.6</mn> </mrow> </semantics></math>. For fixed <span class="html-italic">J</span>, varying <math display="inline"><semantics> <mi>θ</mi> </semantics></math> determines a phase transition from a high-density phase to a low-density phase.</p>
Full article ">Figure 5
<p>Relationship between the expected inter-layer overlap <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> </semantics></math> and the total number of links <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> </semantics></math> in homogeneous multiplexes with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mi>i</mi> </msub> <mo>=</mo> <mi>θ</mi> </mrow> </semantics></math> for all <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mi>N</mi> </mrow> </semantics></math>. The blue points correspond to simulations obtained via the Metropolis–Hastings algorithm for <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>∈</mo> <mo>{</mo> <mn>0.0</mn> <mo>,</mo> <mn>0.3</mn> <mo>,</mo> <mn>0.6</mn> <mo>,</mo> <mn>0.9</mn> <mo>,</mo> <mn>1.2</mn> <mo>,</mo> <mn>1.5</mn> <mo>}</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>∈</mo> <mo>[</mo> <mn>0.05</mn> <mo>,</mo> <mn>2.00</mn> <mo>]</mo> </mrow> </semantics></math> in steps of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>θ</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>. The open red circles are the corresponding theoretically predicted points. The solid curve corresponds to the quadratic trend <math display="inline"><semantics> <mrow> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> <mo>=</mo> <msup> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> <mn>2</mn> </msup> <mo>/</mo> <msup> <mi>N</mi> <mn>2</mn> </msup> </mrow> </semantics></math> predicted for all <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>≥</mo> <mn>0</mn> </mrow> </semantics></math>. 
Multiple solutions for <math display="inline"><semantics> <msubsup> <mi>u</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mo>*</mo> </msubsup> </semantics></math> first appear when <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>, but the system keeps following the quadratic trend, albeit drifting away from the central point obtained for the zero-field case <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mi>J</mi> </mrow> </semantics></math> (corresponding to a spontaneously broken symmetry).</p>
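The Metropolis–Hastings simulations described in these captions can be illustrated with a minimal two-layer toy sampler. The pair energy used below, theta*(a + b) - J*a*b, is a hypothetical stand-in chosen only to reproduce the qualitative behavior (a positive coupling J rewards inter-layer overlap); it is not the article's exact Hamiltonian of Equation (53).

```python
import math
import random

def mh_multiplex(N=30, J=1.5, theta=1.5, steps=20000, seed=0):
    """Toy Metropolis-Hastings sampler for a two-layer multiplex.

    Each node pair (i, j) carries one binary link variable per layer.
    Illustrative pair energy: theta*(a + b) - J*a*b, so a positive
    coupling J rewards inter-layer overlap.  (Hypothetical form,
    standing in for the article's Equation (53).)
    """
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
    a = {p: 0 for p in pairs}  # links in layer 1
    b = {p: 0 for p in pairs}  # links in layer 2

    def energy(x, y):
        return theta * (x + y) - J * x * y

    for _ in range(steps):
        p = rng.choice(pairs)
        x, y = a[p], b[p]
        # propose flipping the link in one randomly chosen layer
        if rng.random() < 0.5:
            nx, ny = 1 - x, y
        else:
            nx, ny = x, 1 - y
        dE = energy(nx, ny) - energy(x, y)
        if dE <= 0 or rng.random() < math.exp(-dE):
            a[p], b[p] = nx, ny

    L = sum(a.values()) + sum(b.values())  # total number of links
    O = sum(a[p] * b[p] for p in pairs)    # inter-layer overlap
    return L, O
```

Recording L and O along the chain, rather than only at the end, yields time series of the kind shown in the panels described above.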
Full article ">Figure 6
<p>Relationship between the expected inter-layer overlap <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> </semantics></math> and the total number of links <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> </semantics></math> in heterogeneous multiplexes with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>x</mi> <mrow> <mn>0</mn> <mo>,</mo> <mi>i</mi> </mrow> </msub> </semantics></math> sampled from a power law distribution with different values for <math display="inline"><semantics> <mi>γ</mi> </semantics></math>. The colored points correspond to simulations obtained via the Metropolis–Hastings algorithm for <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>∈</mo> <mo>{</mo> <mn>0.0</mn> <mo>,</mo> <mn>0.3</mn> <mo>,</mo> <mn>0.6</mn> <mo>,</mo> <mn>0.9</mn> <mo>,</mo> <mn>1.2</mn> <mo>,</mo> <mn>1.5</mn> <mo>}</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>∈</mo> <mo>[</mo> <mn>0.05</mn> <mo>,</mo> <mn>2.00</mn> <mo>]</mo> </mrow> </semantics></math> in steps of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>z</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>. The straight line corresponds to the upper limit <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> <mo>=</mo> <mi>M</mi> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> calculated in Equation (<a href="#FD67-entropy-25-00828" class="html-disp-formula">67</a>). 
The solid curve corresponds to the quadratic trend <math display="inline"><semantics> <mrow> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> <mo>=</mo> <msup> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> <mn>2</mn> </msup> <mo>/</mo> <msup> <mi>N</mi> <mn>2</mn> </msup> </mrow> </semantics></math> (achieved by homogeneous multiplexes with constant <math display="inline"><semantics> <msub> <mi>x</mi> <mi>i</mi> </msub> </semantics></math>), which here turns out to mark a lower bound. For increasing values of <span class="html-italic">J</span>, and especially as <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>, the system moves closer to the upper bound. For <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, we see that the points are concentrating towards high-density and low-density (symmetry-broken) regimes, drifting away from the intermediate values, like in the homogeneous case. However, this is now the combined result of the behavior of statistically different pairs of nodes, each having a different zero-field condition <math display="inline"><semantics> <mrow> <msub> <mi>θ</mi> <mi>i</mi> </msub> <mo>+</mo> <msub> <mi>θ</mi> <mi>j</mi> </msub> <mo>=</mo> <mn>2</mn> <mi>J</mi> </mrow> </semantics></math>, so the spontaneous symmetry breaking cannot be realized for all node pairs simultaneously.</p>
Full article ">Figure 7
<p>Relationship between the expected inter-layer overlap <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> </semantics></math> and the total number of links <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> </semantics></math> in heterogeneous multiplexes with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>x</mi> <mrow> <mn>0</mn> <mo>,</mo> <mi>i</mi> </mrow> </msub> </semantics></math> sampled from a power law distribution with <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. The blue points correspond to simulations obtained via the Metropolis–Hastings algorithm for <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>∈</mo> <mo>[</mo> <mn>0.05</mn> <mo>,</mo> <mn>2.00</mn> <mo>]</mo> </mrow> </semantics></math> in steps of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>z</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (<b>left</b> panel) and <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> (<b>right</b> panel). The red open circles are the theoretically predicted values corresponding to the same parameters used in the simulations. The straight line corresponds to the upper limit <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> <mo>=</mo> <mi>M</mi> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> calculated in Equation (<a href="#FD67-entropy-25-00828" class="html-disp-formula">67</a>). 
The solid curve corresponds to the quadratic trend <math display="inline"><semantics> <mrow> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> <mo>=</mo> <msup> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> <mn>2</mn> </msup> <mo>/</mo> <msup> <mi>N</mi> <mn>2</mn> </msup> </mrow> </semantics></math> (achieved by homogeneous multiplexes with constant <math display="inline"><semantics> <msub> <mi>x</mi> <mi>i</mi> </msub> </semantics></math>), which here turns out to mark a lower bound. We see that, compared with the homogeneous lower bound, the heterogeneity of nodes increases the overlap dramatically, even in the absence of true coupling (<math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>). When coupling is present, the overlap is additionally increased and already approaches the upper bound for <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Relationship between the expected inter-layer overlap <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> </semantics></math> and the total number of links <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> </semantics></math> in heterogeneous multiplexes with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>x</mi> <mrow> <mn>0</mn> <mo>,</mo> <mi>i</mi> </mrow> </msub> </semantics></math> sampled from a log-normal distribution with different values for <math display="inline"><semantics> <mi>σ</mi> </semantics></math>. The colored points correspond to simulations obtained via the Metropolis–Hastings algorithm for <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>∈</mo> <mo>{</mo> <mn>0.0</mn> <mo>,</mo> <mn>0.3</mn> <mo>,</mo> <mn>0.6</mn> <mo>,</mo> <mn>0.9</mn> <mo>,</mo> <mn>1.2</mn> <mo>,</mo> <mn>1.5</mn> <mo>}</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>∈</mo> <mo>[</mo> <mn>0.05</mn> <mo>,</mo> <mn>2.00</mn> <mo>]</mo> </mrow> </semantics></math> in steps of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>z</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>. The straight line corresponds to the upper limit <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> <mo>=</mo> <mi>M</mi> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> calculated in Equation (<a href="#FD67-entropy-25-00828" class="html-disp-formula">67</a>). 
The solid curve corresponds to the quadratic trend <math display="inline"><semantics> <mrow> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> <mo>=</mo> <msup> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> <mn>2</mn> </msup> <mo>/</mo> <msup> <mi>N</mi> <mn>2</mn> </msup> </mrow> </semantics></math> (achieved by homogeneous multiplexes with constant <math display="inline"><semantics> <msub> <mi>x</mi> <mi>i</mi> </msub> </semantics></math>), which here marks a lower bound achieved when <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>→</mo> <msup> <mn>0</mn> <mo>+</mo> </msup> </mrow> </semantics></math>. For increasing values of <span class="html-italic">J</span> (genuine coupling) and <math display="inline"><semantics> <mi>σ</mi> </semantics></math> (spurious coupling), the system moves closer to the upper bound. For <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>, we see that, starting from the multiplexes with smaller values of <math display="inline"><semantics> <mi>σ</mi> </semantics></math>, the points are concentrating towards high-density and low-density (symmetry-broken) regimes, drifting away from the intermediate values, like in the homogeneous and power law cases. To realize this separation for larger values of <math display="inline"><semantics> <mi>σ</mi> </semantics></math>, a larger value of <span class="html-italic">J</span> is required.</p>
Full article ">Figure 9
<p>Relationship between the expected inter-layer overlap <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> </semantics></math> and the total number of links <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> </semantics></math> in heterogeneous multiplexes with <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>x</mi> <mrow> <mn>0</mn> <mo>,</mo> <mi>i</mi> </mrow> </msub> </semantics></math> sampled from a log-normal distribution with <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. The blue points correspond to simulations obtained via the Metropolis–Hastings algorithm for <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>∈</mo> <mo>[</mo> <mn>0.05</mn> <mo>,</mo> <mn>2.00</mn> <mo>]</mo> </mrow> </semantics></math> in steps of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <mi>z</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (<b>left</b> panel) and <math display="inline"><semantics> <mrow> <mi>J</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> (<b>right</b> panel). The red open circles are the theoretically predicted values corresponding to the same parameters used in the simulations.</p>
Full article ">Figure 10
<p>Comparison of the empirical World Trade Multiplex (WTM) with the zero-coupling (<math display="inline"><semantics> <mrow> <msup> <mi>J</mi> <mo>*</mo> </msup> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>) benchmark provided by the Average Configuration Model (ACM). The WTM consists of <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>206</mn> </mrow> </semantics></math> nodes, each representing a country, and <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>96</mn> </mrow> </semantics></math> layers, each representing a commodity group. The filtered data were obtained by retaining the same number <math display="inline"><semantics> <msup> <mi>L</mi> <mn>0</mn> </msup> </semantics></math> of strongest links in each layer (hence <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mi>M</mi> <msup> <mi>L</mi> <mn>0</mn> </msup> </mrow> </semantics></math> links in the entire multiplex), and varying <math display="inline"><semantics> <msup> <mi>L</mi> <mn>0</mn> </msup> </semantics></math>. 
<b>Top left</b>: relationship between the expected inter-layer overlap <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> </semantics></math> and the total number of links <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> </semantics></math> in the WTM (blue), compared with the upper limit <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> <mo>=</mo> <mi>M</mi> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> calculated in Equation (<a href="#FD67-entropy-25-00828" class="html-disp-formula">67</a>) (purple straight line) and the quadratic trend <math display="inline"><semantics> <mrow> <mrow> <mo>〈</mo> <mi>O</mi> <mo>〉</mo> </mrow> <mo>=</mo> <msup> <mrow> <mo>〈</mo> <mi>L</mi> <mo>〉</mo> </mrow> <mn>2</mn> </msup> <mo>/</mo> <msup> <mi>N</mi> <mn>2</mn> </msup> </mrow> </semantics></math> achieved by homogeneous multiplexes (black solid curve). <b>Top right</b>: zoomed-in version of the top left panel, showing that the empirical data follow an intermediate scaling between the two extremes. <b>Center left</b>: cumulative distributions reporting the number <math display="inline"><semantics> <mrow> <mi>F</mi> <mo stretchy="false">(</mo> <mi>x</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> of nodes with hidden variable larger than <span class="html-italic">x</span> in the ACM, obtained for different values of <math display="inline"><semantics> <msup> <mi>L</mi> <mn>0</mn> </msup> </semantics></math> (see legend). <b>Center right</b>: same as the top right panel with the addition of the relationship produced by the ACM benchmark, showing that the empirical WTM (blue) has a higher overlap than the corresponding null model having zero inter-layer coupling but the same degree heterogeneity (orange). 
<b>Bottom left</b>: log–log plot of the relationship between the overlap and the number of links in the empirical WTM, along with a power law fit of the form <math display="inline"><semantics> <mrow> <mi>O</mi> <mo>=</mo> <mi>A</mi> <msup> <mi>L</mi> <mi>α</mi> </msup> </mrow> </semantics></math>, where the fitted exponent is <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.19</mn> </mrow> </semantics></math>. <b>Bottom right</b>: log–log plot of the same relationship in the ACM benchmark with no coupling, along with a power law fit of the form <math display="inline"><semantics> <mrow> <mi>O</mi> <mo>=</mo> <mi>A</mi> <msup> <mi>L</mi> <mi>α</mi> </msup> </mrow> </semantics></math>, where the fitted exponent is <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.06</mn> </mrow> </semantics></math>.</p>
Full article ">
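The power-law fits O = A * L**alpha reported for the bottom panels reduce to ordinary least squares on (log L, log O); a minimal sketch of that procedure, which is the standard route to exponents such as the 1.19 and 1.06 quoted above:

```python
import math

def fit_power_law(L_vals, O_vals):
    """Fit O = A * L**alpha by least squares in log-log space."""
    xs = [math.log(v) for v in L_vals]
    ys = [math.log(v) for v in O_vals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope of the log-log regression line is the exponent alpha
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    # intercept gives log A
    A = math.exp(my - alpha * mx)
    return A, alpha
```

On noiseless synthetic data the fit is exact up to floating-point precision; on empirical data the residuals in log space indicate how well a single scaling exponent describes the overlap-density relationship.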
15 pages, 331 KiB  
Article
A Kind of (t, n) Threshold Quantum Secret Sharing with Identity Authentication
by Depeng Meng, Zhihui Li, Shuangshuang Luo and Zhaowei Han
Entropy 2023, 25(5), 827; https://doi.org/10.3390/e25050827 - 22 May 2023
Cited by 6 | Viewed by 1550
Abstract
Quantum secret sharing (QSS) is an important branch of quantum cryptography. Identity authentication is a significant means of achieving information protection, as it effectively confirms the identity of both communicating parties. Given the importance of information security, a growing number of communications require identity authentication. We propose a d-level (t,n) threshold QSS scheme in which both communicating parties use mutually unbiased bases for mutual identity authentication. In the secret recovery phase, the secret shares held by the participants are neither disclosed nor transmitted, so external eavesdroppers obtain no information about the secret in this phase. The protocol is therefore more secure, effective, and practical. Security analysis shows that the scheme can effectively resist intercept–resend attacks, entangle–measure attacks, collusion attacks, and forgery attacks. Full article
(This article belongs to the Special Issue Advanced Technology in Quantum Cryptography)
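The abstract above concerns a quantum (t, n) threshold scheme; the threshold idea it builds on is the classical Shamir construction, which can be sketched in a few lines. The field modulus and parameters below are illustrative choices, not taken from the article, and the quantum scheme adds mutually-unbiased-basis authentication on top of this threshold logic.

```python
import random

P = 2**61 - 1  # Mersenne prime used as an illustrative field modulus

def share_secret(secret, t, n, seed=42):
    """Split `secret` into n shares; any t of them reconstruct it."""
    rng = random.Random(seed)
    # random degree-(t-1) polynomial with constant term = secret
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]

    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # modular inverse via Fermat's little theorem
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With fewer than t shares the interpolation is underdetermined and every field element remains an equally likely secret, which is the threshold property the quantum protocol generalizes.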
Show Figures

Figure 1
<p>The process of this scheme.</p>
Full article ">Figure 2
<p>Identity authentication process between participants in this scheme.</p>
Full article ">
18 pages, 3785 KiB  
Article
Infrared Image Caption Based on Object-Oriented Attention
by Junfeng Lv, Tian Hui, Yongfeng Zhi and Yuelei Xu
Entropy 2023, 25(5), 826; https://doi.org/10.3390/e25050826 - 22 May 2023
Cited by 4 | Viewed by 1992
Abstract
With the ongoing development of image technology, the deployment of various intelligent applications on embedded devices has attracted increased attention in the industry. One such application is automatic image captioning for infrared images, which involves converting images into text. This practical task is widely used in night security, as well as for understanding night scenes and other scenarios. However, due to differences in image features and the complexity of semantic information, generating captions for infrared images remains challenging. From the perspective of deployment and application, and to improve the correlation between descriptions and objects, we introduced YOLOv6 and LSTM as the encoder-decoder structure and proposed an infrared image captioning method based on object-oriented attention. First, to improve the domain adaptability of the detector, we optimized the pseudo-label learning process. Second, we proposed the object-oriented attention method to address the alignment problem between complex semantic information and embedded words. This method helps select the most crucial features of the object region and guides the caption model in generating words that are more relevant to the object. Our methods show good performance on infrared images and can produce words explicitly associated with the object regions located by the detector. The robustness and effectiveness of the proposed methods were demonstrated through evaluation on various datasets against other state-of-the-art methods. Our approach achieved BLEU-4 scores of 31.6 and 41.2 on the KAIST and Infrared City and Town datasets, respectively, and provides a feasible solution for deploying such applications on embedded devices in industrial settings. Full article
(This article belongs to the Special Issue Pattern Recognition and Data Clustering in Information Theory)
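The BLEU-4 scores reported in the abstract measure clipped n-gram overlap between a generated caption and a reference. A simplified single-reference, unsmoothed sentence-level version can be sketched as follows; production evaluations typically use corpus-level BLEU with multiple references and smoothing.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU with uniform weights,
    a single reference, and no smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0  # unsmoothed BLEU vanishes on any zero precision
    log_p = sum(math.log(p) for p in precisions) / max_n
    # brevity penalty discourages very short candidates
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_p)
```

A candidate identical to the reference scores 1.0; a candidate sharing no words scores 0.0, with the reported 31.6 and 41.2 corresponding to percentage-scaled scores between these extremes.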
Show Figures

Figure 1
<p>Different caption results are generated from different methods. (1) Generated from the baseline model [<a href="#B1-entropy-25-00826" class="html-bibr">1</a>]. (2) Generated from the baseline model [<a href="#B2-entropy-25-00826" class="html-bibr">2</a>]. (3) Represents our proposed method. (4) The Ground Truth.</p>
Full article ">Figure 2
<p>Infrared images and visible light images. (<b>a</b>) Infrared image; (<b>b</b>) visible light image (night); (<b>c</b>) infrared image; (<b>d</b>) visible light image (daytime).</p>
Full article ">Figure 3
<p>The overview of our proposed methods.</p>
Full article ">Figure 4
<p>Domain transfer.</p>
Full article ">Figure 5
<p>The adaptive weighting module.</p>
Full article ">Figure 6
<p>Overview of object-oriented attention.</p>
Full article ">Figure 7
<p>Infrared image captions results. (<b>a</b>) Urban scenery; (<b>b</b>) urban scenery; (<b>c</b>) rural scenery; (<b>d</b>) mountainous scenery.</p>
Full article ">
32 pages, 2311 KiB  
Article
Approximating Functions with Approximate Privacy for Applications in Signal Estimation and Learning
by Naima Tasnim, Jafar Mohammadi, Anand D. Sarwate and Hafiz Imtiaz
Entropy 2023, 25(5), 825; https://doi.org/10.3390/e25050825 - 22 May 2023
Cited by 5 | Viewed by 2042
Abstract
Large corporations, government entities and institutions such as hospitals and census bureaus routinely collect our personal and sensitive information for providing services. A key technological challenge is designing algorithms for these services that provide useful results, while simultaneously maintaining the privacy of the individuals whose data are being shared. Differential privacy (DP) is a cryptographically motivated and mathematically rigorous approach for addressing this challenge. Under DP, a randomized algorithm provides privacy guarantees by approximating the desired functionality, leading to a privacy–utility trade-off. Strong (pure DP) privacy guarantees are often costly in terms of utility. Motivated by the need for a more efficient mechanism with better privacy–utility trade-off, we propose Gaussian FM, an improvement to the functional mechanism (FM) that offers higher utility at the expense of a weakened (approximate) DP guarantee. We analytically show that the proposed Gaussian FM algorithm can offer orders of magnitude smaller noise compared to the existing FM algorithms. We further extend our Gaussian FM algorithm to decentralized-data settings by incorporating the CAPE protocol and propose capeFM. Our method can offer the same level of utility as its centralized counterparts for a range of parameter choices. We empirically show that our proposed algorithms outperform existing state-of-the-art approaches on synthetic and real datasets. Full article
(This article belongs to the Special Issue Information-Theoretic Privacy in Retrieval, Computing, and Learning)
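The approximate-DP relaxation that Gaussian FM exploits is easiest to see in the plain Gaussian mechanism, which adds noise calibrated to a query's sensitivity. The sketch below uses the classical calibration sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / eps (valid for eps <= 1); note that Gaussian FM itself perturbs the coefficients of a polynomial approximation of the objective function, which this toy does not reproduce.

```python
import math
import random

def gaussian_mechanism(value, sensitivity, eps, delta, seed=0):
    """Release `value` + N(0, sigma^2), with sigma set by the classical
    (eps, delta)-DP calibration (valid for eps <= 1)."""
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    rng = random.Random(seed)
    return value + rng.gauss(0.0, sigma), sigma
```

For eps = 0.5, delta = 1e-5, and unit sensitivity this gives sigma of roughly 9.7, illustrating the privacy-utility trade-off the abstract describes: tightening eps or delta inflates the noise standard deviation.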
Show Figures

Figure 1
<p>Linear regression performance comparison in terms of MSE and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> for <span class="html-italic">IWPC</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>9</mn> <mo>)</mo> </mrow> </semantics></math>, <span class="html-italic">crime</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>101</mn> <mo>)</mo> </mrow> </semantics></math>, and <span class="html-italic">twitter</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>77</mn> <mo>)</mo> </mrow> </semantics></math> datasets with varying noise standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> in (<b>a</b>,<b>d</b>,<b>g</b>) the number of training samples <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> in (<b>b</b>,<b>e</b>,<b>h</b>), and privacy parameter <math display="inline"><semantics> <mi>δ</mi> </semantics></math> in (<b>c</b>,<b>f</b>,<b>i</b>).</p>
Full article ">Figure 2
<p>Logistic regression performance comparison in terms of accuracy and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> for <span class="html-italic">phishing</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>30</mn> <mo>)</mo> </mrow> </semantics></math>, <span class="html-italic">adult</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>13</mn> <mo>)</mo> </mrow> </semantics></math>, and <span class="html-italic">kdd</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>36</mn> <mo>)</mo> </mrow> </semantics></math> datasets with varying noise standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> in (<b>a</b>,<b>d</b>,<b>g</b>), the number of training samples <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> in (<b>b</b>,<b>e</b>,<b>h</b>), and privacy parameter <math display="inline"><semantics> <mi>δ</mi> </semantics></math> in (<b>c</b>,<b>f</b>,<b>i</b>).</p>
Full article ">Figure 3
<p>Decentralized linear regression performance comparison in terms of MSE and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> for <span class="html-italic">IWPC</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>9</mn> <mo>)</mo> </mrow> </semantics></math>, <span class="html-italic">crime</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>101</mn> <mo>)</mo> </mrow> </semantics></math>, and <span class="html-italic">twitter</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>77</mn> <mo>)</mo> </mrow> </semantics></math> datasets with varying noise standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> in (<b>a</b>,<b>d</b>,<b>g</b>), the number of training samples <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> in (<b>b</b>,<b>e</b>,<b>h</b>), and privacy parameter <math display="inline"><semantics> <mi>δ</mi> </semantics></math> in (<b>c</b>,<b>f</b>,<b>i</b>).</p>
Full article ">Figure 4
<p>Decentralized logistic regression performance comparison in terms of accuracy and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> for <span class="html-italic">phishing</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>30</mn> <mo>)</mo> </mrow> </semantics></math>, <span class="html-italic">adult</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>13</mn> <mo>)</mo> </mrow> </semantics></math>, and <span class="html-italic">kdd</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>36</mn> <mo>)</mo> </mrow> </semantics></math> datasets with varying noise standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> in (<b>a</b>,<b>d</b>,<b>g</b>), the number of training samples <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> in (<b>b</b>,<b>e</b>,<b>h</b>), and privacy parameter <math display="inline"><semantics> <mi>δ</mi> </semantics></math> in (<b>c</b>,<b>f</b>,<b>i</b>).</p>
Full article ">Figure 5
<p>Decentralized linear and logistic regression performance comparison and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> with varying number of sites <span class="html-italic">S</span> for the datasets (<b>a</b>) <span class="html-italic">IWPC</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>9</mn> <mo>)</mo> </mrow> </semantics></math>, (<b>b</b>) <span class="html-italic">crime</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>101</mn> <mo>)</mo> </mrow> </semantics></math>, (<b>c</b>) <span class="html-italic">twitter</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>77</mn> <mo>)</mo> </mrow> </semantics></math>, (<b>d</b>) <span class="html-italic">phishing</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>30</mn> <mo>)</mo> </mrow> </semantics></math>, (<b>e</b>) <span class="html-italic">adult</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>13</mn> <mo>)</mo> </mrow> </semantics></math>, and (<b>f</b>) <span class="html-italic">kdd</span> <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>D</mi> <mo>=</mo> <mn>36</mn> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure A1
<p>Standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> of the additive noise for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, and (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> for different values of dimensionality <span class="html-italic">D</span> for differentially private linear regression using <b>fm</b>, <b>rlx-fm</b>, and <b>gauss-fm</b>.</p>
Full article ">Figure A2
<p>Standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> of the additive noise for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> for different values of dimensionality <span class="html-italic">D</span> for differentially private logistic regression using <b>fm</b>, <b>rlx-fm</b>, and <b>gauss-fm</b>.</p>
Full article ">Figure A3
<p>Performance comparison and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> for <span class="html-italic">synthetic</span> datasets with varying noise standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> in (<b>a</b>,<b>d</b>,<b>g</b>,<b>j</b>), number of training samples <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> in (<b>b</b>,<b>e</b>,<b>h</b>,<b>k</b>), and privacy parameter <math display="inline"><semantics> <mi>δ</mi> </semantics></math> in (<b>c</b>,<b>f</b>,<b>i</b>,<b>l</b>).</p>
Full article ">Figure A3 Cont.
<p>Performance comparison and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> for <span class="html-italic">synthetic</span> datasets with varying noise standard deviation <math display="inline"><semantics> <mi>τ</mi> </semantics></math> in (<b>a</b>,<b>d</b>,<b>g</b>,<b>j</b>), number of training samples <math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>t</mi> <mi>r</mi> <mi>a</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> in (<b>b</b>,<b>e</b>,<b>h</b>,<b>k</b>), and privacy parameter <math display="inline"><semantics> <mi>δ</mi> </semantics></math> in (<b>c</b>,<b>f</b>,<b>i</b>,<b>l</b>).</p>
Full article ">Figure A4
<p>Decentralized linear and logistic regression performance comparison and overall <math display="inline"><semantics> <mi>ϵ</mi> </semantics></math> with varying number of sites <span class="html-italic">S</span> for the datasets (<b>a</b>) <span class="html-italic">synth (D = 20)</span> and (<b>b</b>) <span class="html-italic">synth (D = 50)</span>.</p>
Full article ">
21 pages, 453 KiB  
Article
Winning a CHSH Game without Entangled Particles in a Finite Number of Biased Rounds: How Much Luck Is Needed?
by Christoph Gallus, Pawel Blasiak and Emmanuel M. Pothos
Entropy 2023, 25(5), 824; https://doi.org/10.3390/e25050824 - 21 May 2023
Cited by 1 | Viewed by 2361
Abstract
Quantum games, such as the CHSH game, are used to illustrate the puzzle and power of entanglement. These games are played over many rounds and in each round, the participants, Alice and Bob, each receive a question bit to which they each have [...] Read more.
Quantum games, such as the CHSH game, are used to illustrate the puzzle and power of entanglement. These games are played over many rounds, and in each round the participants, Alice and Bob, each receive a question bit to which they each have to give an answer bit, without being able to communicate during the game. When all possible classical answering strategies are analyzed, it is found that Alice and Bob cannot win more than 75% of the rounds. A higher percentage of wins arguably requires an exploitable bias in the random generation of the question bits or access to “non-local” resources, such as entangled pairs of particles. However, in an actual game, the number of rounds has to be finite, and question regimes may come up with unequal likelihood, so there is always a possibility that Alice and Bob win by pure luck. This statistical possibility has to be transparently analyzed for practical applications such as the detection of eavesdropping in quantum communication. Similarly, when Bell tests are used in macroscopic situations to investigate the connection strength between system components and the validity of proposed causal models, the available data are limited and the possible combinations of question bits (measurement settings) may not be controlled to occur with equal likelihood. In the present work, we give a fully self-contained proof of a bound on the probability of winning a CHSH game by pure luck, without making the usual assumption of only small biases in the random number generators. We also show bounds for the case of unequal probabilities, based on results from McDiarmid and Combes, and numerically illustrate certain exploitable biases. Full article
(This article belongs to the Special Issue Quantum Probability and Randomness IV)
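The 75% classical bound and the effect of biased question bits can be reproduced with a short Monte Carlo sketch (illustrative only, not the paper's code; the deterministic strategy and the biased question distribution below are assumptions, chosen to mirror the simulation described in the Figure 1 caption):

```python
import random

def play_chsh(n_rounds, strategy_a, strategy_b, p_xy=None, seed=0):
    """Simulate a classical CHSH game; a round is won iff a XOR b == x AND y."""
    rng = random.Random(seed)
    pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    weights = [p_xy[p] for p in pairs] if p_xy else [0.25] * 4
    wins = 0
    for _ in range(n_rounds):
        x, y = rng.choices(pairs, weights)[0]
        if strategy_a(x) ^ strategy_b(y) == (x & y):
            wins += 1
    return wins / n_rounds

# Optimal deterministic strategy: both always answer 0 (loses only when xy = 11).
rate_fair = play_chsh(100_000, lambda x: 0, lambda y: 0)
# Biased question bits as in the Figure 1 example: P(xy=00) = 0.7, others 0.1.
rate_biased = play_chsh(100_000, lambda x: 0, lambda y: 0,
                        p_xy={(0, 0): 0.7, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.1})
```

With unbiased questions the win rate fluctuates around 3/4, while the biased question distribution lifts it to about 0.9 — the kind of exploitable bias whose statistical consequences the paper bounds.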
Show Figures

Figure 1
<p>Two probability distributions for <math display="inline"><semantics> <msubsup> <mi>S</mi> <mn>1</mn> <mi>obs</mi> </msubsup> </semantics></math> generated by a Monte Carlo simulation of <math display="inline"><semantics> <mrow> <mn>10</mn> <mo>,</mo> <mn>000</mn> </mrow> </semantics></math> CHSH games of <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> rounds each. The threshold of <math display="inline"><semantics> <mrow> <mn>2</mn> <mo>+</mo> <mi>η</mi> <mo>=</mo> <mn>2.25</mn> </mrow> </semantics></math> is shown in red. (<b>left</b>) The graph on the left-hand side was generated with Alice and Bob randomly picking elementary strategies with <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, while the regimes <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </semantics></math> were generated by independent and unbiased coin tosses. The simulated probability is <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>r</mi> <mo>{</mo> <msubsup> <mi>S</mi> <mn>1</mn> <mi>obs</mi> </msubsup> <mo>⩾</mo> <mn>2.25</mn> <mo>}</mo> <mo>=</mo> <mn>4.5</mn> <mo>%</mo> </mrow> </semantics></math> with a maximum value of <math display="inline"><semantics> <mrow> <msubsup> <mi>S</mi> <mrow> <mn>1</mn> <mo>,</mo> <mo movablelimits="true" form="prefix">max</mo> </mrow> <mi>obs</mi> </msubsup> <mo>=</mo> <mn>2.52</mn> </mrow> </semantics></math> observed in a single CHSH game. (<b>right</b>) The graph on the right-hand side has been generated with Alice and Bob randomly picking elementary strategies that win in the regime <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>y</mi> <mo>=</mo> <mn>00</mn> </mrow> </semantics></math> as well as satisfy <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. 
Here, all regimes <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </semantics></math> were generated by independent, but biased coin tosses with <math display="inline"><semantics> <mrow> <mi>P</mi> <mo>(</mo> <mi>x</mi> <mi>y</mi> <mo>=</mo> <mn>00</mn> <mo>)</mo> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>P</mi> <mo>(</mo> <mi>x</mi> <mi>y</mi> <mo>=</mo> <mn>01</mn> <mo>)</mo> <mo>=</mo> <mi>P</mi> <mo>(</mo> <mi>x</mi> <mi>y</mi> <mo>=</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mi>P</mi> <mo>(</mo> <mi>x</mi> <mi>y</mi> <mo>=</mo> <mn>11</mn> <mo>)</mo> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. The simulated probability is <math display="inline"><semantics> <mrow> <mi>P</mi> <mi>r</mi> <mo>{</mo> <msubsup> <mi>S</mi> <mn>1</mn> <mi>obs</mi> </msubsup> <mo>⩾</mo> <mn>2.25</mn> <mo>}</mo> <mo>=</mo> <mn>14</mn> <mo>%</mo> </mrow> </semantics></math> with a maximum value of <math display="inline"><semantics> <mrow> <msubsup> <mi>S</mi> <mrow> <mn>1</mn> <mo>,</mo> <mo movablelimits="true" form="prefix">max</mo> </mrow> <mi>obs</mi> </msubsup> <mo>=</mo> <mn>2.86</mn> </mrow> </semantics></math> observed in a single CHSH game.</p>
Full article ">
12 pages, 329 KiB  
Article
Entropy of Financial Time Series Due to the Shock of War
by Ewa A. Drzazga-Szczȩśniak, Piotr Szczepanik, Adam Z. Kaczmarek and Dominik Szczȩśniak
Entropy 2023, 25(5), 823; https://doi.org/10.3390/e25050823 - 21 May 2023
Cited by 11 | Viewed by 2186
Abstract
The concept of entropy is not unique to statistical mechanics; among other applications, it can play a pivotal role in the analysis of a time series, particularly stock market data. In this area, sudden events are especially interesting as they describe [...] Read more.
The concept of entropy is not unique to statistical mechanics; among other applications, it can play a pivotal role in the analysis of a time series, particularly stock market data. In this area, sudden events are especially interesting as they describe abrupt data changes with potentially long-lasting effects. Here, we investigate the impact of such events on the entropy of financial time series. As a case study, we consider data from the Polish stock market, in the context of its main cumulative index, and discuss them for the finite time periods before and after the outbreak of the 2022 Russian invasion of Ukraine. This analysis allows us to validate the entropy-based methodology in assessing changes in market volatility, as driven by extreme external factors. We show that some qualitative features of such market variations can be well captured in terms of the entropy. In particular, the discussed measure appears to highlight differences between the data of the two considered timeframes in agreement with the character of their empirical distributions, which is not always the case in terms of the conventional standard deviation. Moreover, the entropy of the cumulative index qualitatively averages the entropies of its component assets, suggesting its capability to describe interdependencies among them. The entropy is also found to exhibit signatures of upcoming extreme events. Finally, the role of the recent war in shaping the current economic situation is briefly discussed. Full article
(This article belongs to the Special Issue Recent Trends and Developments in Econophysics)
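The abstract's core measure — the Shannon entropy of the discrete probability density of returns — can be sketched in a few lines. This is a minimal illustration using synthetic Gaussian log-returns and an assumed fixed bin width, not the authors' data or code:

```python
import math
import random

def shannon_entropy(samples, bin_width=0.005):
    """Shannon entropy (in nats) of the discrete PDF obtained by binning samples."""
    counts = {}
    for s in samples:
        b = math.floor(s / bin_width)
        counts[b] = counts.get(b, 0) + 1
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts.values())

# Synthetic daily log-returns: a calm year versus a high-volatility "shock" year.
rng = random.Random(1)
calm = [rng.gauss(0.0, 0.01) for _ in range(250)]
shock = [rng.gauss(0.0, 0.03) for _ in range(250)]
h_calm, h_shock = shannon_entropy(calm), shannon_entropy(shock)
```

The broader empirical distribution of the high-volatility period occupies more bins and therefore yields a higher entropy, mirroring the qualitative volatility signal the abstract describes.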
Show Figures

Figure 1
<p>The standard deviation for the WIG20 index and its component stocks. The first panel is for the constant component companies and the second (third) for the stocks introduced to (removed from) the index at some point. The results are given for the one-year time period before the beginning of the Russian invasion of Ukraine (blue) and after this event (orange). The solid lines correspond to the WIG20 index, whereas closed symbols represent estimates for the component stocks. Dashed lines are a guide for the eye.</p>
Full article ">Figure 2
<p>The discrete probability density function for the WIG20 index, for the one-year period before (blue) and after (orange) the beginning of the Russian invasion of Ukraine.</p>
Full article ">Figure 3
<p>The discrete probability density function for the component stocks of the WIG20 index. The first four rows are for the constant component companies, and the fifth (sixth) row is for the stocks introduced to (removed from) the index at some point. The results are presented for the one-year time period before the beginning of the Russian invasion of Ukraine (blue) and after this event (orange).</p>
Full article ">Figure 4
<p>The Shannon entropy for the WIG20 index and its component stocks. The first panel is for the constant component companies and the second (third) for the stocks introduced to (removed from) the index at some point. The results are given for the one-year time period before the beginning of the Russian invasion of Ukraine (blue) and after this event (orange). The solid lines correspond to the WIG20 index, whereas closed symbols represent estimates for the component stocks. Dashed lines are a guide for the eye.</p>
Full article ">Figure 5
<p>The Shannon entropy for the WIG20 index as calculated for different periods of time before (blue) and after (orange) the beginning of the Russian invasion of Ukraine. Dashed lines are a guide for the eye.</p>
Full article ">Figure A1
<p>The daily log-returns for the WIG20 cumulative index before (blue) and after (orange) the beginning of the 2022 Russian invasion of Ukraine. For convenience, the inset presents data in the vicinity of the initial invasion day.</p>
Full article ">
17 pages, 838 KiB  
Article
Attribute-Based Verifiable Conditional Proxy Re-Encryption Scheme
by Yongli Tang, Minglu Jin, Hui Meng, Li Yang and Chengfu Zheng
Entropy 2023, 25(5), 822; https://doi.org/10.3390/e25050822 - 19 May 2023
Cited by 3 | Viewed by 2126
Abstract
Agents in cloud computing are mostly semi-honest, so they may perform unreliable calculations during the actual execution process. In this paper, an attribute-based verifiable conditional proxy re-encryption (AB-VCPRE) scheme using a homomorphic signature is proposed to solve the problem that the current [...] Read more.
Agents in cloud computing are mostly semi-honest, so they may perform unreliable calculations during the actual execution process. In this paper, an attribute-based verifiable conditional proxy re-encryption (AB-VCPRE) scheme using a homomorphic signature is proposed to solve the problem that current attribute-based conditional proxy re-encryption (AB-CPRE) algorithms cannot detect illegal behavior by the agent. The scheme implements robustness; that is, the re-encrypted ciphertext can be verified by the verification server, showing that the received ciphertext was correctly converted by the agent from the original ciphertext, so that illegal activities of agents can be effectively detected. In addition, the article demonstrates the reliability of the constructed AB-VCPRE scheme's verification in the standard model and proves that the scheme satisfies CPA security in the selective security model based on the learning with errors (LWE) assumption. Full article
(This article belongs to the Special Issue Information Security and Privacy: From IoT to IoV)
Show Figures

Figure 1
<p>Flow chart of AB-VCPRE.</p>
Full article ">Figure 2
<p>The workflow of AB-VCPRE.</p>
Full article ">
15 pages, 2586 KiB  
Article
TSFN: A Novel Malicious Traffic Classification Method Using BERT and LSTM
by Zhaolei Shi, Nurbol Luktarhan, Yangyang Song and Huixin Yin
Entropy 2023, 25(5), 821; https://doi.org/10.3390/e25050821 - 19 May 2023
Cited by 10 | Viewed by 3561
Abstract
Traffic classification is the first step in network anomaly detection and is essential to network security. However, existing malicious traffic classification methods have several limitations; for example, statistical-based methods rely on hand-designed features, and deep learning-based methods are sensitive to the balance [...] Read more.
Traffic classification is the first step in network anomaly detection and is essential to network security. However, existing malicious traffic classification methods have several limitations; for example, statistical-based methods rely on hand-designed features, and deep learning-based methods are sensitive to the balance and adequacy of data sets. In addition, existing BERT-based malicious traffic classification methods focus only on the global features of traffic and ignore its time-series features. To address these problems, we propose a BERT-based Time-Series Feature Network (TSFN) model in this paper. The first component is a packet encoder module built on the BERT model, which captures the global features of the traffic using the attention mechanism. The second is a temporal feature extraction module built on an LSTM model, which captures the time-series features of the traffic. The global and time-series features of the malicious traffic are then combined into the final feature representation, which can better represent the malicious traffic. The experimental results show that the proposed approach can effectively improve the accuracy of malicious traffic classification on the publicly available USTC-TFC dataset, reaching an F1 value of 99.50%. This shows that the time-series features in malicious traffic can help improve the accuracy of malicious traffic classification. Full article
(This article belongs to the Section Multidisciplinary Applications)
Show Figures

Figure 1
<p>Time-series Feature Network model Structure.</p>
Full article ">Figure 2
<p>Performance of the model under different network traffic representations.</p>
Full article ">Figure 3
<p>Effect of different sequence lengths (accuracy).</p>
Full article ">Figure 4
<p>Effect of different num_layer (accuracy).</p>
Full article ">Figure 5
<p>Precision rate of each class in the three methods for malicious traffic classification.</p>
Full article ">Figure 6
<p>Recall rate of each class in the three methods for malicious traffic classification.</p>
Full article ">Figure 7
<p>F1-score rate of each class in the three methods for malicious traffic classification.</p>
Full article ">
12 pages, 480 KiB  
Article
TTANAD: Test-Time Augmentation for Network Anomaly Detection
by Seffi Cohen, Niv Goldshlager, Bracha Shapira and Lior Rokach
Entropy 2023, 25(5), 820; https://doi.org/10.3390/e25050820 - 19 May 2023
Viewed by 2205
Abstract
Machine learning-based Network Intrusion Detection Systems (NIDS) are designed to protect networks by identifying anomalous behaviors or improper uses. In recent years, advanced attacks, such as those mimicking legitimate traffic, have been developed to avoid alerting such systems. Previous works mainly focused on [...] Read more.
Machine learning-based Network Intrusion Detection Systems (NIDS) are designed to protect networks by identifying anomalous behaviors or improper uses. In recent years, advanced attacks, such as those mimicking legitimate traffic, have been developed to avoid alerting such systems. Previous works mainly focused on improving the anomaly detector itself, whereas in this paper, we introduce a novel method, Test-Time Augmentation for Network Anomaly Detection (TTANAD), which utilizes test-time augmentation to enhance anomaly detection from the data side. TTANAD leverages the temporal characteristics of traffic data and produces temporal test-time augmentations on the monitored traffic data. This method aims to create additional points of view when examining network traffic during inference, making it suitable for a variety of anomaly detector algorithms. Our experimental results demonstrate that TTANAD outperforms the baseline in all benchmark datasets and with all examined anomaly detection algorithms, according to the Area Under the Receiver Operating Characteristic (AUC) metric. Full article
(This article belongs to the Special Issue Signal and Information Processing in Networks)
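The augmentation scheme described in Figures 1 and 2 — inner sliding windows with a stride of 1, temporal aggregates (minimum, maximum, standard deviation), and averaged detector scores — can be sketched as follows; the toy detector and window sizes are assumptions for illustration, not the paper's implementation:

```python
import statistics

def temporal_augmentations(window, inner_size):
    """All inner sliding windows with a stride of 1."""
    return [window[i:i + inner_size] for i in range(len(window) - inner_size + 1)]

def aggregate(view):
    """Temporal features of one view: minimum, maximum, and standard deviation."""
    return (min(view), max(view), statistics.pstdev(view))

def tta_score(window, detector, inner_size):
    """Average the detector's anomaly score over all augmented views."""
    views = temporal_augmentations(window, inner_size)
    return sum(detector(aggregate(v)) for v in views) / len(views)

# Stand-in detector: scores a view by its dispersion (any real NIDS model fits here).
detector = lambda feats: feats[2]
normal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
anomaly = [1.0, 1.1, 0.9, 5.0, 1.05, 0.95]
score_normal = tta_score(normal, detector, inner_size=4)
score_anomaly = tta_score(anomaly, detector, inner_size=4)
```

Because the anomalous spike appears in several overlapping views, averaging over the augmentations reinforces the detector's signal rather than relying on a single window.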
Show Figures

Figure 1
<p>Temporal aggregation-based TTA: Creating more samples using inner windows with a stride of 1, then extracting temporal features using the defined aggregators. We produce a prediction for the original window and the augmentations using the anomaly detector, then calculate the final prediction using the average of all predictions.</p>
Full article ">Figure 2
<p>Temporal Aggregation: extracting temporal features using the minimum, maximum, and standard deviation aggregators with a window size and step size of 5. The extracted features are forwarded to the anomaly detector.</p>
Full article ">
13 pages, 1435 KiB  
Article
Modeling Exact Frequency-Energy Distribution for Quakes by a Probabilistic Cellular Automaton
by Mariusz Białecki, Mateusz Gałka, Arpan Bagchi and Jacek Gulgowski
Entropy 2023, 25(5), 819; https://doi.org/10.3390/e25050819 - 19 May 2023
Cited by 1 | Viewed by 1399
Abstract
We develop the notion of Random Domino Automaton, a simple probabilistic cellular automaton model for earthquake statistics, in order to provide a mechanistic basis for the interrelation of Gutenberg–Richter law and Omori law with the waiting time distribution for earthquakes. In this work, [...] Read more.
We develop the notion of the Random Domino Automaton, a simple probabilistic cellular automaton model for earthquake statistics, in order to provide a mechanistic basis for the interrelation of the Gutenberg–Richter law and the Omori law with the waiting time distribution for earthquakes. In this work, we provide a general algebraic solution to the inverse problem for the model and apply the proposed procedure to seismic data recorded in the Legnica-Głogów Copper District in Poland, demonstrating the adequacy of the method. The solution of the inverse problem enables adjustment of the model to localization-dependent seismic properties manifested by deviations from the Gutenberg–Richter law. Full article
Show Figures

Figure 1
<p>The distribution of clusters <math display="inline"><semantics> <msub> <mover accent="true"> <mi>n</mi> <mo>^</mo> </mover> <mi>i</mi> </msub> </semantics></math> (<b>A</b>) and the distribution of rebound parameters <math display="inline"><semantics> <msub> <mi>μ</mi> <mi>i</mi> </msub> </semantics></math> (<b>B</b>) calculated for avalanche size distribution <math display="inline"><semantics> <msub> <mi>w</mi> <mi>i</mi> </msub> </semantics></math> given by geometric series of Equation (<a href="#FD29-entropy-25-00819" class="html-disp-formula">29</a>) with <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>99</mn> <mo>/</mo> <mn>100</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>The augmented distribution of avalanches <math display="inline"><semantics> <msub> <mi>w</mi> <mi>i</mi> </msub> </semantics></math>, up to <span class="html-italic">i</span> = 140,000, obtained from the LGCD episode from Legnica-Głogów Copper District in Poland. The recorded data (blue dots) were adjusted to the resolution of RDA model (green dots gradually turning into a line) and the exponential tail (thicker green line) was added using the auxiliary inverse-power fit (violet dashed line). See the main text for explanations.</p>
Full article ">Figure 3
<p>(<b>A</b>) Distribution of clusters <math display="inline"><semantics> <msub> <mi>n</mi> <mi>i</mi> </msub> </semantics></math> and rebound parameters <math display="inline"><semantics> <msub> <mi>μ</mi> <mi>i</mi> </msub> </semantics></math> obtained from solution of the inverse problem for the augmented distribution of avalanches presented in <a href="#entropy-25-00819-f002" class="html-fig">Figure 2</a>. The left axis refers to the values of <math display="inline"><semantics> <msub> <mi>n</mi> <mi>i</mi> </msub> </semantics></math>, and the right axis refers to values of the <math display="inline"><semantics> <msub> <mi>μ</mi> <mi>i</mi> </msub> </semantics></math>. (<b>B</b>) Comparison of the initial distribution of avalanches form <a href="#entropy-25-00819-f002" class="html-fig">Figure 2</a> with the distribution of avalanches obtained from the solution of the direct problem, i.e., calculated based on the calculated values of <math display="inline"><semantics> <msub> <mi>μ</mi> <mi>i</mi> </msub> </semantics></math>. These two distributions overlap in the whole range, which confirms the accuracy of the proposed procedure.</p>
Full article ">
18 pages, 16091 KiB  
Article
Study of Generalized Chaotic Synchronization Method Incorporating Error-Feedback Coefficients
by Yanan Xing, Wenjie Dong, Jian Zeng, Pengteng Guo, Jing Zhang and Qun Ding
Entropy 2023, 25(5), 818; https://doi.org/10.3390/e25050818 - 18 May 2023
Cited by 5 | Viewed by 1670
Abstract
In this paper, taking the generalized synchronization problem of discrete chaotic systems as a starting point, we propose a generalized synchronization method that incorporates error-feedback coefficients into the controller, based on generalized chaos synchronization theory and the stability theorem for nonlinear systems. Two discrete [...] Read more.
In this paper, taking the generalized synchronization problem of discrete chaotic systems as a starting point, we propose a generalized synchronization method that incorporates error-feedback coefficients into the controller, based on generalized chaos synchronization theory and the stability theorem for nonlinear systems. Two discrete chaotic systems with different dimensions are constructed, their dynamics are analyzed, and their phase diagrams, Lyapunov exponent diagrams, and bifurcation diagrams are shown and described. The experimental results show that the design of the adaptive generalized synchronization system is achievable when the error-feedback coefficient satisfies certain conditions. Finally, a chaotic hiding image encryption transmission system based on the generalized synchronization approach, in which an error-feedback coefficient is introduced into the controller, is proposed. Full article
(This article belongs to the Topic Advances in Nonlinear Dynamics: Methods and Applications)
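The role of an error-feedback coefficient can be illustrated on a one-dimensional drive-response pair. This is a generic sketch on the logistic map, not the paper's systems or controller; the coefficient value c = 0.8 is an assumption chosen so that the error provably contracts:

```python
def logistic(z, r=4.0):
    """Chaotic logistic map on [0, 1]."""
    return r * z * (1.0 - z)

def synchronize(x0, y0, c, n_steps):
    """Response y receives error feedback c*(f(x) - f(y)) from the drive x."""
    x, y = x0, y0
    errors = []
    for _ in range(n_steps):
        # Simultaneous update: the tuple RHS uses the previous x and y.
        x, y = logistic(x), logistic(y) + c * (logistic(x) - logistic(y))
        errors.append(abs(x - y))
    return errors

# With |1 - c| * L < 1, where L = 4 is the Lipschitz constant of the logistic
# map on [0, 1], the synchronization error shrinks by at least |1 - c| * L = 0.8
# per step, so the response locks onto the chaotic drive.
errs = synchronize(0.3, 0.6, c=0.8, n_steps=60)
```

The error decays geometrically despite the chaotic drive — a sufficient condition on the feedback coefficient analogous to the conditions the paper's stability analysis establishes for its higher-dimensional systems.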
Show Figures

Figure 1
<p>Bifurcation diagram with respect to the parameter <span class="html-italic">a</span> of the proposed system (12).</p>
Full article ">Figure 2
<p>Output of chaotic sequences with different initial values of state variables and their autocorrelations: (<b>a</b>) output of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when the initial values of the system are different; (<b>b</b>) autocorrelations of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mo>−</mo> <mn>0.3</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>; (<b>c</b>) output chaotic sequences of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math>; (<b>d</b>) output of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when the initial values of the system are different; (<b>e</b>) autocorrelations of the output chaotic sequences (<math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>) when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> 
<mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mo>−</mo> <mn>0.1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>; (<b>f</b>) output chaotic sequences of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Phase diagrams of proposed 3D hyperchaotic mapping (5): (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 3 Cont.
<p>Phase diagrams of proposed 3D hyperchaotic mapping (5): (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>(<b>a</b>) Dynamical curves of status variables <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>y</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) dynamical curves of <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) dynamical curves of status variables <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>y</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>d</b>) dynamical curves of <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>e</b>) dynamical curves of status variables <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>y</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>f</b>) dynamical curves of <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Output of chaotic sequences with different initial values of state variables and their autocorrelations: (<b>a</b>) output of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when the initial values of the system are different; (<b>b</b>) autocorrelations of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mo>−</mo> <mn>0.3</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>; (<b>c</b>) output chaotic sequences of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mo>-</mo> <mn>0.1</mn> </mrow> 
</semantics></math>; (<b>d</b>) output of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when the initial values of the system are different; (<b>e</b>) autocorrelations of the output chaotic sequences of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mo>−</mo> <mn>0.1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>; (<b>f</b>) output chaotic sequences of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> when <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <mo>−</mo> <mn>0.3</mn> </mrow> </semantics></math>.</p>
Figure 6
<p>Phase diagrams of proposed 6D hyperchaotic system (14): (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>e</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>f</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>g</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> 
<mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>h</b>) <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 7
<p>Dynamical diagrams of status variables with <span class="html-italic">k</span>: System (21): (<b>a</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>y</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>y</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; System (22): (<b>e</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>y</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>f</b>) <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>g</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <msub> <mi>x</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>y</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>h</b>) <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>4</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> 
</mrow> </mrow> </semantics></math>; System (23): (<b>i</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <msub> <mi>x</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>y</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>j</b>) <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>5</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>k</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <msub> <mi>x</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mi>y</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>l</b>) <math display="inline"><semantics> <mrow> <msub> <mi>e</mi> <mn>6</mn> </msub> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 8
<p>Framework diagram of proposed encrypted transmission system.</p>
Figure 9
<p>Results of proposed encryption and decryption transmission system: (<b>a</b>) original Lena; (<b>b</b>) encrypted Lena; (<b>c</b>) decrypted Lena.</p>
Figure 10
<p>Histograms of different components: Original Lena: (<b>a</b>) R component of original Lena; (<b>b</b>) G component of original Lena; (<b>c</b>) B component of original Lena; Encrypted Lena: (<b>d</b>) R component of encrypted Lena; (<b>e</b>) G component of encrypted Lena; (<b>f</b>) B component of encrypted Lena.</p>
Figure 11
<p>Results of decrypting image using slightly different keystreams: (<b>a</b>) decryption result for correct key; (<b>b</b>) decryption result for wrong key.</p>
Figure 12
<p>Test results for resistance to data-loss attacks: (<b>a</b>) encrypted Lena cropped with black pixels; (<b>b</b>) decrypted Lena.</p>
Figure 13
<p>Correlations between pixel points in different orientations of the original and encrypted images. Original image: (<b>a</b>) horizontal, (<b>b</b>) vertical, and (<b>c</b>) diagonal correlations of the R component; (<b>h</b>) horizontal, (<b>i</b>) vertical, and (<b>j</b>) diagonal correlations of the G component; (<b>n</b>) horizontal, (<b>o</b>) vertical, and (<b>p</b>) diagonal correlations of the B component. Encrypted image: (<b>d</b>) horizontal, (<b>e</b>) vertical, and (<b>f</b>) diagonal correlations of the R component; (<b>k</b>) horizontal, (<b>l</b>) vertical, and (<b>m</b>) diagonal correlations of the G component; (<b>q</b>) horizontal, (<b>r</b>) vertical, and (<b>s</b>) diagonal correlations of the B component.</p>
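The correlation metric visualized in Figure 13 is conventionally computed as the Pearson coefficient of randomly sampled adjacent-pixel pairs along each direction. A minimal sketch (the function name, sample count, and seed are illustrative assumptions, not details taken from the paper):

```python
import numpy as np

def adjacent_correlation(channel, direction="horizontal", n_pairs=5000, seed=0):
    """Pearson correlation coefficient of randomly sampled pairs of
    adjacent pixels in one colour channel, along one direction."""
    rng = np.random.default_rng(seed)
    h, w = channel.shape
    dy, dx = {"horizontal": (0, 1), "vertical": (1, 0), "diagonal": (1, 1)}[direction]
    ys = rng.integers(0, h - dy, n_pairs)
    xs = rng.integers(0, w - dx, n_pairs)
    a = channel[ys, xs].astype(float)
    b = channel[ys + dy, xs + dx].astype(float)
    return float(np.corrcoef(a, b)[0, 1])
```

A natural image gives values near 1 in every direction, while a well-encrypted image gives values near 0, which is the contrast Figure 13 visualizes.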
Full article
20 pages, 4794 KiB  
Article
Swarming Transition in Super-Diffusive Self-Propelled Particles
by Morteza Nattagh Najafi, Rafe Md. Abu Zayed and Seyed Amin Nabavizadeh
Entropy 2023, 25(5), 817; https://doi.org/10.3390/e25050817 - 18 May 2023
Viewed by 1742
Abstract
A super-diffusive Vicsek model is introduced in this paper that incorporates Lévy flights with exponent α. The inclusion of this feature leads to an increase in the fluctuations of the order parameter, ultimately resulting in the disorder phase becoming more dominant as α increases. The study finds that for α values close to two, the order–disorder transition is of first order, while for small enough values of α, it shows degrees of similarity with second-order phase transitions. The article formulates a mean-field theory based on the growth of the swarmed clusters that accounts for the decrease in the transition point as α increases. The simulation results show that the order parameter exponent β, correlation length exponent ν, and susceptibility exponent γ remain constant when α is altered, satisfying a hyperscaling relation. The same happens for the mass fractal dimension, information dimension, and correlation dimension when α is far from two. The study reveals that the fractal dimension of the external perimeter of connected self-similar clusters conforms to the fractal dimension of Fortuin–Kasteleyn clusters of the two-dimensional Q=2 Potts (Ising) model. The critical exponents linked to the distribution function of global observables vary when α changes. Full article
(This article belongs to the Section Statistical Physics)
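The model sketched in the abstract, standard Vicsek alignment with uniform angular noise η plus step lengths drawn from a truncated power law with exponent α, can be illustrated as follows. All function names, parameter values, and truncation bounds are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def levy_step(alpha, l_min=0.03, l_max=100.0, rng=None):
    """Sample a step length from a truncated power law P(l) ~ l**(-1 - alpha)
    on [l_min, l_max] by inverse-CDF sampling (a Levy-flight surrogate)."""
    rng = rng or np.random.default_rng()
    a, b = l_min ** -alpha, l_max ** -alpha
    u = rng.random()
    return (a - u * (a - b)) ** (-1.0 / alpha)

def vicsek_step(pos, theta, L, r, eta, alpha, rng):
    """One update of a Vicsek model with Levy-distributed step lengths:
    each particle aligns with the mean direction of neighbours within
    radius r, perturbed by uniform angular noise of amplitude eta."""
    N = len(theta)
    new_theta = np.empty(N)
    for i in range(N):
        d = pos - pos[i]
        d -= L * np.round(d / L)             # minimum-image convention (periodic box)
        nb = (d ** 2).sum(axis=1) < r ** 2   # neighbour mask (always includes self)
        mean_dir = np.arctan2(np.sin(theta[nb]).mean(), np.cos(theta[nb]).mean())
        new_theta[i] = mean_dir + eta * (rng.random() - 0.5)
    steps = np.array([levy_step(alpha, rng=rng) for _ in range(N)])
    pos = (pos + steps[:, None] * np.c_[np.cos(new_theta), np.sin(new_theta)]) % L
    return pos, new_theta

def order_parameter(theta):
    """phi = |<exp(i theta)>|: 1 for full alignment, ~0 for disorder."""
    return abs(np.exp(1j * theta).mean())
```

Sweeping η at fixed α and recording the order parameter produces the kind of order–disorder curves shown in Figure 2.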
Show Figures

Figure 1
<p>A snapshot of the particle density in the ordinary and super-diffusive VM (our model) at the transition point. Color maps for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.95</mn> </mrow> </semantics></math> are shown in (<b>a</b>) and (<b>b</b>), respectively.</p>
Figure 2
<p>(<b>a</b>) The time series of <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math> for various amounts of <math display="inline"><semantics> <mi>η</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>256</mn> </mrow> </semantics></math>. Left (right) inset shows the probability distribution of <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>(</mo> <mi>η</mi> <mo>)</mo> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>η</mi> <mo>&lt;</mo> <msub> <mi>η</mi> <mi>c</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>η</mi> <mo>=</mo> <msub> <mi>η</mi> <mi>c</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>η</mi> <mo>&gt;</mo> <msub> <mi>η</mi> <mi>c</mi> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.95</mn> </mrow> </semantics></math>). (<b>b</b>) <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>-<math display="inline"><semantics> <mi>η</mi> </semantics></math> graph for various <math display="inline"><semantics> <mi>α</mi> </semantics></math> values, showing the transition structure. Upper inset shows that Binder cumulant <span class="html-italic">G</span> in terms of <math display="inline"><semantics> <mi>η</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, which gives the transition point as its coincidence point. Lower inset shows the transition point in terms of <math display="inline"><semantics> <mi>α</mi> </semantics></math>. 
(<b>c</b>) <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>-<math display="inline"><semantics> <mi>η</mi> </semantics></math> graph for various <span class="html-italic">L</span> values. The coincidence point of re-scaled <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math> is shown in the lower inset in terms of <math display="inline"><semantics> <mi>η</mi> </semantics></math>, which determines the transition point. The upper inset shows how the peak of the distribution functions runs with the system size. (<b>d</b>) <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <mo>(</mo> <msup> <mi>L</mi> <mrow> <mi>β</mi> <mo>/</mo> <mi>ν</mi> </mrow> </msup> <mi>ϕ</mi> <mrow> <mo>(</mo> <mi>η</mi> <mo>)</mo> </mrow> <mo>)</mo> </mrow> </semantics></math> in terms of <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <mi>ϵ</mi> <msup> <mi>L</mi> <mrow> <mn>1</mn> <mo>/</mo> <mi>ν</mi> </mrow> </msup> </mrow> </semantics></math>, exhibiting a scaling behavior according to Equation (<a href="#FD4-entropy-25-00817" class="html-disp-formula">4</a>). The inset shows the exponents in terms of <math display="inline"><semantics> <mi>α</mi> </semantics></math>.</p>
Figure 3
<p><math display="inline"><semantics> <mi>χ</mi> </semantics></math> in terms of <math display="inline"><semantics> <mi>η</mi> </semantics></math>, exhibiting a divergent behavior at the transition point (<b>a</b>) for various system sizes and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> and (<b>b</b>) various <math display="inline"><semantics> <mi>α</mi> </semantics></math> values and <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>128</mn> </mrow> </semantics></math>. The inset of (<b>a</b>) is <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <msub> <mi>χ</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </mrow> </semantics></math> in terms of <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <mi>L</mi> </mrow> </semantics></math> the slope of which is <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>/</mo> <mi>ν</mi> </mrow> </semantics></math>, and the inset of (<b>b</b>) is <math display="inline"><semantics> <msub> <mi>χ</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> </semantics></math> in terms of <math display="inline"><semantics> <mi>α</mi> </semantics></math>. (<b>c</b>) re-scaled <math display="inline"><semantics> <mi>χ</mi> </semantics></math> in terms of re-scaled <math display="inline"><semantics> <mi>η</mi> </semantics></math> with the slope <math display="inline"><semantics> <mrow> <mo>−</mo> <mi>γ</mi> </mrow> </semantics></math>.</p>
Figure 4
<p><math display="inline"><semantics> <mi>ϕ</mi> </semantics></math> in terms of <math display="inline"><semantics> <mi>ρ</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>η</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> for various <span class="html-italic">L</span> values (up to <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>128</mn> </mrow> </semantics></math>). Inset: The data collapse analysis for <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math> in terms of <math display="inline"><semantics> <mi>ρ</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>η</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>, where the reduced density is defined as <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>ρ</mi> <mo stretchy="false">˜</mo> </mover> <mo>≡</mo> <mfrac> <mrow> <msub> <mi>ρ</mi> <mi>c</mi> </msub> <mo>−</mo> <mi>ρ</mi> </mrow> <msub> <mi>ρ</mi> <mi>c</mi> </msub> </mfrac> </mrow> </semantics></math>.</p>
Figure 5
<p>Schematic representation of the mean field method. (<b>a</b>) A static swarmed cluster (the yellow area shows), where <span class="html-italic">r</span> is its average radius, and the red and blue rings indicate the number of particles entering <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>i</mi> <mi>n</mi> </mrow> </msub> </semantics></math> and leaving <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>o</mi> <mi>u</mi> <mi>t</mi> </mrow> </msub> </semantics></math> the cluster, respectively. (<b>b</b>) The coherent movement of the swarmed cluster in the preferred direction <math display="inline"><semantics> <mfenced open="&#x2329;" close="&#x232A;"> <mi>θ</mi> </mfenced> </semantics></math>. The green bar moves from <math display="inline"><semantics> <mrow> <msup> <mi>y</mi> <mo>′</mo> </msup> <mo>=</mo> <mo>−</mo> <mi>r</mi> </mrow> </semantics></math> to <math display="inline"><semantics> <mrow> <msup> <mi>y</mi> <mo>′</mo> </msup> <mo>=</mo> <mi>r</mi> </mrow> </semantics></math> running over the area inside the swarmed cluster. The active particles with Levy flights in the range <math display="inline"><semantics> <mrow> <mo>[</mo> <msub> <mover accent="true"> <mi>l</mi> <mo stretchy="false">¯</mo> </mover> <mi>α</mi> </msub> <mo>−</mo> <mrow> <mo>(</mo> <mi>r</mi> <mo>+</mo> <msup> <mi>y</mi> <mo>′</mo> </msup> <mo>)</mo> </mrow> <mo>,</mo> <msub> <mover accent="true"> <mi>l</mi> <mo stretchy="false">¯</mo> </mover> <mi>α</mi> </msub> <mo>+</mo> <mrow> <mo>(</mo> <mi>r</mi> <mo>−</mo> <msup> <mi>y</mi> <mo>′</mo> </msup> <mo>)</mo> </mrow> <mo>]</mo> </mrow> </semantics></math> remain inside the swarmed cluster if the average radius <span class="html-italic">r</span> remains approximately unchanged during the process. 
The number of such particles is <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>−</mo> <msub> <mi>n</mi> <mi>out</mi> </msub> </mrow> </semantics></math>, where <span class="html-italic">N</span> is the number of particles inside the cluster in the previous step.</p>
Figure 6
<p><math display="inline"><semantics> <msup> <mi>r</mi> <mo>*</mo> </msup> </semantics></math> in terms of <math display="inline"><semantics> <mi>α</mi> </semantics></math> based on the mean field results.</p>
Figure 7
<p>(<b>a</b>) <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <msub> <mi>Z</mi> <mi>q</mi> </msub> </mrow> </semantics></math> in terms of <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <mi>δ</mi> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>∈</mo> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>30</mn> <mo>]</mo> </mrow> </semantics></math> in increment 2 for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math> (from the top to the bottom <span class="html-italic">q</span> decreases), the slope of which is <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>(</mo> <mi>q</mi> <mo>)</mo> </mrow> </semantics></math> (inset). (<b>b</b>) The fractal dimension (<math display="inline"><semantics> <msub> <mi>D</mi> <mi>f</mi> </msub> </semantics></math>), the information dimension (<math display="inline"><semantics> <msub> <mi>D</mi> <mn>1</mn> </msub> </semantics></math>) and the correlation dimension <math display="inline"><semantics> <msub> <mi>D</mi> <mn>2</mn> </msub> </semantics></math> in terms of <math display="inline"><semantics> <mi>α</mi> </semantics></math>.</p>
Figure 8
<p>(<b>a</b>) <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <mi>l</mi> </mrow> </semantics></math> in terms of <math display="inline"><semantics> <mrow> <mo form="prefix">log</mo> <mi>r</mi> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, the slope of which is <math display="inline"><semantics> <msubsup> <mi>d</mi> <mi>f</mi> <mi>loop</mi> </msubsup> </semantics></math>. Lower inset: the distribution of the gyration radius <span class="html-italic">r</span> for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>. Top inset: <math display="inline"><semantics> <msubsup> <mi>d</mi> <mi>f</mi> <mi>loop</mi> </msubsup> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>r</mi> </msub> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> in terms of <math display="inline"><semantics> <mi>α</mi> </semantics></math>. (<b>b</b>) The distribution of the loop length (<math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mi>l</mi> </mrow> </semantics></math>) and submass (<math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mi>s</mi> <mi>m</mi> </mrow> </semantics></math>). Top (Down) inset shows <math display="inline"><semantics> <msub> <mi>τ</mi> <mrow> <mi>s</mi> <mi>m</mi> </mrow> </msub> </semantics></math> (<math display="inline"><semantics> <msub> <mi>τ</mi> <mi>l</mi> </msub> </semantics></math>) in terms of <math display="inline"><semantics> <mi>α</mi> </semantics></math>.</p>
Figure A1
<p>(<b>a</b>) Binder cumulant in terms of <math display="inline"><semantics> <mi>η</mi> </semantics></math> and (<b>b</b>) the PDF for <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math>, 1, <math display="inline"><semantics> <mrow> <mn>1.5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mn>1.95</mn> </mrow> </semantics></math>.</p>
Figure A2
<p><math display="inline"><semantics> <mi>ϕ</mi> </semantics></math> in terms of <math display="inline"><semantics> <mi>η</mi> </semantics></math> (main) and the corresponding data collapse analysis (inset) in the low density limit <math display="inline"><semantics> <mrow> <mi>ρ</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>. The data collapse analysis shows that <math display="inline"><semantics> <mrow> <msub> <mi>η</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>2.098</mn> <mo>±</mo> <mn>0.01</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>0.38</mn> <mo>±</mo> <mn>0.1</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.9</mn> <mo>±</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
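The rescaling behind this data collapse maps each finite-size curve onto a single master curve via x = (η − η_c)L^(1/ν) and y = ϕL^(β/ν). A small sketch using the caption's fitted values (the function name and the test data are illustrative, not the paper's measurements):

```python
import numpy as np

ETA_C, BETA, NU = 2.098, 0.38, 0.9  # values quoted in the caption

def collapse(eta, phi, L):
    """Rescale a phi(eta) curve measured at system size L so that curves
    for different L fall onto one master curve near the critical point:
        x = (eta - ETA_C) * L**(1/NU),  y = phi * L**(BETA/NU)."""
    x = (np.asarray(eta) - ETA_C) * L ** (1.0 / NU)
    y = np.asarray(phi) * L ** (BETA / NU)
    return x, y
```

If the exponents are correct, curves for all system sizes coincide after this transformation, which is how the inset of Figure A2 determines the transition point.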
Figure A3
<p><math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>in</mi> </msub> </semantics></math> (blue line) and <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>out</mi> </msub> </semantics></math> (orange line) in terms of <span class="html-italic">r</span> for <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>max</mi> </msub> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.01</mn> </mrow> </semantics></math>, (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.25</mn> </mrow> </semantics></math>, (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, (<b>e</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.75</mn> </mrow> </semantics></math>, (<b>f</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.95</mn> </mrow> </semantics></math>.</p>
Figure A4
<p><math display="inline"><semantics> <msub> <mi>n</mi> <mi>in</mi> </msub> </semantics></math> (blue line) and <math display="inline"><semantics> <msub> <mi>n</mi> <mi>out</mi> </msub> </semantics></math> (orange line) in terms of <span class="html-italic">r</span> for <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>max</mi> </msub> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> for the coherent movement, for (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.8</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.01</mn> </mrow> </semantics></math>, (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.25</mn> </mrow> </semantics></math>, (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, (<b>e</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.75</mn> </mrow> </semantics></math>, (<b>f</b>) <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1.95</mn> </mrow> </semantics></math>.</p>
Figure A5
<p><span class="html-italic">n</span> (defined in Equation (<a href="#FD26-entropy-25-00817" class="html-disp-formula">A4</a>)) in terms of <span class="html-italic">r</span> for <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>max</mi> </msub> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math> for various <math display="inline"><semantics> <mi>α</mi> </semantics></math> values. <math display="inline"><semantics> <msup> <mi>r</mi> <mo>*</mo> </msup> </semantics></math> has been defined as the point <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>(</mo> <msup> <mi>r</mi> <mo>*</mo> </msup> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> with bold circles with the same color as the main graph (<math display="inline"><semantics> <msup> <mi>r</mi> <mo>*</mo> </msup> </semantics></math> decreases as <math display="inline"><semantics> <mi>α</mi> </semantics></math> increases).</p>
Full article
16 pages, 2953 KiB  
Article
On the Possibility of Reproducing Utsu’s Law for Earthquakes with a Spring-Block SOC Model
by Alfredo Salinas-Martínez, Jennifer Perez-Oregon, Ana María Aguilar-Molina, Alejandro Muñoz-Diosdado and Fernando Angulo-Brown
Entropy 2023, 25(5), 816; https://doi.org/10.3390/e25050816 - 18 May 2023
Cited by 4 | Viewed by 1455
Abstract
The Olami, Feder and Christensen (OFC) spring-block model has proven to be a powerful tool for analyzing and comparing synthetic and real earthquakes. This work explores whether Utsu’s law for earthquakes can be reproduced in the OFC model. Based on our previous works, several simulations characterizing real seismic regions were performed. We located the maximum earthquake in these regions, applied Utsu’s formulae to identify a possible aftershock area, and made comparisons between synthetic and real earthquakes. The research compares several equations for calculating the aftershock area and proposes a new one from the available data. Subsequently, we performed new simulations and chose a mainshock to analyze the behavior of the surrounding events, so as to identify which of them could be catalogued as aftershocks and relate them to the aftershock area previously determined with the proposed formula. Additionally, the spatial location of those events was considered in order to classify them as aftershocks. Finally, we plot the epicenters of the mainshock and of the possible aftershocks comprised in the calculated area, resembling the original work of Utsu. Having analyzed the results, it is plausible that Utsu’s law can be reproduced with a spring-block model exhibiting self-organized criticality (SOC). Full article
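The drive-and-relax cycle of the OFC spring-block model referred to in the abstract can be sketched as follows. Lattice size, conservation level γ, and the threshold convention are illustrative choices, not the settings used in the paper:

```python
import numpy as np

F_TH = 1.0  # failure (threshold) force

def ofc_avalanche(F, gamma):
    """Relax the lattice: every site at or above F_TH resets to zero and
    transfers gamma * F to each of its four neighbours (open boundaries,
    so transfers across the edge are lost).  Returns the avalanche size."""
    L = F.shape[0]
    size = 0
    stack = list(zip(*np.where(F >= F_TH)))
    while stack:
        i, j = stack.pop()
        if F[i, j] < F_TH:      # may have been relaxed already
            continue
        f = F[i, j]
        F[i, j] = 0.0
        size += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < L and 0 <= nj < L:
                F[ni, nj] += gamma * f
                if F[ni, nj] >= F_TH:
                    stack.append((ni, nj))
    return size

def ofc_run(L=32, gamma=0.2, steps=500, seed=0):
    """Drive uniformly until the most loaded block fails, relax, repeat;
    returns the list of synthetic-earthquake (avalanche) sizes."""
    rng = np.random.default_rng(seed)
    F = rng.random((L, L)) * F_TH
    sizes = []
    for _ in range(steps):
        i, j = np.unravel_index(np.argmax(F), F.shape)
        drive = F_TH - F[i, j]
        F += drive              # uniform drive to the next failure
        F[i, j] = F_TH          # guard against floating-point rounding
        sizes.append(ofc_avalanche(F, gamma))
    return sizes
```

For γ < 0.25 each toppling dissipates part of the force, so the dynamics are non-conservative; γ plays the role of the conservation level mentioned in the Figure 3 caption.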
Show Figures

Figure 1
<p>Schematics for the spring-block model.</p>
Figure 2
<p>The ellipse in the dashed line represents the aftershock area proposed by Utsu. All the events with epicenters inside this ellipse are considered aftershocks of the main event that is marked as a double circle. Image taken from [<a href="#B1-entropy-25-00816" class="html-bibr">1</a>].</p>
Figure 3
<p>Relaxed area in a spring-block simulation. The earthquake’s epicenter is shown as a black dot in the black circle. <math display="inline"><semantics> <mrow> <mi>L</mi> </mrow> </semantics></math> is the size of the lattice, γ is the elastic parameter or conservation level [<a href="#B24-entropy-25-00816" class="html-bibr">24</a>].</p>
Figure 4
<p>Magnitude vs. <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>l</mi> <mi>o</mi> <mi>g</mi> </mrow> <mrow> <mn>3</mn> </mrow> </msub> </mrow> </semantics></math> (expected size) given by Equations (8)–(14).</p>
Figure 5
<p>Time series of synthetic earthquakes in the OFC spring-block model.</p>
Figure 6
<p>Time series of aftershock candidates.</p>
Figure 7
<p><math display="inline"><semantics> <mrow> <mi>X</mi> <mi>Y</mi> </mrow> </semantics></math> plot of all the events between two mainshocks. The dashed line represents the initially chosen area, while the solid line is the aftershock area calculated with Equation (16). Grey hexagons represent all the events that lie outside the rings; green diamonds represent the background noise; blue stars represent the aftershock candidates.</p>
Figure 8
<p>Aftershock candidates after removing background noise and faraway events.</p>
Figure 9
<p>Time series of aftershock candidates after removing faraway events and background noise.</p>
Figure 10
<p>Aftershock decay with time. The dotted red line represents the fitted decay over time, which follows <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mn>5270.2</mn> <msup> <mrow> <mi>t</mi> </mrow> <mrow> <mo>−</mo> <mn>0.578</mn> </mrow> </msup> </mrow> </semantics></math>.</p>
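A decay law of this Omori-type form, n(t) = C·t^(−p), is commonly estimated by linear least squares in log-log space. A brief sketch (the data below are synthetic, generated from the caption's fitted curve, not the paper's catalogue):

```python
import numpy as np

def fit_power_law(t, n):
    """Least-squares fit of n(t) = C * t**(-p) in log-log space:
    log n = log C - p * log t.  Returns (C, p)."""
    slope, intercept = np.polyfit(np.log(t), np.log(n), 1)
    return float(np.exp(intercept)), float(-slope)

# synthetic aftershock-rate data following the caption's fitted curve
t = np.arange(1.0, 200.0)
n = 5270.2 * t ** -0.578
C, p = fit_power_law(t, n)
```

For noisy real catalogues, the exponent p (here 0.578) is the quantity compared against Omori's law for natural aftershock sequences.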
Full article ">
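The fitted aftershock decay in Figure 10 follows a power law of the modified-Omori type, y = 5270.2·t^(−0.578). As an illustrative aside (not the authors' code), such an exponent can be recovered by linear least squares in log-log space; the coefficient values below are taken from the caption purely as a synthetic test case.

```python
import numpy as np

def fit_power_law(t, y):
    """Fit y = a * t**b by linear least squares in log-log space.

    Assumes t > 0 and y > 0. Returns (a, b).
    """
    b, log_a = np.polyfit(np.log(t), np.log(y), 1)
    return np.exp(log_a), b

# Synthetic aftershock-rate data with the same form as the fitted
# decay curve reported in Figure 10
t = np.arange(1.0, 101.0)
y = 5270.2 * t ** -0.578

a, b = fit_power_law(t, y)
print(a, b)  # recovers a ≈ 5270.2, b ≈ -0.578
```

On noiseless data the fit recovers the generating coefficients to machine precision; with real catalogs the same procedure gives the least-squares estimate of the decay exponent.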
13 pages, 1762 KiB  
Article
Self-Regulated Symmetry Breaking Model for Stem Cell Differentiation
by Madelynn McElroy, Kaylie Green and Nikolaos K. Voulgarakis
Entropy 2023, 25(5), 815; https://doi.org/10.3390/e25050815 - 18 May 2023
Cited by 1 | Viewed by 2041
Abstract
In conventional disorder–order phase transitions, a system shifts from a highly symmetric state, where all states are equally accessible (disorder), to a less symmetric state with a limited number of available states (order). This transition may occur by varying a control parameter that represents the intrinsic noise of the system. It has been suggested that stem cell differentiation can be considered a sequence of such symmetry-breaking events. Pluripotent stem cells, with their capacity to develop into any specialized cell type, are considered highly symmetric systems. In contrast, differentiated cells have lower symmetry, as they can only carry out a limited number of functions. For this hypothesis to be valid, differentiation should emerge collectively in stem cell populations. Additionally, such populations must have the ability to self-regulate intrinsic noise and navigate through a critical point where spontaneous symmetry breaking (differentiation) occurs. This study presents a mean-field model for stem cell populations that considers the interplay of cell–cell cooperativity, cell-to-cell variability, and finite-size effects. By introducing a feedback mechanism to control intrinsic noise, the model can self-tune through different bifurcation points, facilitating spontaneous symmetry breaking. Standard stability analysis showed that the system can potentially differentiate into several cell types, mathematically expressed as stable nodes and limit cycles. The existence of a Hopf bifurcation in our model is discussed in light of stem cell differentiation. Full article
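As a minimal illustration of the symmetry breaking described above (the canonical pitchfork normal form, not the paper's mean-field model), consider dm/dt = (ξc − ξ)m − m³, where ξ plays the role of the intrinsic-noise control parameter: above the critical noise only the symmetric state m = 0 exists, while below it two symmetry-broken branches appear.

```python
import math

def pitchfork_equilibria(xi, xi_c=1.0):
    """Equilibria of the canonical supercritical pitchfork normal form

        dm/dt = (xi_c - xi) * m - m**3,

    with xi as the intrinsic-noise control parameter. Above the critical
    point only the symmetric state m = 0 exists; below it, two
    symmetry-broken branches +/- sqrt(xi_c - xi) appear.
    """
    if xi >= xi_c:
        return [0.0]
    m = math.sqrt(xi_c - xi)
    return [-m, 0.0, m]

print(pitchfork_equilibria(1.5))   # [0.0] : disordered (symmetric) phase
print(pitchfork_equilibria(0.75))  # [-0.5, 0.0, 0.5] : broken symmetry
```

Lowering ξ through ξc thus splits the single symmetric equilibrium into two ordered states, the one-variable analogue of a second-order disorder–order transition.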
(This article belongs to the Collection Disorder and Biological Physics)
Show Figures

Figure 1
<p>First row: Potential function vs. <span class="html-italic">m</span> for (<b>a</b>) Model 1, (<b>b</b>) Model 2, and (<b>c</b>) Model 3. Black lines correspond to critical values of intrinsic noise. Second row: Bifurcation and phase transition diagrams for (<b>d</b>) Model 1, (<b>e</b>) Model 2, and (<b>f</b>) Model 3. Red lines correspond to stable (solid line) and unstable (dashed line) equilibrium points. Green lines indicate phase transitions. Blue arrows demonstrate hysteresis loops. Specifically: (<b>d</b>) A supercritical pitchfork bifurcation and a second-order phase transition occur at <math display="inline"><semantics> <msub> <mi>ξ</mi> <mi>c</mi> </msub> </semantics></math>. (<b>e</b>) A subcritical pitchfork and a double-saddle-node bifurcation occur at <math display="inline"><semantics> <msub> <mi>ξ</mi> <mi>c</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>ξ</mi> <mrow> <mi>c</mi> <mn>3</mn> </mrow> </msub> </semantics></math>, respectively. At <math display="inline"><semantics> <msub> <mi>ξ</mi> <msub> <mi>c</mi> <mn>1</mn> </msub> </msub> </semantics></math>, we have a first-order phase transition. (<b>f</b>) A transcritical and a single-saddle-node bifurcation occur at <math display="inline"><semantics> <msub> <mi>ξ</mi> <mi>c</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>ξ</mi> <mrow> <mi>c</mi> <mn>3</mn> </mrow> </msub> </semantics></math>, respectively. At <math display="inline"><semantics> <msub> <mi>ξ</mi> <msub> <mi>c</mi> <mn>2</mn> </msub> </msub> </semantics></math>, we observe a first-order phase transition.</p>
Full article ">Figure 2
<p>Model 1: (<b>a</b>,<b>b</b>) show representative trajectories for a stable spiral and a stable node, respectively. The corresponding probability density functions are displayed in (<b>c</b>,<b>d</b>), with light blue color indicating a high probability density. The parameter values used are <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>G</mi> <mo>=</mo> <mn>0.6</mn> </mrow> </semantics></math> in (<b>a</b>,<b>c</b>) and <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.4</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>G</mi> <mo>=</mo> <mn>0.07</mn> </mrow> </semantics></math> in (<b>b</b>,<b>d</b>). For (<b>c</b>,<b>d</b>), we set <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Demonstration of Hopf bifurcation in Model 2: Transition from (<b>a</b>) a stable limit cycle for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>&lt;</mo> <msub> <mi>M</mi> <mi>c</mi> </msub> </mrow> </semantics></math> to (<b>b</b>) a stable spiral for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>&gt;</mo> <msub> <mi>M</mi> <mi>c</mi> </msub> </mrow> </semantics></math>. The corresponding probability densities are displayed in (<b>c</b>,<b>d</b>), respectively, with <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>. (<b>e</b>,<b>f</b>) depict a limit cycle with an elevated amplitude. In (<b>f</b>), the blue and green thick lines represent two slow modes at <math display="inline"><semantics> <msub> <mi>m</mi> <mn>0</mn> </msub> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>≈</mo> <msub> <mi>m</mi> <mo>+</mo> </msub> </mrow> </semantics></math>, respectively. A single stochastic simulation for (<b>e</b>,<b>f</b>) is shown in (<b>g</b>,<b>h</b>), respectively, with <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>. The parameter values used are <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.68</mn> </mrow> </semantics></math> in (<b>a</b>,<b>c</b>); <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.71</mn> </mrow> </semantics></math> in (<b>b</b>,<b>d</b>); <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.65</mn> </mrow> </semantics></math> in (<b>e</b>,<b>g</b>); and <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> in (<b>f</b>,<b>h</b>). In (<b>a</b>–<b>h</b>), <math display="inline"><semantics> <mrow> <mi>G</mi> <mo>=</mo> <mn>0.05</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Hopf bifurcation in Model 3: Transition from (<b>a</b>) a stable limit cycle for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>&lt;</mo> <msub> <mi>M</mi> <mi>c</mi> </msub> </mrow> </semantics></math> to (<b>b</b>) a stable spiral for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>&gt;</mo> <msub> <mi>M</mi> <mi>c</mi> </msub> </mrow> </semantics></math>. The corresponding probability densities are shown in (<b>c</b>,<b>d</b>), respectively, with <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>. Subfigure (<b>e</b>) shows a limit cycle with elevated amplitude. The corresponding stochastic simulations for (<b>e</b>) are presented in (<b>f</b>) with <math display="inline"><semantics> <mrow> <mi>σ</mi> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> </mrow> </semantics></math>. The values of the parameters are <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.48</mn> </mrow> </semantics></math> in (<b>a</b>,<b>c</b>); <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.52</mn> </mrow> </semantics></math> in (<b>b</b>,<b>d</b>); and <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>0.35</mn> </mrow> </semantics></math> in (<b>e</b>,<b>f</b>). In all subfigures, <math display="inline"><semantics> <mrow> <mi>G</mi> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math>.</p>
Full article ">
15 pages, 648 KiB  
Article
Properties of Spherically Symmetric Black Holes in the Generalized Brans–Dicke Modified Gravitational Theory
by Mou Xu, Jianbo Lu, Shining Yang and Hongnan Jiang
Entropy 2023, 25(5), 814; https://doi.org/10.3390/e25050814 - 18 May 2023
Cited by 2 | Viewed by 1678
Abstract
The many problems faced by the theory of general relativity (GR) have always motivated us to explore modified theories of GR. Considering the importance of studying black hole (BH) entropy and its corrections in gravity physics, we study the correction of thermodynamic entropy for a kind of spherically symmetric black hole under the generalized Brans–Dicke (GBD) theory of modified gravity. We derive and calculate the entropy and heat capacity. It is found that when the value of the event horizon radius r+ is small, the effect of the entropy-correction term on the entropy is very obvious, while for larger values of r+, the contribution of the correction term to the entropy can be almost ignored. In addition, we can observe that as the radius of the event horizon increases, the heat capacity of the BH in GBD theory will change from a negative value to a positive value, indicating that there is a phase transition in black holes. Given that studying the structure of geodesic lines is important for exploring the physical characteristics of a strong gravitational field, we also investigate the stability of particles' circular orbits in static spherically symmetric BHs within the framework of GBD theory. Concretely, we analyze the dependence of the innermost stable circular orbit on model parameters. In addition, the geodesic deviation equation is also applied to investigate the stable circular orbits of particles in GBD theory. The conditions for the stability of the BH solution and the limited range of radial coordinates required to achieve stable circular orbit motion are given. Finally, we show the locations of stable circular orbits, and obtain the angular velocity, specific energy, and angular momentum of the particles moving in circular orbits. Full article
(This article belongs to the Special Issue Advances in Black Hole Thermodynamics)
Show Figures

Figure 1
<p>For different values of <math display="inline"><semantics> <msub> <mi>C</mi> <mn>2</mn> </msub> </semantics></math>, the mass of the spherically symmetric black hole as a function of the event horizon radius in the framework of the GBD modified theory.</p>
Full article ">Figure 2
<p>Black hole temperature (<b>upper</b>) and heat capacity (<b>lower</b>) as functions of the event horizon radius in GBD theory, where <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math> (or <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>15</mn> </mrow> </semantics></math>) has been taken in the <b>left</b> (or <b>right</b>) figure.</p>
Full article ">Figure 3
<p>Corrected black hole thermodynamic entropy as a function of the event horizon radius in GBD theory (the red curve denotes the variation of <span class="html-italic">S</span>, and the black curve denotes the variation of <math display="inline"><semantics> <msub> <mi>S</mi> <mn>0</mn> </msub> </semantics></math>), where <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>15</mn> </mrow> </semantics></math> (or <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math>) has been taken in the <b>left</b> (or <b>right</b>) figure.</p>
Full article ">Figure 4
<p>The radius of the ISCO as a function of the parameter <math display="inline"><semantics> <msub> <mi>C</mi> <mn>1</mn> </msub> </semantics></math> in GBD theory for the case of massive particles, where different values of <math display="inline"><semantics> <msub> <mi>C</mi> <mn>2</mn> </msub> </semantics></math> have been taken.</p>
Full article ">Figure 5
<p>Angular velocity as a function of the radius <span class="html-italic">r</span> in the framework of the GBD modified theory, where <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>15</mn> </mrow> </semantics></math> (or <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>1</mn> </msub> <mo>=</mo> <mo>−</mo> <mn>10</mn> </mrow> </semantics></math>) has been taken in the <b>left</b> (or <b>right</b>) figure.</p>
Full article ">Figure 6
<p>Specific energy (<b>upper</b>) and angular momentum (<b>lower</b>) as functions of the radius in GBD theory, where <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>15</mn> </mrow> </semantics></math> (or <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>1</mn> </msub> <mo>=</mo> <mo>−</mo> <mn>10</mn> </mrow> </semantics></math>) has been taken in the <b>left</b> (or <b>right</b>) figure.</p>
Full article ">Figure 7
<p><math display="inline"><semantics> <msup> <mi>ω</mi> <mn>2</mn> </msup> </semantics></math> as a function of the radius in the framework of the GBD modified theory, where <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>15</mn> </mrow> </semantics></math> (or <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>1</mn> </msub> <mo>=</mo> <mo>−</mo> <mn>10</mn> </mrow> </semantics></math>) has been taken in the <b>left</b> (or <b>right</b>) figure.</p>
Full article ">
12 pages, 325 KiB  
Article
Forward and Backward Recalling Sequences in Spatial and Verbal Memory Tasks: What Do We Measure?
by Jeanette Melin, Laura Göschel, Peter Hagell, Albert Westergren, Agnes Flöel and Leslie Pendrill
Entropy 2023, 25(5), 813; https://doi.org/10.3390/e25050813 - 18 May 2023
Cited by 2 | Viewed by 1931
Abstract
There are different views in the literature about the number and inter-relationships of cognitive domains (such as memory and executive function) and a lack of understanding of the cognitive processes underlying these domains. In previous publications, we demonstrated a methodology for formulating and testing cognitive constructs for visuo-spatial and verbal recall tasks, particularly for working memory task difficulty where entropy is found to play a major role. In the present paper, we applied those insights to a new set of such memory tasks, namely, backward recalling block tapping and digit sequences. Once again, we saw clear and strong entropy-based construct specification equations (CSEs) for task difficulty. In fact, the entropy contributions in the CSEs for the different tasks were of similar magnitudes (within the measurement uncertainties), which may indicate a shared factor in what is being measured with both forward and backward sequences, as well as visuo-spatial and verbal memory recalling tasks more generally. On the other hand, the analyses of dimensionality and the larger measurement uncertainties in the CSEs for the backward sequences suggest that caution is needed when attempting to unify a single unidimensional construct based on forward and backward sequences with visuo-spatial and verbal memory tasks. Full article
(This article belongs to the Special Issue Applications of Entropy in Health Care)
12 pages, 6428 KiB  
Article
Evolutionary Method of Heterogeneous Combat Network Based on Link Prediction
by Shaoming Qiu, Fen Chen, Yahui Wang and Jiancheng Zhao
Entropy 2023, 25(5), 812; https://doi.org/10.3390/e25050812 - 17 May 2023
Viewed by 1538
Abstract
Currently, research on the evolution of heterogeneous combat networks (HCNs) mainly focuses on the modeling process, with little attention paid to the impact of changes in network topology on operational capabilities. Link prediction can provide a fair and unified comparison standard for network evolution mechanisms. This paper uses link prediction methods to study the evolution of HCNs. Firstly, according to the characteristics of HCNs, a link prediction index based on frequent subgraphs (LPFS) is proposed. LPFS has been demonstrated, on a real combat network, to be superior to 26 baseline methods. The main driving force of research on evolution is to improve the operational capabilities of combat networks. With the same number of nodes and edges added, 100 iterative experiments demonstrate that the evolutionary method (HCNE) proposed in this paper outperforms random evolution and preferential evolution in improving the operational capabilities of combat networks. Furthermore, the new network generated after evolution is more consistent with the characteristics of a real network. Full article
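The LPFS index itself is defined in the paper; as a generic illustration of how link-prediction indices are compared (not the paper's code), the sketch below scores the classic common-neighbors baseline with the standard AUC sampling procedure, where a held-out true edge should outscore a random nonexistent edge. All graph data here are toy values.

```python
import random

def common_neighbors(adj, u, v):
    """Classic common-neighbors link-prediction score."""
    return len(adj[u] & adj[v])

def link_auc(adj, test_edges, non_edges, score, trials=1000, seed=0):
    """Standard link-prediction AUC: the probability that a held-out
    true edge scores higher than a random nonexistent edge (ties 0.5)."""
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(trials):
        u, v = rng.choice(test_edges)
        x, y = rng.choice(non_edges)
        s_pos, s_neg = score(adj, u, v), score(adj, x, y)
        hits += 1.0 if s_pos > s_neg else (0.5 if s_pos == s_neg else 0.0)
    return hits / trials

# Toy network: a dense cluster {0,1,2,3} with edge (0,1) held out,
# plus a weakly connected pair {4,5}
adj = {0: {2, 3}, 1: {2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {5}, 5: {4}}
print(link_auc(adj, [(0, 1)], [(0, 4), (1, 5), (2, 4)], common_neighbors))  # 1.0
```

Any candidate index (such as a frequent-subgraph score) can be dropped in for `score`, which is what makes AUC a unified comparison standard across evolution mechanisms.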
(This article belongs to the Topic Complex Systems and Network Science)
Show Figures

Figure 1
<p>Heterogeneous combat network (HCN). Red represents search nodes, green represents decision nodes, and blue represents influence nodes.</p>
Full article ">Figure 2
<p>Example of frequent subgraphs with one edge apart.</p>
Full article ">Figure 3
<p>AUC value changes of each link prediction algorithm with different test set proportions. The x-axis represents the proportion of the test set, the y-axis represents the AUC, and each line represents the AUC change of an index at different proportions of the test set.</p>
Full article ">Figure 4
<p>The effect of min_sup and max_size on AUC (when the training set ratio is 10%).</p>
Full article ">Figure 5
<p>Combat network capability changes using different evolutionary methods. The x-axis represents the evolution steps (adding a new connection or a new node at each step), and the y-axis represents operational capabilities.</p>
Full article ">Figure 6
<p>Combat network degree distribution. The x-axis represents the node degree, and the y-axis represents the number of nodes.</p>
Full article ">
25 pages, 4135 KiB  
Article
A Secure Scheme Based on a Hybrid of Classical-Quantum Communications Protocols for Managing Classical Blockchains
by Ang Liu, Xiu-Bo Chen, Shengwei Xu, Zhuo Wang, Zhengyang Li, Liwei Xu, Yanshuo Zhang and Ying Chen
Entropy 2023, 25(5), 811; https://doi.org/10.3390/e25050811 - 17 May 2023
Cited by 7 | Viewed by 2274
Abstract
Blockchain technology affords data integrity protection and builds trust mechanisms in transactions for distributed networks, and, therefore, is seen as a promising revolutionary information technology. At the same time, ongoing breakthroughs in quantum computation are leading toward large-scale quantum computers, which might attack classical cryptography and thus seriously threaten the cryptographic security currently employed in blockchains. As a better alternative, a quantum blockchain is expected to be immune to quantum computing attacks perpetrated by quantum adversaries. Although several works have been presented, the problems of impracticality and inefficiency in quantum blockchain systems remain prominent and need to be addressed. First, this paper develops a quantum-secure blockchain (QSB) scheme by introducing a consensus mechanism, quantum proof of authority (QPoA), and an identity-based quantum signature (IQS), wherein QPoA is used for new block generation and IQS is used for transaction signing and verification. Second, QPoA is developed by adopting a quantum voting protocol to achieve secure and efficient decentralization of the blockchain system, and a quantum random number generator (QRNG) is deployed for randomized leader node election to protect the blockchain system from centralized attacks such as distributed denial of service (DDoS). Compared to previous work, our scheme is more practical and efficient without sacrificing security, greatly contributing to better addressing the challenges of the quantum era. Extensive security analysis demonstrates that our scheme provides better protection against quantum computing attacks than classic blockchains. Overall, our scheme presents a feasible solution for securing blockchain systems against quantum computing attacks, contributing toward quantum-secured blockchains in the quantum era. Full article
Show Figures

Figure 1
<p>Structure of QSB.</p>
Full article ">Figure 2
<p>A transaction process.</p>
Full article ">Figure 3
<p>Content of LV array.</p>
Full article ">Figure 4
<p>The process of the leader node election in one consensus.</p>
Full article ">Figure 5
<p>One consensus round in QPoA.</p>
Full article ">Figure 6
<p>The hybrid network diagram of the multi-party QKD protocol.</p>
Full article ">Figure 7
<p>The quantum circuit of quantum voting.</p>
Full article ">Figure 8
<p>The quantum circuit of the counting phase in quantum voting.</p>
Full article ">Figure 9
<p>Updating process of the validating nodes in QPoA.</p>
Full article ">Figure 10
<p>Structure of QS.</p>
Full article ">Figure 11
<p>Process of IQS.</p>
Full article ">Figure 12
<p>Digital document exchange based on QSB.</p>
Full article ">
18 pages, 15138 KiB  
Article
Wasserstein Distance-Based Deep Leakage from Gradients
by Zifan Wang, Changgen Peng, Xing He and Weijie Tan
Entropy 2023, 25(5), 810; https://doi.org/10.3390/e25050810 - 17 May 2023
Cited by 2 | Viewed by 2510
Abstract
Federated learning protects the privacy of information in the data set by sharing only the average gradient. However, the “Deep Leakage from Gradients” (DLG) algorithm, a gradient-based feature reconstruction attack, can recover private training data from the gradients shared in federated learning, resulting in private information leakage. The algorithm also has the disadvantages of slow model convergence and poor accuracy of the inversely generated images. To address these issues, a Wasserstein distance-based DLG method, named WDLG, is proposed. The WDLG method uses the Wasserstein distance as the training loss function to improve the quality of the inverse images and the convergence of the model. The Wasserstein distance, which is hard to calculate directly, is computed iteratively by using the Lipschitz condition and the Kantorovich–Rubinstein duality. Theoretical analysis proves the differentiability and continuity of the Wasserstein distance. Finally, experimental results show that the WDLG algorithm is superior to DLG in training speed and inversion image quality. At the same time, we show through experiments that differential privacy can be used for disturbance protection, which provides some ideas for developing a deep learning framework that protects privacy. Full article
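The paper approximates the Wasserstein distance through the Kantorovich–Rubinstein duality with a Lipschitz-constrained critic. As a small self-contained illustration of the quantity itself (not the paper's iterative scheme), the 1-D Wasserstein-1 distance between equal-size empirical samples has a closed form via sorted samples:

```python
import numpy as np

def wasserstein1_1d(x, y):
    """W1 distance between two equal-size empirical 1-D distributions.

    In 1-D the Kantorovich optimal-transport cost has a closed form:
    the mean absolute difference between the sorted samples.
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "equal sample sizes assumed here"
    return float(np.mean(np.abs(x - y)))

# Shifting a distribution by c moves it by exactly c in W1:
a = np.array([0.0, 1.0, 2.0, 3.0])
print(wasserstein1_1d(a, a + 0.5))  # 0.5
```

Unlike KL-type divergences, W1 varies smoothly under such shifts even when the supports do not overlap, which is the property the duality-based loss exploits for stable convergence.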
(This article belongs to the Special Issue Information Theory for Interpretable Machine Learning)
Show Figures

Figure 1
<p>Inversion of original private training data via shared gradients.</p>
Full article ">Figure 2
<p>Overview of WDLG algorithm.</p>
Full article ">Figure 3
<p>On the LeNet network model, the private data (in part) of the four data sets SVHN, MNIST, Fashion MNIST, and CIFAR-100 are completely restored by the WDLG algorithm.</p>
Full article ">Figure 4
<p>On the CNN6 network model, the private data (in part) of the four data sets SVHN, MNIST, Fashion MNIST, and CIFAR-100 are completely restored by the WDLG algorithm.</p>
Full article ">Figure 5
<p>Training loss comparison.</p>
Full article ">Figure 6
<p>Fidelity comparison of reconstructed image.</p>
Full article ">Figure 7
<p>First loss calculation comparison.</p>
Full article ">Figure 8
<p>Image inversion recovery results of WDLG trained for 448 iterations on one batch of the CIFAR-10 data set.</p>
Full article ">Figure 9
<p>Image inversion recovery maps of WDLG trained for 448 iterations under 4 batches in CIFAR-10.</p>
Full article ">Figure 10
<p>Image inversion recovery maps of DLG trained for 448 iterations under 4 batches in CIFAR-10.</p>
Full article ">Figure 11
<p><math display="inline"><semantics> <mrow> <mi mathvariant="bold-sans-serif">σ</mi> </mrow> </semantics></math> = 10: gradient inversion defense effect on FashionMNIST and SVHN.</p>
Full article ">Figure 12
<p><math display="inline"><semantics> <mrow> <mi mathvariant="bold-sans-serif">σ</mi> </mrow> </semantics></math> = 2, gradient inversion defense effect diagram under CIFAR-100.</p>
Full article ">Figure 13
<p><math display="inline"><semantics> <mrow> <mi mathvariant="bold-sans-serif">σ</mi> </mrow> </semantics></math> = 4, gradient inversion defense effect map under CIFAR-100.</p>
Full article ">Figure 14
<p><math display="inline"><semantics> <mrow> <mi mathvariant="bold-sans-serif">σ</mi> </mrow> </semantics></math> = 10, gradient inversion defense effect diagram under CIFAR-100.</p>
Full article ">
14 pages, 2498 KiB  
Article
Subdomain Adaptation Capsule Network for Partial Discharge Diagnosis in Gas-Insulated Switchgear
by Yanze Wu, Jing Yan, Zhuofan Xu, Guoqing Sui, Meirong Qi, Yingsan Geng and Jianhua Wang
Entropy 2023, 25(5), 809; https://doi.org/10.3390/e25050809 - 17 May 2023
Cited by 4 | Viewed by 1585
Abstract
Deep learning methods, especially convolutional neural networks (CNNs), have achieved good results in the partial discharge (PD) diagnosis of gas-insulated switchgear (GIS) in the laboratory. However, the feature relationships ignored by CNNs and the heavy dependence on the amount of sample data make it difficult for a model developed in the laboratory to achieve high-precision, robust diagnosis of PD in the field. To solve these problems, a subdomain adaptation capsule network (SACN) is adopted for PD diagnosis in GIS. First, the feature information is effectively extracted by using a capsule network, which improves feature representation. Then, subdomain adaptation transfer learning is used to achieve high diagnostic performance on the field data, alleviating the confusion between different subdomains and matching the local distributions at the subdomain level. Experimental results demonstrate that the accuracy of the SACN in this study reaches 93.75% on the field data. The SACN performs better than traditional deep learning methods, indicating that the SACN has potential application value in the PD diagnosis of GIS. Full article
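The subdomain adaptation in SACN matches local distributions with an LMMD-style criterion (Figure 5 also compares against plain MMD). As a hedged sketch of the underlying discrepancy measure only, the snippet below computes a biased estimate of squared MMD with a Gaussian kernel; the paper's exact class-local weighting is not reproduced here.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    return float(gaussian_kernel(x, x, sigma).mean()
                 + gaussian_kernel(y, y, sigma).mean()
                 - 2.0 * gaussian_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
x = rng.standard_normal((100, 2))
print(mmd2(x, x) < 1e-12)      # True: identical samples, zero discrepancy
print(mmd2(x, x + 3.0) > 0.1)  # True: shifted samples, clear discrepancy
```

Minimizing such a discrepancy between laboratory (source) and field (target) features is what aligns the two domains; LMMD additionally restricts the comparison to samples of the same (pseudo-)class.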
Show Figures

Figure 1
<p>Dynamic routing algorithm.</p>
Full article ">Figure 2
<p>Structure of SACN.</p>
Full article ">Figure 3
<p>Experimental wiring schematic.</p>
Full article ">Figure 4
<p>Waveform diagrams of four kinds of defects.</p>
Full article ">Figure 5
<p>(<b>a</b>) Confusion matrix of CapsNet; (<b>b</b>) Confusion matrix of CapsNet with MMD domain adaptation; (<b>c</b>) Confusion matrix of CapsNet with LMMD subdomain adaptation; (<b>d</b>) Confusion matrix of CapsNet with ALMMD subdomain adaptation.</p>
Full article ">Figure 6
<p>t-SNE results of different domain adaptation methods.</p>
Full article ">
19 pages, 7368 KiB  
Article
MSIA-Net: A Lightweight Infrared Target Detection Network with Efficient Information Fusion
by Jimin Yu, Shun Li, Shangbo Zhou and Hui Wang
Entropy 2023, 25(5), 808; https://doi.org/10.3390/e25050808 - 17 May 2023
Cited by 7 | Viewed by 1947
Abstract
In order to solve the problems of infrared target detection (i.e., large model sizes and numerous parameters), a lightweight detection network, MSIA-Net, is proposed. Firstly, a feature extraction module named MSIA, which is based on asymmetric convolution, is proposed; it can greatly reduce the number of parameters and improve detection performance by reusing information. In addition, we propose a down-sampling module named DPP to reduce the information loss caused by pooling down-sampling. Finally, we propose a feature fusion structure named LIR-FPN that can shorten the information transmission path and effectively reduce the noise in the process of feature fusion. In order to improve the ability of the network to focus on the target, we introduce coordinate attention (CA) into the LIR-FPN; this integrates the location information of the target into the channel so as to obtain more expressive feature information. Finally, a comparative experiment with other SOTA methods was completed on the FLIR on-board infrared image dataset, which proved the powerful detection performance of MSIA-Net. Full article
(This article belongs to the Special Issue Entropy in Soft Computing and Machine Learning Algorithms II)
Show Figures

Figure 1

Figure 1
<p>Structure of As-Conv. (<b>a</b>) Diagram of enhancement of asymmetric convolution effect; (<b>b</b>) Schematic diagram of As-Conv.</p>
Full article ">Figure 2
<p>Diagram of the structure of the MSIA module.</p>
Full article ">Figure 3
<p>Structure of the DPP module.</p>
Full article ">Figure 4
<p>Structure of the SPPF module.</p>
Full article ">Figure 5
<p>Information compensation branch.</p>
Full article ">Figure 6
<p>Structure of CA (different colors represent different weights).</p>
Full article ">Figure 7
<p>Several FPN diagrams. (<b>a</b>) FPN; (<b>b</b>) PA-net; (<b>c</b>) LIR-FPN; (<b>d</b>) PCM.</p>
Full article ">Figure 8
<p>Network structure diagram for MSIA-Net. The numbers in parentheses are the numbers of modules.</p>
Full article ">Figure 9
<p>Bar diagram of the target instances.</p>
Full article ">Figure 10
<p>The Mosaic data augmentation method.</p>
Full article ">Figure 11
<p>Anchor boxes optimization diagram.</p>
Full article ">Figure 12
<p>Decay curve of the learning rate.</p>
Full article ">Figure 13
<p>Visual detection results of the proposed MSIA-Net. (<b>a</b>) Infrared images and their labels; (<b>b</b>) model detection results. (The green triangle is the false detection result of the network.)</p>
Full article ">Figure 14
<p>Curve of the training results. (<b>a</b>) Training and validation of various loss curves; (<b>b</b>) Change curve of evaluation index.</p>
Full article ">Figure 14 Cont.
<p>Curve of the training results. (<b>a</b>) Training and validation of various loss curves; (<b>b</b>) Change curve of evaluation index.</p>
Full article ">Figure 15
<p>(<b>a</b>) P–R curves; (<b>b</b>) F1 score curves; the blue line is the average of all classes.</p>
Full article ">Figure 16
<p>Detection results of different models. GT is the label for the target. Rows 2–5 show the test results of the SSD, Yolov3-tiny, Yolov5s, Yolov5, Yolov7-tiny, and MSIA-Net models, respectively.</p>
Full article ">Figure 16 Cont.
<p>Detection results of different models. GT is the label for the target. Rows 2–5 show the test results of the SSD, Yolov3-tiny, Yolov5s, Yolov5, Yolov7-tiny, and MSIA-Net models, respectively.</p>
Full article ">Figure 17
<p>Comparison of test results of PAN and LIR-FPN structures.</p>
Full article ">
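A quick way to see why asymmetric convolution of the kind used in As-Conv "greatly reduces the number of parameters": factorizing a k × k kernel into a 1 × k plus a k × 1 kernel replaces k² weights per input-output channel pair with 2k. The sketch below only illustrates this parameter arithmetic; the channel sizes are made up for illustration and are not the paper's As-Conv configuration.

```python
def conv_params(c_in, c_out, kh, kw, bias=True):
    """Parameter count of a single 2-D convolution layer."""
    return c_in * c_out * kh * kw + (c_out if bias else 0)

# A standard 3x3 convolution vs. an asymmetric 1x3 + 3x1 pair
# (illustrative channel sizes, not the paper's configuration).
c_in, c_out, k = 64, 64, 3
square = conv_params(c_in, c_out, k, k, bias=False)
asym = (conv_params(c_in, c_out, 1, k, bias=False)
        + conv_params(c_out, c_out, k, 1, bias=False))
print(square, asym)  # 36864 24576: the pair needs 2/3 of the weights
```

The saving ratio 2k/k² = 2/k grows with kernel size, which is one reason factorized kernels recur in lightweight detectors.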
15 pages, 2121 KiB  
Article
Inferring a Causal Relationship between Environmental Factors and Respiratory Infections Using Convergent Cross-Mapping
by Daipeng Chen, Xiaodan Sun and Robert A. Cheke
Entropy 2023, 25(5), 807; https://doi.org/10.3390/e25050807 - 17 May 2023
Cited by 5 | Viewed by 2814
Abstract
The incidence of respiratory infections in the population is related to many factors, among which environmental factors such as air quality, temperature, and humidity have attracted much attention. In particular, air pollution has caused widespread discomfort and concern in developing countries. Although the [...] Read more.
The incidence of respiratory infections in the population is related to many factors, among which environmental factors such as air quality, temperature, and humidity have attracted much attention. In particular, air pollution has caused widespread discomfort and concern in developing countries. Although the correlation between respiratory infections and air pollution is well known, establishing causality between them remains elusive. In this study, through theoretical analysis, we updated the procedure for applying extended convergent cross-mapping (CCM, a method of causal inference) to infer causality between periodic variables. We then validated this new procedure on synthetic data generated by a mathematical model. For real data from Shaanxi province, China, covering 1 January 2010 to 15 November 2016, we first confirmed that the refined method is applicable by investigating the periodicity of influenza-like illness cases, an air quality index, temperature, and humidity through wavelet analysis. We then showed that air quality (quantified by the AQI), temperature, and humidity affect daily influenza-like illness cases; in particular, respiratory infection cases increased progressively with increasing AQI, with a time delay of 11 days. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
Show Figures

Figure 1. The time series of influenza-like illness (ILI) cases and environmental factors in Xi'an. (a) ILI cases collected from seven hospitals; (b) the real-time air quality index (AQI); (c) the lowest daily temperature; (d) the relative humidity.
Figure 2. Numerical validation of the theoretical results. (a) An environmental-factor (F)-embedded susceptible-infectious-susceptible (SIS) epidemic model in which the environmental factor is periodic and affects disease incidence with a time delay of 1; (b) CCM performance and CCM skill as a function of the length of the time series used to reconstruct the high-dimensional manifold; (c) extended CCM performance and CCM skill as a function of the tested time delay, with the reconstruction length fixed.
Figure 3. Correlation analysis and wavelet analysis of the real data in Xi'an. (a) Correlations and statistical significance among the observed series of ILI cases, AQI, lowest daily temperature, and relative humidity (p-values < 0.01, **); (b) the correlation network among the four variables; (c) wavelet analysis of the four series, with wavelet power spectra on the left and mean spectra (vertical solid black line) with their significance threshold of 0.05 (blue dashed line) on the right.
Figure 4. Causal evidence among the four variables (ILI, AQI, temperature (Tem.), and relative humidity (Rhu.)). (a–c) CCM skills between the variables as a function of the tested cross-map lag; the negative optimal cross-map lag estimates the interaction delay, e.g., 11 days for air pollution driving ILI cases. (d) The estimated causal network; a positive sign on an arrow means that a higher drive variable promotes the response variable, and a negative sign means that it inhibits it.
Figure 5. (a–c) Causal evidence between the environmental factors. (d) The manifold reconstructed from the temperature time series; consistent with the theoretical analysis, the point density in the reconstructed manifold varies periodically (the blue and red points are the 6 nearest neighbors of selected points).
Figure A1. Results of Kalman filtering. (a) The ILI, AQI, lowest daily temperature, and relative humidity data, their low-frequency components, and the original (high-frequency) series; (b) the time series and distribution of the standardized residuals, calculated by subtracting the low-frequency component from the original series.
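The core CCM machinery the abstract builds on, reconstructing a shadow manifold from one series and cross-mapping the other, can be sketched in a few lines. This is a generic Sugihara-style CCM skill estimate on a toy pair of unidirectionally coupled logistic maps, not the authors' extended procedure for periodic variables; the map parameters and embedding settings are illustrative.

```python
import numpy as np

def embed(x, E, tau):
    """Shadow manifold: row t is (x_t, x_{t-tau}, ..., x_{t-(E-1)tau})."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[(E - 1) * tau - i * tau : (E - 1) * tau - i * tau + n]
                            for i in range(E)])

def ccm_skill(cause, effect, E=3, tau=1):
    """Cross-map `cause` from the shadow manifold of `effect`; a high
    correlation between prediction and truth indicates cause -> effect."""
    M = embed(effect, E, tau)
    target = cause[(E - 1) * tau:]
    k = E + 1                              # simplex: E+1 nearest neighbours
    pred = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                      # exclude the query point itself
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        pred[i] = np.dot(w, target[nn]) / w.sum()
    return np.corrcoef(pred, target)[0, 1]

# Unidirectionally coupled logistic maps: x drives y (a toy stand-in for
# "environment drives infections"; not the paper's model or data).
N = 600
x, y = np.empty(N), np.empty(N)
x[0], y[0] = 0.4, 0.2
for t in range(N - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])
x, y = x[100:], y[100:]                    # drop the transient

print(ccm_skill(x, y), ccm_skill(y, x))    # x -> y skill should dominate
```

Because x drives y, the shadow manifold of y encodes x, so cross-mapping x from M_y yields a markedly higher skill than the reverse direction. The extended CCM in the paper additionally scans a cross-map lag, which is how interaction delays such as the 11-day AQI effect are estimated.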
59 pages, 15006 KiB  
Article
Causality Analysis with Information Geometry: A Comparison
by Heng Jie Choong, Eun-jin Kim and Fei He
Entropy 2023, 25(5), 806; https://doi.org/10.3390/e25050806 - 16 May 2023
Cited by 3 | Viewed by 2482
Abstract
The quantification of causality is vital for understanding various important phenomena in nature and laboratories, such as brain networks, environmental dynamics, and pathologies. The two most widely used methods for measuring causality are Granger Causality (GC) and Transfer Entropy (TE), which rely on [...] Read more.
The quantification of causality is vital for understanding various important phenomena in nature and laboratories, such as brain networks, environmental dynamics, and pathologies. The two most widely used methods for measuring causality are Granger Causality (GC) and Transfer Entropy (TE), which rely on measuring the improvement in the prediction of one process based on the knowledge of another process at an earlier time. However, they have their own limitations, e.g., in applications to nonlinear, non-stationary data, or non-parametric models. In this study, we propose an alternative approach to quantify causality through information geometry that overcomes such limitations. Specifically, based on the information rate that measures the rate of change of the time-dependent distribution, we develop a model-free approach called information rate causality that captures the occurrence of causality through the change in the distribution of one process caused by another. This measurement is suitable for analyzing numerically generated non-stationary, nonlinear data. The latter are generated by simulating different types of discrete autoregressive models which contain linear and nonlinear interactions in unidirectional and bidirectional time-series signals. Our results show that information rate causality can capture the coupling of both linear and nonlinear data better than GC and TE in the several examples explored in the paper. Full article
(This article belongs to the Special Issue Causality and Complex Systems)
Show Figures

Figure 1

Figure 1
<p>The procedure of implementing the causality analyses used in this paper, each colour of the lines in the image represents a different simulation. In this study, each window contains 0.5 s of data points and overlaps with the previous window by 0.25 s. The causality analyses are conducted within each window. (<b>a</b>) illustrates the components (<math display="inline"><semantics> <mrow> <mi>S</mi> <mo>(</mo> <mo>…</mo> <mo>)</mo> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mo>…</mo> <mo>)</mo> </mrow> </semantics></math>) calculated within the windows for the non-parametric GC analysis. Refer to <a href="#sec2dot1-entropy-25-00806" class="html-sec">Section 2.1</a> to know the corresponding components. (<b>b</b>) shows the estimation of distributions <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>(</mo> <mo>…</mo> <mo>)</mo> </mrow> </semantics></math> for TE evaluation. The distributions <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>(</mo> <mo>…</mo> <mo>)</mo> </mrow> </semantics></math> are estimated based on the samples <math display="inline"><semantics> <msubsup> <mi>x</mi> <mi>n</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> </semantics></math>, <math display="inline"><semantics> <msub> <mi>x</mi> <mrow> <mi>n</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math>, and <math display="inline"><semantics> <msubsup> <mi>y</mi> <mi>n</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </msubsup> </semantics></math>. Refer to <a href="#sec2dot2-entropy-25-00806" class="html-sec">Section 2.2</a> for the definition of TE. (<b>c</b>) demonstrates the evaluation of information rate causality. Each window (labelled as 1st window) is divided into two windows with the first half labelled as 2nd window. 
The distribution <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>(</mo> <mi>x</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>,</mo> <mi>y</mi> <mrow> <mo>(</mo> <msub> <mi>t</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> <mo>)</mo> </mrow> </semantics></math> estimates the evolution of distribution <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </semantics></math> while <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>(</mo> <msub> <mi>t</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </semantics></math> is fixed at the 2nd window. Refer to <a href="#sec2dot3-entropy-25-00806" class="html-sec">Section 2.3</a> for the definition of information rate causality.</p>
Full article ">Figure 2
<p>Model of the flow of information of the processes <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> for Equations (24) and (25). In this paper, the equations are simulated with the physical time of 25 s and sampling frequency of 200 Hz (5000 realizations) with either large noise or small noise. The coupling between the processes occurs at physical time of 10 s.</p>
Full article ">Figure 3
<p>Result of the simulation based on Equations (24) and (25) (refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for graphical explanation of the process). The [top left, red] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and [bottom, green] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [right blue] is the phase space of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. Note that the coupling of the processes occurs at 10 s.</p>
Full article ">Figure 3 Cont.
<p>Result of the simulation based on Equations (24) and (25) (refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for graphical explanation of the process). The [top left, red] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and [bottom, green] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [right blue] is the phase space of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. Note that the coupling of the processes occurs at 10 s.</p>
Full article ">Figure 4
<p>Result of the spectral and time-varying frequency of GC. Note that for each subfigure, the [left] shows the spectral (frequency domain) of GC, [blue] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while [orange] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [top, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>; [bottom, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The results shown are based on the processes in Equations (24) and (25). Refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for the graphical explanation of the process.</p>
Full article ">Figure 4 Cont.
<p>Result of the spectral and time-varying frequency of GC. Note that for each subfigure, the [left] shows the spectral (frequency domain) of GC, [blue] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while [orange] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [top, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>; [bottom, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The results shown are based on the processes in Equations (24) and (25). Refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for the graphical explanation of the process.</p>
Full article ">Figure 4 Cont.
<p>Result of the spectral and time-varying frequency of GC. Note that for each subfigure, the [left] shows the spectral (frequency domain) of GC, [blue] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while [orange] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [top, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>; [bottom, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The results shown are based on the processes in Equations (24) and (25). Refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for the graphical explanation of the process.</p>
Full article ">Figure 5
<p>Result of the net TE and window sliding TE. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals; while the [right] shows the window sliding TE. The results shown are based on the processes in Equations (24) and (25). Refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for the graphical explanation of the process.</p>
Full article ">Figure 5 Cont.
<p>Result of the net TE and window sliding TE. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals; while the [right] shows the window sliding TE. The results shown are based on the processes in Equations (24) and (25). Refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for the graphical explanation of the process.</p>
Full article ">Figure 5 Cont.
<p>Result of the net TE and window sliding TE. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals; while the [right] shows the window sliding TE. The results shown are based on the processes in Equations (24) and (25). Refer to <a href="#entropy-25-00806-f002" class="html-fig">Figure 2</a> for the graphical explanation of the process.</p>
Full article ">Figure 6
<p>Failure of TE in capturing the causality between the signals <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> for [linear: couple] (refer to Equation (24)) when lag 9 is used for evaluation, as compared to <a href="#entropy-25-00806-f005" class="html-fig">Figure 5</a>a, where the TE accurately shows that the coupling occurs after 10 s.</p>
Full article ">Figure 7
<p>Result of information rate causality for Equations (24) and (25).</p>
Full article ">Figure 7 Cont.
<p>Result of information rate causality for Equations (24) and (25).</p>
Full article ">Figure 8
<p>Zoom-in of <a href="#entropy-25-00806-f007" class="html-fig">Figure 7</a> for the information rate causality between 9.6 s and 10.4 s. Note that the change in the causality occurs at 10 s.</p>
Full article ">Figure 8 Cont.
<p>Zoom-in of <a href="#entropy-25-00806-f007" class="html-fig">Figure 7</a> for the information rate causality between 9.6 s and 10.4 s. Note that the change in the causality occurs at 10 s.</p>
Full article ">Figure 9
<p>The information rate causality of Equation (26) between 9.6 s and 10.4 s where the causality occurs at 10 s.</p>
Full article ">Figure 10
<p>Model of the flow of information of the processes <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> for Equations (27) and (28). In this paper, the equations are simulated with the physical time of 25 s and sampling frequency of 200 Hertz (5000 realizations) with either large noise or small noise. Note that the process <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> coupling with <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> occurs before 10 s and the interchange of coupling occurs after 10 s. In the context of noncoupling, it is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Result of simulation based on Equations (27) and (28) (refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for graphical explanation of the process). The [top left, red] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and [bottom, green] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [right, blue] is the phase space of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. Note that in the context of noncoupling, it is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11 Cont.
<p>Result of simulation based on Equations (27) and (28) (refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for graphical explanation of the process). The [top left, red] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and [bottom, green] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [right, blue] is the phase space of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. Note that in the context of noncoupling, it is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 12
<p>Result of the spectral and time-varying frequency of GC. Note that for each subfigure, the [left] shows the spectral (frequency domain) of GC, [blue] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while [orange] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [top, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>; [bottom, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The results shown are based on the processes in Equations (27) and (28). Refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for the graphical explanation of the process. Note that the noncoupling is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 12 Cont.
<p>Result of the spectral and time-varying frequency of GC. Note that for each subfigure, the [left] shows the spectral (frequency domain) of GC, [blue] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while [orange] indicates <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [top, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>; [bottom, right] shows the time-varying frequency of GC of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The results shown are based on the processes in Equations (27) and (28). Refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for the graphical explanation of the process. Note that the noncoupling is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 13
<p>Result of the net TE and window sliding TE with the lag of 0. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals, while the [right] shows the window sliding TE. The results shown are based on the processes in Equations (27) and (28). Refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for the graphical explanation of the process. Note that the noncoupling is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 13 Cont.
<p>Result of the net TE and window sliding TE with the lag of 0. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals, while the [right] shows the window sliding TE. The results shown are based on the processes in Equations (27) and (28). Refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for the graphical explanation of the process. Note that the noncoupling is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 14
<p>Result of the net TE and window sliding TE with the lag of 1. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals, while the [right] shows the window sliding TE. The results shown are based on the processes in Equations (27) and (28). Refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for the graphical explanation of the process. Note that the noncoupling is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 14 Cont.
<p>Result of the net TE and window sliding TE with the lag of 1. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals, while the [right] shows the window sliding TE. The results shown are based on the processes in Equations (27) and (28). Refer to <a href="#entropy-25-00806-f010" class="html-fig">Figure 10</a> for the graphical explanation of the process. Note that the noncoupling is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 15
<p>Result of information rate causality for Equations (27) and (28). Note that the interchange of the causality occurs at 10 s.</p>
Full article ">Figure 15 Cont.
<p>Result of information rate causality for Equations (27) and (28). Note that the interchange of the causality occurs at 10 s.</p>
Full article ">Figure 16
<p>Zoom-in of <a href="#entropy-25-00806-f015" class="html-fig">Figure 15</a> for the information rate causality between 9.6 s and 10.4 s. Note that the interchange of the causality occurs at 10 s.</p>
Full article ">Figure 16 Cont.
<p>Zoom-in of <a href="#entropy-25-00806-f015" class="html-fig">Figure 15</a> for the information rate causality between 9.6 s and 10.4 s. Note that the interchange of the causality occurs at 10 s.</p>
Full article ">Figure 17
<p>Model of the flow of information of the processes <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> for Equations (29) and (30). In this paper, the equations are simulated with physical time of 25 s and sampling frequency of 200 Hz (5000 realizations) with either large noise or small noise. Note that the <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> coupled with <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> throughout the process and the occurrence of bidirectional causality happens after 10 s. In the context of noncoupling, it is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 18
<p>Result of simulation based on Equations (29) and (30) (refer to <a href="#entropy-25-00806-f017" class="html-fig">Figure 17</a> for graphical explanation of the process). The [top left, red] is the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>; [bottom, green] shows the result of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>; [right, blue] shows the phase space of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. Note that the noncoupling in the context is referring to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 19
<p>Result of the spectral and time-varying frequency GC. Within each subfigure, the [left] panel shows the spectral (frequency-domain) GC, with [blue] indicating that <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> (<math display="inline"><semantics> <msub> <mi>I</mi> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>→</mo> <msub> <mi>x</mi> <mn>1</mn> </msub> </mrow> </msub> </semantics></math>) and [orange] indicating that <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> (<math display="inline"><semantics> <msub> <mi>I</mi> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>→</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> </mrow> </msub> </semantics></math>). The [top right] panel shows the time-varying frequency GC from <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, and the [bottom right] panel shows that from <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The results are based on Equations (29) and (30); refer to <a href="#entropy-25-00806-f017" class="html-fig">Figure 17</a> for a graphical explanation of the process. Noncoupling refers to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
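A minimal time-domain counterpart of the GC quantities plotted here can be written directly from the definition: y Granger-causes x if adding lags of y to an autoregressive model of x reduces the residual variance. The spectral and time-varying variants in the figure use a frequency-domain decomposition that this sketch omits; the function name, AR order, and toy system are illustrative.

```python
import numpy as np

def granger_causality(x, y, p=2):
    """Time-domain GC of y -> x: log ratio of the mean squared residual of
    an AR(p) model of x to that of a model that also uses p lags of y."""
    n = len(x)
    X = np.column_stack([x[p - l:n - l] for l in range(1, p + 1)])  # own lags
    Y = np.column_stack([y[p - l:n - l] for l in range(1, p + 1)])  # source lags
    target = x[p:]
    def msr(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        r = target - design @ beta
        return np.mean(r ** 2)
    return np.log(msr(X) / msr(np.hstack([X, Y])))

# Toy system: white noise y drives x with one step of delay, not vice versa.
rng = np.random.default_rng(1)
y = rng.standard_normal(2000)
x = np.empty(2000)
x[0] = 0.0
x[1:] = 0.8 * y[:-1] + 0.5 * rng.standard_normal(1999)
gc_yx = granger_causality(x, y)   # large: lags of y help predict x
gc_xy = granger_causality(y, x)   # near zero: lags of x do not help predict y
```

Because the full design nests the restricted one, the estimate is always nonnegative in-sample, matching the one-sided character of the GC panels.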
Full article ">Figure 20
<p>Result of the net TE and window sliding TE with the lag at 0. Within each subfigure, the [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] panels show the net TE for the whole signals, while the [right] panel shows the window sliding TE. The results are based on Equations (29) and (30); refer to <a href="#entropy-25-00806-f017" class="html-fig">Figure 17</a> for a graphical explanation of the process. Noncoupling refers to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
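The TE panels can be reproduced in spirit with a plug-in histogram estimator of T<sub>y→x</sub> = H(x<sub>t+1</sub> | x<sub>t</sub>) − H(x<sub>t+1</sub> | x<sub>t</sub>, y<sub>t−lag</sub>); the bin count, lag convention, and toy signals below are assumptions for the sketch, not the paper's settings.

```python
import numpy as np
from collections import Counter

def _entropy(*cols):
    """Joint entropy (nats) of the empirical distribution of the columns."""
    counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def transfer_entropy(x, y, bins=4, lag=0):
    """Binned transfer entropy T_{y->x}: the extra predictability of x_{t+1}
    from y_{t-lag} beyond x_t.  Plug-in estimator, not bias-corrected."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    n = len(x)
    xf = xd[lag + 1:]        # x_{t+1}
    xp = xd[lag:-1]          # x_t
    yp = yd[:n - 1 - lag]    # y_{t-lag}
    return _entropy(xf, xp) - _entropy(xp) - _entropy(xf, xp, yp) + _entropy(xp, yp)

# Toy signals: x copies y with one step of delay, so y at lag 0 is the
# informative source for x_{t+1}, and the reverse direction carries nothing.
rng = np.random.default_rng(3)
y = rng.standard_normal(5000)
x = np.concatenate(([0.0], y[:-1])) + 0.1 * rng.standard_normal(5000)
te_yx = transfer_entropy(x, y, lag=0)
te_xy = transfer_entropy(y, x, lag=0)
```

Changing `lag` shifts which past sample of the source is tested, which is the distinction between this figure (lag 0) and the following one (lag 1).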
Full article ">Figure 21
<p>Result of the net TE and window sliding TE with the lag at 1. Within each subfigure, the [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] panels show the net TE for the whole signals, while the [right] panel shows the window sliding TE. The results are based on Equations (29) and (30); refer to <a href="#entropy-25-00806-f017" class="html-fig">Figure 17</a> for a graphical explanation of the process. Noncoupling refers to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 22
<p>Result of information rate causality for Equations (29) and (30); refer to <a href="#entropy-25-00806-f017" class="html-fig">Figure 17</a> for a graphical explanation of the process. Note that the bidirectional causation begins at 10 s. Noncoupling refers to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure 23
<p>Zoom-in of <a href="#entropy-25-00806-f022" class="html-fig">Figure 22</a> for the information rate causality between 9.6 s and 10.4 s. Note that the bidirectional causation begins at 10 s; refer to <a href="#entropy-25-00806-f017" class="html-fig">Figure 17</a> for a graphical explanation of the process. Noncoupling refers to <math display="inline"><semantics> <mrow> <mi>H</mi> <mo>(</mo> <mi>τ</mi> <mo>−</mo> <mn>10</mn> <mo>)</mo> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for all the cases.</p>
Full article ">Figure A1
<p>Result of the simulation based on Equations (A65) and (A66). The [top left, red] panel shows the result for <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while the [bottom left, green] panel shows the result for <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The phase space of <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> is shown in the [right, blue] panel. Noncoupling here means that the processes oscillate by themselves without sharing any information with each other.</p>
Full article ">Figure A2
<p>Result of the spectral and time-varying frequency GC. In each subfigure, the [left panel] shows the spectral (frequency-domain) GC, with the [blue line] indicating that <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while the [orange line] indicates that <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> causes <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The [top right] subfigure illustrates the time-varying frequency GC from <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math>, while the [bottom right] shows that from <math display="inline"><semantics> <msub> <mi>x</mi> <mn>1</mn> </msub> </semantics></math> to <math display="inline"><semantics> <msub> <mi>x</mi> <mn>2</mn> </msub> </semantics></math>. The results are based on Equations (A65) and (A66).</p>
Full article ">Figure A3
<p>Result of the net TE and window sliding TE. Note that for each subfigure, [left, <math display="inline"><semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>4</mn> </mrow> </semantics></math>] depicts the net TE for the whole signals. The [right] subfigure shows the window sliding TE. The results shown are based on Equations (<a href="#FD95-entropy-25-00806" class="html-disp-formula">A65</a>) and (<a href="#FD96-entropy-25-00806" class="html-disp-formula">A66</a>).</p>
Full article ">Figure A4
<p>Result of information rate causality for Equations (<a href="#FD95-entropy-25-00806" class="html-disp-formula">A65</a>) and (<a href="#FD96-entropy-25-00806" class="html-disp-formula">A66</a>).</p>
Full article ">Figure A5
<p>Zoom-in of <a href="#entropy-25-00806-f0A4" class="html-fig">Figure A4</a> for the information rate causality between 9.6 s and 10.4 s.</p>
Full article ">
18 pages, 1517 KiB  
Article
Dynamical Analysis of Hyper-ILSR Rumor Propagation Model with Saturation Incidence Rate
by Xuehui Mei, Ziyu Zhang and Haijun Jiang
Entropy 2023, 25(5), 805; https://doi.org/10.3390/e25050805 - 16 May 2023
Cited by 7 | Viewed by 1817
Abstract
With the development of the Internet, it is more convenient for people to obtain information, which also facilitates the spread of rumors. It is imperative to study the mechanisms of rumor transmission to control the spread of rumors. The process of rumor propagation [...] Read more.
With the development of the Internet, it is more convenient for people to obtain information, which also facilitates the spread of rumors. It is imperative to study the mechanisms of rumor transmission to control the spread of rumors. The process of rumor propagation is often affected by the interaction of multiple nodes. To reflect such higher-order interactions in rumor spreading, this study introduces hypergraph theory into a Hyper-ILSR (Hyper-Ignorant–Lurker–Spreader–Recover) rumor-spreading model with a saturation incidence rate. Firstly, the definitions of hypergraph and hyperdegree are introduced to explain the construction of the model. Secondly, the existence of the threshold and equilibria of the Hyper-ILSR model is established; the threshold is used to judge the final state of rumor propagation. Next, the stability of the equilibria is studied via Lyapunov functions. Moreover, optimal control is put forward to suppress rumor propagation. Finally, the differences between the Hyper-ILSR model and the general ILSR model are shown in numerical simulations. Full article
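The compartmental flow the abstract describes (Ignorant → Lurker → Spreader → Recovered, with a saturated incidence) can be sketched as a pairwise ODE system. The paper's actual hyperdegree-structured equations are not shown on this page, so the incidence form βIS/(1 + aS) and all rate constants below are illustrative assumptions, not the authors' values.

```python
import numpy as np

def ilsr_step(state, beta=0.4, a=0.5, lam=0.3, gamma=0.2, dt=0.01):
    """One forward-Euler step of an illustrative ILSR rumor model with
    saturated incidence beta*I*S/(1 + a*S); rates are made up for the sketch."""
    I, L, S, R = state
    inc = beta * I * S / (1.0 + a * S)   # saturated incidence: bounded in S
    dI = -inc                            # ignorants hear the rumor
    dL = inc - lam * L                   # lurkers decide whether to spread
    dS = lam * L - gamma * S             # spreaders eventually lose interest
    dR = gamma * S                       # recovered stop spreading
    return state + dt * np.array([dI, dL, dS, dR])

state = np.array([0.9, 0.0, 0.1, 0.0])   # initial fractions I, L, S, R
for _ in range(5000):                     # integrate to t = 50
    state = ilsr_step(state)
```

Because the four derivatives sum to zero, the total population fraction is conserved along the trajectory, a quick consistency check for any compartment model of this kind.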
(This article belongs to the Section Complexity)
Show Figures

Figure 1
<p>A hypergraph.</p>
Full article ">Figure 2
<p>Structure of the Hyper-ILSR rumor propagation process.</p>
Full article ">Figure 3
<p>(<b>a</b>–<b>e</b>) The stability of the Hyper-ILSR model when <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>0</mn> </msub> <mo>&lt;</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>(<b>a</b>,<b>b</b>) The stability of the ILSR model when <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>0</mn> </msub> <mo>&lt;</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>(<b>a</b>–<b>e</b>) The stability of the Hyper-ILSR model when <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>0</mn> </msub> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>(<b>a</b>,<b>b</b>) The stability of the ILSR model when <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>0</mn> </msub> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>(<b>a</b>–<b>d</b>) The effects of <span class="html-italic">a</span> on rumor spreading (<math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mn>0</mn> </msub> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 8
<p>Optimal control (<b>a</b>) and control costs (<b>b</b>).</p>
Full article ">Figure 9
<p>The comparison of the Hyper-ILSR and ILSR models with factual data.</p>
Full article ">
10 pages, 369 KiB  
Article
Radial Basis Function Finite Difference Method Based on Oseen Iteration for Solving Two-Dimensional Navier–Stokes Equations
by Liru Mu and Xinlong Feng
Entropy 2023, 25(5), 804; https://doi.org/10.3390/e25050804 - 16 May 2023
Cited by 2 | Viewed by 1600
Abstract
In this paper, the radial basis function finite difference method is used to solve the two-dimensional steady incompressible Navier–Stokes equations. First, the radial basis function finite difference method with polynomial augmentation is used to discretize the spatial operator. Then, the Oseen iterative scheme is used [...] Read more.
In this paper, the radial basis function finite difference method is used to solve the two-dimensional steady incompressible Navier–Stokes equations. First, the radial basis function finite difference method with polynomial augmentation is used to discretize the spatial operator. Then, the Oseen iterative scheme is used to handle the nonlinear term, yielding a discrete scheme for the Navier–Stokes equations based on the radial basis function finite difference method. This method does not require complete matrix reassembly in each nonlinear iteration, which simplifies the calculation and produces high-precision numerical solutions. Finally, several numerical examples are presented to verify the convergence and effectiveness of the radial basis function finite difference method based on Oseen iteration. Full article
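One building block of the method described here is computing, for each stencil of scattered nodes, the weights that approximate a differential operator. A sketch for the 2D Laplacian using the polyharmonic spline φ(r) = r³ augmented with quadratic monomials; the stencil, basis, and spacing are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def rbf_fd_laplacian_weights(nodes, center):
    """RBF-FD weights w with sum_i w_i u(x_i) ~ Laplacian(u)(center), built
    from phi(r) = r^3 plus the quadratic monomials 1, x, y, x^2, xy, y^2."""
    d = nodes - center                       # local coordinates
    n = len(nodes)
    P = np.column_stack([np.ones(n), d[:, 0], d[:, 1],
                         d[:, 0] ** 2, d[:, 0] * d[:, 1], d[:, 1] ** 2])
    r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
    M = np.block([[r ** 3, P], [P.T, np.zeros((6, 6))]])
    rc = np.linalg.norm(d, axis=1)
    rhs = np.concatenate([9.0 * rc,             # Laplacian of r^3 in 2D is 9r
                          [0, 0, 0, 2, 0, 2]])  # Laplacian of the monomials
    return np.linalg.solve(M, rhs)[:n]

# 3x3 stencil: the polynomial constraints make the weights exact for all
# quadratics, so applying them to u = x^2 + y^2 recovers Laplacian(u) = 4.
h = 0.1
pts = np.array([(i * h, j * h) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)
w = rbf_fd_laplacian_weights(pts, np.array([0.0, 0.0]))
lap_u = w @ (pts[:, 0] ** 2 + pts[:, 1] ** 2)
```

In the full method, one such weight set is assembled per node into a sparse differentiation matrix, which is then reused across the Oseen iterations.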
(This article belongs to the Collection Foundations of Statistical Mechanics)
Show Figures

Figure 1
<p>Right-angled node layout (<b>left</b>) and hexagonal node layout (<b>right</b>).</p>
Full article ">Figure 2
<p>Relative error <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> of the velocity <span class="html-italic">u</span> and pressure <span class="html-italic">p</span> (right-angled node layout).</p>
Full article ">Figure 3
<p>Relative error <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> of the velocity <math display="inline"><semantics> <mi mathvariant="bold-italic">u</mi> </semantics></math> (<b>left</b>) and the pressure <span class="html-italic">p</span> (<b>right</b>) (<math display="inline"><semantics> <mrow> <mi>ν</mi> <mo>=</mo> <mn>0.00001</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 4
<p>Relative error <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math> of the velocity <math display="inline"><semantics> <mi mathvariant="bold-italic">u</mi> </semantics></math> and pressure <span class="html-italic">p</span> (hexagonal node layout).</p>
Full article ">
31 pages, 402 KiB  
Article
Process and Time
by William Sulis
Entropy 2023, 25(5), 803; https://doi.org/10.3390/e25050803 - 15 May 2023
Cited by 3 | Viewed by 4716
Abstract
With regard to the nature of time, it has become commonplace to hear physicists state that time does not exist and that the perception of time passing and of events occurring in time is an illusion. In this paper, I argue that physics [...] Read more.
With regard to the nature of time, it has become commonplace to hear physicists state that time does not exist and that the perception of time passing and of events occurring in time is an illusion. In this paper, I argue that physics is actually agnostic on the question of the nature of time. The standard arguments against its existence all suffer from implicit biases and hidden assumptions, rendering many of them circular in nature. An alternative viewpoint to that of Newtonian materialism is the process view of Whitehead. I will show that the process perspective supports the reality of becoming, of happening, and of change. At the fundamental level, time is an expression of the action of process generating the elements of reality. Metrical space–time is an emergent aspect of relations between process-generated entities. Such a view is compatible with existing physics. The situation of time in physics is reminiscent of that of the continuum hypothesis in mathematical logic. It may be an independent assumption, not provable within physics proper (though it may someday be amenable to experimental exploration). Full article
(This article belongs to the Special Issue Quantum Information and Probability: From Foundations to Engineering)
32 pages, 4255 KiB  
Review
Reviewing Evolution of Learning Functions and Semantic Information Measures for Understanding Deep Learning
by Chenguang Lu
Entropy 2023, 25(5), 802; https://doi.org/10.3390/e25050802 - 15 May 2023
Cited by 3 | Viewed by 2343
Abstract
A new trend in deep learning, represented by Mutual Information Neural Estimation (MINE) and Information Noise-Contrastive Estimation (InfoNCE), is emerging. In this trend, similarity functions and Estimated Mutual Information (EMI) are used as learning and objective functions. Coincidentally, EMI is essentially the [...] Read more.
A new trend in deep learning, represented by Mutual Information Neural Estimation (MINE) and Information Noise-Contrastive Estimation (InfoNCE), is emerging. In this trend, similarity functions and Estimated Mutual Information (EMI) are used as learning and objective functions. Coincidentally, EMI is essentially the same as Semantic Mutual Information (SeMI) proposed by the author 30 years ago. This paper first reviews the evolutionary histories of semantic information measures and learning functions. Then, it briefly introduces the author’s semantic information G theory with the rate-fidelity function R(G) (G denotes SeMI, and R(G) extends R(D)) and its applications to multi-label learning, the maximum Mutual Information (MI) classification, and mixture models. Then it discusses how we should understand the relationship between SeMI and Shannon’s MI, two generalized entropies (fuzzy entropy and coverage entropy), Autoencoders, Gibbs distributions, and partition functions from the perspective of the R(G) function or the G theory. An important conclusion is that mixture models and Restricted Boltzmann Machines converge because SeMI is maximized, and Shannon’s MI is minimized, making information efficiency G/R close to 1. A potential opportunity is to simplify deep learning by using Gaussian channel mixture models for pre-training deep neural networks’ latent layers without considering gradients. It also discusses how the SeMI measure is used as the reward function (reflecting purposiveness) for reinforcement learning. The G theory helps interpret deep learning but is far from enough. Combining semantic information theory and deep learning will accelerate their development. Full article
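The InfoNCE objective mentioned in this abstract lower-bounds mutual information by asking a critic to pick the true partner y_i of each x_i out of the whole batch: the bound is the average log-softmax score of the true pairs plus log N, and it can never exceed log N. A numpy sketch with an assumed (untrained) bilinear critic, for illustration only:

```python
import numpy as np

def infonce_bound(x, y, score):
    """InfoNCE lower bound on I(X;Y) in nats: mean log-softmax probability of
    the true pair (x_i, y_i) against all in-batch candidates y_j, plus log N."""
    S = score(x[:, None], y[None, :])                # S[i, j] = f(x_i, y_j)
    logits = S - S.max(axis=1, keepdims=True)        # stabilize the softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return np.mean(np.diag(log_probs)) + np.log(len(x))

# Correlated Gaussian pairs versus an independent shuffle: the bound should
# be clearly positive for the former and near zero for the latter.
rng = np.random.default_rng(0)
n, rho = 512, 0.9
z = rng.standard_normal(n)
x = z
y = rho * z + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
score = lambda a, b: a * b          # assumed bilinear critic, not learned
bound_dep = infonce_bound(x, y, score)
bound_indep = infonce_bound(x, rng.standard_normal(n), score)
```

In MINE/InfoNCE practice the critic is a trained neural network; the fixed product critic here only makes the batch-size cap and the dependent-vs-independent gap visible.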
(This article belongs to the Special Issue Entropy: The Cornerstone of Machine Learning)
Show Figures

Figure 1
<p>The distinctions and relations between four types of learning [<a href="#B31-entropy-25-00802" class="html-bibr">31</a>,<a href="#B34-entropy-25-00802" class="html-bibr">34</a>].</p>
Full article ">Figure 2
<p>Illustrating a GPS device’s positioning with a deviation. We predict the probability distribution of <span class="html-italic">x</span> according to <span class="html-italic">y<sub>j</sub></span> and the prior knowledge <span class="html-italic">P</span>(<span class="html-italic">x</span>). The red star represents the most probable position.</p>
Full article ">Figure 3
<p>The semantic information conveyed by <span class="html-italic">y<sub>j</sub></span> about <span class="html-italic">x<sub>i</sub></span> decreases with the deviation or distortion increasing. The larger the deviation is, the less information there is.</p>
Full article ">Figure 4
<p>The information rate-fidelity function <span class="html-italic">R</span>(<span class="html-italic">G</span>) for binary communication. Any <span class="html-italic">R</span>(<span class="html-italic">G</span>) function is a bowl-like function. There is a point at which <span class="html-italic">R</span>(<span class="html-italic">G</span>) = <span class="html-italic">G</span> (<span class="html-italic">s</span> = 1). For given <span class="html-italic">R</span>, two anti-functions exist: <span class="html-italic">G</span><sup>-</sup>(<span class="html-italic">R</span>) and <span class="html-italic">G</span><sup>+</sup>(<span class="html-italic">R</span>).</p>
Full article ">Figure 5
<p>Illustrating the medical test and the signal detection. We choose <span class="html-italic">y<sub>j</sub></span> according to <span class="html-italic">z</span> ∈ <span class="html-italic">C<sub>j</sub></span>. The task is to find the dividing point <span class="html-italic">z</span>’ that results in MaxMI between <span class="html-italic">X</span> and <span class="html-italic">Y</span>.</p>
Full article ">Figure 6
<p>The MMI classification with a very bad initial partition. The convergence is very fast and stable without considering gradients. (<b>a</b>) The very bad initial partition. (<b>b</b>) The partition after the first iteration. (<b>c</b>) The partition after the second iteration. (<b>d</b>) The mutual information changes with iterations.</p>
Full article ">Figure 7
<p>Comparing the EM and E3M algorithms on an example for which convergence is difficult. The EM algorithm needs about 340 iterations, whereas the E3M algorithm needs about 240 iterations. During convergence, the complete-data log-likelihood <span class="html-italic">Q</span> does not increase monotonically. <span class="html-italic">H</span>(<span class="html-italic">P||P<sub>θ</sub></span>) decreases with <span class="html-italic">R − G.</span> (<b>a</b>) Initial components with (<span class="html-italic">µ</span><sub>1</sub>, <span class="html-italic">µ</span><sub>2</sub>) = (80, 95). (<b>b</b>) The two globally convergent components. (<b>c</b>) <span class="html-italic">Q</span>, <span class="html-italic">R</span>, <span class="html-italic">G</span>, and <span class="html-italic">H</span>(<span class="html-italic">P||P<sub>θ</sub></span>) change with iterations (initialization: (<span class="html-italic">µ</span><sub>1</sub>, <span class="html-italic">µ</span><sub>2</sub>, <span class="html-italic">σ</span><sub>1</sub>, <span class="html-italic">σ</span><sub>2</sub>, <span class="html-italic">P</span>(<span class="html-italic">y</span><sub>1</sub>)) = (80, 95, 5, 5, 0.5)).</p>
Full article ">Figure 8
<p>Comparing a typical neuron and a neuron in a CMMM. (<b>a</b>) A typical neuron in neural networks. (<b>b</b>) A neuron in the CMMM and its optimization.</p>
Full article ">Figure 9
<p>Illustrating population death age control for measuring purposive information. <span class="html-italic">P(x|a<sub>j</sub></span>) approximates to <span class="html-italic">P(x|θ<sub>j</sub></span>) = <span class="html-italic">P(x|θ<sub>j</sub></span>, <span class="html-italic">s</span> = 1) for information efficiency <span class="html-italic">G/R</span> = 1. <span class="html-italic">G</span> and <span class="html-italic">R</span> are close to their maxima as <span class="html-italic">P(x|a<sub>j</sub></span>) approximates to <span class="html-italic">P(x|θ<sub>j</sub></span>, <span class="html-italic">s</span> = 20).</p>
Full article ">
Previous Issue
Next Issue
Back to TopTop