
Next Issue
Volume 21, November
Previous Issue
Volume 21, September
 
 
Entropy, Volume 21, Issue 10 (October 2019) – 111 articles

Cover Story: Entropy applications in hydrometric network design problems have been studied extensively over the past decade. This paper introduces a methodology called ensemble-based dual entropy and multiobjective optimization (EnDEMO), which accounts for uncertainty arising from the ensemble generation of the input data. First, the current network was evaluated by transinformation analysis. Then, optimal networks were explored using both the traditional deterministic network design method and the newly proposed ensemble-based method. A comparison of the results shows that EnDEMO selected fewer stations with high frequency, and those stations appeared more reliable for practical use. The maps of station selection frequency from both DEMO and EnDEMO allowed preferential locations for additional stations to be identified, and EnDEMO provided a more robust outcome than the traditional approach. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and use the free Adobe Reader to open it.
14 pages, 3451 KiB  
Article
Effect of Solution Treatment on the Shape Memory Functions of (TiZrHf)50Ni25Co10Cu15 High Entropy Shape Memory Alloy
by Hao-Chen Lee, Yue-Jin Chen and Chih-Hsuan Chen
Entropy 2019, 21(10), 1027; https://doi.org/10.3390/e21101027 - 22 Oct 2019
Cited by 25 | Viewed by 5531
Abstract
This study investigated the effects of solution treatment at 1000 °C on the transformation behaviors, microstructure, and shape memory functions of a novel (TiZrHf)50Ni25Co10Cu15 high entropy shape memory alloy (HESMA). The solution treatment caused partial dissolution of the non-oxygen-stabilized Ti2Ni-like phase, which increased the (Ti, Zr, Hf) content in the matrix and thus raised the Ms and Af temperatures. At the same time, the solution treatment induced a high entropy effect and increased the degree of lattice distortion, which raised the friction force during martensitic transformation and led to a broad transformation temperature range. The dissolution of the Ti2Ni-like phase also improved the functional performance of the HESMA by reducing its brittleness and increasing its strength. The experimental results presented in this study demonstrate that solution treatment is an effective and essential way to improve the functional performance of the HESMA. Full article
(This article belongs to the Special Issue High-Entropy Materials)
Show Figures

Graphical abstract
Figure 1: Heat flow curves of the furnace-cooled (FC) and water-quenched (WQ) Ti16.67Zr16.67Hf16.67Ni25Co10Cu15 HESMAs measured with DSC.
Figure 2: XRD spectra of the FC and WQ HESMAs collected at room temperature.
Figure 3: SEM backscattered electron images (BSI) of the (a) FC and (b) WQ HESMAs.
Figure 4: (a,b) Bright-field images of the FC sample showing a lamellar B19′ martensite structure; (c) selected area diffraction pattern of the B19′ martensite; (d) a micrometer-scale Ti2Ni-like phase in the matrix, with its diffraction pattern inset.
Figure 5: Shape memory effects of the (a) FC and (b) WQ specimens in the three-point bending test; (c) recoverable and irrecoverable strains of the FC and WQ HESMAs. (Figure 5b reprinted from Chih-Hsuan Chen and Yue-Jin Chen, "Shape memory characteristics of (TiZrHf)50Ni25Co10Cu15 high entropy shape memory alloy", Vol. 162, pp. 185–189, Copyright (2019), with permission from Elsevier.)
Figure 6: Pseudoelasticity of the (a) FC and (b) WQ specimens under compression at 150 °C; (c) summary of the recoverable and irrecoverable strains, with a dotted line marking 100% recovery of the applied strain.
24 pages, 28547 KiB  
Article
A High Spectral Entropy (SE) Memristive Hidden Chaotic System with Multi-Type Quasi-Periodic and its Circuit
by Licai Liu, Chuanhong Du, Lixiu Liang and Xiefu Zhang
Entropy 2019, 21(10), 1026; https://doi.org/10.3390/e21101026 - 22 Oct 2019
Cited by 25 | Viewed by 3695
Abstract
As a new type of nonlinear electronic component, a memristor can be used in a chaotic system to increase the complexity of the system. In this paper, a flux-controlled memristor is applied to an existing chaotic system, and a novel five-dimensional chaotic system with high complexity and hidden attractors is proposed. Analysis of the system's nonlinear characteristics shows that it has new chaotic attractors and many novel quasi-periodic limit cycles; the unique attractor structure of the Poincaré map also reflects the complexity and novelty of the system's hidden attractor; and the system exhibits very high complexity when measured by spectral entropy. In addition, under different initial conditions, the system exhibits the coexistence of chaotic attractors with different topologies, quasi-periodic limit cycles, and chaotic attractors. At the same time, an interesting transient chaos phenomenon, one kind of novel quasi-periodic attractor, and weak chaotic hidden attractors are found. Finally, we realize the memristor model circuit and the proposed chaotic system using off-the-shelf electronic components. The experimental results of the circuit are consistent with the numerical simulation, which shows that the system is physically achievable and provides a new option for the application of memristive chaotic systems. Full article
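The spectral entropy (SE) used as the complexity measure is, in general terms, the Shannon entropy of a signal's normalized power spectrum. The Python sketch below shows one common SE definition; the signal length, normalization, and test signals are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def spectral_entropy(x, eps=1e-12):
    """Normalized spectral entropy (SE): Shannon entropy of the signal's
    normalized power spectrum, divided by log(number of spectral bins)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    p = power / (power.sum() + eps)
    h = -np.sum(p * np.log(p + eps))
    return h / np.log(len(p))

# A narrow-band signal concentrates its power in a few bins (low SE),
# while broadband noise spreads it across the spectrum (SE close to 1).
t = np.arange(4096)
print(spectral_entropy(np.sin(2 * np.pi * 0.05 * t)))                    # low
print(spectral_entropy(np.random.default_rng(0).standard_normal(4096)))  # near 1
```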
Show Figures

Figure 1: Memristor model: (a) relationship between magnetic flux and charge; (b) I–V characteristic curve.
Figure 2: 3-D chaotic attractors of system (2) in the x–z–w, x–y–u, x–w–u, and z–w–u spaces.
Figure 3: 2-D chaotic attractors on the y–w, z–w, w–u, x–u, and z–u planes, and the w–i plane of memristor model (1).
Figure 4: Frequency spectrum and time series of the x variable of system (2).
Figure 5: Poincaré maps of system (2) on the x–y, y–u, y–w, and x–w planes.
Figure 6: Bifurcation diagram of x versus a for system (2) with b = 0.05, d = 0.1, c = e = g = 1, initial value (−1, −1, 0, 0, 1), and a ∈ (0, 4).
Figure 7: Lyapunov exponents of system (2) versus a under the same parameters and initial value.
Figures 8–17: Projections of the 2-D phase diagrams onto the x–z, y–z, and y–w planes for a = 0.3, 0.4, 0.70, 0.75, 1.17, 1.28, 1.891, 1.89678, 2.56, and 2.65.
Figure 18: Bifurcation diagram of y and Lyapunov exponents versus u for system (2) with initial value (u, 0, 0, 0, 0) and u ∈ [0, 4].
Figure 19: Bifurcation diagram of y and Lyapunov exponents versus u for system (2) with initial value (u, u, 0, 0, 0) and u ∈ [0, 4].
Figure 20: Phase diagrams on the y–z plane for system (2) with different initial values, showing coexisting attractors.
Figure 21: Distribution of the multi-stable nonlinear dynamic behaviors of the memristive system for different initial values.
Figure 22: Time series and phase diagrams of the transient chaotic attractors, split into t ∈ (0, 2000) and t ∈ (2000, 8000).
Figure 23: Spectral entropy (SE) versus a (with b = 0.05) and versus b (with a = 1).
Figure 24: SE distribution under (a) the interaction of parameters a and b and (b) the interaction of initial values x0 and y0.
Figure 25: Memristor model circuit: (a) memristor unit; (b) absolute circuit.
Figure 26: Schematic of the memristor-based chaotic system.
Figure 27: Multisim circuit simulation of system (2).
Figure 28: Phase diagrams observed on an oscilloscope: uy–uw, uz–uw, uw–uu, ux–uu, uz–uu, and ux–uz planes.
15 pages, 4060 KiB  
Article
Research on Bearing Fault Diagnosis Method Based on Filter Features of MOMLMEDA and LSTM
by Yong Li, Gang Cheng, Xihui Chen and Yusong Pang
Entropy 2019, 21(10), 1025; https://doi.org/10.3390/e21101025 - 22 Oct 2019
Cited by 8 | Viewed by 3427
Abstract
As the supporting unit of rotating machinery, bearings ensure the efficient operation of equipment, so it is very important to monitor their status accurately. A bearing fault diagnosis method based on Multipoint Optimal Minimum Local Mean Entropy Deconvolution Adjusted (MOMLMEDA) and Long Short-Term Memory (LSTM) is proposed. MOMLMEDA is an improved algorithm based on Multipoint Optimal Minimum Entropy Deconvolution Adjusted (MOMEDA). By setting the local kurtosis mean as a new selection criterion, it effectively avoids the interference of false kurtosis caused by noise and improves the accuracy of the optimal kurtosis position. The optimal filter designed from the optimal kurtosis position has periodic and amplitude characteristics, which are used as the fault feature in this paper. However, this feature is temporal in nature and cannot be used directly as the input of a general neural network. LSTM is therefore selected as the classification network, as it effectively avoids the influence of the temporal structure of the feature vectors. Accurate diagnosis of bearing faults is realized by training the classification network with samples; the overall recognition rate reaches 93.50%. Full article
(This article belongs to the Special Issue Entropy-Based Fault Diagnosis)
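The abstract above pairs the MOMLMEDA filter features with an LSTM classifier. As a rough sketch of that final classification stage only (the layer width, sequence length, and seven-class output below are placeholder assumptions, and MOMLMEDA itself is not reproduced), a temporal feature vector can be fed to an LSTM classifier in PyTorch as follows:

```python
import torch
import torch.nn as nn

class FilterFeatureLSTM(nn.Module):
    """Minimal LSTM classifier for temporal feature vectors (e.g., filter
    coefficients treated as a sequence); hypothetical dimensions, not the
    authors' exact network."""
    def __init__(self, input_size=1, hidden_size=64, num_classes=7):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):               # x: (batch, seq_len, input_size)
        _, (h_n, _) = self.lstm(x)      # h_n: (1, batch, hidden_size)
        return self.fc(h_n[-1])         # logits: (batch, num_classes)

# Toy usage: a batch of 8 "filter feature" sequences of length 300.
model = FilterFeatureLSTM()
features = torch.randn(8, 300, 1)
logits = model(features)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 7, (8,)))
loss.backward()
```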
Show Figures

Figure 1: Flowchart of the proposed method.
Figure 2: Framework of the LSTM.
Figure 3: SQI-MFS bearing fault test bench.
Figure 4: Time-domain comparison of different bearing signals.
Figure 5: Multi-kurtosis spectra of different bearing signals.
Figure 6: Test results for different filter lengths via MOMEDA.
Figure 7: Output signals obtained by MOMEDA.
Figure 8: Multi-kurtosis spectrum with the incorrect optimal kurtosis location.
Figure 9: Test results for different filter lengths via MOMLMEDA.
Figure 10: Filters of different bearing signals designed by MOMLMEDA.
Figure 11: Comparison of coarse-grained feature vectors between different fault signals: (a) different types of bearing signals, (b) different degrees of inner race fault, (c) different degrees of outer race fault, (d) same degree of inner race fault, (e) same degree of outer race fault.
Figure 12: Training process of the LSTM.
Figure 13: Prediction results based on MOMLMEDA and LSTM.
13 pages, 1502 KiB  
Article
Multiscale Entropy of Cardiac and Postural Control Reflects a Flexible Adaptation to a Cognitive Task
by Estelle Blons, Laurent M. Arsac, Pierre Gilfriche and Veronique Deschodt-Arsac
Entropy 2019, 21(10), 1024; https://doi.org/10.3390/e21101024 - 21 Oct 2019
Cited by 15 | Viewed by 3662
Abstract
In humans, physiological systems involved in maintaining stable conditions for health and well-being are complex, encompassing multiple interactions within and between system components. This complexity is mirrored in the temporal structure of the variability of output signals. Entropy has been recognized as a good marker of systems complexity, notably when calculated from heart rate and postural dynamics. A degraded entropy is generally associated with frailty, aging, impairments or diseases. In contrast, high entropy has been associated with an elevated capacity to adjust to an ever-changing environment, but the link between entropy and the capacity to cope with cognitive tasks in a healthy young to middle-aged population is unknown. Here, we addressed classic markers (time and frequency domains) and refined composite multiscale entropy (MSE) markers (after pre-processing) of heart rate and postural sway time series in 34 participants during quiet versus cognitive task conditions. Recordings lasted 10 min for heart rate and 51.2 s for upright standing, providing time series lengths of 500–600 and 2048 samples, respectively. The main finding was that entropy increased during cognitive tasks. This highlights the possible links between our entropy measures and the systems complexity that probably facilitates a control remodeling and a flexible adaptability in our healthy participants. We conclude that entropy is a reliable marker of neurophysiological complexity and adaptability in autonomic and somatic systems. Full article
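Multiscale entropy combines coarse-graining of the series at increasing scales with a sample entropy estimate at each scale. The sketch below implements plain MSE rather than the refined composite (RCMSE) variant used in the paper, with illustrative parameters m = 2 and r = 0.15 and the tolerance fixed from the original series, as is customary.

```python
import numpy as np

def sample_entropy(x, m=2, tol=None, r=0.15):
    """SampEn(m, r). If `tol` is not given it defaults to r * std(x); the MSE
    routine below fixes the tolerance from the original series."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = r * x.std()
    n = len(x)

    def count_pairs(dim):
        # Templates of length `dim`; the same n - m starting points are used
        # for lengths m and m + 1, following the usual SampEn definition.
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b, a = count_pairs(m), count_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Average the series over non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = (len(x) // scale) * scale
    return x[:n].reshape(-1, scale).mean(axis=1)

def multiscale_entropy(x, max_scale=10, m=2, r=0.15):
    tol = r * np.asarray(x, dtype=float).std()  # tolerance fixed across scales
    return [sample_entropy(coarse_grain(x, s), m=m, tol=tol)
            for s in range(1, max_scale + 1)]

# White noise: entropy falls as the scale grows; correlated signals stay higher.
print(multiscale_entropy(np.random.default_rng(0).standard_normal(2048), max_scale=5))
```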
Show Figures

Figure 1: Cardiac entropy index (EC) and postural entropy index (EP), calculated from the areas under the refined composite multiscale entropy (RCMSE) curves.
Figure 2: RR interval time series and anteroposterior (AP) and mediolateral (ML) center-of-pressure (COP) time series from a representative participant in the reference (Ref) and cognitive (Cog) conditions.
Figure 3: RCMSE analysis of the RR interval time series and of the AP center-of-pressure time series in the Ref and Cog conditions, showing group mean sample entropy per scale (error bars: standard errors) together with the surrogate shuffled time series.
Figure 4: Receiver operating characteristic (ROC) curves (sensitivity vs. 1 − specificity) for the cardiac and postural indices (RMSSD, LF, HF, EC; AP, ML, EP).
27 pages, 495 KiB  
Article
Teaching Ordinal Patterns to a Computer: Efficient Encoding Algorithms Based on the Lehmer Code
by Sebastian Berger, Andrii Kravtsiv, Gerhard Schneider and Denis Jordan
Entropy 2019, 21(10), 1023; https://doi.org/10.3390/e21101023 - 21 Oct 2019
Cited by 14 | Viewed by 3559
Abstract
Ordinal patterns are the common basis of various techniques used in the study of dynamical systems and nonlinear time series analysis. The present article focusses on the computational problem of turning time series into sequences of ordinal patterns. In a first step, a numerical encoding scheme for ordinal patterns is proposed. Utilising the classical Lehmer code, it enumerates ordinal patterns by consecutive non-negative integers, starting from zero. This compact representation considerably simplifies working with ordinal patterns in the digital domain. Subsequently, three algorithms for the efficient extraction of ordinal patterns from time series are discussed, including previously published approaches that can be adapted to the Lehmer code. The respective strengths and weaknesses of those algorithms are discussed, and further substantiated by benchmark results. One of the algorithms stands out in terms of scalability: its run-time increases linearly with both the pattern order and the sequence length, while its memory footprint is practically negligible. These properties enable the study of high-dimensional pattern spaces at low computational cost. In summary, the tools described herein may improve the efficiency of virtually any ordinal pattern-based analysis method, among them quantitative measures like permutation entropy and symbolic transfer entropy, but also techniques like forbidden pattern identification. Moreover, the concepts presented may allow for putting ideas into practice that up to now had been hindered by computational burden. To enable smooth evaluation, a function library written in the C programming language, as well as language bindings and native implementations for various numerical computation environments are provided in the supplements. Full article
(This article belongs to the Section Signal and Data Analysis)
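As a concrete illustration of the encoding idea, the sketch below maps each length-m window of a series to an integer in {0, …, m! − 1} by counting, for every position, the smaller values to its right and weighting the counts by factorials, in the spirit of the Lehmer code. It is a plain, unoptimized variant; the exact enumeration convention and the faster overlap and lookup algorithms are those of the paper and its C library, not this snippet.

```python
import numpy as np
from math import factorial

def lehmer_encode(window):
    """Map one ordinal pattern to an integer in {0, ..., m! - 1}: each position
    contributes the number of strictly smaller values to its right, weighted by
    a factorial (a Lehmer-code style enumeration)."""
    m = len(window)
    code = 0
    for i in range(m - 1):
        smaller_to_the_right = sum(window[j] < window[i] for j in range(i + 1, m))
        code += smaller_to_the_right * factorial(m - 1 - i)
    return code

def ordinal_pattern_sequence(x, m=3, lag=1):
    """Encode every length-m window of the series, taken with time lag `lag`."""
    x = np.asarray(x, dtype=float)
    n_patterns = len(x) - (m - 1) * lag
    return np.array([lehmer_encode(x[t:t + m * lag:lag]) for t in range(n_patterns)])

# Example: order-3 patterns of a short series; codes lie in {0, ..., 5}.
print(ordinal_pattern_sequence([4.0, 7.0, 9.0, 10.0, 6.0, 11.0, 3.0], m=3, lag=1))
```

Permutation entropy, for instance, is then simply the Shannon entropy of the histogram of these integer codes.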
Show Figures

Graphical abstract
Figure 1: State diagram for an ordinal process of order m = 3 and time lag τ = 1, interpretable as a first-order Markov chain; only the depicted transitions are possible because consecutive patterns overlap in two out of three values.
Figure 2: For pattern order m = 4, the pattern succeeding a fixed ordinal pattern at a distance of τ time steps has only one degree of freedom: its rightmost rank.
Figure 3: Computation time (median of 20 trials) for transforming 3.6 × 10^5 samples of uniform white noise into ordinal patterns of order m (τ = 1), using the Matlab functions encode_vectorised and encode_overlap from the supplementary material.
Figure 4: The same benchmark with the C functions ordpat_encode_plain and ordpat_encode_overlap; the plain algorithm scales as O(m^2), the overlap algorithm as O(m), with no advantage at m = 2.
Figure 5: The same benchmark comparing ordpat_encode_overlap with the arbitrary-precision variant ordpat_encode_overlap_mp, whose timing is less stable than strictly hardware-based arithmetic.
Figure 6: Computation time of ordpat_encode_overlap_mp versus m; the memory per pattern grows stepwise with m, and the combined effects produce a parabolic envelope.
Figure 7: Computation time for white noise low-pass filtered to various relative bandwidths, comparing ordpat_encode_lookup and ordpat_encode_overlap; run-time grows with both the pattern order and the ordinal complexity of the input.
Figure 8: Computation time for all-zero versus white-noise input, comparing the lookup and overlap algorithms; without cache contention, the lookup algorithm can outperform the overlap algorithm as m increases.
Figure 9: Computation time versus sequence length N for the plain, overlap, and lookup algorithms (m = 5, τ = 1); run-time grows approximately linearly with N for all three.
Figure 10: Computation time versus time lag τ for the plain, overlap, and lookup algorithms (m = 5); no noticeable dependency on τ was observed.
80 pages, 1639 KiB  
Article
On Data-Processing and Majorization Inequalities for f-Divergences with Applications
by Igal Sason
Entropy 2019, 21(10), 1022; https://doi.org/10.3390/e21101022 - 21 Oct 2019
Cited by 18 | Viewed by 4543
Abstract
This paper is focused on the derivation of data-processing and majorization inequalities for f-divergences, and their applications in information theory and statistics. For the accessibility of the material, the main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. Almost all the analysis is relegated to the appendices, which form the major part of this manuscript. Full article
(This article belongs to the Special Issue Information Measures with Applications)
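The data-processing inequality underlying much of the analysis states that an f-divergence cannot increase when both distributions are passed through the same channel: D_f(P_X ‖ Q_X) ≥ D_f(P_Y ‖ Q_Y). A minimal numerical sanity check for the Kullback–Leibler divergence (f(t) = t log t) and a binary symmetric channel is sketched below; the distributions and crossover probability are illustrative choices, not the paper's worked examples.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P||Q) = sum_x q(x) f(p(x)/q(x)) for strictly positive q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

def push_through_channel(p, W):
    """Output distribution P_Y = P_X W for a row-stochastic channel matrix W."""
    return np.asarray(p, float) @ np.asarray(W, float)

# KL divergence corresponds to f(t) = t * log(t).
kl_f = lambda t: t * np.log(t)

# Two input distributions and a binary symmetric channel with crossover 0.11
# (illustrative values only).
P_X = np.array([0.25, 0.75])
Q_X = np.array([0.50, 0.50])
delta = 0.11
W = np.array([[1 - delta, delta],
              [delta, 1 - delta]])

D_in = f_divergence(P_X, Q_X, kl_f)
D_out = f_divergence(push_through_channel(P_X, W),
                     push_through_channel(Q_X, W), kl_f)

# Data-processing inequality: D_f(P_X||Q_X) >= D_f(P_Y||Q_Y).
print(f"D(P_X||Q_X) = {D_in:.4f} >= D(P_Y||Q_Y) = {D_out:.4f}")
assert D_in >= D_out - 1e-12
```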
Show Figures

Figure 1: The bounds in Theorem 2 applied to the difference D_{f_α}(R^{(λ)}_{X^n} ‖ Q_{X^n}) − D_{f_α}(R^{(λ)}_{Y^n} ‖ Q_{Y^n}) versus λ ∈ [0, 1], where the f_α-divergence refers to Theorem 5. P_{X^n} and Q_{X^n} correspond to discrete memoryless sources emitting n i.i.d. Bernoulli(p) and Bernoulli(q) symbols, transmitted over a BSC(δ) with (α, p, q, δ) = (1, 1/4, 1/2, 0.110). The upper, middle, and lower plots correspond to n = 1, 10, and 50; for n = 1 and n = 10 the bounds are compared to the exact values, which are computationally feasible there.
Figure 2: The upper bound in Theorem 4 applied to the ratio D_{f_α}(R^{(λ)}_{Y^n} ‖ Q_{Y^n}) / D_{f_α}(R^{(λ)}_{X^n} ‖ Q_{X^n}) (see (125)–(127)) versus λ ∈ [0, 1], where the f_α-divergence refers to Theorem 5. P_{X_i} and Q_{X_i} are Bernoulli(p) and Bernoulli(q) for all i ∈ {1, …, n}, with n uses of a BSC(δ) and (p, q, δ) = (1/4, 1/2, 0.110). The upper and middle plots correspond to n = 10 with α = 10 and α = 100; the middle and lower plots correspond to α = 100 with n = 10 and n = 100; for n = 10 the bound is compared to the exact values.
Full article ">Figure 3
<p>Plots of <inline-formula><mml:math id="mm2243" display="block"><mml:semantics><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mi>α</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>p</mml:mi><mml:mo>∥</mml:mo><mml:mi>q</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:semantics></mml:math></inline-formula>, its upper and lower bounds in (<xref ref-type="disp-formula" rid="FD61-entropy-21-01022">61</xref>) and (<xref ref-type="disp-formula" rid="FD65-entropy-21-01022">65</xref>), respectively, and its asymptotic approximation in (<xref ref-type="disp-formula" rid="FD66-entropy-21-01022">66</xref>) for large values of <inline-formula><mml:math id="mm2244" display="block"><mml:semantics><mml:mi>α</mml:mi></mml:semantics></mml:math></inline-formula>. The plots are shown as a function of <inline-formula><mml:math id="mm2245" display="block"><mml:semantics><mml:mrow><mml:mi>α</mml:mi><mml:mo>∈</mml:mo><mml:mfenced separators="" open="[" close="]"><mml:msup><mml:mi mathvariant="normal">e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mfrac><mml:mn>3</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msup><mml:mo>,</mml:mo><mml:mn>1000</mml:mn></mml:mfenced></mml:mrow></mml:semantics></mml:math></inline-formula>. The upper and lower plots refer, respectively, to <inline-formula><mml:math id="mm2246" display="block"><mml:semantics><mml:mrow><mml:mo>(</mml:mo><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>q</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mo>(</mml:mo><mml:mn>0.1</mml:mn><mml:mo>,</mml:mo><mml:mn>0.9</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:semantics></mml:math></inline-formula> and <inline-formula><mml:math id="mm2247" display="block"><mml:semantics><mml:mrow><mml:mo>(</mml:mo><mml:mi>p</mml:mi><mml:mo>,</mml:mo><mml:mi>q</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mo>(</mml:mo><mml:mn>0.2</mml:mn><mml:mo>,</mml:mo><mml:mn>0.8</mml:mn><mml:mo>)</mml:mo></mml:mrow></mml:semantics></mml:math></inline-formula>.</p>
Full article ">Figure 4
<p>Curves of the upper bound on the ratio of contraction coefficients <inline-formula><mml:math id="mm2248" display="block"><mml:semantics><mml:mfrac><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:msub><mml:mi>f</mml:mi><mml:mi>α</mml:mi></mml:msub></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>Q</mml:mi><mml:mi>X</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>Y</mml:mi><mml:mo>|</mml:mo><mml:mi>X</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow><mml:mrow><mml:msub><mml:mi>μ</mml:mi><mml:msup><mml:mi>χ</mml:mi><mml:mn>2</mml:mn></mml:msup></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>Q</mml:mi><mml:mi>X</mml:mi></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>W</mml:mi><mml:mrow><mml:mi>Y</mml:mi><mml:mo>|</mml:mo><mml:mi>X</mml:mi></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:mfrac></mml:semantics></mml:math></inline-formula> (see the right-side inequality of (<xref ref-type="disp-formula" rid="FD133-entropy-21-01022">130</xref>)) as a function of the parameter <inline-formula><mml:math id="mm2249" display="block"><mml:semantics><mml:mrow><mml:mi>α</mml:mi><mml:mo>≥</mml:mo><mml:msup><mml:mi mathvariant="normal">e</mml:mi><mml:mrow><mml:mo>−</mml:mo><mml:mfrac><mml:mn>3</mml:mn><mml:mn>2</mml:mn></mml:mfrac></mml:mrow></mml:msup></mml:mrow></mml:semantics></mml:math></inline-formula>. The curves correspond to different values of <inline-formula><mml:math id="mm2250" display="block"><mml:semantics><mml:mi>ξ</mml:mi></mml:semantics></mml:math></inline-formula> in (<xref ref-type="disp-formula" rid="FD134-entropy-21-01022">131</xref>).</p>
Full article ">Figure 5
<p>A comparison of the maximal values of <inline-formula><mml:math id="mm2251" display="block"><mml:semantics><mml:mi>ρ</mml:mi></mml:semantics></mml:math></inline-formula> (minus 1) according to (<xref ref-type="disp-formula" rid="FD176-entropy-21-01022">171</xref>) and (<xref ref-type="disp-formula" rid="FD177-entropy-21-01022">172</xref>), asserting the satisfiability of the condition <inline-formula><mml:math id="mm2252" display="block"><mml:semantics><mml:mrow><mml:mi>D</mml:mi><mml:mo>(</mml:mo><mml:mi>Q</mml:mi><mml:mo>∥</mml:mo><mml:msub><mml:mi>U</mml:mi><mml:mi>n</mml:mi></mml:msub><mml:mo>)</mml:mo><mml:mo>≤</mml:mo><mml:mi>d</mml:mi><mml:mo form="prefix">log</mml:mo><mml:mi mathvariant="normal">e</mml:mi></mml:mrow></mml:semantics></mml:math></inline-formula>, with an arbitrary <inline-formula><mml:math id="mm2253" display="block"><mml:semantics><mml:mrow><mml:mi>d</mml:mi><mml:mo>&gt;</mml:mo><mml:mn>0</mml:mn></mml:mrow></mml:semantics></mml:math></inline-formula>, for all integers <inline-formula><mml:math id="mm2254" display="block"><mml:semantics><mml:mrow><mml:mi>n</mml:mi><mml:mo>≥</mml:mo><mml:mn>2</mml:mn></mml:mrow></mml:semantics></mml:math></inline-formula> and probability mass functions <italic>Q</italic> supported on <inline-formula><mml:math id="mm2255" display="block"><mml:semantics><mml:mrow><mml:mo>{</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>…</mml:mo><mml:mo>,</mml:mo><mml:mi>n</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:semantics></mml:math></inline-formula> with <inline-formula><mml:math id="mm2256" display="block"><mml:semantics><mml:mrow><mml:mstyle scriptlevel="0" displaystyle="false"><mml:mfrac><mml:msub><mml:mi>q</mml:mi><mml:mo movablelimits="true" form="prefix">max</mml:mo></mml:msub><mml:msub><mml:mi>q</mml:mi><mml:mo movablelimits="true" form="prefix">min</mml:mo></mml:msub></mml:mfrac></mml:mstyle><mml:mo>≤</mml:mo><mml:mi>ρ</mml:mi></mml:mrow></mml:semantics></mml:math></inline-formula>. The solid line refers to the necessary and sufficient condition which gives (<xref ref-type="disp-formula" rid="FD176-entropy-21-01022">171</xref>), and the dashed line refers to a stronger condition which gives (<xref ref-type="disp-formula" rid="FD177-entropy-21-01022">172</xref>).</p>
Full article ">Figure 6
<p>A comparison of the exact expression of <inline-formula><mml:math id="mm2257" display="block"><mml:semantics><mml:mrow><mml:mo>Φ</mml:mo><mml:mo>(</mml:mo><mml:mi>α</mml:mi><mml:mo>,</mml:mo><mml:mi>ρ</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:semantics></mml:math></inline-formula> in (175), with <inline-formula><mml:math id="mm2258" display="block"><mml:semantics><mml:mrow><mml:mi>α</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn></mml:mrow></mml:semantics></mml:math></inline-formula>, and its three upper bounds in the right sides of (<xref ref-type="disp-formula" rid="FD181-entropy-21-01022">176</xref>), (<xref ref-type="disp-formula" rid="FD182-entropy-21-01022">177</xref>) and (<xref ref-type="disp-formula" rid="FD185-entropy-21-01022">180</xref>) (called ’Upper bound 1’ (dotted line), ’Upper bound 2’ (thin dashed line), and ’Upper bound 3’ (thick dashed line), respectively).</p>
Full article ">Figure 7
<p>Curves of the upper bound on the measure <inline-formula><mml:math id="mm2259" display="block"><mml:semantics><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mrow><mml:mi>ω</mml:mi><mml:mo>,</mml:mo><mml:mi>n</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>P</mml:mi><mml:mi>ℓ</mml:mi></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:mrow></mml:semantics></mml:math></inline-formula> in (<xref ref-type="disp-formula" rid="FD242-entropy-21-01022">233</xref>), valid for all <inline-formula><mml:math id="mm2260" display="block"><mml:semantics><mml:mrow><mml:mi>n</mml:mi><mml:mo>∈</mml:mo><mml:mi mathvariant="double-struck">N</mml:mi></mml:mrow></mml:semantics></mml:math></inline-formula>, as a function of <inline-formula><mml:math id="mm2261" display="block"><mml:semantics><mml:mrow><mml:mi>ω</mml:mi><mml:mo>∈</mml:mo><mml:mo stretchy="false">[</mml:mo><mml:mn>0</mml:mn><mml:mo>,</mml:mo><mml:mn>1</mml:mn><mml:mo stretchy="false">]</mml:mo></mml:mrow></mml:semantics></mml:math></inline-formula> for different values of <inline-formula><mml:math id="mm2262" display="block"><mml:semantics><mml:mrow><mml:mi>ρ</mml:mi><mml:mo>:</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mn>1</mml:mn><mml:msub><mml:mi>p</mml:mi><mml:mo movablelimits="true" form="prefix">min</mml:mo></mml:msub></mml:mfrac></mml:mrow></mml:semantics></mml:math></inline-formula>.</p>
Full article ">Figure 8
<p>Curves for the smallest values of <inline-formula><mml:math id="mm2263" display="block"><mml:semantics><mml:msub><mml:mi>p</mml:mi><mml:mo movablelimits="true" form="prefix">min</mml:mo></mml:msub></mml:semantics></mml:math></inline-formula>, in the setup of Theorem 15, according to the condition in (<xref ref-type="disp-formula" rid="FD246-entropy-21-01022">236</xref>) (solid line) and the more restrictive condition in (<xref ref-type="disp-formula" rid="FD247-entropy-21-01022">237</xref>) (dashed line) for binary Tunstall codes which are used to compress memoryless and stationary binary sources.</p>
Full article ">
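The captions above involve product distributions induced by i.i.d. Bernoulli inputs sent over $n$ uses of a binary symmetric channel. As a rough illustration of that setting (not of the paper's $f_\alpha$-divergence, which is defined in the article itself), the sketch below forms the exact product output distributions for small $n$ and evaluates the Kullback–Leibler divergence, a standard member of the f-divergence family, before and after the channel; all function names are illustrative.

```python
import numpy as np
from itertools import product

def bsc_output(p, delta):
    """Output pmf of BSC(delta) fed with Bernoulli(p): P(Y=1) = p(1-delta) + (1-p)delta."""
    p1 = p * (1 - delta) + (1 - p) * delta
    return np.array([1 - p1, p1])

def product_pmf(marginal, n):
    """Exact pmf of n i.i.d. symbols (only feasible for small n, as in the plots)."""
    return np.array([np.prod([marginal[b] for b in bits])
                     for bits in product([0, 1], repeat=n)])

def kl_divergence(P, Q):
    """D(P || Q) in nats, a standard member of the f-divergence family."""
    mask = P > 0
    return float(np.sum(P[mask] * np.log(P[mask] / Q[mask])))

p, q, delta, n = 0.25, 0.5, 0.110, 10
PX = product_pmf(np.array([1 - p, p]), n)        # channel inputs under P
QX = product_pmf(np.array([1 - q, q]), n)        # channel inputs under Q
PY = product_pmf(bsc_output(p, delta), n)        # corresponding channel outputs
QY = product_pmf(bsc_output(q, delta), n)
print(kl_divergence(PY, QY) / kl_divergence(PX, QX))   # <= 1 by data processing
```

By the data-processing inequality the printed ratio cannot exceed one, which is the kind of contraction the plotted bounds quantify.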
21 pages, 967 KiB  
Article
Fast, Asymptotically Efficient, Recursive Estimation in a Riemannian Manifold
by Jialun Zhou and Salem Said
Entropy 2019, 21(10), 1021; https://doi.org/10.3390/e21101021 - 21 Oct 2019
Cited by 7 | Viewed by 2646
Abstract
Stochastic optimisation in Riemannian manifolds, especially the Riemannian stochastic gradient method, has attracted much recent attention. The present work applies stochastic optimisation to the task of recursive estimation of a statistical parameter which belongs to a Riemannian manifold. Roughly, this task amounts to stochastic minimisation of a statistical divergence function. The following problem is considered: how to obtain fast, asymptotically efficient, recursive estimates, using a Riemannian stochastic optimisation algorithm with decreasing step sizes. In solving this problem, several original results are introduced. First, without any convexity assumptions on the divergence function, we proved that, with an adequate choice of step sizes, the algorithm computes recursive estimates which achieve a fast non-asymptotic rate of convergence. Second, the asymptotic normality of these recursive estimates is proved by employing a novel linearisation technique. Third, it is proved that, when the Fisher information metric is used to guide the algorithm, these recursive estimates achieve an optimal asymptotic rate of convergence, in the sense that they become asymptotically efficient. These results, while relatively familiar in the Euclidean context, are here formulated and proved for the first time in the Riemannian context. In addition, they are illustrated with a numerical application to the recursive estimation of elliptically contoured distributions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
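As a rough, minimal illustration of the kind of algorithm studied here (not the paper's general construction), the sketch below runs a Riemannian stochastic gradient recursion with decreasing step sizes on the unit sphere, estimating a mean direction from noisy observations; the loss, the retraction and the step-size constant are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def retract(theta, v):
    """Step in the tangent space, then renormalise back onto the unit sphere."""
    w = theta + v
    return w / np.linalg.norm(w)

def riemannian_gradient(theta, x):
    """Riemannian gradient at theta of the loss f(theta) = 1 - <x, theta>:
    the Euclidean gradient projected onto the tangent space of the sphere."""
    g = -x
    return g - np.dot(g, theta) * theta

# Ground-truth direction and noisy unit-vector observations
true_theta = np.array([0.0, 0.0, 1.0])
def sample():
    x = true_theta + 0.3 * rng.standard_normal(3)
    return x / np.linalg.norm(x)

theta = np.array([1.0, 0.0, 0.0])            # arbitrary initial guess
for n in range(1, 5001):
    gamma = 1.0 / n                           # decreasing step sizes
    theta = retract(theta, -gamma * riemannian_gradient(theta, sample()))

print("angle to the true direction (rad):",
      np.arccos(np.clip(np.dot(theta, true_theta), -1.0, 1.0)))
```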
Show Figures

Figure 1. Fast non-asymptotic rate of convergence.

Figure 2. Asymptotic efficiency (optimal rate of convergence).
20 pages, 893 KiB  
Article
Quasi-Entropies and Non-Markovianity
by Fabio Benatti and Luigi Brancati
Entropy 2019, 21(10), 1020; https://doi.org/10.3390/e21101020 - 21 Oct 2019
Cited by 2 | Viewed by 3565
Abstract
We address an informational puzzle that appears with a non-Markovian open qubit dynamics: namely, according to the existing witnesses of information flows, a single qubit affected by this dissipative dynamics shows no information returning to it from its environment, whereas two qubits evolving independently under the same dynamics do show such a backflow of information. We solve the puzzle by adding the so-called quasi-entropies to the possible witnesses of information flows. Full article
(This article belongs to the Special Issue Quantum Entropies and Complexity)
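For readers unfamiliar with the witnesses mentioned above, the following sketch illustrates the standard trace-norm witness of information backflow on a toy dephasing qubit; the decoherence function is an arbitrary non-monotonic example and is not the dissipative dynamics considered in the paper.

```python
import numpy as np

def trace_norm(A):
    """||A||_1 = sum of singular values."""
    return float(np.sum(np.linalg.svd(A, compute_uv=False)))

def dephase(rho, q):
    """Pure-dephasing map: populations untouched, coherences multiplied by q(t)."""
    out = rho.copy()
    out[0, 1] *= q
    out[1, 0] *= q
    return out

# Two initially orthogonal states |+> and |->
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)

# A toy non-monotonic decoherence function (|q(t)| <= 1 keeps the map physical)
q = lambda t: np.exp(-t) * np.cos(2 * t)

ts = np.linspace(0.0, 4.0, 400)
D = np.array([trace_norm(dephase(plus, q(t)) - dephase(minus, q(t))) for t in ts])
print("information backflow witnessed:", bool(np.any(np.diff(D) > 1e-12)))
```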
Show Figures

Figure 1. Behaviour of the trace norm of the matrix (53), $N_t(s)$, for $t = 0.5$ and $\alpha = \{0.8, 0.9, 1.0, 1.1, 1.2\}$.

Figure 2. $\Delta_\gamma(t,s)$ (continuous lines) and $e_2(t,s)$ (dashed lines) for $r = 0.98$, $\gamma = 0.98$, $s = 0.1$ and $\alpha = \{0.51, 0.6\}$.
14 pages, 2908 KiB  
Article
State Clustering of the Hot Strip Rolling Process via Kernel Entropy Component Analysis and Weighted Cosine Distance
by Chaojun Wang and Fei He
Entropy 2019, 21(10), 1019; https://doi.org/10.3390/e21101019 - 21 Oct 2019
Cited by 4 | Viewed by 2543
Abstract
In the hot strip rolling process, many process parameters are related to the quality of the final products. Sometimes, the process parameters corresponding to different steel grades are close to, or even overlap, each other. In reality, locating overlap regions and detecting products with abnormal quality are crucial, yet challenging. To address this challenge, in this work, a novel method named kernel entropy component analysis (KECA)-weighted cosine distance is introduced for fault detection and overlap region locating. First, KECA is used to cluster the training samples of multiple steel grades, and the samples with incorrect classes are seen as the boundary of the sample distribution. Next, the concepts of recursive-based regional center and weighted cosine distance are introduced. For each steel grade, the regional center and the weight coefficients are determined. Finally, the weighted cosine distance between the testing sample and the regional center is chosen as the index to judge abnormal batches. The samples in the overlap region of multiple steel grades need to be focused on in the real production process, which is conducive to quality grade and combined production. The weighted cosine distances between the testing sample and different regional centers are used to locate the overlap region. A dataset from a hot steel rolling process is used to evaluate the performance of the proposed methods. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
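A minimal sketch of the two ingredients named in the abstract is given below: a KECA projection that keeps the kernel eigen-directions with the largest Renyi-entropy contribution, and a weighted cosine distance to a regional center. The kernel width, the uniform weights and the use of the projected mean as the center are placeholder assumptions; the paper determines the regional center recursively and estimates the weight coefficients from data.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def keca(X, n_components=2, sigma=1.0):
    """Kernel entropy component analysis: keep the kernel eigen-directions that
    contribute most to the Renyi entropy estimate lambda_i * (1^T e_i)^2."""
    K = rbf_kernel(X, sigma)
    lam, E = np.linalg.eigh(K)                    # ascending eigenvalues
    contrib = lam * (E.sum(axis=0) ** 2)          # entropy contribution per component
    idx = np.argsort(contrib)[::-1][:n_components]
    return E[:, idx] * np.sqrt(np.clip(lam[idx], 0, None))   # projected training samples

def weighted_cosine_distance(x, center, w):
    num = np.sum(w * x * center)
    den = np.sqrt(np.sum(w * x**2)) * np.sqrt(np.sum(w * center**2))
    return 1.0 - num / den

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                     # stand-in for one steel grade's process data
Z = keca(X, n_components=2, sigma=2.0)
center = Z.mean(axis=0)                           # placeholder for the recursive regional center
w = np.ones(Z.shape[1])                           # uniform weights as a placeholder
print(weighted_cosine_distance(Z[0], center, w))
```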
Show Figures

Figure 1. Comparison of two methods. (a) Schematic diagram of the traditional fault detection method; (b) schematic diagram of the recursive-based regional center.

Figure 2. Schematic diagram of data distribution for different steel types.

Figure 3. Flowchart of the proposed method.

Figure 4. Layout drawing of the hot rolling process.

Figure 5. The yield strength–elongation rate diagram.

Figure 6. The ROC criteria of two models. (a) The first steel grade; (b) the second steel grade; (c) the third steel grade.
18 pages, 1549 KiB  
Article
Complexity Synchronization of Energy Volatility Monotonous Persistence Duration Dynamics
by Linlu Jia, Jinchuan Ke and Jun Wang
Entropy 2019, 21(10), 1018; https://doi.org/10.3390/e21101018 - 20 Oct 2019
Viewed by 2318
Abstract
A new concept named volatility monotonous persistence duration (VMPD) dynamics is introduced into the research of energy markets, in an attempt to describe nonlinear fluctuation behaviors from a new perspective. The VMPD sequence unites the maximum fluctuation difference and the continuous variation length, and it is regarded as a novel indicator to evaluate risks and optimize portfolios. Further, two main aspects of statistical and nonlinear empirical research on the energy VMPD sequence are examined: its probability distribution and its autocorrelation behavior. Moreover, a new nonlinear method named the cross complexity-invariant distance (CID) FuzzyEn (CCF), composed of cross-fuzzy entropy and the complexity-invariant distance, is proposed for the first time to study the complexity synchronization properties of the returns and VMPD series of seven representative energy items. We also apply ensemble empirical mode decomposition (EEMD) to decompose the returns and VMPD sequences into intrinsic mode functions, and we investigate the degree to which these components follow the synchronization features of the initial sequences. Full article
(This article belongs to the Section Multidisciplinary Applications)
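One plausible reading of the VMPD construction, based on the description above (the duration of a monotone run of the volatility series together with its maximum fluctuation difference), is sketched below; the exact way the paper combines the two quantities into a single indicator may differ.

```python
import numpy as np

def vmpd_segments(returns):
    """Split the volatility series |R(t)| into maximal monotone runs and return,
    for each run, its duration I and its maximum fluctuation difference
    (an illustrative reading of the VMPD construction)."""
    v = np.abs(np.asarray(returns))
    segs = []
    start = 0
    direction = 0                      # +1 increasing, -1 decreasing, 0 undecided
    for t in range(1, len(v)):
        step = np.sign(v[t] - v[t - 1])
        if direction == 0:
            direction = step
        elif step != 0 and step != direction:   # monotonicity broken: close the run
            segs.append((t - 1 - start, abs(v[t - 1] - v[start])))
            start, direction = t - 1, step
    segs.append((len(v) - 1 - start, abs(v[-1] - v[start])))
    return np.array(segs)              # columns: duration I, amplitude Delta|R|_max

rng = np.random.default_rng(2)
R = rng.standard_normal(1000) * 0.01   # stand-in for daily energy returns
print(vmpd_segments(R)[:5])
```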
Show Figures

Figure 1. Illustrations of $I(t)$ and $\Delta|R(t)|_{\max}$ for crude oil WTI futures.

Figure 2. (a) The return series $R(t)$ of energy futures and spot data; (b) the VMPD series $V(t)$ of energy futures and spot data.

Figure 3. (a) Logarithmic plots of probability density functions by kernel density estimation of the return series $R(t)$; (b) logarithmic plots of probability density functions by kernel density estimation of the VMPD series $V(t)$.

Figure 4. (a) Plots and log-log plots of cumulative distributions of the absolute return series $|R(t)|$; (b) plots and log-log plots of cumulative distributions of the absolute VMPD series $|V(t)|$.

Figure 5. (a) Autocorrelation functions of the return series $R(t)$; (b) autocorrelation functions of the VMPD series $V(t)$.

Figure 6. Flow chart of the ensemble empirical mode decomposition algorithm.

Figure 7. (a) Decomposition results of $R(t)$ for WF with five separate IMFs by EEMD; (b) decomposition results of $V(t)$ for WTI futures with five separate IMFs by EEMD; (c) box plots of five separate IMFs of $R(t)$ for WF; (d) box plots of five separate IMFs of $V(t)$ for WF.

Figure 8. (a) CCF values of WF returns and WS returns with other energy items; (b) CCF values of BF returns and BS returns with other energy items; (c) CCF values of WF and WS for VMPD series with other energy items; (d) CCF values of BF and BS for VMPD series with other energy items.

Figure 9. (a) CCF values of IMF1-IMF2 for WF returns and WS returns with other energy items; (b) CCF values of IMF3-IMF5 for WF returns and WS returns with other energy items; (c) CCF values of IMF1-IMF2 of WF and WS for VMPD series with other energy items; (d) CCF values of IMF3-IMF5 of WF and WS for VMPD series with other energy items.
28 pages, 853 KiB  
Article
Coalescence of Kerr Black Holes—Binary Systems from GW150914 to GW170814
by Bogeun Gwak
Entropy 2019, 21(10), 1017; https://doi.org/10.3390/e21101017 - 20 Oct 2019
Cited by 4 | Viewed by 2695
Abstract
We investigate the energy of the gravitational wave from a binary black hole merger by the coalescence of two Kerr black holes with an orbital angular momentum. The coalescence is constructed to be consistent with particle absorption in the limit in which the primary black hole is sufficiently large compared with the secondary black hole. In this limit, we analytically obtain an effective gravitational spin–orbit interaction dependent on the alignments of the angular momenta. Then, binary systems with various parameters including equal masses are numerically analyzed. According to the numerical analysis, the energy of the gravitational wave still depends on the effective interactions, as expected from the analytical form. In particular, we confirm that the final black hole obtains a large portion of its spin angular momentum from the orbital angular momentum of the initial binary black hole. To estimate the angular momentum released by the gravitational wave in the actual binary black hole, we apply our results to observations at the Laser Interferometer Gravitational-Wave Observatory: GW150914, GW151226, GW170104, GW170608 and GW170814. Full article
(This article belongs to the Section Astrophysics, Cosmology, and Black Holes)
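As a back-of-the-envelope companion to the LIGO application, the following lines do the mass-energy bookkeeping for GW150914 using the values quoted in the figure captions below; equating the radiated fraction with $E_{\mathrm{gw}}/(M_1+M_2)$ and using geometric units for the final spin are simplifying assumptions.

```python
# Mass-energy bookkeeping for GW150914 with the values quoted in the figure
# captions below (M1 = 36.3, M2 = 28.6, Mf = 62.0 solar masses, af/Mf = 0.67).
# Geometric units (G = c = 1) are assumed for the final spin.
M1, M2, Mf = 36.3, 28.6, 62.0        # solar masses
af_over_Mf = 0.67

E_gw = M1 + M2 - Mf                   # energy carried away by the gravitational wave
fraction = E_gw / (M1 + M2)           # fraction of the initial total mass radiated away
Jf = af_over_Mf * Mf**2               # final spin angular momentum, in units of Msun^2

print(f"E_gw = {E_gw:.1f} Msun, radiated fraction = {fraction:.3f}, Jf = {Jf:.1f} Msun^2")
```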
Show Figures

Figure 1. The energy of the gravitational wave about $a_2$ for $M_1 = 10\,M_\odot$, $M_2 = 10\,M_\odot$, $\psi = 0$ and $L_{\mathrm{orb}} = 0$.

Figure 2. The energy of the gravitational wave with respect to $\psi$ for $M_1 = 10\,M_\odot$, $M_2 = 10\,M_\odot$ and $L_{\mathrm{orb}} = 0$.

Figure 3. $\epsilon_{\mathrm{gw,rot}}$ and $\epsilon_{\mathrm{f,rot}}$ with respect to $a_2$ for $M_1 = 10\,M_\odot$, $M_2 = 10\,M_\odot$, $\psi = 0$ and $L_{\mathrm{orb}} = 0$.

Figure 4. The energy of the gravitational wave for $M_1 = 10\,M_\odot$, $a_1 = 5\,M_\odot$, $\psi = 0$ and $L_{\mathrm{orb}} = 0$.

Figure 5. The energy of the gravitational wave for $M_1 = 10\,M_\odot$, $a_2 = 5\,M_\odot$, $\psi = 0$ and $L_{\mathrm{orb}} = 0$.

Figure 6. The rotational energy of the final black hole for $M_1 = 10\,M_\odot$, $M_2 = 10\,M_\odot$, $a_1 = 5\,M_\odot$ and $\psi = 0$.

Figure 7. The rotational energy of the final black hole for $M_1 = 10\,M_\odot$, $M_2 = 10\,M_\odot$, $a_1 = 5\,M_\odot$ and $\psi = 0$.

Figure 8. The released ratio $\epsilon$ for $M_1 = 10\,M_\odot$, $M_2 = 10\,M_\odot$, $a_2 = 0$ and $\psi = 0$.

Figure 9. The released ratio $\epsilon$ for $M_1 = 10\,M_\odot$, $M_2 = 10\,M_\odot$ and $\psi = 0$.

Figure 10. Spin and orbital parameter with respect to $\chi_{\mathrm{eff}}$ for $M_1 = 36.3\,M_\odot$, $M_2 = 28.6\,M_\odot$, $M_{\mathrm{f}} = 62.0\,M_\odot$ and $a_{\mathrm{f}}/M_{\mathrm{f}} = 0.67$ with $a_1 \geq 0$.

Figure 11. Spin parameter with respect to $\chi_{\mathrm{eff}}$ for $M_1 = 36.3\,M_\odot$, $M_2 = 28.6\,M_\odot$, $M_{\mathrm{f}} = 62.0\,M_\odot$ and $a_{\mathrm{f}}/M_{\mathrm{f}} = 0.67$ with $a_1 < 0$.
14 pages, 994 KiB  
Article
Optimized Dimensionality Reduction Methods for Interval-Valued Variables and Their Application to Facial Recognition
by Jorge Arce Garro and Oldemar Rodríguez Rojas
Entropy 2019, 21(10), 1016; https://doi.org/10.3390/e21101016 - 19 Oct 2019
Cited by 2 | Viewed by 3113
Abstract
The center method, which was first proposed in Rev. Stat. Appl. 1997 by Cazes et al. and Stat. Anal. Data Mining 2011 by Douzal-Chouakria et al., extends the well-known Principal Component Analysis (PCA) method to particular types of symbolic objects that are characterized by multivalued interval-type variables. In contrast to classical data, symbolic data have internal variation. The authors who originally proposed the center method used the center of a hyper-rectangle in $\mathbb{R}^m$ as a base point to carry out PCA, followed by the projection of all vertices of the hyper-rectangles as supplementary elements. Since these publications, the center point of the hyper-rectangle has typically been assumed to be the best point for the initial PCA. However, in this paper, we show that this is not always the case, if the aim is to maximize the variance of projections or minimize the squared distance between the vertices and their respective projections. Instead, we propose the use of an optimization algorithm that maximizes the variance of the projections (or that minimizes the squared distances between the vertices and their respective projections) and finds the optimal point for the initial PCA. The vertices of the hyper-rectangles are then projected as supplementary variables to this optimal point, which we call the “Best Point” for projection. For this purpose, we propose four new algorithms and two new theorems. The proposed methods and algorithms are illustrated using a data set comprised of measurements of facial characteristics from a study on facial recognition patterns for use in surveillance. The performance of our approach is compared with that of another procedure in the literature, and the results show that our symbolic analyses provide more accurate information. Our approach can be regarded as an optimization method, as it maximizes the explained variance or minimizes the squared distance between projections and the original points. In addition, the symbolic analyses generate more informative conclusions, compared with the classical analysis in which classical surrogates replace intervals. All the methods proposed in this paper can be executed in the RSDA package developed in R. Full article
(This article belongs to the Special Issue Symbolic Entropy Analysis and Its Applications II)
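A minimal sketch of the center method is shown below: classical PCA on a base point of each hyper-rectangle followed by the projection of all vertices as supplementary elements. The default base point is the midpoint; the paper's contribution is precisely to optimize this base point, which could be plugged in through the `base` argument. Data and dimensions are illustrative.

```python
import numpy as np
from itertools import product

def center_method_pca(lower, upper, base=None):
    """Center-method PCA for interval data: run classical PCA on a base point of
    each hyper-rectangle (the midpoint by default), then project every vertex of
    every hyper-rectangle as a supplementary element."""
    mid = (lower + upper) / 2 if base is None else base
    Xc = mid - mid.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)    # principal axes (rows of Vt)
    axes = Vt[:2].T                                      # first two components
    scores = Xc @ axes
    m = lower.shape[1]
    vertex_scores = []
    for lo, hi in zip(lower, upper):
        verts = np.array([[hi[j] if b else lo[j] for j, b in enumerate(bits)]
                          for bits in product([0, 1], repeat=m)])
        vertex_scores.append((verts - mid.mean(axis=0)) @ axes)
    return scores, vertex_scores

rng = np.random.default_rng(3)
centers = rng.normal(size=(20, 3))                       # stand-in for facial measurements
half = np.abs(rng.normal(scale=0.2, size=(20, 3)))
scores, vertex_scores = center_method_pca(centers - half, centers + half)
print(scores.shape, vertex_scores[0].shape)              # (20, 2) (8, 2)
```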
Show Figures

Figure 1. Random variables for facial description.

Figure 2. Principal component analysis (PCA) comparison.
13 pages, 261 KiB  
Article
An Entropy-Based Machine Learning Algorithm for Combining Macroeconomic Forecasts
by Carles Bretó, Priscila Espinosa, Penélope Hernández and Jose M. Pavía
Entropy 2019, 21(10), 1015; https://doi.org/10.3390/e21101015 - 19 Oct 2019
Cited by 6 | Viewed by 3673
Abstract
This paper applies a Machine Learning approach with the aim of providing a single aggregated prediction from a set of individual predictions. Departing from the well-known maximum-entropy inference methodology, we introduce a new factor that captures the distance between the true and the estimated aggregated predictions, which poses a new estimation problem. Algorithms such as ridge, lasso or elastic net help in finding a new methodology to tackle this issue. We carry out a simulation study to evaluate the performance of the procedure and apply it to forecast, and to measure predictive ability on, a dataset of predictions of Spanish gross domestic product. Full article
(This article belongs to the Special Issue Entropy Application for Forecasting)
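As a rough illustration of one of the ingredients mentioned in the abstract, the sketch below combines a panel of individual forecasts with closed-form ridge weights; the simulated panel, the penalty value and the absence of the maximum-entropy distance factor are simplifying assumptions, so this is not the paper's estimator.

```python
import numpy as np

def ridge_combination_weights(F, y, lam=1.0):
    """Closed-form ridge weights for combining a panel of forecasts F (T x K)
    of a target y (length T) into a single aggregated prediction."""
    K = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(K), F.T @ y)

rng = np.random.default_rng(4)
T, K = 60, 8                             # 60 quarters, 8 individual forecasters
y = rng.normal(2.0, 1.0, size=T)         # stand-in for GDP growth
F = y[:, None] + rng.normal(0, 0.5, size=(T, K))   # noisy individual predictions

w = ridge_combination_weights(F[:-1], y[:-1], lam=2.0)   # fit on the past
aggregated = F[-1] @ w                                   # combine the latest forecasts
print("weights:", np.round(w, 3), " aggregated forecast:", round(float(aggregated), 3))
```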
11 pages, 2071 KiB  
Article
Predicting Premature Video Skipping and Viewer Interest from EEG Recordings
by Arno Libert and Marc M. Van Hulle
Entropy 2019, 21(10), 1014; https://doi.org/10.3390/e21101014 - 19 Oct 2019
Cited by 13 | Viewed by 4149
Abstract
Brain–computer interfacing has enjoyed growing attention, not only due to the stunning demonstrations with severely disabled patients, but also the advent of economically viable solutions in areas such as neuromarketing, mental state monitoring, and future human–machine interaction. An interesting case, at least for neuromarketers, is to monitor the customer’s mental state in response to watching a commercial. In this paper, as a novelty, we propose a method to predict from electroencephalography (EEG) recordings whether individuals decide to skip watching a video trailer. Based on multiscale sample entropy and signal power, indices were computed that gauge the viewer’s engagement and emotional affect. We then trained a support vector machine (SVM), a k-nearest neighbor (kNN), and a random forest (RF) classifier to predict whether the viewer declares interest in watching the video and whether he/she decides to skip it prematurely. Our model achieved an average single-subject classification accuracy of 75.803% for skipping and 73.3% for viewer interest for the SVM, 82.223% for skipping and 78.333% for viewer interest for the kNN, and 80.003% for skipping and 75.555% for interest for the RF. We conclude that EEG can provide indications of viewer interest and skipping behavior and provide directions for future research. Full article
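A minimal sketch of the feature pipeline described above, assuming a standard multiscale sample entropy (coarse-graining followed by SampEn) and an off-the-shelf scikit-learn SVM, is given below; the synthetic epochs, scales and tolerance are illustrative, and the paper's engagement and affect indices are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    N = len(x)
    def matches(mm):
        emb = np.array([x[i:i + mm] for i in range(N - m)])   # same template count for m and m+1
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(emb)) / 2.0              # unordered pairs, self-matches excluded
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_sample_entropy(x, scales=(1, 2, 3, 4, 5)):
    """Coarse-grain the signal at each scale and compute SampEn per scale."""
    feats = []
    for s in scales:
        n = (len(x) // s) * s
        feats.append(sample_entropy(x[:n].reshape(-1, s).mean(axis=1)))
    return np.array(feats)

def toy_epoch(label, n=600):
    """Stand-in for a band-filtered EEG epoch: white noise for one class,
    lightly smoothed noise (lower irregularity) for the other."""
    e = rng.standard_normal(n)
    return e if label == 1 else np.convolve(e, np.ones(5) / 5, mode="same")

labels = rng.integers(0, 2, size=40)
X = np.array([multiscale_sample_entropy(toy_epoch(lab)) for lab in labels])
clf = SVC(kernel="rbf").fit(X[:30], labels[:30])
print("held-out accuracy:", clf.score(X[30:], labels[30:]))
```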
Show Figures

Figure 1. Overview of the processing pipeline. Note that each step within the black outline applies to each frequency band. Abbreviations: MSE, multiscale version of sample entropy; SVM, support vector machine; kNN, k-nearest neighbor; EEG, electroencephalography.

Figure 2. Boxplots summarizing accuracies of skipping prediction using individual features for the SVM (red), kNN (blue), and RF (green) classifiers. Note: outliers correspond to the distribution in its column.

Figure 3. As in Figure 2, but for interest prediction.

Figure 4. Difference between features in skipped and non-skipped cases. (a) RCVB, (b) RCVM, (c) RCVT, (d) REEI, (e) REAI, and (f) REVI.

Figure 5. Idem to Figure 4, but for interest and not-interest cases. (a) RCVB, (b) RCVM, (c) RCVT, (d) REEI, (e) REAI, and (f) REVI.
19 pages, 1167 KiB  
Article
Permutation Entropy: Enhancing Discriminating Power by Using Relative Frequencies Vector of Ordinal Patterns Instead of Their Shannon Entropy
by David Cuesta-Frau, Antonio Molina-Picó, Borja Vargas and Paula González
Entropy 2019, 21(10), 1013; https://doi.org/10.3390/e21101013 - 18 Oct 2019
Cited by 11 | Viewed by 2920
Abstract
Many measures to quantify the nonlinear dynamics of a time series are based on estimating the probability of certain features from their relative frequencies. Once a normalised histogram of events is computed, a single result is usually derived. This process can be broadly viewed as a nonlinear mapping from $\mathbb{R}^n$ into $\mathbb{R}$, where $n$ is the number of bins in the histogram. However, this mapping might entail a loss of information that could be critical for time series classification purposes. In this respect, the present study assessed this impact using permutation entropy (PE) and a diverse set of time series. We first devised a method of generating synthetic sequences of ordinal patterns using hidden Markov models. This way, it was possible to control the histogram distribution and quantify its influence on classification results. Next, real body temperature records are also used to illustrate the same phenomenon. The experimental results confirmed the improved classification accuracy achieved using raw histogram data instead of the PE final values. Thus, this study can provide very valuable guidance for the improvement of the discriminating capability not only of PE, but of many similar histogram-based measures. Full article
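The point of the paper is easy to state in code: keep the whole vector of ordinal-pattern relative frequencies as the feature instead of collapsing it to its Shannon entropy. A minimal sketch (with an illustrative random series standing in for a temperature record) follows.

```python
import numpy as np
from itertools import permutations
from math import factorial

def ordinal_pattern_frequencies(x, m=3, tau=1):
    """Relative frequencies of the m! ordinal patterns of embedding dimension m."""
    index = {p: i for i, p in enumerate(permutations(range(m)))}
    counts = np.zeros(factorial(m))
    for t in range(len(x) - (m - 1) * tau):
        window = x[t:t + (m - 1) * tau + 1:tau]
        counts[index[tuple(map(int, np.argsort(window)))]] += 1
    return counts / counts.sum()

def permutation_entropy(freqs):
    """Normalised Shannon entropy of the ordinal-pattern histogram (the usual PE)."""
    p = freqs[freqs > 0]
    return float(-np.sum(p * np.log(p)) / np.log(len(freqs)))

rng = np.random.default_rng(6)
x = rng.standard_normal(2000)                    # illustrative series
h = ordinal_pattern_frequencies(x, m=3)
print("feature vector:", np.round(h, 3))         # use this full vector for classification
print("collapsed PE  :", round(permutation_entropy(h), 3))
```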
Show Figures

Figure 1. Examples of histograms from well-known synthetic time series. Some features of these series are clearly reflected in the histograms, with a clear correlation between the distribution of the motifs and the determinism degree of the records.

Figure 2. Basic Markov models for consecutive ordinal patterns in a time series for $m = 3$.

Figure 3. Example of a synthetic record generated using an unbiased HMM and its resulting uniform motif relative frequencies.

Figure 4. Example of a synthetic record generated using a biased version of the Markov model and its resulting nonuniform motif relative frequencies.

Figure 5. Example of records from the two classes of body temperature data. Top: pathological. Bottom: control.

Figure 6. Equivalent histograms from the PE point of view despite clear differences in ordinal pattern distribution. When the estimated probability is computed, the PE measure is more or less the same, since the differences in bin locations are lost; only amplitudes matter.

Figure 7. Contrary to the case depicted in Figure 6, differences in ordinal pattern distribution are supported by differences in histogram amplitudes, which results in significant PE differences as well.
19 pages, 14074 KiB  
Article
Communication Enhancement through Quantum Coherent Control of N Channels in an Indefinite Causal-Order Scenario
by Lorenzo M. Procopio, Francisco Delgado, Marco Enríquez, Nadia Belabas and Juan Ariel Levenson
Entropy 2019, 21(10), 1012; https://doi.org/10.3390/e21101012 - 18 Oct 2019
Cited by 53 | Viewed by 4692
Abstract
In quantum Shannon theory, the transmission of information is enhanced by quantum features. Until very recently, the trajectories of transmission remained fully classical. A new paradigm was then proposed by playing quantum tricks on two completely depolarizing quantum channels, i.e., using coherent control in space or time of the two quantum channels. Here, we extend this control to the transmission of information through a network of an arbitrary number N of channels with arbitrary individual capacity (i.e., information preservation characteristics) in an indefinite causal-order scenario. We propose a formalism to assess information transmission in the most general case of N channels in an indefinite causal order, yielding the output of such transmission. We then explicitly derive the quantum switch output and the associated Holevo limit of the information transmission for N = 2 and N = 3 as a function of all the parameters involved. We find that, for N = 3, the transmission of information through three channels is twice that of the two-channel case when a full superposition of all possible causal orders is used. Full article
(This article belongs to the Special Issue Quantum Information Revolution: Impact to Foundations)
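A minimal numerical sketch of the N = 2 case is given below: two completely depolarizing qubit channels (the $q_i$ chosen so that each channel alone destroys all information) combined by the quantum 2-switch with the control in $|+\rangle$. The Pauli Kraus decomposition and the trace-distance check are standard; the paper's Holevo-quantity analysis for general N and arbitrary channel strengths is not reproduced here.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [s / 2 for s in (I2, X, Y, Z)]       # completely depolarizing qubit channel

def switch_output(rho, rho_c):
    """Quantum 2-switch output for two completely depolarizing channels:
    sum_ij W_ij (rho ⊗ rho_c) W_ij^†, with
    W_ij = K_i K_j ⊗ |1><1|_c + K_j K_i ⊗ |2><2|_c."""
    P1 = np.diag([1, 0]).astype(complex)
    P2 = np.diag([0, 1]).astype(complex)
    out = np.zeros((4, 4), dtype=complex)
    for Ki in kraus:
        for Kj in kraus:
            W = np.kron(Ki @ Kj, P1) + np.kron(Kj @ Ki, P2)
            out += W @ np.kron(rho, rho_c) @ W.conj().T
    return out

plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # control in |+><+|
rho0 = np.diag([1, 0]).astype(complex)                   # target |0><0|
rho1 = np.diag([0, 1]).astype(complex)                   # target |1><1|

diff = switch_output(rho0, plus) - switch_output(rho1, plus)
print("trace distance between the two outputs:",
      0.5 * np.sum(np.linalg.svd(diff, compute_uv=False)))
```

The printed trace distance is strictly positive, so distinct inputs remain distinguishable at the output even though each channel on its own is completely depolarizing.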
Show Figures

Figure 1. Concept of the quantum 2-switch. $\mathcal{N}_i = \mathcal{N}_{q_i}^{D}$ is a depolarizing channel applied to the quantum state $\rho$, where $1 - q_i$ is the strength of the depolarization. For two channels, depending on the control system $\rho_c$, there are 2! possibilities to combine the channels with definite causal order: (a) if $\rho_c$ is in the state $|1\rangle\langle 1|$, the causal order will be $\mathcal{N}_2 \circ \mathcal{N}_1$, i.e., $\mathcal{N}_1$ acts before $\mathcal{N}_2$; (b) on the other hand, if $\rho_c$ is in the state $|2\rangle\langle 2|$, the causal order will be $\mathcal{N}_1 \circ \mathcal{N}_2$; (c) however, placing $\rho_c$ in a superposition of its states (i.e., $\rho_c = |+\rangle\langle +|$, where $|+\rangle_c = \tfrac{1}{\sqrt{2}}(|1\rangle + |2\rangle)$) causes the causal order of $\mathcal{N}_1$ and $\mathcal{N}_2$ to become indefinite. In this situation, we say that the quantum channels are in a superposition of causal orders. This device is called a quantum 2-switch [6], whose input and output are $\rho \otimes \rho_c$ and $\mathcal{S}(\mathcal{N}_1, \mathcal{N}_2)(\rho \otimes \rho_c)$, respectively.

Figure 2. Concept of the quantum 3-switch. For three channels, depending on $\rho_c$, there are 3! possibilities to combine the channels in a definite causal order: (a) $\rho_c = |1\rangle\langle 1|$ encodes the causal order $\mathcal{N}_1 \circ \mathcal{N}_2 \circ \mathcal{N}_3$, i.e., $\mathcal{N}_3$ is applied first to $\rho$; (b) $\rho_c = |2\rangle\langle 2|$ encodes $\mathcal{N}_1 \circ \mathcal{N}_3 \circ \mathcal{N}_2$; (c) $\rho_c = |3\rangle\langle 3|$ encodes $\mathcal{N}_2 \circ \mathcal{N}_1 \circ \mathcal{N}_3$; (d) $\rho_c = |4\rangle\langle 4|$ encodes $\mathcal{N}_2 \circ \mathcal{N}_3 \circ \mathcal{N}_1$; (e) $\rho_c = |5\rangle\langle 5|$ encodes $\mathcal{N}_3 \circ \mathcal{N}_1 \circ \mathcal{N}_2$; (f) $\rho_c = |6\rangle\langle 6|$ encodes $\mathcal{N}_3 \circ \mathcal{N}_2 \circ \mathcal{N}_1$; (g) finally, if $\rho_c = |+\rangle\langle +|$, where $|+\rangle = \tfrac{1}{\sqrt{6}}\sum_{k=1}^{6} |k\rangle$, we have a superposition of six different causal orders. This indefinite causal order is called the quantum 3-switch, whose input and output are $\rho \otimes \rho_c$ and $\mathcal{S}(\mathcal{N}_1, \mathcal{N}_2, \mathcal{N}_3)(\rho \otimes \rho_c)$, respectively. Notice that, for each superposition with $m$ different causal orders, there are $\binom{N!}{m}$ (with $m = 1, 2, \ldots, 6$) possible combinations of causal orders to build such a superposition with $N = 3$ channels, where $\binom{n}{r} = \tfrac{n!}{r!\,(n-r)!}$ is the binomial coefficient. The input and output of each channel are fixed. The arrows along the wire simply indicate that the target system enters or exits the channel.

Figure 3. Entropy map for two noisy channels. The 3D graphs represent contour surfaces of the von Neumann entropy $H^{\min}(\mathcal{S}_2)$ when the depolarizing parameters $q_1, q_2$ and the probabilities $P_1 = P_2 = p$ are varied from 0 to 1. We plot several cases where the dimension $d$ of the target $\rho$ is: (a) $d = 2$; (b) $d = 3$; (c) $d = 10$; and (d) $d = 100$. The value of $H^{\min}(\mathcal{S}_2)$ is also indicated by the color bar.
Full article ">Figure 4
<p>Transmission map of information for two noisy channels.The 3D graphs represent contour surfaces of the Holevo information <math display="inline"><semantics> <msub> <mi>χ</mi> <mrow> <mi mathvariant="normal">Q</mi> <mn>2</mn> <mi mathvariant="normal">S</mi> </mrow> </msub> </semantics></math> when the depolarising parameters <math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>q</mi> <mn>2</mn> </msub> </mrow> </semantics></math> and the probabilities <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>P</mi> <mn>2</mn> </msub> <mo>=</mo> <mi>p</mi> </mrow> </semantics></math> varied from 0 to 1. We plot several cases for the dimension <span class="html-italic">d</span> of the target system: (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>, (<b>c</b>) <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> and (<b>d</b>) <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>. In all these cases there are thirty contour surfaces of <math display="inline"><semantics> <msub> <mi>χ</mi> <mrow> <mi mathvariant="normal">Q</mi> <mn>2</mn> <mi mathvariant="normal">S</mi> </mrow> </msub> </semantics></math>. The values of <math display="inline"><semantics> <msub> <mi>χ</mi> <mrow> <mi mathvariant="normal">Q</mi> <mn>2</mn> <mi mathvariant="normal">S</mi> </mrow> </msub> </semantics></math> are shown in the color bars.</p>
Full article ">Figure 5
<p>Transmission of information for <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> channels. Holevo information as a function of the depolarization strengths <math display="inline"><semantics> <msub> <mi>q</mi> <mi>i</mi> </msub> </semantics></math> of the channels. We plot the subcases of equal depolarization strengths, i.e., <math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>q</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>q</mi> <mn>3</mn> </msub> <mo>=</mo> <mi>q</mi> </mrow> </semantics></math>, with equally weighted probabilities <math display="inline"><semantics> <msub> <mi>P</mi> <mi>k</mi> </msub> </semantics></math> for indefinite causal orders (solid line) with (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> and (<b>b</b>) <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> channels. The transmission of information first decreases to a minimal value for Holevo information and then the transmission of information increases with <span class="html-italic">q</span>. For completely depolarizing channels, i.e., <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, the transmission of information is nonzero and decreases as <span class="html-italic">d</span> increases. A comparison is shown between the Holevo information when the channels are in a definite causal order (dashed line). A full superposition of <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>!</mo> </mrow> </semantics></math> causal orders is used.</p>
Full article ">
12 pages, 1321 KiB  
Article
Performance Improvement of Underwater Continuous-Variable Quantum Key Distribution via Photon Subtraction
by Qingquan Peng, Guojun Chen, Xuan Li, Qin Liao and Ying Guo
Entropy 2019, 21(10), 1011; https://doi.org/10.3390/e21101011 - 17 Oct 2019
Cited by 14 | Viewed by 3106
Abstract
Considering that the optical attenuation of ocean water is significantly larger than that of an optical fiber channel, we propose an approach to enhance the security of underwater continuous-variable quantum key distribution (CVQKD). In particular, a photon subtraction operation is performed at the emitter to enhance quantum entanglement, thereby improving the underwater transmission performance of the CVQKD. Simulation results show that the photon subtraction operation can effectively improve the performance of CVQKD in terms of underwater transmission distance. We also compare the performance of the proposed protocol in different water qualities, which shows the advantage of our protocol against water deterioration. Therefore, we provide a suitable scheme for establishing secure communication between submarines and submersible vehicles. Full article
(This article belongs to the Collection Quantum Information)
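As a rough orientation to the loss figures behind this abstract, the sketch below estimates the transmittance of a pure-seawater link from Beer–Lambert attenuation, T_c = exp(−c·L), using the attenuation coefficient c ≈ 0.043 m^-1 quoted for pure seawater in Figure 7 below. This is an illustrative assumption only, not the authors' channel model, which also accounts for beam spread (Figure 3) and excess noise.

    # Illustrative sketch (not the authors' model): Beer-Lambert attenuation
    # of a pure-seawater quantum channel, with c ~ 0.043 1/m as in Figure 7.
    import math

    def seawater_transmittance(distance_m: float, c: float = 0.043) -> float:
        """Channel transmittance T_c = exp(-c * L) for a path length L in metres."""
        return math.exp(-c * distance_m)

    def loss_db(distance_m: float, c: float = 0.043) -> float:
        """Channel loss in dB, for comparison with fibre links (~0.2 dB/km)."""
        return -10.0 * math.log10(seawater_transmittance(distance_m, c))

    for L in (1, 10, 50, 100):
        print(f"L = {L:4d} m: T_c = {seawater_transmittance(L):.3f}, loss = {loss_db(L):.1f} dB")

Even a few tens of metres of seawater already costs several dB, which is why the abstract emphasizes extending the usable transmission distance.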
Show Figures

Figure 1. Underwater CVQKD via photon subtraction: model diagram. Het: heterodyne detection; Hom: homodyne detection; PS: photon subtraction; BS: beam splitter; T_c, ξ: channel parameters.

Figure 2. The EB scheme of CVQKD via the photon subtraction operation. BS1, BS2: beam splitters; T, η: transmittances of BS1 and BS2; T_c, ξ: channel parameters.

Figure 3. Beam-spread model diagram. D_src: photon emission source; D_rec: photon receiver.

Figure 4. Simulating Bob receiving photons. (a,d) 0-m pure sea water; (b,e) 6-m pure sea water; (c,f) 12-m pure sea water; (d) 12-m pure sea water; (g) photon intensity level.

Figure 5. Clear ocean water attenuation as a function of wavelength.

Figure 6. The success probability of subtracting k photons. The variance of the TMSV is V = 10.

Figure 7. The secret key rate of the CVQKD under pure seawater via k-photon subtraction. Parameters are set to V = 10, ξ = 0.04, β = 0.96 and C_c = 0.043.

Figure 8. The secret key rate of the CVQKD under pure seawater. (a) One-photon subtraction operation; (b) two-photon subtraction operation.

Figure 9. The secret key rate for the underwater CVQKD via one-photon subtraction.
10 pages, 294 KiB  
Article
Residual Predictive Information Flow in the Tight Coupling Limit: Analytic Insights from a Minimalistic Model
by Benjamin Wahl, Ulrike Feudel, Jaroslav Hlinka, Matthias Wächter, Joachim Peinke and Jan A. Freund
Entropy 2019, 21(10), 1010; https://doi.org/10.3390/e21101010 - 17 Oct 2019
Cited by 1 | Viewed by 3709
Abstract
In a coupled system, predictive information flows from the causing to the caused variable. The amount of transferred predictive information can be quantified through the use of transfer entropy or, for Gaussian variables, equivalently via Granger causality. It is natural to expect, and has been repeatedly observed, that a tight coupling does not permit the reconstruction of a causal connection between the causing and caused variables. Here, we show that for a model of interacting social groups, carried from the master equation to the Fokker–Planck level, a residual predictive information flow can remain for a pair of uni-directionally coupled variables even in the limit of infinite coupling strength. We trace this phenomenon back to the question of how the synchronizing force and the noise strength scale with the coupling strength. A simplified model description allows us to derive analytic expressions that fully elucidate the interplay between the deterministic and stochastic model parts. Full article
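For readers comparing the quantities used here, the sketch below estimates Granger causality for a uni-directionally coupled pair of AR(1) processes as GC = ln(σ²_restricted/σ²_full), the log-ratio of residual variances of the restricted and full regressions; for jointly Gaussian variables this equals twice the transfer entropy. It is a generic illustration with assumed toy parameters, not the authors' minimalistic model or their Fokker–Planck-based analysis.

    # Generic illustration: Granger causality x -> y from residual variances.
    # For Gaussian processes, transfer entropy TE = GC / 2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, mu = 20000, 0.5              # sample length and coupling strength (toy values)
    x = np.zeros(n); y = np.zeros(n)
    for t in range(1, n):           # x drives y uni-directionally
        x[t] = 0.8 * x[t - 1] + rng.normal()
        y[t] = 0.6 * y[t - 1] + mu * x[t - 1] + rng.normal()

    def residual_var(target, regressors):
        """Least-squares residual variance of target regressed on regressors."""
        beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
        return np.var(target - regressors @ beta)

    yt, y1, x1 = y[1:], y[:-1, None], x[:-1, None]
    var_restricted = residual_var(yt, y1)                    # past of y only
    var_full = residual_var(yt, np.hstack([y1, x1]))         # past of y and x
    gc_x_to_y = np.log(var_restricted / var_full)
    print(f"GC(x->y) = {gc_x_to_y:.3f} nats, TE = {gc_x_to_y / 2:.3f} nats")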
Show Figures

Figure 1. Dependence of the Granger causality GC on the coupling parameter μ: circles, numerical simulation results of the Langevin equation based on the drift and diffusion coefficients (Equations (6) and (7), respectively); crosses, GC of the global VAR(1) (Equation (8)) with average noise; solid line, analytic GC (Equation (20)) of the minimalistic model.

Figure 2. Cross-correlations for couplings μ = 0, 1/8, 1/4, 1/2, 3/4, 1, …, 9, ∞ (bottom to top for all non-positive lags) and various lags τ.
20 pages, 1632 KiB  
Article
A Comparison of the Maximum Entropy Principle Across Biological Spatial Scales
by Rodrigo Cofré, Rubén Herzog, Derek Corcoran and Fernando E. Rosas
Entropy 2019, 21(10), 1009; https://doi.org/10.3390/e21101009 - 16 Oct 2019
Cited by 13 | Viewed by 4635
Abstract
Despite their differences, biological systems at different spatial scales tend to exhibit common organizational patterns. Unfortunately, these commonalities are often hard to grasp due to the highly specialized nature of modern science and the parcelled terminology employed by various scientific sub-disciplines. To explore these common organizational features, this paper provides a comparative study of diverse applications of the maximum entropy principle, which has found many uses at different biological spatial scales ranging from amino acids up to societies. By presenting these studies under a common approach and language, this paper aims to establish a unified view over these seemingly highly heterogeneous scenarios. Full article
(This article belongs to the Special Issue Information Theory Applications in Biology)
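Although the systems surveyed in this paper range from proteins to courts, each application reduces to the same computation: find the distribution p(x) ∝ exp(Σ_k λ_k f_k(x)) whose expectations of the chosen observables f_k match the measured averages. The sketch below solves this convex dual problem for a toy three-unit binary system; it is a generic illustration of the principle, with made-up target averages, and is not code from any of the reviewed studies.

    # Generic maximum-entropy fit: tune Lagrange multipliers so that the
    # exponential-family distribution reproduces the observed feature averages.
    import numpy as np
    from scipy.optimize import minimize

    states = np.array([[int(b) for b in f"{s:03b}"] for s in range(8)])   # 3 binary units
    features = np.hstack([states,                                         # means <x_i>
                          states[:, [0]] * states[:, [1]],                # pairwise <x_i x_j>
                          states[:, [0]] * states[:, [2]],
                          states[:, [1]] * states[:, [2]]])
    target = np.array([0.6, 0.5, 0.4, 0.35, 0.30, 0.25])   # assumed measured averages

    def dual(lam):
        """log Z(lam) - lam . target; its minimizer gives the max-entropy model."""
        log_z = np.log(np.exp(features @ lam).sum())
        return log_z - lam @ target

    lam = minimize(dual, np.zeros(features.shape[1]), method="BFGS").x
    p = np.exp(features @ lam); p /= p.sum()
    print("fitted averages:", np.round(features.T @ p, 3))   # matches target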
Show Figures

Figure 1. The amino acid sequences of M homolog proteins are aligned in the Multiple Sequence Alignment (MSA) matrix. Each MSA column is a sequence site and each row is the sequence of a member of the protein family. To obtain a fixed sequence length L, a gap ("−") may be introduced. From the MSA, two sets of observables are considered: (i) f_i(a), the occurrence of amino acid a at site i; and (ii) f_ij(a,b), the co-occurrence of amino acid a at site i and amino acid b at site j.

Figure 2. The retina of a vertebrate animal is extracted and mounted on the multi-electrode array in order to obtain the extracellular potential of the retinal ganglion cells responding simultaneously to natural stimuli. A signal-processing procedure called spike sorting leads to the detection of the spikes of each cell. A binning procedure is applied to obtain binary patterns of activity, from which the average values of the observables are computed.

Figure 3. Regions of interest in the brain (represented as circles) corresponding to the DMN and FPN are selected and their BOLD signals (continuous in time and state-space) are analyzed. To obtain binary states, as in the previous example, time is discretized into windows of 9.045 s and the BOLD signals are binarized using a threshold under which the continuous signal is zero and above which it is one (for details about the threshold and the robustness of the choice, please refer to [51]). From the binary data, the time averages of the observables are computed. The maximum entropy principle is used to find the unique joint probability distribution that maximizes entropy while being consistent with the constraints computed from data.

Figure 4. (A) From a large dataset where several plant species have their recorded mean trait value, we extract (B) a reduced database with the possible plant species present in A_0 (blue rows selected in (A)) and the traits that can be measured (blue columns selected in (A)). (C) Then, traits are measured in the field for all possible plants without specifying the species. The average values of these traits are the constraints for the maximum entropy problem of finding an estimate for the proportion of each plant species in A_0.

Figure 5. (A) Database with the species, their counts, and the mass of each species for a given area A_0; from here, the quantities used to compute the average values of the observables S_0 and N_0 are obtained. (B) Using the metabolic theory, the metabolic rate (MR) of each species is estimated. The quantity E_0 is computed from the standardized metabolic rate (SMR), obtained by dividing all of the MRs by the minimum MR. (C) The species-abundance distribution φ(n_0 | A_0, S_0, N_0) is computed from the joint maximum entropy distribution and a graph of rank versus abundance is plotted. (D) The metabolic rate distribution over all individuals, ψ(ε | A_0, S_0, N_0, E_0), is obtained and a graph of rank versus metabolic rate is shown. (C,D) Images were obtained from the maximum entropy distributions fitted to the data available in the R package meteR [75] using Dan Gruner's data [76].

Figure 6. (A) Interacting random variables x^i representing the votes of the nine justices. (B) Correlation matrix between the random variables x^i and x^j measured directly from data. (C) Interaction matrix computed from the maximum entropy principle.
13 pages, 2018 KiB  
Article
Multiscale Horizontal Visibility Graph Analysis of Higher-Order Moments for Estimating Statistical Dependency
by Keqiang Dong, Haowei Che and Zhi Zou
Entropy 2019, 21(10), 1008; https://doi.org/10.3390/e21101008 - 16 Oct 2019
Cited by 6 | Viewed by 4386
Abstract
The horizontal visibility graph is not only a powerful tool for the analysis of complex systems, but also a promising way to analyze time series. In this paper, we present an approach, based on the horizontal visibility graph, to measure the nonlinear interactions between non-stationary time series. We describe how a horizontal visibility graph may be calculated based on second-order and third-order statistical moments. We compare the new methods with the first-order measure, and then give examples including stock markets and aero-engine performance parameters. These analyses suggest that measures derived from the horizontal visibility graph may be of particular relevance to the growing interest in quantifying the information exchange between time series. Full article
(This article belongs to the Special Issue Entropy, Nonlinear Dynamics and Complexity)
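For orientation, the horizontal visibility criterion itself is simple: two data points x_i and x_j (i < j) are linked whenever every intermediate sample lies strictly below both, i.e., x_k < min(x_i, x_j) for all i < k < j. The sketch below builds the corresponding adjacency matrix; it illustrates only this basic construction, not the higher-order-moment or multiscale machinery introduced in the paper.

    # Basic horizontal visibility graph (HVG): nodes are time indices;
    # i and j are connected iff every sample strictly between them is lower than both.
    import numpy as np

    def horizontal_visibility_graph(x):
        x = np.asarray(x, dtype=float)
        n = len(x)
        adj = np.zeros((n, n), dtype=bool)
        for i in range(n - 1):
            adj[i, i + 1] = adj[i + 1, i] = True   # consecutive points always see each other
            blocking = x[i + 1]                    # tallest intermediate sample so far
            for j in range(i + 2, n):
                if blocking < min(x[i], x[j]):     # nothing in between blocks the view
                    adj[i, j] = adj[j, i] = True
                blocking = max(blocking, x[j])
        return adj

    series = np.random.default_rng(1).random(10)
    print("node degrees:", horizontal_visibility_graph(series).sum(axis=0))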
Show Figures

Figure 1. Schematic sketch of the higher-order multiscale horizontal visibility graph correlation analysis.

Figure 2. 1st-order and 2nd-order MHVGCA for the two-component auto-regressive fractionally integrated moving average process with ρ_1 = ρ_2 = 0.3 and W = 0.5, 0.7 & 0.9.

Figure 3. 1st-order and 3rd-order MHVGCA for the two-component ARFIMA process with ρ_1 = ρ_2 = 0.3 and W = 0.5, 0.7, and 0.9.

Figure 4. 1st-order, 2nd-order, and 3rd-order MHVGCA for the two-component ARFIMA process with ρ_1 = ρ_2 = 0.1 and W = 0.5, 0.7 & 0.9 (left), and with ρ_1 = ρ_2 = 0.5 and W = 0.5, 0.7 (right).

Figure 5. The association G(s) of the Stock Exchange Composite Index (SSEC) vs. the Standard & Poor's 500 Composite Stock Price Index (S&P500) time series for the 1st-order, 2nd-order, and 3rd-order MHVGCA methods.

Figure 6. The association G(s) of the SSEC vs. the other 10 stock time series for the 1st-order, 2nd-order, and 3rd-order MHVGCA methods.

Figure 7. The association G(s) of the N1 vs. N2 time series for the 1st-order, 2nd-order, and 3rd-order MHVGCA methods.

Figure 8. The association G(s) of N1 vs. the other four time series for the 1st-order, 2nd-order, and 3rd-order MHVGCA methods.
14 pages, 6223 KiB  
Article
Conjugate Heat Transfer Investigation on Swirl-Film Cooling at the Leading Edge of a Gas Turbine Vane
by Haifen Du, Ziyue Mei, Jiayao Zou, Wei Jiang and Danmei Xie
Entropy 2019, 21(10), 1007; https://doi.org/10.3390/e21101007 - 15 Oct 2019
Cited by 12 | Viewed by 4144
Abstract
Numerical calculation of conjugate heat transfer was carried out to study the effect of combined film and swirl cooling at the leading edge of a gas turbine vane with a cooling chamber inside. Two cooling chambers (C1 and C2 cases) were specially designed to generate swirl in the chamber, which can enhance the overall cooling effectiveness at the leading edge. A simple cooling chamber (C0 case) was designed as a baseline, and the effects of the different cooling chambers were studied. Compared with the C0 case, the cooling chamber in the C1 case consists of a front cavity and a back cavity, and the two cavities are connected by a passage on the pressure side to improve the overall cooling effectiveness of the vane; the area-averaged overall cooling effectiveness of the leading edge (φ̿) was improved by approximately 57%. Based on the C1 case, the passage along the vane was divided into nine segments in the C2 case to enhance the cooling effectiveness at the leading edge, and φ̿ was enhanced by 75% compared with that in the C0 case. Additionally, the cooling efficiency on the pressure side was improved significantly by using the swirl-cooling chambers. The pressure loss in the C2 and C1 cases was larger than that in the C0 case. Full article
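For reference, the overall (conjugate) cooling effectiveness mapped in Figures 12 and 13 below is conventionally built from the external wall temperature; the exact reference temperatures adopted by the authors may differ slightly from this textbook form:

    \phi = \frac{T_{\infty} - T_{w}}{T_{\infty} - T_{c}}, \qquad
    \bar{\bar{\phi}} = \frac{1}{A} \int_{A} \phi \, \mathrm{d}A ,

where T_∞ is the mainstream hot-gas temperature, T_w the outer wall temperature, T_c the coolant inlet temperature, and A the leading-edge area over which the average is taken.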
Show Figures

Figure 1. Geometry of the turbine vane with a cooling chamber.

Figure 2. Different cooling chamber configurations.

Figure 3. Grid in the computational model.

Figure 4. Grid independence study using temperature and turbulent kinetic energy along the span direction downstream of Row 4. (a) Temperature along the span direction downstream of Row 4; (b) turbulent kinetic energy along the span direction downstream of Row 4.

Figure 5. Comparison of the spanwise-averaged pressure coefficient.

Figure 6. Gas static pressure in the C1 case at Z/H = 0.5.

Figure 7. The streamlines from the film holes and the streamlines at the endwall.

Figure 8. Vortex core region and streamlines in Fluid 2 at Z/H = 0.5.

Figure 9. Vortex core region (swirling strength level = 0.025) and velocity w.

Figure 10. Wall heat flux on the inside wall of the vane.

Figure 11. Wall heat flux on the outside wall of the vane.

Figure 12. Overall cooling effectiveness of the three cases. (a) Overall cooling effectiveness on the outside wall of the vane; (b) laterally averaged overall cooling effectiveness.

Figure 13. Area-averaged overall cooling effectiveness in the different cases.

Figure 14. Pressure loss coefficient in the different cases.
41 pages, 434 KiB  
Article
The Generalized Stochastic Smoluchowski Equation
by Pierre-Henri Chavanis
Entropy 2019, 21(10), 1006; https://doi.org/10.3390/e21101006 - 15 Oct 2019
Cited by 32 | Viewed by 4197
Abstract
We study the dynamics of a system of overdamped Brownian particles governed by the generalized stochastic Smoluchowski equation associated with a generalized form of entropy and involving a long-range potential of interaction [P.H. Chavanis, Entropy 17, 3205 (2015)]. We first neglect fluctuations and provide a macroscopic description of the system based on the deterministic mean field Smoluchowski equation. We then take fluctuations into account and provide a mesoscopic description of the system based on the stochastic mean field Smoluchowski equation. We establish the main properties of this equation and derive the Kramers escape rate formula, giving the lifetime of a metastable state, from the theory of instantons. We relate the properties of the generalized stochastic Smoluchowski equation to a principle of maximum dissipation of free energy. We also discuss the connection with the dynamical density functional theory of simple liquids. Full article
(This article belongs to the Special Issue Entropy Production and Its Applications: From Cosmology to Biology)
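For orientation, the ordinary (Boltzmann-entropy) counterpart of the equation studied here is the stochastic mean-field Smoluchowski equation written below; this standard form is quoted as an assumption for context, while the generalized equation of the paper replaces the linear diffusion term by one derived from a generalized entropy:

    \frac{\partial \rho}{\partial t}
      = \nabla \cdot \left[ \frac{1}{\xi} \left( k_{B} T \, \nabla \rho + \rho \, \nabla \Phi \right) \right]
      + \nabla \cdot \left( \sqrt{\frac{2 k_{B} T \rho}{\xi}} \, \mathbf{R}(\mathbf{r},t) \right),
    \qquad
    \Phi(\mathbf{r},t) = \int u(|\mathbf{r} - \mathbf{r}'|) \, \rho(\mathbf{r}',t) \, \mathrm{d}\mathbf{r}',

where ξ is the friction coefficient, u the long-range pair potential, and R a Gaussian white-noise field; dropping the noise term recovers the deterministic mean-field Smoluchowski equation used for the macroscopic description.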
33 pages, 3933 KiB  
Article
Thermodynamics of a Phase-Driven Proximity Josephson Junction
by Francesco Vischi, Matteo Carrega, Alessandro Braggio, Pauli Virtanen and Francesco Giazotto
Entropy 2019, 21(10), 1005; https://doi.org/10.3390/e21101005 - 15 Oct 2019
Cited by 5 | Viewed by 4532
Abstract
We study the thermodynamic properties of a superconductor/normal metal/superconductor Josephson junction in the short limit. Owing to the proximity effect, such a junction constitutes a thermodynamic system in which phase difference, supercurrent, temperature and entropy are thermodynamic variables connected by equations of state. These equations of state allow us to conceive quasi-static processes, which we characterize in terms of the heat and work exchanged. Finally, we combine such processes to construct Josephson-based Otto and Stirling cycles, and we study the related performance in both the engine and the refrigerator operating modes. Full article
(This article belongs to the Special Issue Quantum Thermodynamics II)
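The performance quantities reported in Figures 10, 11, 14 and 15 below are the usual cycle metrics, namely the engine efficiency η and the refrigerator coefficient of performance (COP), both bounded by their Carnot values; the hot/cold labels map onto the T_R and T_L reservoirs of the figures:

    \eta = \frac{W}{Q_{\mathrm{hot}}} \le 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}},
    \qquad
    \mathrm{COP} = \frac{Q_{\mathrm{cold}}}{W} \le \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}} - T_{\mathrm{cold}}},

where W is the net work per cycle and Q_hot, Q_cold are the heats exchanged with the hot and cold reservoirs.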
Show Figures

Figure 1. (a) Sketch of the SNS proximized system. It consists of a superconducting ring of length L_S, pierced by a magnetic flux Φ. The ring is interrupted by a normal-metal weak link. The electron system of the whole device is thermally and electrically isolated and at temperature T. The system is connected to a thermal reservoir at temperature T̄ through a heat valve v. (b) Magnification of the SNS junction. The normal-metal weak link, of length L_N, is in clean electric contact with the superconducting leads. A_j and N_j are, respectively, the cross-section and the DoS at the Fermi energy of the j = N or S metal. The phase drop φ of the superconducting order parameter takes place across the junction.

Figure 2. Color plots of the quasi-particle local normalized density of states (DoS) N in a Superconductor/Normal metal/Superconductor (SNS) junction, versus energy ε and position x, for φ = 0, π/3, 2π/3, π. The dashed lines separate the S regions (on the sides) from the N region (in the center), as shown by the junction sketch. The phase dependence of the DoS is mirrored in a phase dependence of the junction entropy S. The numerical calculation was obtained with the quasi-classical methods of Reference [84], with a = 10, l = 0.1, Δ(T→0) = Δ_0.

Figure 3. Characteristics of the KO theory, reported versus phase φ for the temperatures T given in the legend. (a) Supercurrent I(φ,T), Equation (11); the dotted curve at T = 0 is given by Equation (12). (b) Electric energy E(φ,T), Equation (14); the dotted curve at T = 0 is given by Equation (15). (c) Entropy variation δS(φ,T), Equation (5).

Figure 4. Total entropy S of the system for α = 0.6. (a) S versus temperature T for the phases φ given in the legend; the case φ = 0 corresponds to the BCS entropy S_0(T) in Equation (10). (b) Magnification of panel (a) around T = 0.2 T_c, highlighting the passage from an exponentially suppressed behavior at φ = 0 to a linear behavior at φ = π. The dashed curve is the analytical low-temperature expression (23), (24).

Figure 5. Isophasic heat capacity properties. (a) Map of the isophasic heat capacity C(φ,T). (b) Cuts from panel (a) for the phases given in the legend; the dashed line shows the low-temperature expression (41). (c) Cuts from panel (a) for the temperatures given in the legend.

Figure 6. Isentropic process properties. (a) Colormap of the temperature decrease T_f(T_i, φ)/T_i for an isentropic process from initial temperature T_i at φ = 0 to φ. (b) Cuts from panel (a) for the temperatures given in the legend. (c) Temperature decrease T_f/T_i for an isentropic process from (φ = 0, T_i) to (φ = π, T_f) for different values of α. (d) Isentropic current-phase relation (red solid curve) across the state (φ = 0, T_i = 0.6 T_c), for α = 0.6. For comparison, the dashed curves report two isothermal current-phase relations, at T = 0.6 T_c and T = 0.51 T_c (see legend).

Figure 7. Sketch of the system connected to two reservoirs, identified as the Left (L) and Right (R) Reservoir, through two heat valves v_L and v_R, respectively. Thermodynamic cycles can be implemented by varying the configuration between different temperatures T_L and T_R, also achieving opposite operational modes such as engine or refrigerator configurations (see text).

Figure 8. Otto cycle scheme. The example considers an engine with a hot reservoir at 0.6 T_c, a cold reservoir at 0.2 T_c, and α = 0.6. (a) Scheme in the (T,S) plane; the colored areas support the discussion in the text of the heat exchanges. (b) Scheme in the (φ,I) plane. Of the four processes of the Otto cycle, only the two isentropic ones are visible, since the two isophasic ones collapse onto the points (φ = 0, I = 0) and (φ = π, I = 0). The colored areas support the discussion in the text of the work exchanges. For completeness, the dotted curves represent partial isothermal CPRs at the labelled temperatures.

Figure 9. Particular cases of the Otto cycle with respect to T_L, T_R. (a) Approaching the degenerate case T_f(T_R) = T_L. (b) Otto cycle working as a refrigerator for T_f(T_R) < T_L.

Figure 10. (a) Work released in a Josephson-Otto cycle as a function of (T_L, T_R). The dashed red curve, given by Equation (53), reports W = 0 and separates the regions where the cycle operates as an engine or as a refrigerator. (b) Heat absorbed in a Josephson-Otto cycle. As an engine, the heat Q_R from the hot reservoir is represented by the R reservoir; as a refrigerator, the heat Q_L from the CS is represented by the L reservoir. The dash-dotted line represents the thermal equilibrium T_L = T_R, below which the system is a cold pump. (c) Cuts of the work in panel (a) versus the hot-reservoir temperature T_R for fixed temperatures T_L of the cold reservoir. (d) Cuts of the absorbed heat versus the CS temperature T_L for fixed temperatures T_R of the heat sink; the black solid curve reports the absorbed heat at T_L = T_R, and the violet dash-dotted curve reports the analytical result of Equation (55). The curves have been obtained with α = 0.6.

Figure 11. Efficiency and COP of the Otto machine. (a) Color plot of η and COP versus (T_L, T_R), with different color palettes; the gray region represents states where the cooled-subsystem temperature is above the heat-sink temperature. (b) Cuts of the Otto cycle efficiency η versus T_R for the T_L given in the legend. The dot-dashed line reports the Carnot limit to the efficiency. The curves end at the Otto characteristic curve, Equation (53), where the efficiency reaches the Carnot limit. (c) Cuts of the Otto cycle COP versus T_L for the T_R given in the legend. The dot-dashed line reports the Carnot limit to the COP. The curves are limited on the right by the thermal equilibrium state T_L = T_R and on the left by the Otto characteristic curve, on which the COP reaches the Carnot limit.

Figure 12. Josephson-Stirling cycle scheme. The plotted example concerns an engine between a hot reservoir at T_R = 0.6 T_c and a cold reservoir at T_L = 0.3 T_c, with α = 0.6. (a) Scheme in the (T,S) plane; the colored areas support the discussion in the text about the exchanged heats. (b) Scheme in the (φ,I) plane. Of the four processes of the Josephson-Stirling cycle, only the two isothermal ones are visible, since the two isophasic ones collapse onto the points (φ = 0, I = 0) and (φ = π, I = 0). The colored areas support the discussion in the text about the exchanged works.

Figure 13. Particular examples of the Josephson-Stirling cycle for T_R < T_L. (a) Stirling inverse cycle working as a refrigerator: the heat absorbed from the R reservoir in the process 1→2, represented by the area defined by the related green arrow, is larger than the heat released to the R reservoir in the process 4→1, represented by the area defined by the related red arrow. (b) Stirling inverse cycle working as a Joule pump, exploiting work to release heat to both reservoirs.

Figure 14. (a) Work released in a Stirling cycle as a function of (T_L, T_R). The dashed curve W = 0 corresponds to the thermal equilibrium curve T_L = T_R and separates the regions where the cycle operates as an engine or as a refrigerator. Moreover, the curves Q_R = 0 and Q_L = 0 further distinguish regions where the cycle is a Joule pump (JP) or a cold pump. (b) Heat absorbed in a Stirling cycle. In both engine and refrigerator modes, the heat Q_R is absorbed from the R reservoir, which plays the role of hot reservoir or CS in the respective regions. (c) Cuts of the work in panel (a) versus the hot-reservoir temperature T_R for fixed temperatures T_L of the cold reservoir; the black dashed line reports expression (60). (d) Cuts of the absorbed heat Q_R versus the CS temperature T_R for fixed temperatures T_L of the heat sink; the black solid curve reports the absorbed heat at T_L = T_R. The curves have been obtained with α = 0.6.

Figure 15. Efficiency and COP of the Stirling machine. (a) Color plot of η and COP versus (T_L, T_R), with different color palettes; the gray region represents where the cycle is a Joule pump or a cold pump. (b) Cuts of the Stirling cycle efficiency η versus T_R for the T_L given in the legend. The dot-dashed line reports the Carnot limit to the efficiency. The curves end at T_R = T_L. (c) Cuts of the Stirling cycle COP versus T_R for the T_L given in the legend. The dot-dashed line reports the Carnot limit to the COP. The curves diverge on the right at the thermal equilibrium state T_L = T_R; on the left, the curves are limited by the Stirling characteristic curve.

Figure A1. (a) Entropy scheme close to the critical temperature. S̃ equals S when a phase transition is imposed at T_c2, calculated with the method described in the text; S_N(T) is the normal-metal entropy. (b) Dependence of the critical temperature of the system, T_c2 at φ = π, normalized to the bulk critical temperature T_c, versus α.
15 pages, 1616 KiB  
Article
A Novel S-Box Design Algorithm Based on a New Compound Chaotic System
by Qing Lu, Congxu Zhu and Guojun Wang
Entropy 2019, 21(10), 1004; https://doi.org/10.3390/e21101004 - 14 Oct 2019
Cited by 71 | Viewed by 4760
Abstract
Substitution-boxes (S-Boxes) are important non-linear components in block cryptosystems, and they play an important role in the security of cryptosystems. Constructing S-Boxes with strong cryptographic features is an important step in designing block cipher systems. In this paper, a novel algorithm for constructing S-Boxes based on a new compound chaotic system is presented. Firstly, the new chaotic system, the tent–logistic system, is proposed; it has better chaotic performance and a wider chaotic range than the tent and logistic systems, and can not only increase the randomness of the chaotic sequences but also expand the key space of cryptosystems. Secondly, a novel linear mapping is employed to construct the initial S-Box. Then, a permutation operation on the initial S-Box is performed using a chaotic sequence generated with the tent–logistic system, which improves the cryptographic features of the S-Box. The idea behind the proposed work is to construct a supplementary, secure S-Box. Detailed tests of the cryptographic strength of the proposed S-Box are performed using different standard benchmarks. The test results and performance analysis show that our proposed S-Box has much smaller values of linear probability (LP) and differential probability (DP) and a satisfactory average value of nonlinearity compared with other S-Boxes, showing its excellent application potential in block cipher systems. Full article
(This article belongs to the Section Multidisciplinary Applications)
Show Figures
Figure 1: Bifurcation diagram and the state distribution of the logistic system. (a) Bifurcation diagram and (b) the distribution of state values.
Figure 2: Bifurcation diagram and the state distribution of the tent system. (a) Bifurcation diagram and (b) the distribution of state values.
Figure 3: Bifurcation diagram and the state distribution of the tent–logistic system. (a) Bifurcation diagram and (b) the distribution of state values.
Figure 4: Approximate entropy values of sequences generated by different chaotic maps.
Figure 5: The function and basic principle of an n × n S-Box.
16 pages, 856 KiB  
Article
Enhanced Negative Nonlocal Conductance in an Interacting Quantum Dot Connected to Two Ferromagnetic Leads and One Superconducting Lead
by Cong Lee, Bing Dong and Xiao-Lin Lei
Entropy 2019, 21(10), 1003; https://doi.org/10.3390/e21101003 - 14 Oct 2019
Cited by 1 | Viewed by 2747
Abstract
In this paper, we investigate the electronic transport properties of a quantum dot (QD) connected to two ferromagnetic leads and one superconducting lead in the Kondo regime by means of the finite-U slave-boson mean-field approach and the nonequilibrium Green function technique. In this three-terminal hybrid nanodevice, we focus our attention on the joint effects of the Kondo correlation, superconducting proximity pairing, and spin polarization of the leads. It is found that the superconducting proximity effect suppresses the linear local conductance (LLC) because of the weakened Kondo peak, and when its coupling Γ_s is larger than the tunnel coupling Γ of the two normal leads, the linear cross conductance (LCC) becomes negative in the Kondo region. In the antiparallel configuration, increasing the spin polarization further suppresses the LLC but enhances the LCC, i.e., it causes larger negative LCC values, since it is beneficial for the emergence of cross Andreev reflection. By contrast, in the parallel configuration, with increasing spin polarization the LLC decreases and widens considerably with the appearance of shoulders, eventually splitting into four peaks, while the LCC decreases relatively rapidly toward the normal conductance. Full article
(This article belongs to the Special Issue Quantum Transport in Mesoscopic Systems)
Show Figures
Figure 1: Schematic diagram of a quantum dot connected to one superconducting lead and two ferromagnetic leads.
Figure 2: (a) The local conductance and (b) the cross conductance vs. the bare dot level ε_d at zero temperature for different proximity-coupling strengths Γ_s in the case of normal leads (p = 0).
Figure 3: The zero-temperature local conductance (black solid line) and cross conductance (black dotted line) vs. the proximity coupling Γ_s for a bare dot level at the particle-hole symmetric point, ε_d = −U/2 = −5, in the case of normal leads (p = 0). The three parts of the conductance are also plotted for illustration purposes.
Figure 4: (a) The local conductance and (b) the cross conductance versus the bare dot level ε_d for different proximity-coupling strengths Γ_s in the AP configuration with p = 0.5.
Figure 5: (a) The local conductance and (b) the cross conductance vs. the bare dot level ε_d with U = 10 at zero temperature for different proximity-coupling strengths Γ_s in the P configuration with p = 0.5.
Figure 6: The zero-temperature local (a) and cross (b) differential conductances vs. bias voltage V for various couplings Γ_s, for a bare dot level ε_d = −5 and U = 10 in the case of normal leads (p = 0).
19 pages, 2137 KiB  
Article
Fitness Gain of Individually Sensed Information by Cells
by Tetsuya J. Kobayashi and Yuki Sughiyama
Entropy 2019, 21(10), 1002; https://doi.org/10.3390/e21101002 - 13 Oct 2019
Cited by 4 | Viewed by 2808
Abstract
Mutual information and its causal variant, directed information, have been widely used to quantitatively characterize the performance of biological sensing and information transduction. However, once coupled with selection in response to decision-making, the sensing signal can have more or less evolutionary value than its mutual or directed information. In this work, we show that an individually sensed signal always has a better fitness value, on average, than its mutual or directed information. The fitness gain, which satisfies fluctuation relations (FRs), is attributed to the selection of organisms in a population that obtain a better sensing signal by chance. A new quantity, similar to the coarse-grained entropy production in information thermodynamics, is introduced to quantify the total fitness gain from individual sensing, and it also satisfies FRs. Using this quantity, the optimization of the fitness gain from individual sensing is shown to be related to fidelity allocations for individual environmental histories. Our results are supplemented by numerical verifications of the FRs and a discussion of how this problem is linked to information encoding and decoding. Full article
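The gap between individually sensed and commonly sensed signals can be made concrete with a toy two-state model. The Python sketch below is an illustration under assumed parameters (a symmetric i.i.d. environment, two phenotypes, a binary sensor with error probability eps, and a bet-hedging blind baseline), not the paper's model: with individual sensing the population-averaged replication factor is averaged inside the logarithm, whereas with a common signal the logarithm itself is averaged over signal realizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state model (assumed for illustration, not the paper's setup).
eps = 0.1                            # sensing error probability
T = 200_000                          # number of environmental steps
growth = np.array([[2.0, 0.5],       # growth[x, y]: replication factor of
                   [0.5, 2.0]])      # phenotype x in environment y
env = rng.integers(0, 2, size=T)     # i.i.d. symmetric environment history

# (1) Individual sensing: each cell draws its own noisy signal, so a fraction
#     (1 - eps) of the population guesses right; the population-averaged
#     replication factor sits inside the logarithm.
factor_ind = (1 - eps) * growth[env, env] + eps * growth[1 - env, env]
lam_ind = np.mean(np.log(factor_ind))

# (2) Common sensing: all cells share one noisy signal per step, so the
#     logarithm of the single replication factor is averaged over realizations.
signal = np.where(rng.random(T) < 1 - eps, env, 1 - env)
lam_com = np.mean(np.log(growth[signal, env]))

# (3) Blind baseline: bet-hedge half the population into each phenotype
#     (optimal by symmetry in this toy model).
lam_blind = np.mean(np.log(0.5 * growth[0, env] + 0.5 * growth[1, env]))

# Mutual information between the binary signal and the environment (nats).
H = lambda p: -p * np.log(p) - (1 - p) * np.log(1 - p)
I_zy = H(0.5) - H(eps)

print(f"individual-sensing gain: {lam_ind - lam_blind:.3f} nats/step")
print(f"common-sensing gain    : {lam_com - lam_blind:.3f} nats/step")
print(f"mutual information     : {I_zy:.3f} nats")
```

In this toy run the individual-sensing gain exceeds I(Z;Y) while the common-sensing gain stays below it, which is the kind of distinction the abstract formalizes with fluctuation relations.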
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
Show Figures
Figure 1: Schematic diagrams of population dynamics of cells with individual (a) and common (b) sensing. The colors of cells and of the molecules on them represent phenotypic states and sensing signals, respectively; bars indicate the histories of environmental states and common sensing. In (a), each cell's sensing signal is correlated with the environmental state but varies between cells due to the stochasticity of individual sensing; in (b), all cells at a given time share the same sensing signal, shown by the background colors.
Figure 2: (a) State-transition diagram of the three-state environment used in the simulations of Figures 3-5; (b) replication rates of cells with the two phenotypic states under the different environmental states; (c) environment dependence of the sensing signal and the probabilities of obtaining each signal under each environmental state; (d) signal-dependent phenotype switching. Arrow thickness represents relative probabilities and replication rates; parameter values are given by Equations (11)-(14).
Figure 3: (a,b) Trajectories of population sizes with individual sensing under two different realizations of the environment Y_t (colored bars; states defined in Figure 2a); each colored line is the population size of cells with phenotypic state x and sensing signal z, and the gray lines replicate the trajectories of (c,d) for comparison. (c,d) Corresponding trajectories with common sensing under the same environmental realizations, with the common-signal history Z_t shown as an additional bar; each colored line is the population size of cells with phenotypic state x. (e,f) Fitnesses of the populations with individual and common sensing, Ψ^i[Y_t] (red) and Ψ^c[Y_t, Z_t] (blue), under the same realizations, with related quantities shown for comparison.
Figure 4: (a) Average values of the fitnesses and related quantities; (b,c) fluctuation of the fitness with individual sensing Ψ^i[Y_t] and with common sensing Ψ^c[Y_t]; (d-f) fluctuation of Ψ_0[Y_t], Ψ_0[Y_t] + i[Z_t → Y_t] + g[Y_t, Z_t], and Ψ_0[Y_t] + i[Z_t → Y_t].
Figure 5: Numerical verification of the integral fluctuation relations (IFRs) for g[Y_t] (a,b), γ_t − σ[Y_t] (c,d), and Ψ_0[Y_t] + i[Z_t → Y_t] + g[Y_t, Z_t] − Ψ^i[Y_t] (e,f). Left panels show the integrands of the IFRs for 100 different realizations of the environmental and common-signal histories; right panels show the sample averages of the integrands (thin colored curves: averages over 10^5 samples; thick black curves: averages over 1.2 × 10^8 samples).
22 pages, 9953 KiB  
Article
Permutation Entropy-Based Analysis of Temperature Complexity Spatial-Temporal Variation and Its Driving Factors in China
by Ting Zhang, Changxiu Cheng and Peichao Gao
Entropy 2019, 21(10), 1001; https://doi.org/10.3390/e21101001 - 13 Oct 2019
Cited by 7 | Viewed by 4098
Abstract
Air temperature fluctuation complexity (TFC) describes the uncertainty of temperature changes. Analyzing its spatial and temporal variation is of great significance for evaluating the prediction uncertainty of regional temperature trends and of climate change. In this study, the annual TFC from 1979–2017 and the seasonal TFC from 1983–2017 in China were calculated by permutation entropy (PE). Their temporal trends are described by the Mann-Kendall method, and the driving factors of their spatial variations are explored through GeoDetector. The results show that: (1) TFC generally shows a downward trend, with obvious temporal variation. (2) The spatial variation of TFC is mainly manifested in the differences among five sub-regions of China: short-term temperature trends show low uncertainty in the northwest and southeast, high uncertainty in the northeast and southwest, and moderate uncertainty in the central region. (3) Vegetation is the main factor behind the spatial variation, followed by climate and altitude, while latitude and terrain have the lowest impact. The interactions of vegetation-altitude, vegetation-climate and altitude-latitude explain more than 50% of the spatial variations. These results provide insights into the causes and mechanisms of the complexity of the climate system and can help determine how the various factors exert their influence. Full article
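Since TFC is measured here by the permutation entropy of a temperature series, a minimal Bandt-Pompe sketch in Python may help; the embedding order, delay, and the synthetic daily-temperature example are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy of a 1-D series from Bandt-Pompe ordinal patterns."""
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(order))}
    n_windows = len(x) - (order - 1) * delay
    for i in range(n_windows):
        window = x[i:i + order * delay:delay]          # delay-embedded window
        pattern = tuple(int(k) for k in np.argsort(window))
        counts[pattern] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float)
    p /= p.sum()
    pe = -np.sum(p * np.log2(p))
    return pe / np.log2(factorial(order)) if normalize else pe

if __name__ == "__main__":
    t = np.arange(3650)                                # ten "years" of daily data
    seasonal = 10 * np.sin(2 * np.pi * t / 365)        # regular annual cycle
    noisy = seasonal + np.random.default_rng(1).normal(0, 3, t.size)
    print(permutation_entropy(seasonal))               # low: predictable series
    print(permutation_entropy(noisy))                  # higher: complex fluctuations
```

Higher normalized values correspond to more disordered, less predictable temperature fluctuations, which is how the TFC maps in this paper should be read.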
(This article belongs to the Special Issue Spatial Information Theory)
Show Figures
Figure 1: (a) China's vegetation zoning, with eight subzones: (1) subtropical evergreen broad-leaved forest, (2) cold-temperate coniferous forest, (3) warm-temperate deciduous broad-leaved forest, (4) temperate grassland, (5) temperate desert, (6) temperate coniferous and deciduous broad-leaved mixed forest, (7) tropical monsoon rainforest and rainforest, and (8) alpine vegetation of the Qinghai-Tibet Plateau. (b) Climatic zoning, with nine subzones: middle temperate, warm temperate, cold temperate, north subtropical, central subtropical, south subtropical, middle tropical, marginal tropical, and plateau climatic zones. (c) Latitude zoning, with seven bands from 18.75–23.75°N up to 51.75–53.25°N. (d) Altitude zoning, with seven bands from −263–565 m up to 4846–8535 m. (e) Terrain zoning: plains, platforms, hills, and small, medium, large, and extreme relief mountains. (f) Province names.
Figure 2: Flow chart of the research framework.
Figure 3: Spatial distribution of the annual temperature fluctuation complexity (TFC) (the average of annual permutation entropy (PE) from 1979–2017).
Figure 4: Explaining ability of the driving factors for the spatial variation of annual TFC; the inner ring corresponds to each single factor and the outer ring to the interaction between two factors (A–B denotes the interaction between A and B).
Figure 5: Spatial distribution of the seasonal TFC (the average of seasonal PE from 1983–2017) in (a) spring, (b) summer, (c) autumn, and (d) winter.
Figure 6: Explaining ability of each single factor for the spatial variation of seasonal TFC.
Figure 7: Explaining ability of the driving factors for the spatial variation of seasonal TFC in (a) spring, (b) summer, (c) autumn, and (d) winter; inner rings show single factors and outer rings show two-factor interactions (A–B denotes the interaction between A and B).
Figure 8: (a) Annual mean series of the annual TFC over 1979–2017 and (b) its temporal trend and mutation; the annual PE series is the average over all locations in China, with per-location series calculated in Section 3.1. The two dashed lines in (a) mark the mutation starting year 1986 and the lowest-PE year 2011; the black point in (b) marks the intersection of UF_k and UB_k between the significance lines, taken as the mutation starting year.
Figure 9: Spatial distribution of four annual TFC clusters in China.
Figure 10: Spatially averaged time series of the four annual TFC clusters over 1979–2017; dashed lines mark the 1986 mutation and the 2011 minimum of TFC intensity.
Figure 11: Temporal trend and mutation of the seasonal TFC over 1983–2017 in (a) spring, (b) summer, (c) autumn, and (d) winter; the seasonal PE series is the average over all locations in China, and the marked points are the intersections of UF_k and UB_k between the significance lines, noted as the mutation starting years.
21 pages, 3915 KiB  
Article
Recognition of Voltage Sag Sources Based on Phase Space Reconstruction and Improved VGG Transfer Learning
by Yuting Pu, Honggeng Yang, Xiaoyang Ma and Xiangxun Sun
Entropy 2019, 21(10), 999; https://doi.org/10.3390/e21100999 - 12 Oct 2019
Cited by 4 | Viewed by 3252
Abstract
The recognition of voltage sag sources is the basis for formulating a voltage sag governance plan and clarifying responsibility for the accident. Aiming at this recognition problem, a method based on phase space reconstruction and improved Visual Geometry Group (VGG) transfer learning is proposed from the perspective of image classification. Firstly, phase space reconstruction is used to transform the voltage sag signals into reconstruction images, and the intuitive characteristics of the different sag sources are analyzed from these images. Secondly, the standard VGG 16 model is improved by incorporating an attention mechanism, so that features are extracted more completely and over-fitting is prevented. Finally, the improved VGG model is trained with transfer learning, which improves training efficiency and the recognition accuracy of sag sources; training minimizes the cross-entropy loss function. The simulation analysis verifies the effectiveness and superiority of the proposed method. Full article
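A minimal sketch of the first step, turning a sag waveform into a phase-space (delay-embedding) image that an image classifier can consume, is given below in Python; the sampling rate, sag depth and interval, quarter-period delay, and image size are assumptions for illustration, not the paper's simulation settings.

```python
import numpy as np
import matplotlib.pyplot as plt

def phase_space_image(signal, delay, bins=224):
    """Render the delay embedding x(t) vs. x(t + delay) as a binary image."""
    x, y = signal[:-delay], signal[delay:]
    img, _, _ = np.histogram2d(x, y, bins=bins,
                               range=[[-1.2, 1.2], [-1.2, 1.2]])
    return (img > 0).astype(np.float32)

# Synthetic A-phase voltage with a 40%-depth sag (illustrative parameters).
fs, f0 = 6400, 50                        # sampling rate (Hz), mains frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * f0 * t)
v[(t > 0.06) & (t < 0.14)] *= 0.6        # sag during the assumed fault interval

tau = fs // (4 * f0)                     # quarter-period delay
img = phase_space_image(v, tau)
plt.imshow(img.T, cmap="gray", origin="lower")
plt.xlabel("x(t)")
plt.ylabel("x(t + tau)")
plt.savefig("sag_phase_space.png", dpi=150)
```

Under a quarter-period delay a pure sinusoid traces a single circle, while the sag adds a smaller concentric loop and transition arcs; differences of this kind between short-circuit, motor-starting, and transformer-energizing sags are what the attention-based VGG network is trained to separate.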
Show Figures
Figure 1: Voltage sag signal waveform (left) and corresponding phase space reconstruction image (right) of an A-phase short-circuit fault in phases (a) A, (b) B, and (c) C.
Figure 2: Voltage sag signal waveform (left) and corresponding phase space reconstruction image (right) of large induction motor starting in phases (a) A, (b) B, and (c) C.
Figure 3: Voltage sag signal waveform (left) and corresponding phase space reconstruction image (right) of unloaded transformer energizing in phases (a) A, (b) B, and (c) C.
Figure 4: Standard VGG 16 network structure; labels such as 224 × 224 × 3 give matrix dimensions, and "Convolution + ReLU" / "Fully connected + ReLU" denote ReLU activation after the convolution or fully connected operation.
Figure 5: The whole attention module (matrix dimensions indicated as in Figure 4).
Figure 6: Channel attention module.
Figure 7: Spatial attention module.
Figure 8: VGG 16 network structure based on the attention mechanism.
Figure 9: Training process of the improved VGG transfer learning model.
Figure 10: Voltage sag source recognition framework based on improved VGG transfer learning; the sample corresponds to the example of Figure 1.
Figure 11: Simulation models of (a) short-circuit fault, (b) large induction motor starting, and (c) unloaded transformer energizing in MATLAB/SIMULINK.
Figure 12: Samples of phase space reconstruction of voltage sags with no noise (left), 20 dB (middle), and 10 dB (right) white Gaussian noise in phase A of (a) single-phase short-circuit fault, (b) large induction motor starting, and (c) unloaded transformer energizing.
Figure 13: Visualization of the attention mechanism; the channel attention M_c, spatial attention M_s, and refined feature F'' are shown as heat maps.
Figure 14: 3D projection of the features extracted by the attention-based VGG 16 model.
Figure 15: Training processes of (a) the VGG 16 model trained separately, (b) the VGG 16 transfer learning model without the attention mechanism (combined training), (c) the improved VGG transfer learning model (combined training), and (d) combined training directly on the voltage sag signal image.
19 pages, 1294 KiB  
Article
Physical-Layer Security Analysis over M-Distributed Fading Channels
by Sheng-Hong Lin, Rong-Rong Lu, Xian-Tao Fu, An-Ling Tong and Jin-Yuan Wang
Entropy 2019, 21(10), 998; https://doi.org/10.3390/e21100998 - 12 Oct 2019
Cited by 5 | Viewed by 2811
Abstract
In this paper, the physical-layer security over the M-distributed fading channel is investigated. Initially, an exact expression for the secrecy outage probability (SOP) is derived, which contains an integral term. To obtain a closed-form expression, a lower bound of the SOP is derived. After that, an exact, closed-form expression for the probability of strictly positive secrecy capacity (SPSC) is derived. Finally, an exact expression for the ergodic secrecy capacity (ESC) is derived, which contains two integral terms; to reduce its computational complexity, a closed-form expression for a lower bound of the ESC is obtained. As special cases of M-distributed fading channels, the secrecy performance of the K, exponential, and Gamma-Gamma fading channels is also derived. Numerical results show that all theoretical results match well with Monte-Carlo simulations. Specifically, when the average signal-to-noise ratio of the main channel is larger than 40 dB, the relative errors for the lower bound of the SOP, the probability of SPSC, and the lower bound of the ESC are less than 1.936%, 6.753%, and 1.845%, respectively. This indicates that the derived theoretical expressions can be used directly to evaluate system performance without time-consuming simulations. Moreover, the derived results on the parameters that influence the secrecy performance will enable system designers to quickly determine the optimal parameter choices when facing different security risks. Full article
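The closed-form expressions are the paper's contribution; as a simulation-side companion, the Python sketch below Monte-Carlo-estimates the SOP for the Gamma-Gamma special case mentioned in the abstract. The Gamma-Gamma construction (product of two unit-mean Gamma variates), the target secrecy rate, and the (α, β) = (2, 2) values are generic textbook choices assumed for illustration, not the paper's exact M-distribution parameterization.

```python
import numpy as np

rng = np.random.default_rng(42)

def gamma_gamma_snr(mean_snr_db, alpha, beta, n):
    """Instantaneous SNR samples for Gamma-Gamma fading: the channel gain is
    modeled as the product of two independent unit-mean Gamma variates."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return 10 ** (mean_snr_db / 10) * x * y

def secrecy_outage_prob(snr_b_db, snr_e_db, rate_th=1.0, n=1_000_000,
                        alpha=2.0, beta=2.0):
    """Monte-Carlo SOP: probability that the instantaneous secrecy capacity
    log2(1 + g_B) - log2(1 + g_E) falls below the target secrecy rate."""
    g_b = gamma_gamma_snr(snr_b_db, alpha, beta, n)
    g_e = gamma_gamma_snr(snr_e_db, alpha, beta, n)
    c_s = np.maximum(np.log2(1 + g_b) - np.log2(1 + g_e), 0.0)
    return np.mean(c_s < rate_th)

for snr_b in (10, 20, 30, 40):
    print(f"E(gamma_B) = {snr_b} dB -> SOP = "
          f"{secrecy_outage_prob(snr_b, snr_e_db=10):.4f}")
```

Curves produced this way are the kind of Monte-Carlo baseline against which the derived closed-form expressions and lower bounds are compared in the paper's figures.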
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures
Graphical abstract
Figure 1: A wireless communication system with a transmitter (Alice), a legitimate receiver (Bob), and an eavesdropper (Eve).
Figure 2: The instantaneous secrecy capacity C and the asymptotic secrecy capacity C_Asy for different γ_B.
Figure 3: Secrecy outage probability (SOP) versus E(γ_B) for different E(γ_E) when γ_th = 0 dB and (α_B, β_B) = (α_E, β_E) = (2, 2).
Figure 4: SOP versus E(γ_B) for different γ_th when E(γ_E) = 18 dB and (α_B, β_B) = (α_E, β_E) = (2, 2).
Figure 5: SOP versus E(γ_B) for different fading channels with γ_th = 1 dB.
Figure 6: Probability of strictly positive secrecy capacity (SPSC) versus E(γ_B) for different E(γ_E) when (α_B, β_B) = (α_E, β_E) = (2, 2).
Figure 7: Probability of SPSC versus E(γ_B) for different (α_E, β_E) when (α_B, β_B) = (2, 2).
Figure 8: Probability of SPSC versus E(γ_B) for different fading channels.
Figure 9: Ergodic secrecy capacity (ESC) versus E(γ_B) for different E(γ_E) when (α_B, β_B) = (α_E, β_E) = (2, 2).
Figure 10: ESC versus E(γ_B) for different (α_E, β_E) when (α_B, β_B) = (2, 2).
Figure 11: ESC versus E(γ_B) for different fading channels.
17 pages, 5433 KiB  
Article
On the Calculation of the Effective Polytropic Index in Space Plasmas
by Georgios Nicolaou, George Livadiotis and Robert T. Wicks
Entropy 2019, 21(10), 997; https://doi.org/10.3390/e21100997 - 12 Oct 2019
Cited by 15 | Viewed by 4433
Abstract
The polytropic index of space plasmas is typically determined from the relationship between the measured plasma density and temperature. In this study, we quantify the errors in the determination of the polytropic index due to uncertainty in the analyzed measurements. We model the plasma density and temperature measurements for a certain polytropic index and then apply the standard analysis to derive the polytropic index. We explore the accuracy of the derived polytropic index for a range of uncertainties in the modeled density and temperature, and repeat the exercise for various polytropic indices. Our analysis shows that uncertainties in the plasma density introduce a systematic error in the determination of the polytropic index, which can lead to artificial isothermal relations, while uncertainties in the plasma temperature increase the statistical error of the calculated polytropic index. We analyze Wind spacecraft observations of solar wind protons and derive the polytropic index in selected intervals over 2002. The derived polytropic index is affected by the plasma measurement uncertainties in a way similar to that predicted by our model. Finally, we suggest a new data-analysis approach, based on a physical constraint, that reduces the number of erroneous derivations. Full article
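The forward-modelling exercise described in this abstract can be sketched in a few lines of Python: generate density-temperature pairs obeying T ∝ n^(γ−1) for a known γ (here using the example ranges of the paper's Figure 1, n between 4 and 5 cm⁻³ with T_0 = 5 eV), add log-normal measurement noise, and re-derive γ from the slope of ln T versus ln n. The noise levels and sample counts below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def derive_gamma(n_obs, T_obs):
    """Standard analysis: the slope of ln T vs. ln n equals gamma - 1."""
    slope, _ = np.polyfit(np.log(n_obs), np.log(T_obs), 1)
    return slope + 1.0

def simulate(gamma_true=5 / 3, sig_n=0.03, sig_T=0.03,
             n_points=100, n_trials=1000):
    """Average derived gamma when density and temperature carry fractional
    (log-normal) uncertainties sig_n and sig_T."""
    n0 = rng.uniform(4.0, 5.0, size=(n_trials, n_points))       # cm^-3
    T0 = 5.0 * (n0 / 4.0) ** (gamma_true - 1.0)                 # eV
    n_obs = n0 * np.exp(rng.normal(0.0, sig_n, n0.shape))
    T_obs = T0 * np.exp(rng.normal(0.0, sig_T, T0.shape))
    gammas = [derive_gamma(n_obs[i], T_obs[i]) for i in range(n_trials)]
    return np.mean(gammas), np.std(gammas)

for sig_n in (0.0, 0.01, 0.03, 0.05):
    mean_g, std_g = simulate(sig_n=sig_n)
    print(f"sigma_n/n = {sig_n:4.2f}:  gamma_m = {mean_g:.2f} +/- {std_g:.2f}")
```

Increasing the density uncertainty attenuates the fitted slope and biases the derived γ away from the true adiabatic value toward 1, i.e., toward an artificial isothermal relation, which is the systematic effect the paper quantifies.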
(This article belongs to the Special Issue Theoretical Aspects of Kappa Distributions)
Show Figures
Figure 1: Modeled plasma temperature T as a function of the modeled plasma density n for (left) adiabatic plasma (γ = 5/3), (middle) isothermal plasma (γ = 1), and (right) isobaric plasma (γ = 0). Each panel shows T ∝ n^(γ−1) (grey dashed). In all examples, the plasma density ranges between n_min = 4 cm⁻³ and n_max = 5 cm⁻³, with T_0 = 5 eV.
Figure 2: Modeled samples of ln T as a function of ln n for adiabatic plasma with (left) σ_n/n = σ_T/T = 1%, (middle) σ_n/n = 3%, σ_T/T = 1%, and (right) σ_n/n = 3%, σ_T/T = 5%. Black points are the modeled plasma parameters and red dots the 1000 measurement samples, drawn from a log-normal distribution around the plasma parameters with standard deviation as indicated by the error bar.
Figure 3: Histogram of the derived γ over 1000 samples for adiabatic plasma with σ_n/n = σ_T/T = 5%. Although the modeled plasma has γ = 5/3, the distribution is slightly asymmetric, with most frequent value 1.45, mean γ_m ~1.56, and standard deviation σ_γ ~0.33; the uncertainty in the plasma parameters introduces a systematic (different mean and mode) and statistical (σ_γ > 0) error in the calculation of γ.
Figure 4: The average γ_m as a function of (left) density and (right) temperature measurement uncertainty. The average γ_m and its standard error δ_γ are calculated over 1000 samples of n-T measurements; different colors represent different input γ values. Here n_min = 3.675 cm⁻³, Δn = n_max − n_min = 0.35 cm⁻³, and T_0 = 3.275 eV, typical values for solar wind protons observed by Wind in 2002.
Figure 5: The derived polytropic index averages (over 1000 samples) as a function of (left) density and (right) temperature measurement uncertainty for several Δn ranges, for adiabatic plasma (input γ = 5/3).
Figure 6: Wind high-resolution measurements of (top to bottom) density, bulk speed, thermal speed, and magnetic field strength during 2002.
Figure 7: Histograms of the average solar wind proton parameters within the selected 2002 subintervals analyzed to derive the polytropic index: (a) average density n, (b) average temperature T, (c) density range Δn, (d) temperature range ΔT, and (e) the derived polytropic index γ. The most frequent value (mode) of each parameter, noted in each panel, is used as model input to predict the misestimation of γ as a function of the measurement uncertainties.
Figure 8: Occurrence of (upper left) the average σ_T/T, (lower) the average σ_n/n, and (upper right) the 2D histogram of σ_T/T and σ_n/n for the Wind observations in 2002; the white line indicates the mode of σ_T/T in each σ_n/n bin.
Figure 9: Normalized histograms of (left) γ as a function of σ_n/n for σ_T/T < 15% and (right) γ as a function of σ_T/T for σ_n/n < 1%. The white line is the mean value of the histogram in each column, and only uncertainty ranges with more than 100 data points are displayed. The red curves are the model predictions for plasma parameters corresponding to the mode values of the analyzed intervals (see Figure 7).
Figure 10: 2D histograms of (left) the mean calculated polytropic index γ_m and (right) the average Pearson correlation coefficient of ln T and ln n as functions of σ_n/n and σ_T/T, for modeled plasma with γ = 1.9, n_min = 3.675 cm⁻³, Δn = 0.35 cm⁻³, and T_0 = 3.275 eV; averages are over 1000 modeled measurement samples per σ_n/n - σ_T/T combination.
Figure 11: (Left) average ν_inv,m as a function of σ_n/n for σ_T/T = 0 and (right) as a function of σ_T/T for σ_n/n = 0, for several γ. The average ν_inv,m is calculated over 1000 samples for each uncertainty setting, with T_0 = 3.275 eV and ΔT/T_0 ~10%.
Figure 12: (Left) the calculated ν as a function of the calculated γ for 250,000 samples with input γ = 1.9 and σ_n/n = σ_T/T = 8%; (right) the same plot for the intervals analyzed from Wind measurements during 2002. In both plots, the blue points satisfy the criterion |1/ν_inv − (γ − 1)| < 0.1, while the red points do not, lying further from the expected ν ≡ (γ − 1)⁻¹ (dashed).
Figure 13: Histograms of γ derived from the 2002 Wind observations before (grey) and after the filter application with (green) α = 1 and (blue) α = 0.1. The filtered γ values fall within a shorter range, and the corresponding histogram has a sharp dip at γ = 1, for which the linear fitting cannot derive an accurate ν index.