Entropy, Volume 21, Issue 7 (July 2019) – 95 articles

Cover Story (view full-size image): This perspective discusses parallels between high entropy at individual sequence positions of intrinsically disordered proteins and regions (IDRs) and the diversity of their dynamically sampled conformations. Arguments are put forth for abandoning the approach of deriving information from the positional sequence conservation of IDRs, due to its reliance on sequence–structure–function relationships. A recent method that relies instead on the evolutionary conservation of molecular features is proposed for elucidating IDR information content. Experimental and theoretical approaches for approximating IDR conformational entropy are reviewed, focusing on changes in conformational entropy and other biophysical features of IDRs upon biomolecular interactions, post-translational modifications, and liquid–liquid phase separation. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 1312 KiB  
Article
Multi-Party Quantum Summation Based on Quantum Teleportation
by Cai Zhang, Mohsen Razavi, Zhiwei Sun, Qiong Huang and Haozhen Situ
Entropy 2019, 21(7), 719; https://doi.org/10.3390/e21070719 - 23 Jul 2019
Cited by 12 | Viewed by 4480
Abstract
We present a secure multi-party quantum summation protocol based on quantum teleportation, in which a malicious, but non-collusive, third party (TP) helps compute the summation. In our protocol, TP is in charge of entanglement distribution, and Bell states are shared between participants. Users encode the qubits in their hands according to their private bits and perform Bell-state measurements. After obtaining participants’ measurement results, TP can figure out the summation. The participants do not need to send their encoded states to others, and the protocol is therefore inherently free from Trojan horse attacks. In addition, our protocol can be made secure against loss errors, because the entanglement distribution occurs only once at the beginning of our protocol. We show that our protocol is secure against attacks by the participants as well as the outsiders. Full article
(This article belongs to the Collection Quantum Information)
Figure 1. A simple example of our protocol in the two-party scenario. (a) Step 1: the third party (TP) shares entangled states among users to create a chain of entangled links back to herself. In this example, we assume the state |B_00⟩ is shared over all links. In general, different Bell states can be shared over different links, and only TP knows which state has been shared. (b) Step 2: users with private bit 1 apply the operator U to their first qubit; here, only P_2 must do this. (c) Step 3: all players perform a Bell-state measurement (BSM) on their two qubits and let TP know the results. In our example, we have assumed |B_00⟩ has been obtained in all cases. (d) Step 4: TP measures qubit 5 in the same basis as her originally chosen basis for qubit T. By comparing the result with the original state of T, TP can calculate M_1 ⊕ M_2.
Figure 2. Entanglement distribution by P_0. Each player has a qubit which is entangled with another qubit held by the next user in the chain. At the start of the protocol, TP shares L + R Bell states over each link, R of which (randomly chosen) are used for detecting malicious activities.
Figure 3. Entanglement swapping attack by P_0 through sharing entangled states in a dishonest way.
Figure A1. Attack by (n − 2) participants, where P_p and P_q are honest participants.
Figure A2. Entanglement swapping attack by (n − 2) participants, where P_p and P_q are honest participants.
13 pages, 597 KiB  
Article
On the Rarefied Gas Experiments
by Róbert Kovács
Entropy 2019, 21(7), 718; https://doi.org/10.3390/e21070718 - 23 Jul 2019
Cited by 9 | Viewed by 3724
Abstract
There are limits of validity of classical constitutive laws such as the Fourier and Navier-Stokes equations. Phenomena beyond those limits were experimentally observed many decades ago. However, it is still not clear what theory would be appropriate to model different non-classical phenomena under different conditions, considering either low-temperature or composite material structure. In this paper, a modeling problem of rarefied gases is addressed. The discussion covers the mass density dependence of material parameters, the scaling properties of different theories, and aspects of how to model an experiment. Two frameworks and their properties are presented: one is the kinetic-theory-based Rational Extended Thermodynamics; the other is non-equilibrium thermodynamics with internal variables and current multipliers. In order to compare these theories, an experiment on sound speed in rarefied gases at high frequencies, performed by Rhodes, is analyzed in detail. It is shown that the density dependence of material parameters can have a severe impact on modeling capabilities and influences the scaling properties. Full article
(This article belongs to the Special Issue Entropy and Non-Equilibrium Statistical Mechanics)
Figure 1. Speed of sound measurement performed by Rhodes [50]. The vertical axis denotes the relative speed of sound, i.e., v/v_0, where v_0 is the speed of sound in the normal state. The original data can be found in [50]. The relevant points are emphasized by red squares.
Figure 2. Density dependence of viscosity for dense gases, where a non-zero viscosity at zero density appears. The original measurements can be found in [66]. Here, the red boxes show the region of interest together with the extrapolation to zero density.
Figure 3. Pressure dependence of viscosity for rarefied gases at room temperature. The original data can be found in [72] and is only partially depicted here.
Figure 4. Calculations of Arima et al. [24]. The solid red line shows the prediction; the squares and triangles refer to different experimental data, with the triangles representing the data from Rhodes [50]. The dashed line shows the behavior of the Navier-Stokes-Fourier equations.
Figure 5. Evaluation using NET-IV (thick black line). The pressure starts at 1 atm and decreases to 2000 Pa; ω = 1 MHz. Error bars of ±2.5 m/s are placed at each measurement point to indicate the uncertainty of digitizing the data. The red dashed line shows the results of Arima et al. [24].
11 pages, 445 KiB  
Article
Efficiency Bounds for Minimally Nonlinear Irreversible Heat Engines with Broken Time-Reversal Symmetry
by Qin Liu, Wei Li, Min Zhang, Jizhou He and Jianhui Wang
Entropy 2019, 21(7), 717; https://doi.org/10.3390/e21070717 - 23 Jul 2019
Cited by 1 | Viewed by 3439
Abstract
We study minimally nonlinear irreversible heat engines in which the time-reversal symmetry of the systems may be broken. The expressions for the power and the efficiency are derived, in which the effects of the nonlinear terms due to dissipations are included. We show that, as in the linear-response regime, minimally nonlinear irreversible heat engines can attain the Carnot efficiency at positive power. We also find that the Curzon-Ahlborn limit imposed on the efficiency at maximum power can be overcome if the time-reversal symmetry is broken. Full article
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)
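As a quick numerical reference for the efficiency scales discussed in this abstract, the following minimal sketch evaluates the standard Carnot and Curzon-Ahlborn benchmarks for the η_C = 0.7 value adopted in the paper's figures; it is an illustrative calculation of the textbook bounds only, not the paper's broken-symmetry model.

```python
import math

# Illustrative only: standard Carnot and Curzon-Ahlborn benchmarks for
# eta_C = 0.7 (the value adopted in the paper's figures). The paper's
# broken-symmetry bounds are not reproduced here.
eta_C = 0.7                        # Carnot efficiency, 1 - T_c/T_h
eta_CA = 1 - math.sqrt(1 - eta_C)  # Curzon-Ahlborn efficiency at maximum power

print(f"Carnot efficiency:         {eta_C:.3f}")
print(f"Curzon-Ahlborn efficiency: {eta_CA:.3f}")  # about 0.452
```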
Figure 1. The function g(x) as a function of the asymmetry parameter x, with dissipation parameter α = 1 (black solid line), α = 1/2 (red dashed line), and α = 0 (blue dot-dashed line). The vertical asymptote of g(x) at x = 1 is indicated by the green dotted line (when α ≠ 0, η_C = 0.7 is adopted).
Figure 2. (Color online) Ratio η_M/η_C as a function of the asymmetry parameter x. The dissipation ratios are α = 1 (black solid line), α = 1/2 (red dashed line), and α = 0 (blue dot-dashed line) (when α ≠ 0, η_C = 0.7 is adopted).
Figure 3. (Color online) Ratio η*_mp/η_C as a function of the dissipation ratio α, with asymmetry parameters x = 1 (black solid line), x = −4 (red dashed line), and x = 4 (blue dot-dashed line) (η_C = 0.7 is adopted).
Figure 4. (Color online) Ratio η*_mp/η_C as a function of the asymmetry parameter x, with dissipation ratios α = 1 (black solid line), α = 1/2 (red dashed line), and α = 0 (blue dot-dashed line) (when α ≠ 0, η_C = 0.7 is adopted).
Figure 5. (Color online) Schematic diagram of the two-terminal thermoelectric model.
18 pages, 282 KiB  
Communication
A Note on the Entropy Force in Kinetic Theory and Black Holes
by Rudolf A. Treumann and Wolfgang Baumjohann
Entropy 2019, 21(7), 716; https://doi.org/10.3390/e21070716 - 23 Jul 2019
Cited by 6 | Viewed by 4569
Abstract
The entropy force is the collective effect of inhomogeneity in disorder in a statistical many-particle system. We demonstrate its presumable effect on one particular astrophysical object, the black hole. We then derive the kinetic equations of a large system of particles including the entropy force. It adds a collective, and therefore integral, term to the Klimontovich equation for the evolution of the one-particle distribution function. Its integral character transforms the basic one-particle kinetic equation into an integro-differential equation already at the elementary level, showing that not only the microscopic forces but the whole system reacts to the evolution of its probability distribution in a holistic way. It also causes a collisionless dissipative term, which, however, is small in the inverse particle number and thus negligible. However, it contributes an entropic collisional dissipation term. The latter is defined via the particle correlations but lacks any singularities and is thus large scale. It also allows for the derivation of a kinetic equation for the entropy density in phase space, which turns out to be of the same structure as the equation for the phase space density. The entropy density determines itself holistically via the integral entropy force, thus providing a self-controlled evolution of entropy in phase space. Full article
(This article belongs to the Special Issue Entropy and Non-Equilibrium Statistical Mechanics)
27 pages, 336 KiB  
Article
Dynamic Maximum Entropy Reduction
by Václav Klika, Michal Pavelka, Petr Vágner and Miroslav Grmela
Entropy 2019, 21(7), 715; https://doi.org/10.3390/e21070715 - 22 Jul 2019
Cited by 24 | Viewed by 5450
Abstract
Any physical system can be regarded on different levels of description varying by how detailed the description is. We propose a method called Dynamic MaxEnt (DynMaxEnt) that provides a passage from the more detailed evolution equations to equations for the less detailed state variables. The method is based on explicit recognition of the state and conjugate variables, which can relax towards the respective quasi-equilibria in different ways. Detailed state variables are reduced using the usual principle of maximum entropy (MaxEnt), whereas relaxation of conjugate variables guarantees that the reduced equations are closed. Moreover, an infinite chain of consecutive DynMaxEnt approximations can be constructed. The method is demonstrated on a particle with friction, complex fluids (equipped with conformation and Reynolds stress tensors), hyperbolic heat conduction and magnetohydrodynamics. Full article
(This article belongs to the Special Issue Entropy and Non-Equilibrium Statistical Mechanics)
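For readers who want a concrete picture of the static MaxEnt step that DynMaxEnt builds on, here is a minimal numerical sketch (not taken from the paper): Shannon entropy is maximized over a discrete distribution subject to a single mean constraint, and the Lagrange multiplier is found by a scalar root solve. The state values and the constraint value are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Minimal static MaxEnt sketch: maximize -sum p_i log p_i over states x_i
# subject to sum p_i = 1 and sum p_i x_i = m.  The solution is the
# exponential family p_i proportional to exp(-lam * x_i); we solve for lam.
x = np.arange(1, 7)   # illustrative state values (e.g., faces of a die)
m = 4.5               # illustrative mean constraint

def mean_of(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return p @ x

lam = brentq(lambda l: mean_of(l) - m, -5.0, 5.0)  # root of the constraint
p = np.exp(-lam * x); p /= p.sum()
print("lambda =", round(lam, 4))
print("MaxEnt distribution:", np.round(p, 4))
```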
Figure 1. A summary of static MaxEnt highlighting relations between state variables on the higher and lower levels of description and their conjugates. MaxEnt provides the lower entropy ↓S(y) and a relation x = π⁻¹(y) from the composition of x̃(y*) and ỹ*(y). LT denotes a relation via Legendre transformation, π stands for a projection from the microscale to the macroscale variables, and an arrow depicts a mapping (written above or below the arrow) that relates the variables in the connected nodes.
16 pages, 3151 KiB  
Article
Efficacy of Quantitative Muscle Ultrasound Using Texture-Feature Parametric Imaging in Detecting Pompe Disease in Children
by Hong-Jen Chiou, Chih-Kuang Yeh, Hsuen-En Hwang and Yin-Yin Liao
Entropy 2019, 21(7), 714; https://doi.org/10.3390/e21070714 - 22 Jul 2019
Cited by 7 | Viewed by 4076
Abstract
Pompe disease is a hereditary neuromuscular disorder attributed to acid α-glucosidase deficiency, and accurately identifying this disease is essential. Our aim was to discriminate normal muscles from neuropathic muscles in children affected by Pompe disease using a texture-feature parametric imaging method that simultaneously considers microstructure and macrostructure. The study included 22 children aged 0.02–54 months with Pompe disease and six healthy children aged 2–12 months with normal muscles. For each subject, transverse ultrasound images of the bilateral rectus femoris and sartorius muscles were obtained. Gray-level co-occurrence matrix-based Haralick’s features were used for constructing parametric images and identifying neuropathic muscles: autocorrelation (AUT), contrast, energy (ENE), entropy (ENT), maximum probability (MAXP), variance (VAR), and cluster prominence (CPR). Stepwise regression was used for feature selection. Fisher linear discriminant analysis was used to combine the selected features and distinguish between normal and pathological muscles. VAR and CPR were the optimal feature set for classifying normal and pathological rectus femoris muscles, whereas ENE, VAR, and CPR were the optimal feature set for distinguishing between normal and pathological sartorius muscles. The two feature sets were combined to discriminate between children with and without neuropathic muscles affected by Pompe disease, achieving an accuracy of 94.6%, a specificity of 100%, a sensitivity of 93.2%, and an area under the receiver operating characteristic curve of 0.98 ± 0.02. The CPR for the rectus femoris muscles and the AUT, ENT, MAXP, and VAR for the sartorius muscles exhibited statistically significant differences between the infantile-onset and late-onset Pompe disease groups (p < 0.05). Texture-feature parametric imaging can be used to quantify and map tissue structures in skeletal muscles and distinguish between pathological and normal muscles in children or newborns. Full article
(This article belongs to the Special Issue Entropy in Image Analysis II)
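For readers unfamiliar with gray-level co-occurrence matrix (GLCM) texture analysis, the following self-contained sketch (illustrative only, not the authors' pipeline) builds a GLCM for a single horizontal offset and computes three of the Haralick-type features named in the abstract: contrast, energy, and entropy. The patch size, number of gray levels, and offset are arbitrary assumptions.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            a, b = img[i, j], img[i + dy, j + dx]
            g[a, b] += 1
            g[b, a] += 1            # symmetrize
    return g / g.sum()

rng = np.random.default_rng(0)
patch = rng.integers(0, 8, size=(64, 64))   # stand-in for a quantized muscle ROI

P = glcm(patch)
i, j = np.indices(P.shape)
contrast = np.sum(P * (i - j) ** 2)
energy = np.sum(P ** 2)
entropy = -np.sum(P[P > 0] * np.log(P[P > 0]))
print(f"contrast={contrast:.3f}, energy={energy:.4f}, entropy={entropy:.3f}")
```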
Figure 1. Texture-feature parametric imaging of a normal rectus femoris muscle in a 12-month-old boy. (a) Original B-mode image, (b) extracted rectus femoris muscle region (indicated by the white dashed line) in the B-mode image, (c) autocorrelation image, (d) contrast image, (e) energy image, (f) entropy image, (g) maximum probability image, (h) variance image, and (i) cluster prominence image. F: femur bone reflection; VI: vastus intermedius muscle.
Figure 2. Texture-feature parametric imaging of a pathological rectus femoris muscle in a 10-day-old boy with infantile-onset Pompe disease; panels (a–i) as in Figure 1.
Figure 3. Texture-feature parametric imaging of a normal sartorius muscle in a 12-month-old boy; panels (a–i) as in Figure 1, with the extracted sartorius muscle region in (b).
Figure 4. Texture-feature parametric imaging of a pathological sartorius muscle in a five-month-old boy with late-onset Pompe disease; panels (a–i) as in Figure 1, with the extracted sartorius muscle region in (b).
Figure 5. Box plots of the distributions of the seven parameters for normal rectus femoris muscles and pathological rectus femoris muscles affected by Pompe disease. (a) AUT: autocorrelation; (b) CON: contrast; (c) ENE: energy; (d) ENT: entropy; (e) MAXP: maximum probability; (f) VAR: variance; (g) CPR: cluster prominence; *** p < 0.001.
Figure 6. Box plots of the distributions of the seven parameters for normal sartorius muscles and pathological sartorius muscles affected by Pompe disease; panels (a–g) as in Figure 5; * p < 0.05; ** p < 0.01; *** p < 0.001.
Figure 7. Receiver operating characteristic (ROC) curves of each feature set. F1: variance and cluster prominence for rectus femoris muscles. F2: energy, variance, and cluster prominence for sartorius muscles. F3: combination of F1 and F2.
16 pages, 1105 KiB  
Article
Surrogate Data Preserving All the Properties of Ordinal Patterns up to a Certain Length
by Yoshito Hirata, Masanori Shiro and José M. Amigó
Entropy 2019, 21(7), 713; https://doi.org/10.3390/e21070713 - 22 Jul 2019
Cited by 8 | Viewed by 3669
Abstract
We propose a method for generating surrogate data that preserves all the properties of ordinal patterns up to a certain length, such as the numbers of allowed/forbidden ordinal patterns and the transition likelihoods from one ordinal pattern to another. The null hypothesis is that the details of the underlying dynamics do not matter beyond refinements of ordinal patterns finer than the predefined length. The proposed surrogate data help construct a test of determinism that is free from the common linearity assumption for the null hypothesis. Full article
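The following sketch (an illustrative outline, not the authors' surrogate algorithm) shows the two ingredients such a method keeps fixed: the mapping of a time series to ordinal patterns of a given length via rank ordering, and the empirical transition counts between consecutive patterns that a surrogate would have to preserve.

```python
import numpy as np
from collections import Counter

def ordinal_patterns(x, L=3):
    """Map each window of length L to the tuple of ranks of its values."""
    return [tuple(np.argsort(x[i:i + L])) for i in range(len(x) - L + 1)]

# Illustrative series: the logistic map, as used in the paper's examples.
x = np.empty(1000); x[0] = 0.4
for t in range(999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

pats = ordinal_patterns(x, L=3)
pattern_counts = Counter(pats)                    # allowed/forbidden patterns
transitions = Counter(zip(pats[:-1], pats[1:]))   # transition frequencies
print("observed patterns:", len(pattern_counts), "of 6 possible")
print("most common transition:", transitions.most_common(1)[0])
```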
Figure 1. Schematic figure showing how we generate an entropy preserving surrogate.
Figure 2. Example of an entropy preserving surrogate for the logistic map.
Figure 3. Return plot for the original time series of the logistic map (A) and for one of its entropy preserving surrogates (B).
Figure 4. Examples of tests for nonlinearity for various models when the datasets are free from observational noise. The important point is whether the value obtained from each original time series (red vertical dashed line) lies within the interval spanned by the minimum and maximum of the test statistic E[x(t)² x(t+1)²] over the 39 truncated Fourier transform surrogates (TFTS), which can be read from each histogram; therefore, it does not matter much whether the test statistic obtained from the original data is smaller or greater than those obtained from the TFTS surrogates. (A) AR(1) model; (B) GARCH model; (C) model of noise-induced order; (D) logistic map; (E) Lorenz model; (F) Rössler model.
Figure 5. Examples of tests of determinism beyond pseudo-periodicity using pseudo-periodic surrogates for various models when the datasets are free from observational noise. Here, the correlation dimensions are used as test statistics. In these surrogate data, rough periodic behavior is preserved, while the fine structure related to the possible underlying determinism in question is destroyed. Correlation dimensions are normalized so that the minimum and maximum values over the 39 pseudo-periodic surrogates for each dimension become 0 and 1, respectively. Panels (A–F) as in Figure 4.
Figure 6. Examples of tests of determinism beyond pseudo-periodicity using pseudo-periodic surrogates with the proxy for the maximal Lyapunov exponent as the test statistic. In each panel, the red dashed line corresponds to the value obtained from the original time series and the histogram to the pseudo-periodic surrogates. Panels (A–F) as in Figure 4.
Figure 7. Examples of tests of determinism beyond 30 steps using the proposed entropy preserving surrogates for various models when the datasets are free from observational noise. In each panel, the red dashed line corresponds to the value of the test statistic obtained from the original data. Panels (A–F) as in Figure 4.
Figure 8. Examples of tests of nonlinearity for various models when 5% observational noise is added. See the caption of Figure 5 to interpret the results. Panels (A–F) as in Figure 4.
Figure 9. Examples of tests of determinism beyond pseudo-periodicity using pseudo-periodic surrogates for various models when 5% observational noise is added. Panels (A–F) as in Figure 4.
Figure 10. Examples of tests of determinism using the proposed entropy preserving surrogates for various models when 5% observational noise is added. Panels (A–F) as in Figure 4.
Figure 11. Example of an entropy preserving surrogate for a part of the USD/JPY data.
Figure 12. Return plot for the original time series of a part of the USD/JPY data (A) and for one of its entropy preserving surrogates (B).
Figure 13. Venn diagram describing the relationship between properties of the underlying dynamics, such as nonlinearity and determinism, and properties we can identify with surrogate data, such as determinism beyond pseudo-periodicity (pseudo-periodic surrogates [19]) and determinism beyond L steps (the proposed entropy preserving surrogates).
22 pages, 1131 KiB  
Communication
Derivations of the Core Functions of the Maximum Entropy Theory of Ecology
by Alexander B. Brummer and Erica A. Newman
Entropy 2019, 21(7), 712; https://doi.org/10.3390/e21070712 - 21 Jul 2019
Cited by 23 | Viewed by 6300
Abstract
The Maximum Entropy Theory of Ecology (METE) is a theoretical framework of macroecology that makes a variety of realistic ecological predictions about how species richness, abundance of species, metabolic rate distributions, and spatial aggregation of species interrelate in a given region. In the METE framework, “ecological state variables” (representing total area, total species richness, total abundance, and total metabolic energy) describe macroecological properties of an ecosystem. METE incorporates these state variables into constraints on underlying probability distributions. The method of Lagrange multipliers and maximization of information entropy (MaxEnt) lead to predicted functional forms of the distributions of interest. We demonstrate how information entropy is maximized for the general case of a distribution for which empirical information provides constraints on the overall predictions. We then show how METE’s two core functions are derived. These functions, called the “Spatial Structure Function” and the “Ecosystem Structure Function”, are the core pieces of the theory, from which all the predictions of METE follow (including the Species Area Relationship, the Species Abundance Distribution, and various metabolic distributions). Primarily, we consider the discrete distributions predicted by METE. We also explore the parameter space defined by METE’s state variables and Lagrange multipliers. We aim to provide a comprehensive resource for ecologists who want to understand the derivations and assumptions of the basic mathematical structure of METE. Full article
(This article belongs to the Special Issue Information Theory Applications in Biology)
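For orientation, the generic MaxEnt step that the paper works through can be summarized as follows; this is the standard textbook derivation, written with generic notation rather than METE's exact constraint functions.

```latex
% Generic MaxEnt step (standard derivation; notation is generic, not METE's exact one).
% Maximize Shannon entropy subject to normalization and K empirical constraints:
\begin{aligned}
&\max_{p}\; H[p] = -\sum_{n} p_n \ln p_n
\quad\text{s.t.}\quad \sum_{n} p_n = 1,\qquad
\sum_{n} p_n f_k(n) = \bar f_k,\quad k = 1,\dots,K,\\[4pt]
&\text{stationarity of the Lagrangian gives}\quad
p_n = \frac{1}{Z(\lambda_1,\dots,\lambda_K)}
\exp\Big(-\sum_{k=1}^{K}\lambda_k f_k(n)\Big),\qquad
Z=\sum_{n}\exp\Big(-\sum_{k}\lambda_k f_k(n)\Big).
\end{aligned}
```

In METE, the constraints are built from the ecological state variables (for example N_0/S_0 and E_0/S_0, as noted in the figure captions below), which is how the Lagrange multipliers λ_1, λ_2, and λ_Π acquire their dependence on S_0, N_0, E_0, and A_0.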
Figure 1. The relationship between the Maximum Entropy Theory of Ecology (METE)’s Lagrange multipliers λ_1, λ_2, and λ_Π and the ecological state variables in the mathematical constraints that produce them. Values for each λ were generated with the software package meteR [47] in the statistical computing language R [48], and a surface was interpolated to aid visualization. Panel (A) shows the greater influence of log(N_0) than S_0 on the overall value of the Lagrange multiplier λ_1, and a compression of λ_1 values at low N_0. Panel (B) shows a near-linear relationship on the log-log scale between λ_2 and log(E_0), while S_0 does not affect its value as greatly over this range of values. Panel (C) shows a highly non-linear relationship between λ_Π, the state variable A_0, and the smaller area under consideration, A.
Figure 2. The parameter space of ecosystems as defined by the METE Lagrange multipliers λ_1, corresponding to the constraint on N_0/S_0, and λ_2, corresponding to the constraint on E_0/S_0. The highest values of λ_1 for any value of λ_2 correspond to N_0/S_0 = 1 (shown in purple), i.e., situations where there is only one individual per species (small numbers of measurements or high diversity). Most real ecosystems and empirical systems with more than a few individuals are expected to fall closer to the low λ_1 values for any given λ_2 value (shown in green).
28 pages, 1014 KiB  
Article
Thermal Optimization of a Dual Pressure Goswami Cycle for Low Grade Thermal Sources
by Gustavo Guzmán, Lucía De Los Reyes, Eliana Noriega, Hermes Ramírez, Antonio Bula and Armando Fontalvo
Entropy 2019, 21(7), 711; https://doi.org/10.3390/e21070711 - 20 Jul 2019
Cited by 7 | Viewed by 4319
Abstract
This paper presents a theoretical investigation of a new configuration of the combined power and cooling cycle known as the Goswami cycle. The new configuration consists of two turbines operating at two different working pressures with a low-temperature heat source, below 150 °C. A comprehensive analysis was conducted to determine the effect of key operating parameters, such as the ammonia mass fraction at the absorber outlet and the boiler-rectifier, on the power output, cooling capacity, effective first law efficiency, and effective exergy efficiency, while the performance of the dual-pressure configuration was compared with the original single pressure cycle. In addition, a Pareto optimization with a genetic algorithm was conducted to obtain the best power and cooling output combinations that maximize the effective first law efficiency. Results showed that the new dual-pressure configuration generated more power than the single pressure cycle, producing up to 327.8 kW, while the single pressure cycle produced up to 110.8 kW at a 150 °C boiler temperature. However, the results also showed that it reduced the cooling output, as there was less mass flow rate in the refrigeration unit. Optimization results showed that the optimum effective first law efficiency ranged between 9.1% and 13.7%. The maximum effective first law efficiency was obtained at the lowest net power (32 kW) and cooling (0.38 kW) outputs; on the other hand, the cycle presented a 13.6% effective first law efficiency when the net power output was 100 kW and the cooling capacity was 0.38 kW. Full article
(This article belongs to the Special Issue Thermodynamic Optimization)
Figure 1. Original single pressure Goswami cycle.
Figure 2. Proposed dual-pressure Goswami cycle.
Figure 3. (a) Genetic algorithm for dominant solution generation. (b) Algorithm for initial population generation.
Figure 4. Optimum net power output for the single pressure Goswami cycle.
Figure 5. Optimum cooling output for the single pressure Goswami cycle.
Figure 6. Optimum effective first law efficiency for the single pressure Goswami cycle.
Figure 7. Optimum effective exergy efficiency for the single pressure Goswami cycle.
Figure 8. Optimum net power output for the dual-pressure Goswami cycle. (a) Boiler temperature: 150 °C. (b) Boiler temperature: 120 °C.
Figure 9. Optimum cooling output for the dual-pressure Goswami cycle. (a) Boiler temperature: 150 °C. (b) Boiler temperature: 120 °C.
Figure 10. Optimum effective first law efficiency for the dual-pressure Goswami cycle. (a) Boiler temperature: 150 °C. (b) Boiler temperature: 120 °C.
Figure 11. Optimum effective exergy efficiency for the dual-pressure Goswami cycle. (a) Boiler temperature: 150 °C. (b) Boiler temperature: 120 °C.
Figure 12. Genetic algorithm-based optimization results for the dual-pressure Goswami cycle.
16 pages, 316 KiB  
Article
On MV-Algebraic Versions of the Strong Law of Large Numbers
by Piotr Nowak and Olgierd Hryniewicz
Entropy 2019, 21(7), 710; https://doi.org/10.3390/e21070710 - 19 Jul 2019
Cited by 1 | Viewed by 2912
Abstract
Many-valued (MV; the many-valued logics considered by Łukasiewicz)-algebras are algebraic systems that generalize Boolean algebras. The MV-algebraic probability theory involves the notions of the state and observable, which abstract the probability measure and the random variable, both considered in the Kolmogorov probability theory. Within the MV-algebraic probability theory, many important theorems (such as various versions of the central limit theorem or the individual ergodic theorem) have been recently studied and proven. In particular, the counterpart of the Kolmogorov strong law of large numbers (SLLN) for sequences of independent observables has been considered. In this paper, we prove generalized MV-algebraic versions of the SLLN, i.e., counterparts of the Marcinkiewicz–Zygmund and Brunk–Prokhorov SLLN for independent observables, as well as the Korchevsky SLLN, where the independence of observables is not assumed. To this end, we apply the classical probability theory and some measure-theoretic methods. We also analyze examples of applications of the proven theorems. Our results open new directions of development of the MV-algebraic probability theory. They can also be applied to the problem of entropy estimation. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
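For reference, the classical Marcinkiewicz–Zygmund strong law that the paper carries over to the MV-algebraic setting reads as follows (this is the classical i.i.d. statement, not the paper's MV-algebraic formulation):

```latex
% Classical Marcinkiewicz–Zygmund SLLN (i.i.d. case), stated here only for reference;
% the paper proves an MV-algebraic counterpart for sequences of observables.
\text{If } X_1, X_2,\dots \text{ are i.i.d. with } \mathbb{E}\lvert X_1\rvert^{p} < \infty
\text{ for some } 0 < p < 2,\ \text{then}\qquad
\frac{S_n - n\,c_p}{n^{1/p}} \xrightarrow{\ \mathrm{a.s.}\ } 0,
\qquad S_n=\sum_{i=1}^{n}X_i,\quad
c_p=\begin{cases}\mathbb{E}X_1, & 1\le p<2,\\[2pt] 0, & 0<p<1.\end{cases}
```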
18 pages, 4538 KiB  
Article
A Peak Traffic Congestion Prediction Method Based on Bus Driving Time
by Zhao Huang, Jizhe Xia, Fan Li, Zhen Li and Qingquan Li
Entropy 2019, 21(7), 709; https://doi.org/10.3390/e21070709 - 19 Jul 2019
Cited by 39 | Viewed by 7052
Abstract
Road traffic congestion has a large impact on travel. The accurate prediction of traffic congestion has become a hot topic in intelligent transportation systems (ITS). Recently, a variety of traffic congestion prediction methods have been proposed. However, most approaches focus on floating car data, and the prediction accuracy is often unstable due to large fluctuations in floating speed. Targeting these challenges, we propose a method of traffic congestion prediction based on bus driving time (TCP-DT) using long short-term memory (LSTM) technology. Firstly, we collected a total of 66,228 bus driving records from 50 buses over 66 working days in Guangzhou, China. Secondly, the actual and standard bus driving times were calculated by processing the buses’ GPS trajectories and bus station data. Congestion time is defined as the difference between the actual and standard driving times. Thirdly, congestion time prediction based on LSTM (T-LSTM) was adopted to predict future bus congestion times. Finally, the congestion index and classification (CI-C) model was used to calculate congestion indices and classify the level of congestion into five categories according to three classification methods. Our experimental results show that the T-LSTM model can effectively predict the congestion time of six road sections at different time periods, with an average mean absolute percentage error (MAPE) and root mean square error (RMSE) of 11.25% and 14.91 in the morning peak, and 12.3% and 14.57 in the evening peak, respectively. The TCP-DT method can effectively predict traffic congestion status and provide a driving route with the least congestion time for vehicles. Full article
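As a rough illustration of the T-LSTM prediction step described above, here is a generic sketch assuming TensorFlow/Keras is available; the window length, layer sizes, and synthetic data are placeholders and not the authors' configuration.

```python
import numpy as np
import tensorflow as tf

# Illustrative T-LSTM-style sketch: predict the next congestion time from the
# previous `window` values for one road section.  Synthetic data stands in for
# the Guangzhou bus records; shapes and hyper-parameters are placeholders.
window = 12
series = np.abs(np.random.randn(500)).astype("float32")   # fake congestion times

X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]                                           # (samples, window, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-step prediction:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```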
Figure 1. Framework of the traffic congestion prediction based on bus driving time (TCP-DT) method. LSTM: long short-term memory; CI-C: congestion index and classification.
Figure 2. Longitudinal division based on the benchmark of 0° longitude.
Figure 3. Bus driving time diagram.
Figure 4. Structure of the time prediction based on long short-term memory (T-LSTM) model.
Figure 5. Structure of the LSTM cell.
Figure 6. (a) Average driving time of six road sections; (b) distribution of congestion time and index.
Figure 7. Distribution of real and predicted congestion time: (a) morning peak, (b) evening peak.
Figure 8. Equal interval classification of predicted data: (a) morning peak, (b) evening peak.
Figure 9. Natural breakpoint classification of predicted data: (a) morning peak, (b) evening peak.
Figure 10. Geometric interval classification of predicted data: (a) morning peak, (b) evening peak.
14 pages, 253 KiB  
Article
Gaussian Belief Propagation for Solving Network Utility Maximization with Delivery Contracts
by Shengbin Liao and Jianyong Sun
Entropy 2019, 21(7), 708; https://doi.org/10.3390/e21070708 - 19 Jul 2019
Viewed by 3007
Abstract
Classical network utility maximization (NUM) models fail to capture network dynamics, which are of increasing importance for modeling network behaviors. In this paper, we consider NUM with delivery contracts, which are constraints added to the classical model to describe network dynamics. This paper investigates a method to solve the given problem distributively. We first transform the problem into an equivalent model of linear equations by dual decomposition theory, and then use the Gaussian belief propagation (GaBP) algorithm to solve the equivalent problem distributively. The proposed algorithm has a faster convergence speed than the existing first-order methods and the distributed Newton method. Experimental results have demonstrated the effectiveness of our proposed approach. Full article
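For context, the first-order baseline that the paper compares against, dual decomposition for NUM, can be sketched in a few lines. The toy network, logarithmic utilities, and step size below are illustrative assumptions, and this is not the paper's GaBP-based distributed Newton method.

```python
import numpy as np

# Textbook dual decomposition for NUM with log utilities (Kelly-style):
# maximize sum_s log(x_s) subject to R x <= c.  Toy routing matrix; this is
# the first-order baseline, not the paper's GaBP-based distributed Newton step.
R = np.array([[1, 1, 0],     # link 1 carries flows 1 and 2
              [1, 0, 1]])    # link 2 carries flows 1 and 3
c = np.array([1.0, 2.0])     # link capacities
lam = np.ones(2)             # link prices (dual variables)
step = 0.05

for _ in range(2000):
    price_per_flow = R.T @ lam                 # total price along each route
    x = 1.0 / price_per_flow                   # argmax of log(x_s) - x_s * price
    lam = np.maximum(lam + step * (R @ x - c), 1e-6)   # projected dual ascent

print("rates:", np.round(x, 3), "link loads:", np.round(R @ x, 3))
```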
Figure 1. Convergence comparison of the proposed method and the dual decomposition algorithm. The green curve denotes the convergence speed of the total utility for our proposed algorithm, a distributed Newton method based on GaBP, and the red curve denotes that of the dual decomposition algorithm, a distributed primal-dual algorithm.
Figure 2. Duality gap comparison of the proposed method and the dual decomposition algorithm. The green curve denotes the estimation error between the primal and dual functions versus the iteration number for our proposed method, and the red curve denotes that for the dual decomposition algorithm.
Figure 3. Convergence comparison of the proposed method and the truncated Newton method. The green curve denotes the convergence speed of the total utility for our proposed algorithm, a distributed Newton method based on GaBP, and the blue curve denotes that of the truncated Newton method, a centralized algorithm.
Figure 4. Duality gap comparison of the proposed method and the truncated Newton method. The green curve denotes the estimation error between the primal and dual functions versus the iteration number for our proposed method, and the blue curve denotes that for the truncated Newton algorithm.
18 pages, 2373 KiB  
Article
Electricity Consumption Forecasting using Support Vector Regression with the Mixture Maximum Correntropy Criterion
by Jiandong Duan, Xuan Tian, Wentao Ma, Xinyu Qiu, Peng Wang and Lin An
Entropy 2019, 21(7), 707; https://doi.org/10.3390/e21070707 - 19 Jul 2019
Cited by 13 | Viewed by 3704
Abstract
Electricity consumption forecasting (ECF) technology plays a crucial role in the electricity market. Support vector regression (SVR) is a nonlinear prediction model that can be used for ECF. Electricity consumption (EC) data are usually nonlinear and non-Gaussian and present outliers. The traditional SVR with the mean-square error (MSE), however, is sensitive to outliers and cannot correctly represent the statistical information of errors in non-Gaussian situations. To address this problem, a novel robust forecasting method is developed in this work by using the mixture maximum correntropy criterion (MMCC). The MMCC, a novel information-theoretic cost function, can be used for non-Gaussian signal processing; therefore, in the original SVR, the MSE is replaced by the MMCC to develop a novel robust SVR method (called MMCCSVR) for ECF. Besides, the factors influencing users’ EC are investigated by a statistical data analysis method. We find that the historical temperature and historical EC are the main factors affecting future EC, and thus these two factors are used as the input of the proposed model. Finally, real EC data from a shopping mall in Guangzhou, China, are utilized to test the proposed ECF method. The forecasting results show that the proposed ECF method can effectively improve the accuracy of ECF compared with the traditional SVR and other forecasting algorithms. Full article
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)
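To give a feel for the mixture correntropy idea, the sketch below compares how the MSE and a mixture-correntropy-style loss respond to a gross outlier in the residuals. The two-Gaussian-kernel form, weights, and bandwidths are illustrative assumptions and may differ from the paper's exact MMCC formulation.

```python
import numpy as np

def mmc_loss(e, sigma1=0.5, sigma2=2.0, alpha=0.5):
    """Mixture-correntropy-style loss: 1 minus a two-Gaussian-kernel similarity.
    The two-kernel form and parameters are illustrative assumptions."""
    k = alpha * np.exp(-e**2 / (2 * sigma1**2)) \
        + (1 - alpha) * np.exp(-e**2 / (2 * sigma2**2))
    return np.mean(1.0 - k)

residuals = np.array([0.1, -0.2, 0.05, 0.15, -0.1])
with_outlier = np.append(residuals, 8.0)       # one gross outlier

print("MSE:  clean %.3f  outlier %.3f" %
      (np.mean(residuals**2), np.mean(with_outlier**2)))
print("MMCC: clean %.3f  outlier %.3f" %
      (mmc_loss(residuals), mmc_loss(with_outlier)))
# The MSE blows up with the outlier, while the bounded correntropy-based
# loss increases only modestly -- the robustness exploited in MMCCSVR.
```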
Figure 1. Monthly electricity consumption (EC) of a shopping mall in 2017.
Figure 2. Daily EC of the commercial property common area of the mall and its corresponding daily maximum temperature.
Figure 3. Flow chart of the implementation process.
Figure 4. Prediction accuracy varying with the parameters σ1 and σ2.
Figure 5. Prediction accuracy varying with the parameter σ1.
Figure 6. Forecast results of the mixture maximum correntropy criterion support vector regression (MMCCSVR) method for the mall from 4 May to 3 June 2018.
Figure 7. Forecast results of MMCCSVR compared with those of other methods from 4 May to 3 June 2018.
Figure 8. Relative error of MMCCSVR with different inputs compared with other methods from 4 May to 3 June 2018.
Figure 9. Forecast results of MMCCSVR for the mall from 26 August to 31 December 2018.
Figure 10. Relative error of MMCCSVR with different inputs compared with those of other methods from 26 August to 31 December 2018.
Full article ">
21 pages, 4005 KiB  
Article
Refined Multiscale Entropy Using Fuzzy Metrics: Validation and Application to Nociception Assessment
by José F. Valencia, Jose D. Bolaños, Montserrat Vallverdú, Erik W. Jensen, Alberto Porta and Pedro L. Gambús
Entropy 2019, 21(7), 706; https://doi.org/10.3390/e21070706 - 18 Jul 2019
Cited by 4 | Viewed by 3941
Abstract
The refined multiscale entropy (RMSE) approach is commonly applied to assess complexity as a function of the time scale. RMSE is normally based on the computation of sample entropy (SampEn), estimating complexity as conditional entropy. However, SampEn is dependent on the length and standard deviation of the data. Recently, fuzzy entropy (FuzEn) has been proposed, including several refinements, as an alternative to counteract these limitations. In this work, FuzEn, translated FuzEn (TFuzEn), translated-reflected FuzEn (TRFuzEn), inherent FuzEn (IFuzEn), and inherent translated FuzEn (ITFuzEn) were exploited as entropy-based measures in the computation of RMSE, and their performance was compared to that of SampEn. FuzEn metrics were applied to synthetic time series of different lengths to evaluate the consistency of the different approaches. In addition, electroencephalograms of patients under a sedation-analgesia procedure were analyzed based on the patient’s response after the application of painful stimulation, such as nail bed compression or endoscopy tube insertion. Significant differences in FuzEn metrics were observed over simulations and real data as a function of the data length and the pain responses. Findings indicated that FuzEn, when exploited in RMSE applications, showed behavior similar to SampEn in long series, but its consistency was better than that of SampEn in short series, both over simulations and real data. Conversely, its variants should be utilized with more caution, especially when processes exhibit an important deterministic component and/or in nociception prediction at long scales. Full article
(This article belongs to the Special Issue Information Dynamics in Brain and Physiological Networks)
Show Figures
Figure 1: Multiscale analysis with refined multiscale entropy (RMSE), using sample entropy (SampEn) (a–c), fuzzy entropy (FuzEn) (d–f), translated FuzEn (TFuzEn) (g–i), inherent FuzEn (IFuzEn) (j–l), and inherent translated FuzEn (ITFuzEn) (m–o), from 60 realizations of the type-1 simulated series (Gaussian white noise (GWN), 1/f, and AR025) for lengths N = 100 (left column), 1000 (middle column), and 10,000 (right column). In each case, the simulated series was cropped at the Nth sample.
Figure 2: Multiscale analysis with RMSE, using SampEn (a–c), FuzEn (d–f), TFuzEn (g–i), translated-reflected FuzEn (TRFuzEn) (j–l), IFuzEn (m–o), and ITFuzEn (p–r), from 30 realizations of the type-2 simulated series (LM-3.5, LM-3.7, LM-3.9, and the Henon map (HM)) for lengths N = 100 (left column), 1000 (middle column), and 10,000 (right column).
Figure 3: Prediction probability (Pk values) of the RMSE metrics, using SampEn, FuzEn, TFuzEn, IFuzEn, and ITFuzEn, for nociception assessment (2 ≤ Ramsay sedation scale (RSS) ≤ 5 vs. RSS = 6) after a firm nail-bed pressure. Pk = 0.5 represents complete randomness and Pk = 1 a perfect prediction.
Figure 4: Prediction probability (Pk values) of the RMSE metrics, using SampEn, FuzEn, TFuzEn, IFuzEn, and ITFuzEn, for nociception assessment (gag reflex (GAG) = 1 vs. GAG = 0) after endoscopy tube insertion. Pk = 0.5 means complete randomness and Pk = 1 a perfect prediction.
Figure 5: Mean values of SampEn (blue continuous lines) and FuzEn (red dotted lines) as a function of the time scale TS, obtained from the EEG segments in the responsive states 2 ≤ RSS ≤ 5 (square marker) and the unresponsive state RSS = 6 (circle marker).
Figure 6: Pk values as a function of the time scale TS for different values of the tolerance parameter r (0.10, 0.15, 0.20, 0.25, and 0.30), comparing responsive states (2 ≤ RSS ≤ 5) vs. the unresponsive state (RSS = 6), using RMSE with (a) SampEn and (b) FuzEn.
Figure A1: Empirical mode decomposition (EMD) results on a sinusoidal input signal with five frequency components (10, 50, 100, 300, and 500 Hz) with amplitudes of 30, 10, 1, 20, and 5, respectively.
Figure A2: Ratio approach: empirical distribution of the R vectors of the EEG signals used in this work. The vertical red line represents the mean of the distribution (R_i ≈ 2); the vertical black lines represent the left and right thresholds for p = 15.
Figure A3: Energy approach: G vectors of the EEG signals used in this work.
Figure A4: Trend filtering process applied to a sinusoidal input signal with five frequency components.
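Because the abstract above builds on the template-matching definition of sample entropy and its fuzzy variants, a compact single-scale sketch may help readers unfamiliar with these estimators. The Python snippet below is a generic illustration, not the authors' refined multiscale pipeline: the tolerance r = 0.2·SD, the embedding dimension m = 2, the Gaussian-like fuzzy membership exp(-(d/r)^2), and the white-noise test signal are all illustrative assumptions.

import numpy as np

def _match_sum(x, m, r, fuzzy):
    # Overlapping templates of length m, compared with the Chebyshev (max) distance.
    templates = np.array([x[i:i + m] for i in range(len(x) - m)])
    if fuzzy:
        templates = templates - templates.mean(axis=1, keepdims=True)  # remove local baseline
    total = 0.0
    for i in range(len(templates) - 1):
        d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
        total += np.sum(np.exp(-(d / r) ** 2)) if fuzzy else np.sum(d <= r)
    return total

def entropy_estimate(x, m=2, r_frac=0.2, fuzzy=False):
    # SampEn-like estimate: -ln(A/B), with B matches of length m and A of length m + 1;
    # with fuzzy=True the hard threshold is replaced by a smooth membership (FuzEn-like).
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    B = _match_sum(x, m, r, fuzzy)
    A = _match_sum(x, m + 1, r, fuzzy)
    return -np.log(A / B)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)          # white noise as a toy signal
print(entropy_estimate(x, fuzzy=False), entropy_estimate(x, fuzzy=True))

A refined multiscale version would low-pass filter and downsample x before each call; small conventions (self-match exclusion, exact template counts) vary slightly between published implementations.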
13 pages, 269 KiB  
Article
Quantum Features of Macroscopic Fields: Entropy and Dynamics
by Robert Alicki
Entropy 2019, 21(7), 705; https://doi.org/10.3390/e21070705 - 18 Jul 2019
Cited by 5 | Viewed by 3854
Abstract
Macroscopic fields such as electromagnetic, magnetohydrodynamic, acoustic or gravitational waves are usually described by classical wave equations with possible additional damping terms and coherent sources. The aim of this paper is to develop a complete macroscopic formalism including random/thermal sources, dissipation and random scattering of waves by environment. The proposed reduced state of the field combines averaged field with the two-point correlation function called single-particle density matrix. The evolution equation for the reduced state of the field is obtained by reduction of the generalized quasi-free dynamical semigroups describing irreversible evolution of bosonic quantum field and the definition of entropy for the reduced state of the field follows from the von Neumann entropy of quantum field states. The presented formalism can be applied, for example, to superradiance phenomena and allows unifying the Mueller and Jones calculi in polarization optics. Full article
(This article belongs to the Special Issue Quantum Entropies and Complexity)
30 pages, 511 KiB  
Article
Thermodynamics and Stability of Non-Equilibrium Steady States in Open Systems
by Miroslav Bulíček, Josef Málek and Vít Průša
Entropy 2019, 21(7), 704; https://doi.org/10.3390/e21070704 - 18 Jul 2019
Cited by 20 | Viewed by 4897
Abstract
Thermodynamical arguments are known to be useful in the construction of physically motivated Lyapunov functionals for nonlinear stability analysis of spatially homogeneous equilibrium states in thermodynamically isolated systems. Unfortunately, the limitation to isolated systems is essential, and standard arguments are not applicable even for some very simple thermodynamically open systems. On the other hand, the nonlinear stability of thermodynamically open systems is usually investigated using the so-called energy method. The mathematical quantity that is referred to as the “energy” is, however, in most cases not linked to the energy in the physical sense of the word. Consequently, it would seem that genuine thermodynamical concepts are of no use in the nonlinear stability analysis of thermodynamically open systems. We show that this is not the case. In particular, we propose a construction that, in the case of a simple heat conduction problem, leads to a physically well-motivated Lyapunov-type functional, which effectively replaces the artificial Lyapunov functional used in the standard energy method. The proposed construction seems to be general enough to be applied in complex thermomechanical settings. Full article
(This article belongs to the Special Issue Thermodynamic Approaches in Modern Engineering Systems)
Show Figures
Figure 1: Auxiliary functions. (a) Plot of the function f(θ) that appears as the integrand in (31) and (34); (b) plot of the function g(θ̃) that appears as the integrand in (54).
Figure 2: Construction of the Lyapunov functional V_neq(x̃_neq ∥ x̂_neq) for a non-equilibrium state x̂_neq from the Lyapunov functional V_eq(x̃_eq ∥ x̂_eq) for the rest state x̂_eq.
Figure A1: Auxiliary function g(ϑ̃) = (1/a) ln(1 + a ϑ̃/ϑ̂) − ln(1 + ϑ̃/ϑ̂) for various values of the parameter a. (a) Large-scale behaviour; (b) behaviour in the neighborhood of zero.
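Since the Figure A1 caption spells out the auxiliary function explicitly, a few numerical values can make its shape concrete. The sketch below simply evaluates g from that caption; the normalization ϑ̂ = 1 and the parameter values a in {0.25, 0.5, 0.75} are illustrative assumptions, not values taken from the paper.

import numpy as np

def g(theta_tilde, theta_hat=1.0, a=0.5):
    # Auxiliary function from the Figure A1 caption:
    # g(x) = (1/a) * ln(1 + a*x/theta_hat) - ln(1 + x/theta_hat)
    x = theta_tilde / theta_hat
    return np.log1p(a * x) / a - np.log1p(x)

grid = np.array([-0.5, -0.1, 0.0, 0.1, 1.0, 10.0])   # behaviour near zero and at larger arguments
for a in (0.25, 0.5, 0.75):
    print(a, np.round(g(grid, a=a), 4))

For these parameter choices the printed values vanish at zero and stay nonnegative elsewhere, which is the qualitative behaviour one would expect of an integrand used to build a Lyapunov-type functional.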
19 pages, 300 KiB  
Article
Information Geometrical Characterization of Quantum Statistical Models in Quantum Estimation Theory
by Jun Suzuki
Entropy 2019, 21(7), 703; https://doi.org/10.3390/e21070703 - 18 Jul 2019
Cited by 47 | Viewed by 4077
Abstract
In this paper, we classify quantum statistical models based on their information geometric properties and the estimation error bound, known as the Holevo bound, into four different classes: classical, quasi-classical, D-invariant, and asymptotically classical models. We then characterize each model by several equivalent conditions and discuss their properties. This result enables us to explore the relationships among these four models as well as to reveal a geometrical understanding of quantum statistical models. In particular, we show that each class of model can be identified by comparing quantum Fisher metrics and the properties of the tangent spaces of the quantum statistical model. Full article
(This article belongs to the Section Quantum Information)
Show Figures
Figure 1: A schematic diagram for model classification of quantum parametric models. A generic quantum parametric model M is indicated by the rectangular box. The blue vertically shadowed area represents the D-invariant model. The red horizontally shadowed area is the asymptotically classical model. The green diagonally shadowed area is the quasi-classical model. The intersection of the D-invariant model and the asymptotically classical model represents the classical model.
Figure 2: A schematic diagram for model classification for three classes: the classical (M_C), D-invariant (M_D), and asymptotically classical (M_AC) models, in terms of the four matrices G_θ^(−1), G̃_θ^(−1), Z_θ, and Z̃_θ. Two arrows in opposite directions indicate that, if the two matrices are identical, the model belongs to the class indicated between these arrows.
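The model classes above are distinguished by comparing quantum Fisher metrics. As generic background only (not the paper's derivation, which also involves the RLD metric, the Z matrices, and the Holevo bound), the sketch below computes the standard SLD quantum Fisher information for a full-rank one-parameter qubit family by solving the SLD equation in the eigenbasis of ρ; the particular family ρ(t) and the values s = 0.8, t = 0.3 are illustrative assumptions.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sld_qfi(rho, drho):
    # Solve drho = (L rho + rho L) / 2 for the SLD L in the eigenbasis of rho
    # (requires rho to be full rank), then return J = Re Tr(rho L L).
    lam, U = np.linalg.eigh(rho)
    d = U.conj().T @ drho @ U
    L = 2.0 * d / (lam[None, :] + lam[:, None])
    L = U @ L @ U.conj().T
    return float(np.real(np.trace(rho @ L @ L)))

# One-parameter qubit model: Bloch vector of fixed length s < 1 rotated by angle t.
s, t = 0.8, 0.3
rho  = 0.5 * (I2 + s * (np.sin(t) * sx + np.cos(t) * sz))
drho = 0.5 * s * (np.cos(t) * sx - np.sin(t) * sz)
print(sld_qfi(rho, drho))   # should match the known closed form s**2 = 0.64 for this rotation family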
19 pages, 367 KiB  
Article
Model Description of Similarity-Based Recommendation Systems
by Takafumi Kanamori and Naoya Osugi
Entropy 2019, 21(7), 702; https://doi.org/10.3390/e21070702 - 17 Jul 2019
Cited by 2 | Viewed by 3786
Abstract
The quality of online services highly depends on the accuracy of the recommendations they can provide to users. Researchers have proposed various similarity measures based on the assumption that similar people like or dislike similar items or people, in order to improve the accuracy of their services. Additionally, statistical models, such as the stochastic block models, have been used to understand network structures. In this paper, we discuss the relationship between similarity-based methods and statistical models using the Bernoulli mixture models and the expectation-maximization (EM) algorithm. The Bernoulli mixture model naturally leads to a completely positive matrix as the similarity matrix. We prove that most of the commonly used similarity measures yield completely positive matrices as the similarity matrix. Based on this relationship, we propose an algorithm to transform the similarity matrix to the Bernoulli mixture model. Such a correspondence provides a statistical interpretation to similarity-based methods. Using this algorithm, we conduct numerical experiments using synthetic data and real-world data provided from an online dating site, and report the efficiency of the recommendation system based on the Bernoulli mixture models. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures
Figure 1: Edges from X to Y. The bold edges mean that there are many edges between the connected groups. The broken edges mean that there are few edges between the connected groups.
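The statistical side of this paper rests on Bernoulli mixture models fitted with the EM algorithm. As a generic, hedged illustration (not the authors' algorithm for converting a similarity matrix into a mixture model), the sketch below fits a k-component Bernoulli mixture to a binary interaction matrix; the toy data, the component count k = 2, and the clipping constants are assumptions made here for demonstration.

import numpy as np

def bernoulli_mixture_em(X, k=2, n_iter=100, seed=0):
    # EM for a mixture of k multivariate Bernoulli components.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)                      # mixing weights
    mu = rng.uniform(0.25, 0.75, size=(k, d))     # per-component success probabilities
    for _ in range(n_iter):
        # E-step: responsibilities proportional to pi_z * prod_j mu_zj^x_j (1 - mu_zj)^(1 - x_j)
        log_r = X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T + np.log(pi)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights and Bernoulli parameters from the soft assignments.
        nk = r.sum(axis=0)
        pi = nk / n
        mu = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, mu, r

# Toy "user-item" data: two user groups preferring disjoint halves of the items.
rng = np.random.default_rng(1)
X = np.vstack([(rng.random((50, 8)) < 0.8 * (np.arange(8) < 4)),
               (rng.random((50, 8)) < 0.8 * (np.arange(8) >= 4))]).astype(float)
pi, mu, r = bernoulli_mixture_em(X)
print(np.round(pi, 2))
print(np.round(mu, 2))

In a recommendation setting, the responsibilities r play the role of soft group memberships, which is the kind of object the paper relates back to completely positive similarity matrices.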
11 pages, 615 KiB  
Article
A Security Enhanced Encryption Scheme and Evaluation of Its Cryptographic Security
by Miodrag J. Mihaljević
Entropy 2019, 21(7), 701; https://doi.org/10.3390/e21070701 - 17 Jul 2019
Cited by 6 | Viewed by 3598
Abstract
An approach for security enhancement of a class of encryption schemes is pointed out and its security is analyzed. The approach is based on certain results of coding and information theory regarding communication channels with erasure and deletion errors. In the security enhanced encryption scheme, the wiretapper faces a problem of cryptanalysis after a communication channel with bit deletions, while a legitimate party faces a problem of decryption after a channel with bit erasures. This paper proposes the encryption-decryption paradigm for the security enhancement of lightweight block ciphers based on dedicated error-correction coding and a simulator of the deletion channel controlled by the secret key. The security enhancement is analyzed in terms of the related probabilities, equivocation, mutual information and channel capacity. The cryptographic evaluation of the enhanced encryption includes employment of certain recent results regarding the upper bounds on the capacity of channels with deletion errors. It is shown that the probability of correct classification, which determines the cryptographic security, depends on the deletion channel capacity, i.e., the equivocation after this channel, and on the number of codewords in the employed error-correction coding scheme. Consequently, assuming that the basic encryption scheme has a certain security level, it is shown that the security enhancement factor is a function of the deletion rate and the dimension of the vectors subject to error-correction encoding, i.e., the dimension of the encryption block. Full article
(This article belongs to the Special Issue Information-Theoretic Security II)
Show Figures
Figure 1: A model of the deletion channel.
Figure 2: Model of the decryption at a legitimate party versus cryptanalysis at the wiretapper side, which faces the problem of cryptanalysis after a channel with deletion errors.
Figure 3: Model of a security enhanced encryption employing a simulator of a noisy channel which appears as a deletion channel from the wiretapper's perspective: the upper part shows the transmitter, and the lower part the receiver.
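To make the erasure-versus-deletion asymmetry at the heart of the scheme concrete, here is a toy Python sketch (not the paper's construction): the same key-selected positions are erased with known locations for the legitimate receiver but silently removed from the wiretapper's observation. The PRNG seed standing in for the secret key and the block length are illustrative assumptions.

import numpy as np

def erasure_channel(bits, positions):
    # Legitimate receiver: the selected positions are erased but their locations are known.
    out = bits.astype(object)
    out[positions] = None          # None marks a known erasure
    return out

def deletion_channel(bits, positions):
    # Wiretapper: the same positions are removed with no marker, so synchronization is lost.
    keep = np.ones(len(bits), dtype=bool)
    keep[positions] = False
    return bits[keep]

rng = np.random.default_rng(42)        # stands in for the secret-key-controlled simulator
bits = rng.integers(0, 2, size=16)
positions = rng.choice(len(bits), size=4, replace=False)
print("sent      :", bits)
print("legitimate:", erasure_channel(bits, positions))
print("wiretapper:", deletion_channel(bits, positions))

In the actual scheme the affected coordinates are derived from the secret key and the remaining redundancy comes from a dedicated error-correction code, neither of which is modeled here.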
18 pages, 404 KiB  
Article
Rateless Codes-Based Secure Communication Employing Transmit Antenna Selection and Harvest-To-Jam under Joint Effect of Interference and Hardware Impairments
by Phu Tran Tin, Tan N. Nguyen, Nguyen Q. Sang, Tran Trung Duy, Phuong T. Tran and Miroslav Voznak
Entropy 2019, 21(7), 700; https://doi.org/10.3390/e21070700 - 16 Jul 2019
Cited by 14 | Viewed by 4362
Abstract
In this paper, we propose a rateless codes-based communication protocol to provide security for wireless systems. In the proposed protocol, a source uses the transmit antenna selection (TAS) technique to transmit Fountain-encoded packets to a destination in the presence of an eavesdropper. Moreover, a cooperative jammer node harvests energy from the radio frequency (RF) signals of the source and the interference sources to generate jamming noise at the eavesdropper. The data transmission terminates as soon as the destination has received a sufficient number of encoded packets to decode the original data of the source. To obtain secure communication, the destination must receive sufficient encoded packets before the eavesdropper does. The combination of the TAS and harvest-to-jam techniques provides security and energy efficiency by reducing the number of data transmissions, increasing the quality of the data channel, decreasing the quality of the eavesdropping channel, and supplying energy to the jammer. The main contribution of this paper is to derive exact closed-form expressions for the outage probability (OP), the probability of successful and secure communication (SS), the intercept probability (IP) and the average number of time slots used by the source over a Rayleigh fading channel under the joint impact of co-channel interference and hardware impairments. Then, Monte Carlo simulations are presented to verify the theoretical results. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures
Graphical abstract
Figure 1: System model of the proposed scheme.
Figure 2: ρ_D and ρ_E as a function of Q_S in dB when M = 3, α = 0.3, κ_D² = κ_E² = 0.1, and C_th = 0.75.
Figure 3: OP as a function of Q_S in dB when Q_I = 7.5 dB, M = 2, α = 0.1, κ_D² = κ_E² = 0, and C_th = 1.
Figure 4: SS as a function of Q_S in dB when Q_I = 10 dB, α = 0.1, κ_D² = 0.1, κ_E² = 0, N_th = 20, and C_th = 1.5.
Figure 5: SS as a function of α when Q_S = Q_I = 15 dB, M = 3, κ_E² = 0.1, N_th = 15, and C_th = 0.7.
Figure 6: IP as a function of M when Q_S = Q_I = 20 dB, κ_D² = 0.2, α = 0.3, N_th = 20, and C_th = 0.5.
Figure 7: IP as a function of N_th when Q_S = Q_I = 20 dB, M = 2, κ_D² = κ_E² = 0, and C_th = 0.5.
Figure 8: OP and IP as a function of α when Q_S = Q_I = 15 dB, M = 4, κ_D² = κ_E² = 0, and N_th = 16.
Figure 9: OP and IP as a function of N_th when Q_S = Q_I = 15 dB, M = 3, κ_D² = κ_E² = 0.1, and C_th = 0.75.
Figure 10: Average number of time slots as a function of Q_S in dB when Q_I = 10 dB, α = 0.2, κ_D² = κ_E² = 0.05, N_th = 17, and C_th = 1.
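The protocol above relies on rateless (Fountain) encoding, where the destination only needs enough encoded packets rather than specific ones. The Python sketch below illustrates the idea with a random linear fountain over GF(2) and Gaussian elimination at the receiver; it is a generic stand-in, not the paper's Fountain code or its TAS/harvest-to-jam physical-layer model, and the block length k = 16 and the number of received packets are arbitrary assumptions.

import numpy as np

def fountain_encode(source_bits, n_packets, seed=0):
    # Random linear fountain over GF(2): each packet is the XOR of a random subset of
    # the k source bits; the subset mask travels with the packet as its header.
    rng = np.random.default_rng(seed)
    k = len(source_bits)
    masks = rng.integers(0, 2, size=(n_packets, k))
    payloads = (masks @ source_bits) % 2
    return masks, payloads

def fountain_decode(masks, payloads):
    # Gaussian elimination over GF(2); decoding succeeds once the received masks reach
    # full column rank, which typically needs only slightly more than k packets.
    A = np.concatenate([masks, payloads[:, None]], axis=1) % 2
    k, row = masks.shape[1], 0
    for col in range(k):
        piv = np.nonzero(A[row:, col])[0]
        if len(piv) == 0:
            return None                               # not yet decodable
        A[[row, row + piv[0]]] = A[[row + piv[0], row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    return A[:k, -1]

rng = np.random.default_rng(3)
data = rng.integers(0, 2, size=16)
masks, payloads = fountain_encode(data, n_packets=32, seed=7)
print(np.array_equal(fountain_decode(masks, payloads), data))   # True once enough packets have arrived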
25 pages, 25441 KiB  
Article
Multivariate Pointwise Information-Driven Data Sampling and Visualization
by Soumya Dutta, Ayan Biswas and James Ahrens
Entropy 2019, 21(7), 699; https://doi.org/10.3390/e21070699 - 16 Jul 2019
Cited by 14 | Viewed by 6521
Abstract
With the increasing computing capabilities of modern supercomputers, the size of the data generated from scientific simulations is growing rapidly. As a result, application scientists need effective data summarization techniques that can reduce large-scale multivariate spatiotemporal data sets while preserving the important data properties, so that the reduced data can answer domain-specific queries involving multiple variables with sufficient accuracy. While analyzing complex scientific events, domain experts often analyze and visualize two or more variables together to obtain a better understanding of the characteristics of the data features. Therefore, data summarization techniques are required to analyze multi-variable relationships in detail and then perform data reduction such that the important features involving multiple variables are preserved in the reduced data. To achieve this, in this work, we propose a data sub-sampling algorithm for statistical data summarization that leverages pointwise information theoretic measures to quantify the statistical association of data points considering multiple variables and generates sub-sampled data that preserve the statistical association among the variables. Using such reduced sampled data, we show that multivariate feature queries and analyses can be carried out effectively. The efficacy of the proposed multivariate association-driven sampling algorithm is demonstrated by applying it to several scientific data sets. Full article
(This article belongs to the Special Issue Information Theory Application in Visualization)
Show Figures
Figure 1: Visualization of the Pressure and Velocity fields of the Hurricane Isabel data set. The hurricane eye at the center of the Pressure field and the high-velocity region around the hurricane eye can be observed.
Figure 2: PMI computed from the Pressure and Velocity fields of the Hurricane Isabel data set. (a) 2D plot of PMI values for all value pairs of Pressure and Velocity; (b) the PMI field for analyzing the PMI values in the spatial domain. Around the hurricane eye, the eyewall is highlighted as a high-PMI region, indicating a joint feature in the data set involving the Pressure and Velocity fields.
Figure 3: Sampling result on the Isabel data set when the Pressure and Velocity variables are used. (a) Random sampling; (b) the proposed pointwise information-driven sampling, for sampling fraction 0.03. Comparing with the PMI field in Figure 2b, the proposed method samples densely from the regions where the statistical association between Pressure and Velocity is stronger.
Figure 4: Sampling result for the Isabel data set when three variables (QGraup, QCloud, and Precipitation) are used. The generalized specific correlation measure is used to compute multivariate associativity for the data points considering all three variables. (a–c) Rendering of the QGraup, QCloud, and Precipitation fields, respectively; (d) rendering of the sampled data points produced by the proposed multivariate sampling algorithm. The cloud and the rain bands show stronger statistical association among the three variables and hence are sampled densely. Sampling fraction: 0.05.
Figure 5: Results of the proposed sampling technique when the number of histogram bins used to compute PMI is varied. The overall result remains similar, without significantly impacting the outcome of the sampling algorithm.
Figure 6: Multivariate query-driven analysis on the sampled Hurricane Isabel data for the query −100 < Pressure < −4900 AND Velocity > 10. (a) All points selected by the proposed sampling algorithm using Pressure and Velocity; (b) points selected by the query on the raw data; (c) points selected when the query is performed on the sub-sampled data produced by the proposed scheme; (d) result of the query on a randomly sampled data set. Sampling fraction: 0.07.
Figure 7: Multivariate query-driven analysis on the sampled Turbulent Combustion data for the query 0.3 < mixfrac < 0.7 AND 0.0006 < Y_OH < 0.1, using the mixfrac and Y_OH variables; panels (a–d) as in Figure 6. Sampling fraction: 0.07.
Figure 8: Multivariate query-driven analysis on the sampled Asteroid impact data for the query 0.13 < tev < 0.5 AND 0.45 < v02 < 1.0, using the tev and v02 variables; panels (a–d) as in Figure 6. Sampling fraction: 0.07.
Figure 9: Reconstruction-based visualization of the Velocity field of the Hurricane Isabel data set, using linear interpolation from the sub-sampled data. (a) Original raw data; (b) reconstruction from the sub-sampled data generated by the proposed method; (c) reconstruction from randomly sampled data. Sampling fraction: 0.05.
Figure 10: Reconstruction-based visualization of the mixfrac field of the Turbulent Combustion data set; panels (a–c) as in Figure 9. Sampling fraction: 0.05.
Figure 11: Reconstruction-based visualization of the Y_OH field of the Turbulent Combustion data set; panels (a–c) as in Figure 9. Sampling fraction: 0.05.
Figure 12: Reconstruction-based visualization of the tev field of the Asteroid impact data set; panels (a–c) as in Figure 9. Sampling fraction: 0.05.
Figure 13: Regions of interest (ROI) of the different data sets used for analysis. (a) ROI in the Isabel data set, where the hurricane eye feature is selected; (b) ROI for the Combustion data set, where the turbulent flame region is highlighted; (c) ROI for the Asteroid data set, indicating the region where the asteroid has impacted the ocean surface and the splash of water is ejected into the environment.
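Since the sampling algorithm is driven by pointwise information measures, a minimal sketch of the two-variable case may be useful: estimate PMI(x, y) = log p(x, y)/(p(x) p(y)) from a joint histogram, map it back to every data point, and keep points with probability increasing in their PMI. The bin count, the synthetic correlated fields, and the particular acceptance rule below are illustrative assumptions and do not reproduce the paper's generalized multivariate measure.

import numpy as np

def pmi_scores(a, b, bins=32):
    # Pointwise mutual information per data point, estimated from a 2D histogram.
    ia = np.clip(np.digitize(a, np.histogram_bin_edges(a, bins)) - 1, 0, bins - 1)
    ib = np.clip(np.digitize(b, np.histogram_bin_edges(b, bins)) - 1, 0, bins - 1)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (ia, ib), 1.0)
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    return np.log(joint[ia, ib] / (pa[ia] * pb[ib]))   # observed cells always have mass > 0

rng = np.random.default_rng(0)
a = rng.normal(size=10000)
b = 0.7 * a + 0.3 * rng.normal(size=10000)            # b statistically associated with a
scores = pmi_scores(a, b)
w = scores - scores.min()                             # shift to nonnegative importance weights
keep = rng.random(scores.size) < 0.05 * w / w.mean()  # roughly a 5% sampling fraction
print(scores.mean(), keep.mean())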
14 pages, 4088 KiB  
Article
Increased Sample Entropy in EEGs During the Functional Rehabilitation of an Injured Brain
by Qiqi Cheng, Wenwei Yang, Kezhou Liu, Weijie Zhao, Li Wu, Ling Lei, Tengfei Dong, Na Hou, Fan Yang, Yang Qu and Yong Yang
Entropy 2019, 21(7), 698; https://doi.org/10.3390/e21070698 - 16 Jul 2019
Cited by 25 | Viewed by 4143
Abstract
Complex nerve remodeling occurs in the injured brain area during functional rehabilitation after a brain injury; however, its mechanism has not been thoroughly elucidated. Neural remodeling can lead to changes in electrophysiological activity, which can be detected in an electroencephalogram (EEG). In this paper, we used EEG band energy, approximate entropy (ApEn), sample entropy (SampEn), and Lempel–Ziv complexity (LZC) features to characterize the intrinsic rehabilitation dynamics of the injured brain area, thus providing a means of detecting and exploring the mechanism of neurological remodeling during the recovery process after brain injury. Rats in the injury group (n = 12) and the sham group (n = 12) were used to record bilateral symmetrical EEGs on days 1, 4, and 7 after a unilateral brain injury in awake model rats. The open field test (OFT) experiments were performed in three groups: an injury group, a sham group, and a control group (n = 10). An analysis of the EEG data using the energy, ApEn, SampEn, and LZC features demonstrated that the increase in SampEn was associated with functional recovery. After the brain injury, the energy values of the delta1 bands on day 4; the delta2 bands on days 4 and 7; the theta, alpha, and beta bands; and the values of ApEn, SampEn, and LZC of the cortical EEG signal on days 1, 4 and 7 were significantly lower in the injured brain area than in the non-injured area. During the recovery of the injured brain area, the values of the beta bands, ApEn, and SampEn of the injury group increased significantly and gradually became equal to the values of the sham group. The improvement in the motor function of the model rats correlated significantly with the increase in SampEn. This study provides a method based on EEG nonlinear features for measuring neural remodeling in injured brain areas during brain function recovery. The results may aid in the study of neural remodeling mechanisms. Full article
Show Figures
Figure 1: Flow chart of the data processing procedure. EEG—electroencephalogram; LZC—Lempel–Ziv complexity; SampEn—sample entropy; ApEn—approximate entropy.
Figure 2: Open field test (OFT) scores in the injured group, sham group, and control group (* p < 0.05, ** p < 0.01). (a) Comparison of differences between groups; (b) comparison of the three groups over time.
Figure 3: Comparison of EEG signal features between the injured area and the symmetrical non-injured area. (a) Energy value in the delta band; (b) energy value in the theta band; (c) energy value in the alpha and beta (beta1 and beta2) bands; (d) ApEn, SampEn, and LZC (* p < 0.05, ** p < 0.01).
Figure 4: Changes in the EEG feature values in the injured area on days 1, 4, and 7 after brain injury. (a) Energy values of the delta1 band; (b) delta2 band; (c) theta band; (d) alpha band; (e) beta1 band; (f) beta2 band; (g) ApEn; (h) LZC; (i) SampEn (* p < 0.05, ** p < 0.01).
Figure 5: Relationship between EEG feature values and motor function recovery. (a) Energy in each band; (b) nonlinear features; (c) SampEn in the sham group.
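Of the features listed in this abstract, Lempel–Ziv complexity is the least standardized. The sketch below shows one simple dictionary-based (LZ78-style) variant on a median-binarized signal, with the usual n/log2(n) normalization; published EEG studies, possibly including this one, often use the closely related LZ76 counting instead, so treat this as an illustrative approximation, and the test signals are arbitrary.

import numpy as np

def lz_complexity(x):
    # Binarize around the median, then count new phrases in a left-to-right dictionary parse.
    med = np.median(x)
    s = ''.join('1' if v > med else '0' for v in x)
    phrases, phrase = set(), ''
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ''
    c = len(phrases) + (1 if phrase else 0)
    n = len(s)
    return c * np.log2(n) / n          # normalized so random sequences tend toward values near 1

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)                 # irregular signal: higher normalized LZC
tone = np.sin(np.linspace(0, 40 * np.pi, 2000))   # regular signal: lower normalized LZC
print(lz_complexity(noise), lz_complexity(tone))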
25 pages, 637 KiB  
Article
Entropy and Semi-Entropies of LR Fuzzy Numbers’ Linear Function with Applications to Fuzzy Programming
by Jian Zhou, Chuan Huang, Mingxuan Zhao and Hui Li
Entropy 2019, 21(7), 697; https://doi.org/10.3390/e21070697 - 16 Jul 2019
Cited by 5 | Viewed by 3796
Abstract
As a crucial concept for characterizing uncertainty, entropy has been widely used in fuzzy programming problems, while involving complicated calculations. To simplify the operations so as to broaden its applicable areas, this paper investigates entropy within the framework of credibility theory and derives formulas for calculating the entropy of regular LR fuzzy numbers by virtue of the inverse credibility distribution. After verifying a favorable property of this operator, a calculation formula for the entropy of a linear function of LR fuzzy numbers is also proposed. Furthermore, considering the strength of semi-entropy in measuring one-sided uncertainty, the lower and upper semi-entropies, as well as the corresponding formulas, are suggested to handle return-oriented and cost-oriented problems, respectively. Finally, utilizing entropy and semi-entropies as risk measures, two types of entropy optimization models and their equivalent formulations derived from the proposed formulas are given according to different decision criteria, providing an effective modeling method for fuzzy programming from the perspective of entropy. The numerical examples demonstrate the high efficiency and good performance of the proposed methods in decision making. Full article
(This article belongs to the Section Multidisciplinary Applications)
Show Figures
Figure 1: Membership functions (MFs) of the fuzzy numbers in Equations (2)–(5) in Examples 1–4. (a) Triangular T(a, b, c)_LR; (b) parabolic P(a, b, c)_LR; (c) normal N(a, b, c)_LR; (d) mixture M(a, b, c)_LR.
Figure 2: The function S(t) in Definition 6.
Figure 3: Optimal portfolios and expected overall returns derived by the mean-entropy and mean-semi-entropy optimization models under different lower limits a in Example 15. The bars in two (three) colors stand for the allocation schemes in the mean-entropy (mean-semi-entropy) model, and the line in brown (green) represents the expected return of each portfolio.
Figure 4: Optimal schemes and corresponding expected total costs obtained by the mean-entropy and mean-semi-entropy optimization models under different upper limits b in Example 16. The bars in two (three) colors stand for the allocation schemes in the mean-entropy (mean-semi-entropy) model, and the line in brown (green) represents the expected return of each scheme.
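For readers outside the credibility-theory literature: the entropy used here is commonly defined as H[ξ] = ∫ S(Cr{ξ = x}) dx with S(t) = −t ln t − (1 − t) ln(1 − t), presumably the same integrand shown in Figure 2. The sketch below evaluates that integral numerically for a triangular fuzzy number and compares it with the closed form (c − a)/2; it illustrates the definition only and does not reproduce the paper's general LR or semi-entropy formulas.

import numpy as np

def S(t):
    # S(t) = -t ln t - (1 - t) ln(1 - t), with S(0) = S(1) = 0.
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

def triangular_membership(x, a, b, c):
    return np.clip(np.where(x < b, (x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def credibilistic_entropy(a, b, c, n=200001):
    # H = integral of S(Cr{xi = x}) dx, with Cr{xi = x} = mu(x) / 2 for a regular fuzzy number.
    x = np.linspace(a, c, n)
    vals = S(triangular_membership(x, a, b, c) / 2.0)
    return vals.sum() * (x[1] - x[0])

a, b, c = 1.0, 2.0, 4.0
print(credibilistic_entropy(a, b, c), (c - a) / 2.0)   # both should be approximately 1.5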
29 pages, 3508 KiB  
Review
Beyond Boltzmann–Gibbs–Shannon in Physics and Elsewhere
by Constantino Tsallis
Entropy 2019, 21(7), 696; https://doi.org/10.3390/e21070696 - 15 Jul 2019
Cited by 36 | Viewed by 8044
Abstract
The pillars of contemporary theoretical physics are classical mechanics, Maxwell electromagnetism, relativity, quantum mechanics, and Boltzmann–Gibbs (BG) statistical mechanics, including its connection with thermodynamics. The BG theory describes amazingly well the thermal equilibrium of a plethora of so-called simple systems. However, BG statistical mechanics and its basic additive entropy S_BG started, in recent decades, to exhibit failures or inadequacies in an increasing number of complex systems. The emergence of such intriguing features became apparent in quantum systems as well, such as black holes and other area-law-like scenarios for the von Neumann entropy. In a different arena, the efficiency of the Shannon entropy—as the BG functional is currently called in engineering and communication theory—started to be perceived as not necessarily optimal in the processing of images (e.g., medical ones) and time series (e.g., economic ones). Such is the case in the presence of generic long-range space correlations, long memory, sub-exponential sensitivity to the initial conditions (hence vanishing largest Lyapunov exponents), and similar features. Finally, we witnessed, during the last two decades, an explosion of asymptotically scale-free complex networks. This wide range of important systems eventually gave support, since 1988, to the generalization of the BG theory. Nonadditive entropies generalizing the BG one and their consequences have been introduced and intensively studied worldwide. The present review focuses on these concepts and their predictions, verifications, and applications in physics and elsewhere. Some selected examples (in quantum information, high- and low-energy physics, low-dimensional nonlinear dynamical systems, earthquakes, turbulence, long-range interacting systems, and scale-free networks) illustrate successful applications. The grounding thermodynamical framework is briefly described as well. Full article
(This article belongs to the Section Entropy Reviews)
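As a concrete anchor for the review, recall the nonadditive entropy introduced in 1988, S_q = k (1 − Σ_i p_i^q)/(q − 1), which recovers the BG (Shannon) form −k Σ_i p_i ln p_i as q → 1. The short Python sketch below simply evaluates this definition numerically; the example distribution is arbitrary.

import numpy as np

def tsallis_entropy(p, q, k=1.0):
    # S_q = k * (1 - sum_i p_i**q) / (q - 1); the q -> 1 limit gives the BG/Shannon entropy.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -k * np.sum(p * np.log(p))
    return k * (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.25, 0.125, 0.125])     # an arbitrary normalized distribution
for q in (0.5, 1.0, 1.5, 2.0):
    print(q, round(tsallis_entropy(p, q), 4))

The q-exponential distributions appearing throughout the figures are the maximizers of S_q under appropriate constraints.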
Show Figures
Figure 1: Applications in high-energy physics. Top: comparison of A e_q^(−E_T/T), where E_T = √(m² + p_T²), with the experimental transverse momentum distribution of hadrons in pp collisions at central rapidity y; the corresponding Boltzmann–Gibbs (purely exponential) distribution is shown as a dashed curve, and for better visualization both the data and the analytical curves have been divided by a constant factor, as indicated. The fittings hold over as many as 14 ordinate decades. The data/fit ratios shown at the bottom exhibit a roughly log-periodic behavior on top of the q-exponential one, which is remarkably well fitted by introducing a small imaginary part into the q-index (e.g., q = 1.14 + i 0.03) [61,62]. From [56]. Bottom: the measured AMS-02 data are very well fitted by linear combinations of escort and standard distributions with q1 = 13/11 = 1.1818… and q2 = 1/(2 − q1) = 11/9 = 1.2222… From [63].
<b>Bottom:</b> The measured AMS-02 data are very well fitted by linear combinations of escort and standard distributions with <math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>13</mn> <mo>/</mo> <mn>11</mn> <mo>=</mo> <mn>1.1818</mn> <mo>…</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>/</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>−</mo> <msub> <mi>q</mi> <mn>1</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <mn>11</mn> <mo>/</mo> <mn>9</mn> <mo>=</mo> <mn>1.2222</mn> <mo>…</mo> </mrow> </semantics></math> From [<a href="#B63-entropy-21-00696" class="html-bibr">63</a>].</p>
Full article ">Figure 2
<p><b>Applications in low-energy physics</b>. Experimental verification in granular matter of the scaling relation predicted in 1996 [<a href="#B67-entropy-21-00696" class="html-bibr">67</a>]. (<b>Top left:</b>) Type of apparatus that is used (from [<a href="#B71-entropy-21-00696" class="html-bibr">71</a>]); (<b>top right:</b>) dependence of the index <span class="html-italic">q</span> of the <span class="html-italic">q</span>-Gaussian distribution of fluctuations on a wide range of the experimental parameter <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <msqrt> <mrow> <mo>Δ</mo> <mi>γ</mi> </mrow> </msqrt> </mrow> </semantics></math>; (<b>middle left:</b>) dependence of the anomalous diffusion exponent <math display="inline"><semantics> <mi>α</mi> </semantics></math> (<math display="inline"><semantics> <msup> <mi>x</mi> <mn>2</mn> </msup> </semantics></math> scales with <math display="inline"><semantics> <msup> <mi>t</mi> <mi>α</mi> </msup> </semantics></math>) on the same experimental parameter and verification, within a 2% error bar, of the 1996 prediction <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <msub> <mi>α</mi> <mi>P</mi> </msub> <mo>≡</mo> <mn>2</mn> <mo>/</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>−</mo> <mi>q</mi> <mo>)</mo> </mrow> </mrow> </semantics></math> [<a href="#B67-entropy-21-00696" class="html-bibr">67</a>]. Notice that in the <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <msqrt> <mrow> <mo>Δ</mo> <mi>γ</mi> </mrow> </msqrt> <mo>→</mo> <mn>0</mn> </mrow> </semantics></math> limit, the BG values <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>q</mi> <mo>,</mo> <mi>α</mi> <mo>)</mo> <mo>=</mo> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math> emerge as the expected extrapolations. From [<a href="#B70-entropy-21-00696" class="html-bibr">70</a>]. Experimental verification for cold atoms of the 2003 prediction by Lutz [<a href="#B72-entropy-21-00696" class="html-bibr">72</a>]. (<b>Middle right:</b>) (<b>a, in Middle right</b>) results of quantum Monte Carlo simulations for the momentum distribution of atoms cooled in a 1D optical lattice. The data points correspond to the average of <math display="inline"><semantics> <msup> <mn>10</mn> <mn>4</mn> </msup> </semantics></math> atomic trajectories. For each trajectory, the atom is initially in the ground state of a given well. The depth of the optical lattice is <math display="inline"><semantics> <mrow> <msub> <mi>U</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>60</mn> <msub> <mi>E</mi> <mi>r</mi> </msub> </mrow> </semantics></math>. The line is the best fit to the data with a <span class="html-italic">q</span>-Gaussian with <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1.791</mn> <mo>±</mo> <mn>0.004</mn> </mrow> </semantics></math> (adjusted <math display="inline"><semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>0.995</mn> </mrow> </semantics></math>). (<b>b, in Middle right</b>) Values of <span class="html-italic">q</span> as a function of the depth of the optical potential. The data points correspond to the full quantum Monte Carlo simulations, the line representing the analytical prediction <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1</mn> <mo>+</mo> <mn>44</mn> <msub> <mi>E</mi> <mi>r</mi> </msub> <mo>/</mo> <msub> <mi>U</mi> <mn>0</mn> </msub> </mrow> </semantics></math> [<a href="#B72-entropy-21-00696" class="html-bibr">72</a>]. 
(<b>Bottom left:</b>) (<b>a, in Bottom left</b>) experimental results for the momentum distribution of cold atoms in a 3D dissipative optical lattice (data points) and their best fit <span class="html-italic">q</span>-Gaussian (solid line). The obtained <span class="html-italic">q</span> value (<math display="inline"><semantics> <mrow> <mo>=</mo> <mn>1.310</mn> <mo>±</mo> <mn>0.015</mn> </mrow> </semantics></math>) is derived by fitting only the right part of the momentum distribution (adjusted <math display="inline"><semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>0.9985</mn> </mrow> </semantics></math>). The parameter of the optical lattice is <math display="inline"><semantics> <mrow> <msub> <mi>ω</mi> <mi>v</mi> </msub> <mo>/</mo> <mrow> <mo>(</mo> <mn>2</mn> <mi>π</mi> <mo>)</mo> </mrow> <mo>=</mo> <mn>20.8</mn> <mspace width="0.166667em"/> <mi>kHz</mi> </mrow> </semantics></math>. The distribution is normalized so that its maximum equals unity. (<b>b, in Bottom left</b>) Values for <span class="html-italic">q</span> as a function of the vibrational frequency at the bottom of the well, as obtained by fitting the experimental data with a <span class="html-italic">q</span>-Gaussian. (<b>Bottom right:</b>) (<b>a, in Bottom right</b>) experimental results for the atomic momentum distribution (black data points) and their best fit with a <span class="html-italic">q</span>-Gaussian (black solid line). The value of <span class="html-italic">q</span> is indicated in the figure (adjusted <math display="inline"><semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> <mo>=</mo> <mn>0.9985</mn> </mrow> </semantics></math>). The parameter of the optical lattice is <math display="inline"><semantics> <mrow> <msub> <mi>ω</mi> <mi>v</mi> </msub> <mo>/</mo> <mrow> <mo>(</mo> <mn>2</mn> <mi>π</mi> <mo>)</mo> </mrow> <mo>=</mo> <mn>27.5</mn> <mspace width="0.166667em"/> <mi>kHz</mi> </mrow> </semantics></math>. For comparison, a Gaussian is indicated as well (red line). (<b>b, in Bottom right</b>) The data points for the distribution in the high-momenta region. The solid line represents the best power-law fit. From [<a href="#B73-entropy-21-00696" class="html-bibr">73</a>,<a href="#B74-entropy-21-00696" class="html-bibr">74</a>].</p>
Full article ">Figure 3
<p>Data collapse of probability density functions for the cases <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <msup> <mn>2</mn> <mrow> <mn>2</mn> <mi>n</mi> </mrow> </msup> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mn>2</mn> <mi>n</mi> </mrow> </semantics></math> is odd (<b>top</b>), or even (<b>bottom</b>). As <span class="html-italic">n</span> increases, a good fit using a <span class="html-italic">q</span>-Gaussian <math display="inline"><semantics> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>y</mi> <mo>)</mo> </mrow> <mo>/</mo> <mi>P</mi> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>=</mo> <msubsup> <mi>e</mi> <mi>q</mi> <mrow> <mo>−</mo> <mi>β</mi> <msup> <mrow> <mo>[</mo> <mi>y</mi> <mi>P</mi> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>]</mo> </mrow> <mn>2</mn> </msup> </mrow> </msubsup> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>q</mi> <mo>,</mo> <mi>β</mi> <mo>)</mo> <mspace width="3.33333pt"/> <mo>≃</mo> <mspace width="3.33333pt"/> <mo>(</mo> <mn>1.68</mn> <mo>,</mo> <mn>6.2</mn> <mo>)</mo> </mrow> </semantics></math> (<b>top</b>) and <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>q</mi> <mo>,</mo> <mi>β</mi> <mo>)</mo> <mo>≃</mo> <mo>(</mo> <mn>1.70</mn> <mo>,</mo> <mn>6.2</mn> <mo>)</mo> </mrow> </semantics></math> (<b>bottom</b>) is obtained for increasingly large regions. <span class="html-italic">Inset:</span> Linear-linear plots of the data for a better visualization of the central part. From [<a href="#B75-entropy-21-00696" class="html-bibr">75</a>].</p>
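For readers less familiar with the notation used throughout these captions, the fitted forms are q-exponentials and q-Gaussians. The sketch below uses the standard textbook definitions (it is not code from the cited works; the parameter values are simply those quoted in the caption above, and the P(0) scaling of the abscissa is absorbed into beta):

```python
# Standard q-exponential, e_q^x = [1 + (1 - q) x]_+^{1/(1 - q)}, and the q-Gaussian built from it;
# both reduce to the ordinary exponential/Gaussian in the q -> 1 limit.
import numpy as np

def q_exp(x, q):
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)   # cut-off at zero where the bracket goes negative
    return base ** (1.0 / (1.0 - q))

def q_gaussian(y, q, beta):
    return q_exp(-beta * y ** 2, q)               # plays the role of P(y)/P(0) in the caption

y = np.linspace(-3.0, 3.0, 7)
print(q_gaussian(y, q=1.68, beta=6.2))            # parameters quoted for the top panel
print(q_gaussian(y, q=1.001, beta=6.2))           # q close to 1 approaches the plain Gaussian e^(-beta y^2)
```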
Full article ">Figure 4
<p>Data. Normalized probability distribution function of the attractors, where <span class="html-italic">T</span> is the number of terms in the sum and <span class="html-italic">M</span> is the number of initial conditions. (<b>Top:</b>) For <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>18</mn> </msup> </mrow> </semantics></math>. (<b>Bottom:</b>) For <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>0.2</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>22</mn> </msup> </mrow> </semantics></math>. (In the Inset, the central part is zoomed for a better visualization). From [<a href="#B80-entropy-21-00696" class="html-bibr">80</a>].</p>
Full article ">Figure 5
<p>Quantiles associated with the distribution for the Cretan earthquake sequence return intervals (earthquake return times). The solid curve corresponds to the <math display="inline"><semantics> <mi>κ</mi> </semantics></math>-Weibull distribution based on the <math display="inline"><semantics> <mi>κ</mi> </semantics></math>-exponential function (Equation (<a href="#FD12-entropy-21-00696" class="html-disp-formula">12</a>)), which, under appropriate constraints, extremizes the Kaniadakis <math display="inline"><semantics> <mi>κ</mi> </semantics></math>-entropy. From [<a href="#B81-entropy-21-00696" class="html-bibr">81</a>].</p>
Full article ">Figure 6
<p>Experimental measurements of histograms of accelerations obtained by La Mantia et al. [<a href="#B86-entropy-21-00696" class="html-bibr">86</a>] and the Beck–Cohen lognormal superstatistics distribution (solid curve). From [<a href="#B87-entropy-21-00696" class="html-bibr">87</a>].</p>
Full article ">Figure 7
<p>Representation of the different size-scaling regimes for classical <span class="html-italic">d</span>-dimensional systems. For attractive long-range interactions (i.e., <math display="inline"><semantics> <mrow> <mn>0</mn> <mo>≤</mo> <mi>α</mi> <mo>/</mo> <mi>d</mi> <mo>≤</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mi>α</mi> </semantics></math> characterizing the interaction range in a potential with the form <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <msup> <mi>r</mi> <mi>α</mi> </msup> </mrow> </semantics></math>; for example, Newtonian gravitation corresponds to <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>d</mi> <mo>,</mo> <mi>α</mi> <mo>)</mo> <mo>=</mo> <mo>(</mo> <mn>3</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>), we may distinguish three classes of thermodynamic variables, namely, those scaling with <math display="inline"><semantics> <msup> <mi>L</mi> <mi>θ</mi> </msup> </semantics></math>, named pseudo-intensive (<span class="html-italic">L</span> is a characteristic linear length, <math display="inline"><semantics> <mi>θ</mi> </semantics></math> is a system-dependent parameter), those scaling with <math display="inline"><semantics> <msup> <mi>L</mi> <mrow> <mi>d</mi> <mo>+</mo> <mi>θ</mi> </mrow> </msup> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mi>d</mi> <mo>−</mo> <mi>α</mi> </mrow> </semantics></math>, the pseudo-extensive ones (the energies), and those scaling with <math display="inline"><semantics> <msup> <mi>L</mi> <mi>d</mi> </msup> </semantics></math> (which are always extensive). For short-range interactions (i.e., <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>&gt;</mo> <mi>d</mi> </mrow> </semantics></math>) we have <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>, and the energies recover their standard <math display="inline"><semantics> <msup> <mi>L</mi> <mi>d</mi> </msup> </semantics></math> extensive scaling, falling in the same class of <span class="html-italic">S</span>, <span class="html-italic">N</span>, <span class="html-italic">V</span>, etc., whereas the previous pseudo-intensive variables become truly intensive ones (independent of <span class="html-italic">L</span>); this is the region, with only two classes of variables, that is covered by the traditional textbooks of thermodynamics. From [<a href="#B37-entropy-21-00696" class="html-bibr">37</a>,<a href="#B40-entropy-21-00696" class="html-bibr">40</a>,<a href="#B50-entropy-21-00696" class="html-bibr">50</a>,<a href="#B223-entropy-21-00696" class="html-bibr">223</a>].</p>
Full article ">Figure 8
<p><math display="inline"><semantics> <mi>α</mi> </semantics></math>-XY <span class="html-italic">d</span>-dimensional ferromagnet (for <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mn>3</mn> </mrow> </semantics></math>). The time averages are done within the intervals <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>t</mi> </mrow> </semantics></math> indicated in the insets. <b>Top left:</b> illustration of <span class="html-italic">q</span>-Gaussian fitting for the distribution of one-particle momentum (for comparison, the BG Gaussian is shown by a dashed line). <b>Top right:</b> illustration of <span class="html-italic">q</span>-exponential fitting for one-particle energy (for comparison, the BG weight is shown by a dashed line). <b>Middle left:</b> the <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>/</mo> <mi>d</mi> </mrow> </semantics></math>-dependence of the index <math display="inline"><semantics> <msub> <mi>q</mi> <mi>p</mi> </msub> </semantics></math>. <b>Middle right:</b> the <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>/</mo> <mi>d</mi> </mrow> </semantics></math>-dependence of the index <math display="inline"><semantics> <msub> <mi>q</mi> <mi>E</mi> </msub> </semantics></math>. We verify that above the critical value <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>/</mo> <mi>d</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, a region exists for which we numerically observe <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>. It cannot be excluded, at this stage, that this is not a consequence of the finiteness of the system size <span class="html-italic">N</span> and/or of the interval within which the time average is performed, and/or of the time <span class="html-italic">t</span> elapsed before starting the time average. Only (up to now intractable) analytical results or extremely heavy numerical calculations could definitively enlighten this complex region. It could, for example, happen that the relevant nontrivial results require simultaneously <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>→</mo> <mo>∞</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>→</mo> <mo>∞</mo> </mrow> </semantics></math> along appropriately scaled paths. From [<a href="#B97-entropy-21-00696" class="html-bibr">97</a>]. Asymptotically scale-free <span class="html-italic">d</span>-dimensional network (for <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mn>3</mn> <mo>,</mo> <mn>4</mn> </mrow> </semantics></math>). The distribution of degree <span class="html-italic">k</span> is well fitted by <math display="inline"><semantics> <mrow> <mi>P</mi> <mrow> <mo>(</mo> <mi>k</mi> <mo>)</mo> </mrow> <mo>=</mo> <mi>P</mi> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <msubsup> <mi>e</mi> <mi>q</mi> <mrow> <mo>−</mo> <mi>k</mi> <mo>/</mo> <mi>κ</mi> </mrow> </msubsup> </mrow> </semantics></math>. <b>Bottom left:</b> the <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mi>A</mi> </msub> <mo>/</mo> <mi>d</mi> </mrow> </semantics></math>-dependence of the index <span class="html-italic">q</span>. 
The red dot indicates the Barabási–Albert (BA) universality class <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>4</mn> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math> [<a href="#B19-entropy-21-00696" class="html-bibr">19</a>,<a href="#B22-entropy-21-00696" class="html-bibr">22</a>], which is here recovered as the <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mi>A</mi> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> particular instance. <b>Bottom right:</b> the <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mi>A</mi> </msub> <mo>/</mo> <mi>d</mi> </mrow> </semantics></math>-dependence of the characteristic degree “temperature” <math display="inline"><semantics> <mi>κ</mi> </semantics></math>. From [<a href="#B16-entropy-21-00696" class="html-bibr">16</a>]. In all cases, the BG (<math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>) description naturally emerges numerically at <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>/</mo> <mi>d</mi> <mo>→</mo> <mo>∞</mo> </mrow> </semantics></math> and even before.</p>
Full article ">Figure 9
<p>The index <span class="html-italic">q</span> has been determined [<a href="#B108-entropy-21-00696" class="html-bibr">108</a>] from first principles, namely from the universality class of the Hamiltonian. The values <math display="inline"><semantics> <mrow> <mi>c</mi> <mo>=</mo> <mn>1</mn> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>c</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> respectively correspond to the Ising and XY ferromagnetic chains in the presence of a transverse field at <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> criticality. For other models, see [<a href="#B225-entropy-21-00696" class="html-bibr">225</a>,<a href="#B226-entropy-21-00696" class="html-bibr">226</a>]. In the <math display="inline"><semantics> <mrow> <mi>c</mi> <mo>→</mo> <mo>∞</mo> </mrow> </semantics></math> limit, we recover the BG value, i.e., <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. For arbitrary values of <span class="html-italic">c</span>, the subsystem nonadditive entropy <math display="inline"><semantics> <msub> <mi>S</mi> <mi>q</mi> </msub> </semantics></math> is thermodynamically extensive for, and only for, <math display="inline"><semantics> <mrow> <mi>q</mi> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mfrac> <mrow> <msqrt> <mrow> <mn>9</mn> <mo>+</mo> <msup> <mi>c</mi> <mn>2</mn> </msup> </mrow> </msqrt> <mo>−</mo> <mn>3</mn> </mrow> <mi>c</mi> </mfrac> </mrow> </semantics></math> (hence <math display="inline"><semantics> <mrow> <mi>c</mi> <mo>=</mo> <mfrac> <mrow> <mn>6</mn> <mi>q</mi> </mrow> <mrow> <mn>1</mn> <mo>−</mo> <msup> <mi>q</mi> <mn>2</mn> </msup> </mrow> </mfrac> </mrow> </semantics></math>; some special values: for <math display="inline"><semantics> <mrow> <mi>c</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> we have <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1</mn> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math>, and for <math display="inline"><semantics> <mrow> <mi>c</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> we have <math display="inline"><semantics> <mrow> <mi>q</mi> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mfrac> <mn>2</mn> <mrow> <msqrt> <mn>5</mn> </msqrt> <mo>+</mo> <mn>1</mn> </mrow> </mfrac> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mfrac> <mn>1</mn> <mo>Φ</mo> </mfrac> </mrow> </semantics></math>, where <math display="inline"><semantics> <mo>Φ</mo> </semantics></math> is the golden mean). Let us emphasize that this anomalous value of <span class="html-italic">q</span> occurs only at precisely the zero-temperature second-order quantum critical point; anywhere else, the usual short-range-interaction BG behavior (i.e., <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>) is valid. From [<a href="#B227-entropy-21-00696" class="html-bibr">227</a>].</p>
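As a quick numerical cross-check (using only the relations quoted in this caption), the special values and the inverse relation c = 6q/(1 - q^2) can be verified directly:

```python
# Numeric check of q = (sqrt(9 + c^2) - 3) / c and of its inverse c = 6q / (1 - q^2).
import numpy as np

def q_of_c(c):
    return (np.sqrt(9.0 + c ** 2) - 3.0) / c

for c in (4.0, 6.0, 1e6):                        # a large c should approach the BG value q = 1
    q = q_of_c(c)
    print(f"c = {c:g}:  q = {q:.6f},  6q/(1 - q^2) = {6 * q / (1 - q ** 2):.6g}")

print(q_of_c(6.0), 2.0 / (np.sqrt(5.0) + 1.0))   # c = 6 reproduces 1/(golden mean)
```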
Full article ">
22 pages, 4938 KiB  
Article
A Method for Improving Controlling Factors Based on Information Fusion for Debris Flow Susceptibility Mapping: A Case Study in Jilin Province, China
by Qiang Dou, Shengwu Qin, Yichen Zhang, Zhongjun Ma, Junjun Chen, Shuangshuang Qiao, Xiuyu Hu and Fei Liu
Entropy 2019, 21(7), 695; https://doi.org/10.3390/e21070695 - 15 Jul 2019
Cited by 15 | Viewed by 4157
Abstract
Debris flow is one of the most frequently occurring geological disasters in Jilin province, China, and such disasters often result in the loss of human life and property. The objective of this study is to propose and verify an information fusion (IF) method [...] Read more.
Debris flow is one of the most frequently occurring geological disasters in Jilin Province, China, and such disasters often result in the loss of human life and property. The objective of this study is to propose and verify an information fusion (IF) method that improves the debris flow controlling factors and, in turn, the accuracy of the debris flow susceptibility map. Nine layers of factors controlling debris flow (i.e., topography, elevation, annual precipitation, distance to water system, slope angle, slope aspect, population density, lithology and vegetation coverage) were taken as the predictors. The controlling factors were improved using the IF method. Based on the original controlling factors and the improved controlling factors, debris flow susceptibility maps were developed using the statistical index (SI) model, the analytic hierarchy process (AHP) model, the random forest (RF) model, and their four integrated models. The results were compared using receiver operating characteristic (ROC) curves, and the spatial consistency of the debris flow susceptibility maps was analyzed using Spearman’s rank correlation coefficients. The results show that improving the controlling factors with the IF method effectively enhances the performance of the debris flow susceptibility maps, with the IF-SI-RF model exhibiting the best performance in terms of debris flow susceptibility mapping. Full article
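As a brief, hedged illustration of the evaluation workflow this abstract describes (ROC comparison of susceptibility models and Spearman rank correlation between susceptibility maps), the sketch below uses purely synthetic scores; the arrays and model labels are hypothetical and are not the study's data:

```python
# Hypothetical ROC-AUC comparison of two susceptibility models plus a Spearman
# rank-correlation check of the consistency between their susceptibility maps.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)                  # 1 = debris-flow cell, 0 = non-debris-flow cell
score_if_si_rf = 0.6 * labels + 0.6 * rng.random(1000)  # hypothetical IF-SI-RF susceptibility scores
score_si = 0.4 * labels + 0.8 * rng.random(1000)        # hypothetical SI-only susceptibility scores

print("AUC, IF-SI-RF:", round(roc_auc_score(labels, score_if_si_rf), 3))
print("AUC, SI      :", round(roc_auc_score(labels, score_si), 3))

rho, _ = spearmanr(score_if_si_rf, score_si)            # spatial consistency between the two maps
print("Spearman rho :", round(float(rho), 3))
```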
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering II)
Show Figures

Figure 1

Figure 1
<p>Location map of the study area.</p>
Full article ">Figure 2
<p>Debris flow field survey: (<b>a</b>) Debris flow accumulation; and, (<b>b</b>) The image of the debris flow ditch taken by drone.</p>
Full article ">Figure 3
<p>The hierarchical diagram of the controlling factors in the study area: (<b>a</b>) topography; (<b>b</b>) elevation; (<b>c</b>) annual precipitation; (<b>d</b>) distance to water system; (<b>e</b>) slope angle; (<b>f</b>) slope aspect; (<b>g</b>) population density; (<b>h</b>) lithology; and, (<b>i</b>) vegetation coverage.</p>
Figure 3 Cont.">
Full article ">Figure 4
<p>The flowchart of the research methods in this study.</p>
Full article ">Figure 5
<p>Debris flow susceptibility maps based on the controlling factors before information fusion (IF): (<b>a</b>) debris flow susceptibility mapping (DFSM) using statistical index (SI); (<b>b</b>) DFSM using analytic hierarchy process (AHP); (<b>c</b>) DFSM using SI-AHP; (<b>d</b>) DFSM using SI-random forest (SI-RF); (<b>e</b>) DFSM using AHP-RF; and, (<b>f</b>) DFSM using SI-AHP-RF.</p>
Full article ">Figure 6
<p>Debris flow susceptibility maps based on the controlling factors after IF: (<b>a</b>) DFSM using IF-SI; (<b>b</b>) DFSM using IF-AHP; (<b>c</b>) DFSM using IF-SI-AHP; (<b>d</b>) DFSM using IF-SI-RF; (<b>e</b>) DFSM using IF-AHP-RF; and, (<b>f</b>) DFSM using IF-SI-AHP-RF.</p>
Figure 6 Cont.">
Full article ">Figure 7
<p>Comparison of success rate curves: (<b>a<sub>1</sub></b>) SI and IF-SI; (<b>a<sub>2</sub></b>) AHP and IF-AHP; (<b>a<sub>3</sub></b>) SI-AHP and IF-SI-AHP; (<b>a<sub>4</sub></b>) SI-RF and IF-SI-RF; (<b>a<sub>5</sub></b>) AHP-RF and IF-AHP-RF; and, (<b>a<sub>6</sub></b>) SI-AHP-RF and IF-SI-AHP-RF.</p>
Full article ">Figure 8
<p>Comparison of prediction rate curves: (<b>b<sub>1</sub></b>) SI and IF-SI; (<b>b<sub>2</sub></b>) AHP and IF-AHP; (<b>b<sub>3</sub></b>) SI-AHP and IF-SI-AHP; (<b>b<sub>4</sub></b>) SI-RF and IF-SI-RF; (<b>b<sub>5</sub></b>) AHP-RF and IF-AHP-RF; and, (<b>b<sub>6</sub></b>) SI-AHP-RF and IF-SI-AHP-RF.</p>
Full article ">
27 pages, 38676 KiB  
Review
Twenty Years of Entropy Research: A Bibliometric Overview
by Weishu Li, Yuxiu Zhao, Qi Wang and Jian Zhou
Entropy 2019, 21(7), 694; https://doi.org/10.3390/e21070694 - 15 Jul 2019
Cited by 27 | Viewed by 6221
Abstract
Entropy, founded in 1999, is an emerging international journal in the field of entropy and information studies. In the year of 2018, the journal enjoyed its 20th anniversary, and therefore, it is quite reasonable and meaningful to conduct a retrospective as its [...] Read more.
Entropy, founded in 1999, is an emerging international journal in the field of entropy and information studies. In 2018, the journal celebrated its 20th anniversary, making a retrospective both timely and meaningful. In keeping with Entropy’s distinctive name and research area, this paper applies a bibliometric analysis to look back at the evolution of the entropy topic as a whole and to trace the journal’s growth and influence during this period. Based on 123,063 records extracted from the Web of Science, the work analyzes, in turn, publication outputs, highly cited literature, and reference co-citation networks, for the topic and for the journal, respectively. The results indicate that the topic has become a vast research domain that continues to grow rapidly and is studied across many disciplines. The most significant hotspots so far are the theoretical and practical development of graph entropy, permutation entropy, and pseudo-additive entropy. Furthermore, with its rapid growth in recent years, Entropy has attracted many of the topic’s leading authors and shows a distinctive geographical publication distribution. More importantly, within the topic, the journal has made substantial contributions to major research areas, in particular spearheading studies of multiscale entropy and permutation entropy. Full article
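To illustrate the kind of reference co-citation analysis summarized in this abstract (and shown in Figures 7-10), here is a minimal, purely illustrative sketch that builds a co-citation network from made-up citing-paper reference lists; the paper and reference identifiers are hypothetical, and the actual study works from Web of Science records:

```python
# Two references are co-cited when the same paper cites both; edge weights count
# how many citing papers link each pair.
import itertools
import networkx as nx

papers = {                       # hypothetical citing papers -> their reference lists
    "P1": ["R1", "R2", "R3"],
    "P2": ["R2", "R3"],
    "P3": ["R1", "R3", "R4"],
}

G = nx.Graph()
for refs in papers.values():
    for a, b in itertools.combinations(sorted(set(refs)), 2):
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

print(sorted(G.edges(data="weight")))    # ('R2', 'R3', 2) is the strongest co-citation link here
```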
(This article belongs to the Section Entropy Reviews)
Show Figures

Figure 1

Figure 1
<p>Annual number of the topic’s publications from 1999–2018.</p>
Full article ">Figure 2
<p>Annual number of the top-five most productive journals’ publications from 1999–2018.</p>
Full article ">Figure 3
<p>The network of the top-20 most productive countries or regions of the topic from 1999–2018.</p>
Full article ">Figure 4
<p>The network of the top-20 most productive countries or regions for <span class="html-italic">Entropy</span> from 1999–2018.</p>
Full article ">Figure 5
<p>The network of the top-20 most productive foundations of <span class="html-italic">Entropy</span> from 1999–2018 (including ties).</p>
Full article ">Figure 6
<p>The proportion change of the categories of the topic from 1999–2018.</p>
Full article ">Figure 7
<p>The reference co-citation network of the topic from 1999–2018.</p>
Full article ">Figure 8
<p>The reference co-citation network of the topic by timeline from 1999–2018.</p>
Full article ">Figure 9
<p>The reference co-citation network of <span class="html-italic">Entropy</span> from 1999–2018.</p>
Full article ">Figure 10
<p>The reference co-citation network of <span class="html-italic">Entropy</span> by timeline from 1999–2018.</p>
Full article ">
18 pages, 11215 KiB  
Article
A Feature Extraction Method of Ship-Radiated Noise Based on Fluctuation-Based Dispersion Entropy and Intrinsic Time-Scale Decomposition
by Zhaoxi Li, Yaan Li and Kai Zhang
Entropy 2019, 21(7), 693; https://doi.org/10.3390/e21070693 - 15 Jul 2019
Cited by 44 | Viewed by 4448
Abstract
To improve the feature extraction of ship-radiated noise in a complex ocean environment, fluctuation-based dispersion entropy is used to extract the features of ten types of ship-radiated noise. Since fluctuation-based dispersion entropy only analyzes the ship-radiated noise signal in single scale and it [...] Read more.
To improve the feature extraction of ship-radiated noise in a complex ocean environment, fluctuation-based dispersion entropy is used to extract the features of ten types of ship-radiated noise. Since fluctuation-based dispersion entropy analyzes the ship-radiated noise signal at only a single scale and cannot distinguish different types of ship-radiated noise effectively, a new method of ship-radiated noise feature extraction is proposed based on fluctuation-based dispersion entropy (FDispEn) and intrinsic time-scale decomposition (ITD). Firstly, ten types of ship-radiated noise signals are decomposed into a series of proper rotation components (PRCs) by ITD, and the FDispEn of each PRC is calculated. Then, the correlation between each PRC and the original signal is calculated, and the FDispEn of each PRC is analyzed to select the Max-relative PRC fluctuation-based dispersion entropy as the feature parameter. Finally, by comparing the Max-relative PRC fluctuation-based dispersion entropy of a certain number of the above ten types of ship-radiated noise signals with FDispEn, it is found that the Max-relative PRC fluctuation-based dispersion entropy is at the same level for similar ship-radiated noise, but is distinct for different types of ship-radiated noise. The Max-relative PRC fluctuation-based dispersion entropy is then used as the feature vector and fed into the support vector machine (SVM) classifier to classify and recognize the ten types of ship-radiated noise. The experimental results demonstrate that the recognition rate of the proposed method reaches 95.8763%. Consequently, the proposed method can effectively achieve the classification of ship-radiated noise. Full article
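To make the core feature concrete, the following compact sketch computes a fluctuation-based dispersion entropy for a 1-D signal. It follows the commonly used recipe (normal-CDF mapping to c classes, embedding dimension m, and counting the m-1 class differences); it is not the authors' implementation, and the default parameters are only illustrative:

```python
# Hedged sketch of fluctuation-based dispersion entropy (FDispEn) of a 1-D signal.
import numpy as np
from math import erf, sqrt
from collections import Counter

def fdispen(x, m=2, c=6, delay=1):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    # map samples to (0, 1) with the normal CDF, then to integer classes 1..c
    y = np.array([0.5 * (1.0 + erf((v - mu) / (sigma * sqrt(2.0)))) for v in x])
    z = np.clip(np.ceil(c * y), 1, c).astype(int)
    # fluctuation-based patterns: differences between successive embedded samples
    n = len(z) - (m - 1) * delay
    patterns = [tuple(np.diff(z[i:i + m * delay:delay])) for i in range(n)]
    p = np.array(list(Counter(patterns).values()), dtype=float)
    p /= p.sum()
    # normalise by the number of possible fluctuation patterns, (2c - 1)^(m - 1)
    return float(-(p * np.log(p)).sum() / np.log((2 * c - 1) ** (m - 1)))

rng = np.random.default_rng(0)
print(fdispen(rng.standard_normal(2048)))    # white noise gives a value close to 1
```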
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics)
Show Figures

Figure 1

Figure 1
<p>Influence of different parameters on FDispEn.</p>
Figure 1 Cont.">
Full article ">Figure 2
<p>The influence of parameters <math display="inline"><semantics> <mi>c</mi> </semantics></math> and <math display="inline"><semantics> <mi>m</mi> </semantics></math> on calculation time.</p>
Full article ">Figure 3
<p>Effect of parameters <math display="inline"><semantics> <mi>c</mi> </semantics></math>, <math display="inline"><semantics> <mi>r</mi> </semantics></math> and <math display="inline"><semantics> <mi>m</mi> </semantics></math> on (<b>a</b>) fluctuation-based dispersion entropy, (<b>b</b>) sample entropy and (<b>c</b>) permutation entropy.</p>
Full article ">Figure 4
<p>The flowchart of the proposed method.</p>
Full article ">Figure 5
<p>The time-domain waveform of ten types of ship-radiated noise.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Time domain waveform of results of intrinsic time-scale decomposition (ITD) for ten types of ship-radiated noise signals.</p>
Figure 6 Cont.">
Full article ">Figure 7
<p>Spectrum of results of ITD for ten types of ship-radiated noise signals.</p>
Full article ">Figure 8
<p>Fluctuation-based dispersion entropy of the proper rotation components (PRCs) of the ten types of ship-radiated noise signals.</p>
Full article ">Figure 9
<p>Distribution of the fluctuation-based dispersion entropy of the PRC with the highest correlation coefficient for the ten types of ship-radiated noise.</p>
Full article ">Figure 10
<p>Distribution of the fluctuation-based dispersion entropy for the ten types of ship-radiated noise.</p>
Full article ">Figure 11
<p>SVM classification results of different methods.</p>
Full article ">
17 pages, 1213 KiB  
Article
Why the Tsirelson Bound? Bub’s Question and Fuchs’ Desideratum
by William Stuckey, Michael Silberstein, Timothy McDevitt and Ian Kohler
Entropy 2019, 21(7), 692; https://doi.org/10.3390/e21070692 - 15 Jul 2019
Cited by 12 | Viewed by 6955 | Correction
Abstract
To answer Wheeler’s question “Why the quantum?” via quantum information theory according to Bub, one must explain both why the world is quantum rather than classical and why the world is quantum rather than superquantum, i.e., “Why the Tsirelson bound?” We show that [...] Read more.
To answer Wheeler’s question “Why the quantum?” via quantum information theory according to Bub, one must explain both why the world is quantum rather than classical and why the world is quantum rather than superquantum, i.e., “Why the Tsirelson bound?” We show that the quantum correlations and quantum states corresponding to the Bell basis states, which uniquely produce the Tsirelson bound for the Clauser–Horne–Shimony–Holt (CHSH) quantity, can be derived from conservation per no preferred reference frame (NPRF). A reference frame in this context is defined by a measurement configuration, just as with the light postulate of special relativity. We therefore argue that the Tsirelson bound is ultimately based on NPRF just as the postulates of special relativity. This constraint-based/principle answer to Bub’s question addresses Fuchs’ desideratum that we “take the structure of quantum theory and change it from this very overt mathematical speak ... into something like [special relativity].” Thus, the answer to Bub’s question per Fuchs’ desideratum is, “the Tsirelson bound obtains due to conservation per NPRF”. Full article
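As a concrete reminder of the bound under discussion, the following minimal numeric check (a standard textbook calculation, not the paper's derivation) evaluates the CHSH quantity for the spin singlet correlation E(a, b) = -cos(a - b) at the usual optimal settings, recovering the Tsirelson bound 2*sqrt(2):

```python
# The singlet-state correlation and the CHSH quantity at the standard optimal angles.
import numpy as np

def E(a, b):                       # correlation for analyser angles a and b
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S, 2 * np.sqrt(2))           # both ~2.828; any local hidden-variable model is bounded by 2
```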
(This article belongs to the Section Quantum Information)
Show Figures

Figure 1

Figure 1
<p>Summary of the result. The “constraint” is conservation per no preferred reference frame.</p>
Full article ">Figure 2
<p>Outcomes (yellow dots) in the same reference frame, i.e., outcomes for the same measurement (blue arrows represent SG magnet orientations), for the spin singlet state explicitly conserve angular momentum.</p>
Full article ">Figure 3
<p>A spatiotemporal ensemble of 16 experimental trials for the spin singlet state. Angular momentum is not conserved in any given trial, because there are two different measurements being made, i.e., outcomes are in two different reference frames, but it is conserved on average for all 16 trials. It is impossible for angular momentum to be conserved explicitly in each trial since the measurement outcomes are binary (quantum) with values of +1 (up) or −1 (down) <math display="inline"> <semantics> <mfenced separators="" open="(" close=")"> <mfrac> <mo>ℏ</mo> <mn>2</mn> </mfrac> </mfenced> </semantics> </math> per no preferred reference frame. The conservation principle at work here assumes Alice and Bob’s measured values of angular momentum are not mere components of some hidden angular momentum, e.g., oppositely oriented <math display="inline"> <semantics> <msub> <mi>S</mi> <mi>A</mi> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>S</mi> <mi>B</mi> </msub> </semantics> </math> so that Alice and Bob’s +1/−1 results are always components of those hidden <math display="inline"> <semantics> <msub> <mi>S</mi> <mi>A</mi> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>S</mi> <mi>B</mi> </msub> </semantics> </math> with variable magnitudes. That is, the measured values of angular momentum <span class="html-italic">are</span> the angular momenta contributing to this conservation principle.</p>
Full article ">Figure 4
<p>Reading from left to right, as Bob rotates his SG magnets relative to Alice’s SG magnets for her +1 outcome, the average value of his outcome varies from −1 (totally down, arrow bottom) to 0 to +1 (totally up, arrow tip). This obtains per conservation of angular momentum on average in accord with no preferred reference frame. Bob can say exactly the same about Alice’s outcomes as she rotates her SG magnets relative to his SG magnets for his +1 outcome. That is, their outcomes can only satisfy conservation of angular momentum on average, because they only measure +1/−1 <math display="inline"> <semantics> <mfenced separators="" open="(" close=")"> <mfrac> <mo>ℏ</mo> <mn>2</mn> </mfrac> </mfenced> </semantics> </math>, never a fractional result. Thus, just as with the light postulate of special relativity, we see that no preferred reference frame requires quantum (+1/−1) outcomes for all measurements and that leads to a constraint-based/principle answer to Bub’s question.</p>
Full article ">Figure 5
<p>Comparing special relativity with quantum mechanics according to no preferred reference frame. Because Alice and Bob both measure the same speed of light <span class="html-italic">c</span> regardless of their relative motion, Alice (Bob) may claim that Bob’s (Alice’s) length and time measurements are erroneous and need to be corrected (length contraction and time dilation). Likewise, because Alice and Bob both measure the same values for angular momentum +1/−1 <math display="inline"> <semantics> <mfenced separators="" open="(" close=")"> <mfrac> <mo>ℏ</mo> <mn>2</mn> </mfrac> </mfenced> </semantics> </math> regardless of their relative SG magnet orientation, Alice (Bob) may claim that Bob’s (Alice’s) individual +1/−1 values are erroneous and need to be corrected (averaged). It is possible that Alice and Bob’s outcomes are equally valid, i.e., neither need to be corrected, per no preferred reference frame. In special relativity, the apparently inconsistent results can be reconciled via relativity of simultaneity. In quantum mechanics, the apparently inconsistent results can be reconciled via “average-only” conservation. It is also possible that Alice (Bob) is actually correct and Bob’s (Alice’s) outcomes need to be corrected, or that some other frame is actually preferred and both Alice and Bob need to correct their outcomes or understand them in the context of that preferred frame. In special relativity, we can do that using an empirically unverifiable ether. In quantum mechanics, we can do that using empirically unverifiable hidden variables (<a href="#entropy-21-00692-f003" class="html-fig">Figure 3</a>).</p>
Full article ">Figure 6
<p>Alice and Bob making spin measurements with their Stern–Gerlach (SG) magnets and detectors.</p>
Full article ">Figure 7
<p>“It is always possible to provide a complete foliation for a circuit” ([<a href="#B30-entropy-21-00692" class="html-bibr">30</a>], p. 20). Reproduced with permission from the author.</p>
Full article ">Figure 8
<p>Relative eigenbases configuration for the unlike states of Equation (<a href="#FD30-entropy-21-00692" class="html-disp-formula">30</a>) satisfying the last three PR correlations.</p>
Full article ">Figure 9
<p>Relative eigenbases configuration for the like states of Equation (<a href="#FD30-entropy-21-00692" class="html-disp-formula">30</a>) satisfying the last three PR correlations.</p>
Full article ">
14 pages, 3243 KiB  
Article
An Information Entropy-Based Modeling Method for the Measurement System
by Li Kong, Hao Pan, Xuewei Li, Shuangbao Ma, Qi Xu and Kaibo Zhou
Entropy 2019, 21(7), 691; https://doi.org/10.3390/e21070691 - 15 Jul 2019
Cited by 7 | Viewed by 3873
Abstract
Measurement is a key method to obtain information from the real world and is widely used in human life. A unified model of measurement systems is critical to the design and optimization of measurement systems. However, the existing models of measurement systems are [...] Read more.
Measurement is a key method of obtaining information from the real world and is widely used in human life. A unified model of measurement systems is critical to the design and optimization of measurement systems. However, the existing models of measurement systems are rather abstract, which makes it difficult to gain a clear overall understanding of measurement systems and of how they acquire information, and also limits the application of these models. Information entropy is a measure of the information or uncertainty of a random variable and has strong representational power. In this paper, an information entropy-based modeling method for measurement systems is proposed. First, a modeling idea based on the viewpoint of information and uncertainty is described. Second, an entropy balance equation based on the chain rule for entropy is proposed for system modeling. Then, the entropy balance equation is used to establish the information entropy-based model of the measurement system. Finally, three cases of typical measurement units or processes are analyzed using the proposed method. Compared with existing modeling approaches, the proposed method considers the modeling problem from the perspective of information and uncertainty. It focuses on the information loss of the measurand in the transmission process and on characterizing the specific role of each measurement unit. The proposed model can intuitively describe how information is processed and changed in the measurement system. It does not conflict with existing models of measurement systems but complements them, thus further enriching measurement theory. Full article
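As one small, hedged illustration of the entropy bookkeeping applied to a measurement unit (in the spirit of the quantization study in Figure 8, with purely illustrative parameters), the entropy of a quantizer output can be estimated from the relative frequencies of its output codes:

```python
# Output entropy of a uniform quantizer applied to a Gaussian variable, for several bit depths.
import numpy as np

def quantized_entropy(x, bits, lo=-4.0, hi=4.0):
    levels = 2 ** bits
    edges = np.linspace(lo, hi, levels + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, levels - 1)   # quantizer output index per sample
    p = np.bincount(idx, minlength=levels) / len(idx)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())                     # entropy in bits

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
for b in (2, 4, 6, 8):
    print(b, "quantizer bits ->", round(quantized_entropy(x, b), 3), "bits of output entropy")
```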
(This article belongs to the Section Information Theory, Probability and Statistics)
Show Figures

Figure 1

Figure 1
<p>The relationship between various entropies or mutual information.</p>
Full article ">Figure 2
<p>Structure of the actual measurement system.</p>
Full article ">Figure 3
<p>Information entropy-based model of the measurement unit.</p>
Full article ">Figure 4
<p>Venn diagram of entropy model of the first-order Markov chain.</p>
Full article ">Figure 5
<p>Information entropy-based equivalent model of measurement system.</p>
Full article ">Figure 6
<p>Gaussian random variable with additive white Gaussian noise passing through a bandpass filter.</p>
Full article ">Figure 7
<p>The model of quantization process.</p>
Full article ">Figure 8
<p>Simulation of quantization process. (<b>a</b>) The waveform of the first 5000 data points of the continuous random variable <math display="inline"><semantics> <mi>X</mi> </semantics></math>. (<b>b</b>) The probability density function of <math display="inline"><semantics> <mi>X</mi> </semantics></math>. (<b>c</b>) Information entropies of <math display="inline"><semantics> <mrow> <msup> <mi>X</mi> <mi>Q</mi> </msup> </mrow> </semantics></math> quantized by quantizers with different numbers of bits.</p>
Full article ">Figure 9
<p>Gaussian random signal with additive Gaussian noise processed by the cumulative averaging procedure.</p>
Full article ">
27 pages, 1500 KiB  
Article
A Sequence-Based Damage Identification Method for Composite Rotors by Applying the Kullback–Leibler Divergence, a Two-Sample Kolmogorov–Smirnov Test and a Statistical Hidden Markov Model
by Angelos Filippatos, Albert Langkamp, Pawel Kostka and Maik Gude
Entropy 2019, 21(7), 690; https://doi.org/10.3390/e21070690 - 15 Jul 2019
Cited by 6 | Viewed by 3805
Abstract
Composite structures undergo a gradual damage evolution from initial inter-fibre cracks to extended damage up to failure. However, most composites could remain in service despite the existence of damage. Prerequisite for a service extension is a reliable and component-specific damage identification. Therefore, a [...] Read more.
Composite structures undergo a gradual damage evolution, from initial inter-fibre cracks to extended damage up to failure. However, most composites can remain in service despite the existence of damage. A prerequisite for such a service extension is reliable, component-specific damage identification. Therefore, a vibration-based damage identification method is presented that takes into consideration the gradual damage behaviour and the resulting changes in the structural dynamic behaviour of composite rotors. These changes are transformed into a sequence of distinct states and used as an input database for three diagnostic models, based on the Kullback–Leibler divergence, the two-sample Kolmogorov–Smirnov test and a statistical hidden Markov model. To identify the present damage state from the damage-dependent modal properties, a sequence-based diagnostic system has been developed, which estimates the similarity between the present unclassified sequence and previously obtained sequences of damage-dependent vibration responses. The diagnostic performance evaluation delivers promising results for the further development of the proposed diagnostic method. Full article
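To make the first two diagnostic models tangible, here is a minimal, hedged sketch (not the authors' diagnostic system; the damage states and eigenfrequency samples are hypothetical) that scores an unclassified vibration feature sample against two reference states using a histogram-based Kullback–Leibler divergence and the two-sample Kolmogorov–Smirnov test:

```python
# Scoring an unclassified sample against reference damage states with D_KL and the KS test.
import numpy as np
from scipy.stats import entropy, ks_2samp

def kl_from_samples(p_samples, q_samples, bins=30):
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    return float(entropy(p + 1e-12, q + 1e-12))    # D_KL(p || q); the offset avoids log(0)

rng = np.random.default_rng(1)
intact   = rng.normal(120.0, 0.5, 500)    # hypothetical eigenfrequency samples, intact state
damaged  = rng.normal(118.5, 0.8, 500)    # hypothetical eigenfrequency samples, damaged state
observed = rng.normal(118.4, 0.8, 200)    # unclassified measurement to be diagnosed

for name, ref in (("intact", intact), ("damaged", damaged)):
    ks_stat, p_val = ks_2samp(observed, ref)
    print(f"{name:8s} D_KL = {kl_from_samples(observed, ref):.3f}, "
          f"KS = {ks_stat:.3f}, p = {p_val:.3g}")
```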
(This article belongs to the Section Multidisciplinary Applications)
Show Figures

Figure 1

Figure 1
<p>Schematic view of the lack of the bijectivity of the function describing the relation between the state of damage and an eigenfrequency as a diagnostic feature.</p>
Full article ">Figure 2
<p>Illustration of the damage evolution sequence and the corresponding dynamic response sequence of a composite disc-rotor.</p>
Full article ">Figure 3
<p>Representation of a time-invariant rotor with excitation signal <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </semantics></math>, response signal <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </semantics></math>, and additive noise <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Sequence-based diagnostic system as a classification of fault patterns based on the idea from Korbicz [<a href="#B19-entropy-21-00690" class="html-bibr">19</a>], as an extension for the consideration of the gradual damage behaviour of composite rotors.</p>
Full article ">Figure 5
<p>A diagnostic model using binary statistical hypothesis testing for damage detection, having as input a diagnostic database <math display="inline"><semantics> <mfenced separators="" open="{" close="}"> <msub> <mi>S</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>E</mi> <mi>i</mi> </msub> <mrow> <mo>|</mo> </mrow> <msub> <mi>S</mi> <mi>i</mi> </msub> </mfenced> </semantics></math> and an unclassified vibration response <math display="inline"><semantics> <msub> <mi>E</mi> <mi>u</mi> </msub> </semantics></math>.</p>
Full article ">Figure 6
<p>A multi-level diagnostic system, having as input an unclassified vibration response sequence <math display="inline"><semantics> <mrow> <msub> <mi>E</mi> <mn>1</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>E</mi> <mrow> <mi>k</mi> <mo>−</mo> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi>E</mi> <mi>k</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Illustration example of classified intact and damage test cases under the null hypothesis using a statistical threshold.</p>
Full article ">Figure 8
<p>Illustration of a statistical hidden Markov model and of the gradual damage behaviour of composite rotors.</p>
Full article ">Figure 9
<p>Illustration of the developed statistical HMM.</p>
Full article ">Figure 10
<p>Illustration of two different damage sequences resulting from damage propagation during multiple rotor run-ups: one sequence without any initial damage (<b>top</b>), and one having an initial damage due to an out-of-plane loading (<b>bottom</b>).</p>
Full article ">Figure 11
<p>Position parameters (<math display="inline"><semantics> <mrow> <mi>R</mi> <mo>,</mo> <mi>θ</mi> </mrow> </semantics></math>) for an out-of-plane load for a Cartesian-orthotropic rotor.</p>
Full article ">Figure 12
<p>Illustration of the linear damage increase under in-plane loading.</p>
Full article ">Figure 13
<p>Four classifiers and their classification labels: for an out-of-plane load, a two class localisation (<b>A</b>), a three class localisation (<b>B</b>) and a two class load quantification (<b>C</b>); and, for an in-plane load, a three class damage accumulation (<b>D</b>).</p>
Full article ">Figure 14
<p>Diagnostic results under the null hypothesis for the investigated rotors for the Kullback–Leibler divergence <math display="inline"><semantics> <msub> <mi>D</mi> <mrow> <mi>K</mi> <mi>L</mi> </mrow> </msub> </semantics></math> (<b>left</b>) and the two-sample Kolmogorov–Smirnov evaluation (<b>right</b>).</p>
Full article ">Figure 15
<p>Illustration of a confusion matrix for the performance evaluation of a diagnostic model.</p>
Full article ">Figure 16
<p>Diagnostic performance of classified damage cases regarding the radial and angular position as well as the magnitude of the out-of-plane load.</p>
Full article ">Figure 17
<p>Diagnostic performance of the damage accumulation from an in-plane load.</p>
Full article ">Figure 18
<p>Diagnostic performance of the prediction of the most probable damage evolution.</p>
Full article ">