Entropy, Volume 21, Issue 12 (December 2019) – 105 articles

Cover Story (view full-size image): The information bottleneck (IB) method is a technique for extracting the information in one random variable that is relevant for predicting another random variable. IB has applications in many fields, including machine learning with neural networks. To perform IB, however, one must find an optimally compressed "bottleneck" random variable, which involves solving a difficult optimization problem with an information-theoretic objective function. We propose a method for solving this optimization problem using neural networks and a recently proposed bound on mutual information. We demonstrate that our approach exhibits better performance than other recent proposals. View this paper
16 pages, 1015 KiB  
Article
Multi-Type Node Detection in Network Communities
by Chinenye Ezeh, Ren Tao, Li Zhe, Wang Yiqun and Qu Ying
Entropy 2019, 21(12), 1237; https://doi.org/10.3390/e21121237 - 17 Dec 2019
Cited by 7 | Viewed by 4027
Abstract
Patterns of connectivity among nodes on networks can be revealed by community detection algorithms. The great significance of communities in the study of clustering patterns of nodes in different systems has led to the development of various methods for identifying different node types on diverse complex systems. However, most of the existing methods identify only either disjoint nodes or overlapping nodes. Many of these methods rarely identify disjunct nodes, even though they could play significant roles on networks. In this paper, a new method, which distinctly identifies disjoint nodes (node clusters), disjunct nodes (single node partitions) and overlapping nodes (nodes binding overlapping communities), is proposed. The approach, which differs from existing methods, involves iterative computation of bridging centrality to determine nodes with the highest bridging centrality value. Additionally, node similarity is computed between the bridge-node and its neighbours, and the neighbours with the least node similarity values are disconnected. This process is sustained until a stoppage criterion condition is met. Bridging centrality metric and Jaccard similarity coefficient are employed to identify bridge-nodes (nodes at cut points) and the level of similarity between the bridge-nodes and their direct neighbours respectively. Properties that characterise disjunct nodes are equally highlighted. Extensive experiments are conducted with artificial networks and real-world datasets and the results obtained demonstrate efficiency of the proposed method in distinctly detecting and classifying multi-type nodes in network communities. This method can be applied to vast areas such as examination of cell interactions and drug designs, disease control in epidemics, dislodging organised crime gangs and drug courier networks, etc. Full article
(This article belongs to the Special Issue Computation in Complex Networks)
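The abstract above names the method's two main ingredients: bridging centrality (to locate the bridge-node) and the Jaccard coefficient (to decide which of its edges to cut). The following is a minimal pure-Python sketch of those two measures, assuming the common definition of bridging centrality as betweenness weighted by a bridging coefficient; the toy graph and all names are illustrative, not taken from the paper.

```python
from collections import deque

def betweenness(adj):
    """Brandes' unnormalised betweenness for a graph given as {node: set_of_neighbours}."""
    bet = {v: 0.0 for v in adj}
    for s in adj:
        stack, queue = [], deque([s])
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        while queue:                      # BFS shortest-path counting
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                      # dependency accumulation
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bet[w] += delta[w]
    return bet

def bridging_centrality(adj):
    """Betweenness weighted by the bridging coefficient (inverse-degree ratio)."""
    bet = betweenness(adj)
    bc = {}
    for v, nbrs in adj.items():
        if not nbrs:
            bc[v] = 0.0
            continue
        beta = (1 / len(nbrs)) / sum(1 / len(adj[u]) for u in nbrs)
        bc[v] = bet[v] * beta
    return bc

def jaccard(adj, u, v):
    """Similarity between the neighbourhoods of u and v."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

# Toy network: two triangles joined through the bridge-node 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
bc = bridging_centrality(adj)
bridge = max(bc, key=bc.get)                          # node 3
sims = {u: jaccard(adj, bridge, u) for u in adj[bridge]}
# The paper's loop would now disconnect the least-similar neighbour
# and repeat until its stoppage criterion is met.
```

On this toy graph, node 3 gets by far the largest bridging centrality, which is exactly the behaviour the iterative fragmentation step relies on.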
Show Figures

Figure 1
<p>Example synthetic network. (<b>a</b>) Full network. (<b>b</b>) Fragmented network.</p>
Figure 2
<p>(<b>a</b>) Normalised mutual Information performance comparison of the proposed algorithm using Lancichinetti–Fortunato–Radicchi (LFR) benchmark. Number of nodes <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>1000</mn> <mo>,</mo> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>5</mn> <mo>,</mo> </mrow> </semantics></math><math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>5</mn> <mo>,</mo> <mo>&lt;</mo> <mi>k</mi> <mo>&gt;</mo> <mspace width="0.166667em"/> <mo>=</mo> <mn>10</mn> <mo>,</mo> <msub> <mo movablelimits="true" form="prefix">min</mo> <mi>C</mi> </msub> <mo>=</mo> <mn>20</mn> <mo>,</mo> <msub> <mo movablelimits="true" form="prefix">max</mo> <mi>C</mi> </msub> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math>. (<b>b</b>) Normalised mutual information performance comparison of the proposed algorithm using LFR benchmark. Number of nodes <math display="inline"><semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>2000</mn> <mo>,</mo> <msub> <mi>τ</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>5</mn> <mo>,</mo> </mrow> </semantics></math><math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>5</mn> <mo>,</mo> <mo>&lt;</mo> <mi>k</mi> <mo>&gt;</mo> <mspace width="0.166667em"/> <mo>=</mo> <mn>10</mn> <mo>,</mo> <msub> <mo movablelimits="true" form="prefix">min</mo> <mi>C</mi> </msub> <mo>=</mo> <mn>20</mn> <mo>,</mo> <msub> <mo movablelimits="true" form="prefix">max</mo> <mi>C</mi> </msub> <mo>=</mo> <mn>60</mn> <mo>.</mo> </mrow> </semantics></math> The mixing parameter <math display="inline"><semantics> <mrow> <mi>m</mi> <mi>u</mi> </mrow> </semantics></math> ranges from 0 to 0.8 with a step increment of 0.1.</p>
Figure 3
<p>(<b>a</b>) Modularity measure comparison among CNM, LPA, Louvain, SPA, GN and the proposed algorithm. (<b>b</b>) F1-score comparison among CNM, LPA, Louvain, SPA, GN and the proposed algorithm. The email network is omitted from the F1-score computation because its ground-truth data are unavailable.</p>
Figure 4
<p>(<b>a</b>) Zachary’s karate club network partitioned into 2 communities with 1 disjunct node. (<b>b</b>) Zachary’s karate club network partitioned into 4 communities with 1 disjunct node and 1 overlapping node. The partitions overlapped by node 28 are overlapping communities. The rest of the nodes not indicated on the legends in <a href="#entropy-21-01237-f004" class="html-fig">Figure 4</a>a,b represent different communities according to their respective colours.</p>
Figure 5
<p>(<b>a</b>) Dolphins network partitioned into 2 communities with 1 disjunct node. (<b>b</b>) Dolphins network partitioned into 4 communities with 2 disjunct nodes. The rest of the nodes not indicated on the legends in <a href="#entropy-21-01237-f005" class="html-fig">Figure 5</a>a,b represent different communities according to their respective colours.</p>
Figure 6
<p>Krebs’ network of political books partitioned into 4 communities.</p>
13 pages, 288 KiB  
Article
Probabilistic Modeling with Matrix Product States
by James Stokes and John Terilla
Entropy 2019, 21(12), 1236; https://doi.org/10.3390/e21121236 - 17 Dec 2019
Cited by 19 | Viewed by 5274
Abstract
Inspired by the possibility that generative models based on quantum circuits can provide a useful inductive bias for sequence modeling tasks, we propose an efficient training algorithm for a subset of classically simulable quantum circuit models. The gradient-free algorithm, presented as a sequence of exactly solvable effective models, is a modification of the density matrix renormalization group procedure adapted for learning a probability distribution. The conclusion that circuit-based models offer a useful inductive bias for classical datasets is supported by experimental results on the parity learning problem. Full article
(This article belongs to the Section Quantum Information)
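The model class here is a matrix product state (MPS) whose squared amplitudes, via the Born rule, define a probability distribution over bitstrings. Below is a minimal numpy sketch of that parameterization, assuming random tensors and brute-force normalization for small n; the DMRG-style training sweep from the paper is not shown, and all names and dimensions are illustrative.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 6, 2, 3            # sites (bits), local dimension, bond dimension

# One order-3 tensor per site, plus boundary vectors (open boundary).
tensors = [rng.normal(size=(d, D, D)) for _ in range(n)]
left = rng.normal(size=D)
right = rng.normal(size=D)

def amplitude(bits):
    """psi(x) = left . A1[x1] ... An[xn] . right"""
    v = left
    for A, b in zip(tensors, bits):
        v = v @ A[b]
    return v @ right

# Born rule: p(x) = |psi(x)|^2 / Z.  For small n the normalisation Z can
# be computed by brute force; in practice one contracts transfer matrices.
Z = sum(amplitude(x) ** 2 for x in itertools.product(range(d), repeat=n))

def prob(bits):
    return amplitude(bits) ** 2 / Z

total = sum(prob(x) for x in itertools.product(range(d), repeat=n))
```

Because every probability is a squared amplitude divided by Z, the distribution is automatically non-negative and normalized, which is what makes this family of "classically simulable quantum circuit models" trainable as a generative model.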
Show Figures

Figure 1
<p>A bird’s eye view of the training dynamics of exact single-site DMRG on the unit sphere. (<b>a</b>) The initial vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>0</mn> </msub> </semantics></math> and the vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> </semantics></math> lie in the unit sphere of <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> </mrow> </semantics></math>. (<b>b</b>) The vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>0</mn> </msub> </semantics></math> is used to define the subspace <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mn>1</mn> </msub> </semantics></math>. The unit vectors in <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mn>1</mn> </msub> </semantics></math> define a lower dimensional sphere in <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> </mrow> </semantics></math> (in blue). The vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>1</mn> </msub> </semantics></math> is the vector in that sphere that is closest to <math display="inline"><semantics> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> </semantics></math>. (<b>c</b>) The vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>1</mn> </msub> </semantics></math> is used to define the subspace <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mn>2</mn> </msub> </semantics></math>. The unit sphere in <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mn>2</mn> </msub> </semantics></math> (in blue) contains <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>1</mn> </msub> </semantics></math> but does not contain <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>0</mn> </msub> </semantics></math>. 
The vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>2</mn> </msub> </semantics></math> is the unit vector in <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mn>2</mn> </msub> </semantics></math> closest to <math display="inline"><semantics> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> </semantics></math>. (<b>d</b>) The vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>2</mn> </msub> </semantics></math> is used to define the subspace <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mn>3</mn> </msub> </semantics></math>. The vector <math display="inline"><semantics> <msub> <mi>ψ</mi> <mn>3</mn> </msub> </semantics></math> is the unit vector in <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mn>3</mn> </msub> </semantics></math> closest to <math display="inline"><semantics> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> </semantics></math>. And so on.</p>
Figure 2
<p>A representative bias-variance tradeoff curve showing negative log-likelihood (base 2) as a function of bond dimension for exact single-site DMRG on the <math display="inline"><semantics> <msub> <mi>P</mi> <mn>20</mn> </msub> </semantics></math> dataset. For bond dimension 3, the generalization gap is approximately <math display="inline"><semantics> <mrow> <mi>ϵ</mi> <mo>=</mo> <mn>0.0237</mn> </mrow> </semantics></math>. For reference, the uniform distribution on bitstrings has an NLL of 20. Memorizing the training data would yield an NLL of approximately <math display="inline"><semantics> <mrow> <mn>13.356</mn> </mrow> </semantics></math>.</p>
Figure 3
<p>A representative bias-variance tradeoff curve showing negative log-likelihood (base 2) as a function of bond dimension for exact single-site DMRG on the div7 dataset. For bond dimension 8, the generalization gap is approximately <math display="inline"><semantics> <mrow> <mi>ϵ</mi> <mo>=</mo> <mn>0.032</mn> </mrow> </semantics></math>. For reference, the uniform distribution on bitstrings has an NLL of 20, the target distribution has an NLL of <math display="inline"><semantics> <mrow> <mn>17.192</mn> </mrow> </semantics></math>, and memorizing the training data would yield an NLL of approximately <math display="inline"><semantics> <mrow> <mn>13.87</mn> </mrow> </semantics></math>.</p>
Figure A1
<p>The shaded region represents the model class <math display="inline"><semantics> <mrow> <mi mathvariant="script">M</mi> </mrow> </semantics></math>. The red points all lie in <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math>. The vector <math display="inline"><semantics> <msub> <mover accent="true"> <mi>ψ</mi> <mo>˜</mo> </mover> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math> is defined to be the unit vector in <math display="inline"><semantics> <msub> <mi mathvariant="script">H</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math> closest to the target <math display="inline"><semantics> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> </semantics></math>. Note that <math display="inline"><semantics> <msub> <mover accent="true"> <mi>ψ</mi> <mo>˜</mo> </mover> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math> does not lie in <math display="inline"><semantics> <mrow> <mi mathvariant="script">M</mi> </mrow> </semantics></math>. The vector <math display="inline"><semantics> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>SVD</mi> </msubsup> </semantics></math> is defined to be the vector in <math display="inline"><semantics> <mrow> <mi mathvariant="script">M</mi> <mo>∩</mo> <msub> <mi mathvariant="script">H</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math> closest to <math display="inline"><semantics> <msub> <mover accent="true"> <mi>ψ</mi> <mo>˜</mo> </mover> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math>. 
In this picture, <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>SVD</mi> </msubsup> <mo>−</mo> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> <mrow> <mo>∥</mo> <mo>&gt;</mo> <mo>∥</mo> </mrow> <msub> <mi>ψ</mi> <mi>t</mi> </msub> <mo>−</mo> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> <mrow> <mo>∥</mo> <mo>.</mo> </mrow> </mrow> </semantics></math> There may be a point, such as the one labelled <math display="inline"><semantics> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>better</mi> </msubsup> </semantics></math>, which lies in <math display="inline"><semantics> <mrow> <mi mathvariant="script">M</mi> <mo>∩</mo> <msub> <mi mathvariant="script">H</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </mrow> </semantics></math> and is closer to <math display="inline"><semantics> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> </semantics></math> than <math display="inline"><semantics> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>SVD</mi> </msubsup> </semantics></math>, notwithstanding the fact that it is further from <math display="inline"><semantics> <msub> <mover accent="true"> <mi>ψ</mi> <mo>˜</mo> </mover> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> </semantics></math>. 
This figure, to scale, depicts a scenario in which <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msub> <mi>ψ</mi> <mi>t</mi> </msub> <mo>−</mo> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> <mrow> <mo>∥</mo> </mrow> </mrow> </semantics></math> = 0.09, <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>SVD</mi> </msubsup> <mo>−</mo> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> <mrow> <mo>∥</mo> </mrow> </mrow> </semantics></math> = 0.10, <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>better</mi> </msubsup> <mo>−</mo> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> <mrow> <mo>∥</mo> </mrow> </mrow> </semantics></math> = 0.07, <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msub> <mover accent="true"> <mi>ψ</mi> <mo>˜</mo> </mover> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>−</mo> <msub> <mi>ψ</mi> <mover accent="true"> <mi>π</mi> <mo>^</mo> </mover> </msub> <mrow> <mo>∥</mo> </mrow> </mrow> </semantics></math> = 0.06, <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>SVD</mi> </msubsup> <mo>−</mo> <msub> <mover accent="true"> <mi>ψ</mi> <mo>˜</mo> </mover> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mrow> <mo>∥</mo> </mrow> </mrow> </semantics></math> = 0.07, and <math display="inline"><semantics> <mrow> <mrow> <mo>∥</mo> </mrow> <msubsup> <mi>ψ</mi> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> <mi>better</mi> </msubsup> <mo>−</mo> <msub> <mover accent="true"> <mi>ψ</mi> <mo>˜</mo> </mover> <mrow> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mrow> <mo>∥</mo> </mrow> </mrow> </semantics></math> = 0.08.</p>
19 pages, 1108 KiB  
Article
A Quantum Cellular Automata Type Architecture with Quantum Teleportation for Quantum Computing
by Dimitrios Ntalaperas, Konstantinos Giannakis and Nikos Konofaos
Entropy 2019, 21(12), 1235; https://doi.org/10.3390/e21121235 - 17 Dec 2019
Viewed by 3815
Abstract
We propose an architecture based on Quantum Cellular Automata which allows the use of only one type of quantum gate per computational step, using nearest neighbor interactions. The model is built in partial steps, each one of them analyzed using nearest neighbor interactions, starting with single-qubit operations and continuing with two-qubit ones. A demonstration of the model is given, by analyzing how the techniques can be used to design a circuit implementing the Quantum Fourier Transform. Since the model uses only one type of quantum gate at each phase of the computation, physical implementation can be easier since at each step only one kind of input pulse needs to be applied to the apparatus. Full article
(This article belongs to the Special Issue Quantum Information Processing)
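The demonstration circuit in this paper is the Quantum Fourier Transform. As background, the standard QFT decomposition into Hadamards and controlled phase rotations (the circuit a QCA architecture would have to realise step by step) can be checked with a plain statevector toy; the simulator below is a generic numpy sketch, not the paper's teleportation-based model, and all names are ours.

```python
import numpy as np

def apply_1q(psi, U, q, n):
    """Apply a 2x2 gate U to qubit q (qubit 0 = most significant bit)."""
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(U, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_cphase(psi, ctrl, targ, theta, n):
    """Multiply amplitudes where ctrl = targ = 1 by exp(i*theta)."""
    psi = psi.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[ctrl] = idx[targ] = 1
    psi[tuple(idx)] *= np.exp(1j * theta)
    return psi.reshape(-1)

def qft(psi):
    """Textbook QFT: Hadamard plus controlled R_k rotations per qubit,
    followed by the usual final qubit-order reversal."""
    psi = psi.astype(complex)
    n = int(np.log2(psi.size))
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for j in range(n):
        psi = apply_1q(psi, H, j, n)
        for k in range(j + 1, n):
            psi = apply_cphase(psi, k, j, 2 * np.pi / 2 ** (k - j + 1), n)
    rev = tuple(range(n - 1, -1, -1))
    return psi.reshape([2] * n).transpose(rev).reshape(-1)
```

Applying `qft` to a computational basis state reproduces the corresponding column of the discrete Fourier transform matrix, which is the correctness check one would also apply to a hardware-level implementation of the circuit.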
Show Figures

Figure 1
<p>Quantum Cellular Automata (QCA) architecture, as proposed in Reference [<a href="#B7-entropy-21-01235" class="html-bibr">7</a>].</p>
Figure 2
<p>Archetypal spin qubit and qubit operation based on Rabi oscillation.</p>
Figure 3
<p>Nearest-neighbor interactions in a two-dimensional grid (initial configuration). Qubits of the same color interact with each other, while qubits with a unique color are acted upon by a single-qubit gate.</p>
Figure 4
<p>Applying the algorithm for grid rearrangement in an example configuration. Qubits of the same color will partake in a single two-qubit operation. The leftmost column is the initial configuration; after the transposition, the qubits are teleported to the right, to the positions indicated by the corresponding colors.</p>
Figure 5
<p>Initial configuration of the computational grid. The horizontal line separates the qubits partaking in the main computation from the ancilla qubits.</p>
Figure 6
<p>Single-qubit operation–Grid configuration.</p>
Figure 7
<p>Two-qubit operation–Grid configuration.</p>
Figure 8
<p>Flowchart of the generic algorithm. Main loop exit and algorithm termination are omitted.</p>
Figure 9
<p>The Quantum Fourier Transform (QFT) circuit.</p>
Figure 10
<p>Controlled <math display="inline"><semantics> <msub> <mi>R</mi> <mrow> <mi>π</mi> <mo>/</mo> <mn>2</mn> </mrow> </msub> </semantics></math>. From initial configuration (left side) to the first horizontal teleportation. Qubits 1 and 2 will interact, so they are teleported to the same column.</p>
Figure 11
<p>Controlled <math display="inline"><semantics> <msub> <mi>R</mi> <mrow> <mi>π</mi> <mo>/</mo> <mn>2</mn> </mrow> </msub> </semantics></math>. Vertical teleportation and the application of the two-qubit quantum operation.</p>
Figure 12
<p>Hadamard Gate. From initial configuration to the first horizontal teleportation.</p>
Figure 13
<p>Hadamard gate. Vertical teleportation and applying the gate using control qubits in pre-prepared states.</p>
Figure 14
<p>Stages for the execution of the Hadamard gate. It is implied that the operation performed in the last stage is the controlled Hadamard operation.</p>
Figure 15
<p>Stages for the execution of the controlled-Phase operation.</p>
21 pages, 2234 KiB  
Review
Mathematics and the Brain: A Category Theoretical Approach to Go Beyond the Neural Correlates of Consciousness
by Georg Northoff, Naotsugu Tsuchiya and Hayato Saigo
Entropy 2019, 21(12), 1234; https://doi.org/10.3390/e21121234 - 17 Dec 2019
Cited by 20 | Viewed by 9773
Abstract
Consciousness is a central issue in neuroscience, however, we still lack a formal framework that can address the nature of the relationship between consciousness and its physical substrates. In this review, we provide a novel mathematical framework of category theory (CT), in which we can define and study the sameness between different domains of phenomena such as consciousness and its neural substrates. CT was designed and developed to deal with the relationships between various domains of phenomena. We introduce three concepts of CT which include (i) category; (ii) inclusion functor and expansion functor; and, most importantly, (iii) natural transformation between the functors. Each of these mathematical concepts is related to specific features in the neural correlates of consciousness (NCC). In this novel framework, we will examine two of the major theories of consciousness, integrated information theory (IIT) of consciousness and temporospatial theory of consciousness (TTC). We conclude that CT, especially the application of the notion of natural transformation, highlights that we need to go beyond NCC and unravels questions that need to be addressed by any future neuroscientific theory of consciousness. Full article
(This article belongs to the Special Issue Integrated Information Theory)
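The review's three CT notions (category, functor, natural transformation) can be made concrete on finite sets. The toy illustration below encodes arrows as Python dicts and checks the functor laws and a naturality square; the functor F (freely adding a marker element) and the transformation t are our own examples, not the paper's N0/N1 construction.

```python
# A category of finite sets: objects are frozensets, arrows are dicts
# (graphs of functions), and composition is dict chasing.

def compose(g, f):
    """g after f, both given as {input: output} dicts."""
    return {x: g[f[x]] for x in f}

def identity(obj):
    return {x: x for x in obj}

MARK = "*"                       # a freely added element

def F_obj(obj):                  # functor F on objects: X  ->  X + {MARK}
    return frozenset(obj) | {MARK}

def F_arr(f):                    # F on arrows: extend f, fixing MARK
    g = dict(f)
    g[MARK] = MARK
    return g

# Functor laws: F(id_X) = id_F(X) and F(g . f) = F(g) . F(f)
A, B, C = frozenset({1, 2}), frozenset({3}), frozenset({4, 5})
f, g = {1: 3, 2: 3}, {3: 5}
assert F_arr(identity(A)) == identity(F_obj(A))
assert F_arr(compose(g, f)) == compose(F_arr(g), F_arr(f))

# Natural transformation t from the inclusion functor (X -> X, f -> f)
# to F: the component t_X is the inclusion of X into X + {MARK}, and the
# naturality square F(f) . t_A = t_B . f commutes for every arrow f.
def t(obj):
    return {x: x for x in obj}   # viewed as a map X -> F_obj(X)

assert compose(F_arr(f), t(A)) == compose(t(B), f)
```

The commuting naturality square is the structural condition the review leans on when comparing the inclusion and expansion functors; here it holds mechanically because F only adds an element that every arrow fixes.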
Show Figures

Figure 1
<p>(<b>a</b>) Objects, arrows, domain, codomain: Each arrow f is associated with two objects, dom(f) and cod(f), which are called the domain and the codomain of f. When dom(f) = X and cod(f) = Y, we denote f: X→Y, as shown in <a href="#entropy-21-01234-f001" class="html-fig">Figure 1</a>a. (The arrow may be drawn in either direction, left to right or the reverse, whichever is convenient.) A system with arrows and objects is called a diagram. (<b>b</b>) Composition: If there are two arrows f and g, such that cod(f) = dom(g), there is a unique arrow, (<b>c</b>) g ∘ f, called the composition of f and g. A diagram is called commutative when any compositions of arrows having the common codomain and domain are equal. (<b>d</b>) Associative law: (h ∘ g) ∘ f = h ∘ (g ∘ f). In other words, the diagram is commutative. (<b>e</b>) Unit law: For any object X there exists an arrow 1X: X→X, such that the diagram is commutative for any f: X→Y. In other words, f ∘ 1X = f = 1Y ∘ f for any f. 1X is called the identity of X.</p>
Figure 2
<p>(<b>a</b>) In integrated information theory (IIT), the category is defined by objects that are stochastic networks with a transition probability matrix (TPM). The exemplar network is composed of two copy gates, A and B, each of which copies the state of the other gate with a time delay of 1. The state of the gate is either on or off. The table on the right describes its TPM. An arrow in category N0 is “decomposition” of the network with TPM (Note that we glossed over various details that are important for IIT3.0 (e.g., distinction between past and future). In particular, how a decomposed subnetwork should be embedded in the original network requires careful consideration of so-called “purview” in IIT3.0. Within IIT’s algorithm, what we call “decomposition” corresponds to a step where one evaluates all potential candidate <span class="html-italic">φ</span> or small phi. For example, for a system ABC, its power set (A, B, C, AB, BC, AC, ABC) needs to be evaluated. In some cases, decomposed candidate small phis may not exist, so it might better be called “potential decomposition”. However, for simplicity, we prefer to call it “decomposition”.). Decomposition allows IIT to quantify the causal contribution of a part of the system to the whole. (<b>b</b>) Disconnection arrows find the minimally disconnected network, which captures the concept of the amount of integration in IIT.</p>
Figure 3
<p>Schematic depiction of a functor: a structure-preserving mapping from one category to another category.</p>
Figure 4
<p>(<b>a</b>) Definition of “inclusion functor”. (<b>b</b>) Subcategory C is included by category D if inclusion functor F: C-&gt;D exists. Note that C does not need to be “a part of” D to be “included” (unlike a commonsense definition of “inclusion”).</p>
Figure 5
<p>(<b>a</b>) Inclusion Functor i: N0→N1. N0 is included in N1 through Inclusion Functor i. (<b>b</b>) Expansion Functor e: N0→N1. e is a different structure-preserving mapping from N0 to N1 (i.e., a functor from N0 to N1), but there is a “natural transformation” from i to e.</p>
Figure 6
<p>Schematic depiction of a natural transformation: a structure-preserving mapping from one functor to another functor.</p>
Figure 7
<p>(<b>a</b>) Inclusion functor, i, and expansion functor, e, in the IIT category N0 (actual) and N1 (all possible). Objects in N0 and N1 (e.g., [AB]) are a network with TPM, and arrows in N0 and N1 are manipulations of the network/TPM that are allowed in IIT. Within N0, we consider only decomposition arrows. N1 is enriched by additional disconnection arrows that represent an operation that finds a “minimally disconnected” network with TPM within N1. An expansion functor, e, finds the minimally disconnected network (e.g., [AB]’) of the original network (e.g., [AB]); e also preserves the structure of N0 and thus qualifies as a functor. A red arrow within N1 that goes from the actual to the minimally disconnected network corresponds to integrated information, <span class="html-italic">φ</span>. (<b>b</b>) Considering decomposition arrows in N0 allows N0 to consist of a power set of the network. If a natural transformation, t, from the inclusion to the expansion functor exists, t gives us a power set of <span class="html-italic">φ</span>’s, the original and the minimally disconnected network with TPMs. This corresponds to system-level integration, <span class="html-italic">Φ</span>.</p>
Figure 8
<p>Natural transformation, t.</p>
16 pages, 1384 KiB  
Article
Detecting Causality in Multivariate Time Series via Non-Uniform Embedding
by Ziyu Jia, Youfang Lin, Zehui Jiao, Yan Ma and Jing Wang
Entropy 2019, 21(12), 1233; https://doi.org/10.3390/e21121233 - 16 Dec 2019
Cited by 24 | Viewed by 4889
Abstract
Causal analysis based on non-uniform embedding schemes is an important way to detect the underlying interactions between dynamic systems. However, there are still some obstacles to estimating high-dimensional conditional mutual information and forming optimal mixed embedding vector in traditional non-uniform embedding schemes. In this study, we present a new non-uniform embedding method framed in information theory to detect causality for multivariate time series, named LM-PMIME, which integrates the low-dimensional approximation of conditional mutual information and the mixed search strategy for the construction of the mixed embedding vector. We apply the proposed method to simulations of linear stochastic, nonlinear stochastic, and chaotic systems, demonstrating its superiority over partial conditional mutual information from mixed embedding (PMIME) method. Moreover, the proposed method works well for multivariate time series with weak coupling strengths, especially for chaotic systems. In the actual application, we show its applicability to epilepsy multichannel electrocorticographic recordings. Full article
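The quantity at the heart of non-uniform embedding schemes is conditional mutual information, I(X;Y|Z), between a candidate lagged term and the target given the terms already selected. The paper estimates it with k-nearest-neighbour estimators and a low-dimensional approximation; the histogram estimator below is a simpler stand-in, and the toy coupled system is our own example.

```python
import numpy as np

def cmi_binned(x, y, z, bins=6):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z), in nats, via histograms."""
    def H(*cols):
        counts, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        p = counts.ravel() / counts.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    return H(x, z) + H(y, z) - H(z) - H(x, y, z)

# Toy system: x drives y at lag 1; z is an unrelated candidate driver.
rng = np.random.default_rng(0)
N = 5000
x = rng.normal(size=N)
z = rng.normal(size=N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * x[t - 1] + 0.5 * y[t - 1] + rng.normal()

# Candidate lagged terms, conditioned on y's own past: the true driver x
# should carry far more conditional information about y than z does.
cmi_x = cmi_binned(x[:-1], y[1:], y[:-1])
cmi_z = cmi_binned(z[:-1], y[1:], y[:-1])
```

A mixed-embedding scheme would add the candidate with the largest conditional contribution (here the lagged x) to the embedding vector and repeat; the binned estimator carries a positive bias, which is one reason the paper prefers k-NN estimators with low-dimensional approximations.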
Show Figures

Figure 1
<p>The flowchart of the low-dimensional approximation of CMI and mixed search strategy (LM)-partial conditional mutual information from mixed embedding (PMIME) method.</p>
Figure 2
<p>Matrix representation of causality for the linear vector autoregressive (VAR) process. Retrieved by (<b>a</b>) traditional PMIME method, (<b>b</b>) mixed search strategy (M)-PMIME method, (<b>c</b>) and LM-PMIME method with k-nearest neighbors (k-NNs) estimator. The length of the time series is 512. <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> is used for the M-PMIME method and the LM-PMIME method. The remaining parameters of the three methods are the same (<math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>6</mn> <mo>,</mo> <mi>A</mi> <mo>=</mo> <mn>0.97</mn> </mrow> </semantics></math>). Color maps for the mean values of coupling measurements are obtained from 100 realizations of the linear VAR process. The direction of causal influence is from row to column in the matrix. The true causal connections in this linear VAR process are at the matrix elements (1, 2), (1, 4), (2, 4), (4, 5), (5, 1), (5, 2) and (5, 3).</p>
Figure 3
<p>Matrix representation of causality for <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>NLVAR</mi> </mrow> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>. Retrieved by (<b>a</b>) traditional PMIME method, (<b>b</b>) M-PMIME method, (<b>c</b>) and LM-PMIME method with k-NNs estimator. The time series length is 512. <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> is used for the M-PMIME method and the LM-PMIME method. The remaining parameters of the three methods are the same (<math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.97</mn> </mrow> </semantics></math>). Color maps for the mean values of coupling measurements are obtained from 100 realizations of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>NLVAR</mi> </mrow> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>. The true causal connections in <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>NLVAR</mi> </mrow> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </semantics></math> are at the matrix elements (1,2), (1,3), (2,3).</p>
Full article ">Figure 4
<p>Matrix representation of causality for <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> variables of the coupled Henon maps (<math display="inline"><semantics> <mrow> <mi>C</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>). Retrieved by (<b>a</b>) the traditional PMIME method, (<b>b</b>) the M-PMIME method, and (<b>c</b>) the LM-PMIME method with the k-NNs estimator. The time series length is 1024. <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> is used for the M-PMIME method and the LM-PMIME method. The remaining parameters of the three methods are the same (<math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics></math>). Color maps for the mean values of coupling measurements are obtained from 100 realizations of the coupled Henon maps. The true causal connections in the coupled Henon maps are at the matrix elements (<math display="inline"><semantics> <mrow> <mi>i</mi> <mo>−</mo> <mn>1</mn> <mo>,</mo> <mi>i</mi> </mrow> </semantics></math>), where <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>2</mn> <mo>,</mo> <mo>⋯</mo> <mspace width="4pt"/> <mo>,</mo> <mn>6</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Matrix representation of causality for <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> variables of the coupled Henon maps (<math display="inline"><semantics> <mrow> <mi>C</mi> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math>). Retrieved by (<b>a</b>) the traditional PMIME method, (<b>b</b>) the M-PMIME method, and (<b>c</b>) the LM-PMIME method with the k-NNs estimator. The time series length is 1024. <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> is used for the M-PMIME method and the LM-PMIME method. The remaining parameters of the three methods are the same (<math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics></math>). Color maps for the mean values of coupling measurements are obtained from 100 realizations of the coupled Henon maps. The true causal connections in the coupled Henon maps are at the matrix elements (<math display="inline"><semantics> <mrow> <mi>i</mi> <mo>−</mo> <mn>1</mn> <mo>,</mo> <mi>i</mi> </mrow> </semantics></math>), where <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>2</mn> <mo>,</mo> <mo>⋯</mo> <mspace width="4pt"/> <mo>,</mo> <mn>6</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Matrix representation of causality for the three coupled Lorenz oscillators. Retrieved by (<b>a</b>) the traditional PMIME method, (<b>b</b>) the M-PMIME method, and (<b>c</b>) the LM-PMIME method with the k-NNs estimator. The length of the time series is 512 with coupling strength <math display="inline"><semantics> <mrow> <mi>C</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>. <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> is used for the M-PMIME method and the LM-PMIME method. The remaining parameters of the three methods are the same (<math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics></math>). Color maps for the mean values of coupling measurements are obtained from 100 realizations of the three coupled Lorenz oscillators. The true causal connections in the three coupled Lorenz oscillators are at the matrix elements (<math display="inline"><semantics> <mrow> <mi>i</mi> <mo>−</mo> <mn>1</mn> <mo>,</mo> <mi>i</mi> </mrow> </semantics></math>), where <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>2</mn> <mo>,</mo> <mn>3</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Matrix representation of causality for the three coupled Lorenz oscillators. Retrieved by (<b>a</b>) the traditional PMIME method, (<b>b</b>) the M-PMIME method, and (<b>c</b>) the LM-PMIME method with the k-NNs estimator. The length of the time series is 512 with coupling strength <math display="inline"><semantics> <mrow> <mi>C</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>. <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> is used for the M-PMIME method and the LM-PMIME method. The remaining parameters of the three methods are the same (<math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics></math>). Color maps for the mean values of coupling measurements are obtained from 100 realizations of the three coupled Lorenz oscillators. The direction of causal influence is from row to column in the matrix. The true causal connections in the three coupled Lorenz oscillators are at the matrix elements (<math display="inline"><semantics> <mrow> <mi>i</mi> <mo>−</mo> <mn>1</mn> <mo>,</mo> <mi>i</mi> </mrow> </semantics></math>), where <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>=</mo> <mn>2</mn> <mo>,</mo> <mn>3</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Results for multivariate electrocorticographic (ECoG) data. Matrices of causalities reflect the pre-seizure state (<b>top</b>) and the seizure state (<b>bottom</b>) estimated by the PMIME method and the LM-PMIME method. The causal strengths are averaged (the mean values of the coupling measurements over all epochs in the same physiological state). Contacts 1 to 64 belong to an eight-by-eight electrode grid, and contacts 65 to 76 belong to two depth electrodes. The direction of causal influence is from row to column in the matrices. The brighter colors indicate more significant values. The key contact is marked by a rectangular box. The parameters <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> are set for the different methods.</p>
Full article ">Figure 9
<p>Results for multivariate ECoG data. Matrices reflect the difference in the total number of significant connections between the seizure state and the pre-seizure state (seizure minus pre-seizure). The numbers are summed over eight seizure epochs and eight pre-seizure epochs, respectively. Contacts 1 to 64 belong to an eight-by-eight electrode grid, and contacts 65 to 76 belong to two depth electrodes. The brighter colors indicate more significant values. The key contact is marked by a rectangular box. The parameters <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> are set for the different methods.</p>
Full article ">
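The causality-matrix captions above follow one convention: influence runs from row to column, and each benchmark's ground truth is a short list of matrix elements (for the coupled Henon maps, the elements (i − 1, i)). As an editorial sketch (not the authors' code), an estimated coupling matrix can be scored against such a ground truth as follows; the threshold value is an assumption for illustration:

```python
import numpy as np

def henon_ground_truth(K=6):
    """True adjacency of the coupled Henon maps: edges (i - 1, i) for
    i = 2..K in the 1-based notation of the captions; row -> column
    encodes the direction of causal influence."""
    A = np.zeros((K, K), dtype=bool)
    for i in range(2, K + 1):
        A[i - 2, i - 1] = True
    return A

def score_matrix(R, truth, thresh=0.0):
    """Sensitivity and specificity of an estimated coupling matrix R
    after thresholding; diagonal entries are ignored."""
    K = truth.shape[0]
    off_diag = ~np.eye(K, dtype=bool)
    est = (R > thresh) & off_diag
    tp = np.sum(est & truth)
    tn = np.sum(~est & ~truth & off_diag)
    sensitivity = tp / truth.sum()
    specificity = tn / (off_diag.sum() - truth.sum())
    return sensitivity, specificity
```

A perfect estimate recovers sensitivity and specificity of 1.0; the color maps in the figures above can be read as the pre-threshold matrices R.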
13 pages, 979 KiB  
Article
Progress in Carnot and Chambadal Modeling of Thermomechanical Engine by Considering Entropy Production and Heat Transfer Entropy
by Michel Feidt and Monica Costea
Entropy 2019, 21(12), 1232; https://doi.org/10.3390/e21121232 - 16 Dec 2019
Cited by 33 | Viewed by 3859
Abstract
Nowadays the importance of thermomechanical engines is recognized worldwide. Since the industrial revolution, physicists and engineers have sought to maximize the efficiency of these machines, but also the mechanical energy or the power output of the engine, as we have recently found. The [...] Read more.
Nowadays the importance of thermomechanical engines is recognized worldwide. Since the industrial revolution, physicists and engineers have sought to maximize the efficiency of these machines, but also the mechanical energy or the power output of the engine, as we have recently found. The optimization procedure applied in many works in the literature focuses on considering new objective functions including economic and environmental criteria (e.g., the ecological coefficient of performance, ECOP). The debate here is oriented more towards fundamental aspects. It is known that the maximum of the power output is not obtained under the same conditions as the maximum of efficiency. This is shown, among other things, by the so-called nice radical that accounts for efficiency at maximum power, most often for the endoreversible configuration. We propose here to enrich the model and the debate by emphasizing the fundamental role of the heat transfer entropy together with the production of entropy, accounting for the external or internal irreversibilities of the converter. This modeling, original to our knowledge, leads to new and more general results that are reported here. The main consequences of the approach are emphasized, and new limits of the efficiency at maximum energy or power output are obtained. Full article
(This article belongs to the Special Issue Carnot Cycle and Heat Engine Fundamentals and Applications)
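The "nice radical" mentioned in the abstract is, for the endoreversible (Curzon–Ahlborn) configuration, the efficiency at maximum power, 1 − sqrt(T_C/T_H), which is always below the Carnot limit 1 − T_C/T_H. A small illustrative computation (an editor's sketch; the reservoir temperatures are assumed, not taken from the paper):

```python
from math import sqrt

def carnot_efficiency(t_hot, t_cold):
    """Reversible (Carnot) efficiency between reservoirs at t_hot, t_cold [K]."""
    return 1.0 - t_cold / t_hot

def curzon_ahlborn_efficiency(t_hot, t_cold):
    """Efficiency at maximum power of the endoreversible engine:
    the 'nice radical' 1 - sqrt(t_cold / t_hot)."""
    return 1.0 - sqrt(t_cold / t_hot)

# Illustrative reservoir temperatures (assumed, not from the paper)
eta_carnot = carnot_efficiency(600.0, 300.0)      # 0.5
eta_ca = curzon_ahlborn_efficiency(600.0, 300.0)  # ~0.293
```

The gap between the two values is the point the abstract debates: maximum power and maximum efficiency are not reached under the same conditions.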
Show Figures

Figure 1
<p>Schematic representation of the thermo-mechanical engine.</p>
Full article ">Figure 2
<p>Representation of the Carnot engine cycle with internal irreversibility and the corresponding reversible one (in red) in T-S diagram.</p>
Full article ">Figure 3
<p>Representation of the associated cycle to the Chambadal engine in T-S diagram: (<b>a</b>) source of constant temperature; (<b>b</b>) source of finite heat capacity.</p>
Full article ">
20 pages, 831 KiB  
Article
A New Approach to Fuzzy TOPSIS Method Based on Entropy Measure under Spherical Fuzzy Information
by Omar Barukab, Saleem Abdullah, Shahzaib Ashraf, Muhammad Arif and Sher Afzal Khan
Entropy 2019, 21(12), 1231; https://doi.org/10.3390/e21121231 - 16 Dec 2019
Cited by 78 | Viewed by 5110
Abstract
Spherical fuzzy set (SFS) is one of the most important and extensive concepts for accommodating more uncertainties than existing fuzzy set structures. In this article, we will describe a novel enhanced TOPSIS-based procedure for tackling multi-attribute group decision making (MAGDM) issues under [...] Read more.
Spherical fuzzy set (SFS) is one of the most important and extensive concepts for accommodating more uncertainties than existing fuzzy set structures. In this article, we will describe a novel enhanced TOPSIS-based procedure for tackling multi-attribute group decision making (MAGDM) issues under a spherical fuzzy setting, in which the weights of both decision-makers (DMs) and criteria are totally unknown. First, we study the notion of SFSs, the score and accuracy functions of SFSs and their basic operating laws. In addition, we define a generalized distance measure for SFSs, based on a spherical fuzzy entropy measure, to compute the unknown weight information. Secondly, the spherical fuzzy information-based decision-making technique for MAGDM is presented. Lastly, an illustrative robot-selection example is presented to demonstrate the efficiency of the proposed spherical fuzzy decision support approach, along with a discussion of comparative results showing that they are feasible and credible. Full article
(This article belongs to the Section Multidisciplinary Applications)
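The paper derives criterion weights from a spherical fuzzy entropy measure, which operates on membership triples and is not reproduced here. As a rough crisp analogue, the classical Shannon-entropy weighting step used in many TOPSIS pipelines looks like the following sketch (assumes a positive decision matrix; an editor's illustration, not the paper's formulation):

```python
import numpy as np

def entropy_weights(X):
    """Classical Shannon-entropy weighting of an (alternatives x criteria)
    decision matrix with positive entries: criteria whose columns are
    nearly uniform carry little information and receive small weights."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                          # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion, in [0, 1]
    divergence = 1.0 - E
    return divergence / divergence.sum()           # weights summing to 1
```

A criterion on which all alternatives score identically gets zero weight, since it cannot discriminate between them.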
16 pages, 2681 KiB  
Article
A Novel Approach to Support Failure Mode, Effects, and Criticality Analysis Based on Complex Networks
by Lixiang Wang, Wei Dai, Guixiu Luo and Yu Zhao
Entropy 2019, 21(12), 1230; https://doi.org/10.3390/e21121230 - 16 Dec 2019
Cited by 9 | Viewed by 3569
Abstract
Failure Mode, Effects and Criticality Analysis (FMECA) is a method which involves quantitative failure analysis. It systematically examines potential failure modes in a system, as well as the components of the system, to determine the impact of a failure. In addition, it is [...] Read more.
Failure Mode, Effects and Criticality Analysis (FMECA) is a method which involves quantitative failure analysis. It systematically examines potential failure modes in a system, as well as the components of the system, to determine the impact of a failure. In addition, it is one of the most powerful techniques used for risk assessment and maintenance management. However, various drawbacks are inherent to the classical FMECA method, especially in ranking failure modes. This paper proposes a novel approach that uses complex network theory to support FMECA. Firstly, failure modes and their causes and effects are defined as nodes, and a weighted graph is established according to the logical relationships among them. Secondly, we use complex network theory to analyze the weighted graph, and the entropy centrality approach is applied to identify influential nodes. Finally, a real-world case is presented to illustrate and verify the proposed method. Full article
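One common formulation of entropy centrality on a weighted digraph, not necessarily the exact definition used in the paper, scores each node by the Shannon entropy of its outgoing weight distribution, so that failure modes whose influence spreads evenly over many effects rank high. A minimal sketch:

```python
import math

def local_entropy_centrality(adj):
    """Entropy of each node's outgoing weight distribution in a weighted
    digraph given as {node: {neighbor: weight}}. One common formulation
    of entropy centrality; the paper's exact definition may differ."""
    centrality = {}
    for node, neighbors in adj.items():
        total = sum(neighbors.values())
        if total == 0:
            centrality[node] = 0.0  # sinks spread no influence
            continue
        centrality[node] = -sum(
            (w / total) * math.log(w / total)
            for w in neighbors.values() if w > 0
        )
    return centrality
```

A node with one outgoing edge scores 0; a node spreading weight equally over n edges scores ln(n), the maximum for that out-degree.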
Show Figures

Figure 1
<p>Transformation and logical relation between failure modes and their causes and effects.</p>
Full article ">Figure 2
<p>A case of structure diagram of the Failure Mode, Effects and Criticality Analysis (FMECA).</p>
Full article ">Figure 3
<p>An example of a weighted network.</p>
Full article ">Figure 4
<p>Argument structure in the neighborhood network: (<b>a</b>) nodes <span class="html-italic">j</span> and <span class="html-italic">m</span> are first-order neighbors of node <span class="html-italic">i</span>; (<b>b</b>) node <span class="html-italic">j</span> is a first-order neighbor of node <span class="html-italic">i</span>, and nodes <span class="html-italic">m</span> and <span class="html-italic">k</span> are second-order neighbors of node <span class="html-italic">i</span>; (<b>c</b>) nodes <span class="html-italic">j</span> and <span class="html-italic">m</span> are first-order neighbors of node <span class="html-italic">i</span>, and node <span class="html-italic">k</span> is a second-order neighbor of node <span class="html-italic">i</span>.</p>
Full article ">Figure 5
<p>The flow of the whole proposed method.</p>
Full article ">Figure 6
<p>The weighted graph of HVAC system.</p>
Full article ">Figure 7
<p>Comparison of each node’s total influence with the safety threshold.</p>
Full article ">Figure A1
<p>Failure modes and effects analysis for the HVAC system, which include three most critical components: (<b>a</b>) compressor; (<b>b</b>) evaporator blower; (<b>c</b>) air flow detector.</p>
Full article ">Figure A1 Cont.
<p>Failure modes and effects analysis for the HVAC system, which include three most critical components: (<b>a</b>) compressor; (<b>b</b>) evaporator blower; (<b>c</b>) air flow detector.</p>
Full article ">
16 pages, 4146 KiB  
Article
Digital Volume Pulse Measured at the Fingertip as an Indicator of Diabetic Peripheral Neuropathy in the Aged and Diabetic
by Hai-Cheng Wei, Na Ta, Wen-Rui Hu, Ming-Xia Xiao, Xiao-Jing Tang, Bagus Haryadi, Juin J. Liou and Hsien-Tsai Wu
Entropy 2019, 21(12), 1229; https://doi.org/10.3390/e21121229 - 16 Dec 2019
Cited by 12 | Viewed by 3834
Abstract
This study investigated the application of a modified percussion entropy index (PEIPPI) in assessing the complexity of baroreflex sensitivity (BRS) for diabetic peripheral neuropathy prognosis. The index was acquired by comparing the obedience of the fluctuation tendency in the change between [...] Read more.
This study investigated the application of a modified percussion entropy index (PEIPPI) in assessing the complexity of baroreflex sensitivity (BRS) for diabetic peripheral neuropathy prognosis. The index was acquired by comparing the obedience of the fluctuation tendency in the change between the amplitudes of continuous digital volume pulse (DVP) and variations in the peak-to-peak interval (PPI) from a decomposed intrinsic mode function (i.e., IMF6) through ensemble empirical mode decomposition (EEMD). In total, 100 middle-aged subjects were split into three groups: healthy subjects (group 1, 48–89 years, n = 34), subjects with type 2 diabetes without peripheral neuropathy within 5 years (group 2, 42–86 years, n = 42, HbA1c ≥ 6.5%), and type 2 diabetic patients with peripheral neuropathy within 5 years (group 3, 37–75 years, n = 24). The PEIPPI values successfully discriminated among the three groups (p < 0.017) and showed significant associations with the anthropometric (i.e., body weight and waist circumference) and serum biochemical (i.e., triglycerides, glycated hemoglobin, and fasting blood glucose) parameters in all subjects (p < 0.05). The present study, which utilized the DVP signals of aged, overweight subjects and diabetic patients, successfully determined the PPIs from IMF6 through EEMD. The PEIPPI can provide a prognosis of peripheral neuropathy for diabetic patients within 5 years after photoplethysmography (PPG) measurement. Full article
(This article belongs to the Special Issue Entropy and Nonlinear Dynamics in Medicine, Health, and Life Sciences)
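The percussion entropy idea described in the abstract compares the "obedience" of fluctuation tendencies between the {Amp} and {PPI} series after both are reduced to binary up/down sequences. The following simplified stand-in counts matching length-m fluctuation patterns; it is an editor's sketch, not the paper's actual index, which is defined by its Equation (10):

```python
import numpy as np

def binarize(series):
    """'1' where the next sample rises, '0' where it falls (cf. Figure 3)."""
    x = np.asarray(series, dtype=float)
    return (np.diff(x) > 0).astype(int)

def percussion_rate(amp, ppi, m=2):
    """Fraction of length-m binary fluctuation patterns that coincide
    between the {Amp} and {PPI} series: a simplified stand-in for the
    paper's percussion entropy index."""
    a, p = binarize(amp), binarize(ppi)
    n = min(len(a), len(p)) - m + 1
    hits = sum(np.array_equal(a[i:i + m], p[i:i + m]) for i in range(n))
    return hits / n
```

Two series that always rise and fall together give a rate of 1.0; two that always move oppositely give 0.0, bracketing the degree of baroreflex "obedience".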
Show Figures

Figure 1
<p>A lead II electrocardiogram (ECG) obtained using the conventional method and synchronous volume pulse from an infrared photoplethysmography (PPG) sensor attached to the dominant index finger. The PPG-derived digital volume pulse (DVP) signals for certain time periods are shown. R-R interval: the period between consecutive ECG R waves; peak-to-peak interval (PPI): the period between the peaks of two consecutive volume pulses. (<b>a</b>) Healthy subject (age: 52 years), (<b>b</b>) overweight and elderly subject (age: 66 years), and (<b>c</b>) type 2 diabetic patient (age: 42 years).</p>
Full article ">Figure 2
<p>DVP amplitudes of the dominant finger {Amp(1), Amp(2), …, Amp(n)} and peak-to-peak intervals (PPIs) of IMF6 {PPI(1), PPI(2), …, PPI(n − 1)} were simultaneously acquired from a six-channel ECG-based pulse wave velocity system [<a href="#B28-entropy-21-01229" class="html-bibr">28</a>].</p>
Full article ">Figure 3
<p>(<b>a</b>) The four fluctuation patterns of length two and (<b>b</b>) the eight fluctuation patterns of length three, representing fluctuations of {Amp} and {PPI} time series. Here, “1” represents Amp(i + 1) up from Amp(i); “0” represents Amp(i + 1) down from Amp(i) for the {Amp} series; “1” represents PPI(i + 1) increased from PPI(i); “0” represents PPI(i + 1) decreased from PPI(i) for the {PPI} series.</p>
Full article ">Figure 4
<p>The modified percussion entropy index (PEI<sub>PPI</sub>) computation flow chart. Amp: DVP amplitudes of the dominant finger; PPI: peak-to-peak interval of IMF6. The standard deviation of the added noise was set as α = 0.2, and the trial number of the ensemble N = 200 for ensemble empirical mode decomposition (EEMD). Two synchronized series, {Amp} and {PPI}, were then acquired. The computational length of the series {Amp} and {PPI} was set as 1000. Taking into account baroreflex sensitivity (BRS) regulation, the binary sequence transformations for {Amp} and {PPI} were conducted. Subsequently, the proposed PEI<sub>PPI</sub> was computed as in Equation (10).</p>
Full article ">Figure 5
<p>Digital volume pulse (DVP) signals and their corresponding decomposed 6th intrinsic mode function (IMF6) from one representative subject in each group showing: (<b>a</b>) subject A: healthy elderly subject in group 1 (age: 55 years, HbA1c: 5.5%, WC: 73 cm, BMI: 22.7); (<b>b</b>) subject B: diabetic patient without peripheral neuropathy in group 2 (age: 71 years, HbA1c: 8.2%, WC: 94 cm, BMI: 26.5); (<b>c</b>) subject C: type 2 diabetic patient with peripheral neuropathy within 5 years in group 3 (age: 62 years, HbA1c: 8.5%, WC: 98 cm, BMI: 28.6). The peaks of DVP and IMF6 were in phase for subject A, so the {PPI} series in (2) were the same whether derived from DVP or from IMF6. In contrast, it was difficult to calculate the exact PPI from DVP for subjects B and C. (<b>a</b>–<b>c</b>) For all IMF6 values, the exact PPI could be calculated.</p>
Full article ">Figure 6
<p>All Bland–Altman plots demonstrated that the PPI (DVP) series are in good agreement with the PPI (IMF6) series for (<b>a</b>) group 1, (<b>b</b>) group 2, and (<b>c</b>) group 3. Group 1: healthy aged subjects; group 2: diabetic subjects; group 3: diabetic peripheral neuropathy patients. The mean difference and the limits of agreement (mean ± 1.96 SD) are also represented.</p>
Full article ">Figure 7
<p>(<b>a</b>) Correlation between PPI (IMF6) and PPI (DVP) for test subjects in group 1 (r = 0.52, <span class="html-italic">p</span> = 0.001); (<b>b</b>) Correlation between PPI (IMF6) and PPI (DVP) for test subjects in group 2 (r = 0.30, <span class="html-italic">p</span> = 0.001); (<b>c</b>) Correlation between PPI (IMF6) and PPI (DVP) for test subjects in group 3 (r = 0.37, <span class="html-italic">p</span> = 0.001). Group 1: healthy aged subjects; group 2: diabetic patients without DPN; group 3: diabetic peripheral neuropathy patients within 5 years. The regression line describes the 95% confidence interval.</p>
Full article ">
25 pages, 6519 KiB  
Article
Entropy of the Multi-Channel EEG Recordings Identifies the Distributed Signatures of Negative, Neutral and Positive Affect in Whole-Brain Variability
by Soheil Keshmiri, Masahiro Shiomi and Hiroshi Ishiguro
Entropy 2019, 21(12), 1228; https://doi.org/10.3390/e21121228 - 16 Dec 2019
Cited by 7 | Viewed by 4417
Abstract
Individuals’ ability to express their subjective experiences in terms of such attributes as pleasant/unpleasant or positive/negative feelings forms a fundamental property of their affect and emotion. However, neuroscientific findings on the underlying neural substrates of the affect appear to be inconclusive with some [...] Read more.
Individuals’ ability to express their subjective experiences in terms of such attributes as pleasant/unpleasant or positive/negative feelings forms a fundamental property of their affect and emotion. However, neuroscientific findings on the underlying neural substrates of the affect appear to be inconclusive with some reporting the presence of distinct and independent brain systems and others identifying flexible and distributed brain regions. A common theme among these studies is the focus on the change in brain activation. As a result, they do not take into account the findings that indicate the brain activation and its information content do not necessarily modulate and that the stimuli with equivalent sensory and behavioural processing demands may not necessarily result in differential brain activation. In this article, we take a different stance on the analysis of the differential effect of the negative, neutral and positive affect on the brain functioning in which we look into the whole-brain variability: that is the change in the brain information processing measured in multiple distributed regions. For this purpose, we compute the entropy of the multi-channel EEG recordings of individuals who watched movie clips with differing affect. Our results suggest that the whole-brain variability significantly differentiates between the negative, neutral and positive affect. They also indicate that although some brain regions contribute more to such differences, it is the whole-brain variational pattern that results in their significantly above-chance-level prediction. These results imply that although the underlying brain substrates for negative, neutral and positive affect exhibit quantitatively differing degrees of variability, their differences are rather subtly encoded in the whole-brain variational patterns that are distributed across its entire activity. Full article
(This article belongs to the Special Issue Entropy: The Scientific Tool of the 21st Century)
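Under a Gaussian assumption, the differential entropy (DE) of a band-filtered EEG channel has the closed form h = ½ ln(2πeσ²), a function of the channel variance alone; a per-channel DE map in that spirit can be sketched as below (an editor's illustration; the paper's exact estimator may differ):

```python
import numpy as np

def differential_entropy(channel):
    """Closed-form DE of a zero-mean Gaussian signal: 0.5 * ln(2*pi*e*var)."""
    variance = np.var(np.asarray(channel, dtype=float))
    return 0.5 * np.log(2.0 * np.pi * np.e * variance)

def de_map(eeg):
    """Per-channel DE for an (n_channels x n_samples) array, e.g., the
    62 channels listed in Figure 7B."""
    return np.array([differential_entropy(ch) for ch in eeg])
```

Because DE grows monotonically with variance, such maps compare the variability of distributed channels directly, which is the quantity the abstract calls whole-brain variability.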
Show Figures

Figure 1
<p>Spatial maps of participants’ whole-brain differential entropy (DE) associated with (<b>A</b>) Negative (<b>B</b>) Neutral (<b>C</b>) Positive affect. Each of these maps corresponds to one individual who was included in the present study. For each individual, we first computed, for each channel separately, the average DE of all movie clips’ DEs that were associated with a given affect. We then used these average DEs for each channel to construct these maps. Differential patterns of participants’ whole-brain variability in three different affect states are evident in these subplots.</p>
Full article ">Figure 2
<p>Paired Spearman correlation between participants’ DEs (<b>A</b>) positive versus negative (<b>B</b>) positive versus neutral (<b>C</b>) negative versus neutral. The subplots on the right column correspond to the bootstrap correlation test (10,000 simulation runs) at 95.0% confidence interval.</p>
Full article ">Figure 3
<p>(<b>A</b>) Grand averages of the spatial maps of participants’ DEs during negative, neutral and positive affect. Differential patterns of participants’ whole-brain DE in these three different affect states are evident in these subplots. (<b>B</b>) Descriptive statistics of participants’ DEs in negative, neutral and positive affect. The asterisks mark the significant differences in these subplots.</p>
Full article ">Figure 4
<p>Paired two-sample bootstrap test of significance (10,000 simulation runs) at 95.0% (i.e., p &lt; 0.05) confidence interval (CI) associated with participants’ whole-brain DEs. Compared pairs of affect are (<b>A</b>) positive versus neutral, (<b>B</b>) positive versus negative and (<b>C</b>) negative versus neutral. In these subplots, the x-axis shows <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> where <span class="html-italic">i</span> and <span class="html-italic">j</span> refer to one of the negative, neutral or positive affect. The blue line marks the null hypothesis <math display="inline"><semantics> <mrow> <mi>H</mi> <mn>0</mn> </mrow> </semantics></math> that is, non-significant difference between the DEs of two compared affect. The red lines are the boundaries of the 95.0% confidence interval. The yellow line shows the location of the average <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> for 10,000 simulation runs.</p>
Full article ">Figure 5
<p>(<b>A</b>) Spatial map of weights pertinent to the trained linear model on the whole-brain DEs associated with negative, neutral and positive affect. This subplot verifies that the model’s weights were within the [<math display="inline"><semantics> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </semantics></math>⋯ 1] interval, thereby indicating that the performance of the model was not due to any potential overfitting. Distinct patterns of the model’s weight distribution associated with each of the three affects are evident in these maps (see <a href="#app3-entropy-21-01228" class="html-app">Appendix C</a> for results on 500 randomized simulation runs). (<b>B</b>) Descriptive statistics of the linear model’s weights for negative, neutral and positive affect. The asterisks mark the significant differences in these subplots.</p>
Full article ">Figure 6
<p>Linear model’s confusion matrices associated with its prediction accuracy in negative, neutral and positive affect in 1-holdout setting using (<b>A</b>) whole-brain DEs and (<b>B</b>) subset of channels with the similar pattern of significant differences as whole-brain DEs. These results show the model’s performance after data from every participant was used for testing. Correct predictions, per affect, are the diagonal entries of these tables and the off-diagonal entries show the percentage of each of affect that was misclassified (e.g., positive affect misclassified as negative affect). (<b>C</b>) Two-sample bootstrap test of significance (10,000 simulation runs) at 99.0% (i.e., p &lt; 0.01) confidence interval between the accuracy of linear model using the whole-brain DEs versus subset of channels with the similar pattern of significant differences as whole-brain DEs.</p>
Full article ">Figure 7
<p>(<b>A</b>) Schematic diagram of an experiment as described in Reference [<a href="#B58-entropy-21-01228" class="html-bibr">58</a>]. Each experiment included a total of fifteen movie clips (i.e., n = 15) per participant. In this setting, each movie clip was preceded by a five-second hint to prepare the participants for its start. This was then followed by the four-minute movie clip. At the end of each movie clip, the participants were asked to answer three questions that followed Philippot [<a href="#B61-entropy-21-01228" class="html-bibr">61</a>]. These questions were the type of emotion that the participants actually felt while watching the movie clips, whether they had watched the original movies from which the clips were taken and whether they understood the content of those clips. The participants responded to these three questions by scoring them on a scale of 1 to 5. (<b>B</b>) Arrangement of the EEG electrodes in this experiment. The sixty-two EEG channels were: FP1, FPZ, FP2, AF3, AF4, F7, F5, F3, F1, FZ, F2, F4, F6, F8, FT7, FC5, FC3, FC1, FCZ, FC2, FC4, FC6, FT8, T7, C5, C3, C1, CZ, C2, C4, C6, T8, TP7, CP5, CP3, CP1, CPZ, CP2, CP4, CP6, TP8, P7, P5, P3, P1, PZ, P2, P4, P6, P8, PO7, PO5, PO3, POZ, PO4, PO6, PO8, CB1, O1, OZ, O2, CB2.</p>
Full article ">Figure A1
<p>Paired two-sample bootstrap test of significance (10,000 simulation runs) at 95.0% (i.e., p &lt; 0.05) confidence interval (CI) associated with the channels with significant DE differences. In these subplots, the x-axis shows <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> where <span class="html-italic">i</span> and <span class="html-italic">j</span> refer to one of the negative, neutral or positive affect. The blue line marks the null hypothesis <math display="inline"><semantics> <mrow> <mi>H</mi> <mn>0</mn> </mrow> </semantics></math> that is, non-significant difference between the two channel’s DEs. The red lines are the boundaries of the 95.0% confidence interval. The yellow line shows the location of the average <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> for 10,000 simulation runs.</p>
Full article ">Figure A2
<p>(Continued from <a href="#entropy-21-01228-f0A1" class="html-fig">Figure A1</a>) Paired two-sample bootstrap test of significance (10,000 simulation runs) at 95.0% (i.e., p &lt; 0.05) confidence interval (CI) associated with the channels with significant DE differences. In these subplots, the x-axis shows <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> where <span class="html-italic">i</span> and <span class="html-italic">j</span> refer to one of the negative, neutral or positive affect. The blue line marks the null hypothesis <math display="inline"><semantics> <mrow> <mi>H</mi> <mn>0</mn> </mrow> </semantics></math> that is, non-significant difference between the two channel’s DEs. The red lines are the boundaries of the 95.0% confidence interval. The yellow line shows the location of the average <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> for 10,000 simulation runs.</p>
Full article ">Figure A3
<p>Paired two-sample bootstrap test of significance (10,000 simulation runs) at the 95.0% (i.e., p &lt; 0.05) confidence interval (CI) associated with the channels’ weights with significant differences. In these subplots, the x-axis shows <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math>, where <span class="html-italic">i</span> and <span class="html-italic">j</span> refer to the weights for one of the negative, neutral or positive affects. The blue line marks the null hypothesis <math display="inline"><semantics> <mrow> <mi>H</mi> <mn>0</mn> </mrow> </semantics></math>, that is, no significant difference between these weights. The red lines are the boundaries of the 95.0% confidence interval. The yellow line shows the location of the average <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> over the 10,000 simulation runs.</p>
Full article ">Figure A3 Cont.
<p>Paired two-sample bootstrap test of significance (10,000 simulation runs) at the 95.0% (i.e., p &lt; 0.05) confidence interval (CI) associated with the channels’ weights with significant differences. In these subplots, the x-axis shows <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math>, where <span class="html-italic">i</span> and <span class="html-italic">j</span> refer to the weights for one of the negative, neutral or positive affects. The blue line marks the null hypothesis <math display="inline"><semantics> <mrow> <mi>H</mi> <mn>0</mn> </mrow> </semantics></math>, that is, no significant difference between these weights. The red lines are the boundaries of the 95.0% confidence interval. The yellow line shows the location of the average <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mi>i</mi> </msub> <mo>−</mo> <msub> <mi>μ</mi> <mi>j</mi> </msub> <mo>,</mo> <mspace width="3.33333pt"/> <mi>i</mi> <mo>≠</mo> <mi>j</mi> </mrow> </semantics></math> over the 10,000 simulation runs.</p>
Full article ">Figure A4
<p>Confusion matrices associated with randomized affect prediction based on individuals’ brain variability by the linear model in the 1-holdout setting (500 simulation runs per affect) using (<b>A</b>) the whole brain and (<b>B</b>) the subset of channels with a pattern of significant differences similar to the whole-brain DEs. These results show the model’s performance after data from every participant was used for testing. The model’s accuracies for correct and incorrect predictions are the diagonal and off-diagonal entries, respectively.</p>
Full article ">
16 pages, 376 KiB  
Article
Convolutional Recurrent Neural Networks with a Self-Attention Mechanism for Personnel Performance Prediction
by Xia Xue, Jun Feng, Yi Gao, Meng Liu, Wenyu Zhang, Xia Sun, Aiqi Zhao and Shouxi Guo
Entropy 2019, 21(12), 1227; https://doi.org/10.3390/e21121227 - 16 Dec 2019
Cited by 17 | Viewed by 4755
Abstract
Personnel performance is important for the high-technology industry to maintain its core competitive advantages. Therefore, predicting personnel performance is an important research area in human resource management (HRM). In this paper, to improve prediction performance, we propose a novel framework for [...] Read more.
Personnel performance is important for the high-technology industry to maintain its core competitive advantages. Therefore, predicting personnel performance is an important research area in human resource management (HRM). In this paper, to improve prediction performance, we propose a novel framework for personnel performance prediction to help decision-makers forecast future personnel performance and recruit the most suitable talent. Firstly, a hybrid convolutional recurrent neural network (CRNN) model based on a self-attention mechanism is presented, which can automatically learn discriminative features and capture global contextual information from personnel performance data. Moreover, we treat the prediction problem as a classification task. Then, a k-nearest neighbor (KNN) classifier is used to predict personnel performance. The proposed framework is applied to a real case of personnel performance prediction. The experimental results demonstrate that the presented approach achieves a significant performance improvement compared to existing methods. Full article
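The scaled dot-product self-attention at the heart of such a model can be sketched in a few lines of NumPy. This is a generic illustration of the mechanism only (function names are ours); it omits the convolutional and recurrent feature extractors that the paper's framework combines it with.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of feature
    vectors X (seq_len x d_model), e.g. features produced by a CRNN.
    Returns the attended outputs and the attention weight matrix."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d_k))  # each row sums to 1
    return A @ V, A
```

Each output vector is a weighted mixture of all positions in the sequence, which is how the model captures the global contextual information mentioned in the abstract.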
Show Figures

Figure 1

Figure 1
<p>The architecture of our framework.</p>
Full article ">Figure 2
<p>Long short-term memory (LSTM) structure.</p>
Full article ">Figure 3
<p>Performance of all the methods on personnel performance prediction.</p>
Full article ">
16 pages, 7457 KiB  
Article
Entropy Generation and Heat Transfer in Drilling Nanoliquids with Clay Nanoparticles
by Kottakkaran Sooppy Nisar, Dolat Khan, Arshad Khan, Waqar A Khan, Ilyas Khan and Abdullah Mohammed Aldawsari
Entropy 2019, 21(12), 1226; https://doi.org/10.3390/e21121226 - 16 Dec 2019
Cited by 23 | Viewed by 3056
Abstract
Different types of nanomaterials are used these days. Among them, clay nanoparticles are one of the most applicable and affordable options. Specifically, clay nanoparticles have numerous applications in the field of medical science for cleaning blood, water, etc. Based on this motivation, [...] Read more.
Different types of nanomaterials are used these days. Among them, clay nanoparticles are one of the most applicable and affordable options. Specifically, clay nanoparticles have numerous applications in the field of medical science for cleaning blood, water, etc. Based on this motivation, this article aimed to study entropy generation in different drilling nanoliquids containing clay nanoparticles. Entropy generation and natural convection usually occur during the drilling process of oil and gas from rocks and land, wherein clay nanoparticles may be included in the drilling fluids. In this work, water, engine oil and kerosene oil were taken as base fluids. A comparative analysis was completed for these three types of base fluid, each containing clay nanoparticles. Numerical values of viscosity and effective thermal conductivity were computed for the nanofluids based on the Maxwell–Garnett (MG) and Brinkman models. The closed-form solution of the formulated problem (in terms of partial differential equations with defined initial and boundary conditions) was determined using the Laplace transform technique. Numerical values of the temperature and velocity fields were used to calculate the Bejan number and local entropy generation. Such solutions are uncommon in the literature, and this work can therefore assist in obtaining exact solutions for a number of technically relevant problems of this type. Herein, the effects of different parameters on entropy generation and on Bejan number minimization and maximization are displayed through graphs. Full article
(This article belongs to the Special Issue Entropy Generation and Heat Transfer II)
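The Brinkman and Maxwell–Garnett models cited in the abstract have standard closed forms for spherical particles, sketched below. This is an illustrative sketch under those standard assumptions, not a reproduction of the authors' computations; the symbols are μ<sub>f</sub> (base-fluid viscosity), k<sub>f</sub> and k<sub>p</sub> (base-fluid and particle thermal conductivities), and φ (particle volume fraction).

```python
def brinkman_viscosity(mu_f, phi):
    """Brinkman model: effective dynamic viscosity of a nanofluid,
    mu_nf = mu_f / (1 - phi)^2.5."""
    return mu_f / (1.0 - phi) ** 2.5

def maxwell_garnett_conductivity(k_f, k_p, phi):
    """Maxwell-Garnett model for spherical particles: effective
    thermal conductivity k_nf of the nanofluid."""
    num = k_p + 2.0 * k_f - 2.0 * phi * (k_f - k_p)
    den = k_p + 2.0 * k_f + phi * (k_f - k_p)
    return k_f * num / den
```

For φ = 0 both functions reduce to the base-fluid properties; for k<sub>p</sub> &gt; k<sub>f</sub> the effective conductivity increases with φ, as the comparative curves for the three base fluids illustrate.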
Show Figures

Figure 1

Figure 1
<p>Physical sketch of the problem.</p>
Full article ">Figure 2
<p>Velocity variation for different values of <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>, where <math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Velocity variation for different values of <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Velocity variation for different values of <math display="inline"><semantics> <mrow> <mi>G</mi> <mi>r</mi> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Comparison of velocity variation for different nanofluids, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Temperature variation for different <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>, where <math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Temperature variation for different values of <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Comparison of temperature variation for different nanofluids, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>.</mo> </mrow> </semantics></math></p>
Full article ">Figure 9
<p>Entropy generation for different values of <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>, where <math display="inline"><semantics> <mrow> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>Entropy generation for different values of <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Entropy generation for different values of <math display="inline"><semantics> <mi mathvariant="sans-serif">Ω</mi> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 12
<p>Entropy generation for different values of <math display="inline"><semantics> <mrow> <mi>G</mi> <mi>r</mi> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 13
<p>Entropy generation for different values of <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>r</mi> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 14
<p>Comparison of entropy generation for different nanofluids, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 15
<p>Bejan number variation for different values of <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>, where <math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 16
<p>Bejan number variation for different values of <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 17
<p>Bejan number variation for different values of <math display="inline"><semantics> <mi mathvariant="sans-serif">Ω</mi> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 18
<p>Bejan number variation for different values of <math display="inline"><semantics> <mrow> <mi>G</mi> <mi>r</mi> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 19
<p>Bejan number variation for different values of <math display="inline"><semantics> <mrow> <mi>B</mi> <mi>r</mi> </mrow> </semantics></math>, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>Pr</mi> <mo>=</mo> <mn>6.21</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 20
<p>Comparison of Bejan number variation for different nanofluids, where <math display="inline"><semantics> <mrow> <mi>ϕ</mi> <mo>=</mo> <mn>0.04</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>G</mi> <mi>r</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi mathvariant="sans-serif">Ω</mi> <mo>=</mo> <mn>10</mn> <mo>,</mo> <mo> </mo> <mo> </mo> <mi>B</mi> <mi>r</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Full article ">
Article
A Comparative Study of Geoelectric Signals Possibly Associated with the Occurrence of Two Ms > 7 EQs in the South Pacific Coast of Mexico
by Lev Guzmán-Vargas, Carlos Carrizales-Velazquez, Israel Reyes-Ramírez, Jorge Fonseca-Campos, Arturo de la Rosa-Galindo, Víctor O. Quintana-Moreno, José Antonio Peralta and Fernando Angulo-Brown
Entropy 2019, 21(12), 1225; https://doi.org/10.3390/e21121225 - 15 Dec 2019
Cited by 1 | Viewed by 2838
Abstract
During past decades, several studies have suggested the existence of possible seismic electric precursors associated with earthquakes of magnitude M > 7 . However, additional analyses are needed to have more reliable evidence of pattern behavior prior to the occurrence of a big [...] Read more.
During past decades, several studies have suggested the existence of possible seismic electric precursors associated with earthquakes of magnitude M > 7. However, additional analyses are needed to have more reliable evidence of pattern behavior prior to the occurrence of a big event. In this article we report analyses of self-potential Δ V records spanning approximately two years from three electro-seismic stations in Mexico located at Acapulco, Guerrero; Petatlán, Guerrero and Pinotepa Nacional, Oaxaca. On 18 April 2014 an M s 7.2 earthquake occurred near our Petatlán station. Our study shows two notable anomalies: in the behavior of the Fourier power spectrum of Δ V in the ultra-low-frequency (ULF) range, and in the transition of the α l -exponent of the detrended fluctuation analysis of the Δ V time series from uncorrelated to correlated signals. These anomalies lasted approximately three and a half months before the main shock. We compare this electric pattern with another electric signal we reported in association with an M s 7.4 earthquake that occurred on 14 September 1995 in Guerrero state, Mexico. Our characterization of the anomalies observed in both signals points to similar features that enrich our knowledge of precursory phenomena linked to the occurrence of earthquakes of magnitude M > 7. Full article
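The α<sub>l</sub> exponent discussed in this abstract comes from detrended fluctuation analysis (DFA), which distinguishes uncorrelated signals (α ≈ 0.5) from 1/f-like correlated ones (α ≈ 1). A minimal first-order DFA sketch follows; it is our own illustration, not the authors' processing pipeline.

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order detrended fluctuation analysis (DFA-1).
    Returns the scaling exponent alpha: ~0.5 for uncorrelated noise,
    ~1.0 for 1/f-like long-range correlated signals."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())            # integrated profile
    F = []
    for s in scales:
        n_win = y.size // s
        segs = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # linear local trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))     # RMS fluctuation at scale s
    # alpha is the slope of log F(s) versus log s.
    return np.polyfit(np.log(scales), np.log(F), 1)[0]
```

A transition of α<sub>l</sub> from about 0.5 toward 1, as reported before the main shock, would show up as an increase in the value returned here when the analysis window slides over the ΔV series.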
Show Figures

Figure 1

Figure 1
<p>Map of locations of Acapulco, Petatlán and Pinotepa stations. The locations of the epicenters of the studied earthquakes are also shown.</p>
Full article ">Figure 2
<p>Representative segments of the signals from (<b>a</b>) Petatlán, (<b>b</b>) Pinotepa, and (<b>c</b>) Acapulco stations. These segments correspond to the first six hours of October (2013 for Petatlán and Pinotepa stations, and 1994 for Acapulco station).</p>
Full article ">Figure 3
<p>Average power spectrum values as a function of time for eight frequency bands from three stations in the South Pacific Coast in México. (<b>a</b>) Acapulco data for the period June 1994 to May 1996. The vertical line indicates the occurrence of the <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>s</mi> </msub> <mn>7.4</mn> </mrow> </semantics></math> EQ, 14 September 1995. (<b>b</b>) Petatlán data for the period January 2013 to May 2014. The vertical line indicates the <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>s</mi> </msub> <mn>7.2</mn> </mrow> </semantics></math> EQ, 18 April 2014. (<b>c</b>) Pinotepa Nacional data for the period November 2013 to May 2014.</p>
Full article ">Figure 4
<p>Fluctuation function <math display="inline"><semantics> <mrow> <mi>F</mi> <mo>(</mo> <mi>s</mi> <mo>)</mo> </mrow> </semantics></math> of a representative segment of instantaneous amplitude corresponding to the frequency band <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>f</mi> <mn>1</mn> </msub> </mrow> </semantics></math> during October 1994 in Acapulco station. A crossover <math display="inline"><semantics> <msub> <mi>n</mi> <mi>x</mi> </msub> </semantics></math> is observed, which separates two regions. For short scales (<math display="inline"><semantics> <mrow> <mi>s</mi> <mo>&lt;</mo> <msub> <mi>n</mi> <mi>x</mi> </msub> </mrow> </semantics></math>), <math display="inline"><semantics> <msub> <mi>α</mi> <mi>s</mi> </msub> </semantics></math> is close to 2, which corresponds to a very regular (tending to continuous functions) time series, while for large scales (<math display="inline"><semantics> <mrow> <mi>s</mi> <mo>&gt;</mo> <msub> <mi>n</mi> <mi>x</mi> </msub> </mrow> </semantics></math>), <math display="inline"><semantics> <msub> <mi>α</mi> <mi>l</mi> </msub> </semantics></math> is close to one, i.e., close to <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mi>f</mi> </mrow> </semantics></math> noise.</p>
Full article ">Figure 5
<p>Heat map of evolution of the correlation <math display="inline"><semantics> <msub> <mi>α</mi> <mi>l</mi> </msub> </semantics></math> exponent for eight frequency bands obtained with data from three stations in the South Pacific Coast of Mexico located at (<b>a</b>) Acapulco, (<b>b</b>) Petatlán and (<b>c</b>) Pinotepa Nacional.</p>
Full article ">
15 pages, 1834 KiB  
Article
Alterations of Cardiovascular Complexity during Acute Exposure to High Altitude: A Multiscale Entropy Approach
by Andrea Faini, Sergio Caravita, Gianfranco Parati and Paolo Castiglioni
Entropy 2019, 21(12), 1224; https://doi.org/10.3390/e21121224 - 15 Dec 2019
Cited by 6 | Viewed by 3170
Abstract
Stays at high altitude induce alterations in cardiovascular control and are a model of specific pathological cardiovascular derangements at sea level. However, high-altitude alterations of the complex cardiovascular dynamics remain an almost unexplored issue. Therefore, our aim is to describe the altered cardiovascular [...] Read more.
Stays at high altitude induce alterations in cardiovascular control and are a model of specific pathological cardiovascular derangements at sea level. However, high-altitude alterations of the complex cardiovascular dynamics remain an almost unexplored issue. Therefore, our aim is to describe the altered cardiovascular complexity at high altitude with a multiscale entropy (MSE) approach. We recorded the beat-by-beat series of systolic and diastolic blood pressure and heart rate in 20 participants for 15 min twice, at sea level and after arrival at 4554 m a.s.l. We estimated Sample Entropy and MSE at scales of up to 64 beats, deriving average MSE values over the scales corresponding to the high-frequency (MSEHF) and low-frequency (MSELF) bands of heart-rate variability. We found a significant loss of complexity at heart-rate and blood-pressure scales complementary to each other, with the decrease at high altitude concentrated in Sample Entropy and MSEHF for heart rate and in MSELF for blood pressure. These changes can be ascribed to the acutely increased chemoreflex sensitivity in hypoxia that causes sympathetic activation and hyperventilation. Considering high altitude as a model of pathological states like heart failure, our results suggest new ways for monitoring treatments and rehabilitation protocols. Full article
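Sample Entropy and the coarse-graining step underlying multiscale entropy can be sketched as below. This is a textbook-style illustration, not the authors' code; template-counting conventions vary slightly across implementations, so exact values may differ from published ones.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample Entropy: -ln of the conditional probability that template
    sequences matching for m points (within r = r_frac * std) also
    match for m + 1 points.  Lower values mean more regular dynamics."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(x.size - mm)])
        # Chebyshev distance between all template pairs (i < j).
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return np.sum(d[np.triu_indices(d.shape[0], k=1)] <= r)

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B)   # assumes A > 0 (true for typical data lengths)

def coarse_grain(x, tau):
    """Coarse-graining step of multiscale entropy: non-overlapping
    averages over windows of tau samples (one point per window)."""
    n = (len(x) // tau) * tau
    return np.asarray(x[:n], dtype=float).reshape(-1, tau).mean(axis=1)
```

An MSE profile is obtained by applying `sample_entropy` to `coarse_grain(x, tau)` for each scale τ, e.g. up to 64 beats as in this study.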
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) Profiles of Multiscale Entropy MSE calculated with embedding dimension <span class="html-italic">m</span> = 1: mean ± SD for 10 synthesized series of white noise and 10 synthesized series of pink noise, each of 1000 samples, simulating 15’ beat-by-beat recordings with mean RRI equal to 900 ms. Gray bands show the ranges of scales corresponding to the HF and LF spectral bands. (<b>b</b>) MSE calculated with <span class="html-italic">m</span> = 2 for the same data of panel (<b>a</b>).</p>
Full article ">Figure 2
<p>(<b>a</b>) Profiles of Multiscale Entropy MSE at sea level (SL, blue lines) and high altitude (HA, red lines) for RRI calculated with embedding dimension <span class="html-italic">m</span> = 1: mean ± sem on 20 participants (gray bands show the ranges of scales corresponding to the HF and LF spectral bands); (<b>b</b>) MSE calculated as in (<b>a</b>) for SBP; (<b>c</b>) MSE calculated as in (<b>a</b>) for DBP; (<b>d</b>) MSE for RRI calculated as in (<b>a</b>) but with <span class="html-italic">m</span> = 2; (<b>e</b>) MSE calculated as in (<b>d</b>) for SBP; (<b>f</b>) MSE calculated as in (<b>d</b>) for DBP; (<b>g</b>) Wilcoxon signed-rank statistics <span class="html-italic">V</span> for the comparison between SL and HA for MSE of RRI; the red horizontal lines are the 5% (continuous) or 1% (dashed) percentiles of the distribution for the null hypothesis: when <span class="html-italic">V</span> is above these thresholds the hypothesis of similar entropies can be rejected at the corresponding significance level; (<b>h</b>) <span class="html-italic">V</span> statistics for the comparison between SL and HA for SBP MSE; (<b>i</b>) <span class="html-italic">V</span> statistics for the comparison between SL and HA for DBP MSE.</p>
Full article ">Figure 3
<p>95% confidence intervals of the difference between high-altitude and sea-level conditions of entropy indices. <span class="html-italic">From top to bottom</span>: Sample Entropy (<span class="html-italic">SampEn</span>), multiscale entropy over the HF (<span class="html-italic">MSE<sub>HF</sub></span>) and over the LF (<span class="html-italic">MSE<sub>LF</sub></span>) bands; <span class="html-italic">m</span> is the embedding dimension.</p>
Full article ">Figure 4
<p>(<b>a</b>) Profiles of multiscale cross-entropy XMSE between SBP and PI at sea level (SL, blue lines) and high altitude (HA, red lines): mean ± sem for embedding dimension <span class="html-italic">m</span> = 1; (<b>b</b>) XMSE from the same data of panel (<b>a</b>) calculated for embedding dimension <span class="html-italic">m</span> = 2; (<b>c</b>) Wilcoxon signed-rank statistics <span class="html-italic">V</span> for the comparison between SL and HA; the red horizontal lines are the 5% (continuous) or 1% (dashed) percentiles of the distribution for the null hypothesis: when <span class="html-italic">V</span> is above these thresholds, the hypothesis of similar entropies can be rejected at the corresponding significance level. Gray bands show the ranges of scales corresponding to the HF and LF spectral bands.</p>
Full article ">Figure 5
<p>95% confidence intervals of the difference between high-altitude and sea-level conditions of cross-entropy indices. <span class="html-italic">From top to bottom</span>: Cross sample entropy (<span class="html-italic">XSampEn</span>), cross multiscale entropy over the HF (<span class="html-italic">XMSE<sub>HF</sub></span>) and the LF (<span class="html-italic">XMSE<sub>LF</sub></span>) bands; <span class="html-italic">m</span> is the embedding dimension.</p>
Full article ">
21 pages, 534 KiB  
Article
Information Theory for Non-Stationary Processes with Stationary Increments
by Carlos Granero-Belinchón, Stéphane G. Roux and Nicolas B. Garnier
Entropy 2019, 21(12), 1223; https://doi.org/10.3390/e21121223 - 15 Dec 2019
Cited by 14 | Viewed by 4300
Abstract
We describe how to analyze the wide class of non-stationary processes with stationary centered increments using Shannon information theory. To do so, we use a practical viewpoint and define ersatz quantities from time-averaged probability distributions. These ersatz versions of entropy, mutual information, and [...] Read more.
We describe how to analyze the wide class of non-stationary processes with stationary centered increments using Shannon information theory. To do so, we use a practical viewpoint and define ersatz quantities from time-averaged probability distributions. These ersatz versions of entropy, mutual information, and entropy rate can be estimated when only a single realization of the process is available. We abundantly illustrate our approach by analyzing Gaussian and non-Gaussian self-similar signals, as well as multi-fractal signals. Using Gaussian signals allows us to check that our approach is robust in the sense that all quantities behave as expected from analytical derivations. Using the stationarity (independence of the integration time) of the ersatz entropy rate, we show that this quantity is not only able to finely probe the self-similarity of the process, but also offers a new way to quantify the multi-fractality. Full article
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)
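For a self-similar Gaussian process with Hurst exponent H, the entropy of the increments grows linearly in ln τ with slope H, which is the scaling the ersatz entropy rate probes (compare the slope H = 0.7 lines in the figures below). The following sketch illustrates this under a Gaussian approximation; the function names are ours and this is not the authors' estimator.

```python
import numpy as np

def gaussian_entropy(std):
    """Differential Shannon entropy of a Gaussian with given std:
    H = 0.5 * ln(2 * pi * e * std^2)."""
    return 0.5 * np.log(2.0 * np.pi * np.e * std ** 2)

def increment_entropy_scaling(x, taus):
    """Entropy of the increments x(t + tau) - x(t) at each scale tau,
    under a Gaussian approximation.  For an H-self-similar process the
    increment std grows as tau**H, so this curve is linear in ln(tau)
    with slope H."""
    return [gaussian_entropy(np.std(x[t:] - x[:-t])) for t in taus]
```

Fitting the returned values against ln τ recovers H: roughly 0.5 for ordinary Brownian motion and 0.7 for the fBm analyzed in the paper.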
Show Figures

Figure 1

Figure 1
<p>Dependence of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> (for <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>) on <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>/</mo> <msup> <mi>T</mi> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> </msup> </mrow> </semantics></math> for the fractional Brownian motion (fBm) (<b>a</b>, in black) and for the Hermitian (<b>b</b>, in blue) and even-Hermitian (<b>c</b>, in red) log-normal processes.</p>
Full article ">Figure 2
<p>Standard deviations of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>H</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> (triangles), <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>I</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> (circles), and <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> (stars), for <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, as functions of <span class="html-italic">T</span>, for the fBm (<b>a</b>, black) the Hermitian (<b>b</b>, in blue), and even-Hermitian (<b>c</b>, in red) log-normal processes.</p>
Full article ">Figure 3
<p>(<b>a</b>) Entropy <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>H</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> and (<b>c</b>) auto-mutual information <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>I</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>,</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> of the fBm in function of the logarithm of the window size <math display="inline"><semantics> <mrow> <mo form="prefix">ln</mo> <mo>(</mo> <mi>T</mi> <mo>)</mo> </mrow> </semantics></math> for a fixed scale <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. (<b>b</b>) Entropy and (<b>d</b>) auto-mutual information in function of the logarithm of the scale of analysis <math display="inline"><semantics> <mrow> <mo form="prefix">ln</mo> <mo>(</mo> <mi>τ</mi> <mo>)</mo> </mrow> </semantics></math> for a fixed <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> </mrow> </semantics></math>. Each symbol corresponds to a different embedding dimension <span class="html-italic">m</span>. In (<b>a</b>,<b>c</b>) the black line has a slope <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>, while in (<b>d</b>) its slope is <math display="inline"><semantics> <mrow> <mo>−</mo> <mi mathvariant="script">H</mi> <mo>=</mo> <mo>−</mo> <mn>0.7</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Ersatz entropy rate <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> of a fBm with <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>. (<b>a</b>): as a function of the window size <span class="html-italic">T</span> for fixed <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. (<b>b</b>): as a function of the scale <math display="inline"><semantics> <mi>τ</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> </mrow> </semantics></math>. Each symbol corresponds to a different embedding dimension <span class="html-italic">m</span>. The horizontal black line in (<b>a</b>) indicates the theoretical value <math display="inline"><semantics> <msubsup> <mi>H</mi> <mn>1</mn> <mi>fBm</mi> </msubsup> </semantics></math>. The black line in (<b>b</b>) represents the linear function <math display="inline"><semantics> <mrow> <msubsup> <mi>H</mi> <mn>1</mn> <mi>fBm</mi> </msubsup> <mo>+</mo> <mi mathvariant="script">H</mi> <mo form="prefix">ln</mo> <mi>τ</mi> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Ersatz entropy rate <math display="inline"><semantics> <mrow> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> <mo>−</mo> <mo form="prefix">ln</mo> <mrow> <mo>(</mo> <msub> <mi>σ</mi> <mi>τ</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> of the fBm as a function of <math display="inline"><semantics> <mrow> <mo form="prefix">ln</mo> <mo>(</mo> <mi>τ</mi> <mo>)</mo> </mrow> </semantics></math> for fixed <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> </mrow> </semantics></math> and varying embedding dimension <span class="html-italic">m</span>. The thick horizontal black line represents the constant value <math display="inline"><semantics> <msubsup> <mi>H</mi> <mn>1</mn> <mi>fBm</mi> </msubsup> </semantics></math>.</p>
Full article ">Figure 6
<p><math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> for a motion built from a Hermitian (blue) or even-Hermitian (red) log-normal noise, as a function of (<b>a</b>) the time window size <span class="html-italic">T</span> or (<b>b</b>) the time scale <math display="inline"><semantics> <mi>τ</mi> </semantics></math>. Results for the fBm (from <a href="#entropy-21-01223-f004" class="html-fig">Figure 4</a> with <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>) are reported in black for comparison. <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>. The horizontal lines in (<b>a</b>) indicate the entropy <math display="inline"><semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics></math> of the noise (in black for the fBm and in red and blue for a log-normal process).</p>
Full article ">Figure 7
<p>Ersatz entropy rate <math display="inline"><semantics> <mrow> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> <mo>−</mo> <mo form="prefix">ln</mo> <mrow> <mo>(</mo> <msub> <mi>σ</mi> <mi>τ</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> for motions built on Hermitian (blue) or even-Hermitian (red) log-normal noise, together with results for the fBm (black) as a function of <math display="inline"><semantics> <mrow> <mo form="prefix">ln</mo> <mi>τ</mi> </mrow> </semantics></math>. <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>. The horizontal straight lines indicate the theoretical values of the entropy of the processes.</p>
Full article ">Figure 8
<p>Probability density function (PDF) of the increments of the (<b>a</b>) Hermitian and (<b>b</b>) even-Hermitian log-normal motions of size <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <msup> <mn>2</mn> <mi>j</mi> </msup> </mrow> </semantics></math>, from <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (bottom) up to <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> (top). Curves have been arbitrarily shifted on the Y-axis for clarity.</p>
Full article ">Figure 9
<p>PDF of the increments of the (<b>a</b>) fBm and (<b>b</b>) a multifractal random walk (MRW) of size <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <msup> <mn>2</mn> <mi>j</mi> </msup> </mrow> </semantics></math>, from <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (bottom) up to <math display="inline"><semantics> <mrow> <mi>j</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> (top). Curves have been arbitrarily shifted on the Y-axis for clarity.</p>
Full article ">Figure 10
<p>Ersatz entropy rate <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> </semantics></math> of a MRW with <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>. (<b>a</b>): as a function of the window size <span class="html-italic">T</span> for fixed <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. (<b>b</b>): as a function of the scale <math display="inline"><semantics> <mi>τ</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> </mrow> </semantics></math>. The horizontal line in (<b>a</b>) indicates the numerical value <math display="inline"><semantics> <msubsup> <mi>H</mi> <mn>1</mn> <mi>MRW</mi> </msubsup> </semantics></math> of the noise. The straight line in (<b>b</b>) has a slope <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> <mo>=</mo> <mn>0.7</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>Ersatz entropy rate <math display="inline"><semantics> <mrow> <msubsup> <mover accent="true"> <mi>h</mi> <mo>¯</mo> </mover> <mrow> <mi>T</mi> </mrow> <mrow> <mo>(</mo> <mi>m</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mi>τ</mi> <mo>)</mo> </mrow> </msubsup> <mo>−</mo> <mo form="prefix">ln</mo> <mrow> <mo>(</mo> <msub> <mi>σ</mi> <mi>τ</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics></math> of the MRW as a function of <math display="inline"><semantics> <mrow> <mo form="prefix">ln</mo> <mo>(</mo> <mi>τ</mi> <mo>)</mo> </mrow> </semantics></math> for fixed <math display="inline"><semantics> <mrow> <mi>T</mi> <mo>=</mo> <msup> <mn>2</mn> <mn>16</mn> </msup> </mrow> </semantics></math>.</p>
Full article ">
24 pages, 3803 KiB  
Article
Deep Learning and Artificial Intelligence for the Determination of the Cervical Vertebra Maturation Degree from Lateral Radiography
by Masrour Makaremi, Camille Lacaule and Ali Mohammad-Djafari
Entropy 2019, 21(12), 1222; https://doi.org/10.3390/e21121222 - 14 Dec 2019
Cited by 55 | Viewed by 8354
Abstract
Deep Learning (DL) and Artificial Intelligence (AI) tools have shown great success in different areas of medical diagnostics. In this paper, we show another success, in orthodontics, where the right treatment timing of many actions and operations is crucial because many environmental [...] Read more.
Deep Learning (DL) and Artificial Intelligence (AI) tools have shown great success in different areas of medical diagnostics. In this paper, we show another success, in orthodontics, where the right treatment timing of many actions and operations is crucial because many environmental and genetic conditions may modify jaw growth. The stage of growth is related to the Cervical Vertebra Maturation (CVM) degree; determining the CVM is therefore important for choosing the suitable timing of the treatment. In orthodontics, lateral X-ray radiography is used to determine it. Many classical methods require expertise and time to identify the relevant features. Nowadays, ML and AI tools are used for many medical and biological diagnostic imaging tasks. This paper reports on the development of a Deep Learning (DL) Convolutional Neural Network (CNN) method to determine (directly from images) the degree of maturation of the CVM, classified into six degrees. The results show the performance of the proposed method in different contexts, with different numbers of images for training, evaluation and testing, and different pre-processing of these images. The proposed model and method are validated by cross-validation. The implemented software is almost ready for use by orthodontists. Full article
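The paper compares mean, median, and local-entropy filters as pre-processing of the radiographs (see Figures 6 and 7 below). As a rough illustration of what a local entropy filter computes, here is a minimal sketch; the window radius, the number of grey levels, and the base-2 logarithm are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def local_entropy(img, radius=2, levels=16):
    """Shannon entropy (bits) of the grey-level histogram in a sliding
    window around each pixel. Radius and levels are illustrative choices."""
    # Quantize the image to a small number of grey levels
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    pad = np.pad(q, radius, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].ravel()
            p = np.bincount(win, minlength=levels) / win.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()  # local histogram entropy
    return out
```

A flat region yields entropy 0, while textured regions such as vertebral edges yield high entropy, which is why such a filter can emphasize the anatomy relevant to CVM staging.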
Show Figures

Figure 1
<p>(<b>Left</b>) CVM radiological and morphological stages superposed on the Björk growth curve [<a href="#B16-entropy-21-01222" class="html-bibr">16</a>]; (<b>Right</b>) Cephalometric landmarks for CVM stage determination [<a href="#B1-entropy-21-01222" class="html-bibr">1</a>].</p>
Full article ">Figure 2
<p>Isolated part (vertebra) of a standard radiograph.</p>
Full article ">Figure 3
<p>Originals and different preprocessing before training: (<b>a</b>) Originals (2012 × 2020), (<b>b</b>) test0: cropped images (488 × 488), (<b>c</b>) test1: cropped and Sobel edge-detection filter (488 × 488), (<b>d</b>) test2: cropped and resized (244 × 244), (<b>e</b>) test3: cropped, resized and Sobel edge-detection filter (244 × 244), (<b>f</b>) test4: cropped and resized (64 × 64).</p>
Full article ">Figure 4
<p>The structure of the proposed Deep Learning network for testing with different preprocessed images.</p>
Full article ">Figure 5
<p>Evolution of the Loss function and the accuracy as a function of the epoch numbers.</p>
Full article ">Figure 6
<p>Comparison between three local filters: mean, median and entropy.</p>
Full article ">Figure 7
<p>Comparison between three local filters: mean, median and entropy. Columns, from left to right: original, mean, median, entropy. Rows, from top to bottom: the CVS1 to CVS6 cases.</p>
Full article ">Figure 8
<p>Evolution of the loss function and the accuracy as a function of the epoch numbers for the case of 360 images.</p>
Full article ">Figure 9
<p>The prediction probabilities for each of the six classes are shown as an image. In each class, there are 50 images. The classes CVS1, …, CVS6 are shown from top to bottom.</p>
Full article ">Figure 10
<p>Evolution of the loss function and the accuracy as a function of the epoch numbers for the case of 600 images.</p>
Full article ">Figure 11
<p>Evolution of the loss function and the accuracy as a function of the epoch numbers for the case of 900 images.</p>
Full article ">Figure 12
<p>Evolution of the loss function and the accuracy as a function of the epoch numbers for the case of 1870 images.</p>
Full article ">Figure 13
<p>Evolution of the accuracy as a function of the epoch numbers for two different optimization algorithms SGD (<b>left</b>) and ADAM (<b>right</b>).</p>
Full article ">Figure 14
<p>Evolution of loss and accuracy during the training as a function of the epoch numbers for two cases: without any filtering (<b>left</b>) and with entropic filtering (<b>right</b>). From top to bottom: the cases with 300, 600 and 900 training images.</p>
Full article ">Figure 15
<p>Evolution of loss and accuracy during the training (<b>upper row</b>) and during the validation (<b>lower row</b>) as a function of the epoch numbers for two cases: 6 layers and 7 layers networks. (The case with 300 images).</p>
Full article ">Figure 16
<p>Evolution of loss and accuracy during the training (<b>upper row</b>) and during the validation (<b>lower row</b>) as a function of the epoch numbers for two cases: 6 layers and 7 layers networks. (The case with 900 images).</p>
Full article ">
14 pages, 2626 KiB  
Article
Identification of Functional Bioprocess Model for Recombinant E. Coli Cultivation Process
by Renaldas Urniezius and Arnas Survyla
Entropy 2019, 21(12), 1221; https://doi.org/10.3390/e21121221 - 14 Dec 2019
Cited by 9 | Viewed by 3675
Abstract
The purpose of this study is to introduce an improved Luedeking–Piret model that represents a structurally simple biomass concentration approach. The developed routine provides acceptable accuracy when fitting experimental data that incorporate the target protein concentration of Escherichia coli culture BL21 (DE3) pET28a [...] Read more.
The purpose of this study is to introduce an improved Luedeking–Piret model that represents a structurally simple biomass concentration approach. The developed routine provides acceptable accuracy when fitting experimental data that incorporate the target protein concentration of Escherichia coli culture BL21 (DE3) pET28a in fed-batch processes. This paper presents system identification, biomass, and product parameter fitting routines, tracing them from their origins to the entropy-related development, and characterized by robustness and simplicity. A single tuning coefficient allows for the selection of an optimization criterion that serves equally well for higher and lower biomass concentrations. The idea of the paper is to demonstrate that the use of fundamental knowledge can make the general model more suitable for technological use than a sophisticated artificial neural network. Experimental validation of the proposed model involved data analysis of six cultivation experiments compared to 19 experiments used for model fitting and parameter estimation. Full article
(This article belongs to the Special Issue Entropy-Based Algorithms for Signal Processing)
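For readers unfamiliar with the Luedeking–Piret structure mentioned in the abstract, the sketch below couples logistic biomass growth with growth-associated and non-growth-associated product formation. The logistic growth law and every parameter value are illustrative assumptions, not the paper's identified model.

```python
def luedeking_piret(mu=0.4, x_max=30.0, alpha=0.1, beta=0.01,
                    x0=0.2, t_end=24.0, dt=0.01):
    """Forward-Euler integration of logistic biomass growth X(t) with
    Luedeking-Piret product formation dP/dt = alpha*dX/dt + beta*X.
    All parameter values are illustrative, not fitted to any data."""
    x, p = x0, 0.0
    for _ in range(int(t_end / dt)):
        dx = mu * x * (1.0 - x / x_max)   # logistic growth rate
        p += (alpha * dx + beta * x) * dt  # growth- and non-growth-associated terms
        x += dx * dt
    return x, p
```

The alpha term ties product formation to growth, while the beta term accounts for maintenance-associated production, which is the same two-part split the classical Luedeking–Piret model formalizes.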
Show Figures

Figure 1
<p>Dependence of oxygen consumption for maintenance on biomass concentration of <span class="html-italic">E. coli</span> estimated as a function of biomass and observed at discrete time <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">t</mi> <mi mathvariant="normal">m</mi> </msub> </mrow> </semantics></math>, taken from Reference [<a href="#B3-entropy-21-01221" class="html-bibr">3</a>].</p>
Full article ">Figure 2
<p>Workflow of the structural scheme for the convex optimization method identifying stoichiometric and product-model fitting parameters.</p>
Full article ">Figure 3
<p>Biomass model fitting results with cultivation process data, where time is the cultivation time since inoculation in the bioreactor.</p>
Full article ">Figure 4
<p>Biomass validation results with cultivation process data, where time is the cultivation time since inoculation in the bioreactor.</p>
Full article ">Figure 5
<p>Protein model fitting results compared with cultivation experiment data, where time is the cultivation time since inoculation in the bioreactor.</p>
Full article ">Figure 6
<p>Protein validation results compared with cultivation experiment data, where time is the cultivation time since inoculation in the bioreactor.</p>
Full article ">
10 pages, 960 KiB  
Article
Permutation Entropy and Statistical Complexity Analysis of Brazilian Agricultural Commodities
by Fernando Henrique Antunes de Araujo, Lucian Bejan, Osvaldo A. Rosso and Tatijana Stosic
Entropy 2019, 21(12), 1220; https://doi.org/10.3390/e21121220 - 14 Dec 2019
Cited by 31 | Viewed by 4661
Abstract
Agricultural commodities are considered perhaps the most important commodities, as any abrupt increase in food prices has serious consequences on food security and welfare, especially in developing countries. In this work, we analyze predictability of Brazilian agricultural commodity prices during the period after [...] Read more.
Agricultural commodities are considered perhaps the most important commodities, as any abrupt increase in food prices has serious consequences on food security and welfare, especially in developing countries. In this work, we analyze predictability of Brazilian agricultural commodity prices during the period after the 2007/2008 food crisis. We use the information-theory-based complexity/entropy causality plane (CECP) method, which was shown to be successful in the analysis of market efficiency and predictability. By estimating the information quantifiers permutation entropy and statistical complexity, we associate to each commodity a position in the CECP and compare their efficiency (lack of predictability) using the deviation from a random process. The coffee market shows the highest efficiency (lowest predictability), while the pork market shows the lowest efficiency (highest predictability). By analyzing the temporal evolution of commodities in the complexity–entropy causality plane, we observe that during the analyzed period (after the 2007/2008 crisis) the efficiency of the cotton, rice, and cattle markets increases, and the soybean market shows a decrease in efficiency until 2012, followed by lower predictability and an increase in efficiency, while most commodities (8 out of 12) exhibit relatively stable efficiency, indicating increased market integration in the post-crisis period. Full article
(This article belongs to the Special Issue Information Theoretic Measures and Their Applications)
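The first of the two information quantifiers named in the abstract can be sketched compactly. Below is a minimal Bandt–Pompe permutation-entropy implementation, normalized to [0, 1]; the embedding dimension and the short example series are illustrative choices, not the paper's settings.

```python
from collections import Counter
from math import factorial, log

def permutation_entropy(series, d=4):
    """Normalized Bandt-Pompe permutation entropy for embedding dimension d:
    0 for a fully ordered series, 1 for equiprobable ordinal patterns."""
    # Count ordinal patterns: the ranking of each window of d consecutive values
    counts = Counter(
        tuple(sorted(range(d), key=lambda k: series[i + k]))
        for i in range(len(series) - d + 1)
    )
    n = sum(counts.values())
    h = -sum((c / n) * log(c / n) for c in counts.values())
    return h / log(factorial(d))  # normalize by the maximum entropy ln(d!)
```

A strictly monotone price series produces a single ordinal pattern and hence entropy 0 (maximal predictability), while a series visiting all patterns equally often approaches 1, the fully random corner of the CECP.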
Show Figures

Figure 1
<p>Time series of prices of agricultural commodities recorded daily from January 4, 2010, to July 3, 2018.</p>
Full article ">Figure 2
<p>Position in the entropy–complexity plane of (<b>A</b>) the original and (<b>B</b>) randomized commodities series for embedding dimension d = 4.</p>
Full article ">Figure 3
<p>Position in the entropy–complexity plane of (<b>A</b>) the original and (<b>B</b>) randomized commodities series for embedding dimension d = 5.</p>
Full article ">Figure 4
<p>Time evolution of the inefficiency measure (distance from the complexity–entropy causality plane (CECP) point <math display="inline"><semantics> <mrow> <msub> <mi>H</mi> <mi>s</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>C</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>) for the commodity series, for embedding dimension d = 5. The points on the graph correspond to the beginning of the corresponding 1000-data-point windows.</p>
Full article ">
40 pages, 3058 KiB  
Article
Finite Amplitude Stability of Internal Steady Flows of the Giesekus Viscoelastic Rate-Type Fluid
by Mark Dostalík, Vít Průša and Karel Tůma
Entropy 2019, 21(12), 1219; https://doi.org/10.3390/e21121219 - 13 Dec 2019
Cited by 9 | Viewed by 3390
Abstract
Using a Lyapunov type functional constructed on the basis of thermodynamical arguments, we investigate the finite amplitude stability of internal steady flows of viscoelastic fluids described by the Giesekus model. Using the functional, we derive bounds on the Reynolds and the Weissenberg number [...] Read more.
Using a Lyapunov type functional constructed on the basis of thermodynamical arguments, we investigate the finite amplitude stability of internal steady flows of viscoelastic fluids described by the Giesekus model. Using the functional, we derive bounds on the Reynolds and the Weissenberg number that guarantee the unconditional asymptotic stability of the corresponding steady internal flow, wherein the distance between the steady flow field and the perturbed flow field is measured with the help of the Bures–Wasserstein distance between positive definite matrices. The application of the theoretical results is documented in the finite amplitude stability analysis of Taylor–Couette flow. Full article
(This article belongs to the Special Issue Entropies: Between Information Geometry and Kinetics)
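The abstract measures the distance between the steady and perturbed flow fields with the Bures–Wasserstein distance between positive definite matrices. A minimal numerical sketch of that distance is below, using the standard formula d(A, B)² = tr A + tr B − 2 tr((A^{1/2} B A^{1/2})^{1/2}); the example matrices are arbitrary, not taken from the paper.

```python
import numpy as np

def sqrtm_spd(A):
    """Principal square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def bures_wasserstein(A, B):
    """Bures-Wasserstein distance between SPD matrices A and B."""
    sA = sqrtm_spd(A)
    cross = sqrtm_spd(sA @ B @ sA)            # (A^{1/2} B A^{1/2})^{1/2}
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    return float(np.sqrt(max(d2, 0.0)))       # clip tiny negative round-off
```

For commuting (e.g. diagonal) matrices this reduces to the Euclidean distance between the matrix square roots, which makes small test cases easy to check by hand.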
Show Figures

Figure 1
<p>Cylindrical Taylor–Couette flow.</p>
Full article ">Figure 2
<p>Taylor–Couette flow, spatially inhomogeneous non-equilibrium steady state for various values of Weissenberg number <math display="inline"><semantics> <mi>Wi</mi> </semantics></math>, Giesekus parameter <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> </mrow> </semantics></math>, Reynolds number <math display="inline"><semantics> <mrow> <mi>Re</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, dimensionless shear modulus <math display="inline"><semantics> <mrow> <mo>Ξ</mo> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and problem parameters <math display="inline"><semantics> <mrow> <mi>η</mi> <mo>=</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ζ</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Taylor–Couette flow, numerical values of constants <math display="inline"><semantics> <msub> <mi>C</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>C</mi> <mn>2</mn> </msub> </semantics></math> for various values of the Reynolds number <math display="inline"><semantics> <mi>Re</mi> </semantics></math>, Weissenberg number <math display="inline"><semantics> <mi>Wi</mi> </semantics></math> and the dimensionless shear modulus <math display="inline"><semantics> <mo>Ξ</mo> </semantics></math>. <span class="html-italic">Unconditional asymptotic stability is granted provided that</span> <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>1</mn> </msub> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math> <span class="html-italic">and</span> <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mn>2</mn> </msub> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math>, numerical values of constants <math display="inline"><semantics> <msub> <mi>C</mi> <mn>1</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>C</mi> <mn>2</mn> </msub> </semantics></math> are evaluated using Equation (41). Giesekus parameter <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> </mrow> </semantics></math> and problem parameters <math display="inline"><semantics> <mrow> <mi>η</mi> <mo>=</mo> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ζ</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Scenario A, snapshots of <math display="inline"><semantics> <mrow> <mo>|</mo> <mover accent="true"> <msub> <mi mathvariant="double-struck">B</mi> <msub> <mi>κ</mi> <mrow> <mi>p</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </msub> </msub> <mo>˜</mo> </mover> <mo>|</mo> </mrow> </semantics></math> at different time instants.</p>
Full article ">Figure 5
<p>Scenario A, snapshots of <math display="inline"><semantics> <mrow> <mo>|</mo> <mover accent="true"> <mi mathvariant="bold-italic">v</mi> <mo>˜</mo> </mover> <mo>|</mo> </mrow> </semantics></math> at different time instants.</p>
Full article ">Figure 6
<p>Scenario A, time evolution of the net quantities.</p>
Full article ">Figure 7
<p>Scenario B, snapshots of <math display="inline"><semantics> <mrow> <mo>|</mo> <mover accent="true"> <msub> <mi mathvariant="double-struck">B</mi> <msub> <mi>κ</mi> <mrow> <mi>p</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </msub> </msub> <mo>˜</mo> </mover> <mo>|</mo> </mrow> </semantics></math> at different time instants.</p>
Full article ">Figure 8
<p>Scenario B, snapshots of <math display="inline"><semantics> <mrow> <mo>|</mo> <mover accent="true"> <mi mathvariant="bold-italic">v</mi> <mo>˜</mo> </mover> <mo>|</mo> </mrow> </semantics></math> at different time instants.</p>
Full article ">Figure 9
<p>Scenario B, time evolution of the net quantities.</p>
Full article ">
31 pages, 691 KiB  
Article
Ordering of Trotterization: Impact on Errors in Quantum Simulation of Electronic Structure
by Andrew Tranter, Peter J. Love, Florian Mintert, Nathan Wiebe and Peter V. Coveney
Entropy 2019, 21(12), 1218; https://doi.org/10.3390/e21121218 - 13 Dec 2019
Cited by 40 | Viewed by 7366
Abstract
Trotter–Suzuki decompositions are frequently used in the quantum simulation of quantum chemistry. They transform the evolution operator into a form implementable on a quantum device, while incurring an error—the Trotter error. The Trotter error can be made arbitrarily small by increasing the Trotter [...] Read more.
Trotter–Suzuki decompositions are frequently used in the quantum simulation of quantum chemistry. They transform the evolution operator into a form implementable on a quantum device, while incurring an error—the Trotter error. The Trotter error can be made arbitrarily small by increasing the Trotter number. However, this increases the length of the quantum circuits required, which may be impractical. It is therefore desirable to find methods of reducing the Trotter error through alternate means. The Trotter error is dependent on the order in which individual term unitaries are applied. Due to the factorial growth in the number of possible orderings with respect to the number of terms, finding an optimal strategy for ordering Trotter sequences is difficult. In this paper, we propose three ordering strategies, and assess their impact on the Trotter error incurred. Initially, we exhaustively examine the possible orderings for molecular hydrogen in a STO-3G basis. We demonstrate how the optimal ordering scheme depends on the compatibility graph of the Hamiltonian, and show how it varies with increasing bond length. We then use 44 molecular Hamiltonians to evaluate two strategies based on coloring their incompatibility graphs, while considering the properties of the obtained colorings. We find that the Trotter error for most systems involving heavy atoms, using a reference magnitude ordering, is less than 1 kcal/mol. Relative to this, the difference between ordering schemes can be substantial, being approximately on the order of millihartrees. The coloring-based ordering schemes are reasonably promising—particularly for systems involving heavy atoms—however further work is required to increase dependence on the magnitude of terms. Finally, we consider ordering strategies based on the norm of the Trotter error operator, including an iterative method for generating the new error operator terms added upon insertion of a term into an ordered Hamiltonian. Full article
(This article belongs to the Special Issue Quantum Information: Fragility and the Challenges of Fault Tolerance)
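The ordering dependence of the Trotter error can be reproduced at toy scale. The sketch below scores every ordering of three Hermitian terms by the spectral-norm distance between the exact evolution and a first-order Trotter product; the 2 × 2 Pauli terms and their coefficients are arbitrary stand-ins for a molecular Hamiltonian, not taken from the paper.

```python
import numpy as np
from itertools import permutations

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H, t=1.0):
    """exp(-i t H) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * w)) @ V.conj().T

def trotter_error(terms, order, t=1.0, steps=1):
    """Spectral-norm distance between the exact evolution and an ordered
    first-order Trotter product with the given number of Trotter steps."""
    exact = expm_herm(sum(terms), t)
    step = np.eye(terms[0].shape[0], dtype=complex)
    for i in order:
        step = step @ expm_herm(terms[i], t / steps)
    return np.linalg.norm(exact - np.linalg.matrix_power(step, steps), 2)

terms = [1.0 * X, 0.5 * Z, 0.3 * Y]  # arbitrary non-commuting toy terms
errors = {p: trotter_error(terms, p) for p in permutations(range(3))}
```

Increasing `steps` (the Trotter number) shrinks the error for any fixed ordering, which is exactly the trade-off against circuit length that motivates searching over orderings instead.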
Show Figures

Figure 1
<p>Cumulative density plot of hydrogen ordering errors for one Trotter step, using a bond length of <math display="inline"><semantics> <mrow> <mn>0.7414</mn> </mrow> </semantics></math> Å and an STO-3G atomic basis. The vertical line denotes an error of 1 kcal/mol. Approximately <math display="inline"><semantics> <mrow> <mn>20</mn> <mo>%</mo> </mrow> </semantics></math> of the orderings achieve this error or lower for the first-order Trotter–Suzuki approximation. Around <math display="inline"><semantics> <mrow> <mn>80</mn> <mo>%</mo> </mrow> </semantics></math> of orderings achieve an error of 0.005 Hartree, approximately half that of the worst possible ordering.</p>
Full article ">Figure 2
<p>The incompatibility graph of the Jordan–Wigner and Bravyi–Kitaev Hamiltonians, with the totally commuting set removed. Nodes correspond to Hamiltonian terms, edges correspond to non-commutativity between terms. Two independent sets are clearly revealed, with the XY-set colored blue and the Z-set colored red.</p>
Full article ">Figure 3
<p>Distribution of Trotter errors by ordering for varying bond lengths for H<sub>2</sub> in a minimal basis. (<b>a</b>): versus absolute Trotter error. As the bond length decreases, both the Trotter error and the dependence of the Trotter error on the ordering chosen increase. (<b>b</b>): versus the Trotter error as a percentage of the ground state energy. The same trend as with the absolute Trotter error is observed, although the ordering dependence at extremely low bond length is accentuated.</p>
Full article ">Figure 4
<p>Trotter errors for the dataset of molecular Hamiltonians using a magnitude ordering. The vertical bar indicates chemical accuracy. Most of the systems achieve chemical accuracy with one Trotter step. (<b>a</b>): versus the number of spin-orbitals. Most of the high-error results are for low numbers of spin-orbitals. (<b>b</b>): versus the number of terms in the Hamiltonian. Again, most of the high-error results are for low numbers of terms. (<b>c</b>): versus the maximum nuclear charge. All of the high-error results are for systems with exclusively light atoms, and the overall trend roughly follows the predictions of prior literature.</p>
Full article ">Figure 5
<p>Statistics of the fully commuting sets of terms found in the coloring of the Hamiltonians in the dataset. (<b>a</b>): number of fully commuting sets versus the number of terms in the Hamiltonian. (<b>b</b>): number of independent sets divided by the number of terms, versus the number of spin-orbitals. A roughly linear trend is observed, indicating a <math display="inline"><semantics> <mrow> <mo>Θ</mo> <mfenced separators="" open="(" close=")"> <msup> <mi>N</mi> <mn>3</mn> </msup> </mfenced> </mrow> </semantics></math> scaling. (<b>c</b>): average number of terms in each fully commuting subset for a given Hamiltonian. (<b>d</b>): standard deviation of number of terms in each fully commuting subset for a given Hamiltonian. The increasing variance in group sizes could be problematic for ordering purposes.</p>
Full article ">Figure 6
<p>Trotter error of the depleteGroups and equaliseGroups orderings relative to a magnitude ordering. Upper plots are linear within <math display="inline"><semantics> <mrow> <mo>±</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>6</mn> </mrow> </msup> </mrow> </semantics></math>. (<b>a</b>): by number of qubits. (<b>b</b>): by maximum nuclear charge. Again, the magnitude ordering is preferable in most cases; however, for systems with period-three atoms, the depleteGroups and equaliseGroups orderings are best. (<b>c</b>): frequency of "ordering orders", being the sequence of ordering performance. The distribution here is relatively flat.</p>
Full article ">Figure 7
<p>The Trotter error operator norm versus true Trotter error, for various orderings. (<b>a</b>): hydrogen molecule in a minimal basis. Two hundred and fifty bins are used. Loose correlation is observed, although there is ambiguity for a true Trotter error of less than 0.001 a.u. (<b>b</b>): helium hydride in a minimal basis, using 100,000 random orderings. One thousand bins are used. Little obvious correlation is observed.</p>
Full article ">Figure 8
<p>The procedure for performing the error operator norm minimization ordering.</p>
Full article ">Figure 9
<p>Trotter error of the errorOperator ordering relative to a magnitude ordering. Upper plots are linear between <math display="inline"><semantics> <mrow> <mo>±</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>6</mn> </mrow> </msup> </mrow> </semantics></math>. (<b>a</b>): by number of qubits. (<b>b</b>): by maximum nuclear charge. As with the previous orderings, the variance between Trotter ordering schemes is low for systems involving heavy atoms. In these cases, the errorOperator ordering performs roughly commensurately with the magnitude ordering. (<b>c</b>): frequency of “ordering orders”, being the sequence of ordering performance. The distribution here is relatively flat.</p>
Figure A1
<p>Flowcharts representing the commutator (<b>a</b>) and reverseCommutator (<b>b</b>) ordering schemes.</p>
Figure A2
<p>Trotter error of the commutator and reverseCommutator orderings relative to a magnitude ordering. Upper plots are linear within <math display="inline"><semantics> <mrow> <mo>±</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>6</mn> </mrow> </msup> </mrow> </semantics></math>. (<b>a</b>): by number of qubits. (<b>b</b>): by maximum nuclear charge. The reverseCommutator ordering is best for almost all systems including heavy atoms; however, in almost all cases, a simple magnitude ordering outperforms all of the orderings considered. (<b>c</b>): frequency of “ordering orders”, being the sequence of ordering performance, ignoring the magnitude ordering. (<b>d</b>): as (<b>c</b>), but including the magnitude ordering.</p>
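The captions above compare several Trotter term orderings against a simple magnitude ordering. As a minimal illustration of what a magnitude ordering means, the sketch below sorts a toy set of Hamiltonian terms by decreasing coefficient magnitude; the Pauli-string dictionary is a hypothetical representation for illustration, not the paper's actual data structure.

```python
# Hypothetical sketch of a "magnitude ordering" for Trotterization:
# terms of the Hamiltonian are applied in order of decreasing
# coefficient magnitude. The Pauli-string -> coefficient mapping
# below is illustrative only.

def magnitude_ordering(terms):
    """Sort Hamiltonian terms by decreasing absolute coefficient."""
    return sorted(terms.items(), key=lambda kv: -abs(kv[1]))

# Toy two-qubit Hamiltonian terms (made-up coefficients).
toy_terms = {"ZZ": 0.17, "XX": -0.04, "ZI": 0.12, "II": -0.80}
ordered = magnitude_ordering(toy_terms)  # "II" first, "XX" last
```

The other orderings discussed (depleteGroups, equaliseGroups, commutator-based) differ only in the sort key and grouping, so the same skeleton applies.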
4 pages, 177 KiB  
Editorial
The Fractional View of Complexity
by António M. Lopes and J.A. Tenreiro Machado
Entropy 2019, 21(12), 1217; https://doi.org/10.3390/e21121217 - 13 Dec 2019
Cited by 2 | Viewed by 2375
Abstract
Fractal analysis and fractional differential equations have proven to be useful tools for describing the dynamics of complex phenomena characterized by long memory and spatial heterogeneity [...] Full article
(This article belongs to the Special Issue The Fractional View of Complexity)
16 pages, 3020 KiB  
Article
Predicting Student Performance and Deficiency in Mastering Knowledge Points in MOOCs Using Multi-Task Learning
by Shaojie Qu, Kan Li, Bo Wu, Xuri Zhang and Kaihao Zhu
Entropy 2019, 21(12), 1216; https://doi.org/10.3390/e21121216 - 12 Dec 2019
Cited by 18 | Viewed by 4294
Abstract
Massive open online courses (MOOCs), which have been deemed a revolutionary teaching mode, are increasingly being used in higher education. However, there remain deficiencies in understanding the relationship between online behavior of students and their performance, and in verifying how well a student [...] Read more.
Massive open online courses (MOOCs), which have been deemed a revolutionary teaching mode, are increasingly being used in higher education. However, there remain deficiencies in understanding the relationship between students’ online behavior and their performance, and in verifying how well a student comprehends learning material. Therefore, we propose a method for predicting student performance and mastery of knowledge points in MOOCs based on assignment-related online behavior; this allows those providing academic support to intervene and improve the learning outcomes of students facing difficulties. The proposed method was developed using data from 1528 participants in a C Programming course, from which we extracted assignment-related features. We first applied a multi-task, multi-layer long short-term memory (LSTM)-based student performance prediction method with cross-entropy as the loss function to predict students’ overall performance and their mastery of each knowledge point. Our method incorporates an attention mechanism, which can better reflect students’ learning behavior and performance. Our method achieves an accuracy of 92.52% and a recall rate of 94.68% in predicting students’ performance. Students’ actions, such as submission times and plagiarism, were related to their performance in the MOOC, and the results demonstrate that our method predicts both overall performance and the knowledge points that students have not mastered well. Full article
(This article belongs to the Special Issue Theory and Applications of Information Theoretic Machine Learning)
Figure 1
<p>Proposed framework. (<b>a</b>) Shared parameters layer, (<b>b</b>) multi-task part with multi-layer LSTM. (<b>c</b>) multi-layer perceptron (MLP) using comprehensive features, and (<b>d</b>) attention mechanism.</p>
Figure 2
<p>Shared parameters layer. The inputs <span class="html-italic">X</span> and <span class="html-italic">Y</span> share the same network parameters through the shared parameters layer. In this figure, we take the input <span class="html-italic">X</span> as an example.</p>
Figure 3
<p>Relationship between completion duration and grade.</p>
Figure 4
<p>Relationship between average knowledge score and average submission order.</p>
Figure 5
<p>Variations of loss and accuracy of the proposed method with iterations.</p>
Figure A1
<p>Logs from MOOC platform.</p>
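The abstract above describes a multi-task model trained with cross-entropy, jointly predicting overall performance and per-knowledge-point mastery. A minimal sketch of such a joint loss (plain Python, not the authors' LSTM implementation) simply sums the per-task cross-entropies:

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true class under a predicted distribution."""
    return -math.log(probs[label])

def multi_task_loss(task_probs, task_labels):
    """Joint loss for multi-task training: sum of per-task cross-entropies.
    task_probs[i] is the predicted distribution for task i (e.g., pass/fail,
    or mastery of one knowledge point); task_labels[i] is its true class."""
    return sum(cross_entropy(p, y) for p, y in zip(task_probs, task_labels))

# One student: overall performance plus two knowledge points (toy numbers).
loss = multi_task_loss(
    [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]],  # predicted distributions per task
    [0, 0, 1],                              # true labels per task
)
```

In the paper's setting each task head sits on top of shared LSTM layers; the shared parameters receive gradients from every task's cross-entropy term at once.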
17 pages, 9175 KiB  
Article
A Novel Improved Feature Extraction Technique for Ship-Radiated Noise Based on IITD and MDE
by Zhaoxi Li, Yaan Li, Kai Zhang and Jianli Guo
Entropy 2019, 21(12), 1215; https://doi.org/10.3390/e21121215 - 12 Dec 2019
Cited by 25 | Viewed by 3100
Abstract
Ship-radiated noise signals carry abundant nonlinear, non-Gaussian, and nonstationary information, which can reflect important indicators of ship performance. This paper proposes a novel feature extraction technique for ship-radiated noise based on improved intrinsic time-scale decomposition (IITD) and multiscale dispersion [...] Read more.
Ship-radiated noise signals carry abundant nonlinear, non-Gaussian, and nonstationary information, which can reflect important indicators of ship performance. This paper proposes a novel feature extraction technique for ship-radiated noise based on improved intrinsic time-scale decomposition (IITD) and multiscale dispersion entropy (MDE). The proposed feature extraction technique is named IITD-MDE. First, IITD is applied to decompose the ship-radiated noise signal into a series of intrinsic scale components (ISCs). Then, the ISC carrying the main information is selected through correlation analysis, and its MDE values are calculated as feature vectors. Finally, the feature vectors are input into a support vector machine (SVM) for ship classification. The experimental results indicate that the proposed technique reaches a recognition rate of 86%. Therefore, compared with other feature extraction methods, the proposed method provides an effective new solution for classifying different types of ships. Full article
Figure 1
<p>The comparison of the interpolation methods: (<b>a</b>) linear interpolation, (<b>b</b>) cubic spline interpolation, and (<b>c</b>) Akima interpolation.</p>
Figure 2
<p>Intrinsic scale component (ISC) satisfies the conditions.</p>
Figure 3
<p>The coarse-grained process of MDE.</p>
Figure 4
<p>The time-frequency domain waveforms of <math display="inline"><semantics> <mrow> <mi>x</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 5
<p>The results of decomposing.</p>
Figure 6
<p>The time waveform for two simulated signals: (<b>a</b>) Gaussian white noise, (<b>b</b>) <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mi>f</mi> </mrow> </semantics></math> noise.</p>
Figure 7
<p>The multi-entropy value of Gaussian white noise and <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>/</mo> <mi>f</mi> </mrow> </semantics></math> noise: (<b>a</b>) MSE, (<b>b</b>) MPE and (<b>c</b>) MDE.</p>
Figure 8
<p>The flowchart of feature extraction of ship-radiated noise based on IITD-MDE.</p>
Figure 9
<p>Five types of ship signals.</p>
Figure 10
<p>Spectrum analysis.</p>
Figure 11
<p>Time domain of decomposed results by IITD.</p>
Figure 12
<p>Spectrum of decomposed results by IITD.</p>
Figure 13
<p>Correlation coefficients of ISCs.</p>
Figure 14
<p>The distribution of the four methods.</p>
Figure 15
<p>Error bar graph of the methods (<b>a</b>) IITD-MDE and (<b>b</b>) ITD-MDE.</p>
Figure 16
<p>Classification results.</p>
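The IITD-MDE pipeline above ends by taking multiscale dispersion entropy of the selected component. A minimal sketch of dispersion entropy with the usual normal-CDF class mapping, plus the coarse-graining step that makes it multiscale, is given below; the parameter choices (3 classes, embedding dimension 2) are illustrative, not necessarily the paper's settings.

```python
import math
from collections import Counter

def coarse_grain(signal, scale):
    """Non-overlapping window averages: the multiscale step of MDE."""
    return [sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)]

def dispersion_entropy(signal, classes=3, emb_dim=2):
    """Dispersion entropy: map samples to integer classes via the normal CDF,
    then take the Shannon entropy of the embedded class patterns."""
    n = len(signal)
    mu = sum(signal) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in signal) / n)
    if sigma == 0.0:  # a constant signal carries no dispersion information
        return 0.0
    # The normal CDF squashes each sample into (0, 1); scale to classes 1..c.
    ncdf = [0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
            for x in signal]
    z = [min(classes, max(1, round(classes * y + 0.5))) for y in ncdf]
    # Count patterns of emb_dim consecutive classes and take their entropy.
    patterns = Counter(tuple(z[i:i + emb_dim]) for i in range(n - emb_dim + 1))
    total = sum(patterns.values())
    return -sum((c / total) * math.log(c / total) for c in patterns.values())

# Multiscale DE: entropy of the signal coarse-grained at each scale.
sig = [math.sin(i / 3.0) for i in range(300)]
mde = [dispersion_entropy(coarse_grain(sig, s)) for s in (1, 2, 3)]
```

In the paper, vectors of these per-scale entropy values are the features fed to the SVM classifier.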
20 pages, 2235 KiB  
Article
A Two Phase Method for Solving the Distribution Problem in a Fuzzy Setting
by Krzysztof Kaczmarek, Ludmila Dymova and Pavel Sevastjanov
Entropy 2019, 21(12), 1214; https://doi.org/10.3390/e21121214 - 11 Dec 2019
Cited by 6 | Viewed by 3495
Abstract
In this paper, a new method for solving the distribution problem in a fuzzy setting is presented. It consists of two phases. In the first phase, the problem is formulated as the classical, fully fuzzy transportation problem. A new, straightforward numerical [...] Read more.
In this paper, a new method for solving the distribution problem in a fuzzy setting is presented. It consists of two phases. In the first phase, the problem is formulated as the classical, fully fuzzy transportation problem. A new, straightforward numerical method for solving this problem is proposed. This method is implemented using the α-cut approximation of fuzzy values and the probability approach to interval comparison, and it provides a straightforward fuzzy extension of the simplex method. Importantly, the results are fuzzy values. To validate our approach, these results were compared with those obtained using a competing method and with those obtained using the Monte Carlo method. In the second phase, the results obtained in the first one (the fuzzy profit) are used as natural constraints on the parameters of the multiobjective task. In our approach, fuzzy local criteria based on the overall profit and on contract-breaching risks are used. The particular local criteria are aggregated using the most popular aggregation modes. To obtain a compromise solution, a compromise general criterion is introduced as the aggregation of these aggregation modes with the use of level-2 fuzzy sets. As a result, a new two-phase method has been developed for solving the fuzzy, nonlinear, multiobjective distribution problem that aggregates fuzzy local criteria based on the overall profit and contract-breaching risks. Based on a comparison of the results obtained using our method with those obtained by the competing one, and on the results of a sensitivity analysis, we conclude that the method may be successfully used in applications. Numerical examples illustrate the proposed method. Full article
(This article belongs to the Section Multidisciplinary Applications)
Figure 1
<p>Non-trivial interval relations. (<b>a</b>) Overlapping case and (<b>b</b>) inclusion case.</p>
Figure 2
<p>The distributing transactions.</p>
Figure 3
<p>The frequency distribution (<b>a</b>) and the membership function of fuzzy value (<b>b</b>) for the optimal <math display="inline"><semantics> <msub> <mi>x</mi> <mn>11</mn> </msub> </semantics></math>.</p>
Figure 4
<p>The frequency distribution (<b>a</b>) and the membership function of fuzzy value (<b>b</b>) for the optimal <math display="inline"><semantics> <msub> <mi>x</mi> <mn>22</mn> </msub> </semantics></math>.</p>
Figure 5
<p>The frequency distribution (<b>a</b>) and the membership function of fuzzy value (<b>b</b>) for the optimal <math display="inline"><semantics> <msub> <mi>x</mi> <mn>33</mn> </msub> </semantics></math>.</p>
Figure 6
<p>The frequency distribution <span class="html-italic">f</span> and the fuzzy number <math display="inline"><semantics> <mi>μ</mi> </semantics></math> for optimized profit <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>P</mi> <mo stretchy="false">^</mo> </mover> <mi>r</mi> </mrow> </semantics></math>: (<b>a</b>) Monte–Carlo method on the basis of 10,000 random steps. (<b>b</b>) Monte–Carlo method on the basis of one million random steps. (<b>c</b>) Membership function of the fuzzy solution.</p>
Figure 7
<p>The enlarged flowchart of the developed method.</p>
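The first phase above combines an α-cut approximation of fuzzy values with a probability approach to interval comparison. The sketch below illustrates both ingredients under simplifying assumptions (triangular fuzzy numbers; independent uniform distributions on the compared intervals, evaluated on a midpoint grid); the paper's actual comparison rule may differ in detail.

```python
def alpha_cut(tri, alpha):
    """alpha-cut of a triangular fuzzy number (a, m, b): an interval."""
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def prob_greater(x, y, steps=200):
    """P(X > Y) for independent X, Y uniform on intervals x and y,
    estimated on a midpoint grid (a simple probabilistic interval comparison)."""
    (x1, x2), (y1, y2) = x, y
    count = 0
    for i in range(steps):
        xi = x1 + (i + 0.5) * (x2 - x1) / steps
        for j in range(steps):
            yj = y1 + (j + 0.5) * (y2 - y1) / steps
            count += xi > yj
    return count / steps ** 2

# Compare two fuzzy profits at confidence level alpha = 0.5 (toy numbers).
interval_a = alpha_cut((10.0, 14.0, 20.0), 0.5)  # (12.0, 17.0)
interval_b = alpha_cut((8.0, 13.0, 15.0), 0.5)   # (10.5, 14.0)
p = prob_greater(interval_a, interval_b)          # favours A
```

Repeating such comparisons over a set of α-cuts is what lets a simplex-style pivoting rule operate directly on fuzzy values.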
9 pages, 735 KiB  
Article
Improving Neural Machine Translation by Filtering Synthetic Parallel Data
by Guanghao Xu, Youngjoong Ko and Jungyun Seo
Entropy 2019, 21(12), 1213; https://doi.org/10.3390/e21121213 - 11 Dec 2019
Cited by 7 | Viewed by 4498
Abstract
Synthetic data has been shown to be effective in training state-of-the-art neural machine translation (NMT) systems. Because the synthetic data is often generated by back-translating monolingual data from the target language into the source language, it potentially contains a lot of noise: weakly paired [...] Read more.
Synthetic data has been shown to be effective in training state-of-the-art neural machine translation (NMT) systems. Because the synthetic data is often generated by back-translating monolingual data from the target language into the source language, it potentially contains a lot of noise: weakly paired sentences or translation errors. In this paper, we propose a novel approach to filtering this noise out of synthetic data. For each sentence pair of the synthetic data, we compute a semantic similarity score using bilingual word embeddings. By selecting sentence pairs according to these scores, we obtain better synthetic parallel data. Experimental results on the IWSLT 2017 Korean→English translation task show that, despite using much less data, our method outperforms the baseline NMT system with back-translation by up to 0.72 and 0.62 BLEU points for tst2016 and tst2017, respectively. Full article
(This article belongs to the Section Multidisciplinary Applications)
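As a rough illustration of the filtering idea in the abstract above (not the authors' actual pipeline), one can embed each side of a back-translated pair in a shared space, score the pair by cosine similarity, and keep only pairs above a threshold. The toy embeddings and the 0.9 threshold below are made-up assumptions.

```python
import math

def sentence_vec(tokens, emb):
    """Average of word vectors; tokens missing from the embedding are skipped."""
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        return None
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity of two vectors (0.0 if either has zero norm)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def filter_pairs(pairs, src_emb, tgt_emb, threshold=0.5):
    """Keep back-translated (source, target) pairs whose cross-lingual
    similarity clears the threshold; drop the weakly paired rest."""
    kept = []
    for src, tgt in pairs:
        u = sentence_vec(src.split(), src_emb)
        v = sentence_vec(tgt.split(), tgt_emb)
        if u and v and cosine(u, v) >= threshold:
            kept.append((src, tgt))
    return kept

# Toy bilingual embeddings already mapped into one shared space.
src_emb = {"annyeong": [1.0, 0.0]}
tgt_emb = {"hello": [1.0, 0.0], "tree": [0.0, 1.0]}
pairs = [("annyeong", "hello"), ("annyeong", "tree")]
good = filter_pairs(pairs, src_emb, tgt_emb, threshold=0.9)
```

In practice the embeddings would come from a cross-lingually aligned model, and the threshold would be tuned on held-out data.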
27 pages, 698 KiB  
Article
Dissipation in Non-Steady State Regulatory Circuits
by Paulina Szymańska-Rożek, Dario Villamaina, Jacek Miȩkisz and Aleksandra M. Walczak
Entropy 2019, 21(12), 1212; https://doi.org/10.3390/e21121212 - 10 Dec 2019
Cited by 2 | Viewed by 3178
Abstract
In order to respond to environmental signals, cells often use small molecular circuits to transmit information about their surroundings. Recently, motivated by specific examples in signaling and gene regulation, a body of work has focused on the properties of circuits that function out [...] Read more.
In order to respond to environmental signals, cells often use small molecular circuits to transmit information about their surroundings. Recently, motivated by specific examples in signaling and gene regulation, a body of work has focused on the properties of circuits that function out of equilibrium and dissipate energy. We briefly review the probabilistic measures of information and dissipation and use simple models to discuss and illustrate trade-offs between information and dissipation in biological circuits. We find that circuits with non-steady state initial conditions can transmit more information at small readout delays than steady state circuits. The dissipative cost of this additional information proves marginal compared to the steady state dissipation. Feedback does not significantly increase the transmitted information for out of steady state circuits but does decrease dissipative costs. Lastly, we discuss the case of bursty gene regulatory circuits that, even in the fast switching limit, function out of equilibrium. Full article
(This article belongs to the Special Issue Information Flow and Entropy Production in Biomolecular Networks)
Figure 1
<p>A cartoon of the possible states and transitions for both models: without feedback (<b>A</b>), and with feedback (<b>B</b>). Since there are two binary variables there are four states; transition rates are marked next to respective arrows. Note the symmetry between the “pure” (<math display="inline"><semantics> <mrow> <mo stretchy="false">(</mo> <mo>−</mo> <mo>,</mo> <mo>−</mo> <mo stretchy="false">)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo stretchy="false">(</mo> <mo>+</mo> <mo>,</mo> <mo>+</mo> <mo stretchy="false">)</mo> </mrow> </semantics></math>) states and the “mixed” states (<math display="inline"><semantics> <mrow> <mo stretchy="false">(</mo> <mo>−</mo> <mo>,</mo> <mo>+</mo> <mo stretchy="false">)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo stretchy="false">(</mo> <mo>+</mo> <mo>,</mo> <mo>−</mo> <mo stretchy="false">)</mo> </mrow> </semantics></math>) in both models. Representation of a possible time evolution of the system. Two variables flip between active (+) and inactive (−) states with respective rates. In the model without feedback (<b>C</b>) the output variable depends on the input variable (the output aligns to the input with rate <span class="html-italic">r</span> or anti-aligns, with rate <span class="html-italic">s</span>), the input variable <span class="html-italic">z</span> flips freely between its active and inactive state, regardless of the state of the output. In the model with feedback (<b>D</b>), there is a difference in rates of flipping of the input that depends on the state of the output.</p>
Figure 2
<p>Schematic representation of system’s relaxation. The entropy dissipation rate, <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mo stretchy="false">(</mo> <mi>τ</mi> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math> relaxes with time to its steady state value, <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>. At <math display="inline"><semantics> <msub> <mi>τ</mi> <mi mathvariant="normal">p</mi> </msub> </semantics></math> the system is “kicked out” or reset, thus the pink area represents the total energy dissipated until that time. The information is collected at an earlier readout time <math display="inline"><semantics> <mi>τ</mi> </semantics></math>.</p>
Figure 3
<p>Results of the unconstrained optimization—mutual information for the models without feedback (<span class="html-italic">S</span> and <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>) and with feedback (<span class="html-italic">F</span> and <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>) with respect to the readout time <math display="inline"><semantics> <mi>τ</mi> </semantics></math>. Optimization done both when the initial distribution is fixed to its steady state value (no tilde) and when the parameter is subjected to optimization as well (with tilde).</p>
Figure 4
<p>Results of the optimization problem with constrained steady state dissipation for models without feedback. Optimal mutual information as function of the readout time, <math display="inline"><semantics> <mi>τ</mi> </semantics></math>, for different constrained steady state dissipation rates, <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>, for the model <span class="html-italic">S</span> (dashed lines) an <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math> (solid lines).</p>
Figure 5
<p>Results of the optimization problem with constrained steady state dissipation for all four models. Optimal mutual information as function of the readout time, <math display="inline"><semantics> <mi>τ</mi> </semantics></math>, for two different constrained steady state dissipation rates, <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>, for the models <span class="html-italic">S</span> and <span class="html-italic">F</span> (dashed lines), and the models <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math> and <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math> (solid lines).</p>
Figure 6
<p>A graphical representation of the optimal circuits without (<math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>) and with (<math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>) feedback for delayed information transmission with optimized non-steady state initial conditions with a constraint on steady state dissipation <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>. The exact rate values depend on the value of <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math> and examples are shown in <a href="#entropy-21-01212-f0A1" class="html-fig">Figure A1</a> (model <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>) and <a href="#entropy-21-01212-f0A2" class="html-fig">Figure A2</a> (model <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>). The depicted circuits are close to equilibrium. The gray arrow indicates a smaller rate than the black arrow. Optimal non-steady state initial states that have highest probability are shown in red.</p>
Figure 7
<p>Optimal mutual information (<math display="inline"><semantics> <msup> <mi>I</mi> <mo>*</mo> </msup> </semantics></math>) and optimal parameters <math display="inline"><semantics> <msub> <mi>μ</mi> <mn>0</mn> </msub> </semantics></math>, <span class="html-italic">u</span>, and <span class="html-italic">s</span> for the <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math> model without feedback as function of the average dissipation, <math display="inline"><semantics> <msup> <mi mathvariant="sans-serif">Σ</mi> <mi>avg</mi> </msup> </semantics></math>, for two values of the readout time, <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> ((<b>A</b>) panels), and <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> ((<b>B</b>) panels), and three values of the reset time, <math display="inline"><semantics> <msub> <mi>τ</mi> <mi>p</mi> </msub> </semantics></math> (different colours of curves). Steady state dissipation, <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>, was fixed to <math display="inline"><semantics> <mrow> <mn>0.1</mn> </mrow> </semantics></math>.</p>
Figure 8
<p>(<b>A</b>) Cartoon depicting the relaxation cost (pink area) <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>p</mi> </msub> <mrow> <mo stretchy="false">(</mo> <msup> <mi mathvariant="sans-serif">Σ</mi> <mi>avg</mi> </msup> <mo>−</mo> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math> of the system equilibrating from a non-steady state initial state, and thus <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mo stretchy="false">(</mo> <mi>τ</mi> <mo stretchy="false">)</mo> </mrow> <mo>≠</mo> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </mrow> </semantics></math>. (<b>B</b>) The total cost, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>p</mi> </msub> <msup> <mi mathvariant="sans-serif">Σ</mi> <mi>avg</mi> </msup> </mrow> </semantics></math>, of the optimal information transmitted as a function of the steady state entropy dissipation rate, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>p</mi> </msub> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </mrow> </semantics></math>, for models without feedback, that start with the steady state distribution, <span class="html-italic">S</span>, and that optimize the initial distribution, <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>. Results shown for two choices of reset <math display="inline"><semantics> <msub> <mi>τ</mi> <mi>p</mi> </msub> </semantics></math> and readout <math display="inline"><semantics> <mi>τ</mi> </semantics></math> timescales. 
For the steady state models <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>p</mi> </msub> <msup> <mi mathvariant="sans-serif">Σ</mi> <mi>avg</mi> </msup> <mo>=</mo> <msub> <mi>τ</mi> <mi>p</mi> </msub> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </mrow> </semantics></math>. (<b>C</b>) The information gain, <math display="inline"><semantics> <mrow> <msup> <mi>I</mi> <mo>*</mo> </msup> <mo>−</mo> <msup> <mi>I</mi> <mi>ss</mi> </msup> </mrow> </semantics></math>, of the optimized initial condition model (<math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>) compared to the steady state initial condition model (<span class="html-italic">S</span>) and the relaxation cost, <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>p</mi> </msub> <mrow> <mo stretchy="false">(</mo> <msup> <mi mathvariant="sans-serif">Σ</mi> <mi>avg</mi> </msup> <mo>−</mo> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math>, as a function of the steady state entropy dissipation rate for the same choices of <math display="inline"><semantics> <msub> <mi>τ</mi> <mi>p</mi> </msub> </semantics></math> and <math display="inline"><semantics> <mi>τ</mi> </semantics></math> as in panel (<b>B</b>).
(<b>D</b>) Comparison of the optimal delayed information and total dissipative cost as a function of the steady state entropy dissipation rate for all four models: without feedback (<span class="html-italic">S</span>, <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>) and with feedback (<span class="html-italic">F</span>, <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>), with the initial distribution equal to the steady state one (<span class="html-italic">S</span>, <span class="html-italic">F</span>) or optimized over (<math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>, <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>). <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <msub> <mi>τ</mi> <mi>p</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>. (<b>E</b>) The information gain and relaxation cost of circuits with optimized initial conditions compared to steady state ones for the models with (<math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>) and without feedback (<math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>). <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <msub> <mi>τ</mi> <mi>p</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>.</p>
Figure 9
<p>Information for model <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math> (panels (<b>A</b>,<b>C</b>,<b>E</b>)) and model <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math> (panels (<b>G</b>,<b>I</b>,<b>K</b>)) and <math display="inline"><semantics> <mrow> <msup> <mi mathvariant="sans-serif">Σ</mi> <mi>avg</mi> </msup> <mrow> <mo stretchy="false">(</mo> <msub> <mi>τ</mi> <mi>p</mi> </msub> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics></math> for model <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>(panels (<b>B</b>,<b>D</b>,<b>F</b>)) and model <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math> (panels (<b>H</b>,<b>J</b>,<b>L</b>)) of information-optimal circuits with <math display="inline"><semantics> <mrow> <msub> <mi>μ</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> evaluated for different values of the initial condition <math display="inline"><semantics> <msub> <mi>μ</mi> <mn>0</mn> </msub> </semantics></math>. 
The circuits parameters are evaluated by optimizing information transmission for <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> (<b>A</b>,<b>B</b>), <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (<b>C</b>,<b>D</b>) and <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> (<b>E</b>,<b>F</b>) and fixed <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> <mo>=</mo> <mn>0.15</mn> </mrow> </semantics></math> (blue lines), <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> <mo>=</mo> <mn>0.35</mn> </mrow> </semantics></math> (magenta lines), <math display="inline"><semantics> <mrow> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math> (green lines). <math display="inline"><semantics> <mrow> <msub> <mi>τ</mi> <mi>p</mi> </msub> <mo>=</mo> <mi>τ</mi> </mrow> </semantics></math> in all plots. For comparison we plot the optimal information of the steady state circuit <span class="html-italic">S</span> and <span class="html-italic">F</span>, respectively, optimized for the same steady state dissipation <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math> and readout delay <math display="inline"><semantics> <mi>τ</mi> </semantics></math> (solid lines). The information always decreases for non-optimal values of <math display="inline"><semantics> <msub> <mi>μ</mi> <mn>0</mn> </msub> </semantics></math> but the mean dissipation can be smaller for unexpected initial conditions.</p>
Figure A1
<p>The optimal parameters as a function of the readout delay, <math display="inline"><semantics> <mi>τ</mi> </semantics></math>, for the models without feedback, <span class="html-italic">S</span> and <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>, at different constrained steady state dissipation rates <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>.</p>
Figure A2
<p>The optimal parameters as a function of the readout delay <math display="inline"><semantics> <mi>τ</mi> </semantics></math> for models with feedback, <span class="html-italic">F</span> and <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>, at different constrained steady state dissipation rates <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>.</p>
Figure A3
<p>The learning rate for the output variable <span class="html-italic">x</span> as a function of the rescaled steady state dissipation, <math display="inline"><semantics> <msup> <mover accent="true"> <mi>σ</mi> <mo stretchy="false">^</mo> </mover> <mi>ss</mi> </msup> </semantics></math>, calculated at steady state for models with (<span class="html-italic">F</span> and <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math>) and without feedback (<span class="html-italic">S</span> and <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math>). Models <math display="inline"><semantics> <mover accent="true"> <mi>S</mi> <mo>˜</mo> </mover> </semantics></math> and <math display="inline"><semantics> <mover accent="true"> <mi>F</mi> <mo>˜</mo> </mover> </semantics></math> have optimized initial conditions (which do not enter these calculations except through the optimal parameters), and models <span class="html-italic">S</span> and <span class="html-italic">F</span> are constrained to have initial conditions in steady state.</p>
18 pages, 1123 KiB  
Article
On the Statistical Mechanics of Life: Schrödinger Revisited
by Kate Jeffery, Robert Pollack and Carlo Rovelli
Entropy 2019, 21(12), 1211; https://doi.org/10.3390/e21121211 - 10 Dec 2019
Cited by 32 | Viewed by 9124
Abstract
We study the statistical underpinnings of life, in particular its increase in order and complexity over evolutionary time. We question some common assumptions about the thermodynamics of life. We recall that, contrary to widespread belief, entropy growth can accompany an increase in macroscopic order even in a closed system. We view metabolism in living things as microscopic variables directly driven by the second law of thermodynamics, while viewing the macroscopic variables of structure, complexity and homeostasis as mechanisms that are entropically favored because they open channels for entropy to grow via metabolism. This perspective reverses the conventional relation between structure and metabolism by emphasizing the role of structure for metabolism rather than the converse. Structure extends in time, preserving information along generations, particularly in the genetic code, but also in human culture. We argue that increasing complexity is an inevitable tendency for systems with these dynamics and explain this with the notion of metastable states, which are enclosed regions of the phase space that we call “bubbles,” and channels between these, which are discovered by random motion of the system. We consider that more complex systems inhabit larger bubbles (have more available states), and also that larger bubbles are more easily entered and less easily exited than small bubbles. The result is that the system entropically wanders into ever-larger bubbles in the foamy phase space, becoming more complex over time. This formulation makes it intuitive why the increase in order/complexity over time is often stepwise and sometimes collapses catastrophically, as in biological extinction. Full article
(This article belongs to the Special Issue Biological Statistical Mechanics)
Figure 1
<p>The intuitive understanding of the logic of the second law. The space in the picture represents all possible states of a system. If (i) there is a variable that has value L in a small L (“Low entropy”) region and value H in a large H (“High entropy”) region, and if (ii) the evolution begins in L, then it is likely to end up in H. The converse is not true: a generic evolution that begins in H likely remains in H. Hence the system evolves irreversibly from L to H but not vice versa. Note that in this example the envelope surrounding L is completely porous—there is no barrier to movement of the state from L to H or vice versa; the directionality of movement purely arises from statistics.</p>
Figure 2
<p>Intuitive representation of metastable states: the metastable state is the region L of phase space in a system for which the dynamics cannot cross the boundary of L except through a very narrow gap. A microstate will remain trapped in the region L (the metastable state) for a long time before occasionally finding its way out towards the stable state H. The impetus to cross the channel is something that transiently lowers entropy.</p>
Figure 3
<p>(<b>a</b>): Intuitive (oversimplified) representation of the complex phase space of living physical system: extremely numerous metastable state regions (“bubbles”) are linked to one another via channels. (<b>b</b>): a system can wander from one bubble to a connected one and will tend to more easily make its way through the space (dark bubbles) to larger bubbles that are easier to find and harder to leave. This means that over time, the system will tend to increase in complexity (find larger bubbles).</p>
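The wandering described in this caption can be illustrated with a toy Markov chain (a sketch under our own assumptions: the three bubble sizes and the 1/size exit rule are illustrative choices, not the paper's model). A narrow channel out of a large bubble is stumbled upon less often, so large bubbles are entered as easily as small ones but left far less often, and the walker's long-run occupancy ends up proportional to bubble size.

```python
import random

random.seed(1)

# Three fully connected "bubbles" of increasing size. The chance of finding
# the narrow exit channel in one step is inversely proportional to bubble
# size; detailed balance then makes long-run occupancy proportional to size.
sizes = {"A": 10, "B": 100, "C": 1000}
bubbles = list(sizes)

state = "A"
visits = {b: 0 for b in bubbles}
for _ in range(200_000):
    visits[state] += 1
    if random.random() < 1.0 / sizes[state]:      # stumbled onto a channel
        state = random.choice([b for b in bubbles if b != state])

ranked = sorted(bubbles, key=lambda b: visits[b])  # smallest occupancy first
```

Run long enough, the walker spends roughly size-proportional time in each bubble, i.e., it "entropically wanders into ever-larger bubbles" even though every transition rule is symmetric in form.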
10 pages, 3317 KiB  
Article
The Static Standing Postural Stability Measured by Average Entropy
by Sung-Yang Wei, Chang Francis Hsu, Yun-Ju Lee, Long Hsu and Sien Chi
Entropy 2019, 21(12), 1210; https://doi.org/10.3390/e21121210 - 10 Dec 2019
Cited by 3 | Viewed by 3073
Abstract
Static standing postural stability has been measured by multiscale entropy (MSE), which is used to measure complexity. In this study, we used the average entropy (AE) to measure the static standing postural stability, as AE is a good measure of disorder. The center of pressure (COP) trajectories were collected from 11 subjects under four kinds of balance conditions, from stable to unstable: bipedal with open eyes, bipedal with closed eyes, unipedal with open eyes, and unipedal with closed eyes. The AE, entropy of entropy (EoE), and MSE methods were used to analyze these COP data, and EoE was found to be a good measure of complexity. The AE of the 11 subjects sequentially increased by 100% as the balance conditions progressed from stable to unstable, but the results of EoE and MSE did not follow this trend. Therefore, AE, rather than EoE or MSE, is a good measure of static standing postural stability. Furthermore, the comparison of EoE and AE plots exhibited an inverted U curve, which is another example of a complexity versus disorder inverted U curve. Full article
Figure 1
<p>An application of the average entropy (AE) and entropy of entropy (EoE) methods. (<b>a</b>) The four original center of pressure (COP) speed time series {<span class="html-italic">v<sub>i</sub></span>} recorded from one subject under the four balance conditions. Each series was divided equally into 16 windows of 5 data points each (red frames). The Shannon entropy of the 5 data points in each window was calculated individually. (<b>b</b>) The four resulting Shannon entropy sequences {<span class="html-italic">y<sub>j</sub></span><sup>(5)</sup>}, each with 16 elements.</p>
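The windowing scheme in this caption suggests a minimal sketch of the AE and EoE computations. The window length of 5 follows the caption; the 8-bin discretization of values is our assumption, since the paper's exact binning is not stated here.

```python
import numpy as np

def shannon(values, bins, value_range):
    # Empirical Shannon entropy of one window under a fixed discretization.
    counts, _ = np.histogram(values, bins=bins, range=value_range)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def ae_eoe(series, win=5, bins=8):
    # AE: mean of the per-window Shannon entropies.
    # EoE: Shannon entropy of that entropy sequence itself.
    series = np.asarray(series, dtype=float)
    value_range = (series.min(), series.max() + 1e-12)
    n_win = len(series) // win
    ent = np.array([shannon(series[i * win:(i + 1) * win], bins, value_range)
                    for i in range(n_win)])
    eoe = shannon(ent, bins, (ent.min(), ent.max() + 1e-12))
    return ent.mean(), eoe
```

For a perfectly still stance (constant series) AE is 0, and it grows toward ln(5) as the COP speed becomes more disordered, matching AE's role as a disorder measure.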
Figure 2
<p>Analytical procedure flowchart in this study.</p>
Figure 3
<p>AE values of the COP speed time series from the 11 subjects under four kinds of balance conditions. Recall that the condition abbreviations O2, C2, O1, and C1 stand for bipedal with open eyes, bipedal with closed eyes, unipedal with open eyes, and unipedal with closed eyes, respectively. Blue triangles, yellow stars, green triangles, and red circles represent the O2, C2, O1, and C1 conditions, respectively.</p>
Figure 4
<p>(<b>a</b>) EoE values and (<b>b</b>) MSE values of the COP speed time series from the 11 subjects under four kinds of balance conditions: O2, C2, O1, and C1.</p>
Figure 5
<p>MSE complexity index (CI) values of the COP trajectory time series in the (<b>a</b>) mediolateral (ML) and (<b>b</b>) anteroposterior (AP) directions from the 11 subjects under four kinds of balance conditions: O2, C2, O1, and C1.</p>
Figure 6
<p>The EoE versus AE of the 44 COP speed time series from the 11 subjects under four kinds of balance conditions: O2, C2, O1, and C1.</p>
21 pages, 383 KiB  
Article
Statistical Inference on the Shannon Entropy of Inverse Weibull Distribution under the Progressive First-Failure Censoring
by Jiao Yu, Wenhao Gui and Yuqi Shan
Entropy 2019, 21(12), 1209; https://doi.org/10.3390/e21121209 - 10 Dec 2019
Cited by 18 | Viewed by 3424
Abstract
Entropy is an uncertainty measure of random variables that mathematically represents the expected quantity of information. In this paper, we focus on estimation of the parameters and entropy of an Inverse Weibull distribution under progressive first-failure censoring using classical (Maximum Likelihood) and Bayesian methods. For the Bayesian approaches, the Bayes estimates are obtained under both asymmetric (General Entropy, Linex) and symmetric (Squared Error) loss functions. Due to the complex form of the Bayes estimates, no explicit solution is available; therefore, the Lindley method and an Importance Sampling procedure are applied. Furthermore, using the Importance Sampling method, the Highest Posterior Density credible intervals of the entropy are constructed. As a comparison, the asymptotic intervals of the entropy are also obtained. Finally, a simulation study is implemented and a real data set analysis is performed to apply the previous methods. Full article
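As a hedged illustration of the classical (Maximum Likelihood) part of this program, the sketch below fits an Inverse Weibull distribution on complete, uncensored data and plugs the estimates into a numerically integrated Shannon entropy. The parametrization F(x) = exp(-λ x^(-β)) and the grid-based profile likelihood are our illustrative choices; the paper's progressive first-failure censoring scheme is not handled here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse Weibull (Frechet) CDF: F(x) = exp(-lam * x**(-beta)), x > 0.
beta_true, lam_true = 2.0, 1.5
u = rng.uniform(size=5000)
x = (-np.log(u) / lam_true) ** (-1.0 / beta_true)   # inverse-CDF sampling

def profile_neg_loglik(beta):
    # For fixed beta, the MLE of lam is closed-form: lam = n / sum(x**-beta).
    n, s = len(x), np.sum(x ** (-beta))
    lam = n / s
    return -(n * np.log(lam) + n * np.log(beta)
             - (beta + 1.0) * np.sum(np.log(x)) - lam * s)

betas = np.linspace(0.2, 6.0, 2000)                 # crude grid search
beta_hat = betas[int(np.argmin([profile_neg_loglik(b) for b in betas]))]
lam_hat = len(x) / np.sum(x ** (-beta_hat))

# Plug-in Shannon (differential) entropy by numerically integrating -f ln f.
t = np.linspace(1e-3, 50.0, 200_000)
dt = t[1] - t[0]
f = lam_hat * beta_hat * t ** (-beta_hat - 1.0) * np.exp(-lam_hat * t ** (-beta_hat))
mask = f > 0
H_hat = -np.sum(f[mask] * np.log(f[mask])) * dt
```

With 5000 complete observations the grid MLE recovers (β, λ) to within a few percent; the censored-data likelihood in the paper replaces the density terms above with survival-function factors for the censored units.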
Figure 1
<p>The failure rate function.</p>
19 pages, 914 KiB  
Article
Learning from Both Experts and Data
by Rémi Besson, Erwan Le Pennec and Stéphanie Allassonnière
Entropy 2019, 21(12), 1208; https://doi.org/10.3390/e21121208 - 10 Dec 2019
Cited by 3 | Viewed by 2669
Abstract
In this work, we study the problem of inferring a discrete probability distribution using both expert knowledge and empirical data. This is an important issue for many applications where the scarcity of data prevents a purely empirical approach. In this context, it is common to rely first on an a priori derived from initial domain knowledge before proceeding to online data acquisition. We are particularly interested in the intermediate regime, where we do not have enough data to do without the initial a priori of the experts, but enough to correct it if necessary. We present here a novel way to tackle this issue, with a method providing an objective way to choose the weight given to the experts relative to the data. We show, both empirically and theoretically, that our proposed estimator is always more efficient than the better of the two models (expert or data), up to a constant. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Figure 1
<p>Barycenter between expert and data when the expert belongs to the confidence interval centered at the empirical distribution. In this case, there is insufficient empirical evidence that the expert is wrong.</p>
Figure 2
<p>Barycenter between expert and data when the expert does not belong to the confidence interval centered at the empirical distribution. There is a high probability that the expert is outside the set where the target is located and therefore needs to be corrected.</p>
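The correction rule pictured in these two figures can be sketched as follows (a hedged illustration: the Hoeffding-style radius and the sup-norm projection are our own simplifying assumptions, not the paper's exact construction). The expert distribution is kept when it is statistically compatible with the empirical distribution, and otherwise pulled back to the boundary of a confidence ball around it.

```python
import numpy as np

def combine(q, counts, delta=0.05):
    # q: expert distribution; counts: observed counts per category.
    n = counts.sum()
    p_hat = counts / n
    # Hoeffding-style sup-norm confidence radius (illustrative choice).
    radius = np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    dist = np.abs(q - p_hat).max()
    if dist <= radius:
        return q                      # expert is statistically plausible
    lam = radius / dist               # otherwise project toward the data:
    return lam * q + (1.0 - lam) * p_hat  # lands on the ball's boundary
```

As more data arrive the radius shrinks like 1/sqrt(n), so the weight given to a misspecified expert vanishes automatically, which is the behavior the abstract describes for the intermediate regime.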
Figure 3
<p>Evolution of the performance of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>L</mi> </msubsup> </semantics></math> as a function of the available number of empirical data. <math display="inline"><semantics> <msub> <mi>ϵ</mi> <mi>n</mi> </msub> </semantics></math> is defined by Equation (<a href="#FD18-entropy-21-01208" class="html-disp-formula">18</a>).</p>
Figure 4
<p>Evolution of the performance of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mn>1</mn> </msubsup> </semantics></math> as a function of the available number of empirical data. <math display="inline"><semantics> <msub> <mi>ϵ</mi> <mi>n</mi> </msub> </semantics></math> is defined by Equation (<a href="#FD20-entropy-21-01208" class="html-disp-formula">20</a>).</p>
Figure 5
<p>Evolution of the performance of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>L</mi> </msubsup> </semantics></math> as a function of the available number of empirical data. <math display="inline"><semantics> <msub> <mi>ϵ</mi> <mi>n</mi> </msub> </semantics></math> is defined by Equation (<a href="#FD19-entropy-21-01208" class="html-disp-formula">19</a>). Number of symptoms: 7.</p>
Figure 6
<p>Comparison of the performances of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>L</mi> </msubsup> </semantics></math> and <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>Bayes</mi> </msubsup> </semantics></math> as a function of the available number of empirical data with different initial a priori and <math display="inline"><semantics> <mi>δ</mi> </semantics></math>. <math display="inline"><semantics> <msub> <mi>ϵ</mi> <mi>n</mi> </msub> </semantics></math> is defined by Equation (<a href="#FD18-entropy-21-01208" class="html-disp-formula">18</a>). Number of symptoms: 7.</p>
Figure 7
<p>Evolution of the performance of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>L</mi> </msubsup> </semantics></math> as a function of the available number of empirical data. <math display="inline"><semantics> <msub> <mi>ϵ</mi> <mi>n</mi> </msub> </semantics></math> is defined by Equation (<a href="#FD19-entropy-21-01208" class="html-disp-formula">19</a>). Number of symptoms: 9.</p>
Figure 8
<p>Comparison of the performances of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>L</mi> </msubsup> </semantics></math> and <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>Bayes</mi> </msubsup> </semantics></math> as a function of the available number of empirical data with different initial a priori and <math display="inline"><semantics> <mi>δ</mi> </semantics></math>. <math display="inline"><semantics> <msub> <mi>ϵ</mi> <mi>n</mi> </msub> </semantics></math> is defined by Equation (<a href="#FD18-entropy-21-01208" class="html-disp-formula">18</a>). Number of symptoms: 9.</p>
Figure 9
<p>Comparison of the performances of <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mn>1</mn> </msubsup> </semantics></math> and <math display="inline"><semantics> <msubsup> <mover accent="true"> <mi>p</mi> <mo>^</mo> </mover> <mi>n</mi> <mi>Bayes</mi> </msubsup> </semantics></math> as a function of the available number of empirical data with different initial a priori and <math display="inline"><semantics> <mi>δ</mi> </semantics></math>. <math display="inline"><semantics> <msub> <mi>ϵ</mi> <mi>n</mi> </msub> </semantics></math> is defined by Equation (<a href="#FD20-entropy-21-01208" class="html-disp-formula">20</a>). Number of symptoms: 9.</p>