Entropy, Volume 19, Issue 2 (February 2017) – 40 articles

Cover Story: It is argued that one should make a clear distinction between Shannon's Measure of Information (SMI) and thermodynamic entropy. The first is defined on any probability distribution, whereas entropy is defined on a very special set of distributions: it measures the uncertainty in the distribution of the locations and momenta of all the particles. The H-function, as defined by Boltzmann, is an SMI, not an entropy; therefore, the H-theorem is irrelevant to the Second Law of Thermodynamics. It is accordingly recommended to change the term MaxEnt to MaxSMI. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Quantifying Synergistic Information Using Intermediate Stochastic Variables
by Rick Quax, Omri Har-Shemesh and Peter M. A. Sloot
Entropy 2017, 19(2), 85; https://doi.org/10.3390/e19020085 - 22 Feb 2017
Cited by 30 | Viewed by 8297
Abstract
Quantifying synergy among stochastic variables is an important open problem in information theory. Information synergy occurs when multiple sources together predict an outcome variable better than the sum of single-source predictions. It is an essential phenomenon in biology, such as in neuronal networks and cellular regulatory processes, where different information flows integrate to produce a single response, but also in social cooperation processes and in statistical inference tasks in machine learning. Here we propose a measure of synergistic entropy and synergistic information from first principles. The proposed measure relies on so-called synergistic random variables (SRVs), which are constructed to have zero mutual information about individual source variables but non-zero mutual information about the complete set of source variables. We prove several basic and desired properties of our measure, including bounds and additivity properties. In addition, we prove several important consequences of our measure, including the fact that different types of synergistic information may co-exist between the same sets of variables. A numerical implementation is provided, which we use to demonstrate that synergy is associated with resilience to noise. Our measure may be a marked step forward in the study of multivariate information theory and its numerous applications.
(This article belongs to the Section Complexity)
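The defining property of an SRV can be seen in the textbook XOR example (a minimal sketch, not the authors' implementation): for two independent uniform bits X1, X2 and Y = X1 ⊕ X2, each input alone carries zero mutual information about Y, yet the pair determines Y completely.

```python
# Sketch: verify the SRV property of XOR by computing mutual information
# directly from the (equiprobable) joint samples. Not the paper's code.
from collections import Counter
from itertools import product
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits from a list of equiprobable (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum(c / n * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# Two independent uniform bits and their XOR as the outcome:
samples = [((x1, x2), x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
i_joint = mutual_information(samples)                           # I(X1,X2 : Y) = 1.0 bit
i_x1 = mutual_information([(x1, y) for (x1, _), y in samples])  # I(X1 : Y) = 0.0
i_x2 = mutual_information([(x2, y) for (_, x2), y in samples])  # I(X2 : Y) = 0.0
print(i_joint, i_x1, i_x2)
```

All the information about Y is thus purely synergistic, which is exactly the structure an SRV is constructed to capture for general source distributions.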
Figures:
Figure 1: The values of the two MSRVs S1 and S2, which are mutually independent but highly synergistic about two 3-valued variables X1 and X2; X1 and X2 are uniformly distributed and independent.
Figure 2: Two independent input bits with the XOR gate as output. (a) The relation Y = X1 ⊕ X2; (b) an additional input bit X3 is added which copies the XOR output, adding individual (unique) information I(X3 : Y) = 1.
Figure 3: Effectiveness of the numerical implementation at finding a single SRV. The input consists of two variables with 2, 3, 4, or 5 possible values each (x-axis). Red line with dots: probability that an SRV could be found with at most 10% relative error in 50 randomly generated Pr(X1, X2, Y) distributions. That it is lowest for binary variables is consistent with the observation that perfect orthogonal decomposition is impossible in this case under at least one known condition (Appendix A.6); that it converges to 1 is consistent with our suggestion that orthogonal decomposition could be possible for continuous variables (Section 7.1). Blue box plot: expected relative error of the entropy of a single SRV, once successfully found.
Figure 4: Synergistic entropy of a single SRV normalized by the theoretical upper bound. The input consists of two randomly generated stochastic variables with 2, 3, 4, or 5 possible values per variable (x-axis); the SRV is constrained to have the same number of possible values. The initial downward trend shows that individual SRVs become less efficient at storing synergistic information as the state space per variable grows. The apparent settling to a non-zero constant suggests that estimating synergistic information does not require a diverging number of SRVs for any number of values per variable.
Figure 5: Left: the median relative change of the mutual information I(X1, X2 : Y) after perturbing a single input variable's marginal distribution P(X1) ("local" perturbation). Error bars indicate the 25th and 75th percentiles. A perturbation is implemented by adding a random vector with norm 0.1 to the point in the unit hypercube that defines P(X1). Each bar is based on 100 randomly generated joint distributions P(X1, X2, Y), where in the synergistic case Y is constrained to be an SRV of X1, X2. Right: the same, except that the perturbation is "non-local": it is applied to P(X2 | X1) while keeping P(X1) and P(X2) unchanged.
Figure 6: The conditional probabilities of an SRV conditioned on two independent binary inputs. Here, e.g., Σ a_i = 1, and a_i denotes the probability that S equals state i in case X1 = X2 = 0.
Article
The More You Know, the More You Can Grow: An Information Theoretic Approach to Growth in the Information Age
by Martin Hilbert
Entropy 2017, 19(2), 82; https://doi.org/10.3390/e19020082 - 22 Feb 2017
Cited by 10 | Viewed by 8413
Abstract
In our information age, information alone has become a driver of social growth. Information is the fuel of "big data" companies and the decision-making compass of policy makers. Can we quantify how much information leads to how much social growth potential? Information theory is used to show that information (in bits) is effectively a quantifiable ingredient of growth. The article presents a single equation that allows one both to describe hands-off natural selection of evolving populations and to optimize population fitness in uncertain environments through intervention. The setup analyzes the communication channel between the growing population and its uncertain environment. The role of information in population growth can be thought of as the optimization of information flow over this (more or less) noisy channel. Optimized growth implies that the population absorbs all communicated environmental structure during evolutionary updating (as measured by their mutual information). This is achieved by endogenously adjusting the population structure to the exogenous environmental pattern (through bet-hedging/portfolio management). The setup can be applied to decompose the growth of any discrete population in stationary, stochastic environments (economic, cultural, or biological). Two empirical examples from the information economy reveal inherent trade-offs among the involved information quantities during growth optimization.
(This article belongs to the Section Information Theory, Probability and Statistics)
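The channel view of Kelly-style bet-hedging can be sketched numerically (illustrative numbers, not the article's data): with a diagonal fitness matrix, the growth-optimal blind strategy allocates population fractions proportional to the environment distribution p(e), and the growth-rate gain from a perfect environmental cue C equals the environment's entropy H(E), i.e., the mutual information I(E;C) the cue communicates.

```python
# Sketch of Kelly bet-hedging over a diagonal fitness matrix.
# p_e and w are assumed illustrative values, not data from the article.
from math import log2

p_e = [0.7, 0.3]   # stationary environment distribution (illustrative)
w = [2.0, 2.0]     # diagonal type-fitness values (illustrative)

def log_growth(b):
    """Expected log-growth rate: sum_e p(e) * log2(b(e) * w(e))."""
    return sum(p * log2(be * we) for p, be, we in zip(p_e, b, w))

g_blind = log_growth(p_e)  # Kelly-optimal proportional betting, no cue
# Perfect cue: bet the whole population on the true state each period.
g_cued = sum(p * log2(we) for p, we in zip(p_e, w))
h_e = -sum(p * log2(p) for p in p_e)  # environmental entropy H(E)

# The growth-rate gain from the perfect cue equals H(E) = I(E;C):
print(g_cued - g_blind, h_e)
```

The printed gain and H(E) coincide, which is the classic "value of side information" identity that the article generalizes to non-diagonal fitness matrices.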
Figures:
Figure 1: Google Trends data for the search terms "chocolate" and "diet" from May 2004 to February 2015. Vertical shadings indicate different environmental states (see Results section).
Figure 2: The communication channel between the environment and the average updated population. (a) Representation as a traditional fitness matrix for the binary case; the fitness values in brackets show the case of the diagonal fitness matrix with type fitness dW. (b) Representation as a communication channel with transition probabilities p⁺(g⁺ | e) for the binary case; the diagonal fitness matrix results in the noiseless channel, where only the identity transitions are non-zero: p⁺(g⁺ = i | e = i) > 0 for all i.
Figure 3: The typical sets of the environmental states E and the average updated future generation G⁺, both over a large number of periods t. Transmission over the channel between the environment and the average updated population induces uncertainty in identifying each environmental state during reception by the population. The uncertainty that the environmental state (e = 1) is sent over the channel is the conditional entropy H(G⁺ | (e = 1)); according to the asymptotic equipartition property, there are approximately 2^(t·H(G⁺ | (e = 1))) such sequences. The total number of typical G⁺ sequences is ≈ 2^(t·H(G⁺)). Restricting ourselves to the subset of channel inputs whose typical output sets do not overlap (see Supplementary Section 1), we can bound the number of non-confusable inputs by dividing the size of the typical output set by the size of each typical-output-given-typical-input set, 2^(t·H(G⁺|E)). The total number of disjoint and non-confusable sets is therefore at most 2^(t·(H(G⁺) − H(G⁺|E))) = 2^(t·I(G⁺;E)).
Figure 4: Venn diagram/I-diagram representation of mutual information. (a) Optimal bet-hedging in Kelly's case of the diagonal fitness matrix. Mutual information can be calculated as a difference of uncertainties, H(E) − H(E|C) = I(E;C); it is always nonnegative in the two-variable case, as conditioning reduces uncertainty. (b) Optimal bet-hedging with a mixed non-diagonal fitness matrix, inside the bet-hedging region. In the three-variable case, too, the circles are entropies and the intersections mutual information; one way to calculate the joint intersection of all three variables is I(G⁺;E;C) = H(E) − H(E|G⁺) − I(G⁺;E|C). In the case of bet-hedging inside the bet-hedging region, the three variables form a Markov chain E ↔ G⁺ ↔ C. This implies that E and C have no mutual information outside of G⁺ (G⁺ absorbs all common structure through optimal growth): I(E;C|G⁺) = 0. This can be shown via the generally valid reformulation I(E;C|G⁺) = H(E|G⁺) − H(E|C,G⁺), which here gives H(E|G⁺) = H(E|C,G⁺): from the perspective of the updated population, additional cues do not affect the perceived distribution of the environment (compare with the values in Table 2). A perfect cue would imply a picture in Figure 4b similar to the complete overlap shown in Figure 4a, with C and G⁺ switched. From Markovity it follows that in this case the uncertainty of the updated population cannot be smaller than the entropy of the cue, as it is completely absorbed through updating: H(G⁺) ≥ H(E) = H(C). This follows from the data processing inequality [14]: H(G⁺) ≥ I(G⁺;E) ≥ I(G⁺;C) = H(E) = H(C).
Figure 5: Global resource extraction between 1900 and 1998, distinguishing the contributions of the United States and the rest of the world, based on [65,66].
Figure 6: Empirical growth of Google Trends data for the search terms "chocolate" and "diet" from May 2004 to February 2015, and optimized growth when following different bet-hedging strategies.
Article
Using k-Mix-Neighborhood Subdigraphs to Compute Canonical Labelings of Digraphs
by Jianqiang Hao, Yunzhan Gong, Yawen Wang, Li Tan and Jianzhi Sun
Entropy 2017, 19(2), 79; https://doi.org/10.3390/e19020079 - 22 Feb 2017
Cited by 1 | Viewed by 5744
Abstract
This paper presents a novel theory and method for calculating the canonical labelings of digraphs, whose definition is entirely different from the traditional definition of Nauty. It indicates the mutual relationships that exist between the canonical labeling of a digraph and the canonical labeling of its complement graph. It systematically examines the link between computing the canonical labeling of a digraph and the k-neighborhood and k-mix-neighborhood subdigraphs. To facilitate the presentation, it introduces several concepts, including the mix diffusion outdegree sequence and entire mix diffusion outdegree sequences. For each node in a digraph G, it assigns an attribute m_NearestNode to enhance the accuracy of calculating the canonical labeling. The four theorems proved here demonstrate how to determine the first nodes added into MaxQ(G); two further theorems deal with identifying the second nodes added into MaxQ(G). When computing Cmax(G), if MaxQ(G) already contains the first i vertices u1, u2, …, ui, the Diffusion Theorem provides a guideline on how to choose the subsequent node of MaxQ(G). Besides, the Mix Diffusion Theorem shows that the (i+1)th vertex of MaxQ(G) for computing Cmax(G) is selected from the open mix-neighborhood subdigraph N⁺⁺(Q) of the node set Q = {u1, u2, …, ui}. It also offers two theorems to calculate Cmax(G) for disconnected digraphs. The four algorithms implemented here illustrate how to calculate MaxQ(G) of a digraph. Through software testing, the correctness of our algorithms is preliminarily verified. Our method can be utilized to mine frequent subdigraphs. We also conjecture that if there exists a vertex v ∈ S⁺(G) satisfying Cmax(G_v) ≥ Cmax(G_w) for each w ∈ S⁺(G), w ≠ v, then u1 = v for MaxQ(G).
(This article belongs to the Section Information Theory, Probability and Statistics)
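The idea behind a maximum canonical form like Cmax(G) can be illustrated by brute force (an assumption-laden sketch: this is not the paper's algorithm, which prunes the search via k-mix-neighborhood subdigraphs): define the canonical form as the lexicographically largest adjacency-matrix bit string over all vertex relabelings; two digraphs are then isomorphic exactly when their forms coincide.

```python
# Brute-force canonical form of a digraph: maximize the row-major
# adjacency bit string over all n! vertex orderings. Exponential, so
# only usable for tiny graphs; illustrative of the concept only.
from itertools import permutations

def canonical_form(edges, n):
    """Lexicographically maximal adjacency bits over all labelings."""
    edges = set(edges)
    best = None
    for perm in permutations(range(n)):
        # position i in the new ordering holds old vertex perm[i]
        bits = tuple(1 if (perm[i], perm[j]) in edges else 0
                     for i in range(n) for j in range(n))
        if best is None or bits > best:
            best = bits
    return best

# Two relabelings of a directed 3-cycle are isomorphic:
g1 = [(0, 1), (1, 2), (2, 0)]
g2 = [(1, 0), (0, 2), (2, 1)]
print(canonical_form(g1, 3) == canonical_form(g2, 3))  # True
# A directed path and an out-star are not:
print(canonical_form([(0, 1), (1, 2)], 3) ==
      canonical_form([(0, 1), (0, 2)], 3))             # False
```

The paper's contribution is, in effect, replacing this factorial search with theorems that pin down which vertices can open MaxQ(G) and restrict each subsequent choice to N⁺⁺(Q).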
Show Figures

Figure 1

Figure 1
<p>Each close <italic>mix</italic>-neighborhood subdigraph of different nodes sets of an 8 × 8 grid digraph <inline-formula> <mml:math id="mm2037" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>8</mml:mn> <mml:mo>,</mml:mo> <mml:mn>8</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula> consists of pink and green nodes and edges. (<bold>a</bold>) The close mix-neighborhood subdigraph <inline-formula> <mml:math id="mm2038" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">[</mml:mo> <mml:mn>1</mml:mn> <mml:mo stretchy="false">]</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>b</bold>) <inline-formula> <mml:math id="mm2039" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">[</mml:mo> <mml:mn>1</mml:mn> <mml:mo>,</mml:mo> <mml:mn>2</mml:mn> <mml:mo stretchy="false">]</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) <inline-formula> <mml:math id="mm2040" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">[</mml:mo> <mml:mn>1</mml:mn> <mml:mo>,</mml:mo> <mml:mn>2</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> <mml:mo stretchy="false">]</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>d</bold>) <inline-formula> <mml:math id="mm2041" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">[</mml:mo> <mml:mn>1</mml:mn> <mml:mo>,</mml:mo> <mml:mn>2</mml:mn> 
<mml:mo>,</mml:mo> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> <mml:mo stretchy="false">]</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) <inline-formula> <mml:math id="mm2042" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">[</mml:mo> <mml:mn>1</mml:mn> <mml:mo>,</mml:mo> <mml:mn>2</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>5</mml:mn> <mml:mo stretchy="false">]</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>f</bold>) <inline-formula> <mml:math id="mm2043" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">[</mml:mo> <mml:mn>1</mml:mn> <mml:mo>,</mml:mo> <mml:mn>2</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>5</mml:mn> <mml:mo>,</mml:mo> <mml:mn>6</mml:mn> <mml:mo stretchy="false">]</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>.</p>
Full article ">Figure 2
<p>A wheel graph <italic>G</italic>, two open mix-neighborhood subdigraphs <inline-formula> <mml:math id="mm2084" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>1</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula> and <inline-formula> <mml:math id="mm2085" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>2</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>, and the three relevant nodes sets generated by the boolean operations of <inline-formula> <mml:math id="mm2086" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>1</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula> and <inline-formula> <mml:math id="mm2087" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>2</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>. 
(<bold>a</bold>) A wheel digraph <italic>G</italic>; (<bold>b</bold>) The open mix-neighborhood subdigraph <inline-formula> <mml:math id="mm2088" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>1</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The open mix-neighborhood subdigraph <inline-formula> <mml:math id="mm2089" display="block"> <mml:semantics> <mml:mrow> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>2</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>d</bold>) <inline-formula> <mml:math id="mm2090" display="block"> <mml:semantics> <mml:mrow> <mml:mi>V</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>1</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>∩</mml:mo> <mml:mi>V</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>2</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>=</mml:mo> <mml:mrow> <mml:mo stretchy="false">{</mml:mo> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> <mml:mo stretchy="false">}</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) <inline-formula> <mml:math id="mm2091" display="block"> <mml:semantics> <mml:mrow> 
<mml:mi>V</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>1</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>−</mml:mo> <mml:mi>V</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>2</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>=</mml:mo> <mml:mrow> <mml:mo stretchy="false">{</mml:mo> <mml:mn>5</mml:mn> <mml:mo>,</mml:mo> <mml:mn>6</mml:mn> <mml:mo>,</mml:mo> <mml:mn>7</mml:mn> <mml:mo stretchy="false">}</mml:mo> </mml:mrow> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>; (<bold>f</bold>) <inline-formula> <mml:math id="mm2092" display="block"> <mml:semantics> <mml:mrow> <mml:mi>V</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>2</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>−</mml:mo> <mml:mi>V</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msup> <mml:mi>N</mml:mi> <mml:mrow> <mml:mo>+</mml:mo> <mml:mo>+</mml:mo> </mml:mrow> </mml:msup> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>1</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>=</mml:mo> <mml:mo>∅</mml:mo> </mml:mrow> </mml:semantics> </mml:math> </inline-formula>.</p>
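The set identities in panels (d) to (f) pin down both vertex sets: since the difference in panel (f) is empty and the intersection is {3, 4}, the caption implies V(N++(2)) = {3, 4} and V(N++(1)) = {3, 4, 5, 6, 7}. A minimal Python sketch of the three Boolean operations, using the vertex sets read off from this caption (not computed from the article's definition of N++):

```python
# Vertex sets as implied by the caption's panels (d)-(f);
# the definition of N++ itself is given in the article, not re-derived here.
v1 = {3, 4, 5, 6, 7}   # V(N++(1))
v2 = {3, 4}            # V(N++(2))

assert v1 & v2 == {3, 4}      # panel (d): intersection
assert v1 - v2 == {5, 6, 7}   # panel (e): V(N++(1)) - V(N++(2))
assert v2 - v1 == set()       # panel (f): V(N++(2)) - V(N++(1)) is empty
```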
Figure 3
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2491" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2492" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2493" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>1</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The <inline-formula> <mml:math id="mm2494" display="block"> <mml:semantics> <mml:mrow> <mml:mn>3</mml:mn> <mml:mo>×</mml:mo> <mml:mn>3</mml:mn> <mml:mo>×</mml:mo> <mml:mn>3</mml:mn> </mml:mrow> </mml:semantics> </mml:math> </inline-formula> grid digraph <inline-formula> <mml:math id="mm2495" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 27 nodes and 54 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2497" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> <mml:mo>,</mml:mo> <mml:mn>3</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The <inline-formula> <mml:math id="mm2498" display="block"> <mml:semantics> <mml:mrow> <mml:mn>4</mml:mn> <mml:mo>×</mml:mo> <mml:mn>4</mml:mn> <mml:mo>×</mml:mo> <mml:mn>4</mml:mn> </mml:mrow> </mml:semantics> </mml:math> </inline-formula> grid digraph 
<inline-formula> <mml:math id="mm2499" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 64 nodes and 144 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2501" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> <mml:mo>,</mml:mo> <mml:mn>4</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) A digraph <inline-formula> <mml:math id="mm2502" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>1</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 77 nodes and 196 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2504" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>1</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 4
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2506" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>2</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2507" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>12</mml:mn> <mml:mo>,</mml:mo> <mml:mn>12</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2508" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>3</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) A <inline-formula> <mml:math id="mm2509" display="block"> <mml:semantics> <mml:mrow> <mml:mn>10</mml:mn> <mml:mo>×</mml:mo> <mml:mn>10</mml:mn> </mml:mrow> </mml:semantics> </mml:math> </inline-formula> king digraph <inline-formula> <mml:math id="mm2510" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>2</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 100 vertices and 342 directed edges ; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2512" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>2</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) A <inline-formula> <mml:math id="mm2513" display="block"> <mml:semantics> <mml:mrow> <mml:mn>12</mml:mn> <mml:mo>×</mml:mo> <mml:mn>12</mml:mn> </mml:mrow> </mml:semantics> </mml:math> </inline-formula> grid digraph <inline-formula> <mml:math id="mm2514" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>12</mml:mn> <mml:mo>,</mml:mo> <mml:mn>12</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 144 nodes and 264 edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2516" display="block"> <mml:semantics> 
<mml:msub> <mml:mi>G</mml:mi> <mml:mrow> <mml:mn>12</mml:mn> <mml:mo>,</mml:mo> <mml:mn>12</mml:mn> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) A wheel digraph <inline-formula> <mml:math id="mm2517" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>3</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 51 nodes and 100 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2519" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>3</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 5
<p>The <italic>MaxEm</italic> digraphs of <inline-formula> <mml:math id="mm2521" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>4</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2522" display="block"> <mml:semantics> <mml:msub> <mml:mi>T</mml:mi> <mml:mn>1</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2523" display="block"> <mml:semantics> <mml:msub> <mml:mi>T</mml:mi> <mml:mn>2</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) A digraph <inline-formula> <mml:math id="mm2524" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>4</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 50 nodes and 90 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2526" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>4</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) A directed tree <inline-formula> <mml:math id="mm2527" display="block"> <mml:semantics> <mml:msub> <mml:mi>T</mml:mi> <mml:mn>1</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 39 nodes and 38 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2529" display="block"> <mml:semantics> <mml:msub> <mml:mi>T</mml:mi> <mml:mn>1</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) A directed tree <inline-formula> <mml:math id="mm2530" display="block"> <mml:semantics> <mml:msub> <mml:mi>T</mml:mi> <mml:mn>2</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 42 nodes and 41 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2532" display="block"> <mml:semantics> <mml:msub> <mml:mi>T</mml:mi> <mml:mn>2</mml:mn> </mml:msub> 
</mml:semantics> </mml:math> </inline-formula>.</p>
Figure 6
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2534" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>5</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2535" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>6</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2536" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>7</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) A digraph <inline-formula> <mml:math id="mm2537" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>5</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 22 nodes and 37 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2539" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>5</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) A digraph <inline-formula> <mml:math id="mm2540" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>6</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 53 nodes and 80 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2542" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>6</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> ; (<bold>e</bold>) A graph <inline-formula> <mml:math id="mm2543" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>7</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 49 nodes and 78 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2545" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>7</mml:mn> </mml:msub> 
</mml:semantics> </mml:math> </inline-formula>.</p>
Figure 7
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2547" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>8</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2548" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>9</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2549" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>10</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The Doyle digraph <inline-formula> <mml:math id="mm2550" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>8</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 27 nodes and 54 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2552" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>8</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The Clebsch digraph <inline-formula> <mml:math id="mm2553" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>9</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 16 nodes and 40 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2555" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>9</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The 4-hypercube digraph <inline-formula> <mml:math id="mm2556" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>10</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 16 nodes and 32 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2558" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> 
<mml:mn>10</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 8
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2560" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>11</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2561" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>12</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2562" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>13</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The coxeter digraph <inline-formula> <mml:math id="mm2563" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>11</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 28 nodes and 42 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2565" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>11</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The Dyck digraph <inline-formula> <mml:math id="mm2566" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>12</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 32 vertices and 48 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2568" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>12</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) A Shrikhande digraph <inline-formula> <mml:math id="mm2569" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>13</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 16 vertices and 48 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2571" display="block"> <mml:semantics> <mml:msub> 
<mml:mi>G</mml:mi> <mml:mn>13</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 9
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2573" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>14</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2574" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>15</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2575" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>16</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The 6th order cube-connected cycle digraph <inline-formula> <mml:math id="mm2576" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>14</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 24 vertices and 36 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2578" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>14</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) A triangle-replaced digraph <inline-formula> <mml:math id="mm2579" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>15</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 30 nodes and 45 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2581" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>15</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Thomassen digraph <inline-formula> <mml:math id="mm2582" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>16</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 34 vertices and 52 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2584" display="block"> 
<mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>16</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 10
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2586" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>17</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2587" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>18</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2588" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>19</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The musical digraph <inline-formula> <mml:math id="mm2589" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>17</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 24 nodes and 60 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2591" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>17</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The 12-crossed prism digraph <inline-formula> <mml:math id="mm2592" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>18</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 24 nodes and 36 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2594" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>18</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Icosidodecahedral digraph <inline-formula> <mml:math id="mm2595" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>19</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 30 nodes and 60 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2597" display="block"> <mml:semantics> 
<mml:msub> <mml:mi>G</mml:mi> <mml:mn>19</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 11
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2599" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>20</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2600" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>21</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2601" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>22</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The 7-antiprism digraph <inline-formula> <mml:math id="mm2602" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>20</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 14 vertices and 28 edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2604" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>20</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) A fullerene digraph <inline-formula> <mml:math id="mm2605" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>21</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 24 vertices and 36 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2607" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>21</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The great rhombicuboctahedron digraph <inline-formula> <mml:math id="mm2608" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>22</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 48 vertices and 72 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2610" display="block"> <mml:semantics> 
<mml:msub> <mml:mi>G</mml:mi> <mml:mn>22</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 12
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2612" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>23</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2613" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>24</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2614" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>25</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) A Hamiltonian digraph <inline-formula> <mml:math id="mm2615" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>23</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 20 nodes and 30 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2617" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>23</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The Folkman digraph <inline-formula> <mml:math id="mm2618" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>24</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 20 nodes and 40 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2620" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>24</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The snark digraph <inline-formula> <mml:math id="mm2621" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>25</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 20 vertices and 30 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2623" display="block"> <mml:semantics> <mml:msub> 
<mml:mi>G</mml:mi> <mml:mn>25</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 13
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2625" display="block"> <mml:semantics> <mml:msub> <mml:mi>K</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>5</mml:mn> <mml:mo>,</mml:mo> <mml:mn>5</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2626" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>26</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2627" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>27</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The complete bipartite digraph <inline-formula> <mml:math id="mm2628" display="block"> <mml:semantics> <mml:msub> <mml:mi>K</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>5</mml:mn> <mml:mo>,</mml:mo> <mml:mn>5</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 10 nodes and 25 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2630" display="block"> <mml:semantics> <mml:msub> <mml:mi>K</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:mn>5</mml:mn> <mml:mo>,</mml:mo> <mml:mn>5</mml:mn> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The triangular digraph <inline-formula> <mml:math id="mm2631" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>26</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 10 nodes and 30 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2633" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>26</mml:mn> </mml:msub> </mml:semantics> </mml:math> 
</inline-formula>; (<bold>e</bold>) A generalized quadrangle digraph <inline-formula> <mml:math id="mm2634" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>27</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 15 nodes and 45 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2636" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>27</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 14
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2638" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>28</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2639" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>29</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2640" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>30</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The 6-Andrásfai digraph <inline-formula> <mml:math id="mm2641" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>28</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 17 nodes and 51 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2643" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>28</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The 4-dimensional Keller digraph <inline-formula> <mml:math id="mm2644" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>29</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 16 nodes and 46 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2646" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>29</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The <inline-formula> <mml:math id="mm2647" display="block"> <mml:semantics> <mml:mrow> <mml:mn>6</mml:mn> <mml:mo>×</mml:mo> <mml:mn>6</mml:mn> </mml:mrow> </mml:semantics> </mml:math> </inline-formula> knight digraph <inline-formula> <mml:math id="mm2648" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>30</mml:mn> </mml:msub> </mml:semantics> 
</mml:math> </inline-formula> with 36 vertices and 80 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2650" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>30</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 15
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2652" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>31</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2653" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>32</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2654" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>33</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The Loupekine snarks digraph <inline-formula> <mml:math id="mm2655" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>31</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 22 nodes and 33 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2657" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>31</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The Errera digraph <inline-formula> <mml:math id="mm2658" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>32</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 17 nodes and 45 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2660" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>32</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Sierpinski sieve digraph <inline-formula> <mml:math id="mm2661" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>33</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 42 nodes and 72 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2663" display="block"> <mml:semantics> 
<mml:msub> <mml:mi>G</mml:mi> <mml:mn>33</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Figure 16
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2665" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>34</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2666" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>35</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2667" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>36</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The Grinberg digraph <inline-formula> <mml:math id="mm2668" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>34</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 44 nodes and 67 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2670" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>34</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) A digraph <inline-formula> <mml:math id="mm2671" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>35</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 38 nodes and 57 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2673" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>35</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Grinberg digraph <inline-formula> <mml:math id="mm2674" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>36</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 42 vertices and 63 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2676" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> 
<mml:mn>36</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Full article ">Figure 17
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2678" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>37</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2679" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>38</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2680" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>39</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) A pentagonal icositetrahedral digraph <inline-formula> <mml:math id="mm2681" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>37</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 38 nodes and 60 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2683" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>37</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The Faulkner-Younger digraph <inline-formula> <mml:math id="mm2684" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>38</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 42 vertices and 62 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2686" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>38</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Faulkner-Younger digraph <inline-formula> <mml:math id="mm2687" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>39</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 44 nodes and 65 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2689" display="block"> 
<mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>39</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Full article ">Figure 18
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2691" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>40</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2692" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>41</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2693" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>42</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The Celmins Swart snarks digraph <inline-formula> <mml:math id="mm2694" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>40</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 26 vertices and 39 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2696" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>40</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The truncated octahedron digraph <inline-formula> <mml:math id="mm2697" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>41</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 24 nodes and 36 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2699" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>41</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Nauru digraph <inline-formula> <mml:math id="mm2700" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>42</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 24 nodes and 36 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2702" display="block"> <mml:semantics> 
<mml:msub> <mml:mi>G</mml:mi> <mml:mn>42</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Full article ">Figure 19
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2704" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>43</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2705" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>44</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2706" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>45</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The Wiener-Araya digraph <inline-formula> <mml:math id="mm2707" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>43</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 42 nodes and 67 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2709" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>43</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The Zamfirescu digraph <inline-formula> <mml:math id="mm2710" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>44</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 48 nodes and 76 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2712" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>44</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Folkman digraph <inline-formula> <mml:math id="mm2713" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>45</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 20 nodes and 40 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2715" display="block"> <mml:semantics> <mml:msub> 
<mml:mi>G</mml:mi> <mml:mn>45</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Full article ">Figure 20
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2717" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>46</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2718" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>47</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2719" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>48</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The 24-cell digraph <inline-formula> <mml:math id="mm2720" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>46</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 24 nodes and 94 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2722" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>46</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) A disconnected graph <inline-formula> <mml:math id="mm2723" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>47</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 12 nodes and 12 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2725" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>47</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) A disconnected digraph <inline-formula> <mml:math id="mm2726" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>48</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> that has four connected components and a total of 100 nodes and 160 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic>&gt; digraph of <inline-formula> <mml:math id="mm2728" 
display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>48</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Full article ">Figure 21
<p>The <italic>MaxEm</italic> digraphs of three digraphs <inline-formula> <mml:math id="mm2730" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>49</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm2731" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>50</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>, and <inline-formula> <mml:math id="mm2732" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>51</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>. (<bold>a</bold>) The projective plane digraph <inline-formula> <mml:math id="mm2733" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>49</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 26 nodes and 52 directed edges; (<bold>b</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2735" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>49</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>c</bold>) The Miyazaki digraph <inline-formula> <mml:math id="mm2736" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>50</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 40 nodes and 60 directed edges; (<bold>d</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2738" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>50</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>; (<bold>e</bold>) The Cubic Hypohamiltonian digraph <inline-formula> <mml:math id="mm2739" display="block"> <mml:semantics> <mml:msub> <mml:mi>G</mml:mi> <mml:mn>51</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula> with 44 nodes and 75 directed edges; (<bold>f</bold>) The <italic>MaxEm</italic> digraph of <inline-formula> <mml:math id="mm2741" display="block"> <mml:semantics> 
<mml:msub> <mml:mi>G</mml:mi> <mml:mn>51</mml:mn> </mml:msub> </mml:semantics> </mml:math> </inline-formula>.</p>
Full article ">
463 KiB  
Article
Sequential Batch Design for Gaussian Processes Employing Marginalization †
by Roland Preuss and Udo Von Toussaint
Entropy 2017, 19(2), 84; https://doi.org/10.3390/e19020084 - 21 Feb 2017
Cited by 1 | Viewed by 4433
Abstract
Within the Bayesian framework, we utilize Gaussian processes for parametric studies of long-running computer codes. Since the simulations are expensive, it is necessary to exploit the computational budget in the best possible manner. Employing the sum over variances, which serves as an indicator of the quality of the fit, as the utility function, we establish an optimized and automated sequential parameter selection procedure. However, it is also often desirable to utilize the parallel running capabilities of present computer technology and abandon the sequential parameter selection for a faster overall turn-around time (wall-clock time). This paper proposes to achieve this by marginalizing over the expected outcomes at optimized test points in order to set up a pool of starting values for batch execution. For a one-dimensional test case, the numerical results are validated against the analytical solution. Eventually, a systematic convergence study demonstrates the advantage of the optimized approach over randomly chosen parameter settings. Full article
(This article belongs to the Special Issue Selected Papers from MaxEnt 2016)
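The variance-as-utility selection step described in the abstract can be made concrete with a small sketch: a zero-mean Gaussian process is conditioned on the already-evaluated parameter settings, and the next design point is the candidate where the predictive variance is largest. The squared-exponential kernel, length-scale, and candidate grid below are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.2):
    """Squared-exponential covariance between two 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length_scale) ** 2)

def posterior_variance(x_train, x_test, noise=1e-6):
    """Predictive variance of a zero-mean GP at x_test, given inputs x_train."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    v = np.linalg.solve(K, Ks)
    return np.diag(Kss - Ks.T @ v)

def next_design_point(x_train, x_candidates):
    """Variance-as-utility selection: pick the candidate where the fit is worst."""
    var = posterior_variance(x_train, x_candidates)
    return x_candidates[np.argmax(var)]

x_train = np.array([0.0, 0.5, 1.0])          # already-evaluated parameter settings
x_cand = np.linspace(0.0, 1.0, 101)          # region of interest
x_next = next_design_point(x_train, x_cand)  # lands far (in kernel terms) from data
```

A batch variant in the spirit of the paper would then marginalize over the unknown outcome at `x_next` (e.g., plug in the GP posterior mean) before selecting further points, instead of waiting for the simulation to finish.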
Show Figures

Figure 1: One-dimensional test case: expectation values of the target function (prediction) from the Markov chain Monte Carlo (MCMC) calculation, where the grey-shaded area represents the uncertainty range. The utility (scaled and normalized) is plotted at the bottom of each figure. Its maximum U_max(x_opt) marks the next input vector added to the pool of data. Top to bottom: increasing number of optimized points in the input data pool. (a–c) Without a marginalized point; (d–f) with a marginalized point at x̂_1 = 0.775.

Figure 2: Deviation of the Gaussian process results from the exact model outcome as a function of the number N_opt of obtained data at optimized parameter settings. Dotted line: without marginalized input values; dashed line: with one marginalized input value. Both show quadratic decay behaviour.

Figure 3: Two test cases for one-dimensional input vectors, N_ROI = 81. (a–c) Gaussian model; (d–f) damped cosine model. Top row (a,d): model (solid line), initial input data value (filled circle), optimized approach (dotted line) vs. randomly chosen parameter settings (dashed line). Panels (b,c,e,f), with the number of added points to the right: at the top of each figure, hyper-parameters λ and σ_n; at the bottom, total difference between target and model for σ_d = 0.1 (dotted line), σ_d = 0.01 (dashed line), and σ_d = 0.001 (dot-dashed line). Middle row (b,e): optimized approach. Bottom row (c,f): random parameter setting. The solid lines represent dedicated decay powers.

Figure 4: Two test cases for two-dimensional input vectors, N_ROI = 21 × 21 (ROI: region of interest). (a–c) Gaussian model; (d–f) damped cosine model. Top row (a,d): model, with five initial input vectors (plus signs in the base). Panels (b,c,e,f), with the number of added points to the right: at the top of each figure, hyper-parameters λ and σ_n; at the bottom, total difference between target and model for σ_d = 0.1 (dotted line), σ_d = 0.01 (dashed line), and σ_d = 0.001 (dot-dashed line). Middle row (b,e): optimized approach. Bottom row (c,f): random parameter setting. The solid lines represent dedicated decay powers. The small inset in (e) shows the deviations of target and model for σ_d = 0.01 and σ_d = 0.001, which settle on a square-root behavior around 400 points.
785 KiB  
Article
Breakdown Point of Robust Support Vector Machines
by Takafumi Kanamori, Shuhei Fujiwara and Akiko Takeda
Entropy 2017, 19(2), 83; https://doi.org/10.3390/e19020083 - 21 Feb 2017
Cited by 10 | Viewed by 5666
Abstract
Support vector machine (SVM) is one of the most successful learning methods for solving classification problems. Despite its popularity, SVM has the serious drawback that it is sensitive to outliers in training samples. The penalty on misclassification is defined by a convex loss called the hinge loss, and the unboundedness of the convex loss causes the sensitivity to outliers. To deal with outliers, robust SVMs have been proposed by replacing the convex loss with a non-convex bounded loss called the ramp loss. In this paper, we study the breakdown point of robust SVMs. The breakdown point is a robustness measure that is the largest amount of contamination such that the estimated classifier still gives information about the non-contaminated data. The main contribution of this paper is to show an exact evaluation of the breakdown point of robust SVMs. For learning parameters such as the regularization parameter, we derive a simple formula that guarantees the robustness of the classifier. When the learning parameters are determined with a grid search using cross-validation, our formula works to reduce the number of candidate search points. Furthermore, the theoretical findings are confirmed in numerical experiments. We show that the statistical properties of robust SVMs are well explained by a theoretical analysis of the breakdown point. Full article
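The contrast the abstract draws between the unbounded hinge loss and the bounded ramp loss can be written down directly. A minimal sketch follows; the truncation level s = −1 is one common convention and an assumption here, not necessarily the paper's exact choice:

```python
def hinge_loss(margin):
    """Convex hinge loss: grows without bound as the margin y*f(x) decreases."""
    return max(0.0, 1.0 - margin)

def ramp_loss(margin, s=-1.0):
    """Ramp loss: the hinge loss truncated at level 1 - s, hence bounded.

    s = -1 is an illustrative truncation point (a common choice)."""
    return min(1.0 - s, max(0.0, 1.0 - margin))

# A gross outlier (very negative margin) dominates the hinge loss,
# but contributes only a bounded amount to the ramp loss.
outlier_margin = -100.0
print(hinge_loss(outlier_margin))  # 101.0
print(ramp_loss(outlier_margin))   # 2.0
```

Because each sample's contribution is capped, no single contaminated point can drag the estimated classifier arbitrarily far, which is exactly the property the breakdown-point analysis quantifies.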
Show Figures

Figure 1: Distribution of the negative margins r_i = −y_i(f(x_i) + b), i ∈ [m], for a fixed decision function f(x) + b.

Figure 2: (a) Breakdown point of (f_D, b_D) given by robust (ν,μ)-SVM with a bounded kernel; (b) breakdown point of (f_D, b_D) given by robust (ν,μ)-SVM with an unbounded kernel.

Figure 3: Plot of a contaminated dataset of size m = 200. The outlier ratio is 0.05, and the asterisks (∗) denote the outliers. In the panels of the upper (resp. lower) row, outliers are added by randomly flipping labels (resp. flipping positive labels). The dashed line is the true decision boundary, and the solid line is the decision boundary estimated using ν-SVM with ν = 0.3 in (a,d); robust (ν,μ)-SVM with (ν,μ) = (0.3, 0.05) in (b,e); and (ν,μ) = (0.3, 0.1) in (c,f). The triangles denote the samples to which η_i = 0 is assigned.

Figure 4: (a) Original data D; (b) contaminated data D′ ∈ 𝒟_μm. In this example, the sample size is m = 200 and the outlier ratio is μ = 0.1.

Figure 5: Plots of maximum norms and worst-case test errors. The top (bottom) panels show the results for a Gaussian (linear) kernel. Red points mark the top 50 percent of values; the asterisks (∗) are points that violate the inequality ν − μ ≤ 2(r − 2μ). (a) Gaussian kernel; (b) linear kernel.

Figure 6: Index sets Ĩ_± and the value of η′_i defined in the proof of Lemma A2.
2703 KiB  
Article
Towards Operational Definition of Postictal Stage: Spectral Entropy as a Marker of Seizure Ending
by Ancor Sanz-García, Lorena Vega-Zelaya, Jesús Pastor, Rafael G. Sola and Guillermo J. Ortega
Entropy 2017, 19(2), 81; https://doi.org/10.3390/e19020081 - 21 Feb 2017
Cited by 6 | Viewed by 6070
Abstract
The postictal period is characterized by several neurological alterations, but its exact limits are hard to determine clinically, or even electroencephalographically, in most cases. We aim to provide quantitative functions or conditions with a clearly distinguishable behavior during the ictal-postictal transition. Spectral methods were used to analyze foramen ovale electrode (FOE) recordings during the ictal/postictal transition in 31 seizures of 15 patients with strictly unilateral drug-resistant temporal lobe epilepsy. In particular, density of links (DoL), spectral entropy (SE), and relative spectral power (RP) were analyzed. Partial simple seizures are accompanied by an ipsilateral increase in the relative Delta power and a decrease in synchronization in 66% and 91% of the cases, respectively, after seizure offset. Complex partial seizures showed a decrease in the spectral entropy in 94% of cases on both the ipsilateral and contralateral sides (100% and 73%, respectively), mainly due to an increase of relative Delta activity. Seizure offset is defined as the moment at which the "seizure termination mechanisms" actually end, which is quantified in the spectral entropy value. We propose as a definition for the start of the postictal period the time when the ipsilateral SE reaches its first global minimum. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
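Spectral entropy of the kind analyzed here is the Shannon entropy of the normalized power spectrum of an EEG window. A minimal sketch follows; the FFT-based estimator, the synthetic signals, and the five-second, 1000-point window at 200 Hz (taken from the figure descriptions) are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def spectral_entropy(signal, normalize=True):
    """Shannon entropy of the normalized power spectral density of a 1-D signal."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    p = psd / psd.sum()                      # normalize power to a probability distribution
    p = p[p > 0]                             # convention: 0 * log(0) = 0
    h = -np.sum(p * np.log2(p))
    if normalize:                            # scale to [0, 1] by the maximum entropy
        h /= np.log2(len(psd))
    return h

rng = np.random.default_rng(0)
t = np.arange(1000) / 200.0                  # five seconds sampled at 200 Hz
noise = rng.standard_normal(1000)            # broadband signal: power spread over bins
delta = np.sin(2 * np.pi * 2.0 * t)          # dominant 2 Hz (Delta-band) rhythm
se_noise = spectral_entropy(noise)           # close to 1 (flat spectrum)
se_delta = spectral_entropy(delta)           # close to 0 (power in one bin)
```

This mirrors the paper's observation: a postictal surge of relative Delta power concentrates spectral power in a narrow band, which drives the SE down.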
Show Figures

Figure 1: Schematic representation of the implemented procedure. Upper panel: at the left, EEG recordings from scalp (16 channels) and FOE (12 channels), sampled at 200 Hz (only a subset of channels, in a bipolar montage, is displayed). In every temporal window (black rectangles) of 1000 data points (five seconds), different measures are calculated: DoL, SE and RP in each band. The values obtained for each measure produce a new time series sampled at five seconds. Lower panel: for each measure time series, its changes are evaluated with the SMD index (Equation (5)), using six values of the measure in a preictal window of 30 s (red rectangles) and several 30-second windows located at different positions during the postictal stage.

Figure 2: Twenty minutes of the temporal dynamics of DoL and SE, 10 min before seizure onset (vertical thick line) and 10 min after, for patient G. Vertical dashed lines represent five-second temporal windows with high excitability. (A) DoL constructed upon the whole network recordings (scalp + FOE); (B) mesial SE. The upper panel corresponds to the left FOE, the middle panel to the right FOE, and the bottom panel to the total (both FOE).

Figure 3: (A) Twenty minutes of the temporal dynamics of the RP for each frequency band of patient G. In each panel, the RP of Delta, Theta, Alpha, Beta, and Gamma for the left FOE, right FOE and total (both FOE) are displayed. Warm colors mean higher percentages of spectral power. A thick vertical line represents seizure onset. Vertical dashed lines represent five-second temporal windows with high excitability. (B) Changes in RP and SE (bottom panel), in the left and right FOE, quantified by the SMD index, comparing values in a preictal 30-second temporal window with the values taken in several postictal 30-second windows. Reddish hues imply a decrease and bluish hues an increase of values with respect to the preictal stage.

Figure 4: Temporal dynamics quantified by the SMD in the DoL and SE for all the PS and CP seizures. (A) SMD of DoL for the ipsilateral and contralateral sides of PS seizures; (B) SMD of DoL for the ipsilateral and contralateral sides of CP seizures; (C) SMD of SE for the ipsilateral and contralateral sides of PS seizures; (D) SMD of SE for the ipsilateral and contralateral sides of CP seizures. All graphs represent the mean duration of the ictal period (shaded area) and 15 min after seizure onset. The mean and S.D. are shown. *** p < 0.001 vs. contralateral side.

Figure 5: Temporal dynamics quantified by the SMD in the RP of each frequency band for all the PS seizures. (A) SMD of RP for the contralateral side of PS seizures; (B) SMD of RP for the ipsilateral side of PS seizures. All graphs represent the mean duration of the ictal period (shaded area) and 15 min after seizure onset. The mean and S.D. are shown. * p < 0.05 Delta vs. Alpha, + p < 0.05 Delta vs. Gamma.

Figure 6: Temporal dynamics quantified by the SMD in the RP of each frequency band for all the CP seizures. (A) SMD of RP for the contralateral side of CP seizures; (B) SMD of RP for the ipsilateral side of CP seizures. All graphs represent the mean duration of the ictal period (shaded area) and 15 min after seizure onset. The mean and S.D. are shown. *** p < 0.001 vs. all the other frequencies.

Figure 7: Temporal dynamics of the slope of the SMD of the RP of each frequency band for all the PS seizures. Slope of the SMD of the relative spectral power for the frequency bands (Delta, Theta, Alpha, Beta, and Gamma) of the ipsilateral and contralateral sides. All graphs represent the ictal period and 5 min after seizure onset. The mean and S.D. are shown. * p < 0.05 vs. contralateral side.
1042 KiB  
Article
A Risk-Free Protection Index Model for Portfolio Selection with Entropy Constraint under an Uncertainty Framework
by Jianwei Gao and Huicheng Liu
Entropy 2017, 19(2), 80; https://doi.org/10.3390/e19020080 - 21 Feb 2017
Cited by 8 | Viewed by 4104
Abstract
This paper aims to develop a risk-free protection index model for portfolio selection based on uncertainty theory. First, the returns of risk assets are assumed to be uncertain variables subject to reputable experts' evaluations. Second, under this assumption and in combination with the risk-free interest rate, we define a risk-free protection index (RFPI), which measures the degree of protection when a loss on the risk assets occurs. Third, noting that the proportion entropy serves as a complementary means to reduce risk via a preset diversification requirement, we put forward a risk-free protection index model with an entropy constraint under an uncertainty framework by applying the RFPI, Huang's risk index model (RIM), and the mean-variance-entropy model (MVEM). Furthermore, to solve our portfolio model, an algorithm is given to estimate the uncertain expected return and standard deviation of different risk assets by applying the Delphi method. Finally, an example is provided to show that the risk-free protection index model performs better than the traditional MVEM and RIM. Full article
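The role of a proportion-entropy constraint can be illustrated in a few lines: the Shannon entropy of the portfolio weights is maximal for equal weights, so imposing a lower bound on it forces the portfolio to diversify. The weights and the threshold below are made-up illustrations, not values from the paper:

```python
import numpy as np

def proportion_entropy(weights):
    """Shannon entropy of portfolio weights; maximal when all weights are equal."""
    w = np.asarray(weights, dtype=float)
    w = w[w > 0]                      # convention: 0 * log(0) = 0
    return -np.sum(w * np.log(w))

concentrated = [0.97, 0.01, 0.01, 0.01]   # almost everything in one asset
diversified = [0.25, 0.25, 0.25, 0.25]    # equal split over four assets

h_conc = proportion_entropy(concentrated)
h_div = proportion_entropy(diversified)   # equals ln(4), the maximum for 4 assets
# An entropy constraint such as H(w) >= 0.8 * ln(4) (illustrative threshold)
# would reject the concentrated portfolio while admitting the diversified one.
```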
Show Figures

Figure 1: Membership function of a security return ξ = (−0.1, 0.5, 1.1).

Figure 2: The relationship between the RFPI and the weight of the risk-free asset.

Figure 3: The relationship between the RFPI and the expected return rates.

Figure 4: The relationship between the RFPI and the VaRU.

Figure 5: The relationship between the RFPI and the variance.
254 KiB  
Article
Admitting Spontaneous Violations of the Second Law in Continuum Thermomechanics
by Martin Ostoja-Starzewski
Entropy 2017, 19(2), 78; https://doi.org/10.3390/e19020078 - 21 Feb 2017
Cited by 5 | Viewed by 3970
Abstract
We survey new extensions of continuum mechanics incorporating spontaneous violations of the Second Law (SL), which involve the viscous flow and heat conduction. First, following an account of the Fluctuation Theorem (FT) of statistical mechanics that generalizes the SL, the irreversible entropy is shown to evolve as a submartingale. Next, a stochastic thermomechanics is formulated consistent with the FT, which, according to a revision of classical axioms of continuum mechanics, must be set up on random fields. This development leads to a reformulation of thermoviscous fluids and inelastic solids. These two unconventional constitutive behaviors may jointly occur in nano-poromechanics. Full article
(This article belongs to the Special Issue Limits to the Second Law of Thermodynamics: Experiment and Theory)
2098 KiB  
Article
User-Centric Key Entropy: Study of Biometric Key Derivation Subject to Spoofing Attacks
by Lavinia Mihaela Dinca and Gerhard Hancke
Entropy 2017, 19(2), 70; https://doi.org/10.3390/e19020070 - 21 Feb 2017
Cited by 9 | Viewed by 7492
Abstract
Biometric data can be used as input for PKI key pair generation. The concept of not saving the private key is very appealing, but the implementation of such a system shouldn’t be rushed because it might prove less secure than the current PKI infrastructure. [...] Read more.
Biometric data can be used as input for PKI key pair generation. The concept of not saving the private key is very appealing, but the implementation of such a system shouldn’t be rushed because it might prove less secure than the current PKI infrastructure. One biometric characteristic can be easily spoofed, so it was believed that multi-modal biometrics would offer more security, because spoofing two or more biometrics would be very hard. This notion of increased security in multi-modal biometric systems was disproved for authentication and matching, with studies showing that multi-modal biometric systems are not only no more secure, but also introduce additional vulnerabilities. This paper is a study on the implications of spoofing biometric data for retrieving the derived key. We demonstrate that spoofed biometrics can yield the same key, which in turn will lead an attacker to obtain the private key. A practical implementation is proposed using fingerprint and iris as biometrics and the fuzzy extractor for biometric key extraction. Our experiments show what happens when the biometric data is spoofed for both uni-modal and multi-modal systems. In the case of the multi-modal system, tests were performed when spoofing one biometric or both. We provide a detailed analysis of every scenario in regard to successful tests and overall key entropy. Our paper defines a biometric PKI scenario and an in-depth security analysis for it. The analysis can be viewed as a blueprint for implementations of future similar systems, because it highlights the main security vulnerabilities for bioPKI. The analysis is not constrained to the biometric part of the system, but covers CA security, sensor security, communication interception, RSA encryption vulnerabilities regarding key entropy, and much more. Full article
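The Hamming-distance comparisons underlying such spoofing tests can be sketched as follows (the bit strings and the correction threshold `t` are illustrative; a real fuzzy extractor operates on extracted minutiae or iris codes):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of differing bits between two equal-length bit strings."""
    if len(a) != len(b):
        raise ValueError("bit strings must have equal length")
    return sum(x != y for x, y in zip(a, b))

# A fuzzy extractor reproduces the enrolled key only when a fresh reading
# lies within the error-correction capacity t of the enrolled template,
# so a spoofed sample close enough to the original yields the same key.
enrolled = "1011010011"
spoofed = "1011010111"   # one flipped bit
t = 2                    # hypothetical correction capacity
same_key = hamming_distance(enrolled, spoofed) <= t
```

This is why low Hamming distances between genuine and spoofed readings translate directly into key recovery by an attacker.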
Show Figures

Figure 1
<p>Fingerprint minutiae extraction process. (<b>a</b>) Enhanced fingerprint; (<b>b</b>) Binarized fingerprint; (<b>c</b>) Thinned with minutiae; (<b>d</b>) Selected minutiae; (<b>e</b>) ROI; (<b>f</b>) Minutiae with orientation.</p>
Full article ">Figure 2
<p>Iris features extraction process. (<b>a</b>) Iris segmentation; (<b>b</b>) Iris normalised; (<b>c</b>) Iris normalised noise.</p>
Full article ">Figure 3
<p>Results for keys derived from fingerprint. (<b>a</b>) KEY length 110 bits—Fingerprint hamming distance SET 1 all tests; (<b>b</b>) KEY length 110 bits—Fingerprint hamming distance SET 2 all tests; (<b>c</b>) KEY length 322 bits—Fingerprint hamming distance SET 1 all tests; (<b>d</b>) KEY length 322 bits—Fingerprint hamming distance SET 2 all tests; (<b>e</b>) KEY length 496 bits—Fingerprint hamming distance SET 1 all tests; (<b>f</b>) KEY length 496 bits—Fingerprint hamming distance SET 2 all tests.</p>
Full article ">Figure 4
<p>Results for keys derived from iris. (<b>a</b>) KEY length 110 bits—Iris hamming distance SET 1 all tests; (<b>b</b>) KEY length 110 bits—Iris hamming distance SET 2 all tests; (<b>c</b>) KEY length 322 bits—Iris hamming distance SET 1 all tests; (<b>d</b>) KEY length 322 bits—Iris hamming distance SET 2 all tests; (<b>e</b>) KEY length 496 bits—Iris hamming distance SET 1 all tests; (<b>f</b>) KEY length 496 bits—Iris hamming distance SET 2 all tests.</p>
Full article ">Figure 5
<p>Comparative results for multi-biometric keys. (<b>a</b>) OFOI-OFFI all tests—Key 110 bits SET 1; (<b>b</b>) OFOI-OFFI all tests—Key 322 bits SET 1; (<b>c</b>) OFOI-OFFI all tests—Key 496 bits SET 1; (<b>d</b>) OFOI-FFOI all tests—Key 110 bits SET 1; (<b>e</b>) OFOI-FFOI all tests—Key 322 bits SET 1; (<b>f</b>) OFOI-FFOI all tests—Key 496 bits SET 1; (<b>g</b>) OFOI-FFFI all tests—Key 110 bits SET 1; (<b>h</b>) OFOI-FFFI all tests—Key 322 bits SET 1; (<b>i</b>) OFOI-FFFI all tests—Key 496 bits SET 1.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Biometric attack places, as defined by [<a href="#B55-entropy-19-00070" class="html-bibr">55</a>].</p>
Full article ">
530 KiB  
Article
Energy Transfer between Colloids via Critical Interactions
by Ignacio A. Martínez, Clemence Devailly, Artyom Petrosyan and Sergio Ciliberto
Entropy 2017, 19(2), 77; https://doi.org/10.3390/e19020077 - 17 Feb 2017
Cited by 24 | Viewed by 4994
Abstract
We report the observation of a temperature-controlled synchronization of two Brownian particles in a binary mixture close to the critical point of the demixing transition. The two beads are trapped by two optical tweezers whose distance is periodically modulated. We notice that the motion [...] Read more.
We report the observation of a temperature-controlled synchronization of two Brownian particles in a binary mixture close to the critical point of the demixing transition. The two beads are trapped by two optical tweezers whose distance is periodically modulated. We notice that the motion synchronization of the two beads appears when the critical temperature is approached. In contrast, when the fluid is far from its critical temperature, the displacements of the two beads are uncorrelated. Small changes in temperature can radically change the global dynamics of the system. We show that the synchronization is induced by the critical Casimir forces. Finally, we present measurements of the energy transfers inside the system produced by the critical interaction. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Show Figures

Figure 1
<p>Trajectories of beads 1 (blue) and 2 (red): (<b>a</b>) no synchronization, <math display="inline"> <semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>60</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics> </math>; (<b>b</b>) weak synchronization, <math display="inline"> <semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>27</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics> </math>; (<b>c</b>) complete synchronization, <math display="inline"> <semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>40</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics> </math>. The black solid line corresponds to the position of the optical traps versus time. (<b>d</b>) Probability of being in a synchronous state (<math display="inline"> <semantics> <mrow> <msub> <mi>p</mi> <mi>NS</mi> </msub> <mrow> <mo stretchy="false">(</mo> <mi>T</mi> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics> </math>). Notice how the synchronization increases monotonically as the system approaches the critical temperature. The error is purely statistical: <math display="inline"> <semantics> <mrow> <mo>Δ</mo> <msub> <mi>p</mi> <mi>i</mi> </msub> <mrow> <mo stretchy="false">(</mo> <mi>T</mi> <mo stretchy="false">)</mo> </mrow> <mo>=</mo> <mn>1</mn> <mo>/</mo> <msqrt> <mrow> <mi>N</mi> <mo stretchy="false">(</mo> <mi>T</mi> <mo stretchy="false">)</mo> </mrow> </msqrt> </mrow> </semantics> </math>.</p>
Full article ">Figure 2
<p>(<b>a</b>) Total potential <math display="inline"> <semantics> <mrow> <msub> <mi>U</mi> <mi>total</mi> </msub> <mrow> <mo stretchy="false">(</mo> <mi>d</mi> <mo stretchy="false">)</mo> </mrow> </mrow> </semantics> </math> as a function of the distance between the sphere surfaces <math display="inline"> <semantics> <mrow> <mi>d</mi> <mo>=</mo> <msub> <mi>x</mi> <mn>2</mn> </msub> <mo>−</mo> <msub> <mi>x</mi> <mn>1</mn> </msub> <mo>−</mo> <mn>2</mn> <mi>R</mi> </mrow> </semantics> </math>. The potential has been measured using the probability density function <math display="inline"> <semantics> <mrow> <mi>ρ</mi> <mo stretchy="false">(</mo> <mi>d</mi> <mo stretchy="false">)</mo> </mrow> </semantics> </math> of <span class="html-italic">d</span>. It is possible to divide it into three main parts. At small distances, electrostatic repulsion prevents the spheres from sticking together. At larger distances, the optical potential dominates and creates a local minimum of energy. The Casimir potential changes with temperature and allows the critical force to dominate the dynamics of the system. (<b>b</b>) Critical Casimir potential corresponding to <math display="inline"> <semantics> <mrow> <mi>ξ</mi> <mo>≈</mo> <mn>30</mn> </mrow> </semantics> </math> nm, <math display="inline"> <semantics> <mrow> <msub> <mi>T</mi> <mi>c</mi> </msub> <mo>−</mo> <mi>T</mi> <mo>≈</mo> <mn>2</mn> <mo>.</mo> <mn>0</mn> </mrow> </semantics> </math> K, blue circles, and <math display="inline"> <semantics> <mrow> <mi>ξ</mi> <mo>≈</mo> <mn>70</mn> </mrow> </semantics> </math> nm, <math display="inline"> <semantics> <mrow> <msub> <mi>T</mi> <mi>c</mi> </msub> <mo>−</mo> <mi>T</mi> <mo>≈</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> </mrow> </semantics> </math> K, red squares. Solid lines correspond to a fit to the Derjaguin approximation, Equation (<a href="#FD1-entropy-19-00077" class="html-disp-formula">1</a>), keeping the correlation length as a free parameter. 
(Inset) The correlation lengths obtained from the previous fit are represented as a function of the reduced temperature (blue solid squares), while the theoretical evolution is represented by the red solid line <math display="inline"> <semantics> <mrow> <mi>ξ</mi> <mo>=</mo> <msub> <mi>ξ</mi> <mn>0</mn> </msub> <msup> <mi>ε</mi> <mrow> <mo>−</mo> <mi>ν</mi> </mrow> </msup> </mrow> </semantics> </math>, where <math display="inline"> <semantics> <mrow> <mi>ε</mi> <mo>=</mo> <mrow> <mo stretchy="false">(</mo> <msub> <mi>T</mi> <mi>c</mi> </msub> <mo>−</mo> <mi>T</mi> <mo stretchy="false">)</mo> </mrow> <mo>/</mo> <msub> <mi>T</mi> <mi>c</mi> </msub> </mrow> </semantics> </math> is the reduced temperature, <math display="inline"> <semantics> <msub> <mi>ξ</mi> <mn>0</mn> </msub> </semantics> </math> = 1.4 nm is the characteristic correlation length of the mixture, and <span class="html-italic">ν</span> = 0.63 is the universal exponent associated with the transition.</p>
Full article ">Figure 3
<p>Energetics of the system as a function of the reduced temperature <span class="html-italic">ε</span>. Work <math display="inline"> <semantics> <mrow> <mo>〈</mo> <msub> <mi>W</mi> <mi>φ</mi> </msub> <mo>〉</mo> </mrow> </semantics> </math> produced in the collective motion of the particles <math display="inline"> <semantics> <mi>φ</mi> </semantics> </math> (red empty circles). Work <math display="inline"> <semantics> <mrow> <mo>〈</mo> <msub> <mi>W</mi> <mi>r</mi> </msub> <mo>〉</mo> </mrow> </semantics> </math> in the relative motion of the particles <span class="html-italic">r</span> (blue solid squares). <math display="inline"> <semantics> <mrow> <mo>〈</mo> <msub> <mi>W</mi> <mi>φ</mi> </msub> <mo>〉</mo> </mrow> </semantics> </math> shows no dependence on the Casimir force, due to its purely dissipative nature. On the other hand, <math display="inline"> <semantics> <mrow> <mo>〈</mo> <msub> <mi>W</mi> <mi>r</mi> </msub> <mo>〉</mo> </mrow> </semantics> </math> depends on <span class="html-italic">ε</span>, as the Casimir force starts to dominate the dynamics. Horizontal dashed lines correspond to the purely dissipative work in each coordinate. Notice how the higher value of the relative viscosity results in a higher value of <math display="inline"> <semantics> <mrow> <mo>〈</mo> <msub> <mi>W</mi> <mi>r</mi> </msub> <mo>〉</mo> </mrow> </semantics> </math> even far from the critical point.</p>
Full article ">Figure 4
<p>Coupling tensor as a function of the distance. The diagonal (blue solid line) and non-diagonal (green dashed line) terms are represented as a function of the distance between the surfaces over the radius of the bead. The vertical solid black lines represent the interval where our experiment is performed. The hydrodynamic coupling is non-negligible in this range.</p>
Full article ">Figure 5
<p>Drag terms redefined in the new system of coordinates as a function of the distance between the surfaces. The collective drag term (blue dashed line) is almost constant in the experimental range, while the relative drag term (red solid line) has a non-negligible variation in the temperature range.</p>
Full article ">Figure 6
<p>Typical frame of the videocamera used to detect the particle position. In order to characterize the transition point in real time, we analyze the marked square in each of the corners of the image.</p>
Full article ">Figure 7
<p>Variance of the pixels intensities in each of the regions (solid lines) as a function of the reduced temperature. The mean value of the four regions is represented with black empty circles. The behavior is almost constant until the liquid goes through the transition, where the droplets of the different components change the optical properties of the sample.</p>
Full article ">
1126 KiB  
Article
A Comparison of Postural Stability during Upright Standing between Normal and Flatfooted Individuals, Based on COP-Based Measures
by Tsui-Chiao Chao and Bernard C. Jiang
Entropy 2017, 19(2), 76; https://doi.org/10.3390/e19020076 - 16 Feb 2017
Cited by 7 | Viewed by 5959
Abstract
Aging causes foot arches to collapse, possibly leading to foot deformities and falls. This paper proposes a set of measures involving an entropy-based method used for two groups of young adults with dissimilar foot arches to explore and quantify postural stability on a [...] Read more.
Aging causes foot arches to collapse, possibly leading to foot deformities and falls. This paper proposes a set of measures involving an entropy-based method used for two groups of young adults with dissimilar foot arches to explore and quantify postural stability on a force plate in an upright position. Fifty-four healthy young adults aged 18–30 years participated in this study. They were categorized into two groups: normal (37 participants) and flatfooted (17 participants). We collected the center of pressure (COP) displacement trajectories of participants during upright standing on a force plate, in a static position, with eyes open (EO) or eyes closed (EC). These nonstationary time-series signals were quantified using entropy-based measures and traditional measures used to assess postural stability, and the results obtained from these measures were compared. The appropriate combinations of entropy-based measures revealed that, with respect to postural stability, the two groups differed significantly (p < 0.05) under both EO and EC conditions. The traditional, commonly-used COP-based measures only revealed differences under EO conditions. Entropy-based measures are thus suitable for examining differences in postural stability for flatfooted people, and may be used by clinicians after further refinement. Full article
(This article belongs to the Special Issue Multivariate Entropy Measures and Their Applications)
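The multiscale entropy (MSE) analysis applied to the COP signals rests on two steps: coarse-graining the time series at successive scales, then computing sample entropy at each scale. A minimal sketch follows (simplified template counting; the parameter values m = 2 and r = 0.15·SD are common defaults, not taken from the paper):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (the multiscale step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m + 1 points.
    Simplified counting; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()
    def matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        total = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            total += np.sum(dist <= r) - 1  # exclude the self-match
        return total
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# MSE curve: sample entropy of the coarse-grained signal at each scale.
signal = np.random.default_rng(0).standard_normal(1000)
mse = [sample_entropy(coarse_grain(signal, s)) for s in (1, 2, 3)]
```

For uncorrelated noise the MSE curve decays with scale, whereas signals with long-range structure hold their entropy across scales, which is what makes the measure informative for postural sway.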
Show Figures

Figure 1
<p>The participant steps in chalk and then steps on a sheet of paper. The arch index ratio (R) of the footprint is then calculated [<a href="#B2-entropy-19-00076" class="html-bibr">2</a>].</p>
Full article ">Figure 2
<p>Center of pressure (COP) displacement trajectories of a participant standing on a force plate were retrieved for the anterior–posterior (AP) and medial–lateral (ML) directions. The time series signal was then detrended into intrinsic mode functions (IMFs) using the empirical mode decomposition (EMD) method.</p>
Full article ">Figure 3
<p>The overall architecture of assessment procedure for postural stability from the COP displacement trajectory of a participant standing on a force plate in normal foot and flatfooted persons. EC: eyes closed; EO: eyes open; MSE: multiscale entropy; MMSE: multivariate multiscale entropy.</p>
Full article ">
462 KiB  
Article
Information Loss in Binomial Data Due to Data Compression
by Susan E. Hodge and Veronica J. Vieland
Entropy 2017, 19(2), 75; https://doi.org/10.3390/e19020075 - 16 Feb 2017
Cited by 3 | Viewed by 4672
Abstract
This paper explores the idea of information loss through data compression, as occurs in the course of any data analysis, illustrated via detailed consideration of the Binomial distribution. We examine situations where the full sequence of binomial outcomes is retained, situations where only [...] Read more.
This paper explores the idea of information loss through data compression, as occurs in the course of any data analysis, illustrated via detailed consideration of the Binomial distribution. We examine situations where the full sequence of binomial outcomes is retained, situations where only the total number of successes is retained, and in-between situations. We show that a familiar decomposition of the Shannon entropy H can be rewritten as a decomposition into H_total, H_lost, and H_comp, or the total, lost, and compressed (remaining) components, respectively. We relate this new decomposition to Landauer’s principle, and we discuss some implications for the “information-dynamic” theory being developed in connection with our broader program to develop a measure of statistical evidence on a properly calibrated scale. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
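The decomposition H_total = H_comp + H_lost can be checked numerically for the Binomial case: compressing n Bernoulli trials to the success count K keeps H(K) and discards the entropy of the orderings given K. A short sketch (our own illustration of the underlying chain rule, not code from the paper):

```python
from math import comb, log2

def binary_entropy(p):
    """Entropy (bits) of a single Bernoulli(p) trial."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def decompose(n, p):
    """Split the entropy of n Bernoulli(p) trials into the part kept by
    compressing to the success count K (H_comp) and the part lost with
    the ordering (H_lost); all orderings given K are equally likely."""
    pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
    h_total = n * binary_entropy(p)
    h_comp = -sum(q * log2(q) for q in pmf if q > 0)           # H(K)
    h_lost = sum(q * log2(comb(n, k)) for k, q in enumerate(pmf))
    return h_total, h_comp, h_lost

h_total, h_comp, h_lost = decompose(10, 0.3)
# Chain rule: the two components recover the total exactly.
assert abs(h_total - (h_comp + h_lost)) < 1e-9
```

For a single trial nothing is lost (there is no ordering), and as n grows the lost share E[log2 C(n, K)] dominates the total.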
Show Figures

Figure 1
<p>Graph of <math display="inline"> <semantics> <mrow> <mrow> <mrow> <msub> <mi>H</mi> <mrow> <mi>l</mi> <mi>o</mi> <mi>s</mi> <mi>t</mi> </mrow> </msub> </mrow> <mo>/</mo> <mrow> <msub> <mi>H</mi> <mrow> <mi>t</mi> <mi>o</mi> <mi>t</mi> <mi>a</mi> <mi>l</mi> </mrow> </msub> </mrow> </mrow> </mrow> </semantics> </math> vs. sample size, for <math display="inline"> <semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mn>0.3</mn> <mo>,</mo> <mtext> </mtext> <mn>0.1</mn> </mrow> </semantics> </math>.</p>
Full article ">
1495 KiB  
Article
An Approach to Data Analysis in 5G Networks
by Lorena Isabel Barona López, Jorge Maestre Vidal and Luis Javier García Villalba
Entropy 2017, 19(2), 74; https://doi.org/10.3390/e19020074 - 16 Feb 2017
Cited by 10 | Viewed by 6782
Abstract
5G networks expect to provide significant advances in network management compared to traditional mobile infrastructures by leveraging intelligence capabilities such as data analysis, prediction, pattern recognition and artificial intelligence. The key idea behind these actions is to facilitate the decision-making process in order [...] Read more.
5G networks expect to provide significant advances in network management compared to traditional mobile infrastructures by leveraging intelligence capabilities such as data analysis, prediction, pattern recognition and artificial intelligence. The key idea behind these actions is to facilitate the decision-making process in order to solve or mitigate common network problems in a dynamic and proactive way. In this context, this paper presents the design of the Self-Organized Network Management in Virtualized and Software Defined Networks (SELFNET) Analyzer Module, whose main objective is to identify suspicious or unexpected situations based on metrics provided by different network components and sensors. The SELFNET Analyzer Module provides a modular architecture driven by use cases where analytic functions can be easily extended. This paper also proposes the data specification that defines the data inputs to be taken into account in the diagnosis process. This data specification has been implemented with different use cases within the SELFNET Project, proving its effectiveness. Full article
(This article belongs to the Special Issue Information Theory and 5G Technologies)
Show Figures

Figure 1
<p>Endsley vs. SELFNET Autonomic Layer.</p>
Full article ">Figure 2
<p>Centralized and Distributed Architectures.</p>
Full article ">Figure 3
<p>Example of Data Encapsulation.</p>
Full article ">Figure 4
<p>Analyzer Module architecture.</p>
Full article ">Figure 5
<p>Analyzer Module as a Black Box.</p>
Full article ">
731 KiB  
Article
Identifying Critical States through the Relevance Index
by Andrea Roli, Marco Villani, Riccardo Caprari and Roberto Serra
Entropy 2017, 19(2), 73; https://doi.org/10.3390/e19020073 - 16 Feb 2017
Cited by 15 | Viewed by 5156
Abstract
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, [...] Read more.
The identification of critical states is a major task in complex systems, and the availability of measures to detect such conditions is of utmost importance. In general, criticality refers to the existence of two qualitatively different behaviors that the same system can exhibit, depending on the values of some parameters. In this paper, we show that the relevance index may be effectively used to identify critical states in complex systems. The relevance index was originally developed to identify relevant sets of variables in dynamical systems, but in this paper, we show that it is also able to capture features of criticality. The index is applied to two prominent examples showing slightly different meanings of criticality, namely the Ising model and random Boolean networks. Results show that this index is maximized at critical states and is robust with respect to system size and sampling effort. It can therefore be used to detect criticality. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
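A minimal empirical sketch of the quantities involved: the integration I(S) of a subset S of variables, its mutual information M(S; rest) with the rest of the system, and their ratio, which is one common form of the relevance index (the exact definition and normalization used in the paper may differ; binary data and plug-in entropy estimates are assumed here):

```python
from collections import Counter
from math import log2

import numpy as np

def entropy(samples):
    """Plug-in Shannon entropy (bits) of the joint states given as rows."""
    counts = Counter(map(tuple, samples))
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def relevance_index(data, subset):
    """Ratio of the integration of subset S to its mutual information
    with the remaining variables. Columns of `data` are variables,
    rows are observed system states."""
    S = data[:, subset]
    rest = np.delete(data, subset, axis=1)
    integration = sum(entropy(S[:, [i]]) for i in range(S.shape[1])) - entropy(S)
    mutual = entropy(S) + entropy(rest) - entropy(data)
    return integration / mutual if mutual > 0 else 0.0
```

Two perfectly correlated variables whose copy also appears in the rest of the system give a ratio of 1, while loosely coordinated subsets score near 0; scanning such a ratio over subsets is the spirit of the criticality detection described above.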
Show Figures

Figure 1
<p>Bifurcation diagram for the 2D Ising model obtained by a Monte Carlo simulation. Each dot represents the magnetization of a specific run at temperature <span class="html-italic">T</span>; initial conditions are sampled with spin bias in the range <math display="inline"> <semantics> <mrow> <mo>[</mo> <mn>0</mn> <mo>.</mo> <mn>2</mn> <mo>,</mo> <mn>0</mn> <mo>.</mo> <mn>8</mn> <mo>]</mo> </mrow> </semantics> </math> for a total amount of 3000 simulations. The dashed vertical line is located at <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <msub> <mi>T</mi> <mi>c</mi> </msub> </mrow> </semantics> </math>.</p>
Full article ">Figure 2
<p>The critical line in RBNs. The bold line separates the ordered region (shaded) from the chaotic one.</p>
Full article ">Figure 3
<p>Plots of susceptibility values for (<b>a</b>) <math display="inline"> <semantics> <mrow> <mn>10</mn> <mo>×</mo> <mn>10</mn> </mrow> </semantics> </math> and (<b>b</b>) <math display="inline"> <semantics> <mrow> <mn>20</mn> <mo>×</mo> <mn>20</mn> </mrow> </semantics> </math> lattices. The median of 10 independent replicas for each temperature value is plotted.</p>
Full article ">Figure 4
<p>Plots of the relevance index (RI) for (<b>a</b>) <math display="inline"> <semantics> <mrow> <mn>10</mn> <mo>×</mo> <mn>10</mn> </mrow> </semantics> </math> and (<b>b</b>) <math display="inline"> <semantics> <mrow> <mn>20</mn> <mo>×</mo> <mn>20</mn> </mrow> </semantics> </math> lattices. The median of the average integration values for groups of size two to 10 is plotted against <span class="html-italic">T</span>. The curves shift up with group size. Note that in the <math display="inline"> <semantics> <mrow> <mn>20</mn> <mo>×</mo> <mn>20</mn> </mrow> </semantics> </math> case, for small group sizes, the index peaks slightly before the dashed line: this discrepancy is ascribed to the small plateau around the maximal value of susceptibility, as can be observed in <a href="#entropy-19-00073-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 5
<p>Plots of integration for (<b>a</b>) <math display="inline"> <semantics> <mrow> <mn>10</mn> <mo>×</mo> <mn>10</mn> </mrow> </semantics> </math> and (<b>b</b>) <math display="inline"> <semantics> <mrow> <mn>20</mn> <mo>×</mo> <mn>20</mn> </mrow> </semantics> </math> lattices. The median of the average integration values for groups of size two to 10 is plotted against <span class="html-italic">T</span>. The curves shift up with group size.</p>
Full article ">Figure 6
<p>Plots of mutual information for (<b>a</b>) <math display="inline"> <semantics> <mrow> <mn>10</mn> <mo>×</mo> <mn>10</mn> </mrow> </semantics> </math> and (<b>b</b>) <math display="inline"> <semantics> <mrow> <mn>20</mn> <mo>×</mo> <mn>20</mn> </mrow> </semantics> </math> lattices. The median of the average mutual information values for groups of size two to 10 is plotted against <span class="html-italic">T</span>. The curves shift up with group size.</p>
Full article ">Figure 7
<p>Heat maps of the <span class="html-italic">p</span>–<span class="html-italic">K</span> diagram of RI indexes computed for groups having different sizes: respectively, groups of (<b>a</b>) size 2, (<b>b</b>) size 5 and (<b>c</b>) size 9. The superimposed red line denotes the position of the edge of chaos curve. This wide region has been sampled in 90 points by combining nine different biases and ten different connectivities; for each point, we tested 100 different RBNs, each RBN being represented by the RI obtained by sampling the states of 200 different trajectories (RBN_40 series, raw data). Each pixel represents the median of the RI of 100 different RBNs sharing the same values of bias and connectivity.</p>
Full article ">Figure 8
<p>The peaks of the RI values shown in <a href="#entropy-19-00073-f007" class="html-fig">Figure 7</a>b (the interpolating curve has been obtained by fitting a quadratic function to the measured points, and it is only a visual aid).</p>
Full article ">Figure 9
<p>Heat maps of the connectivity-bias diagrams of RI (left) and for integration <span class="html-italic">I</span> (center) and mutual information <span class="html-italic">M</span> (right) for (<b>a</b>–<b>c</b>) size five groups and (<b>d</b>–<b>f</b>) size nine groups (second row). It is possible to note that RI is closer to the critical region (identified by the superimposed red line) than the integration alone, especially in the regions of high biases.</p>
Full article ">Figure 10
<p>The median values of the RI index in RBN having <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics> </math> and respectively (<b>a</b>) 100 and (<b>b</b>) 500 nodes, for group sizes in the range [2,10]. Bias varies from 0.5 to 1.0, with steps of 0.01. The vertical red line identifies the experimentally-determined edge of chaos position.</p>
Full article ">Figure 11
<p>The plot shows the median values of the RI and <span class="html-italic">I</span> indexes in <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics> </math> RBNs having 500 nodes. Bias varies from 0.5 to 1.0, with steps of 0.01. Moreover, the median values of the index <span class="html-italic">λ</span>’ are also shown, defined as <math display="inline"> <semantics> <mrow> <msup> <mi>λ</mi> <mo>′</mo> </msup> <mo>=</mo> <mn>1</mn> <mo>−</mo> <mrow> <mo>|</mo> <mn>1</mn> <mo>−</mo> <mi>λ</mi> <mo>|</mo> </mrow> </mrow> </semantics> </math>. This variable is used instead of <span class="html-italic">λ</span> itself, which would grow in the chaotic region; in this way, a better visual recognition of the critical point is possible. In order to have similar vertical scales, the RI and <span class="html-italic">I</span> values are respectively multiplied by the constants 12.0 and 7.8. The vertical red line identifies the theoretical edge of chaos position.</p>
Full article ">Figure 12
<p>The median values of the RI index in RBN having <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics> </math> and 100 nodes obtained (<b>a</b>) by using only the states belonging to the transients or (<b>b</b>) by using only the states belonging to the attractors. Bias varies from 0.65 to 1.0, with steps of 0.01; the group sizes are in the range [2,10]. The vertical red line identifies the theoretical edge of chaos position.</p>
Full article ">Figure 13
<p>The median values of the RI index in an RBN having 20 nodes for groups of size five, obtained (<b>a</b>) by using the states belonging to the whole trajectories; (<b>b</b>) by using only the states belonging to the attractors and (<b>c</b>) by using only the states belonging to the transients. The superimposed red line identifies the theoretical edge of chaos position (computed by assuming the ergodicity of the systems).</p>
Full article ">
475 KiB  
Article
Classification of Normal and Pre-Ictal EEG Signals Using Permutation Entropies and a Generalized Linear Model as a Classifier
by Francisco O. Redelico, Francisco Traversaro, María Del Carmen García, Walter Silva, Osvaldo A. Rosso and Marcelo Risk
Entropy 2017, 19(2), 72; https://doi.org/10.3390/e19020072 - 16 Feb 2017
Cited by 26 | Viewed by 5553
Abstract
In this contribution, a comparison between different permutation entropies as classifiers of electroencephalogram (EEG) records corresponding to normal and pre-ictal states is made. A discrete probability distribution function derived from symbolization techniques applied to the EEG signal is used to calculate the Tsallis [...] Read more.
In this contribution, a comparison between different permutation entropies as classifiers of electroencephalogram (EEG) records corresponding to normal and pre-ictal states is made. A discrete probability distribution function derived from symbolization techniques applied to the EEG signal is used to calculate the Tsallis Entropy, Shannon Entropy, Renyi Entropy, and Min Entropy, and each is used separately as the only independent variable in a logistic regression model in order to evaluate its capacity as a classification variable in an inferential manner. The area under the Receiver Operating Characteristic (ROC) curve, along with the accuracy, sensitivity, and specificity, is used to compare the models. All the permutation entropies are excellent classifiers, with an accuracy greater than 94.5% in every case, and a sensitivity greater than 97%. Accounting for the amplitude in the symbolization technique retains more information of the signal than its counterparts, and it could be a good candidate for automatic classification of EEG signals. Full article
(This article belongs to the Special Issue Entropy and Electroencephalography II)
Show Figures

Figure 1

<p>Area under Receiver Operating Characteristic (ROC) curve calculated by 10-fold cross-validation is plotted (area under the curve (AUC) <math display="inline"> <semantics> <mrow> <mo>±</mo> <mn>1</mn> </mrow> </semantics> </math> sd) against the time delay <span class="html-italic">τ</span>. The figure is divided by the different entropies and by the embedding dimension <span class="html-italic">D</span> for better understanding. It reveals that—independent of the entropy used—the best model for classifying is when the discrete Probability Distribution Function PDF is calculated for embedding dimension <math display="inline"> <semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics> </math> and time delay <math display="inline"> <semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics> </math>, with the exception of the MinEntropy that the model for <math display="inline"> <semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>τ</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics> </math> is slightly better, but with no significant difference.</p>
Full article ">Figure 2
<p>The five logistic models presented in <a href="#entropy-19-00072-t001" class="html-table">Table 1</a> are shown, with the explanatory variable on the <span class="html-italic">x</span>-axis (the different entropies), while the <span class="html-italic">y</span>-axis represents the probability that the signal is a <span class="html-italic">pre-ictal</span> EEG signal; thus, the curve in each plot represents the probability of the signal being a <span class="html-italic">pre-ictal</span> EEG signal, according to the model, as a function of the value of the entropy. When this probability is larger than <math display="inline"> <semantics> <mrow> <mi>c</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math>, the observation is classified as <span class="html-italic">pre-ictal</span> EEG (blue crosses), and when it is less than <math display="inline"> <semantics> <mrow> <mi>c</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics> </math>, as <span class="html-italic">normal</span> (red crosses). The actual class of each observation is plotted as black circles. The models with the highest (in absolute value) <span class="html-italic">β</span> coefficient have a more pronounced slope in the S-shaped curve, leading to a classification more sensitive to changes. The permutation entropy <math display="inline"> <semantics> <mrow> <mi mathvariant="script">H</mi> <mo>[</mo> <mi>P</mi> <mo>]</mo> </mrow> </semantics> </math> as a classifier between normal and pre-ictal EEG signals has the strongest correlation, with <math display="inline"> <semantics> <mrow> <mi>β</mi> <mo>=</mo> <mo>−</mo> <mn>336</mn> </mrow> </semantics> </math> and a <span class="html-italic">p</span>-value near zero (<a href="#entropy-19-00072-t001" class="html-table">Table 1</a>). 
This means that for every <math display="inline"> <semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>1000</mn> </mrow> </semantics> </math> the <math display="inline"> <semantics> <mrow> <mi mathvariant="script">H</mi> <mo>[</mo> <mi>P</mi> <mo>]</mo> </mrow> </semantics> </math> moves up, the odds ratio (the quotient between the probability of being ill and the probability of not having the disease) decreases by 28%. In other words, small increments in <math display="inline"> <semantics> <mrow> <mi mathvariant="script">H</mi> <mo>[</mo> <mi>P</mi> <mo>]</mo> </mrow> </semantics> </math> significantly affect the probability of detecting the presence of the illness. Adding noise to a signal increases the permutation entropy, so noisy EEG signals would result in lower sensitivity (i.e., the ability to detect <span class="html-italic">pre-ictal</span> EEG signals). On the other hand, MinEntropy <math display="inline"> <semantics> <mrow> <msub> <mi>R</mi> <mo>∞</mo> </msub> <mfenced open="(" close=")"> <mi>P</mi> </mfenced> </mrow> </semantics> </math> has the weakest correlation, with <math display="inline"> <semantics> <mrow> <mi>β</mi> <mo>=</mo> <mo>−</mo> <mn>13.29</mn> </mrow> </semantics> </math>, meaning that for every <math display="inline"> <semantics> <mrow> <mn>1</mn> <mo>/</mo> <mn>1000</mn> </mrow> </semantics> </math> the <math display="inline"> <semantics> <mrow> <msub> <mi>R</mi> <mo>∞</mo> </msub> <mfenced open="(" close=")"> <mi>P</mi> </mfenced> </mrow> </semantics> </math> moves up, the odds ratio decreases by only 1.2%, yet it is still an excellent classifier. This behaviour in a classification model indicates robustness, because small increments in the value of the entropy do not affect the classification. This holds because all the entropies are on the same scale. 
MinEntropy <math display="inline"> <semantics> <mrow> <msub> <mi>R</mi> <mo>∞</mo> </msub> <mfenced open="(" close=")"> <mi>P</mi> </mfenced> </mrow> </semantics> </math> is the most robust model, followed by the model that uses weighted permutation entropy <math display="inline"> <semantics> <mrow> <msub> <mi mathvariant="script">H</mi> <mi>w</mi> </msub> <mrow> <mo>(</mo> <mi>P</mi> <mo>)</mo> </mrow> </mrow> </semantics> </math>.</p>
Full article ">Figure 3
<p>The entropy that has the best classification performance is the weighted permutation entropy, followed by the MinEntropy by less than a standard deviation, and the remaining entropies have similar performance in terms of the AUC.</p>
Full article ">
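">
The classification pipeline above starts from the Bandt-Pompe symbolization: each window of D values taken with time delay τ is mapped to its ordinal pattern, and the entropies are computed from the resulting pattern distribution. Below is a minimal sketch of the normalized (Shannon) permutation entropy, assuming the signal is a plain Python sequence; the function name and defaults are illustrative, not taken from the authors' code:

```python
from collections import Counter
from math import factorial, log

def permutation_entropy(x, D=3, tau=5):
    """Normalized Bandt-Pompe permutation entropy of the series x
    for embedding dimension D and time delay tau (result in [0, 1])."""
    n = len(x) - (D - 1) * tau          # number of ordinal patterns
    counts = Counter()
    for i in range(n):
        window = x[i:i + D * tau:tau]   # D values spaced tau apart
        # ordinal pattern: the index order that sorts the window
        counts[tuple(sorted(range(D), key=lambda k: window[k]))] += 1
    H = -sum((c / n) * log(c / n) for c in counts.values())
    return H / log(factorial(D))        # normalize by log(D!)
```

The best-performing configuration reported above (D = 3, τ = 5) would correspond to `permutation_entropy(eeg, 3, 5)`; the Tsallis, Rényi, Min, and weighted variants differ only in how the same pattern probabilities are aggregated.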
2240 KiB  
Article
Synergy and Redundancy in Dual Decompositions of Mutual Information Gain and Information Loss
by Daniel Chicharro and Stefano Panzeri
Entropy 2017, 19(2), 71; https://doi.org/10.3390/e19020071 - 16 Feb 2017
Cited by 25 | Viewed by 8050
Abstract
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components, interpretable as the unique information of one variable, or [...] Read more.
Williams and Beer (2010) proposed a nonnegative mutual information decomposition, based on the construction of information gain lattices, which allows separating the information that a set of variables contains about another variable into components, interpretable as the unique information of one variable, or as redundancy and synergy components. In this work, we extend this framework focusing on the lattices that underpin the decomposition. We generalize the type of constructible lattices and examine the relations between different lattices, for example, relating bivariate and trivariate decompositions. We point out that, in information gain lattices, redundancy components are invariant across decompositions, but unique and synergy components are decomposition-dependent. Exploiting the connection between different lattices, we propose a procedure to construct, in the general multivariate case, information gain decompositions from measures of synergy or unique information. We then introduce an alternative type of lattices, information loss lattices, with the role and invariance properties of redundancy and synergy components reversed with respect to gain lattices, and which provide an alternative procedure to build multivariate decompositions. We finally show how information gain and information loss dual lattices lead to a self-consistent unique decomposition, which allows a deeper understanding of the origin and meaning of synergy and redundancy. Full article
(This article belongs to the Special Issue Complexity, Criticality and Computation (C³))
Show Figures

Figure 1

<p>Information gain decompositions of different orders and for different subsets of collections of sources. (<b>A</b>,<b>B</b>) Lattices constructed from the complete domain of collections as defined by Equation (5) for <math display="inline"> <semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics> </math>, respectively. Pale red edges in (B) identify the embedded lattice formed by collections that do not contain univariate sources. (<b>C</b>) Alternative decomposition based only on sources 1 and 23. (<b>D</b>) Alternative decomposition that does not contain bivariate sources.</p>
Full article ">Figure 2
<p>Mapping between the incremental terms of the bivariate lattice for 1, 2 and the full trivariate lattice for 1, 2, 3. (<b>A</b>) The bivariate lattice with each node marked with a different color (and also, redundantly, with a different lower case letter, for no-color printing). (<b>B</b>) The trivariate lattice with the nodes coloured consistently with the mapping to the bivariate lattice. In more detail, the incremental term of each node of the bivariate lattice is obtained as the sum of the incremental terms of the nodes of the trivariate lattice with the same color.</p>
Full article ">Figure 3
<p>Mapping of the incremental terms of the full lattice to lattices formed by subsets of collections (<b>A</b>) The lattice of <a href="#entropy-19-00071-f001" class="html-fig">Figure 1</a>D with each node marked with a different color (and also a different lower case letter, for no-color printing). (<b>B</b>) The trivariate lattice with the nodes coloured consistently with the mapping to the lattice of (A). In more detail, the incremental term of each node of the smaller lattice is obtained as the sum of the incremental terms of the nodes of the trivariate lattice with the same color. (<b>C</b>) Another lattice obtained from a subset of the collections, with each node marked with a different color (and lower case letter). (<b>D</b>) The lattice of (A) now with its nodes coloured consistently with the mapping to the lattice of (C). In contrast to the mapping between (B) and (A), here each incremental term of (D) can contribute to more than one incremental term of (C), with a positive (circle) or negative (triangles) contribution.</p>
Full article ">Figure 4
<p>Examples of information gain lattices that result in inconsistencies when trying to derive redundancy terms from a synergy definition, as explained in <a href="#sec3dot3-entropy-19-00071" class="html-sec">Section 3.3</a>.</p>
Full article ">Figure 5
<p>Information loss decompositions of different orders and for different subsets of collections of sources. (<b>A</b>–<b>D</b>) The lattices are analogous to the information gain lattices of <a href="#entropy-19-00071-f001" class="html-fig">Figure 1</a>. Note that the lattice embedded in (B), indicated with the pale red edges, corresponds to the one shown in (D), differently than in <a href="#entropy-19-00071-f001" class="html-fig">Figure 1</a>.</p>
Full article ">Figure 6
<p>The correspondence between information gain and information loss lattices. (<b>A</b>,<b>C</b>) Examples of information gain lattices. (<b>B</b>,<b>D</b>) Information loss lattices that are candidates to be their respective dual lattices. The shaded areas comprise the collections corresponding to incremental terms that contribute to <math display="inline"> <semantics> <mrow> <mi>I</mi> <mo>(</mo> <mi>S</mi> <mo>;</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics> </math> in each lattice.</p>
Full article ">Figure 7
<p>Correspondence between information gain and information loss lattices. (<b>A</b>,<b>C</b>) Examples of information gain lattices. (<b>B</b>,<b>D</b>) Information loss lattices that are candidates to be their respective dual lattices. The blue shaded areas comprise the collections corresponding to incremental terms that contribute to <math display="inline"> <semantics> <mrow> <mi>I</mi> <mo>(</mo> <mi>S</mi> <mo>;</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics> </math> in each lattice. The pink shaded areas surrounded with a dotted line comprise the collections corresponding to incremental terms that contribute to the complementary information <math display="inline"> <semantics> <mrow> <mi>I</mi> <mo>(</mo> <mi>S</mi> <mo>;</mo> <mn>23</mn> <mo>|</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics> </math> in each lattice. In (<b>A</b>,<b>B</b>), the dashed red lines encircle the incremental terms contributing to <math display="inline"> <semantics> <mrow> <mi>I</mi> <mo>(</mo> <mi>S</mi> <mo>;</mo> <mn>1</mn> <mo>.</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics> </math>.</p>
Full article ">Figure 8
<p>Dual trivariate decompositions for the sets of collections that do not contain bivariate sources. (<b>A</b>) Information gain lattice. (<b>B</b>) Information loss lattice. In each node together with the collection, the corresponding cumulative and incremental terms are indicated. Note that the incremental terms are common to both lattices and can be mapped by reversing the lattice up/down and right/left. In the information loss lattice, the cumulative terms of collections containing single sources, <math display="inline"> <semantics> <mrow> <mi>L</mi> <mo>(</mo> <mi>S</mi> <mo>;</mo> <mi>i</mi> <mo>)</mo> <mo>,</mo> <mspace width="4pt"/> <mi>i</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mn>3</mn> </mrow> </semantics> </math>, are directly expressed as the corresponding conditional information.</p>
Full article ">Figure 9
<p>Analogous to <a href="#entropy-19-00071-f008" class="html-fig">Figure 8</a> but for the trivariate decomposition based only on collections that do not contain univariate sources.</p>
Full article ">
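">
A compact way to see why synergy components are needed at all is the XOR example commonly used in this literature: neither source alone carries any information about the target, yet the pair determines it completely. The sketch below estimates the three mutual informations directly from the joint distribution; it illustrates the motivation for the decomposition, not the lattice construction developed in the paper:

```python
from collections import Counter
from math import log2

def mutual_info(pairs):
    """I(A;B) in bits, estimated from equally likely (a, b) samples."""
    n = len(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    pab = Counter(pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# XOR: S = X1 xor X2, with all four input combinations equally likely
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
I1  = mutual_info([(s, x1) for x1, x2, s in samples])        # I(S;1)
I2  = mutual_info([(s, x2) for x1, x2, s in samples])        # I(S;2)
I12 = mutual_info([(s, (x1, x2)) for x1, x2, s in samples])  # I(S;12)
```

Here I(S;1) = I(S;2) = 0 while I(S;12) = 1 bit, so the whole bit is synergistic: it appears only in the joint source, which is exactly the kind of component the gain and loss lattices are designed to isolate.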
832 KiB  
Article
Two Thermoeconomic Diagnosis Methods Applied to Representative Operating Data of a Commercial Transcritical Refrigeration Plant
by Torben Ommen, Oskar Sigthorsson and Brian Elmegaard
Entropy 2017, 19(2), 69; https://doi.org/10.3390/e19020069 - 15 Feb 2017
Cited by 7 | Viewed by 4367
Abstract
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the quality of [...] Read more.
In order to investigate options for improving the maintenance protocol of commercial refrigeration plants, two thermoeconomic diagnosis methods were evaluated on a state-of-the-art refrigeration plant. A common relative indicator was proposed for the two methods in order to directly compare the quality of malfunction identification. Both methods were able to locate and categorise the malfunctions when using steady state data without measurement uncertainties. With the introduction of measurement uncertainty, the categorisation of malfunctions became increasingly difficult, depending on the magnitude of the uncertainties. Two different uncertainty scenarios were evaluated, as the use of repeated measurements yields a lower magnitude of uncertainty. The two methods show similar performance in the presented study for both of the considered measurement uncertainty scenarios. However, only in the low measurement uncertainty scenario are both methods able to locate the causes of the malfunctions. For both scenarios, an outlier limit was found, which determines whether it is possible to reject a high relative indicator on the basis of measurement uncertainty. For high uncertainties, the threshold value of the relative indicator was 35, whereas for low uncertainties one of the methods resulted in a threshold at 8. Additionally, the contribution of different measuring instruments to the relative indicator in two central components was analysed. The analysis shows that the contribution was component-dependent. Full article
(This article belongs to the Special Issue Thermoeconomics for Energy Efficiency)
Show Figures

Figure 1

<p>Procedure to perform diagnosis based on measured data or on data from a numerical model. (<b>a</b>) Measured data; (<b>b</b>) Data from a numerical model with added measurement uncertainty.</p>
Full article ">Figure 2
<p>Schematic representation of the calculation procedure to evaluate the impact of measurement uncertainty on the indication of malfunctions in the refrigeration plant.</p>
Full article ">Figure 3
<p>Schematic diagram of the transcritical refrigeration plant with control volumes for five main components. State points are shown in black squares. Instrumentation for measured quantities is presented with the following notation: TI—temperature; PI—pressure; FI—flow; and EI—electrical consumption.</p>
Full article ">Figure 4
<p>Representative temperature–entropy diagram of refrigerant R744 (CO2) and a transcritical booster refrigeration plant with state points according to <a href="#entropy-19-00069-f003" class="html-fig">Figure 3</a> [<a href="#B8-entropy-19-00069" class="html-bibr">8</a>].</p>
Full article ">Figure 5
<p>The effect of measurement uncertainties on the proposed relative indicator for Real 1 with two different uncertainty scenarios. (<b>a</b>) Data sheet; (<b>b</b>) Estimated.</p>
Full article ">Figure 6
<p>The effect of measurement uncertainties on the proposed relative indicator for Real 2 with two different uncertainty scenarios. (<b>a</b>) Data sheet; (<b>b</b>) Estimated.</p>
Full article ">Figure 7
<p>The effect of measurement uncertainties on the proposed relative indicator for Real 3 with two different uncertainty scenarios. (<b>a</b>) Data sheet; (<b>b</b>) Estimated.</p>
Full article ">Figure 8
<p>The effect of measurement uncertainties on the proposed relative indicator for Real 4 with two different uncertainty scenarios. (<b>a</b>) Data sheet; (<b>b</b>) Estimated.</p>
Full article ">Figure 9
<p>Evaluation of the individual contributions of measurement uncertainties for the HP compressor. Flow measurement uncertainty is varied from negligible to high relative uncertainties. The combined measurement uncertainty on the relative indicator <math display="inline"> <semantics> <mrow> <mo>±</mo> <mi>u</mi> <mo stretchy="false">(</mo> <msubsup> <mi>I</mi> <mrow> <mi>rel</mi> </mrow> <mi>i</mi> </msubsup> <mo stretchy="false">)</mo> </mrow> </semantics> </math> is presented on the right hand ordinate. (<b>a</b>) Data sheet; (<b>b</b>) Estimated.</p>
Full article ">Figure 10
<p>Evaluation of the individual contributions of measurement uncertainties for the Gas Cooler unit. Flow measurement uncertainty is varied from negligible to high relative uncertainties. The combined measurement uncertainty on the relative indicator <math display="inline"> <semantics> <mrow> <mo>±</mo> <mi>u</mi> <mo stretchy="false">(</mo> <msubsup> <mi>I</mi> <mrow> <mi>rel</mi> </mrow> <mi>i</mi> </msubsup> <mo stretchy="false">)</mo> </mrow> </semantics> </math> is presented on the second ordinate. (<b>a</b>) Data sheet; (<b>b</b>) Estimated.</p>
Full article ">
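">
The effect of measurement uncertainty on a derived indicator, which is central to the comparison above, can be reproduced generically by Monte Carlo sampling of the measured inputs. The sketch below uses a hypothetical two-measurement indicator (a power ratio with placeholder values and 1-sigma uncertainties); it does not reproduce the paper's relative indicator or its instrument specifications:

```python
import random

def monte_carlo_uncertainty(indicator, nominal, sigma, n=20000, seed=1):
    """Mean and standard deviation of a derived indicator under
    independent Gaussian noise on each measured input."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        # perturb every measurement by its assumed uncertainty
        perturbed = {k: rng.gauss(v, sigma[k]) for k, v in nominal.items()}
        vals.append(indicator(perturbed))
    mean = sum(vals) / n
    std = (sum((v - mean) ** 2 for v in vals) / (n - 1)) ** 0.5
    return mean, std

# hypothetical example: the indicator is a ratio of two measured powers
nominal = {"P_comp": 10.0, "P_ref": 40.0}   # placeholder values, kW
sigma   = {"P_comp": 0.1,  "P_ref": 0.4}    # assumed absolute uncertainties
mean, u = monte_carlo_uncertainty(lambda m: m["P_comp"] / m["P_ref"],
                                  nominal, sigma)
```

Repeating such a propagation for each instrument separately (holding the others at negligible uncertainty) gives the kind of per-instrument contribution analysis shown in Figures 9 and 10.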
598 KiB  
Article
Kinetic Theory of a Confined Quasi-Two-Dimensional Gas of Hard Spheres
by J. Javier Brey, Vicente Buzón, Maria Isabel García de Soria and Pablo Maynar
Entropy 2017, 19(2), 68; https://doi.org/10.3390/e19020068 - 14 Feb 2017
Cited by 4 | Viewed by 5078
Abstract
The dynamics of a system of hard spheres enclosed between two parallel plates separated by a distance smaller than two particle diameters is described at the level of kinetic theory. The interest focuses on the behavior of the quasi-two-dimensional fluid seen when looking at [...] Read more.
The dynamics of a system of hard spheres enclosed between two parallel plates separated by a distance smaller than two particle diameters is described at the level of kinetic theory. The interest focuses on the behavior of the quasi-two-dimensional fluid seen when looking at the system from above or below. In the first part, a collisional model for the effective two-dimensional dynamics is analyzed. Although it is able to describe quite well the homogeneous evolution observed in the experiments, it is shown that it fails to predict the existence of non-equilibrium phase transitions, and in particular, the bimodal regime exhibited by the real system. A critical analysis of the model is presented, and as a starting point for a more accurate description, the Boltzmann equation for the quasi-two-dimensional gas has been derived. In the elastic case, the solutions of the equation satisfy an H-theorem implying a monotonic tendency to a non-uniform steady state. As an example of application of the kinetic equation, the evolution equations for the vertical and horizontal temperatures of the system are derived in the homogeneous approximation, and the results are compared with molecular dynamics simulation results. Full article
(This article belongs to the Special Issue Nonequilibrium Phenomena in Confined Systems)
Show Figures

Figure 1

<p>Sketch of the quasi-two-dimensional system described in the main text. The two parallel walls are vibrating, and the interest is on the dynamics observed when looking from above or below.</p>
Full article ">Figure 2
<p>Relaxation of the granular temperature of the quasi-two-dimensional gas in a system with <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>8</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>h</mi> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>5</mn> <mi>σ</mi> </mrow> </semantics> </math>, and three-dimensional density <math display="inline"> <semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>02</mn> <msup> <mi>σ</mi> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </mrow> </semantics> </math>. The walls are vibrating in a sawtooth way with a velocity <math display="inline"> <semantics> <msub> <mi>v</mi> <mi>b</mi> </msub> </semantics> </math>. Solid lines are the results from molecular dynamics (MD) simulations, while the dashed lines are the theoretical predictions obtained as indicated in the main text. (<b>a</b>) On the left hand side, a system cooling towards its steady granular temperature; (<b>b</b>) On the right hand side, the system started from a granular temperature smaller than the stationary one.</p>
Full article ">Figure 3
<p>(<b>a</b>) Dimensionless Euler transport coefficient <math display="inline"> <semantics> <msub> <mi>ζ</mi> <mn>1</mn> </msub> </semantics> </math> as a function of the dimensionless characteristic speed <math display="inline"> <semantics> <msup> <mo>Δ</mo> <mo>∗</mo> </msup> </semantics> </math> for the effective two-dimensional granular gas. The coefficient of normal restitution is <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>85</mn> </mrow> </semantics> </math>. The dots indicate the value of the transport coefficients at the steady state, whose characteristic speed is <math display="inline"> <semantics> <msubsup> <mo>Δ</mo> <mrow> <mi>s</mi> <mi>t</mi> </mrow> <mo>∗</mo> </msubsup> </semantics> </math>; (<b>b</b>) the same for the adimensionalized shear viscosity <math display="inline"> <semantics> <mover> <mi>η</mi> <mo>¯</mo> </mover> </semantics> </math>.</p>
Full article ">Figure 4
<p>(<b>a</b>) Adimensionalized (thermal) heat conductivity <math display="inline"> <semantics> <mover> <mi>κ</mi> <mo>¯</mo> </mover> </semantics> </math> as a function of the dimensionless characteristic speed <math display="inline"> <semantics> <msup> <mo>Δ</mo> <mo>∗</mo> </msup> </semantics> </math> for the effective two-dimensional granular gas. The coefficient of normal restitution is <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>85</mn> </mrow> </semantics> </math>. The dots indicate the value of the transport coefficients at the steady state, whose characteristic reduced speed is <math display="inline"> <semantics> <msubsup> <mo>Δ</mo> <mrow> <mi>s</mi> <mi>t</mi> </mrow> <mo>∗</mo> </msubsup> </semantics> </math>; (<b>b</b>) the same for the adimensionalized diffusive heat conductivity <math display="inline"> <semantics> <mover> <mi>μ</mi> <mo>¯</mo> </mover> </semantics> </math>.</p>
Full article ">Figure 5
<p>Time evolution of the dimensionless perturbations of the hydrodynamic fields, <math display="inline"> <semantics> <mrow> <mi>ρ</mi> <mo>≡</mo> <mrow> <mo>(</mo> <mi>n</mi> <mo>−</mo> <msub> <mi>n</mi> <mi>H</mi> </msub> <mo>)</mo> </mrow> <mo>/</mo> <msub> <mi>n</mi> <mi>h</mi> </msub> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi mathvariant="bold-italic">ω</mi> <mo>≡</mo> <mi mathvariant="bold-italic">u</mi> <mo>/</mo> <msub> <mi>v</mi> <mn>0</mn> </msub> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>θ</mi> <mo>≡</mo> <mrow> <mo>(</mo> <mi>T</mi> <mo>−</mo> <msub> <mi>T</mi> <mi>H</mi> </msub> <mo>)</mo> </mrow> <mo>/</mo> <msub> <mi>T</mi> <mi>H</mi> </msub> </mrow> </semantics> </math>, as predicted by the linearized hydrodynamic equations. The tildes indicate Fourier transforms, and the time <span class="html-italic">s</span> is a dimensionless scale defined from the original one by means of the thermal velocity <math display="inline"> <semantics> <msub> <mi>v</mi> <mrow> <mn>0</mn> <mi>H</mi> </mrow> </msub> </semantics> </math> and the mean free path, Equation (<a href="#FD18-entropy-19-00068" class="html-disp-formula">18</a>). The wavenumber is <math display="inline"> <semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>2</mn> </mrow> </semantics> </math>, while <math display="inline"> <semantics> <mrow> <msub> <mi>k</mi> <mo>⊥</mo> </msub> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>91</mn> </mrow> </semantics> </math>. The initial temperature of the system is larger than its stationary value, so the system is monotonically cooling in time. The values of the initial perturbations of the fields are on the order of <math display="inline"> <semantics> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>3</mn> </mrow> </msup> </semantics> </math>.</p>
Full article ">Figure 6
<p>Snapshot of the positions of the particles in a system of <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>2000</mn> </mrow> </semantics> </math> smooth inelastic hard disks. The parameters defining the system are <math display="inline"> <semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>06</mn> <msup> <mi>σ</mi> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>6</mn> </mrow> </semantics> </math>, and <math display="inline"> <semantics> <mrow> <mo>Δ</mo> <mo>=</mo> <mn>2</mn> <mo>×</mo> <msup> <mn>10</mn> <mrow> <mo>−</mo> <mn>4</mn> </mrow> </msup> <msub> <mi>v</mi> <mn>0</mn> </msub> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>/</mo> <msqrt> <mn>2</mn> </msqrt> </mrow> </semantics> </math>. At the time shown, the average accumulated number of collisions per particle was around 40.</p>
Full article ">Figure 7
<p>Density profile between the two plates. The symbols are simulation results, and the dashed line is the theoretical prediction given in the main text. The number of particles used in the simulations is <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics> </math>, the separation of the plates is <math display="inline"> <semantics> <mrow> <mi>h</mi> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>9</mn> <mi>σ</mi> </mrow> </semantics> </math>, and the average density is given by <math display="inline"> <semantics> <mrow> <mi>N</mi> <msup> <mi>σ</mi> <mn>2</mn> </msup> <mo>/</mo> <mi>A</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>285</mn> </mrow> </semantics> </math>. The plotted dimensionless density is <math display="inline"> <semantics> <mrow> <msub> <mi>n</mi> <mn>2</mn> </msub> <mo>=</mo> <mi>n</mi> <mi>σ</mi> <mi>A</mi> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mover> <mi>z</mi> <mo>¯</mo> </mover> <mo>≡</mo> <mrow> <mo>(</mo> <mn>2</mn> <mi>z</mi> <mo>−</mo> <mi>σ</mi> <mo>)</mo> </mrow> <mo>/</mo> <mn>2</mn> <mi>σ</mi> </mrow> </semantics> </math>.</p>
Full article ">Figure 8
<p>Decay of the vertical, <math display="inline"> <semantics> <msub> <mi>T</mi> <mi>z</mi> </msub> </semantics> </math>, and horizontal, <math display="inline"> <semantics> <msub> <mi>T</mi> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </msub> </semantics> </math>, temperatures towards their equal steady values in a homogeneous system. Time is measured in the dimensionless units indicated in the label, where <math display="inline"> <semantics> <mrow> <msub> <mi>v</mi> <mn>0</mn> </msub> <mo>≡</mo> <msqrt> <mrow> <mn>2</mn> <mi>T</mi> <mo>(</mo> <mn>0</mn> <mo>)</mo> <mo>/</mo> <mi>m</mi> </mrow> </msqrt> </mrow> </semantics> </math>. The solid lines are MD simulation results, while the dashed lines are the theoretical predictions given by Equations (<a href="#FD39-entropy-19-00068" class="html-disp-formula">39</a>) and (<a href="#FD40-entropy-19-00068" class="html-disp-formula">40</a>). Moreover, the two upper lines correspond to the vertical temperature, and the two lower lines to the horizontal temperature.</p>
Full article ">
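">
The qualitative behaviour in Figure 8, where the vertical and horizontal temperatures approach each other and a common steady value, can be illustrated with a schematic linear two-temperature model integrated by a forward Euler step. The coupling and driving rates below are hypothetical placeholders chosen for illustration; they are not the paper's Equations (39) and (40):

```python
def relax_temperatures(Tz, Txy, T_st=1.0, k_exchange=0.5, k_drive=0.2,
                       ds=0.01, steps=2000):
    """Euler integration of a schematic two-temperature model:
    energy exchange between degrees of freedom pulls Tz and Txy
    together, while driving/dissipation pulls both toward the
    steady value T_st.  All rates are hypothetical placeholders."""
    traj = [(0.0, Tz, Txy)]
    for i in range(1, steps + 1):
        dTz = -k_exchange * (Tz - Txy) - k_drive * (Tz - T_st)
        dTxy = -k_exchange * (Txy - Tz) - k_drive * (Txy - T_st)
        Tz += ds * dTz
        Txy += ds * dTxy
        traj.append((i * ds, Tz, Txy))
    return traj

# temperatures in units of the steady temperature; both curves
# approach T_st and each other monotonically
traj = relax_temperatures(Tz=2.0, Txy=0.5)
_, Tz_end, Txy_end = traj[-1]
```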
1492 KiB  
Article
An Android Malicious Code Detection Method Based on Improved DCA Algorithm
by Chundong Wang, Zhiyuan Li, Liangyi Gong, Xiuliang Mo, Hong Yang and Yi Zhao
Entropy 2017, 19(2), 65; https://doi.org/10.3390/e19020065 - 11 Feb 2017
Cited by 5 | Viewed by 7092
Abstract
Recently, Android malicious code has increased dramatically, and reinforcement technology has become increasingly powerful. Due to the development of code obfuscation and polymorphic deformation technology, current Android static detection methods that take the semantics of the application source [...] Read more.
Recently, Android malicious code has increased dramatically, and reinforcement technology has become increasingly powerful. Due to the development of code obfuscation and polymorphic deformation technology, current Android static detection methods that take the semantics of the application source code as their features cannot completely extract the code features of malware. Android static detection methods whose features are obtained only from the AndroidManifest.xml file are easily affected by useless permissions. Therefore, current Android malware static detection methods have some limitations. Most current Android malware dynamic detection algorithms require a customized system or system root permissions. Based on the Dendritic Cell Algorithm (DCA), this paper proposes an Android malware detection algorithm that has a higher detection rate, does not need to modify the system, and reduces the impact of code obfuscation to a certain degree. This algorithm is applied to an Android malware detection method based on an oriented Dalvik disassembly sequence and the application programming interface (API) calling sequence. Through the designed experiments, the effectiveness of this method is verified for the detection of Android malware. Full article
Show Figures

Figure 1

<p>Dalvik reduced instruction sequence (DRIS) letters and instructions mapping table.</p>
Full article ">Figure 2
<p>Dalvik instruction set sequence sample.</p>
Full article ">Figure 3
<p>Dendritic Cell (DC) state transformation.</p>
Full article ">Figure 4
<p>Overall system flow.</p>
Full article ">Figure 5
<p>Training flow.</p>
Full article ">Figure 6
<p>Effects of N-perm parameter N on detection accuracy.</p>
Full article ">Figure 7
<p>Detection flow.</p>
Full article ">Figure 8
<p>Comparison of feature classification results.</p>
Full article ">Figure 9
<p>Comparison of different numbers of malicious samples.</p>
Full article ">
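">
For reference, the signal-fusion core of the classic Dendritic Cell Algorithm that the paper improves on can be sketched as follows: safe and danger signals are combined through a weight matrix into co-stimulation (csm), semi-mature, and mature outputs; once co-stimulation exceeds a migration threshold the cell's dominant context decides the label. The weights and threshold here are illustrative defaults, not the tuned values of the improved algorithm:

```python
def dca_classify(signals, weights=None, threshold=5.0):
    """Minimal Dendritic Cell Algorithm sketch: accumulate
    co-stimulation (csm), semi-mature (semi), and mature (mat)
    outputs over a stream of (safe, danger) signal pairs, then
    label the sample malicious if the mature output dominates."""
    w = weights or {
        "csm":  {"safe": 2.0, "danger": 2.0},
        "semi": {"safe": 3.0, "danger": 0.0},
        "mat":  {"safe": -1.0, "danger": 2.0},
    }
    csm = semi = mat = 0.0
    for safe, danger in signals:
        csm  += w["csm"]["safe"] * safe + w["csm"]["danger"] * danger
        semi += w["semi"]["safe"] * safe + w["semi"]["danger"] * danger
        mat  += w["mat"]["safe"] * safe + w["mat"]["danger"] * danger
        if csm >= threshold:          # cell migrates: decide context
            return "malicious" if mat > semi else "benign"
    return "malicious" if mat > semi else "benign"
```

In the detection method above, the (safe, danger) stream would be derived from features of the Dalvik disassembly sequence and the API calling sequence; how those features are mapped to signals is the paper's contribution and is not reproduced here.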
3717 KiB  
Article
Investigation into Multi-Temporal Scale Complexity of Streamflows and Water Levels in the Poyang Lake Basin, China
by Feng Huang, Xunzhou Chunyu, Yuankun Wang, Yao Wu, Bao Qian, Lidan Guo, Dayong Zhao and Ziqiang Xia
Entropy 2017, 19(2), 67; https://doi.org/10.3390/e19020067 - 10 Feb 2017
Cited by 11 | Viewed by 4880
Abstract
The streamflow and water level complexity of the Poyang Lake basin has been investigated over multiple time-scales using daily observations of the water level and streamflow spanning from 1954 through 2013. The composite multiscale sample entropy was applied to measure the complexity, and the Mann-Kendall algorithm was applied to detect temporal changes in the complexity. The results show that the streamflow and water level complexity increases as the time-scale increases: the sample entropy of the streamflow increases when the time-scale increases from a daily to a seasonal scale, and the sample entropy of the water level increases when the time-scale increases from a daily to a monthly scale. The water outflow of Poyang Lake, which is impacted mainly by the inflow processes, lake regulation, and the streamflow processes of the Yangtze River, is more complex than the water inflow. The streamflow and water level complexity over most of the time-scales, between the daily and monthly scales, is dominated by an increasing trend, indicating enhanced randomness, disorderliness, and irregularity of the streamflows and water levels. This investigation can provide a better understanding of the hydrological features of large freshwater lakes. Ongoing research will analyze the mechanisms of the streamflow and water level complexity changes within the context of climate change and anthropogenic activities. Full article
(This article belongs to the Special Issue Entropy Applications in Environmental and Water Engineering)
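The composite multiscale sample entropy used in this study can be sketched as follows; this is an illustrative implementation of the standard CMSE definition (coarse-grain the series at each scale with every possible offset, then average the sample entropies), not the authors' code:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), Chebyshev distance, self-matches excluded;
    r is a fraction of the series' standard deviation."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def matched_pairs(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        # ordered pairs within tolerance, minus the diagonal, halved
        return (np.sum(d <= tol) - len(templ)) / 2.0

    b, a = matched_pairs(m), matched_pairs(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def composite_mse(x, scale, m=2, r=0.2):
    """Composite multiscale sample entropy: average SampEn over all `scale`
    coarse-grainings (offsets k = 0 .. scale-1) of the series."""
    x = np.asarray(x, float)
    vals = []
    for k in range(scale):
        seg = x[k:]
        seg = seg[:len(seg) // scale * scale].reshape(-1, scale).mean(axis=1)
        vals.append(sample_entropy(seg, m, r))
    return float(np.mean(vals))
```

A regular signal (e.g., a clean sine) yields a much lower sample entropy than white noise, which is the sense in which rising entropy indicates enhanced randomness.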
Show Figures

Figure 1: Locations of Poyang Lake and the hydrometric stations.
Figure 2: Composite multiscale sample entropy of the streamflows of Poyang Lake.
Figure 3: Average sample entropy of the streamflows of Poyang Lake.
Figure 4: Composite multiscale sample entropy of the water levels of Poyang Lake.
Figure 5: Trends in multiscale complexity of the streamflows of Poyang Lake.
Figure 6: Temporal changes in complexity of the daily streamflows of Poyang Lake.
Figure 7: Trends in multiscale complexity of the water levels of Poyang Lake.
Figure 8: Temporal changes in complexity of the daily water levels of Poyang Lake.
246 KiB  
Concept Paper
Discussing Landscape Compositional Scenarios Generated with Maximization of Non-Expected Utility Decision Models Based on Weighted Entropies
by José Pinto Casquilho and Francisco Castro Rego
Entropy 2017, 19(2), 66; https://doi.org/10.3390/e19020066 - 10 Feb 2017
Cited by 8 | Viewed by 5606
Abstract
The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning, and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered a potential safeguard to the provision of services and externalities not accounted for in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies incorporating rarity factors associated with the Gini-Simpson and Shannon measures, respectively. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square-root utility function, corresponding to risk-averse behavior associated with the precautionary principle linked to safeguarding landscape diversity, anchoring the provision of ecosystem services and other externalities. Further developments are suggested, mainly relative to the hypothesis that the decision models outlined here could be used to revisit the stability-complexity debate in the field of ecological studies. Full article
(This article belongs to the Special Issue Entropy in Landscape Ecology)
4069 KiB  
Article
Bullwhip Entropy Analysis and Chaos Control in the Supply Chain with Sales Game and Consumer Returns
by Wandong Lou, Junhai Ma and Xueli Zhan
Entropy 2017, 19(2), 64; https://doi.org/10.3390/e19020064 - 10 Feb 2017
Cited by 13 | Viewed by 4786
Abstract
In this paper, we study a supply chain system which consists of one manufacturer and two retailers: a traditional retailer and an online retailer. In order to gain a larger market share, the retailers often take sales as a decision-making variable in the competition game. We analyze the bullwhip effect in the supply chain with a sales game and consumer returns via the theory of entropy and complexity, and apply the delayed feedback control method to control the system's chaotic state. The impact of a statutory 7-day no-reason-for-return policy for online retailers is also investigated. Bounded rational expectation is adopted to forecast future demand in the sales game system with weak noise. Our results show that high return rates hurt the profits of both retailers and that the adjustment speed of the bounded rational sales expectation has an important impact on the bullwhip effect. There is a stable area for retailers where the bullwhip effect does not appear. The supply chain system suffers a severe bullwhip effect in the quasi-periodic state and the quasi-chaotic state. The purpose of chaos control on the sales game can be achieved, and the bullwhip effect is effectively mitigated by using the delayed feedback control method. Full article
(This article belongs to the Section Complexity)
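The delayed feedback control method applied here adds a control term proportional to the difference between delayed and current states. A minimal sketch on the logistic map (a stand-in for the sales-game map, which is not reproduced here; the gain K = 0.6 is illustrative):

```python
def logistic_dfc(r=3.8, K=0.0, x0=0.7, n=300):
    """Logistic map x -> r x (1 - x) with delayed feedback control
    u_t = K (x_t - x_{t-1}); K = 0 gives the uncontrolled (chaotic) map."""
    x_prev = x = x0
    traj = []
    for _ in range(n):
        x_next = r * x * (1.0 - x) + K * (x - x_prev)
        x_prev, x = x, x_next
        traj.append(x)
    return traj

free = logistic_dfc(K=0.0)        # chaotic at r = 3.8
controlled = logistic_dfc(K=0.6)  # feedback stabilizes the fixed point 1 - 1/r
```

Linearizing about the fixed point x* = 1 - 1/r shows the controlled eigenvalues have modulus sqrt(K) ≈ 0.77 < 1 for K = 0.6, so the orbit settles where the free map stays chaotic.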
Show Figures

Figure 1: Supply chain model.
Figure 2: (a) Stability region; and (b) basin of attraction of system.
Figure 3: Parameter basin.
Figure 4: Bifurcation diagram of the deterministic system.
Figure 5: (a) Quasi-bifurcation; (b) largest Lyapunov exponent; and (c) entropy diagram.
Figure 6: (a) The attractor of the deterministic system; and (b) the quasi-chaos attractor of the system with weak noise.
Figure 7: The optimal profits with respect to the return rate.
Figure 8: (a) Timing diagram of the traditional retailer's bullwhip effect; (b) timing diagram of the online retailer's bullwhip effect.
Figure 9: (a) Effect of β on the traditional retailer's OVR; (b) effect of β on the online retailer's OVR.
Figure 10: (a) Quasi-bifurcation diagram; and (b) the online retailer's bullwhip effect with K = 0.5, α = 1.5 and β ∈ [0, 6.5].
Figure 11: (a) Quasi-bifurcation diagram; and (b) the online retailer's bullwhip effect with α = 1.5 and β = 4.15, K ∈ (0, 1).
Figure 12: Entropy diagram of the online retailer with α = 1.5 and β = 4.15, K ∈ (0, 1).
12388 KiB  
Article
Response Surface Methodology Control Rod Position Optimization of a Pressurized Water Reactor Core Considering Both High Safety and Low Energy Dissipation
by Yi-Ning Zhang, Hao-Chun Zhang, Hai-Yan Yu and Chao Ma
Entropy 2017, 19(2), 63; https://doi.org/10.3390/e19020063 - 10 Feb 2017
Cited by 5 | Viewed by 5336
Abstract
Response Surface Methodology (RSM) is introduced to optimize the control rod positions in a pressurized water reactor (PWR) core. The widely used 3D-IAEA benchmark problem is selected as the typical PWR core and the neutron flux field is solved. In addition, some thermal parameters are assumed in order to obtain the temperature distribution. Then the total and local entropy production is calculated to evaluate the energy dissipation. Using RSM, three directions of optimization are taken, aiming to minimize the power peak factor Pmax, the peak temperature Tmax and the total entropy production Stot. These parameters reflect the safety and energy dissipation in the core. Finally, an optimization scheme was obtained, which reduced Pmax, Tmax and Stot by 23%, 8.7% and 16%, respectively. The optimization results are satisfactory. Full article
(This article belongs to the Special Issue Advances in Applied Thermodynamics II)
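The RSM step fits a second-order polynomial response surface to sampled designs and then locates its stationary point. A minimal two-variable sketch (the test function is illustrative, not the reactor model):

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of the second-order model
    y ≈ b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def stationary_point(beta):
    """Point where the fitted surface's gradient vanishes (candidate optimum)."""
    _, b1, b2, b11, b22, b12 = beta
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(H, -np.array([b1, b2]))

# Illustrative response sampled on a design grid (not the reactor model)
xs = np.linspace(-2.0, 2.0, 5)
X = np.array([[u, v] for u in xs for v in xs])
y = (X[:, 0] - 1.0) ** 2 + 2.0 * (X[:, 1] + 0.5) ** 2
opt = stationary_point(fit_quadratic_surface(X, y))  # ≈ (1.0, -0.5)
```

In practice each objective (Pmax, Tmax, Stot) gets its own fitted surface, and the stationary points guide the choice of rod positions.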
Show Figures

Graphical abstract
Figure 1: Horizontal cross section of the 3D-IAEA problem.
Figure 2: Vertical cross section of the 3D-IAEA problem.
Figure 3: Fast neutron flux at the diagonal line at the level of z = 195 cm.
Figure 4: Thermal neutron flux at the diagonal line at the level of z = 195 cm.
Figure 5: Local power distribution of standard problem (left: vertical cross section cloud picture at y = 0; right-top: horizontal cross section cloud picture at z = 315 cm; right-bottom: horizontal cross section cloud picture at z = 195 cm. P_avg represents the average local power of standard problem).
Figure 6: Temperature distribution of standard problem (left: vertical cross section cloud picture at y = 0; right-top: horizontal cross section cloud picture at z = 315 cm; right-bottom: horizontal cross section cloud picture at z = 195 cm).
Figure 7: Local entropy production distribution of standard problem (left: vertical cross section cloud picture at y = 0; right-top: horizontal cross section cloud picture at z = 315 cm; right-bottom: horizontal cross section cloud picture at z = 195 cm. S_0 represents the average local entropy production of standard problem).
Figure 8: Flow diagram of optimization procedure.
Figure 9: Local power distribution of optimization scheme (left: vertical cross section cloud picture at y = 0; right-top: horizontal cross section cloud picture at z = 315 cm; right-bottom: horizontal cross section cloud picture at z = 195 cm. P_avg represents the average local power of standard problem).
Figure 10: Temperature distribution of optimization scheme (left: vertical cross section cloud picture at y = 0; right-top: horizontal cross section cloud picture at z = 315 cm; right-bottom: horizontal cross section cloud picture at z = 195 cm).
Figure 11: Local entropy production distribution of optimization scheme (left: vertical cross section cloud picture at y = 0; right-top: horizontal cross section cloud picture at z = 315 cm; right-bottom: horizontal cross section cloud picture at z = 195 cm. S_0 represents the average local entropy production of standard problem).
Figure 12: Local power, temperature and local entropy production of the standard problem and optimization scheme at the diagonal line on the midplane (the level of z = 195 cm).
163 KiB  
Editorial
Complex and Fractional Dynamics
by J. A. Tenreiro Machado and António M. Lopes
Entropy 2017, 19(2), 62; https://doi.org/10.3390/e19020062 - 8 Feb 2017
Cited by 8 | Viewed by 3764
Abstract
Complex systems (CS) are pervasive in many areas, namely financial markets; highway transportation; telecommunication networks; world and country economies; social networks; immunological systems; living organisms; computational systems; and electrical and mechanical structures. CS are often composed of a large number of interconnected and interacting entities exhibiting much richer global scale dynamics than could be inferred from the properties and behavior of individual elements. [...]
Full article
(This article belongs to the Special Issue Complex and Fractional Dynamics)
627 KiB  
Editorial
Computational Complexity
by J. A. Tenreiro Machado and António M. Lopes
Entropy 2017, 19(2), 61; https://doi.org/10.3390/e19020061 - 7 Feb 2017
Cited by 1 | Viewed by 3543
Abstract
Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...]
Full article
(This article belongs to the Special Issue Computational Complexity)
299 KiB  
Article
Nonlinear Wave Equations Related to Nonextensive Thermostatistics
by Angel R. Plastino and Roseli S. Wedemann
Entropy 2017, 19(2), 60; https://doi.org/10.3390/e19020060 - 7 Feb 2017
Cited by 12 | Viewed by 5268
Abstract
We advance two nonlinear wave equations related to the nonextensive thermostatistical formalism based upon the power-law nonadditive S q entropies. Our present contribution is in line with recent developments, where nonlinear extensions inspired on the q-thermostatistical formalism have been proposed for the Schroedinger, Klein–Gordon, and Dirac wave equations. These previously introduced equations share the interesting feature of admitting q-plane wave solutions. In contrast with these recent developments, one of the nonlinear wave equations that we propose exhibits real q-Gaussian solutions, and the other one admits exponential plane wave solutions modulated by a q-Gaussian. These q-Gaussians are q-exponentials whose arguments are quadratic functions of the space and time variables. The q-Gaussians are at the heart of nonextensive thermostatistics. The wave equations that we analyze in this work illustrate new possible dynamical scenarios leading to time-dependent q-Gaussians. One of the nonlinear wave equations considered here is a wave equation endowed with a nonlinear potential term, and can be regarded as a nonlinear Klein–Gordon equation. The other equation we study is a nonlinear Schroedinger-like equation. Full article
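The q-exponentials and q-Gaussians central to this formalism can be evaluated directly; a small sketch (unnormalized, with the usual cutoff [·]_+ for q < 1; beta is an illustrative width parameter):

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential exp_q(x) = [1 + (1-q) x]_+^{1/(1-q)};
    reduces to exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, float)
    return np.where(base > 0.0, base, 0.0) ** (1.0 / (1.0 - q))

def q_gaussian(x, q, beta=1.0):
    """Unnormalized q-Gaussian exp_q(-beta x^2): a q-exponential whose
    argument is quadratic in x; compactly supported for q < 1."""
    return q_exp(-beta * np.asarray(x, float) ** 2, q)
```

For q close to 1 the q-Gaussian approaches the ordinary Gaussian, consistent with the classical limits of the wave equations above.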
Show Figures

Figure 1: Plot of the q-Gaussian solution ψ_qG of the nonlinear wave Equation (9), as a function of the quantity z = λ_0 + λ_t t − λ_x x. All depicted quantities are dimensionless.
Figure 2: Plot of W(ψ)/η against the wave amplitude ψ, for different q-values. The potential W is given by expression (17). All depicted quantities are dimensionless.
Figure 3: (a) Plot of the real part ψ_a of the complex wave function (28) as a function of z; (b) plot of the imaginary part ψ_b of the complex wave function (28) as a function of z. Both figures correspond to q = 3/4. All depicted quantities are dimensionless.
472 KiB  
Article
On the Binary Input Gaussian Wiretap Channel with/without Output Quantization
by Chao Qi, Yanling Chen and A. J. Han Vinck
Entropy 2017, 19(2), 59; https://doi.org/10.3390/e19020059 - 4 Feb 2017
Cited by 2 | Viewed by 4195
Abstract
In this paper, we investigate the effect of output quantization on the secrecy capacity of the binary-input Gaussian wiretap channel. As a result, a closed-form expression with infinite summation terms for the secrecy capacity of the binary-input Gaussian wiretap channel is derived for the case when both the legitimate receiver and the eavesdropper have unquantized outputs. In particular, computable tight upper and lower bounds on the secrecy capacity are obtained. Theoretically, we prove that when the legitimate receiver has unquantized outputs while the eavesdropper has binary quantized outputs, the secrecy capacity is larger than when both the legitimate receiver and the eavesdropper have unquantized outputs or both have binary quantized outputs. Further, numerical results show that in the low signal-to-noise ratio (SNR) region (of the main channel), the secrecy capacity of the binary-input Gaussian wiretap channel when both the legitimate receiver and the eavesdropper have unquantized outputs is larger than the capacity when both have binary quantized outputs; as the SNR increases, the secrecy capacity when both have binary quantized outputs tends to overtake it. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
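The unquantized-output secrecy capacity discussed above reduces, for a degraded binary-input Gaussian wiretap channel, to the difference of two BPSK-over-AWGN capacities. A numerical sketch (the quadrature grid and the degraded-channel assumption are ours, not the paper's exact bounds):

```python
import numpy as np

def bpsk_awgn_capacity(snr):
    """Capacity (bits/channel use) of binary (BPSK) input over AWGN at
    linear SNR a^2: C = 1 - E[log2(1 + exp(-2 a Y))], Y = a + N, N ~ N(0,1),
    evaluated by averaging over a dense standard-normal grid."""
    a = np.sqrt(snr)
    z = np.linspace(-8.0, 8.0, 801)
    w = np.exp(-z**2 / 2.0)
    w /= w.sum()                      # normalized quadrature weights
    t = -2.0 * a * (a + z)
    # logaddexp keeps log2(1 + e^t) stable for large |t|
    return 1.0 - np.sum(w * np.logaddexp(0.0, t)) / np.log(2.0)

def secrecy_capacity(snr_main, snr_eve):
    """C_s for the degraded binary-input Gaussian wiretap channel:
    difference of main- and eavesdropper-channel capacities, floored at 0."""
    return max(0.0, bpsk_awgn_capacity(snr_main) - bpsk_awgn_capacity(snr_eve))
```

The capacity saturates at 1 bit/use at high SNR, which is why the binary-quantized-output case can catch up as the SNR grows.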
Show Figures

Figure 1: The Gaussian channel with binary inputs and quantized/unquantized outputs.
Figure 2: The binary-input Gaussian wiretap channel.
Figure 3: Lower and upper bounds on C_SS as Δγ = γ_1/2.
Figure 4: Bounds on secrecy capacities of binary-input Gaussian wiretap channel (BI-GWC) as Δγ = γ_1/2. SNR: signal-to-noise ratio.
Figure 5: C_B^(6)(γ_1), C_H(γ_1), C_SS^(6), C_HH, with Δγ = γ_1/2.
Figure 6: C_B^(6), C_H, C_SS^(6), C_HH by SNR/bit, with Δγ = γ_1/2, 3γ_1/4.
289 KiB  
Article
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks†
by Steven H. Waldrip and Robert K. Niven
Entropy 2017, 19(2), 58; https://doi.org/10.3390/e19020058 - 2 Feb 2017
Cited by 7 | Viewed by 5448
Abstract
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation. Full article
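The Bayesian side of the comparison, for a Gaussian prior and a soft linear (e.g., continuity) constraint, is a standard Gaussian update; per the paper, the MaxEnt method with soft prior constraints yields the same posterior mean. A minimal sketch (the three-pipe network and numbers are illustrative):

```python
import numpy as np

def gaussian_update(mu0, S0, H, y, R):
    """Bayesian update of a Gaussian prior N(mu0, S0) with data y = H x + eps,
    eps ~ N(0, R); returns the posterior mean and covariance."""
    K = S0 @ H.T @ np.linalg.inv(H @ S0 @ H.T + R)
    mu = mu0 + K @ (y - H @ mu0)
    S = (np.eye(len(mu0)) - K @ H) @ S0
    return mu, S

# Three pipe flow rates with a soft continuity constraint q1 - q2 - q3 ≈ 0
mu0 = np.array([10.0, 4.0, 3.0])        # prior mean flows (illustrative units)
S0 = np.eye(3)                          # prior covariance
H = np.array([[1.0, -1.0, -1.0]])       # node continuity as a linear map
mu, S = gaussian_update(mu0, S0, H, np.array([0.0]), np.array([[1e-6]]))
```

The prior imbalance of 3 units is redistributed equally (posterior mean near (9, 5, 4)), and the posterior covariance shrinks along the constraint direction; only the covariance structure would differ under the MaxEnt formulations.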
1215 KiB  
Article
The Second Law: From Carnot to Thomson-Clausius, to the Theory of Exergy, and to the Entropy-Growth Potential Principle
by Lin-Shu Wang
Entropy 2017, 19(2), 57; https://doi.org/10.3390/e19020057 - 28 Jan 2017
Cited by 9 | Viewed by 7604
Abstract
At its origins, thermodynamics was the study of heat and engines. Carnot transformed it into a scientific discipline by explaining engine power in terms of transfer of “caloric”. That idea became the second law of thermodynamics when Thomson and Clausius reconciled Carnot’s theory with Joule’s conflicting thesis that power was derived from the consumption of heat, which was determined to be a form of energy. Eventually, Clausius formulated the second law as the universal entropy growth principle: the synthesis of transfer vs. consumption led to what became known as the mechanical theory of heat (MTH). However, by making universal-interconvertibility the cornerstone of MTH, their synthesis project was a defective one, which precluded MTH from developing the full expression of the second law. This paper reiterates that universal-interconvertibility is demonstrably false—as the case has been made by many others—by clarifying the true meaning of the mechanical equivalent of heat. It then presents a two-part formulation of the second law: the universal entropy growth principle, as well as a new principle that no change in Nature happens without entropy growth potential. With the new principle as its cornerstone replacing universal-interconvertibility, thermodynamics transcends the defective MTH and becomes a coherent conceptual system. Full article
(This article belongs to the Section Thermodynamics)
Show Figures

Figure 1: The mechanical theory of heat as formulated by Thomson and Clausius based on the understanding of heat being a “dynamical form of mechanical effect” and, correspondingly, the principle of universal inter-convertibility.
Figure 2: The predicative entropic theory of heat: by refuting universal inter-convertibility, which is replaced with the principle of entropy growth potential, thermodynamics is liberated from the problematic inferences of the doctrine, “heat is a form of energy”. There was precedence in the science of motion: Aristotle had approached it by starting off with its definition, whereas Galileo and Newton formulated the laws of motion without a definition of motion itself but with definitions for position, velocity, acceleration, force, momentum, etc. It is unexceptional, therefore, not to insist on defining ‘heat’ (as Thomson and Clausius did) but defining, instead, heat (Q), energy, temperature, entropy, entropy growth, etc. What matters is an understanding of ‘heat’ in a coherent conceptual system.
1849 KiB  
Article
Bateman–Feshbach Tikochinsky and Caldirola–Kanai Oscillators with New Fractional Differentiation
by Antonio Coronel-Escamilla, José Francisco Gómez-Aguilar, Dumitru Baleanu, Teodoro Córdova-Fraga, Ricardo Fabricio Escobar-Jiménez, Victor H. Olivares-Peregrino and Maysaa Mohamed Al Qurashi
Entropy 2017, 19(2), 55; https://doi.org/10.3390/e19020055 - 28 Jan 2017
Cited by 58 | Viewed by 5024
Abstract
In this work, the study of the fractional behavior of the Bateman–Feshbach–Tikochinsky and Caldirola–Kanai oscillators by using different fractional derivatives is presented. We obtained the Euler–Lagrange and the Hamiltonian formalisms in order to represent the dynamic models based on the Liouville–Caputo, Caputo–Fabrizio–Caputo and the new fractional derivative based on the Mittag–Leffler kernel with arbitrary order α. Simulation results are presented in order to show the fractional behavior of the oscillators, and the classical behavior is recovered when α is equal to 1. Full article
(This article belongs to the Special Issue Wavelets, Fractals and Information Theory II)
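The abstract above notes that the classical behavior is recovered when α equals 1. As an illustrative sketch only (not the authors' implementation, which uses the Liouville–Caputo, Caputo–Fabrizio–Caputo and Mittag–Leffler-kernel derivatives), the Grünwald–Letnikov scheme below approximates a fractional derivative of order α numerically; the function name and step size are assumptions for this example.

```python
def gl_fractional_derivative(f, t, alpha, h=1e-4):
    """Grunwald-Letnikov approximation of the order-alpha derivative of f at t.

    The generalized binomial coefficients c_k = (-1)^k * C(alpha, k) are built
    recursively. For alpha == 1 every c_k with k >= 2 vanishes, so the sum
    reduces to the classical backward difference (f(t) - f(t - h)) / h.
    """
    n = int(t / h)
    c = 1.0      # c_0
    acc = 0.0
    for k in range(n + 1):
        acc += c * f(t - k * h)
        c *= 1.0 - (alpha + 1.0) / (k + 1.0)   # recurrence c_{k+1} from c_k
    return acc / h**alpha

# alpha = 1 recovers the ordinary derivative: d/dt t^2 = 2 at t = 1
print(gl_fractional_derivative(lambda t: t * t, 1.0, 1.0))
# half-derivative of f(t) = t; compare with 2*sqrt(t/pi) ~ 1.1284 at t = 1
print(gl_fractional_derivative(lambda t: t, 1.0, 0.5))
```

The first-order scheme converges as the step h shrinks, and the α = 1 case collapsing to a backward difference mirrors the abstract's statement that the classical behavior is recovered when α is equal to 1.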
Show Figures

Figure 1
<p>Numerical evaluation of (<a href="#FD19-entropy-19-00055" class="html-disp-formula">19</a>), in (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>; in (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics> </math>; in (<b>c</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.90</mn> </mrow> </semantics> </math>; and (<b>d</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.85</mn> </mrow> </semantics> </math>.</p>
Figure 2">
Figure 2
<p>Numerical evaluation of (<a href="#FD20-entropy-19-00055" class="html-disp-formula">20</a>), in (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>; in (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics> </math>; in (<b>c</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.90</mn> </mrow> </semantics> </math>; and (<b>d</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.85</mn> </mrow> </semantics> </math>.</p>
Figure 3">
Figure 3
<p>Numerical evaluation of (<a href="#FD23-entropy-19-00055" class="html-disp-formula">23</a>), in (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>; in (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.95</mn> </mrow> </semantics> </math>; in (<b>c</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.90</mn> </mrow> </semantics> </math>; and (<b>d</b>) <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.85</mn> </mrow> </semantics> </math>.</p>
Figure 4">
Figure 4
<p>Numerical evaluation of (<a href="#FD32-entropy-19-00055" class="html-disp-formula">32</a>), in (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>3</mn> <mi>t</mi> </mrow> </semantics> </math>; in (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>2</mn> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </semantics> </math>; in (<b>c</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>3</mn> <mi>t</mi> <mo>+</mo> <mn>2</mn> </mrow> </semantics> </math>; and (<b>d</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mi>t</mi> <mo>−</mo> <mn>1</mn> </mrow> </semantics> </math>.</p>
Figure 5">
Figure 5
<p>Numerical evaluation of (<a href="#FD33-entropy-19-00055" class="html-disp-formula">33</a>), in (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>3</mn> <mi>t</mi> </mrow> </semantics> </math>; in (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>2</mn> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </semantics> </math>; in (<b>c</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>3</mn> <mi>t</mi> <mo>+</mo> <mn>2</mn> </mrow> </semantics> </math>; and (<b>d</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mi>t</mi> <mo>−</mo> <mn>1</mn> </mrow> </semantics> </math>.</p>
Figure 6">
Figure 6
<p>Numerical evaluation of (<a href="#FD34-entropy-19-00055" class="html-disp-formula">34</a>), in (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>3</mn> <mi>t</mi> </mrow> </semantics> </math>; in (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>2</mn> <mi>t</mi> <mo>+</mo> <mn>1</mn> </mrow> </semantics> </math>; in (<b>c</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mn>3</mn> <mi>t</mi> <mo>+</mo> <mn>2</mn> </mrow> </semantics> </math>; and (<b>d</b>) <math display="inline"> <semantics> <mrow> <mi>ω</mi> <mo>(</mo> <mi>t</mi> <mo>)</mo> <mo>=</mo> <mi>t</mi> <mo>−</mo> <mn>1</mn> </mrow> </semantics> </math>.</p>
">