Entropy, Volume 15, Issue 9 (September 2013) – 33 articles, Pages 3312–3982

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Methods of Evaluating Thermodynamic Properties of Landscape Cover Using Multispectral Reflected Radiation Measurements by the Landsat Satellite
by Yuriy Puzachenko, Robert Sandlersky and Alexey Sankovski
Entropy 2013, 15(9), 3970-3982; https://doi.org/10.3390/e15093970 - 23 Sep 2013
Cited by 21 | Viewed by 5879
Abstract
The paper discusses methods of evaluating thermodynamic properties of landscape cover based on multi-spectral measurements by the Landsat satellites. The authors demonstrate how these methods can be used for studying the functioning of landscapes and for spatial interpolation of FLUXNET system measurements. Full article
(This article belongs to the Special Issue Exergy: Analysis and Applications)
Figures:
Figure 1: (a) First factor: dark color—minimal entropy, maximal information increment and biological productivity; (b) second factor: dark color—maximal surface temperature, minimal exergy.
Figure 2: Seasonal changes in exergy and temperature. Forests: 1—exergy, 2—temperature. Meadows: 3—exergy, 4—temperature. Bogs: 5—exergy, 6—temperature.
Figure 3: Seasonal changes in entropy and increment of information. Forest: 1—entropy, 2—increment of information. Meadow: 3—entropy, 4—increment of information. Bog: 5—entropy, 6—increment of information.
Figure 4: Seasonal dynamics of NDVI: 1—forests, 2—meadows, 3—bogs.
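As background to the entropy and "increment of information" plotted in Figures 1 and 3, remote-sensing studies of this kind typically compute them per pixel from the normalized brightness of the spectral bands. The expressions below are a hedged illustration of that idea, not necessarily the authors' exact formulation.

```latex
% Illustrative (assumed) per-pixel definitions for an n-band Landsat scene,
% with e_i the reflected brightness measured in band i:
\[
  p_i = \frac{e_i}{\sum_{j=1}^{n} e_j}, \qquad
  S = -\sum_{i=1}^{n} p_i \ln p_i, \qquad
  \Delta I = \ln n - S .
\]
```

Here S is maximal (ln n) for a spectrally uniform pixel, and the increment of information ΔI grows as the landscape cover absorbs and redistributes the incoming radiation between bands.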
Review
Molecular Dynamics at Constant Pressure: Allowing the System to Control Volume Fluctuations via a “Shell” Particle
by Mark J. Uline and David S. Corti
Entropy 2013, 15(9), 3941-3969; https://doi.org/10.3390/e15093941 - 23 Sep 2013
Cited by 21 | Viewed by 6543
Abstract
Since most experimental observations are performed at constant temperature and pressure, the isothermal-isobaric (NPT) ensemble has been widely used in molecular simulations. Nevertheless, the NPT ensemble has only recently been placed on a rigorous foundation. The proper formulation of the NPT ensemble requires a “shell” particle to uniquely identify the volume of the system, thereby avoiding the redundant counting of configurations. Here, we review our recent work in incorporating a shell particle into molecular dynamics simulation algorithms to generate the correct NPT ensemble averages. Unlike previous methods, a piston of unknown mass is no longer needed to control the response time of the volume fluctuations. As the volume of the system is attached to the shell particle, the system itself now sets the time scales for volume and pressure fluctuations. Finally, we discuss a number of tests that ensure the equations of motion sample phase space correctly and consider the response time of the system to pressure changes with and without the shell particle. Overall, the shell particle algorithm is an effective simulation method for studying systems exposed to a constant external pressure and may provide an advantage over other existing constant pressure approaches when developing nonequilibrium molecular dynamics methods. Full article
(This article belongs to the Special Issue Molecular Dynamics Simulation)
Figures:
Figure 1: One configuration of N particles within a total volume V, showing how to uniquely define the volume state of n particles (shaded circles); the unshaded circles are the surrounding N − n bath particles. After a reference point r_c in V is chosen as the origin, several volumes centered at r_c still enclose the n particles and therefore include common configurations. The exact volume v of the n particles is defined by the shell particle farthest from r_c, which resides in the shell dv encapsulating v. (Adapted from Figure 2 of reference [14].)
Figure 2: Time response of the internal pressure of the pure-component Lennard-Jones fluid to a sudden change of the external pressure from P* = 1.0 to P* = 2.0 and back to P* = 1.0 (raised after 2000 time steps, lowered after another 4000). Solid line: set external pressure P; dashed lines: simulation results; in all cases T* = 2.0 and N = 500. Upper panels: shell molecule with the Nosé-Hoover thermostat (left) and the configurational temperature thermostat (right); lower panels: Hoover algorithm with M_p* = 10.0 (left) and M_p* = 5.0 (right).
Figure 3: Time response of the internal pressure to an isothermal compression from P* = 1.0 to P* = 4.0 in increments of 1.0 unit of reduced pressure every 2000 time steps (pressure raised after 2000, 4000 and 6000 time steps). Lines, conditions and panel layout as in Figure 2.
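For background, the conventional isothermal-isobaric partition function can be written as below; this is a textbook expression, not the shell-particle formulation reviewed in the paper. The review's point is that, unless a shell particle pins down which volume belongs to a given configuration, the volume integral counts configurations redundantly.

```latex
% Textbook form of the NPT partition function (Q is the canonical partition function):
\[
  \Delta(N, P, T) \;\propto\; \int_{0}^{\infty} dV \, e^{-\beta P V}\, Q(N, V, T),
  \qquad \beta = \frac{1}{k_{B} T} .
\]
```

The shell-particle construction (Figure 1) fixes how the volume v is assigned to a configuration of the n particles, so each configuration is counted exactly once.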
Article
Solutions of Some Nonlinear Diffusion Equations and Generalized Entropy Framework
by Ervin K. Lenzi, Maike A. F. Dos Santos, Flavio S. Michels, Renio S. Mendes and Luiz R. Evangelista
Entropy 2013, 15(9), 3931-3940; https://doi.org/10.3390/e15093931 - 18 Sep 2013
Cited by 5 | Viewed by 6016
Abstract
We investigate solutions of a generalized diffusion equation that contains nonlinear terms in the presence of external forces and reaction terms. The solutions found here can have a compact or long-tail behavior and can be expressed in terms of the q-exponential functions present in the Tsallis framework. In the case of the long-tailed behavior, in the asymptotic limit, these solutions can also be connected with the Lévy distributions. In addition, from the results presented here, a rich class of diffusive processes, including normal and anomalous ones, can be obtained. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
Figures:
Figure 1: Behavior of Equation (7) versus r for different values of q, θ and η in the absence of an absorbent (source) term, taking for simplicity D̄ = 1 and V(r) = kr²/2 with k = 1. Red dashed and black solid lines: q = 1/2, θ = 1, η = 1/2 and q = 1/3, θ = −1, η = 1; green dotted and red dash-dotted lines: q = 6/5, θ = 1, η = 1/2 and q = 6/5, θ = −1, η = 1/3.
Figure 2: Regions where the mean square displacement shows usual or anomalous behavior depending on the values of q and η, for simplicity with d = 1 and λ = 2.
Figure 3: Behavior of Z(0)β^(d/λ)(0) / (Z(t)β^(d/λ)(t)) versus t for typical values of q, η and θ in the presence of the reaction term, with α(t) = αe^(−t) and α_γ(t) = α_γ e^(−t), taking α = 1 and α_γ = 1. Red dotted and solid black lines: q = 1/2 and q = 6/5 with θ = 1 and η = 1/2.
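The q-exponential mentioned in the abstract is the standard deformation of the exponential used in the Tsallis framework:

```latex
\[
  \exp_q(x) \;=\; \bigl[\,1 + (1-q)\,x\,\bigr]_{+}^{1/(1-q)},
  \qquad
  \lim_{q \to 1} \exp_q(x) = e^{x},
\]
```

where [y]_+ = max(y, 0). For q > 1 the function decays as a power law of its argument, which underlies the long-tailed (Lévy-like) solutions, while for q < 1 the corresponding distributions have compact support.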
Article
Permutation Complexity and Coupling Measures in Hidden Markov Models
by Taichi Haruna and Kohei Nakajima
Entropy 2013, 15(9), 3910-3930; https://doi.org/10.3390/e15093910 - 16 Sep 2013
Cited by 6 | Viewed by 4818
Abstract
Recently, the duality between values (words) and orderings (permutations) has been proposed by the authors as a basis to discuss the relationship between information theoretic measures for finite-alphabet stationary stochastic processes and their permutation analogues. It has been used to give a simple proof of the equality between the entropy rate and the permutation entropy rate for any finite-alphabet stationary stochastic process and to show some results on the excess entropy and the transfer entropy for finite-alphabet stationary ergodic Markov processes. In this paper, we extend our previous results to hidden Markov models and show the equalities between various information theoretic complexity and coupling measures and their permutation analogues. In particular, we show the following two results within the realm of hidden Markov models with ergodic internal processes: the two permutation analogues of the transfer entropy, the symbolic transfer entropy and the transfer entropy on rank vectors, are both equivalent to the transfer entropy if they are considered as rates, and the directed information theory can be captured by the permutation entropy approach. Full article
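For orientation, the permutation entropy rate compared with the ordinary entropy rate here is, in the usual Bandt-Pompe setting, defined as follows (a standard definition, not a result of this paper):

```latex
\[
  h^{*} \;=\; \lim_{L \to \infty} \frac{H^{*}(L)}{L},
  \qquad
  H^{*}(L) \;=\; -\sum_{\pi \in S_{L}} p(\pi)\,\ln p(\pi),
\]
```

where p(π) is the probability that L consecutive values of the process are ordered according to the permutation π. The equality h* = h with the ordinary entropy rate, proved earlier for finite-alphabet stationary processes, is what the paper extends to hidden Markov models with ergodic internal processes.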
Article
Analysis and Visualization of Seismic Data Using Mutual Information
by José A. Tenreiro Machado and António M. Lopes
Entropy 2013, 15(9), 3892-3909; https://doi.org/10.3390/e15093892 - 16 Sep 2013
Cited by 42 | Viewed by 6555
Abstract
Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 up to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, using clustering analysis, visualization maps are generated, providing an intuitive and useful representation of the complex relationships present in the seismic data. Such relationships might not be perceived on classical geographic maps, so the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes. Full article
(This article belongs to the Special Issue Dynamical Systems)
Figures:
Figure 1: K-means clustering of all F-E regions and Voronoi cells, based on the (a, b) parameters of the G-R law. Time period 1962–2011; events with magnitude M ≥ 4.5.
Figure 2: Silhouette corresponding to the K-means clustering of all F-E regions, based on the (a, b) parameters of the G-R law (1962–2011, M ≥ 4.5).
Figure 3: Geographical map of the F-E regions with the same colour map used in Figure 1 (green lines correspond to tectonic faults).
Figure 4: Mutual information represented as a contour map: (a) all F-E regions; (b) F-E regions #35, #49 and #50 deleted. Time period 1962–2011.
Figure 5: Circular phylogram, based on mutual information, comparing F-E regions: (a) all F-E regions; (b) regions #35, #49 and #50 deleted. Time period 1962–2011.
Figure 6: Regional variation of G-R parameter a on a 14 × 14 rectangular grid; events with M ≥ 4.5, 1962–2011.
Figure 7: Regional variation of G-R parameter b on the same 14 × 14 grid (M ≥ 4.5, 1962–2011).
Figure 8: Contour plot of the mutual information on the 14 × 14 grid (M ≥ 4.5, 1962–2011).
Figure 9: Circular phylogram based on mutual information on the 14 × 14 grid (M ≥ 4.5, 1962–2011).
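For reference, the (a, b) parameters that recur in the figure captions are those of the Gutenberg-Richter law, quoted here in its standard form:

```latex
\[
  \log_{10} N(M) \;=\; a - b\,M ,
\]
```

where N(M) is the number of events with magnitude at least M in a given region and time window; a measures the overall seismicity level and b the relative frequency of large versus small earthquakes.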
Article
Blind Demodulation of Chaotic Direct Sequence Spread Spectrum Signals Based on Particle Filters
by Ting Li, Dexin Zhao, Zhiping Huang, Chunwu Liu, Shaojing Su and Yimeng Zhang
Entropy 2013, 15(9), 3877-3891; https://doi.org/10.3390/e15093877 - 13 Sep 2013
Cited by 6 | Viewed by 5473
Abstract
Applying the particle filter (PF) technique, this paper proposes a PF-based algorithm to blindly demodulate chaotic direct sequence spread spectrum (CDS-SS) signals under colored or non-Gaussian noise conditions. To implement this algorithm, the PFs are modified in two ways: (i) the colored or non-Gaussian noises are formulated by autoregressive moving average (ARMA) models, and the parameters that model the noises are included in the state vector; (ii) a range-differentiating factor is introduced into the intruder's chaotic system equation. Because the range-differentiating factor makes the inevitable chaos-fitting error advantageous within the chaos-fitting method, the CDS-SS signals can be demodulated according to the range of the estimated message. Simulations show that the proposed PF-based algorithm can obtain a good bit-error rate performance when extracting the original binary message from the CDS-SS signals without any knowledge of the transmitter's chaotic map or initial value, even when colored or non-Gaussian noises exist. Full article
Figures:
Figure 1: (a) Range of the estimates b̂¹_{k+1} and b̂⁻¹_{k+1} when β = 0; (b) the same ranges when β ≠ 0.
Figure 2: Flow chart of the proposed PF-based algorithm.
Figure 3: (a) Estimated message b̂_k for β = 0 at SNR = 7 dB; (b) result of lowpass filtering; (c) original transmitted binary message −b_n.
Figure 4: (a) Message b̂_k estimated by the UKF for β = 0.9 at SNR = 7 dB; (b) result of lowpass filtering; (c) original transmitted binary message −b_n.
Figure 5: (a) Message b̂_k estimated by the PF for β = 0.9 at SNR = 7 dB; (b) result of lowpass filtering; (c) original transmitted binary message b_n.
Figure 6: Comparison of BER performance among the algorithm in [25], the nonlinear RPROP neural network algorithm in [16], the UKF-based algorithm in [15] and the proposed PF-based algorithm.
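The colored or non-Gaussian noise model mentioned in the abstract is of the autoregressive moving average type. A generic ARMA(p, q) form, shown only for orientation and not as the paper's specific parameterization, is:

```latex
\[
  v_k \;=\; \sum_{i=1}^{p} a_i\, v_{k-i} \;+\; \varepsilon_k \;+\; \sum_{j=1}^{q} c_j\, \varepsilon_{k-j},
\]
```

where ε_k is a white driving noise. In the proposed scheme the coefficients that model the noise are unknown and are appended to the particle filter's state vector, so they are estimated jointly with the chaotic state.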
Review
Biological Water Dynamics and Entropy: A Biophysical Origin of Cancer and Other Diseases
by Robert M. Davidson, Ann Lauritzen and Stephanie Seneff
Entropy 2013, 15(9), 3822-3876; https://doi.org/10.3390/e15093822 - 13 Sep 2013
Cited by 40 | Viewed by 18596
Abstract
This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. Full article
(This article belongs to the Section Entropy Reviews)
Figures:
Figure 1: Structural formula of a typical heparan sulfate unit.
Figure 2: Putative relative positioning of water molecules in 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) membranes, obtained by measuring the rate of vibrational resonant (Förster) energy transfer between water hydroxyl stretch vibrations. Reproduced from Piatkowski et al. [71] with permission of the American Chemical Society.
Figure 3: Potential linear and circular clusters of hydrogen-bonded water molecules induced by an external magnetic field, as proposed by Pang (2006) [87]. Reproduced with permission of Springer-Verlag Berlin/Heidelberg.
Figure 4: The Eigen-Zundel-Eigen (EZE) proton mobility phenomenon [103,104]. Reproduced from Markovitch et al. (2008) [105] with permission of the American Chemical Society.
Figure 5: Protomeric ensembles acting as substrates for the Grotthuss phenomenon. Reproduced from Verdel et al. (2011) [106] with permission of MDPI AG.
Figure 6: Neither the protons nor the electrons are pinned to individual molecules. Reproduced from Chaplin (2013) [113] with permission of the Institute of Science in Society.
Figure 7: A hypothetical radical-cation cyclic water hexamer accounting for protomerism and electromerism.
Figure 8: (a) Chiral cyclic water trimer and (b) chiral cyclic water pentamer. Reproduced from Keutsch and Saykally [66] with permission of the publisher; copyright (2001) National Academy of Sciences, USA.
Figure 9: Fluoroaluminum complexes as transition-state analogues for kinases, phosphatases, sulfatases and sulfotransferases. Reproduced from Wittinghofer (1997) [262] with permission of Elsevier.
Article
Entropies in Alloy Design for High-Entropy and Bulk Glassy Alloys
by Akira Takeuchi, Kenji Amiya, Takeshi Wada, Kunio Yubuta, Wei Zhang and Akihiro Makino
Entropy 2013, 15(9), 3810-3821; https://doi.org/10.3390/e15093810 - 12 Sep 2013
Cited by 103 | Viewed by 10708
Abstract
High-entropy (H-E) alloys, bulk metallic glasses (BMGs) and high-entropy BMGs (HE-BMGs) were statistically analyzed with the help of a database of ternary amorphous alloys. Thermodynamic quantities corresponding to the heat of mixing and atomic size differences were calculated as a function of the composition of the multicomponent alloys. Actual calculations were performed for the configurational entropy (S_config.) used in defining the H-E alloys and the mismatch entropy (S_σ) normalized by the Boltzmann constant (k_B), together with the mixing enthalpy (ΔH_mix) based on Miedema's empirical model and the delta parameter (δ) as a corresponding parameter to S_σ/k_B. The comparison between the ΔH_mix–δ and ΔH_mix–S_σ/k_B diagrams for the ternary amorphous alloys revealed S_σ/k_B ~ (δ/22)². The zones S, S′ and B's, where H-E alloys with disordered solid solutions, ordered alloys and BMGs are plotted in the ΔH_mix–δ diagram, are correlated with the areas in the ΔH_mix–S_σ/k_B diagram. The results provide a mutual understanding among H-E alloys, BMGs and HE-BMGs. Full article
(This article belongs to the Special Issue High Entropy Alloys)
Figures:
Figure 1: Mismatch entropy normalized by the Boltzmann constant (S_σ/k_B) for the La-Ni, Zr-Ni and Al-Ni binary systems. The broken-dotted line marks S_σ/k_B = ln 2, the value of S_config./k_B at the equiatomic composition of a binary alloy.
Figure 2: (a) Mismatch entropy S_σ/k_B and (b) delta parameter (δ) shown as contour lines in a composition diagram for the Zr-Al-Ni system.
Figure 3: (a) ΔH_mix–S_σ/k_B and (b) ΔH_mix–δ diagrams for the 6150 ternary amorphous alloys from 351 systems [22]. The broken-line trapezoid in (a) indicates the threshold values within which ternary amorphous alloys are formed; zones S, S′ and B's in (b) mark H-E alloys with disordered solid solutions, H-E alloys with ordered solid solutions and BMGs, respectively, acquired from the literature [12]. Red solid and broken lines show the original zones and areas; black or gray lines are converted ones. Some Si and Ti contents of Si-Ti-Zr ternary amorphous alloys in the original database [12] were corrected according to the original reference [23], changing the plots of the original literature [16] inside the broken-line right triangle in (a).
Figure 4: Relationship between δ and (S_σ/k_B)^0.5 for the 6150 ternary amorphous alloys from 351 systems [22]; the statistical analysis gives δ ≈ 22 × (S_σ/k_B)^0.5.
Figure 5: (a) ΔH_mix–δ and (b) ΔH_mix–δ² diagrams containing the data of H-E alloys, ordered alloys, intermetallic phases and BMGs, including HE-BMGs. S_σ/k_B is estimated from δ through S_σ/k_B = (δ/22)².
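For readers outside the high-entropy-alloy literature, the quantities correlated above have the following commonly used definitions (the mismatch entropy S_σ itself follows a longer hard-sphere expression not reproduced here):

```latex
\[
  S_{\mathrm{config.}} \;=\; -\,k_{B} \sum_{i=1}^{n} c_i \ln c_i ,
  \qquad
  \delta \;=\; \sqrt{\sum_{i=1}^{n} c_i \left(1 - \frac{r_i}{\bar{r}}\right)^{2}},
  \qquad
  \bar{r} \;=\; \sum_{i=1}^{n} c_i\, r_i ,
\]
```

where c_i and r_i are the atomic fraction and atomic radius of element i. The paper's statistical result then ties the two mismatch measures together as S_σ/k_B ≈ (δ/22)².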
Article
Phase Composition of a CrMo0.5NbTa0.5TiZr High Entropy Alloy: Comparison of Experimental and Simulated Data
by Oleg N. Senkov, Fan Zhang and Jonathan D. Miller
Entropy 2013, 15(9), 3796-3809; https://doi.org/10.3390/e15093796 - 12 Sep 2013
Cited by 59 | Viewed by 8354
Abstract
Microstructure and phase composition of a CrMo0.5NbTa0.5TiZr high entropy alloy were studied in the as-solidified and heat treated conditions. In the as-solidified condition, the alloy consisted of two disordered BCC phases and an ordered cubic Laves phase. The BCC1 phase solidified in the form of dendrites enriched with Mo, Ta and Nb, and its volume fraction was 42%. The BCC2 and Laves phases solidified by the eutectic-type reaction, and their volume fractions were 27% and 31%, respectively. The BCC2 phase was enriched with Ti and Zr and the Laves phase was heavily enriched with Cr. After hot isostatic pressing at 1450 °C for 3 h, the BCC1 dendrites coagulated into round-shaped particles and their volume fraction increased to 67%. The volume fractions of the BCC2 and Laves phases decreased to 16% and 17%, respectively. After subsequent annealing at 1000 °C for 100 h, submicron-sized Laves particles precipitated inside the BCC1 phase, and the alloy consisted of 52% BCC1, 16% BCC2 and 32% Laves phases. Solidification and phase equilibrium simulations were conducted for the CrMo0.5NbTa0.5TiZr alloy using a thermodynamic database developed by CompuTherm LLC. Some discrepancies were found between the calculated and experimental results and the reasons for these discrepancies were discussed. Full article
(This article belongs to the Special Issue High Entropy Alloys)
Figures:
Figure 1: BSE images of the microstructure of the CrMo0.5NbTa0.5TiZr high entropy alloy in the as-solidified condition: (a) dendritic structure; (b) inter-dendritic region consisting of the BCC2 and Laves phases.
Figure 2: Segregation of the alloying elements among the phases of the CrMo0.5NbTa0.5TiZr alloy: (a) Cr (Laves), (b) Ta (BCC1), (c) Ti (BCC2) and (d) Zr (BCC2 and Laves). Brightness increases with element concentration.
Figure 3: X-ray diffraction patterns of the CrMo0.5NbTa0.5TiZr alloy (a) as-solidified, (b) HIPed at 1450 °C and 207 MPa for 3 h and (c) annealed at 1000 °C for 100 h.
Figure 4: BSE images of the microstructure after HIP at 1450 °C for 3 h.
Figure 5: Distribution of the BCC1 particles by the logarithm of the equivalent diameter (in μm), HIPed condition.
Figure 6: BSE image of the microstructure after annealing at 1000 °C for 100 h.
Figure 7: Simulated (a) fraction of solid and (b) fractions of the different phases as a function of temperature during solidification of the alloy.
Figure 8: Calculated elemental concentrations (a) in the liquid phase and (b) in the BCC phase formed at different temperatures during solidification (Scheil simulation).
Figure 9: Equilibrium phase diagram of the CrMo0.5NbTa0.5TiZr high entropy alloy according to the thermodynamic analysis using the PanTi database.
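Figure 8 refers to a Scheil simulation. As background only (the paper's multicomponent calculation uses the CompuTherm/PanTi database rather than this textbook binary form), the Scheil-Gulliver model assumes no diffusion in the solid and complete mixing in the liquid, giving:

```latex
\[
  C_s \;=\; k\,C_0\,(1 - f_s)^{\,k-1},
\]
```

where C_0 is the nominal solute content, f_s the fraction of solid and k the partition coefficient. Solute rejected by the growing solid accumulates in the last liquid to freeze, consistent with the observed enrichment of the inter-dendritic BCC2 and Laves regions in Ti, Zr and Cr.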
Article
Improved Time Complexities for Learning Boolean Networks
by Yun Zheng and Chee Keong Kwoh
Entropy 2013, 15(9), 3762-3795; https://doi.org/10.3390/e15093762 - 11 Sep 2013
Cited by 3 | Viewed by 5120
Abstract
Existing algorithms for learning Boolean networks (BNs) have time complexities of at least O(N · n^(0.7(k+1))), where n is the number of variables, N is the number of samples and k is the number of inputs in Boolean functions. Some recent studies propose more efficient methods with O(N · n²) time complexities. However, these methods can only be used to learn monotonic BNs, and their performances are not satisfactory when the sample size is small. In this paper, we mathematically prove that OR/AND BNs, where the variables are related with logical OR/AND operations, can be found with the time complexity of O(k · (N + log n) · n²), if there are enough noiseless training samples randomly generated from a uniform distribution. We also demonstrate that our method can successfully learn most BNs, whose variables are not related with exclusive OR and Boolean equality operations, with the same order of time complexity for learning OR/AND BNs, indicating our method has good efficiency for learning general BNs other than monotonic BNs. When the datasets are noisy, our method can still successfully identify most BNs with the same efficiency. When compared with two existing methods with the same settings, our method achieves a better comprehensive performance than both of them, especially for small training sample sizes. More importantly, our method can be used to learn all BNs. However, of the two methods that are compared, one can only be used to learn monotonic BNs, and the other one has a much worse time complexity than our method. In conclusion, our results demonstrate that Boolean networks can be learned with improved time complexities. Full article
Figures:
Figure 1: Search procedure of the DFL algorithm when learning C′ = A ∨ C ∨ D; {A, C, D}* is the target combination, and combinations marked with a black dot share the largest mutual information (MI) with C′ on their layer. The algorithm first searches layer L1, finds that {A} shares the largest MI with C′, then searches Δ1(A) (subsets containing A and one more variable) on layer L2, and continues until the target combination {A, C, D} is found on layer L3.
Figure 2: The ΔTree when searching Boolean functions for C′ = A ∨ C ∨ D: (a) after searching the first layer of V, before the sort step; (b) when searching the second layer ({A}, {C} and {D}, which belong to Pa(C′), are listed before {B} after sorting); (c) when searching the third layer, where {A, C, D} is found to be the complete parent set of C′ because it satisfies the criterion of Theorem 5.
Figure 3: MI (in bits) in the OR function X′_i = X_i1 ∨ … ∨ X_ik: (a) I(X_(j); X′_i) as a function of k for any X_(j) ∈ Pa(X′_i); (b) I({X_(1), …, X_(p)}; X′_i) as a function of p for k = 6, 10, with p from 1 to k.
Figure 4: Estimated Î(X_j; X′_i), in bits, for OR BNs with 10 variables, where X′_i = X_1 ∨ X_2 ∨ X_3, on different datasets. The circle-marked curve is computed from the truth table (the ideal case, or "golden rule"); the curves marked with diamonds, squares, triangles and stars correspond to the truth table and to datasets with N = 1000, N = 100, N = 20 and N = 100 with 10% noise, respectively.
Figure 5: Run times t (seconds) of the DFL algorithm for inferring bounded BNs, averaged over 20 noiseless datasets (circles: OR; diamonds: AND): (a) run time vs. k for n = 1000, N = 600; (b) run time vs. N for n = 1000, k = 3; (c) run time vs. n for k = 3, N = 200.
Figure 6: Efficiency of the DFL algorithm on unbounded OR datasets, averaged over 20 noiseless datasets: (a) run time t (seconds) to infer the unbounded BNs; (b) number of subsets checked per OR Boolean function (circles: OR-h datasets; diamonds: OR-t datasets).
Figure 7: Run times of the DFL algorithm for learning general BNs: (a)-(c) noiseless head, random and tail datasets with k = 2; (d)-(f) the same with k = 3. Average sensitivities are 100% for (a)-(c) and over 99.3% for (d)-(f); the shown times are averages of five runs with standard-deviation error bars (Intel Xeon 64-bit 2.66 GHz, 32 GB memory, CentOS Linux).
Figure 8: Sensitivity of the DFL algorithm vs. sample size N, averaged over 200 noiseless datasets: (a) OR vs. RANDOM datasets for n = 100, k = 3; (b) OR datasets for n = 100, 500, 1000 with k = 3; (c) OR datasets for k = 2, 3, 4 with n = 100.
Figure 9: Run time t (seconds) of the DFL algorithm for small OR datasets with n = 100 and k = 3, averaged over 200 datasets.
Figure 10: Performance of the DFL algorithm when learning general BNs from datasets with 10% noise and different sample sizes. For each sample size, one head, one random and one tail dataset were generated for each of the 218 BNs with k = 3; sensitivities, specificities and the number of datasets m with 100% sensitivity and specificity were averaged over the head, random and tail datasets (error bars: standard deviations). DFLr and DFLx denote the DFL algorithm with restricted search (n = 80) and exhaustive search (n = 10), respectively; Best-fit and Corr. are the Best-fit and Correlation algorithms on noisy datasets of 50 monotonic BNs with n = 80 and Gaussian noise of σ = 0.4 (reported in [16]). (a) Sensitivities vs. N; (b) specificities vs. N; (c) number of datasets m on which 100% sensitivity and specificity were achieved.
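The layer-by-layer, mutual-information-guided search described for the DFL algorithm in Figures 1 and 2 can be illustrated with a small greedy sketch. The code below is a hedged approximation with illustrative names, not the authors' implementation, and it omits the ΔTree bookkeeping that gives the algorithm its stated complexity.

```python
# Hedged sketch of a mutual-information-guided parent search: on each layer keep
# the candidate set sharing the largest MI with the target and extend it by one
# variable, stopping once the set fully determines the target (I(S; Y) = H(Y)).
import numpy as np

def entropy(labels):
    """Empirical Shannon entropy (bits) of discrete symbols (1-D) or rows (2-D)."""
    _, counts = np.unique(labels, return_counts=True, axis=0)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(X, y):
    """I(X; y) in bits, with X an (N, m) array of parent columns and y a length-N target."""
    return entropy(X) + entropy(y) - entropy(np.column_stack([X, y]))

def greedy_parent_search(data, target, k_max):
    """Greedily grow the parent set of `target` by the largest mutual information."""
    y = data[target]
    h_y = entropy(y)
    parents, candidates = [], [v for v in data if v != target]
    for _ in range(k_max):
        best = max(candidates,
                   key=lambda v: mutual_info(
                       np.column_stack([data[p] for p in parents] + [data[v]]), y))
        parents.append(best)
        candidates.remove(best)
        if np.isclose(mutual_info(np.column_stack([data[p] for p in parents]), y), h_y):
            break                      # complete parent set found
    return parents

# Toy usage on noiseless samples of C' = A OR C OR D (cf. Figure 1):
rng = np.random.default_rng(1)
samples = {v: rng.integers(0, 2, 200) for v in "ABCD"}
samples["C'"] = samples["A"] | samples["C"] | samples["D"]
print(greedy_parent_search(samples, "C'", k_max=3))   # expected: A, C, D in some order
```

On noiseless data generated from an OR function, the search terminates as soon as the selected variables determine the target, mirroring the stopping criterion of Theorem 5 referenced in Figure 2.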
Article
Combination Synchronization of Three Identical or Different Nonlinear Complex Hyperchaotic Systems
by Xiaobing Zhou, Murong Jiang and Yaqun Huang
Entropy 2013, 15(9), 3746-3761; https://doi.org/10.3390/e15093746 - 10 Sep 2013
Cited by 23 | Viewed by 5407
Abstract
In this paper, we investigate the combination synchronization of three nonlinear complex hyperchaotic systems: the complex hyperchaotic Lorenz system, the complex hyperchaotic Chen system and the complex hyperchaotic Lü system. Based on the Lyapunov stability theory, corresponding controllers to achieve combination synchronization among three identical or different nonlinear complex hyperchaotic systems are derived, respectively. Numerical simulations are presented to demonstrate the validity and feasibility of the theoretical analysis. Full article
(This article belongs to the Special Issue Dynamical Systems)
Figures:
Figure 1: Combination synchronization errors e_1–e_6 between drive systems (5) and (6) and response system (7).
Figure 2: Time responses of the states u_i + v_i versus w_i, i = 1, 2, …, 6.
Figure 3: Time evolution of the states of system (7).
Figure 4: Combination synchronization errors e_1–e_6 between drive systems (19) and (20) and response system (21).
Figure 5: Time responses of the states u_i + v_i versus w_i, i = 1, 2, …, 6.
Figure 6: Time evolution of the states of system (21).
2804 KiB  
Article
On the Calculation of Solid-Fluid Contact Angles from Molecular Dynamics
by Erik E. Santiso, Carmelo Herdes and Erich A. Müller
Entropy 2013, 15(9), 3734-3745; https://doi.org/10.3390/e15093734 - 6 Sep 2013
Cited by 73 | Viewed by 12250
Abstract
A methodology for the determination of the solid-fluid contact angle, to be employed within molecular dynamics (MD) simulations, is developed and systematically applied. The calculation of the contact angle of a fluid drop on a given surface, averaged over an equilibrated MD trajectory, [...] Read more.
A methodology for the determination of the solid-fluid contact angle, to be employed within molecular dynamics (MD) simulations, is developed and systematically applied. The calculation of the contact angle of a fluid drop on a given surface, averaged over an equilibrated MD trajectory, is divided into three main steps: (i) the determination of the fluid molecules that constitute the interface, (ii) the treatment of the interfacial molecules as a point cloud data set to define a geometric surface, using surface meshing techniques to compute the surface normals from the mesh, (iii) the collection and averaging of the interface normals obtained from the post-processing of the MD trajectory. The average vector thus found is used to calculate the Cassie contact angle (i.e., the arccosine of the averaged normal z-component). As an example, we explore the effect of the size of a drop of water on the observed solid-fluid contact angle. A single coarse-grained bead representing two water molecules and parameterized using the SAFT-γ Mie equation of state (EoS) is employed, while the solid surfaces are mimicked using integrated potentials. The contact angle is seen to be a strong function of the system size for small nano-droplets. The thermodynamic limit, corresponding to the infinite-size (macroscopic) drop, is only truly recovered when using in excess of half a million water coarse-grained beads and/or a drop radius of over 26 nm. Full article
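As a minimal sketch of step (iii), the Python snippet below computes the Cassie contact angle as the arccosine of the averaged z-component of the interface normals, assuming the solid surface lies in the x-y plane; the interface detection and meshing of steps (i) and (ii) are taken as given, and the function name and toy data are illustrative only.

```python
import numpy as np

def cassie_contact_angle(normals):
    """Cassie contact angle from interface unit normals.

    `normals` is an (N, 3) array of unit normal vectors of the fluid-drop
    interface, accumulated over the post-processed MD trajectory. Following
    the abstract, the angle is the arccosine of the averaged z-component of
    the normals (the solid surface is assumed to lie in the x-y plane).
    """
    mean_nz = normals[:, 2].mean()
    return np.degrees(np.arccos(mean_nz))

# Toy check: normals uniformly tilted 60 degrees away from the surface normal
# give a contact angle of 60 degrees.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 2.0 * np.pi, size=1000)
tilt = np.radians(60.0)
normals = np.column_stack([np.sin(tilt) * np.cos(phi),
                           np.sin(tilt) * np.sin(phi),
                           np.full(1000, np.cos(tilt))])
print(cassie_contact_angle(normals))  # ~60.0
```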
(This article belongs to the Special Issue Molecular Dynamics Simulation)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Schematic of a liquid drop on a solid surface showing the contact angle.</p>
Full article ">Figure 2
<p>Frequency of contact angle values of water on graphite reported in literature; both from experimental results and numerical simulations [<a href="#B1-entropy-15-03734" class="html-bibr">1</a>].</p>
Full article ">Figure 3
<p>Two-dimensional projections of a given configuration of a water droplet on a surface. The contact angles, measured using the auxiliary lines depicted in black, are: (<b>a</b>) 63.77° (<b>b</b>) 60.52° (<b>c</b>) 64.56° (<b>d</b>) 54.93°.</p>
Full article ">Figure 4
<p><b>(a)</b> A snapshot from a MD simulation showing a droplet on top of a given surface (not shown). <b>(b)</b> The discretized density profile from the same system obtained using cubic subcells of width 3 nm. Density values are in molecules/Å [<a href="#B3-entropy-15-03734" class="html-bibr">3</a>].</p>
Full article ">Figure 5
<p><b>(a)</b> Water contact angle as a function of the water molecules on Wall05, inset shows the correspondent drop diameter, dashed lines are guide to the eye. <b>(b)</b> Snapshots of the drop interfaces for the smallest and the biggest system studied.</p>
Full article ">Figure 6
<p><b>(a)</b> Water contact angle as a function of the fluid-substrate interactions, solid circles are simulation results, dashed red line marks the hydrophobic-hydrophilic threshold. <b>(b)</b> Corresponding equilibrium interface snapshots depicting interfacial beads.</p>
Full article ">
210 KiB  
Article
Consideration on Singularities in Learning Theory and the Learning Coefficient
by Miki Aoyagi
Entropy 2013, 15(9), 3714-3733; https://doi.org/10.3390/e15093714 - 6 Sep 2013
Cited by 5 | Viewed by 4855
Abstract
We consider the learning coefficients in learning theory and give two new methods for obtaining these coefficients in a homogeneous case: a method for finding a deepest singular point and a method to add variables. In application to Vandermonde matrix-type singularities, we show [...] Read more.
We consider the learning coefficients in learning theory and give two new methods for obtaining these coefficients in a homogeneous case: a method for finding a deepest singular point and a method to add variables. In application to Vandermonde matrix-type singularities, we show that these methods are effective. The learning coefficient of the generalization error in Bayesian estimation serves to measure the learning efficiency in singular learning models. Mathematically, the learning coefficient corresponds to a real log canonical threshold of singularities for the Kullback functions (relative entropy) in learning theory. Full article
(This article belongs to the Special Issue The Information Bottleneck Method)
Show Figures

Figure 1

Figure 1
<p>The values of new bounds, <math display="inline"> <mrow> <mo movablelimits="true" form="prefix">min</mo> <mo stretchy="false">{</mo> <msub> <mi>bound</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>bound</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>bound</mi> <mn>3</mn> </msub> <mo stretchy="false">}</mo> </mrow> </math>, for (<b>a</b>) <math display="inline"> <mrow> <mi>H</mi> <mo>=</mo> <mn>7</mn> <mo>,</mo> <mi>N</mi> <mo>=</mo> <mn>6</mn> </mrow> </math>; (<b>b</b>) <math display="inline"> <mrow> <mi>H</mi> <mo>=</mo> <mn>8</mn> <mo>,</mo> <mi>N</mi> <mo>=</mo> <mn>6</mn> </mrow> </math>; (<b>c</b>) <math display="inline"> <mrow> <mi>H</mi> <mo>=</mo> <mn>7</mn> <mo>,</mo> <mi>N</mi> <mo>=</mo> <mn>7</mn> </mrow> </math> and (<b>d</b>) <math display="inline"> <mrow> <mi>H</mi> <mo>=</mo> <mn>8</mn> <mo>,</mo> <mi>N</mi> <mo>=</mo> <mn>7</mn> </mrow> </math> with <math display="inline"> <mrow> <mi>Q</mi> <mo>=</mo> <mn>2</mn> </mrow> </math>, compared with the bounds obtained by the past work in [<a href="#B14-entropy-15-03714" class="html-bibr">14</a>].</p>
Full article ">Figure 2
<p>Hironaka’s Theorem: diagram of desingularization, <span class="html-italic">μ</span>, of <span class="html-italic">f</span>: <math display="inline"> <mi mathvariant="script">E</mi> </math> maps to <math display="inline"> <mrow> <msup> <mi>f</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> <mrow> <mo stretchy="false">(</mo> <mn>0</mn> <mo stretchy="false">)</mo> </mrow> </mrow> </math>. <math display="inline"> <mrow> <mi>U</mi> <mo>−</mo> <mi mathvariant="script">E</mi> </mrow> </math> is isomorphic to <math display="inline"> <mrow> <mi>V</mi> <mo>−</mo> <msup> <mi>f</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> <mrow> <mo stretchy="false">(</mo> <mn>0</mn> <mo stretchy="false">)</mo> </mrow> </mrow> </math> by <span class="html-italic">μ</span>, where <span class="html-italic">V</span> is a small neighborhood of <math display="inline"> <msup> <mi>w</mi> <mo>*</mo> </msup> </math> with <math display="inline"> <mrow> <mi>f</mi> <mo stretchy="false">(</mo> <msup> <mi>w</mi> <mo>*</mo> </msup> <mo stretchy="false">)</mo> <mo>=</mo> <mn>0</mn> </mrow> </math>.</p>
Full article ">
321 KiB  
Article
Correlation Distance and Bounds for Mutual Information
by Michael J. W. Hall
Entropy 2013, 15(9), 3698-3713; https://doi.org/10.3390/e15093698 - 6 Sep 2013
Cited by 7 | Viewed by 6806
Abstract
The correlation distance quantifies the statistical independence of two classical or quantum systems, via the distance from their joint state to the product of the marginal states. Tight lower bounds are given for the mutual information between pairs of two-valued classical variables and [...] Read more.
The correlation distance quantifies the statistical independence of two classical or quantum systems, via the distance from their joint state to the product of the marginal states. Tight lower bounds are given for the mutual information between pairs of two-valued classical variables and quantum qubits, in terms of the corresponding classical and quantum correlation distances. These bounds are stronger than the Pinsker inequality (and refinements thereof) for relative entropy. The classical lower bound may be used to quantify properties of statistical models that violate Bell inequalities. Partially entangled qubits can have lower mutual information than can any two-valued classical variables having the same correlation distance. The qubit correlation distance also provides a direct entanglement criterion, related to the spin covariance matrix. Connections of results with classically-correlated quantum states are briefly discussed. Full article
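For two two-valued classical variables, the quantities involved can be written down directly. The sketch below uses the total variation distance between the joint distribution and the product of its marginals as an illustrative choice of correlation distance (the paper's normalization may differ) and computes the mutual information for comparison.

```python
import numpy as np

def correlation_distance(p_xy):
    """Distance between a joint distribution and the product of its marginals.

    p_xy is a 2x2 array of joint probabilities for two two-valued variables.
    The total variation distance is used here as an illustrative metric; the
    paper's precise definition may differ by normalization.
    """
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    return 0.5 * np.abs(p_xy - px * py).sum()

def mutual_information(p_xy):
    """Mutual information in bits for the same joint distribution."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log2(p_xy[mask] / (px * py)[mask])).sum())

# Perfectly correlated bits: maximal correlation distance and 1 bit of
# mutual information.
p = np.array([[0.5, 0.0],
              [0.0, 0.5]])
print(correlation_distance(p), mutual_information(p))  # 0.5 1.0
```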
(This article belongs to the Special Issue Distance in Information and Statistical Physics Volume 2)
Show Figures

Figure 1

Figure 1
<p>Lower bounds for the classical mutual information between two-valued variables.</p>
Full article ">Figure 2
<p>Lower bounds for the quantum mutual information between two qubits.</p>
Full article ">
247 KiB  
Article
Dynamic Distance Measure on Spaces of Isospectral Mixed Quantum States
by Ole Andersson and Hoshang Heydari
Entropy 2013, 15(9), 3688-3697; https://doi.org/10.3390/e15093688 - 6 Sep 2013
Cited by 9 | Viewed by 4638
Abstract
Distance measures are used to quantify the extent to which information is preserved or altered by quantum processes, and thus are indispensable tools in quantum information and quantum computing. In this paper we propose a new distance measure for mixed quantum states, which [...] Read more.
Distance measures are used to quantify the extent to which information is preserved or altered by quantum processes, and thus are indispensable tools in quantum information and quantum computing. In this paper we propose a new distance measure for mixed quantum states, which we call the dynamic distance measure, and we show that it is a proper distance measure. The dynamic distance measure is defined in terms of a measurable quantity, which makes it suitable for applications. In a final section we compare the dynamic distance measure with the well-known Bures distance measure. Full article
(This article belongs to the Special Issue Quantum Information 2012)
1468 KiB  
Article
SpaGrOW—A Derivative-Free Optimization Scheme for Intermolecular Force Field Parameters Based on Sparse Grid Methods
by Marco Hülsmann and Dirk Reith
Entropy 2013, 15(9), 3640-3687; https://doi.org/10.3390/e15093640 - 6 Sep 2013
Cited by 12 | Viewed by 7258
Abstract
Molecular modeling is an important subdomain in the field of computational modeling, regarding both scientific and industrial applications. This is because computer simulations on a molecular level are a valuable instrument for studying the impact of microscopic on macroscopic phenomena. Accurate molecular models [...] Read more.
Molecular modeling is an important subdomain in the field of computational modeling, regarding both scientific and industrial applications. This is because computer simulations on a molecular level are a valuable instrument for studying the impact of microscopic on macroscopic phenomena. Accurate molecular models are indispensable for such simulations in order to predict physical target observables, like density, pressure, diffusion coefficients or energetic properties, quantitatively over a wide range of temperatures. Thereby, molecular interactions are described mathematically by force fields. The mathematical description includes parameters for both intramolecular and intermolecular interactions. While intramolecular force field parameters can be determined by quantum mechanics, the parameterization of the intermolecular part is often tedious. Recently, an empirical procedure, based on the minimization of a loss function between simulated and experimental physical properties, was published by the authors. Thereby, efficient gradient-based numerical optimization algorithms were used. However, empirical force field optimization is hindered by two central issues in molecular simulations: firstly, they are extremely time-consuming, even on modern and high-performance computer clusters, and secondly, simulation data is affected by statistical noise. The latter means that an accurate computation of gradients or Hessians is nearly impossible close to a local or global minimum, mainly because the loss function is flat there. Therefore, the question arises of whether to apply a derivative-free method approximating the loss function by an appropriate model function. In this paper, a new Sparse Grid-based Optimization Workflow (SpaGrOW) is presented, which accomplishes this task robustly and, at the same time, keeps the number of time-consuming simulations relatively small. This is achieved by an efficient sampling procedure for the approximation based on sparse grids, which is described in full detail: in order to counteract the fact that sparse grids are fully occupied on their boundaries, a mathematical transformation is applied to generate homogeneous Dirichlet boundary conditions. As the main drawback of sparse grid methods is the assumption that the function to be modeled exhibits certain smoothness properties, it has to be approximated by smooth functions first. Radial basis functions turned out to be very suitable to solve this task. The smoothing procedure and the subsequent interpolation on sparse grids are performed within sufficiently large compact trust regions of the parameter space. It is shown and explained how the combination of the three ingredients leads to a new efficient derivative-free algorithm, which has the additional advantage that it is capable of reducing the overall number of simulations by a factor of about two in comparison to gradient-based optimization methods. At the same time, the robustness with respect to statistical noise is maintained. This assertion is proven by both theoretical considerations and practical evaluations for molecular simulations on chemical example substances. Full article
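As a rough illustration of the smoothing ingredient described in the abstract, the sketch below fits noisy loss-function samples with Gaussian radial basis functions and a small ridge penalty; the function name, width and ridge parameters are assumptions for the example, and the actual SpaGrOW smoothing (including the regularizations compared in the figures, such as MARS) is considerably more elaborate.

```python
import numpy as np

def fit_gaussian_rbf(X, y, centers, width, ridge=1e-6):
    """Least-squares fit of a Gaussian RBF model to noisy loss-function samples.

    X: (N, d) sample points, y: (N,) noisy loss values, centers: (K, d) RBF
    centers (e.g. the sparse-grid points), width: Gaussian length scale.
    A small ridge term regularizes the fit so the model smooths the noise
    instead of interpolating it. This is a generic sketch of the smoothing
    idea, not the SpaGrOW implementation.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))            # (N, K) design matrix
    A = Phi.T @ Phi + ridge * np.eye(len(centers))
    beta = np.linalg.solve(A, Phi.T @ y)
    return lambda Z: np.exp(
        -((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        / (2.0 * width ** 2)) @ beta

# Toy usage: smooth a noisy 1-D "loss" and evaluate the model near its minimum.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(40, 1))
y = (X[:, 0] - 0.3) ** 2 + 0.01 * rng.normal(size=40)   # noisy quadratic
model = fit_gaussian_rbf(X, y, centers=np.linspace(0, 1, 9)[:, None], width=0.2)
print(model(np.array([[0.3]])))   # close to the noise-free minimum value 0.0
```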
(This article belongs to the Special Issue Molecular Dynamics Simulation)
Show Figures

Figure 1

Figure 1
<p>Optimization workflow: The target properties are computed for an initial guess for the force field parameters. If they do not agree sufficiently well with the experimental target properties, the optimization procedure is performed searching for new parameters with a lower loss function value.</p>
Full article ">Figure 2
<p>Motivation of a derivative-free method in the case of noisy loss function values close to the minimum; the direction of a gradient can be completely wrong. Hence, an approximation of the loss function is necessary. This regression procedure has to filter out the statistical noise.</p>
Full article ">Figure 3
<p>Overview of the Sparse Grid-based Optimization Workflow (SpaGrOW): the combination of the Trust Region approach with the interpolation on sparse grids requiring a smoothing procedure to be preceded leads to both increasing efficiency and robustness.</p>
Full article ">Figure 4
<p>Triangular scheme for the combination of a sparse grid of level 3 from two-dimensional subgrids meeting the condition, <math display="inline"> <mrow> <msub> <mrow> <mo>|</mo> <mi>ℓ</mi> <mo>|</mo> </mrow> <mn>1</mn> </msub> <mo>≤</mo> <mn>3</mn> <mo>+</mo> <mn>2</mn> <mo>-</mo> <mn>1</mn> <mo>=</mo> <mn>4</mn> <mo>∧</mo> <msub> <mo>∀</mo> <mrow> <mi>k</mi> <mo>∈</mo> <mo stretchy="false">{</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo stretchy="false">}</mo> </mrow> </msub> <mspace width="4pt"/> <msub> <mi>ℓ</mi> <mi>k</mi> </msub> <mo>&lt;</mo> <mn>3</mn> </mrow> </math>. If all eight subgrids are combined, a full grid of level 3 is obtained. If only the subgrids of levels (0,0), (1,0), (2,0), (3,0), (0,1), (0,2), (0,3), (1,1), (2,1), (1,2), (2,2), (3,1) and (1,3) are taken, <span class="html-italic">i.e</span>., if the small triangle consisting of three grids at the bottom right is left out, the corresponding sparse grid is obtained, <span class="html-italic">cf</span>. <a href="#entropy-15-03640-f005" class="html-fig">Figure 5</a> top left.</p>
Full article ">Figure 5
<p>Sparse grids of the levels 3, 4, and 5 in 2D and 3D.</p>
Full article ">Figure 6
<p>Problem with piecewise-linear interpolation of a noisy function: the interpolation leads to a staggered function that reproduces the noise.</p>
Full article ">Figure 7
<p>Problem with the approximation of a noisy function: when the points are situated too close to each other, the function values cannot be distinguished anymore, and the smoothing procedure may reproduce the trend of the function completely incorrectly.</p>
Full article ">Figure 8
<p>Radial basis function (RBF) network: The network proceeds for a test point, <span class="html-italic">X</span>, from the input neurons, <math display="inline"> <mrow> <msup> <mi>X</mi> <mi>j</mi> </msup> <mo>,</mo> <mspace width="4pt"/> <mi>j</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <mi>N</mi> <mo>,</mo> </mrow> </math> (components of <span class="html-italic">X</span>), over a layer of hidden neurons containing the RBF evaluations, <math display="inline"> <mrow> <msub> <mi>h</mi> <mi>i</mi> </msub> <mrow> <mo stretchy="false">(</mo> <mi>X</mi> <mo>-</mo> <msub> <mi>X</mi> <mi>i</mi> </msub> <mo stretchy="false">)</mo> </mrow> <mo>,</mo> <mspace width="4pt"/> <mi>i</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <mi>K</mi> </mrow> </math>, with the coefficients, <math display="inline"> <mrow> <msub> <mi>β</mi> <mi>i</mi> </msub> <mo>,</mo> <mspace width="4pt"/> <mi>i</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo>.</mo> <mo>.</mo> <mo>.</mo> <mo>,</mo> <mi>K</mi> </mrow> </math>, to the output neuron, where the summation takes place. The result is the function value, <math display="inline"> <mrow> <mi>f</mi> <mo stretchy="false">(</mo> <mi>X</mi> <mo stretchy="false">)</mo> </mrow> </math>.</p>
Full article ">Figure 9
<p>Overview of the SpaGrOW algorithm, <span class="html-italic">i.e</span>., the inner iteration of the optimization procedure visualized in <a href="#entropy-15-03640-f001" class="html-fig">Figure 1</a>. The Trust Region size, Δ, is increased or decreased, depending on the quality of the approximation model on the sparse grid.</p>
Full article ">Figure 10
<p>Approximation models with different training errors, <span class="html-italic">ϑ</span>, where the function, <span class="html-italic">G</span>, approximates the noisy function, <math display="inline"> <mrow> <mover accent="true"> <mi>F</mi> <mo stretchy="false">˜</mo> </mover> <mo>=</mo> <mi>F</mi> <mo>+</mo> <mo>Δ</mo> <mi>F</mi> </mrow> </math>, overfitted model with <math display="inline"> <mrow> <mi>ϑ</mi> <mo>=</mo> <mn>0</mn> </mrow> </math> (<b>a</b>); model, where <span class="html-italic">ϑ</span> is too high (<b>b</b>); and feasible model with <math display="inline"> <mrow> <mo>|</mo> <mo>Δ</mo> <mi>F</mi> <mo>-</mo> <mi>ϑ</mi> <mo>|</mo> <mo>≤</mo> <mi>δ</mi> </mrow> </math> (<b>c</b>). In the ideal case, it holds <math display="inline"> <mrow> <mi>ϑ</mi> <mrow> <mo stretchy="false">(</mo> <mi>x</mi> <mo stretchy="false">)</mo> </mrow> <mo>=</mo> <mo>Δ</mo> <msub> <mi>F</mi> <mi>x</mi> </msub> </mrow> </math>.</p>
Full article ">Figure 11
<p>Speed of convergence of SpaGrOW at the beginning of the optimization. For an appropriate choice of the size of the initial trust region, <math display="inline"> <msub> <mo>Δ</mo> <mn>0</mn> </msub> </math>, the number of iterations in the case of SpaGrOW (<math display="inline"> <msub> <mi>k</mi> <mi>S</mi> </msub> </math>) is significantly smaller than in the case of a gradient-based method (<math display="inline"> <msub> <mi>k</mi> <mi>g</mi> </msub> </math>). Realistically, the number of iterations and function evaluations can be reduced by a factor of two.</p>
Full article ">Figure 12
<p>Technical realization of the optimization procedure for physical target properties at different temperatures. If the simulated properties do not coincide well with their experimental reference data, the optimization control script, depicted on the left, passes the current force field parameters on to a distribution control script, which submits parallel jobs at different temperatures. Then, a parallel environment control script is executed, and a simulation control script is called, which performs the following three tasks: preparation routines, the simulation itself and the computation of the simulated target properties. The properties are written into separate files, which are read by the optimization control script. Finally, the loss function is evaluated and the workflow continues.</p>
Full article ">Figure 13
<p>Box plots of the loss function values achieved by SpaGrOW in combination with a smoothing procedure based on Radial Basis Functions (RBFs). The RBFs were the linear, cubic, multiquadric (Multi), inverse multiquadric (Invers), Gaussian, thin plate spline RBF (TPS) and a Wendland function. Suitable RBFs were only the linear, cubic, Gaussian and thin plate spline RBF.</p>
Full article ">Figure 14
<p>Approximations on the unit square, <math display="inline"> <msup> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>]</mo> </mrow> <mn>2</mn> </msup> </math>, based on Gaussian RBFs (<b>a</b>) and thin plate spline RBFs (<b>b</b>). The blue points mark the original (noisy) function values of <math display="inline"> <mover accent="true"> <mi>F</mi> <mo stretchy="false">¯</mo> </mover> </math> on the sparse grid. It holds <math display="inline"> <mrow> <mi>x</mi> <mn>1</mn> <mo>=</mo> <mi>ξ</mi> <mrow> <mo stretchy="false">(</mo> <msup> <mi>Q</mi> <mn>2</mn> </msup> <mo stretchy="false">)</mo> </mrow> <mo>,</mo> <mspace width="4pt"/> <mi>x</mi> <mn>2</mn> <mo>=</mo> <mi>ξ</mi> <mrow> <mo stretchy="false">(</mo> <mi>L</mi> <mo stretchy="false">)</mo> </mrow> </mrow> </math> and <math display="inline"> <mrow> <mi>y</mi> <mo>=</mo> <mover accent="true"> <mi>F</mi> <mo stretchy="false">¯</mo> </mover> <mrow> <mo stretchy="false">(</mo> <mi>x</mi> <mn>1</mn> <mo>,</mo> <mi>x</mi> <mn>2</mn> <mo stretchy="false">)</mo> </mrow> </mrow> </math>. The smoothing procedure based on thin plate spline RBFs reproduces <math display="inline"> <mover accent="true"> <mi>F</mi> <mo stretchy="false">¯</mo> </mover> </math> at the boundary of the unit square in a very bad fashion.</p>
Full article ">Figure 15
<p>Approximations of <math display="inline"> <mover accent="true"> <mi>F</mi> <mo stretchy="false">¯</mo> </mover> </math> on the unit square, <math display="inline"> <msup> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>]</mo> </mrow> <mn>2</mn> </msup> </math>, based on Gaussian RBFs, combined with a LASSO regularization. The blue points mark the original (noisy) function values of <math display="inline"> <mover accent="true"> <mi>F</mi> <mo stretchy="false">¯</mo> </mover> </math> on the sparse grid. It holds <math display="inline"> <mrow> <mi>x</mi> <mn>1</mn> <mo>=</mo> <mi>ξ</mi> <mrow> <mo stretchy="false">(</mo> <msup> <mi>Q</mi> <mn>2</mn> </msup> <mo stretchy="false">)</mo> </mrow> <mo>,</mo> <mspace width="4pt"/> <mi>x</mi> <mn>2</mn> <mo>=</mo> <mi>ξ</mi> <mrow> <mo stretchy="false">(</mo> <mi>L</mi> <mo stretchy="false">)</mo> </mrow> </mrow> </math> and <math display="inline"> <mrow> <mi>y</mi> <mo>=</mo> <mover accent="true"> <mi>F</mi> <mo stretchy="false">¯</mo> </mover> <mrow> <mo stretchy="false">(</mo> <mi>x</mi> <mn>1</mn> <mo>,</mo> <mi>x</mi> <mn>2</mn> <mo stretchy="false">)</mo> </mrow> </mrow> </math>. At the boundary of the unit square, the function is reproduced in a bad fashion.</p>
Full article ">Figure 16
<p>Box plots of the loss function values (<b>a</b>) and of the number of function evaluations (<b>b</b>) resulting from the application of SpaGrOW combined with a smoothing procedure based on Gaussian RBFs and different regularization methods: Ridge Regression, LASSO, a weighted linear regression (rlm), an RBF approximation with an additional linear term (lt), an NEN (eln) with <math display="inline"> <mrow> <mi>α</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>7</mn> </mrow> </math> and MARS. It becomes clear that MARS is the algorithm to select for regularization. Ridge Regression is reliable, as well.</p>
Full article ">Figure 17
<p>Mean Absolute Percentage Error (MAPE) values with respect to <math display="inline"> <mrow> <msub> <mo>Δ</mo> <mi>v</mi> </msub> <mi>H</mi> </mrow> </math> (<b>a</b>) and <math display="inline"> <msub> <mi>ρ</mi> <mi>l</mi> </msub> </math> (<b>b</b>) for benzene during the SpaGrOW optimization in comparison to GROW. The smoothing procedure was based on Gaussian RBFs in combination with MARS. The force field parameters to be optimized were <math display="inline"> <mrow> <mi>σ</mi> <mo stretchy="false">(</mo> <mi mathvariant="normal">H</mi> <mo stretchy="false">)</mo> <mo>,</mo> <mi>σ</mi> <mo stretchy="false">(</mo> <mi mathvariant="normal">C</mi> <mo stretchy="false">)</mo> <mo>,</mo> <mi>ε</mi> <mo stretchy="false">(</mo> <mi mathvariant="normal">H</mi> <mo stretchy="false">)</mo> </mrow> </math> and <math display="inline"> <mrow> <mi>ε</mi> <mo stretchy="false">(</mo> <mi mathvariant="normal">C</mi> <mo stretchy="false">)</mo> </mrow> </math>. A faster convergence of SpaGrOW could be confirmed.</p>
Full article ">Figure 18
<p>Development of the Lennard-Jones (LJ) parameters in the case of benzene (<math display="inline"> <mrow> <msub> <mo>Δ</mo> <mi>v</mi> </msub> <mi>H</mi> </mrow> </math>,<math display="inline"> <msub> <mi>ρ</mi> <mi>l</mi> </msub> </math>) for GROW and SpaGrOW: <span class="html-italic">σ</span>(H) and <span class="html-italic">σ</span>(C) (<b>a</b>), as well as <span class="html-italic">ε</span>(H) and <span class="html-italic">ε</span>(C) (<b>b</b>). The unfilled circles indicate the optimal parameters. SpaGrOW led more directly to the minimum than GROW. Only in the case of <span class="html-italic">ε</span> were some detours observed, due to the triangle.</p>
Full article ">Figure 19
<p>MAPE values with respect to <math display="inline"> <msub> <mi>ρ</mi> <mi>l</mi> </msub> </math> (<b>a</b>), <math display="inline"> <mrow> <msub> <mo>Δ</mo> <mi>v</mi> </msub> <mi>H</mi> </mrow> </math> (<b>b</b>) and <math display="inline"> <msub> <mi>p</mi> <mi>σ</mi> </msub> </math> (<b>c</b>) in the case of ethylene oxide during the Vapor-Liquid Equilibrium (VLE) optimization for SpaGrOW and GROW. The smoothing procedure was based on Gaussian RBFs combined with the MARS algorithm. The force field parameters were <math display="inline"> <mrow> <mi>ε</mi> <mrow> <mo stretchy="false">(</mo> <msub> <mi>CH</mi> <mn>2</mn> </msub> <mo stretchy="false">)</mo> </mrow> <mo>,</mo> <mi>ε</mi> <mrow> <mo stretchy="false">(</mo> <mi mathvariant="normal">O</mi> <mo stretchy="false">)</mo> </mrow> <mo>,</mo> <mi>σ</mi> <mrow> <mo stretchy="false">(</mo> <msub> <mi>CH</mi> <mn>2</mn> </msub> <mo stretchy="false">)</mo> </mrow> </mrow> </math> and <math display="inline"> <mrow> <mi>σ</mi> <mo stretchy="false">(</mo> <mi mathvariant="normal">O</mi> <mo stretchy="false">)</mo> </mrow> </math>. The faster convergence of SpaGrOW compared to GROW could be confirmed again.</p>
Full article ">Figure 20
<p>Development of the LJ parameters in the case of ethylene oxide (VLE) for GROW and SpaGrOW: <span class="html-italic">ε</span>(CH<math display="inline"> <msub> <mrow/> <mn>2</mn> </msub> </math>) and <span class="html-italic">ε</span>(O) (<b>a</b>), as well as <span class="html-italic">σ</span>(CH<math display="inline"> <msub> <mrow/> <mn>2</mn> </msub> </math>) and <span class="html-italic">σ</span>(O) (<b>b</b>). The unfilled circles show the final parameters. SpaGrOW again led more directly to the minimum.</p>
Full article ">Figure 21
<p>MAPE values with respect to <span class="html-italic">ρ</span> in the case of dipropylene glycol dimethyl ether during the optimization process of SpaGrOW in comparison to the modified Gauss-Newton method. The smoothing procedure was realized by the Lipra method. The optimization problem was eight-dimensional. SpaGrOW needed two iterations only, but significantly more simulations than the modified Gauss-Newton method in order to achieve optimal liquid densities.</p>
Full article ">Figure 22
<p>Optimization of the density <span class="html-italic">ρ</span> in the case of dipropylene glycol dimethyl ether using the modified Gauss-Newton method and SpaGrOW. Optimal densities could be achieved by both methods, but SpaGrOW needed many more simulations. However, SpaGrOW could reproduce the trend of the curve better.</p>
Full article ">
427 KiB  
Article
Gravitational Entropy and Inflation
by Øystein Elgarøy and Øyvind Grøn
Entropy 2013, 15(9), 3620-3639; https://doi.org/10.3390/e15093620 - 4 Sep 2013
Cited by 1 | Viewed by 6763
Abstract
The main topic of this paper is a description of the generation of entropy at the end of the inflationary era. As a generalization of the present standard model of the Universe dominated by pressureless dust and a Lorentz invariant vacuum energy (LIVE), [...] Read more.
The main topic of this paper is a description of the generation of entropy at the end of the inflationary era. As a generalization of the present standard model of the Universe dominated by pressureless dust and a Lorentz invariant vacuum energy (LIVE), we first present a flat Friedmann universe model, where the dust is replaced with an ideal gas. It is shown that the pressure of the gas is inversely proportional to the fifth power of the scale factor and that the entropy in a comoving volume does not change during the expansion. We then review different measures of gravitational entropy related to the Weyl curvature conjecture and calculate the time evolution of two proposed measures of gravitational entropy in a LIVE-dominated Bianchi type I universe, and a Lemaître-Tolman-Bondi universe with LIVE. Finally, we elaborate upon a model of energy transition from vacuum energy to radiation energy, that of Bonanno and Reuter, and calculate the time evolution of the entropies of vacuum energy and radiation energy. We also calculate the evolution of the maximal entropy according to some recipes and demonstrate how a gap between the maximal entropy and the actual entropy opens up at the end of the inflationary era. Full article
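The stated scaling of the ideal-gas pressure with the inverse fifth power of the scale factor is consistent with the standard kinetic-theory argument sketched below; the paper's own derivation may proceed along different lines.

```latex
% Sketch (standard kinetic argument): the number density dilutes with the
% comoving volume and peculiar momenta redshift, so the kinetic temperature
% of a non-relativistic ideal gas falls as a^{-2}, giving
\begin{equation}
  n \propto a^{-3}, \qquad
  T \propto \frac{\langle p_{\mathrm{pec}}^{2} \rangle}{m k_{B}} \propto a^{-2}
  \quad\Longrightarrow\quad
  P = n k_{B} T \propto a^{-3}\, a^{-2} = a^{-5}.
\end{equation}
```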
(This article belongs to the Special Issue Entropy and the Second Law of Thermodynamics)
Show Figures

Figure 1

Figure 1
<p>Variation of the scale factor with time for the ideal gas universe model of <a href="#sec2-entropy-15-03620" class="html-sec">Section 2</a>, with vanishing cosmological constant, for different values of <math display="inline"> <msub> <mo>Ω</mo> <mi mathvariant="normal">m</mi> </msub> </math> and <math display="inline"> <mrow> <msub> <mo>Ω</mo> <mi mathvariant="normal">p</mi> </msub> <mo>=</mo> <mn>1</mn> <mo>−</mo> <msub> <mo>Ω</mo> <mi mathvariant="normal">m</mi> </msub> </mrow> </math>: <math display="inline"> <mrow> <msub> <mo>Ω</mo> <mi mathvariant="normal">m</mi> </msub> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>1</mn> </mrow> </math> (full line), <math display="inline"> <mrow> <msub> <mo>Ω</mo> <mi mathvariant="normal">m</mi> </msub> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> </mrow> </math> (dotted line), <math display="inline"> <mrow> <msub> <mo>Ω</mo> <mi mathvariant="normal">m</mi> </msub> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>8</mn> </mrow> </math> (dashed line) and <math display="inline"> <mrow> <msub> <mo>Ω</mo> <mi mathvariant="normal">m</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </math> (dot-dashed line).</p>
Full article ">Figure 2
<p>Time variation of the candidate gravitational entropies, <math display="inline"> <msub> <mi>S</mi> <mrow> <mi mathvariant="normal">G</mi> <mn>1</mn> </mrow> </msub> </math> (<b>left panel</b>) and <math display="inline"> <msub> <mi>S</mi> <mrow> <mi mathvariant="normal">G</mi> <mn>2</mn> </mrow> </msub> </math> (<b>right panel</b>), in a comoving volume during the beginning of the inflationary era.</p>
Full article ">Figure 3
<p>The variation with time, relative to the initial value, of the total gravitational entropy within spheres of varying radius. The black curve corresponds to <math display="inline"> <mrow> <mi>r</mi> <mo>=</mo> <mn>2</mn> <msub> <mi>r</mi> <mn>0</mn> </msub> </mrow> </math>, the red curve to <math display="inline"> <mrow> <mi>r</mi> <mo>=</mo> <mn>3</mn> <msub> <mi>r</mi> <mn>0</mn> </msub> </mrow> </math> and the blue curve to <math display="inline"> <mrow> <mi>r</mi> <mo>=</mo> <mn>4</mn> <msub> <mi>r</mi> <mn>0</mn> </msub> </mrow> </math>. Here, <math display="inline"> <mrow> <msub> <mi>H</mi> <mn>0</mn> </msub> <mrow> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>H</mi> <mi>c</mi> </msub> <mo form="prefix">cosh</mo> <mrow> <mo>(</mo> <mi>r</mi> <mo>/</mo> <msub> <mi>r</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </mrow> </math>, <math display="inline"> <mrow> <msub> <mi>R</mi> <mn>0</mn> </msub> <mrow> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>R</mi> <mi>c</mi> </msub> <mo form="prefix">sinh</mo> <mrow> <mo>(</mo> <mi>r</mi> <mo>/</mo> <msub> <mi>r</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </mrow> </math>.</p>
Full article ">Figure 4
<p>The variation with time, relative to the initial value, of the total gravitational entropy within spheres of varying radius. The black curve corresponds to <math display="inline"> <mrow> <mi>r</mi> <mo>=</mo> <mn>2</mn> <msub> <mi>r</mi> <mn>0</mn> </msub> </mrow> </math>, the red curve to <math display="inline"> <mrow> <mi>r</mi> <mo>=</mo> <mn>3</mn> <msub> <mi>r</mi> <mn>0</mn> </msub> </mrow> </math> and the blue curve to <math display="inline"> <mrow> <mi>r</mi> <mo>=</mo> <mn>4</mn> <msub> <mi>r</mi> <mn>0</mn> </msub> </mrow> </math>. Here, <math display="inline"> <mrow> <msub> <mi>H</mi> <mn>0</mn> </msub> <mrow> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>H</mi> <mi>c</mi> </msub> <mo form="prefix">tanh</mo> <mrow> <mo>(</mo> <mi>r</mi> <mo>/</mo> <msub> <mi>r</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </mrow> </math>, <math display="inline"> <mrow> <msub> <mi>R</mi> <mn>0</mn> </msub> <mrow> <mo>(</mo> <mi>r</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>R</mi> <mi>c</mi> </msub> <mo form="prefix">tanh</mo> <mrow> <mo>(</mo> <mi>r</mi> <mo>/</mo> <msub> <mi>r</mi> <mn>0</mn> </msub> <mo>)</mo> </mrow> </mrow> </math>.</p>
Full article ">Figure 5
<p>The variation with time of the Hubble parameter in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>.</p>
Full article ">Figure 6
<p>The variation with time of <math display="inline"> <mrow> <mo>Λ</mo> <mo>/</mo> <msub> <mo>Λ</mo> <mn>0</mn> </msub> </mrow> </math> in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>.</p>
Full article ">Figure 7
<p>The variation with time of the radiation energy density in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>.</p>
Full article ">Figure 8
<p>The variation with time of the vacuum energy density parameter, <math display="inline"> <msub> <mo>Ω</mo> <mo>Λ</mo> </msub> </math>, (full line), and the density parameter for radiation, Ω, (dashed line) in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>.</p>
Full article ">Figure 9
<p>The variation with time of the entropy in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>.</p>
Full article ">Figure 10
<p>The variation with time of the entropy density in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>.</p>
Full article ">Figure 11
<p>The variation with time of the entropy production in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>.</p>
Full article ">Figure 12
<p>The Bekenstein (full line) and holographic (dashed line) upper bounds on the entropy of the universe as functions of time in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>. The entropy is plotted in units of Boltzmann’s constant and have also been divided by a factor, <math display="inline"> <mrow> <mi>C</mi> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>0295</mn> <mo>×</mo> <msup> <mn>10</mn> <mn>122</mn> </msup> </mrow> </math>.</p>
Full article ">Figure 13
<p>The radiation entropy contained within the event horizon as a function of time in the model of <a href="#sec6-entropy-15-03620" class="html-sec">Section 6</a>. Note that it is miniscule compared to the entropy bounds in the previous figures.</p>
Full article ">
228 KiB  
Article
Unification of Quantum and Gravity by Non Classical Information Entropy Space
by Germano Resconi, Ignazio Licata and Davide Fiscaletti
Entropy 2013, 15(9), 3602-3619; https://doi.org/10.3390/e15093602 - 4 Sep 2013
Cited by 18 | Viewed by 6782
Abstract
A quantum entropy space is suggested as the fundamental arena describing the quantum effects. In the quantum regime, the entropy is expressed as the superposition of many different Boltzmann entropies that span the space of the entropies before any measurement. When a measurement [...] Read more.
A quantum entropy space is suggested as the fundamental arena describing the quantum effects. In the quantum regime, the entropy is expressed as the superposition of many different Boltzmann entropies that span the space of the entropies before any measurement. When a measurement is performed, the quantum entropy collapses to one component. A suggestive reading of the relational interpretation of quantum mechanics and of Bohm’s quantum potential in terms of the quantum entropy is provided. The space associated with the quantum entropy determines a distortion in the classical space of position, which appears as a Weyl-like gauge potential connected with Fisher information. This Weyl-like gauge potential produces a deformation of the moments which changes the classical action in such a way that Bohm’s quantum potential emerges as a consequence of the non-classical definition of entropy, in a non-Euclidean information space under the constraint of a minimum condition of Fisher information (Fisher-Bohm entropy). Finally, the possible quantum relativistic extensions of the theory and the connections with the problem of quantum gravity are investigated. The non-classical thermodynamic approach to quantum phenomena changes the geometry of the particle phase space. In the light of the representation of gravity in ordinary phase space by torsion in the flat space (Teleparallel gravity), the change of geometry in the phase space introduces quantum phenomena in a natural way. This gives a new force to F. Shojai’s and A. Shojai’s theory, where the geometry of space-time is highly coupled with a quantum potential whose origin is not the Schrödinger equation but the non-classical entropy of a system of many particles that together change the geometry of the phase space of the positions (entanglement). In this way, the non-classical thermodynamics changes the classical geodesics as a consequence of the quantum phenomena, and quantum theory and gravity are unified. Quantum phenomena affect the geometry of the multidimensional phase space, and gravity changes, at any point, the torsion in the ordinary four-dimensional Lorentz space-time metric. Full article
231 KiB  
Article
A Discrete Meta-Control Procedure for Approximating Solutions to Binary Programs
by Pengbo Zhang, Wolf Kohn and Zelda B. Zabinsky
Entropy 2013, 15(9), 3592-3601; https://doi.org/10.3390/e15093592 - 4 Sep 2013
Viewed by 4449
Abstract
Large-scale binary integer programs occur frequently in many real-world applications. For some binary integer problems, finding an optimal solution or even a feasible solution is computationally expensive. In this paper, we develop a discrete meta-control procedure to approximately solve large-scale binary integer programs [...] Read more.
Large-scale binary integer programs occur frequently in many real-world applications. For some binary integer problems, finding an optimal solution or even a feasible solution is computationally expensive. In this paper, we develop a discrete meta-control procedure to approximately solve large-scale binary integer programs efficiently. The key idea is to map the vector of n binary decision variables into a scalar function defined over a time interval [0, n] and construct a linear quadratic tracking (LQT) problem that can be solved efficiently. We prove that an LQT formulation has an optimal binary solution, analogous to a classical bang-bang control in continuous time. Our LQT approach can provide advantages in reducing computation while generating a good approximate solution. Numerical examples are presented to demonstrate the usefulness of the proposed method. Full article
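To make the construction in the abstract a little more concrete, the sketch below illustrates only the identification of a binary vector with a piecewise-constant function on [0, n] and a bang-bang style rounding of a hypothetical relaxed solution; it does not reproduce the paper's LQT formulation, and the function names are illustrative only.

```python
import numpy as np

def binary_to_control(x):
    """Identify a binary vector x in {0,1}^n with a piecewise-constant
    control u(t) on [0, n], with u(t) = x[i] for t in [i, i+1).

    This only illustrates the variable-to-time mapping mentioned in the
    abstract; the actual LQT problem built on top of it is not reproduced.
    """
    x = np.asarray(x)
    return lambda t: x[np.clip(np.floor(t).astype(int), 0, len(x) - 1)]

def round_to_binary(u_relaxed, threshold=0.5):
    """Bang-bang style rounding of a relaxed (continuous) control to {0,1}.

    The paper shows that the LQT formulation admits a binary optimal
    solution; thresholding a hypothetical relaxed solution is used here
    only as a crude stand-in for that result.
    """
    return (np.asarray(u_relaxed) >= threshold).astype(int)

u = binary_to_control([1, 0, 1, 1])
print(u(np.array([0.2, 1.7, 2.5, 3.9])))      # [1 0 1 1]
print(round_to_binary([0.9, 0.2, 0.6, 0.7]))  # [1 0 1 1]
```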
(This article belongs to the Special Issue Dynamical Systems)
732 KiB  
Article
Objective Bayesianism and the Maximum Entropy Principle
by Jürgen Landes and Jon Williamson
Entropy 2013, 15(9), 3528-3591; https://doi.org/10.3390/e15093528 - 4 Sep 2013
Cited by 25 | Viewed by 6840
Abstract
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes [...] Read more.
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem. Full article
(This article belongs to the Special Issue Maximum Entropy and Bayes Theorem)
Show Figures

Figure 1

Figure 1
<p>Plotted are the partition entropy, the standard entropy and the proposition entropy under the constraints, <inline-formula> <mml:math id="mm264" display="block"> <mml:mrow> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>1</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>+</mml:mo> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>2</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>+</mml:mo> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>3</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>+</mml:mo> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>4</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>=</mml:mo> <mml:mn>1</mml:mn> </mml:mrow> </mml:math> </inline-formula>, <inline-formula> <mml:math id="mm265" display="block"> <mml:mrow> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>1</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>+</mml:mo> <mml:mn>2</mml:mn> <mml:mo>.</mml:mo> <mml:mn>75</mml:mn> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>2</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>+</mml:mo> <mml:mn>7</mml:mn> <mml:mo>.</mml:mo> <mml:mn>1</mml:mn> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>3</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>=</mml:mo> <mml:mn>1</mml:mn> <mml:mo>.</mml:mo> <mml:mn>7</mml:mn> <mml:mo>,</mml:mo> <mml:mspace width="0.166667em"/> <mml:mi>P</mml:mi> <mml:mrow> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>4</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> </mml:mrow> <mml:mo>=</mml:mo> <mml:mn>0</mml:mn> </mml:mrow> </mml:math> </inline-formula>, as a function of <inline-formula> <mml:math id="mm266" display="block"> <mml:mrow> <mml:mi>P</mml:mi> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>2</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> <mml:mo>.</mml:mo> </mml:mrow> </mml:math> </inline-formula> The dotted lines indicate the respective maxima, which obtain for different values of <inline-formula> <mml:math id="mm267" display="block"> <mml:mrow> <mml:mi>P</mml:mi> <mml:mo stretchy="false">(</mml:mo> <mml:msub> <mml:mi>ω</mml:mi> <mml:mn>2</mml:mn> </mml:msub> <mml:mo stretchy="false">)</mml:mo> <mml:mo>.</mml:mo> </mml:mrow> </mml:math> </inline-formula></p>
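The caption's constraints are simple enough that the standard-entropy case can be checked numerically. The following sketch (using scipy, with an illustrative starting point) maximizes the Shannon entropy subject to those evidence constraints; the partition and proposition entropies plotted in the figure would require their own objective functions.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize the standard (Shannon) entropy under the constraints of Figure 1:
# P1 + P2 + P3 + P4 = 1,  P1 + 2.75 P2 + 7.1 P3 = 1.7,  P4 = 0.
def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    {"type": "eq", "fun": lambda p: p[0] + 2.75 * p[1] + 7.1 * p[2] - 1.7},
    {"type": "eq", "fun": lambda p: p[3]},
]
res = minimize(neg_entropy, x0=np.array([0.4, 0.3, 0.2, 0.1]),
               bounds=[(0.0, 1.0)] * 4, constraints=constraints,
               method="SLSQP")
print(res.x)   # the maximum-entropy distribution satisfying the evidence
```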
Full article ">
1625 KiB  
Article
The Measurement of Information Transmitted by a Neural Population: Promises and Challenges
by Marshall Crumiller, Bruce Knight and Ehud Kaplan
Entropy 2013, 15(9), 3507-3527; https://doi.org/10.3390/e15093507 - 3 Sep 2013
Cited by 12 | Viewed by 6164
Abstract
All brain functions require the coordinated activity of many neurons, and therefore there is considerable interest in estimating the amount of information that the discharge of a neural population transmits to its targets. In the past, such estimates had presented a significant challenge [...] Read more.
All brain functions require the coordinated activity of many neurons, and therefore there is considerable interest in estimating the amount of information that the discharge of a neural population transmits to its targets. In the past, such estimates had presented a significant challenge for populations of more than a few neurons, but we have recently described a novel method for providing such estimates for populations of essentially arbitrary size. Here, we explore the influence of some important aspects of the neuronal population discharge on such estimates. In particular, we investigate the roles of mean firing rate and of the degree and nature of correlations among neurons. The results provide constraints on the applicability of our new method and should help neuroscientists determine whether such an application is appropriate for their data. Full article
(This article belongs to the Special Issue Estimating Information-Theoretic Quantities from Data)
Show Figures

Figure 1

Figure 1
<p>Fourier coefficient variance and covariance. (<b>A</b>) Fourier cosine coefficients from the monkey lateral geniculate nucleus (LGN) are collected and form Gaussian distributions at each frequency, represented by histograms. The inset shows Q-Q plots of the three highlighted distributions; the linearity of the sample points indicates Gaussianity. The variance of each of these distributions is used to calculate the entropy at each frequency. (<b>B</b>) Simulated data to illustrate the multivariate case. The variance along the principal axes (black) is determined by the covariance matrix of the coefficients and informs us of the information conveyed by the population.</p>
Full article ">Figure 2
<p>Simulated neuronal response to the repeat-unique stimulation paradigm. (<b>A</b>) General linear modeling (GLM) flow diagram, adapted with permission from Macmillan Publishers Ltd: <span class="html-italic">Nature</span> [<a href="#B12-entropy-15-03507" class="html-bibr">12</a>], ©2008. (<b>B</b>) A subset of the trials of a typical stimulus is displayed. Repeat stimuli (red) are all identical, whereas unique stimuli (blue) are each different from all others. (<b>C</b>) Raster plot of the responses of a simulated neuron to repeat and unique stimuli. Each row of the raster corresponds to a single trial, seen on the left. Responses to 128 trials are displayed in the raster; because repeat stimuli are all identical, the neuron produces similar spike trains (red spikes), evidenced by the appearance of vertical stripes. The response of the neuron to unique stimuli is different with each trial, and therefore, no stripes appear. (<b>D</b>, <b>top</b>) The entropy rate calculated in response to the repeated stimuli (red) is subtracted from the entropy rate calculated in response to the unique stimuli (blue); the difference between the entropies (shaded area) is the signal information rate. The integral of this entropy difference over frequency has the dimensions of information times frequency or, equivalently, bits per second. (<b>D</b>, <b>bottom</b>) The information rate is plotted as a cumulative sum across frequencies; the plot levels off with a near-zero slope at frequencies above which signal information is zero.</p>
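A crude numerical sketch of the estimate described in this caption is given below: at each frequency the Fourier coefficients across trials are treated as Gaussian, the entropy difference between the unique and repeat conditions reduces to half the log ratio of their variances, and summing over frequencies times the frequency spacing gives a rate in bits per second. Normalization details (for example, counting sine and cosine coefficients separately) are omitted and would need to follow the paper.

```python
import numpy as np

def signal_information_rate(coeff_unique, coeff_repeat, df):
    """Sketch of the Fourier-based signal information estimate.

    coeff_unique, coeff_repeat: arrays of shape (trials, frequencies) holding
    Fourier (e.g. cosine) coefficients of the responses to unique and repeated
    stimuli. The per-frequency entropy difference of the Gaussian coefficient
    distributions is 0.5*log2(var_unique/var_repeat); summing over frequencies
    and multiplying by the frequency spacing df yields bits per second.
    """
    var_u = coeff_unique.var(axis=0)
    var_r = coeff_repeat.var(axis=0)
    entropy_diff = 0.5 * np.log2(var_u / var_r)   # bits per coefficient
    return float(entropy_diff.sum() * df)

# Hypothetical usage with simulated coefficients:
rng = np.random.default_rng(2)
unique = rng.normal(scale=2.0, size=(128, 100))   # larger variance: signal + noise
repeat = rng.normal(scale=1.0, size=(128, 100))   # noise only
print(signal_information_rate(unique, repeat, df=0.5))  # roughly 50 bits/s
```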
Full article ">Figure 3
<p>Intrinsic variability of neural responses. (<b>A</b>) Twenty instances of cumulative information rates from three single neurons, with firing rates of 21, 7 and 2 spikes/s. (<b>B</b>) Standard deviations of information rates from twenty neurons, three of which are derived from the neurons in the left panel, fitted with the function, (<math display="inline"> <mrow> <mi>y</mi> <mo>=</mo> <mn>1</mn> <mo>.</mo> <mn>29</mn> <msup> <mi>x</mi> <mrow> <mo>−</mo> <mn>0</mn> <mo>.</mo> <mn>497</mn> </mrow> </msup> </mrow> </math>). The fitted curve is used to describe the 95% confidence interval of the information estimation.</p>
Full article ">Figure 4
<p>Comparison with the Direct Method. (<b>A</b>) Spike trains from 25 simulated neurons of varying firing rates and input sensitivities were subjected to both the Fourier and Direct methods of information measurement, using 4,096 trials of 30 s each to ensure enough data. (<b>B</b>) Rate errors expressed as a function of the inverse of the number of trials. The rate errors produced by the Fourier method remain small compared to those produced by the Direct Method as the number of trials decreases.</p>
Full article ">Figure 5
<p>Experimental requirements for information calculation. In this simulation, trial length and number of trials were altered independently. Information rates were calculated and compared to a reference information rate, with the difference expressed as a percentage deviation from the true (reference) rate. (<b>A</b>) Rate errors are displayed as a function of both the number of trials and trial length, with red indicating parameter choices that produced high rate errors. Slices represent the choice of input firing rate into the model. (<b>B</b>) Rate error plotted as a function of the total spike count, which is itself dependent on trial length, number of trials and firing rate. Rate errors in the right panel were fitted with a function of the form, <math display="inline"> <mrow> <mi>E</mi> <mo>=</mo> <mi>a</mi> <msup> <mi>x</mi> <mi>b</mi> </msup> </mrow> </math>.</p>
Full article ">Figure 6
<p>Effects of firing rate instability. Neurons with bimodal firing statistics were simulated, switching between <span class="html-italic">Up</span> and <span class="html-italic">Down</span> states throughout each trial. The firing rate difference between <span class="html-italic">Up</span> and <span class="html-italic">Down</span> states is represented as a proportion of the mean firing rate and the average duration of each state by its reciprocal in Hz. (<b>A</b>) Firing rates of two sample neurons are plotted in red, each with a mean firing rate of 10 spikes/s. The top neuron oscillates between five and 15 spikes/s (reversal amplitude = 0.5), with a mean fluctuation rate of <math display="inline"> <mrow> <mn>0</mn> <mo>.</mo> <mn>5</mn> <mspace width="0.166667em"/> <mtext>Hz</mtext> </mrow> </math>. The bottom neuron oscillates between zero and 20 spikes/s (reversal amplitude = 1.0), with a mean fluctuation rate of 3 Hz. (<b>B</b>) Heat map illustrating the effect of state fluctuation on information rates. All neurons had mean firing rates of 15 spikes/s; information decreased with reversal amplitude, with the effects of the decrease being partially mitigated by increases in reversal rate. (<b>C</b>) The fraction of Fourier coefficient distributions that were Gaussian, plotted against reversal amplitude and reversal rate. Fourier coefficient distributions at each frequency were subjected to the Shapiro-Wilk test for non-normality at the 5% significance level (dashed red line).</p>
Full article ">Figure 7
<p>Effect of spike-neuron misassignment on information rate. (<b>A</b>) The spike neuron misassignment procedure follows three steps: (1) spike rasters for individual neurons are generated; (2) a percentage of spikes from each neuron, highlighted in red, are selected at random; (3) the selected spikes are evenly distributed to the other neurons. (<b>B</b>) Average rate errors are expressed as a function of both the number of neurons and of the misassignment percentage. Sampled points are displayed as black dots, and the values are interpolated to create a smooth heat map. (<b>C</b>) Average rate errors, averaged across group sizes, with the special case of two neurons excluded.</p>
Full article ">Figure 8
<p>Spike Deletion Procedure. (<b>A</b>) Deletion of randomly selected spikes (shown in red) from the spike train with more spikes abolishes high-frequency information miscalculation. (<b>B</b>, <b>top</b>) Information accumulates (cool colors) at high frequencies in the case where the number of unique spikes exceeds the number of repeat spikes, and declines (warm colors) when the repeat set is larger. (<b>bottom</b>) After the spike deletion procedure, information accumulation trends are abolished. (<b>C</b>) The percentage of information reduced as a function of the percentage of spikes deleted in both the repeat and unique sets.</p>
Full article ">Figure 9
<p>Signal- and coupling-induced correlations. (<b>A</b>) Effects of signal correlation on redundancy. Responses of two uncoupled neurons to stimuli of increasing correlation are compared. Cumulative information plots of the two extreme cases of low and high stimulus correlation are displayed on top. For low correlation (<math display="inline"> <mrow> <mi>r</mi> <mo>≈</mo> <mn>0</mn> </mrow> </math>), group information (red curve) and the sum of information from all the individual cells (blue curve) are nearly identical, due to the lack of correlation in the neural responses; high correlation (<math display="inline"> <mrow> <mi>r</mi> <mo>=</mo> <mn>1</mn> </mrow> </math>) in the stimulus induces correlation in the neural responses, and the amount of redundant information (shaded gray area) increases. (<b>Bottom</b>) The relationship between stimulus correlation and both group (red) and summed individual total information (blue). (<b>B</b>) Redundancy calculated as a function of stimulus correlation (solid black line) and coupling strength (dotted black line). (<b>C</b>) Effects of neuronal coupling on redundancy. In the upper left panel, neuronal coupling is weak (coupling strength = 0), and the neurons transmit the same (independent) information. In the upper right panel, the coupling is strong (coupling strength = 1), resulting in redundant information (grey area between the group information, shown in red, and total information, shown in blue).</p>
Full article ">Figure 10
<p>Information in large neural populations. (<b>A</b>) Group size <span class="html-italic">versus</span> sum total and group information using the GLM. The sum total information, which does not take into account correlations between cells, increases linearly with the number of cells (blue), whereas the group information rate (red) climbs sub-linearly, due to the progressive increase in redundant information (shaded gray area). (<b>B</b>) Processing times on a desktop computer for group sizes of up to 500 neurons. Calculations were performed on an Intel® Core™ i7-3770K running at <math display="inline"> <mrow> <mn>3</mn> <mo>.</mo> <mn>9</mn> <mspace width="0.166667em"/> <mtext>GHz</mtext> </mrow> </math> with <math display="inline"> <mrow> <mn>32</mn> <mspace width="0.166667em"/> <mtext>GB</mtext> </mrow> </math> RAM.</p>
Full article ">
476 KiB  
Article
Land-Use Planning for Urban Sprawl Based on the CLUE-S Model: A Case Study of Guangzhou, China
by Linyu Xu, Zhaoxue Li, Huimin Song and Hao Yin
Entropy 2013, 15(9), 3490-3506; https://doi.org/10.3390/e15093490 - 2 Sep 2013
Cited by 51 | Viewed by 8619
Abstract
In recent years, changes in land use resulting from rapid urbanization or urban sprawl have brought about many negative effects on land ecosystems and have led to entropy increases. This study introduces the novel idea of a planning regulation coefficient for sustainable land-use [...] Read more.
In recent years, changes in land use resulting from rapid urbanization or urban sprawl have brought about many negative effects on land ecosystems and have led to entropy increases. This study introduces the novel idea of a planning regulation coefficient for sustainable land-use planning in order to decrease entropy, combined with the CLUE-S model to predict land-use change. Three scenarios were designed as the basis for land-use projections for Guangzhou, China, in 2015, and the changes in the land ecological service function for each scenario were predicted. The results show that, although the current land-use plan is quite reasonable, it will be necessary to further strengthen the protection of farmland and important ecological service function areas. Full article
(This article belongs to the Special Issue Entropy and Urban Sprawl)
Show Figures

Figure 1

Figure 1
<p>Structure of the CLUE-S model framework.</p>
Full article ">Figure 2
<p>Map of China showing location of Guangzhou.</p>
Full article ">Figure 3
<p>Spatial variation of the coefficient of regulation of land-use planning in Guangzhou.</p>
Full article ">Figure 4
<p>Diagram of comparative land usage in 2006 and 2009. (<b>a</b>) Simulated land-use diagram for 2006; (<b>b</b>) Actual land-use diagram for 2006; (<b>c</b>) Simulated land-use diagram for 2009; (<b>d</b>) Actual land-use diagram for 2009.</p>
Full article ">Figure 5
<p>Predicted land use in Guangzhou in 2015 for the three plans.</p>
Full article ">
134 KiB  
Article
Deformed Exponentials and Applications to Finance
by Barbara Trivellato
Entropy 2013, 15(9), 3471-3489; https://doi.org/10.3390/e15093471 - 2 Sep 2013
Cited by 48 | Viewed by 6604
Abstract
We illustrate some financial applications of the Tsallis and Kaniadakis deformed exponential. The minimization of the corresponding deformed divergence is discussed as a criterion to select a pricing measure in the valuation problems of incomplete markets. Moreover, heavy-tailed models for price processes are [...] Read more.
We illustrate some financial applications of the Tsallis and Kaniadakis deformed exponential. The minimization of the corresponding deformed divergence is discussed as a criterion to select a pricing measure in the valuation problems of incomplete markets. Moreover, heavy-tailed models for price processes are proposed, which generalize the well-known Black and Scholes model. Full article
(This article belongs to the Collection Advances in Applied Statistical Mechanics)
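For readers unfamiliar with the deformations, the standard closed forms are exp_q(x) = [1 + (1 − q)x]^{1/(1−q)} on its support (Tsallis) and exp_κ(x) = (√(1 + κ²x²) + κx)^{1/κ} (Kaniadakis), both reducing to the ordinary exponential as q → 1 and κ → 0. A quick numerical sketch follows; the definitions are generic, not code from the paper, and the parameter values are illustrative.

```python
# Standard definitions of the two deformed exponentials (generic sketch, not code from the paper).
import numpy as np

def tsallis_exp(x, q):
    """q-exponential [1 + (1 - q) x]^(1/(1-q)) on its support; points outside the
    support (1 + (1 - q) x <= 0) are set to 0 here. Reduces to exp(x) as q -> 1."""
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    return np.power(base, 1.0 / (1.0 - q), out=out, where=base > 0.0)

def kaniadakis_exp(x, kappa):
    """kappa-exponential (sqrt(1 + kappa^2 x^2) + kappa x)^(1/kappa); exp(x) as kappa -> 0."""
    x = np.asarray(x, dtype=float)
    if abs(kappa) < 1e-12:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

x = np.linspace(-4.0, 1.5, 6)
print(np.exp(x))                     # ordinary exponential, for comparison
print(tsallis_exp(x, q=1.5))         # power-law (heavy) left tail for q > 1
print(kaniadakis_exp(x, kappa=0.5))  # power-law tails for kappa != 0
```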
1070 KiB  
Article
Analysis of EEG via Multivariate Empirical Mode Decomposition for Depth of Anesthesia Based on Sample Entropy
by Qin Wei, Quan Liu, Shou-Zhen Fan, Cheng-Wei Lu, Tzu-Yu Lin, Maysam F. Abbod and Jiann-Shing Shieh
Entropy 2013, 15(9), 3458-3470; https://doi.org/10.3390/e15093458 - 30 Aug 2013
Cited by 57 | Viewed by 10207
Abstract
In monitoring the depth of anesthesia (DOA), the electroencephalography (EEG) signals of patients have been utilized during surgeries to diagnose their level of consciousness. Different entropy methods were applied to analyze the EEG signal and measure its complexity, such as spectral entropy, approximate [...] Read more.
In monitoring the depth of anesthesia (DOA), the electroencephalography (EEG) signals of patients have been utilized during surgeries to diagnose their level of consciousness. Different entropy methods were applied to analyze the EEG signal and measure its complexity, such as spectral entropy, approximate entropy (ApEn) and sample entropy (SampEn). However, as a weak physiological signal, EEG is easily subject to interference from external sources such as electric power lines, electric knives and other electrophysiological signal sources, which leads to a reduction in the accuracy of DOA determination. In this study, we adopt multivariate empirical mode decomposition (MEMD) to decompose and reconstruct the EEG recorded from clinical surgeries, owing to its superior performance compared with empirical mode decomposition (EMD), ensemble EMD (EEMD) and complementary EEMD (CEEMD). Moreover, a comparison between SampEn and ApEn in measuring DOA shows that SampEn is a practical and efficient method for monitoring DOA in real time during surgeries. Full article
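Sample entropy itself has a compact definition: count the pairs of length-m templates that match within a tolerance r (B), count the pairs that still match when extended to length m + 1 (A), and report −ln(A/B). The sketch below is a generic textbook implementation, not the authors' clinical pipeline; m = 2 and r = 0.2 × SD are common but assumed choices.

```python
# Generic sample-entropy sketch (textbook implementation, not the authors' clinical pipeline).
# Assumed choices: embedding dimension m = 2, tolerance r = 0.2 * standard deviation.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def match_count(dim):
        # All N - m templates of length `dim`, compared with the Chebyshev distance.
        templates = np.array([x[i:i + dim] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= r) - 1          # exclude the self-match
        return count

    b = match_count(m)        # pairs matching at length m
    a = match_count(m + 1)    # pairs still matching at length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
print(sample_entropy(rng.normal(size=1000)))                       # irregular signal -> high SampEn
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))    # regular signal -> low SampEn
```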
Show Figures

Figure 1

Figure 1
<p>Comparison among extended EMD methods: (<b>a</b>) EMD, (<b>b</b>) EEMD, (<b>c</b>) CEEMD and (<b>d</b>) N-A MEMD, used to decompose and reconstruct two sine waves of 2 Hz and 20 Hz, respectively, from a composite original signal whose first half consists of the two sine waves and whose second half is the 2 Hz sine wave corrupted by white Gaussian noise.</p>
Full article ">Figure 2
<p>The rEEG and EOG analyzed by EMD, EEMD, CEEMD and N-A MEMD, respectively.</p>
Full article ">Figure 3
<p>Comparison of the performance among EMD, EEMD, CEEMD and N-A MEMD in extracting EEG and EOG. (<b>a</b>) The distribution of sample entropy of the oEEG and rEEG, and (<b>b</b>) the correlation between original and reconstructed EOG by EMD, EEMD, CEEMD and N-A MEMD in twenty cases.</p>
Full article ">Figure 4
<p>An EEG recording of a 52-year-old female patient undergoing urological surgery. (<b>a</b>) oEEG signals; (<b>b</b>) rEEG signals processed by N-A MEMD; (<b>c</b>) spectra of (<b>a</b>) and (<b>b</b>).</p>
Full article ">Figure 5
<p>Comparison of the SampEn and ApEn in the oEEG and rEEG. (<b>a</b>) The real time RE and SE recording from the device Datex-Ohmeda S/5. (<b>b</b>) SampEn and ApEn of the rEEG; (<b>c</b>) SampEn and ApEn of the oEEG.</p>
Full article ">
1102 KiB  
Article
Implication of Negative Entropy Flow for Local Rainfall
by Ying Liu, Chongjian Liu and Zhaohui Li
Entropy 2013, 15(9), 3449-3457; https://doi.org/10.3390/e15093449 - 30 Aug 2013
Cited by 2 | Viewed by 5244
Abstract
The relation between the atmospheric entropy flow field and local rainfall is examined in terms of the theory of dissipative structures. In this paper, the entropy balance equation in a form suitable for describing the entropy budget of the Earth’s atmosphere is derived [...] Read more.
The relation between the atmospheric entropy flow field and local rainfall is examined in terms of the theory of dissipative structures. In this paper, the entropy balance equation in a form suitable for describing the entropy budget of the Earth’s atmosphere is derived starting from the Gibbs relation, and, as examples, the entropy flows of two severe weather events, associated with the development of an extratropical cyclone and a tropical storm, respectively, are calculated. The results show that negative entropy flow (NEF) has a significant effect on precipitation intensity and scope, with an apparent match between the NEF pattern and the rainfall distribution, and that the diagnosis of NEF can provide a good indicator for precipitation forecasting. Full article
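The dissipative-structure framework invoked here rests on Prigogine's decomposition of the entropy change of an open system into an exchange term and a non-negative production term; the standard form is quoted below for orientation only (the paper's atmosphere-specific balance derived from the Gibbs relation is not reproduced). A sustained negative exchange term, i.e., a negative entropy flow, is what allows organized structures such as strong precipitation systems to develop.

```latex
% Standard entropy balance of an open system (Prigogine), quoted for orientation.
\frac{\mathrm{d}S}{\mathrm{d}t}
  \;=\; \underbrace{\frac{\mathrm{d}_e S}{\mathrm{d}t}}_{\text{entropy flow (exchange)}}
  \;+\; \underbrace{\frac{\mathrm{d}_i S}{\mathrm{d}t}}_{\text{entropy production}\;\ge\; 0}
```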
Show Figures

Figure 1

Figure 1
<p>Schematic diagram for Bénard convection as the prototype of self-organization.</p>
Full article ">Figure 2
<p>Negative entropy flow field (shading in 1 × 10<sup>−5</sup> J/(K·s)), wind field (vector) at 950 hPa for the times of (<b>a</b>) 0600, (<b>b</b>) 1200 and (<b>c</b>) 1800 UTC 3 November, and (<b>d</b>) 0000 UTC 4 November 2012, and the corresponding thenceforth 6 h accumulative rainfall distributions (solid line in <span class="html-italic">mm</span>, rainfalls larger than 10 mm are numbered) for the region around North China centered near Beijing City.</p>
Full article ">Figure 3
<p>Negative entropy flow field (shading in 1 × 10<sup>−5</sup> J/(K·s)), wind field (vector) at 950 hPa for the period from 1800 UTC 14 to 0000 UTC 16 July 2006 at 6 h intervals and the corresponding thenceforth 6 h accumulative rainfall distributions (solid line in <span class="html-italic">mm</span>, rainfalls larger than 50 mm are numbered) for the region in southern China. The areas in the rectangle are the areas defined in <a href="#entropy-15-03449-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 4
<p>Changes in the averaged negative entropy flow (solid line in 1 × 10<sup>−5</sup> J/(K·s)) at 950 hPa and the averaged thenceforth 6 h accumulative precipitation (dashed line in mm) over the area (23–30 °N, 110–120 °E) from 1800 UTC 14 July to 0000 UTC 16 July 2006.</p>
Full article ">
526 KiB  
Article
Bacterial DNA Sequence Compression Models Using Artificial Neural Networks
by Manuel J. Duarte and Armando J. Pinho
Entropy 2013, 15(9), 3435-3448; https://doi.org/10.3390/e15093435 - 30 Aug 2013
Viewed by 5640
Abstract
It is widely accepted that the advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from [...] Read more.
It is widely accepted that the advances in DNA sequencing techniques have contributed to an unprecedented growth of genomic data. This fact has increased the interest in DNA compression, not only from the information theory and biology points of view, but also from a practical perspective, since such sequences require storage resources. Several compression methods exist, and particularly, those using finite-context models (FCMs) have received increasing attention, as they have been proven to effectively compress DNA sequences with low bits-per-base, as well as low encoding/decoding time-per-base. However, the amount of run-time memory required to store high-order finite-context models may become impractical, since a context-order as low as 16 requires a maximum of 17.2 × 10<sup>9</sup> memory entries. This paper presents a method to reduce such a memory requirement by using a novel application of artificial neural networks (ANN) to build such probabilistic models in a compact way and shows how to use them to estimate the probabilities. Such a system was implemented, and its performance compared against state-of-the-art compressors, such as XM-DNA (expert model) and FCM-Mx (mixture of finite-context models), as well as with general-purpose compressors. Using a combination of an order-10 FCM and an ANN, encoding results similar to those of FCMs of up to order 16 are obtained using only 17 megabytes of memory, whereas the latter, even employing hash-tables, use several hundred megabytes. Full article
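The memory figure quoted above follows directly from the alphabet size: an order-k model over {A, C, G, T} needs at most 4^(k+1) counters (4^k contexts × 4 symbols), so order 16 gives 4^17 ≈ 17.2 × 10^9 entries. Below is a minimal order-k finite-context model with a Laplace-smoothed probability estimate; it is a generic sketch, and the paper's ANN-based replacement of the count table is not reproduced.

```python
# Minimal order-k finite-context model with a Laplace probability estimator.
# Generic sketch; the ANN-based model described in the paper is not reproduced.
import math
from collections import defaultdict

ALPHABET = "ACGT"

def fcm_bits_per_base(seq, k=4, alpha=1.0):
    """Encoding cost (bits/base) of `seq` under an adaptively updated order-k FCM."""
    counts = defaultdict(lambda: [0, 0, 0, 0])        # context -> symbol counts
    total_bits = 0.0
    for i in range(k, len(seq)):
        ctx, sym = seq[i - k:i], ALPHABET.index(seq[i])
        c = counts[ctx]
        p = (c[sym] + alpha) / (sum(c) + 4 * alpha)   # Laplace-smoothed estimate
        total_bits += -math.log2(p)
        c[sym] += 1                                   # adaptive update
    return total_bits / (len(seq) - k)

# Maximum table size for a full order-16 model: 4**16 contexts x 4 symbols = 4**17 entries.
print(f"{4**17:.3e} entries")                   # ~1.72e10, i.e. 17.2 x 10^9
print(fcm_bits_per_base("ACGT" * 2000, k=4))    # highly repetitive -> well below 2 bits/base
```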
Show Figures

Figure 1

Figure 1
<p>Structure of an encoder architecture, which employs a probabilistic model estimator and an arithmetic encoder for the symbols {A, C, G, T}.</p>
Full article ">Figure 2
<p>Structure of the Time-Series Artificial Neural Network proposed. The bias units within the hidden layers are omitted for readability.</p>
Full article ">Figure 3
<p>Bits-per-base and artificial neural network (ANN) model usage, for the NN1 encoder, by sweeping the memory depth (L1) and the hidden node count (L2). Two different subsets were used.</p>
Full article ">Figure 4
<p>Run-time memory usage of the encoders over the number of bases of each sequence. This dependency arises from the fact that, for models of an order higher than 10, hash-tables are used to minimize the memory allocated.</p>
Full article ">
648 KiB  
Article
Determination of Optimal Water Quality Monitoring Points in Sewer Systems using Entropy Theory
by Jung Ho Lee
Entropy 2013, 15(9), 3419-3434; https://doi.org/10.3390/e15093419 - 29 Aug 2013
Cited by 20 | Viewed by 6476
Abstract
To monitor water quality continuously over the entire sewer network is important for efficient management of the system. However, it is practically impossible to implement continuous water quality monitoring of all junctions of a sewer system due to budget constraints. Therefore, water quality [...] Read more.
To monitor water quality continuously over the entire sewer network is important for efficient management of the system. However, it is practically impossible to implement continuous water quality monitoring of all junctions of a sewer system due to budget constraints. Therefore, water quality monitoring locations must be selected as the points that are most representative of the dataset throughout the system. However, the optimal selection of water quality monitoring locations in urban sewer networks has rarely been studied. This study proposes a method for the optimal selection of water quality monitoring points in sewer systems based on entropy theory. The proposed model provides a quantitative assessment of the data collected at monitoring points. The points that maximize the total information among the collected data at multiple locations are selected using a genetic algorithm (GA) for water quality monitoring. The proposed model is demonstrated for a small urban sewer system. Full article
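A toy version of the underlying objective (illustrative only, not the paper's model): discretize the water-quality series at each candidate junction and score a candidate set of monitoring points by the joint Shannon entropy of their readings, i.e., the total information the set delivers. The paper searches this space with a GA; the sketch simply enumerates all three-point subsets of six simulated junctions.

```python
# Toy objective for monitoring-point selection (illustrative only, not the paper's model):
# score a candidate set of junctions by the joint Shannon entropy of their discretized
# water-quality readings; the paper optimizes a related criterion with a genetic algorithm.
from collections import Counter
from itertools import combinations
import math

import numpy as np

rng = np.random.default_rng(0)

# Assumed data: concentration time series at six candidate junctions (three nearly redundant).
base = rng.normal(size=500)
series = np.vstack([base + 0.1 * rng.normal(size=500),   # junctions 0-2: near-duplicates
                    base + 0.1 * rng.normal(size=500),
                    base + 0.1 * rng.normal(size=500),
                    rng.normal(size=500),                 # junctions 3-5: independent
                    rng.normal(size=500),
                    rng.normal(size=500)])

def discretize(x, bins=8):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges)

labels = np.array([discretize(s) for s in series])

def joint_entropy(subset):
    tuples = list(zip(*(labels[j] for j in subset)))
    n = len(tuples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(tuples).values())

best = max(combinations(range(6), 3), key=joint_entropy)
print(best, round(joint_entropy(best), 2))   # prefers junctions that are not redundant
```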
Show Figures

Figure 1

Figure 1
<p>Model for the selection of the water quality monitoring points.</p>
Full article ">Figure 2
<p>Example sewer network.</p>
Full article ">Figure 3
<p>Hagye basin sewer network.</p>
Full article ">Figure 4
<p>Seven optimal monitoring points.</p>
Full article ">
913 KiB  
Article
Information Entropy As a Basic Building Block of Complexity Theory
by Jianbo Gao, Feiyan Liu, Jianfang Zhang, Jing Hu and Yinhe Cao
Entropy 2013, 15(9), 3396-3418; https://doi.org/10.3390/e15093396 - 29 Aug 2013
Cited by 57 | Viewed by 12112
Abstract
What is information? What role does information entropy play in this information exploding age, especially in understanding emergent behaviors of complex systems? To answer these questions, we discuss the origin of information entropy, the difference between information entropy and thermodynamic entropy, the role [...] Read more.
What is information? What role does information entropy play in this information exploding age, especially in understanding emergent behaviors of complex systems? To answer these questions, we discuss the origin of information entropy, the difference between information entropy and thermodynamic entropy, the role of information entropy in complexity theories, including chaos theory and fractal theory, and speculate new fields in which information entropy may play important roles. Full article
(This article belongs to the Special Issue Dynamical Systems)
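As a concrete anchor for the discussion, a textbook example (not taken from the paper): the information entropy of a discrete source with probabilities p_i is H = −Σ p_i log₂ p_i, measured in bits, whereas thermodynamic entropy carries the Boltzmann constant and the natural logarithm.

```python
# Textbook illustration (not from the paper): Shannon entropy of a discrete source.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))    # bits per symbol

print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))   # 2 bits: maximal for 4 symbols
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))       # ~1.36 bits: less uncertainty
print(shannon_entropy([1.0, 0.0, 0.0, 0.0]))       # 0 bits: no uncertainty
```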
Show Figures

Figure 1

Figure 1
<p>Random fractal of discs with a Pareto-distributed size: <math display="inline"> <mrow> <mi>P</mi> <mrow> <mo>[</mo> <mi>X</mi> <mo>≥</mo> <mi>x</mi> <mo>]</mo> </mrow> <mo>=</mo> <msup> <mrow> <mo>(</mo> <mn>1</mn> <mo>.</mo> <mn>8</mn> <mo>/</mo> <mi>x</mi> <mo>)</mo> </mrow> <mrow> <mn>1</mn> <mo>.</mo> <mn>8</mn> </mrow> </msup> </mrow> </math>.</p>
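The disc sizes in this figure follow the Pareto law stated in the caption, which can be sampled by inverse transform: setting U = (1.8/x)^1.8 gives x = 1.8 · U^(−1/1.8). The sketch below reproduces only the size distribution (x_min = 1.8 and the exponent 1.8 are read off the caption; disc placement is not reproduced).

```python
# Inverse-CDF sampler for the Pareto size law in the caption: P[X >= x] = (1.8 / x)**1.8.
# Only the size distribution is reproduced; disc placement is not.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)
sizes = 1.8 * u ** (-1.0 / 1.8)          # x_min = 1.8, tail exponent 1.8

# Empirical check of the survival function at a few thresholds.
for x in (2.0, 5.0, 20.0):
    print(x, (sizes >= x).mean(), (1.8 / x) ** 1.8)
```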
Full article ">Figure 2
<p>Representative results of using the Tsallis distribution to fit the sea clutter radar return data. Here, <math display="inline"> <mrow> <mo>(</mo> <mi>q</mi> <mo>,</mo> <mi>β</mi> <mo>)</mo> </mrow> </math> are <math display="inline"> <mrow> <mo>(</mo> <mn>1</mn> <mo>.</mo> <mn>34</mn> <mo>,</mo> <mn>43</mn> <mo>.</mo> <mn>14</mn> <mo>)</mo> </mrow> </math> and <math display="inline"> <mrow> <mo>(</mo> <mn>1</mn> <mo>.</mo> <mn>51</mn> <mo>,</mo> <mn>147</mn> <mo>.</mo> <mn>06</mn> <mo>)</mo> </mrow> </math>, respectively (adapted from [<a href="#B32-entropy-15-03396" class="html-bibr">32</a>]).</p>
Full article ">Figure 3
<p>Ensemble forecasting in the chaotic Lorenz system: 2,500 ensemble members, initially represented by the pink color, evolve to those represented by the red, green and blue colors at <math display="inline"> <mrow> <mi>t</mi> <mo>=</mo> <mn>2</mn> </mrow> </math>, 4 and 6 units.</p>
Full article ">Figure 4
<p>The variation with time, for the EEG signal of a patient, of (<b>a1</b>,<b>a2</b>) the Lempel-Ziv (LZ) complexity, (<b>b1</b>,<b>b2</b>) the normalized LZ complexity, (<b>c1</b>,<b>c2</b>) the correlation entropy and (<b>d1</b>,<b>d2</b>) the correlation dimension. Panels (a1–d1) are obtained by partitioning the EEG signals into short windows of length <math display="inline"> <mrow> <mi>W</mi> <mo>=</mo> <mn>500</mn> </mrow> </math> points; (a2–d2) are obtained using <math display="inline"> <mrow> <mi>W</mi> <mo>=</mo> <mn>2</mn> <mo>,</mo> <mn>000</mn> </mrow> </math>. The vertical dashed lines in (a1,a2) indicate seizure occurrence times determined by medical experts.</p>
Full article ">
284 KiB  
Article
A New Perspective on Classical Ideal Gases
by Jacques Arnaud, Laurent Chusseau and Fabrice Philippe
Entropy 2013, 15(9), 3379-3395; https://doi.org/10.3390/e15093379 - 29 Aug 2013
Cited by 1 | Viewed by 6604
Abstract
The ideal-gas barometric and pressure laws are derived from the Democritian concept of independent corpuscles moving in vacuum, plus a principle of simplicity, namely that these laws are independent of the kinetic part of the Hamiltonian. A single corpuscle in contact with a [...] Read more.
The ideal-gas barometric and pressure laws are derived from the Democritian concept of independent corpuscles moving in vacuum, plus a principle of simplicity, namely that these laws are independent of the kinetic part of the Hamiltonian. A single corpuscle in contact with a heat bath in a cylinder and subjected to a constant force (its weight) is considered. This paper substantially supplements a previously published one: first, the stability of ideal gases is established; second, we show that when walls that separate the cylinder into parts are later removed, the entropy is unaffected. We obtain full agreement with Landsberg’s and others’ (1994) classical thermodynamic result for the entropy of a column of gas subjected to gravity. Full article
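For orientation, the two classical results being re-derived are quoted below in their standard forms (textbook statements, not the paper's corpuscle-by-corpuscle derivation): the barometric law for a column of gas of molecular mass m at temperature T in a uniform gravitational field g, and the ideal-gas pressure law.

```latex
% Standard forms quoted for reference, not the paper's derivation.
n(z) \;=\; n(0)\,\mathrm{e}^{-m g z / (k_{\mathrm{B}} T)}
\qquad\text{(barometric law)},
\qquad
p\,V \;=\; N\,k_{\mathrm{B}}\,T
\qquad\text{(pressure law)}.
```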
Show Figures

Figure 1

Figure 1
<p>Space-time (<math display="inline"> <mrow> <mi>z</mi> <mo>,</mo> <mi>t</mi> </mrow> </math>) trajectory of a unit-weight corpuscle bouncing off the ground at <span class="html-italic">z</span>=0. In (<b>a</b>), the maximum altitude reached by the corpuscle is <math display="inline"> <mrow> <msub> <mi>z</mi> <mi>m</mi> </msub> <mo>=</mo> <mi>E</mi> </mrow> </math>, where <span class="html-italic">E</span> denotes the energy. The motion is periodic, with period <math display="inline"> <mrow> <mi>τ</mi> <mo>(</mo> <msub> <mi>z</mi> <mi>m</mi> </msub> <mo>)</mo> </mrow> </math>, where <math display="inline"> <mrow> <mi>τ</mi> <mo>(</mo> <mi>ζ</mi> <mo>)</mo> </mrow> </math> denotes the corpuscle round-trip time at a distance, <span class="html-italic">ζ</span>, from the top of the trajectory. The time during which the corpuscle is located above <span class="html-italic">z</span>, divided by the period, is evidently <math display="inline"> <mrow> <mi>τ</mi> <mrow> <mo>(</mo> <msub> <mi>z</mi> <mi>m</mi> </msub> <mo>-</mo> <mi>z</mi> <mo>)</mo> </mrow> <mo>/</mo> <mi>τ</mi> <mrow> <mo>(</mo> <msub> <mi>z</mi> <mi>m</mi> </msub> <mo>)</mo> </mrow> </mrow> </math>. This expression holds, even if the motion is not symmetric in time. In (<b>b</b>), the maximum altitude is restricted to <span class="html-italic">h</span> by a piston. The motion period becomes <math display="inline"> <mrow> <mi>τ</mi> <mrow> <mo>(</mo> <msub> <mi>z</mi> <mi>m</mi> </msub> <mo>)</mo> </mrow> <mo>-</mo> <mi>τ</mi> <mrow> <mo>(</mo> <msub> <mi>z</mi> <mi>m</mi> </msub> <mo>-</mo> <mi>h</mi> <mo>)</mo> </mrow> </mrow> </math>.</p>
Full article ">
293 KiB  
Article
Information Geometry of Complex Hamiltonians and Exceptional Points
by Dorje C. Brody and Eva-Maria Graefe
Entropy 2013, 15(9), 3361-3378; https://doi.org/10.3390/e15093361 - 23 Aug 2013
Cited by 36 | Viewed by 7573
Abstract
Information geometry provides a tool to systematically investigate the parameter sensitivity of the state of a system. If a physical system is described by a linear combination of eigenstates of a complex (that is, non-Hermitian) Hamiltonian, then there can be phase transitions where [...] Read more.
Information geometry provides a tool to systematically investigate the parameter sensitivity of the state of a system. If a physical system is described by a linear combination of eigenstates of a complex (that is, non-Hermitian) Hamiltonian, then there can be phase transitions where dynamical properties of the system change abruptly. In the vicinity of the transition points, the state of the system becomes highly sensitive to changes in the parameters of the Hamiltonian. The parameter sensitivity can then be measured in terms of the Fisher-Rao metric and the associated curvature of the parameter-space manifold. A general scheme for the geometric study of parameter-space manifolds of eigenstates of complex Hamiltonians is outlined here, leading to generic expressions for the metric. Full article
(This article belongs to the Special Issue Distance in Information and Statistical Physics Volume 2)
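For a smoothly parametrized family of normalized pure states |ψ(θ)⟩, the metric referred to here is, up to a conventional normalization, the Fubini-Study (quantum Fisher-Rao) form quoted below; this standard expression is given for orientation only, and the paper's generic expressions for eigenstates of complex Hamiltonians are not reproduced.

```latex
% Standard Fubini-Study / quantum Fisher-Rao metric for normalized pure states
% (quoted for orientation; not the paper's generic non-Hermitian expressions).
g_{ab} \;=\; \operatorname{Re}\Big(
  \langle \partial_a \psi \,|\, \partial_b \psi \rangle
  \;-\;
  \langle \partial_a \psi \,|\, \psi \rangle\,
  \langle \psi \,|\, \partial_b \psi \rangle
\Big)
```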