
Next Issue: Volume 15, September
Previous Issue: Volume 15, July
 
 
Entropy, Volume 15, Issue 8 (August 2013) – 20 articles, Pages 2874-3311

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
274 KiB  
Article
Entropy and Computation: The Landauer-Bennett Thesis Reexamined
by Meir Hemmo and Orly Shenker
Entropy 2013, 15(8), 3297-3311; https://doi.org/10.3390/e15083297 - 21 Aug 2013
Cited by 13 | Viewed by 6486
Abstract
The so-called Landauer-Bennett thesis says that logically irreversible operations (physically implemented), such as erasure, necessarily involve dissipation of at least k ln 2 per bit of lost information. We identify the physical conditions that are necessary and sufficient for erasure and show that the thesis does not follow from the principles of classical mechanics. In particular, we show that even if one assumes that information processing is constrained by the laws of classical mechanics, it need not be constrained by the Second Law of thermodynamics. Full article
(This article belongs to the Special Issue Maxwell’s Demon 2013)
Figures:
Figure 1: Accessible region with one-to-one correlation.
Figure 2: Accessible region with one-to-many correlation.
Figure 3: Macrostates of D + G.
Figure 4: Pre-erasure macrostate: Part a.
Figure 5: Pre-erasure macrostate: Part b.
Figure 6: Post-erasure macrostate.
Figure 7: Blending: Part a.
Figure 8: Blending: Part b.
Figure 9: Blending across macrostates and entropy decrease.
Figure 10: Restore-to-one.
Figure 11: Erasure of known data: Case a.
Figure 12: Erasure of known data: Case b.
Figure 13: Post-erasure macrostate Rb.
Figure 14: Post-erasure macrostate Ra.
Figure 15: Erasure of random data: Part 1.
Figure 16: Erasure of random data: Part 2.
Figure 17: Erasure of random data: Part 3.
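As a numerical companion to the k ln 2 bound discussed in the abstract above, the following sketch evaluates the Landauer limit at room temperature. It uses only the standard Boltzmann constant and an assumed temperature of 300 K; it illustrates the bound itself, not the authors' argument about whether it holds.

```python
import math

# Landauer bound: erasing one bit dissipates at least k*T*ln(2) of energy
# (equivalently, k*ln(2) of entropy). k_B is the standard Boltzmann constant;
# T = 300 K is an assumed example temperature.
k_B = 1.380649e-23          # J/K
T = 300.0                   # K

entropy_per_bit = k_B * math.log(2)      # J/K
energy_per_bit = k_B * T * math.log(2)   # J

print(f"minimum entropy cost per erased bit: {entropy_per_bit:.3e} J/K")
print(f"minimum heat dissipated at {T:.0f} K: {energy_per_bit:.3e} J")
```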
11348 KiB  
Article
Fluctuations in the Energetic Properties of a Spark-Ignition Engine Model with Variability
by Pedro L. Curto-Risso, Alejandro Medina, Antonio Calvo-Hernández, Lev Guzmán-Vargas and Fernando Angulo-Brown
Entropy 2013, 15(8), 3277-3296; https://doi.org/10.3390/e15083367 - 19 Aug 2013
Cited by 8 | Viewed by 4034
Abstract
We study the energetic functions obtained in a simulated spark-ignited engine that incorporates cyclic variability through a quasi-dimensional combustion model. Our analyses focus on the effects of the fuel-air equivalence ratio of the mixture simultaneously on the cycle-to-cycle fluctuations of the heat release (Q_R) and on the performance outputs, such as the power (P) and the efficiency (η). We explore the fluctuating behavior of Q_R, P and η arising from random variations of the basic physical parameters in an entrainment (eddy-burning) combustion model. P and η show triangle-shaped first return maps, while Q_R exhibits a structured map, especially at intermediate fuel-air ratios; this structure largely disappears for the heat release at close-to-stoichiometric fuel-air ratios. By analyzing the fractal dimension to explore the presence of correlations at different scales, we find that, whereas Q_R displays short-range correlations for intermediate values of the fuel ratio, both P and η are characterized by a single scaling exponent, denoting irregular fluctuations. A novel noisy loop-shaped P vs. η plot for a large number of engine cycles is obtained. This plot, which evidences different levels of irreversibility as the fuel ratio changes, becomes the familiar loop-shaped P vs. η curve when fluctuations are disregarded and only the mean values of efficiency and power are considered. Full article
Figures:
Figure 1: Cycle-to-cycle evolution of the heat release, Q_R, for three values of the fuel ratio, ϕ.
Figure 2: Cycle-to-cycle evolution of the power output, P, for three values of the fuel ratio, ϕ.
Figure 3: Cycle-to-cycle evolution of the fuel conversion efficiency, η, for three values of the fuel ratio, ϕ.
Figure 4: Statistical parameters of the energetic functions in terms of the fuel ratio, ϕ: mean value (μ), standard deviation (STD), coefficient of variation (COV), skewness (S) and kurtosis (K) of the power output, efficiency and heat release.
Figure 5: First return maps of the power output (P), efficiency (η) and heat release (Q_R) for three different fuel-air ratios.
Figure 6: Plot of ln⟨L(k)⟩ vs. ln k for the heat release, Q_R (diamonds), power, P (circles), and efficiency, η (squares), for three values of the fuel ratio, ϕ. For a fractal signal, ⟨L(k)⟩ ∝ k^(-D), where the slope D is the fractal dimension; for P and η, D = 1.5 for any value of ϕ, indicating irregular fluctuations with limited memory, while Q_R shows a crossover at intermediate fuel ratios.
Figure 7: Scatter plot of P vs. Q_R for several values of the fuel ratio, ϕ.
Figure 8: Scatter plot of η vs. Q_R for several values of the fuel ratio, ϕ.
Figure 9: Three-dimensional scatter plot of Q_R, P and η for different mixture compositions.
Figure 10: Scatter plot of power vs. efficiency for several values of the fuel ratio, ϕ.
Figure 11: Power-efficiency plots showing the mean value and the standard deviation on both axes for different values of ϕ.
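The fractal-dimension analysis quoted above rests on the curve-length scaling ⟨L(k)⟩ ∝ k^(-D). As a hedged illustration, the sketch below implements a generic Higuchi-type estimator of D for a one-dimensional signal; the test signals and the kmax value are arbitrary choices, and this is not the authors' implementation.

```python
import numpy as np

def higuchi_fractal_dimension(x, kmax=16):
    """Estimate D from <L(k)> ~ k^(-D) (Higuchi-type curve-length scaling)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ks = np.arange(1, kmax + 1)
    Lk = []
    for k in ks:
        lengths = []
        for m in range(k):                        # offsets 0, ..., k-1
            n = (N - 1 - m) // k                  # number of k-spaced increments
            if n < 1:
                continue
            idx = m + k * np.arange(n + 1)        # indices m, m+k, ..., m+n*k
            L_m = np.abs(np.diff(x[idx])).sum() * (N - 1) / (n * k) / k
            lengths.append(L_m)
        Lk.append(np.mean(lengths))
    slope, _ = np.polyfit(np.log(ks), np.log(Lk), 1)
    return -slope                                  # ln<L(k)> vs ln k has slope -D

rng = np.random.default_rng(0)
print(higuchi_fractal_dimension(rng.standard_normal(2000)))                 # ~2, uncorrelated noise
print(higuchi_fractal_dimension(np.sin(np.linspace(0, 8 * np.pi, 2000))))   # ~1, smooth signal
```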
421 KiB  
Article
Synchronization of a Class of Fractional-Order Chaotic Neural Networks
by Liping Chen, Jianfeng Qu, Yi Chai, Ranchao Wu and Guoyuan Qi
Entropy 2013, 15(8), 3265-3276; https://doi.org/10.3390/e15083355 - 14 Aug 2013
Cited by 71 | Viewed by 5689
Abstract
The synchronization problem is studied in this paper for a class of fractional-order chaotic neural networks. By using the Mittag-Leffler function, the M-matrix and linear feedback control, a sufficient condition is developed that ensures the synchronization of such neural models with Caputo fractional derivatives. The synchronization condition is easy to verify and implement, and it relies only on the system structure. Furthermore, the theoretical results are applied to a typical fractional-order chaotic Hopfield neural network, and numerical simulation demonstrates the effectiveness and feasibility of the proposed method. Full article
(This article belongs to the Special Issue Dynamical Systems)
Figures:
Figure 1: Chaotic behaviors of the fractional-order Hopfield neural network, Equation (34), with fractional order α = 0.95.
Figure 2: State synchronization trajectories of the drive system, Equation (34), and the response system, Equation (35).
Figure 3: Synchronization error time response of the drive system, Equation (34), and the response system, Equation (35).
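The sufficient condition above is built on the Mittag-Leffler function E_α(z), which plays the role of the exponential for Caputo fractional-order dynamics. The sketch below evaluates E_α(z) by truncating its defining power series; the truncation length is an arbitrary choice and the routine is only reliable for moderate |z|, so treat it as a sketch rather than a production evaluator.

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """Truncated series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
    Adequate for moderate |z|; a sketch, not a production evaluator."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

# E_1(z) reduces to exp(z), a quick sanity check
print(mittag_leffler(1.0, 1.0), math.e)
# the kind of value that enters Mittag-Leffler stability estimates (alpha = 0.95 as in the example system)
print(mittag_leffler(0.95, -0.5))
```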
240 KiB  
Article
Truncation Effects of Shift Function Methods in Bulk Water Systems
by Kazuaki Z. Takahashi
Entropy 2013, 15(8), 3249-3264; https://doi.org/10.3390/e15083339 - 13 Aug 2013
Cited by 14 | Viewed by 4479
Abstract
A reduction of the cost of long-range interaction calculations is essential for large-scale molecular systems that contain many point charges, and cutoff methods are often used for this purpose. Molecular dynamics (MD) simulations can be accelerated by using cutoff methods; however, simple truncation or approximation of the long-range interactions often introduces serious artifacts in various systems. For example, the thermodynamic properties of polar molecular systems are strongly affected by the treatment of the Coulombic interactions and may become unphysical. To assess the truncation effects of several cutoff methods that are categorized as shift function methods, MD simulations of bulk water systems were performed. The results reflect two main factors, i.e., the treatment of the cutoff boundary conditions and the presence/absence of a theoretical background for the long-range approximation. Full article
(This article belongs to the Special Issue Molecular Dynamics Simulation)
Figures:
Figure 1: Potential energy for the shift function methods and the Ewald sum. CHARMm-shift, Ohmine-shift and the Wolf method (Wolf-DSF) are far from the Ewald sum, whereas RF-metal, the isotropic periodic sum for non-polar (IPSn) and polar (IPSp) systems and the linear-combination-based IPS (LIPS)-fifth are close to it.
Figure 2: Error of the potential energy calculated with the shift function methods against that determined with the Ewald sum, as a function of the cutoff radius, r_c.
Figure 3: Self-diffusion coefficient for the shift function methods and the Ewald sum.
Figure 4: Error of the self-diffusion coefficient calculated with the shift function methods against that determined with the Ewald sum.
Figure 5: Oxygen-oxygen radial distribution function of the water molecule for the shift function methods at r_c = 2.0 nm and for the Ewald sum.
Figure 6: RMSDs of (a) the oxygen-oxygen, (b) the oxygen-hydrogen and (c) the hydrogen-hydrogen radial distribution functions for the shift function methods against the Ewald sum at different cutoff radii.
Figure 7: Distance dependence of the Kirkwood factor, G_K(r), for the shift function methods and the Ewald sum.
Figure 8: Radial distributions of dipole ordering, s(r), calculated with the shift function methods at r_c = 2.0 nm and for the Ewald sum.
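All of the cutoff schemes compared above modify the bare 1/r Coulomb term near the cutoff radius so that the truncation is less abrupt. The sketch below implements the simplest shifted-force form, in which both the potential and the force vanish at r_c; it is a generic illustration of the idea, not a reproduction of any specific method tested in the paper (Wolf-DSF, the IPS variants, LIPS and so on), and the numeric values are arbitrary.

```python
import numpy as np

def coulomb_shifted_force(r, qq, r_c):
    """Generic shifted-force Coulomb (qq = q_i * q_j in Gaussian-like units):
    V_sf(r) = qq/r - qq/r_c + (qq/r_c**2) * (r - r_c) for r < r_c, else 0.
    Both V_sf and dV_sf/dr go to zero at the cutoff r_c."""
    v_sf = qq / r - qq / r_c + (qq / r_c ** 2) * (r - r_c)
    return np.where(r < r_c, v_sf, 0.0)

r = np.linspace(0.2, 1.5, 6)
print(coulomb_shifted_force(r, qq=1.0, r_c=1.2))
```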
280 KiB  
Article
Limits and Optimization of Power Input or Output of Actual Thermal Cycles
by Emin Açıkkalp and Hasan Yamık
Entropy 2013, 15(8), 3219-3248; https://doi.org/10.3390/e15083309 - 12 Aug 2013
Cited by 30 | Viewed by 4290
Abstract
In classical thermodynamics, the maximum power obtained from a system (or the minimum power supplied to a system) is defined as the availability (exergy), but the availability concept applies only to reversible systems. In reality, no system is reversible; all systems are irreversible, because reversible cycles ignore constraints such as time or size and operate in a quasi-equilibrium state. The purpose of this study is to define the limits of all the basic thermodynamic cycles, to provide finite-time exergy models for irreversible cycles, and to obtain the maximum (or minimum) available power for irreversible (finite-time exergy) cycles. Available power optimization and performance limits are defined for all the basic irreversible thermodynamic cycles by using the first and second laws of thermodynamics. Finally, the results are evaluated in terms of the cycles' first and second law efficiencies, COP, power output (or input) and exergy destruction. Full article
Figures:
Figure 1: T-s (temperature-entropy) diagram of the irreversible Rankine cycle (T_H = high temperature, T_L = low temperature).
Figure 2: T-s diagram of the irreversible SI, CI and Brayton cycles [27,28].
Figure 3: T-s diagram of the irreversible Stirling-Ericsson cycle [60,61,62,63].
Figure 4: T-s diagram of the irreversible heat pump and refrigerator cycles.
Figure 5: Effect of T_E (evaporator temperature) on A (available work output), W (power output) and ExD (exergy destruction) for the irreversible Rankine cycle (T_C,R = 350 K, I_R = 1.1).
Figure 6: Effect of T_E on η (first law efficiency) and φ (second law efficiency) for the irreversible Rankine cycle (T_C,R = 350 K, I_R = 1.1).
Figure 7: Effect of T_C (condenser temperature) on A, W and ExD for the irreversible Rankine cycle (T_E,R = 600 K, I_R = 1.1).
Figure 8: Effect of T_C on η and φ for the irreversible Rankine cycle (T_E,R = 600 K, I_R = 1.1).
Figure 9: Effect of ε (compression ratio) on A, W and ExD for the irreversible SI engine.
Figure 10: Effect of ε on η, φ and I (internal irreversibility parameter) for the irreversible SI engine.
Figure 11: Effect of ε on A, W and ExD for the irreversible CI engine.
Figure 12: Effect of ε on η, φ and I for the irreversible CI engine.
Figure 13: Effect of ν (pressure ratio) on A, W and ExD for the irreversible Brayton cycle.
Figure 14: Effect of ν on η, φ and I for the irreversible Brayton cycle.
Figure 15: Effect of ε on A, W and ExD for the irreversible Stirling engine.
Figure 16: Effect of ε on η, φ and I for the irreversible Stirling engine.
Figure 17: Effect of ν on A, W and ExD for the irreversible Ericsson engine.
Figure 18: Effect of ν on η, φ and I for the irreversible Ericsson engine.
Figure 19: Effect of T_C on Q_H (heating load), W (power input), A (available power input) and ExD for the irreversible heat pump (T_E,hr = 280 K, I_hr = 1.1).
Figure 20: Effect of T_C on COP (coefficient of performance) and φ for the irreversible heat pump (T_E,hr = 280 K, I_hr = 1.1).
Figure 21: Effect of T_E on Q_H, W, A and ExD for the irreversible heat pump (T_C,hr = 370 K, I_hr = 1.1).
Figure 22: Effect of T_E on COP and φ for the irreversible heat pump (T_C,hr = 370 K, I_hr = 1.1).
Figure 23: Effect of T_C on Q_L (cooling load), W, A and ExD for the irreversible refrigerator (T_E,hr = 280 K, I_hr = 1.1).
Figure 24: Effect of T_C on COP and φ for the irreversible refrigerator (T_E,hr = 280 K, I_hr = 1.1).
Figure 25: Effect of T_E on Q_L, W, A and ExD for the irreversible refrigerator (T_C,hr = 370 K, I_hr = 1.1).
Figure 26: Effect of T_E on COP and φ for the irreversible refrigerator (T_C,hr = 370 K, I_hr = 1.1).
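As a worked example of the first- and second-law bookkeeping that runs through this article (available work, exergy destruction, first- and second-law efficiencies), the sketch below evaluates a generic irreversible heat engine. The temperatures, heat input and work output are assumed round numbers, and the definitions used are common textbook ones rather than the paper's specific finite-time cycle models.

```python
# Assumed example values for a generic irreversible heat engine (not the paper's cycles).
T_H, T_L, T_0 = 600.0, 350.0, 300.0   # hot, cold and dead-state temperatures (K)
Q_H = 100.0                            # heat input (kJ)
W = 30.0                               # actual work output (kJ)

Q_L = Q_H - W                          # first law: rejected heat (kJ)
S_gen = Q_L / T_L - Q_H / T_H          # entropy generation (kJ/K)
ExD = T_0 * S_gen                      # exergy destruction (kJ)
A = W + ExD                            # available (reversible) work for the same heat exchange (kJ)

eta_first = W / Q_H                    # first-law efficiency
eta_second = W / A                     # second-law efficiency: actual over available work
print(Q_L, round(S_gen, 4), round(ExD, 2), round(A, 2), eta_first, round(eta_second, 3))
```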
1462 KiB  
Article
Local Feature Extraction and Information Bottleneck-Based Segmentation of Brain Magnetic Resonance (MR) Images
by Pengcheng Shen and Chunguang Li
Entropy 2013, 15(8), 3205-3218; https://doi.org/10.3390/e15083295 - 9 Aug 2013
Cited by 8 | Viewed by 3895
Abstract
Automated tissue segmentation of brain magnetic resonance (MR) images has attracted extensive research attention. Many segmentation algorithms have been proposed for this issue. However, due to the existence of noise and intensity inhomogeneity in brain MR images, the accuracy of the segmentation results is usually unsatisfactory. In this paper, a high-accuracy brain MR image segmentation algorithm based on the information bottleneck (IB) method is presented. In this approach, the MR image is first mapped into a “local-feature space”, then the IB method segments the brain MR image through an information theoretic formulation in this local-feature space. It automatically segments the image into several clusters of voxels, by taking the intensity information and spatial information of voxels into account. Then, after the IB-based clustering, each cluster of voxels is classified into one type of brain tissue by threshold methods. The performance of the algorithm is studied based on both simulated and real T1-weighted 3D brain MR images. Our results show that, compared with other well-known brain image segmentation algorithms, the proposed algorithm can improve the accuracy of the segmentation results substantially. Full article
Figures:
Figure 1: Flow diagram of the information bottleneck (IB)-based segmentation method.
Figure 2: Tanimoto's performance metric of the proposed segmentation algorithm on the simulated 3D brain magnetic resonance (MR) images from BrainWeb: (a) gray matter (GM) segmentation; (b) white matter (WM) segmentation.
Figure 3: Tanimoto's performance metric of different segmentation algorithms on 20 normal 3D brain MR images from the Internet Brain Segmentation Repository (IBSR): (a) GM segmentation; (b) WM segmentation.
Figure 4: Four real T1-weighted brain MR images (first row), the corresponding segmentation results obtained by our algorithm (second row) and the corresponding ground truths (third row).
Figure 5: Detailed IB-based clustering results for the same four examples of MR scans as in Figure 4; results in one column correspond to one example.
Figure 6: Tanimoto's performance metric of our algorithm under different values of β and M_initial for the image "1_24": (a) GM segmentation; (b) WM segmentation.
Figure 7: Tanimoto's performance metric of our algorithm under different values of β and M_initial for the image "15_3": (a) GM segmentation; (b) WM segmentation.
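The segmentation above relies on the information bottleneck principle, in which voxel clusters are merged while sacrificing as little relevant information as possible. The sketch below computes the standard agglomerative-IB merge cost, (p_i + p_j) times the Jensen-Shannon divergence of the two clusters' conditional distributions; it is one generic ingredient of IB clustering, not the authors' full algorithm, and the example numbers are arbitrary.

```python
import numpy as np

def kl_bits(p, q):
    """Kullback-Leibler divergence in bits, ignoring zero-probability terms in p."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def aib_merge_cost(p_i, p_j, py_i, py_j):
    """Loss in I(T;Y) when clusters i and j are merged:
    (p_i + p_j) * JS_pi( p(y|i), p(y|j) ), the usual agglomerative-IB cost."""
    w = p_i + p_j
    pi_i, pi_j = p_i / w, p_j / w
    m = pi_i * np.asarray(py_i, float) + pi_j * np.asarray(py_j, float)
    js = pi_i * kl_bits(py_i, m) + pi_j * kl_bits(py_j, m)
    return w * js

# two voxel clusters with weights 0.3 and 0.2 and different "relevance" distributions
print(aib_merge_cost(0.3, 0.2, [0.7, 0.3], [0.2, 0.8]))
```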
397 KiB  
Article
Linearized Transfer Entropy for Continuous Second Order Systems
by Jonathan M. Nichols, Frank Bucholtz and Joe V. Michalowicz
Entropy 2013, 15(8), 3186-3204; https://doi.org/10.3390/e15083276 - 7 Aug 2013
Cited by 12 | Viewed by 4124
Abstract
The transfer entropy has proven a useful measure of coupling among components of a dynamical system. This measure effectively captures the influence of one system component on the transition probabilities (dynamics) of another. The original motivation for the measure was to quantify such relationships among signals collected from a nonlinear system. However, we have found the transfer entropy to also be a useful concept in describing linear coupling among system components. In this work we derive the analytical transfer entropy for the response of coupled, second order linear systems driven with a Gaussian random process. The resulting expression is a function of the auto- and cross-correlation functions associated with the system response for different degrees-of-freedom. We show clearly that the interpretation of the transfer entropy as a measure of "information flow" is not always valid. In fact, in certain instances the "flow" can appear to switch directions simply by altering the degree of linear coupling. A safer way to view the transfer entropy is as a measure of the ability of a given system component to predict the dynamics of another. Full article
(This article belongs to the Special Issue Transfer Entropy)
Figures:
Figure 1: Physical system modeled by Equation (10): an M = 5 DOF structure represented by masses coupled via both restoring and dissipative elements, with forcing applied at the end mass.
Figure 2: Time delay transfer entropy between masses two and three (top row) and one and five (bottom row) of the 5 DOF system driven at mass P = 5.
Figure 3: Time delay transfer entropy between the forcing (denoted as DOF "0") and mass three for the 5 DOF system driven at mass P = 5.
Figure 4: Difference in time delay transfer entropy between the driven mass five and each other DOF as a function of the coupling k_3; under the usual interpretation, negative values indicate information moving from the driven end to the base and positive values the opposite, and the apparent direction of transfer can switch with the strength of k_3.
Figure 5: Difference in time-delayed transfer entropy (TDTE) among different combinations of masses.
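For jointly Gaussian signals, the transfer entropy analyzed above reduces to half the log-ratio of two conditional variances, which can be estimated from sample covariances. The sketch below uses that reduction with one-sample histories on a toy coupled pair of signals; the history length, coupling coefficients and noise levels are assumptions for illustration, and this is not the paper's closed-form expression in terms of auto- and cross-correlation functions.

```python
import numpy as np

def gaussian_te(x, y, lag=1):
    """TE from x to y (nats) for jointly Gaussian signals with one-sample histories:
    0.5 * ln( var(y_{t+1} | y_t) / var(y_{t+1} | y_t, x_t) )."""
    yf, yp, xp = y[lag:], y[:-lag], x[:-lag]

    def cond_var(target, *conds):
        S = np.cov(np.vstack([target, *conds]))
        return S[0, 0] - S[0, 1:] @ np.linalg.solve(S[1:, 1:], S[0, 1:])

    return 0.5 * np.log(cond_var(yf, yp) / cond_var(yf, yp, xp))

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.zeros_like(x)
for t in range(1, len(x)):             # y is driven by past x, so TE x->y should exceed TE y->x
    y[t] = 0.8 * y[t - 1] + 0.5 * x[t - 1] + 0.1 * rng.standard_normal()
print(gaussian_te(x, y), gaussian_te(y, x))
```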
1932 KiB  
Article
A Maximum Entropy-Based Chaotic Time-Variant Fragile Watermarking Scheme for Image Tampering Detection
by Young-Long Chen, Her-Terng Yau and Guo-Jheng Yang
Entropy 2013, 15(8), 3170-3185; https://doi.org/10.3390/e15083260 - 5 Aug 2013
Cited by 21 | Viewed by 4504
Abstract
The fragile watermarking technique is used to protect intellectual property rights while also providing security and rigorous protection. In order to protect the copyright of the creators, it can be implanted in some representative text or totem. Because all of the media on the Internet are digital, protection has become a critical issue, and determining how to use digital watermarks to protect digital media is thus the topic of our research. This paper uses the Logistic map with parameter u = 4 to generate chaotic dynamic behavior with maximum entropy 1, which increases the security and rigor of the protection. The main goal of information hiding is to conceal confidential data so that the naked eye cannot see the difference, and we introduce one method of information hiding. Generally speaking, if the image only goes through Arnold's cat map and the Logistic map, it seems to lack sufficient security. Therefore, our emphasis is on applying small, time-dependent changes to Arnold's cat map and to the initial value of the chaotic system so as to generate different chaotic sequences. Thus, the current time is used not only to make the encryption more stringent but also to enhance the security of the digital media. Full article
(This article belongs to the Special Issue Dynamical Systems)
Figures:
Figure 1: Periodic phenomenon in the cat map.
Figure 2: Entropy of the Logistic map for 3.5 ≤ u ≤ 4.
Figure 3: Diagram of the proposed embedding process.
Figure 4: Process of embedding the watermark with r = 69.
Figure 5: Block diagram of the extraction process.
Figure 6: Process of fetching the watermark.
Figure 7: Reverse time process.
Figure 8: Comparison of different modifications of Lena.
Figure 9: Comparison of different modifications of Baboon.
Figure 10: Comparison of PSNR and MSE for Lena.
Figure 11: Comparison of PSNR and MSE for Baboon.
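The scheme above builds on two standard chaotic primitives, the logistic map at u = 4 and Arnold's cat map. The sketch below shows both in textbook form (one common convention of the cat map is used, and the seed, map size and iteration counts are arbitrary); it illustrates the building blocks only, not the paper's time-variant watermark embedding pipeline.

```python
import numpy as np

def logistic_sequence(x0, u=4.0, n=8):
    """Chaotic sequence from the logistic map x_{k+1} = u * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(u * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

def arnold_cat(img, iterations=1):
    """Arnold's cat map pixel shuffle on a square N x N array,
    using the common convention (x, y) -> (x + y, x + 2y) mod N."""
    N = img.shape[0]
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        nxt[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = nxt
    return out

print(logistic_sequence(0.3))
print(arnold_cat(np.arange(16).reshape(4, 4), iterations=2))
```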
472 KiB  
Article
Low-Temperature Behaviour of Social and Economic Networks
by Diego Garlaschelli, Sebastian E. Ahnert, Thomas M. A. Fink and Guido Caldarelli
Entropy 2013, 15(8), 3148-3169; https://doi.org/10.3390/e15083238 - 5 Aug 2013
Cited by 11 | Viewed by 6084
Abstract
Real-world social and economic networks typically display a number of particular topological properties, such as a giant connected component, a broad degree distribution, the small-world property and the presence of communities of densely interconnected nodes. Several models, including ensembles of networks, also known in social science as Exponential Random Graphs, have been proposed with the aim of reproducing each of these properties in isolation. Here, we define a generalized ensemble of graphs by introducing the concept of graph temperature, controlling the degree of topological optimization of a network. We consider the temperature-dependent version of both existing and novel models and show that all the aforementioned topological properties can be simultaneously understood as the natural outcomes of an optimized, low-temperature topology. We also show that seemingly different graph models, as well as techniques used to extract information from real networks are all found to be particular low-temperature cases of the same generalized formalism. One such technique allows us to extend our approach to real weighted networks. Our results suggest that a low graph temperature might be a ubiquitous property of real socio-economic networks, placing conditions on the diffusion of information across these systems. Full article
(This article belongs to the Special Issue Social Networks and Information Diffusion)
Figures:
Figure 1: A temperature-dependent small-world model with vertices arranged in a circle and chemical potential d < μ < 2d (where d is the dimensionless distance between nearest neighbours along the circle). When T = 0, the network is a ring with first-neighbour interactions; when T = ∞, it is a random graph with connection probability p = 1/2; when T = 1, it is a "small world" with a few long-range connections and an incomplete circular "backbone".
Figure 2: The "ultrametric small-world model" as a function of temperature, T, and chemical potential, μ. Nodes are leaves of a dendrogram separated by an ultrametric distance, d_ij, which determines the network topology: at T = 0 the network splits into cliques obtained by "cutting" the dendrogram at μ, while for T ≳ 0 it forms internally dense, sparsely interconnected communities, with fewer and larger groups as μ increases; adding vertex heterogeneity turns this into the "ultrametric scale-free model", where community structure coexists with a broad degree distribution.
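The graph-temperature idea above can be made concrete with a Fermi-function link probability, p_ij = 1/(exp((d_ij - μ)/T) + 1). The sketch below samples such a graph on a ring of vertices, giving a lattice-like network at low T and a p = 1/2 random graph at high T, consistent with the behaviour described in the first figure; the specific parameterization is an assumption for illustration, not the paper's exact model definitions.

```python
import numpy as np

def fermi_ring_graph(n, T, mu, rng):
    """Adjacency matrix with link probability p_ij = 1 / (exp((d_ij - mu)/T) + 1),
    where d_ij is the distance between nodes i and j along a ring of n vertices."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :]).astype(float)
    d = np.minimum(d, n - d)                      # circular distance
    with np.errstate(over="ignore"):
        p = 1.0 / (np.exp((d - mu) / T) + 1.0)
    A = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return A + A.T                                # undirected, no self-loops

rng = np.random.default_rng(0)
print(fermi_ring_graph(10, T=0.05, mu=1.5, rng=rng).sum(axis=1))  # low T: ring-like, degrees ~2
print(fermi_ring_graph(10, T=1e6, mu=1.5, rng=rng).sum(axis=1))   # high T: random graph, p ~ 1/2
```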
440 KiB  
Article
Communicating through Probabilities: Does Quantum Theory Optimize the Transfer of Information?
by William K. Wootters
Entropy 2013, 15(8), 3130-3147; https://doi.org/10.3390/e15083220 - 2 Aug 2013
Cited by 10 | Viewed by 5260
Abstract
A quantum measurement can be regarded as a communication channel, in which the parameters of the state are expressed only in the probabilities of the outcomes of the measurement. We begin this paper by considering, in a non-quantum-mechanical setting, the problem of communicating through probabilities. For example, a sender, Alice, wants to convey to a receiver, Bob, the value of a continuous variable, θ, but her only means of conveying this value is by sending Bob a coin in which the value of θ is encoded in the probability of heads. We ask what the optimal encoding is when Bob will be allowed to flip the coin only a finite number of times. As the number of tosses goes to infinity, we find that the optimal encoding is the same as what nature would do if we lived in a world governed by real-vector-space quantum theory. We then ask whether the problem might be modified, so that the optimal communication strategy would be consistent with standard, complex-vector-space quantum theory. Full article
(This article belongs to the Special Issue Quantum Information 2012)
Figures:
Figure 1: An optimal function for encoding the value of θ in the probability of heads, when the coin will be tossed exactly once.
Figure 2: Optimal encoding functions when the coin will be tossed exactly twice (a) and 25 times (b); the corresponding maximum values of the mutual information are I = 0.754 for N = 2 and I = 1.570 for N = 25.
Figure 3: Bar graph showing the specific probabilities that optimize I(θ:n) for the case of 25 tosses, along with the weight assigned to each probability.
Figure 4: The functions θ(p) and w(p); the ratio of θ(p_0) to π/2 should equal the shaded area under the curve w(p).
Figure 5: Alice performs a unitary transformation, U, on one of two qubits and sends it to Bob, who then performs the four-outcome Bell measurement on the pair.
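The communication problem above asks which encoding θ ↦ p(θ) maximizes the mutual information between θ and the number of heads in N tosses. The sketch below simply evaluates I(θ:n) numerically for two candidate encodings, a sin² law and a linear law, both of which are assumptions for illustration; it does not perform the optimization whose results are quoted in the figure captions.

```python
import numpy as np
from math import comb

def coin_mutual_information(encode, N=25, grid=1000):
    """I(theta : n) in bits, for theta uniform on [0, pi/2] and n ~ Binomial(N, encode(theta))."""
    thetas = (np.arange(grid) + 0.5) * (np.pi / 2) / grid     # midpoint grid over [0, pi/2]
    p = encode(thetas)                                        # probability of heads for each theta
    ns = np.arange(N + 1)
    pn_given_t = np.array([[comb(N, n) * pt ** n * (1 - pt) ** (N - n) for n in ns] for pt in p])
    pn = pn_given_t.mean(axis=0)                              # marginal over the uniform prior on theta
    return float(np.mean(np.sum(pn_given_t * np.log2(pn_given_t / pn), axis=1)))

print(coin_mutual_information(lambda t: np.sin(t) ** 2))   # sin^2 encoding
print(coin_mutual_information(lambda t: 2 * t / np.pi))    # linear encoding, for comparison
```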
3902 KiB  
Article
Estimation Bias in Maximum Entropy Models
by Jakob H. Macke, Iain Murray and Peter E. Latham
Entropy 2013, 15(8), 3109-3129; https://doi.org/10.3390/e15083109 - 2 Aug 2013
Cited by 5 | Viewed by 8277
Abstract
Maximum entropy models have become popular statistical models in neuroscience and other areas in biology and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated. Here, we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We focus on pairwise binary models, which are used extensively to model neural population activity. We show that if the data is well described by a pairwise model, the bias is equal to the number of parameters divided by twice the number of observations. If, however, the higher order correlations in the data deviate from those predicted by the model, the bias can be larger. Using a phenomenological model of neural population recordings, we find that this additional bias is highest for small firing probabilities, strong correlations and large population sizes—for the parameters we tested, a factor of about four higher. We derive guidelines for how long a neurophysiological experiment needs to be in order to ensure that the bias is less than a specified criterion. Finally, we show how a modified plug-in estimate of the entropy can be used for bias correction. Full article
(This article belongs to the Special Issue Estimating Information-Theoretic Quantities from Data)
Show Figures

Figure 1

Figure 1
<p>Sampling bias in maximum entropy models. The equilateral triangle represents a <span class="html-italic">D</span>-dimensional probability space (for the binary model considered here, <math display="inline"> <mrow> <mi>D</mi> <mo>=</mo> <msup> <mn>2</mn> <mi>n</mi> </msup> <mo>−</mo> <mn>1</mn> </mrow> </math>, where <span class="html-italic">n</span> is the dimensionality of <math display="inline"> <mi mathvariant="bold">x</mi> </math>). The cyan lines are contour plots of entropy; the red lines represent the <span class="html-italic">m</span> linear constraints and, thus, lie in a <math display="inline"> <mrow> <mi>D</mi> <mo>−</mo> <mi>m</mi> </mrow> </math> dimensional linear manifold. (<b>a</b>) Maximum entropy occurs at the tangential intersection of the constraints with the entropy contours. (<b>b</b>) The light red region indicates the range of constraints arising from multiple experiments in which a finite number of samples is drawn in each. Maximum entropy estimates from multiple experiments would lie along the green line. (<b>c</b>) As the entropy is concave, averaging the maximum entropy over experiments leads to an estimate that is lower than the true maximum entropy—estimating maximum entropy is subject to downward bias.</p>
Full article ">Figure 2
<p>Normalized bias, <span class="html-italic">b</span>, <span class="html-italic">versus</span> number of samples, <math display="inline"> <mi mathvariant="bold-italic">K</mi> </math>. Grey lines: <math display="inline"> <mi mathvariant="bold-italic">b</mi> </math>, computed from Equation (<a href="#FD11-entropy-15-03109" class="html-disp-formula">11</a>). Colored curves: <math display="inline"> <mrow> <mo>−</mo> <mn mathvariant="bold">2</mn> <mi mathvariant="bold-italic">K</mi> </mrow> </math> times the bias, computed numerically using the expression on the left-hand side of Equation (<a href="#FD8-entropy-15-03109" class="html-disp-formula">8</a>). We used a homogeneous Dichotomized Gaussian distribution with <math display="inline"> <mrow> <mi mathvariant="bold-italic">n</mi> <mo>=</mo> <mn mathvariant="bold">10</mn> </mrow> </math> and a mean of 0.1. Different curves correspond to different correlation coefficients [see Equation (<a href="#FD20-entropy-15-03109" class="html-disp-formula">19</a>) below], as indicated in the legend.</p>
Full article ">Figure 3
<p>Scaling of the bias with population size for a homogeneous Dichotomized Gaussian model. (<b>a</b>) Bias, <span class="html-italic">b</span>, for <math display="inline"> <mrow> <mi>ν</mi> <mi>δ</mi> <mi>t</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>1</mn> </mrow> </math> and a range of correlation coefficients, <span class="html-italic">ρ</span>. The bias is biggest for strong correlations and large population sizes; (<b>b</b>) <math display="inline"> <mrow> <mi>ν</mi> <mi>δ</mi> <mi>t</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>02</mn> </mrow> </math> and a range of (smaller) correlation coefficients. In both panels, the left axis is <math display="inline"> <mrow> <mi>b</mi> <mo>/</mo> <mi>m</mi> </mrow> </math>, and the right axis is <math display="inline"> <mrow> <mo>(</mo> <mi>b</mi> <mo>/</mo> <mi>m</mi> <mo>)</mo> <mo>/</mo> <mo form="prefix">log</mo> <mo>(</mo> <mi>e</mi> <mo>/</mo> <mo>(</mo> <mi>ν</mi> <mi>δ</mi> <mi>t</mi> <mo>)</mo> <mo>)</mo> </mrow> </math>. The latter quantity is important for determining the minimum number of trials [Equation (<a href="#FD18-entropy-15-03109" class="html-disp-formula">17</a>)] or the minimum runtime [Equation (<a href="#FD19-entropy-15-03109" class="html-disp-formula">18</a>)] needed to reduce bias to an acceptable level.</p>
Full article ">Figure 4
Effect of heterogeneity on the normalized bias in a small population. (a) Normalized bias relative to the within-model-class case, b/m, of a heterogeneous Dichotomized Gaussian model with n = 5 as a function of the median mean, νδt, and correlation coefficient, ρ. As with the homogeneous model, bias is largest for small means and strong correlations. (b) The same plot, but for a homogeneous Dichotomized Gaussian. The difference in bias between the heterogeneous and homogeneous models is largest for small means and small correlations, but overall, the two plots are very similar.
Figure 5: Relationship between ΔS and bias. (a) Maximum and minimum normalized bias relative to m versus ΔS/S_p (recall that S_p is the entropy of p(x)) in a homogeneous population with size n = 5, νδt = 0.1, and correlation coefficients indicated by color. The crosses correspond to a set of homogeneous Dichotomized Gaussian models with νδt = 0.1. (b) Same as (a), but for n = 100. For ρ = 0.5, the bias of the Dichotomized Gaussian model is off the right-hand side of the plot, at (0.17, 3.4); for comparison, the maximum bias at ρ = 0.5 and ΔS/S_p = 0.17 is 3.8. (c) Comparison between the normalized bias of the Dichotomized Gaussian model and the maximum normalized bias. As in panels (a) and (b), we used νδt = 0.1. Because the ratio of the biases is trivially near one when b is near m, we plot (b_DG − m)/(b_max − m), where b_DG and b_max are the normalized bias of the Dichotomized Gaussian and the maximum bias, respectively; this is the ratio of the "additional" bias. (d) Distribution of the total spike count (= Σ_i x_i) for the Dichotomized Gaussian, maximum entropy (MaxEnt) and maximally biased (MaxBias) models with n = 100, νδt = 0.1 and ρ = 0.05. The similarity between the distributions of the Dichotomized Gaussian and maximally biased models is consistent with the similarity in normalized biases shown in panel (c).
Figure 6: Bias correction. (a) Plug-in, b_plugin, and thresholded, b_thresh, estimators versus sample size for a homogeneous Dichotomized Gaussian model with n = 10 and νδt = 0.1. Correlations are color coded as in Figure 5. Gray lines indicate the true normalized bias as a function of sample size, computed numerically as for Figure 2. (b) Relative error without bias correction, (S_q(μ̂) − S_q(μ))/S_q(μ), with the plug-in correction, (S_q(μ̂) + 2K·b_plugin − S_q(μ))/S_q(μ), and with the thresholded estimator, (S_q(μ̂) + 2K·b_thresh − S_q(μ))/S_q(μ).
Full article
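For readers who want to see the effect the bias-correction captions above refer to, here is a minimal sketch (not taken from the paper) of the plug-in entropy estimator applied to binary spike words. The population is modelled as independent Bernoulli neurons rather than the paper's Dichotomized Gaussian, and the firing rate, population size and sample sizes are illustrative choices.

```python
import numpy as np

def plugin_entropy(words):
    """Plug-in (maximum-likelihood) entropy estimate, in bits, of sampled binary words."""
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(0)
n, rate = 5, 0.1                 # neurons per word and firing probability per bin (illustrative)
h1 = -(rate * np.log2(rate) + (1 - rate) * np.log2(1 - rate))
true_H = n * h1                  # exact entropy of n independent Bernoulli(rate) neurons

for n_samples in (50, 500, 5000):
    estimates = [plugin_entropy(rng.random((n_samples, n)) < rate) for _ in range(200)]
    print(f"N = {n_samples:5d}   mean plug-in H = {np.mean(estimates):.3f} bits"
          f"   bias = {np.mean(estimates) - true_H:+.3f} bits")
```

In this toy setting the (negative) bias shrinks roughly like 1/N with the number of samples, the classical Miller-Madow behaviour.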
294 KiB  
Article
On the Topological Entropy of Some Skew-Product Maps
by Jose S. Cánovas
Entropy 2013, 15(8), 3100-3108; https://doi.org/10.3390/e15083100 - 31 Jul 2013
Cited by 4 | Viewed by 5488
Abstract
The aim of this short note is to compute the topological entropy for a family of skew-product maps, whose base is a subshift of finite type, and the fiber maps are homeomorphisms defined in one dimensional spaces. We show that the skew-product map does not increase the topological entropy of the subshift. Full article
(This article belongs to the Special Issue Dynamical Systems)
Show Figures

Figure 1: We show the graphs on [0, 1] of the maps f_0 (left), f_1 (center) and f_1 ∘ f_0 (right), defined in the proof of Theorem 2.
Figure 2: We compute the topological entropy (ent in the figure) for a ∈ [3.5, 4] with accuracy 10^-4. We note that the first parameter value providing positive topological entropy is 3.569945...
Figure 3: We compute the topological entropy (ent in the figure) for a ∈ [3.55, 3.57] and b ∈ [2.8, 3.6] with accuracy 10^-4. The darker region represents those parameter values providing zero topological entropy.
Full article
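As a rough, self-contained illustration of the kind of numbers reported in Figures 2 and 3 (this is not the author's algorithm), the sketch below estimates the topological entropy of the logistic map f_a(x) = a x(1 − x) from the growth rate of the number of monotone laps of its iterates, counted on a fine grid. The grid size and iteration depth are arbitrary choices, and the finite-n estimate converges slowly, especially near zero-entropy parameters.

```python
import numpy as np

def lap_count(a, n_iter=14, grid=500_000):
    """Count the monotone laps of the n-th iterate of f_a(x) = a*x*(1-x) on [0, 1]
    by sampling it on a uniform grid and counting slope sign changes."""
    x = np.linspace(0.0, 1.0, grid)
    y = x.copy()
    for _ in range(n_iter):
        y = a * y * (1.0 - y)
    slope = np.diff(y)
    sign_changes = int(np.sum(np.sign(slope[1:]) * np.sign(slope[:-1]) < 0))
    return sign_changes + 1

def topological_entropy(a, n_iter=14):
    """Misiurewicz-Szlenk estimate: h_top = lim_n (1/n) * log(lap number of f^n).
    Finite n_iter (and a finite grid) make this a coarse approximation."""
    return np.log(lap_count(a, n_iter)) / n_iter

for a in (3.5, 3.7, 3.9, 4.0):
    print(f"a = {a:4.2f}  h_top ≈ {topological_entropy(a):.3f}")
```

At a = 4 the estimate is close to log 2 ≈ 0.693, as expected for the full logistic map.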
180 KiB  
Article
Folding Kinetics of Riboswitch Transcriptional Terminators and Sequesterers
by Ben Sauerwine and Michael Widom
Entropy 2013, 15(8), 3088-3099; https://doi.org/10.3390/e15083088 - 31 Jul 2013
Cited by 5 | Viewed by 5848
Abstract
To function as gene regulatory elements in response to environmental signals, riboswitches must adopt specific secondary structures on appropriate time scales. We employ kinetic Monte Carlo simulation to model the time-dependent folding during transcription of thiamine pyrophosphate (TPP) riboswitch expression platforms. According to our simulations, riboswitch transcriptional terminators, which must adopt a specific hairpin configuration by the time they have been transcribed, fold with higher efficiency than Shine-Dalgarno sequesterers, whose proper structure is required only at the time of ribosomal binding. Our findings suggest both that riboswitch transcriptional terminator sequences have been naturally selected for high folding efficiency, and that sequesterers can maintain their function even in the presence of significant misfolding. Full article
(This article belongs to the Special Issue Entropy and RNA Structure, Folding and Mechanics)
Show Figures

Figure 1: Secondary structure of the aptamer and terminator of the Bacillus subtilis ykoF riboswitch. (a) Bound state, aptamer formed, transcription off. The P1 stem of the aptamer (pink) conflicts with the antiterminator (green), allowing formation of the terminator (blue) and thus halting transcription via the poly-U pause site (orange). (b) Unbound state, aptamer unformed, transcription on. Destabilizing the aptamer allows formation of the antiterminator, which conflicts with the terminator and hence allows transcription to proceed.
Figure 2: Normalized distributions of folding performances for terminator- and sequesterer-type riboswitches from a wide range of prokaryotes. (a) Histograms of folding fractions, f, combined for all sequences, s, of each specific type. (b) Histograms of folding efficiencies, e_s, for individual sequences, s. (c) Cumulative distribution of folding efficiencies, including extended terminators.
Figure 3: Proportion of thiamine pyrophosphate (TPP) terminators (black line) and sequesterers (red line) that fold efficiently (i.e., with e ≥ 0.8) at various timescales, ρ. Data points indicate the individual folding efficiencies, e_s, of each hairpin sequence, s. The green line at ρ = 4000 Monte Carlo (MC) steps/nt transcribed indicates the timescale for τ_K = 5 μs and R_t = 50 nt/s.
Figure 4: Frequency-weighted sequence logos [15] for TPP rho-independent transcriptional terminators (a) and Shine-Dalgarno sequesterers (b). Regions 1–5 correspond, respectively, to the first half of the 5′ side of the stem, the second half of the same, the loop, the first half of the 3′ side of the stem and the second half of the same.
Figure 5: Alternate folds of low-efficiency terminators and sequesterers. (a,b) Most common specific fold and MFE structure of the Tte-ThiD terminator sequence; nucleotides forming the stem of the terminator are highlighted in blue, while the poly-U pause site is in orange. (c,d) Most common specific fold and MFE structure of Sm-ThiC; the Shine-Dalgarno sequence is highlighted in blue, while the translation start site is highlighted in orange.
Figure 6: Free energy landscapes in units of kcal/mol. Completely unbound structures have energy zero, and basins of depth less than 2.5 have been suppressed. (a) Tte-ThiD: structure number 6 corresponds to the most common fold (Figure 5a). (b) Sm-ThiC: structure number 5 corresponds to the most common fold (Figure 5c). In both cases, structure number 1 is the MFE fold (Figure 5b,d).
Full article
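The abstract above relies on kinetic Monte Carlo simulation of co-transcriptional folding. The snippet below sketches a generic Gillespie-style step for choosing among competing secondary-structure moves with Metropolis-like rates; it is not the authors' move set or energy model, and k0, kT and the candidate free-energy changes are placeholder values.

```python
import numpy as np

def kinetic_mc_step(delta_G, rng, k0=1.0, kT=0.62):
    """One Gillespie step over a set of candidate folding moves.
    delta_G : free-energy changes (kcal/mol) of the candidate moves.
    Rates follow a Metropolis-like rule, k0 * exp(-max(dG, 0)/kT)."""
    rates = k0 * np.exp(-np.maximum(delta_G, 0.0) / kT)
    total = rates.sum()
    move = rng.choice(len(rates), p=rates / total)   # which move fires
    dt = rng.exponential(1.0 / total)                # exponential waiting time
    return move, dt

rng = np.random.default_rng(1)
candidate_dG = np.array([-1.2, 0.4, 2.0, -0.3])      # placeholder moves (kcal/mol)
t = 0.0
for _ in range(5):
    move, dt = kinetic_mc_step(candidate_dG, rng)
    t += dt
    print(f"chose move {move}, elapsed time {t:.3f} (arbitrary units)")
```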
713 KiB  
Article
Flow of Information during an Evolutionary Process: The Case of Influenza A Viruses
by Víctor Serrano-Solís and Marco V. José
Entropy 2013, 15(8), 3065-3087; https://doi.org/10.3390/e15083065 - 29 Jul 2013
Cited by 2 | Viewed by 5202
Abstract
The hypothesis that Mutual Information (MI) dendrograms of influenza A viruses reflect informational groups generated during viral evolutionary processes is put forward. Phylogenetic reconstructions are used for guidance and validation of MI dendrograms. It is found that MI profiles display an oscillatory behavior for each of the eight RNA segments of influenza A. It is shown that dendrograms of MI values of geographically and historically different segments coming from strains of RNA virus influenza A turned out to be unexpectedly similar to the clusters, but not with the topology of the phylogenetic trees. No matter how diverse the RNA sequences are, MI dendrograms crisply discern actual viral subtypes together with gain and/or losses of information that occur during viral evolution. The amount of information during a century of evolution of RNA segments of influenza A is measured in terms of bits of information for both human and avian strains. Overall the amount of information of segments of pandemic strains oscillates during viral evolution. To our knowledge this is the first description of clades of information of the viral subtypes and the estimation of the flow content of information, measured in bits, during an evolutionary process of a virus. Full article
Show Figures

Figure 1: Mutual Information of A(H1N1) from 1918 (red curves) and from 2009 (black curves): (A) S1; (B) S4; (C) S6.
Figure 2: Dendrograms of the Mutual Information for the S1 (A1), S4 (B1), S6 (C1) and S8 (D1) segments of the RNA genome of influenza A. Maximum Likelihood phylogenetic reconstructions, visualized as cladograms, for S1 (A2, log-likelihood = −9150.268), S4 (B2, log-likelihood = −12865.908), S6 (C2, log-likelihood = −8565.225) and S8 (D2, log-likelihood = −3181.513). The numbers at each split in the ML trees correspond to the Shimodaira-Hasegawa reliability estimate test, which is part of the default parameter values of FastTree.
Figure 3: Control experiments: (A) MI dendrogram from randomization of the actual viral sequences; (B) MI dendrogram from randomization of the MI values from the original viral sequences.
Figure 4: Average amount of information for S1 (A1), S4 (B1, with an inset at another scale), S6 (C1) and S8 (D1) as a function of time, from 1918 to 2009 in human (red curves) and from 1902 to 2011 in avian (blue curves) subtypes of the RNA genome of influenza A.
Full article
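For reference, the mutual information underlying the profiles and dendrograms described above can be computed from joint symbol frequencies as in the textbook definition sketched below; the windowing, gap handling and averaging actually used by the authors may differ, and the two short sequences are purely illustrative.

```python
import numpy as np
from collections import Counter

def mutual_information(seq1, seq2):
    """Mutual information (bits) between two equal-length symbol sequences,
    estimated from the empirical joint distribution over aligned positions."""
    assert len(seq1) == len(seq2)
    n = len(seq1)
    joint = Counter(zip(seq1, seq2))
    p1 = Counter(seq1)
    p2 = Counter(seq2)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab / (p_a * p_b) simplifies to c * n / (count_a * count_b)
        mi += p_ab * np.log2(c * n / (p1[a] * p2[b]))
    return mi

s1 = "AUGGCAUUGCAUGGAACGUAGGCUA"
s2 = "AUGGCUUUGCAUCGAACGUAGGCUU"   # illustrative pair of aligned RNA fragments
print(f"MI(s1, s2) = {mutual_information(s1, s2):.3f} bits")
```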
263 KiB  
Article
Casimir Friction between Dense Polarizable Media
by Johan S. Høye and Iver Brevik
Entropy 2013, 15(8), 3045-3064; https://doi.org/10.3390/e15083045 - 29 Jul 2013
Cited by 23 | Viewed by 4243
Abstract
The present paper—a continuation of our recent series of papers on Casimir friction for a pair of particles at low relative particle velocity—extends the analysis, so as to include dense media. The situation becomes, in this case, more complex, due to induced dipolar correlations, both within planes and between planes. We show that the structure of the problem can be simplified by regarding the two half-planes as a generalized version of a pair of particles. It turns out that macroscopic parameters, such as permittivity, suffice to describe the friction, also in the finite density case. The expression for the friction force per unit surface area becomes mathematically well-defined and finite at finite temperature. We give numerical estimates and compare them with those obtained earlier by Pendry (1997) and by Volokitin and Persson (2007). We also show in an appendix how the statistical methods that we are using correspond to the field theoretical methods more commonly in use. Full article
340 KiB  
Article
What Do Leaders Know?
by Giacomo Livan and Matteo Marsili
Entropy 2013, 15(8), 3031-3044; https://doi.org/10.3390/e15083031 - 26 Jul 2013
Cited by 7 | Viewed by 5287
Abstract
The ability of a society to make the right decisions on relevant matters relies on its capability to properly aggregate the noisy information spread across the individuals of which it is made. In this paper, we study the information aggregation performance of a stylized model of a society, whose most influential individuals—the leaders—are highly connected among themselves and uninformed. Agents update their state of knowledge in a Bayesian manner by listening to their neighbors. We find analytical and numerical evidence of a transition, as a function of the noise level in the information initially available to agents, from a regime where information is correctly aggregated, to one where the population reaches consensus on the wrong outcome with finite probability. Furthermore, information aggregation depends in a non-trivial manner on the relative size of the clique of leaders, with the limit of a vanishingly small clique being singular. Full article
(This article belongs to the Special Issue Social Networks and Information Diffusion)
Show Figures

Figure 1: The prediction by Equation (18) for the probability of correct information aggregation (solid line) is compared with the results of numerical simulations run with the parallel dynamics of Equation (9). Each dot represents the empirical estimate of that probability for a given value of x, computed as the fraction (over 10^4 samples) of networks that reached consensus on the true value of X. All simulations were performed on networks with c = 4 and f = 0.05 for different values of the system size N (shown in the plot). Inset: comparison between the large-N approximations (solid lines) for μ_λ and σ_λ in Equation (19) and the corresponding quantities estimated by averaging over the top eigenvectors |λ_1〉 of 100 network configurations. For large enough values of N, the empirically measured mean and standard deviation are in excellent agreement with the approximations in Equation (16). In all cases, c = 4 and f = 0.05.
Figure 2: Probability of correct information aggregation as a function of the informativeness level of the initial signals. The solid line refers to the case of a regular graph with N = 10^4 (see Equation (19)). The other data points refer to networks with N = 10^4 and c = 4 with different fractions of hubs, f. "Perturbing" a regular graph by introducing a very small fraction of hubs seriously reduces the network's information aggregation performance; increasing the fraction of hubs up to f ≲ c^−1 progressively restores the regular-graph levels of aggregation. Each data point was obtained by averaging over 10^4 independent networks.
Figure 3: Left: probability of correct information aggregation as a function of the informativeness level of the initial signals. The different data sets refer to different types of dynamics run on networks with N = 10^3 or N = 5·10^3, f = 0.05 and c = 4. RNS dynamics does not show any significant dependence on the system size and performs much worse than parallel dynamics at correctly aggregating information. Right: probability of correct information aggregation as a function of the fraction Φ of agents that listen to their neighbors at each time step. The extreme cases, Φ = 1/N and Φ = 1, correspond, respectively, to RNS and parallel dynamics. All data were obtained for signal informativeness fixed at x = 0.16. In both plots, all data points are obtained by averaging over 10^4 independent network configurations.
Full article
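To make the setup concrete, the toy simulation below wires a fully connected clique of uninformed "leaders" to peripheral agents holding noisy private signals and runs parallel updates until a consensus forms. The majority rule used here is only a crude stand-in for the paper's Bayesian dynamics of Equation (9), and every size and probability (N, f, c, x, the number of steps) is an illustrative choice.

```python
import numpy as np

def run_consensus(N=200, f=0.05, c=4, x=0.2, steps=60, rng=None):
    """Toy network: a clique of uninformed leaders (fraction f) plus followers with
    c random contacts. Followers start with binary signals that are correct with
    probability (1 + x)/2; everyone then repeatedly adopts the majority opinion of
    their neighbourhood. Returns True if the final majority is the true state (+1)."""
    rng = rng or np.random.default_rng()
    n_hubs = max(1, int(f * N))
    adj = [set() for _ in range(N)]
    for i in range(n_hubs):                          # leaders: fully connected clique
        adj[i].update(j for j in range(n_hubs) if j != i)
    for i in range(n_hubs, N):                       # followers: c random contacts
        for j in rng.choice(N, size=c, replace=False):
            if j != i:
                adj[i].add(int(j))
                adj[int(j)].add(i)
    opinion = np.where(rng.random(N) < (1 + x) / 2, 1, -1)
    opinion[:n_hubs] = 0                             # leaders start uninformed
    for _ in range(steps):
        votes = np.array([sum(opinion[j] for j in adj[i]) for i in range(N)])
        opinion = np.where(votes > 0, 1, np.where(votes < 0, -1, opinion))
    return opinion.sum() > 0

rng = np.random.default_rng(2)
runs = [run_consensus(x=0.2, rng=rng) for _ in range(100)]
print(f"fraction of runs aggregating correctly: {np.mean(runs):.2f}")
```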
11053 KiB  
Article
Optimization of Curvilinear Tracing Applied to Solar Physics and Biophysics
by Markus J. Aschwanden, Bart De Pontieu and Eugene A. Katrukha
Entropy 2013, 15(8), 3007-3030; https://doi.org/10.3390/e15083007 - 26 Jul 2013
Cited by 28 | Viewed by 6583
Abstract
We developed an automated pattern recognition code that is particularly well suited to extract one-dimensional curvilinear features from two-dimensional digital images. A former version of this Oriented Coronal Curved Loop Tracing (OCCULT) code was applied to spacecraft images of magnetic loops in the solar corona, recorded with the NASA spacecraft, Transition Region And Coronal Explorer (TRACE), in extreme ultra-violet wavelengths. Here, we apply an advanced version of this code (OCCULT-2), also, to similar images from the Solar Dynamics Observatory (SDO), to chromospheric H-α images obtained with the Swedish Solar Telescope (SST) and to microscopy images of microtubule filaments in live cells in biophysics. We provide a full analytical description of the code, optimize the control parameters and compare the automated tracing with visual/manual methods. The traced structures differ by up to 16 orders of magnitude in size, which demonstrates the universality of the tracing algorithm. Full article
(This article belongs to the Special Issue Advanced Signal Processing in Heliospheric Physics)
Show Figures

Figure 1: A solar EUV image of an active region, recorded with the Transition Region And Coronal Explorer (TRACE) spacecraft on May 15, 1998, is shown with a colorscale that has the highest brightness in the white regions (center). In addition, we show (100 × 100 pixel) enlargements of four subregions with different textures, which contain (a) coronal loops, (b) chromospheric and transition region emissions, (c) electronic ripples and (d) moss regions with footpoints of hot coronal loops.
Figure 2: A bandpass-filtered (n_sm1 = 5, n_sm2 = 7) version of the original image rendered in Figure 1 is shown (grey scale), with automated loop tracings overlaid (red curves). Cumulative size distributions, N(>L), of loop lengths are also shown (bottom right panel), comparing the automated tracing (red distribution) with visually/manually traced loops (black distribution). The maximum lengths, L_m (in pixels), are listed for the longest loops detected with each method.
Figure 3: The optimization of the highpass filter constant, n_sm2 (y-axis), is shown for the number of detected loops (with lengths longer than 70 pixels) for all analyzed cases, N_det(L ≥ 70) (a–e). The optimization of detected loops as a function of the curvature radius, r_min, is also shown (f–j).
Figure 4: Bandpass-filtered image of an active region complex observed with the Atmospheric Imager Assembly (AIA)/Solar Dynamics Observatory (SDO) on 3 Aug. 2011, 01 UT, 171 Å, shown as (a) an intensity image, (b) a bandpass-filtered version with n_sm1 = 9 and n_sm2 = 11 and (c) overlaid with automatically traced loop structures, where the low-intensity values below the median of f = 75 DN (data number) are blocked out (grey areas).
Figure 5: (a) High-resolution image of the solar Active Region 10380, recorded on June 16, 2003, with the Swedish 1-m Solar Telescope (SST) on La Palma, Spain, and (b) automated tracing of curvilinear structures with a lowpass filter of n_sm1 = 3 pixels, a highpass filter of n_sm2 = 5 pixels and a minimum curvature radius of r_min = 30 pixels, tracing out 1,757 curvilinear segments.
Figure 6: False-colored images of microtubule filaments in live cells: (a, Cell-HC) a high-contrast image of the CHO cell and (b, Cell-LC) a low-contrast image of the U2OS cell. The size of the images corresponds to 34 μm. The automated curvilinear tracing of both images was carried out with the parameters n_sm1 = 3, n_sm2 = 5 and r_min = 15 for Cell-HC, and n_sm1 = 7, n_sm2 = 9 and r_min = 30 for Cell-LC.
Figure 7: Geometry of the curvature radii centers (x_rm, y_rm), located on a line at angle β (dash-dotted line), perpendicular to the tangent at angle α (solid line), that intersects a curvilinear feature (thick solid curve) at position (x_0, y_0). The angle, γ, indicates the half angular range of the curved guiding segment (thick solid line).
Figure 8: Example of loop tracing in the pixel area [525:680, 453:607] of the image shown in Figure 2. Loop #19 is traced (blue crosses) over a length of 115 pixels (orange numbers), crossing another structure at a small angle. The curves at position 115 indicate the three curved segments that have been used in the tracing of the last loop point. The black contours indicate the bandpass-filtered difference image, and the red contours indicate the previously traced and erased structures in the residual difference image.
Figure 9: Example of loop tracings in the area [300:550, 600:800] of the full image shown in Figure 2. The grey scale indicates the bandpass-filtered image (n_sm1 = 5, n_sm2 = 7), and the loop tracings are shown with red curves.
Full article
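The n_sm1/n_sm2 bandpass filtering that recurs in the captions above can be mimicked by subtracting a strongly boxcar-smoothed copy of an image from a mildly smoothed one. The sketch below is a generic implementation of that idea using scipy's uniform_filter on a synthetic test image; it is not the OCCULT-2 code, and the kernel sizes simply echo the (5, 7) values quoted in Figure 2.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bandpass(image, nsm1=5, nsm2=7):
    """Difference of two boxcar smoothings: structures wider than ~nsm1 pixels but
    narrower than ~nsm2 pixels survive; smooth backgrounds and fine noise are suppressed."""
    img = image.astype(float)
    return uniform_filter(img, size=nsm1) - uniform_filter(img, size=nsm2)

# Synthetic test image: a thin circular arc ("loop") on a smooth gradient plus noise.
y, x = np.mgrid[0:256, 0:256]
gradient = 0.02 * x
loop = np.exp(-((np.hypot(x - 128, y - 200) - 90.0) ** 2) / (2 * 2.0 ** 2))
rng = np.random.default_rng(3)
image = gradient + loop + 0.05 * rng.standard_normal((256, 256))

filtered = bandpass(image, nsm1=5, nsm2=7)
print("filtered range:", round(float(filtered.min()), 3), "to", round(float(filtered.max()), 3))
```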
298 KiB  
Article
Time Evolution of Relative Entropies for Anomalous Diffusion
by Janett Prehl, Frank Boldt, Christopher Essex and Karl Heinz Hoffmann
Entropy 2013, 15(8), 2989-3006; https://doi.org/10.3390/e15082989 - 26 Jul 2013
Cited by 14 | Viewed by 6522
Abstract
The entropy production paradox for anomalous diffusion processes describes a phenomenon where one-parameter families of dynamical equations, falling between the diffusion and wave equations, have entropy production rates (Shannon, Tsallis or Renyi) that increase toward the wave equation limit unexpectedly. Moreover, also surprisingly, the entropy does not order the bridging regime between diffusion and waves at all. However, it has been found that relative entropies, with an appropriately chosen reference distribution, do. Relative entropies, thus, provide a physically sensible way of setting which process is “nearer” to pure diffusion than another, placing pure wave propagation, desirably, “furthest” from pure diffusion. We examine here the time behavior of the relative entropies under the evolution dynamics of the underlying one-parameter family of dynamical equations based on space-fractional derivatives. Full article
(This article belongs to the Special Issue Distance in Information and Statistical Physics Volume 2)
Show Figures

Figure 1: The Kullback-Leibler entropy, K(P_D, P_α), is plotted over α for different times t. For all times, K(P_D, P_α) exhibits a monotonically decreasing behavior, thus confirming the bridge ordering property of K(P_D, P_α).
Figure 2: A comparison between the direct numerical calculation of the Kullback-Leibler entropy, K(P_D, P_α) (DCA), the saddle point method of zeroth order (SP0) and the saddle point method of first order (SP1), shown over logarithmic time t for two different values of α. The approximations approach the DCA data points for large times and already fit the data quite well for t > 1.
Figure 3: Four plots of P_D and ln P_α over x are given for t = 1 (a), t = 10^2 (b), t = 10^4 (c) and t = 10^6 (d) for α = 1.3. Within the width of P_D, the distribution ln P_α becomes flatter with increasing time; thus, ln P_α can be approximated well by its function value and its first two derivatives.
Figure 4: For the case q = 0.5, the Tsallis relative entropy, T_0.5(P_D, P_α), is given over α for different times (t = 10^0, 10^2, 10^4, 10^6). With increasing time, the monotonically decreasing behavior is preserved, and thus the bridge ordering property of T_q(P_D, P_α) is confirmed.
Figure 5: A comparison of the direct numerical calculation of the Tsallis relative entropy, T_q(P_D, P_1.3) (DCA), and the saddle point method of first order (SP1), given over logarithmic time t for different values of q.
Figure 6: A comparison of the direct numerical calculation of the Tsallis relative entropy, T_0.5(P_D, P_α) (DCA), and the saddle point method of first order (SP1), given over logarithmic time t for different values of α. Note that T_q(P_D, P_α) is not monotonic in time for larger values of α, but has a clear minimum around t = 1.
Full article
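For reference, the sketch below evaluates the two relative entropies plotted above for discrete probability vectors: the Kullback-Leibler entropy K(P, Q) and the Tsallis relative entropy T_q(P, Q) in its standard form, which reduces to K as q → 1. The paper works with continuous densities P_D and P_α, so these toy vectors only illustrate the definitions.

```python
import numpy as np

def kullback_leibler(p, q):
    """K(P, Q) = sum_i p_i * ln(p_i / q_i), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def tsallis_relative(p, q, qpar):
    """Standard Tsallis relative entropy:
    T_q(P, Q) = (1 / (q - 1)) * (sum_i p_i^q * q_i^(1 - q) - 1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float((np.sum(p[mask] ** qpar * q[mask] ** (1.0 - qpar)) - 1.0) / (qpar - 1.0))

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.4, 0.4, 0.2])
print("K(P, Q)            =", round(kullback_leibler(P, Q), 6))
print("T_0.5(P, Q)        =", round(tsallis_relative(P, Q, 0.5), 6))
print("T_q -> K as q -> 1 :", round(tsallis_relative(P, Q, 0.999999), 6))
```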
242 KiB  
Review
On the Entropy of a Class of Irreversible Processes
by Jurgen Honig
Entropy 2013, 15(8), 2975-2988; https://doi.org/10.3390/e15082975 - 26 Jul 2013
Cited by 6 | Viewed by 5437
Abstract
We review a recent technique for determining the entropy change accompanying certain classes of irreversible processes involving changes in the state of a system anchored to a reservoir. Time is introduced as a parameter to specify the corresponding entropy evolution of the system. The procedural details are outlined and their relation to the standard treatment of irreversible processes is examined. Full article
(This article belongs to the Special Issue Entropy and the Second Law of Thermodynamics)
Show Figures

Figure 1: Temperature profile for a system at temperature T attached to a reservoir at temperature T_0 via a junction of cross section A and length l.
Full article
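As a textbook point of comparison for the class of processes reviewed here (not the author's time-parametrized construction), the entropy balance for a system of constant heat capacity C, initially at temperature T, that equilibrates irreversibly with a reservoir held at T_0 is:

```latex
\Delta S_{\mathrm{sys}} = C \ln\frac{T_0}{T}, \qquad
\Delta S_{\mathrm{res}} = C\,\frac{T - T_0}{T_0}, \qquad
\Delta S_{\mathrm{tot}} = C\left[\ln\frac{T_0}{T} + \frac{T - T_0}{T_0}\right] \ge 0,
```

with equality only when T = T_0; positivity follows from ln u ≤ u − 1 applied to u = T/T_0.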
3725 KiB  
Review
Towards Realising Secure and Efficient Image and Video Processing Applications on Quantum Computers
by Abdullah M. Iliyasu
Entropy 2013, 15(8), 2874-2974; https://doi.org/10.3390/e15082874 - 26 Jul 2013
Cited by 59 | Viewed by 9638
Abstract
Exploiting the promise of security and efficiency that quantum computing offers, the basic foundations leading to commercial applications for quantum image processing are proposed. Two mathematical frameworks and algorithms to accomplish the watermarking of quantum images, authentication of ownership of already watermarked images and recovery of their unmarked versions on quantum computers are proposed. Encoding the images as 2n-sized normalised Flexible Representation of Quantum Images (FRQI) states, with n-qubits and 1-qubit dedicated to capturing the respective information about the colour and position of every pixel in the image respectively, the proposed algorithms utilise the flexibility inherent to the FRQI representation, in order to confine the transformations on an image to any predetermined chromatic or spatial (or a combination of both) content of the image as dictated by the watermark embedding, authentication or recovery circuits. Furthermore, by adopting an apt generalisation of the criteria required to realise physical quantum computing hardware, three standalone components that make up the framework to prepare, manipulate and recover the various contents required to represent and produce movies on quantum computers are also proposed. Each of the algorithms and the mathematical foundations for their execution were simulated using classical (i.e., conventional or non-quantum) computing resources, and their results were analysed alongside other longstanding classical computing equivalents. The work presented here, combined together with the extensions suggested, provide the basic foundations towards effectuating secure and efficient classical-like image and video processing applications on the quantum-computing framework. Full article
(This article belongs to the Special Issue Quantum Information 2012)
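To make the FRQI encoding referred to in the abstract concrete, the sketch below builds the state vector of a tiny 2×2 greyscale image on a classical computer: one colour qubit carries the cos θ_i / sin θ_i amplitudes and the remaining qubits index the pixel positions. This follows the commonly cited FRQI definition |I〉 = (1/2^n) Σ_i (cos θ_i|0〉 + sin θ_i|1〉) ⊗ |i〉; the qubit ordering and the sample pixel values are arbitrary choices, and this is a state-vector simulation, not a circuit-level preparation.

```python
import numpy as np

def frqi_state(grey):
    """FRQI-style state of a 2^n x 2^n greyscale image with values in [0, 1]:
    amplitude (1/2^n)*cos(theta_i) on the colour-|0> component of pixel i and
    (1/2^n)*sin(theta_i) on its colour-|1> component, with theta_i = grey_i * pi/2.
    Returns a real vector of length 2^(2n+1)."""
    grey = np.asarray(grey, float).ravel()
    n_pixels = grey.size                 # equals 2^(2n)
    theta = grey * np.pi / 2.0
    state = np.zeros(2 * n_pixels)
    norm = 1.0 / np.sqrt(n_pixels)       # equals 1/2^n
    for i, th in enumerate(theta):
        state[2 * i] = norm * np.cos(th)      # colour qubit |0> component at position i
        state[2 * i + 1] = norm * np.sin(th)  # colour qubit |1> component at position i
    return state

image = [[0.0, 0.25],
         [0.5, 1.0]]                     # illustrative 2x2 greyscale pixels
psi = frqi_state(image)
print("state length:", psi.size, " norm:", round(float(np.dot(psi, psi)), 6))
```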
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>The three stages of the circuit model of quantum. The figure was adapted from [<a href="#B35-entropy-15-02874" class="html-bibr">35</a>] from where additional explanation can be obtained.</p>
Full article ">Figure 2
<p>A simple FRQI image and its quantum state.</p>
Full article ">Figure 3
<p>Generalised circuit showing how information in an FRQI quantum image state is encoded.</p>
Full article ">Figure 4
<p>Illustration of the two steps of the PPT theorem to prepare an FRQI image.</p>
Full article ">Figure 5
<p>Colour and position transformations on FRQI quantum images. The * in (c) indicates the 0 or 1 control-conditions required to confine <span class="html-italic">U<sub>3</sub></span> to a predetermined sub-block of the image.</p>
Full article ">Figure 6
<p>Left: The circuit design for the horizontal flip operation, <span class="html-italic">F<sup>X</sup></span>, and on the right that for the coordinate swap operation, <span class="html-italic">S<sub>I</sub></span>.</p>
Full article ">Figure 7
<p>(<b>a</b>) Original 8×8 image, and its resulting output images after applying in (<b>b</b>) the vertical flip <span class="html-italic">F<sup>Y</sup></span>, (<b>c</b>) the horizontal flip <span class="html-italic">F<sup>X</sup></span>, and in (<b>d</b>) the coordinate swap <span class="html-italic">S<sub>I</sub></span> operations, respectively.</p>
Full article ">Figure 8
<p>Circuit to rotate the image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>a through an angle of 90° and (on the left) the resulting image.</p>
Full article ">Figure 9
<p>The 8 × 8 synthetic and Lena images before and after the application of the <math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>(</mo> <mstyle scriptlevel="+1"> <mfrac> <mrow> <mn>2</mn> <mi>π</mi> </mrow> <mn>3</mn> </mfrac> </mstyle> <mo>)</mo> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>(</mo> <mstyle scriptlevel="+1"> <mfrac> <mi>π</mi> <mn>3</mn> </mfrac> </mstyle> <mo>)</mo> </mrow> </semantics> </math> on the upper half and lower half of their content.</p>
Full article ">Figure 10
<p>Circuit to execute the <math display="inline"> <semantics> <mrow> <mi>R</mi> <mrow> <mo>(</mo> <mrow> <mstyle scriptlevel="+1"> <mfrac> <mrow> <mn>2</mn> <mi>π</mi> </mrow> <mn>3</mn> </mfrac> </mstyle> </mrow> <mo>)</mo> </mrow> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>R</mi> <mrow> <mo>(</mo> <mrow> <mstyle scriptlevel="+1"> <mfrac> <mi>π</mi> <mn>3</mn> </mfrac> </mstyle> </mrow> <mo>)</mo> </mrow> </mrow> </semantics> </math> colour operations on the upper half and lower half of the 8 × 8 synthetic and Lena images.</p>
Full article ">Figure 11
<p>General circuit design for transforming the geometric (G) and colour (C) content of FRQI quantum images.</p>
Full article ">Figure 12
<p>Demonstrating the use of additional control to target a smaller sub-area in an image.</p>
Full article ">Figure 13
<p>The control on the <span class="html-italic">y<sub>n-1</sub></span> qubit in the circuit on the left divides an entire image into its upper and lower halves. Using this control, this circuit shows how the flip operation can be confined to the lower half of an image, while the figure to its right shows the effect of such a transformation on the 8×8 binary image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>(a). (The image on the right corrects the image for the same example in [<a href="#B18-entropy-15-02874" class="html-bibr">18</a>]).</p>
Full article ">Figure 14
<p>Circuit to realise high fidelity version of the image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>(a). On the left is the circuit to confine the flip operation to the predetermined 2 × 2 sub- area, <span class="html-italic">i.e.</span> left lower-half, of the image in <a href="#entropy-15-02874-f007" class="html-fig">Figure 7</a>(a); and to its right, the resulting transformed image. (The image on the right corrects the image for the same example in [<a href="#B18-entropy-15-02874" class="html-bibr">18</a>]).</p>
Full article ">Figure 15
<p>A 4×4 image showing sub-blocks labelled a–e within which the transformations <span class="html-italic">U<sub>a</sub></span>, <span class="html-italic">U<sub>b</sub></span>, <span class="html-italic">U<sub>c</sub></span>, <span class="html-italic">U<sub>d</sub></span> and <span class="html-italic">U<sub>e</sub></span> should be confined.</p>
Full article ">Figure 16
<p>Circuit showing the layers to confine the operations <span class="html-italic">U<sub>a</sub></span>, <span class="html-italic">U<sub>b</sub></span>, <span class="html-italic">U<sub>c</sub></span>, <span class="html-italic">U<sub>d</sub></span> and <span class="html-italic">U<sub>e</sub></span> to the layers labelled “a” to “e” of the image in <a href="#entropy-15-02874-f015" class="html-fig">Figure 15</a>. MSQ and LSQ indicate the most and least significant qubits of the FRQI representation encoding the image.</p>
Full article ">Figure 17
<p>Original Lena image with labelled sub-blocks.</p>
Full article ">Figure 18
<p>The original Lena image and the two different output images using <math display="inline"> <semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mo>(</mo> <mfrac> <mi>π</mi> <mn>12.5</mn> </mfrac> <mo>)</mo> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mo>(</mo> <mfrac> <mi>π</mi> <mn>125</mn> </mfrac> <mo>)</mo> </mrow> </semantics> </math> as discussed in the text.</p>
Full article ">Figure 19
<p>The quantum circuit to realise the output images in <a href="#entropy-15-02874-f018" class="html-fig">Figure 18</a>.</p>
Full article ">Figure 20
<p>Watermark embedding procedure of the WaQI scheme.</p>
Full article ">Figure 21
<p>Merger of 2×2 sub-block entries from the first to the 2nd iteration.</p>
Full article ">Figure 22
<p>Merging the content of 2×2 sub-block entries to realise (<b>i</b>) e = 1and (<b>ii</b>) e = –1values as explained in step 3 of the watermark-embedding algorithm.</p>
Full article ">Figure 23
<p>(<b>a</b>) the a–d Alphabet test image—HTLA text logo watermark pair, and (<b>b</b>) the watermarked version of the a–d Alphabet test image.</p>
Full article ">Figure 24
<p>Watermark map for a–d alphabet–HTLA text watermark pair.</p>
Full article ">Figure 25
<p>Watermark embedding circuit for the a–d alphabet/HTLA text logo pair in <a href="#entropy-15-02874-f023" class="html-fig">Figure 23</a>(a).</p>
Full article ">Figure 26
<p>Decomposing layer 1 of the watermark-embedding circuit in <a href="#entropy-15-02874-f025" class="html-fig">Figure 25</a> into its two sub-layers.</p>
Full article ">Figure 27
<p>Merging flip gates to realise revised <span class="html-italic">F<sup>X</sup></span>, and <span class="html-italic">F<sup>Y</sup></span> operations for (i) <span class="html-italic">R &gt; L</span> and (ii) <span class="html-italic">R &lt; L.</span></p>
Full article ">Figure 28
<p>Merger of watermark map content to realise the revised GTQI operations for <span class="html-italic">R = L</span>. The operation <span class="html-italic">G<sub>I</sub></span> could be any of the operations from our <span class="html-italic">r</span>GTQI library comprising of the flip operations, <span class="html-italic">F<sup>X</sup></span> or <span class="html-italic">F<sup>Y</sup></span>; the coordinate swap operation, <span class="html-italic">S</span>; or the do nothing operation, <span class="html-italic">D</span>.</p>
Full article ">Figure 29
<p>Merging of flip gates to realise the revised <span class="html-italic">F<sup>X</sup></span> and <span class="html-italic">F<sup>Y</sup></span> flip operations for <span class="html-italic">R = L.</span></p>
Full article ">Figure 30
<p>Quantum watermarked image authentication procedure.</p>
Full article ">Figure 31
<p>Dataset comprising of images and watermark signals used for simulation-based experiments on WaQI.</p>
Full article ">Figure 32
<p>Top row shows the watermark maps for the image paired with different watermark signals HTLA text, Baboon, and Noise image. Below is the watermarked version for each pair and their corresponding PSNR values.</p>
Full article ">Figure 33
<p>Variation of watermarked image quality (PSNR) with the size of the Lena–Noise image pair. The size of each point in the watermark maps in the top row varies with the size of the image–watermark pairs. It is 8×8 for the 256×256 and 512×512 pairs; and 16×16 for the 1024×1024 Lena–Noise pair.</p>
Full article ">Figure 34
<p>Variation of watermarked image quality (PSNR) with size of image–watermark pair.</p>
Full article ">Figure 35
<p>Relationship between the colour angle <span class="html-italic">θ<sub>i</sub></span> and greyscale value |<span class="html-italic">G<sub>i</sub></span>〉 in an FRQI image.</p>
Full article ">Figure 36
<p>Greyscale spectrum showing the correlation between the greyscale values and changes in their values that can be perceived by the HVS.</p>
Full article ">Figure 37
<p>General schematic for two-tier watermarking and authentication of greyscale quantum images.</p>
Full article ">Figure 38
<p>Generalised circuit for the two-tier watermarking of greyscale FRQI images. The visible and invisible watermark embedding transformations <span class="html-italic">T<sub>α</sub></span> and <span class="html-italic">T<sub>β</sub></span> are confined to predetermined areas of the cover image using the control-conditions specified by <span class="html-italic">I<sub>Rl</sub></span> and <span class="html-italic">I<sub>S</sub></span> respectively.</p>
Full article ">Figure 39
<p>(<b>a</b>)–(<b>d</b>) Cover images and (<b>e</b>) watermark logo used for experiments on the proposed scheme.</p>
Full article ">Figure 40
<p>Watermark embedding circuit for the Lena-Titech logo pair.</p>
Full article ">Figure 41
<p>(Top row) shows the four watermarked images while (Bottom row) shows the magnified visible watermarked windows and PSNR for each pair.</p>
Full article ">Figure 42
<p>Watermark recovery circuit for the Lena- Titech logo pair.</p>
Full article ">Figure 43
<p>Results for the Lena-Titech logo pair based on the revised watermark embedding circuit for the scheme-designated watermark window on the left and one whose watermark window has been assigned to the extreme lower-right corner by default.</p>
Full article ">Figure 44
<p>Revised watermark-embedding circuit for the Lena-Titech logo pair using the scheme-designated watermark window.</p>
Full article ">Figure 45
<p><span class="html-italic">m</span>-shots from a movie showing the key |<span class="html-italic">F<sub>m</sub></span>〉, makeup <math display="inline"> <semantics> <mrow> <mo>|</mo> <msubsup> <mi>K</mi> <mi>c</mi> <mi>m</mi> </msubsup> <mo>〉</mo> </mrow> </semantics> </math>, and viewing |<span class="html-italic">F<sub>mq</sub></span>〉 frames.</p>
Full article ">Figure 46
<p>Circuit structure to encode the input of a movie strip.</p>
Full article ">Figure 47
<p>Circuit structure to encode the input of a movie strip Circuits for SMO. Depending on the motion axis <span class="html-italic">Z<sub>n</sub></span> = <span class="html-italic">x</span> or <span class="html-italic">y</span>) the circuit on the left is used to accomplish the <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>F</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>D</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> operations when applied along the <span class="html-italic">x</span><sup>−</sup> and <span class="html-italic">y</span><sup>−</sup> axis, respectively. Similarly, the circuit on the right is used to accomplish the <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>B</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>U</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math> operations when applied the <span class="html-italic">x</span><sup>−</sup> and <span class="html-italic">y</span><sup>−</sup> axis, respectively.</p>
Full article ">Figure 48
<p>SMOs on the key frame in (<b>a</b>) to mimic the movement of the + shaped ROI on a constant white background and its viewing frames after applying (<b>b</b>) the forward motion operation <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>F</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math>, (<b>c</b>) the upward motion operation <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>U</mi> <mi>c</mi> </msubsup> </mrow> </semantics> </math>, and (<b>d</b>) a somewhat zigzag movement of the + ROI.</p>
Full article ">Figure 49
<p>Movie scenes to demonstrate SMO operations. The panels in (a) and (b) show the transcribed scripts for scene 1 and 2, (c) shows the key frame for scene 1, and (d)-(l) show the resulting viewing frames.</p>
Full article ">Figure 50
<p>Movie sub-circuit to realise the first three viewing frames of scene 1 (of the example in <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>). The layers separated by short-dashed lines labelled “a” indicate SMO operations, while the layers grouped and labelled as “b” indicate CTQI transformations on the key frame. Layers labelled <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mn>1</mn> <mn>2</mn> </msubsup> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mn>2</mn> <mn>0</mn> </msubsup> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mn>3</mn> <mn>0</mn> </msubsup> </mrow> </semantics> </math> indicate sub-circuits of the movie reader to recover the classical readout of frames |<span class="html-italic">f</span><sub>0,1</sub>〉, |<span class="html-italic">f</span><sub>0,2</sub>〉 and |<span class="html-italic">f</span><sub>0,3</sub>〉.</p>
Full article ">Figure 51
<p>Restricting the movie operation <math display="inline"> <semantics> <mrow> <msubsup> <mi>M</mi> <mi>F</mi> <mi>C</mi> </msubsup> </mrow> </semantics> </math> in order to move the ROI <span class="html-italic">R<sub>1</sub></span> from node 0 to node 1 as specified by the movie script.</p>
Full article ">Figure 52
<p>Movie sub-circuit for scene 2 in <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>(b). The labels 5 through 7 and 5′ through 7′ for <span class="html-italic">R<sub>1</sub></span> and <span class="html-italic">R<sub>2</sub></span> indicate the circuit layers to perform the operations that yield the viewing frames in <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>(j)–(l).</p>
Full article ">Figure 53
<p>Circuit on the <span class="html-italic">m</span>FRQI strip axis to perform the frame-to-frame transition operation.</p>
Full article ">Figure 54
<p>The cyclic shift transformation for the case <span class="html-italic">c</span> = 1 and <span class="html-italic">n</span> = 5.</p>
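Figure 54 depicts the cyclic shift transformation with <span class="html-italic">c</span> = 1 on an <span class="html-italic">n</span> = 5 qubit position register. Assuming the standard definition |<span class="html-italic">i</span>〉 → |(<span class="html-italic">i</span> + <span class="html-italic">c</span>) mod 2<sup>n</sup>〉, the sketch below builds the corresponding 32×32 permutation matrix and checks that it is unitary; the same kind of shift, applied to the strip-axis register, is what the frame-to-frame transition in Figure 53 performs.

```python
import numpy as np

def cyclic_shift(n, c=1):
    """Permutation matrix for the cyclic shift |i> -> |(i + c) mod 2**n> on n qubits."""
    dim = 2 ** n
    shift = np.zeros((dim, dim))
    for i in range(dim):
        shift[(i + c) % dim, i] = 1.0
    return shift

S = cyclic_shift(n=5, c=1)                    # the 32 x 32 case drawn in Figure 54
assert np.allclose(S @ S.T, np.eye(2 ** 5))   # a permutation matrix is unitary

state = np.zeros(2 ** 5)
state[31] = 1.0                               # |11111>, the last position
print(int(np.argmax(S @ state)))              # 0: the shift wraps around to |00000>
```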
Full article ">Figure 55
<p>A single-qubit measurement gate.</p>
Full article ">Figure 56
<p>Exploiting the position information |<span class="html-italic">i</span>〉 of the FRQI representation to predetermine the 2D grid location of each pixel in a transformed image <span class="html-italic">G<sub>I</sub></span>(|<span class="html-italic">I(θ)</span>〉).</p>
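The point of Figure 56 is that the FRQI position register makes each pixel's grid location known in advance: measuring the position qubits yields a flat index that maps deterministically onto the 2D grid. A minimal sketch, assuming the row-major convention <span class="html-italic">i</span> = <span class="html-italic">y</span>·2<sup>n</sup> + <span class="html-italic">x</span> (the paper may order the registers differently):

```python
def grid_location(i, n):
    """Map a flat FRQI position index onto (row, column) of a 2**n x 2**n grid,
    assuming the row-major convention i = y * 2**n + x."""
    side = 2 ** n
    y, x = divmod(i, side)
    return y, x

print(grid_location(0, 2))    # (0, 0): first pixel of a 4x4 image
print(grid_location(6, 2))    # (1, 2)
print(grid_location(15, 2))   # (3, 3): last pixel
```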
Full article ">Figure 57
<p>Control conditions to recover the readout of the pixels of a 2<span class="html-italic"><sup>n</sup></span>×2<span class="html-italic"><sup>n</sup></span> FRQI quantum image.</p>
Full article ">Figure 58
<p>Predetermined recovery of the position information of an FRQI quantum image. The * between the colour |<span class="html-italic">c</span>(<span class="html-italic">θ<sub>i</sub></span>)〉 and ancilla |<span class="html-italic">a</span>〉 qubits indicates the dependent ancilla-driven measurement as described in <a href="#entropy-15-02874-f059" class="html-fig">Figure 59</a> and Theorem 4.</p>
Full article ">Figure 59
<p>Circuit to recover the content of the single-qubit colour information of an FRQI quantum image. This circuit represents each of the * markers between the colour and ancilla qubits in <a href="#entropy-15-02874-f058" class="html-fig">Figure 58</a>.</p>
Full article ">Figure 60
<p>Reader to recover the content of a 2<span class="html-italic"><sup>n</sup></span>×2<span class="html-italic"><sup>n</sup></span> FRQI quantum image.</p>
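Figures 57 through 60 outline how the reader recovers the classical content of an FRQI image: the position qubits select a pixel, and the statistics of the colour qubit yield its angle. The sketch below computes those conditional probabilities directly from the FRQI amplitudes rather than simulating the ancilla-driven measurement circuit; it assumes the usual FRQI state (1/2<sup>n</sup>) Σ<sub>i</sub> (cos θ<sub>i</sub>|0〉 + sin θ<sub>i</sub>|1〉)|<span class="html-italic">i</span>〉 with angles in [0, π/2].

```python
import numpy as np

def frqi_state(thetas):
    """Amplitudes of the FRQI state (1/2**n) * sum_i (cos t_i |0>|i> + sin t_i |1>|i>)
    for a 2**n x 2**n image whose pixel angles are `thetas` (length 4**n)."""
    norm = 1.0 / np.sqrt(len(thetas))     # equals 1/2**n when len(thetas) == 4**n
    return norm * np.cos(thetas), norm * np.sin(thetas)

def read_out(amp0, amp1):
    """Recover each pixel angle from Pr(colour = |1> | position = |i>)."""
    p1_given_i = amp1 ** 2 / (amp0 ** 2 + amp1 ** 2)
    return np.arcsin(np.sqrt(p1_given_i))

thetas = np.linspace(0.0, np.pi / 2, 16)           # a 4x4 test image (n = 2)
amp0, amp1 = frqi_state(thetas)
print(np.allclose(read_out(amp0, amp1), thetas))   # True: angles recovered exactly
```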
Full article ">Figure 61
<p>Movie reader sub-circuit to recover pixels <span class="html-italic">p</span><sub>0</sub> and <span class="html-italic">p</span><sub>1</sub> for frame |<span class="html-italic">f</span><sub>0,1</sub>〉 corresponding to <a href="#entropy-15-02874-f049" class="html-fig">Figure 49</a>(e).</p>
Full article ">Figure 62
<p>Movie reader to recover pixels <span class="html-italic">p</span><sub>4</sub>, <span class="html-italic">p</span><sub>6</sub>, <span class="html-italic">p</span><sub>8</sub>, <span class="html-italic">p</span><sub>9</sub>, <span class="html-italic">p</span><sub>10</sub> of viewing frame |<span class="html-italic">f</span><sub>0,1</sub>〉.</p>
Full article ">Figure 63
<p>Readout of the new state of pixel <span class="html-italic">p</span><sub>4</sub> as transformed by sub-circuit 1 in <a href="#entropy-15-02874-f052" class="html-fig">Figure 52</a>.</p>
Full article ">Figure 64
<p>Movie reader sub-circuit to recover the content of pixels <span class="html-italic">p</span><sub>2</sub>, <span class="html-italic">p</span><sub>3</sub>, <span class="html-italic">p</span><sub>7</sub>, <span class="html-italic">p</span><sub>11</sub>, <span class="html-italic">p</span><sub>12</sub>, <span class="html-italic">p</span><sub>13</sub> and <span class="html-italic">p</span><sub>15</sub>.</p>
Full article ">Figure 65
<p>Key and makeup frames for the scene “The lonely duck goes swimming”. See text and [<a href="#B19-entropy-15-02874" class="html-bibr">19</a>] for additional explanation.</p>
Full article ">Figure 66
<p>Key and makeup frames for the scene “The cat and mouse chase”. See text and [<a href="#B19-entropy-15-02874" class="html-bibr">19</a>] for additional explanation.</p>
Full article ">Figure 67
<p>Framework for quantum movie representation and manipulation.</p>
Full article ">