Entropy, Volume 12, Issue 1 (January 2010) – 11 articles, Pages 1-160

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Article
The Quantum-Classical Transition as an Information Flow
by Andres M. Kowalski, Maria T. Martin, Luciano Zunino, Angelo Plastino and Montserrat Casas
Entropy 2010, 12(1), 148-160; https://doi.org/10.3390/e12010148 - 26 Jan 2010
Cited by 5 | Viewed by 9591
Abstract
We investigate the classical limit of the semiclassical evolution with reference to a well-known model that represents the interaction between matter and a given field. This is done by recourse to a special statistical quantifier called the “symbolic transfer entropy”. We find that the quantum-classical transition is thereby described as a sign reversal of the dominant direction of the information flow between classical and quantal variables. Full article
(This article belongs to the Special Issue Information and Entropy)
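The directionality measure described in the abstract can be illustrated with a small numerical sketch. The following is a minimal, self-contained example of symbolic transfer entropy between two scalar time series using ordinal-pattern symbolization, together with a directionality index formed as the difference of the two flow directions; the window length, the toy coupled signals, and all function names are illustrative assumptions, not the authors' model or code.

```python
import numpy as np
from collections import Counter

def symbolize(x, m=3):
    """Map each length-m window of x to an integer coding its ordinal (rank-order) pattern."""
    patterns = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
    codes = {p: k for k, p in enumerate(sorted(set(patterns)))}
    return np.array([codes[p] for p in patterns])

def transfer_entropy(src, dst):
    """Symbolic transfer entropy T_{src->dst} in bits, with history length 1."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    pairs_dst = Counter(zip(dst[1:], dst[:-1]))
    pairs_joint = Counter(zip(dst[:-1], src[:-1]))
    singles_dst = Counter(dst[:-1])
    n = len(dst) - 1
    te = 0.0
    for (d_next, d_now, s_now), c in triples.items():
        p_cond_full = c / pairs_joint[(d_now, s_now)]
        p_cond_self = pairs_dst[(d_next, d_now)] / singles_dst[d_now]
        te += (c / n) * np.log2(p_cond_full / p_cond_self)
    return te

# Toy pair of signals in which y is driven by x, so information should flow mainly x -> y.
rng = np.random.default_rng(0)
x = rng.normal(size=5001)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.8 * x[t - 1] + 0.2 * rng.normal()

sx, sy = symbolize(x), symbolize(y)
t_xy, t_yx = transfer_entropy(sx, sy), transfer_entropy(sy, sx)
print(f"T(x->y) = {t_xy:.3f} bits, T(y->x) = {t_yx:.3f} bits, "
      f"directionality index T(x->y) - T(y->x) = {t_xy - t_yx:+.3f}")
```

In the paper the corresponding index is computed between a quantal and a classical variable, and the quantum-classical transition appears as a sign reversal of that index.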
Show Figures

Figure 1. (a) The directionality index $T^S$ vs. $E_r$ and (b) $T^S_{q,c}$ and $T^S_{c,q}$ vs. $E_r$, for a wide $E_r$ range. We took $q \equiv \langle x^2 \rangle$ and $c \equiv A$. The classical variable $A$ is dominant across most of the range, except for small $E_r$ values, for which the Uncertainty Principle becomes important enough that the quantal variable $\langle x^2 \rangle$ becomes dominant. Note the absolute minimum of $T^S$ at $E_r^{cl} = 21.55264$, the beginning of the transition region.
Figure 2. (a) The directionality index $T^S$ vs. $E_r$ and (b) $T^S_{q,c}$ and $T^S_{c,q}$ vs. $E_r$, for an $E_r$ range that allows one to visualize the three zones of the process, i.e., quantal, transitional, and classical, delimited respectively by $E_r^{\mathcal{P}} = 3.3282$ and $E_r^{cl} = 21.55264$. We took $q \equiv \langle x^2 \rangle$ and $c \equiv A$ as in Figure 1. Note the absolute minimum of $T^S$ at $E_r^{cl}$, the local maximum at $E_r^{\mathcal{P}}$, and the absolute maximum close by ($E_r \simeq 2.2$). Symmetric information flow obtains at $E_r^M = 6.81$ (where the Statistical Complexity attains a maximum), well within the transition region. The classical variable $A$ is the “leading” one from $+\infty$ down to this point; for smaller $E_r$ values, $\langle x^2 \rangle$ becomes dominant.
Figure 3. The directionality index $T^S$ vs. $E_r$, and $T^S_{q,c}$ and $T^S_{c,q}$ vs. $E_r$ (inset). Here we took $q \equiv \langle x^2 \rangle$ and $c \equiv P_A$. The three stages of the process are visible between $E_r^{\mathcal{P}} = 3.3282$ and $E_r^{cl} = 21.55264$. Note the $T^S$ absolute maximum at $E_r^{\mathcal{P}}$. $E_r^M$ is here slightly off the mark (see text).
Article
On the Entropy Based Associative Memory Model with Higher-Order Correlations
by Masahiro Nakagawa
Entropy 2010, 12(1), 136-147; https://doi.org/10.3390/e12010136 - 22 Jan 2010
Cited by 1 | Viewed by 7464
Abstract
In this paper, an entropy based associative memory model is proposed and applied to memory retrieval with an orthogonal learning model, so as to compare it with the conventional model based on the quadratic Lyapunov functional that is minimized during the retrieval process. In the present approach, the updating dynamics are constructed on the basis of an entropy minimization strategy, which reduces asymptotically to the above-mentioned conventional dynamics as a special case when higher-order correlations are ignored. By introducing the entropy functional, one may include higher-order correlation effects between neurons in a self-contained manner, without the heuristic coupling coefficients of the conventional approach. In fact, we show that such higher-order coupling tensors are uniquely determined in the framework of the entropy based approach. Numerical results show that the proposed approach realizes a much larger memory capacity than the quadratic Lyapunov functional approach, e.g., the associatron. Full article
(This article belongs to the Special Issue Entropy in Model Reduction)
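The entropy-based dynamics of Equation (17) are specific to the paper, but the conventional baseline it is compared against — an autoassociative network whose retrieval dynamics descend a quadratic Lyapunov functional — and the overlap and loading-rate quantities plotted in the figures can be sketched generically. The Hebbian storage rule, network size, and parameter values below are illustrative assumptions (the paper itself uses an orthogonal learning model).

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 200, 20                          # neurons and stored patterns; loading rate alpha = L/N
patterns = rng.choice([-1, 1], size=(L, N))

# Conventional Hebbian coupling matrix with zero self-coupling (associatron-style baseline).
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def overlap(state, pattern):
    """Overlap m = (1/N) * sum_i xi_i * s_i between a network state and a stored pattern."""
    return float(state @ pattern) / len(state)

def recall(cue, sweeps=10):
    """Asynchronous sign updates; each flip cannot increase E = -0.5 * s @ W @ s."""
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Cue: pattern 0 with 10% of its bits flipped.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1

retrieved = recall(cue)
print(f"loading rate alpha = {L / N:.2f}")
print(f"overlap before retrieval: {overlap(cue, patterns[0]):.3f}")
print(f"overlap after retrieval : {overlap(retrieved, patterns[0]):.3f}")
```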
Show Figures

Figure 1. The time dependence of overlaps of the present entropy based model defined by Equation (17).
Figure 2. The time dependence of overlaps of the associatron defined by Equation (18).
Figure 3. The dependence of the success rate on the loading rate α = L/N of the autoassociation model based on Equation (17) (entropy based approach).
Figure 4. The dependence of the success rate on the loading rate α = L/N of the autoassociation model based on Equation (18) (associatron).
Figure 5. The dependence of the storage capacity on the Hamming distance. The symbols o and x are for the entropy based approach and the associatron, respectively.
Article
Arguments for the Integration of the Non-Zero-Sum Logic of Complex Animal Communication with Information Theory
by Vincenzo Penteriani
Entropy 2010, 12(1), 127-135; https://doi.org/10.3390/e12010127 - 21 Jan 2010
Cited by 2 | Viewed by 8028
Abstract
The outstanding levels of knowledge attained today in the research on animal communication, and the new available technologies to study visual, vocal and chemical signalling, allow an ever increasing use of information theory as a sophisticated tool to improve our knowledge of the complexity of animal communication. Some considerations on the way information theory and intraspecific communication can be linked are presented here. Specifically, information theory may help us to explore interindividual variations in different environmental constraints and social scenarios, as well as the communicative features of social vs. solitary species. Full article
(This article belongs to the Special Issue Information Theory Applied to Animal Communication)
Article
From Maximum Entropy to Maximum Entropy Production: A New Approach
by Nathaniel Virgo
Entropy 2010, 12(1), 107-126; https://doi.org/10.3390/e12010107 - 18 Jan 2010
Cited by 17 | Viewed by 11329
Abstract
Evidence from climate science suggests that a principle of maximum thermodynamic entropy production can be used to make predictions about some physical systems. I discuss the general form of this principle and an inherent problem with it, currently unsolved by theoretical approaches: how to determine which system it should be applied to. I suggest a new way to derive the principle from statistical mechanics, and present a tentative solution to the system boundary problem. I discuss the need for experimental validation of the principle, and its impact on the way we see the relationship between thermodynamics and kinetics. Full article
(This article belongs to the Special Issue What Is Maximum Entropy Production and How Should We Apply It?)
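A common way to see how such a principle makes a prediction is the two-box heat transport setup shown in Figure 1. The sketch below is a generic textbook-style version, not the paper's exact model: each box re-emits energy linearly in its temperature, and the horizontal heat flow Q is selected by maximizing the entropy production of the transport; all numbers are made up for illustration.

```python
import numpy as np

# Minimal two-box sketch (illustrative numbers, not the paper's model):
# box A absorbs more radiation than box B, each re-emits at a rate k*T,
# and a horizontal heat flow Q runs from the warm box A to the cold box B.
k = 1.0                    # linear emission coefficient, W m^-2 K^-1
R_A, R_B = 300.0, 250.0    # absorbed radiation, W m^-2

def temperatures(Q):
    """Steady-state energy balance: R_A = k*T_A + Q and R_B + Q = k*T_B."""
    return (R_A - Q) / k, (R_B + Q) / k

def entropy_production(Q):
    """Entropy produced by the irreversible transport of Q from A (warm) to B (cold)."""
    T_A, T_B = temperatures(Q)
    return Q * (1.0 / T_B - 1.0 / T_A)

Q_grid = np.linspace(0.0, (R_A - R_B) / 2.0, 2001)   # beyond this the gradient would invert
sigma = entropy_production(Q_grid)
Q_mep = Q_grid[np.argmax(sigma)]
T_A, T_B = temperatures(Q_mep)
print(f"MEP-selected transport Q ~ {Q_mep:.2f} W m^-2, "
      f"temperatures T_A ~ {T_A:.1f} K, T_B ~ {T_B:.1f} K")
```

Entropy production vanishes both at Q = 0 and at the Q that equalizes the two temperatures, so the maximum singles out an intermediate transport rate; the paper's question is which boundary choice makes such a maximization legitimate.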
Show Figures

Graphical abstract
Figure 1. A diagram showing the components of the two-box atmospheric heat transport model. Labels written next to arrows indicate rates of flow of energy, whereas labels in boxes represent temperature.
Figure 2. Two possible ways in which negative feedback boundary conditions could be realised. (a) Heat flows into A from a much larger reservoir A′, and out of B into another large reservoir B′. (b) A reversible heat engine is used to extract power by transferring heat between two external reservoirs A″ and B″, and this power is used to transfer heat reversibly out of B and into A. In this version the only entropy produced is that produced inside C due to the heat flow Q.
Review
Maximum Entropy Approaches to Living Neural Networks
by Fang-Chin Yeh, Aonan Tang, Jon P. Hobbs, Pawel Hottowy, Wladyslaw Dabrowski, Alexander Sher, Alan Litke and John M. Beggs
Entropy 2010, 12(1), 89-106; https://doi.org/10.3390/e12010089 - 13 Jan 2010
Cited by 40 | Viewed by 12180
Abstract
Understanding how ensembles of neurons collectively interact will be a key step in developing a mechanistic theory of cognitive processes. Recent progress in multineuron recording and analysis techniques has generated tremendous excitement over the physiology of living neural networks. One of the key developments driving this interest is a new class of models based on the principle of maximum entropy. Maximum entropy models have been reported to account for spatial correlation structure in ensembles of neurons recorded from several different types of data. Importantly, these models require only information about the firing rates of individual neurons and their pairwise correlations. If this approach is generally applicable, it would drastically simplify the problem of understanding how neural networks behave. Given the interest in this method, several groups now have worked to extend maximum entropy models to account for temporal correlations. Here, we review how maximum entropy models have been applied to neuronal ensemble data to account for spatial and temporal correlations. We also discuss criticisms of the maximum entropy approach that argue that it is not generally applicable to larger ensembles of neurons. We conclude that future maximum entropy models will need to address three issues: temporal correlations, higher-order correlations, and larger ensemble sizes. Finally, we provide a brief list of topics for future research. Full article
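The core fitting step behind these models — matching measured firing rates and pairwise correlations with a maximum entropy (Ising-like) distribution — can be sketched for a small ensemble where all states can be enumerated exactly. The surrogate spike data, learning rate, and iteration count below are illustrative assumptions, not the methods of the studies reviewed.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 4                                                    # small ensemble: 2**n states enumerable
states = np.array(list(product([0, 1], repeat=n)), dtype=float)

# Surrogate binary spike data standing in for a recorded ensemble (not real recordings).
data = (rng.random((5000, n)) < np.array([0.15, 0.25, 0.20, 0.30])).astype(float)
data[:, 1] = np.where(rng.random(5000) < 0.5, data[:, 0], data[:, 1])   # inject one correlation

mean_data = data.mean(axis=0)                            # firing rates <s_i>
corr_data = (data.T @ data) / len(data)                  # pairwise moments <s_i s_j>

def model_moments(h, J):
    """Exact moments of P(s) ~ exp(h.s + 0.5 * s.J.s) by enumerating all 2**n states."""
    E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    return p @ states, states.T @ (states * p[:, None])

h, J = np.zeros(n), np.zeros((n, n))
for _ in range(3000):                                    # gradient ascent on the log-likelihood
    mean_model, corr_model = model_moments(h, J)
    h += 0.1 * (mean_data - mean_model)
    dJ = 0.1 * (corr_data - corr_model)
    np.fill_diagonal(dJ, 0.0)                            # couplings only between distinct cells
    J += dJ

mean_model, corr_model = model_moments(h, J)
print("rates    data:", np.round(mean_data, 3), " model:", np.round(mean_model, 3))
print("<s0*s1>  data:", round(corr_data[0, 1], 3), " model:", round(float(corr_model[0, 1]), 3))
```

For the large ensembles discussed in the review, exact enumeration is no longer possible and sampling or mean-field approximations take its place.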
Show Figures

Figure 1. The problem to be solved.
Figure 2. Temporal correlations are important. (a) Activity from many neurons plotted over time. Boxes highlight an ensemble of four neurons over six time bins. (b) Within the boxes, there was activity for four consecutive time bins, bracketed by no activity at the beginning and the end. This is a sequence of length L = 4 (see text). (c) Sequence length distributions from actual data were significantly longer than those produced by random concatenations of states from the model. This suggests that temporal correlations play an important part in determining activity in neuronal ensembles.
Figure 3. Accounting for temporal correlations.
Figure 4. Distribution of states for a model with temporal correlations. (a) An ensemble of four neurons is selected from the raster plot. (b) Here, activity over a span of three time bins (t, t + 1, and t + 2) is considered one state. (c) The distribution of states is plotted for the model and for the data.
Figure 5. Quantifying temporal correlations.
Article
A Dynamic Model of Information and Entropy
by Michael C. Parker and Stuart D. Walker
Entropy 2010, 12(1), 80-88; https://doi.org/10.3390/e12010080 - 7 Jan 2010
Cited by 9 | Viewed by 7903
Abstract
We discuss the possibility of a relativistic relationship between information and entropy, closely analogous to the classical Maxwell electro-magnetic wave equations. Inherent to the analysis is the description of information as residing in points of non-analyticity; yet ultimately also exhibiting a distributed characteristic: additionally analogous, therefore, to the wave-particle duality of light. At cosmological scales our vector differential equations predict conservation of information in black holes, whereas regular- and Z-DNA molecules correspond to helical solutions at microscopic levels. We further propose that regular- and Z-DNA are equivalent to the alternative words chosen from an alphabet to maintain the equilibrium of an information transmission system. Full article
(This article belongs to the Special Issue Information and Entropy)
Show Figures

Figure 1. Information $I$ (integration over the imaginary $ct$-axis) and entropy $S$ (integration over the real $x$-axis) due to a point of non-analyticity (pole) at the space-time position $z_0$.
Figure 2. (a) Right-handed polarisation helical info-entropy wave, propagating in the positive $x_i$ direction given by $\underline{I} \times \underline{S}$. (b) Left-handed polarisation helical info-entropy wave travelling in the same direction.
Article
Redundancy in Systems Which Entertain a Model of Themselves: Interaction Information and the Self-Organization of Anticipation
by Loet Leydesdorff
Entropy 2010, 12(1), 63-79; https://doi.org/10.3390/e12010063 - 6 Jan 2010
Cited by 25 | Viewed by 12402
Abstract
Mutual information among three or more dimensions (μ* = –Q) has been considered as interaction information. However, Krippendorff [1,2] has shown that this measure cannot be interpreted as a unique property of the interactions and has proposed an alternative measure of interaction information based on iterative approximation of maximum entropies. Q can then be considered as a measure of the difference between interaction information and redundancy generated in a model entertained by an observer. I argue that this provides us with a measure of the imprint of a second-order observing system—a model entertained by the system itself—on the underlying information processing. The second-order system communicates meaning hyper-incursively; an observation instantiates this meaning-processing within the information processing. The net results may add to or reduce the prevailing uncertainty. The model is tested empirically for the case where textual organization can be expected to contain intellectual organization in terms of distributions of title words, author names, and cited references. Full article
(This article belongs to the Special Issue Information and Entropy)
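The sign-bearing quantity at the center of the abstract can be computed directly from a three-dimensional contingency table: the mutual information in three dimensions is commonly defined as $\mu^* = H_x + H_y + H_z - H_{xy} - H_{xz} - H_{yz} + H_{xyz}$, with $Q = -\mu^*$. The sketch below uses a made-up toy table for illustration; it shows only this measure, not Krippendorff's iterative maximum entropy alternative discussed in the paper.

```python
import numpy as np

def H(p):
    """Shannon entropy in bits of a (possibly multi-dimensional) probability array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy joint distribution over three binary dimensions (e.g., a given word, author, and
# cited reference present or absent in a document); the counts are invented.
counts = np.array([[[10, 4], [3, 8]],
                   [[2, 6], [7, 20]]], dtype=float)
p_xyz = counts / counts.sum()

p_x, p_y, p_z = p_xyz.sum((1, 2)), p_xyz.sum((0, 2)), p_xyz.sum((0, 1))
p_xy, p_xz, p_yz = p_xyz.sum(2), p_xyz.sum(1), p_xyz.sum(0)

mu_star = H(p_x) + H(p_y) + H(p_z) - H(p_xy) - H(p_xz) - H(p_yz) + H(p_xyz)
print(f"mu* = {mu_star:+.4f} bits,  Q = {-mu_star:+.4f} bits")
```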
Show Figures

Figure 1. Example of positive ternary interaction with Q = 0 [2].
Figure 2. Two matrices for n documents with m authors and k words can be combined into a third matrix of n documents versus (m + k) variables.
Figure 3. Forty-eight title words in rotated vector space [29].
Figure 4. Triads and higher-order coauthorship patterns in Social Studies of Science (2004–2008).
Figure 5. Cosine-normalized network among the 43 title words occurring more than twice in the document set of Social Networks (2006–2008); cosine ≥ 0.2. The size of the nodes is proportionate to the logarithm of the frequency of occurrence; the width of lines is proportionate to the cosine values; colors are based on the k-core algorithm; layout is based on energy minimization in a system of springs [32].
Figure 6. Map based on bibliographic coupling of 395 references in the 102 articles from Social Networks; cosine ≥ 0.5 [30]. For the sake of readability, a selection of 136 nodes (for the partitions 4 ≤ k ≤ 10) is indicated with legends.
Figure 7. Interaction information ($I_{ABC \to AB:AC:BC}$) and remaining redundancy ($-\mu^*$ or $Q$) among the three main components in different dimensions and combinations of dimensions on the basis of Social Networks (2006–2008).
Figure 8. Interaction information ($I_{ABC \to AB:AC:BC}$) and remaining redundancy ($-\mu^*$ or $Q$) among the three main components in different dimensions and combinations of dimensions on the basis of Social Studies of Science (2004–2008).
Article
Imprecise Shannon’s Entropy and Multi Attribute Decision Making
by Farhad Hosseinzadeh Lotfi and Reza Fallahnejad
Entropy 2010, 12(1), 53-62; https://doi.org/10.3390/e12010053 - 5 Jan 2010
Cited by 300 | Viewed by 16100
Abstract
Finding an appropriate weight for each criterion is one of the main issues in Multi Attribute Decision Making (MADM) problems. Shannon's entropy method is one of the various weighting methods discussed in the literature. However, in many real-life problems the data for the decision making process cannot be measured precisely, and other types of data arise, for instance interval data and fuzzy data. The goal of this paper is to extend the Shannon entropy method to imprecise data, especially the interval and fuzzy data cases. Full article
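For precise (crisp) data, the Shannon entropy weighting that the paper generalizes follows a standard recipe: normalize each criterion column, compute its entropy, and give larger weights to criteria with lower entropy (more discriminating power). The sketch below shows that crisp baseline with a made-up decision matrix; the interval and fuzzy extensions developed in the paper are not reproduced.

```python
import numpy as np

# Decision matrix: m alternatives (rows) scored on n criteria (columns); made-up benefit data.
X = np.array([[7.0, 430.0, 0.61],
              [9.0, 390.0, 0.72],
              [6.0, 410.0, 0.65],
              [8.0, 450.0, 0.58]])
m, n = X.shape

P = X / X.sum(axis=0)                       # p_ij: column-wise normalisation
k = 1.0 / np.log(m)
E = -k * np.where(P > 0, P * np.log(P), 0.0).sum(axis=0)   # entropy e_j of each criterion
d = 1.0 - E                                 # degree of diversification of each criterion
w = d / d.sum()                             # entropy weights

print("criterion entropies:", np.round(E, 4))
print("entropy weights    :", np.round(w, 4))
```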
Review
Data Compression Concepts and Algorithms and Their Applications to Bioinformatics
by Özkan U. Nalbantoglu, David J. Russell and Khalid Sayood
Entropy 2010, 12(1), 34-52; https://doi.org/10.3390/e12010034 - 29 Dec 2009
Cited by 37 | Viewed by 10885
Abstract
Data compression at its base is concerned with how information is organized in data. Understanding this organization can lead to efficient ways of representing the information and hence data compression. In this paper we review the ways in which ideas and approaches fundamental to the theory and practice of data compression have been used in the area of bioinformatics. We look at how basic theoretical ideas from data compression, such as the notions of entropy, mutual information, and complexity have been used for analyzing biological sequences in order to discover hidden patterns, infer phylogenetic relationships between organisms and study viral populations. Finally, we look at how inferred grammars for biological sequences have been used to uncover structure in biological sequences. Full article
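One of the simplest compression-inspired quantities mentioned here — mutual information between sequence positions — can be estimated directly from symbol counts. The sketch below computes an average mutual information profile over inter-base gaps for a synthetic DNA-like sequence with a built-in period-3 bias; the sequence, gap range, and function names are illustrative assumptions, not the tools used in the reviewed studies.

```python
import numpy as np
from collections import Counter

def mutual_information(pairs):
    """Mutual information (bits) estimated from a list of symbol pairs."""
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    n = len(pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        mi += (c / n) * np.log2(c * n / (left[a] * right[b]))
    return mi

def ami_profile(seq, max_gap=10):
    """Average mutual information between bases separated by k positions, k = 1..max_gap."""
    return [mutual_information(list(zip(seq, seq[k:]))) for k in range(1, max_gap + 1)]

# Toy sequence with a period-3 compositional bias, loosely mimicking codon structure.
rng = np.random.default_rng(3)
seq = "".join(rng.choice(list("ACGT"), p=p)
              for p in ([0.4, 0.1, 0.4, 0.1], [0.25] * 4, [0.1, 0.4, 0.1, 0.4]) * 2000)

profile = ami_profile(seq)
print("AMI at gaps k = 1..10 (bits):", np.round(profile, 4))
print("gap with the strongest dependence:", int(np.argmax(profile)) + 1)
```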
Show Figures

Figure 1. Plot of the redundancy rate versus $D_2/(D_1 + D_2)$ using the genes available at the time shows a clear segregation of Phage, Bacteria, and Vertebrate sequences.
Figure 2. Inclusion of additional sequences breaks down the segregation observed by Gatlin.
Figure 3. The logo of a number of sequences at the beginning of a gene. The start codon ATG is immediately apparent. The logo was constructed using the software at http://weblogo.threeplusone.com/.
Figure 4. AMI charts for HIV-1 populations isolated from patients who remained asymptomatic. The large number of white pixels generally indicates a high degree of covariation, while "checkerboard" regions indicate specific segments of the envelope protein with correlated mutations [20].
Figure 5. AMI charts for HIV-1 populations isolated from patients who succumbed to AIDS. The preponderance of black pixels indicates a relatively homogeneous population [20].
Figure 6. A block diagram depicting the basic steps involved in a grammar-based compression scheme.
Article
Estimation of Seismic Wavelets Based on the Multivariate Scale Mixture of Gaussians Model
by Jing-Huai Gao and Bing Zhang
Entropy 2010, 12(1), 14-33; https://doi.org/10.3390/e12010014 - 28 Dec 2009
Cited by 15 | Viewed by 8626
Abstract
This paper proposes a new method for estimating seismic wavelets. Assuming a seismic wavelet can be modeled by a formula with three free parameters (scale, frequency and phase), the estimation of the wavelet reduces to determining these three parameters. The phase of the wavelet is estimated by applying a constant-phase rotation to the seismic signal, while the other two parameters are obtained by a Higher-order Statistics (HOS) (fourth-order cumulant) matching method. In order to derive the HOS estimator, the multivariate scale mixture of Gaussians (MSMG) model is applied to formulate the multivariate joint probability density function (PDF) of the seismic signal. In this way, HOS can be represented as a polynomial function of second-order statistics, improving noise robustness and accuracy. In addition, the proposed method works well for short time series. Full article
(This article belongs to the Special Issue Maximum Entropy)
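The constant-phase part of the procedure can be illustrated on synthetic data: rotate the trace through a range of constant phases via its analytic signal and keep the angle that maximizes kurtosis, which is largest when a sparse (super-Gaussian) reflectivity is convolved with a zero-phase wavelet. This is a hedged sketch of that single step only; the scale and frequency estimation via the fourth-order cumulant and the MSMG model are not reproduced, and all signals and parameters are invented.

```python
import numpy as np
from scipy.signal import hilbert

def phase_rotate(signal, angle):
    """Apply a constant phase rotation (radians) via the analytic signal."""
    return np.real(hilbert(signal) * np.exp(1j * angle))

def kurtosis(x):
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2) ** 2)

rng = np.random.default_rng(4)
dt, f0 = 0.002, 30.0
t = np.arange(-0.1, 0.1, dt)
ricker = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)
wavelet = phase_rotate(ricker, np.deg2rad(60.0))         # "true" constant phase to recover

reflectivity = rng.standard_normal(1000) * (rng.random(1000) < 0.05)   # sparse, super-Gaussian
trace = np.convolve(reflectivity, wavelet, mode="same") + 0.01 * rng.standard_normal(1000)

# Scan constant-phase rotations of the trace; kurtosis peaks where the wavelet phase is undone.
angles = np.deg2rad(np.arange(-90, 91))
kurt = [kurtosis(phase_rotate(trace, a)) for a in angles]
best = angles[int(np.argmax(kurt))]
print(f"kurtosis-maximizing rotation ~ {np.degrees(best):.0f} deg "
      f"(expected about -60, i.e., minus the wavelet phase)")
```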
Show Figures

Figure 1. The error behavior of ODCR with different orders.
Figure 2. The change of the performance with wavelet frequency.
Figure 3. The error performance of $\hat{r}_y^{(4)}(y_n^{(2)}, y_{n+m}^{(2)})$ and $\hat{r}_{y_0}^{(4)}(y_n^{(2)}, y_{n+m}^{(2)})$.
Figure 4. (a) Super-Gaussian reflectivity sequence. (b) Reflectivity histogram. (c) Noisy seismic trace. (d) The kurtosis value of the rotated trace versus the rotation angle (the angle is represented as a multiple of π). (e) Cost surface. (f) Wavelet.
Figure 5. Real-data experiment.
Article
Lorenz Curves, Size Classification, and Dimensions of Bubble Size Distributions
by Sonja Sauerbrei
Entropy 2010, 12(1), 1-13; https://doi.org/10.3390/e12010001 - 25 Dec 2009
Cited by 5 | Viewed by 10125
Abstract
Lorenz curves of bubble size distributions and their Gini coefficients characterize demixing processes. Through a systematic size classification, bubble size histograms are generated and investigated concerning their statistical entropy. It turns out that the temporal development of the entropy is preserved although characteristics of the histograms like number of size classes and modality are remarkably reduced. Examinations by Rényi dimensions show that the bubble size distributions are multifractal and provide information about the underlying structures like self-similarity. Full article
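The Lorenz curve and Gini coefficient used throughout the paper follow directly from sorted size data: plot the normalized partial sums of the increasingly sorted sizes and take G = A/(A + B), the area between the curve and the line of equality relative to the whole area under the diagonal. Below is a minimal sketch with invented bubble diameters (the real data come from image analysis of a foam).

```python
import numpy as np

def lorenz_curve(sizes):
    """Lorenz curve points (k/n, A_k/A_n) for the increasingly sorted sizes, with A_0 = 0."""
    a = np.sort(np.asarray(sizes, dtype=float))
    cum = np.concatenate(([0.0], np.cumsum(a)))
    return np.arange(len(a) + 1) / len(a), cum / cum[-1]

def gini(sizes):
    """Gini coefficient G = A / (A + B); since A + B = 1/2, G = 1 - 2 * (area under the curve)."""
    x, y = lorenz_curve(sizes)
    area_under = float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)
    return 1.0 - 2.0 * area_under

# Made-up bubble diameters (mm): equal sizes give G = 0, strong demixing pushes G toward 1.
early = np.full(40, 1.0)                                    # nearly uniform foam
late = np.concatenate([np.full(35, 0.5), np.full(5, 8.0)])  # a few large bubbles dominate
print(f"Gini (early, uniform sizes): {gini(early):.3f}")
print(f"Gini (late, demixed sizes) : {gini(late):.3f}")
```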
Show Figures

Figure 1. A Lorenz curve (blue) with the line of equality (black). The Gini coefficient is calculated as the ratio of the areas $A/(A+B)$.
Figure 2. Left: Bubble image. Right: By smoothly varying from cyan to magenta, the time development of the sum of the bubble diameters is shown.
Figure 3. Left: The plot of the partial sum vectors of the increasingly sorted size vectors $a$. The plot is given by $(k, A_k)$, where $k$ corresponds to the bubble individuals ($k = 0, 1, \dots, n$), $A_k = \sum_{i=1}^{k} a_i$ is the partial sum of the bubble diameters, and $A_0$ is set to zero. The temporal development is illustrated by the coloration of the curves from cyan ($t_0$) to magenta ($t_{end}$). Middle: The Lorenz curves of the size vectors $a$. The plot is given by the bubble individuals $k = 0, 1, \dots, n$ and the normalized partial sum vectors $A_k/A_n$ of the increasingly sorted size vectors $a$, i.e., $(k, A_k/A_n)$, with $A_0 = 0$. The coloring gives the temporal development (cyan $= t_0$, magenta $= t_{end}$). The line of equality is shown in blue. Right: The temporal development of the Gini coefficient $G$ of the Lorenz curves (middle plot). The Gini coefficient increases monotonically in time.
Figure 4. The increasingly sorted bubble sizes of the initial bubble size vector $a(t_0)$. On the left the classification is formed by the multiples of $q\,\min(a(t_0))$ with $q = 1$ and $j = 1, \dots, 39$; in the middle the classification is set by $q = 5$, $j = 1, \dots, 8$; and on the right by $q = 10$, $j = 1, \dots, 4$.
Figure 5. The corresponding histograms of the classification of Figure 4. Left: $q = 1$; middle: $q = 5$; right: $q = 10$.
Figure 6. Left: The temporal development of the Shannon entropy (1) of the histograms for $q = 1$ (black), $q = 5$ (red), $q = 10$ (blue), and $q = 40$ (magenta). Right: The temporal development of the Shannon entropy (1) of the histograms for $q = 1$ (black), $q = 0.5$ (light blue), $q = 0.25$ (cyan), and $q = 0.1$ (mauve).
Figure 7. Development of the Shannon entropy of the histograms depending on the logarithmus dualis of the scaling factor $q$, i.e., $(\log_2 q, I(p))$. One sees that over a certain section of $\log_2 q$ the slope $\Delta I / \Delta \log_2 q$ remains constant. The coloration of the plot gives the time dependence as before.
Figure 8. The temporal development of the minimum integer $q$ factor of the bubble size distributions.
Figure 9. Left: The number of occupied classes (positive frequencies) $N(s)$ is plotted against the factor $\log(s)$, where $s = \lceil \max(a(t))/\min(a(t_0)) \rceil / q$; the mean value of the fractal dimension is $\overline{D_0} = 0.948$ with standard deviation $\sigma_0 = 0.031$. Middle: The plot for the information dimension, $\overline{D_1} = 0.921$ with $\sigma_1 = 0.023$. Right: The plot $(\log s, -\log \sum_{i=0}^{N(s)} p_i^2)$, where $\overline{D_2} = 0.863$ and $\sigma_2 = 0.052$. The 23rd distribution is a maverick and is omitted for computing the fractal dimension. The dimensions are defined in the interval $s \in [3, 30]$ (black lines).
Figure 10. Construction of a self-similar distribution by iterated bisecting with $p = 0.66$. From top to bottom the scaling factor is $s = 1, 2, 4, 8$. The corresponding Rényi dimensions are $D_0 = 1$, $D_1 = 0.925$, $D_2 = 0.859$.