Review

Entropy of Neuronal Spike Patterns

by
Artur Luczak
Canadian Centre for Behavioural Neuroscience, University of Lethbridge, 4401, Lethbridge, AB T1K 3M4, Canada
Entropy 2024, 26(11), 967; https://doi.org/10.3390/e26110967
Submission received: 7 October 2024 / Revised: 4 November 2024 / Accepted: 10 November 2024 / Published: 11 November 2024
(This article belongs to the Section Multidisciplinary Applications)

Abstract

Neuronal spike patterns are the fundamental units of neural communication in the brain, a communication process that is still not fully understood. Entropy measures offer a quantitative framework to assess the variability and information content of these spike patterns. By quantifying the uncertainty and informational content of neuronal patterns, entropy measures provide insights into neural coding strategies, synaptic plasticity, network dynamics, and cognitive processes. Here, we review basic entropy metrics and then provide examples of recent advancements in using entropy as a tool to improve our understanding of neuronal processing. We focus especially on studies of critical dynamics in neural networks and on the relation of entropy to predictive coding and cortical communication. We highlight the necessity of expanding entropy measures from single neurons to encompass multi-neuronal activity patterns, as cortical circuits communicate through coordinated spatiotemporal activity patterns, called neuronal packets. We discuss how the sequential and partially stereotypical nature of neuronal packets influences the entropy of cortical communication. Stereotypy reduces entropy by enhancing reliability and predictability in neural signaling, while variability within packets increases entropy, allowing for greater information capacity. This balance between stereotypy and variability supports both robustness and flexibility in cortical information processing. We also review challenges in applying entropy to analyze such spatiotemporal neuronal spike patterns, notably the “curse of dimensionality” in estimating entropy for high-dimensional neuronal data. Finally, we discuss strategies to overcome these challenges, including dimensionality reduction techniques, advanced entropy estimators, sparse coding schemes, and the integration of machine learning approaches. Thus, this work summarizes the most recent developments on how entropy measures contribute to our understanding of the principles underlying neural coding.

1. Introduction

Neurons communicate through electrical impulses known as action potentials or spikes, which serve as the fundamental units of neural signaling [1,2]. Sequences of these spikes over time, referred to as spike trains, encode information via the timing and frequency of spikes [3]. The precise temporal patterns within these spike trains are crucial for accurate information transmission across neural circuits [4,5]. Variations in these parameters significantly affect how sensory information is encoded and how motor commands are executed. For instance, the temporal structuring of spikes into bursts can enhance perceived sensory intensity, even when the overall spike rate remains constant [6]. Additionally, the reliability of spike timing plays a vital role in encoding complex stimuli, such as conspecific vocalizations, enabling neurons to transmit significant amounts of information beyond what is conveyed by spike count alone [7,8]. To better understand how neurons encode and transmit information, the concept of entropy from information theory has been applied to neural spike patterns [9,10]. Entropy quantifies the uncertainty or variability within a system, serving as a measure of information content [11]. In neuroscience, entropy is used to assess the variability of spike trains, providing insights into the amount of information that neuronal firing patterns can convey [12].
Understanding entropy in spike patterns is crucial for understanding the efficiency of neural coding mechanisms [2,13]. By quantifying the information content of neuronal responses, entropy offers a framework to analyze both the reliability and variability inherent in neuronal signaling [14]. High entropy in spike trains indicates a high degree of variability and potential information richness, while low entropy suggests more predictable patterns that could reflect either reliable signaling or reduced informational capacity [15].
However, focusing solely on individual spike trains may overlook the complex dynamics of cortical circuits, which communicate through coordinated patterns of activity involving thousands of neurons simultaneously. These high-dimensional spatiotemporal patterns are constrained by the underlying synaptic connectivity, resulting in only a subset of possible activity configurations being utilized for neural communication [16]. Consequently, information in the cortex appears to be conveyed as variations from common templates of activity rather than through entirely unrelated patterns for different stimuli [17]. This necessitates expanding entropy measures from single neurons to encompass multi-neuronal activity, a task that presents significant computational and theoretical challenges. Quantifying entropy in such complex neural data is essential for a deeper understanding of cortical information processing but requires novel approaches to handle the high dimensionality and interdependencies of neuronal networks. In the following section, we explore the concept of neuronal spike packets as a framework for understanding these coordinated activity patterns and their implications for neural coding.

Neuronal Spike Packets

An emerging concept in neuroscience is that cortical circuits communicate using coordinated patterns of neuronal activity, known as packets [16,18]. These neural packets are sequential bursts of spiking activity lasting approximately 50–200 milliseconds and are partially stereotypical in nature. As illustrated in Figure 1A, during deep sleep, these packets occur sporadically, with neurons firing in a stereotyped sequential pattern within each packet. In the awake state, packets occur more frequently, indicating increased information transmission while maintaining similar temporal relationships between neurons as in the sleep state (Figure 1B). This suggests that instead of continuous signaling, cortical communication occurs by sending discrete packets of activity, allowing for efficient and flexible information transmission across neural networks [19,20]. Neural packets consist of organized sequences of spikes across populations of neurons, containing both stereotypical structures and variable components that convey specific information [17]. The possible spiking patterns that a local neural circuit can produce are constrained by the neurons’ connectivity and intrinsic cellular properties. Consequently, certain activity patterns are more likely to emerge than others (Figure 1C, left panel). This idea can be visualized geometrically by representing each potential population spiking pattern as a single point within a space (Figure 1C, center panel). Experimentally, it was observed that spontaneous patterns occupy only a small subregion of this entire space of possible patterns [17]. Stimulus-evoked patterns are also limited by the same circuit constraints and form subspaces within the spontaneous pattern space. Each type of stimulus leads to variations in neuronal firing rates and, to a lesser extent, differences in spike timing, while the overall structure of the activity packet is preserved (Figure 1C, right panel) [18]. Thus, the stereotypical aspect refers to the consistent and repeatable patterns of spike activity observed across different packets, which likely reflect the underlying network architecture and synaptic connectivity. Variability arises from differences in spike timing and count between packets, enabling the encoding of diverse information within a consistent framework [16].
The partially stereotypical nature of these packets has significant implications for the entropy of cortical communication. Stereotypy reduces entropy by enhancing reliability and predictability in neural signaling, ensuring that essential information is consistently transmitted. Conversely, variability within packets increases the entropy of the system, reflecting greater information capacity and allowing for the transmission of specific details pertinent to sensory inputs or motor commands. This balance between stereotypy and variability enables cortical circuits to maintain both robustness and flexibility in information processing.

2. Quantifying Entropy in Spike Patterns

Understanding the informational content of neuronal spike patterns requires precise quantification methods from information theory. Several techniques have been adapted to neural data to measure entropy and mutual dependencies, providing insights into how neurons encode and transmit information.
Shannon Entropy: Shannon entropy is a fundamental measure introduced by Claude Shannon in 1948 [9] to quantify the uncertainty or randomness in a set of possible outcomes. In neuroscience, Shannon entropy is used to measure the variability of spike trains by calculating the average information content per spike. To compute it, spike trains are often discretized into binary sequences over small time bins, indicating the presence (1) or absence (0) of a spike in each bin. The probability distribution of these binary patterns is then used in the entropy formula as follows:
$$H = -\sum_{i} p_i \log_2 p_i$$
where $p_i$ is the probability of occurrence of each distinct spike pattern $i$. A higher Shannon entropy indicates greater variability and potential information capacity in the neuron’s firing patterns. This method helps researchers assess the diversity of neural responses and the neuron’s ability to encode different stimuli [2,12].
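As a minimal illustration (not taken from any study cited here; the simulated data, population size, and bin size are assumptions), the plug-in Shannon entropy of binary population spike patterns can be estimated as follows:

```python
# Minimal sketch: plug-in Shannon entropy of binary population spike patterns.
# Simulated data, population size, and firing probability are illustrative assumptions.
import numpy as np

def pattern_entropy(binary_patterns):
    """Entropy (bits) of the empirical distribution of 0/1 population patterns.

    binary_patterns: array of shape (n_bins, n_neurons)."""
    # Encode each binary pattern as an integer so occurrences can be counted.
    codes = binary_patterns.dot(1 << np.arange(binary_patterns.shape[1]))
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()          # H = -sum_i p_i log2(p_i)

rng = np.random.default_rng(0)
# 10 neurons, 50,000 time bins, sparse independent firing (~5% per bin).
patterns = (rng.random((50_000, 10)) < 0.05).astype(int)
print(f"Estimated pattern entropy: {pattern_entropy(patterns):.3f} bits")
```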
Entropy Rate: The entropy rate extends the concept of Shannon entropy to account for temporal correlations in spike trains. It measures the average uncertainty per unit of time, considering the dependencies between spikes at different times. Calculating the entropy rate involves estimating the joint probabilities of sequences of spikes over multiple time bins, which can capture patterns not evident when considering spikes independently. The entropy rate of a stochastic process is defined as follows:
$$H_{\mathrm{rate}} = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \ldots, X_n)$$
where $H(X_1, X_2, \ldots, X_n)$ is the joint entropy of the spike sequence over $n$ time steps. This formulation reflects how the uncertainty grows with longer sequences of spikes and accounts for the dependencies across time, unlike the traditional Shannon entropy, which treats individual events independently. This method is particularly useful for neurons exhibiting burst firing or temporal coding strategies, as it reflects the information carried by the timing and patterns of spikes over time [12,21].
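In practice, the entropy rate is often approximated with the “direct method” of computing word entropies for increasing word lengths and examining $H_n/n$; a minimal sketch is below (the simulated Poisson-like train and the word lengths chosen are assumptions). For an independent train, $H_n/n$ stays flat; temporal correlations would make it decrease with $n$, and its limiting value approximates the entropy rate.

```python
# Minimal sketch of the "direct method" for the entropy rate: entropy of binary
# words of increasing length n, normalized per bin. Data are simulated.
import numpy as np

def word_entropy(train, n):
    """Entropy (bits) of the distribution of length-n binary words."""
    k = len(train) // n
    codes = train[:k * n].reshape(k, n).dot(1 << np.arange(n))
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
train = (rng.random(200_000) < 0.05).astype(int)       # ~5% spike probability per bin
for n in (1, 2, 4, 8):
    print(f"word length {n}: H_n / n = {word_entropy(train, n) / n:.4f} bits per bin")
```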
Mutual Information: Mutual information measures the amount of information shared between two variables—in this case, the stimulus (S) and the neuronal response (R). It quantifies how much knowing the stimulus reduces the uncertainty about the response and vice versa. The mutual information is calculated using the joint probability distribution of stimuli and responses as follows:
$$I(S;R) = \sum_{s,r} p(s,r) \log_2 \frac{p(s,r)}{p(s)\,p(r)}$$
where p(s,r) is the joint probability of stimulus s and response r, and p(s) and p(r) are their marginal probabilities. Mutual information captures both linear and nonlinear dependencies between stimuli and responses, making it a powerful tool for understanding neural coding efficiency. It can provide a direct measure of how much information about the stimulus is conveyed by the neuron’s spike patterns [11,13].
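As a sketch (the simulated stimulus–response pairs, the four stimulus classes, and the Poisson responses are assumptions), mutual information can be estimated from the empirical joint distribution of a discrete stimulus and a discretized response:

```python
# Minimal sketch: plug-in mutual information between a discrete stimulus and a
# discretized neuronal response (spike count). Simulated data are an assumption.
import numpy as np

def mutual_information(stim, resp):
    """Plug-in estimate of I(S;R) in bits for two discrete 1-D arrays."""
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((s_vals.size, r_vals.size))
    np.add.at(joint, (s_idx, r_idx), 1)                # joint counts
    p_sr = joint / joint.sum()
    p_s = p_sr.sum(axis=1, keepdims=True)
    p_r = p_sr.sum(axis=0, keepdims=True)
    nz = p_sr > 0
    return (p_sr[nz] * np.log2(p_sr[nz] / (p_s * p_r)[nz])).sum()

rng = np.random.default_rng(2)
stim = rng.integers(0, 4, size=5_000)                  # four stimulus classes
resp = rng.poisson(2 + 3 * stim)                       # spike count depends on stimulus
print(f"I(S;R) ~ {mutual_information(stim, resp):.3f} bits")
```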
Mutual information can also be expressed in terms of entropy as follows:
$$I(S;R) = H(R) - H(R \mid S)$$
where $H(R)$ is the entropy of the response, and $H(R \mid S)$ is the conditional entropy (see below). This expression shows that mutual information quantifies how much uncertainty about the neuronal response is reduced by knowing the stimulus. Thus, conditional entropy provides insight into how unpredictable the response remains even with knowledge of the stimulus, directly complementing mutual information by highlighting the variability that is not explained by the stimulus.
Conditional Entropy: Conditional entropy can quantify the remaining uncertainty about the neuronal response given knowledge of the stimulus. It is defined as follows:
$$H(R \mid S) = -\sum_{s} p(s) \sum_{r} p(r \mid s) \log_2 p(r \mid s)$$
where $p(r \mid s)$ is the probability of response $r$ given stimulus $s$. A lower conditional entropy indicates that the response is more predictable given the stimulus, suggesting higher reliability in encoding that stimulus feature [10]. In contrast, if conditional entropy is high, the neuron’s response varies significantly across instances of the same stimulus, indicating a noisier or less reliable encoding.
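The sketch below (with simulated data as an assumption) computes $H(R \mid S)$ directly and checks numerically that $I(S;R) = H(R) - H(R \mid S)$:

```python
# Minimal sketch: conditional entropy H(R|S) and a numerical check of
# I(S;R) = H(R) - H(R|S). Simulated stimuli and responses are assumptions.
import numpy as np

def entropy_bits(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(3)
stim = rng.integers(0, 4, size=20_000)                 # four stimulus classes
resp = np.clip(rng.poisson(1 + stim), 0, 15)           # spike counts, capped at 15

joint = np.zeros((4, 16))
np.add.at(joint, (stim, resp), 1)
p_sr = joint / joint.sum()
p_s, p_r = p_sr.sum(axis=1), p_sr.sum(axis=0)

h_r = entropy_bits(p_r)
# H(R|S) = sum_s p(s) * H(R | S = s)
h_r_given_s = sum(p_s[s] * entropy_bits(p_sr[s] / p_s[s]) for s in range(4) if p_s[s] > 0)
nz = p_sr > 0
mi = (p_sr[nz] * np.log2(p_sr[nz] / np.outer(p_s, p_r)[nz])).sum()
print(f"H(R) = {h_r:.3f}, H(R|S) = {h_r_given_s:.3f}, H(R) - H(R|S) = {h_r - h_r_given_s:.3f}")
print(f"Direct I(S;R) estimate = {mi:.3f} bits")
```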
Kullback–Leibler Divergence: While not an entropy measure per se, Kullback–Leibler (KL) divergence is often used to quantify the difference between two probability distributions. In neural data analysis, KL divergence can compare the observed spike train distribution to a reference model, such as a Poisson process. It is calculated as follows:
$$D_{\mathrm{KL}}(P \parallel Q) = \sum_{i} p_i \log_2 \frac{p_i}{q_i}$$
where $p_i$ is the observed probability distribution, and $q_i$ is the reference distribution. KL divergence indicates how much the neural firing patterns deviate from the reference, highlighting unique features of neuronal coding that may carry important information [22].
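As an illustration (the simulated “bursty” spike counts and the Poisson reference matched by mean are assumptions), an observed spike-count distribution can be compared to a Poisson model as follows:

```python
# Minimal sketch: KL divergence between an observed spike-count distribution
# and a Poisson reference with the same mean. Simulated data are an assumption.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
# "Bursty" counts: mostly quiet bins mixed with occasional high-rate bins.
counts = np.where(rng.random(10_000) < 0.7,
                  rng.poisson(0.5, 10_000),
                  rng.poisson(6.0, 10_000))

values = np.arange(counts.max() + 1)
p_obs = np.bincount(counts, minlength=values.size) / counts.size
q_ref = poisson.pmf(values, mu=counts.mean())          # Poisson reference, same mean

mask = p_obs > 0
dkl = (p_obs[mask] * np.log2(p_obs[mask] / q_ref[mask])).sum()
print(f"D_KL(observed || Poisson) = {dkl:.3f} bits")
```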
Measuring entropy in dynamical systems:
A dynamical system refers to a system where a set of variables evolves over time according to specific deterministic or probabilistic rules. These systems are widely studied across physics, biology, and neuroscience. In the analysis of such systems, entropy-based measures capture the degree of disorder and complexity in the evolving states. These measures are particularly useful for distinguishing between predictable and chaotic dynamics. Entropy measures like approximate entropy (ApEn) and sample entropy (SampEn) are prominent tools used to quantify the regularity and unpredictability of time series data [23,24]. Approximate entropy measures the likelihood that similar patterns in a time series will remain similar at the next point in the sequence, providing a robust estimate of system complexity in noisy environments. However, ApEn tends to overestimate the amount of regularity, leading to the development of SampEn, which refines this measure by excluding self-matching patterns and being less sensitive to data length and parameter settings [24,25,26]. These measures have been employed extensively in physiology and neuroscience, including applications, such as analyzing electroencephalogram (EEG) data, to assess changes in brain states, such as transitions between sleep stages or during cognitive load [27], where higher entropy values correspond to more complex and less predictable dynamics. For instance, using SampEn, researchers can distinguish between healthy and pathological brain states, where reduced entropy might signal a loss of complexity associated with diseases, such as schizophrenia or epilepsy [25]. These methods are particularly valuable in systems where traditional linear methods fail to capture non-stationary and nonlinear behaviors, underscoring the importance of entropy as a bridge between statistical descriptions and the dynamic evolution of complex systems [26].
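A compact implementation of sample entropy is sketched below (the choices m = 2 and r = 0.2 × SD follow common conventions rather than any specific study cited here, and the brute-force pairwise comparison is only practical for short series):

```python
# Minimal sketch of sample entropy (SampEn) for a 1-D time series.
# m = 2 and r = 0.2 * SD are conventional, not prescriptive, choices.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()
    def match_count(length):
        # Templates of the given length; the same number (n - m) is used for
        # both m and m + 1 so that the two counts are comparable.
        templ = np.array([x[i:i + length] for i in range(n - m)])
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)  # Chebyshev distance
        return (d <= r).sum() - len(templ)             # exclude self-matches
    b = match_count(m)       # similar template pairs of length m
    a = match_count(m + 1)   # pairs that remain similar at length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(5)
t = np.linspace(0, 20 * np.pi, 1000)
regular = np.sin(t)                                    # predictable signal
noisy = regular + rng.normal(0, 0.5, t.size)           # same signal plus noise
print(f"SampEn(regular) = {sample_entropy(regular):.3f}")
print(f"SampEn(noisy)   = {sample_entropy(noisy):.3f}")
```

As expected from the definition, the noisy series yields a higher SampEn value than the regular one, reflecting its lower predictability.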

3. Entropy as a Tool for Understanding Neuronal Processing

Entropy, as a measure of uncertainty or variability, has become an indispensable tool in unraveling the complexities of neuronal processing and neural coding strategies [11,13]. By quantifying the informational content of neuronal spike patterns, entropy enables researchers to dissect how neurons encode, transmit, and process information through their spiking activity [2]. Many studies have leveraged entropy-based methods to explore various facets of neural dynamics and neuronal information processing, as illustrated in the examples presented below.
Several studies have used entropy measures to investigate neural coding strategies. For example, Strong et al. [12] analyzed the entropy of spike trains in the visual system of flies, demonstrating how neurons maximize information transmission by balancing variability and precision. Similarly, DeWeese et al. [14] applied entropy calculations to auditory cortex neurons, revealing that neurons can transmit information with high temporal precision, thus enhancing the efficiency of neural codes. These applications highlight the relevance of entropy as a tool for understanding the balance between reliability and variability in neuronal communication, ultimately shedding light on how the brain processes and encodes information.
An important application of entropy in neuroscience is the investigation of critical dynamics in spiking neuron data. The concept of a critical regime refers to a balanced state between two extremes: one where neural activity spreads uncontrollably and another where activity quickly dies out. Operating at criticality allows the brain to optimize its capacity for information processing, enhancing both responsiveness and flexibility in neural communication [28]. Recent work by Lotfi et al. further illuminates this concept by identifying criticality signatures in cortical states using maximum entropy models in anesthetized rat brains. By segmenting data based on spiking variability, they observed that critical dynamics emerge within an intermediate range of variability, suggesting that cortical state shifts influence criticality and associated information processing. Their findings propose a universal dynamic, where the normalized distance to criticality collapses across cortical states, supporting a phase transition model within neural systems [29]. Serafim et al. [30] also used a maximum entropy approach based on firing rates to identify signatures of criticality in computational models and data from cortical neurons. Their results showed that neural networks exhibit behaviors indicative of phase transitions—abrupt changes in system dynamics—supporting the hypothesis that the brain may adjust its activity to operate near criticality for optimal functioning. Similarly, it has been shown that statistical complexity, measured using symbolic information theory, is maximized near this critical point in cortical spiking data [31]. By quantifying complexity across synchronized and desynchronized states, the findings revealed that complexity peaks at an intermediate state, aligning with the criticality hypothesis and suggesting an optimal balance between order and disorder for neural communication [31]. These studies highlight the utility of entropy-based models in capturing the complex dynamics of brain networks, showing that the brain’s tendency to operate near a critical point may be fundamental for efficient information transmission and adaptability in changing environments.
Entropy measures have also been instrumental in understanding the quality and limitations of neural models, particularly in describing large cortical populations. Olsen et al. [32] investigated the performance of pairwise maximum entropy (PME) models in capturing the spiking activity of large populations of neurons across various cortical areas. They found that while PME models perform well for small population sizes (N < 20), their performance diminishes for larger populations, indicating that these models may not adequately capture the higher-order interactions present in large neural networks. This limitation highlights the need for more sophisticated entropy-based models capable of accommodating the complexity of large-scale neuronal interactions [33,34].
Moreover, entropy-based methods have been applied to analyze pattern separation in neural circuits by leveraging concepts from information geometry. Information geometry is a mathematical framework that studies the relationships between probability distributions as points on a geometric surface, known as a manifold. In the context of neural activity, each point on this manifold represents a possible pattern of neuronal firing. Wang et al. [35] modeled pattern separation as the transformation of these patterns on a manifold, where small changes in input coordinates result in large geometric distances between output patterns. This reflects how even subtle differences in neural inputs can lead to distinct outputs, facilitating pattern separation. Using a two-neuron system, the authors demonstrated that existing similarity indices—commonly used to quantify how neural patterns differ—are highly sensitive to firing rate changes but fail to adequately capture differences in synchrony between neurons. This gap indicates the need for more robust entropy-based measures capable of capturing both firing rates and the temporal coordination of spikes, as both are critical for accurately quantifying neural information transmission [36,37].
Entropy also contributes to the development of computational models for interpreting neuronal data. Bardella et al. [38] introduced a mathematical framework based on lattice field theory to analyze neural systems, expanding the maximum entropy model to account for the time evolution of neural networks. Lattice field theory, originally developed in particle physics, represents complex systems as grids or lattices, where each point corresponds to a state of the system at a particular location and time. In neuroscience, this framework helps model neurons as discrete units interacting over time, similar to how physical particles interact on a lattice. Using this approach, Bardella et al. [38] captured both the spatial and temporal dynamics of neural networks, allowing for a more comprehensive analysis of collective neuronal behavior. Their methods enable researchers to interpret empirical observations from chronic neural interfaces—such as spike rasters—within a unified framework that links local neural interactions to broader network dynamics. This blending of concepts from particle physics and neuroscience offers new insights into brain processes, making it possible to predict and simulate complex neural activity with greater accuracy [33,34].
An example of information encoding in neuronal patterns was shown by Stasenko and Kazantsev [39], who investigated a mathematical model of a spiking neural network interacting with astrocytes. They found that astrocytic modulation prevented stimulation-induced hyperexcitation and non-periodic bursting activity, allowing the network to restore input images supplied during stimulation. This suggests that astrocytes play a role in homeostatic regulation of neuronal activity, with entropy measures capturing the effects of such modulation on the complexity and information content of neural signals [40,41].
An important consideration in applying entropy measures to neural data is ensuring that the temporal scales accurately capture underlying neural dynamics. Multiscale Entropy (MSE) is a method designed to assess how irregular a signal is across different time scales. Fine scales capture fast, small fluctuations, while coarse scales reflect slower, broader changes over time. This multiscale approach is particularly suitable for analyzing complex neural dynamics because neural systems can operate over a wide range of temporal and spatial scales. Traditional single-scale entropy measures may fail to capture this richness, making MSE a valuable tool for understanding the intricate patterns present in high-dimensional neural data. This method is intended to complement other neural measures, like signal variance and spectral power, by capturing nonlinear aspects of brain activity. Kosciessa et al. [42] critically evaluated MSE using simulated and real EEG data. Their study revealed that MSE’s results are often influenced by spectral power (the strength of different frequency components), leading to potential misinterpretations. Specifically, they found that coarse MSE scales, which should reflect slow dynamics, were biased by high-frequency components, while fine MSE scales, expected to capture fast dynamics, were strongly affected by low-frequency activity. This happens because MSE uses a similarity threshold to define patterns, which does not always align with the timescale of interest. This overlap complicates the interpretation of MSE—what looks like irregularity at one scale might actually reflect activity from a different frequency range. To address these issues, Kosciessa et al. proposed adjustments to reduce these biases, improving the precision of scale-specific entropy estimates. Their work underscores the importance of considering the interactions between entropy measures and the spectral properties of neural signals, advocating for best practices to ensure valid interpretations across time scales.
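The core of MSE is the coarse-graining step; a minimal sketch is shown below (the random-walk test signal and the scales chosen are assumptions), with sample entropy then computed on each coarse-grained series, for example, using the SampEn sketch given earlier:

```python
# Minimal sketch of the multiscale entropy (MSE) coarse-graining step: average
# the signal within non-overlapping windows of increasing size, then compute an
# entropy measure (e.g., sample entropy) at each scale. Data are simulated.
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window averages of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(6)
signal = np.cumsum(rng.normal(size=4000))              # slow, drifting "1/f-like" signal
for scale in (1, 2, 4, 8, 16):
    cg = coarse_grain(signal, scale)
    print(f"scale {scale:>2}: {cg.size} coarse-grained samples "
          f"(feed these into a sample-entropy estimator)")
```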
In addition, entropy measured at multiple scales has been utilized to study the impact of excitation–inhibition (E/I) balance on neural dynamics. Park et al. [43] examined how locally altered E/I balance affects neural connectivity, complexity, and information transmission. Their results showed that an increased E/I ratio strengthens excitatory connections but reduces the complexity of neural activity and decreases information transmission between neuron groups. This indicates that entropy can reflect changes in neural network dynamics resulting from imbalances in excitation and inhibition, which may have implications for understanding neuropsychiatric disorders characterized by altered E/I balance [44,45].
Entropy has also been instrumental in assessing the complexity of neural activity in relation to cognitive functions. Vivekanandhan et al. [46] analyzed spiking activity from middle temporal area (MT) neurons and found that Shannon entropy and conditional entropy were capable of capturing working memory content. This suggests that complexity measures derived from entropy can capture the modulation of neural activity associated with cognitive processes, such as working memory, offering potential biomarkers for cognitive states [47,48].
Entropy is also used in image processing, where higher entropy values denote greater shape irregularity [49]. Thus, entropy measures can also be applied to images of neurons, providing valuable insights into their complexity and spatial distribution. A neuron with a highly branched, irregular dendritic structure would exhibit higher entropy, while a simpler, more uniform structure would have lower entropy. Analyses of neuronal patterns not only can provide a quantitative measure of dendritic complexity but also can help to study environmental factors contributing to the diversity of neuronal morphologies observed in the brain [50,51,52].
Predictive coding: The brain is increasingly conceptualized as a prediction machine that continuously generates and updates internal models to anticipate sensory inputs and minimize prediction errors [53,54,55,56,57]. Predictive coding frameworks posit that the brain actively infers the causes of its sensations by reducing the discrepancy between expected and actual sensory input [58,59]. However, this process is not solely about minimizing entropy or uncertainty in neural representations. Instead, it involves the broader principle of minimizing variational free energy, which balances the trade-off between the accuracy of sensory predictions and the complexity of the internal models generating them [53]. Variational free energy comprises two key components: prediction error (reflecting accuracy) and complexity (reflecting the simplicity of the model). By minimizing free energy, the brain strives to optimize this balance, ensuring that its models are precise enough to explain sensory inputs without becoming unnecessarily complex [53,60]. Entropy reduction is part of this optimization but represents just one facet of the overarching goal of minimizing prediction error [61].
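In a common general formulation (stated here as a standard textbook decomposition rather than as the specific equations of the studies cited above), the variational free energy $F$ for observations $o$ and a recognition density $q(\theta)$ over hidden causes $\theta$ separates into complexity minus accuracy:
$$F = \underbrace{D_{\mathrm{KL}}\big[q(\theta) \,\|\, p(\theta)\big]}_{\text{complexity}} \;-\; \underbrace{\mathbb{E}_{q(\theta)}\big[\log p(o \mid \theta)\big]}_{\text{accuracy}}$$
Minimizing $F$ therefore favors internal models that explain sensory input well (high accuracy) while deviating as little as possible from prior expectations (low complexity).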

4. Challenges and Future Directions

One of the significant challenges in analyzing the entropy of neuronal spike patterns, particularly within neuronal packets, is the “curse of dimensionality”. Neuronal packets can involve coordinated activity across vast populations of neurons—potentially millions—over time scales of hundreds of milliseconds [16,62]. To capture the intricate spatiotemporal dynamics within these packets, researchers often divide the data into fine temporal bins ranging from 1 to 5 milliseconds [17]. This granular approach results in a high-dimensional data space, where each neuron’s activity in each time bin represents a separate dimension. Consequently, estimating entropy in such a high-dimensional space becomes increasingly complex and less reliable due to the exponential growth in computational and data requirements [63].
The curse of dimensionality poses several problems for entropy estimation in neuronal data. Firstly, as dimensionality increases, the volume of the data space expands exponentially, causing data points (spike patterns) to become sparser relative to the space they occupy [64]. This sparsity makes it challenging to obtain accurate probability distributions necessary for entropy calculations since traditional estimation methods require an impractically large amount of data to sample the space adequately [65,66]. For instance, in a system with just 100 neurons binned over 100 time points, the number of possible spike patterns exceeds the number of atoms in the observable universe, rendering exhaustive sampling infeasible [67].
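To make this concrete with the numbers above (assuming binary activity per bin for simplicity):
$$2^{100 \times 100} = 2^{10{,}000} \approx 10^{3010} \gg 10^{80} \;(\text{estimated atoms in the observable universe}),$$
so even a vanishingly small fraction of this pattern space can ever be sampled experimentally.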
Moreover, high dimensionality affects the reliability of entropy measures due to increased variance and bias in estimators. Estimators such as the plug-in method or histogram-based approaches become less effective because they suffer from bias when data are insufficient to populate the high-dimensional bins [12,68]. Nearest neighbor estimators, while more data efficient, also face challenges as distances between points become less meaningful in high dimensions [69]. These issues can lead to inaccurate assessments of the informational content of neuronal spike patterns, hindering our understanding of neural coding mechanisms.
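The severity of this small-sample bias is easy to demonstrate; in the sketch below (a uniform distribution over 1024 patterns is an arbitrary assumption), the plug-in estimate badly underestimates the true 10 bits until the sample size greatly exceeds the number of possible patterns:

```python
# Minimal sketch: downward bias of the plug-in (histogram) entropy estimator
# when samples are scarce relative to the number of possible patterns.
import numpy as np

rng = np.random.default_rng(9)
n_patterns = 1024                                      # true entropy = 10 bits (uniform)
for n_samples in (100, 1_000, 10_000, 100_000):
    samples = rng.integers(0, n_patterns, size=n_samples)
    counts = np.bincount(samples, minlength=n_patterns)
    p = counts[counts > 0] / n_samples
    h = -(p * np.log2(p)).sum()
    print(f"N = {n_samples:>6}: plug-in estimate = {h:5.2f} bits (true = 10.00)")
```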
To address these challenges, several strategies have been proposed to mitigate the curse of dimensionality in entropy estimation. One approach is to employ dimensionality reduction techniques that project high-dimensional data onto a lower-dimensional representation while preserving essential features of the data [70,71]. Methods such as principal component analysis (PCA) or more advanced nonlinear techniques, like t-distributed stochastic neighbor embedding (t-SNE), can reduce the effective dimensionality, making entropy estimation more tractable [72]. For example, in studies of neuronal patterns, applying methods like PCA can identify principal components that capture the majority of variance in neural activity, thereby simplifying the entropy calculation without significant loss of information [73].
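A minimal sketch of this pipeline is shown below (the simulated population driven by three latent factors, the choice of three principal components, and the 10-bin discretization are all assumptions):

```python
# Minimal sketch: reduce binned population activity with PCA, then estimate a
# plug-in entropy on the low-dimensional projection instead of the full space.
# Simulated data and all parameter choices are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# 200 "neurons", 5000 time bins, activity driven by 3 shared latent factors.
latents = rng.normal(size=(5000, 3))
weights = rng.normal(size=(3, 200))
activity = latents @ weights + 0.5 * rng.normal(size=(5000, 200))

pca = PCA(n_components=3)
proj = pca.fit_transform(activity)
print("variance explained:", pca.explained_variance_ratio_.round(2))

# Histogram-based entropy of the 3-D projection (10 bins per dimension).
hist, _ = np.histogramdd(proj, bins=10)
p = hist.ravel() / hist.sum()
p = p[p > 0]
print(f"Entropy of projected activity: {-(p * np.log2(p)).sum():.2f} bits")
```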
Another strategy involves developing advanced entropy estimators that are more robust to high dimensionality. Adaptive methods, such as the Bayesian entropy estimator or the k-nearest neighbor estimator, adjust their parameters based on the data distribution, providing more accurate estimates with limited data samples [74,75]. These estimators can better handle the sparsity of data in high-dimensional spaces by effectively utilizing the available information. For instance, the Kozachenko–Leonenko estimator has been successfully applied to estimate the entropy of high-dimensional neural data with improved accuracy [75]. To illustrate, consider neuronal spike train data, which are often high-dimensional and sparse because neurons often fire infrequently. Traditional entropy estimators, like histogram-based methods, struggle in this context because they require dividing the data into bins; with sparse data, many bins remain empty, leading to unreliable entropy estimates. In contrast, the Kozachenko–Leonenko estimator bypasses the need for binning by calculating the distances between each data point and its nearest neighbors. This adaptability enables more precise entropy estimates, even when dealing with sparse and high-dimensional neural datasets, making it particularly suitable for analyzing complex neural dynamics.
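A sketch of a standard form of the Kozachenko–Leonenko estimator is given below (the Gaussian test data and k = 4 neighbors are assumptions; the estimator returns differential entropy in nats, which can be compared against the analytic value for a Gaussian):

```python
# Minimal sketch of a Kozachenko-Leonenko nearest-neighbor entropy estimator.
# Test data (5-D Gaussian) and k = 4 are assumptions; output is in nats.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k=4):
    """Nearest-neighbor (Kozachenko-Leonenko) differential entropy estimate."""
    n, d = x.shape
    tree = cKDTree(x)
    dist, _ = tree.query(x, k=k + 1)                   # k+1 because the first hit is the point itself
    eps = dist[:, -1]                                  # distance to the k-th neighbor
    log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps))

rng = np.random.default_rng(8)
x = rng.normal(size=(2000, 5))                         # 5-D standard Gaussian sample
analytic = 0.5 * 5 * np.log(2 * np.pi * np.e)          # exact entropy for this Gaussian
print(f"KL estimate: {kl_entropy(x):.3f} nats   (analytic: {analytic:.3f} nats)")
```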
Exploring sparse coding schemes offers another avenue to mitigate dimensionality challenges. The brain may utilize sparse representations, where only a small subset of neurons is active at any given time, reducing the dimensionality of the active neural space [76,77]. By modeling neural data under the assumption of sparsity, entropy estimations become more manageable, and the relevant informational content can be extracted more efficiently. This approach aligns with evidence suggesting that neuronal packets may operate under principles of sparse and efficient coding to optimize information transmission [78].
Looking forward, integrating machine learning techniques holds promise for addressing high-dimensional entropy estimation. Deep learning models, particularly those designed for high-dimensional data, such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs), can learn compact representations of neural data [79,80]. By training these models on spike patterns, one can extract latent variables that capture the essential dynamics of neuronal packets, effectively reducing dimensionality. Furthermore, generative models, like autoencoders or generative adversarial networks (GANs), can model the underlying probability distributions of neural data, facilitating more accurate entropy estimation [81,82,83].
Another future direction involves improving experimental designs to collect data that are more amenable to entropy analysis. Advances in neural recording technologies, such as high-density electrode arrays and optical imaging techniques, enable simultaneous recording from larger populations of neurons with high temporal resolution [67,84,85,86,87]. Carefully designed experiments that selectively target relevant neural populations or time periods can reduce the dimensionality of the data while preserving critical information about neuronal packets.
Implementing entropy-based analyses in neural data research presents several practical challenges, particularly concerning the limitations inherent in fMRI and EEG modalities. For instance, fMRI data, characterized by low temporal resolution and susceptibility to motion artifacts, can complicate the accurate estimation of entropy measures. To address these issues, researchers have developed novel windowing approaches that select and concatenate low-motion segments of fMRI data, thereby reducing the impact of motion on sample entropy estimates [88].

5. Conclusions

Entropy serves as a powerful quantitative framework for understanding neuronal processing across multiple levels, from single neurons to large-scale networks. By quantifying the variability and information content of neuronal spike patterns and packets, entropy-based methods provide critical insights into neural coding strategies, network dynamics, synaptic plasticity, and cognitive functions. These applications underscore the importance of entropy in advancing our understanding of brain function and highlight the potential for future research to further exploit entropy in neuroscience. While the curse of dimensionality presents a significant hurdle in estimating the entropy of neuronal spike patterns, particularly within neuronal packets, it also opens avenues for methodological innovation. By adopting dimensionality reduction techniques, developing advanced entropy estimators, leveraging models that exploit neural data structure, and integrating machine learning approaches, researchers can mitigate these challenges. Addressing the high dimensionality inherent in neuronal patterns is essential for advancing our understanding of neural coding and information processing in the brain.
Future research could explore the application of entropy measures to study the temporal evolution of neuronal packets, providing insights into how information is dynamically processed and integrated over time. Additionally, investigating the role of entropy in understanding the impact of neurological disorders on neuronal communication could offer novel perspectives on disease mechanisms and potential therapeutic targets. Continued efforts in this direction will enhance our ability to decipher the complex language of neuronal communication and unravel the fundamental principles underlying brain function.

Funding

This research received no external funding.

Acknowledgments

The author developed AI agents and worked with them much as one would with a good MSc student. The agents helped to identify the most relevant literature, design a plan for the paper, implement suggested improvements, and draft and rewrite paper sections based on comments provided by the author. The author assumes full responsibility for the accuracy of the content presented here.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Kandel, E.R.; Schwartz, J.H.; Jessell, T.M. Principles of Neural Science; McGraw-Hill: New York, NY, USA, 2000. [Google Scholar]
  2. Rieke, F.; Warland, D.; van Steveninck, R.d.R.; Bialek, W. Spikes: Exploring the Neural Code; MIT Press: Cambridge, MA, USA, 1997. [Google Scholar]
  3. Gerstner, W.; Kistler, W.M. Spiking Neuron Models: Single Neurons, Populations, Plasticity; Cambridge University Press: Cambridge, UK, 2002. [Google Scholar]
  4. Mainen, Z.F.; Sejnowski, T.J. Reliability of spike timing in neocortical neurons. Science 1995, 268, 1503–1506. [Google Scholar] [CrossRef] [PubMed]
  5. Shadlen, M.N.; Newsome, W.T. The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding. J. Neurosci. 1998, 18, 3870–3896. [Google Scholar] [CrossRef] [PubMed]
  6. Sharma, D.; Ng, K.K.; Birznieks, I.; Vickery, R.M. Perceived tactile intensity at a fixed primary afferent spike rate varies with the temporal pattern of spikes. J. Neurophysiol. 2022, 128, 1074–1084. [Google Scholar] [CrossRef]
  7. Huetz, C.; Del Negro, C.; Lebas, N.; Tarroux, P.; Edeline, J.M. Contribution of spike timing to the information transmitted by HVC neurons. Eur. J. Neurosci. 2006, 24, 1091–1108. [Google Scholar] [CrossRef]
  8. Huetz, C.; Philibert, B.; Edeline, J.M. A spike-timing code for discriminating conspecific vocalizations in the thalamocortical system of anesthetized and awake guinea pigs. J. Neurosci. 2009, 29, 334–350. [Google Scholar] [CrossRef]
  9. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  10. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley-Interscience: New York, NY, USA, 2006. [Google Scholar]
  11. Borst, A.; Theunissen, F.E. Information theory and neural coding. Nat. Neurosci. 1999, 2, 947–957. [Google Scholar] [CrossRef] [PubMed]
  12. Strong, S.P.; Koberle, R.; de Ruyter van Steveninck, R.R.; Bialek, W. Entropy and information in neural spike trains. Phys. Rev. Lett. 1998, 80, 197. [Google Scholar] [CrossRef]
  13. Quiroga, R.Q.; Panzeri, S. Extracting information from neuronal populations: Information theory and decoding approaches. Nat. Rev. Neurosci. 2009, 10, 173–185. [Google Scholar] [CrossRef]
  14. DeWeese, M.R.; Wehr, M.; Zador, A.M. Binary spiking in auditory cortex. J. Neurosci. 2003, 23, 7940–7949. [Google Scholar] [CrossRef]
  15. Baddeley, R.; Abbott, L.F.; Booth, M.C.A.; Sengpiel, F.; Freeman, T.; Wakeman, E.A.; Rolls, E.T. Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proc. R. Soc. B Biol. Sci. 1997, 264, 1775–1783. [Google Scholar] [CrossRef] [PubMed]
  16. Luczak, A.; McNaughton, B.L.; Harris, K.D. Packet-based communication in the cortex. Nat. Rev. Neurosci. 2015, 16, 745–755. [Google Scholar] [CrossRef]
  17. Luczak, A.; Bartho, P.; Harris, K.D. Spontaneous events outline the realm of possible sensory responses in neocortical populations. Neuron 2009, 62, 413–425. [Google Scholar] [CrossRef]
  18. Luczak, A. Packets of Sequential Neural Activity in Sensory Cortex. In Analysis and Modeling of Coordinated Multi-Neuronal Activity. Springer Series in Computational Neuroscience; Tatsuno, M., Ed.; Springer: New York, NY, USA, 2015; Volume 12. [Google Scholar]
  19. Luczak, A.; MacLean, J.N. Default activity patterns at the neocortical microcircuit level. Front. Neural Circuits 2012, 6, 127. [Google Scholar] [CrossRef]
  20. Contreras, E.J.B.; Schjetnan, A.G.P.; Muhammad, A.; Bartho, P.; McNaughton, B.L.; Kolb, B.; Gruber, A.J.; Luczak, A. Formation and reverberation of sequential neural activity patterns evoked by sensory stimulation are enhanced during cortical desynchronization. Neuron 2013, 79, 555–566. [Google Scholar]
  21. Bialek, W.; Rieke, F.; van Steveninck, R.; Warland, D. Reading a neural code. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 1989; Volume 2. [Google Scholar]
  22. Johnson, D.H.; Gruner, C.M.; Baggerly, K.; Seshagiri, C. Information-theoretic analysis of neural coding. J. Comput. Neurosci. 2001, 10, 47–69. [Google Scholar] [CrossRef]
  23. Pincus, S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef]
  24. Richman, J.S.; Moorman, J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000, 278, H2039–H2049. [Google Scholar] [CrossRef]
  25. Lau, Z.J.; Pham, T.; Chen, S.A.; Makowski, D. Brain entropy, fractal dimensions and predictability: A review of complexity measures for EEG in healthy and neuropsychiatric populations. Eur. J. Neurosci. 2022, 56, 5047–5069. [Google Scholar] [CrossRef]
  26. Rosso, O.A.; Montani, F. Information theoretic measures and their applications. Entropy 2020, 22, 1382. [Google Scholar] [CrossRef]
  27. Ma, Y.; Shi, W.; Peng, C.K.; Yang, A.C. Nonlinear dynamical analysis of sleep electroencephalography using fractal and entropy approaches. Sleep Med. Rev. 2018, 37, 85–93. [Google Scholar] [CrossRef] [PubMed]
  28. Beggs, J.M.; Plenz, D. Neuronal avalanches in neocortical circuits. J. Neurosci. 2003, 23, 11167–11177. [Google Scholar] [CrossRef]
  29. Lotfi, N.; Fontenele, A.J.; Feliciano, T.; Aguiar, L.A.; de Vasconcelos, N.A.; Soares-Cunha, C.; Coimbra, B.; Rodrigues, A.J.; Sousa, N.; Copelli, M.; et al. Signatures of brain criticality unveiled by maximum entropy analysis across cortical states. Phys. Rev. E 2020, 102, 012408. [Google Scholar] [CrossRef] [PubMed]
  30. Serafim, F.; Carvalho, T.T.; Copelli, M.; Carelli, P.V. Maximum-entropy-based metrics for quantifying critical dynamics in spiking neuron data. Phys. Rev. E 2024, 110, 024401. [Google Scholar] [CrossRef]
  31. Lotfi, N.; Feliciano, T.; Aguiar, L.A.A.; Silva, T.P.L.; Carvalho, T.T.A.; Rosso, O.A.; Copelli, M.; Matias, F.S.; Carelli, P.V. Statistical complexity is maximized close to criticality in cortical dynamics. Phys. Rev. E 2021, 103, 012415. [Google Scholar] [CrossRef]
  32. Olsen, V.K.; Whitlock, J.R.; Roudi, Y. The quality and complexity of pairwise maximum entropy models for large cortical populations. PLoS Comput. Biol. 2024, 20, e1012074. [Google Scholar] [CrossRef]
  33. Schneidman, E.; Berry, M.J.; Segev, R.; Bialek, W. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature 2006, 440, 1007–1012. [Google Scholar] [CrossRef]
  34. Tkacik, G.; Marre, O.; Amodei, D.; Schneidman, E.; Bialek, W.; Berry, M.J. Searching for collective behavior in a large network of sensory neurons. PLoS Comput. Biol. 2014, 10, e1003408. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, H.; Singh, S.; Trappenberg, T.; Nunes, A. An Information-Geometric Formulation of Pattern Separation and Evaluation of Existing Indices. Entropy 2024, 26, 737. [Google Scholar] [CrossRef]
  36. Panzeri, S.; Schultz, S.R.; Treves, A.; Rolls, E.T. Correlations and the encoding of information in the nervous system. Proc. R. Soc. B Biol. Sci. 1999, 266, 1001–1012. [Google Scholar] [CrossRef]
  37. Latham, P.E.; Nirenberg, S. Synergy, redundancy, and independence in population codes, revisited. J. Neurosci. 2005, 25, 5195–5206. [Google Scholar] [CrossRef] [PubMed]
  38. Bardella, G.; Franchini, S.; Pan, L.; Balzan, R.; Ramawat, S.; Brunamonti, E.; Pani, P.; Ferraina, S. Neural activity in quarks language: Lattice Field Theory for a network of real neurons. Entropy 2024, 26, 495. [Google Scholar] [CrossRef] [PubMed]
  39. Stasenko, S.V.; Kazantsev, V.B. Information encoding in bursting spiking neural network modulated by astrocytes. Entropy 2023, 25, 745. [Google Scholar] [CrossRef] [PubMed]
  40. Perea, G.; Araque, A. Glial calcium signaling and neuron–glia communication. Cell Calcium 2005, 38, 375–382. [Google Scholar] [CrossRef]
  41. Araque, A.; Carmignoto, G.; Haydon, P.G.; Oliet, S.H.R.; Robitaille, R.; Volterra, A. Gliotransmitters travel in time and space. Neuron 2014, 81, 728–739. [Google Scholar] [CrossRef]
  42. Kosciessa, J.Q.; Kloosterman, N.A.; Garrett, D.D. Standard multiscale entropy reflects neural dynamics at mismatched temporal scales: What’s signal irregularity got to do with it? PLoS Comput. Biol. 2020, 16, e1007885. [Google Scholar] [CrossRef]
  43. Park, J.; Kawai, Y.; Asada, M. Spike timing-dependent plasticity under imbalanced excitation and inhibition reduces the complexity of neural activity. Front. Comput. Neurosci. 2023, 17, 1169288. [Google Scholar] [CrossRef]
  44. Yizhar, O.; Fenno, L.E.; Prigge, M.; Schneider, F.; Davidson, T.J.; O’Shea, D.J.; Sohal, V.S.; Goshen, I.; Finkelstein, J.; Paz, J.T.; et al. Neocortical excitation/inhibition balance in information processing and social dysfunction. Nature 2011, 477, 171–178. [Google Scholar] [CrossRef]
  45. Nelson, S.B.; Valakh, V. Excitatory/inhibitory balance and circuit homeostasis in autism spectrum disorders. Neuron 2015, 87, 684–698. [Google Scholar] [CrossRef]
  46. Vivekanandhan, G.; Mehrabbeik, M.; Rajagopal, K.; Jafari, S.; Lomber, S.G.; Merrikhi, Y. Higuchi fractal dimension is a unique indicator of working memory content represented in spiking activity of visual neurons in extrastriate cortex. Math. Biosci. Eng. MBE 2022, 20, 3749–3767. [Google Scholar] [CrossRef]
  47. McIntosh, A.R.; Kovacevic, N.; Itier, R.J. Increased brain signal variability accompanies lower behavioral variability in development. PLoS Comput. Biol. 2008, 4, e1000106. [Google Scholar] [CrossRef] [PubMed]
  48. Tononi, G.; Edelman, G.M. Consciousness and complexity. Science 1998, 282, 1846–1851. [Google Scholar] [CrossRef] [PubMed]
  49. Silva, L.E.V.; Senra Filho, A.C.S.; Fazan, V.P.S.; Felipe, J.C.; Junior, L.M. Two-dimensional sample entropy: Assessing image texture through irregularity. Biomed. Phys. Eng. Express 2016, 2, 045002. [Google Scholar] [CrossRef]
  50. Ascoli, G.A. Neuroanatomical algorithms for dendritic modeling. Netw. Comput. Neural Syst. 2002, 13, 247–260. [Google Scholar] [CrossRef]
  51. Luczak, A. Measuring neuronal branching patterns using model-based approach. Front. Comput. Neurosci. 2010, 4, 135. [Google Scholar] [CrossRef] [PubMed]
  52. Luczak, A. Shaping of Neurons by Environmental Interaction. In The Computing Dendrite. Springer Series in Computational Neuroscience; Cuntz, H., Remme, M., Torben-Nielsen, B., Eds.; Springer: New York, NY, USA, 2014; Volume 11. [Google Scholar]
  53. Friston, K. The free-energy principle: A unified brain theory? Nat. Rev. Neurosci. 2010, 11, 127–138. [Google Scholar] [CrossRef]
  54. Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav. Brain Sci. 2013, 36, 181–204. [Google Scholar] [CrossRef]
  55. Seth, A. “Preface: The brain as a prediction machine”. In The Philosophy and Science of Predictive Processing; Mendonça, D., Curado, M., Gouveia, S.S., Eds.; Bloomsbury Academic: London, UK, 2020. [Google Scholar]
  56. Luczak, A.; McNaughton, B.L.; Kubo, Y. Neurons learn by predicting future activity. Nat. Mach. Intell. 2022, 4, 62–72. [Google Scholar] [CrossRef] [PubMed]
  57. Luczak, A.; Kubo, Y. Predictive neuronal adaptation as a basis for consciousness. Front. Syst. Neurosci. 2022, 15, 767461. [Google Scholar] [CrossRef]
  58. Rao, R.P.; Ballard, D.H. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nat. Neurosci. 1999, 2, 79–87. [Google Scholar] [CrossRef]
  59. Friston, K.; Kiebel, S. Predictive coding under the free-energy principle. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 1211–1221. [Google Scholar] [CrossRef] [PubMed]
  60. Millidge, B.; Seth, A.K.; Buckley, C.L. Predictive coding: A theoretical and experimental review. arXiv 2021, arXiv:2107.12979. [Google Scholar]
  61. Friston, K. The free-energy principle: A rough guide to the brain? Trends Cogn. Sci. 2009, 13, 293–301. [Google Scholar] [CrossRef] [PubMed]
  62. Luczak, A.; Barthó, P.; Marguet, S.L.; Buzsáki, G.; Harris, K.D. Sequential structure of neocortical spontaneous activity in vivo. Proc. Natl. Acad. Sci. USA 2007, 104, 347–352. [Google Scholar] [CrossRef]
  63. Verleysen, M.; François, D. The curse of dimensionality in data mining and time series prediction. In International Work-Conference on Artificial Neural Networks; Springer: Berlin/Heidelberg, Germany, 2005; pp. 758–770. [Google Scholar]
  64. Friedman, J.H. On bias, variance, 0/1—Loss, and the curse-of-dimensionality. Data Min. Knowl. Discov. 1997, 1, 55–77. [Google Scholar] [CrossRef]
  65. Paninski, L. Estimation of entropy and mutual information. Neural Comput. 2003, 15, 1191–1253. [Google Scholar] [CrossRef]
  66. Álvarez Chaves, M.; Gupta, H.V.; Ehret, U.; Guthke, A. On the Accurate Estimation of Information-Theoretic Quantities from Multi-Dimensional Sample Data. Entropy 2024, 26, 387. [Google Scholar] [CrossRef]
  67. Stevenson, I.H.; Kording, K.P. How advances in neural recording affect data analysis. Nat. Neurosci. 2011, 14, 139–142. [Google Scholar] [CrossRef]
  68. Panzeri, S.; Treves, A. Analytical estimates of limited sampling biases in different information measures. Netw. Comput. Neural Syst. 1996, 7, 87. [Google Scholar] [CrossRef]
  69. Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E—Stat. Nonlinear Soft Matter Phys. 2004, 69, 066138. [Google Scholar] [CrossRef]
  70. Cunningham, J.P.; Yu, B.M. Dimensionality reduction for large-scale neural recordings. Nat. Neurosci. 2014, 17, 1500–1509. [Google Scholar] [CrossRef] [PubMed]
  71. Stopfer, M.; Jayaraman, V.; Laurent, G. Intensity versus identity coding in an olfactory system. Neuron 2003, 39, 991–1004. [Google Scholar] [CrossRef] [PubMed]
  72. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  73. Luczak, A.; Hackett, T.A.; Kajikawa, Y.; Laubach, M. Multivariate receptive field mapping in marmoset auditory cortex. J. Neurosci. Methods 2004, 136, 77–85. [Google Scholar] [CrossRef]
  74. Nemenman, I.; Shafee, F.; Bialek, W. Entropy and inference, revisited. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2001; Volume 14. [Google Scholar]
  75. Victor, J.D. Binless strategies for estimation of information from neural data. Phys. Rev. E 2002, 66, 051903. [Google Scholar] [CrossRef]
  76. Olshausen, B.A.; Field, D.J. Sparse coding of sensory inputs. Curr. Opin. Neurobiol. 2004, 14, 481–487. [Google Scholar] [CrossRef]
  77. Vinje, W.E.; Gallant, J.L. Sparse coding and decorrelation in primary visual cortex during natural vision. Science 2000, 287, 1273–1276. [Google Scholar] [CrossRef]
  78. Barlow, H.B. Possible principles underlying the transformation of sensory messages. In Sensory Communication; MIT Press: Cambridge, MA, USA, 1961; pp. 217–234. [Google Scholar]
  79. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  80. Pandarinath, C.; O’shea, D.J.; Collins, J.; Jozefowicz, R.; Stavisky, S.D.; Kao, J.C.; Trautmann, E.M.; Kaufman, M.T.; Ryu, S.I.; Hochberg, L.R.; et al. Inferring single-trial neural population dynamics using sequential auto-encoders. Nat. Methods 2018, 15, 805–815. [Google Scholar] [CrossRef]
  81. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; Volume 27. [Google Scholar]
  82. Turchenko, V.; Chalmers, E.; Luczak, A. A deep convolutional auto-encoder with pooling-unpooling layers in caffe. Int. J. Comput. 2019, 18, 8–31. [Google Scholar] [CrossRef]
  83. Turchenko, V.; Luczak, A. Creation of a deep convolutional auto-encoder in caffe. In Proceedings of the 2017 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Bucharest, Romania, 21–23 September 2017; Volume 2, pp. 651–659. [Google Scholar]
  84. Jun, J.J.; Steinmetz, N.A.; Siegle, J.H.; Denman, D.J.; Bauza, M.; Barbarits, B.; Lee, A.K.; Anastassiou, C.A.; Andrei, A.; Aydın, Ç.; et al. Fully integrated silicon probes for high-density recording of neural activity. Nature 2017, 551, 232–236. [Google Scholar] [CrossRef] [PubMed]
  85. Schjetnan, A.G.P.; Luczak, A. Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat. J. Vis. Exp. 2011, 56, e3282. [Google Scholar]
  86. Luczak, A.; Narayanan, N.S. Spectral representation—Analyzing single-unit activity in extracellularly recorded neuronal data without spike sorting. J. Neurosci. Methods 2005, 144, 53–61. [Google Scholar] [CrossRef] [PubMed]
  87. Molina, L.A.; Ivan, V.E.; Gruber, A.J.; Luczak, A. Using Neuron Spiking Activity to Trigger Closed-Loop Stimuli in Neurophysiological Experiments. J. Vis. Exp. 2019, 153, e59812. [Google Scholar]
  88. Roediger, D.J.; Butts, J.; Falke, C.; Fiecas, M.B.; Klimes-Dougan, B.; Mueller, B.A.; Cullen, K.R. Optimizing the measurement of sample entropy in resting-state fMRI data. Front. Neurol. 2024, 15, 1331365. [Google Scholar] [CrossRef]
Figure 1. Cartoon illustration of neuronal activity packets. (A) Sequential activity patterns (called packets) during deep sleep where activity occurs sporadically. Within each packet, neurons fire with a stereotyped sequential pattern (each neuron marked with different color). (B) In an awake state, when more information is transmitted, packets occur right after each other, without long periods of silence, but temporal relationships between neurons are similar to those in the sleep state. (C) Consistency and variability in neuronal packets (geometrical interpretation). The gray area illustrates the space of all spiking patterns theoretically possible for a packet. The left-side panels show a cartoon of sample packets, each corresponding to a single point in gray space. The white area inside represents the space of packets experimentally observed in the brain. Packets evoked by different sensory stimuli occupy smaller subspaces (colored blobs). The right-side panels illustrate stimulus-evoked packets. The overall structure of evoked packets is similar, with differences in the firing rate and in the spike timing of neurons encoding information about different stimuli (figure modified from [18]).
