Are there logically possible types of conscious experience that are nomologically impossible, given independently justified assumptions about the neural underpinnings of consciousness in human beings? In one sense, this is trivial: just consider the fact that the types of perceptual experiences we can have are limited by our sensory organs. But there may be non-trivial types of conscious experience that are impossible. For instance, if there is a basic type of self-consciousness, corresponding to a phenomenal property that is nomologically necessary for consciousness, then experiences lacking this phenomenal property will be (nomologically) impossible. More generally, it may be that there are causal dependencies between the neural mechanisms that are required to instantiate distinct phenomenal properties (in human beings). If this is the case, instantiating one of these phenomenal properties without certain others may be impossible, which means there are non-trivial cases of nomologically impossible types of conscious experience. This paper clarifies this hypothesis, outlines a general methodology for its investigation, and relates it to research on radical disruptions of self-consciousness.
In Being No One, Metzinger (2004[2003]) introduces an approach to the scientific study of consciousness that draws on theories and results from different disciplines, targeted at multiple levels of analysis. Descriptions and assumptions formulated at, for instance, the phenomenological, representationalist, and neurobiological levels of analysis provide different perspectives on the same phenomenon, which can ultimately yield necessary and sufficient conditions for applying the concept of phenomenal representation. In this way, the "method of interdisciplinary constraint satisfaction (MICS)" (as it has been called by Weisberg, 2005) promotes our understanding of consciousness. However, even more than a decade after the first publication of Being No One, we still lack a mature science of consciousness. This paper makes the following meta-theoretical contribution: It analyzes the hurdles an approach such as MICS has yet to overcome and discusses to what extent existing approaches solve the problems left open by MICS. Furthermore, it argues that a unifying theory of different features of consciousness is required to reach a mature science of consciousness.
The goal of this short chapter, aimed at philosophers, is to provide an overview and brief explanation of some central concepts involved in predictive processing (PP). Even those who consider themselves experts on the topic may find it helpful to see how the central terms are used in this collection. To keep things simple, we will first informally define a set of features important to predictive processing, supplemented by some short explanations and an alphabetic glossary. The features described here are not shared by all PP accounts. Some may not be necessary for an individual model; others may be contested. Indeed, not even all authors of this collection will accept all of them. To make this transparent, we have encouraged contributors to indicate briefly which of the features are necessary to support the arguments they provide, and which (if any) are incompatible with their account. For the sake of clarity, we provide the complete list here, very roughly ordered by how central we take them to be for "Vanilla PP" (i.e., a formulation of predictive processing that will probably be accepted by most researchers working on this topic). More detailed explanations will be given below. Note that these features do not specify individually necessary and jointly sufficient conditions for the application of the concept of "predictive processing". All we currently have is a semantic cluster, with perhaps some overlapping sets of jointly sufficient criteria. The framework is still developing, and it is difficult, maybe impossible, to provide theory-neutral explanations of all PP ideas without already introducing strong background assumptions. Nevertheless, at least features 1-7 can be regarded as necessary properties of what is called PP in this volume:
1. Top-down Processing: Computation in the brain crucially involves an interplay between top-down and bottom-up processing, and PP emphasizes the relative weighting of top-down and bottom-up signals in both perception and action.
2. Statistical Estimation: PP involves computing estimates of random variables. Estimates can be regarded as statistical hypotheses which can serve to explain sensory signals.
3. Hierarchical Processing: PP deploys hierarchically organized estimators (which track features at different spatial and temporal scales).
4. Prediction: PP exploits the fact that many of the relevant random variables in the hierarchy are predictive of each other.
5. Prediction Error Minimization (PEM): PP involves computing prediction errors; these prediction error terms have to be weighted by precision estimates, and a central goal of PP is to minimize precision-weighted prediction errors.
6. Bayesian Inference: PP accords with the norms of Bayesian inference: over the long term, prediction error minimization in the hierarchical model will approximate exact Bayesian inference.
7. Predictive Control: PP is action-oriented in the sense that the organism can act to change its sensory input to fit with its predictions and thereby minimize prediction error; among other benefits, this enables the organism to regulate its vital parameters (like levels of blood oxygenation, blood sugar, etc.).
8. Environmental Seclusion: The organism does not have direct access to the states of its environment and body (for a conceptual analysis of "direct perception", see Snowdon 1992), but infers them (by inferring the hidden causes of interoceptive and exteroceptive sensory signals). Although this is a basic feature of some philosophical accounts of PP (cf. Hohwy 2016; Hohwy 2017), it is controversial (cf. Anderson 2017; Clark 2017; Fabry 2017a; Fabry 2017b).
9. The Ideomotor Principle: There are "ideomotor" estimates; computing them underpins both perception and action, because they encode changes in the world which are registered by perception and can be brought about by action.
10. Attention and Precision: Attention can be described as the process of optimizing precision estimates.
11. Hypothesis-Testing: The computational processes underlying perception, cognition, and action can usefully be described as hypothesis-testing (or the process of accumulating evidence for the internal model). Conceptually, we can distinguish between passive and active hypothesis-testing (and one might try to match active hypothesis-testing with action, and passive hypothesis-testing with perception). It may, however, turn out that all hypothesis-testing in the brain (if it makes sense to say that) is active hypothesis-testing.
12. The Free Energy Principle: Fundamentally, PP is just a way of minimizing free energy, which on most PP accounts would amount to the long-term average of prediction error.
In the following, we do not assume any familiarity with PP or any mathematical background knowledge, and this introduction will, for the most part, be restricted to the conceptual basics of the PP framework. Having read this primer, one should be able to follow the discussion in the other papers of this collection. However, we would also strongly encourage readers to deepen their understanding of PP by reading Clark (2016) and Hohwy (2013), the first two philosophical monographs on this topic, both excellent.
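Features 2, 5, and 6 hang together in a way that a toy calculation can make vivid. The following sketch (our own illustration; the variable names and numbers are not from the chapter) runs gradient descent on precision-weighted prediction errors for a single Gaussian estimate and shows that the fixed point is the exact Bayesian posterior mean:

```python
# A scalar estimate updated by gradient descent on precision-weighted
# prediction errors. For a Gaussian prior and likelihood, the fixed point
# is the exact Bayesian posterior mean, illustrating how long-run PEM
# can approximate Bayesian inference.

mu_prior, pi_prior = 0.0, 1.0   # prior mean and precision (1/variance)
x_obs,    pi_sens  = 2.0, 4.0   # sensory observation and its precision

mu = mu_prior                   # current estimate (the "hypothesis")
lr = 0.05                       # learning rate
for _ in range(2000):
    eps_prior = mu - mu_prior   # top-down prediction error
    eps_sens  = x_obs - mu      # bottom-up (sensory) prediction error
    # descend the gradient of the precision-weighted squared errors
    mu += lr * (pi_sens * eps_sens - pi_prior * eps_prior)

# analytic posterior mean for comparison
posterior_mean = (pi_prior * mu_prior + pi_sens * x_obs) / (pi_prior + pi_sens)
print(round(mu, 4), round(posterior_mean, 4))  # both are 1.6
```

Note how precision acts as a weight: because the sensory signal is four times as precise as the prior, the estimate ends up four times closer to the observation than to the prior mean.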
This chapter explores to what extent some core ideas of predictive processing can be applied to the phenomenology of time consciousness. The focus is on the experienced continuity of consciously perceived, temporally extended phenomena (such as enduring processes and successions of events). The main claim is that the hierarchy of representations posited by hierarchical predictive processing models can contribute to a deepened understanding of the continuity of consciousness. Computationally, such models show that sequences of events can be represented as states of a hierarchy of dynamical systems. Phenomenologically, they suggest a more fine-grained analysis of the perceptual contents of the specious present, in terms of a hierarchy of temporal wholes. Visual perception of static scenes not only contains perceived objects and regions but also spatial gist; similarly, auditory perception of temporal sequences, such as melodies, involves not only perceiving individual notes but also slightly more abstract features (temporal gist), which have longer temporal durations (e.g., emotional character or rhythm). Further investigations into these elusive contents of conscious perception may be facilitated by findings regarding its neural underpinnings. Predictive processing models suggest that sensorimotor areas may influence these contents.
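One way to make the idea of representing sequences at multiple timescales concrete is a toy two-level model (a hypothetical illustration of our own, not the chapter's): a slow level encodes which melody is unfolding, while a fast level uses that hypothesis to predict individual notes. A single slow state thereby spans a temporal whole composed of many fast events.

```python
# Toy two-level hierarchy for temporal sequences (illustrative names and
# data only): the slow level holds a hypothesis about which melody is
# playing; the fast level derives note-by-note predictions from it. The
# slow hypothesis with the least accumulated fast-level prediction error
# wins, so it persists across many individual notes.

MELODIES = {                       # hypothetical repertoire of slow states
    "ascending":  [0, 1, 2, 3, 0, 1, 2, 3],
    "descending": [3, 2, 1, 0, 3, 2, 1, 0],
}

def infer_melody(notes):
    """Score each slow-level hypothesis by its summed fast-level error."""
    scores = {}
    for name, pattern in MELODIES.items():
        predictions = [pattern[t % len(pattern)] for t in range(len(notes))]
        scores[name] = sum(abs(p - n) for p, n in zip(predictions, notes))
    return min(scores, key=scores.get)  # hypothesis with least error

heard = [0, 1, 2, 3, 0, 1]
print(infer_melody(heard))  # prints: ascending
```

The sketch leaves out genuine dynamics, but it shows the structural point: the slow state (melody identity) is something like "temporal gist", a content with a longer duration than any of the notes it organizes.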
According to active inference (which subsumes the framework of predictive processing), action is enabled by a top-down modulation of sensory signals. Computational models of this mechanism complement ideomotor theories of action representation. Such theories postulate common neural representations for action and perception, without specifying how action is enabled by such representations. In active inference, motor commands are replaced by proprioceptive predictions. In order to initiate action through such predictions, sensory prediction errors have to be attenuated. This paper argues that such top-down modulation involves systematic (but paradoxically beneficial) misrepresentations. More specifically, the paper first argues for the following conditional claim. If active inference provides an accurate computational description of how action is enabled in the brain, then action is enabled by systematic misrepresentations. Furthermore, it is argued that an inference to the best explanation provides reason for believing the antecedent is true: Firstly, active inference provides a crucial extension to ideomotor theories. Secondly, active inference explains otherwise puzzling phenomena related to sensory attenuation, e.g., in force-matching or self-tickling paradigms. Taken together, these reasons support the claim that action is indeed enabled by systematic misrepresentations. The claim casts doubt on the assumption that representations are systematically beneficial to the extent that they are true: if the argument in this paper is sound, systematically beneficial misrepresentations may lie at the heart of our neural architecture.
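The role of sensory attenuation in action initiation can be sketched with a deliberately simplified toy model (our own construction, not the paper's computational model): a proprioceptive "belief" fuses a goal prediction with the sensed arm position, and a reflex moves the arm toward the belief. When sensory precision is attenuated, the belief stays close to the goal and the arm follows; when sensory precision is high, the evidence drags the belief back to the actual position, and movement is suppressed.

```python
# Toy sketch of action initiation via proprioceptive prediction
# (illustrative only; all parameters are invented). The belief is a
# precision-weighted fusion of the goal "prediction" and the current
# proprioceptive signal; a reflex arc moves the arm to cancel the
# proprioceptive prediction error.

def simulate(pi_sens, pi_prior=1.0, steps=50, lr=0.1):
    arm, goal = 0.0, 1.0  # actual arm position and desired position
    for _ in range(steps):
        # belief about arm position: fuse goal prior with sensed position
        belief = (pi_prior * goal + pi_sens * arm) / (pi_prior + pi_sens)
        arm += lr * (belief - arm)  # reflex: cancel proprioceptive error
    return arm

print(simulate(pi_sens=0.01))   # attenuated precision: arm moves to the goal
print(simulate(pi_sens=100.0))  # high precision: arm barely moves
```

The contrast mirrors the paper's point about misrepresentation: action only gets going because, with attenuated sensory precision, the belief systematically reports the arm as being where it is not yet.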
The problem of phenomenal unity (PPU) consists in providing a phenomenological characterization of the difference between phenomenally unified and disunified conscious experiences. Potential solutions to PPU are faced with an important challenge (which Tim Bayne calls the "explanatory regress objection"). I show that this challenge can be conceived as a phenomenological dual to what is known as Bradley's regress. This perspective (i) facilitates progress on PPU by finding duals to possible solutions to Bradley's regress and (ii) makes it intelligible why many characterize phenomenal unity in terms of the existence of a single global conscious state. I call this latter view the "single state conception" (SSC). SSC is superficially attractive, because it seems to provide a solution to the phenomenological dual to Bradley's regress, but should still be rejected, because (1) it does not solve PPU; (2) instead, it creates more problems; (3) these problems can be avoided by alternative conceptions of phenomenal unity.
Paweł Gładziejewski has recently argued that the framework of predictive processing (PP) postulates genuine representations. His focus is on establishing that certain structures posited by PP actually play a representational role. The goal of this paper is to promote this discussion by exploring the contents of representations posited by PP. Gładziejewski already points out that structural theories of representational content can successfully be applied to PP. Here, I propose to make the treatment slightly more rigorous by invoking Francis Egan's distinction between mathematical and cognitive contents. Applying this distinction to representational contents in PP, I first show that cognitive contents in PP are (partly) determined by mathematical contents, at least in the sense that computational descriptions in PP put constraints on ascriptions of cognitive contents. After that, I explore to what extent these constraints are specific (i.e., whether PP puts unique constraints on ascriptions of cognitive contents). I argue that the general mathematical contents posited by PP do not constrain ascriptions of cognitive content in a specific way (because they are not relevantly different from mathematical contents entailed by, for instance, emulators in Rick Grush’s emulation theory). However, there are at least three aspects of PP that constrain ascriptions of cognitive contents in more specific ways: (i) formal PP models posit specific mathematical contents that define more specific constraints; (ii) PP entails claims about how computational mechanisms underpin cognitive phenomena (e.g., attention); (iii) the processing hierarchy posited by PP goes along with more specific constraints.
Anil Seth’s target paper connects the framework of PP (predictive processing) and the FEP (free-energy principle) to cybernetic principles. Exploiting an analogy to theory of science, Seth draws a distinction between three types of active inference. The first type involves confirmatory hypothesis-testing. The other types involve seeking disconfirming and disambiguating evidence, respectively. Furthermore, Seth applies PP to various fascinating phenomena, including perceptual presence. In this commentary, I explore how far we can take the analogy between explanation in perception and explanation in science.
In the first part, I draw a slightly broader analogy between PP and concepts in theory of science, by asking whether the Bayesian brain is Kuhnian or Popperian. While many aspects of PP are in line with Karl Popper’s falsificationism, other aspects of PP conform to how Thomas Kuhn described scientific revolutions. Thus, there is both a sense in which the Bayesian brain is Kuhnian, and a sense in which it is Popperian. The upshot of these considerations is that falsification in PP can take many different forms. In particular, active inference can be used to falsify a model in more ways than those identified by Seth.
In the second part of this commentary, I focus on Seth’s PPSMCT (predictive processing account of sensorimotor contingency theory) and its application to perceptual presence, which assigns a crucial role to counterfactual richness. In my discussion, I question the significance of counterfactual richness for perceptual presence. First, I highlight an ambiguity inherent in Seth’s descriptions of the target phenomenon (perceptual presence vs. objecthood). Then I suggest that counterfactual richness may not be the crucial underlying feature (of either perceptual presence or objecthood). Giving a series of examples, I argue that the degree of represented causal integration is an equally good candidate for accounting for perceptual presence (or objecthood), although more work needs to be done.
Papers by Wanja Wiese
2. Statistical Estimation: PP involves computing estimates of random variables. Estimates can be regarded as statistical hypotheses which can serve to explain sensory signals.
3. Hierarchical Processing: PP deploys hierarchically organized estimators (which track features at different spatial and temporal scales).
4. Prediction: PP exploits the fact that many of the relevant random variables in the hierarchy are predictive of each other.
5. Prediction Error Minimization (PEM): PP involves computing prediction errors; these prediction error terms have to be weighted by precision estimates, and a central goal of PP is to minimize precision-weighted prediction errors.
6. Bayesian Inference: PP accords with the norms of Bayesian inference: over the long term, prediction error minimization in the hierarchical model will approximate exact Bayesian inference.
7. Predictive Control: PP is action-oriented in the sense that the organism can act to change its sensory input to fit with its predictions and thereby minimize prediction error; among other benefits, this enables the organism to regulate its vital parameters (like levels of blood oxygenation, blood sugar, etc.).
8. Environmental Seclusion: The organism does not have direct access to the states of its environment and body (for a conceptual analysis of "direct perception", see Snowdon 1992), but infers them (by inferring the hidden causes of interoceptive and exteroceptive sensory signals). Although this is a basic feature of some philosophical accounts of PP (cf. Hohwy 2016; Hohwy 2017), it is controversial (cf. Anderson 2017; Clark 2017; Fabry 2017a; Fabry 2017b).
9. The Ideomotor Principle: There are "ideomotor" estimates; computing them underpins both perception and action, because they encode changes in the world which are registered by perception and can be brought about by action.
10. Attention and Precision: Attention can be described as the process of optimizing precision estimates.
11. Hypothesis-Testing: The computational processes underlying perception, cognition, and action can usefully be described as hypothesis testing (or the process of accumulating evidence for the internal model). Conceptually, we can distinguish between passive and active hypothesis-testing (and one might try to match active hypothesis-testing with action, and passive hypothesis-testing with perception). It may however turn out that all hypothesis-testing in the brain (if it makes sense to say that) is active hypothesis-testing.
12. The Free Energy Principle: Fundamentally, PP is just a way of minimizing free energy, which on most PP accounts would amount to the long-term average of prediction error.
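The interplay between prediction error minimization, precision weighting, and Bayesian inference (items 2, 5, 6, and 10 above) can be made concrete with a toy example. The following Python sketch is purely illustrative and not from the primer: it treats a single Gaussian estimate, minimizes precision-weighted prediction error by gradient descent, and recovers the analytic Bayesian posterior mean, showing how long-run error minimization approximates exact inference in the simplest possible case.

```python
def pem_estimate(prior_mu, prior_pi, obs, obs_pi, lr=0.01, steps=5000):
    """Minimize precision-weighted prediction error for a scalar estimate mu.

    prior_pi and obs_pi are precisions (inverse variances) that weight the
    prior and sensory prediction errors, respectively (illustrative names).
    """
    mu = prior_mu
    for _ in range(steps):
        eps_obs = obs - mu          # sensory prediction error
        eps_prior = mu - prior_mu   # prior prediction error
        # Gradient of 0.5*(obs_pi*eps_obs**2 + prior_pi*eps_prior**2) w.r.t. mu
        grad = -obs_pi * eps_obs + prior_pi * eps_prior
        mu -= lr * grad
    return mu

def bayes_posterior_mean(prior_mu, prior_pi, obs, obs_pi):
    """Analytic Bayesian posterior mean for a Gaussian prior and likelihood."""
    return (prior_pi * prior_mu + obs_pi * obs) / (prior_pi + obs_pi)

mu_pem = pem_estimate(prior_mu=0.0, prior_pi=1.0, obs=2.0, obs_pi=3.0)
mu_bayes = bayes_posterior_mean(0.0, 1.0, 2.0, 3.0)
```

Because the sensory precision (3.0) exceeds the prior precision (1.0), the converged estimate is pulled toward the observation; raising the prior precision would instead pull it toward the prior mean, which is the sense in which attention-as-precision-optimization (item 10) modulates inference.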
In the following, we do not assume any familiarity with PP or any mathematical background knowledge, and this introduction will, for the most part, be restricted to the conceptual basics of the PP framework. Having read this primer, one should be able to follow the discussion in the other papers of this collection. However, we would also strongly encourage readers to deepen their understanding of PP by reading Clark (2016) and Hohwy (2013), the first two philosophical monographs on this topic, both excellent.