Most experts hold that visual experience is remarkably sparse and its apparent richness is illusory. Indeed, we fail to notice the vast majority of what we think we see, and seem to rely instead on a high-level summary of a visual scene. However, we argue here that seeing is much more than noticing, and is in fact unfathomably rich. We distinguish among three levels of visual phenomenology: a high-level description of a scene based on the categorization of "objects," an intermediate level composed of "groupings" of simple visual features such as colors, and a base-level visual field composed of "spots" and their spatial relations. We illustrate that it is impossible to see the objects that underlie a high-level description without seeing the groupings that compose them, and we cannot see the groupings without seeing the visual field to which they are bound. We then argue that the way the visual field feels, its spatial extendedness, can only be accounted for by a phenomenal structure composed of innumerable distinctions and relations. It follows that most of what we see has no functional counterpart: it cannot be used, reported, or remembered. And yet we see it.
This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate translation of axioms into postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system's irreducible cause-effect power, the distinctions and relations specified by a substrate can account for the quality of experience.
Neuroscience has made remarkable advances in accounting for how the brain performs its various functions. Consciousness, too, is usually approached in functional terms: the goal is to understand how the brain represents information, accesses that information, and acts on it. While useful for prediction, this functional, information-processing approach leaves out the subjective structure of experience: it does not account for how experience feels. Here, we consider a simple model of how a “grid-like” network meant to resemble posterior cortical areas can represent spatial information and act on it to perform a simple “fixation” function. Using standard neuroscience tools, we show how the model represents topographically the retinal position of a stimulus and triggers eye muscles to fixate or follow it. Encoding, decoding, and tuning functions of model units illustrate the working of the model in a way that fully explains what the model does. However, these functional properties have nothing to say about the fact that a human fixating a stimulus would also “see” it—experience it at a location in space. Using the tools of Integrated Information Theory, we then show how the subjective properties of experienced space—its extendedness—can be accounted for in objective, neuroscientific terms by the “cause-effect structure” specified by the grid-like cortical area. By contrast, a “map-like” network without lateral connections, meant to resemble a pretectal circuit, is functionally equivalent to the grid-like system with respect to representation, action, and fixation but cannot account for the phenomenal properties of space.
Objective correlates—behavioral, functional, and neural—provide essential tools for the scientific study of consciousness. But reliance on these correlates should not lead to the ‘fallacy of misplaced objectivity’: the assumption that only objective properties should and can be accounted for objectively through science. Instead, what needs to be explained scientifically is what experience is intrinsically—its subjective properties—not just what we can do with it extrinsically. And it must be explained; otherwise the way experience feels would turn out to be magical rather than physical. We argue that it is possible to account for subjective properties objectively once we move beyond cognitive functions and realize what experience is and how it is structured. Drawing on integrated information theory, we show how an objective science of the subjective can account, in strictly physical terms, for both the essential properties of every experience and the specific properties that make particular experiences feel the way they do.
It is sometimes claimed that because the resolution and sensitivity of visual perception are better in the fovea than in the periphery, peripheral vision cannot support the same kinds of colour and sharpness percepts as foveal vision. The fact that a scene nevertheless seems colourful and sharp throughout the visual field then poses a puzzle. In this study, I use a detailed model of human spatial vision to estimate the visibility of certain properties of natural scenes, including aspects of colourfulness, sharpness, and blurriness, across the visual field. The model is constructed to reproduce basic aspects of human contrast and colour sensitivity over a range of retinal eccentricities. I apply the model to colourful, complex natural scene images, and estimate the degree to which colour and edge information are present in the model's representation of the scenes. I find that, aside from the intrinsic drift in the spatial scale of the representation, there are not large qualitative differences between foveal and peripheral representations of 'colourfulness' and 'sharpness'.
In this paper I use a detailed model of human spatial vision to estimate the visibility of some perceptual properties across the visual field, including aspects of colorfulness, sharpness, and blurriness. The model is constructed to reproduce several patterns of human contrast sensitivity as functions of contrast, scale, and retinal eccentricity. I apply the model to colorful, complex natural scenes, and estimate the degree to which color and edge information are present in the model’s representation of the scenes. I find that, aside from the intrinsic drift in the spatial scale of the representation, there are not large qualitative differences between foveal and peripheral representations of ‘colorfulness’ and ‘sharpness’.
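The eccentricity dependence described above is often captured by scaling spatial frequency with a cortical-magnification-style factor before evaluating a foveal contrast sensitivity function. The sketch below illustrates that general idea only; the band-pass CSF shape, function name, and all parameter values are illustrative assumptions, not the paper's fitted model:

```python
import math

def csf(f_cpd: float, ecc_deg: float = 0.0,
        gain: float = 200.0, decay: float = 0.4, e2: float = 2.5) -> float:
    """Illustrative contrast sensitivity at spatial frequency f_cpd
    (cycles/deg) and eccentricity ecc_deg (deg).

    Peripheral viewing is modeled by scaling frequency with the factor
    (1 + ecc/E2) before applying a simple band-pass foveal CSF,
    S(f) = gain * f * exp(-decay * f). Parameters are not fitted to data.
    """
    f_eff = f_cpd * (1.0 + ecc_deg / e2)
    return gain * f_eff * math.exp(-decay * f_eff)

# Sensitivity to a 4 cpd grating drops steeply with eccentricity:
print(csf(4.0, 0.0) > csf(4.0, 10.0))  # True
```

The single scaling factor is what makes the peripheral representation a coarser copy of the foveal one, consistent with the abstract's conclusion that the differences are mainly a drift in spatial scale rather than a qualitative change.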
There must be a reason why an experience feels the way it does. A good place to begin addressing this question is spatial experience, because it may be more penetrable by introspection than other qualities of consciousness such as color or pain. Moreover, much of experience is spatial, from that of our body to the visual world, which appears as if painted on an extended canvas in front of our eyes. Because it is 'right there', we usually take space for granted and overlook its qualitative properties. However, we should realize that a great number of phenomenal distinctions and relations are required for the canvas of space to feel 'extended'. Here we argue that, to be experienced as extended, the canvas of space must be composed of countless spots, here and there, small and large, and these spots must be related to each other in a characteristic manner through connection, fusion, and inclusion. Other aspects of the structure of spatial experience follow from extendedness: every spot can be experienced as enclosing a particular region, with its particular location, size, boundary, and distance from other spots. We then propose an account of the phenomenal properties of spatial experiences based on integrated information theory (IIT). The theory provides a principled approach for characterizing both the quantity and quality of experience by unfolding the cause-effect structure of a physical substrate. Specifically, we show that a simple simulated substrate of units connected in a grid-like manner yields a cause-effect structure whose properties can account for the main properties of spatial experience. These results uphold the hypothesis that our experience of space is supported by brain areas whose units are linked by a grid-like connectivity.
They also predict that changes in connectivity, even in the absence of changes in activity, should lead to a warping of experienced space. To the extent that this approach provides an initial account of phenomenal space, it may also serve as a starting point for investigating other aspects of the quality of experience and their physical correspondents.
In their recent article, “The Unfolding Argument,” Doerig et al. argue that a theory of consciousness cannot be based on the characterization of the physical structure of the brain. They argue that such theories must be “either false or outside the realm of science”. Instead, they prefer theories of consciousness based only on “input-output” descriptions. By their implicit treatment of phenomenal structure as impossible to study, the authors seem to be advocating for a new mode of extreme methodological behaviorism. We take issue with their view, and describe an alternative approach to consciousness science. We clarify some ambiguities in Doerig et al.’s argument, critiquing three of their four premises and arriving at different conclusions. We then explain what makes causal structure theories of consciousness empirical and falsifiable. Specifically, we propose that consciousness science must work on ways to consider phenomenal structure, i.e., to derive the structure of experience from reports, and to search for isomorphism between physical and phenomenal structures. In essence, we argue that to really take consciousness seriously as an object of study, it is unavoidable that both phenomenal structure and the causal structure of a system must be central to any theory of consciousness.
A significant problem in neuroscience concerns distinguishing neural processing that is correlated with conscious percepts from processing that is not. Here, we tested whether a hierarchical structure of causal interactions between neuronal populations correlates with conscious perception. We derived the hierarchical causal structure as a pattern of integrated information, inspired by the integrated information theory of consciousness. We computed integrated information patterns from intracranial electrocorticography from 6 human neurosurgical patients with electrodes implanted over lateral and ventral cortices. During recording, subjects viewed continuous flash suppression and backward masking stimuli intended to dissociate conscious percept from stimulus, and unmasked suprathreshold stimuli. Object-sensitive areas revealed correspondence between conscious percepts and integrated information patterns. We quantified this correspondence using unsupervised classification methods that revealed clustering of visual experiences with integrated information, but not with broader information measures including mutual information and entropy. Our findings point to a significant role of locally integrated information for understanding the neural substrate of conscious object perception.
Visual space embodies all visual experiences, yet what determines the topographical structure of visual space remains unclear. Here we test a novel theoretical framework that proposes intrinsic lateral connections in visual cortex as the mechanism underlying the structure of visual space. The framework suggests that the strength of lateral connections between neurons in visual cortex shapes the experience of spatial relatedness between locations in the visual field. As such, an increase in lateral connection strength should lead to an increase in perceived relatedness and a contraction in perceived distance. To test this framework through human psychophysics experiments, we employed a Hebbian training protocol in which two point stimuli were flashed in synchrony at separate locations in the visual field, to strengthen the lateral connections between two separate groups of neurons in visual cortex. After training, participants experienced a contraction in perceived distance. Intriguingly, the perceptual contraction occurred not only between the two training locations that were linked directly by the changed connections, but also between the outward untrained locations that were linked indirectly through the changed connections. Moreover, the effect of training greatly decreased if the two training locations were too close together, or so far apart that they exceeded the extent of the lateral connections. These findings suggest that a local change in the strength of lateral connections is sufficient to alter the topographical structure of visual space.
It has been argued that the bandwidth of perceptual experience is low—that the richness of experience is illusory and that the amount of visual information observers can perceive and remember is extremely limited. However, the evidence suggests that this postulated poverty of experiential content is illusory and that visual phenomenology is immensely rich. To properly estimate perceptual content, experimentalists must move beyond the limitations of binary alternative-forced choice procedures and analyze reports of experience more broadly. This will open our eyes to the true richness of experience and to its neuronal substrates.
Perception necessarily entails combining separate sensory estimates into a single coherent whole. The perception of three-dimensional (3D) motion, for instance, can rely on two binocular cues: one related to the change in binocular disparity over time (CD) and the other related to interocular velocity differences (IOVD). Although previous work has shown that neither cue is strictly necessary for the perception of 3D motion (observers are able to judge 3D motion in displays in which one or the other cue has been eliminated), it is unclear whether or how the two cues are combined in situations in which both are present. We tested the visual performance of a sample of 81 individuals (Mage = 20.34, 49 females) in four main conditions that measured, respectively, static stereoacuity, CD, IOVD, and combined CD+IOVD sensitivity. We show that the sensitivity to the two binocular cues to 3D motion varies substantially across observers (CD: Md' = 1.01, SDd' = 1.1; IOVD: Md' = 1.16, SDd' = 1.03). Furthermore, sensitivity to the two cues was independent across observers (r[48] = 0.12, P = 0.42). Importantly, however, observed CD+IOVD performance was well-predicted based on the assumption that each observer combines the two cues in a statistically optimal fashion (r[79] = 0.75, P < 0.001). Our findings provide an explanation for the previously puzzling variability found in 3D perception across observers and laboratories, with some results suggesting that motion-in-depth percepts are largely determined by changes in binocular disparity, whereas others indicate that interocular velocity differences are key. Our results underline the existence of two complementary binocular mechanisms underlying 3D motion perception, with observers relying on these two mechanisms to different extents depending on their individual sensitivity.
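For independent cues, the "statistically optimal" prediction reduces to quadratic (Euclidean) summation of the single-cue sensitivities. A minimal sketch of that standard prediction (the function name is illustrative; the example d' values echo the sample means reported above, not any individual observer):

```python
import math

def combined_dprime(dprime_cd: float, dprime_iovd: float) -> float:
    """Predicted combined sensitivity for two statistically independent
    cues under optimal (maximum-likelihood) combination:
    d'_combined = sqrt(d'_CD^2 + d'_IOVD^2)."""
    return math.sqrt(dprime_cd ** 2 + dprime_iovd ** 2)

# A hypothetical observer at the sample means (CD: 1.01, IOVD: 1.16):
print(round(combined_dprime(1.01, 1.16), 2))  # 1.54
```

Because the prediction depends on each observer's own single-cue d' values, large individual differences in CD and IOVD sensitivity translate directly into the variability in combined performance that the abstract describes.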
The C1 is one of the earliest visual evoked potentials observed following the onset of a patterned stimulus. The polarity of its peak is dependent on whether stimuli are presented in the upper or lower regions of the peripheral visual field, but has been argued to be negative for stimuli presented to the fovea. However, there has yet to be a systematic investigation into the extent to which the peripheral C1 (pC1) and foveal C1 (fC1) can be differentiated on the basis of response characteristics to different stimuli. The current study employed checkerboard patterns (Exp 1) and sinusoidal gratings of different spatial frequency (Exp 2) presented to the fovea or within one of the four quadrants of the peripheral visual field. The checkerboard stimuli yielded a sizable difference in peak component latency, with the fC1 peaking ~32 ms after the pC1. Further, the pC1 showed a band-pass response magnitude profile that peaked at 4 cycles per degree (cpd), whereas the fC1 was high-pass for spatial frequency, with a cut-off around 4 cpd. Finally, the scalp topographies of the pC1 and fC1 in both experiments differed greatly, with the fC1 being more posterior than the pC1. The results reported here call into question recent attempts to characterize general C1 processes without regard to whether stimuli are placed in the fovea or in the periphery.
Electronic magnification of an image results in a decrease in its perceived contrast. The decrease in perceived contrast could be due to a perceived blur or to limited sampling of the range of contrasts in the original image. We measured the effect on perceived contrast of magnification in two contexts: either a small video was enlarged to fill a larger area, or a portion of a larger video was enlarged to fill the same area as the original. Subjects attenuated the source video contrast to match the perceived contrast of the magnified videos, with the effect increasing with magnification and decreasing with viewing distance. These effects are consistent with expectations based on both the contrast statistics of natural images and the contrast sensitivity of the human visual system. We demonstrate that local regions within videos usually have lower physical contrast than the whole, and that this difference accounts for a minor part of the perceived differences. Instead, visibility of 'missing content' (blur) in a video is misinterpreted as a decrease in contrast. We detail how the effects of magnification on perceived contrast can be measured while avoiding confounding factors.
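The claim that local regions usually have lower physical contrast than the whole can be illustrated by comparing the RMS contrast of a full image against that of its crops. A minimal sketch with NumPy (the gradient test image and crop size are illustrative assumptions, not the study's stimuli or analysis):

```python
import numpy as np

def rms_contrast(img: np.ndarray) -> float:
    """RMS contrast: standard deviation of luminance divided by mean luminance."""
    return float(img.std() / img.mean())

# Illustrative image: a smooth horizontal luminance gradient, 256x256.
img = np.tile(np.linspace(0.1, 0.9, 256), (256, 1))

whole = rms_contrast(img)
# Mean RMS contrast over non-overlapping 64x64 crops.
crops = [img[r:r + 64, c:c + 64]
         for r in range(0, 256, 64) for c in range(0, 256, 64)]
local = float(np.mean([rms_contrast(p) for p in crops]))
print(local < whole)  # True: each crop spans a narrower luminance range
```

Each crop samples only part of the full luminance range, so its standard deviation (and hence RMS contrast) is smaller than that of the whole image, which is the statistical effect the abstract says accounts for only a minor part of the perceived contrast loss.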
Homonymous hemianopia (HH) is an anisotropic visual impairment characterized by the binocular inability to see one side of the visual field. Patients with HH often misperceive visual space. Here we investigated how HH affects visual motor control. Seven patients with complete HH and no neglect or cognitive decline and seven gender- and age-matched controls viewed displays in which a target moved randomly along the horizontal or the vertical axis. They used a joystick to control the target movement to keep it at the center of the screen. We found that the mean deviation of the target position from the center of the screen along the horizontal axis was biased toward the blind side for five out of seven HH patients. More importantly, while the normal vision controls showed more precise control and larger response amplitudes when the target moved along the horizontal rather than the vertical axis, the control performance of the HH patients was not different between these two target motion experimental conditions. Compared with normal vision controls, HH affected patients' control performance when the target moved horizontally (i.e., along the axis of their visual impairment) rather than vertically. We conclude that hemianopia affects the use of visual information for online control of a moving target specific to the axis of visual impairment. The implications of the findings for driving in hemianopic patients are discussed.
Most experts hold that visual experience is remarkably sparse and its apparent richness is illuso... more Most experts hold that visual experience is remarkably sparse and its apparent richness is illusory. Indeed, we fail to notice the vast majority of what we think we see, and seem to rely instead on a high-level summary of a visual scene. However, we argue here that seeing is much more than noticing, and is in fact unfathomably rich. We distinguish among three levels of visual phenomenology: a high-level description of a scene based on the categorization of "objects," an intermediate level composed of "groupings" of simple visual features such as colors, and a baselevel visual field composed of "spots" and their spatial relations. We illustrate that it is impossible to see the objects that underlie a high-level description without seeing the groupings that compose them, and we cannot see the groupings without seeing the visual field to which they are bound. We then argue that the way the visual field feels-its spatial extendedness-can only be accounted for by a phenomenal structure composed of innumerable distinctions and relations. It follows that most of what we see has no functional counterpart-it cannot be used, reported, or remembered. And yet we see it.
This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properti... more This paper presents Integrated Information Theory (IIT) 4.0. IIT aims to account for the properties of experience in physical (operational) terms. It identifies the essential properties of experience (axioms), infers the necessary and sufficient properties that its substrate must satisfy (postulates), and expresses them in mathematical terms. In principle, the postulates can be applied to any system of units in a state to determine whether it is conscious, to what degree, and in what way. IIT offers a parsimonious explanation of empirical evidence, makes testable predictions, and permits inferences and extrapolations. IIT 4.0 incorporates several developments of the past ten years, including a more accurate translation of axioms into postulates and mathematical expressions, the introduction of a unique measure of intrinsic information that is consistent with the postulates, and an explicit assessment of causal relations. By fully unfolding a system's irreducible cause-effect power, the distinctions and relations specified by a substrate can account for the quality of experience.
Neuroscience has made remarkable advances in accounting for how the brain performs its various fu... more Neuroscience has made remarkable advances in accounting for how the brain performs its various functions. Consciousness, too, is usually approached in functional terms: the goal is to understand how the brain represents information, accesses that information, and acts on it. While useful for prediction, this functional, information-processing approach leaves out the subjective structure of experience: it does not account for how experience feels. Here, we consider a simple model of how a “grid-like” network meant to resemble posterior cortical areas can represent spatial information and act on it to perform a simple “fixation” function. Using standard neuroscience tools, we show how the model represents topographically the retinal position of a stimulus and triggers eye muscles to fixate or follow it. Encoding, decoding, and tuning functions of model units illustrate the working of the model in a way that fully explains what the model does. However, these functional properties have nothing to say about the fact that a human fixating a stimulus would also “see” it—experience it at a location in space. Using the tools of Integrated Information Theory, we then show how the subjective properties of experienced space—its extendedness—can be accounted for in objective, neuroscientific terms by the “cause-effect structure” specified by the grid-like cortical area. By contrast, a “map-like” network without lateral connections, meant to resemble a pretectal circuit, is functionally equivalent to the grid-like system with respect to representation, action, and fixation but cannot account for the phenomenal properties of space.
Objective correlates—behavioral, functional, and neural—provide essential tools for the scientifi... more Objective correlates—behavioral, functional, and neural—provide essential tools for the scientific study of consciousness. But reliance on these correlates should not lead to the ‘fallacy of misplaced objectivity’: the assumption that only objective properties should and can be accounted for objectively through science. Instead, what needs to be explained scientifically is what experience is intrinsically—its subjective properties—not just what we can do with it extrinsically. And it must be explained; otherwise the way experience feels would turn out to be magical rather than physical. We argue that it is possible to account for subjective properties objectively once we move beyond cognitive functions and realize what experience is and how it is structured. Drawing on integrated information theory, we show how an objective science of the subjective can account, in strictly physical terms, for both the essential properties of every experience and the specific properties that make particular experiences feel the way they do.
It is sometimes claimed that because the resolution and sensitivity of visual perception are bett... more It is sometimes claimed that because the resolution and sensitivity of visual perception are better in the fovea than in the periphery, peripheral vision cannot support the same kinds of colour and sharpness percepts as foveal vision. The fact that a scene nevertheless seems colourful and sharp throughout the visual field then poses a puzzle. In this study, I use a detailed model of human spatial vision to estimate the visibility of certain properties of natural scenes, including aspects of colourfulness, sharpness, and blurriness, across the visual field. The model is constructed to reproduce basic aspects of human contrast and colour sensitivity over a range of retinal eccentricities. I apply the model to colourful, complex natural scene images, and estimate the degree to which colour and edge information are present in the model's representation of the scenes. I find that, aside from the intrinsic drift in the spatial scale of the representation, there are not large qualitative differences between foveal and peripheral representations of 'colourfulness' and 'sharpness'.
In this paper I use a detailed model of human spatial vision to estimate the visibility of some p... more In this paper I use a detailed model of human spatial vision to estimate the visibility of some perceptual properties across the visual field, including aspects of colorfulness, sharpness, and blurriness. The model is constructed to reproduce several patterns of human contrast sensitivity, functions of contrast, scale and retinal eccentricity. I apply the model to colorful, complex natural scenes, and estimate the degree to which color and edge information are present in the model’s representation of the scenes. I find that, aside from the intrinsic drift in the spatial scale of the representation, there are not large qualitative differences between foveal and peripheral representations of ‘colorfulness’ and ‘sharpness’.
There must be a reason why an experience feels the way it does. A good place to begin addressing ... more There must be a reason why an experience feels the way it does. A good place to begin addressing this question is spatial experience, because it may be more penetrable by introspection than other qualities of consciousness such as color or pain. Moreover, much of experience is spatial, from that of our body to the visual world, which appears as if painted on an extended canvas in front of our eyes. Because it is 'right there', we usually take space for granted and overlook its qualitative properties. However, we should realize that a great number of phenomenal distinctions and relations are required for the canvas of space to feel 'extended'. Here we argue that, to be experienced as extended, the canvas of space must be composed of countless spots, here and there, small and large, and these spots must be related to each other in a characteristic manner through connection, fusion, and inclusion. Other aspects of the structure of spatial experience follow from extendedness: every spot can be experienced as enclosing a particular region, with its particular location, size, boundary, and distance from other spots. We then propose an account of the phenomenal properties of spatial experiences based on integrated information theory (IIT). The theory provides a principled approach for characterizing both the quantity and quality of experience by unfolding the cause-effect structure of a physical substrate. Specifically, we show that a simple simulated substrate of units connected in a grid-like manner yields a cause-effect structure whose properties can account for the main properties of spatial experience. These results uphold the hypothesis that our experience of space is supported by brain areas whose units are linked by a grid-like connectivity. 
They also predict that changes in connectivity, even in the absence of changes in activity, should lead to a warping of experienced space. To the extent that this approach provides an initial account of phenomenal space, it may also serve as a starting point for investigating other aspects of the quality of experience and their physical correspondents.
In their recent article, the unfolding argument, Doerig et al argue that a theory of consciousnes... more In their recent article, the unfolding argument, Doerig et al argue that a theory of consciousness cannot be based in the characterization of the physical structure of the brain. They argue that such theories must be “either false or outside the realm of science”. Instead, they prefer theories of consciousness based only on “input-output” descriptions. By their implicit treatment of phenomenal structure as impossible to study, the authors seem to be advocating for a new mode of extreme methodological behaviorism. We take issue with their view, and describe an alternate approach to consciousness science. We clarify some ambiguities in Doerig et al’s argument, critiquing three of their four premises, leading to different conclusions. We then explain what makes causal structure theories of consciousness empirical and falsifiable. Specifically, we propose that consciousness science must work on ways to consider phenomenal structure - i.e. to derive the structure of experience from reports, and to search for isomorphism between physical and phenomenal structures. In essence, we argue that to really take consciousness seriously as an object of study, it is unavoidable that both phenomenal structure and the causal structure of a system must be central to any theory of consciousness.
A significant problem in neuroscience is distinguishing neural processing that is correlated with conscious percepts from processing that is not. Here, we tested whether a hierarchical structure of causal interactions between neuronal populations correlates with conscious perception. We derived the hierarchical causal structure as a pattern of integrated information, inspired by the integrated information theory of consciousness. We computed integrated information patterns from intracranial electrocorticography recordings from 6 human neurosurgical patients with electrodes implanted over lateral and ventral cortices. During recording, subjects viewed continuous flash suppression and backward masking stimuli, intended to dissociate the conscious percept from the stimulus, as well as unmasked suprathreshold stimuli. Object-sensitive areas revealed correspondence between conscious percepts and integrated information patterns. We quantified this correspondence using unsupervised classification methods, which revealed clustering of visual experiences with integrated information but not with broader information measures, including mutual information and entropy. Our findings point to a significant role of locally integrated information in understanding the neural substrate of conscious object perception.
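For reference, the "broader information measures" that the abstract says failed to cluster with visual experience are standard quantities. A minimal sketch of entropy and mutual information from a joint distribution (integrated information itself additionally requires partitioning the system and is not shown; the example distribution is ours):

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(joint: np.ndarray) -> float:
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from a joint distribution."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Hypothetical joint distribution of two correlated binary channels:
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
mi = mutual_information(joint)  # ≈ 0.28 bits
```

Unlike integrated information, these measures are insensitive to how the information is structured across parts of the system, which is the contrast the study exploits.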
Visual space embodies all visual experiences, yet what determines the topographical structure of visual space remains unclear. Here we test a novel theoretical framework that proposes intrinsic lateral connections in visual cortex as the mechanism underlying the structure of visual space. The framework suggests that the strength of lateral connections between neurons in visual cortex shapes the experience of spatial relatedness between locations in the visual field. As such, an increase in lateral connection strength should lead to an increase in perceived relatedness and a contraction in perceived distance. To test this framework in human psychophysics experiments, we employed a Hebbian training protocol in which two point stimuli were flashed in synchrony at separate locations in the visual field, to strengthen the lateral connections between the two corresponding groups of neurons in visual cortex. After training, participants experienced a contraction in perceived distance. Intriguingly, the perceptual contraction occurred not only between the two training locations that were linked directly by the changed connections, but also between outward untrained locations that were linked indirectly through the changed connections. Moreover, the effect of training greatly decreased if the two training locations were too close together, or were too far apart and exceeded the extent of lateral connections. These findings suggest that a local change in the strength of lateral connections is sufficient to alter the topographical structure of visual space.
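The logic of the framework can be captured in a toy model. This is our illustration, not the paper's formalism: we assume a standard Hebbian weight update driven by synchronous coactivation, and a hypothetical monotonic mapping from connection strength to perceived distance:

```python
def hebbian_update(w: float, rate: float, pre: float, post: float) -> float:
    """One Hebbian step: strengthen w in proportion to coactivation."""
    return w + rate * pre * post

def perceived_distance(physical_dist: float, w: float, gain: float = 0.5) -> float:
    """Hypothetical mapping: stronger lateral coupling contracts
    perceived distance (our assumption, for illustration only)."""
    return physical_dist / (1.0 + gain * w)

w = 0.0
for _ in range(100):                    # repeated synchronous flashing
    w = hebbian_update(w, rate=0.01, pre=1.0, post=1.0)

before = perceived_distance(4.0, 0.0)   # degrees, pre-training
after = perceived_distance(4.0, w)      # contracted post-training
```

The model reproduces the qualitative prediction tested in the study: after coactivation-driven strengthening, the same physical separation is experienced as shorter.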
It has been argued that the bandwidth of perceptual experience is low—that the richness of experience is illusory and that the amount of visual information observers can perceive and remember is extremely limited. However, the evidence suggests that this postulated poverty of experiential content is illusory and that visual phenomenology is immensely rich. To properly estimate perceptual content, experimentalists must move beyond the limitations of binary alternative forced-choice procedures and analyze reports of experience more broadly. This will open our eyes to the true richness of experience and to its neuronal substrates.
Perception necessarily entails combining separate sensory estimates into a single coherent whole. The perception of three-dimensional (3D) motion, for instance, can rely on two binocular cues: one related to the change in binocular disparity over time (CD) and the other related to interocular velocity differences (IOVD). Although previous work has shown that neither cue is strictly necessary for the perception of 3D motion (observers are able to judge 3D motion in displays in which one or the other cue has been eliminated), it is unclear whether or how the two cues are combined in situations in which both are present. We tested the visual performance of a sample of 81 individuals (mean age = 20.34 years, 49 females) in four main conditions that measured, respectively, static stereoacuity, CD, IOVD, and combined CD+IOVD sensitivity. We show that sensitivity to the two binocular cues to 3D motion varies substantially across observers (CD: mean d' = 1.01, SD = 1.1; IOVD: mean d' = 1.16, SD = 1.03). Furthermore, sensitivity to the two cues was independent across observers (r[48] = 0.12, P = 0.42). Importantly, however, observed CD+IOVD performance was well predicted based on the assumption that each observer combines the two cues in a statistically optimal fashion (r[79] = 0.75, P < 0.001). Our findings provide an explanation for the previously puzzling variability found in 3D perception across observers and laboratories, with some results suggesting that motion-in-depth percepts are largely determined by changes in binocular disparity, whereas others indicate that interocular velocity differences are key. Our results underline the existence of two complementary binocular mechanisms underlying 3D motion perception, with observers relying on these two mechanisms to different extents depending on their individual sensitivity.
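One common formalization of "statistically optimal" combination of two independent cues under equal-variance signal detection theory is quadratic summation of sensitivities. The abstract does not spell out its model, so this is a sketch of the standard prediction rather than the authors' exact analysis:

```python
import math

def optimal_combined_dprime(d_cd: float, d_iovd: float) -> float:
    """Predicted combined sensitivity for two independent cues under
    optimal (quadratic-summation) integration: sqrt(d1^2 + d2^2)."""
    return math.sqrt(d_cd**2 + d_iovd**2)

# Plugging in the group-mean sensitivities reported in the abstract:
predicted = optimal_combined_dprime(1.01, 1.16)  # ≈ 1.54
```

Because the prediction is computed per observer from that observer's own CD and IOVD sensitivities, large individual differences in the single-cue conditions are exactly what would produce the between-laboratory variability the abstract describes.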
The C1 is one of the earliest visual evoked potentials observed following the onset of a patterned stimulus. The polarity of its peak depends on whether stimuli are presented in the upper or lower regions of the peripheral visual field, but has been argued to be negative for stimuli presented to the fovea. However, there has yet to be a systematic investigation into the extent to which the peripheral C1 (pC1) and foveal C1 (fC1) can be differentiated on the basis of response characteristics to different stimuli. The current study employed checkerboard patterns (Exp 1) and sinusoidal gratings of different spatial frequencies (Exp 2) presented to the fovea or within one of the four quadrants of the peripheral visual field. The checkerboard stimuli yielded a sizable difference in peak component latency, with the fC1 peaking ~32 ms after the pC1. Further, the pC1 showed a band-pass response magnitude profile that peaked at 4 cycles per degree (cpd), whereas the fC1 was high-pass for spatial frequency, with a cut-off around 4 cpd. Finally, the scalp topographies of the pC1 and fC1 in both experiments differed greatly, with the fC1 being more posterior than the pC1. The results reported here call into question recent attempts to characterize general C1 processes without regard to whether stimuli are placed in the fovea or in the periphery.
Electronic magnification of an image results in a decrease in its perceived contrast. The decrease in perceived contrast could be due to a perceived blur or to limited sampling of the range of contrasts in the original image. We measured the effect on perceived contrast of magnification in two contexts: either a small video was enlarged to fill a larger area, or a portion of a larger video was enlarged to fill the same area as the original. Subjects attenuated the source video contrast to match the perceived contrast of the magnified videos, with the effect increasing with magnification and decreasing with viewing distance. These effects are consistent with expectations based on both the contrast statistics of natural images and the contrast sensitivity of the human visual system. We demonstrate that local regions within videos usually have lower physical contrast than the whole, and that this difference accounts for a minor part of the perceived differences. Instead, visibility of 'missing content' (blur) in a video is misinterpreted as a decrease in contrast. We detail how the effects of magnification on perceived contrast can be measured while avoiding confounding factors.
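The claim that "local regions within videos usually have lower physical contrast than the whole" can be illustrated with a simple RMS-contrast computation. The synthetic gradient image below is our stand-in for the low spatial frequencies that dominate natural scenes; the paper's actual stimuli and contrast metric may differ:

```python
import numpy as np

def rms_contrast(img) -> float:
    """RMS contrast: standard deviation of luminance over its mean."""
    img = np.asarray(img, dtype=float)
    return float(img.std() / img.mean())

# Synthetic frame with a large-scale luminance gradient (hypothetical):
frame = np.tile(np.linspace(0.2, 1.0, 64), (64, 1))
patch = frame[:, 16:48]   # a local region, as when magnifying a crop

# The crop spans a narrower luminance range than the full frame,
# so its physical RMS contrast is lower.
```

This is why magnifying a crop lowers the available contrast range even before any blur is introduced; the study's point is that the blur, not this sampling effect, dominates the perceived loss.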
Homonymous hemianopia (HH) is an anisotropic visual impairment characterized by the binocular inability to see one side of the visual field. Patients with HH often misperceive visual space. Here we investigated how HH affects visuomotor control. Seven patients with complete HH and no neglect or cognitive decline, and seven gender- and age-matched controls, viewed displays in which a target moved randomly along the horizontal or the vertical axis. They used a joystick to control the target movement to keep it at the center of the screen. We found that the mean deviation of the target position from the center of the screen along the horizontal axis was biased toward the blind side for five out of seven HH patients. More importantly, while the normal-vision controls showed more precise control and larger response amplitudes when the target moved along the horizontal rather than the vertical axis, the control performance of the HH patients did not differ between these two target-motion conditions. Compared with normal-vision controls, HH affected patients' control performance when the target moved horizontally (i.e., along the axis of their visual impairment) rather than vertically. We conclude that hemianopia affects the use of visual information for online control of a moving target specifically along the axis of visual impairment. The implications of the findings for driving in hemianopic patients are discussed.