

Vision, Volume 8, Issue 2 (June 2024) – 23 articles

Cover Story: The question of whether the early visual cortex (EVC) is involved in visual mental imagery remains a topic of debate. This paper proposes that the inconsistency in findings can be explained by the unique challenges associated with investigating EVC activity during imagery. If the EVC represents visual details during imagery as it does during perception, any change in the visual details of the mental image would lead to corresponding changes in EVC activity. Therefore, the question should not be whether the EVC is ‘active’ during imagery but how its activity relates to specific imagery properties. Studies explicitly investigating this relationship consistently show that imagery can indeed recruit the EVC in similar ways as perception.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
17 pages, 1967 KiB  
Article
Unveiling Visual Acuity in 58,712 Four-Year-Olds: Standardized Assessment Defined Normative Visual Acuity Threshold
by Mirjana Bjeloš, Mladen Bušić, Benedict Rak, Ana Ćurić and Biljana Kuzmanović Elabjer
Vision 2024, 8(2), 39; https://doi.org/10.3390/vision8020039 - 19 Jun 2024
Viewed by 527
Abstract
The purpose was to define the threshold of normal visual acuity (VA), the mean monocular and binocular VA, and the interocular difference in a uniform cohort of healthy four-year-old children. All the children were recruited from the Croatian National Registry of Early Amblyopia Detection database. LEA Symbols® inline optotypes were used for VA testing at near and distance, binocularly and monocularly. The pass cut-off level was set to ≤0.1 logMAR. The final sample consisted of 58,712 four-year-old children. In total, 83.78% of the children had unremarkable results, and 16.22% were referred for examination. Of those, 92% were referred due to binocular causes and 8% due to monocular causes. The children referred due to binocular causes demonstrated a VA of 0.3 ± 0.24, while those referred due to monocular causes demonstrated a VA of 0.6 ± 0.21. The ROC curve analysis defined a uniform cut-off value for normative VA of 0.78. We analyzed the largest uniform cohort of 58,712 children and determined normative data for binocular and monocular VA tested with a gold-standard logMAR chart in four-year-old children. The results presented here provide no rationale for continuing to use historical protocols for testing VA in preschool children aged ≥ 4 years.
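The abstract reports an ROC-derived normative cut-off of 0.78, and the figure captions below mention the Youden index used to select it. The following Python sketch (not the authors' analysis code) illustrates how a Youden-optimal cut-off could be derived; the arrays `va` and `referred` are hypothetical stand-ins for the screening data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: decimal VA per child and whether screening referred them (1 = referred).
rng = np.random.default_rng(0)
va = np.clip(rng.normal(0.82, 0.24, 1000), 0.05, 1.25)
referred = (va + rng.normal(0, 0.1, va.size) < 0.7).astype(int)

# ROC over candidate VA cut-offs: lower VA should predict referral,
# so the negated VA is used as the score.
fpr, tpr, thresholds = roc_curve(referred, -va)
youden = tpr - fpr                     # Youden index J = sensitivity + specificity - 1
best = youden.argmax()
cutoff = -thresholds[best]             # undo the negation to recover a VA value
print(f"Optimal VA cut-off ~ {cutoff:.2f} (J = {youden[best]:.2f})")
```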
Figure 1: Histogram showing the frequency of visual acuity results. Mean visual acuity equals 0.82 ± 0.24. VA, visual acuity; N, number of children.
Figure 2: Histogram showing the frequency of certain visual acuity. Mean visual acuity of children who had an unremarkable screening result equals 0.91 ± 0.1. VA, visual acuity; N, number of children; n, number of VA exams.
Figure 3: Receiver operating characteristic (ROC) curve for the visual acuity (VA) testing. For all the VA tests, the defined cut-off value was 0.78. The mean visual acuity and area under the curve are presented in Table 1. The 95% CI and Youden index are presented in Table 2. The methodology employed for determining the Youden index for each of the parameters is presented in Tables S1–S6. The very large area-under-the-curve values were indicative of the high specificity and sensitivity of the tests.
Figure 4: Visual acuity analysis and sex-based variations. (A) Sex-based variations in visual acuity. (B) Interocular visual acuity difference. (C) Sex-based variations in median and Kolmogorov–Smirnov test. VA, visual acuity; F, female; M, male; 3, right near; 4, right distance; 5, left near; 6, left distance; N, number of VA exams.
Figure 5: Histograms showing the frequency of certain visual acuity (VA). Children referred due to bilateral causes (A); children referred due to monocular causes (B). VA, visual acuity; N, number of children; n, number of VA exams.
Figure 6: No sex-based variations were observed in the distribution of monocular (A) and binocular causes (B), or in the total number of VA exams (C). VA, visual acuity; F, female; M, male; N, number of VA exams.
9 pages, 541 KiB  
Article
Prevalence of Near-Vision-Related Symptoms in a University Population
by Jessica Gomes and Sandra Franco
Vision 2024, 8(2), 38; https://doi.org/10.3390/vision8020038 - 19 Jun 2024
Viewed by 607
Abstract
The university population has high visual demands. It is therefore important to assess the prevalence of symptoms in these subjects, which may affect their academic performance. In this cross-sectional study, a randomized sample of 252 subjects from a university answered the Convergence Insufficiency Symptom Survey (CISS) questionnaire. In addition, questions were asked about blurred vision during and after near tasks, the number of hours per day spent in near vision, and whether or not they wore glasses. Furthermore, 110 subjects underwent an eye exam, including a refraction and accommodation assessment. The mean age of the subjects was 28.79 ± 11.36 years, 62.3% reported wearing glasses, and on average 7.20 ± 2.92 hours/day were spent in near vision. The mean CISS score was 18.69 ± 9.96, and according to its criteria, 38% of the subjects were symptomatic. Some symptoms were significantly (p < 0.05) more frequent in subjects wearing glasses. Accommodative dysfunctions were present in 30.9% of the subjects, the most common being insufficiency of accommodation. Because this group spends many hours a day in near vision, we emphasise the importance of assessing symptomatology during the clinical examination, together with accommodation, binocular vision, the need to wear glasses, and the ergonomic work environment, all of which may be at the origin of the symptoms.
Figure 1: Mean score of the symptoms with statistically significant differences between subjects who wear and do not wear glasses. Error bars indicate the standard deviation.
13 pages, 754 KiB  
Review
Eyes on Memory: Pupillometry in Encoding and Retrieval
by Alex Kafkas
Vision 2024, 8(2), 37; https://doi.org/10.3390/vision8020037 - 14 Jun 2024
Viewed by 729
Abstract
This review critically examines the contributions of pupillometry to memory research, primarily focusing on its enhancement of our understanding of memory encoding and retrieval mechanisms mainly investigated with the recognition memory paradigm. The evidence supports a close link between pupil response and memory formation, notably influenced by the type of novelty detected. This proposal reconciles inconsistencies in the literature regarding pupil response patterns that may predict successful memory formation, and highlights important implications for encoding mechanisms. The review also discusses the pupil old/new effect and its significance in the context of recollection and in reflecting brain signals related to familiarity or novelty detection. Additionally, the capacity of pupil response to serve as a true memory signal and to distinguish between true and false memories is evaluated. The evidence provides insights into the nature of false memories and offers a novel understanding of the cognitive mechanisms involved in memory distortions. When integrated with rigorous experimental design, pupillometry can significantly refine theoretical models of memory encoding and retrieval. Furthermore, combining pupillometry with neuroimaging and pharmacological interventions is identified as a promising direction for future research. Full article
(This article belongs to the Special Issue Pupillometry)
Figure 1: Differential pupillary responses to novelty, their brain basis, and the different behavioral outputs. (a) Contextual novelty, characterized by the unexpected appearance of new events, triggers dopaminergic and noradrenergic signals to the hippocampus and parahippocampal gyrus. Significant dopaminergic contributions to the medial temporal lobe come from the midbrain, specifically the locus coeruleus and the substantia nigra/ventral tegmental area [32,47]. Their roles are evident in the sympathetic control of pupil dilation, where greater dilation correlates with enhanced memory formation [41]. (b) In contrast, absolute stimulus or expected novelty engages cholinergic inputs to the medial temporal lobe originating from the pedunculopontine nucleus and basal forebrain. Their impact appears to drive the parasympathetically mediated pupil constriction patterns, while the extent of pupil constriction is predictive of stronger memory formation [41]. M = missed/forgotten stimuli; F = familiar; R = recollected stimuli.
Figure 2: Pupil response patterns at retrieval. (a) The pupil old/new effect, with accurately recognized old stimuli accompanied by increased pupil dilation. The memory type (F = familiar; R = recollected) further differentiates the pupil dilation pattern in accurate recognition [38]. (b,c) Hypothetical causes of the pupil old/new effect: (b) driven by subjective memory experience irrespective of true old/new status, or (c) by objective old/new status irrespective of subjective memory response. (d,e) Pupil response discriminates true from false memories at different temporal stages during recognition memory decisions, depending on the type of reported memory (data from [57]).
13 pages, 1809 KiB  
Article
Subjective Affective Responses to Natural Scenes Require Understanding, Not Spatial Frequency Bands
by Serena Mastria, Maurizio Codispoti, Virginia Tronelli and Andrea De Cesarei
Vision 2024, 8(2), 36; https://doi.org/10.3390/vision8020036 - 4 Jun 2024
Viewed by 393
Abstract
It is debated whether emotional processing and response depend on semantic identification or are preferentially tied to specific information in natural scenes, such as global features or local details. The present study aimed to further examine the relationship between scene understanding and affective response while manipulating visual content. To this end, we presented affective and neutral natural scenes which were progressively band-filtered to contain global features (low spatial frequencies) or local details (high spatial frequencies) and assessed both affective response and scene understanding. We observed that, if scene content was correctly reported, subjective ratings of arousal and valence were modulated by the affective content of the scene, and this modulation was similar across spatial frequency bands. On the other hand, no affective modulation of subjective ratings was observed if picture content was not correctly reported. The present results indicate that subjective affective response requires content understanding, and it is not tied to a specific spatial frequency range. Full article
(This article belongs to the Section Visual Neuroscience)
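The abstract describes progressively band-filtering natural scenes so that they contain only global features (low spatial frequencies) or local details (high spatial frequencies). As a hedged illustration of the general idea only, the sketch below applies an ideal (hard-edged) FFT band-pass filter specified in cycles per image; the authors' actual filtering procedure and parameters may differ, and random noise stands in for a natural scene.

```python
import numpy as np

def bandpass_filter(image, low_cpi, high_cpi):
    """Keep only spatial frequencies between low_cpi and high_cpi (cycles per image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # cycles per image along y
    fx = np.fft.fftfreq(w) * w          # cycles per image along x
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    spectrum = np.fft.fft2(image)
    return np.fft.ifft2(spectrum * mask).real

# Hypothetical grayscale scene; a coarse low-frequency band vs. a fine high-frequency band.
scene = np.random.rand(512, 512)
global_features = bandpass_filter(scene, 0, 4)      # low spatial frequencies (global layout)
local_details = bandpass_filter(scene, 128, 512)    # high spatial frequencies (fine detail)
```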
Figure 1: (A) Original (on the top) and band-passed (on the bottom) versions of one sample picture. The image shown in the picture is not part of the experimental database and is © University of Bologna, licensed for research use. Filtering levels were adjusted to the printed version of the picture. (B) Procedure of each trial. After viewing each picture, participants rated their affective state of valence and arousal on a 1–9 scale. Next, they were asked to describe the gist of the scene.
Figure 2: Identification rate (from 0 or inaccurate to 1 or accurate) of degraded natural scenes as a function of emotional content and the five spatial frequency bands.
Figure 3: SAM ratings of arousal for pleasant, neutral, and unpleasant pictures as a function of the five spatial frequency bands when scene identification was achieved. The inset shows arousal scores in the lowest (4 cpi) and the highest (512 cpi) spatial frequency band as a function of emotional content when scene identification was not achieved. Error bars represent the within-participant standard error of the mean [49].
Figure 4: SAM ratings of valence for pleasant, neutral, and unpleasant pictures as a function of the five spatial frequency bands when scene identification was achieved. The inset shows valence scores in the lowest (4 cpi) and the highest (512 cpi) spatial frequency band as a function of emotional content when scene identification was not achieved. Error bars represent the within-participant standard error of the mean.
13 pages, 4536 KiB  
Communication
Dynamic Visual Acuity, Vestibulo-Ocular Reflex, and Visual Field in National Football League (NFL) Officiating: Physiology and Visualization Engineering for 3D Virtual On-Field Training
by Joshua Ong, Nicole V. Carrabba, Ethan Waisberg, Nasif Zaman, Hamza Memon, Nicholas Panzo, Virginia A. Lee, Prithul Sarker, Ashtyn Z. Vogt, Noor Laylani, Alireza Tavakkoli and Andrew G. Lee
Vision 2024, 8(2), 35; https://doi.org/10.3390/vision8020035 - 17 May 2024
Viewed by 1063
Abstract
The ability to make on-field, split-second decisions is critical for National Football League (NFL) game officials. Multiple principles of visual function are critical for the accuracy and precision of these play calls, including foveation time and unobstructed line of sight, static visual acuity, dynamic visual acuity, the vestibulo-ocular reflex, and a sufficient visual field. Prior research has shown that a standardized curriculum in these neuro-ophthalmic principles has demonstrated validity and self-rated improvements in understanding, confidence, and likelihood of future utilization by NFL game officials to maximize visual performance during officiating. Virtual reality technology may also be able to help optimize understanding of specific neuro-ophthalmic principles and simulate real-life gameplay. Personal communication between the authors and NFL officials and leadership has indicated that there is high interest in 3D virtual on-field training for NFL officiating. In this manuscript, we review the current and past research in this space regarding a neuro-ophthalmic curriculum for NFL officials. We then provide an overview of our current visualization engineering process for taking real-life 2D NFL gameplay data and creating 3D environments for virtual reality gameplay training, in which football officials can practice plays that highlight neuro-ophthalmic principles. We then review in depth the physiology behind these principles and discuss strategies to implement them in virtual reality for football officiating.
(This article belongs to the Special Issue Eye and Head Movements in Visuomotor Tasks)
Figure 1: Framework for potential football officiating training with extended reality. Traditional officiating (1A, 1B) includes watching gameplay analysis followed by on-field experience. Officiating training with extended reality (2A, 2B, 2C) may include gameplay analysis followed by an on-field, actively engaged, extended reality experience to simulate difficult plays that occurred in real life. This can be cycled with gameplay until officials are comfortable with on-field officiating at any level, particularly when starting to officiate at a higher-stakes level.
Figure 2: Stepwise visualization engineering framework for the 3D environment for football officiating training and simulation using real-life NFL play 2D data. Panel (A) demonstrates preprocessed data from real-life 2D NFL gameplay data that map out every individual on the field at specific time indices. Panel (B) showcases the 3D generation of "Actors" and the ball on the field through Unreal Engine that can play out real-life NFL play data. Panel (C) showcases the point of view on the field that can interact with real-life NFL play data and can be utilized with wearable virtual reality for NFL officiating VR simulation and training.
Figure 3: Mechanism of the vestibulo-ocular reflex for horizontal eye movement. Rotation of the head generates a reflexive pathway for compensating eye movement. MLF = medial longitudinal fasciculus. Reprinted with permission from CFCF in Wikimedia Commons under the Creative Commons Attribution 3.0 Unported license (https://creativecommons.org/licenses/by/3.0/legalcode.en; accessed on 1 October 2023).
Figure 4: Implementation of virtual reality training in neuro-ophthalmic principles to showcase changes in dynamic visual acuity based on running speed and vestibulo-ocular reflex adaptation, as well as retinal slip based on acceleration of head tilt and visual field changes based on positioning.
12 pages, 256 KiB  
Review
Corneal Nerve Assessment by Aesthesiometry: History, Advancements, and Future Directions
by Jordan R. Crabtree, Shadia Tannir, Khoa Tran, Charline S. Boente, Asim Ali and Gregory H. Borschel
Vision 2024, 8(2), 34; https://doi.org/10.3390/vision8020034 - 12 May 2024
Viewed by 996
Abstract
The measurement of corneal sensation allows clinicians to assess the status of corneal innervation and serves as a crucial indicator of corneal disease and eye health. Many devices are available to assess corneal sensation, including the Cochet–Bonnet aesthesiometer, the Belmonte Aesthesiometer, the Swiss Liquid Jet Aesthesiometer, and the newly introduced Corneal Esthesiometer Brill. Increasing the clinical use of in vivo confocal microscopy and optical coherence tomography will allow for greater insight into the diagnosis, classification, and monitoring of ocular surface diseases such as neurotrophic keratopathy; however, formal esthesiometric measurement remains necessary to assess the functional status of corneal nerves. These aesthesiometers vary widely in their mode of corneal stimulus generation and their relative accessibility, precision, and ease of clinical use. The development of future devices to optimize these characteristics, as well as further comparative studies between device types should enable more accurate and precise diagnosis and treatment of corneal innervation deficits. The purpose of this narrative review is to describe the advancements in the use of aesthesiometers since their introduction to clinical practice, compare currently available devices for assessing corneal innervation and their relative limitations, and discuss how the assessment of corneal innervation is crucial to understanding and treating pathologies of the ocular surface. Full article
15 pages, 1892 KiB  
Article
Graph Analysis of the Visual Cortical Network during Naturalistic Movie Viewing Reveals Increased Integration and Decreased Segregation Following Mild TBI
by Tatiana Ruiz, Shael Brown and Reza Farivar
Vision 2024, 8(2), 33; https://doi.org/10.3390/vision8020033 - 10 May 2024
Viewed by 833
Abstract
Traditional neuroimaging methods have identified alterations in brain activity patterns following mild traumatic brain injury (mTBI), particularly during rest, complex tasks, and normal vision. However, studies using graph theory to examine brain network changes in mTBI have produced varied results, influenced by the specific networks and task demands analyzed. In our study, we employed functional MRI to observe 17 mTBI patients and 54 healthy individuals as they viewed a simple, non-narrative underwater film, simulating everyday visual tasks. This approach revealed significant mTBI-related changes in network connectivity, efficiency, and organization. Specifically, the mTBI group exhibited higher overall connectivity and local network specialization, suggesting enhanced information integration without overwhelming the brain’s processing capabilities. Conversely, these patients showed reduced network segregation, indicating a less compartmentalized brain function compared to healthy controls. These patterns were consistent across various visual cortex subnetworks, except in primary visual areas. Our findings highlight the potential of using naturalistic stimuli in graph-based neuroimaging to understand brain network alterations in mTBI and possibly other conditions affecting brain integration. Full article
(This article belongs to the Section Visual Neuroscience)
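The abstract and Figure 3 below describe graph measures of connectivity degree, efficiency, clustering, and modularity computed over visual-cortex networks. A minimal sketch of how such global measures could be computed from a thresholded connectivity matrix with networkx is shown here; the matrix, threshold, and library choice are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Hypothetical binarized connectivity matrix between visual-cortex ROIs
# (e.g., thresholded correlations of fMRI time courses during movie viewing).
rng = np.random.default_rng(1)
corr = rng.uniform(-1, 1, (30, 30))
corr = (corr + corr.T) / 2                 # symmetrize
adj = (corr > 0.4).astype(int)
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

degree = np.mean([d for _, d in G.degree()])           # average connectivity degree
efficiency = nx.global_efficiency(G)                    # integration
clustering = nx.average_clustering(G)                   # local specialization
parts = community.greedy_modularity_communities(G)
modularity = community.modularity(G, parts)             # segregation
print(degree, efficiency, clustering, modularity)
```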
Figure 1: The two viewing conditions and frame grabs of the movie clip (IMAX underwater documentary). In the monoscopic viewing condition (left), only one of the two viewpoints was shown, while in the stereoscopic viewing condition (right), both views were seen and fused by the subject.
Figure 2: Data processing pipeline. Note that all spatial transformations were carried out in a single step, and the results of the pre-processing steps were visually inspected for validity prior to next steps.
Figure 3: Measures of network organization. Schematic illustration of increasing levels of connectivity degree, efficiency, and modularity. Nodes are represented as dots, and edges are represented as lines connecting the nodes. While the measures are estimated over the whole network, they are calculated either for every node or for all node pairs. Average connectivity degree represents the average number of connections a node may have; for a single node (i.e., the red node), connectivity degree means the number of nodes it is connected to. Average efficiency represents the degree of connectivity between pairs: moving from one node (red) to another (yellow) involves many intervening points in the "low" efficiency level, and the number of intervening points decreases as efficiency in the network increases. Modularity captures the extent to which nodes cluster together and away from other clusters. Clustering captures the extent to which all possible connections between nodes are realized; the graph with lower clustering has fewer connections realized compared to the network with higher clustering.
Figure 4: Global network measures of the visual cortex during naturalistic viewing in mTBI participants and healthy controls. All measures reached statistically significant differences between the two groups.
Figure 5: Measures of subnetwork organization during naturalistic viewing in mTBI and healthy control participants.
9 pages, 351 KiB  
Article
In the Eyes of the Future: Eye Movement during Near and Distant Future Thinking
by Mohamad El Haj and Ahmed A. Moustafa
Vision 2024, 8(2), 32; https://doi.org/10.3390/vision8020032 - 10 May 2024
Viewed by 943
Abstract
Research has suggested that near future events are typically viewed from a first-person (an own-eyes, also known as field) perspective while distant future events are typically viewed from a third-person (an observer) perspective. We investigated whether these distinct mental perspectives would be accompanied by distinct eye movement activities. We invited participants to imagine near and distant future events while their eye movements (i.e., scan path) were recorded by eye-tracking glasses. Analysis demonstrated fewer but longer fixations for near future thinking than for distant future thinking. Analysis also demonstrated more “field” mental visual perspective responses for near than for distant future thinking. The long fixations during near future thinking may mirror a mental visual exploration involving processing of a more complex visual representation compared with distant future thinking. By demonstrating how near future thinking triggers both “field” responses and long fixations, our study demonstrates how the temporality of future thinking triggers both distinct mental imagery and eye movement patterns. Full article
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)
Figure 1: Illustration of procedures.
19 pages, 4185 KiB  
Review
Visual Deficits and Diagnostic and Therapeutic Strategies for Neurofibromatosis Type 1: Bridging Science and Patient-Centered Care
by Kiyoharu J. Miyagishima, Fengyu Qiao, Steven F. Stasheff and Francisco M. Nadal-Nicolás
Vision 2024, 8(2), 31; https://doi.org/10.3390/vision8020031 - 9 May 2024
Viewed by 993
Abstract
Neurofibromatosis type 1 (NF1) is an inherited autosomal dominant disorder primarily affecting children and adolescents characterized by multisystemic clinical manifestations. Mutations in neurofibromin, the protein encoded by the Nf1 tumor suppressor gene, result in dysregulation of the RAS/MAPK pathway leading to uncontrolled cell growth and migration. Neurofibromin is highly expressed in several cell lineages including melanocytes, glial cells, neurons, and Schwann cells. Individuals with NF1 possess a genetic predisposition to central nervous system neoplasms, particularly gliomas affecting the visual pathway, known as optic pathway gliomas (OPGs). While OPGs are typically asymptomatic and benign, they can induce visual impairment in some patients. This review provides insight into the spectrum and visual outcomes of NF1, current diagnostic techniques and therapeutic interventions, and explores the influence of NF1-OPGS on visual abnormalities. We focus on recent advancements in preclinical animal models to elucidate the underlying mechanisms of NF1 pathology and therapies targeting NF1-OPGs. Overall, our review highlights the involvement of retinal ganglion cell dysfunction and degeneration in NF1 disease, and the need for further research to transform scientific laboratory discoveries to improved patient outcomes. Full article
(This article belongs to the Section Visual Neuroscience)
Figure 1: Diagram illustrating the formation of optic pathway gliomas (OPGs) in children with Neurofibromatosis Type 1. Under normal conditions, growth factors stimulate the activation of RAS-GDP to RAS-GTP through son of sevenless (SOS). Neurofibromin (Nf1 gene) regulates the conversion of RAS-GTP to its inactive form (RAS-GDP), thereby modulating cell growth and migration through the MAPK/ERK and mTOR pathways. In NF1 patients, mutations of neurofibromin significantly reduce its natural activity, resulting in abnormal hyperactivation of the MAPK/ERK and mTOR pathways. Consequently, uncontrolled cell growth and migration lead to the development of gliomas in the optic pathway, potentially affecting vision. Created with BioRender.com.
Figure 2: Visual field defects based on the location and size of the axonal damage in the optic pathway. (A) Schematic representation of the retinal ganglion cell (RGC) axon projection to superior brain areas for visual processing in the brain. Ipsilaterally (blue) or contralaterally (black and red) projecting RGCs within the optic nerves. (B) Depiction of individualized visual field deficits in patients with axonal damage in the optic pathway corresponding to their respective scheme, in which red marks represent OPG size and location. Drawings based on concepts presented in [56,57,67].
9 pages, 1246 KiB  
Article
A Morphometric Study of the Pars Plana of the Ciliary Body in Human Cadaver Eyes
by Jaime Guedes, Bruno F. Fernandes, Denisse J. Mora-Paez, Rodrigo Brazuna, Alexandre Batista da Costa Neto, Dillan Cunha Amaral, Adriano Cypriano Faneli, Ricardo Danilo Chagas Oliveira, Adroaldo de Alencar Costa Filho and Adalmir Morterá Dantas
Vision 2024, 8(2), 30; https://doi.org/10.3390/vision8020030 - 8 May 2024
Viewed by 822
Abstract
This study aimed to determine the pars plana length in postmortem human eyes using advanced morphometric techniques and to correlate it with demographic and ocular variables such as age, sex, ethnicity, and axial length. Between February and July 2005, we conducted a cross-sectional observational study on 46 human cadaver eyes deemed unsuitable for transplant by the SBO Eye Bank. The morphometric analysis was performed on projected images using a surgical microscope and a video-microscopy system with a 20.5:1 correction factor. The pars plana length was measured three times per quadrant, with the final value being the mean of these measurements. Of the 46 eyes collected, 9 were unsuitable for the study due to technical constraints in conducting intraocular measurements. Overall, the average axial length was 25.20 mm. The average pars plana length was 3.8 mm in all quadrants, with no measurements below 2.8 mm or above 4.9 mm. There were no statistically significant variations across quadrants or with age, sex, axial length, or laterality. Accurately defining the pars plana dimensions is crucial for safely accessing the posterior segment of the eye and minimizing complications during intraocular procedures, such as intravitreal injections and vitreoretinal surgeries.
Figure 1: (A) Crystalline-vitreous complex removed after dissection. (B) Arrangement of the dissected eye for measurements.
8 pages, 491 KiB  
Review
Uncovering the Role of the Early Visual Cortex in Visual Mental Imagery
by Nadine Dijkstra
Vision 2024, 8(2), 29; https://doi.org/10.3390/vision8020029 - 2 May 2024
Viewed by 1413
Abstract
The question of whether the early visual cortex (EVC) is involved in visual mental imagery remains a topic of debate. In this paper, I propose that the inconsistency in findings can be explained by the unique challenges associated with investigating EVC activity during imagery. During perception, the EVC processes low-level features, which means that activity is highly sensitive to variation in visual details. If the EVC has the same role during visual mental imagery, any change in the visual details of the mental image would lead to corresponding changes in EVC activity. Within this context, the question should not be whether the EVC is ‘active’ during imagery but how its activity relates to specific imagery properties. Studies using methods that are sensitive to variation in low-level features reveal that imagery can recruit the EVC in similar ways as perception. However, not all mental images contain a high level of visual details. Therefore, I end by considering a more nuanced view, which states that imagery can recruit the EVC, but that does not mean that it always does so. Full article
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)
Figure 1: Challenging properties of early visual cortex (EVC) activity during imagery. (A) Activity in the EVC is highly sensitive to changes in low-level features, such as where in visual space a stimulus is located. Top row indicates toy visual signals, spreading over the visual field. Bottom row indicates the activity pattern in a grid of EVC neurons. (B) Imagery is thought to be instantiated through inhibitory feedback connections, leading to the sharpening of representations rather than increases in general activation levels. This means that the EVC can still be involved in imagery even if there is no change or even a decrease in general activation.
14 pages, 2200 KiB  
Article
Further Examination of the Pulsed- and Steady-Pedestal Paradigms under Hypothetical Parvocellular- and Magnocellular-Biased Conditions
by Jaeseon Song, Bruno G. Breitmeyer and James M. Brown
Vision 2024, 8(2), 28; https://doi.org/10.3390/vision8020028 - 30 Apr 2024
Viewed by 831
Abstract
The pulsed- and steady-pedestal paradigms were designed to track increment thresholds (ΔC) as a function of pedestal contrast (C) for the parvocellular (P) and magnocellular (M) systems, respectively. These paradigms produce contrasting results: linear relationships between ΔC and C are observed in the pulsed-pedestal paradigm, indicative of the P system’s processing, while the steady-pedestal paradigm reveals nonlinear functions, characteristic of the M system’s response. However, we recently found the P model fits better than the M model for both paradigms, using Gabor stimuli biased towards the M or P systems based on their sensitivity to color and spatial frequency. Here, we used two-square pedestals under green vs. red light in the lower-left vs. upper-right visual fields to bias processing towards the M vs. P system, respectively. Based on our previous findings, we predicted the following: (1) steeper ΔC vs. C functions with the pulsed than the steady pedestal due to different task demands; (2) lower ΔCs in the upper-right vs. lower-left quadrant due to its bias towards P-system processing there; (3) no effect of color, since both paradigms track the P-system; and, most importantly (4) contrast gain should not be higher for the steady than for the pulsed pedestal. In general, our predictions were confirmed, replicating our previous findings and providing further evidence questioning the general validity of using the pulsed- and steady-pedestal paradigms to differentiate the P and M systems. Full article
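Figure 2 in the caption list below refers to contrast-response functions modeled with the Michaelis–Menten formula and a saturation constant Csat. As an illustration only (the authors fit Equation (2) from Pokorny and Smith's models, which is not reproduced here), the sketch below fits a saturating Michaelis–Menten function to made-up contrast-response data with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, r_max, c_sat):
    """Saturating contrast-response function: R(C) = Rmax * C / (C + Csat)."""
    return r_max * c / (c + c_sat)

# Hypothetical contrast-response data (pedestal contrast vs. response amplitude).
contrast = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
response = np.array([0.35, 0.62, 1.00, 1.45, 1.75, 1.95])

params, _ = curve_fit(michaelis_menten, contrast, response, p0=[2.0, 0.1])
r_max, c_sat = params
print(f"Rmax = {r_max:.2f}, Csat = {c_sat:.3f}")
```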
Figure 1: Visual hemifield (upper vs. lower and left vs. right) differences in preferential M- and P-activation patterns. The resulting strongest preferences, depicted by the bold double arrow, of the M and the P systems are found, respectively, in the lower-left and upper-right VFs. Progressively larger activation biases are given by entries with progressively larger print.
Figure 2: Best-fitting normalized CRF, modeled with the Michaelis–Menten formula, for BOLD responses obtained from human V2 P-innervated thin stripes and V2 M-innervated thick stripes. Dashed lines indicate Csat values. Adapted from Tootell and Nasr [50].
Figure 3: (a) The steady- and pulsed-pedestal paradigms for the lower-left VF condition. For the steady-pedestal paradigm (top row), a two-squares-array pedestal was presented continuously in the center of a constant surround. In each trial, one of the two squares was randomly selected to have an increased luminance, designating it as the test square, while the other remained unaltered, serving as the reference. During the testing interval, this luminance-increased test square was briefly displayed (35 ms). In the pulsed-pedestal paradigm (bottom row), participants initially adapted to the surrounding luminance. Then, in the test interval, both the test and reference squares were presented at the same time. For both paradigms, guides to aid fixation were consistently visible. The two-square array and its background were uniformly either red or green, both set to a physical equiluminance. (b) Test examples for the upper-right VF (top) and the lower-left VF (bottom) conditions. The increment is on the right in the lower-left VF example and on the left in the upper-right example.
Figure 4: Contrast increment thresholds (ΔCs) as a function of pedestal contrast for pulsed-pedestal (blue) and steady-pedestal (purple) conditions in the lower-left (solid lines) and upper-right (dashed lines) VF quadrants. The curves are smoothed representations of our data (not model-fitted curves derived from Pokorny and Smith's theoretical models). The color data are combined as they did not show statistical significance.
Figure 5: As indicated at the top of the figure, the left and right panels display contrast-increment thresholds (ΔCs) against square-pedestal contrast for pulsed and steady pedestals in the lower-left and upper-right VF quadrants, respectively. The data for red and green stimulus colors are displayed separately in the upper panels, while the lower panels show the results averaged across the colors. Additionally, the lower panels include the standard errors of the mean (SEMs) for each data point. The curves in all panels are fitted using Equation (2) from Pokorny and Smith's [11] models. The parenthesized values denote the R² values, reflecting the best fit of Equation (2) to the respective data.
14 pages, 2167 KiB  
Article
Less Is More: Higher-Skilled Sim Racers Allocate Significantly Less Attention to the Track Relative to the Display Features than Lower-Skilled Sim Racers
by John M. Joyce, Mark J. Campbell, Fazilat Hojaji and Adam J. Toth
Vision 2024, 8(2), 27; https://doi.org/10.3390/vision8020027 - 29 Apr 2024
Viewed by 1168
Abstract
Simulated (sim) racing is an emerging esport that has garnered much interest in recent years and has been relatively under-researched in terms of expertise and performance. When examining expertise, visual attention has been of particular interest to researchers, with eye tracking technology commonly used to assess visual attention. In this study, we examined the overt visual attention allocation of high- and low-skilled sim racers during a time trial task using Tobii 3 glasses. In the study, 104 participants were tested on one occasion, with 88 included in the analysis after exclusions. Participants were allocated to either group according to their fastest lap times. Independent t-tests were carried out with Šidák corrections to test our hypotheses. Our results indicate that when eye tracking metrics were normalised to the lap time and corner sector time, there was a difference in the relative length of overt attention allocation (fixation behaviour), as lower-skilled racers had significantly greater total fixation durations in laps overall and across corner sectors when normalised (p = 0.013; p = 0.018). Interestingly, high- and low-skilled sim racers differed in where they allocated their attention during the task, with high-skilled sim racers allocating significantly less overt attention to the track relative to other areas of the display (p = 0.003). This would allow higher-skilled racers to obtain relatively more information from heads-up display elements in-game, all whilst driving at faster speeds. This study provides evidence that high-skilled sim racers appear to need significantly less overt attention throughout a fast lap, and that high- and low-skilled sim racers differ in where they allocate their attention while racing.
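The analysis normalises fixation metrics to lap time and corner-sector time (e.g., fixations per second, total fixation duration as a proportion of the lap). Below is a small Python sketch of one plausible way to compute such raw and normalised metrics for a single lap; the input values are invented and the authors' exact definitions may differ.

```python
import numpy as np

def fixation_metrics(fix_durations_ms, lap_time_s):
    """Raw and lap-time-normalised fixation metrics for one lap (hypothetical definitions)."""
    fix = np.asarray(fix_durations_ms, dtype=float)
    fc = fix.size                              # fixation count
    afd = fix.mean()                           # average fixation duration (ms)
    tfd = fix.sum()                            # total fixation duration (ms)
    return {
        "FC": fc, "AFD": afd, "TFD": tfd,
        "FCn": fc / lap_time_s,                # fixations per second
        "AFDn": (afd / 1000) / lap_time_s,     # AFD as a proportion of lap time
        "TFDn": (tfd / 1000) / lap_time_s,     # TFD as a proportion of lap time
    }

# Hypothetical lap: ~250 fixations averaging ~280 ms over an 88 s lap.
print(fixation_metrics(np.random.default_rng(2).normal(280, 60, 250), 88.0))
```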
Figure 1: The Brands Hatch Circuit used in data collection. Figure shows the corner sectors used in the current study, with corners seen in black. Adapted from Hojaji and colleagues [24].
Figure 2: Snapshot image and the areas of interest (AOIs) for track (blue) (1) and HUD elements (yellow). HUD elements in the image include the lap time (2), circuit map (3), in-car information screen (4), tyre and brake temperatures (5), and speedometer (6).
Figure 3: Truncated violin plots for non-normalised lap time fixation metrics; (A) fixation count (FC), (B) average fixation duration in milliseconds (AFD), and (C) total fixation duration in milliseconds (TFD) during participants' fastest lap times. Data are presented as frequency distributions with medians (solid line) and quartiles (dashed lines) for lower-skilled (blue) and higher-skilled (red) sim racers. Significance (***) is denoted where p < 0.001.
Figure 4: Truncated violin plots for lap-time-normalised fixation metrics; (A) fixations per second (FCn), (B) average fixation duration as a proportion of participants' fastest lap time (AFDn), and (C) total fixation duration as a proportion of participants' fastest lap time (TFDn). Data are presented as frequency distributions with medians (solid line) and quartiles (dashed lines) for lower-skilled (blue) and higher-skilled (red) sim racers. Significance (*) is denoted where p < 0.05.
Figure 5: Truncated violin plots for the ratio of on-track to HUD fixation count (TRACK:HUD). Data are presented as frequency distributions with medians (solid line) and quartiles (dashed lines) for lower-skilled (blue) and higher-skilled (red) sim racers. Dotted line represents the ratio of the AOI areas of TRACK to HUD. Significance (**) is denoted where p < 0.01.
Figure 6: Truncated violin plots for non-normalised fixation metrics in corners; (A) fixation count (Corner FC), (B) average fixation duration in milliseconds (Corner AFD), and (C) total fixation duration in milliseconds (Corner TFD) during participants' fastest lap times. Data are presented as frequency distributions with medians (solid line) and quartiles (dashed lines) for lower-skilled (blue) and higher-skilled (red) sim racers. Significance (***) is denoted where p < 0.001.
Figure 7: Truncated violin plots for normalised fixation metrics in corners; (A) fixations per second (Corner FCn), (B) average fixation duration as a proportion of participants' corner time (Corner AFDn), and (C) total fixation duration as a proportion of participants' corner time (Corner TFDn). Data are presented as frequency distributions with medians (solid line) and quartiles (dashed lines) for lower-skilled (blue) and higher-skilled (red) sim racers. Significance (*) is denoted where p < 0.05.
16 pages, 9725 KiB  
Communication
Biological Sunglasses in a Deep-Sea Squid: Pigment Migration in the Retina of Gonatus onyx
by Ryan B. Howard, Jessica Kniller, Kathrin S. R. Bolstad and Monica L. Acosta
Vision 2024, 8(2), 26; https://doi.org/10.3390/vision8020026 - 25 Apr 2024
Viewed by 1063
Abstract
The outward migration of ommin pigment granules from the bases to the tips of the photoreceptors in response to light has been reported in the retina of several (mostly coastal) squid species. Following exposure to light and then dark conditions, we collected and processed retinal tissue from juvenile specimens of a deep-sea oegopsid squid, Gonatus onyx. We aimed to determine whether the ommin pigment returns to baseline, and to investigate the presence of glutamate neurotransmitter signaling under both dark and light conditions. We confirmed the presence of ommin granules but observed variability in the return of pigment to the basal layer in dark conditions, as well as changes in glutamate distribution. These findings provide support for the migration of retinal ommin pigment granules as a mechanism for regulating incoming light. Full article
(This article belongs to the Special Issue Vision in Aquatic Environment—Volume II)
Figure 1: Juvenile ‘black-eyed’ squid, Gonatus onyx, feeding on a myctophid. © Monterey Bay Aquarium Research Institute. Scale bar = 1 cm.
Figure 2: (a) Transverse cross-section of a juvenile G. onyx (15 mm ML) eye. (b) Enlarged section of b' depicting retinal layers. (c) Enlarged section of c' depicting the iris. (d) Enlarged section of d' depicting the transition from the retina to the ciliary body. The asterisks indicate the cartilaginous parts of the sclera (s). Ommin granules are not visible due to glutaraldehyde preservation (see Section 2). Abbreviations: dorsal (do), ventral (ve), posterior (po), anterior (an), inner segment of photoreceptors (isp), basal membrane (bm), supporting cells (sc), expected location of ommin layer (ol), outer segment of photoreceptors (osp), limiting membrane (lm), sclera (s). Scale bars = 20 µm.
Figure 3: Toluidine blue staining of a horizontally sectioned juvenile Gonatus onyx retina at the photoreceptor level. The lattice-like arrangement of rhabdomeres in the outer segment is typical of cephalopods. Ommin granules are not visible due to glutaraldehyde preservation (see Section 2), but the ommin layer (ol) should be near the base of the outer segment of the photoreceptors (osp). Abbreviations: inner segment of the photoreceptors (isp), basal membrane (bm), supporting cells (sc), expected location of ommin layer (ol), outer segment of the photoreceptors (osp). Scale bar = 20 µm.
Figure 4: Ommin pigment location in the retina of a juvenile G. onyx exposed to 90 min of light at 350 lux then subjected to darkness for 24 h. The screening pigment of this specimen was preserved, revealing the single-cell resolution and location of ommin pigment granules. No pigment was observed at the distal tip of the photoreceptors. Arrows indicate the location of migrating ommin pigment. Abbreviations: inner segment of the photoreceptors (isp), basal membrane (bm), supporting cells (sc), ommin layer (ol), outer segment of the photoreceptors (osp). Scale bar = 20 µm.
Figure 5: Ommin pigment location in the specimen exposed to 30 min of light at 350 lux then subjected to darkness for 20 min (G07). The screening pigment of this specimen was preserved, revealing the location of ommin pigment granules close to the basal ommin layer and absent from the tip of the photoreceptors for all retinal sections. The top row of images depicts the dorsal section of the retina, the middle row depicts the central section, and the bottom row depicts the ventral section. Arrows indicate the location of migrating ommin pigment. Abbreviations: outer segment of the photoreceptors (osp), ommin layer (ol), basal membrane (bm), inner segment of the photoreceptors (isp), rudimentary cartilage (c), plexiform layer (pl). Scale bars = 20 µm.
Figure 6: Ommin pigment location in the G. onyx specimen exposed to 20 min of light at 350 lux then subjected to darkness for 75 min. The screening pigment of this specimen was preserved, revealing the location of ommin pigment granules close to the ommin layer and absent from the tip of the photoreceptors. The top row of images depicts the dorsal section of the retina, the middle row depicts the central section, and the bottom row depicts the ventral section. Arrows indicate the location of migrating ommin pigment. Abbreviations: outer segment of the photoreceptors (osp), ommin layer (ol), basal membrane (bm), inner segment of the photoreceptors (isp), rudimentary cartilage (c), plexiform layer (pl). Scale bars = 20 µm.
Figure 7: Ommin pigment granule migration in the outer segment of the photoreceptors of Group 1 juvenile (ML 10–20 mm) G. onyx retinas following exposure to light (350 lux) for 20 min, followed by dark (minutes indicated below each cross-section). Percentage values to the right of the image and dotted lines refer to the outer segment of the photoreceptors. Abbreviations: ommin layer (ol), migrating ommin pigment (mop), ommin-free outer segment of the photoreceptors (osp), limiting membrane (lm). All images depicted at 40× magnification. Scale bar = 20 µm.
Figure 8: Ommin pigment granule migration in the outer segment of the photoreceptors of Group 2 juvenile (ML 10–20 mm) G. onyx retinas following exposure to light (350 lux) for 30 min, followed by dark (minutes indicated below each cross-section). Percentage values on the side of the image and dotted line refer to photoreceptor length. Abbreviations: ommin layer (ol), migrating ommin pigment (mop), ommin-free outer segment of the photoreceptors (osp), limiting membrane (lm). All images depicted at 40× magnification. Scale bar = 20 µm.
Figure 9: Transverse retinal cross-sections of the inner photoreceptor layer of juvenile G. onyx (G12–G13, both 15 mm ML). (a) Retinal sections without primary anti-glutamate serve as negative controls. (b) Retinal sections labeled with antibodies indicate the expression of glutamate neurotransmitters (dark grey areas). Glutamate expression in retinas subjected to darkness for 90 min (G12, top row) or without dark adaptation (G13, bottom row) following light exposure (350 lux for 30 min). Arrows indicate examples of cells reactive to the antibody. Arrowheads indicate ommin pigment granules within the supporting cells not destroyed by the preservation process. Abbreviations: supporting cells (sc), basal membrane (bm), inner layer of photoreceptors (isp). Scale bar = 20 µm.
13 pages, 1552 KiB  
Article
A Pilot Study to Improve Cognitive Performance and Pupil Responses in Mild Cognitive Impaired Patients Using Gaze-Controlled Gaming
by Maria Solé Puig, Patricia Bustos Valenzuela, August Romeo and Hans Supèr
Vision 2024, 8(2), 25; https://doi.org/10.3390/vision8020025 - 24 Apr 2024
Viewed by 1066
Abstract
Mild cognitive impairment (MCI) may progress to severe forms of dementia, so therapy is needed to maintain cognitive abilities. The neural circuitry for oculomotor control is closely linked to that which controls cognitive behavior. In this study, we tested whether training the oculomotor system with gaze-controlled video games could improve cognitive behavior in MCI patients. Patients played a simple game for 2–3 weeks while a control group played the same game using a mouse. Cognitive improvement was assessed using the MoCA screening test and CANTAB. We also measured eye pupil and vergence responses in an oddball paradigm. The results showed an increased score on the MoCA test specifically for the visuospatial domain and on the Rapid Visual Information Processing test of the CANTAB battery. Pupil responses also increased to target stimuli. Patients in the control group did not show significant improvements. This pilot study provides evidence for the potential cognitive benefits of gaze-controlled gaming in MCI patients. Full article
(This article belongs to the Section Visual Neuroscience)
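For readers unfamiliar with how oddball pupil responses of this kind are typically quantified, a minimal sketch is given below: pupil traces are cut into epochs around target and distractor onsets, baseline-corrected, and averaged. The array names, the 60 Hz sampling rate, and the epoch window are illustrative assumptions; this is not the authors' analysis pipeline.

```python
import numpy as np

def epoch_pupil(trace, event_idx, fs=60, pre=0.5, post=2.0):
    """Cut baseline-corrected epochs (pre..post seconds) around each event sample index."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for i in event_idx:
        if i - n_pre < 0 or i + n_post > len(trace):
            continue  # skip events too close to the recording edges
        seg = trace[i - n_pre:i + n_post].astype(float)
        seg -= seg[:n_pre].mean()  # subtract the pre-stimulus baseline
        epochs.append(seg)
    return np.vstack(epochs)

# Illustrative data: a 5-minute pupil trace at 60 Hz and hypothetical event samples.
rng = np.random.default_rng(0)
pupil = 3.5 + 0.05 * rng.standard_normal(5 * 60 * 60)            # diameter in mm
targets = rng.choice(len(pupil) - 200, size=30, replace=False) + 60
distractors = rng.choice(len(pupil) - 200, size=120, replace=False) + 60

target_mean = epoch_pupil(pupil, targets).mean(axis=0)
distractor_mean = epoch_pupil(pupil, distractors).mean(axis=0)
print(f"peak target response:     {target_mean.max():.3f} mm")
print(f"peak distractor response: {distractor_mean.max():.3f} mm")
```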
Show Figures
Figure 1: Illustration of the video game. Note that in the actual game, only 1 target (dartboard) or distractor (owl) was presented. The pointer indicated the position of the eye gaze or mouse cursor in real time.
Figure 2: MoCA scores obtained before (pre) and after (post) the gaming sessions.
Figure 3: Scores of the MoCA in the visuospatial domain obtained before and after the gaming sessions. Each dot represents the score of one participant.
Figure 4: Rapid visual information processing (RVPA) scores of the CANTAB obtained before and after the gaming sessions. Each dot represents the score of one participant.
Figure 5: Scores of the MoCA in the visuospatial domain obtained before and after the gaming sessions. Each dot represents the score of one participant.
Figure 6: Pupil responses before (pre) and after (post) the gaming sessions to targets (targ) and distractors (distr).
14 pages, 1633 KiB  
Article
Features Associated with Visible Lamina Cribrosa Pores in Individuals of African Ancestry with Glaucoma: Primary Open-Angle African Ancestry Glaucoma Genetics (POAAGG) Study
by Jalin A. Jordan, Ebenezer Daniel, Yineng Chen, Rebecca J. Salowe, Yan Zhu, Eydie Miller-Ellis, Victoria Addis, Prithvi S. Sankar, Di Zhu, Eli J. Smith, Roy Lee, Gui-Shuang Ying and Joan M. O’Brien
Vision 2024, 8(2), 24; https://doi.org/10.3390/vision8020024 - 18 Apr 2024
Viewed by 1012
Abstract
There are scarce data regarding the rate of occurrence of primary open-angle glaucoma (POAG) and visible lamina cribrosa pores (LCPs) in the eyes of individuals with African ancestry; the potential impact of these features on disease burden remains unknown. We recruited subjects with POAG to the Primary Open-Angle African American Glaucoma Genetics (POAAGG) study. Through regression models, we evaluated the association between the presence of LCPs and various phenotypic features. In a multivariable analysis of 1187 glaucomatous eyes, LCPs were more likely to be present in eyes with cup-to-disc ratios (CDR) of ≥0.9 (adjusted risk ratio (aRR) 1.11, 95%CI: 1.04–1.19, p = 0.005), eyes with cylindrical-shaped (aRR 1.22, 95%CI: 1.11–1.33) and bean pot (aRR 1.24, 95%CI: 1.13–1.36) cups versus conical cups (p < 0.0001), moderate cup depth (aRR 1.24, 95%CI: 1.06–1.46) and deep cups (aRR 1.27, 95%CI: 1.07–1.50) compared to shallow cups (p = 0.01), and the nasalization of central retinal vessels (aRR 1.33, 95%CI: 1.23–1.44, p < 0.0001). Eyes with LCPs were more likely to have a higher degree of African ancestry (q0), determined by means of SNP analysis (aRR 0.96, 95%CI: 0.93–0.99, p = 0.005 per 0.1 increase in q0). Our large cohort of POAG cases of people with African ancestry showed that LCPs may be an important risk factor in identifying severe disease, potentially warranting closer monitoring by physicians. Full article
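Adjusted risk ratios of the kind quoted above are commonly estimated with a modified Poisson regression (a log-link Poisson model with robust standard errors) rather than logistic regression, because exponentiated coefficients can then be read directly as risk ratios. The sketch below illustrates that general approach on simulated data with invented variable names; it is not the POAAGG study's actual model, which would also need to account for correlation between the two eyes of a subject.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical eye-level data; column names are illustrative, not POAAGG variables.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "lcp": rng.integers(0, 2, n),               # 1 = visible lamina cribrosa pores
    "cdr_ge_09": rng.integers(0, 2, n),         # cup-to-disc ratio >= 0.9
    "deep_cup": rng.integers(0, 2, n),
    "nasalized_vessels": rng.integers(0, 2, n),
})

# Modified Poisson regression: exponentiated coefficients approximate risk ratios,
# and robust (HC1) standard errors keep inference valid for a binary outcome.
fit = smf.glm("lcp ~ cdr_ge_09 + deep_cup + nasalized_vessels",
              data=df, family=sm.families.Poisson()).fit(cov_type="HC1")

summary = pd.concat([np.exp(fit.params).rename("aRR"), np.exp(fit.conf_int())], axis=1)
summary.columns = ["aRR", "2.5%", "97.5%"]
print(summary.round(2))
```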
Show Figures
Figure 1: Fundoscopic image of optic nerve head with multiple LCPs (left). Blue dots highlighting the location of LCPs at the ONH (right).
21 pages, 1321 KiB  
Article
The Influence of Competing Social and Symbolic Cues on Observers’ Gaze Behaviour
by Flora Ioannidou and Frouke Hermens
Vision 2024, 8(2), 23; https://doi.org/10.3390/vision8020023 - 16 Apr 2024
Viewed by 1269
Abstract
The effects of social (eye gaze, pointing gestures) and symbolic (arrows) cues on observers’ attention are often studied by presenting such cues in isolation and at fixation. Here, we extend this work by embedding cues in natural scenes. Participants were presented with a single cue (Experiment 1) or a combination of cues (Experiment 2) embedded in natural scenes and were asked to ‘simply look at the images’ while their eye movements were recorded to assess the effects of the cues on (overt) attention. Single-gaze and pointing cues were fixated for longer than arrows but at the cost of shorter dwell times on the cued object. When presented together, gaze and pointing cues were fixated faster and for longer than simultaneously presented arrows. Attention to the cued object depended on the combination of cues and whether both cues were directed towards or away from the target object. Together, the findings confirm earlier observations that people attract attention more strongly than arrows but that arrows more strongly direct attention. Full article
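The "inverse survival curves" reported in this article's figures show, for each time point, the proportion of trials in which the cue or target has already been fixated. A minimal sketch of that computation on made-up latency data is given below; the function name, parameters, and numbers are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def inverse_survival(latencies_ms, n_trials, t_max=4000, step=50):
    """Proportion of trials in which the ROI has been fixated by each time point.

    `latencies_ms` holds first-fixation latencies for the trials in which the ROI
    was fixated at all; trials without a fixation never enter the count.
    """
    t = np.arange(0, t_max + step, step)
    lat = np.asarray(latencies_ms)
    prop = np.array([(lat <= ti).sum() / n_trials for ti in t])
    return t, prop

# Made-up first-fixation latencies (ms) for two hypothetical cue conditions.
rng = np.random.default_rng(2)
gaze_lat = rng.gamma(shape=4, scale=200, size=90)    # fixated in 90 of 100 trials
arrow_lat = rng.gamma(shape=4, scale=350, size=80)   # fixated in 80 of 100 trials

t, p_gaze = inverse_survival(gaze_lat, n_trials=100)
_, p_arrow = inverse_survival(arrow_lat, n_trials=100)
print(f"fixated by 1 s: gaze {p_gaze[t == 1000][0]:.2f}, arrow {p_arrow[t == 1000][0]:.2f}")
```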
Show Figures
Figure 1: Examples of images (a–d) and regions of interest (e–h) for the four conditions for one of the scenes. For human actors, three regions of interest were used: the head, the body, and the pointing arm (only for pointing cues). In panels (e–h), cyan pixels indicate the region of the body of the person, dark-blue pixels the head of the person, yellow pixels the arm of the person, and green pixels the position of the arrow. The light-pink region indicates the target. We also coded the positions of other objects in the scene (other colours and black) but did not use these regions for the analysis (these regions were all mapped onto the ‘elsewhere’ category).
Figure 2: Dwell times on the different regions of the cues and the target for each of the four conditions in Experiment 1. Dwell times are shown as a percentage of the overall time spent looking at the image. Error bars show the standard error of the mean across cues.
Figure 3: A comparison of dwell times on cues and the target. Each symbol indicates one image in the experiment. Different shapes show different conditions. The grey line shows the best-fitting regression line.
Figure 4: The influence of the central bias and ROI size on dwell times on ROIs. Relatively long dwell times on heads and targets are found, given their size and position in the image, whereas dwell times are lower than expected for bodies (reference category = arrow).
Figure 5: Inverse survival curves for the time until fixation on the cue or the target for the different conditions. For cue fixations, the entire cue was used (including body and arm). In constructing the curves, the individual trials were treated as independent measurements.
Figure 6: Saccades that start from the cue or parts of the cue landing on the target.
Figure 7: Examples of stimuli from Experiment 2. Only scenes with arrows are shown to protect the privacy of our actors. There were also scenes with congruent and competing gaze and pointing cues together with the arrow cue. The target object for this particular scene was the phone on the wall. The ‘away’ cue can be seen pointing in a different direction, but not to an object in particular. This was the case for most of the ‘away’ cues.
Figure 8: Dwell times on the cue of interest (as a percentage of the total viewing time in a trial) and on the second cue in the scene (if present).
Figure 9: Dwell times on the target (as a percentage of the overall time spent looking at the image). Error bars show the standard error of the mean across participants.
Figure 10: Inverse survival curves for the time until fixation on the cue for the different conditions. For the gaze and pointing cues, the entire cue is used (including body and possibly arm). The focus is on combinations with a gaze cue towards, a pointing cue towards, or an arrow cue towards the target. In constructing the curves, the individual trials were treated as independent measurements.
Figure 11: Inverse survival curves for the time until fixation on the target for the different conditions. The focus is on combinations with a gaze cue towards, a pointing cue towards, or an arrow cue towards the target. In constructing the curves, the individual trials were treated as independent measurements.
Figure 12: The percentage of saccades that leave the cue of interest and directly land on the target (compared to elsewhere in the display, not returning to the cue of interest). Error bars show the standard error of the mean across participants.
19 pages, 1705 KiB  
Article
An Insight into Knowledge, Perspective, and Practices of Indian Optometrists towards Childhood Myopia
by Archana Naik, Siddharth K. Karthikeyan, Jivitha Jyothi Ramesh, Shwetha Bhaskar, Chinnappa A. Ganapathi and Sayantan Biswas
Vision 2024, 8(2), 22; https://doi.org/10.3390/vision8020022 - 16 Apr 2024
Cited by 2 | Viewed by 2062
Abstract
The current understanding of clinical approaches and barriers in managing childhood myopia among Indian optometrists is limited. This research underscores the necessity and relevance of evidence-based practice guidelines by exploring optometrists' knowledge, attitudes, and practices towards childhood myopia. A self-administered, internet-based 26-item survey was circulated online among practicing optometrists in India. The questions assessed demographics, knowledge, self-reported clinical practice behavior, barriers, sources of information guiding management, and the extent of adult caregiver engagement for childhood myopia. Of 393 responses, a substantial proportion of respondents (32.6–92.4%) were unaware of the ocular complications associated with high myopia, and less than half (46.5%) routinely performed ocular biometry in clinical practice. Despite growing awareness of emerging myopia management options, uptake remains generally poor, with single-vision distance full-correction spectacles (70.3%) being the most common mode of vision correction. Barriers to adopting optimal myopia care were medicolegal concerns, the absence of clinical practice guidelines, and inadequate consultation time. Practitioners' own clinical experience and original research articles were the primary sources of information supporting clinical practice. Most (>70%) respondents considered involving the adult caregiver in the child's clinical decision-making process. While practitioners' awareness and adoption of newer myopia management strategies are improving, there remains considerable scope for enhancement. Evidence-based practice guidelines and continuing education on myopia control might help practitioners improve their clinical decision-making. Full article
Show Figures
Figure 1: Percentage (%) of respondents who indicated performing each clinical procedure on a school-aged child with myopia at the nominated frequency.
Figure 2: Perspective on the first three options for effective management other than single-vision spectacle (full-correction).
Figure 3: Percentage of respondents (%) who rated the relative importance of each: (a) potential factor when deciding upon the management approach for a child with myopia; (b) barrier limiting their ability to provide optimal clinical care to children with myopia.
Figure 4: Percentage of respondents (%) who rated the frequency with which they prescribe each management strategy to children with myopia.
Figure 5: Percentage of respondents (%) who rated the relative importance of each: (a) information source in guiding their current approach to managing childhood myopia; (b) potential resources for supporting their future clinical management of children with myopia.
Figure 6: Percentage of respondents (%) who rated the relative importance of each potential topic to discuss with an adult caregiver of a child with myopia.
8 pages, 1057 KiB  
Article
Short-Term Morpho-Functional Changes before and after Strabismus Surgery in Children Using Structural Optical Coherence Tomography: A Pilot Study
by Pasquale Viggiano, Marida Gaudiomonte, Ugo Procoli, Luisa Micelli Ferrari, Enrico Borrelli, Giacomo Boscia, Andrea Ferrara, Fabio De Vitis, Gemma Scalise, Valeria Albano, Giovanni Alessio and Francesco Boscia
Vision 2024, 8(2), 21; https://doi.org/10.3390/vision8020021 - 16 Apr 2024
Viewed by 1179
Abstract
Purpose: To evaluate the immediate alterations in the thickness of the macular ganglion cell–inner plexiform layer (mGCIPL), peripapillary retinal nerve fiber layer (RNFL), inner retinal layer (IRL), and outer retinal layer (ORL) using spectral domain optical coherence tomography (SD-OCT) subsequent to strabismus surgery in pediatric patients diagnosed with horizontal esotropia. Methods: Twenty-eight eyes from twenty-one child patients who had undergone uncomplicated horizontal rectus muscle surgery due to strabismus were included. Measurements of RNFL, mGCL-IPL, IRL, and ORL using structural OCT were conducted both before the surgery and one month after the surgical procedure. Importantly, a control group comprising 14 healthy eyes, matched for age and significant refractive error (<3.00 diopters), was included in the current analysis. Results: Our analysis indicated no significant disparity before and after surgery in terms of best-corrected visual acuity (BCVA), RNFL, IRL, and ORL. Conversely, concerning the macular ganglion cell layer–inner plexiform layer analysis, a substantial increase in mGCL-IPL was observed following the surgical intervention. The mean mGCL-IPL measured 60.8 ± 9.2 μm at baseline and 66.1 ± 13.2 μm one month after the surgery (p = 0.026). Notably, comparison between the strabismus group at baseline and the healthy group revealed a significant reduction in mGCL-IPL in the strabismus group (60.8 ± 9.2) compared to the healthy control group (68.3 ± 7.2; p = 0.014). Conclusions: Following strabismus surgery, our observations pointed towards a thickening of the mGCL-IPL layer, which is likely attributable to transient local inflammation. Additionally, we identified a significant differentiation in the mGCL-IPL complex between the pediatric patient group with strabismus and the control group. Full article
(This article belongs to the Section Retinal Function and Disease)
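The headline result is a within-eye change in mGCL-IPL thickness one month after surgery plus a baseline comparison against healthy control eyes. A minimal sketch of the corresponding paired and unpaired comparisons is given below, on simulated thickness values that only mimic the reported means and SDs; these are not the study's data, the exact tests used are not stated here, and a full analysis would also model the nesting of eyes within patients.

```python
import numpy as np
from scipy import stats

# Simulated mGCL-IPL thickness (µm), loosely mimicking the reported means and SDs.
rng = np.random.default_rng(3)
pre = rng.normal(60.8, 9.2, 28)            # strabismic eyes at baseline
post = pre + rng.normal(5.3, 8.0, 28)      # same eyes one month after surgery
controls = rng.normal(68.3, 7.2, 14)       # healthy control eyes

t_paired, p_paired = stats.ttest_rel(pre, post)                      # within-eye change
t_indep, p_indep = stats.ttest_ind(pre, controls, equal_var=False)   # baseline vs controls
print(f"pre vs post:     t = {t_paired:.2f}, p = {p_paired:.3f}")
print(f"pre vs controls: t = {t_indep:.2f}, p = {p_indep:.3f}")
```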
Show Figures
Figure 1: Illustrates (A) an instance of automated segmentation of the retinal nerve fiber layer, ganglion cell layer, and inner plexiform layer using Heidelberg Spectralis. (B) Additionally, it shows an example of automated segmentation of the inner retinal and outer retinal layers.
Figure 2: Illustrates (A) a colorimetric map of the ganglion cell layer with ETDRS circle diameters (1, 3, 6 mm) before and after the strabismus surgery. (B) Additionally, it showcases the changes in the inner plexiform layer before and after the surgical procedure.
11 pages, 2625 KiB  
Article
Properties of Gaze Strategies Based on Eye–Head Coordination in a Ball-Catching Task
by Seiji Ono, Yusei Yoshimura, Ryosuke Shinkai and Tomohiro Kizuka
Vision 2024, 8(2), 20; https://doi.org/10.3390/vision8020020 - 15 Apr 2024
Viewed by 1085
Abstract
Visual motion information plays an important role in the control of movements in sports. Skilled ball players are thought to acquire accurate visual information by using an effective visual search strategy with eye and head movements. However, differences in catching ability and gaze movements due to sports experience and expertise have not been clarified. Therefore, the purpose of this study was to determine the characteristics of gaze strategies based on eye and head movements during a ball-catching task in athlete and novice groups. Participants were softball and tennis players and college students who were not experienced in ball sports (novice). They performed a one-handed catching task using a tennis ball-shooting machine, which was placed at 9 m in front of the participants, and two conditions were set depending on the height of the ball trajectory (high and low conditions). Their head and eye velocities were detected using a gyroscope and electrooculography (EOG) during the task. Our results showed that the upward head velocity and the downward eye velocity were lower in the softball group than in the tennis and novice groups. When the head was pitched upward, the downward eye velocity was induced from the vestibulo-ocular reflex (VOR) during ball catching. Therefore, it is suggested that skilled ball players have relatively stable head and eye movements, which may lead to an effective gaze strategy. An advantage of the stationary gaze in the softball group could be to acquire visual information about the surroundings other than the ball. Full article
(This article belongs to the Special Issue Eye and Head Movements in Visuomotor Tasks)
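Because the vestibulo-ocular reflex rotates the eyes opposite to the head, gaze (eye-in-space) velocity can be checked by summing head velocity from the gyroscope and eye-in-head velocity from the EOG, assuming both are calibrated to deg/s and time-aligned. The sketch below does this on synthetic signals; the sampling rate and signal shapes are illustrative assumptions, not the recorded data.

```python
import numpy as np

fs = 200                                   # assumed common sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic vertical velocities (deg/s): an upward head pitch around t = 1 s with a
# compensatory (VOR-driven) downward eye-in-head movement of similar magnitude.
head_vel = 40 * np.exp(-((t - 1.0) ** 2) / (2 * 0.1 ** 2))
eye_vel = -0.9 * head_vel + rng.normal(0, 1, t.size)

# Gaze (eye-in-space) velocity is the sum of head and eye-in-head velocity.
gaze_vel = head_vel + eye_vel

print(f"upward peak head velocity:  {head_vel.max():.1f} deg/s")
print(f"downward peak eye velocity: {eye_vel.min():.1f} deg/s")
print(f"peak |gaze| velocity:       {np.abs(gaze_vel).max():.1f} deg/s")
```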
Show Figures
Figure 1: Measurement of the gaze–ball angle. Vertical head and eye movements were detected during a ball-catching task.
Figure 2: Typical examples of head velocity (blue), eye velocity (red), and gaze velocity (green) in the softball group (A), tennis group (B) and novice group (C) are shown. Two vertical dashed lines indicate the timing of the launch (left) and catch (right) of the ball.
Figure 3: Comparison of the catching ratio of the softball, tennis, and novice groups for a ball-catching task. ***: p < 0.001.
Figure 4: Comparison of the upward peak head velocity (UPHV) of the softball, tennis, and novice groups for a ball-catching task under high and low conditions. Box plots indicate 25–75 percentile ranges and central values, and error bars indicate 5–95 percentile ranges.
Figure 5: Comparison of the downward peak eye velocity (DPEV) of the softball, tennis, and novice groups for a ball-catching task under high and low conditions. Box plots indicate 25–75 percentile ranges and central values, and error bars indicate 5–95 percentile ranges.
Figure 6: The relationship between the upward peak head velocity (UPHV) and the downward peak eye velocity (DPEV) under high (A) and low (B) conditions.
17 pages, 2188 KiB  
Article
Enhancement of the Inner Foveal Response of Young Adults with Extended-Depth-of-Focus Contact Lens for Myopia Management
by Ana Amorim-de-Sousa, Rute J. Macedo-de-Araújo, Paulo Fernandes, José M. González-Méijome and António Queirós
Vision 2024, 8(2), 19; https://doi.org/10.3390/vision8020019 - 14 Apr 2024
Viewed by 1166
Abstract
Background: Myopia management contact lenses have been shown to successfully decrease the rate of eye elongation in children by changing the peripheral refractive profile of the retina. Despite the efforts of the scientific community, the retinal response mechanism to defocus is still unknown. The purpose of this study was to evaluate the local electrophysiological response of the retina with a myopia control contact lens (CL) compared to a single-vision CL of the same material. Methods: The retinal electrical activity and peripheral refraction of 16 eyes (16 subjects, 27.5 ± 5.7 years, 13 females and 3 males) with myopia between −0.75 D and −6.00 D (astigmatism < 1.00 D) were assessed with two CLs (Filcon 5B): a single-vision (SV) CL and an extended-depth-of-focus (EDOF) CL used for myopia management. The peripheral refraction was assessed with an open-field WAM-5500 auto-refractometer/keratometer in four meridians separated by 45° at 2.50 m distance. The global-flash multifocal electroretinogram (gf-mfERG) was recorded with the Reti-port/scan21 (Roland Consult) using a stimulus of 61 hexagons. The implicit time (in milliseconds) and response density (RD, in nV/deg2) of the direct (DC) and induced (IC) components were used for comparison between lenses in physiological pupil conditions. Results: Although the EDOF decreased both the HCVA and the LCVA (one and two lines, respectively; p < 0.003), it still allowed a good VA. The EDOF lens induced a myopic shift in most retinal areas, with a higher and statistically significant effect on the nasal retina. No differences in the implicit times of the DC and IC components were observed between SV and EDOF. Compared with the SV, the EDOF lens showed a higher RD in the IC component in the foveal region (p = 0.032). In the remaining retinal areas, the EDOF evoked lower, non-statistically significant RD in both the DC and IC components. Conclusions: The EDOF myopia control CL enhanced the response of the inner layers of the fovea. This might suggest that, besides other mechanisms potentially involved, the central foveal retinal activity might be involved in the mechanism of myopia control with these lenses. Full article
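Peripheral refraction profiles such as these are usually reported as the relative peripheral refractive error: the spherical equivalent (M = sphere + cylinder/2) at each eccentricity minus the on-axis value. The sketch below shows that conversion on invented autorefractor readings; the numbers are not the study's measurements.

```python
import numpy as np

def spherical_equivalent(sphere, cyl):
    """M component of power-vector notation: M = sphere + cylinder / 2."""
    return sphere + cyl / 2.0

# Hypothetical autorefractor readings along the horizontal meridian
# (eccentricity in degrees; these values are invented for illustration).
ecc = np.array([-20, -15, -10, -5, 0, 5, 10, 15, 20])
sph = np.array([-2.75, -2.90, -3.00, -3.10, -3.00, -3.10, -3.00, -2.90, -2.80])
cyl = np.array([-1.00, -0.75, -0.50, -0.25, -0.25, -0.25, -0.50, -0.75, -1.00])

m = spherical_equivalent(sph, cyl)
rpre = m - m[ecc == 0]        # relative peripheral refractive error (D)
for e, r in zip(ecc, rpre):
    print(f"{e:+3d} deg: RPRE = {r:+.2f} D")
```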
Show Figures
Figure 1: Power profile distribution, in diopters, of a −3.00 D extended-depth-of-focus contact lens for myopia management, with up to +1.75 D of add power.
Figure 2: Scheme of the fixation device for different retinal eccentricity projections for the measurement of the peripheral refraction at 2.50 m with eye rotation (head fixed in the auto-refractometer’s chin cup) in four meridians (horizontal (0°), 45°, vertical (90°) and 135°, all in dark blue). Each fixation point is separated by 5° of retinal eccentricity, considering the viewing distance. Subjects were asked to look at a red light lit at each point for the refraction measurement at the respective position. In the scheme, the condition of on-axis refraction measurement (central-point fixation (0°) of eccentricity) is illustrated.
Figure 3: The global-flash multifocal ERG (A) stimulation is divided into four frames: the first frame presents the m-sequence pattern, followed by a dark frame, a global-flash frame and a second dark frame. After the recording, the wave response (B) with two defined components can be depicted: the direct component (DC), which mirrors the activity of the outer-to-middle retina, and the induced component (IC), with the main influence of the inner retinal layers. (C) Retinal eccentric areas with hexagon groups as fovea (from 0° to 4.80°, in orange), para-macula (from 4.80° to 21.62°, in yellow) and periphery (from 21.62° to 54.10°, in blue). (D) represents the grouped areas of analysis by quadrants, as follows: inferior–nasal (INQ, in yellow), superior–nasal (SNQ, in green), superior–temporal (STQ, in orange) and inferior–temporal (ITQ, in blue).
Figure 4: Boxplot of the high-order aberration coefficients obtained with the IRx3 Hartman–Shack aberrometer while patients were wearing the SV (light-brown) and the EDOF (blue) contact lenses. * Statistically significant differences with p ≤ 0.050, Wilcoxon test.
Figure 5: Mean relative peripheral refractive error (RPRE) profile, in diopters (D), of the three components of refraction (M, FT′ and FS′) with the single-vision (SV, in light-brown) and the extended-depth-of-focus (EDOF, in blue) contact lenses for four retinal meridians: horizontal (A), vertical (B), superior–nasal (SN) to inferior–temporal (IT) (C) and inferior–nasal (IN) to superior–temporal (ST) (D). The M component is represented by a full line and darker colors, while FT′ and FS′ are represented in light colors (long-dashed lines and short-dashed lines, respectively). (*) statistically significant differences from the post hoc Bonferroni pairwise comparison between the two contact lenses at each point for the M component (black asterisk), FT′ (dark-gray asterisk) and FS′ (light-gray asterisk).
Figure 6: Change in DC (A) and IC (B) response density with eccentricity with SV (light-brown) and EDOF (blue) contact lenses. Variance and IQR are not represented for clarity. EDOF showed a statistically significantly higher IC response density in the fovea compared to SV (*).
Figure 7: Correlation between the differences in the response density of the induced component (IC, in nV/deg²) in the para-macula and the median value of the differences in the M component of the RPRE (A) and the sagittal foci (FS′) of the RPRE (B) in the para-macular retinal area (median from the 5° to the 20° nasal and temporal points of the horizontal meridian), according to the areas established for gf-mfERG analysis. ∆ refers to the difference between EDOF and SV (EDOF–SV) values. r² refers to the determination coefficient of Spearman’s correlation. The gray line represents the best-fit line adjusted to the data.
22 pages, 8733 KiB  
Article
The Neural Basis of a Cognitive Function That Suppresses the Generation of Mental Imagery: Evidence from a Functional Magnetic Resonance Imaging Study
by Hiroki Motoyama and Shinsuke Hishitani
Vision 2024, 8(2), 18; https://doi.org/10.3390/vision8020018 - 10 Apr 2024
Viewed by 1213
Abstract
This study elucidated the brain regions associated with the perception-driven suppression of mental imagery generation by comparing brain activation in a picture observation condition with that in a positive imagery generation condition. The assumption was that mental imagery generation would be suppressed in the former condition but not in the latter. The results show significant activation of the left posterior cingulate gyrus (PCgG) in the former condition compared to in the latter condition. This finding is generally consistent with a previous study showing that the left PCgG suppresses mental imagery generation. Furthermore, correlational analyses showed a significant correlation between the activation of the left PCgG and participants’ subjective richness ratings, which are a measure of the clarity of a presented picture. Increased activity in the PCgG makes it more difficult to generate mental imagery. As visual perceptual processing and visual imagery generation are in competition, the suppression of mental imagery generation leads to enhanced visual perceptual processing. In other words, the greater the suppression of mental imagery, the clearer the presented pictures are perceived. The significant correlation found is consistent with this idea. The current results and previous studies suggest that the left PCgG plays a role in suppressing the generation of mental imagery. Full article
(This article belongs to the Special Issue Visual Mental Imagery System: How We Image the World)
Show Figures
Figure 1: The experimental procedure: The asterisk (*) represents the cue for the participant’s task for the next 20 s. “Picture” indicates that their task was to observe the presented picture. “Imagery” indicates that their task was to generate imagery of the cue while looking at the presented picture. The pictures presented in the picture observation and positive imagery generation conditions were, for example, a puppy and a chair.
Figure 2: (a) The blue areas represent the ROI (the left PCgG) in this study, and the size was 3704 mm³ (463 voxels). (b) The red areas represent the areas (−4, −44, 32) in the left posterior cingulate gyrus that were significantly activated during picture observation compared to during positive imagery generation. L and R indicate left and right hemispheres, respectively.
Figure 3: Correlation between MRI signal changes at (−4, −44, 32) during picture observation and richness ratings. A lower richness rating means that the presented picture was perceived as clearer.
26 pages, 6309 KiB  
Article
Method to Quickly Map Multifocal Pupillary Response Fields (mPRF) Using Frequency Tagging
by Jean Lorenceau, Suzon Ajasse, Raphael Barbet, Muriel Boucart, Frédéric Chavane, Cédric Lamirel, Richard Legras, Frédéric Matonti, Maxence Rateaux, Jean-François Rouland, José-Alain Sahel, Laure Trinquet, Mark Wexler and Catherine Vignal-Clermont
Vision 2024, 8(2), 17; https://doi.org/10.3390/vision8020017 - 9 Apr 2024
Viewed by 1176
Abstract
We present a method for mapping multifocal Pupillary Response Fields in a short amount of time using a visual stimulus covering 40° of the visual angle, divided into nine contiguous sectors simultaneously modulated in luminance at specific, incommensurate temporal frequencies. We test this multifocal Pupillary Frequency Tagging (mPFT) approach with young healthy participants (N = 36) and show that the spectral power of the sustained pupillary response elicited by 45 s of fixation of this multipartite stimulus reflects the relative contribution of each sector/frequency to the overall pupillary response. We further analyze the phase lag for each temporal frequency as well as several global features related to pupil state. Test/retest performed on a subset of participants indicates good repeatability. We also investigate the existence of structural (RNFL)/functional (mPFT) relationships. We then summarize the results of clinical studies conducted with mPFT on patients with neuropathies and retinopathies and show that the features derived from pupillary signal analyses, the distribution of spectral power in particular, mirror disease characteristics and allow patients to be distinguished from healthy participants with excellent sensitivity and specificity. This method thus appears to be a convenient, objective, and fast tool for assessing the integrity of retino-pupillary circuits, as well as individual idiosyncrasies, and permits retinopathies and neuropathies to be objectively assessed and followed up in a short amount of time. Full article
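The core of the frequency-tagging readout is conceptually simple: take the FFT of the sustained pupil trace and read off the amplitude at each of the nine tagged frequencies, then normalize across sectors. The sketch below illustrates this on a synthetic 45-s signal; the frequency values, sampling rate, and modulation depths are arbitrary assumptions, and the authors' blink correction and other preprocessing are not reproduced.

```python
import numpy as np

fs, dur = 60, 45                          # sampling rate (Hz) and recording length (s)
fois = np.array([0.8, 0.93, 1.07, 1.21, 1.36, 1.52, 1.69, 1.87, 2.06])  # illustrative FOIs

# Synthetic pupil trace: each sector contributes a small oscillation at its own FOI.
rng = np.random.default_rng(5)
t = np.arange(fs * dur) / fs
amps = rng.uniform(0.02, 0.08, fois.size)              # per-sector modulation depth (mm)
pupil = 3.5 + sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, fois))
pupil += 0.02 * rng.standard_normal(t.size)            # measurement noise

# Amplitude spectrum of the mean-subtracted trace.
spec = 2 * np.abs(np.fft.rfft(pupil - pupil.mean())) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Read out the amplitude at the bin closest to each tagged frequency.
foi_amp = np.array([spec[np.argmin(np.abs(freqs - f))] for f in fois])
relative = foi_amp / foi_amp.sum()                     # relative contribution per sector
for f, a, r in zip(fois, foi_amp, relative):
    print(f"{f:4.2f} Hz: amplitude ≈ {a:.3f} mm, share = {r:.1%}")
```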
Show Figures
Figure 1: (A) Distribution of the temporal modulation frequencies (TMFs) and the resulting overall luminance modulation. (B) Stimulus configuration of the 9 sectors, each coupled with a TMF denoted by its index. The stimulus subtends about 40° of visual angle at 57 cm (central disk 4.6°; paracentral sectors, 5–19.6°; peripheral sectors 20–40°). See Video S1.
Figure 2: Steps of analyses: (A) Visual inspection of raw eye movements, pupillary activity, and technical event tracks. (B) Analysis of the PLR after blink detection and correction, from which 5 descriptive variables are derived (see Table 1 for the list of all features derived from analyses). (C) Analysis of eye movements, i.e., fixation (in)stability, during the stimulation, from which 6 variables are computed. Top left: position over time; top right: position over space, whole screen; bottom right: zoom on centered spatial eye positions; bottom left: histogram of vertical and horizontal eye positions. (D) (1) Raw (red line) and blink- and transient-corrected (green line) pupillary signal during mPFT stimulation, with computation of 7 descriptive variables and characterization of 5 global pupillary variables, including the stimulus/signal cross-correlation lag. (2) FFT of the corrected signal, estimating the amplitude spectrum: full spectrum (blue lines); raw power at FOIs (red bars); normalized FOI power (green bars). (E) Cross-correlation between the stimulus luminance oscillations and the pupillary response. Top: stimulus oscillation (blue line) and pupillary response (red line). Bottom: cross-correlogram results indicating the lag and correlation distribution between the stimulus and the pupil response.
Figure 3: Maps for the right eye of an individual Pupillary Response Field for raw power (left), normalized power (middle), and phase lag (right). Each sector of the mPFT stimulation is labelled according to its projection onto the retina (ST: supero-temporal; SN: supero-nasal; IN: infero-nasal; IT: infero-temporal; C: central; e: eccentric; p: paracentral) and has a color reflecting its value relative to the color scale (right of each figure).
Figure 4: Group results showing the power distribution (boxplots) and Pupillary Response Fields (PRFs) of the right and left eyes: (A) FOI distribution (Hz) and retinal projections of sectors for the left eye. Labels for each sector are as in Figure 2. (B) Distribution of raw power for the 9 FOIs and associated PRFs for the right (upper panel) and left (bottom panel) eyes. (C) Distribution of normalized power for the 9 FOIs and associated PRFs. (D) Distribution of phase lags for the 9 FOIs and associated PRFs.
Figure 5: (A) Example of an individual stimulus/pupillary signal cross-correlation using the Matlab xcorr function. (B) Histogram of phase lags for all participants. Bottom left: right eye; bottom middle: left eye. (C) Correlation between cross-correlation lags of the right and left eyes of all participants. Red lines show the linear regression (r = 0.83, p < 0.0001) together with 95% confidence intervals. Note that because pupillary signals are down-sampled to 60 Hz, the time resolution of lags is only 16.666 ms, such that phase lags from different participants overlap. The high correlation shown here indicates that similar lags are observed for the right and left eyes of each participant.
Figure 6: Maxima of the cross-correlations between the stimulus/signal (phase lags) and PLR latencies. (A) Lag vs. PLR start constriction latency: right (red disks) and left (green disks) eyes. (B) Lag vs. PLR maximum constriction latency: right (red disks) and left (green disks) eyes. Red lines show the linear regressions for the two eyes together with 95% confidence intervals. Inserts indicate the values of Pearson’s correlation coefficient for each eye.
Figure 7: Test/retest: distribution of Pearson’s r coefficient between Run 1 and Run 2 of 8 participants for right and left eyes. (A) Correlations for raw power. (B) Correlations for normalized power. (C) Correlations for phase lags. (D) Correlations for PLR variables.
Figure 8: Test/retest Pearson’s r coefficient at the group level (N = 8, pooled right and left eyes). Black dots show the PSP for all FOIs. (A) Correlations for raw power. (B) Bland–Altman plot for raw power. (C) Correlations for normalized power. Red lines show the linear regression together with 95% confidence intervals. (D) Bland–Altman plot of normalized power. Horizontal lines show the 95% confidence intervals.
Figure 9: (A) Example of RNFL data and extraction of relevant values from a PDF file. (B) Distribution of the average mPFT power as a function of the average RNFL values for the right eye (red symbols) and the left eye (green symbols). Red lines show the linear regression together with 95% confidence intervals. No correlation is found between the two variables. (Inserts show Pearson’s correlation coefficients, with different colors for the 2 eyes.)
Figure 10: (A) Averaged spectral power of each participant as a function of the percentage of blink-corrected data for the right (green symbols) and left (red symbols) eyes. (B) Averaged spectral power as a function of the number of data points corrected for transients for the right (green symbols) or left (red symbols) eyes.
Figure 11: Maps of differences of spectral power between healthy participants and patients in the Marseille and Paris studies for each sector of the stimulation of the right eye (see Supplementary Figure S9 for the left eye). The reference maps of healthy subjects are framed with a blue square. The remaining maps present the differences of power relative to healthy participants for each of the studied pathologies: Age-Related Macular Degeneration (AMD), Diabetic Retinopathy (RD) and Age-Related Maculopathy (ARM) for the Marseille study; Retinitis Pigmentosa (RP), Stargardt disease (SD) and Leber Hereditary Optic Neuropathy (LHON) for the Paris study. Stars within each sector indicate the significance level (p < 0.05 = *; p < 0.01 = **; p < 0.001 = ***), written in white font. See text for details.