Beliefs – attitudes toward some state of the environment – guide action selection and should be robust to variability but sensitive to meaningful change. Beliefs about volatility (expectation of change) are associated with paranoia in humans yet the brain regions responsible for volatility beliefs remain unknown. Orbitofrontal cortex (OFC) is central to adaptive behavior whereas magnocellular mediodorsal thalamus (MDmc) is essential for arbitrating between perceptions and action policies. We assessed belief updating in a three-choice probabilistic reversal-learning task following excitotoxic lesions of MDmc (n=3) or OFC (n=3) and compared performance with that of unoperated rhesus macaques (n=14). Computational analyses indicated that lesions of the MDmc, but not OFC, were associated with erratic switching behavior and heightened volatility belief (as in paranoia in humans). In contrast, OFC lesions were associated with increased lose-stay behavior and reward learning rates. Given t...
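The abstract does not spell out the volatility model, but the core intuition can be made concrete with a toy Bayesian observer for a three-choice probabilistic reversal task; the reward probabilities, hazard rate values, and greedy choice rule below are illustrative assumptions, not the computational model fit in the study. The observer's assumed hazard rate stands in for a volatility belief, and inflating it reproduces the kind of erratic switching described for the MDmc group.

```python
import numpy as np

def update_belief(belief, choice, reward, hazard, p_good=0.7, p_bad=0.3):
    """One Bayesian filtering step over 'which of the three options is currently best'.

    `hazard` is the observer's assumed per-trial probability that the best option
    changes -- a simple stand-in for a volatility belief. p_good/p_bad are assumed
    reward probabilities for the best vs. the other options (hypothetical values).
    """
    lik = np.full(3, p_bad)
    lik[choice] = p_good                      # P(reward | option k is best, chose `choice`)
    lik = lik if reward else 1.0 - lik
    post = lik * belief
    post /= post.sum()
    # With probability `hazard`, the best option jumps to one of the other two options.
    return (1 - hazard) * post + hazard * (1 - post) / 2.0

rng = np.random.default_rng(0)
for hazard in (0.05, 0.4):                    # modest vs. inflated volatility belief
    belief, best, switches, prev = np.ones(3) / 3, 0, 0, 0
    for t in range(300):
        choice = int(np.argmax(belief))       # greedy choice, for illustration only
        if t > 0 and t % 60 == 0:             # the true best option reverses every 60 trials
            best = (best + 1) % 3
        reward = int(rng.random() < (0.7 if choice == best else 0.3))
        belief = update_belief(belief, choice, reward, hazard)
        switches += int(choice != prev)
        prev = choice
    print(f"assumed hazard {hazard}: {switches} choice switches over 300 trials")
```

In this toy simulation the inflated hazard rate keeps beliefs close to uniform, so a single unrewarded trial is often enough to flip the preferred option, producing far more frequent switching under identical reward contingencies.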
Policy search lets you discover rules and adapt behavior. In this issue of Neuron, Cohen et al. (2021) demonstrate that the dynamics of neurons in primate anterior cingulate cortex and putamen indicate when a correct policy is discovered and confidence in executing decisions under that policy.
Understanding the unique functions of different subregions of primate prefrontal cortex has been a longstanding goal in cognitive neuroscience. Yet, the anatomy and function of one of its largest subregions (the frontopolar cortex) remain enigmatic and underspecified. Our Society for Neuroscience minisymposium, Primate Frontopolar Cortex: From Circuits to Complex Behaviors, will comprise a range of new anatomic and functional approaches that have helped to clarify the basic circuit anatomy of the frontal pole, its functional involvement during performance of cognitively demanding behavioral paradigms in monkeys and humans, and its clinical potential as a target for noninvasive brain stimulation in patients with brain disorders. This review consolidates knowledge about the anatomy and connectivity of frontopolar cortex and provides an integrative summary of its function in primates. We aim to answer the question: what, if anything, does frontopolar cortex contribute to goal-directed cogn...
Flexible decision-making requires animals to forego immediate rewards (exploitation) and try novel choice options (exploration) to discover if they are preferable to familiar alternatives. Using the same task and a partially observable Markov decision process (POMDP) model to quantify the value of choices, we first determined that the computational basis for managing explore-exploit tradeoffs is conserved across monkeys and humans. We then used fMRI to identify where in the human brain the immediate value of exploitative choices and relative uncertainty about the value of exploratory choices were encoded. Consistent with prior neurophysiological evidence in monkeys, we observed divergent encoding of reward value and uncertainty in prefrontal and parietal regions, including frontopolar cortex, and parallel encoding of these computations in motivational regions including the amygdala, ventral striatum, and orbitofrontal cortex. These results clarify the interplay between prefrontal and motivational circuits that supports adaptive explore-exploit decisions in humans and nonhuman primates.
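The POMDP model itself is not reproduced in the abstract; as a minimal stand-in (a Beta-Bernoulli sketch with made-up trial counts, not the authors' implementation), the two quantities the fMRI analysis targets can be illustrated as the posterior mean of a well-sampled, exploitative option and the posterior spread of a rarely sampled, exploratory one.

```python
import math

def option_stats(successes, failures):
    """Posterior mean and s.d. of an option's reward probability under a
    Beta(1 + successes, 1 + failures) posterior (uniform prior)."""
    a, b = 1 + successes, 1 + failures
    mean = a / (a + b)                                      # immediate expected value
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))    # uncertainty about that value
    return mean, sd

# Hypothetical histories: a familiar option with plenty of feedback vs. a novel one.
v_exploit, u_exploit = option_stats(successes=14, failures=6)
v_explore, u_explore = option_stats(successes=0, failures=0)
print(f"exploit: expected value {v_exploit:.2f}, uncertainty {u_exploit:.2f}")
print(f"explore: expected value {v_explore:.2f}, uncertainty {u_explore:.2f}")
print(f"relative uncertainty favoring exploration: {u_explore - u_exploit:.2f}")
```

In a full POMDP these quantities would feed forward-looking action values; here they simply make the "immediate value" and "relative uncertainty" regressors concrete.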
Aberrant decision-making characterizes various pediatric psychopathologies; however, deliberative choice strategies have not been investigated. A transdiagnostic sample of 95 youths completed a child-friendly sequential sampling paradigm. Participants searched for the best offer by sampling a finite list of offers. Participants’ willingness to explore was measured as the number of offers sampled, and ideal task performance was modeled using a Markov decision-process model. As in previous findings in adults, youths explored more offers when lists were long compared with short, yet participants generally sampled fewer offers relative to model-estimated ideal performance. Searching deeper into the list was associated with choosing better price options. Analyses examining the main and interactive effects of transdiagnostic anxiety and irritability symptoms indicated a negative correlation between anxiety and task performance (p = .01, ηp2 = .08). Findings suggest the need for more research on exploratory decision impairments in youths with anxiety symptoms.
Goal-directed behavior requires identifying objects in the environment that can satisfy internal needs and executing actions to obtain those objects. The current study examines ventral and dorsal corticostriatal circuits that support complementary aspects of goal-directed behavior. We analyze activity from the amygdala, ventral striatum, orbitofrontal cortex, and lateral prefrontal cortex (LPFC) while monkeys perform a three-armed bandit task. Information about chosen stimuli and their value is primarily encoded in the amygdala, ventral striatum, and orbitofrontal cortex, while the spatial information is primarily encoded in the LPFC. Before the options are presented, information about the to-be-chosen stimulus is represented in the amygdala, ventral striatum, and orbitofrontal cortex; at the time of choice, the information is passed to the LPFC to direct a saccade. Thus, learned value information specifying behavioral goals is maintained throughout the ventral corticostriatal circuit, and it is routed through the dorsal circuit at the time actions are selected.
Explore-exploit decisions require us to trade off the benefits of exploring unknown options to learn more about them against exploiting known options for immediate reward. Such decisions are ubiquitous in nature, but from a computational perspective, they are notoriously hard. There is therefore much interest in how humans and animals make these decisions, and recently there has been an explosion of research in this area. Here we provide a biased and incomplete snapshot of this field, focusing on the major finding that many organisms use two distinct strategies to solve the explore-exploit dilemma: a bias for information ('directed exploration') and the randomization of choice ('random exploration'). We review evidence for the existence of these strategies, their computational properties, their neural implementations, as well as how directed and random exploration vary over the lifespan. We conclude by highlighting open questions in this field that are ripe to both explore and ...
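As a concrete, deliberately schematic sketch of the two strategies reviewed here, a single choice rule can carry both: an information bonus added to uncertain options captures directed exploration, while a softmax temperature injecting decision noise captures random exploration. The parameter values and option statistics below are illustrative assumptions, not estimates from any of the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(1)

def choose(values, uncertainties, info_bonus=0.0, temperature=1e-6):
    """Toy choice rule: utilities = value + info_bonus * uncertainty, sampled via softmax.

    info_bonus > 0   -> directed exploration (bias toward the more uncertain option)
    temperature > 0  -> random exploration (choice stochasticity / decision noise)
    """
    utilities = np.asarray(values, float) + info_bonus * np.asarray(uncertainties, float)
    logits = utilities / max(temperature, 1e-6)
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(p), p=p))

values = [0.6, 0.5]            # option 0 looks slightly better on average...
uncertainties = [0.05, 0.30]   # ...but option 1 is far less well known
print("pure exploitation:    ", choose(values, uncertainties))
print("directed exploration: ", choose(values, uncertainties, info_bonus=1.0))
print("random exploration:   ", choose(values, uncertainties, temperature=0.5))
```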
Orbitofrontal cortex (OFC) predicts the consequences of our actions and updates our expectations based on experienced outcomes. In this issue of Neuron, Groman et al. (2019) precisely ablate pathways between the OFC, amygdala, and nucleus accumbens to reveal their separable contributions to reinforcement learning.
Few studies have used matched affective paradigms to compare humans and non-human primates. In monkeys with amygdala lesions and youth with anxiety disorders, we examined cross-species pupillary responses during a saccade-based, affective attentional capture task. Given evidence of enhanced amygdala function in anxiety, we hypothesized that opposite patterns would emerge in lesioned monkeys and anxious participants. A total of 53 unmedicated youths (27 anxious, 26 healthy) and 8 adult male rhesus monkeys (Macaca mulatta) completed matched behavioral paradigms. Four monkeys received bilateral excitotoxic amygdala lesions and four served as unoperated controls. Compared to healthy youth, anxious youth exhibited increased pupillary constriction in response to emotional and non-emotional distractors (F(1,48) = 6.28, P = 0.02, ηp2 = 0.12). Pupillary response was significantly associated with anxiety symptom severity (F(1,48) = 5.59, P = 0.02, ηp2 = 0.10). As hypothesized, lesioned monke...
The perception of emotionally arousing scenes modulates neural activity in ventral visual areas via reentrant signals from the amygdala. The orbitofrontal cortex (OFC) shares dense interconnections with amygdala, and has been strongly implicated in emotional stimulus processing in primates, but our understanding of the functional contribution of this region to emotional perception in humans is poorly defined. Here we acquired targeted rapid functional imaging from lateral OFC, amygdala, and fusiform gyrus (FG) over multiple scanning sessions (resulting in over 1,000 trials per participant) in an effort to define the activation amplitude and directional connectivity among these regions during naturalistic scene perception. All regions of interest showed enhanced activation during emotionally arousing, compared to neutral scenes. In addition, we identified bidirectional connectivity between amygdala, FG, and OFC in the great majority of individual subjects, suggesting that human emoti...
Adaptive behavior requires animals to learn from experience. Ideally, learning should both promote choices that lead to rewards and reduce choices that lead to losses. Because the ventral striatum (VS) contains neurons that respond to aversive stimuli and aversive stimuli can drive dopamine release in the VS, it is possible that the VS contributes to learning about aversive outcomes, including losses. However, other work suggests that the VS may play a specific role in learning to choose among rewards, with other systems mediating learning from aversive outcomes. To examine the role of the VS in learning from gains and losses, we compared the performance of macaque monkeys with VS lesions and unoperated controls on a reinforcement learning task. In the task, the monkeys gained or lost tokens, which were periodically cashed out for juice, as outcomes for choices. They learned over trials to choose cues associated with gains, and not choose cues associated with losses. We found that m...
Learning the values of actions versus stimuli may depend on separable neural circuits. In the current study, we evaluated ventral striatum (VS) lesioned macaques' performance on a two-arm bandit task that had randomly interleaved blocks of stimulus-based and action-based reinforcement learning (RL). Compared to controls, monkeys with VS lesions had deficits in learning to select rewarding images but not rewarding actions. We used an RL model to quantify learning and choice consistency and found that, in stimulus-based RL, the VS lesion monkeys were more influenced by negative feedback and had lower choice consistency than controls. Using a Bayesian model to parse the groups' learning strategies, we also found that VS lesion monkeys defaulted to an action-based choice strategy. Thus, the VS is specifically involved in learning the value of stimuli, not actions. SIGNIFICANCE STATEMENT: Reinforcement learning (RL) models of the ventral striatum (VS) often assume that it maintains a...
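The fitted model is not given in the abstract; a generic sketch of this family of models (separate learning rates for positive and negative feedback plus a softmax inverse temperature indexing choice consistency; all parameter values here are placeholders) shows the two quantities the group comparison rests on.

```python
import numpy as np

def q_update(q, choice, reward, alpha_pos=0.3, alpha_neg=0.3):
    """Rescorla-Wagner update with separate learning rates for positive and
    negative prediction errors; alpha_neg > alpha_pos corresponds to being
    more strongly influenced by negative feedback."""
    delta = reward - q[choice]                   # reward prediction error
    alpha = alpha_pos if delta >= 0 else alpha_neg
    q = q.copy()
    q[choice] += alpha * delta
    return q

def choice_probs(q, beta=5.0):
    """Softmax over values; the inverse temperature beta indexes choice consistency
    (lower beta -> noisier, less consistent choices)."""
    z = beta * (q - q.max())
    return np.exp(z) / np.exp(z).sum()

q = np.zeros(2)
q = q_update(q, choice=0, reward=1.0)                  # win: value of option 0 rises
q = q_update(q, choice=0, reward=0.0, alpha_neg=0.6)   # loss weighted more heavily
print("values:", q, " choice probabilities:", choice_probs(q, beta=2.0))
```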
Reinforcement learning (RL) is the behavioral process of learning the values of actions and objects. Most models of RL assume that the dopaminergic prediction error signal drives plasticity in frontal-striatal circuits. The striatum then encodes value representations that drive decision processes. However, the amygdala has also been shown to play an important role in forming Pavlovian stimulus-outcome associations. These Pavlovian associations can drive motivated behavior via the amygdala projections to the ventral striatum or the ventral tegmental area. The amygdala may, therefore, play a central role in RL. Here we compare the contributions of the amygdala and the striatum to RL and show that both the amygdala and striatum learn and represent expected values in RL tasks. Furthermore, value representations in the striatum may be inherited, to some extent, from the amygdala. The striatum may, therefore, play less of a primary role in learning stimulus-outcome associations in RL than previously suggested.
Using both direct neural recordings and electrical microstimulation, Joshi et al. (2016) show that locus coeruleus (LC) activity closely matches moment-to-moment changes in pupil size. But what causes these two measures to be related is not straightforward.
Reversal learning has been extensively studied across species as a task that indexes the ability to flexibly make and reverse deterministic stimulus-reward associations. Although various brain lesions have been found to affect performance on this task, the behavioral processes affected by these lesions have not yet been determined. This task includes at least two kinds of learning. First, subjects have to learn and reverse stimulus-reward associations in each block of trials. Second, subjects become more proficient at reversing choice preferences as they experience more reversals. We have developed a Bayesian approach to separately characterize these two learning processes. Reversal of choice behavior within each block is driven by a combination of evidence that a reversal has occurred, and a prior belief in reversals that evolves with experience across blocks. We applied the approach to behavior obtained from 89 macaques, comprising 12 lesion groups and a control group. We found th...
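A stripped-down version of the two processes this analysis separates (illustrative only; the prior here is a single number rather than the experience-dependent quantity estimated in the paper) combines trial-by-trial outcome evidence that a reversal has occurred with a prior belief in reversals.

```python
import math

def reversal_evidence(choices, rewards, p_good=0.8, p_bad=0.2):
    """Accumulated log-likelihood ratio that the block's stimulus-reward mapping
    has reversed. `choices` are 1 when the initially better stimulus was chosen,
    `rewards` are 1 for rewarded trials; p_good/p_bad are assumed contingencies."""
    llr = 0.0
    for c, r in zip(choices, rewards):
        p_no_rev = p_good if c == 1 else p_bad   # outcome probability if nothing changed
        p_rev = p_bad if c == 1 else p_good      # outcome probability if a reversal occurred
        if r == 0:
            p_no_rev, p_rev = 1 - p_no_rev, 1 - p_rev
        llr += math.log(p_rev / p_no_rev)
    return llr

def p_reversed(llr, prior_reversal):
    """Combine outcome evidence with a prior belief in reversals (which, in the
    full analysis, would strengthen with experience across blocks)."""
    prior_odds = prior_reversal / (1 - prior_reversal)
    odds = prior_odds * math.exp(llr)
    return odds / (1 + odds)

# Three straight losses on the previously good stimulus, with a moderate prior:
llr = reversal_evidence(choices=[1, 1, 1], rewards=[0, 0, 0])
print(f"P(reversal occurred) = {p_reversed(llr, prior_reversal=0.3):.2f}")
```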
Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nu...
Reversal learning has been studied as the process of learning to inhibit previously rewarded actions. Deficits in reversal learning have been seen after manipulations of dopamine and lesions of the orbitofrontal cortex. However, reversal learning is often studied in animals that have limited experience with reversals. As such, the animals are learning that reversals occur during data collection. We have examined a task regime in which monkeys have extensive experience with reversals and stable behavioral performance on a probabilistic two-arm bandit reversal learning task. We developed a Bayesian analysis approach to examine the effects of manipulations of dopamine on reversal performance in this regime. We find that the analysis can clarify the strategy of the animal. Specifically, at reversal, the monkeys switch quickly from choosing one stimulus to choosing the other, as opposed to gradually transitioning, which might be expected if they were using a naive reinforcement learning ...
Cues that signal the possibility of receiving an electric shock reliably induce defensive activation. To determine whether cues can also easily reverse defensive reactions, a threat reversal paradigm was developed in which a cue signaling threat of shock reversed its meaning across the course of the study. This allowed us to contrast defensive reactions to threat cues that became safe cues, with responses to cues that continued to signal threat or safety. Results showed that, when participants were instructed that a previously threatening cue now signaled safety, there was an immediate and complete attenuation of defensive reactions compared to threat cues that maintained their meaning. These findings highlight the role that language can play both in instantiating and attenuating defensive reactions, with implications for understanding emotion regulation, social communication, and clinical phenomena.
The neural systems that underlie reinforcement learning (RL) allow animals to adapt to changes in their environment. In the present study, we examined the hypothesis that the amygdala would have a preferential role in learning the values of visual objects. We compared a group of monkeys (Macaca mulatta) with amygdala lesions to a group of unoperated controls on a two-armed bandit reversal learning task. The task had two conditions. In the What condition, the animals had to learn to select a visual object, independent of its location. And in the Where condition, the animals had to learn to saccade to a location, independent of the object at the location. In both conditions choice-outcome mappings reversed in the middle of the block. We found that monkeys with amygdala lesions had learning deficits in both conditions. Monkeys with amygdala lesions did not have deficits in learning to reverse choice-outcome mappings. Rather, amygdala lesions caused the monkeys to become overly sensitiv...
Evidence from animal and human studies has suggested that the amygdala plays a role in detecting threat and in directing attention to the eyes. Nevertheless, there has been no systematic investigation of whether the amygdala specifically facilitates attention to the eyes or whether other features can also drive attention via amygdala processing. The goal of the present study was to examine the effects of amygdala lesions in rhesus monkeys on attentional capture by specific facial features, as well as gaze patterns and changes in pupil dilation during free viewing. Here we show reduced attentional capture by threat stimuli, specifically the mouth, and reduced exploration of the eyes in free viewing in monkeys with amygdala lesions. Our findings support a role for the amygdala in detecting threat signals and in directing attention to the eye region of faces when freely viewing different expressions.
Humans and other animals often make the difficult decision to try new options (exploration) and forego immediate rewards (exploitation). Novelty-seeking is an adaptive solution to this explore-exploit dilemma, but our understanding of the neural computations supporting novelty-seeking in humans is limited. Here, we presented the same explore-exploit decision making task to monkeys and humans and found evidence that the computational basis for novelty-seeking is conserved across primate species. Critically, through computational model-based decomposition of event-related functional magnetic resonance imaging (fMRI) in humans, these findings reveal a previously unidentified cortico-subcortical architecture mediating explore-exploit behavior in humans.
INTRODUCTION: The DSM-5 explicitly states that the neural system model of specific phobia is centered on the amygdala. However, this hypothesis is predominantly supported by human studies on animal phobia, whereas visual cuing of other specific phobias, such as dental fear, does not consistently show amygdala activation. Considering that fear of anticipated pain is one of the best predictors of dental phobia, the current study investigated neural and autonomic activity during pain anticipation in individuals varying in the degree of fear of dental pain. METHOD: Functional brain activity (fMRI) was measured in women (n = 31) selected to vary in the degree of self-reported fear of dental pain when under the threat of shock, in which one color signaled the possibility of receiving a painful electric shock and another color signaled safety. RESULTS: Enhanced functional activity during threat, compared to safety, was found in regions including anterior insula and anterior/mid cingulate cortex. Im...
98: 1374–1379, 2007. First published June 27, 2007; doi:10.1152/jn.00230.2007. Recent human functional imaging studies have linked the processing of pleasant visual stimuli to activity in mesolimbic reward structures. However, whether the activation is driven specifically by the pleasantness of the stimulus, or by its salience, is unresolved. Here we find in two studies that free viewing of pleasant images of erotic and romantic couples prompts clear, reliable increases in nucleus accumbens (NAc) and medial prefrontal cortex (mPFC) activity, whereas equally arousing (salient) unpleasant images, and neutral pictures, do not. These data suggest that in visual perception, the human NAc and mPFC are specifically reactive to pleasant, rewarding stimuli and are not engaged by unpleasant stimuli, despite high stimulus salience.
For decades, behavioral scientists have used the matching law to quantify how animals distribute their choices between multiple options in response to reinforcement they receive. More recently, many reinforcement learning (RL) models have been developed to explain choice by integrating reward feedback over time. Despite reasonable success of RL models in capturing choice on a trial-by-trial basis, these models cannot capture variability in matching. To address this, we developed novel metrics based on information theory and applied them to choice data from dynamic learning tasks in mice and monkeys. We found that a single entropy-based metric can explain 50% and 41% of variance in matching in mice and monkeys, respectively. We then used limitations of existing RL models in capturing entropy-based metrics to construct a more accurate model of choice. Together, our novel entropy-based metrics provide a powerful, model-free tool to predict adaptive choice behavior and reveal underlying...
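The abstract does not define the metric, but one natural entropy-based quantity in this spirit (an assumption about the general flavor, not the authors' exact definition) is the conditional entropy of the animal's stay/switch decision given the previous outcome, which is low for stereotyped strategies such as win-stay/lose-switch and higher for less structured choice.

```python
import numpy as np

def strategy_conditional_entropy(choices, rewards):
    """Entropy (bits) of the stay/switch decision conditioned on whether the
    previous trial was rewarded; lower values indicate a more stereotyped
    outcome-dependent strategy."""
    choices, rewards = np.asarray(choices), np.asarray(rewards)
    stay = (choices[1:] == choices[:-1]).astype(int)
    prev_reward = rewards[:-1]
    h, n = 0.0, len(stay)
    for r in (0, 1):
        sub = stay[prev_reward == r]
        if sub.size == 0:
            continue
        p_stay = sub.mean()
        for p in (p_stay, 1.0 - p_stay):
            if p > 0:
                h -= (sub.size / n) * p * np.log2(p)
    return h

# A strict win-stay / lose-switch sequence has zero conditional entropy.
choices = [0, 0, 1, 1, 1, 0, 0, 1]
rewards = [1, 0, 1, 1, 0, 1, 0, 1]
print(strategy_conditional_entropy(choices, rewards))
```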
Best choice problems have a long mathematical history, but their neural underpinnings remain unknown. Best choice tasks are optimal stopping problems that require subjects to view a list of options one at a time and decide whether to take or decline each option. The goal is to find a high-ranking option in the list, under the restriction that declined options cannot be chosen in the future. Conceptually, the decision to take or decline an option is related to threshold crossing in drift diffusion models, when this process is thought of as a value comparison. We studied this task in healthy volunteers using fMRI, and used a Markov decision process to quantify the value of continuing to search versus committing to the current option. Decisions to take versus decline an option engaged parietal and dorsolateral prefrontal cortices, as well as ventral striatum, anterior insula, and anterior cingulate. Therefore, brain regions previously implicated in evidence integration and reward repres...
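For the classical rank-only version of the best choice (secretary) problem, the value of continuing to search versus committing to the current option can be computed by backward induction; this is a textbook sketch of the task's general structure, not the Markov decision process fitted in the study.

```python
def best_choice_policy(n):
    """Backward induction for the rank-only best-choice (secretary) problem.

    At position t, committing to a current best-so-far option wins with
    probability t / n; continuing to search is worth W[t + 1]. The optimal
    policy commits whenever committing is at least as good as continuing."""
    W = [0.0] * (n + 2)          # W[t]: win probability with optimal play from position t
    threshold = n
    for t in range(n, 0, -1):
        commit, keep_searching = t / n, W[t + 1]
        W[t] = (1 / t) * max(commit, keep_searching) + (1 - 1 / t) * keep_searching
        if commit >= keep_searching:
            threshold = t        # earliest position where committing is optimal
    return threshold, W[1]

cutoff, p_win = best_choice_policy(20)
print(f"skip the first {cutoff - 1} options, then take the next best-so-far option "
      f"(overall win probability {p_win:.3f})")   # cutoff ~ n/e, win probability ~ 1/e
```

For n = 20 this recovers the familiar "skip roughly the first n/e options" rule; the study's MDP plays the analogous role of scoring "continue searching" against "commit" for the options subjects actually saw.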
Highlights: Stimulus and value information is encoded in the ventral corticostriatal circuit; action information is primarily encoded in the dorsal corticostriatal circuit; value information is converted to action information at the time of choice.
The neuronal underpinning of learning cause-and-effect associations in the adolescent brain remains poorly understood. Two fundamental forms of associative learning are Pavlovian (classical) conditioning, where a stimulus is followed by an outcome, and operant (instrumental) conditioning, where the outcome is contingent on action execution. Both forms of learning, when associated with a rewarding outcome, rely on midbrain dopamine neurons in the ventral tegmental area (VTA) and substantia nigra (SN). We find that in adolescent male rats, reward-guided associative learning is encoded differently by midbrain dopamine neurons in each conditioning paradigm. Whereas simultaneously recorded adult VTA and SN neurons have a similar phasic response to reward delivery during both forms of conditioning, adolescent neurons display a muted reward response during operant conditioning but a profoundly larger reward response during Pavlovian conditioning, suggesting that adolescent neurons assign a different value to reward when it is not gated by action. The learning rate of adolescents and adults during both forms of conditioning was similar, further supporting the notion that differences in reward response in each paradigm are due to differences in motivation and independent of state versus action value learning. Static characteristics of dopamine neurons, such as dopamine cell number and size, were similar in the VTA and SN, but there were age differences in baseline firing rate, stimulated release, and correlated spike activity, suggesting that differences in reward responsiveness by adolescent dopamine neurons are not due to differences in the intrinsic properties of these neurons but to engagement of different networks.
A recent paper published in Neuron by Tremblay et al. (2020) introduces an openly available resource detailing published and unpublished studies using optogenetics to manipulate the nonhuman primate (NHP) brain. The open science efforts of the team are important and rare in NHP neuroscience, but the conclusions drawn about the success rate of optogenetics in the NHP brain are problematic for quantitative and theoretical reasons. Quantitatively, the analyses in the paper are performed at a level relevant to the rodent but not the NHP brain (single injections), and individual injections are clustered within a few monkeys and a few studies. Theoretically, the report makes strong claims about the importance of the technology for disease-related functional outcomes, but behavior was not widely tested. The original article reports a 91% success rate for optogenetic experiments in NHPs based on the presence of any outcome (histological, physiological, or behavioral outcomes) after an injection of...
