Andrew Heathcote
University of Tasmania, School of Medicine, Faculty Member
MicroRNAs (miRNAs) within the ventral and dorsal striatum have been shown to regulate addiction-relevant behaviours. However, it is unclear how cocaine experience alone can alter the expression of addiction-relevant miRNAs within striatal subregions. Further, it is not known whether differential expression of miRNAs in the striatum contributes to individual differences in addiction vulnerability. We first examined the effect of cocaine self-administration on the expression of miR-101b, miR-137, miR-212 and miR-132 in nucleus accumbens core and nucleus accumbens shell (NAcSh), as well as dorsomedial striatum and dorsolateral striatum (DLS). We then examined the expression of these same miRNAs in striatal subregions of animals identified as being 'addiction-prone', either immediately following self-administration training or following extinction and relapse testing. Cocaine self-administration was associated with changes in miRNA expression in a regionally discrete manner within the striatum, with the most marked changes occurring in the nucleus accumbens core. When we examined the miRNA profile of addiction-prone rats following self-administration, we observed increased levels of miR-212 in the dorsomedial striatum. After extinction and relapse testing, addiction-prone rats showed significant increases in the expression of miR-101b, miR-137, miR-212 and miR-132 in NAcSh, and miR-137 in the DLS. This study identifies temporally specific changes in miRNA expression consistent with the engagement of distinct striatal subregions across the course of the addiction cycle. Increased dysregulation of miRNA expression in NAcSh and DLS at late stages of the addiction cycle may underlie habitual drug seeking, and may therefore aid in the identification of targets designed to treat addiction.
Recently, Veksler, Myers, and Gluck (2015) proposed model flexibility analysis as a method that "aids model evaluation by providing a metric for gauging the persuasiveness of a given fit" (p. 755). Model flexibility analysis measures the complexity of a model in terms of the proportion of all possible data patterns it can predict. We show that this measure does not provide a reliable way to gauge complexity, which prevents model flexibility analysis from fulfilling either of the 2 aims outlined by Veksler et al. (2015): absolute and relative model evaluation. We also show that model flexibility analysis can fail to correctly quantify complexity even in the most clear-cut case, with nested models. We advocate for the use of well-established techniques that reliably quantify complexity, such as Bayes factors, normalized maximum likelihood, or cross-validation, and against the use of model flexibility analysis. In the discussion, we explore 2 issues relevant to the area of model evaluation: the completeness of current model selection methods and the philosophical debate of absolute versus relative model evaluation.
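The core quantity in model flexibility analysis — the proportion of all possible data patterns a model can predict — can be sketched in a few lines. The sketch below is illustrative only: the two toy "models" and the choice of ordinal patterns over three conditions are assumptions for demonstration, not the models or pattern space analysed in the paper.

```python
import math
import random

def ordinal_pattern(values):
    """Rank-order (argsort) pattern of predictions across conditions."""
    return tuple(sorted(range(len(values)), key=lambda i: values[i]))

def flexibility(predict, n_conditions, n_samples=5000, seed=0):
    """Proportion of all ordinal data patterns a model can produce.

    `predict` maps a random parameter vector (one draw per condition)
    to a tuple of predicted values — a toy stand-in for a real model.
    """
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_samples):
        theta = [rng.uniform(0.0, 1.0) for _ in range(n_conditions)]
        seen.add(ordinal_pattern(predict(theta)))
    return len(seen) / math.factorial(n_conditions)

# A constrained toy model: predictions always ascend across conditions,
# so it can produce only 1 of the 6 orderings of 3 conditions.
constrained = lambda th: tuple(sorted(th))
# An unconstrained toy model: any ordering is possible.
unconstrained = lambda th: tuple(th)
```

Under this construction the constrained model scores 1/6 and the unconstrained model approaches 1, which is the intuition the measure is meant to capture; the paper's argument is that this proportion does not track complexity reliably in general.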
We develop a broad theoretical framework for modelling difficult perceptual information integration tasks under different decision rules. The framework allows us to compare coactive architectures, which combine information before it enters the decision process, with parallel architectures, where logical rules combine independent decisions made about each perceptual source. For both architectures we test the novel hypothesis that participants break the decision rules on some trials, making a response based on only one stimulus even though task instructions require them to consider both. Our models take account of not only the decisions made but also the distribution of the time that it takes to make them, providing an account of speed-accuracy tradeoffs and response biases occurring when one response is required more often than another. We also test a second novel hypothesis, that the nature of the decision rule changes the evidence on which choices are based. We apply the models to data from a perceptual integration task with near-threshold stimuli under two different decision rules. The coactive architecture was clearly rejected in favor of the logical-rule models. The logical-rule models were shown to provide an accurate account of all aspects of the data, but only when they allow for response bias and the possibility for subjects to break those rules. We discuss how our framework can be applied more broadly, and its relationship to Townsend and Nozawa's (1995) Systems Factorial Technology.
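The architectural contrast can be illustrated with a toy simulation. Here each channel's finishing time is drawn from an exponential distribution — an assumption for illustration only, not the evidence-accumulation models fitted in the paper. A parallel AND rule responds when both channels have finished (the max of the two times), whereas a coactive architecture pools the channels into one faster process (rate equal to the sum of the channel rates).

```python
import random

def simulate(n_trials=20000, rate=1.0, seed=1):
    """Toy comparison of a parallel AND rule vs a coactive architecture.

    Returns (mean parallel-AND RT, mean coactive RT). Exponential
    finishing times are a simplifying assumption for this sketch.
    """
    rng = random.Random(seed)
    parallel, coactive = [], []
    for _ in range(n_trials):
        t1 = rng.expovariate(rate)          # channel 1 finishing time
        t2 = rng.expovariate(rate)          # channel 2 finishing time
        parallel.append(max(t1, t2))        # AND rule: wait for both
        coactive.append(rng.expovariate(2 * rate))  # pooled evidence
    return sum(parallel) / n_trials, sum(coactive) / n_trials
```

With equal unit rates the parallel-AND mean is 1.5/rate and the coactive mean is 0.5/rate, so the two architectures make sharply different predictions about the RT distribution, which is what lets the model comparison reject one in favor of the other.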
Address correspondence to Adam Osth (adamosth@gmail.com). We would like to thank Jeff Starns and Amy Criss for generously providing their data, Matthew Gretton for coding up a Python wrapper for fast-DM, Brandon Turner for some indispensable advice on achieving convergence with hierarchical models, and Caren Rotello and two anonymous reviewers for providing very helpful comments on a previous version of this manuscript.
Although post-error slowing and the "hot hand" (streaks of good performance) are both types of sequential dependencies arising from the differential influence of success and failure, they have not previously been studied together. We bring together these two streams of research in a task where difficulty can be controlled by participants delaying their decisions, and where responses required a degree of deliberation and so were relatively slow. We compared performance of unpaid participants against paid participants who were rewarded differentially, with higher reward for better performance. In contrast to most previous results, we found no post-error slowing for paid or unpaid participants. For the unpaid group, we found post-error speeding and a hot hand, even though the hot hand is typically considered a fallacy. Our results suggest that the effect of success and failure on subsequent performance may differ substantially with task characteristics and demands. We also found payment affected post-error performance; financially rewarding successful performance led to a more cautious approach following errors, whereas unrewarded performance led to recklessness following errors.
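The basic post-error slowing measure — mean RT following an error minus mean RT following a correct response — is straightforward to compute from a trial sequence. The function and example data below are a minimal illustration; the paper's actual analyses are more involved.

```python
def post_error_slowing(rts, correct):
    """Mean RT after errors minus mean RT after correct responses.

    Positive values indicate post-error slowing; negative values
    indicate post-error speeding (as observed for the unpaid group).
    """
    post_err = [rts[i + 1] for i in range(len(rts) - 1) if not correct[i]]
    post_cor = [rts[i + 1] for i in range(len(rts) - 1) if correct[i]]
    if not post_err or not post_cor:
        return float("nan")
    return sum(post_err) / len(post_err) - sum(post_cor) / len(post_cor)

# Hypothetical RTs (seconds) and accuracy for a short trial sequence:
rts = [0.60, 0.62, 0.95, 0.61, 0.58, 0.90, 0.59]
correct = [True, False, True, True, False, True, True]
```

For this made-up sequence the measure is positive (slowing after errors); in real data the sign and size of the effect are exactly what varied with payment condition.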
The published version can be accessed via the following web address:
http://authors.elsevier.com/a/1SN09,H2pbEOGh
In cued task switching, performance relies on proactive and reactive control processes. Proactive control is evident in the reduction in switch cost under conditions that promote advance preparation. However, the residual switch cost that remains under conditions of optimal proactive control indicates that, on switch trials, the target continues to elicit interference that is resolved using reactive control. We examined whether posttarget interference varies as a function of trial-by-trial variability in preparation. We investigated target congruence effects on behavior and target-locked ERPs extracted across the response time (RT) distribution, using orthogonal polynomial trend analysis (OPTA). Early N2, late N2, and P3b amplitudes were differentially modulated across the RT distribution. There was a large congruence effect on late N2 and P3b, which increased with RT for P3b amplitude, but did not vary with trial type. This suggests that target properties impact switch and repeat trials equally and do not contribute to residual switch cost. P3b amplitude was larger, and latency later, for switch than repeat trials, and this difference became larger with increasing RT, consistent with sustained carryover effects on highly prepared switch trials. These results suggest that slower, less prepared responses are associated with greater target-related interference during target identification and processing, as well as slower, more difficult decision processes. They also suggest that neither general nor switch-specific preparation can ameliorate the effects of target-driven interference. These findings highlight the theoretical advances achieved by integrating RT distribution analyses with ERP and OPTA to examine trial-by-trial variability in performance and brain function.