Andrew Heathcote
  • University of Tasmania,
    Psychology,
    Private Bag 30
    Hobart, Tasmania
    7001 Australia


MicroRNAs (miRNAs) within the ventral and dorsal striatum have been shown to regulate addiction-relevant behaviours. However, it is unclear how cocaine experience alone can alter the expression of addiction-relevant miRNAs within striatal subregions. Further, it is not known whether differential expression of miRNAs in the striatum contributes to individual differences in addiction vulnerability. We first examined the effect of cocaine self-administration on the expression of miR-101b, miR-137, miR-212 and miR-132 in nucleus accumbens core and nucleus accumbens shell (NAcSh), as well as dorsomedial striatum and dorsolateral striatum (DLS). We then examined the expression of these same miRNAs in striatal subregions of animals identified as being 'addiction-prone', either immediately following self-administration training or following extinction and relapse testing. Cocaine self-administration was associated with changes in miRNA expression in a regionally discrete manner within the striatum, with the most marked changes occurring in the nucleus accumbens core. When we examined the miRNA profile of addiction-prone rats following self-administration, we observed increased levels of miR-212 in the dorsomedial striatum. After extinction and relapse testing, addiction-prone rats showed significant increases in the expression of miR-101b, miR-137, miR-212 and miR-132 in NAcSh, and miR-137 in the DLS. This study identifies temporally specific changes in miRNA expression consistent with the engagement of distinct striatal subregions across the course of the addiction cycle. Increased dysregulation of miRNA expression in NAcSh and DLS at late stages of the addiction cycle may underlie habitual drug seeking, and may therefore aid in the identification of targets designed to treat addiction.
Recently, Veksler, Myers, and Gluck (2015) proposed model flexibility analysis as a method that "aids model evaluation by providing a metric for gauging the persuasiveness of a given fit" (p. 755). Model flexibility analysis measures the complexity of a model in terms of the proportion of all possible data patterns it can predict. We show that this measure does not provide a reliable way to gauge complexity, which prevents model flexibility analysis from fulfilling either of the 2 aims outlined by Veksler et al. (2015): absolute and relative model evaluation. We also show that model flexibility analysis can fail to correctly quantify complexity even in the most clear-cut case, that of nested models. We advocate for the use of well-established techniques that do reliably quantify complexity, such as Bayes factors, normalized maximum likelihood, or cross-validation, and against the use of model flexibility analysis. In the discussion, we explore 2 issues relevant to the area of model evaluation: the completeness of current model selection methods and the philosophical debate of absolute versus relative model evaluation.
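The metric at issue is easy to illustrate. The sketch below is a hypothetical Monte Carlo toy (the models and conditions are invented, not those analysed in the paper): sample parameters, record the qualitative data pattern each parameter setting produces (here, the rank order of predicted values across three conditions), and divide the number of distinct patterns by the number possible.

```python
import itertools
import random

# Toy illustration of model flexibility analysis:
# flexibility = proportion of all possible data patterns a model can produce.
# A "data pattern" here is the rank ordering of predictions across conditions.

def pattern(model, params, conditions=(0.0, 0.5, 1.0)):
    preds = [model(c, params) for c in conditions]
    # indices of the conditions sorted by predicted value: the qualitative pattern
    return tuple(sorted(range(len(preds)), key=lambda i: preds[i]))

def flexibility(model, sampler, n_samples=20000, n_conditions=3):
    seen = {pattern(model, sampler()) for _ in range(n_samples)}
    total = len(list(itertools.permutations(range(n_conditions))))
    return len(seen) / total

# A constrained model (positive slope) versus an unconstrained one (both hypothetical)
monotone = lambda c, p: p[0] + p[1] * c                      # p[1] > 0: always increasing
free     = lambda c, p: p[0] + p[1] * c + p[2] * (c - 0.5) ** 2

random.seed(1)
flex_mono = flexibility(monotone, lambda: [random.random(), random.random()])
flex_free = flexibility(free, lambda: [random.gauss(0, 1) for _ in range(3)])
print(flex_mono, flex_free)  # monotone reaches only 1 of the 6 orderings
```

A flexibility of 1.0 for the unconstrained model shows the measure saturating, which is one reason a proportion of reachable patterns can fail to discriminate among complex models.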
We develop a broad theoretical framework for modelling difficult perceptual information integration tasks under different decision rules. The framework allows us to compare coactive architectures, which combine information before it enters the decision process, with parallel architectures, where logical rules combine independent decisions made about each perceptual source. For both architectures we test the novel hypothesis that participants break the decision rules on some trials, making a response based on only one stimulus even though task instructions require them to consider both. Our models take account of not only the decisions made but also the distribution of the time taken to make them, providing an account of speed-accuracy tradeoffs and of response biases that occur when one response is required more often than another. We also test a second novel hypothesis: that the nature of the decision rule changes the evidence on which choices are based. We apply the models to data from a perceptual integration task with near-threshold stimuli under two different decision rules. The coactive architecture was clearly rejected in favor of the logical-rule models, which provided an accurate account of all aspects of the data, but only when they allowed for response bias and for the possibility that subjects break the rules. We discuss how our framework can be applied more broadly, and its relationship to Townsend and Nozawa's (1995) Systems Factorial Technology.
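The architectural contrast can be caricatured with simple random walks. This is an illustrative sketch under assumed parameters, not the fitted models from the paper: a coactive channel pools both evidence sources before a single decision, while a parallel AND rule makes two independent decisions and finishes with the slower one.

```python
import random

# Caricature of coactive vs. parallel-AND architectures with random walks.
# All drifts, bounds, and noise values are assumptions for illustration.

def walk_time(drift, bound=30.0, noise=1.0):
    """Steps until a noisy accumulator reaches a bound."""
    x, t = 0.0, 0
    while abs(x) < bound:
        x += drift + random.gauss(0, noise)
        t += 1
    return t

def coactive_rt(drift_a, drift_b):
    # both sources are pooled before the decision stage: one walk, summed drift
    return walk_time(drift_a + drift_b)

def parallel_and_rt(drift_a, drift_b):
    # independent decisions combined by a logical AND: finish with the slower channel
    return max(walk_time(drift_a), walk_time(drift_b))

random.seed(0)
n = 2000
co = sum(coactive_rt(0.3, 0.3) for _ in range(n)) / n
par = sum(parallel_and_rt(0.3, 0.3) for _ in range(n)) / n
print(f"mean RT  coactive: {co:.1f}  parallel-AND: {par:.1f}")
```

With matched inputs the pooled coactive channel is reliably faster than the max of two independent channels, which is the kind of distributional signature the full models exploit.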
Address correspondence to Adam Osth (adamosth@gmail.com). We would like to thank Jeff Starns and Amy Criss for generously providing their data, Matthew Gretton for coding up a Python wrapper for fast-DM, Brandon Turner for some indispensable advice on achieving convergence with hierarchical models, and Caren Rotello and two anonymous reviewers for providing very helpful comments on a previous version of this manuscript.
Evidence suggests that there is a tendency to verbally recode visually-presented information, and that in some cases verbal recoding can boost memory performance. According to multi-component models of working memory, memory performance is increased because task-relevant information is simultaneously maintained in two codes. The possibility of dual encoding is problematic if the goal is to measure capacity for visual information exclusively. To counteract this possibility, articulatory suppression is frequently used with visual change detection tasks specifically to prevent verbalization of visual stimuli. But is this precaution always necessary? There is little reason to believe that concurrent articulation affects performance in typical visual change detection tasks, suggesting that verbal recoding might not be likely to occur in this paradigm, and if not, precautionary articulatory suppression would not always be necessary. We present evidence confirming that articulatory suppression has no discernible effect on performance in a typical visual change-detection task in which abstract patterns are briefly presented. A comprehensive analysis using both descriptive statistics and Bayesian state-trace analysis revealed no evidence for any complex relationship between articulatory suppression and performance that would be consistent with a verbal recoding explanation. Instead, the evidence favors the simpler explanation that verbal strategies were either not deployed in the task or, if they were, were not effective in improving performance, and thus have no influence on visual working memory as measured during visual change detection. We conclude that in visual change detection experiments in which abstract visual stimuli are briefly presented, precautionary articulatory suppression is unnecessary.
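For readers unfamiliar with how capacity is scored in change detection, a standard summary (a common convention in this literature, not a measure taken from this abstract) is Cowan's K, which estimates the number of items held in memory from hit and false-alarm rates in a single-probe task: K = N × (H − FA).

```python
# Cowan's K for single-probe change detection: K = set_size * (hit rate - false-alarm rate).
# The trial counts below are made up for illustration.

def cowans_k(set_size, hits, misses, false_alarms, correct_rejections):
    h = hits / (hits + misses)                                # hit rate
    fa = false_alarms / (false_alarms + correct_rejections)   # false-alarm rate
    return set_size * (h - fa)

# set size 6, 80% hits, 15% false alarms: K = 6 * (0.8 - 0.15) = 3.9 items
k = cowans_k(6, 80, 20, 15, 85)
print(k)
```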
A Simon effect occurs when the irrelevant spatial attributes of a stimulus conflict with choice responses based on non-spatial stimulus attributes. Many theories of the Simon effect assume that activation from task-irrelevant spatial attributes becomes available before the activation from task-relevant attributes. We refer to this as the time-difference account. Other theories follow a magnitude-difference account, assuming activation from relevant and irrelevant attributes becomes available at the same time, but with the activation from irrelevant attributes initially being stronger. To distinguish these two accounts, we incorporated the response-signal procedure into the reach-to-touch paradigm to map out the emergence of the Simon effect. We also used a carefully calibrated neutral condition to reveal differences in the initial onset of the influence of relevant and irrelevant information. Our results establish that irrelevant spatial information becomes available earlier than relevant non-spatial information. This finding is consistent with the time-difference account and inconsistent with the magnitude-difference account. However, we did find a magnitude effect, in the form of reduced interference from irrelevant information, for the second of a sequence of two incongruent trials.
Although post-error slowing and the "hot hand" (streaks of good performance) are both types of sequential dependencies arising from the differential influence of success and failure, they have not previously been studied together. We bring together these two streams of research in a task where difficulty can be controlled by participants delaying their decisions, and where responses require a degree of deliberation and so are relatively slow. We compared the performance of unpaid participants against paid participants who were rewarded differentially, with higher reward for better performance. In contrast to most previous results, we found no post-error slowing for paid or unpaid participants. For the unpaid group, we found post-error speeding and a hot hand, even though the hot hand is typically considered a fallacy. Our results suggest that the effect of success and failure on subsequent performance may differ substantially with task characteristics and demands. We also found that payment affected post-error performance: financially rewarding successful performance led to a more cautious approach following errors, whereas unrewarded performance led to recklessness following errors.
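Post-error slowing itself is typically quantified as a simple difference of conditional means: mean RT on trials following an error minus mean RT on trials following a correct response (positive values indicate slowing, negative values post-error speeding). A minimal sketch with made-up numbers:

```python
# Post-error slowing as a difference of conditional means.
# The RT and accuracy sequences below are invented for illustration.

def post_error_slowing(rts, correct):
    """Mean RT after errors minus mean RT after correct responses."""
    after_err = [rt for prev, rt in zip(correct, rts[1:]) if not prev]
    after_cor = [rt for prev, rt in zip(correct, rts[1:]) if prev]
    return sum(after_err) / len(after_err) - sum(after_cor) / len(after_cor)

# toy sequence with slower responses on the trials that follow the two errors
rts     = [500, 510, 650, 505, 495, 640]   # ms
correct = [True, False, True, True, False, True]
pes = post_error_slowing(rts, correct)
print(pes)  # positive, i.e. post-error slowing in this toy sequence
```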
The published version can be accessed via the following web address:
http://authors.elsevier.com/a/1SN09,H2pbEOGh
Much of scientific psychology and cognitive science can be viewed as a search to understand the mechanisms and dynamics of perception, thought and action. Two processing attributes of particular interest to psychologists are the architecture, or temporal relationships between sub-processes of the system, and the stopping rule, which dictates how many of the sub-processes must be completed for the system to finish.
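A toy illustration of how these two attributes combine (the durations and the function below are purely didactic assumptions, not part of the abstract): the architecture fixes how sub-process durations compose, and the stopping rule fixes how many sub-processes must finish.

```python
# Didactic sketch: finishing time for two sub-processes under each combination
# of architecture (serial vs. parallel) and stopping rule (exhaustive vs.
# self-terminating). Assumes the self-terminating serial case can stop after
# the first sub-process.

def finish_time(durations, architecture, stopping):
    if architecture == "serial":
        # sub-processes run one after another
        done = durations if stopping == "exhaustive" else durations[:1]
        return sum(done)
    if architecture == "parallel":
        # sub-processes run simultaneously: exhaustive waits for the slowest,
        # self-terminating finishes with the fastest
        return max(durations) if stopping == "exhaustive" else min(durations)
    raise ValueError(architecture)

durs = [3, 5]  # assumed sub-process durations
for arch in ("serial", "parallel"):
    for stop in ("exhaustive", "self-terminating"):
        print(arch, stop, finish_time(durs, arch, stop))
```

The four combinations make distinct predictions for completion times, which is what lets factorial experiments discriminate among them.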


In cued task switching, performance relies on proactive and reactive control processes. Proactive control is evident in the reduction in switch cost under conditions that promote advance preparation. However, the residual switch cost that remains under conditions of optimal proactive control indicates that, on switch trials, the target continues to elicit interference that is resolved using reactive control. We examined whether posttarget interference varies as a function of trial-by-trial variability in preparation. We investigated target congruence effects on behavior and target-locked ERPs extracted across the response time (RT) distribution, using orthogonal polynomial trend analysis (OPTA). Early N2, late N2, and P3b amplitudes were differentially modulated across the RT distribution. There was a large congruence effect on late N2 and P3b, which increased with RT for P3b amplitude, but did not vary with trial type. This suggests that target properties impact switch and repeat trials equally and do not contribute to residual switch cost. P3b amplitude was larger, and latency later, for switch than repeat trials, and this difference became larger with increasing RT, consistent with sustained carryover effects on highly prepared switch trials. These results suggest that slower, less prepared responses are associated with greater target-related interference during target identification and processing, as well as slower, more difficult decision processes. They also suggest that neither general nor switch-specific preparation can ameliorate the effects of target-driven interference. These findings highlight the theoretical advances achieved by integrating RT distribution analyses with ERP and OPTA to examine trial-by-trial variability in performance and brain function.
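The core logic of analysing ERP amplitude across the RT distribution can be sketched in a few lines. This is synthetic data with an assumed 5-bin design, illustrative only and not the OPTA pipeline used in the study: order trials by RT, bin them, and project the binned amplitudes onto orthogonal polynomial contrasts to separate linear from quadratic trends.

```python
import numpy as np

# Illustrative sketch of polynomial trend analysis across the RT distribution.
# Synthetic single-trial data: amplitude is built to grow with RT plus noise.

rng = np.random.default_rng(3)
rt = rng.uniform(300, 1200, size=600)                    # simulated RTs (ms)
amp = 2.0 + 0.004 * rt + rng.normal(0, 0.5, size=600)    # simulated amplitudes

order = np.argsort(rt)
bins = np.array_split(amp[order], 5)                     # 5 RT bins, fastest to slowest
bin_means = np.array([b.mean() for b in bins])

# standard orthogonal polynomial contrasts for 5 equally spaced levels
linear    = np.array([-2, -1, 0, 1, 2])
quadratic = np.array([2, -1, -2, -1, 2])
lin_score  = bin_means @ linear / (linear @ linear)
quad_score = bin_means @ quadratic / (quadratic @ quadratic)
print(f"linear trend {lin_score:.3f}, quadratic trend {quad_score:.3f}")
```

Because the synthetic amplitude increases with RT, the linear contrast captures the effect while the quadratic contrast stays near zero; in real data the pattern of significant trends is what distinguishes preparation-related from target-related modulation.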