Cognitive architecture
A cognitive architecture refers both to a theory about the structure of the human mind and to a
computational instantiation of such a theory used in the fields of artificial intelligence (AI) and
computational cognitive science.[1] These formalized models can be used to further refine a comprehensive
theory of cognition and can serve as useful artificial intelligence programs. Successful cognitive architectures include
ACT-R (Adaptive Control of Thought - Rational) and Soar. Research on cognitive architectures as
software instantiations of cognitive theories was initiated by Allen Newell in 1990.[2]
The Institute for Creative Technologies defines a cognitive architecture as a "hypothesis about the fixed
structures that provide a mind, whether in natural or artificial systems, and how they work together – in
conjunction with knowledge and skills embodied within the architecture – to yield intelligent behavior in a
diversity of complex environments."[3]
History
Herbert A. Simon, one of the founders of the field of artificial intelligence, stated that EPAM, the 1960
thesis of his student Ed Feigenbaum, provided a possible "architecture for cognition"[4] because it made
some commitments about how more than one fundamental aspect of the human mind worked (in EPAM's
case, human memory and human learning).
John R. Anderson started research on human memory in the early 1970s, and his 1973 thesis with Gordon
H. Bower provided a theory of human associative memory.[5] He incorporated further aspects of his research on
long-term memory and thinking processes into this line of work and eventually designed a cognitive
architecture he called ACT. He and his students were influenced by Allen Newell's use of the
term "cognitive architecture". Anderson's lab used the term to refer to the ACT theory as embodied in a
collection of papers and designs (there was no complete implementation of ACT at the time).
In 1983 John R. Anderson published the seminal work in this area, entitled The Architecture of
Cognition.[6] One can distinguish between the theory of cognition and the implementation of that theory.
The theory of cognition outlined the structure of the various parts of the mind and made commitments to the
use of rules, associative networks, and other aspects. The cognitive architecture implements the theory on
computers, and the software used to implement it also came to be called a "cognitive architecture".
Thus, a cognitive architecture can also refer to a blueprint for intelligent agents. It proposes (artificial)
computational processes that act like certain cognitive systems (most often like a person) or that act intelligently
under some definition. Cognitive architectures form a subset of general agent architectures. The term
'architecture' implies an approach that attempts to model not only behavior, but also structural properties of
the modelled system.
Distinctions
Cognitive architectures can be symbolic, connectionist, or hybrid.[7] Some cognitive architectures or
models are based on a set of generic rules, as in, e.g., the Information Processing Language (e.g., Soar, based
on the unified theory of cognition, or similarly ACT-R). Many of these architectures are based on the
mind-is-like-a-computer analogy. In contrast, subsymbolic processing specifies no such rules a priori and relies on
emergent properties of processing units (e.g. nodes). Hybrid architectures combine both types of processing
(such as CLARION). A further distinction is whether the architecture is centralized, with a neural correlate
of a processor at its core, or decentralized (distributed). The decentralized flavor became popular under
the name of parallel distributed processing in the mid-1980s and connectionism, a prime example being neural
networks. A further design issue is the decision between a holistic and an atomistic, or (more
concretely) modular, structure.
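To make the symbolic, rule-based style concrete, the sketch below implements a bare-bones production system in Python: rules whose conditions match the contents of working memory fire and rewrite it, looping until quiescence. The rule and fact names are invented for illustration and are not drawn from Soar or ACT-R.

```python
# Minimal production-system sketch: IF-THEN rules matched against a
# working memory of facts, in the spirit of symbolic architectures
# such as Soar or ACT-R (rule names and facts are invented examples).

working_memory = {("goal", "add", 3, 4)}

def match_add(wm):
    """Fire on an addition goal and replace it with a result fact."""
    for fact in wm:
        if fact[0] == "goal" and fact[1] == "add":
            _, _, a, b = fact
            return {("result", a + b)}, {fact}
    return None

rules = [match_add]

# Recognize-act cycle: repeatedly fire the first matching rule.
while True:
    for rule in rules:
        fired = rule(working_memory)
        if fired:
            additions, deletions = fired
            working_memory = (working_memory - deletions) | additions
            break
    else:
        break  # quiescence: no rule matched

print(working_memory)  # {('result', 7)}
```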
In traditional AI, intelligence is often programmed from above: the programmer is the creator who builds
the system and imbues it with intelligence, though many traditional AI systems were also designed to
learn (e.g. improving their game-playing or problem-solving competence). Biologically inspired computing,
on the other hand, sometimes takes a more bottom-up, decentralised approach; bio-inspired techniques
often involve specifying a set of simple generic rules or a set of simple nodes, from the
interaction of which the overall behavior emerges. The hope is to build up complexity until the end result is
something markedly complex (see complex systems). However, it is also arguable that systems designed
top-down on the basis of observations of what humans and other animals can do, rather than on
observations of brain mechanisms, are biologically inspired too, though in a different way.
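As a loose illustration of the bottom-up style, the following toy sketch (an invented example, not any particular bio-inspired system) applies one simple local rule at every cell of a ring; the ordered domains that appear are emergent, in that no rule mentions them:

```python
import random

# Toy emergence sketch: each cell follows one simple local rule
# (copy the majority of its neighborhood), yet the ring as a whole
# settles into large ordered domains no single rule mentions.

N, STEPS = 60, 30
cells = [random.randint(0, 1) for _ in range(N)]

for _ in range(STEPS):
    cells = [
        1 if cells[(i - 1) % N] + cells[i] + cells[(i + 1) % N] >= 2 else 0
        for i in range(N)
    ]

print("".join("#" if c else "." for c in cells))
```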
Notable examples
Some well-known cognitive architectures, in alphabetical order:
4CAPS: developed at Carnegie Mellon University by Marcel A. Just and Sashank Varma.
4D-RCS Reference Model Architecture: developed by James Albus at NIST; a reference model architecture that provides a theoretical foundation for designing, engineering, and integrating intelligent systems software for unmanned ground vehicles.[8]
ACT-R: developed at Carnegie Mellon University under John R. Anderson.
ASMO: developed by Rony Novianto, Mary-Anne Williams and Benjamin Johnston at the University of Technology Sydney. This cognitive architecture is based on the idea that actions/behaviours compete for an agent's resources.[9]
CHREST: developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire.
CLARION: a cognitive architecture developed under Ron Sun at Rensselaer Polytechnic Institute and the University of Missouri.
CMAC: the Cerebellar Model Articulation Controller, a type of neural network based on a model of the mammalian cerebellum. It is a type of associative memory.[10] The CMAC was first proposed as a function modeler for robotic controllers by James Albus in 1975 and has been used extensively in reinforcement learning and for automated classification in the machine learning community.
CoJACK: an ACT-R inspired extension to the JACK multi-agent system that adds a cognitive architecture to the agents for eliciting more realistic (human-like) behaviors in virtual environments.
Copycat: by Douglas Hofstadter and Melanie Mitchell at Indiana University.
DAYDREAMER: developed by Erik Mueller at the University of California, Los Angeles under Michael G. Dyer.
DUAL: developed at the New Bulgarian University under Boicho Kokinov.
FORR: developed by Susan L. Epstein at The City University of New York.
Framsticks: a connectionist distributed neural architecture for simulated creatures or robots, where modules of neural networks composed of heterogeneous neurons (including receptors and effectors) can be designed and evolved.
Google DeepMind: the company has created a neural network that learns how to play video games in a similar fashion to humans[11] and a neural network that may be able to access an external memory like a conventional Turing machine,[12] resulting in a computer that appears to possibly mimic the short-term memory of the human brain. The underlying algorithm combines Q-learning with a deep convolutional neural network[13] (see the tabular Q-learning sketch after this list, and see also Jürgen Schmidhuber's overview of earlier related work in deep learning[14][15]).
Hierarchical temporal memory: an online machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex. HTM is a biomimetic model based on the memory-prediction theory of brain function described by Jeff Hawkins in his book On Intelligence. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world.
Holographic associative memory: part of the family of correlation-based associative memories, in which information is mapped onto the phase orientation of complex numbers on a Riemann plane. It was inspired by the holonomic brain model of Karl H. Pribram. Holographs have been shown to be effective for associative memory tasks, generalization, and pattern recognition with changeable attention.
IDA and LIDA: implementing Global Workspace Theory, developed under Stan Franklin at the University of Memphis.
MANIC (Cognitive Architecture): developed by Michael S. Gashler at the University of Arkansas.
PRS: the 'Procedural Reasoning System', developed by Michael Georgeff and Amy Lansky at SRI International.
Psi-Theory: developed under Dietrich Dörner at the Otto-Friedrich University in Bamberg, Germany.
Soar: developed under Allen Newell and John Laird at Carnegie Mellon University and the University of Michigan.
Society of Mind: proposed by Marvin Minsky.
Sparse distributed memory: proposed by Pentti Kanerva at NASA Ames Research Center as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs[17] (see the sketch after this list).
Spaun (Semantic Pointer Architecture Unified Network): by Chris Eliasmith at the Centre for Theoretical Neuroscience at the University of Waterloo. Spaun is a network of 2,500,000 artificial spiking neurons, which uses groups of these neurons to complete cognitive tasks via flexible coordination. Components of the model communicate using spiking neurons that implement neural representations called "semantic pointers" using various firing patterns. Semantic pointers can be understood as elements of a compressed neural vector space[16] (see the binding sketch after this list).
Subsumption architectures: developed e.g. by Rodney Brooks (though it could be argued whether they are cognitive).
The Emotion Machine: proposed by Marvin Minsky.
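For readers unfamiliar with the Q-learning component mentioned in the Google DeepMind entry, the sketch below shows the classical tabular update on an invented toy chain task; DQN replaces the table with a deep network, and none of these constants come from DeepMind's work.

```python
import random

# Tabular Q-learning sketch, the classical update underlying DQN
# (this toy chain task and all constants are illustrative).

N_STATES, ALPHA, GAMMA, EPS = 5, 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # actions: 0 = left, 1 = right

def step(s, a):
    """Walk on a chain; only the far right end gives reward."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

for _ in range(500):                       # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = random.randrange(2) if random.random() < EPS else Q[s].index(max(Q[s]))
        s2, r = step(s, a)
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([q.index(max(q)) for q in Q])  # learned policy: mostly "right" (1)
```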
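The sparse distributed memory entry can be made concrete with the following simplified sketch: a set of random "hard locations" holds counters, and patterns are written to and read from every location within a Hamming radius of the address, so a partially corrupted cue still retrieves the stored pattern. The dimensions and radius here are toy values, far smaller than Kanerva's design.

```python
import random

# Simplified sparse distributed memory sketch (after Kanerva).
# Toy sizes: real SDM uses far more dimensions and locations.

DIM, LOCATIONS, RADIUS = 256, 500, 112

random.seed(0)
addresses = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(LOCATIONS)]
counters = [[0] * DIM for _ in range(LOCATIONS)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(addr, data):
    # Distribute the pattern into every location near the address.
    for loc_addr, loc_ctr in zip(addresses, counters):
        if hamming(addr, loc_addr) <= RADIUS:
            for i, bit in enumerate(data):
                loc_ctr[i] += 1 if bit else -1

def read(addr):
    # Pool counters of nearby locations and threshold to bits.
    sums = [0] * DIM
    for loc_addr, loc_ctr in zip(addresses, counters):
        if hamming(addr, loc_addr) <= RADIUS:
            sums = [s + c for s, c in zip(sums, loc_ctr)]
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(DIM)]
write(pattern, pattern)          # autoassociative store

noisy = pattern[:]               # corrupt 20 of 256 bits
for i in random.sample(range(DIM), 20):
    noisy[i] ^= 1
print(hamming(read(noisy), pattern))  # typically 0: pattern recovered
```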
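Finally, the "compressed neural vector space" view of semantic pointers in the Spaun entry is closely related to holographic reduced representations, where circular convolution binds vectors together. The sketch below illustrates that binding operation in the abstract; the dimension and vectors are illustrative, and this is not Spaun's spiking implementation.

```python
import numpy as np

# Sketch of "semantic pointer"-style binding via circular convolution
# (holographic reduced representations). Names and the dimension are
# illustrative, not Spaun's own.

rng = np.random.default_rng(0)
D = 512

def vec():
    v = rng.normal(0, 1 / np.sqrt(D), D)
    return v / np.linalg.norm(v)

def bind(a, b):      # circular convolution via FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(c, a):    # convolve with the approximate inverse of a
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

role, filler = vec(), vec()
trace = bind(role, filler)            # compressed pairing of the two
recovered = unbind(trace, role)       # noisy reconstruction of filler
print(np.dot(recovered, filler))      # close to 1: filler recovered
```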
See also
Artificial brain
Artificial consciousness
Autonomous agent
Bayesian Brain
Biologically inspired cognitive architectures
Blue Brain Project
BRAIN Initiative
Cognitive architecture comparison
Cognitive computing
Cognitive science
Commonsense reasoning
Computer architecture
Conceptual space
Deep learning
Google Brain
Image schema
Knowledge level
Neocognitron
Neural correlates of consciousness
Never-Ending Language Learning
Open Mind Common Sense
Pandemonium architecture
Simulated reality
Social simulation
Unified theory of cognition
References
1. Lieto, Antonio (2021). Cognitive Design for Artificial Minds. London, UK: Routledge, Taylor & Francis. ISBN 9781138207929.
2. Newell, Allen (1990). Unified Theories of Cognition. Cambridge, Massachusetts: Harvard University Press.
3. Refer to the ICT website: http://cogarch.ict.usc.edu/
4. "Notes--very early EPAM seminar". https://saltworks.stanford.edu/catalog/druid:st035tk1755
5. "This Week's Citation Classic: Anderson J R & Bower G H. Human associative memory. Washington." Current Contents, Nr. 52, Dec 24–31, 1979. http://garfield.library.upenn.edu/classics1979/A1979HX09600001.pdf
6. Anderson, John R. (1983/2013). The Architecture of Cognition. https://books.google.com/books?id=zL0eAgAAQBAJ
7. Vernon, David; Metta, Giorgio; Sandini, Giulio (April 2007). "A Survey of Artificial Cognitive Systems: Implications for the Autonomous Development of Mental Capabilities in Computational Agents". IEEE Transactions on Evolutionary Computation. 11 (2): 151–180. doi:10.1109/TEVC.2006.890274. S2CID 9709702.
8. Gage, Douglas Whitney (2004). Mobile Robots XVII: 26–28 October 2004, Philadelphia, Pennsylvania, USA. Society of Photo-Optical Instrumentation Engineers. p. 35.
9. Novianto, Rony (2014). Flexible Attention-based Cognitive Architecture for Robots (PDF) (Thesis). https://opus.lib.uts.edu.au/bitstream/10453/34414/2/02whole.pdf
10. Albus, James S. (August 1979). "Mechanisms of planning and problem solving in the brain". Mathematical Biosciences. 45 (3–4): 247–293. doi:10.1016/0025-5564(79)90063-4.
11. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Graves, Alex; Antonoglou, Ioannis; Wierstra, Daan; Riedmiller, Martin (2013). "Playing Atari with Deep Reinforcement Learning". arXiv:1312.5602 [cs.LG].
12. Graves, Alex; Wayne, Greg; Danihelka, Ivo (2014). "Neural Turing Machines". arXiv:1410.5401 [cs.NE].
13. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis (25 February 2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533. Bibcode:2015Natur.518..529M. doi:10.1038/nature14236. PMID 25719670. S2CID 205242740.
14. "DeepMind's Nature Paper and Earlier Related Work". http://people.idsia.ch/~juergen/naturedeepmind.html
15. Schmidhuber, Jürgen (2015). "Deep learning in neural networks: An overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637. S2CID 11715509.
16. Eliasmith, C.; Stewart, T. C.; Choo, X.; Bekolay, T.; DeWolf, T.; Tang, Y.; Rasmussen, D. (29 November 2012). "A Large-Scale Model of the Functioning Brain". Science. 338 (6111): 1202–1205. Bibcode:2012Sci...338.1202E. doi:10.1126/science.1225266. PMID 23197532. S2CID 1673514.
17. Denning, Peter J. (1989). "Sparse Distributed Memory". https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19920002425.pdf
External links
Media related to Cognitive architecture at Wikimedia Commons
Quotations related to Cognitive architecture at Wikiquote