A hierarchical model for structure learning based on the physiological characteristics of neurons

  • Research Article
  • Published in: Frontiers of Computer Science in China (2007)

Abstract

Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. Typical ANN models are characterized by fixed connections with evolved weights, globalized representations, and globalized optimization, all based on a mathematical approach. This leaves such models deficient in robustness, learning efficiency, capacity, resistance to interference between training sets, and the correlation of samples. In this paper, we attempt to address these problems by adopting the morphological and signal-processing characteristics of biological neurons. A hierarchical neural network was designed and implemented to perform structure learning and to represent samples through connection structures. The basic characteristics of this model are localized and random connections, field limitations on neuron fan-in and fan-out, dynamic neuron behavior, and samples represented by different sub-circuits of neurons specialized into different response patterns. At the end of the paper, several important aspects of the model, including error correction, capacity, learning efficiency, and the soundness of structural representation, are analyzed theoretically. The paper demonstrates the feasibility and advantages of structure learning and representation; the model can serve as a fundamental element of cognitive systems such as perception and associative memory.
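The full text is subscription-gated, so the model's concrete wiring rules, dynamics, and learning procedure are not reproduced here. As a reading aid only, the following minimal Python sketch illustrates the connectivity constraints the abstract enumerates: localized and random connections, hard limits on neuron fan-in and fan-out, and a sample represented by the sub-circuit of upper-layer neurons it activates. Every name and parameter in the sketch (FAN_IN_MAX, FAN_OUT_MAX, LOCALITY, the firing threshold) is a hypothetical stand-in, not the paper's actual formulation.

```python
# Hypothetical sketch of the connectivity constraints named in the abstract.
# All constants below are illustrative assumptions, not values from the paper.
import random

FAN_IN_MAX = 4   # assumed cap on incoming connections per upper-layer neuron
FAN_OUT_MAX = 6  # assumed cap on outgoing connections per lower-layer neuron
LOCALITY = 3     # assumed radius of the local field within which wiring may form


def build_layer_connections(lower_size, upper_size, rng):
    """Wire two adjacent layers with localized, random, bounded connections."""
    fan_out = [0] * lower_size
    incoming = {u: [] for u in range(upper_size)}
    for u in range(upper_size):
        # Project the upper neuron onto the lower layer to define its local field.
        center = int(u * lower_size / upper_size)
        field = list(range(max(0, center - LOCALITY),
                           min(lower_size, center + LOCALITY + 1)))
        rng.shuffle(field)  # random choice of partners within the local field
        for l in field:
            if len(incoming[u]) >= FAN_IN_MAX:  # fan-in limit reached
                break
            if fan_out[l] < FAN_OUT_MAX:        # respect the fan-out limit
                incoming[u].append(l)
                fan_out[l] += 1
    return incoming


def active_subcircuit(incoming, active_lower, threshold=2):
    """An upper neuron fires when enough of its local inputs are active;
    the set of firing neurons is the sample's structural representation."""
    return {u for u, srcs in incoming.items()
            if sum(1 for l in srcs if l in active_lower) >= threshold}


if __name__ == "__main__":
    rng = random.Random(0)
    wiring = build_layer_connections(lower_size=20, upper_size=10, rng=rng)
    sample = {2, 3, 4, 5, 6}  # a localized stimulus on the lower layer
    print(sorted(active_subcircuit(wiring, sample)))
```

Under constraints of this kind, distinct input samples tend to recruit distinct (possibly overlapping) sub-circuits, which is the sense in which a sample's representation is a connected structure rather than a global weight configuration.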

Author information

Corresponding author: Wei Hui.

About this article

Cite this article

Wei, H. A hierarchical model for structure learning based on the physiological characteristics of neurons. Front. Comput. Sc. China 1, 361–372 (2007). https://doi.org/10.1007/s11704-007-0035-y
