Abstract
Hierarchical Incremental Class Learning (HICL) is a new task decomposition method that addresses the pattern classification problem. HICL has been shown to be an effective classifier, but closer examination reveals areas for potential improvement. This paper proposes a theoretical model to evaluate the performance of HICL and presents an approach to improve its classification accuracy by applying the concept of Reduced Pattern Training (RPT). The theoretical analysis shows that HICL can achieve better classification accuracy than Output Parallelism [Guan and Li: IEEE Transactions on Neural Networks, 13 (2002), 542–550]. The procedure for RPT is described and compared with the original training procedure. RPT systematically reduces the size of the training data set based on the order in which sub-networks are built. The results from four benchmark classification problems show much promise for the improved model.
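To illustrate the idea behind RPT, the following minimal sketch shows how a training set can shrink as sub-networks are built in sequence. The function names (`reduced_pattern_training`, `train_subnetwork`) and the one-class-per-sub-network assumption are illustrative, not the authors' implementation:

```python
# Hypothetical sketch of Reduced Pattern Training (RPT). Assumption:
# each sub-network is responsible for one class, and once a class has
# been learned its patterns are removed from the training set seen by
# later sub-networks, so training sets shrink with the build order.

def reduced_pattern_training(patterns, class_order, train_subnetwork):
    """patterns: list of (features, label) pairs.
    class_order: sequence of labels giving the order in which
    sub-networks are built.
    train_subnetwork: callable (label, patterns) -> trained module."""
    remaining = list(patterns)
    subnetworks = []
    for cls in class_order:
        # Train the current sub-network on all still-remaining patterns.
        subnetworks.append(train_subnetwork(cls, remaining))
        # RPT step: drop patterns of the class just learned, so each
        # subsequent sub-network trains on a smaller data set.
        remaining = [(x, y) for (x, y) in remaining if y != cls]
    return subnetworks
```

Under this assumption, a sub-network built later in the hierarchy sees strictly fewer patterns than one built earlier, which is the source of the training-cost reduction the abstract refers to.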
References
Auda G., Kamel M., Raafat H. (1996): Modular neural network architectures for classification. IEEE International Conference on Neural Networks, 2, 1279–1284
Jacobs R.A., Jordan M.I., Nowlan S.J., Hinton G.E. (1991): Adaptive mixtures of local experts. Neural Computation, 3, 79–87
Auda G., Kamel M., Raafat H. (1994): A new neural network structure with cooperative modules. World Congress on Computational Intelligence, 3, 1301–1306
Jacobs R., Jordan M., Barto A. (1991): Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive Science, 15, 219–250
Murre J. (1992): Learning and Categorization in Modular Neural Networks. Harvester Wheatsheaf, Hemel Hempstead, UK
Romaniuk S.G., Hall L.O. (1993): Divide and conquer neural networks. Neural Networks, 6, 1105–1116
Sharkey A.J.C. (1997): Modularity, combining and artificial neural nets. Connection Science, 9, 3–10
Feldman J. (1989): Neural representation of conceptual knowledge. In: Nadel et al. (eds). Neural Connections, Mental Computation. MIT Press, Cambridge, Massachusetts, USA
Anand R., Mehrotra K., Mohan C.K., Ranka S. (1995): Efficient classification for multiclass problems using modular neural networks. IEEE Transactions on Neural Networks, 6, 117–124
Lu B.L., Kita H., Nishikawa Y. (1994): A multisieving neural network architecture that decomposes learning tasks automatically. Proceedings of the IEEE Conference on Neural Networks, Orlando, FL, 1319–1324
Lu B.L., Ito M. (1999): Task decomposition and module combination based on class relations: A modular neural network for pattern classification. IEEE Transactions on Neural Networks, 10, 1244–1256
Guan S.-U., Li P. (2002): A hierarchical incremental learning approach to task decomposition. Journal of Intelligent Systems, 12, 194–205
Guan S.-U., Zhu F. (2004): Class decomposition for GA-based classifier agents – a pitt approach. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34, 381–392
Guan S.-U., Neo T.N., Bao C (2004): Task decomposition using pattern distributor. Journal of Intelligent Systems, 13, 123–150
Guan S.-U., Li S.C., Tan S.K. (2004): Neural network task decomposition based on output partitioning. Journal of the Institution of Engineers Singapore, 44, 78–89
Guan S.-U., Li P. (2004): Incremental learning in terms of output attributes. Journal of Intelligent Systems, 13, 95–122
Guan S.-U., Zhu F. (2005): A class decomposition approach for GA-based classifier agents. Engineering Applications of Artificial Intelligence, 18, 271–278
Guan S.-U., Li S.C. (2000): An approach to parallel growing and training of neural networks. Proceedings of the 2000 IEEE International Symposium on Intelligent Signal Processing and Communication Systems, Honolulu, Hawaii, USA
Guan S.-U., Li S. (2002): Parallel growing and training of neural networks using output parallelism. IEEE Transactions on Neural Networks, 13, 542–550
Squires C.S., Shavlik J.W. (1991): Experimental analysis of aspects of the cascade-correlation learning architecture. Machine Learning Research Group Working Paper 91-1, Computer Science Department, University of Wisconsin-Madison
Lehtokangas M. (1999): Modeling with constructive backpropagation. Neural Networks, 12, 707–716
Riedmiller M., Braun H. (1993): A direct adaptive method for faster backpropagation learning: the RPROP algorithm. Proceedings of the IEEE International Conference on Neural Networks, 586–591
Prechelt L. (1994): PROBEN1: A set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Department of Informatics, University of Karlsruhe, Germany
Cite this article
Guan, SU., Bao, C. & Sun, RT. Hierarchical Incremental Class Learning with Reduced Pattern Training. Neural Process Lett 24, 163–177 (2006). https://doi.org/10.1007/s11063-006-9019-4