Abstract
This work investigates the scalability of Probabilistic Neural Networks (PNNs) through parallelization, localization, and chain gradient tuning. Since the PNN model is inherently parallel, three common parallel approaches are studied: data parallelism, neuron parallelism, and pipelining. Localization via clustering algorithms is used to reduce the hidden-layer size of PNNs, but localization can be problematic in the multi-class case. We propose two simple, fast, approximate solutions. The first applies the sigma smoothing parameters obtained from the initial parallel PNN training directly to the clustering step; this achieves a substantial reduction in neurons without a significant loss of recognition accuracy. The second adds a further tuning stage: using confidence outputs, we employ a chain training approach to search for the best possible PNN architecture.
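To make the two ideas in the abstract concrete, the sketch below shows (a) a classic PNN, which keeps one Gaussian pattern neuron per training sample, and (b) a localization step that shrinks the hidden layer by replacing each class's samples with a few cluster centroids. This is a minimal illustration, not the authors' method: the per-class k-means here stands in for whatever clustering algorithm is used, the single global `sigma` stands in for the tuned smoothing parameters, and all function names are hypothetical.

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Classic PNN: one Gaussian pattern neuron per stored pattern.
    A class's score is the mean kernel response of its neurons."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = np.sum((Xc - x) ** 2, axis=1)          # squared distances to x
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

def localize(X_train, y_train, k=2, iters=20, seed=0):
    """Localization sketch: per-class k-means replaces each class's
    samples with k centroids, shrinking the hidden layer from
    N neurons to roughly k * n_classes."""
    rng = np.random.default_rng(seed)
    centers, labels = [], []
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        C = Xc[rng.choice(len(Xc), size=min(k, len(Xc)), replace=False)]
        for _ in range(iters):
            # assign each sample to its nearest centroid, then recompute means
            assign = np.argmin(((Xc[:, None] - C[None]) ** 2).sum(-1), axis=1)
            C = np.array([Xc[assign == j].mean(0) if np.any(assign == j) else C[j]
                          for j in range(len(C))])
        centers.append(C)
        labels.extend([c] * len(C))
    return np.vstack(centers), np.array(labels)
```

After localization, `pnn_predict` is simply called with the centroids in place of the full training set; the abstract's point is that this reduction costs little recognition accuracy when the smoothing parameters from the initial parallel training are reused for the clustering.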
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Kokkinos, Y., Margaritis, K. (2012). Parallelism, Localization and Chain Gradient Tuning Combinations for Fast Scalable Probabilistic Neural Networks in Data Mining Applications. In: Maglogiannis, I., Plagianakos, V., Vlahavas, I. (eds) Artificial Intelligence: Theories and Applications. SETN 2012. Lecture Notes in Computer Science(), vol 7297. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-30448-4_6
Print ISBN: 978-3-642-30447-7
Online ISBN: 978-3-642-30448-4