Summary
Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities and differences with continuous Estimation of Distribution Algorithms are analyzed. The aspects of reliability of the estimation, overall step-size control, and independence from the coordinate system (invariance) become particularly important with small population sizes. Consequently, performing the adaptation task with small populations is more intricate.
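To make the adaptation idea concrete, the following minimal sketch (Python with NumPy, not taken from the chapter) illustrates the basic cycle of sampling candidates from a multivariate normal distribution and re-estimating the covariance matrix from the selected steps, in the spirit of a rank-μ update. All names and parameter values (the sphere objective, population sizes, the learning rate c_mu) are illustrative assumptions; step-size adaptation, evolution paths, and weighted recombination, which the chapter discusses, are omitted.

```python
import numpy as np

def sphere(x):
    """Toy objective (assumed for illustration): minimize the squared norm."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)

# Illustrative parameters (not the chapter's recommended settings)
n = 5                          # problem dimension
lam, mu = 20, 10               # offspring population size and number of selected parents
c_mu = 0.3                     # hypothetical learning rate for the covariance update
sigma = 0.5                    # global step size, kept fixed here (CMA adapts it separately)
mean = rng.standard_normal(n)  # mean of the search distribution
C = np.eye(n)                  # covariance matrix to be adapted

for generation in range(200):
    # Sample lambda candidates from N(mean, sigma^2 * C)
    A = np.linalg.cholesky(C)           # C = A A^T
    z = rng.standard_normal((lam, n))
    y = z @ A.T                         # rows are distributed as N(0, C)
    x = mean + sigma * y                # candidate solutions

    # Select the mu best candidates
    idx = np.argsort([sphere(xi) for xi in x])[:mu]
    y_sel = y[idx]

    # Move the mean toward the selected candidates (unweighted recombination)
    mean = mean + sigma * y_sel.mean(axis=0)

    # Rank-mu style estimate of the covariance from the selected steps,
    # blended with the old matrix for reliability when few samples are available
    C_mu = sum(np.outer(yi, yi) for yi in y_sel) / mu
    C = (1.0 - c_mu) * C + c_mu * C_mu

print("objective value at the final mean:", sphere(mean))
```

Blending the new estimate with the previous covariance matrix hints at why small populations make the task harder: with few selected samples per generation, the raw estimate C_mu is unreliable on its own, which is one of the issues the chapter examines.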
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In: Lozano, J.A., Larrañaga, P., Inza, I., Bengoetxea, E. (eds) Towards a New Evolutionary Computation. Studies in Fuzziness and Soft Computing, vol 192. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32494-1_4
DOI: https://doi.org/10.1007/3-540-32494-1_4
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-29006-3
Online ISBN: 978-3-540-32494-2