Improved Shuffled Frog Leaping Algorithm For Continuous Optimization Problem
Abstract: The shuffled frog leaping algorithm (SFLA) is mainly used for discrete-space optimization. In SFLA, the population is divided into several memeplexes, and several frogs of each memeplex are selected to compose a submemeplex for local evolution, following the mechanism that the worst frog learns from the best frog in the submemeplex or the best frog in the population; after some generations of evolution within each memeplex, the memeplexes are shuffled for global evolution. Derived from the discrete SFLA, a new SFLA for continuous-space optimization is presented, in which the population is divided according to the principle of uniform memeplex performance, and all frogs participate in the evolution by keeping their inertia and learning from better frogs selected at random. Simulation results on searching the minima of several multi-peak continuous functions show that the improved SFLA effectively overcomes premature convergence and slow convergence speed, and achieves high optimization precision.
I. INTRODUCTION
Nowadays, swarm intelligence is a research hotspot in the artificial intelligence field. It mainly simulates the population behaviors of complicated systems in nature or society. Two typical swarm intelligence algorithms are ant colony optimization (ACO) and particle swarm optimization (PSO) [1-2]. ACO, inspired by the foraging behavior of ants, has achieved widespread success in solving many NP-hard combinatorial optimization problems. The ants deposit pheromone on the ground to mark favorable paths that can be followed by other ants; the pheromone then attracts more and more ants to the same path, which gives ACO a tendency to become trapped in local optima. PSO simulates the social behavior of a bird flock. Each particle has an adaptable velocity, according to which its position in the search space is changed; moreover, each particle remembers the best position of the search space it has ever visited. Thus, its movement is an aggregated acceleration towards its own best previously visited position and towards the best position of a topological neighborhood. PSO is characterized by weak local search ability.
This work was supported by the Graduate Student Research Innovation Program of Jiangsu Province (CX08B_091Z) and the Innovation and Excellence Foundation of Doctoral Dissertation of Nanjing University of Aeronautics and Astronautics (BCXJ08-06). Z. Y. Zhen is with the College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China (phone: 86-25-84892310; e-mail: zhenziyang@nuaa.edu.cn). D. B. Wang is with the College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China (e-mail: dbwangpe@nuaa.edu.cn). Y. Y. Liu is with Siemens Numerical Control Ltd., Nanjing 210016, China (e-mail: liuyuanyuan.ext@siemens.com).
II. SHUFFLED FROG LEAPING ALGORITHM

In the SFLA, the population is divided into several memeplexes, and each memeplex contains one submemeplex that evolves independently. A frog adjusts its position by leaping, in a manner similar to the evolution of a meme. The frog's leaping step is determined by the best position in the submemeplex or in the swarm. After evolving for several iterations, all the memeplexes are shuffled and re-sorted for regrouping.

A. Partition of Memeplexes

Suppose the whole frog swarm is represented as $X = \{x_i, f_i,\ i = 1, 2, \dots, F\}$, where $x_i$ is the $i$-th frog's position and $f_i$ is its fitness value. The position of the frog with the best performance is denoted $x_X$. After sorting the frogs in order of decreasing performance, divide the entire swarm into $m$ memeplexes $Y_1, Y_2, \dots, Y_m$, each containing $n$ frogs, such that

$$Y_i = \{(x^i_j, f^i_j) \mid x^i_j = x_{i+m(j-1)},\ f^i_j = f_{i+m(j-1)},\ j = 1, 2, \dots, n\}, \quad (1)$$

where $i = 1, 2, \dots, m$ and $F = mn$.
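As a concrete reading of (1), the following sketch deals a performance-sorted frog list into m memeplexes. It is only illustrative; the function name and the list representation are our assumptions, not from the paper.

```python
def partition(frogs, m):
    """Deal a performance-sorted list of F = m*n frogs into m memeplexes.

    Rule (1) in 0-based indices: memeplex i receives frogs i, i+m, i+2m, ...
    so frog 0 goes to memeplex 0, frog 1 to memeplex 1, and so on.
    """
    n = len(frogs) // m
    return [[frogs[i + m * j] for j in range(n)] for i in range(m)]

# With four frogs and m = 2, frogs 0 and 2 form the first memeplex and
# frogs 1 and 3 the second, matching the example discussed in Section III.
```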
B. Construction and Evolution of Submemeplex

(a) In order to start the local search, each memeplex acts as an independent culture. A submemeplex including $q$ frogs ($q \le n$) is constructed from the memeplex. Each frog in the submemeplex leaps toward the optimum location by learning from the best frog within the submemeplex or from the best frog in the swarm. A frog with higher fitness has a larger weight of being selected into the submemeplex. The weights are assigned with a triangular probability distribution,

$$p_i = \frac{2(n+1-i)}{n(n+1)}, \quad i = 1, \dots, n. \quad (2)$$

Equation (2) shows that the best frog's weight is $p_1 = 2/(n+1)$ and the worst frog's weight is $p_n = 2/(n(n+1))$. The best and worst positions of the submemeplex are denoted $x_B$ and $x_W$, so that $x_B = x_1$ and $x_W = x_q$.

(b) The worst frog in the submemeplex changes its position by a leaping action, so the leaping step is a key factor. It is computed by

$$d^{k+1} = \begin{cases} \min\{\mathrm{Int}[r_1^k (x_B^k - x_W^k)],\ d_{\max}\}, & \text{for a positive step}, \\ \max\{\mathrm{Int}[r_1^k (x_B^k - x_W^k)],\ -d_{\max}\}, & \text{for a negative step}, \end{cases} \quad (3)$$

where $d_{\max}$ is the maximum allowed step size, $r_1^k$ is a random number in the range $[0,1]$, and $k$ is the evolution generation (iteration) number. Thus, the worst frog's new position is computed by

$$x_q^{k+1} = x_q^k + d^{k+1}. \quad (4)$$

If the worst frog's new position is within the feasible space and its fitness value is higher than the old one, accept the new position and go to (e); otherwise go to (c).

(c) The worst frog learns from the best frog in the swarm instead, with leaping step

$$d^{k+1} = \begin{cases} \min\{\mathrm{Int}[r_2^k (x_X^k - x_W^k)],\ d_{\max}\}, & \text{for a positive step}, \\ \max\{\mathrm{Int}[r_2^k (x_X^k - x_W^k)],\ -d_{\max}\}, & \text{for a negative step}. \end{cases} \quad (5)$$

Update the worst frog's position according to (4) and (5). If the new position is within the feasible space and better than the old one, replace the worst frog's position and go to (e); otherwise go to (d).

(d) Randomly generate a new position for the worst frog by

$$x_q^{k+1} = a + \mathrm{Int}[r_3^k (b - a)], \quad (6)$$

where $b$ and $a$ are the maximum and minimum boundaries of the frogs' feasible positions, respectively.

(e) Return the submemeplex to the memeplex, and sort the memeplex in order of decreasing performance. Repeat steps (a) to (e) until the submemeplexes have evolved for $G_1$ generations.

In essence, the evolution of submemeplexes is a process in which the worst frog learns from the best one in the submemeplex or in the swarm. The swarm's performance is enhanced by this internal evolution, and the gap between the best frog and the worst one decreases; this acts as the local search. A condensed sketch of steps (a)-(d) is given below.
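The following sketch condenses steps (a)-(d) for one submemeplex on a one-dimensional discrete search space. It is a sketch under stated assumptions: fitness is higher-is-better, positions are integers in [lo, hi], and the helper names (triangular_weights, sample_submemeplex, evolve_worst) are illustrative rather than from the paper.

```python
import random

def triangular_weights(n):
    # Eq. (2) with 0-based rank r = i - 1: p = 2(n - r) / (n(n + 1)).
    return [2.0 * (n - r) / (n * (n + 1)) for r in range(n)]

def sample_submemeplex(n, q):
    # Step (a): draw q distinct ranks, better ranks being more likely.
    pool, w, chosen = list(range(n)), triangular_weights(n), []
    for _ in range(q):
        r = random.choices(pool, weights=[w[p] for p in pool], k=1)[0]
        pool.remove(r)
        chosen.append(r)
    return sorted(chosen)   # chosen[0] is x_B's rank, chosen[-1] is x_W's

def evolve_worst(x_b, x_w, f_w, x_best, fitness, lo, hi, d_max):
    # Steps (b)-(d): try leaping toward x_B, then toward the swarm best x_X,
    # and finally fall back to a random feasible position, eqs. (3)-(6).
    for target in (x_b, x_best):
        step = int(random.random() * (target - x_w))   # Int[r (x - x_W)]
        step = max(-d_max, min(step, d_max))           # clamp to +/- d_max
        x_new = x_w + step                             # eq. (4)
        if lo <= x_new <= hi and fitness(x_new) > f_w:
            return x_new
    return lo + int(random.random() * (hi - lo))       # eq. (6)
```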
C. Shuffling of the Population

After several generations of internal evolution, all the memeplexes are shuffled and sorted in order of decreasing performance. The frog swarm is then repartitioned into memeplexes, and the evolution within each memeplex proceeds until a stopping criterion is met, for example when the maximum generation $G_2$ of the population is reached. This periodic shuffling promotes a global exchange of information among the frogs and helps to concentrate the search in the direction of the most promising regions identified by the memeplexes. Based on the above description, the flow chart of the SFLA is shown in Fig. 1.

[Fig. 1. Flow chart of the basic SFLA.]

III. CONTINUOUS SHUFFLED FROG LEAPING ALGORITHM AND ITS IMPROVEMENT

For a discrete optimization problem, the leaping step of the SFLA is computed over a discrete point set. When the SFLA is used to solve continuous problems, the leaping step needs to be modified.
In the continuous search domain, the leaping-step calculations (3), (5), and (6) are correspondingly modified as

$$d^{k+1} = r_1^k (x_B^k - x_W^k), \quad (7)$$

$$d^{k+1} = r_2^k (x_X^k - x_W^k), \quad (8)$$

$$x_q^{k+1} = a + r_3^k (b - a). \quad (9)$$
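As a small illustrative sketch of (7)-(9) (one-dimensional; in N dimensions the same rule applies per component, and the function name is ours), the continuous versions simply drop the Int[.] rounding and the d_max clamp of the discrete moves:

```python
import random

def continuous_moves(x_b, x_w, x_best, lo, hi):
    # Continuous counterparts of the discrete moves, eqs. (7), (8), (9).
    d_sub   = random.random() * (x_b - x_w)     # (7) leap toward submemeplex best
    d_swarm = random.random() * (x_best - x_w)  # (8) leap toward swarm best
    x_rand  = lo + random.random() * (hi - lo)  # (9) random feasible restart
    return d_sub, d_swarm, x_rand
```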
The analysis and modification of the continuous SFLA are described in the following.

(a) In the SFLA, the division method (1) makes the performance of the memeplexes unbalanced: the first memeplex performs best and the last one worst. In this situation, the internal learning process can hardly improve the worse memeplexes, because they contain no very good frogs to learn from. Therefore, in order to make the memeplexes' performance uniform, we introduce a new method to divide the frog swarm into $m$ memeplexes $Y_1', Y_2', \dots, Y_m'$ by

$$Y_i' = \{(x^i_j, f^i_j)\}, \quad x^i_j = \begin{cases} x_{i+m(j-1)}, & j \text{ odd}, \\ x_{jm-i+1}, & j \text{ even}, \end{cases} \quad j = 1, 2, \dots, n, \quad (10)$$

where $i = 1, 2, \dots, m$ and $f^i_j$ is the corresponding fitness value. For example, if a swarm of four frogs is partitioned into two memeplexes according to (1), the first and third frogs belong to the first memeplex and the others to the second, so the first memeplex performs better than the second. With the improved method (10), however, the first and fourth frogs belong to the first memeplex and the others to the second, and the performance of the two memeplexes becomes much more balanced. This modified division method benefits the internal learning effect, as the sketch below illustrates.
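A minimal sketch of this balanced division, assuming the zigzag dealing rule implied by the four-frog example (the function name and representation are illustrative):

```python
def balanced_partition(frogs, m):
    # Balanced division (10): deal the sorted frogs in blocks of m, reversing
    # every second block, so each memeplex receives a similar mix of good
    # and bad performers.
    n = len(frogs) // m
    groups = [[] for _ in range(m)]
    for j in range(n):
        block = frogs[j * m:(j + 1) * m]   # next m frogs, best first
        if j % 2:                          # reverse every second pass
            block = block[::-1]
        for i in range(m):
            groups[i].append(block[i])
    return groups

# Four frogs, two memeplexes: balanced_partition([1, 2, 3, 4], 2)
# yields [[1, 4], [2, 3]], matching the example above.
```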
(b) In the SFLA, only the worst frog in the submemeplex adjusts its position according to (7)-(9). This is an insufficient learning mechanism for the swarm; in particular, the better frogs have few learning chances unless the worse ones catch up with them. Moreover, under the learning strategy in (7)-(8), the frogs in a submemeplex easily converge to local minima. In order to avoid these problems, we cancel the construction of the submemeplex and make all the frogs in the memeplex take part in the evolution; each frog adjusts its position by keeping an inertia term and learning from any frog with better performance. The leaping step of the $i$-th frog is therefore modified as

$$d_i^{k+1} = r_1^k d_i^k + r_2^k (x_C^k - x_i^k), \quad i = 1, 2, \dots, n, \quad (11)$$

where $x_C^k$ is the position of a randomly selected frog with better performance. The better frogs mainly keep their inertia movement, while the worse ones mainly learn from the better frogs. This strategy helps avoid falling into local minima and thus improves the performance of the SFLA.
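An illustrative sketch of the improved update (11) follows (one-dimensional; the function name and the handling of the current best frog are our assumptions, not from the paper):

```python
import random

def isfla_move(i, xs, ds, fs):
    # Improved update (11): d_i <- r1*d_i + r2*(x_C - x_i); x_i <- x_i + d_i.
    # x_C is drawn uniformly from the frogs currently better than frog i.
    # Assumption: the best frog, having no better frog to learn from,
    # keeps a pure inertia move (the attraction term is zero).
    better = [j for j in range(len(xs)) if fs[j] > fs[i]]
    x_c = xs[random.choice(better)] if better else xs[i]
    ds[i] = random.random() * ds[i] + random.random() * (x_c - xs[i])
    xs[i] += ds[i]
```

Because r1 only scales the previous step, frogs near the top of the ranking drift mostly by inertia, while low-ranked frogs are pulled strongly toward better frogs, which matches the behavior described above.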
IV. SIMULATION RESEARCH

Two benchmark functions are used to test the performance of the improved SFLA (ISFLA) by comparing it with the PSO and the basic SFLA. The Levy function is given by

$$f(x) = \sin^2(\pi y_1) + \sum_{i=1}^{N-1} (y_i - 1)^2 \left(1 + 10 \sin^2(\pi y_i + 1)\right) + (y_N - 1)^2 \left(1 + 10 \sin^2(2\pi y_N)\right),$$

where $y_i = 1 + (x_i - 1)/4$, $x_i \in [-10, 10]$, and $N = 30$. The Levy function has many local minima but only one global minimum, $f_{\min}(x^*) = 0$ at $x^* = [1, 1, \dots, 1]$. The Ackley function can be described as
$$f(x) = 20 + e - 20 e^{-\frac{1}{5}\sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}} - e^{\frac{1}{N}\sum_{i=1}^{N} \cos(2\pi x_i)},$$
where $x_i \in [-15, 30]$ and $N = 5$. The Ackley function has a global minimum value $f_{\min}(x^*) = 0$ at $x^* = [0, 0, \dots, 0]$.

The parameters of the PSO algorithm are configured as follows: learning parameters $c_1 = c_2 = 2$; the inertia weight decreases from an initial value $w_{ini} = 1.2$ to a final value $w_{end} = 0.1$. When the SFLA is applied to these minimization problems, the performance of a frog is inversely proportional to the function value. For every function, 100 experiments are recorded, and a run is considered successful only if it reaches a precision of $10^{-3}$. In order to evaluate the influence of parameters such as the submemeplex evolution generation number $G_1$, the number of memeplexes $m$, and the number of frogs $n$ in each memeplex, the following parameter settings are designed:

(a) $G_2 = 100$, $G_1 = 10$, $m = 10$, $n = 10$ ($q = 8$);
(b) $G_2 = 100$, $G_1 = 50$, $m = 10$, $n = 10$ ($q = 8$);
(c) $G_2 = 100$, $G_1 = 10$, $m = 5$, $n = 20$ ($q = 16$).
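To make the test setup concrete, here are minimal Python sketches of the two benchmark functions as defined above (plain lists, no external libraries; the function names are ours):

```python
import math

def levy(x):
    # Levy function with y_i = 1 + (x_i - 1)/4; global minimum 0 at x = [1,...,1].
    y = [1 + (xi - 1) / 4.0 for xi in x]
    n = len(y)
    s = math.sin(math.pi * y[0]) ** 2
    s += sum((y[i] - 1) ** 2 * (1 + 10 * math.sin(math.pi * y[i] + 1) ** 2)
             for i in range(n - 1))
    s += (y[-1] - 1) ** 2 * (1 + 10 * math.sin(2 * math.pi * y[-1]) ** 2)
    return s

def ackley(x):
    # Ackley function; global minimum 0 at x = [0,...,0].
    n = len(x)
    rms = math.sqrt(sum(xi * xi for xi in x) / n)
    return (20 + math.e - 20 * math.exp(-0.2 * rms)
            - math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n))
```

Both levy([1.0] * 30) and ackley([0.0] * 5) return 0.0 up to floating-point error, matching the stated global minima; a run counts as successful when the best value found falls below the 1e-3 threshold above.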
Tables I and II show the experimental results of PSO, SFLA, and ISFLA on the Levy and Ackley functions.

TABLE I. LEVY FUNCTION

| Algorithm | Success Rate (%) | Average Generation | Average Minimum | Minimum |
| PSO       | 0   | >100 | 24.4971 | 3.6893 |
| SFLA (a)  | 0   | >100 | 0.3922  | 0.0895 |
| SFLA (b)  | 83  | 79   | 0.0114  | 2.6180×10^-24 |
| SFLA (c)  | 0   | >100 | 1.0022  | 0.3480 |
| ISFLA (a) | 47  | 21   | 0.0990  | 1.0638×10^-32 |
| ISFLA (b) | 6   | 10   | 1.5625  | 1.4998×10^-32 |
| ISFLA (c) | 51  | 15   | 0.1189  | 1.4998×10^-5 |

TABLE II. ACKLEY FUNCTION

| Algorithm | Success Rate (%) | Average Generation | Average Minimum | Minimum |
| PSO       | 100 | 86 | 1.353×10^-5   | – |
| SFLA (a)  | 54  | 82 | 0.0185        | – |
| SFLA (b)  | 100 | 23 | 7.2400×10^-11 | – |
| SFLA (c)  | 2   | 65 | 0.1726        | – |
| ISFLA (a) | 100 | 8  | 2.4158×10^-15 | 8.8818×10^-16 |
| ISFLA (b) | 100 | 2  | 3.4817×10^-15 | 8.8818×10^-16 |
| ISFLA (c) | 100 | 6  | 2.7001×10^-15 | 8.8818×10^-16 |
From the optimization results, several points can be drawn. First, increasing the number of evolution generations of the submemeplexes benefits the global searching capability of the algorithm, and the SFLA can achieve good results within relatively few evolution generations. Second, increasing the number of memeplexes benefits the SFLA, while increasing the memeplex size benefits the ISFLA. Third, the ISFLA is better than the SFLA and the PSO according to the indexes of success rate, average minimum, and minimum value.

V. CONCLUSION

This paper extends the SFLA for discrete optimization problems to continuous optimization problems, and an improved SFLA is presented. The purpose of the evolution within submemeplexes is local search, while the purpose of the shuffling strategy is to enhance the global search. The combination of such local and global search strategies improves the randomness and the diversity of the frogs' distribution in continuous space, in contrast to common evolutionary algorithms, which suffer from premature convergence caused by the gradient distribution of the offspring search space. However, the partition method of memeplexes and the internal evolution strategy of the basic SFLA have shortcomings. Considering these facts, a new partition method of memeplexes based on the principle of performance uniformity is proposed; moreover, an inertia-keeping strategy and a strategy of learning from randomly selected better frogs are presented. Simulation results on commonly used multi-peak functions show the effectiveness of the improved SFLA.

REFERENCES
[1] A. Colorni, M. Dorigo, V. Maniezzo, et al., "Distributed optimization by ant colonies," Proceedings of the 1st European Conference on Artificial Life, Paris, France, 1991, pp. 134-142.
[2] J. Kennedy, R. Eberhart, "Particle swarm optimization," Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942-1948.
[3] M. M. Eusuff, K. E. Lansey, "Water distribution network design using the shuffled frog leaping algorithm," Proceedings of the 2nd World Water Congress of the International Water Association, Berlin, Germany, 2001.
[4] M. M. Eusuff, K. E. Lansey, "Optimization of water distribution network design using the shuffled frog leaping algorithm," Journal of Water Resources Planning and Management, 2003, vol. 129, no. 3, pp. 210-225.
[5] M. M. Eusuff, K. E. Lansey, F. Pasha, "Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization," Engineering Optimization, 2006, vol. 38, no. 2, pp. 129-154.
[6] Q. Duan, V. K. Gupta, S. Sorooshian, "A shuffled complex evolution approach for effective and efficient global minimization," Journal of Optimization Theory and Applications, 1993, vol. 76, no. 3, pp. 501-521.
[7] Q. Duan, S. Sorooshian, V. K. Gupta, "Effective and efficient global optimization for conceptual rainfall-runoff models," Water Resources Research, 1992, vol. 28, no. 4, pp. 1015-1031.
[8] E. Elbeltagi, T. Hegazy, D. Grierson, "Comparison among five evolutionary-based optimization algorithms," Advanced Engineering Informatics, 2005, vol. 19, no. 1, pp. 43-53.
[9] E. Elbeltagi, T. Hegazy, D. Grierson, "A modified shuffled frog-leaping optimization algorithm: applications to project management," Structure and Infrastructure Engineering, 2007, vol. 3, no. 1, pp. 53-60.
[10] Z. Y. Zhen, Z. S. Wang, Z. Gu, Y. Y. Liu, "A novel memetic algorithm for global optimization based on PSO and SFLA," Lecture Notes in Computer Science, Springer, 2007, vol. 4683, pp. 127-136.
[11] X. C. Zhang, X. M. Hu, G. Z. Cui, et al., "An improved shuffled frog leaping algorithm with cognitive behavior," Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, June 25-27, 2008, pp. 6197-6202.