A differential search algorithm with an improved optimization strategy
Technical field
The present invention relates to an optimization method based on an improved differential search, and in particular to a differential search algorithm with an improved optimization strategy, and belongs to the field of intelligent computing.
Background art
The inspiration for the differential search algorithm (Differential Search Algorithm, DSA) comes from the migration behaviour of organisms in nature. The algorithm takes the differential evolution algorithm as its basic structure and retains its methods of randomly generating initial positions and limiting the search boundary. It uses Brownian motion to simulate the random movement of organisms during migration, and models the updating of problem dimensions by randomly activating virtual individuals. It is a relatively new and efficient optimization algorithm.
During evolution, the differential search algorithm easily falls into local extrema. The main reason is that the search direction of the biological population diverges too much, so that the overall fluctuation range is very large and it is difficult to optimize along the optimal path. Because of the randomness of Brownian motion, the algorithm easily becomes trapped in local extrema in the later iterations and has difficulty locating the peak; the search often stops iterating before the optimal point has been reached. To solve the above problems, the present invention proposes a differential search algorithm with an improved optimization strategy, which overcomes problems of the original algorithm such as easily falling into local extrema and long computation times.
Summary of the invention
The invention discloses a differential search algorithm with an improved optimization strategy. It mainly changes the optimization strategy of the original differential search algorithm, specifically by improving the activation mode of the virtual individuals and the search range of the algorithm. First, the present invention generates an initial position according to the search range set for the algorithm; this position is called the habitat, and the value of the habitat is taken as the global optimum. Then, the algorithm iterates and updates continuously according to the improved method of the present invention: the candidate value generated after each iteration is compared with the global optimum, the better solution is selected as the new global optimum, and the updates are repeated until the iteration termination condition is met. During this process, if a candidate value generated in an iteration exceeds the search range set for the algorithm, the boundary limiting method is executed.
The specific steps of the differential search algorithm with the improved optimization strategy are as follows:
Step 1: set the initial parameters: set the search range (the global optimum generated by the algorithm must not exceed this range), set the number of individuals in the population and the dimension of the optimization problem, set the number of iterations or the termination condition, set the optimization mode, and start the timing program;
Step 2: the population of the differential search algorithm with the improved optimization strategy randomly generates an initial position within the search range. This position is called the habitat, and its value is taken as the global optimum. Because this optimal value is generated at random, it is almost never the final solution;
Step 3: judge whether the optimization process has reached the preset maximum number of iterations or whether the iterative value has reached the objective function value; if not, carry out the next round of iteration;
Step 4: the differential search algorithm with the improved optimization strategy evolves the population positions randomly according to Brownian motion. The population position of the next evolution is called the stopover, and from the stopover the candidate value generated by this round of iteration is obtained. Specifically:
Stopover = Pos + R × map × (Dir − Pos)
where Stopover is the stopover position, representing the current position of the virtual superorganism; R is a random value set for the movement, used to simulate Brownian random motion; map is a selector composed of 0s and 1s with one entry per problem dimension, where 0 means that the corresponding dimension is not searched in this round of iteration and 1 means the opposite; Pos is the position of the randomly selected individuals that migrate; and Dir − Pos indicates the direction in which the superorganism migrates.
The differential search algorithm with the improved optimization strategy adjusts the iteration scheme built on R × map as its optimization strategy. Specifically: the original differential search algorithm uses five candidate values for R, namely R1 = 4*randn; R2 = 4*randg; R3 = lognrnd(rand, 5*rand); R4 = 1./gamrnd(1, 0.5); R5 = 1/normrnd(0, 5), where the function rand generates a pseudo-random number between 0 and 1; randn denotes a normal distribution with mean 0 and variance 1; randg denotes a Gamma distribution with scale parameter and shape parameter equal to 1; lognrnd denotes a log-normal distribution; gamrnd denotes a Gamma distribution; and normrnd denotes a normal distribution. The values R1-R5 above all fluctuate over a wide range and are unstable. The present invention requires the range of R to be smaller and more stable; in the present invention, R is therefore assigned as R = 2*rand.
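As an illustration only, and assuming a Python/NumPy environment rather than the MATLAB-style functions named above, the five original R candidates and the improved scaling factor could be sketched as follows; the function names and the random-candidate selection are illustrative:

import numpy as np

rng = np.random.default_rng()

def original_R():
    # The five R candidates of the original DSA, mirroring the draws named above;
    # one of them is picked here at random for illustration.
    r1 = 4 * rng.standard_normal()                       # 4*randn
    r2 = 4 * rng.standard_gamma(1.0)                     # 4*randg (shape 1, scale 1)
    r3 = rng.lognormal(rng.random(), 5 * rng.random())   # lognrnd(rand, 5*rand)
    r4 = 1.0 / rng.gamma(1.0, 0.5)                       # 1./gamrnd(1, 0.5)
    r5 = 1.0 / rng.normal(0.0, 5.0)                      # 1/normrnd(0, 5)
    return rng.choice([r1, r2, r3, r4, r5])

def improved_R():
    # Improved strategy: a narrower, more stable scaling factor
    return 2 * rng.random()                              # R = 2*rand, uniform in [0, 2)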
In the differential search algorithm, map simulates the update state of the problem dimensions. The original algorithm tends to update only one particular problem dimension, and it is even possible that no dimension is updated at all. The present invention is biased towards updating the dimensions of several problems at the same time, and eliminates the case in which no dimension is updated, as sketched below.
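A minimal sketch of such a dimension selector, again assuming NumPy; the activation probability p = 0.5 is an illustrative choice, not a value fixed by the invention:

import numpy as np

rng = np.random.default_rng()

def improved_map(dim, p=0.5):
    # Activate each problem dimension independently with probability p, so that
    # several dimensions are typically updated in the same iteration.
    m = (rng.random(dim) < p).astype(int)
    # Eliminate the all-zero case: at least one dimension is always updated.
    if not m.any():
        m[rng.integers(dim)] = 1
    return m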
Step 5: execute the boundary limiting method (not mandatory). The boundary limiting method in the differential search algorithm is derived from the differential evolution algorithm. Candidate values beyond the search range are often generated during optimization; for these invalid candidate values, this step replaces the candidate value that exceeds the search range by random reassignment, and the value generated by the boundary limiting replaces the candidate value produced by this round of iteration. If the candidate solution does not exceed the search range, step 6 is executed directly;
Step 6: judge whether the candidate value generated by this round of iteration is better than the existing global optimum. If so, the candidate value replaces the global optimum; otherwise it is not replaced. In the present invention, the criterion for judging whether a candidate value is better is whether the candidate value is closer to the global optimum of the test function than the global optimum currently held by the algorithm;
Step 7: repeat steps 3-6 until the iteration termination target of step 3 is met, finally output the global optimum at that time, stop the timer, and obtain the running time of the algorithm.
Beneficial effects of the present invention: the differential search algorithm with the improved optimization strategy improves the optimization precision of the original differential search algorithm and at the same time speeds up the search. It compensates for defects of the original algorithm such as easily falling into local extrema, premature convergence and iteration stagnation, and can find the optimal value of the parameter to be optimized more quickly and more accurately.
Description of the drawings
Fig. 1 is the overall flowchart of the differential search algorithm with the improved optimization strategy of the present invention;
Fig. 2 is the curve of the Ackley function used in the present invention for the two-dimensional problem;
Fig. 3 is the curve of the Griewank function used in the present invention for the two-dimensional problem;
Fig. 4 is the curve of the Rastrigin function used in the present invention for the two-dimensional problem;
Fig. 5 is the curve of the Rosenbrock function used in the present invention for the two-dimensional problem;
Fig. 6 is the curve of the Sphere function used in the present invention for the two-dimensional problem;
Fig. 7 is the curve of the Zakharov function used in the present invention for the two-dimensional problem.
Specific embodiment
Embodiment 1: as shown in Fig. 1, the present invention generates an initial position according to the search range set for the algorithm. This position is called the habitat, and its value is taken as the global optimum. Then, the algorithm iterates and updates continuously according to the improved method of the present invention: the candidate value generated after each iteration is compared with the global optimum, the better solution is selected as the global optimum, and the updates are repeated until the iteration termination condition is met. During this process, if a candidate value generated in an iteration exceeds the search range set for the algorithm, the boundary limiting method is executed.
The specific steps of the differential search algorithm with the improved optimization strategy are as follows:
Step 1: set the initial parameters: set the search range (the global optimum generated by the algorithm must not exceed this range), set the number of individuals in the population and the dimension of the optimization problem, set the number of iterations or the termination condition, set the optimization mode, and start the timing program. In the present invention, for all unconstrained optimization test functions, the search range of the differential search algorithm with the improved optimization strategy and of the original differential search algorithm (DSA) is uniformly set to [-10, 10], the number of individuals is i = 50, the dimension of the optimization problem is d = 3, and the number of iterations is G = 100. A test function is then selected; for example, the Sphere function is chosen as the test function. Its curve for the two-dimensional problem is shown in Fig. 6, and its expression is:
F(x) = Σ_{i=1}^{d} x_i²
where F(x) is the required solution of the function, d is the dimension of the target problem, and i is the index of the optimizing particle (variable) in the algorithm. For this test function, when x = (0, 0, 0, ..., 0)^T, the global optimal solution min F(x) = 0 is obtained.
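For illustration, assuming a Python/NumPy implementation, this parameter setup and the Sphere objective could be written as follows (the variable names are illustrative, not mandated by the invention):

import numpy as np

low, up = -10.0, 10.0      # common search range [-10, 10]
S = 50                     # number of individuals in the population (i = 50)
D = 3                      # dimension of the optimization problem (d = 3)
G = 100                    # number of iterations

def sphere(x):
    # Sphere test function: F(x) = x_1^2 + ... + x_d^2, global minimum 0 at the origin
    return np.sum(x ** 2)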
Step 2: the population of the differential search algorithm with the improved optimization strategy randomly generates an initial position within the search range. This position is called the habitat, and its value is taken as the global optimum. Because this optimal value is generated at random, it is almost never the final solution. Specifically:
pop_mn = rand × (up_n − low_n) + low_n
where pop_mn is the n-th dimension of the m-th individual of the population, m ∈ {1, 2, ..., S} with S the total number of individuals, and n ∈ {1, 2, ..., D} with D the dimension of the optimization problem. The function rand generates a pseudo-random number between 0 and 1, and up_n and low_n denote the upper bound and the lower bound of the preset search range, respectively.
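Under the same illustrative assumptions, this initialization can be sketched as below; taking the best initial individual as the starting global optimum is one straightforward reading of this step:

import numpy as np

rng = np.random.default_rng()

def init_population(S, D, low, up):
    # pop_mn = rand*(up_n - low_n) + low_n for every individual m and dimension n
    return rng.random((S, D)) * (up - low) + low

objective = lambda x: np.sum(x ** 2)   # Sphere, as configured in step 1
pop = init_population(50, 3, -10.0, 10.0)
fitness = np.array([objective(ind) for ind in pop])
best = int(np.argmin(fitness))
global_best, global_best_value = pop[best].copy(), fitness[best]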
Step 3: judge whether the optimization process meets the iteration termination condition or has reached the iteration termination target; if not, carry out the next round of iteration;
Step 4: the differential search algorithm with the improved optimization strategy evolves the population positions randomly according to Brownian motion. The population position of the next evolution is called the stopover, and from the stopover the candidate value generated by this round of iteration is obtained. Specifically:
Stopover = Pos + R × map × (Dir − Pos)
where Stopover is the stopover position, representing the current position of the virtual superorganism; R is a random value set for the movement, used to simulate Brownian random motion; map is a selector composed of 0s and 1s with one entry per problem dimension, where 0 means that the corresponding dimension is not searched in this round of iteration and 1 means the opposite; Pos is the position of the randomly selected individuals that migrate; and Dir − Pos indicates the direction in which the superorganism migrates.
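A minimal sketch of this stopover update, assuming NumPy; obtaining Dir by randomly permuting the population and drawing a single R and a single map per iteration are illustrative simplifications, not requirements of the invention:

import numpy as np

rng = np.random.default_rng()

def stopover_step(pop, p=0.5):
    S, D = pop.shape
    # Dir: donor positions obtained here by randomly permuting the population
    Dir = pop[rng.permutation(S)]
    # Improved scaling factor R = 2*rand
    R = 2 * rng.random()
    # 0/1 dimension selector; at least one dimension stays active
    dim_map = (rng.random(D) < p).astype(int)
    if not dim_map.any():
        dim_map[rng.integers(D)] = 1
    # Stopover = Pos + R * map * (Dir - Pos)
    return pop + R * dim_map * (Dir - pop)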
Step 5: judge whether the candidate value generated by step 4 exceeds the preset search range. If so, execute the boundary limiting method and replace the candidate value produced by this round of iteration with the value generated by the boundary limiting; if not, continue to step 6. Specifically:
Site_mn = rand × (up_n − low_n) + low_n
where Site_mn is the n-th dimension of the m-th virtual individual, m ∈ {1, 2, ..., S} with S the total number of individuals, and n ∈ {1, 2, ..., D} with D the dimension of the optimization problem. When Site_mn < low_n or Site_mn > up_n, a random position is assigned to Site_mn according to the above formula. The function rand generates a pseudo-random number between 0 and 1, and up_n and low_n denote the upper bound and the lower bound of the preset search range, respectively.
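A sketch of this boundary limiting step under the same illustrative assumptions:

import numpy as np

rng = np.random.default_rng()

def apply_boundary_limit(site, low, up):
    # Replace every component that has left the search range [low, up]
    # with a fresh random position: rand*(up - low) + low
    site = site.copy()
    bad = (site < low) | (site > up)
    site[bad] = rng.random(np.count_nonzero(bad)) * (up - low) + low
    return site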
Step 6: judge whether the candidate value generated by this round of iteration is better than the existing global optimum. If so, the candidate value replaces the global optimum; otherwise it is not replaced;
Step 7: repeat steps 3-6 until the iteration termination target of step 3 is met, finally output the global optimum at that time, stop the timer, and obtain the running time of the algorithm.
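Putting steps 1-7 together, the following is a compact end-to-end sketch under the same illustrative assumptions (Python/NumPy, Sphere as the objective, per-individual greedy replacement as one possible selection scheme):

import time
import numpy as np

rng = np.random.default_rng()

def improved_dsa(objective, low, up, S=50, D=3, G=100):
    # Steps 1-2: initialize the population inside the search range and take the
    # best initial individual as the starting global optimum.
    pop = rng.random((S, D)) * (up - low) + low
    fit = np.array([objective(x) for x in pop])
    best = int(np.argmin(fit))
    g_best, g_val = pop[best].copy(), fit[best]

    start = time.time()                       # timing program of step 1
    for _ in range(G):                        # step 3: stop after G iterations
        # Step 4: Stopover = Pos + R*map*(Dir - Pos) with R = 2*rand and a
        # per-individual 0/1 dimension selector that is never all zero.
        Dir = pop[rng.permutation(S)]
        R = 2 * rng.random()
        dim_map = (rng.random((S, D)) < 0.5).astype(int)
        empty = ~dim_map.any(axis=1)
        dim_map[empty, rng.integers(D, size=int(empty.sum()))] = 1
        site = pop + R * dim_map * (Dir - pop)

        # Step 5: boundary limiting by random reassignment
        bad = (site < low) | (site > up)
        site[bad] = rng.random(np.count_nonzero(bad)) * (up - low) + low

        # Step 6: keep the better candidates and update the global optimum
        site_fit = np.array([objective(x) for x in site])
        better = site_fit < fit
        pop[better], fit[better] = site[better], site_fit[better]
        best = int(np.argmin(fit))
        if fit[best] < g_val:
            g_best, g_val = pop[best].copy(), fit[best]

    # Step 7: output the global optimum and the running time
    return g_best, g_val, time.time() - start

best_x, best_f, elapsed = improved_dsa(lambda x: np.sum(x ** 2), -10.0, 10.0)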
Embodiment 2: as shown in Figs. 1-7, the technical solution of the present invention is described in further detail with reference to the drawings and to specific implementation cases. The present invention, the differential search algorithm with the improved optimization strategy, can obtain the optimal solution or a feasible solution for unconstrained optimization problems or constrained global optimization problems. Several examples of unconstrained optimization problems are given below to illustrate how to use the differential search algorithm with the improved optimization strategy of the present invention. Seven typical function extremum problems, namely Ackley, Griewank, Rastrigin, Rosenbrock, Sphere, Zakharov and Hartmann, are selected to test and compare the differential search algorithm with the improved optimization strategy of the present invention against the original differential search algorithm (DSA).
(1) Ackley function:
The Ackley function is a continuous test function obtained by superimposing a moderately amplified cosine on an exponential function, and it belongs to the functions with many local extrema. Its characteristic is an almost flat region modulated by a cosine wave that forms holes and peaks one by one, making the surface uneven. Ackley pointed out that searching this function is quite complicated, because a strictly local optimization algorithm inevitably falls into the trap of a local optimum during hill climbing, whereas scanning a larger neighbourhood can cross the interfering valleys and gradually reach a better optimum. When optimized within the restricted search region, the function reaches its global minimum value 0 when x_i = 0.
(2) Griewank function:
This function belongs to the functions with many local extrema, and the number of local extrema is related to the dimension of the problem. It is a typical nonlinear multimodal function with a wide search space, and is generally considered a complex multimodal problem that is hard for optimization algorithms to handle. When optimized within the restricted search region, the function reaches its global minimum 0 when x_i = 0.
(3) Rastrigin function:
This function belongs to the functions with many local extrema. It is a multi-peak function with a large number of local minima and maxima, and is a typical nonlinear multimodal function. Its peaks rise and fall in a jumping pattern, which is deceptive for intelligent algorithms and makes them easily fall into local extrema. When optimized within the restricted search region, the function reaches its global minimum 0 when x_i = 0.
(4) Rosenbrock function:
This function belongs to the valley-type functions and is also known as the banana-shaped unimodal function. It is a non-convex, ill-conditioned function with interacting effects between correlated variables, which prevents many optimization algorithms from accurately finding the global optimum. When optimized within the restricted search region, the function reaches its global minimum 0 when x_i = 1.
(5) Sphere function:
This function belongs to the bowl-shaped functions and is a simple unimodal function with no local extreme points; it is mainly used to examine the precise search ability of intelligent algorithms near the extreme point. When optimized within the restricted search region, the function reaches its global minimum 0 when x_i = 0.
(6) Zakharov function:
This function belongs to the smooth functions and is a simple unimodal function with no local minima other than the global minimum. When optimized within the restricted search region, the function reaches its global minimum 0 when x_i = 0.
(7) Hartmann function:
The Hartmann function can be used for 3-dimensional or 6-dimensional optimization; the present invention uses the 3-dimensional problem with the preset coefficients of the function. When optimized within the restricted search region, the function reaches its global minimum -3.86278 when x = (0.114614, 0.555649, 0.852547).
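For reference, six of these benchmarks are given below in their commonly cited textbook forms (a Python/NumPy sketch, not expressions taken from the patent text; the Hartmann function is omitted because its coefficient matrices are not reproduced here):

import numpy as np

def ackley(x):
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def rosenbrock(x):
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def sphere(x):
    return np.sum(x ** 2)

def zakharov(x):
    s = np.sum(0.5 * np.arange(1, x.size + 1) * x)
    return np.sum(x ** 2) + s ** 2 + s ** 4

# Each of these reaches its stated minimum at the point given in the text,
# e.g. rosenbrock(np.ones(3)) == 0 and sphere(np.zeros(3)) == 0.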
Test results: following the steps of embodiment 1, the 7 functions of embodiment 2 are used in the tests. Each function is tested in 50 independent simulation runs, the obtained data are collected, and the fitness values (best, worst, average, time) of the method of the present invention and of the original method are finally obtained, as shown in Table 1:
Table 1: comparison of fitness data between the method of the present invention and the original differential search algorithm
It can be seen from Table 1 that, in the tests on the 7 different functions, the fitness values (best, worst, average, time) obtained by the algorithm of the invention show a better optimization effect than the original differential search algorithm and are numerically closer to the theoretical extrema of the functions; in tests such as the Ackley function, the computation speed is also better than that of the original algorithm. The performance tests on the 7 function equations in embodiment 2 demonstrate that the optimization performance of the differential search algorithm with the improved optimization strategy of the invention is greatly improved compared with the original differential search algorithm.
Although the experimental part and the drawings above describe the present invention to a certain extent, the present invention is not limited to the specific embodiments described above; the description above is illustrative, not restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can also make many variations of the algorithm of the invention without departing from the spirit of the invention, and these all fall within the scope of protection of the present invention.