How to handle constraints using evolutionary algorithms in an optimisation problem?
I want to solve a problem whose objective function has a large number of constraints, using evolutionary algorithms. How can constraints be handled in an evolutionary algorithm so that the solution is guaranteed to be feasible?
When I apply PSO in MATLAB, the initial positions are generated randomly, so they do not satisfy the constraints and I do not get a feasible solution. How do I write code for this type of problem?
Answers (2)
Walter Roberson
on 1 May 2019
Use optimoptions() to create an options structure, and configure the InitialSwarmMatrix to a matrix of values all of which are known to be within constraints.
Alternatively, supply a CreationFcn that generates initial values that satisfy the constraints.
If you do not know any locations that satisfy the constraints, then you will have to find some.
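For example, a minimal sketch of seeding the swarm with feasible points might look like the following. Here myObj and myNonlcon are placeholders for your own objective and constraint functions (myNonlcon returning [c, ceq] in the fmincon convention), and the bounds are only illustrative; finding a feasible point by minimising a constant objective with fmincon is one possible approach, not the only one.

% Sketch: find one feasible point, then seed particleswarm around it.
nvars = 12;
lb = zeros(1, nvars);
ub = 18*ones(1, nvars);
x0 = (lb + ub)/2;
% Constant objective: fmincon only has to satisfy the constraints.
xfeas = fmincon(@(x) 0, x0, [], [], [], [], lb, ub, @myNonlcon);
% Build a small swarm around the feasible point and pass it in.
swarm = repmat(xfeas, 20, 1) + 0.01*randn(20, nvars);
swarm = min(max(swarm, lb), ub);   % keep the perturbed points inside the bounds
opts = optimoptions('particleswarm', 'InitialSwarmMatrix', swarm);
x = particleswarm(@myObj, nvars, lb, ub, opts);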
particleswarm() itself does not handle any constraints other than ub and lb. If you are using some other implementation of PSO, then your method of proceeding will depend upon the facilities it has available.
If you are programming your own PSO or evolutionary algorithm, then finding positions that fit the constraints is part of the hard work. If you only have linear inequalities and linear equalities, then there are several approaches, including dividing the space into simplexes and considering each one in turn. If you have non-linear constraints, then mechanically finding a position that fits the constraints can be very very difficult.
As an example of the difficulty, suppose I were to put in a constraint that the input must be between the first two odd perfect numbers. No-one knows if odd perfect numbers exist at all, but no-one has been able to prove that they cannot exist. Therefore, nonlinear constraints can involve solving very difficult problems, and so it is not possible to mechanically process nonlinear constraints to find feasible regions.
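If you stay with particleswarm(), one common workaround (not a built-in feature) is to fold constraint violation into the objective as a penalty term. A rough sketch, again with placeholder names (myObj, myNonlcon) and an arbitrarily chosen penalty weight:

function f = penalisedObj(x)
% Penalise constraint violation so that particleswarm, which only supports
% lb and ub, is pushed away from infeasible points.
% myNonlcon follows the fmincon convention: feasible when c <= 0 and ceq == 0.
[c, ceq] = myNonlcon(x);
violation = sum(max(c, 0)) + sum(abs(ceq));
f = myObj(x) + 1e6*violation;   % weight must dwarf the range of myObj
end

You would then call particleswarm(@penalisedObj, nvars, lb, ub). The penalty weight has to be large relative to the range of the raw objective, otherwise the optimizer may prefer an infeasible point whose raw objective value is far lower.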
2 Comments
Walter Roberson
on 1 May 2019
Are there upper and lower bounds to that? My tests suggest that the output is indefinitely negative without upper / lower bounds.
... Which would probably be a problem. A penalty of 500 would be trivial compared to reducing the objective by 10^20.
I notice that you arrange that if constt <= 0 then you add the penalty. That is the opposite sense from how constraints work for the MathWorks optimizers such as fmincon: for those, a negative or 0 value returned from nonlcon indicates that the constraint is satisfied, and a positive value indicates a violation.
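To illustrate, a constraint function in that convention might look like this (exampleNonlcon and the expression inside it are made up purely for illustration):

function [c, ceq] = exampleNonlcon(x)
% A requirement written as g(x) >= 0 must be returned as c = -g(x),
% because the solver treats c <= 0 as satisfied and c > 0 as violated.
g = x(1) + x(2) - 3;   % suppose the requirement is x(1) + x(2) >= 3
c = -g;
ceq = [];              % no equality constraints in this sketch
end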
Prachi Agrawal
on 2 May 2019
4 Comments
Walter Roberson
on 3 May 2019
At the moment the best configuration I have found so far is
[7.57121647487799621, 17.9980244782672685, 3.40357628523824005e-06, 0.000346812760876816684, 2.58617508908583638, 0.00109019729179134998, 10.7226234776916076, 17.9998511375832599, 6.45476744697441379e-05, 0.124110625428277194, 13.715683702506432, 11.8050282221007592]
The values near 18 probably correspond to variables that would improve further if allowed past the upper bound of 18, and the values near 1e-5 probably correspond to variables that would improve below the lower bound of 0.
The uncertainty on all of the entries is within 0.03, with the first entry currently having the widest error bound.
I have not checked that all of the conditions are met: the function result is about 277, but it is possible that a condition is violated and that the unpenalized function value is negative enough that the net after penalization is still less than 500.