ENH: Add constrained optimization by kernc · Pull Request #971 · scikit-optimize/scikit-optimize · GitHub

This repository was archived by the owner on Feb 28, 2024. It is now read-only.

ENH: Add constrained optimization #971

Open · wants to merge 9 commits into master

Conversation

kernc
Copy link
Contributor
@kernc kernc commented Nov 17, 2020

Fixes #355
Fixes #700
Fixes #770
Fixes #108
Fixes #977
Probably fixes #730
Probably fixes #371
Probably fixes #457
Closes #836

A proposal for constrained optimization. An alternative to #836 that you might find less invasive and, hopefully, far better designed!

This allows, e.g., seeking only 2D solutions within the unit circle:

def constraint(params):
    x, y = params
    return x**2 + y**2 <= 1

# or

@skopt.utils.use_named_args(dimensions)
def constraint(x, y):
    return x**2 + y**2 <= 1

...

Optimizer(..., space_constraint=constraint)
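
Put together, a minimal end-to-end sketch with the ask/tell interface (the objective here is just an illustrative stand-in):

import skopt
from skopt.space import Real

dimensions = [Real(-2, 2, name='x'), Real(-2, 2, name='y')]

def constraint(params):
    x, y = params
    return x**2 + y**2 <= 1  # accept only points inside the unit circle

def objective(params):
    x, y = params
    return (x - 0.3)**2 + (y - 0.4)**2

opt = skopt.Optimizer(dimensions, space_constraint=constraint)
for _ in range(20):
    x = opt.ask()              # candidates are redrawn until the constraint holds
    opt.tell(x, objective(x))
print(min(opt.yi))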

The issues with:

def objective(params):
    if invalid(params):
        return np.finfo(float).max  # a huge finite stand-in for np.inf
    return ...

are that the surrogate model is then fit on these artificial extreme values, which skews its predictions near the feasible boundary, and that evaluations are still wasted on points already known to be invalid.

Will tidy up and add tests once this gets the 🆗. Apparently, this is the number-one missing feature here, so feedback is strongly appreciated!

@kernc kernc changed the title ENH: Add space constraint ENH: Add constrained optimization Nov 17, 2020
@kernc kernc marked this pull request as ready for review November 18, 2020 22:50
@schmoelder
Copy link

I have a question regarding the upcoming implementation of constrained optimization.

If I understood correctly, it will simply use a callable to check whether a new individual is within the constraints and keep on drawing until that function returns True. Is that correct?

Or will it also include a way of efficiently sampling higher-dimensional polytopes (which might be necessary depending on the problem at hand)?

Anyway, looking forward to the release. :)

@kernc
Copy link
Contributor Author
kernc commented Dec 11, 2020

simply use a callable to check whether a new individual is within the constraints and keep on drawing until that function returns True. Is that correct?

With this implementation, as well as with the alternative abomination in #836, that's correct. That implementation also features a termination condition. 🤔 Added in 1e4110b.
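
For the record, the rejection loop amounts to roughly the following sketch (sample_one here is a hypothetical stand-in for whatever draws a single candidate from the unconstrained space, not this PR's actual internals):

import numpy as np

def sample_constrained(sample_one, constraint, max_tries=10_000):
    # Keep drawing candidates until one satisfies the constraint.
    for _ in range(max_tries):
        x = sample_one()
        if constraint(x):
            return x
    raise RuntimeError("no feasible point found within max_tries")

# E.g. uniform 2D candidates, accepted only inside the unit circle:
rng = np.random.default_rng(0)
point = sample_constrained(
    lambda: rng.uniform(-1, 1, size=2),
    lambda p: p[0]**2 + p[1]**2 <= 1)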

efficiently sampling higher-dimensional polytopes

How would one go about that given an arbitrary constraint function?

@schmoelder
Copy link

I'm no expert on this topic either, but some colleagues of mine have published something which might be interesting: https://academic.oup.com/bioinformatics/advance-article-abstract/doi/10.1093/bioinformatics/btaa872/5921168

This approach should work for linearly constrained problems. Not sure if it's applicable to arbitrary/nonlinear constraints, though.
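
Not part of this PR, but for the linearly constrained case {x : A x <= b} that such papers address, the classic hit-and-run sampler is short enough to sketch here (assumes the polytope is bounded and x0 is strictly feasible):

import numpy as np

def hit_and_run(A, b, x0, n_samples, seed=None):
    # Approximately uniform samples from the polytope {x : A @ x <= b}.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)
        # Along x + t*d, feasibility requires A @ x + t * (A @ d) <= b.
        Ad, slack = A @ d, b - A @ x
        t = np.divide(slack, Ad, out=np.full_like(slack, np.inf),
                      where=np.abs(Ad) > 1e-12)
        t_hi = t[Ad > 1e-12].min()    # tightest upper bound on t
        t_lo = t[Ad < -1e-12].max()   # tightest lower bound on t
        x = x + rng.uniform(t_lo, t_hi) * d
        samples.append(x.copy())
    return np.array(samples)

# Example: the triangle {x >= 0, y >= 0, x + y <= 1}.
A = np.array([[-1., 0.], [0., -1.], [1., 1.]])
b = np.array([0., 0., 1.])
pts = hit_and_run(A, b, x0=[0.25, 0.25], n_samples=1000, seed=0)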

@kernc kernc mentioned this pull request Jan 22, 2021
@psmgeelen
Copy link

Maybe this is a silly question, but when can we expect the merge?

@DanyBIM
Copy link
DanyBIM commented Jul 19, 2021

Hello!
Thank you for this update, this is definitely what I was looking for!
Just one comment, since this code did not work immediately for me: it seems that the copy() method of the Optimizer object does not copy the constraint!
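
Until that is fixed, a possible workaround is to rebuild the optimizer instead of copying it, re-passing the constraint explicitly and replaying the observed points (a sketch, assuming opt is the existing optimizer and dimensions/constraint are the objects it was built from):

# Workaround sketch: rebuild instead of copy, re-passing the constraint.
opt2 = skopt.Optimizer(dimensions, space_constraint=constraint)
opt2.tell(opt.Xi, opt.yi)  # replay the points already evaluated by opt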

@00sapo
Copy link
00sapo commented Oct 1, 2021

Hello, will this branch be merged sooner or later?

@00sapo
Copy link
00sapo commented Oct 1, 2021

For me, this PR is not working in "grid" mode:

import numpy as np
import skopt
from skopt.space import Real
import matplotlib.pyplot as plt

space = [
    Real(1, 10, name='a'),
    Real(1, 10, name='b'),
]


@skopt.utils.use_named_args(space)
def objective(**hyperparams):
    print("--------------------")
    print("Testing hyperparams:")
    print(hyperparams)
    return np.random.rand()


@skopt.utils.use_named_args(space)
def constraint(**hyperparams):
    return hyperparams['b'] < hyperparams['a']


res = skopt.dummy_minimize(
    objective,
    dimensions=space,
    space_constraint=constraint,
    initial_point_generator='grid',
    n_calls=100)

x_iter = np.array(res['x_iters'])

plt.scatter(x_iter[:, 0], x_iter[:, 1])
plt.show()

I got:

--------------------
Testing hyperparams:
{'a': 8.363636363636363, 'b': 9.181818181818182}
Traceback (most recent call last):
  File "/home/sapo/Develop/test_skopt/test.py", line 25, in <module>
    res = skopt.dummy_minimize(
  File "/home/sapo/Develop/test_skopt/.venv/lib/python3.9/site-packages/skopt/optimizer/dummy.py", line 119, in dummy_minimize
    return base_minimize(func, dimensions, base_estimator="dummy",
  File "/home/sapo/Develop/test_skopt/.venv/lib/python3.9/site-packages/skopt/optimizer/base.py", line 304, in base_minimize
    result = optimizer.tell(next_x, next_y)
  File "/home/sapo/Develop/test_skopt/.venv/lib/python3.9/site-packages/skopt/optimizer/optimizer.py", line 496, in tell
    check_x_in_space(x, self.space)
  File "/home/sapo/Develop/test_skopt/.venv/lib/python3.9/site-packages/skopt/utils.py", line 195, in check_x_in_space
    raise ValueError("Point (%s) is not within the bounds of"
ValueError: Point ([8.363636363636363, 9.181818181818182]) is not within the bounds of the space ([(1, 10), (1, 10)]).

@qup20
Copy link
qup20 commented Oct 9, 2022

Hello, when will this PR be merged into the master branch?
Thank you!

@purbeshmitra
Copy link

Is there any chance that this will be added to the master branch?

@SaeednHT
Copy link
SaeednHT commented Aug 8, 2023

For me, this PR is not working in "grid" mode: […]

ValueError: Point ([8.363636363636363, 9.181818181818182]) is not within the bounds of the space ([(1, 10), (1, 10)]).

You can use initial_point_generator="grid_modified" with constrained skopt, which is available at the following link:
https://github.com/SaeednHT/scikit-optimize-constrained
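
Based on that description, 00sapo's failing example would presumably become (with the fork installed in place of skopt; untested here):

res = skopt.dummy_minimize(
    objective,
    dimensions=space,
    space_constraint=constraint,
    initial_point_generator='grid_modified',  # provided by the fork above
    n_calls=100)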

@AdamCoxson
Copy link
AdamCoxson commented Dec 19, 2023

@SaeednHT

Brilliant. skopt_modcn was exactly what I needed to constrain my neural network configurations :) Thank you!

I had to install:

pip install -i https://test.pypi.org/simple/ skopt-modcn==0.0.1
conda install -c conda-forge pydoe

For anyone who wants to see how I used it:

from skopt import gp_minimize
from skopt.space import Categorical, Integer, Real
from skopt.utils import use_named_args

bounds = [                               # A quick set for debug
    Categorical([50, 80, 100, 120, 140, 150, 180, 200], name='h1'),
    Categorical([10, 20, 50, 80, 100, 120, 140, 150, 180, 200], name='h2'),
    Categorical([0, 50, 80, 100, 120, 140, 150, 180, 200], name='h3'),
    Categorical([0, 50, 80, 100, 120, 140, 150, 180, 200], name='h4'),
    Categorical([0, 50, 80, 100, 120, 140, 150, 180, 200], name='h5'),
    Integer(5, 20, name='epochs'),
    Categorical([50, 100, 150, 200, 300, 400], name='batch'),
    Real(0.00001, 0.02, "log-uniform", name='lr')]

@use_named_args(bounds)
def neural_net_bayesian_eval(h1, h2, h3, h4, h5, epochs, batch, lr):
    # Some neural network training producing a validation error
    return error

@use_named_args(bounds)
def layer_constraint(h1, h2, h3, h4, h5, epochs, batch, lr):
    # Neuron count must never increase with increasing layer number.
    neurons = [h1, h2, h3, h4, h5]
    for i in range(len(neurons) - 1):
        if neurons[i] < neurons[i + 1]:
            return False
    return True

gp_output = gp_minimize(neural_net_bayesian_eval,
                        dimensions=bounds,
                        space_constraint=layer_constraint,  # valid neuron config
                        acq_func="gp_hedge",
                        n_calls=num_evaluations,
                        n_initial_points=random_vals,
                        random_state=1234,
                        n_jobs=4,
                        verbose=True)

@SaeednHT
Copy link

Brilliant. skopt_modcn was exactly what I needed to constrain my neural network configurations :) Thank you! […]

I am glad to know this could help. Thank you for providing an example.
