Is there a way to restrict the parameter space differently at each iteration? · Issue #730 · scikit-optimize/scikit-optimize · GitHub
This repository was archived by the owner on Feb 28, 2024. It is now read-only.

Is there a way to restrict the parameter space differently at each iteration? #730

Open

michaelyli opened this issue Nov 7, 2018 · 6 comments · May be fixed by #971

Comments

@michaelyli
Copy link

I'd like to only compute acquisition function values for a subset of the parameters within the total parameter space. Is there an easy way for me to do this? In other words, could I restrict the parameter space considered to some input set of parameters?

@iaroslav-ai
Copy link
Member

You can always create a new instance of Optimizer, or of the surrogate directly, from scratch with your observations, dropping or adding whichever dimensions are necessary for your purpose. This restricts your estimator / acquisition function to work only with the parameters you require. Feel free to reopen if this does not address your issue.
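The suggestion above can be sketched with scikit-learn's GaussianProcessRegressor standing in for the surrogate (skopt's default GP surrogate is built on the same class); the observation array `X`, the objective values `y`, and the index `dropped_dim` are all hypothetical, for illustration only:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical past observations over a 3-D parameter space.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 3))
y = np.sin(X[:, 0] * 3) + X[:, 1] ** 2  # observed objective values

# Drop dimension 2 and refit a surrogate on the reduced space.
dropped_dim = 2
X_reduced = np.delete(X, dropped_dim, axis=1)

surrogate = GaussianProcessRegressor().fit(X_reduced, y)

# The new surrogate only accepts (and models) the remaining 2 dimensions,
# so any acquisition function built on it is restricted to that subspace.
mean, std = surrogate.predict(X_reduced[:5], return_std=True)
print(X_reduced.shape, mean.shape)
```

As the next comment points out, refitting on a reduced design matrix does mean the model is rebuilt from scratch, so interactions involving the dropped dimension are no longer represented.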

@sqbl
Copy link
sqbl commented Apr 21, 2019

First off: @iaroslav-ai, you really rock tonight! I've never seen as many updates in one night.

But, on this question: setting up a new instance of the Optimizer with one of the previous dimensions left out (to mimic that the parameter in that dimension is fixed at a set-point) - would this not remodel everything, and "forget" potential cross-interactions between the remaining dimensions?
I do not have the coding skills to do this, but my intuition would be to go along the lines of building something like "expected_minimum" with full flexibility in all but one dimension, and then calculating an acquisition function over it. Of course, for this to work, the "expected_minimum look-alike" would need to come from a Bayesian process that returns a standard deviation.
Could this work?

@iaroslav-ai
Copy link
Member

Ah, I see. I understood that some dimensions were to be dropped; but I guess you just want to fix some, and calculate the acquisition function over the rest of the parameters? I think I had a code snippet that does something similar, I'll take a look. But I remember it was not straightforward, at least.

@iaroslav-ai iaroslav-ai reopened this Apr 21, 2019
@iaroslav-ai
Copy link
Member

Thanks for a positive comment btw :)

@iaroslav-ai
Copy link
Member
iaroslav-ai commented Apr 21, 2019

Hm, the best I can come up with is #325 and this corresponding snippet. What this does, though, is allow a fixed "context" vector, which is held constant during optimization of the acquisition function. This is done in the context of transfer learning. Maybe something for you to look at, but it is not exactly fixing arbitrary dimensions of the acquisition function / surrogate, only the values in the context vector.

I guess if I were in your position, I would look at the latest surrogate that Optimizer produces and use it directly. You could define a function of your own that takes a subset of dimensions and appends fixed values for the rest. You could use minimize from scipy in gradient-free mode, to avoid calculating gradients.

Beyond that, I do not have anything better at the moment :/
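The approach described above (query the latest surrogate directly, append fixed values for some dimensions, minimize the rest gradient-free with scipy) can be sketched roughly like this. A scikit-learn GP stands in for the skopt surrogate here, and the split into `free_dims` / `fixed_dims` and all the data are assumptions for illustration; in a real skopt session the fitted surrogate would come from `Optimizer.models`:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical surrogate fitted on past (x, y) observations.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(30, 3))
y = (X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2 + 0.1 * X[:, 2]
surrogate = GaussianProcessRegressor().fit(X, y)

free_dims = [0, 1]     # dimensions we still optimize over
fixed_dims = [2]       # dimensions held at a set-point
fixed_values = [0.5]

def surrogate_mean(x_free):
    """Rebuild a full input vector from the free part, then query the model."""
    x_full = np.empty(3)
    x_full[free_dims] = x_free
    x_full[fixed_dims] = fixed_values
    return surrogate.predict(x_full.reshape(1, -1))[0]

# Gradient-free minimization over the free dimensions only.
result = minimize(surrogate_mean, x0=[0.5, 0.5], method="Nelder-Mead")
print(result.x)  # approximate minimizer in the free subspace
```

This minimizes only the surrogate's posterior mean; to optimize a proper acquisition function instead, the same wrapper would call `surrogate.predict(..., return_std=True)` and combine mean and standard deviation (e.g. into an expected-improvement score) before returning.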

@nxorable
Copy link
nxorable commented Sep 8, 2020

Would a simple API update be to allow the caller to constrain the space of expected_minimum_random_sampling to a subspace on a subsequent call?

@kernc kernc linked a pull request Nov 17, 2020 that will close this issue