MNT: fix plot_evaluation when skipping dimension, add test by QuentinSoubeyran · Pull Request #1066 · scikit-optimize/scikit-optimize · GitHub
This repository was archived by the owner on Feb 28, 2024. It is now read-only.

Open
wants to merge 4 commits into master
Conversation

QuentinSoubeyran
Contributor

Fixes #1056:

  • Fixed incorrect indexing when a constant dimension is ignored
  • Added a test covering that case
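The skipped-dimension scenario can be sketched as follows (hypothetical data and variable names, not the actual skopt internals): plotting helpers leave out dimensions that only ever take a single value, so positions in the plotted list stop matching the original dimension indices.

```python
# A sketch of how a constant dimension ends up skipped (hypothetical data).
samples = [
    ["a", 700, 0.10],
    ["a", 1300, 0.25],   # first dimension never varies
    ["a", 2000, 0.15],
]
n_dims = len(samples[0])

# A dimension is "constant" if every sample has the same value for it.
is_constant = [len({row[i] for row in samples}) == 1 for i in range(n_dims)]

# Only non-constant dimensions get a subplot; note the original indices
# of the remaining dimensions no longer start at 0.
plot_dims = [i for i in range(n_dims) if not is_constant[i]]
print(plot_dims)  # [1, 2]
```

Any per-dimension metadata indexed by original position must therefore be looked up through these surviving indices, which is exactly where the bug lived.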

@pep8speaks
pep8speaks commented Sep 30, 2021

Hello @QuentinSoubeyran! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-10-13 15:22:17 UTC

@Abdelgha-4

Hello, for some reason this doesn't work with HistGradientBoostingClassifier. Here is code that raises ValueError: too many values to unpack (expected 2):

from skopt import BayesSearchCV
from skopt.plots import plot_evaluations, plot_histogram, plot_objective, plot_convergence, plot_regret
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.dummy import DummyClassifier
from sklearn.experimental import enable_hist_gradient_boosting  # noqa: F401, required for scikit-learn < 1.0
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

search_spaces = [
    {'clf': [HistGradientBoostingClassifier(random_state=42)],
     'clf__max_iter': [700, 1300, 2000],
     'clf__learning_rate': (1e-2, 0.3, 'uniform'),
     'clf__max_depth': [15, 50, 90, None]}
]
pipe = Pipeline(
    steps=[('scaler', StandardScaler()), 
           ('clf', DummyClassifier())])
searchcv = BayesSearchCV(
    pipe,
    search_spaces=search_spaces,
    n_iter=10,
    cv=5,
    scoring='f1_macro'
)

searchcv.fit(X, y)
plot_evaluations(searchcv.optimizer_results_[0])

Here is a Colab notebook that reproduces the error.

@QuentinSoubeyran
Contributor Author

@Abdelgha-4 generally speaking, when your classifier doesn't vary, you should define it in the pipeline instead of using DummyClassifier, and leave it out of the hyper-parameter space.
I'll look into the second error.


@QuentinSoubeyran
Contributor Author

I found the problem: it was the same as before, incorrect indexing that breaks when some dimensions are skipped (iscat[j] instead of iscat[index]).
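The mismatch can be illustrated with a small sketch (hypothetical data, following the iscat naming above; this is not the skopt source). The categorical flags are stored per original dimension index, so once a constant dimension is skipped, the loop position j and the original index diverge and iscat[j] reads the wrong flag.

```python
# Per-ORIGINAL-dimension flag: is the dimension categorical?
iscat = [True, False, False, True]

# Dimension 0 was constant, so only these original indices are plotted.
plot_dims = [1, 2, 3]

# Buggy: loop position j is used to index iscat, reading flags that
# belong to the wrong dimensions once j != index.
buggy = [iscat[j] for j, index in enumerate(plot_dims)]

# Fixed: the original dimension index is used, so each plotted
# dimension gets its own flag.
fixed = [iscat[index] for j, index in enumerate(plot_dims)]

print(buggy)  # [True, False, False] -- flags of dimensions 0, 1, 2
print(fixed)  # [False, False, True] -- flags of dimensions 1, 2, 3
```

As soon as at least one dimension is skipped, the two indexings disagree, which is why the bug only surfaced with constant dimensions in the search space.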

@QuentinSoubeyran QuentinSoubeyran changed the title [WIP] fix plot_evaluation when skipping dimension, add test [MRG] fix plot_evaluation when skipping dimension, add test Oct 1, 2021
@Abdelgha-4

generally speaking, when your classifier doesn't vary, you should define it in the pipeline instead of using DummyClassifier, and leave it out of the hyper-parameter space.

Yes, I'm sorry, this was only because I copied from my code, where I use multiple classifiers.
Thank you for the fix and the test branch.

@QuentinSoubeyran QuentinSoubeyran changed the title [MRG] fix plot_evaluation when skipping dimension, add test MNT: fix plot_evaluation when skipping dimension, add test Oct 19, 2021
Development

Successfully merging this pull request may close these issues.

plot_evaluations() incompatible with BayesSearchCV
3 participants