optuna
Release 3.5.0.dev
Optuna Contributors.
1 Key Features
2 Basic Concepts
3 Communication
4 Contribution
5 License
6 Reference
6.1 Installation
6.2 Tutorial
6.3 API Reference
6.4 FAQ
Index
Optuna Documentation, Release 3.5.0.dev
Optuna is an automatic hyperparameter optimization software framework, particularly designed for machine learning.
It features an imperative, define-by-run style user API. Thanks to this define-by-run API, code written with Optuna
enjoys high modularity, and users can dynamically construct the search spaces for their hyperparameters.
CHAPTER ONE
KEY FEATURES
CHAPTER TWO
BASIC CONCEPTS
import sklearn.datasets
import sklearn.model_selection

X, y = sklearn.datasets.fetch_california_housing(return_X_y=True)
X_train, X_val, y_train, y_val = sklearn.model_selection.train_test_split(
    X, y, random_state=0
)
# regressor_obj is a scikit-learn regressor built from the trial's suggested
# hyperparameters.
regressor_obj.fit(X_train, y_train)
y_pred = regressor_obj.predict(X_val)
CHAPTER THREE
COMMUNICATION
CHAPTER FOUR
CONTRIBUTION
Any contributions to Optuna are welcome! When you send a pull request, please follow the contribution guide.
CHAPTER FIVE
LICENSE
CHAPTER SIX
REFERENCE
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In KDD (arXiv).
6.1 Installation
You can also install the development version of Optuna from the master branch of the Git repository:
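The usual pip-from-Git invocation looks like this (a sketch; check the official installation page for the exact command):

```shell
$ pip install git+https://github.com/optuna/optuna.git
```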
6.2 Tutorial
If you are new to Optuna or want a general introduction, we highly recommend the video below.
6.2.2 Recipes
Showcases recipes that may help you use Optuna more comfortably.
• 20_recipes/001_rdb
• 20_recipes/002_multi_objective
• 20_recipes/003_attributes
• 20_recipes/004_cli
• 20_recipes/005_user_defined_sampler
• 20_recipes/006_user_defined_pruner
• 20_recipes/007_optuna_callback
• 20_recipes/008_specify_params
• 20_recipes/009_ask_and_tell
• 20_recipes/010_reuse_best_trial
• 20_recipes/011_journal_storage
• Human-in-the-loop Optimization with Optuna Dashboard
• 20_recipes/012_artifact_tutorial
6.3.1 optuna
The optuna module is primarily used as an alias for basic Optuna functionality coded in other modules. Currently,
two modules are aliased: (1) from optuna.study, functions regarding the Study lifecycle, and (2) from
optuna.exceptions, the TrialPruned exception raised when a trial is pruned.
optuna.create_study
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 10)
    return x**2


study = optuna.create_study()
study.optimize(objective, n_trials=3)
Parameters
• storage (str | storages.BaseStorage | None) – Database URL. If this argument is set to None, in-memory storage is used, and the Study will not be persistent.
Note: When a database URL is passed, Optuna internally uses SQLAlchemy to handle the database. Please refer to SQLAlchemy's documentation for further details. If you want to specify non-default options for the SQLAlchemy Engine, you can instantiate RDBStorage with your desired options and pass it to the storage argument instead of a URL.
Note: If neither direction nor directions is specified, the direction of the study is set to "minimize".
Returns
A Study object.
Return type
Study
See also:
optuna.create_study() is an alias of optuna.study.create_study().
See also:
The rdb tutorial provides concrete examples to save and resume optimization using RDB.
optuna.load_study
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 10)
    return x**2
Parameters
• study_name (str | None) – Study's name. Each study has a unique name as an identifier. If None, checks whether the storage contains a single study, and if so loads that study. study_name is required if there are multiple studies in the storage.
• storage (str | storages.BaseStorage) – Database URL such as sqlite:///example.db. Please see also the documentation of create_study() for further details.
• sampler (samplers.BaseSampler | None) – A sampler object that implements a background algorithm for value suggestion. If None is specified, TPESampler is used as the default. See also samplers.
• pruner (pruners.BasePruner | None) – A pruner object that decides early stopping of unpromising trials. If None is specified, MedianPruner is used as the default. See also pruners.
Return type
Study
See also:
optuna.load_study() is an alias of optuna.study.load_study().
optuna.delete_study
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study(study_name="example-study", storage="sqlite:///example.db")
study.optimize(objective, n_trials=3)

optuna.delete_study(study_name="example-study", storage="sqlite:///example.db")
Parameters
• study_name (str) – Study’s name.
• storage (str | BaseStorage) – Database URL such as sqlite:///example.db.
Please see also the documentation of create_study() for further details.
Return type
None
See also:
optuna.delete_study() is an alias of optuna.study.delete_study().
optuna.copy_study
Note: copy_study() copies a study even while its optimization is still running, which means the copied
study may contain trials that are not yet finished.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study(
    study_name="example-study",
    storage="sqlite:///example.db",
)
study.optimize(objective, n_trials=3)

optuna.copy_study(
    from_study_name="example-study",
    from_storage="sqlite:///example.db",
    to_storage="sqlite:///example_copy.db",
)

study = optuna.load_study(
    study_name=None,
    storage="sqlite:///example_copy.db",
)
Parameters
• from_study_name (str) – Name of the study to be copied.
• from_storage (str | BaseStorage) – Source database URL such as sqlite:///example.db. Please see also the documentation of create_study() for further details.
• to_storage (str | BaseStorage) – Destination database URL.
• to_study_name (str | None) – Name of the created study. If omitted, from_study_name is used.
Raises
DuplicatedStudyError – If a study with a conflicting name already exists in the destination
storage.
Return type
None
optuna.get_all_study_names
optuna.get_all_study_names(storage)
Get all study names stored in a specified storage.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study(study_name="example-study", storage="sqlite:///example.db")
study.optimize(objective, n_trials=3)

study_names = optuna.study.get_all_study_names(storage="sqlite:///example.db")
assert len(study_names) == 1
Parameters
storage (str | BaseStorage) – Database URL such as sqlite:///example.db. Please
see also the documentation of create_study() for further details.
Returns
List of all study names in the storage.
Return type
list[str]
See also:
optuna.get_all_study_names() is an alias of optuna.study.get_all_study_names().
optuna.get_all_study_summaries
optuna.get_all_study_summaries(storage, include_best_trial=True)
Get the history of all studies stored in a specified storage.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study(study_name="example-study", storage="sqlite:///example.db")
study.optimize(objective, n_trials=3)

study_summaries = optuna.study.get_all_study_summaries(storage="sqlite:///example.db")
assert len(study_summaries) == 1

study_summary = study_summaries[0]
assert study_summary.study_name == "example-study"
Parameters
• storage (str | BaseStorage) – Database URL such as sqlite:///example.db.
Please see also the documentation of create_study() for further details.
• include_best_trial (bool) – Include the best trial, if it exists. This potentially increases the number of queries and may make fetching summaries slower, depending on the storage.
Returns
List of study history summarized as StudySummary objects.
Return type
list[StudySummary]
See also:
optuna.get_all_study_summaries() is an alias of optuna.study.get_all_study_summaries().
optuna.TrialPruned
exception optuna.TrialPruned
Exception for pruned trials.
This error tells a trainer that the current Trial was pruned. It is supposed to be raised after optuna.trial.Trial.should_prune() as shown in the following example.
See also:
optuna.TrialPruned is an alias of optuna.exceptions.TrialPruned.
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

import optuna

X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)


def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)
    n_train_iter = 100

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step)

        if trial.should_prune():
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
add_note()
Exception.add_note(note) – add a note to the exception
6.3.2 optuna.artifacts
The artifacts module provides a way to manage artifacts (output files) in Optuna.
optuna.artifacts.FileSystemArtifactStore
class optuna.artifacts.FileSystemArtifactStore(base_path)
An artifact store for file systems.
Parameters
base_path (str | Path) – The base path to a directory to store artifacts.
Example
import os

import optuna
from optuna.artifacts import FileSystemArtifactStore
from optuna.artifacts import upload_artifact

base_path = "./artifacts"
os.makedirs(base_path, exist_ok=True)
artifact_store = FileSystemArtifactStore(base_path=base_path)
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Methods
open_reader(artifact_id)
remove(artifact_id)
write(artifact_id, content_body)
optuna.artifacts.Boto3ArtifactStore
Example
import optuna
from optuna.artifacts import upload_artifact
from optuna.artifacts import Boto3ArtifactStore
artifact_store = Boto3ArtifactStore("my-bucket")
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Methods
open_reader(artifact_id)
remove(artifact_id)
write(artifact_id, content_body)
optuna.artifacts.GCSArtifactStore
Example
import optuna
from optuna.artifacts import GCSArtifactStore, upload_artifact
artifact_backend = GCSArtifactStore("my-bucket")
Before running this code, you will have to install gcloud and run gcloud auth application-default login
so that the Cloud Storage library can automatically find the credentials.
Note: Added in v3.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.4.0.
Methods
open_reader(artifact_id)
remove(artifact_id)
write(artifact_id, content_body)
optuna.artifacts.Backoff
Example
import optuna
from optuna.artifacts import upload_artifact
from optuna.artifacts import Boto3ArtifactStore
from optuna.artifacts import Backoff
artifact_store = Backoff(Boto3ArtifactStore("my-bucket"))
Methods
open_reader(artifact_id)
remove(artifact_id)
write(artifact_id, content_body)
Parameters
• backend (ArtifactStore) –
• max_retries (int) –
• multiplier (float) –
• min_delay (float) –
• max_delay (float) –
optuna.artifacts.upload_artifact
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
6.3.3 optuna.cli
$ optuna --help
See also:
The cli tutorial provides use-cases with examples.
6.3.4 optuna.distributions
The distributions module defines various classes representing probability distributions, mainly used to suggest
initial hyperparameter values for an optimization trial. Distribution classes inherit from a library-internal
BaseDistribution, and are initialized with specific parameters, such as the low and high endpoints for an
IntDistribution.
Optuna users should not use distribution classes directly, but should instead use utility functions provided by Trial, such as
suggest_int().
optuna.distributions.FloatDistribution
Note: When step is not None, if the range [low, high] is not divisible by step, high will be replaced with the maximum of k × step + low < high, where k is an integer.
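The adjustment in the note above can be sketched numerically (adjusted_high is a hypothetical helper for illustration, not part of Optuna's API):

```python
import math


def adjusted_high(low: float, high: float, step: float) -> float:
    # Largest value of the form k * step + low that does not exceed high.
    k = math.floor((high - low) / step)
    return k * step + low


print(adjusted_high(0.0, 10.0, 3.0))  # 9.0
print(adjusted_high(0.0, 9.0, 3.0))   # 9.0 (range divisible by step: unchanged)
```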
Parameters
• low (float) –
• high (float) –
• log (bool) –
• step (None | float) –
low
Lower endpoint of the range of the distribution. low is included in the range. low must be less than or
equal to high. If log is True, low must be larger than 0.
high
Upper endpoint of the range of the distribution. high is included in the range. high must be greater than
or equal to low.
log
If log is True, this distribution is in log-scaled domain. In this case, all parameters enqueued to the
distribution must be positive values. This parameter must be False when the parameter step is not None.
step
A discretization step. step must be larger than 0. This parameter must be None when the parameter log
is True.
Methods
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
Any
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (float) – Optuna's external representation of a parameter value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.IntDistribution
Note: When step is not None, if the range [low, high] is not divisible by step, high will be replaced with the maximum of k × step + low < high, where k is an integer.
Parameters
• low (int) –
• high (int) –
• log (bool) –
• step (int) –
low
Lower endpoint of the range of the distribution. low is included in the range. low must be less than or
equal to high. If log is True, low must be larger than or equal to 1.
high
Upper endpoint of the range of the distribution. high is included in the range. high must be greater than
or equal to low.
log
If log is True, this distribution is in log-scaled domain. In this case, all parameters enqueued to the
distribution must be positive values. This parameter must be False when the parameter step is not 1.
step
A discretization step. step must be a positive integer. This parameter must be 1 when the parameter log
is True.
Methods
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
int
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (int) – Optuna’s external representation of a parameter
value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.UniformDistribution
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Use FloatDistribution instead.
Methods
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
Any
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (float) – Optuna's external representation of a parameter value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.LogUniformDistribution
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Use FloatDistribution instead.
Methods
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
Any
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (float) – Optuna's external representation of a parameter value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.DiscreteUniformDistribution
Note: If the range [low, high] is not divisible by q, high will be replaced with the maximum of k × q + low < high, where k is an integer.
Parameters
• low (float) – Lower endpoint of the range of the distribution. low is included in the range.
low must be less than or equal to high.
• high (float) – Upper endpoint of the range of the distribution. high is included in the
range. high must be greater than or equal to low.
• q (float) – A discretization step. q must be larger than 0.
low
Lower endpoint of the range of the distribution. low is included in the range.
high
Upper endpoint of the range of the distribution. high is included in the range.
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Use FloatDistribution instead.
Methods
Attributes
q Discretization step.
property q: float
Discretization step.
DiscreteUniformDistribution is a subtype of FloatDistribution. This property is a proxy for its
step attribute.
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
Any
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (float) – Optuna's external representation of a parameter value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.IntUniformDistribution
Note: If the range [low, high] is not divisible by step, high will be replaced with the maximum of k × step + low < high, where k is an integer.
Parameters
• low (int) –
• high (int) –
• step (int) –
low
Lower endpoint of the range of the distribution. low is included in the range. low must be less than or
equal to high.
high
Upper endpoint of the range of the distribution. high is included in the range. high must be greater than
or equal to low.
step
A discretization step. step must be a positive integer.
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Use IntDistribution instead.
Methods
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
int
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (int) – Optuna’s external representation of a parameter
value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.IntLogUniformDistribution
Warning: Deprecated in v2.0.0. The step argument will be removed in the future. The removal of this feature is currently scheduled for v4.0.0, but this schedule is subject to change.
Samplers and other components in Optuna relying on this distribution will ignore this value and assume that step is always 1. User-defined samplers may continue to use values other than 1 during the deprecation period.
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Use IntDistribution instead.
Methods
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
int
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (int) – Optuna’s external representation of a parameter
value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.CategoricalDistribution
class optuna.distributions.CategoricalDistribution(choices)
A categorical distribution.
This object is instantiated by suggest_categorical(), and passed to samplers in general.
Parameters
choices (Sequence[None | bool | int | float | str]) – Parameter value candidates. choices must contain at least one element.
Note: Not all types are guaranteed to be compatible with all storages. It is recommended to restrict the types
of the choices to None, bool, int, float and str.
choices
Parameter value candidates.
Methods
single()
Test whether the range of this distribution contains just a single value.
Returns
True if the range of this distribution contains just a single value, otherwise False.
Return type
bool
to_external_repr(param_value_in_internal_repr)
Convert internal representation of a parameter value into external representation.
Parameters
param_value_in_internal_repr (float) – Optuna's internal representation of a parameter value.
Returns
Optuna’s external representation of a parameter value.
Return type
None | bool | int | float | str
to_internal_repr(param_value_in_external_repr)
Convert external representation of a parameter value into internal representation.
Parameters
param_value_in_external_repr (None | bool | int | float | str) – Optuna’s
external representation of a parameter value.
Returns
Optuna’s internal representation of a parameter value.
Return type
float
optuna.distributions.distribution_to_json
optuna.distributions.distribution_to_json(dist)
Serialize a distribution to JSON format.
Parameters
dist (BaseDistribution) – A distribution to be serialized.
Returns
A JSON string of a given distribution.
Return type
str
optuna.distributions.json_to_distribution
optuna.distributions.json_to_distribution(json_str)
Deserialize a distribution in JSON format.
Parameters
json_str (str) – A JSON-serialized distribution.
Returns
A deserialized distribution.
Return type
BaseDistribution
optuna.distributions.check_distribution_compatibility
optuna.distributions.check_distribution_compatibility(dist_old, dist_new)
A function to check the compatibility of two distributions.
It checks whether dist_old and dist_new are the same kind of distribution. If dist_old is
CategoricalDistribution, it further checks that choices are the same between dist_old and dist_new. Note
that this method is not supposed to be called by library users.
Parameters
• dist_old (BaseDistribution) – A distribution previously recorded in storage.
• dist_new (BaseDistribution) – A distribution newly added to storage.
Return type
None
6.3.5 optuna.exceptions
The exceptions module defines Optuna-specific exceptions deriving from a base OptunaError class. Of special
importance for library users is the TrialPruned exception, to be raised if optuna.trial.Trial.should_prune()
returns True for a trial that should be pruned.
optuna.exceptions.OptunaError
exception optuna.exceptions.OptunaError
Base class for Optuna specific errors.
add_note()
Exception.add_note(note) – add a note to the exception
optuna.exceptions.TrialPruned
exception optuna.exceptions.TrialPruned
Exception for pruned trials.
This error tells a trainer that the current Trial was pruned. It is supposed to be raised after optuna.trial.Trial.should_prune() as shown in the following example.
See also:
optuna.TrialPruned is an alias of optuna.exceptions.TrialPruned.
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

import optuna

X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)


def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)
    n_train_iter = 100

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step)

        if trial.should_prune():
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
add_note()
Exception.add_note(note) – add a note to the exception
optuna.exceptions.CLIUsageError
exception optuna.exceptions.CLIUsageError
Exception for CLI.
CLI raises this exception when it receives invalid configuration.
add_note()
Exception.add_note(note) – add a note to the exception
optuna.exceptions.StorageInternalError
exception optuna.exceptions.StorageInternalError
Exception for storage operations.
This error is raised when an operation fails in the backend database of the storage.
add_note()
Exception.add_note(note) – add a note to the exception
optuna.exceptions.DuplicatedStudyError
exception optuna.exceptions.DuplicatedStudyError
Exception for a duplicated study name.
This error is raised when a specified study name already exists in the storage.
add_note()
Exception.add_note(note) – add a note to the exception
6.3.6 optuna.importance
The importance module provides functionality for evaluating hyperparameter importances based on completed
trials in a given study. The utility function get_param_importances() takes a Study and an optional evaluator
as two of its inputs. The evaluator must derive from BaseImportanceEvaluator, and is initialized as a
FanovaImportanceEvaluator by default when not passed in. Users implementing custom evaluators should refer
to either FanovaImportanceEvaluator or MeanDecreaseImpurityImportanceEvaluator as a guide, paying
close attention to the format of the return value from the evaluator's evaluate function.
Note: FanovaImportanceEvaluator takes over one minute when given a study that contains 1000+ trials. We published the optuna-fast-fanova library, which is a Cython-accelerated fANOVA implementation. By using it, you can get hyperparameter importances within a few seconds.
optuna.importance.get_param_importances
See also:
See plot_param_importances() to plot importances.
Parameters
• study (Study) – An optimized study.
• evaluator (BaseImportanceEvaluator | None) – An importance evaluator object that specifies which algorithm to base the importance assessment on. Defaults to FanovaImportanceEvaluator.
Note: FanovaImportanceEvaluator takes over one minute when given a study that contains 1000+ trials. We published the optuna-fast-fanova library, which is a Cython-accelerated fANOVA implementation. By using it, you can get hyperparameter importances within a few seconds.
• params (List[str] | None) – A list of names of parameters to assess. If None, all parameters that are present in all of the completed trials are assessed.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to evaluate importances. If it is None and study is being used for single-objective optimization, the objective values are used. target must be specified if study is being used for multi-objective optimization.
Note: Specify this argument if study is being used for multi-objective optimization. For
example, to get the hyperparameter importance of the first objective, use target=lambda
t: t.values[0] for the target parameter.
• normalize (bool) – A boolean option to specify whether the sum of the importance values
should be normalized to 1.0. Defaults to True.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Returns
A dict where the keys are parameter names and the values are assessed importances.
Return type
Dict[str, float]
optuna.importance.FanovaImportanceEvaluator
Note: This class takes over one minute when given a study that contains 1000+ trials. We published the optuna-fast-fanova library, which is a Cython-accelerated fANOVA implementation. By using it, you can get hyperparameter importances within a few seconds.
Note: The performance of fANOVA depends on the prediction performance of the underlying random forest
model. In order to obtain high prediction performance, it is necessary to cover a wide range of the hyperparameter
search space. It is recommended to use an exploration-oriented sampler such as RandomSampler.
Note: For how to cite the original work, please refer to https://automl.github.io/fanova/cite.html.
Parameters
• n_trees (int) – The number of trees in the forest.
• max_depth (int) – The maximum depth of the trees in the forest.
• seed (int | None) – Controls the randomness of the forest. For deterministic behavior,
specify a value other than None.
Methods
See also:
Please refer to get_param_importances() for how a concrete evaluator should implement this method.
Parameters
• study (Study) – An optimized study.
• params (List[str] | None) – A list of names of parameters to assess. If None, all
parameters that are present in all of the completed trials are assessed.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the
value to evaluate importances. If it is None and study is being used for single-objective
optimization, the objective values are used. Can also be used for other trial attributes, such
as the duration, like target=lambda t: t.duration.total_seconds().
Note: Specify this argument if study is being used for multi-objective optimization. For
example, to get the hyperparameter importance of the first objective, use target=lambda
t: t.values[0] for the target parameter.
Returns
A dict where the keys are parameter names and the values are assessed importances.
Return type
Dict[str, float]
optuna.importance.MeanDecreaseImpurityImportanceEvaluator
Note: This evaluator requires the scikit-learn Python package and is based on
sklearn.ensemble.RandomForestClassifier.feature_importances_.
Parameters
• n_trees (int) – Number of trees in the random forest.
• max_depth (int) – The maximum depth of each tree in the random forest.
• seed (int | None) – Seed for the random forest.
Methods
See also:
Please refer to get_param_importances() for how a concrete evaluator should implement this method.
Parameters
• study (Study) – An optimized study.
• params (List[str] | None) – A list of names of parameters to assess. If None, all
parameters that are present in all of the completed trials are assessed.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the
value to evaluate importances. If it is None and study is being used for single-objective
optimization, the objective values are used. Can also be used for other trial attributes, such
as the duration, like target=lambda t: t.duration.total_seconds().
Note: Specify this argument if study is being used for multi-objective optimization. For
example, to get the hyperparameter importance of the first objective, use target=lambda
t: t.values[0] for the target parameter.
Returns
A dict where the keys are parameter names and the values are assessed importances.
Return type
Dict[str, float]
6.3.7 optuna.integration
The integration module contains classes used to integrate Optuna with external machine learning frameworks.
Note: Optuna’s integration modules for third-party libraries have started migrating from Optuna itself to a package
called optuna-integration. Please check the repository and the documentation.
For most of the ML frameworks supported by Optuna, the corresponding Optuna integration class serves only to
implement a callback object and functions, compliant with the framework’s specific callback API, to be called with
each intermediate step in the model training. The functionality implemented in these callbacks across the different
ML frameworks includes:
(1) Reporting intermediate model scores back to the Optuna trial using optuna.trial.Trial.report(),
(2) According to the results of optuna.trial.Trial.should_prune(), pruning the current model by raising
optuna.TrialPruned(), and
(3) Reporting intermediate Optuna data such as the current trial number back to the framework, as done in
MLflowCallback.
For scikit-learn, an integrated OptunaSearchCV estimator is available that combines scikit-learn BaseEstimator
functionality with access to a class-level Study object.
BoTorch
optuna.integration.BoTorchSampler
Note: An instance of this sampler should not be used with different studies when constraints are used. Instead,
a new instance should be created for each new study. The reason for this is that the sampler is stateful, keeping
all the computed constraints.
Parameters
• candidates_func (Callable[[torch.Tensor, torch.Tensor, torch.Tensor
| None, torch.Tensor, torch.Tensor | None], torch.Tensor] | None) –
An optional function that suggests the next candidates. It must take the training data,
the objectives, the constraints, the search space bounds and return the next candidates.
The arguments are of type torch.Tensor. The return value must be a torch.Tensor.
However, if constraints_func is omitted, constraints will be None. For any constraints
that failed to compute, the tensor will contain NaN.
If omitted, it is determined automatically based on the number of objectives and whether a
constraint is specified. If the number of objectives is one and no constraint is specified,
log-Expected Improvement is used. If constraints are specified, quasi MC-based batch Expected
Improvement (qEI) is used. If the number of objectives is either two or three, quasi MC-based
batch Expected Hypervolume Improvement (qEHVI) is used. Otherwise, for a larger number of
objectives, the faster quasi MC-based extended ParEGO (qParEGO) is used.
The function should assume maximization of the objective.
See also:
See optuna.integration.botorch.qei_candidates_func() for an example.
• constraints_func (Callable[[FrozenTrial], Sequence[float]] | None) – An
optional function that computes the objective constraints. It must take a FrozenTrial and
return the constraints. The return value must be a sequence of floats. A value strictly larger
than 0 means that a constraint is violated. A value equal to or smaller than 0 is considered
feasible.
If omitted, no constraints will be passed to candidates_func nor taken into account during
suggestion.
• n_startup_trials (int) – Number of initial trials, that is, the number of trials that resort
to independent sampling.
• consider_running_trials (bool) – If True, the acquisition function takes into consid-
eration running trials whose evaluation has not yet completed. Enabling this option is
expected to improve the performance of parallel optimization.
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for parameters not contained in the search space returned by the
sample_relative() method. It is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: Failed trials are ignored by all built-in samplers when they sample new parameters; thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: Failed trials are ignored by all built-in samplers when they sample new parameters; thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
optuna.integration.botorch.logei_candidates_func
Returns
Next set of candidates. Usually the return value of BoTorch’s optimize_acqf.
Return type
Tensor
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
optuna.integration.botorch.qei_candidates_func
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
optuna.integration.botorch.qnei_candidates_func
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• train_x (Tensor) –
• train_obj (Tensor) –
• train_con (Tensor | None) –
• bounds (Tensor) –
• pending_x (Tensor | None) –
Return type
Tensor
optuna.integration.botorch.qehvi_candidates_func
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• train_x (Tensor) –
• train_obj (Tensor) –
• train_con (Tensor | None) –
• bounds (Tensor) –
• pending_x (Tensor | None) –
Return type
Tensor
optuna.integration.botorch.qnehvi_candidates_func
Note: Added in v3.1.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.1.0.
Parameters
• train_x (Tensor) –
• train_obj (Tensor) –
• train_con (Tensor | None) –
• bounds (Tensor) –
• pending_x (Tensor | None) –
Return type
Tensor
optuna.integration.botorch.qparego_candidates_func
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• train_x (Tensor) –
• train_obj (Tensor) –
• train_con (Tensor | None) –
• bounds (Tensor) –
• pending_x (Tensor | None) –
Return type
Tensor
CatBoost
optuna.integration.CatBoostPruningCallback
Note: This callback cannot be used with CatBoost on GPUs because CatBoost doesn’t support a user-defined
callback for GPU. Please refer to CatBoost issue.
Parameters
• trial (Trial) – A Trial corresponding to the current evaluation of the objective function.
• metric (str) – An evaluation metric for pruning, e.g., Logloss and AUC. Please refer to
CatBoost reference for further details.
• eval_set_index (int | None) – The index of the target validation dataset. If you set only
one eval_set, leave eval_set_index as None. If you set multiple datasets as eval_set,
specify the index of the target dataset via eval_set_index, e.g., 0 or 1 when eval_set
contains two datasets.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Methods
after_iteration(info)
Report an evaluation metric value for Optuna pruning after each CatBoost’s iteration.
This method is called by CatBoost.
Parameters
info (Any) – A SimpleNamespace containing iteraion, validation_name, metric_name
and history of losses. For example SimpleNamespace(iteration=2, metrics={
'learn': {'Logloss': [0.6, 0.5]}, 'validation': {'Logloss': [0.7,
0.6], 'AUC': [0.8, 0.9]} }).
Returns
A boolean value. If False, CatBoost internally stops the optimization with Optuna’s pruning
logic without raising optuna.TrialPruned. Otherwise, the optimization continues.
Return type
bool
check_pruned()
Raise optuna.TrialPruned manually if the CatBoost optimization is pruned.
Return type
None
Dask
optuna.integration.DaskStorage
Parameters
• storage (None | str | BaseStorage) – Optuna storage URL for the underlying Optuna
storage class to wrap (e.g., None for in-memory storage, sqlite:///example.db for
SQLite storage). Defaults to None.
• name (str | None) – Unique identifier for the Dask storage class. Specifying a custom
name can sometimes be useful for logging or debugging. If None is provided, a random
name will be automatically generated.
• client (distributed.Client | None) – Dask Client to connect to. If not provided,
will attempt to find an existing Client.
• register (bool) – Whether or not to register this storage instance with the cluster scheduler.
Most common usage of this storage class will not need to specify this argument. Defaults to
True.
Note: Added in v3.1.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.1.0.
Methods
Attributes
client
check_trial_is_updatable(trial_id, trial_state)
Check whether a trial state is updatable.
Parameters
• trial_id (int) – ID of the trial. Only used for an error message.
• trial_state (TrialState) – Trial state to check.
Raises
RuntimeError – If the trial is already finished.
Return type
None
create_new_study(directions, study_name=None)
Create a new study from a name.
If no name is specified, the storage class generates a name. The returned study ID is unique among all
current and deleted studies.
Parameters
• directions (Sequence[StudyDirection]) – A sequence of directions whose elements
are either MAXIMIZE or MINIMIZE.
• study_name (str | None) – Name of the new study to create.
Returns
ID of the created study.
Raises
optuna.exceptions.DuplicatedStudyError – If a study with the same study_name
already exists.
Return type
int
create_new_trial(study_id, template_trial=None)
Create and add a new trial to a study.
The returned trial ID is unique among all current and deleted trials.
Parameters
• study_id (int) – ID of the study.
• template_trial (FrozenTrial | None) – Template FrozenTrial with default user-
attributes, system-attributes, intermediate-values, and a state.
Returns
ID of the created trial.
Raises
KeyError – If no study with the matching study_id exists.
Return type
int
delete_study(study_id)
Delete a study.
Parameters
study_id (int) – ID of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
get_all_studies()
Read a list of FrozenStudy objects.
Returns
A list of FrozenStudy objects, sorted by study_id.
Return type
List[FrozenStudy]
get_all_trials(study_id, deepcopy=True, states=None)
Read all trials in a study.
Parameters
• study_id (int) – ID of the study.
• deepcopy (bool) – Whether to copy the list of trials before returning. Set to True if you
intend to update the list or elements of the list.
• states (Container[TrialState] | None) – Trial states to filter on. If None, include
all states.
Returns
List of trials in the study, sorted by trial_id.
Raises
KeyError – If no study with the matching study_id exists.
Return type
List[FrozenTrial]
get_base_storage()
Retrieve underlying Optuna storage instance from the scheduler.
This is a convenience method to extract the Optuna storage instance stored on the Dask scheduler process
to the local Python process.
Return type
BaseStorage
get_best_trial(study_id)
Return the trial with the best value in a study.
This method is valid only during single-objective optimization.
Parameters
study_id (int) – ID of the study.
Returns
The trial with the best objective value among all finished trials in the study.
Raises
• KeyError – If no study with the matching study_id exists.
• RuntimeError – If the study has more than one direction.
• ValueError – If no trials have been completed.
Return type
FrozenTrial
get_n_trials(study_id, state=None)
Count the number of trials in a study.
Parameters
• study_id (int) – ID of the study.
• state (Tuple[TrialState, ...] | TrialState | None) – Trial states to filter on.
If None, include all states.
Returns
Number of trials in the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
int
get_study_directions(study_id)
Read whether a study maximizes or minimizes an objective.
Parameters
study_id (int) – ID of a study.
Returns
Optimization directions list of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
List[StudyDirection]
get_study_id_from_name(study_name)
Read the ID of a study.
Parameters
study_name (str) – Name of the study.
Returns
ID of the study.
Raises
KeyError – If no study with the matching study_name exists.
Return type
int
get_study_name_from_id(study_id)
Read the study name of a study.
Parameters
study_id (int) – ID of the study.
Returns
Name of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
str
get_study_system_attrs(study_id)
Read the optuna-internal attributes of a study.
Parameters
study_id (int) – ID of the study.
Returns
Dictionary with the optuna-internal attributes of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
Dict[str, Any]
get_study_user_attrs(study_id)
Read the user-defined attributes of a study.
Parameters
study_id (int) – ID of the study.
Returns
Dictionary with the user attributes of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
Dict[str, Any]
get_trial(trial_id)
Read a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Trial with a matching trial ID.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
FrozenTrial
get_trial_id_from_study_id_trial_number(study_id, trial_number)
Read the trial ID of a trial.
Parameters
• study_id (int) – ID of the study.
• trial_number (int) – Number of the trial.
Returns
ID of the trial.
Raises
KeyError – If no trial with the matching study_id and trial_number exists.
Return type
int
get_trial_number_from_id(trial_id)
Read the trial number of a trial.
Note: The trial number is only unique within a study, and is sequential.
Parameters
trial_id (int) – ID of the trial.
Returns
Number of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
int
get_trial_param(trial_id, param_name)
Read the parameter of a trial.
Parameters
• trial_id (int) – ID of the trial.
• param_name (str) – Name of the parameter.
Returns
Internal representation of the parameter.
Raises
KeyError – If no trial with the matching trial_id exists, or if the trial has no parameter
named param_name.
Return type
float
get_trial_params(trial_id)
Read the parameter dictionary of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary of parameters. Keys are parameter names and values are internal representations
of the parameter values.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
get_trial_system_attrs(trial_id)
Read the optuna-internal attributes of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary with the optuna-internal attributes of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
get_trial_user_attrs(trial_id)
Read the user-defined attributes of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary with the user-defined attributes of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
remove_session()
Clean up all connections to a database.
Return type
None
set_study_system_attr(study_id, key, value)
Register an optuna-internal attribute to a study.
This method overwrites any existing attribute.
Parameters
• study_id (int) – ID of the study.
• key (str) – Attribute key.
• value (Any) – Attribute value. It should be JSON serializable.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
Parameters
• trial_id (int) – ID of the trial.
• state (TrialState) – New state of the trial.
• values (Sequence[float] | None) – Values of the objective function.
Returns
True if the state is successfully updated. False if the state is kept the same. The latter
happens when this method tries to update the state of RUNNING trial to RUNNING.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
bool
set_trial_system_attr(trial_id, key, value)
Set an optuna-internal attribute to a trial.
This method overwrites any existing attribute.
Parameters
• trial_id (int) – ID of the trial.
• key (str) – Attribute key.
• value (Mapping[str, Mapping[str, JSONSerializable] |
Sequence[JSONSerializable] | str | int | float | bool
| None] | Sequence[Mapping[str, JSONSerializable] |
Sequence[JSONSerializable] | str | int | float | bool | None] | str
| int | float | bool | None) – Attribute value. It should be JSON serializable.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
None
set_trial_user_attr(trial_id, key, value)
Set a user-defined attribute to a trial.
This method overwrites any existing attribute.
Parameters
• trial_id (int) – ID of the trial.
• key (str) – Attribute key.
• value (Any) – Attribute value. It should be JSON serializable.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
None
fast.ai
optuna.integration.FastAIV1PruningCallback
See the example if you want to add a pruning callback which monitors validation loss of a Learner.
Example
Parameters
• learn (Learner) – fastai.basic_train.Learner.
• trial (Trial) – A Trial corresponding to the current evaluation of the objective function.
• monitor (str) – An evaluation metric for pruning, e.g. valid_loss and Accuracy. Please
refer to fastai.callbacks.TrackerCallback reference for further details.
Warning: Deprecated in v2.4.0. This feature will be removed in the future. The removal of this feature is
currently scheduled for v4.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/
releases/tag/v2.4.0.
Methods
on_epoch_end(epoch, **kwargs)
optuna.integration.FastAIV2PruningCallback
See the example if you want to add a pruning callback which monitors validation loss of a Learner.
Example
Parameters
• trial (Trial) – A Trial corresponding to the current evaluation of the objective function.
• monitor (str) – An evaluation metric for pruning, e.g. valid_loss or accuracy. Please
refer to fastai.callback.TrackerCallback reference for further details.
Methods
after_epoch()
after_fit()
optuna.integration.FastAIPruningCallback
optuna.integration.FastAIPruningCallback
alias of FastAIV2PruningCallback
LightGBM
optuna.integration.LightGBMPruningCallback
optuna.integration.lightgbm.train
optuna.integration.lightgbm.train(*args, **kwargs)
Wrapper of LightGBM Training API to tune hyperparameters.
It tunes important hyperparameters (e.g., min_child_samples and feature_fraction) in a stepwise manner.
It is a drop-in replacement for lightgbm.train(). See a simple example of LightGBM Tuner which optimizes the
validation log loss of cancer detection.
train() is a wrapper function of LightGBMTuner. To use features of Optuna such as suspended/resumed
optimization and/or parallelization, refer to LightGBMTuner instead of this function.
Arguments and keyword arguments for lightgbm.train() can be passed.
Parameters
• args (Any) –
• kwargs (Any) –
Return type
Any
optuna.integration.lightgbm.LightGBMTuner
• show_progress_bar (bool) – Flag to show progress bars or not. To disable the progress
bar, set this to False.
Note: Progress bars will be fragmented by logging messages of LightGBM and Optuna.
Please suppress such messages to show the progress bars properly.
• optuna_seed (int | None) – Seed of TPESampler for the random number generator that
affects sampling of num_leaves, bagging_fraction, bagging_freq, lambda_l1, and
lambda_l2.
Note: The deterministic parameter of LightGBM makes training reproducible. Please en-
able it when you use this argument.
Methods
compare_validation_metrics(val_score, best_score)
get_best_booster() – Return the best booster.
higher_is_better()
tune_feature_fraction([n_trials])
tune_feature_fraction_stage2([n_trials])
tune_min_data_in_leaf()
tune_num_leaves([n_trials])
tune_regularization_factors([n_trials])
Attributes
Return type
None
optuna.integration.lightgbm.LightGBMTunerCV
• show_progress_bar (bool) – Flag to show progress bars or not. To disable the progress
bar, set this to False.
Note: Progress bars will be fragmented by logging messages of LightGBM and Optuna.
Please suppress such messages to show the progress bars properly.
Note: The deterministic parameter of LightGBM makes training reproducible. Please en-
able it when you use this argument.
Methods
compare_validation_metrics(val_score, best_score)
get_best_booster() – Return the best cvbooster.
higher_is_better()
tune_feature_fraction([n_trials])
tune_feature_fraction_stage2([n_trials])
tune_min_data_in_leaf()
tune_num_leaves([n_trials])
tune_regularization_factors([n_trials])
Attributes
Return type
None
MLflow
optuna.integration.MLflowCallback
Example
import optuna
from optuna.integration.mlflow import MLflowCallback


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


mlflc = MLflowCallback(
    tracking_uri=YOUR_TRACKING_URI,
    metric_name="my metric score",
)

study = optuna.create_study(study_name="my_study")
study.optimize(objective, n_trials=10, callbacks=[mlflc])
Parameters
• tracking_uri (str | None) – The URI of the MLflow tracking server.
Please refer to mlflow.set_tracking_uri for more details.
• metric_name (str | Sequence[str]) – Name assigned to the optimized metric. In the case
of multi-objective optimization, a list of names can be passed. Those names will be assigned
to the metrics in the order returned by the objective function. If a single name is provided, or
this argument is left at its default value, it will be broadcast to each objective with a number
suffix in the order returned by the objective function, e.g., two objectives and the default
metric name will be logged as value_0 and value_1. The number of metrics must be the
same as the number of values the objective function returns.
• create_experiment (bool) – When True, a new MLflow experiment will be created for
each optimization run, named after the Optuna study. Setting this argument to False lets the
user run optimization under an existing experiment, set via mlflow.set_experiment, by passing
experiment_id as one of mlflow_kwargs, or under the default MLflow experiment when no
additional arguments are passed. Note that this argument must be set to False when using
Optuna with this callback within a Databricks Notebook.
• mlflow_kwargs (Dict[str, Any] | None) – Set of arguments passed when initializing
MLflow run. Please refer to MLflow API documentation for more details.
• tag_study_user_attrs (bool) – Flag indicating whether or not to add the study’s user
attrs to the mlflow trial as tags. Please note that when this flag is set, key value pairs in
user_attrs will supersede existing tags.
• tag_trial_user_attrs (bool) – Flag indicating whether or not to add the trial’s user
attrs to the mlflow trial as tags. Please note that when both trial and study user attributes are
logged, the latter will supersede the former in case of a collision.
Note: Added in v1.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v1.4.0.
Methods
track_in_mlflow()
Decorator for using MLflow logging in the objective function.
This decorator enables the extension of MLflow logging provided by the callback.
All information logged in the decorated objective function will be added to the MLflow run for the trial
created by the callback.
Example
import optuna
import mlflow
from optuna.integration.mlflow import MLflowCallback

mlflc = MLflowCallback(
    tracking_uri=YOUR_TRACKING_URI,
    metric_name="my metric score",
)


@mlflc.track_in_mlflow()
def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    mlflow.log_param("power", 2)
    mlflow.log_metric("base of metric", x - 2)
    return (x - 2) ** 2


study = optuna.create_study(study_name="my_other_study")
study.optimize(objective, n_trials=10, callbacks=[mlflc])
Returns
Objective function with tracking to MLflow enabled.
Return type
Callable
Note: Added in v2.9.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.9.0.
optuna.integration.WeightsAndBiasesCallback
Note: User needs to be logged in to Weights & Biases before using this callback in online mode. For more
information, please refer to wandb setup.
Note: Users who want to run multiple Optuna studies within the same process should call wandb.finish()
between subsequent calls to study.optimize(). Calling wandb.finish() is not necessary if you are running
one Optuna study per process.
76 Chapter 6. Reference
Optuna Documentation, Release 3.5.0.dev
Note: To ensure correct trial order in Weights & Biases, this callback should only be used with study.
optimize(n_jobs=1).
Example
import optuna
from optuna.integration.wandb import WeightsAndBiasesCallback

wandbc = WeightsAndBiasesCallback()


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=10, callbacks=[wandbc])


@wandbc.track_in_wandb()
def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=10, callbacks=[wandbc])
Parameters
• metric_name (str | Sequence[str]) – Name assigned to the optimized metric. In the case
of multi-objective optimization, a list of names can be passed; the names are assigned to the
metrics in the order returned by the objective function. If a single name is provided, or the
argument is left at its default value, it is broadcast to each objective with a numeric suffix in
the order returned by the objective function, e.g. two objectives with the default metric name
are logged as value_0 and value_1. The number of metrics must equal the number of values
the objective function returns.
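The naming rule above can be sketched in plain Python. `broadcast_metric_names` below is a hypothetical helper for illustration only, not part of the callback's API:

```python
from typing import Sequence, Union


def broadcast_metric_names(metric_name: Union[str, Sequence[str]], n_objectives: int) -> list:
    """Mimic the rule described above: a single name is broadcast with a
    numeric suffix; a sequence must match the number of objectives."""
    if isinstance(metric_name, str):
        if n_objectives == 1:
            return [metric_name]
        return [f"{metric_name}_{i}" for i in range(n_objectives)]
    if len(metric_name) != n_objectives:
        raise ValueError("Number of metric names must match number of objectives.")
    return list(metric_name)


print(broadcast_metric_names("value", 2))  # ['value_0', 'value_1']
```

For a single-objective study, `broadcast_metric_names("score", 1)` simply returns `["score"]`.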
Note: Added in v2.9.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.9.0.
Methods
track_in_wandb() Decorator for using W&B for logging inside the objective function.
track_in_wandb()
Decorator for using W&B for logging inside the objective function.
The run is initialized with the same wandb_kwargs that are passed to the callback. All the metrics from
inside the objective function will be logged into the same run which stores the parameters for a given trial.
Example
import optuna
from optuna.integration.wandb import WeightsAndBiasesCallback
import wandb

wandbc = WeightsAndBiasesCallback()


@wandbc.track_in_wandb()
def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    wandb.log({"power": 2, "base of metric": x - 2})
    return (x - 2) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=10, callbacks=[wandbc])
Returns
Objective function with W&B tracking enabled.
Return type
Callable
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
pycma
optuna.integration.PyCmaSampler
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -1, 1)
    y = trial.suggest_int("y", -1, 1)
    return x**2 + y


sampler = optuna.integration.PyCmaSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=20)
Note that parallel execution of trials may affect the optimization performance of CMA-ES, especially if the
number of trials running in parallel exceeds the population size.
Note: CmaEsSampler is deprecated and renamed to PyCmaSampler in v2.0.0. Please use PyCmaSampler
instead of CmaEsSampler.
Parameters
• x0 (Dict[str, Any] | None) – A dictionary of initial parameter values for CMA-
ES. By default, the mean of low and high for each distribution is used. Please refer to
cma.CMAEvolutionStrategy for further details of x0.
• sigma0 (float | None) – Initial standard deviation of CMA-ES. By default, sigma0 is set
to min_range / 6, where min_range denotes the minimum range of the distributions in
the search space.
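The sigma0 default described above can be sketched with a small hypothetical helper. The search space here is simplified to (low, high) tuples rather than Optuna distribution objects:

```python
def default_sigma0(search_space: dict) -> float:
    """Sketch of the default: sigma0 = min_range / 6, where min_range is
    the smallest (high - low) range among the distributions."""
    min_range = min(high - low for low, high in search_space.values())
    return min_range / 6


print(default_sigma0({"x": (-1.0, 1.0), "y": (0.0, 6.0)}))  # min range 2.0 -> ~0.333
```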
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
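The idea behind intersection_search_space() can be shown with a stdlib-only sketch: keep only the parameters whose name and distribution occur identically in every trial. This is a simplification for illustration; the real implementation operates on Optuna distribution objects:

```python
def intersect_search_spaces(trial_param_dists: list) -> dict:
    """Keep only parameters that appear, with an identical distribution,
    in every trial's parameter dictionary (simplified sketch)."""
    if not trial_param_dists:
        return {}
    common = dict(trial_param_dists[0])
    for dists in trial_param_dists[1:]:
        common = {k: v for k, v in common.items() if dists.get(k) == v}
    return common


trials = [
    {"x": ("float", -10, 10), "y": ("int", 0, 5)},
    {"x": ("float", -10, 10)},
]
print(intersect_search_spaces(trials))  # {'x': ('float', -10, 10)}
```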
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
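The duplication problem that reseed_rng() guards against can be demonstrated with stdlib random.Random: two generators with identical state suggest identical values until one is reseeded (illustrative only; the seed values here are arbitrary):

```python
import random

# Two "replicated" samplers sharing the same RNG state.
rng_a = random.Random(42)
rng_b = random.Random(42)

# Identical state: both generators propose the same parameter value.
same = rng_a.uniform(-10, 10) == rng_b.uniform(-10, 10)

# Reseeding one generator (what reseed_rng() does per worker) breaks the tie.
rng_b.seed(7)
different = rng_a.uniform(-10, 10) != rng_b.uniform(-10, 10)

print(same, different)  # True True
```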
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by
sample_relative() method. This method is suitable for sampling algorithms that do not use relationship
between parameters such as random sampling and TPE.
Note: Failed trials are ignored by all built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
float
Note: Failed trials are ignored by all built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, float]
optuna.integration.CmaEsSampler
Warning: Deprecated in v2.0.0. This feature will be removed in the future. The removal of this feature is
currently scheduled for v4.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/
releases/tag/v2.0.0.
This class is renamed to PyCmaSampler.
Methods
Parameters
• x0 (Dict[str, Any] | None) –
• sigma0 (float | None) –
• cma_stds (Dict[str, float] | None) –
• seed (int | None) –
• cma_opts (Dict[str, Any] | None) –
• n_startup_trials (int) –
• independent_sampler (BaseSampler | None) –
• warn_independent_sampling (bool) –
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by
sample_relative() method. This method is suitable for sampling algorithms that do not use relationship
between parameters such as random sampling and TPE.
Note: Failed trials are ignored by all built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
float
Note: Failed trials are ignored by all built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
PyTorch
optuna.integration.PyTorchIgnitePruningHandler
optuna.integration.PyTorchLightningPruningCallback
Note: For the distributed data parallel training, the version of PyTorchLightning needs to be higher than or
equal to v1.6.0. In addition, Study should be instantiated with RDB storage.
Note: If you would like to use PyTorchLightningPruningCallback in a distributed training environment, you
need to invoke PyTorchLightningPruningCallback.check_pruned() manually so that TrialPruned is properly
handled.
Methods
on_validation_end(trainer, pl_module)
check_pruned()
Raise optuna.TrialPruned manually if pruned.
Currently, intermediate_values are not properly propagated between processes due to the storage cache.
Therefore, the necessary information is kept in trial_system_attrs when the trial runs in a distributed setting.
Please call this method right after calling pytorch_lightning.Trainer.fit(). If a callback doesn’t
have any backend storage for DDP, this method does nothing.
Return type
None
optuna.integration.TorchDistributedTrial
Note: The methods of TorchDistributedTrial are expected to be called by all workers at once. They invoke
synchronous data transmission to share processing results and synchronize timing.
Note: Added in v2.6.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.6.0.
Methods
report(value, step)
set_system_attr(key, value)
set_user_attr(key, value)
should_prune()
suggest_categorical()
Attributes
datetime_start
distributions
number
params
system_attrs
user_attrs
set_system_attr(key, value)
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.1.0.
Parameters
• key (str) –
• value (Any) –
Return type
None
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float(..., step=...) instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
• q (float) –
Return type
float
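Conceptually, the step argument of suggest_float snaps a sampled value onto a discrete grid of the form low + k * step. A minimal sketch of that quantization, assuming a simplified rule (the hypothetical quantize helper is not part of Optuna's API):

```python
def quantize(value: float, low: float, step: float) -> float:
    """Snap a sampled value onto the low + k * step grid, which is what
    suggest_float(..., step=...) does conceptually (simplified sketch)."""
    k = round((value - low) / step)
    return low + k * step


print(quantize(0.34, low=0.0, step=0.1))  # ~0.3 (up to floating-point error)
```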
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float(..., log=True) instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
Return type
float
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
Return type
float
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.1.0.
scikit-learn
optuna.integration.OptunaSearchCV
• enable_pruning (bool) – If True, pruning is performed in the case where the underlying
estimator supports partial_fit.
• error_score (Number | float | str) – Value to assign to the score if an error
occurs in fitting. If ‘raise’, the error is raised. If numeric, sklearn.exceptions.
FitFailedWarning is raised. This does not affect the refit step, which will always raise
the error.
• max_iter (int) – Maximum number of epochs. This is only used if the underlying estimator
supports partial_fit.
• n_jobs (int | None) – Number of threading-based parallel jobs. None means 1. -1
means the number is set to the CPU count.
Note: n_jobs allows parallelization using threading and may suffer from Python’s
GIL. It is recommended to use process-based parallelization if func is CPU bound.
• n_trials (int) – Number of trials. If None, there is no limitation on the number of trials. If
timeout is also set to None, the study continues to create trials until it receives a termination
signal such as Ctrl+C or SIGTERM. This trades off runtime vs quality of the solution.
• random_state (int | RandomState | None) – Seed of the pseudo random number gen-
erator. If int, this is the seed used by the random number generator. If numpy.random.
RandomState object, this is the random number generator. If None, the global random state
from numpy.random is used.
• refit (bool) – If True, refit the estimator with the best found hyperparameters. The refitted
estimator is made available at the best_estimator_ attribute and permits using predict
directly.
• return_train_score (bool) – If True, training scores will be included. Computing training
scores is used to get insights on how different hyperparameter settings impact the
overfitting/underfitting trade-off. However, computing training scores can be computationally
expensive and is not strictly required to select the hyperparameters that yield the best
generalization performance.
• scoring (Callable[[...], float] | str | None) – String or callable to evaluate the
predictions on the validation data. If None, score on the estimator is used.
• study (Study | None) – Study corresponds to the optimization task. If None, a new study
is created.
• subsample (float | int) – Proportion of samples that are used during hyperparameter
search.
– If int, then draw subsample samples.
– If float, then draw subsample * X.shape[0] samples.
• timeout (float | None) – Time limit in seconds for the search of appropriate models.
If None, the study is executed without time limitation. If n_trials is also set to None,
the study continues to create trials until it receives a termination signal such as Ctrl+C or
SIGTERM. This trades off runtime vs quality of the solution.
• verbose (int) – Verbosity level. The higher, the more messages.
• callbacks (List[Callable[[Study, FrozenTrial], None]] | None) – List of
callback functions that are invoked at the end of each trial. Each function must accept two
parameters with the following types in this order: Study and FrozenTrial.
See also:
See the tutorial of optuna_callback for how to use and implement callback functions.
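The subsample resolution rule from the parameter list above can be sketched in plain Python (`n_subsamples` is a hypothetical helper for illustration, not part of OptunaSearchCV):

```python
def n_subsamples(subsample, n_samples: int) -> int:
    """Resolve the subsample parameter: an int is an absolute sample count,
    a float a fraction of n_samples (i.e. subsample * X.shape[0])."""
    if isinstance(subsample, float):
        return int(subsample * n_samples)
    return min(subsample, n_samples)


print(n_subsamples(0.5, 150), n_subsamples(100, 150))  # 75 100
```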
best_estimator_
Estimator that was chosen by the search. This is present only if refit is set to True.
n_splits_
Number of cross-validation splits.
refit_time_
Time for refitting the best estimator. This is present only if refit is set to True.
sample_indices_
Indices of samples that are used during hyperparameter search.
scorer_
Scorer function.
study_
Actual study.
Examples
import optuna
from sklearn.datasets import load_iris
from sklearn.svm import SVC
clf = SVC(gamma="auto")
param_distributions = {
    "C": optuna.distributions.FloatDistribution(1e-10, 1e10, log=True)
}
optuna_search = optuna.integration.OptunaSearchCV(clf, param_distributions)
X, y = load_iris(return_X_y=True)
optuna_search.fit(X, y)
y_pred = optuna_search.predict(X)
Note: By following the scikit-learn convention for scorers, the direction of optimization is maximize. See
https://scikit-learn.org/stable/modules/model_evaluation.html. For a minimization problem, please multiply
the score by -1.
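The sign convention above can be shown with a one-line sketch: a quantity you want to minimize (such as an error) is exposed to the search as its negation, so a smaller error becomes a larger score:

```python
def as_maximization_score(loss: float) -> float:
    """Scikit-learn scorers are maximized, so a loss to be minimized is
    negated before being handed to the search (illustrative sketch)."""
    return -loss


# A smaller error yields a larger (better) score.
print(as_maximization_score(0.2) > as_maximization_score(0.9))  # True
```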
Note: Added in v0.17.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v0.17.0.
Methods
Attributes
Note: This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g.
used inside a Pipeline. Otherwise it has no effect.
Parameters
• groups (str, True, False, or None, default=sklearn.utils.
metadata_routing.UNCHANGED) – Metadata routing for groups parameter in
fit.
• self (OptunaSearchCV) –
Returns
self – The updated object.
Return type
object
set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have
parameters of the form <component>__<parameter> so that it’s possible to update each component of a
nested object.
Parameters
**params (dict) – Estimator parameters.
Returns
self – Estimator instance.
Return type
estimator instance
property set_user_attr: Callable[[...], None]
Call set_user_attr on the Study.
property transform: Callable[[...], List[List[float]] | ndarray | DataFrame |
spmatrix]
Call transform on the best estimator.
This is available only if the underlying estimator supports transform and refit is set to True.
property trials_: List[FrozenTrial]
All trials in the Study.
property trials_dataframe: Callable[[...], DataFrame]
Call trials_dataframe on the Study.
property user_attrs_: Dict[str, Any]
User attributes in the Study.
scikit-optimize
optuna.integration.SkoptSampler
Note: Added in v2.0.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.0.0.
Note: As the number of trials n increases, each sampling takes longer on a scale of O(n^3). Moreover,
if this flag is True, the number of trials will increase. It is therefore suggested to set this flag to
False when each evaluation of the objective function is relatively fast compared to each sampling, and
to True when each evaluation of the objective function is relatively slow compared to each sampling.
Warning: Deprecated in v3.4.0. This feature will be removed in the future. The removal of this feature is
currently scheduled for v4.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/
releases/tag/v3.4.0.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by
sample_relative() method. This method is suitable for sampling algorithms that do not use relationship
between parameters such as random sampling and TPE.
Note: Failed trials are ignored by all built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
Note: Failed trials are ignored by all built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers’ perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
TensorFlow
optuna.integration.TensorBoardCallback
Note: Added in v2.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.0.0.
XGBoost
optuna.integration.XGBoostPruningCallback
6.3.8 optuna.logging
The logging module implements logging using the Python logging package. Library users may be especially in-
terested in setting verbosity levels using set_verbosity() to one of optuna.logging.CRITICAL (aka optuna.
logging.FATAL), optuna.logging.ERROR, optuna.logging.WARNING (aka optuna.logging.WARN), optuna.
logging.INFO, or optuna.logging.DEBUG.
optuna.logging.get_verbosity Return the current level for Optuna's root logger.
optuna.logging.set_verbosity Set the level for Optuna's root logger.
optuna.logging.disable_default_handler Disable the default handler of Optuna's root logger.
optuna.logging.enable_default_handler Enable the default handler of Optuna's root logger.
optuna.logging.disable_propagation Disable propagation of the library log outputs.
optuna.logging.enable_propagation Enable propagation of the library log outputs.
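Optuna's verbosity constants are aliases of the stdlib logging levels, so level-based filtering behaves like any Python logger. A stdlib-only sketch of the mechanism (no Optuna required; the logger name is arbitrary):

```python
import io
import logging

# Capture log output in a buffer instead of sys.stderr.
buffer = io.StringIO()
logger = logging.getLogger("verbosity_demo")
logger.addHandler(logging.StreamHandler(buffer))
logger.propagate = False

logger.setLevel(logging.WARNING)  # analogous to set_verbosity(optuna.logging.WARNING)
logger.info("suppressed")         # below WARNING: dropped
logger.warning("emitted")         # at WARNING: kept

print(buffer.getvalue())  # "emitted\n"
```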
optuna.logging.get_verbosity
optuna.logging.get_verbosity()
Return the current level for Optuna’s root logger.
Example
import optuna

level = optuna.logging.get_verbosity()
Returns
Logging level, e.g., optuna.logging.DEBUG and optuna.logging.INFO.
Return type
int
optuna.logging.set_verbosity
optuna.logging.set_verbosity(verbosity)
Set the level for Optuna’s root logger.
Example
import optuna

# By setting the logging level to WARNING, the INFO logs are suppressed.
optuna.logging.set_verbosity(optuna.logging.WARNING)
study.optimize(objective, n_trials=10)
Parameters
verbosity (int) – Logging level, e.g., optuna.logging.DEBUG and optuna.logging.
INFO.
Return type
None
optuna.logging.disable_default_handler
optuna.logging.disable_default_handler()
Disable the default handler of Optuna’s root logger.
Example
import optuna

study = optuna.create_study()

# Disable the default handler; subsequent logs are not shown in sys.stderr.
optuna.logging.disable_default_handler()
study.optimize(objective, n_trials=10)
Return type
None
optuna.logging.enable_default_handler
optuna.logging.enable_default_handler()
Enable the default handler of Optuna’s root logger.
Please refer to the example shown in disable_default_handler().
Return type
None
optuna.logging.disable_propagation
optuna.logging.disable_propagation()
Disable propagation of the library log outputs.
Note that log propagation is disabled by default. You only need to use this function to stop log propagation when
you use enable_propagation().
Example
Stop propagating logs to the root logger on the second optimize call.
import optuna
import logging

optuna.logging.disable_default_handler()  # Disable the default handler.
logger = logging.getLogger()
logger.setLevel(logging.INFO)  # Set up the root logger.
logger.addHandler(logging.FileHandler("foo.log", mode="w"))

optuna.logging.enable_propagation()  # Propagate logs to the root logger.
study = optuna.create_study()
logger.info("Logs from first optimize call")  # The logs are saved in the logs file.
study.optimize(objective, n_trials=10)

optuna.logging.disable_propagation()  # Stop propagation to the root logger.
logger.info("Logs from second optimize call")  # This log is not saved.
study.optimize(objective, n_trials=10)

with open("foo.log") as f:
    assert f.readline().startswith("A new study created")
    assert f.readline() == "Logs from first optimize call\n"
    # Check for logs after second optimize call.
    assert f.read().split("Logs from second optimize call\n")[-1] == ""
Return type
None
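The propagation switch that disable_propagation() toggles is the stdlib logging propagate flag. A stdlib-only sketch of the underlying mechanism (logger names here are arbitrary; no Optuna required):

```python
import io
import logging

# A parent logger standing in for the root logger, writing to a buffer.
buffer = io.StringIO()
parent = logging.getLogger("demo_root")
parent.addHandler(logging.StreamHandler(buffer))
parent.setLevel(logging.INFO)
parent.propagate = False

child = logging.getLogger("demo_root.library")
child.setLevel(logging.INFO)

child.propagate = True    # analogous to enable_propagation()
child.info("first")       # reaches the parent's handler

child.propagate = False   # analogous to disable_propagation()
child.info("second")      # no handler receives it

print(buffer.getvalue())  # "first\n"
```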
optuna.logging.enable_propagation
optuna.logging.enable_propagation()
Enable propagation of the library log outputs.
Please disable Optuna’s default handler to prevent double logging if the root logger has been configured.
Example
Propagate all log output to the root logger in order to save it to a file.
import optuna
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)  # Set up the root logger.
logger.addHandler(logging.FileHandler("foo.log", mode="w"))

optuna.logging.enable_propagation()  # Propagate logs to the root logger.
optuna.logging.disable_default_handler()  # Stop showing logs in sys.stderr.

study = optuna.create_study()
logger.info("Start optimization.")
study.optimize(objective, n_trials=10)

with open("foo.log") as f:
    assert f.readline().startswith("A new study created")
    assert f.readline() == "Start optimization.\n"
Return type
None
6.3.9 optuna.pruners
The pruners module defines a BasePruner class characterized by an abstract prune() method, which, for a given trial
and its associated study, returns a boolean value representing whether the trial should be pruned. This determination is
made based on stored intermediate values of the objective function, as previously reported for the trial using optuna.
trial.Trial.report(). The remaining classes in this module represent child classes, inheriting from BasePruner,
which implement different pruning strategies.
See also:
pruning tutorial explains the concept of the pruner classes and a minimal example.
See also:
user_defined_pruner tutorial could be helpful if you want to implement your own pruner classes.
optuna.pruners.BasePruner
class optuna.pruners.BasePruner
Base class for pruners.
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
optuna.pruners.MedianPruner
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)
def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)
    n_train_iter = 100

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step)

        if trial.should_prune():
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)


study = optuna.create_study(
    direction="maximize",
    pruner=optuna.pruners.MedianPruner(
        n_startup_trials=5, n_warmup_steps=30, interval_steps=10
    ),
)
study.optimize(objective, n_trials=20)
Parameters
• n_startup_trials (int) – Pruning is disabled until the given number of trials finish in
the same study.
• n_warmup_steps (int) – Pruning is disabled until the trial exceeds the given number of
steps. Note that this feature assumes that step starts at zero.
• interval_steps (int) – Interval in number of steps between the pruning checks, offset by
the warmup steps. If no value has been reported at the time of a pruning check, that particular
check will be postponed until a value is reported.
• n_min_trials (int) – Minimum number of reported trial results at a step to judge whether
to prune. If the number of reported intermediate values from all trials at the current step
is less than n_min_trials, the trial will not be pruned. This can be used to ensure that a
minimum number of trials are run to completion without being pruned.
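The core rule behind MedianPruner can be sketched without Optuna: prune when the trial's intermediate value is worse than the median of the values other trials reported at the same step. The `should_prune_median` helper below is a simplified illustration that ignores the warmup and interval logic:

```python
import statistics


def should_prune_median(current_value: float, other_values_at_step: list,
                        n_min_trials: int = 1, direction: str = "maximize") -> bool:
    """Prune when the current intermediate value is worse than the median
    of other trials' values at the same step (simplified sketch)."""
    if len(other_values_at_step) < n_min_trials:
        return False  # not enough reported trials to judge against
    median = statistics.median(other_values_at_step)
    if direction == "maximize":
        return current_value < median
    return current_value > median


print(should_prune_median(0.4, [0.6, 0.7, 0.9]))  # True: below the median 0.7
```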
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
prune(study, trial)
Judge whether the trial should be pruned based on the reported values.
Note that this method is not supposed to be called by library users. Instead, optuna.trial.Trial.
report() and optuna.trial.Trial.should_prune() provide user interfaces to implement pruning
mechanism in an objective function.
Parameters
• study (Study) – Study object of the target study.
• trial (FrozenTrial) – FrozenTrial object of the target trial. Take a copy before modifying
this object.
Returns
A boolean value representing whether the trial should be pruned.
Return type
bool
optuna.pruners.NopPruner
class optuna.pruners.NopPruner
Pruner which never prunes trials.
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)


def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)
    n_train_iter = 100

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        if trial.should_prune():
            assert False, "should_prune() should always return False with this pruner."
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)


study = optuna.create_study(
    direction="maximize", pruner=optuna.pruners.NopPruner()
)
study.optimize(objective, n_trials=20)
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
prune(study, trial)
Judge whether the trial should be pruned based on the reported values.
Note that this method is not supposed to be called by library users. Instead, optuna.trial.Trial.
report() and optuna.trial.Trial.should_prune() provide user interfaces to implement a pruning
mechanism in an objective function.
Parameters
• study (Study) – Study object of the target study.
• trial (FrozenTrial) – FrozenTrial object of the target trial. Take a copy before modifying this object.
Returns
A boolean value representing whether the trial should be pruned.
Return type
bool
optuna.pruners.PatientPruner
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)
def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)
    n_train_iter = 100

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step)

        if trial.should_prune():
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)
study = optuna.create_study(
direction="maximize",
pruner=optuna.pruners.PatientPruner(optuna.pruners.MedianPruner(), patience=1),
)
study.optimize(objective, n_trials=20)
Parameters
• wrapped_pruner (BasePruner | None) – Wrapped pruner to perform pruning when
PatientPruner allows a trial to be pruned. If it is None, this pruner is equivalent to early
stopping based on the intermediate values of the individual trial.
• patience (int) – Pruning is disabled until the objective fails to improve for patience
consecutive steps.
• min_delta (float) – Tolerance value used to check whether or not the objective improves. This
value should be non-negative.
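The patience and min_delta rule amounts to an early-stopping check over a trial's recent intermediate values. The following is a simplified sketch of that rule for maximization, not PatientPruner's actual implementation:

```python
# Simplified sketch (maximization): stop when none of the last `patience`
# values improves on the value `patience` steps back by more than min_delta.
def no_improvement(values, patience, min_delta=0.0):
    if len(values) < patience + 1:
        return False  # not enough history to judge yet
    window = values[-(patience + 1):]
    return max(window[1:]) <= window[0] + min_delta


# 0.49 and 0.48 fail to improve on 0.5 for patience=2 consecutive steps.
print(no_improvement([0.1, 0.5, 0.49, 0.48], patience=2))  # True
print(no_improvement([0.1, 0.5, 0.49, 0.6], patience=2))   # False
```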
Note: Added in v2.8.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
prune(study, trial)
Judge whether the trial should be pruned based on the reported values.
Note that this method is not supposed to be called by library users. Instead, optuna.trial.Trial.
report() and optuna.trial.Trial.should_prune() provide user interfaces to implement a pruning
mechanism in an objective function.
Parameters
• study (Study) – Study object of the target study.
• trial (FrozenTrial) – FrozenTrial object of the target trial. Take a copy before modifying this object.
Returns
A boolean value representing whether the trial should be pruned.
Return type
bool
optuna.pruners.PercentilePruner
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)
def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)
    n_train_iter = 100

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step)

        if trial.should_prune():
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)
study = optuna.create_study(
direction="maximize",
pruner=optuna.pruners.PercentilePruner(
25.0, n_startup_trials=5, n_warmup_steps=30, interval_steps=10
),
)
study.optimize(objective, n_trials=20)
Parameters
• percentile (float) – Percentile which must be between 0 and 100 inclusive (e.g., when
given 25.0, the trials in the top 25th percentile are kept).
• n_startup_trials (int) – Pruning is disabled until the given number of trials finish in
the same study.
• n_warmup_steps (int) – Pruning is disabled until the trial exceeds the given number of
steps. Note that this feature assumes that step starts at zero.
• interval_steps (int) – Interval in number of steps between the pruning checks, offset by
the warmup steps. If no value has been reported at the time of a pruning check, that particular
check will be postponed until a value is reported. Value must be at least 1.
• n_min_trials (int) – Minimum number of reported trial results at a step to judge whether
to prune. If the number of reported intermediate values from all trials at the current step
is less than n_min_trials, the trial will not be pruned. This can be used to ensure that a
minimum number of trials are run to completion without being pruned.
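The retention rule implied by percentile can be illustrated with a nearest-rank cutoff. This is a hedged sketch for maximization; the real pruner's percentile computation may interpolate differently:

```python
# Sketch: the value a trial must reach at a step to stay within the top
# `percentile` percent of the other trials' values at that step
# (maximization, nearest-rank cutoff).
def cutoff(values, percentile):
    ranked = sorted(values, reverse=True)
    k = max(1, round(len(ranked) * percentile / 100.0))
    return ranked[k - 1]


# Among 8 trials, percentile=25.0 keeps the top 2 values, so the cutoff
# is the second-best value.
print(cutoff([0.1, 0.9, 0.4, 0.8, 0.3, 0.7, 0.2, 0.6], 25.0))  # 0.8
```

Note that MedianPruner is equivalent to PercentilePruner with percentile=50.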
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
prune(study, trial)
Judge whether the trial should be pruned based on the reported values.
Note that this method is not supposed to be called by library users. Instead, optuna.trial.Trial.
report() and optuna.trial.Trial.should_prune() provide user interfaces to implement a pruning
mechanism in an objective function.
Parameters
• study (Study) – Study object of the target study.
• trial (FrozenTrial) – FrozenTrial object of the target trial. Take a copy before modifying this object.
Returns
A boolean value representing whether the trial should be pruned.
Return type
bool
optuna.pruners.SuccessiveHalvingPruner
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)
def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)
    n_train_iter = 100

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step)

        if trial.should_prune():
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)
study = optuna.create_study(
direction="maximize", pruner=optuna.pruners.SuccessiveHalvingPruner()
)
study.optimize(objective, n_trials=20)
Parameters
• min_resource (str | int) – A parameter for specifying the minimum resource allocated
to a trial (in the paper this parameter is referred to as 𝑟). This parameter defaults to ‘auto’
where the value is determined based on a heuristic that looks at the number of required steps
for the first trial to complete.
A trial is never pruned until it executes min_resource × reduction_factor^min_early_stopping_rate
steps (i.e., the completion point of the first rung). When the trial completes the first rung, it
will be promoted to the next rung only if its value is placed in the top 1/reduction_factor
fraction of all the trials that have already reached that point (otherwise it will be pruned there).
If the trial wins the competition, it runs until the next completion point (i.e., min_resource ×
reduction_factor^(min_early_stopping_rate + rung) steps) and repeats the same procedure.
Note: If the step of the last intermediate value may change with each trial, please manually
specify the minimum possible step to min_resource.
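The rung completion points implied by the formula above can be computed directly. A small worked example, following min_resource × reduction_factor^(min_early_stopping_rate + rung):

```python
# Worked example of the rung completion points described above.
def rung_steps(min_resource, reduction_factor, min_early_stopping_rate, n_rungs):
    return [
        min_resource * reduction_factor ** (min_early_stopping_rate + rung)
        for rung in range(n_rungs)
    ]


# With min_resource=1, reduction_factor=4, min_early_stopping_rate=0,
# promotion decisions happen after 1, 4, 16, and 64 steps.
print(rung_steps(1, 4, 0, 4))  # [1, 4, 16, 64]
```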
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
prune(study, trial)
Judge whether the trial should be pruned based on the reported values.
Note that this method is not supposed to be called by library users. Instead, optuna.trial.Trial.
report() and optuna.trial.Trial.should_prune() provide user interfaces to implement a pruning
mechanism in an objective function.
Parameters
• study (Study) – Study object of the target study.
• trial (FrozenTrial) – FrozenTrial object of the target trial. Take a copy before modifying this object.
Returns
A boolean value representing whether the trial should be pruned.
Return type
bool
optuna.pruners.HyperbandPruner
Note:
• In the Hyperband paper, the counterpart of RandomSampler is used.
• Optuna uses TPESampler by default.
• The benchmark result shows that optuna.pruners.HyperbandPruner supports both samplers.
Note: If you use HyperbandPruner with TPESampler, it’s recommended to consider setting larger n_trials
or timeout to make full use of the characteristics of TPESampler because TPESampler uses some (by default,
10) Trials for its startup.
As Hyperband runs multiple SuccessiveHalvingPruner instances and collects trials based on the current Trial's
bracket ID, each bracket needs to observe more than 10 Trials for TPESampler to adapt its search space.
Thus, for example, if HyperbandPruner has 4 pruners in it, at least 4 × 10 trials are consumed for startup.
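The number of brackets follows the Hyperband paper's s_max + 1 = floor(log_eta(R)) + 1, with R = max_resource / min_resource and eta = reduction_factor. A sketch of that count (Optuna's actual bracket computation may differ in edge cases):

```python
# Sketch of the bracket count from the Hyperband paper: count how many times
# reduction_factor fits multiplicatively between min_resource and max_resource.
def n_brackets(min_resource, max_resource, reduction_factor):
    n = 0
    resource = min_resource
    while resource <= max_resource:
        n += 1
        resource *= reduction_factor
    return n


# min_resource=1, max_resource=81, reduction_factor=3 gives 5 brackets, so
# with TPESampler's default 10 startup trials per bracket, roughly 5 × 10
# trials would be consumed for startup.
print(n_brackets(1, 81, 3))  # 5
```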
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
classes = np.unique(y)
n_train_iter = 100
def objective(trial):
    alpha = trial.suggest_float("alpha", 0.0, 1.0)
    clf = SGDClassifier(alpha=alpha)

    for step in range(n_train_iter):
        clf.partial_fit(X_train, y_train, classes=classes)

        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step)

        if trial.should_prune():
            raise optuna.TrialPruned()

    return clf.score(X_valid, y_valid)
study = optuna.create_study(
direction="maximize",
pruner=optuna.pruners.HyperbandPruner(
min_resource=1, max_resource=n_train_iter, reduction_factor=3
),
)
study.optimize(objective, n_trials=20)
Parameters
• min_resource (int) – A parameter for specifying the minimum resource allocated to a
trial noted as 𝑟 in the paper. A smaller 𝑟 will give a result faster, but a larger 𝑟 will
give a better guarantee of successful judging between configurations. See the details for
SuccessiveHalvingPruner.
• max_resource (str | int) – A parameter for specifying the maximum resource allocated
to a trial. 𝑅 in the paper corresponds to max_resource / min_resource. This value should
match the maximum number of iteration steps (e.g., the number of epochs for neural
networks). When this argument is "auto", the maximum resource is estimated according to
the completed trials. The default value of this argument is "auto".
Note: With “auto”, the maximum resource will be the largest step reported by report()
in the first, or one of the first if trained in parallel, completed trial. No trials will be pruned
until the maximum resource is determined.
Note: If the step of the last intermediate value may change with each trial, please manually
specify the maximum possible step to max_resource.
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
prune(study, trial)
Judge whether the trial should be pruned based on the reported values.
Note that this method is not supposed to be called by library users. Instead, optuna.trial.Trial.
report() and optuna.trial.Trial.should_prune() provide user interfaces to implement a pruning
mechanism in an objective function.
Parameters
• study (Study) – Study object of the target study.
• trial (FrozenTrial) – FrozenTrial object of the target trial. Take a copy before modifying this object.
Returns
A boolean value representing whether the trial should be pruned.
Return type
bool
optuna.pruners.ThresholdPruner
Example
from optuna import create_study
from optuna import TrialPruned
from optuna.pruners import ThresholdPruner

ys_for_upper = [0.0, 0.1, 0.2, 0.5, 1.2]
ys_for_lower = [100.0, 90.0, 0.1, 0.0, -1.0]


def objective_for_upper(trial):
    for step, y in enumerate(ys_for_upper):
        trial.report(y, step)

        if trial.should_prune():
            raise TrialPruned()
    return ys_for_upper[-1]


def objective_for_lower(trial):
    for step, y in enumerate(ys_for_lower):
        trial.report(y, step)

        if trial.should_prune():
            raise TrialPruned()
    return ys_for_lower[-1]


study = create_study(pruner=ThresholdPruner(upper=1.0))
study.optimize(objective_for_upper, n_trials=10)

study = create_study(pruner=ThresholdPruner(lower=0.0))
study.optimize(objective_for_lower, n_trials=10)
Parameters
• lower (float | None) – A minimum value which determines whether pruner prunes or
not. If an intermediate value is smaller than lower, it prunes.
• upper (float | None) – A maximum value which determines whether pruner prunes or
not. If an intermediate value is larger than upper, it prunes.
• n_warmup_steps (int) – Pruning is disabled if the step is less than the given number of
warmup steps.
• interval_steps (int) – Interval in number of steps between the pruning checks, offset by
the warmup steps. If no value has been reported at the time of a pruning check, that particular
check will be postponed until a value is reported. Value must be at least 1.
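The thresholds translate into a simple per-value rule. The following is a hedged sketch of that rule (the pruner also prunes when an intermediate value is NaN):

```python
import math


# Sketch of ThresholdPruner's per-value decision: prune when an intermediate
# value falls below `lower`, rises above `upper`, or is NaN.
def out_of_range(value, lower=None, upper=None):
    if math.isnan(value):
        return True
    if lower is not None and value < lower:
        return True
    if upper is not None and value > upper:
        return True
    return False


# With upper=1.0, the value 1.2 (as in ys_for_upper above) triggers pruning.
print(out_of_range(1.2, upper=1.0))  # True
```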
Methods
prune(study, trial) Judge whether the trial should be pruned based on the
reported values.
prune(study, trial)
Judge whether the trial should be pruned based on the reported values.
Note that this method is not supposed to be called by library users. Instead, optuna.trial.Trial.
report() and optuna.trial.Trial.should_prune() provide user interfaces to implement a pruning
mechanism in an objective function.
Parameters
• study (Study) – Study object of the target study.
• trial (FrozenTrial) – FrozenTrial object of the target trial. Take a copy before modifying this object.
Returns
A boolean value representing whether the trial should be pruned.
Return type
bool
6.3.10 optuna.samplers
The samplers module defines a base class for parameter sampling as described extensively in BaseSampler. The
remaining classes in this module represent child classes, deriving from BaseSampler, which implement different
sampling strategies.
See also:
pruning tutorial explains the overview of the sampler classes.
See also:
user_defined_sampler tutorial could be helpful if you want to implement your own sampler classes.
Note: ✓: Supports this feature. ▲: Works, but inefficiently. ×: Causes an error, or has no interface.
(*): We assume that 𝑑 is the dimension of the search space, 𝑛 is the number of finished trials, 𝑚 is the
number of objectives, and 𝑝 is the population size (an algorithm-specific parameter). This table shows the
time complexity of the sampling algorithms. We may omit other terms that depend on the implementation
in Optuna, including 𝑂(𝑑) to call the sampling methods and 𝑂(𝑛) to collect the completed trials. This
means that, for example, the actual time complexity of RandomSampler is 𝑂(𝑑 + 𝑛 + 𝑑) = 𝑂(𝑑 + 𝑛).
From another perspective, with the exception of NSGAIISampler, all time complexities are written for single-
objective optimization.
(**): The budget depends on the number of parameters and the number of objectives.
(***): This time complexity assumes that the population size 𝑝 and the degree of parallelism are
comparable; that is, the degree of parallelism should not exceed the population size 𝑝.
Please check the documentation of each concrete sampler for more details.
For conditional search space, see configurations tutorial and TPESampler. The group option of TPESampler allows
TPESampler to handle the conditional search space.
For multi-objective optimization, see multi_objective tutorial.
For batch optimization, see Batch-Optimization tutorial. Note that the constant_liar option of TPESampler allows
TPESampler to handle the batch optimization.
For distributed optimization, see distributed tutorial. Note that the constant_liar option of TPESampler allows
TPESampler to handle the distributed optimization.
For constrained optimization, see an example.
optuna.samplers.BaseSampler
class optuna.samplers.BaseSampler
Base class for samplers.
Optuna combines two types of sampling strategies, which are called relative sampling and independent sampling.
The relative sampling determines the values of multiple parameters simultaneously so that sampling algorithms can
use relationships between parameters (e.g., correlation). Target parameters of the relative sampling are described
in a relative search space, which is determined by infer_relative_search_space().
The independent sampling determines a value of a single parameter without considering any relationship between
parameters. Target parameters of the independent sampling are the parameters not described in the relative search
space.
More specifically, parameters are sampled by the following procedure. At the beginning of a trial,
infer_relative_search_space() is called to determine the relative search space for the trial. During the
execution of the objective function, sample_relative() is called only once when sampling the parameters
belonging to the relative search space for the first time. sample_independent() is used to sample parameters
that don’t belong to the relative search space.
The following figure depicts the lifetime of a trial and how the above three methods are called in the trial.
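The procedure above can be modeled in plain Python. This is a simplified sketch, not the real API: random.random() stands in for the actual distributions, and plain functions stand in for the three BaseSampler methods:

```python
import random


# Simplified model of the per-trial sampling procedure described above:
# parameters in the relative search space are drawn together, once, the first
# time one of them is requested; all other parameters fall back to
# independent sampling.
def sample_trial_params(requested, relative_space, rng):
    relative_params = None  # "sample_relative()" result, computed lazily
    params = {}
    for name in requested:  # parameters the objective asks for, in order
        if name in relative_space:
            if relative_params is None:
                # "sample_relative()": all relative parameters at once.
                relative_params = {n: rng.random() for n in relative_space}
            params[name] = relative_params[name]
        else:
            # "sample_independent()": one parameter, no correlations used.
            params[name] = rng.random()
    return params


rng = random.Random(0)
params = sample_trial_params(["x", "y", "z"], {"x", "y"}, rng)
print(sorted(params))  # ['x', 'y', 'z']
```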
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
optuna.samplers.GridSampler
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y**2


search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
study.optimize(objective)
Note: GridSampler automatically stops the optimization if all combinations in the passed search_space
have already been evaluated, internally invoking the stop() method.
Note: GridSampler does not take care of a parameter’s quantization specified by discrete suggest methods but
just samples one of values specified in the search space. E.g., in the following code snippet, either of -0.5 or
0.5 is sampled as x instead of an integer point.
import optuna


def objective(trial):
    # The following suggest method specifies integer points between -5 and 5.
    x = trial.suggest_float("x", -5, 5, step=1)
    return x**2


# Non-integer points are specified in the grid.
search_space = {"x": [-0.5, 0.5]}
study = optuna.create_study(sampler=optuna.samplers.GridSampler(search_space))
study.optimize(objective, n_trials=2)
Note: A parameter configuration in the grid is not considered finished until its trial is finished. Therefore,
during distributed optimization where trials run concurrently, different workers will occasionally suggest the
same parameter configuration. The total number of actual trials may therefore exceed the size of the grid.
Note: All parameters must be specified when using GridSampler with enqueue_trial().
Parameters
• search_space (Mapping[str, Sequence[None | bool | int | float | str]])
– A dictionary whose key and value are a parameter name and the corresponding candidates
of values, respectively.
• seed (int | None) – A seed to fix the order of trials as the grid is randomly shuffled.
Please note that it is not recommended to use this option in distributed optimization settings
since this option cannot ensure the order of trials and may increase the number of duplicate
suggestions during distributed optimization.
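The grid GridSampler enumerates is the Cartesian product of the candidate lists in search_space, which can be illustrated as follows (search_space here is a hypothetical example):

```python
import itertools

# The grid is the Cartesian product of the candidate lists: every
# combination of one candidate per parameter.
search_space = {"x": [-50, 0, 50], "y": [-99, 0, 99]}
grid = list(itertools.product(*search_space.values()))

# 3 candidates for x times 3 candidates for y = 9 combinations; the study
# stops automatically once all of them have been evaluated.
print(len(grid))  # 9
```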
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
optuna.samplers.RandomSampler
class optuna.samplers.RandomSampler(seed=None)
Sampler using random sampling.
This sampler is based on independent sampling. See also BaseSampler for more details of 'independent
sampling'.
Example
import optuna
from optuna.samplers import RandomSampler
def objective(trial):
x = trial.suggest_float("x", -5, 5)
return x**2
study = optuna.create_study(sampler=RandomSampler())
study.optimize(objective, n_trials=10)
Parameters
seed (int | None) – Seed for random number generator.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
optuna.samplers.TPESampler
Example
import optuna
from optuna.samplers import TPESampler
def objective(trial):
x = trial.suggest_float("x", -10, 10)
return x**2
study = optuna.create_study(sampler=TPESampler())
study.optimize(objective, n_trials=10)
Parameters
Note: In the multi-objective case, this argument is only used to compute the weights of bad
trials, i.e., the trials used to construct g(x) in the paper. The weights of good trials, i.e., the trials
used to construct l(x), are computed by a rule based on the hypervolume contribution proposed in
the MOTPE paper.
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
• group (bool) – If this and multivariate are True, the multivariate TPE with the group-
decomposed search space is used when suggesting parameters. The sampling algorithm
decomposes the search space based on past trials and samples from the joint distribution in
each decomposed subspace. The decomposed subspaces are a partition of the whole search
space. Each subspace is a maximal subset of the whole search space that satisfies the
following: for each trial in the completed trials, the intersection of the subspace and the search space
of the trial is either the subspace itself or an empty set. Sampling from the joint distribution on
a subspace is realized by multivariate TPE. If group is True, multivariate must be
True as well.
Note: Added in v2.8.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.
Example:
import optuna
def objective(trial):
x = trial.suggest_categorical("x", ["A", "B"])
if x == "A":
return trial.suggest_float("y", -10, 10)
else:
return trial.suggest_int("z", -10, 10)
Note: Abnormally terminated trials often leave behind a record with a state of RUNNING
in the storage. Such “zombie” trial parameters will be avoided by the constant liar algo-
rithm during subsequent sampling. When using an RDBStorage, it is possible to enable the
heartbeat_interval to change the records for abnormally terminated trials to FAIL.
Note: It is recommended to set this value to True during distributed optimization to avoid
having multiple workers evaluate similar parameter configurations. This is particularly
helpful when each objective function evaluation is costly, the durations of the running states
are significant, and/or the number of workers is high.
Note: This feature can be used for only single-objective optimization; this argument is
ignored for multi-objective optimization.
Note: Added in v2.8.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.
The constraints_func will be evaluated after each successful trial. The function won't be
called when trials fail or are pruned, but this behavior is subject to change in future
releases.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
• categorical_distance_func (Optional[dict[str,
Callable[[CategoricalChoiceType, CategoricalChoiceType], float]]])
– A dictionary of distance functions for categorical parameters. The key is the
name of the categorical parameter and the value is a distance function that takes two
CategoricalChoiceType s and returns a float value. The distance function must return
a non-negative value.
While categorical choices are handled equally by default, this option allows users to specify
prior knowledge on the structure of categorical parameters. When specified, categorical
choices closer to current best choices are more likely to be sampled.
Note: Added in v3.4.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.4.0.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
static hyperopt_parameters()
Return the default parameters of hyperopt (v0.1.2).
TPESampler can be instantiated with the parameters returned by this method.
Example
import optuna
from optuna.samplers import TPESampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return x**2


sampler = TPESampler(**TPESampler.hyperopt_parameters())
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)
Returns
A dictionary containing the default parameters of hyperopt.
Return type
Dict[str, Any]
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
optuna.samplers.CmaEsSampler
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -1, 1)
    y = trial.suggest_int("y", -1, 1)
    return x**2 + y


sampler = optuna.samplers.CmaEsSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=20)
Please note that this sampler does not support CategoricalDistribution. However, FloatDistribution with
step (suggest_float()) and IntDistribution (suggest_int()) are supported.
If your search space contains categorical parameters, we recommend using TPESampler instead. Furthermore,
there is room for performance improvement in parallel optimization settings: this sampler cannot use some
trials for updating the parameters of the multivariate normal distribution.
For further information about CMA-ES algorithm, please refer to the following papers:
• N. Hansen, The CMA Evolution Strategy: A Tutorial. arXiv:1604.00772, 2016.
• A. Auger and N. Hansen. A restart CMA evolution strategy with increasing population size. In Proceedings
of the IEEE Congress on Evolutionary Computation (CEC 2005), pages 1769–1776. IEEE Press, 2005.
• N. Hansen. Benchmarking a BI-Population CMA-ES on the BBOB-2009 Function Testbed. GECCO
Workshop, 2009.
• Raymond Ros, Nikolaus Hansen. A Simple Modification in CMA-ES Achieving Linear Time and Space
Complexity. 10th International Conference on Parallel Problem Solving From Nature, Sep 2008,
Dortmund, Germany. inria-00287367.
• Masahiro Nomura, Shuhei Watanabe, Youhei Akimoto, Yoshihiko Ozaki, Masaki Onishi. Warm Starting
CMA-ES for Hyperparameter Optimization, AAAI. 2021.
• R. Hamano, S. Saito, M. Nomura, S. Shirakawa. CMA-ES with Margin: Lower-Bounding Marginal
Probability for Mixed-Integer Black-Box Optimization, GECCO. 2022.
• M. Nomura, Y. Akimoto, I. Ono. CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default
Population Size Solve Multimodal and Noisy Problems?, GECCO. 2023.
See also:
You can also use optuna.integration.PyCmaSampler which is a sampler using cma library as the backend.
Parameters
• x0 (Optional[Dict[str, Any]]) – A dictionary of initial parameter values for CMA-
ES. By default, the mean of low and high for each distribution is used. Note that
x0 is sampled uniformly within the search space domain for each restart if you specify
the restart_strategy argument.
• sigma0 (Optional[float]) – Initial standard deviation of CMA-ES. By default, sigma0
is set to min_range / 6, where min_range denotes the minimum range of the distributions
in the search space.
• seed (Optional[int]) – A random seed for CMA-ES.
• n_startup_trials (int) – Independent sampling is used instead of the CMA-ES
algorithm until the given number of trials finish in the same study.
• independent_sampler (Optional[BaseSampler]) – A BaseSampler instance that is
used for independent sampling. The parameters not contained in the relative search space
are sampled by this sampler. The search space for CmaEsSampler is determined by
intersection_search_space().
If None is specified, RandomSampler is used as the default.
See also:
optuna.samplers module provides built-in independent samplers such as
RandomSampler and TPESampler.
• warn_independent_sampling (bool) – If this is True, a warning message is emitted when
the value of a parameter is sampled by using an independent sampler.
Note that the parameters of the first trial in a study are always sampled via an independent
sampler, so no warning messages are emitted in this case.
Note: Added in v2.1.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.1.0.
Note: Added in v2.0.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.0.0.
Note: We suggest setting this flag to False when the MedianPruner is used, and to True
when the HyperbandPruner is used. Please see the benchmark results for details.
Note: Added in v2.6.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.6.0.
• with_margin (bool) – If this is True, CMA-ES with margin is used. This algorithm
prevents samples in each discrete distribution (FloatDistribution with step and
IntDistribution) from being fixed to a single point. Currently, this option cannot be
used with use_separable_cma=True.
Note: Added in v3.1.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.1.0.
• lr_adapt (bool) – If this is True, CMA-ES with learning rate adaptation is used.
This algorithm focuses on working well on multimodal and/or noisy problems with
default settings. Currently, this option cannot be used with use_separable_cma=True or
with_margin=True.
Note: Added in v3.3.0 as an experimental feature. The interface may change in
newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Note: Added in v2.6.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.6.0.
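The sigma0 default described above (min_range / 6) can be sketched as a small computation. The search-space ranges below are hypothetical, and the range of each numeric distribution is taken to be high - low:

```python
# Hedged sketch of the documented default: sigma0 = min_range / 6,
# where min_range is the smallest range among the distributions in
# the search space.
search_space_ranges = {"x": (-1.0, 1.0), "y": (0.0, 10.0)}  # hypothetical

min_range = min(high - low for low, high in search_space_ranges.values())
sigma0 = min_range / 6  # here 2.0 / 6
```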
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
optuna.samplers.PartialFixedSampler
Example
After several steps of optimization, you can fix the value of y and re-optimize it.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -1, 1)
    y = trial.suggest_int("y", -1, 1)
    return x**2 + y


study = optuna.create_study()
study.optimize(objective, n_trials=10)

best_params = study.best_params
fixed_params = {"y": best_params["y"]}
partial_sampler = optuna.samplers.PartialFixedSampler(fixed_params, study.sampler)

study.sampler = partial_sampler
study.optimize(objective, n_trials=10)
Parameters
• fixed_params (Dict[str, Any]) – A dictionary of parameters to be fixed.
• base_sampler (BaseSampler) – A sampler which samples unfixed parameters.
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
optuna.samplers.NSGAIISampler
Parameters
• population_size (int) – Number of individuals (trials) in a generation.
population_size must be greater than or equal to crossover.n_parents. For
UNDXCrossover and SPXCrossover, n_parents=3, and for the other algorithms,
n_parents=2.
• mutation_prob (float | None) – Probability of mutating each parameter when creating
a new individual. If None is specified, the value 1.0 / len(parent_trial.params) is
used where parent_trial is the parent trial of the target individual.
• crossover (BaseCrossover | None) – Crossover to be applied when creating child
individuals. The available crossovers are listed here:
https://optuna.readthedocs.io/en/stable/reference/samplers/nsgaii.html.
UniformCrossover is always applied to parameters sampled from
CategoricalDistribution, and by default for parameters sampled from other
distributions unless this argument is specified.
For more information on each crossover method, please refer to the specific crossover
documentation.
• crossover_prob (float) – Probability that a crossover (parameters swapping between
parents) will occur when creating a new individual.
• swapping_prob (float) – Probability of swapping each parameter of the parents during
crossover.
• seed (int | None) – Seed for random number generator.
• constraints_func (Callable[[FrozenTrial], Sequence[float]] | None) – An
optional function that computes the objective constraints. It must take a FrozenTrial and
return the constraints. The return value must be a sequence of floats. A value strictly
larger than 0 means that a constraint is violated. A value equal to or smaller than 0 is
considered feasible. If constraints_func returns more than one value for a trial, that trial
is considered feasible if and only if all values are equal to 0 or smaller.
The constraints_func will be evaluated after each successful trial. The function won't be
called when trials fail or are pruned, but this behavior is subject to change in future
releases.
The constraints are handled by constrained domination. A trial x is said to constrained-
dominate a trial y if any of the following conditions is true:
1. Trial x is feasible and trial y is not.
2. Trials x and y are both infeasible, but trial x has a smaller overall violation.
3. Trials x and y are both feasible and trial x dominates trial y.
Note: Added in v2.5.0 as an experimental feature. The interface may change in newer
versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v2.5.0.
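The constrained-domination rules above can be sketched as a small comparator. Minimization is assumed, and the names below are illustrative rather than Optuna's internal API:

```python
def total_violation(constraints):
    # Overall violation: the sum of the positive parts of the
    # constraint values (values <= 0 are feasible and contribute 0).
    return sum(max(c, 0.0) for c in constraints)


def constrained_dominates(x_values, x_constraints, y_values, y_constraints):
    x_feasible = all(c <= 0 for c in x_constraints)
    y_feasible = all(c <= 0 for c in y_constraints)
    if x_feasible and not y_feasible:
        return True  # rule 1: feasible beats infeasible
    if not x_feasible and not y_feasible:
        # rule 2: the smaller overall violation wins
        return total_violation(x_constraints) < total_violation(y_constraints)
    if x_feasible and y_feasible:
        # rule 3: ordinary Pareto domination on the objective values
        return (all(a <= b for a, b in zip(x_values, y_values))
                and any(a < b for a, b in zip(x_values, y_values)))
    return False  # x infeasible, y feasible
```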
• elite_population_selection_strategy (Callable[[Study,
list[FrozenTrial]], list[FrozenTrial]] | None) – The selection strategy
for determining the individuals to survive from the current population pool. Defaults to
None.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
dict[str, Any]
optuna.samplers.NSGAIIISampler
Parameters
• reference_points (np.ndarray | None) – A two-dimensional numpy.ndarray whose
number of columns equals the number of objectives. Represents a list of reference points
used to determine which individuals survive. After the non-dominated sort, which
individuals in the borderline front survive is determined according to how sparse the
closest reference point of each individual is. In the default setting the algorithm uses
uniformly spread points to diversify the result. It is also possible to reflect your
preferences by giving an arbitrary set of target points, since the algorithm prioritizes
individuals around reference points.
• dividing_parameter (int) – A parameter that determines the density of the default
reference points. It controls how many divisions are made between reference points
on each axis. The smaller this value is, the fewer reference points you have. The default
value is 3. Note that this parameter is not used when reference_points is not None.
• population_size (int) –
• mutation_prob (float | None) –
• crossover (BaseCrossover | None) –
• crossover_prob (float) –
• swapping_prob (float) –
• seed (int | None) –
• constraints_func (Callable[[FrozenTrial], Sequence[float]] | None) –
• child_generation_strategy (Callable[[Study, dict[str,
BaseDistribution], list[FrozenTrial]], dict[str, Any]] | None) –
• after_trial_strategy (Callable[[Study, FrozenTrial, TrialState,
Sequence[float] | None], None] | None) –
Note: Parameters other than reference_points and dividing_parameter are the same as for
NSGAIISampler.
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
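Uniformly spread reference points of the kind described above can be sketched with the Das-Dennis simplex-lattice construction, a common way to generate such point sets. Whether Optuna uses exactly this routine internally is an assumption here; the sketch only illustrates how dividing_parameter controls the density:

```python
def das_dennis(n_objectives, n_divisions):
    # Generate uniformly spread points on the unit simplex: every
    # coordinate is a non-negative multiple of 1 / n_divisions and
    # each point's coordinates sum to 1.
    points = []

    def rec(prefix, remaining, depth):
        if depth == n_objectives - 1:
            points.append(prefix + [remaining / n_divisions])
            return
        for i in range(remaining + 1):
            rec(prefix + [i / n_divisions], remaining - i, depth + 1)

    rec([], n_divisions, 0)
    return points


# For 3 objectives with n_divisions=3 this yields
# C(3 + 3 - 1, 3 - 1) = 10 reference points.
```

A smaller n_divisions (the role played by dividing_parameter) yields fewer, sparser reference points.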
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by using the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
dict[str, Any]
optuna.samplers.MOTPESampler
Parameters
• consider_prior (bool) – Enhance the stability of the Parzen estimator by imposing a
Gaussian prior when True. The prior is only effective if the sampling distribution is either
FloatDistribution or IntDistribution.
• prior_weight (float) – The weight of the prior. This argument is used in
FloatDistribution, IntDistribution, and CategoricalDistribution.
• consider_magic_clip (bool) – Enable a heuristic to limit the smallest variances of
Gaussians used in the Parzen estimator.
• consider_endpoints (bool) – Take endpoints of domains into account when calculating
variances of Gaussians in Parzen estimator. See the original paper for details on the heuristics
to calculate the variances.
• n_startup_trials (int) – Random sampling is used instead of the MOTPE algorithm
until the given number of trials finish in the same study. The original paper recommends
11 * number of variables - 1.
• n_ehvi_candidates (int) – Number of candidate samples used to calculate the expected
hypervolume improvement.
• gamma (Callable[[int], int]) – A function that takes the number of finished trials and
returns the number of trials to form a density function for samples with low grains. See the
original paper for more details.
• weights_above (Callable[[int], ndarray]) – A function that takes the number of
finished trials and returns a weight for them. By default, weights are calculated
automatically by MOTPE's default strategy.
• seed (int | None) – Seed for random number generator.
Note: Initialization with Latin hypercube sampling may improve optimization performance. However, the
current implementation only supports initialization with random sampling.
Example
import optuna

seed = 128
num_variables = 2
n_startup_trials = 11 * num_variables - 1


def objective(trial):
    x = []
    for i in range(1, num_variables + 1):
        x.append(trial.suggest_float(f"x{i}", 0.0, 2.0 * i))
    return x


sampler = optuna.samplers.MOTPESampler(
    n_startup_trials=n_startup_trials, n_ehvi_candidates=24, seed=seed
)

study = optuna.create_study(directions=["minimize"] * num_variables, sampler=sampler)
Warning: Deprecated in v2.9.0. This feature will be removed in the future. The removal of this feature is
currently scheduled for v4.0.0, but this schedule is subject to change. See
https://github.com/optuna/optuna/releases/tag/v2.9.0.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More
precisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
static hyperopt_parameters()
Return the default parameters of hyperopt (v0.1.2).
TPESampler can be instantiated with the parameters returned by this method.
Example
import optuna
from optuna.samplers import TPESampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return x**2


sampler = TPESampler(**TPESampler.hyperopt_parameters())
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)
Returns
A dictionary containing the default parameters of hyperopt.
Return type
Dict[str, Any]
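Because the return value is a plain dict, individual entries can be overridden with standard dict unpacking before constructing the sampler. A minimal sketch of the pattern, using a stand-in dict rather than the real return value:

```python
# Stand-in for the dict returned by TPESampler.hyperopt_parameters();
# the real keys and values may differ.
defaults = {"n_startup_trials": 20, "n_ei_candidates": 24}

# The later keyword wins, so this overrides a single default:
params = {**defaults, "n_startup_trials": 5}
# sampler = TPESampler(**params)  # would then use the tweaked defaults
```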
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
optuna.samplers.QMCSampler
Note: The search space of the sampler is determined by either previous trials in the study or the first trial that
this sampler samples.
If there are previous trials in the study, QMCSampler infers its search space using the trial which was created first
in the study.
Otherwise (if the study has no previous trials), QMCSampler samples the first trial using its independent_sampler
and then infers the search space in the second trial.
As mentioned above, the search space of the QMCSampler is determined by the first trial of the study. Once the
search space is determined, it cannot be changed afterwards.
Parameters
• qmc_type (str) – The type of QMC sequence to be sampled. This must be either “halton”
or “sobol”. Default is “sobol”.
Note: Sobol' sequences are designed to have the low-discrepancy property when the number
of samples is n = 2^m for a positive integer m. When it is possible to pre-specify the number
of trials suggested by QMCSampler, it is recommended that the number of trials be set to a
power of two.
• scramble (bool) – If this option is True, scrambling (randomization) is applied to the QMC
sequences.
• seed (int | None) – A seed for QMCSampler. This argument is used only when scramble
is True. If this is None, the seed is initialized randomly. Default is None.
Note: When using multiple QMCSamplers in parallel and/or distributed optimization, all
the samplers must share the same seed when scrambling is enabled. Otherwise, the low-
discrepancy property of the samples will be degraded.
Note: When using parallel and/or distributed optimization without manually setting the
seed, a random seed is set for each instance of QMCSampler on each worker, which results
in uncoordinated seeding across the samplers used in the optimization.
See also:
See parameter seed in QMCSampler.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -1, 1)
    y = trial.suggest_int("y", -1, 1)
    return x**2 + y


sampler = optuna.samplers.QMCSampler()
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=8)
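The power-of-two note above can be illustrated with the base-2 radical inverse (the van der Corput sequence), the one-dimensional building block of Sobol'-style sequences. A stdlib-only sketch, not Optuna's implementation:

```python
def van_der_corput(i: int) -> float:
    """Base-2 radical inverse: mirror the binary digits of i about the radix point."""
    f, r = 0.5, 0.0
    while i > 0:
        r += f * (i & 1)
        i >>= 1
        f *= 0.5
    return r


# The first 2**m points place exactly one sample in each of the 2**m equal
# cells of [0, 1); stopping at a non-power-of-two count leaves cells empty.
points = [van_der_corput(i) for i in range(8)]
```

For n = 8 = 2^3 the points are exactly {0, 1/8, ..., 7/8}, one per cell, which is why the documentation recommends a power-of-two trial count.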
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More pre-
cisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
optuna.samplers.BruteForceSampler
class optuna.samplers.BruteForceSampler(seed=None)
Sampler using brute force.
This sampler performs exhaustive search on the defined search space.
Example
import optuna


def objective(trial):
    c = trial.suggest_categorical("c", ["float", "int"])
    if c == "float":
        return trial.suggest_float("x", 1, 3, step=0.5)
    elif c == "int":
        a = trial.suggest_int("a", 1, 3)
        b = trial.suggest_int("b", a, 3)
        return a + b


study = optuna.create_study(sampler=optuna.samplers.BruteForceSampler())
study.optimize(objective)
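For the example above, the finite search space can be counted by hand. A stdlib-only sketch of the enumeration (BruteForceSampler's actual bookkeeping differs):

```python
# Branch c == "float": x in {1.0, 1.5, 2.0, 2.5, 3.0}  (step=0.5 over [1, 3])
float_branch = [1 + 0.5 * k for k in range(5)]

# Branch c == "int": pairs (a, b) with 1 <= a <= b <= 3
int_branch = [(a, b) for a in range(1, 4) for b in range(a, 4)]

# Every leaf of the suggestion tree is one trial to exhaust.
n_trials = len(float_branch) + len(int_branch)
```

This gives 5 + 6 = 11 distinct parameter combinations, so the study finishes after 11 trials.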
Note: The defined search space must be finite. Therefore, when using FloatDistribution or
suggest_float(), step=None is not allowed.
Note: The sampler may fail to explore the entire search space when the suggestion ranges or parameters are
changed in the same Study.
Parameters
seed (int | None) – A seed to fix the order of trials, since the search order is randomly shuffled.
Note that this option is not recommended in distributed optimization settings, because it cannot
ensure the order of trials and may increase the number of duplicate suggestions during distributed
optimization.
Note: Added in v3.1.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.1.0.
Methods
Note: Added in v2.4.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.4.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• state (TrialState) – Resulting trial state.
• values (Sequence[float] | None) – Resulting trial values. Guaranteed to not be None
if trial succeeded.
Return type
None
before_trial(study, trial)
Trial pre-processing.
This method is called before the objective function is called and right after the trial is instantiated. More pre-
cisely, this method is called during trial initialization, just before the infer_relative_search_space()
call. In other words, it is responsible for pre-processing that should be done before inferring the search
space.
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object.
Return type
None
infer_relative_search_space(study, trial)
Infer the search space that will be used by relative sampling in the target trial.
This method is called right before the sample_relative() method, and the search space returned by this
method is passed to it. The parameters not contained in the search space will be sampled by the
sample_independent() method.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
See also:
Please refer to intersection_search_space() as an implementation of
infer_relative_search_space().
reseed_rng()
Reseed sampler’s random number generator.
This method is called by the Study instance if trials are executed in parallel with the option n_jobs>1.
In that case, the sampler instance will be replicated including the state of the random number generator,
and they may suggest the same values. To prevent this issue, this method assigns a different seed to each
random number generator.
Return type
None
sample_independent(study, trial, param_name, param_distribution)
Sample a parameter for a given distribution.
This method is called only for the parameters not contained in the search space returned by the
sample_relative() method. This method is suitable for sampling algorithms that do not use relationships
between parameters, such as random sampling and TPE.
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• param_name (str) – Name of the sampled parameter.
• param_distribution (BaseDistribution) – Distribution object that specifies a prior
and/or scale of the sampling algorithm.
Returns
A parameter value.
Return type
Any
Note: The failed trials are ignored by any built-in samplers when they sample new parameters. Thus,
failed trials are regarded as deleted from the samplers' perspective.
Parameters
• study (Study) – Target study object.
• trial (FrozenTrial) – Target trial object. Take a copy before modifying this object.
• search_space (Dict[str, BaseDistribution]) – The search space returned by
infer_relative_search_space().
Returns
A dictionary containing the parameter names and the values.
Return type
Dict[str, Any]
optuna.samplers.IntersectionSearchSpace
class optuna.samplers.IntersectionSearchSpace(include_pruned=False)
A class to calculate the intersection search space of a Study.
Intersection search space contains the intersection of parameter distributions that have been suggested in the
completed trials of the study so far. If there are multiple parameters that have the same name but different
distributions, neither is included in the resulting search space (i.e., the parameters with dynamic value ranges are
excluded).
Note that an instance of this class is supposed to be used for only one study. If different studies are passed to
calculate(), a ValueError is raised.
Parameters
include_pruned (bool) – Whether pruned trials should be included in the search space.
Warning: Deprecated in v3.2.0. This feature will be removed in the future. The removal of this feature is
currently scheduled for v4.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/
releases/tag/v3.2.0.
Please use optuna.search_space.IntersectionSearchSpace instead.
Methods
calculate(study, ordered_dict=False)
Returns the intersection search space of the Study.
Parameters
• study (Study) – A study with completed trials. The same study must be passed for one
instance of this class through its lifetime.
• ordered_dict (bool) – A boolean flag determining the return type. If False, the re-
turned object will be a dict. If True, the returned object will be a dict sorted by keys,
i.e. parameter names.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
optuna.samplers.intersection_search_space
Note: IntersectionSearchSpace provides the same functionality in a much faster way. Please consider
using it if you want to reduce execution time as much as possible.
Parameters
• study (Study) – A study with completed trials.
• ordered_dict (bool) – A boolean flag determining the return type. If False, the returned
object will be a dict. If True, the returned object will be a dict sorted by keys, i.e. param-
eter names.
• include_pruned (bool) – Whether pruned trials should be included in the search space.
Returns
A dictionary containing the parameter names and parameter’s distributions.
Return type
Dict[str, BaseDistribution]
Warning: Deprecated in v3.2.0. This feature will be removed in the future. The removal of this feature is
currently scheduled for v4.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/
releases/tag/v3.2.0.
Please use optuna.search_space.intersection_search_space instead.
Note: The following optuna.samplers.nsgaii module defines crossover operations used by NSGAIISampler.
optuna.samplers.nsgaii
optuna.samplers.nsgaii.BaseCrossover
class optuna.samplers.nsgaii.BaseCrossover
Base class for crossovers.
A crossover operation is used by NSGAIISampler to create a new parameter combination from the parameters
of n parent individuals.
Note: Concrete implementations of this class are expected to accept only parameters from numerical
distributions. At the moment, the only crossover operation for categorical parameters (uniform crossover) is
built into NSGAIISampler.
Methods
Attributes
optuna.samplers.nsgaii.UniformCrossover
class optuna.samplers.nsgaii.UniformCrossover(swapping_prob=0.5)
Uniform Crossover operation used by NSGAIISampler.
Select each parameter with equal probability from the two parent individuals. For further information about
uniform crossover, please refer to the following paper:
• Gilbert Syswerda. 1989. Uniform Crossover in Genetic Algorithms. In Proceedings of the 3rd International
Conference on Genetic Algorithms. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2-9.
Parameters
swapping_prob (float) – Probability of swapping each parameter of the parents during
crossover.
Methods
Attributes
n_parents
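The selection rule can be sketched in a few lines of stdlib Python (illustrative only; parameters are assumed to be stored as plain dicts here, whereas Optuna operates on its internal representations):

```python
import random


def uniform_crossover(p1: dict, p2: dict, swapping_prob: float = 0.5, rng=None) -> dict:
    """For each parameter, take the value from p2 with probability swapping_prob,
    otherwise keep the value from p1."""
    rng = rng or random.Random()
    return {k: (p2[k] if rng.random() < swapping_prob else p1[k]) for k in p1}


child = uniform_crossover(
    {"x": 0.0, "y": 10.0}, {"x": 1.0, "y": 20.0}, rng=random.Random(0)
)
```

Each child parameter is always one of the two parents' values; swapping_prob=0.5 gives both parents equal weight, matching the description above.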
optuna.samplers.nsgaii.BLXAlphaCrossover
class optuna.samplers.nsgaii.BLXAlphaCrossover(alpha=0.5)
Blend Crossover operation used by NSGAIISampler.
Uniformly samples child individuals from the hyper-rectangles created by the two parent individuals. For further
information about BLX-alpha crossover, please refer to the following paper:
• Eshelman, L. and J. D. Schaffer. Real-Coded Genetic Algorithms and Interval-Schemata. FOGA (1992).
Parameters
alpha (float) – Parametrizes blend operation.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Methods
Attributes
n_parents
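A one-dimensional, stdlib-only sketch of the blend operation, assuming the standard BLX-alpha formulation from the cited paper (the child is drawn uniformly from the parents' interval expanded by alpha times its width on each side):

```python
import random


def blx_alpha(x1: float, x2: float, alpha: float = 0.5, rng=None) -> float:
    """Sample a child uniformly from the parents' interval, widened by alpha * d."""
    rng = rng or random.Random()
    lo, hi = min(x1, x2), max(x1, x2)
    d = hi - lo
    return rng.uniform(lo - alpha * d, hi + alpha * d)


# With parents 1.0 and 3.0 and alpha=0.5, the child lies in [0.0, 4.0].
child = blx_alpha(1.0, 3.0, alpha=0.5, rng=random.Random(1))
```

Larger alpha widens the hyper-rectangle and thus increases exploration beyond the parents' span.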
optuna.samplers.nsgaii.SPXCrossover
class optuna.samplers.nsgaii.SPXCrossover(epsilon=None)
Simplex Crossover operation used by NSGAIISampler.
Uniformly samples child individuals from within a single simplex that is similar to the simplex produced by the
parent individuals. For further information about SPX crossover, please refer to the following paper:
• Shigeyoshi Tsutsui, David E. Goldberg, and Kumara Sastry. Progress Toward Linkage Learning in
Real-Coded GAs with Simplex Crossover. IlliGAL Report. 2000.
Parameters
epsilon (float | None) – Expansion rate. If not specified, defaults to
sqrt(len(search_space) + 2).
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Methods
Attributes
n_parents
optuna.samplers.nsgaii.SBXCrossover
class optuna.samplers.nsgaii.SBXCrossover(eta=None)
Simulated Binary Crossover operation used by NSGAIISampler.
Generates a child from two parent individuals according to the polynomial probability distribution.
• Deb, K. and R. Agrawal. “Simulated Binary Crossover for Continuous Search Space.” Complex Syst. 9
(1995): n. pag.
Parameters
eta (float | None) – Distribution index. A small value of eta allows distant solutions to
be selected as children. If not specified, defaults to 2 for single-objective functions and 20
for multi-objective functions.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Methods
Attributes
n_parents
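A one-dimensional, stdlib-only sketch of the standard SBX formulation from the cited paper (not Optuna's implementation). A spread factor beta is drawn from the polynomial distribution controlled by eta, and the two children are symmetric about the parents' mean; smaller eta yields larger spreads, hence more distant children:

```python
import random


def sbx(x1: float, x2: float, eta: float = 2.0, rng=None):
    """Simulated binary crossover: return two children symmetric about the parents' mean."""
    rng = rng or random.Random()
    u = rng.random()
    # Spread factor beta from the polynomial probability distribution.
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return c1, c2


c1, c2 = sbx(1.0, 3.0, eta=2.0, rng=random.Random(0))
# By construction c1 + c2 == x1 + x2: the children preserve the parents' mean.
```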
optuna.samplers.nsgaii.VSBXCrossover
class optuna.samplers.nsgaii.VSBXCrossover(eta=None)
Modified Simulated Binary Crossover operation used by NSGAIISampler.
vSBX generates child individuals without excluding any region of the parameter space, while maintaining the
excellent properties of SBX.
• Pedro J. Ballester, Jonathan N. Carter. Real-Parameter Genetic Algorithms for Finding Multiple Optimal
Solutions in Multi-modal Optimization. GECCO 2003: 706-717
Parameters
eta (float | None) – Distribution index. A small value of eta allows distant solutions to
be selected as children. If not specified, defaults to 2 for single-objective functions and 20
for multi-objective functions.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Methods
Attributes
n_parents
optuna.samplers.nsgaii.UNDXCrossover
Parameters
• sigma_xi (float) – Parametrizes normal distribution from which xi is drawn.
• sigma_eta (float | None) – Parametrizes normal distribution from which etas are
drawn. If not specified, defaults to 0.35 / sqrt(len(search_space)).
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Methods
Attributes
n_parents
6.3.11 optuna.search_space
The search_space module provides functionality for controlling search space of parameters.
optuna.search_space.IntersectionSearchSpace
class optuna.search_space.IntersectionSearchSpace(include_pruned=False)
A class to calculate the intersection search space of a Study.
Intersection search space contains the intersection of parameter distributions that have been suggested in the
completed trials of the study so far. If there are multiple parameters that have the same name but different
distributions, neither is included in the resulting search space (i.e., the parameters with dynamic value ranges are
excluded).
Note that an instance of this class is supposed to be used for only one study. If different studies are passed to
calculate(), a ValueError is raised.
Parameters
include_pruned (bool) – Whether pruned trials should be included in the search space.
Methods
calculate(study)
Returns the intersection search space of the Study.
Parameters
study (Study) – A study with completed trials. The same study must be passed for one
instance of this class through its lifetime.
Returns
A dictionary containing the parameter names and parameter’s distributions sorted by param-
eter names.
Return type
Dict[str, BaseDistribution]
optuna.search_space.intersection_search_space
optuna.search_space.intersection_search_space(trials, include_pruned=False)
Return the intersection search space of the given trials.
Intersection search space contains the intersection of parameter distributions that have been suggested in the
completed trials of the study so far. If there are multiple parameters that have the same name but different
distributions, neither is included in the resulting search space (i.e., the parameters with dynamic value ranges are
excluded).
Note: IntersectionSearchSpace provides the same functionality in a much faster way. Please consider
using it if you want to reduce execution time as much as possible.
Parameters
• trials (list[FrozenTrial]) – A list of trials.
• include_pruned (bool) – Whether pruned trials should be included in the search space.
Returns
A dictionary containing the parameter names and parameter’s distributions sorted by parameter
names.
Return type
Dict[str, BaseDistribution]
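The intersection rule described above can be sketched with plain dicts (stdlib only; the real function operates on FrozenTrial objects and BaseDistribution instances, for which tuples stand in here):

```python
def intersect(spaces: list) -> dict:
    """Keep a parameter only if every trial suggested it with an identical distribution."""
    if not spaces:
        return {}
    result = dict(spaces[0])
    for space in spaces[1:]:
        result = {
            name: dist
            for name, dist in result.items()
            if space.get(name) == dist  # same name AND same distribution
        }
    return result


# Both trials agree on "x"; "y" has a dynamic value range, so it is excluded.
t1 = {"x": ("float", 0.0, 1.0), "y": ("int", 1, 3)}
t2 = {"x": ("float", 0.0, 1.0), "y": ("int", 1, 5)}
space = intersect([t1, t2])
```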
6.3.12 optuna.storages
The storages module defines a BaseStorage class which abstracts a backend database and provides library-internal
interfaces to read/write the histories of studies and trials. Library users who wish to use storage solutions other
than the default in-memory storage should use one of the child classes of BaseStorage documented below.
optuna.storages.RDBStorage
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    return x**2


storage = optuna.storages.RDBStorage(
    url="sqlite:///:memory:",
    engine_kwargs={"pool_size": 20, "connect_args": {"timeout": 10}},
)

study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=10)
Parameters
• url (str) – URL of the storage.
• engine_kwargs (Optional[Dict[str, Any]]) – A dictionary of keyword arguments
that is passed to sqlalchemy.engine.create_engine function.
• skip_compatibility_check (bool) – Flag to skip schema compatibility check if set to
True.
• heartbeat_interval (Optional[int]) – Interval to record the heartbeat. It is recorded
every interval seconds. heartbeat_interval must be None or a positive integer.
Note: The heartbeat is supposed to be used with optimize(). If you use ask() and
tell() instead, it will not work.
• grace_period (Optional[int]) – Grace period before a running trial is failed from the
last heartbeat. grace_period must be None or a positive integer. If it is None, the grace
period will be 2 * heartbeat_interval.
• failed_trial_callback (Optional[Callable[['optuna.study.Study',
FrozenTrial], None]]) – A callback function that is invoked after failing each
stale trial. The function must accept two parameters with the following types in this order:
Study and FrozenTrial.
Note: The procedure to fail existing stale trials is called just before asking the study for a
new trial.
Note: If you use MySQL, pool_pre_ping will be set to True by default to prevent connection timeout. You
can turn it off with engine_kwargs['pool_pre_ping']=False, but it is recommended to keep the setting if the
execution time of your objective function is longer than the wait_timeout of your MySQL configuration.
Note: We would never recommend SQLite3 for parallel optimization. Please see the FAQ How can I solve the
error that occurs when performing parallel optimization with SQLite3? for details.
Note: Mainly in a cluster environment, running trials are often killed unexpectedly. If you want to de-
tect a failure of trials, please use the heartbeat mechanism. Set heartbeat_interval, grace_period, and
failed_trial_callback appropriately according to your use case. For more details, please refer to the tuto-
rial and Example page.
See also:
You can use RetryFailedTrialCallback to automatically retry failed trials detected by heartbeat.
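The staleness rule implied by heartbeat_interval and grace_period can be modeled as follows — a stdlib-only sketch of the behavior described above, not RDBStorage's actual implementation:

```python
def effective_grace_period(heartbeat_interval, grace_period=None):
    # grace_period defaults to 2 * heartbeat_interval when not given.
    return grace_period if grace_period is not None else 2 * heartbeat_interval


def is_stale(now, last_heartbeat, heartbeat_interval, grace_period=None):
    """A running trial is considered failed once its last heartbeat is older
    than the grace period."""
    return now - last_heartbeat > effective_grace_period(heartbeat_interval, grace_period)


# With heartbeat_interval=60 and the default grace period (120 s),
# a trial whose last heartbeat is 130 s old is treated as stale.
stale = is_stale(now=1000.0, last_heartbeat=870.0, heartbeat_interval=60)
```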
Methods
check_trial_is_updatable(trial_id, trial_state)
Check whether a trial state is updatable.
Parameters
• trial_id (int) – ID of the trial. Only used for an error message.
• trial_state (TrialState) – Trial state to check.
Raises
RuntimeError – If the trial is already finished.
Return type
None
create_new_study(directions, study_name=None)
Create a new study from a name.
If no name is specified, the storage class generates a name. The returned study ID is unique among all
current and deleted studies.
Parameters
• directions (Sequence[StudyDirection]) – A sequence of directions whose elements
are either MAXIMIZE or MINIMIZE.
• study_name (str | None) – Name of the new study to create.
Returns
ID of the created study.
Raises
optuna.exceptions.DuplicatedStudyError – If a study with the same study_name
already exists.
Return type
int
create_new_trial(study_id, template_trial=None)
Create and add a new trial to a study.
The returned trial ID is unique among all current and deleted trials.
Parameters
• study_id (int) – ID of the study.
• template_trial (FrozenTrial | None) – Template FrozenTrial with default user-
attributes, system-attributes, intermediate-values, and a state.
Returns
ID of the created trial.
Raises
KeyError – If no study with the matching study_id exists.
Return type
int
delete_study(study_id)
Delete a study.
Parameters
study_id (int) – ID of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
get_all_studies()
Read a list of FrozenStudy objects.
Returns
A list of FrozenStudy objects, sorted by study_id.
Return type
List[FrozenStudy]
get_head_version()
Return the latest schema version.
Return type
str
get_heartbeat_interval()
Get the heartbeat interval if it is set.
Returns
The heartbeat interval if it is set, otherwise None.
Return type
int | None
get_n_trials(study_id, state=None)
Count the number of trials in a study.
Parameters
• study_id (int) – ID of the study.
• state (Tuple[TrialState, ...] | TrialState | None) – Trial states to filter on.
If None, include all states.
Returns
Number of trials in the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
int
get_study_directions(study_id)
Read whether a study maximizes or minimizes an objective.
Parameters
study_id (int) – ID of a study.
Returns
Optimization directions list of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
List[StudyDirection]
get_study_id_from_name(study_name)
Read the ID of a study.
Parameters
study_name (str) – Name of the study.
Returns
ID of the study.
Raises
KeyError – If no study with the matching study_name exists.
Return type
int
get_study_name_from_id(study_id)
Read the study name of a study.
Parameters
study_id (int) – ID of the study.
Returns
Name of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
str
get_study_system_attrs(study_id)
Read the optuna-internal attributes of a study.
Parameters
study_id (int) – ID of the study.
Returns
Dictionary with the optuna-internal attributes of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
Dict[str, Any]
get_study_user_attrs(study_id)
Read the user-defined attributes of a study.
Parameters
study_id (int) – ID of the study.
Returns
Dictionary with the user attributes of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
Dict[str, Any]
get_trial(trial_id)
Read a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Trial with a matching trial ID.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
FrozenTrial
get_trial_id_from_study_id_trial_number(study_id, trial_number)
Read the trial ID of a trial.
Parameters
• study_id (int) – ID of the study.
• trial_number (int) – Number of the trial.
Returns
ID of the trial.
Raises
KeyError – If no trial with the matching study_id and trial_number exists.
Return type
int
get_trial_number_from_id(trial_id)
Read the trial number of a trial.
Note: The trial number is only unique within a study, and is sequential.
Parameters
trial_id (int) – ID of the trial.
Returns
Number of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
int
get_trial_param(trial_id, param_name)
Read the parameter of a trial.
Parameters
• trial_id (int) – ID of the trial.
• param_name (str) – Name of the parameter.
Returns
Internal representation of the parameter.
Raises
KeyError – If no trial with the matching trial_id exists. If no such parameter exists.
Return type
float
get_trial_params(trial_id)
Read the parameter dictionary of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary of parameters. Keys are parameter names and values are internal representations
of the parameter values.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
get_trial_system_attrs(trial_id)
Read the optuna-internal attributes of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary with the optuna-internal attributes of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
get_trial_user_attrs(trial_id)
Read the user-defined attributes of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary with the user-defined attributes of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
record_heartbeat(trial_id)
Record the heartbeat of the trial.
Parameters
trial_id (int) – ID of the trial.
Return type
None
remove_session()
Remove the current session.
A session is stored in SQLAlchemy's ThreadLocalRegistry for each thread. This method closes and removes the session associated with the current thread. Under multi-threaded use, it is important to call this method from each thread; otherwise, all sessions and their associated DB connections are destroyed by whichever thread happens to invoke the garbage collector. By default, a SQLite connection may not be touched from any thread other than the one that created it, so the connection must be closed explicitly from each thread.
Return type
None
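The per-thread cleanup requirement can be sketched with a minimal thread-local registry. This is an illustrative model of the idea behind remove_session(), with hypothetical names; the string "session" stands in for a real SQLAlchemy session object.

```python
import threading


class SessionRegistry:
    """Each thread owns one session and must close it itself (sketch)."""

    def __init__(self):
        self._local = threading.local()
        self.closed = []  # names of closed sessions, for illustration

    def get_session(self):
        if not hasattr(self._local, "session"):
            # One session per thread, keyed implicitly by thread-local storage.
            self._local.session = f"session-{threading.get_ident()}"
        return self._local.session

    def remove_session(self):
        session = getattr(self._local, "session", None)
        if session is not None:
            self.closed.append(session)  # stands in for session.close()
            del self._local.session
```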
set_study_system_attr(study_id, key, value)
Register an optuna-internal attribute to a study.
This method overwrites any existing attribute.
Parameters
• study_id (int) – ID of the study.
• key (str) – Attribute key.
• value (Mapping[str, JSONSerializable] | Sequence[JSONSerializable] | str | int | float | bool | None) – Attribute value. It should be JSON serializable.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
set_study_user_attr(study_id, key, value)
Register a user-defined attribute to a study.
This method overwrites any existing attribute.
Parameters
• study_id (int) – ID of the study.
• key (str) – Attribute key.
• value (Any) – Attribute value. It should be JSON serializable.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
set_trial_intermediate_value(trial_id, step, intermediate_value)
Report an intermediate value of an objective function.
This method overwrites any existing intermediate value associated with the given step.
Parameters
• trial_id (int) – ID of the trial.
• step (int) – Step of the trial (e.g., the epoch when training a neural network).
• intermediate_value (float) – Intermediate value corresponding to the step.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
None
set_trial_param(trial_id, param_name, param_value_internal, distribution)
Set a parameter to a trial.
Parameters
• trial_id (int) – ID of the trial.
• param_name (str) – Name of the parameter.
• param_value_internal (float) – Internal representation of the parameter value.
• distribution (BaseDistribution) – Sampled distribution of the parameter.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
None
upgrade()
Upgrade the storage schema.
Return type
None
optuna.storages.RetryFailedTrialCallback
class optuna.storages.RetryFailedTrialCallback(max_retry=None, inherit_intermediate_values=False)
Retry a failed trial up to a maximum number of times.
import optuna
from optuna.storages import RetryFailedTrialCallback
storage = optuna.storages.RDBStorage(
url="sqlite:///:memory:",
heartbeat_interval=60,
grace_period=120,
failed_trial_callback=RetryFailedTrialCallback(max_retry=3),
)
study = optuna.create_study(
storage=storage,
)
See also:
See RDBStorage.
Parameters
• max_retry (int | None) – The maximum number of times a trial can be retried. Must be
None or an integer. If None (the default), trials are retried indefinitely; if an integer, each
trial is retried at most that many times.
• inherit_intermediate_values (bool) – Option to inherit trial.intermediate_values re-
ported by optuna.trial.Trial.report() from the failed trial. Default is False.
Note: Added in v2.8.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.
Methods
static retried_trial_number(trial)
Return the number of the original trial being retried.
Parameters
trial (FrozenTrial) – The trial object.
Returns
The number of the first failed trial. If the trial is not a retry of a previous trial, returns None.
Return type
int | None
Note: Added in v2.8.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.
static retry_history(trial)
Return the list of retried trial numbers with respect to the specified trial.
Parameters
trial (FrozenTrial) – The trial object.
Returns
A list of trial numbers in ascending order of the series of retried trials. The first item of the
list indicates the original trial which is identical to the retried_trial_number(), and the
last item is the one right before the specified trial in the retry series. If the specified trial is
not a retry of any trial, returns an empty list.
Return type
List[int]
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
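The retry-chain semantics described above can be sketched as follows. This is a hypothetical model, not Optuna's code: assume a mapping that records, for each retried trial number, the number of the trial it directly retries (in practice Optuna records retry metadata in the trial's system attributes).

```python
def retry_history(trial_number, retried_from):
    """Return the retried trial numbers for trial_number, oldest first.

    retried_from maps a trial number to the number of the trial it retries.
    The returned list starts with the original trial and ends with the trial
    right before trial_number; it is empty if trial_number is not a retry.
    """
    history = []
    current = retried_from.get(trial_number)
    while current is not None:
        history.append(current)
        current = retried_from.get(current)
    history.reverse()  # ascending order: original trial first
    return history
```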
optuna.storages.fail_stale_trials
optuna.storages.fail_stale_trials(study)
Fail stale trials and run their failure callbacks.
Running trials whose heartbeat has not been updated for a long time are failed; that is, their
states are changed to FAIL.
See also:
See RDBStorage.
Parameters
study (Study) – Study holding the trials to check.
Return type
None
Note: Added in v2.9.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.9.0.
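The staleness check behind fail_stale_trials() can be sketched in a few lines. This is an illustrative function under the assumption that running trials' last heartbeat timestamps are available as a plain mapping; the names are hypothetical.

```python
import time


def find_stale_trials(last_heartbeat, grace_period, now=None):
    """Return IDs of trials whose heartbeat is older than grace_period seconds.

    last_heartbeat maps a running trial's ID to its last heartbeat timestamp.
    """
    now = time.time() if now is None else now
    return [tid for tid, hb in last_heartbeat.items() if now - hb > grace_period]
```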
optuna.storages.JournalStorage
class optuna.storages.JournalStorage(log_storage)
Storage class for Journal storage backend.
Note that library users can instantiate this class, but the attributes provided by this class are not supposed to be
directly accessed by them.
Journal storage writes a record of every operation to the database as it is executed and, at the same time, keeps
the latest snapshot of the database in memory. If the storage crashes for any reason, it can re-establish the
contents in memory by replaying the stored operations from the beginning.
Journal storage has several benefits over conventional value-logging storages:
1. The number of IOs can be reduced because of the larger granularity of logs.
2. Journal storage has a simpler backend API than value-logging storage.
3. Journal storage keeps a snapshot in memory, so no additional cache is needed.
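The append-and-replay idea can be sketched with a toy journal. Names here are illustrative, not Optuna's internals: every operation is appended to a durable log, and the in-memory snapshot can be rebuilt at any point by replaying the log from the beginning.

```python
class MiniJournal:
    """Toy journal: append-only operation log plus an in-memory snapshot."""

    def __init__(self):
        self.log = []    # durable, append-only record of operations
        self.state = {}  # in-memory snapshot

    def write(self, key, value):
        self.log.append(("set", key, value))  # persist the operation first
        self.state[key] = value               # then update the snapshot

    def replay(self):
        """Rebuild the snapshot from the log, e.g. after a crash."""
        state = {}
        for op, key, value in self.log:
            if op == "set":
                state[key] = value
        return state
```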
Example
import optuna
def objective(trial):
...
storage = optuna.storages.JournalStorage(
optuna.storages.JournalFileStorage("./journal.log"),
)
study = optuna.create_study(storage=storage)
study.optimize(objective)
In a Windows environment, an error message “A required privilege is not held by the client” may appear. In this
case, you can solve the problem by creating the storage with JournalFileOpenLock specified, as follows.
file_path = "./journal.log"
lock_obj = optuna.storages.JournalFileOpenLock(file_path)
storage = optuna.storages.JournalStorage(
optuna.storages.JournalFileStorage(file_path, lock_obj=lock_obj),
)
Note: Added in v3.1.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.1.0.
Methods
check_trial_is_updatable(trial_id, trial_state) Check whether a trial state is updatable.
create_new_study(directions[, study_name]) Create a new study from a name.
create_new_trial(study_id[, template_trial]) Create and add a new trial to a study.
delete_study(study_id) Delete a study.
get_all_studies() Read a list of FrozenStudy objects.
get_all_trials(study_id[, deepcopy, states]) Read all trials in a study.
get_best_trial(study_id) Return the trial with the best value in a study.
get_n_trials(study_id[, state]) Count the number of trials in a study.
get_study_directions(study_id) Read whether a study maximizes or minimizes an objective.
get_study_id_from_name(study_name) Read the ID of a study.
get_study_name_from_id(study_id) Read the study name of a study.
get_study_system_attrs(study_id) Read the optuna-internal attributes of a study.
get_study_user_attrs(study_id) Read the user-defined attributes of a study.
get_trial(trial_id) Read a trial.
get_trial_id_from_study_id_trial_number(...) Read the trial ID of a trial.
get_trial_number_from_id(trial_id) Read the trial number of a trial.
get_trial_param(trial_id, param_name) Read the parameter of a trial.
get_trial_params(trial_id) Read the parameter dictionary of a trial.
get_trial_system_attrs(trial_id) Read the optuna-internal attributes of a trial.
get_trial_user_attrs(trial_id) Read the user-defined attributes of a trial.
remove_session() Clean up all connections to a database.
restore_replay_result(snapshot)
Parameters
log_storage (BaseJournalLogStorage) –
check_trial_is_updatable(trial_id, trial_state)
Check whether a trial state is updatable.
Parameters
• trial_id (int) – ID of the trial. Only used for an error message.
• trial_state (TrialState) – Trial state to check.
Raises
RuntimeError – If the trial is already finished.
Return type
None
create_new_study(directions, study_name=None)
Create a new study from a name.
If no name is specified, the storage class generates a name. The returned study ID is unique among all
current and deleted studies.
Parameters
• directions (Sequence[StudyDirection]) – A sequence of direction whose element
is either MAXIMIZE or MINIMIZE.
• study_name (str | None) – Name of the new study to create.
Returns
ID of the created study.
Raises
optuna.exceptions.DuplicatedStudyError – If a study with the same study_name
already exists.
Return type
int
create_new_trial(study_id, template_trial=None)
Create and add a new trial to a study.
The returned trial ID is unique among all current and deleted trials.
Parameters
• study_id (int) – ID of the study.
• template_trial (FrozenTrial | None) – Template FrozenTrial with default user-
attributes, system-attributes, intermediate-values, and a state.
Returns
ID of the created trial.
Raises
KeyError – If no study with the matching study_id exists.
Return type
int
delete_study(study_id)
Delete a study.
Parameters
study_id (int) – ID of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
get_all_studies()
Read a list of FrozenStudy objects.
Returns
A list of FrozenStudy objects, sorted by study_id.
Return type
List[FrozenStudy]
get_study_directions(study_id)
Read whether a study maximizes or minimizes an objective.
Parameters
study_id (int) – ID of a study.
Returns
Optimization directions list of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
List[StudyDirection]
get_study_id_from_name(study_name)
Read the ID of a study.
Parameters
study_name (str) – Name of the study.
Returns
ID of the study.
Raises
KeyError – If no study with the matching study_name exists.
Return type
int
get_study_name_from_id(study_id)
Read the study name of a study.
Parameters
study_id (int) – ID of the study.
Returns
Name of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
str
get_study_system_attrs(study_id)
Read the optuna-internal attributes of a study.
Parameters
study_id (int) – ID of the study.
Returns
Dictionary with the optuna-internal attributes of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
Dict[str, Any]
get_study_user_attrs(study_id)
Read the user-defined attributes of a study.
Parameters
study_id (int) – ID of the study.
Returns
Dictionary with the user attributes of the study.
Raises
KeyError – If no study with the matching study_id exists.
Return type
Dict[str, Any]
get_trial(trial_id)
Read a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Trial with a matching trial ID.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
FrozenTrial
get_trial_id_from_study_id_trial_number(study_id, trial_number)
Read the trial ID of a trial.
Parameters
• study_id (int) – ID of the study.
• trial_number (int) – Number of the trial.
Returns
ID of the trial.
Raises
KeyError – If no trial with the matching study_id and trial_number exists.
Return type
int
get_trial_number_from_id(trial_id)
Read the trial number of a trial.
Note: The trial number is only unique within a study, and is sequential.
Parameters
trial_id (int) – ID of the trial.
Returns
Number of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
int
get_trial_param(trial_id, param_name)
Read the parameter of a trial.
Parameters
• trial_id (int) – ID of the trial.
• param_name (str) – Name of the parameter.
Returns
Internal representation of the parameter.
Raises
KeyError – If no trial with the matching trial_id exists, or if the trial has no parameter with the matching param_name.
Return type
float
get_trial_params(trial_id)
Read the parameter dictionary of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary of parameters. Keys are parameter names and values are internal representations
of the parameter values.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
get_trial_system_attrs(trial_id)
Read the optuna-internal attributes of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary with the optuna-internal attributes of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
get_trial_user_attrs(trial_id)
Read the user-defined attributes of a trial.
Parameters
trial_id (int) – ID of the trial.
Returns
Dictionary with the user-defined attributes of the trial.
Raises
KeyError – If no trial with the matching trial_id exists.
Return type
Dict[str, Any]
remove_session()
Clean up all connections to a database.
Return type
None
set_study_system_attr(study_id, key, value)
Register an optuna-internal attribute to a study.
This method overwrites any existing attribute.
Parameters
• study_id (int) – ID of the study.
• key (str) – Attribute key.
• value (Mapping[str, Mapping[str, JSONSerializable] | Sequence[JSONSerializable] | str | int | float | bool | None] | Sequence[Mapping[str, JSONSerializable] | Sequence[JSONSerializable] | str | int | float | bool | None] | str | int | float | bool | None) – Attribute value. It should be JSON serializable.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
set_study_user_attr(study_id, key, value)
Register a user-defined attribute to a study.
This method overwrites any existing attribute.
Parameters
• study_id (int) – ID of the study.
• key (str) – Attribute key.
• value (Any) – Attribute value. It should be JSON serializable.
Raises
KeyError – If no study with the matching study_id exists.
Return type
None
set_trial_intermediate_value(trial_id, step, intermediate_value)
Report an intermediate value of an objective function.
This method overwrites any existing intermediate value associated with the given step.
Parameters
• trial_id (int) – ID of the trial.
• step (int) – Step of the trial (e.g., the epoch when training a neural network).
• intermediate_value (float) – Intermediate value corresponding to the step.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
None
set_trial_param(trial_id, param_name, param_value_internal, distribution)
Set a parameter to a trial.
Parameters
• trial_id (int) – ID of the trial.
• param_name (str) – Name of the parameter.
• param_value_internal (float) – Internal representation of the parameter value.
• distribution (BaseDistribution) – Sampled distribution of the parameter.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
None
set_trial_state_values(trial_id, state, values=None)
Update the state and values of a trial.
Set the return values of an objective function via the values argument. If values is not None, this method
overwrites any existing trial values.
Parameters
• trial_id (int) – ID of the trial.
• state (TrialState) – New state of the trial.
• values (Sequence[float] | None) – Values of the objective function.
Returns
True if the state is successfully updated. False if the state is kept the same. The latter
happens when this method tries to update the state of a RUNNING trial to RUNNING.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
bool
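The boolean contract described above can be sketched with a minimal state-update function. This is a hypothetical model, not Optuna's storage code: updating a RUNNING trial to RUNNING is a no-op that returns False, any other valid transition returns True, and finished trials reject updates.

```python
FINISHED = ("COMPLETE", "FAIL", "PRUNED")


def update_state(current, new):
    """Return (resulting_state, updated_flag) for a trial state transition."""
    if current in FINISHED:
        raise RuntimeError("Cannot update a finished trial.")
    if current == "RUNNING" and new == "RUNNING":
        return current, False  # state kept the same
    return new, True
```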
set_trial_system_attr(trial_id, key, value)
Set an optuna-internal attribute to a trial.
This method overwrites any existing attribute.
Parameters
• trial_id (int) – ID of the trial.
• key (str) – Attribute key.
• value (Mapping[str, Mapping[str, JSONSerializable] | Sequence[JSONSerializable] | str | int | float | bool | None] | Sequence[Mapping[str, JSONSerializable] | Sequence[JSONSerializable] | str | int | float | bool | None] | str | int | float | bool | None) – Attribute value. It should be JSON serializable.
Raises
• KeyError – If no trial with the matching trial_id exists.
• RuntimeError – If the trial is already finished.
Return type
None
optuna.storages.JournalFileStorage
class optuna.storages.JournalFileStorage(file_path, lock_obj=None)
File storage class for the journal log backend.
Methods
append_logs(logs)
Append logs to the backend.
Parameters
logs (List[Dict[str, Any]]) – A list that contains json-serializable logs.
Return type
None
read_logs(log_number_from)
Read logs with a log number greater than or equal to log_number_from.
If log_number_from is 0, read all the logs.
Parameters
log_number_from (int) – A non-negative integer value indicating which logs to read.
Returns
Logs with log number greater than or equal to log_number_from.
Return type
List[Dict[str, Any]]
optuna.storages.JournalFileSymlinkLock
class optuna.storages.JournalFileSymlinkLock(filepath)
Lock class for synchronizing processes for NFSv2 or later.
On acquiring the lock, the link system call is invoked to create an exclusive file. The file is deleted when the
lock is released. In NFS environments prior to NFSv3, use this class instead of JournalFileOpenLock.
Parameters
filepath (str) – The path of the file whose race condition must be protected.
Methods
acquire()
Acquire a lock in a blocking way by creating a symbolic link of a file.
Returns
True if it succeeded in creating a symbolic link of self._lock_target_file.
Return type
bool
release()
Release a lock by removing the symbolic link.
Return type
None
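The symlink trick can be sketched with the standard library. This is an illustrative, non-blocking variant under hypothetical names (the real class blocks, retrying until the link is created): creating a symbolic link is atomic even on old NFS, so whichever process creates it first holds the lock.

```python
import errno
import os


class SymlinkLock:
    """Sketch of symlink-based file locking (non-blocking variant)."""

    def __init__(self, filepath):
        self._target = filepath
        self._lock_path = filepath + ".lock"

    def try_acquire(self):
        try:
            # symlink() either creates the link or fails atomically.
            os.symlink(self._target, self._lock_path)
            return True
        except OSError as e:
            if e.errno == errno.EEXIST:
                return False  # another process holds the lock
            raise

    def release(self):
        os.unlink(self._lock_path)  # deleting the link releases the lock
```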
optuna.storages.JournalFileOpenLock
class optuna.storages.JournalFileOpenLock(filepath)
Lock class for synchronizing processes for NFSv3 or later.
On acquiring the lock, the open system call is invoked with the O_EXCL option to create an exclusive file. The
file is deleted when the lock is released. This class is only supported when using NFSv3 or later on kernel 2.6 or
later. In earlier NFS environments, use JournalFileSymlinkLock instead.
Parameters
filepath (str) – The path of the file whose race condition must be protected.
Methods
acquire()
Acquire a lock in a blocking way by creating a lock file.
Returns
True if it succeeded in creating self._lock_file.
Return type
bool
release()
Release a lock by removing the created file.
Return type
None
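The O_EXCL mechanism can likewise be sketched with the standard library. This is an illustrative, non-blocking variant with hypothetical names (the real class retries until it succeeds): open() with O_CREAT | O_EXCL fails atomically if the lock file already exists.

```python
import errno
import os


class OpenLock:
    """Sketch of O_EXCL-based file locking (non-blocking variant)."""

    def __init__(self, filepath):
        self._lock_file = filepath + ".lock"

    def try_acquire(self):
        try:
            # O_CREAT | O_EXCL: create the file, failing if it already exists.
            fd = os.open(self._lock_file, os.O_CREAT | os.O_EXCL)
            os.close(fd)
            return True
        except OSError as e:
            if e.errno == errno.EEXIST:
                return False  # another process holds the lock
            raise

    def release(self):
        os.unlink(self._lock_file)  # deleting the file releases the lock
```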
optuna.storages.JournalRedisStorage
Note: Added in v3.1.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.1.0.
Methods
append_logs(logs)
Append logs to the backend.
Parameters
logs (List[Dict[str, Any]]) – A list that contains json-serializable logs.
Return type
None
load_snapshot()
Load snapshot from the backend.
Returns
A serialized snapshot (bytes) if found, otherwise None.
Return type
bytes | None
read_logs(log_number_from)
Read logs with a log number greater than or equal to log_number_from.
If log_number_from is 0, read all the logs.
Parameters
log_number_from (int) – A non-negative integer value indicating which logs to read.
Returns
Logs with log number greater than or equal to log_number_from.
Return type
List[Dict[str, Any]]
save_snapshot(snapshot)
Save snapshot to the backend.
Parameters
snapshot (bytes) – A serialized snapshot (bytes)
Return type
None
6.3.13 optuna.study
The study module implements the Study object and related functions. A public constructor is available for the Study
class, but direct use of this constructor is not recommended. Instead, library users should create and load a Study using
create_study() and load_study() respectively.
optuna.study.Study
class optuna.study.Study(study_name, storage, sampler=None, pruner=None)
A study corresponds to an optimization task, i.e., a set of trials.
This object provides interfaces to run a new Trial, access trials' history, and set/get user-defined attributes of the study itself.
Note that the direct use of this constructor is not recommended. To create and load a study, please refer to the documentation of create_study() and load_study() respectively.
Parameters
• study_name (str) –
• storage (str | storages.BaseStorage) –
• sampler ('samplers.BaseSampler' | None) –
• pruner (pruners.BasePruner | None) –
add_trial(trial)
Add trial to study.
The trial is validated before being added.
Example
import optuna
from optuna.distributions import FloatDistribution
def objective(trial):
x = trial.suggest_float("x", 0, 10)
return x**2
study = optuna.create_study()
assert len(study.trials) == 0
trial = optuna.trial.create_trial(
params={"x": 2.0},
distributions={"x": FloatDistribution(0, 10)},
value=4.0,
)
study.add_trial(trial)
assert len(study.trials) == 1
study.optimize(objective, n_trials=3)
assert len(study.trials) == 4
other_study = optuna.create_study()
other_study.optimize(objective, n_trials=2)
assert len(other_study.trials) == len(study.trials) + 2
See also:
This method should in general be used to add already evaluated trials (trial.state.is_finished()
== True). To queue trials for evaluation, please refer to enqueue_trial().
See also:
See create_trial() for how to create trials.
See also:
Please refer to add_trial_tutorial for the tutorial of specifying hyperparameters with the evaluated value
manually.
Parameters
trial (FrozenTrial) – Trial to add.
Return type
None
add_trials(trials)
Add trials to study.
The trials are validated before being added.
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", 0, 10)
return x**2
study = optuna.create_study()
study.optimize(objective, n_trials=3)
assert len(study.trials) == 3
other_study = optuna.create_study()
other_study.add_trials(study.trials)
assert len(other_study.trials) == len(study.trials)
other_study.optimize(objective, n_trials=2)
assert len(other_study.trials) == len(study.trials) + 2
See also:
See add_trial() for addition of each trial.
Parameters
trials (Iterable[FrozenTrial]) – Trials to add.
Return type
None
ask(fixed_distributions=None)
Create a new trial from which hyperparameters can be suggested.
This method is part of an alternative to optimize() that allows controlling the lifetime of a trial outside
the scope of func. Each call to this method should be followed by a call to tell() to finish the created
trial.
See also:
The ask_and_tell tutorial provides use-cases with examples.
Example
import optuna
study = optuna.create_study()
trial = study.ask()
x = trial.suggest_float("x", -1, 1)
study.tell(trial, x**2)
Example
import optuna
study = optuna.create_study()
distributions = {
"optimizer": optuna.distributions.CategoricalDistribution(["adam", "sgd"]),
"lr": optuna.distributions.FloatDistribution(0.0001, 0.1, log=True),
}
trial = study.ask(fixed_distributions=distributions)
# `optimizer` and `lr` are already suggested and accessible with `trial.params`.
Parameters
fixed_distributions (dict[str, BaseDistribution] | None) – A dictionary con-
taining the parameter names and parameter’s distributions. Each parameter in this dictionary
is automatically suggested for the returned trial, even when the suggest method is not ex-
plicitly invoked by the user. If this argument is set to None, no parameter is automatically
suggested.
Returns
A Trial.
Return type
Trial
property best_params: dict[str, Any]
Return parameters of the best trial in the study.
Returns
A dictionary containing parameters of the best trial.
Note: This feature can only be used for single-objective optimization. If your study is multi-objective, use
best_trials instead.
property best_trial: FrozenTrial
Return the best trial in the study.
Returns
A FrozenTrial object of the best trial.
See also:
The reuse_best_trial tutorial provides a detailed example of how to use this method.
property best_trials: list[FrozenTrial]
Return trials located at the Pareto front in the study.
A trial is located at the Pareto front if there are no trials that dominate the trial. A trial t0 is said
to dominate another trial t1 if all(v0 <= v1 for v0, v1 in zip(t0.values, t1.values)) and
any(v0 < v1 for v0, v1 in zip(t0.values, t1.values)) hold.
Returns
A list of FrozenTrial objects.
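The dominance rule quoted above can be written out directly. This is a minimal sketch over plain value tuples (minimization assumed, matching the "<=" form in the text), not Optuna's implementation:

```python
def dominates(v0, v1):
    """True if objective vector v0 dominates v1 (minimization)."""
    pairs = list(zip(v0, v1))
    return all(a <= b for a, b in pairs) and any(a < b for a, b in pairs)


def pareto_front(values):
    """Return the value tuples not dominated by any other tuple."""
    return [v for v in values if not any(dominates(u, v) for u in values if u != v)]
```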
property best_value: float
Return the best objective value in the study.
Returns
A float representing the best objective value.
property direction: StudyDirection
Return the direction of the study.
Note: This feature can only be used for single-objective optimization. If your study is multi-objective, use
directions instead.
Returns
A StudyDirection object.
enqueue_trial(params, user_attrs=None, skip_if_exists=False)
Enqueue a trial with given parameter values.
You can fix the next sampling parameters which will be evaluated in your objective function.
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", 0, 10)
return x**2
study = optuna.create_study()
study.enqueue_trial({"x": 5})
study.enqueue_trial({"x": 0}, user_attrs={"memo": "optimal"})
study.optimize(objective, n_trials=2)
Parameters
• params (dict[str, Any]) – Parameter values to pass your objective function.
• user_attrs (dict[str, Any] | None) – A dictionary of user-specific attributes other
than params.
• skip_if_exists (bool) – When True, prevents duplicate trials from being enqueued
again.
Note: This method might produce duplicated trials if called simultaneously by multiple
processes with the same params dict.
Return type
None
See also:
Please refer to enqueue_trial_tutorial for the tutorial of specifying hyperparameters manually.
get_trials(deepcopy=True, states=None)
Return all trials in the study.
The returned trials are ordered by trial number.
See also:
See trials for related property.
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", -1, 1)
return x**2
study = optuna.create_study()
study.optimize(objective, n_trials=3)
trials = study.get_trials()
assert len(trials) == 3
Parameters
• deepcopy (bool) – Flag to control whether to apply copy.deepcopy() to the trials. Note
that if you set the flag to False, you shouldn't mutate any fields of the returned trials;
otherwise the internal state of the study may become corrupted and unexpected behavior may occur.
• states (Container[TrialState] | None) – Trial states to filter on. If None, include
all states.
Returns
A list of FrozenTrial objects.
Return type
list[FrozenTrial]
property metric_names: list[str] | None
Return metric names.
Returns
A list with names for each dimension of the returned values of the objective function.
optimize(func, n_trials=None, timeout=None, n_jobs=1, catch=(), callbacks=None, gc_after_trial=False, show_progress_bar=False)
Optimize an objective function.
Optimization is done by choosing a suitable set of hyperparameter values from a given range.
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", -1, 1)
return x**2
study = optuna.create_study()
study.optimize(objective, n_trials=3)
Parameters
• func (Callable[[Trial], float | Sequence[float]]) – A callable that imple-
ments objective function.
• n_trials (int | None) – The number of trials for each process. None represents no
limit in terms of the number of trials. The study continues to create trials until the number
of trials reaches n_trials, timeout period elapses, stop() is called, or a termination
signal such as SIGTERM or Ctrl+C is received.
See also:
optuna.study.MaxTrialsCallback can ensure how many times trials will be per-
formed across all processes.
• timeout (float | None) – Stop study after the given number of second(s). None rep-
resents no limit in terms of elapsed time. The study continues to create trials until the
number of trials reaches n_trials, the timeout period elapses, stop() is called, or a termi-
nation signal such as SIGTERM or Ctrl+C is received.
• n_jobs (int) – The number of parallel jobs. If this argument is set to -1, the number is
set to CPU count.
Note: n_jobs allows parallelization using threading and may suffer from Python’s GIL.
It is recommended to use process-based parallelization if func is CPU bound.
set_metric_names(metric_names)
Set metric names.
This method names each dimension of the returned values of the objective function. It is particularly useful
in multi-objective optimization. The metric names are mainly referenced by the visualization functions.
Example
import optuna
import pandas
def objective(trial):
x = trial.suggest_float("x", 0, 10)
return x**2, x + 1
study = optuna.create_study(directions=["minimize", "minimize"])
study.set_metric_names(["x**2", "x+1"])
study.optimize(objective, n_trials=3)
df = study.trials_dataframe(multi_index=True)
assert isinstance(df, pandas.DataFrame)
assert list(df.get("values").keys()) == ["x**2", "x+1"]
See also:
The names set by this method are used in trials_dataframe() and plot_pareto_front().
Parameters
metric_names (list[str]) – A list of metric names for the objective function.
Return type
None
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without
prior notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
set_system_attr(key, value)
Set a system attribute to the study.
Note that Optuna internally uses this method to save system messages. Please use set_user_attr() to
set users’ attributes.
Parameters
• key (str) – A key string of the attribute.
• value (Any) – A value of the attribute. The value should be JSON serializable.
Return type
None
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.1.0.
set_user_attr(key, value)
Set a user attribute to the study.
See also:
See user_attrs for related attribute.
See also:
See the recipe on attributes.
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", 0, 1)
y = trial.suggest_float("y", 0, 1)
return x**2 + y**2
study = optuna.create_study()
study.set_user_attr("objective function", "quadratic function")
study.set_user_attr("dimensions", 2)
study.set_user_attr("contributors", ["Akiba", "Sano"])
assert study.user_attrs == {
"objective function": "quadratic function",
"dimensions": 2,
"contributors": ["Akiba", "Sano"],
}
Parameters
• key (str) – A key string of the attribute.
• value (Any) – A value of the attribute. The value should be JSON serializable.
Return type
None
stop()
Exit from the current optimization loop after the running trials finish.
This method lets the running optimize() method return immediately after all trials which the optimize()
method spawned finish. This method does not affect any behaviors of parallel or successive study pro-
cesses. This method only works when it is called inside an objective function or callback.
Example
import optuna
def objective(trial):
if trial.number == 4:
trial.study.stop()
x = trial.suggest_float("x", 0, 10)
return x**2
study = optuna.create_study()
study.optimize(objective, n_trials=10)
assert len(study.trials) == 5
Return type
None
property system_attrs: dict[str, Any]
Return system attributes.
Returns
A dictionary containing all system attributes.
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.1.0.
tell(trial, values=None, state=None, skip_if_finished=False)
Finish a trial created with ask().
Example
import optuna
from optuna.trial import TrialState
def f(x):
return (x - 2) ** 2
def df(x):
return 2 * x - 4
study = optuna.create_study()
n_trials = 30
for _ in range(n_trials):
    trial = study.ask()
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    x = 3
    # Iterate to a maximum of 128 times.
    for step in range(128):
        y = f(x)
        trial.report(y, step=step)
        # Terminate if y is pruned.
        if trial.should_prune():
            # Finish the trial with the pruned state.
            study.tell(trial, state=TrialState.PRUNED)
            break
        gy = df(x)
        x -= gy * lr
    else:
        # Finish the trial with the final value after all iterations.
        study.tell(trial, y)
Parameters
• trial (Trial | int) – A Trial object or a trial number.
• values (float | Sequence[float] | None) – Optional objective value or a sequence
of such values in case the study is used for multi-objective optimization. Argument must
be provided if state is COMPLETE and should be None if state is FAIL or PRUNED.
• state (TrialState | None) – State to be reported. Must be None, COMPLETE, FAIL or
PRUNED. If state is None, it will be updated to COMPLETE or FAIL depending on whether
the validation for the reported values succeeds or not.
• skip_if_finished (bool) – Flag to control whether exception should be raised when
values for already finished trial are told. If True, tell is skipped without any error when
the trial is already finished.
Returns
A FrozenTrial representing the resulting trial. The returned trial is deep copied, thus the user
can modify it as needed.
Return type
FrozenTrial
trials_dataframe(attrs=('number', 'value', 'datetime_start', 'datetime_complete', 'duration', 'params', 'user_attrs', 'system_attrs', 'state'), multi_index=False)
Export trials as a pandas DataFrame.
The DataFrame provides various features to analyze studies. It is also useful to draw a histogram of objective values and to export trials as a CSV file.
Example
import optuna
import pandas
def objective(trial):
x = trial.suggest_float("x", -1, 1)
return x**2
study = optuna.create_study()
study.optimize(objective, n_trials=3)
# Create a dataframe from the study.
df = study.trials_dataframe()
assert isinstance(df, pandas.DataFrame)
assert df.shape[0] == 3  # n_trials.
Parameters
• attrs (tuple[str, ...]) – Specifies field names of FrozenTrial to include them to
a DataFrame of trials.
• multi_index (bool) – Specifies whether the returned DataFrame employs MultiIndex or
not. Columns that are hierarchical by nature such as (params, x) will be flattened to
params_x when set to False.
Returns
A pandas DataFrame of trials in the Study.
Return type
pd.DataFrame
Note: If value is in attrs during multi-objective optimization, it is implicitly replaced with values.
Note: If set_metric_names() is called, the value or values is implicitly replaced with the dictionary
with the objective name as key and the objective value as value.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    y = trial.suggest_float("y", 0, 1)
    return x**2 + y**2


study = optuna.create_study()

study.set_user_attr("objective function", "quadratic function")
study.set_user_attr("dimensions", 2)
study.set_user_attr("contributors", ["Akiba", "Sano"])

assert study.user_attrs == {
    "objective function": "quadratic function",
    "dimensions": 2,
    "contributors": ["Akiba", "Sano"],
}
Returns
A dictionary containing all user attributes.
optuna.study.create_study
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", 0, 10)
return x**2
study = optuna.create_study()
study.optimize(objective, n_trials=3)
Parameters
• storage (str | storages.BaseStorage | None) – Database URL. If this argument is
set to None, in-memory storage is used, and the Study will not be persistent.
Note:
When a database URL is passed, Optuna internally uses SQLAlchemy to handle the
database. Please refer to SQLAlchemy’s document for further details. If you want to
specify non-default options to SQLAlchemy Engine, you can instantiate RDBStorage
with your desired options and pass it to the storage argument instead of a URL.
Note: If none of direction and directions are specified, the direction of the study is set to
“minimize”.
See also:
optuna.create_study() is an alias of optuna.study.create_study().
See also:
The rdb tutorial provides concrete examples to save and resume optimization using RDB.
optuna.study.load_study
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 10)
    return x**2


study = optuna.create_study(study_name="my_study", storage="sqlite:///example.db")
study.optimize(objective, n_trials=3)
loaded_study = optuna.load_study(study_name="my_study", storage="sqlite:///example.db")
Parameters
• study_name (str | None) – Study’s name. Each study has a unique name as an identi-
fier. If None, checks whether the storage contains a single study, and if so loads that study.
study_name is required if there are multiple studies in the storage.
• storage (str | storages.BaseStorage) – Database URL such as sqlite:///
example.db. Please see also the documentation of create_study() for further details.
• sampler ('samplers.BaseSampler' | None) – A sampler object that implements a back-
ground algorithm for value suggestion. If None is specified, TPESampler is used as the
default. See also samplers.
• pruner (pruners.BasePruner | None) – A pruner object that decides early stopping of
unpromising trials. If None is specified, MedianPruner is used as the default. See also
pruners.
Return type
Study
See also:
optuna.load_study() is an alias of optuna.study.load_study().
optuna.study.delete_study
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study(study_name="example-study", storage="sqlite:///example.db")
study.optimize(objective, n_trials=3)

optuna.delete_study(study_name="example-study", storage="sqlite:///example.db")
Parameters
• study_name (str) – Study’s name.
• storage (str | BaseStorage) – Database URL such as sqlite:///example.db.
Please see also the documentation of create_study() for further details.
Return type
None
See also:
optuna.delete_study() is an alias of optuna.study.delete_study().
optuna.study.copy_study
Note: copy_study() copies a study even while its optimization is still running, which means the copied
study may contain trials that are not yet finished.
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", -10, 10)
return (x - 2) ** 2
study = optuna.create_study(
study_name="example-study",
storage="sqlite:///example.db",
)
study.optimize(objective, n_trials=3)
optuna.copy_study(
from_study_name="example-study",
from_storage="sqlite:///example.db",
to_storage="sqlite:///example_copy.db",
)
study = optuna.load_study(
study_name=None,
storage="sqlite:///example_copy.db",
)
Parameters
• from_study_name (str) – Name of study.
• from_storage (str | BaseStorage) – Source database URL such as sqlite:///
example.db. Please see also the documentation of create_study() for further details.
• to_storage (str | BaseStorage) – Destination database URL.
• to_study_name (str | None) – Name of the created study. If omitted,
from_study_name is used.
Raises
DuplicatedStudyError – If a study with a conflicting name already exists in the destination
storage.
Return type
None
optuna.study.get_all_study_names
optuna.study.get_all_study_names(storage)
Get all study names stored in a specified storage.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study(study_name="example-study", storage="sqlite:///example.db")
study.optimize(objective, n_trials=3)

study_names = optuna.study.get_all_study_names(storage="sqlite:///example.db")
assert len(study_names) == 1
Parameters
storage (str | BaseStorage) – Database URL such as sqlite:///example.db. Please
see also the documentation of create_study() for further details.
Returns
List of all study names in the storage.
Return type
list[str]
See also:
optuna.get_all_study_names() is an alias of optuna.study.get_all_study_names().
optuna.study.get_all_study_summaries
optuna.study.get_all_study_summaries(storage, include_best_trial=True)
Get the history of all studies stored in a specified storage.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study(study_name="example-study", storage="sqlite:///example.db")
study.optimize(objective, n_trials=3)

study_summaries = optuna.study.get_all_study_summaries(storage="sqlite:///example.db")
assert len(study_summaries) == 1
study_summary = study_summaries[0]
assert study_summary.study_name == "example-study"
Parameters
• storage (str | BaseStorage) – Database URL such as sqlite:///example.db.
Please see also the documentation of create_study() for further details.
• include_best_trial (bool) – Include the best trials if they exist. This potentially increases
the number of queries and may take longer to fetch summaries depending on the storage.
Returns
List of study history summarized as StudySummary objects.
Return type
list[StudySummary]
See also:
optuna.get_all_study_summaries() is an alias of optuna.study.get_all_study_summaries().
optuna.study.MaxTrialsCallback
Example
import optuna
from optuna.study import MaxTrialsCallback
from optuna.trial import TrialState
def objective(trial):
x = trial.suggest_float("x", -1, 1)
return x**2
study = optuna.create_study()
study.optimize(
objective,
callbacks=[MaxTrialsCallback(10, states=(TrialState.COMPLETE,))],
)
Parameters
• n_trials (int) – The max number of trials. Must be set to an integer.
• states (Container[TrialState] | None) – Tuple of the TrialState to be counted
towards the max trials limit. Default value is (TrialState.COMPLETE,). If None, count
all states.
optuna.study.StudyDirection
Methods
Attributes
NOT_SET
MINIMIZE
MAXIMIZE
as_integer_ratio()
Return integer ratio.
Return a pair of integers, whose ratio is exactly equal to the original int and with a positive denominator.
>>> (10).as_integer_ratio()
(10, 1)
>>> (-10).as_integer_ratio()
(-10, 1)
>>> (0).as_integer_ratio()
(0, 1)
bit_count()
Number of ones in the binary representation of the absolute value of self.
Also known as the population count.
>>> bin(13)
'0b1101'
>>> (13).bit_count()
3
bit_length()
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
conjugate()
Returns self, the complex conjugate of any int.
denominator
the denominator of a rational number in lowest terms
from_bytes(bytes, byteorder='big', *, signed=False)
Return the integer represented by the given array of bytes.
bytes
Holds the array of bytes to convert. The argument must either support the buffer protocol or be an
iterable object producing bytes. Bytes and bytearray are examples of built-in objects that support the
buffer protocol.
byteorder
The byte order used to represent the integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most significant byte is at the end of the byte
array. To request the native byte order of the host system, use sys.byteorder as the byte order value.
Default is 'big'.
signed
Indicates whether two’s complement is used to represent the integer.
imag
the imaginary part of a complex number
numerator
the numerator of a rational number in lowest terms
real
the real part of a complex number
to_bytes(length=1, byteorder='big', *, signed=False)
Return an array of bytes representing an integer.
length
Length of bytes object to use. An OverflowError is raised if the integer is not representable with the
given number of bytes. Default is length 1.
byteorder
The byte order used to represent the integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most significant byte is at the end of the byte
array. To request the native byte order of the host system, use sys.byteorder as the byte order value.
Default is 'big'.
signed
Determines whether two’s complement is used to represent the integer. If signed is False and a negative
integer is given, an OverflowError is raised.
optuna.study.StudySummary
• study_id (int) –
• directions (Sequence[StudyDirection] | None) –
study_name
Name of the Study.
direction
StudyDirection of the Study.
directions
A sequence of StudyDirection objects.
best_trial
optuna.trial.FrozenTrial with best objective value in the Study.
user_attrs
Dictionary that contains the attributes of the Study set with optuna.study.Study.set_user_attr().
system_attrs
Dictionary that contains the attributes of the Study internally set by Optuna.
Warning: Deprecated in v3.1.0. system_attrs argument will be removed in the future. The removal
of this feature is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.
com/optuna/optuna/releases/tag/v3.1.0.
n_trials
The number of trials run in the Study.
datetime_start
Datetime where the Study started.
Attributes
direction
directions
system_attrs
6.3.14 optuna.terminator
The terminator module implements a mechanism for automatically terminating the optimization process. It is
accompanied by a callback class for the termination and by evaluators for the estimated room for improvement
in the optimization and for the statistical error of the objective function. The terminator stops the optimization
process when the estimated potential improvement is smaller than the statistical error.
optuna.terminator.BaseTerminator
class optuna.terminator.BaseTerminator
Base class for terminators.
Methods
should_terminate(study)
optuna.terminator.Terminator
Parameters
Example
import logging
import sys

from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold

import optuna
from optuna.terminator import Terminator
from optuna.terminator import report_cross_validation_scores

logging.basicConfig(stream=sys.stdout, level=logging.INFO)

study = optuna.create_study(direction="maximize")
terminator = Terminator()
min_n_trials = 20

while True:
    trial = study.ask()

    X, y = load_wine(return_X_y=True)

    clf = RandomForestClassifier(
        max_depth=trial.suggest_int("max_depth", 2, 32),
        min_samples_split=trial.suggest_float("min_samples_split", 0, 1),
        criterion=trial.suggest_categorical("criterion", ("gini", "entropy")),
    )

    scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True))
    report_cross_validation_scores(trial, scores)

    value = scores.mean()
    logging.info(f"Trial #{trial.number} finished with value {value}.")
    study.tell(trial, value)

    if trial.number > min_n_trials and terminator.should_terminate(study):
        logging.info("Terminated by Optuna Terminator!")
        break
See also:
Please refer to TerminatorCallback for how to use the terminator mechanism with the optimize() method.
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
Methods
should_terminate(study)
Judge whether the study should be terminated based on the reported values.
Parameters
study (Study) –
Return type
bool
optuna.terminator.BaseImprovementEvaluator
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
Methods
evaluate(trials, study_direction)
optuna.terminator.RegretBoundEvaluator
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
Methods
evaluate(trials, study_direction)
get_preprocessing([add_random_inputs])
optuna.terminator.BestValueStagnationEvaluator
class optuna.terminator.BestValueStagnationEvaluator(max_stagnation_trials=30)
Evaluates the stagnation period of the best value in an optimization process.
This class is initialized with a maximum stagnation period (max_stagnation_trials) and evaluates the number
of trials remaining before this maximum period of allowed stagnation is reached. If the remaining trials reach
zero, the study is terminated. Therefore, the default error evaluator is instantiated as StaticErrorEvaluator(constant=0).
Parameters
max_stagnation_trials (int) – The maximum number of trials allowed for stagnation.
Note: Added in v3.4.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.4.0.
Methods
evaluate(trials, study_direction)
optuna.terminator.BaseErrorEvaluator
class optuna.terminator.BaseErrorEvaluator
Base class for error evaluators.
Methods
evaluate(trials, study_direction)
optuna.terminator.CrossValidationErrorEvaluator
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
Methods
evaluate(trials, study_direction)
Evaluate the statistical error of the objective function based on cross-validation.
Parameters
• trials (list[FrozenTrial]) – A list of trials to consider. The best trial in trials is
used to compute the statistical error.
• study_direction (StudyDirection) – The direction of the study.
Returns
A float representing the statistical error of the objective function.
Return type
float
optuna.terminator.StaticErrorEvaluator
class optuna.terminator.StaticErrorEvaluator(constant)
An error evaluator that always returns a constant value.
This evaluator can be used to terminate the optimization when the evaluated improvement potential is below the
fixed threshold.
Parameters
constant (float) – A user-specified constant value to always return as an error estimate.
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
Methods
evaluate(trials, study_direction)
optuna.terminator.TerminatorCallback
class optuna.terminator.TerminatorCallback(terminator=None)
A callback that terminates the optimization using Terminator.
This class implements a callback which wraps Terminator so that it can be used with the optimize() method.
Parameters
terminator (BaseTerminator | None) – A terminator object which determines whether to
terminate the optimization by assessing the room for optimization and statistical error. Defaults
to a Terminator object with default improvement_evaluator and error_evaluator.
Example
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold

import optuna
from optuna.terminator import TerminatorCallback
from optuna.terminator import report_cross_validation_scores


def objective(trial):
    X, y = load_wine(return_X_y=True)

    clf = RandomForestClassifier(
        max_depth=trial.suggest_int("max_depth", 2, 32),
        min_samples_split=trial.suggest_float("min_samples_split", 0, 1),
        criterion=trial.suggest_categorical("criterion", ("gini", "entropy")),
    )
    scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True))
    report_cross_validation_scores(trial, scores)
    return scores.mean()


study = optuna.create_study(direction="maximize")
terminator = TerminatorCallback()
study.optimize(objective, n_trials=50, callbacks=[terminator])
See also:
Please refer to Terminator for the details of the terminator mechanism.
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
optuna.terminator.report_cross_validation_scores
optuna.terminator.report_cross_validation_scores(trial, scores)
A function to report cross-validation scores of a trial.
This function should be called within the objective function to report the cross-validation scores. The reported
scores are used to evaluate the statistical error for termination judgement.
Parameters
• trial (Trial) – A Trial object to report the cross-validation scores.
• scores (list[float]) – The cross-validation scores of the trial.
Return type
None
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
6.3.15 optuna.trial
optuna.trial.Trial
Methods
Attributes
Note: The reported value is converted to float type by applying the float() function internally. Thus, it
accepts all float-like types (e.g., numpy.float32). If the conversion fails, a TypeError is raised.
Note: If this method is called multiple times at the same step in a trial, only the value reported the first
time is stored; values reported at that step thereafter are ignored.
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

import optuna

X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)


def objective(trial):
    clf = SGDClassifier(random_state=0)
    for step in range(100):
        clf.partial_fit(X_train, y_train, np.unique(y))
        intermediate_value = clf.score(X_valid, y_valid)
        trial.report(intermediate_value, step=step)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return clf.score(X_valid, y_valid)


study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=3)
Parameters
• value (float) – A value returned from the objective function.
• step (int) – Step of the trial (e.g., Epoch of neural network training). Note that pruners
assume that step starts at zero. For example, MedianPruner simply checks if step is
less than n_warmup_steps as the warmup mechanism. step must be a positive integer.
Return type
None
set_system_attr(key, value)
Set system attributes to the trial.
Note that Optuna internally uses this method to save system messages such as failure reason of trials. Please
use set_user_attr() to set users’ attributes.
Parameters
• key (str) – A key string of the attribute.
• value (Any) – A value of the attribute. The value should be JSON serializable.
Return type
None
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.1.0.
set_user_attr(key, value)
Set user attributes to the trial.
The user attributes in the trial can be accessed via optuna.trial.Trial.user_attrs().
See also:
See the recipe on attributes.
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
def objective(trial):
trial.set_user_attr("BATCHSIZE", 128)
momentum = trial.suggest_float("momentum", 0, 1.0)
clf = MLPClassifier(
hidden_layer_sizes=(100, 50),
batch_size=trial.user_attrs["BATCHSIZE"],
momentum=momentum,
solver="sgd",
random_state=0,
)
    clf.fit(X_train, y_train)
    return clf.score(X_valid, y_valid)
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=3)
assert "BATCHSIZE" in study.best_trial.user_attrs.keys()
assert study.best_trial.user_attrs["BATCHSIZE"] == 128
Parameters
• key (str) – A key string of the attribute.
• value (Any) – A value of the attribute. The value should be JSON serializable.
Return type
None
should_prune()
Suggest whether the trial should be pruned or not.
The suggestion is made by a pruning algorithm associated with the trial and is based on previously reported
values. The algorithm can be specified when constructing a Study.
Note: If no values have been reported, the algorithm cannot make meaningful suggestions. Similarly, if
this method is called multiple times with the exact same set of reported values, the suggestions will be the
same.
See also:
Please refer to the example code in optuna.trial.Trial.report().
Returns
A boolean value. If True, the trial should be pruned according to the configured pruning
algorithm. Otherwise, the trial should continue.
Return type
bool
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
def objective(trial):
kernel = trial.suggest_categorical("kernel", ["linear", "poly", "rbf"])
clf = SVC(kernel=kernel, gamma="scale", random_state=0)
clf.fit(X_train, y_train)
    return clf.score(X_valid, y_valid)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=3)
Parameters
• name – A parameter name.
• choices – Parameter value candidates.
See also:
CategoricalDistribution.
Returns
A suggested value.
See also:
configurations tutorial describes more details and flexible usages.
suggest_discrete_uniform(name, low, high, q)
Suggest a value for the discrete parameter.
The value is sampled from the range [low, high], and the step of discretization is q. More specifically, this
method returns one of the values in the sequence low, low + q, low + 2q, ..., low + kq ≤ high, where k
denotes an integer. Note that high may be changed due to round-off errors if q is not an integer. Please
check warning messages to find the changed values.
Parameters
• name (str) – A parameter name.
• low (float) – Lower endpoint of the range of suggested values. low is included in the
range.
• high (float) – Upper endpoint of the range of suggested values. high is included in the
range.
• q (float) – A step of discretization.
Returns
A suggested float value.
Return type
float
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float(..., step=...) instead.
Example
Suggest a momentum, learning rate and scaling factor of learning rate for neural network training.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)
def objective(trial):
momentum = trial.suggest_float("momentum", 0.0, 1.0)
learning_rate_init = trial.suggest_float(
"learning_rate_init", 1e-5, 1e-3, log=True
)
power_t = trial.suggest_float("power_t", 0.2, 0.8, step=0.1)
clf = MLPClassifier(
hidden_layer_sizes=(100, 50),
momentum=momentum,
learning_rate_init=learning_rate_init,
solver="sgd",
random_state=0,
power_t=power_t,
)
    clf.fit(X_train, y_train)
    return clf.score(X_valid, y_valid)
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=3)
Parameters
• name (str) – A parameter name.
• low (float) – Lower endpoint of the range of suggested values. low is included in the
range. low must be less than or equal to high. If log is True, low must be larger than 0.
• high (float) – Upper endpoint of the range of suggested values. high is included in the
range. high must be greater than or equal to low.
• step (float | None) – A step of discretization.
Note: The step and log arguments cannot be used at the same time. To set the step
argument to a float number, set the log argument to False.
• log (bool) – A flag to sample the value from the log domain or not. If log is true, the
value is sampled from the range in the log domain. Otherwise, the value is sampled from
the range in the linear domain.
Note: The step and log arguments cannot be used at the same time. To set the log
argument to True, set the step argument to None.
Returns
A suggested float value.
Return type
float
See also:
configurations tutorial describes more details and flexible usages.
suggest_int(name, low, high, step=1, log=False)
Suggest a value for the integer parameter.
The value is sampled from the integers in [low, high].
Example
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import optuna
X, y = load_iris(return_X_y=True)
X_train, X_valid, y_train, y_valid = train_test_split(X, y)
def objective(trial):
n_estimators = trial.suggest_int("n_estimators", 50, 400)
clf = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
clf.fit(X_train, y_train)
return clf.score(X_valid, y_valid)
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=3)
Parameters
• name (str) – A parameter name.
• low (int) – Lower endpoint of the range of suggested values. low is included in the range.
low must be less than or equal to high. If log is True, low must be larger than 0.
• high (int) – Upper endpoint of the range of suggested values. high is included in the
range. high must be greater than or equal to low.
• step (int) – A step of discretization.
Note: Note that high is modified if the range is not divisible by step. Please check the
warning messages to find the changed values.
Note: The method returns one of the values in the sequence low, low + step, low + 2 *
step, ..., low + k * step ≤ high, where k denotes an integer.
Note: The step != 1 and log arguments cannot be used at the same time. To set the
step argument step ≥ 2, set the log argument to False.
• log (bool) – A flag to sample the value from the log domain or not.
Note: If log is true, the range of suggested values is first divided into grid points of
width 1. The range is then converted to a log domain, from which a value is uniformly
sampled. The sampled value is converted back to the original domain and rounded to the
nearest grid point, which determines the suggested value. For example, if low = 2 and
high = 8, the range of suggested values is [2, 3, 4, 5, 6, 7, 8], and lower values tend to be
sampled more often than higher values.
Note: The step != 1 and log arguments cannot be used at the same time. To set the
log argument to True, set the step argument to 1.
Return type
int
See also:
configurations tutorial describes more details and flexible usages.
suggest_loguniform(name, low, high)
Suggest a value for the continuous parameter.
The value is sampled from the range [low, high) in the log domain. When low = high, the value of low will
be returned.
Parameters
• name (str) – A parameter name.
• low (float) – Lower endpoint of the range of suggested values. low is included in the
range.
• high (float) – Upper endpoint of the range of suggested values. high is included in the
range.
Returns
A suggested float value.
Return type
float
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float(..., log=True) instead.
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float instead.
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.1.0.
optuna.trial.FixedTrial
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", -100, 100)
y = trial.suggest_categorical("y", [-1, 0, 1])
    return x**2 + y

assert objective(optuna.trial.FixedTrial({"x": 1, "y": 0})) == 1
Parameters
• params (Dict[str, Any]) – A dictionary containing all parameters.
• number (int) – A trial number. Defaults to 0.
Methods
report(value, step)
set_system_attr(key, value)
set_user_attr(key, value)
should_prune()
suggest_categorical()
Attributes
datetime_start
distributions
number
params
system_attrs
user_attrs
set_system_attr(key, value)
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.1.0.
Parameters
• key (str) –
• value (Any) –
Return type
None
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float(..., step=...) instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
• q (float) –
Return type
float
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float(..., log=True) instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
Return type
float
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature
is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/
optuna/releases/tag/v3.0.0.
Use suggest_float instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
Return type
float
optuna.trial.FrozenTrial
Example
import optuna
def objective(trial):
x = trial.suggest_float("x", -1, 1)
return x**2
study = optuna.create_study()
study.optimize(objective, n_trials=3)
Note: Instances are mutable, despite the name. For instance, set_user_attr() will update user attributes of
objects in-place.
Example:
Overwritten attributes.
import copy
import datetime

import optuna


def objective(trial):
    x = trial.suggest_float("x", -1, 1)

    # this user attribute always differs
    trial.set_user_attr("evaluation time", datetime.datetime.now())

    return x**2


study = optuna.create_study()
study.optimize(objective, n_trials=3)

best_trial = study.best_trial
best_trial_copy = copy.deepcopy(best_trial)

# re-evaluate
objective(best_trial)

# the user attribute is overwritten by re-evaluation
assert best_trial.user_attrs != best_trial_copy.user_attrs
Parameters
• number (int) –
• state (TrialState) –
• value (float | None) –
• datetime_start (datetime | None) –
• datetime_complete (datetime | None) –
• params (Dict[str, Any]) –
• distributions (Dict[str, BaseDistribution]) –
• user_attrs (Dict[str, Any]) –
• system_attrs (Dict[str, Any]) –
• intermediate_values (Dict[int, float]) –
• trial_id (int) –
• values (Sequence[float] | None) –
number
Unique and consecutive number of Trial for each Study. Note that this field uses zero-based numbering.
state
TrialState of the Trial.
value
Objective value of the Trial. value and values must not be specified at the same time.
values
Sequence of objective values of the Trial. The length is greater than 1 if the problem is multi-objective
optimization. value and values must not be specified at the same time.
datetime_start
Datetime where the Trial started.
datetime_complete
Datetime where the Trial finished.
params
Dictionary that contains suggested parameters.
distributions
Dictionary that contains the distributions of params.
user_attrs
Dictionary that contains the attributes of the Trial set with optuna.trial.Trial.set_user_attr().
system_attrs
Dictionary that contains the attributes of the Trial set with optuna.trial.Trial.
set_system_attr().
intermediate_values
Intermediate objective values set with optuna.trial.Trial.report().
Methods
set_user_attr(key, value)
Attributes
datetime_start
distributions
params
system_attrs
user_attrs
value
values
Parameters
• value (float) – A value returned from the objective function.
• step (int) – Step of the trial (e.g., Epoch of neural network training). Note that pruners
assume that step starts at zero. For example, MedianPruner simply checks if step is
less than n_warmup_steps as the warmup mechanism.
Return type
None
set_system_attr(key, value)
Warning: Deprecated in v3.1.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.1.0.
Parameters
• key (str) –
• value (Any) –
Return type
None
should_prune()
Suggest whether the trial should be pruned or not.
The suggestion is always False regardless of a pruning algorithm.
Returns
False.
Return type
bool
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Use suggest_float(..., step=...) instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
• q (float) –
Return type
float
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Parameters
• name (str) –
• low (float) –
• high (float) –
Return type
float
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v6.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Use suggest_float instead.
Parameters
• name (str) –
• low (float) –
• high (float) –
Return type
float
optuna.trial.TrialState
Methods
Attributes
RUNNING
COMPLETE
PRUNED
FAIL
WAITING
as_integer_ratio()
Return integer ratio.
Return a pair of integers, whose ratio is exactly equal to the original int and with a positive denominator.
>>> (10).as_integer_ratio()
(10, 1)
>>> (-10).as_integer_ratio()
(-10, 1)
>>> (0).as_integer_ratio()
(0, 1)
bit_count()
Number of ones in the binary representation of the absolute value of self.
Also known as the population count.
>>> bin(13)
'0b1101'
>>> (13).bit_count()
3
bit_length()
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
conjugate()
Returns self, the complex conjugate of any int.
denominator
the denominator of a rational number in lowest terms
from_bytes(byteorder='big', *, signed=False)
Return the integer represented by the given array of bytes.
bytes
Holds the array of bytes to convert. The argument must either support the buffer protocol or be an
iterable object producing bytes. Bytes and bytearray are examples of built-in objects that support the
buffer protocol.
byteorder
The byte order used to represent the integer. If byteorder is 'big', the most significant byte is at the beginning of the byte array. If byteorder is 'little', the most significant byte is at the end of the byte array. To request the native byte order of the host system, use sys.byteorder as the byte order value. Default is to use 'big'.
signed
Indicates whether two’s complement is used to represent the integer.
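For completeness, a few concrete conversions (standard Python int behavior, not Optuna-specific):

```python
# int.from_bytes: bytes -> int, with explicit byte order and signedness.
assert int.from_bytes(b"\x00\x10", byteorder="big") == 16
assert int.from_bytes(b"\x00\x10", byteorder="little") == 4096
assert int.from_bytes(b"\xfc", byteorder="big", signed=True) == -4
assert int.from_bytes(b"\xfc", byteorder="big", signed=False) == 252
```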
imag
the imaginary part of a complex number
is_finished()
Return a bool value representing whether the trial state is finished or not.
The unfinished states are RUNNING and WAITING.
Return type
bool
numerator
the numerator of a rational number in lowest terms
real
the real part of a complex number
to_bytes(length=1, byteorder='big', *, signed=False)
Return an array of bytes representing an integer.
length
Length of bytes object to use. An OverflowError is raised if the integer is not representable with the
given number of bytes. Default is length 1.
byteorder
The byte order used to represent the integer. If byteorder is 'big', the most significant byte is at the beginning of the byte array. If byteorder is 'little', the most significant byte is at the end of the byte array. To request the native byte order of the host system, use sys.byteorder as the byte order value. Default is to use 'big'.
signed
Determines whether two’s complement is used to represent the integer. If signed is False and a negative
integer is given, an OverflowError is raised.
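And the inverse direction (standard Python int behavior, not Optuna-specific); to_bytes round-trips with from_bytes:

```python
# int.to_bytes: int -> bytes; raises OverflowError if it doesn't fit.
assert (16).to_bytes(2, byteorder="big") == b"\x00\x10"
assert (16).to_bytes(2, byteorder="little") == b"\x10\x00"
assert (-4).to_bytes(1, byteorder="big", signed=True) == b"\xfc"
assert int.from_bytes((1000).to_bytes(2, byteorder="big"), byteorder="big") == 1000
```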
optuna.trial.create_trial
Example
import optuna
from optuna.distributions import CategoricalDistribution
from optuna.distributions import FloatDistribution

trial = optuna.trial.create_trial(
    params={"x": 1.0, "y": 0},
    distributions={
        "x": FloatDistribution(0, 10),
        "y": CategoricalDistribution([-1, 0, 1]),
    },
    value=5.0,
)
See also:
See add_trial() for how this function can be used to create a study from existing trials.
Note: Please note that this is a low-level API. In general, trials that are passed to objective functions are created
inside optimize().
Parameters
• state (TrialState) – Trial state.
6.3.16 optuna.visualization
The visualization module provides utility functions for plotting the optimization process using plotly and matplotlib. Plotting functions generally take a Study object, and optional parameters are passed as a list to the params argument.
Note: In the optuna.visualization module, the following functions use plotly to create figures, but JupyterLab
cannot render them by default. Please follow this installation guide to show figures in JupyterLab.
optuna.visualization.plot_contour
Example
The following code snippet shows how to plot the parameter relationship as contour plot.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x ** 2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)

fig = optuna.visualization.plot_contour(study, params=["x", "y"])
fig.show()
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
Note: The colormap is reversed when the target argument isn’t None or direction of Study is minimize.
optuna.visualization.plot_edf
Note: EDF is useful to analyze and improve search spaces. For instance, you can see a practical use case of
EDF in the paper Designing Network Design Spaces.
Note: The plotted EDF assumes that the value of the objective function is in accordance with the uniform
distribution over the objective space.
Example
import math

import optuna


# The objective definition was omitted in this excerpt; a minimal stand-in:
def objective(trial, low, high):
    x = trial.suggest_float("x", low, high)
    y = trial.suggest_float("y", low, high)
    return (x - 2) ** 2 + (y - 2) ** 2


sampler = optuna.samplers.RandomSampler(seed=10)

# Narrowest search space but it doesn't include the global optimum point.
study2 = optuna.create_study(study_name="x=[1,3), y=[1,3)", sampler=sampler)
study2.optimize(lambda t: objective(t, 1, 3), n_trials=500)

fig = optuna.visualization.plot_edf([study2])
fig.show()
Parameters
• study (Study | Sequence[Study]) – A target Study object. You can pass multiple studies if you want to compare those EDFs.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
optuna.visualization.plot_hypervolume_history
optuna.visualization.plot_hypervolume_history(study, reference_point)
Plot hypervolume history of all trials in a study.
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


# Study creation was omitted in this excerpt; a two-objective study is assumed:
study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)

reference_point = [100.0, 50.0]

fig = optuna.visualization.plot_hypervolume_history(study, reference_point)
fig.show()
Parameters
• study (Study) – A Study object whose trials are plotted for their hypervolumes. The number of objectives must be 2 or more.
• reference_point (Sequence[float]) – A reference point to use for hypervolume computation. The dimension of the reference point must be the same as the number of objectives.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
optuna.visualization.plot_intermediate_values
optuna.visualization.plot_intermediate_values(study)
Plot intermediate values of all trials in a study.
Example
import optuna


def f(x):
    return (x - 2) ** 2


def df(x):
    return 2 * x - 4


def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    x = 3
    for step in range(128):
        y = f(x)

        trial.report(y, step=step)
        if trial.should_prune():
            raise optuna.TrialPruned()

        gy = df(x)
        x -= gy * lr

    return y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=16)

fig = optuna.visualization.plot_intermediate_values(study)
fig.show()
Parameters
study (Study) – A Study object whose trials are plotted for their intermediate values.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
optuna.visualization.plot_optimization_history
Example
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x ** 2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

fig = optuna.visualization.plot_optimization_history(study)
fig.show()
Parameters
• study (Study | Sequence[Study]) – A Study object whose trials are plotted for their target values. You can pass multiple studies if you want to compare those optimization histories.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
• target_name (str) – Target’s name to display on the axis label and the legend.
• error_bar (bool) – A flag to show the error bar.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
optuna.visualization.plot_parallel_coordinate
Example
The following code snippet shows how to plot the high-dimensional parameter relationships.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x ** 2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

fig = optuna.visualization.plot_parallel_coordinate(study)
fig.show()
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
• target_name (str) – Target’s name to display on the axis label and the legend.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
Note: The colormap is reversed when the target argument isn’t None or direction of Study is minimize.
optuna.visualization.plot_param_importances
Example
import optuna


def objective(trial):
    x = trial.suggest_int("x", 0, 2)
    y = trial.suggest_float("y", -1.0, 1.0)
    z = trial.suggest_float("z", 0.0, 1.5)
    return x ** 2 + y ** 3 - z ** 4


sampler = optuna.samplers.RandomSampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)

fig = optuna.visualization.plot_param_importances(study)
fig.show()
See also:
This function visualizes the results of optuna.importance.get_param_importances().
Parameters
• study (Study) – An optimized study.
• evaluator (BaseImportanceEvaluator | None) – An importance evaluator object that specifies which algorithm to base the importance assessment on. Defaults to FanovaImportanceEvaluator.
Note: FanovaImportanceEvaluator can take over a minute on a study that contains 1000+ trials. We published the optuna-fast-fanova library, which is a Cython-accelerated fANOVA implementation. By using it, you can compute hyperparameter importances within a few seconds.
• params (list[str] | None) – A list of names of parameters to assess. If None, all parameters that are present in all of the completed trials are assessed.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted. For multi-objective optimization, all objectives will be plotted if target is None.
Note: This argument can be used to specify which objective to plot if study is being used
for multi-objective optimization. For example, to get only the hyperparameter importance of
the first objective, use target=lambda t: t.values[0] for the target parameter.
• target_name (str) – Target’s name to display on the legend. Names set via
set_metric_names() will be used if target is None, overriding this argument.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
optuna.visualization.plot_pareto_front
Example
The following code snippet shows how to plot the Pareto front of a study.
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


# Study creation was omitted in this excerpt; a two-objective study is assumed:
study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)

fig = optuna.visualization.plot_pareto_front(study)
fig.show()
Example
The following code snippet shows how to plot a 2-dimensional Pareto front of a 3-dimensional study. This
example is scalable, e.g., for plotting a 2- or 3-dimensional Pareto front of a 4-dimensional study and so on.
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    # ... (the rest of this three-objective function and the study creation
    # were lost at a page break in this excerpt)


study.optimize(objective, n_trials=100)

fig = optuna.visualization.plot_pareto_front(
    study,
    targets=lambda t: (t.values[0], t.values[1]),
    target_names=["Objective 0", "Objective 1"],
)
fig.show()
Parameters
• study (Study) – A Study object whose trials are plotted for their objective values. The
number of objectives must be either 2 or 3 when targets is None.
• target_names (list[str] | None) – Objective name list used as the axis titles. If None is specified, "Objective {objective_index}" is used instead. If targets is specified for a study that does not contain any completed trial, target_names must be specified.
• include_dominated_trials (bool) – A flag to include all dominated trials' objective values.
• axis_order (list[int] | None) – A list of indices indicating the axis order. If None is specified, default order is used. axis_order and targets cannot be used at the same time.
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
optuna.visualization.plot_rank
Example
The following code snippet shows how to plot the parameter relationship as a rank plot.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    c0 = 400 - (x + y) ** 2
    trial.set_user_attr("constraint", [c0])
    return x ** 2 + y


def constraints(trial):
    return trial.user_attrs["constraint"]


# The sampler, study, and plot call were omitted in this excerpt; a plausible completion:
sampler = optuna.samplers.TPESampler(seed=10, constraints_func=constraints)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)

fig = optuna.visualization.plot_rank(study)
fig.show()
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
Note: Specify this argument if study is being used for multi-objective optimization.
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
optuna.visualization.plot_slice
Example
The following code snippet shows how to plot the parameter relationship as slice plot.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x ** 2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

fig = optuna.visualization.plot_slice(study)
fig.show()
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
optuna.visualization.plot_terminator_improvement
optuna.visualization.plot_terminator_improvement(study, plot_error=False, improvement_evaluator=None, error_evaluator=None, min_n_trials=20)
Plot the potentials for future objective improvement.
This function visualizes the objective improvement potentials, evaluated with improvement_evaluator. It
helps to determine whether we should continue the optimization or not. You can also plot the error evaluated
with error_evaluator if the plot_error argument is set to True. Note that this function may take some time
to compute the improvement potentials.
Example
The following code snippet shows how to plot improvement potentials, together with cross-validation errors.
from lightgbm import LGBMClassifier
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold

import optuna
from optuna.terminator import report_cross_validation_scores
from optuna.visualization import plot_terminator_improvement


def objective(trial):
    X, y = load_wine(return_X_y=True)
    clf = LGBMClassifier(
        reg_alpha=trial.suggest_float("reg_alpha", 1e-8, 10.0, log=True),
        reg_lambda=trial.suggest_float("reg_lambda", 1e-8, 10.0, log=True),
        num_leaves=trial.suggest_int("num_leaves", 2, 256),
        colsample_bytree=trial.suggest_float("colsample_bytree", 0.4, 1.0),
        subsample=trial.suggest_float("subsample", 0.4, 1.0),
        subsample_freq=trial.suggest_int("subsample_freq", 1, 7),
        min_child_samples=trial.suggest_int("min_child_samples", 5, 100),
    )
    scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True))
    # The tail of the objective and the plot call were lost at a page break in
    # this excerpt; a plausible completion using the imports above:
    report_cross_validation_scores(trial, scores)
    return scores.mean()


study = optuna.create_study()
study.optimize(objective, n_trials=30)

fig = plot_terminator_improvement(study, plot_error=True)
fig.show()
Parameters
• study (Study) – A Study object whose trials are plotted for their improvement.
• plot_error (bool) – A flag to show the error. If it is set to True, errors evaluated by error_evaluator are also plotted as a line graph. Defaults to False.
• improvement_evaluator (BaseImprovementEvaluator | None) – An object that
evaluates the improvement of the objective function. Defaults to RegretBoundEvaluator.
• error_evaluator (BaseErrorEvaluator | None) – An object that evaluates the error
inherent in the objective function. Defaults to CrossValidationErrorEvaluator.
• min_n_trials (int) – The minimum number of trials before termination is considered.
Terminator improvements for trials below this value are shown in a lighter color. Defaults to
20.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
optuna.visualization.plot_timeline
optuna.visualization.plot_timeline(study)
Plot the timeline of a study.
Example
The following code snippet shows how to plot the timeline of a study. Timeline plot can visualize trials with
overlapping execution time (e.g., in distributed environments).
import time

import optuna


# The objective definition was omitted in this excerpt; a minimal stand-in
# consistent with catch=(ValueError,):
def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    time.sleep(x * 0.1)
    if x > 0.8:
        raise ValueError()
    return x


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50, n_jobs=2, catch=(ValueError,))

fig = optuna.visualization.plot_timeline(study)
fig.show()
Parameters
study (Study) – A Study object whose trials are plotted with their lifetime.
Returns
A plotly.graph_objs.Figure object.
Return type
Figure
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
optuna.visualization.is_available
optuna.visualization.is_available()
Returns whether visualization with plotly is available or not.
Note: The visualization module depends on plotly version 4.0.0 or higher. If a supported version of plotly isn't installed in your environment, this function will return False. In such a case, please execute $ pip install -U "plotly>=4.0.0" to install plotly.
Returns
True if visualization with plotly is available, False otherwise.
Return type
bool
optuna.visualization.matplotlib
optuna.visualization.matplotlib.plot_contour
Warning: Output figures of this Matplotlib-based plot_contour() function would be different from those
of the Plotly-based plot_contour().
Example
The following code snippet shows how to plot the parameter relationship as contour plot.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x ** 2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)

optuna.visualization.matplotlib.plot_contour(study)
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
Note: Specify this argument if study is being used for multi-objective optimization.
Note: The colormap is reversed when the target argument isn’t None or direction of Study is minimize.
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
optuna.visualization.matplotlib.plot_edf
Note: Please refer to matplotlib.pyplot.legend to adjust the style of the generated legend.
Example
import math

import optuna


# The objective definition was omitted in this excerpt; a minimal stand-in:
def objective(trial, low, high):
    x = trial.suggest_float("x", low, high)
    y = trial.suggest_float("y", low, high)
    return (x - 2) ** 2 + (y - 2) ** 2


sampler = optuna.samplers.RandomSampler(seed=10)

# Narrowest search space but it doesn't include the global optimum point.
study2 = optuna.create_study(study_name="x=[1,3), y=[1,3)", sampler=sampler)
study2.optimize(lambda t: objective(t, 1, 3), n_trials=500)

optuna.visualization.matplotlib.plot_edf([study2])
Parameters
• study (Study | Sequence[Study]) – A target Study object. You can pass multiple studies if you want to compare those EDFs.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
optuna.visualization.matplotlib.plot_hypervolume_history
optuna.visualization.matplotlib.plot_hypervolume_history(study, reference_point)
Plot hypervolume history of all trials in a study with Matplotlib.
Example
import optuna
import matplotlib.pyplot as plt


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


# Study creation was omitted in this excerpt; a two-objective study is assumed:
study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)

reference_point = [100, 50]

optuna.visualization.matplotlib.plot_hypervolume_history(study, reference_point)
plt.tight_layout()
Note: You need to adjust the size of the plot by yourself using plt.tight_layout() or plt.savefig(IMAGE_NAME, bbox_inches='tight').
Parameters
• study (Study) – A Study object whose trials are plotted for their hypervolumes. The number of objectives must be 2 or more.
• reference_point (Sequence[float]) – A reference point to use for hypervolume computation. The dimension of the reference point must be the same as the number of objectives.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: Added in v3.3.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.3.0.
optuna.visualization.matplotlib.plot_intermediate_values
optuna.visualization.matplotlib.plot_intermediate_values(study)
Plot intermediate values of all trials in a study with Matplotlib.
Note: Please refer to matplotlib.pyplot.legend to adjust the style of the generated legend.
Example
import optuna


def f(x):
    return (x - 2) ** 2


def df(x):
    return 2 * x - 4


def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    x = 3
    for step in range(128):
        y = f(x)

        trial.report(y, step=step)
        if trial.should_prune():
            raise optuna.TrialPruned()

        gy = df(x)
        x -= gy * lr

    return y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=16)

optuna.visualization.matplotlib.plot_intermediate_values(study)
See also:
Please refer to optuna.visualization.plot_intermediate_values() for an example.
Parameters
study (Study) – A Study object whose trials are plotted for their intermediate values.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
optuna.visualization.matplotlib.plot_optimization_history
optuna.visualization.matplotlib.plot_optimization_history(study, *, target=None, target_name='Objective Value', error_bar=False)
Plot optimization history of all trials in a study with Matplotlib.
See also:
Please refer to optuna.visualization.plot_optimization_history() for an example.
Example
import optuna
import matplotlib.pyplot as plt


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x ** 2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

optuna.visualization.matplotlib.plot_optimization_history(study)
plt.tight_layout()
Note: You need to adjust the size of the plot by yourself using plt.tight_layout() or plt.savefig(IMAGE_NAME, bbox_inches='tight').
Parameters
• study (Study | Sequence[Study]) – A Study object whose trials are plotted for their target values. You can pass multiple studies if you want to compare those optimization histories.
Note: Specify this argument if study is being used for multi-objective optimization.
• target_name (str) – Target’s name to display on the axis label and the legend.
• error_bar (bool) – A flag to show the error bar.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
optuna.visualization.matplotlib.plot_parallel_coordinate
Example
The following code snippet shows how to plot the high-dimensional parameter relationships.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x ** 2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

optuna.visualization.matplotlib.plot_parallel_coordinate(study)
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
• target_name (str) – Target’s name to display on the axis label and the legend.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: The colormap is reversed when the target argument isn’t None or direction of Study is minimize.
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
optuna.visualization.matplotlib.plot_param_importances
Example
import optuna


def objective(trial):
    x = trial.suggest_int("x", 0, 2)
    y = trial.suggest_float("y", -1.0, 1.0)
    z = trial.suggest_float("z", 0.0, 1.5)
    return x ** 2 + y ** 3 - z ** 4


sampler = optuna.samplers.RandomSampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)

optuna.visualization.matplotlib.plot_param_importances(study)
Parameters
• study (Study) – An optimized study.
• evaluator (BaseImportanceEvaluator | None) – An importance evaluator object
that specifies which algorithm to base the importance assessment on. Defaults to
FanovaImportanceEvaluator.
• params (list[str] | None) – A list of names of parameters to assess. If None, all parameters that are present in all of the completed trials are assessed.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value to display. If it is None and study is being used for single-objective optimization, the objective values are plotted. For multi-objective optimization, all objectives will be plotted if target is None.
target is None.
Note: This argument can be used to specify which objective to plot if study is being used
for multi-objective optimization. For example, to get only the hyperparameter importance of
the first objective, use target=lambda t: t.values[0] for the target parameter.
• target_name (str) – Target’s name to display on the axis label. Names set via
set_metric_names() will be used if target is None, overriding this argument.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
optuna.visualization.matplotlib.plot_pareto_front
optuna.visualization.matplotlib.plot_pareto_front(study, *, target_names=None, include_dominated_trials=True, axis_order=None, constraints_func=None, targets=None)
Plot the Pareto front of a study.
See also:
Please refer to optuna.visualization.plot_pareto_front() for an example.
Example
The following code snippet shows how to plot the Pareto front of a study.
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    v0 = 4 * x ** 2 + 4 * y ** 2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


# Study creation was omitted in this excerpt; a two-objective study is assumed:
study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)

optuna.visualization.matplotlib.plot_pareto_front(study)
Parameters
• study (Study) – A Study object whose trials are plotted for their objective values. study.n_objectives must be either 2 or 3 when targets is None.
• target_names (list[str] | None) – Objective name list used as the axis titles. If None is specified, "Objective {objective_index}" is used instead. If targets is specified for a study that does not contain any completed trial, target_names must be specified.
Warning: Deprecated in v3.0.0. This feature will be removed in the future. The removal of this feature is currently scheduled for v5.0.0, but this schedule is subject to change. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Note: Added in v3.0.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.0.0.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: Added in v2.8.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.8.0.
optuna.visualization.matplotlib.plot_rank
Warning: Output figures of this Matplotlib-based plot_rank() function would be different from those of
the Plotly-based plot_rank().
Example
The following code snippet shows how to plot the parameter relationship as a rank plot.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    c0 = 400 - (x + y) ** 2
    trial.set_user_attr("constraint", [c0])
    return x**2 + y


def constraints(trial):
    return trial.user_attrs["constraint"]


sampler = optuna.samplers.TPESampler(seed=10, constraints_func=constraints)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)

optuna.visualization.matplotlib.plot_rank(study, params=["x", "y"])
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value
to display. If it is None and study is being used for single-objective optimization, the ob-
jective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
optuna.visualization.matplotlib.plot_slice
Example
The following code snippet shows how to plot the parameter relationship as a slice plot.
import optuna


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    return x**2 + y


sampler = optuna.samplers.TPESampler(seed=10)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=10)

optuna.visualization.matplotlib.plot_slice(study, params=["x", "y"])
Parameters
• study (Study) – A Study object whose trials are plotted for their target values.
• params (list[str] | None) – Parameter list to visualize. The default is all parameters.
• target (Callable[[FrozenTrial], float] | None) – A function to specify the value
to display. If it is None and study is being used for single-objective optimization, the ob-
jective values are plotted.
Note: Specify this argument if study is being used for multi-objective optimization.
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
optuna.visualization.matplotlib.plot_terminator_improvement
optuna.visualization.matplotlib.plot_terminator_improvement(study, plot_error=False, improvement_evaluator=None, error_evaluator=None, min_n_trials=20)
Plot the potentials for future objective improvement.
This function visualizes the objective improvement potentials, evaluated with improvement_evaluator. It
helps to determine whether we should continue the optimization or not. You can also plot the error evaluated
with error_evaluator if the plot_error argument is set to True. Note that this function may take some time
to compute the improvement potentials.
See also:
Please refer to optuna.visualization.plot_terminator_improvement().
Example
The following code snippet shows how to plot improvement potentials, together with cross-validation errors.
import optuna
from optuna.terminator import report_cross_validation_scores
from optuna.visualization.matplotlib import plot_terminator_improvement
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score, KFold
from lightgbm import LGBMClassifier


def objective(trial):
    X, y = load_wine(return_X_y=True)

    clf = LGBMClassifier(
        reg_alpha=trial.suggest_float("reg_alpha", 1e-8, 10.0, log=True),
        reg_lambda=trial.suggest_float("reg_lambda", 1e-8, 10.0, log=True),
        num_leaves=trial.suggest_int("num_leaves", 2, 256),
        colsample_bytree=trial.suggest_float("colsample_bytree", 0.4, 1.0),
        subsample=trial.suggest_float("subsample", 0.4, 1.0),
        subsample_freq=trial.suggest_int("subsample_freq", 1, 7),
        min_child_samples=trial.suggest_int("min_child_samples", 5, 100),
    )

    scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True))
    report_cross_validation_scores(trial, scores)
    return scores.mean()
study = optuna.create_study()
study.optimize(objective, n_trials=30)
plot_terminator_improvement(study, plot_error=True)
Parameters
• study (Study) – A Study object whose trials are plotted for their improvement.
• plot_error (bool) – A flag to show the error. If it is set to True, errors evaluated by error_evaluator are also plotted as a line graph. Defaults to False.
• improvement_evaluator (BaseImprovementEvaluator | None) – An object that evaluates the improvement of the objective function. Defaults to RegretBoundEvaluator.
• error_evaluator (BaseErrorEvaluator | None) – An object that evaluates the error inherent in the objective function. Defaults to CrossValidationErrorEvaluator.
• min_n_trials (int) – The minimum number of trials before termination is considered.
Terminator improvements for trials below this value are shown in a lighter color. Defaults to
20.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
optuna.visualization.matplotlib.plot_timeline
optuna.visualization.matplotlib.plot_timeline(study)
Plot the timeline of a study.
See also:
Please refer to optuna.visualization.plot_timeline() for an example.
Example
The following code snippet shows how to plot the timeline of a study.
import time

import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    time.sleep(x * 0.1)
    if x > 0.8:
        raise ValueError()
    if x > 0.4:
        raise optuna.TrialPruned()
    return x**2


study = optuna.create_study()
study.optimize(objective, n_trials=50, catch=(ValueError,))

optuna.visualization.matplotlib.plot_timeline(study)
Parameters
study (Study) – A Study object whose trials are plotted with their lifetime.
Returns
A matplotlib.axes.Axes object.
Return type
Axes
Note: Added in v3.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v3.2.0.
optuna.visualization.matplotlib.is_available
optuna.visualization.matplotlib.is_available()
Returns whether visualization with Matplotlib is available or not.
Note: The matplotlib module depends on Matplotlib version 3.0.0 or higher. If a supported version of Matplotlib isn't installed in your environment, this function will return False. In such a case, please execute $ pip install -U "matplotlib>=3.0.0" to install Matplotlib.
Returns
True if visualization with Matplotlib is available, False otherwise.
Return type
bool
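The gist of such an availability check can be sketched without Optuna. The function below is an illustrative stand-in, not Optuna's actual implementation (which additionally verifies that the installed Matplotlib version is supported):

```python
import importlib.util


def matplotlib_is_available() -> bool:
    # A package is importable iff its module spec can be found.
    return importlib.util.find_spec("matplotlib") is not None
```

Guarding plotting code with a check like this lets a script degrade gracefully on machines without Matplotlib.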
Note: Added in v2.2.0 as an experimental feature. The interface may change in newer versions without prior
notice. See https://github.com/optuna/optuna/releases/tag/v2.2.0.
See also:
The visualization tutorial provides use-cases with examples.
6.4 FAQ
Optuna is compatible with most ML libraries, and it's easy to use Optuna with them. Please refer to examples.
import optuna


class Objective:
    def __init__(self, min_x, max_x):
        # Hold this implementation specific arguments as the fields of the class.
        self.min_x = min_x
        self.max_x = max_x

    def __call__(self, trial):
        # Calculate an objective value by using the extra arguments.
        x = trial.suggest_float("x", self.min_x, self.max_x)
        return (x - 2) ** 2
Second, you can use lambda or functools.partial for creating functions (closures) that hold extra arguments.
Below is an example that uses lambda:
import optuna

# Extra arguments.
min_x = -100
max_x = 100

study = optuna.create_study()
study.optimize(lambda trial: (trial.suggest_float("x", min_x, max_x) - 2) ** 2, n_trials=100)
Please also refer to the sklearn_additional_args.py example, which reuses the dataset instead of loading it in each trial execution.
study = optuna.create_study()
study.optimize(objective)
If you want to save and resume studies, it's handy to use SQLite as the local storage:

study = optuna.create_study(study_name="example-study", storage="sqlite:///example.db")
There are two ways of persisting studies, depending on whether you are using in-memory storage (the default) or a remote database (RDB). In-memory studies can be saved and loaded like usual Python objects using pickle or joblib. For example, using joblib:
import joblib
import optuna

study = optuna.create_study()
joblib.dump(study, "study.pkl")

study = joblib.load("study.pkl")
print("Best trial until now:")
print(" Value: ", study.best_trial.value)
print(" Params: ")
for key, value in study.best_trial.params.items():
    print(f"    {key}: {value}")
Note that Optuna does not support saving/reloading across different Optuna versions with pickle. To save/reload a
study across different Optuna versions, please use RDBs and upgrade storage schema if necessary. If you are using
RDBs, see rdb for more details.
By default, Optuna shows log messages at the optuna.logging.INFO level. You can change logging levels by using
optuna.logging.set_verbosity().
For instance, you can stop showing each trial result as follows:
optuna.logging.set_verbosity(optuna.logging.WARNING)
study = optuna.create_study()
study.optimize(objective)
# Logs like '[I 2020-07-21 13:41:45,627] Trial 0 finished with value:...' are disabled.
Optuna saves hyperparameter values with their corresponding objective values to storage, but it discards intermediate objects such as machine learning models and neural network weights. To save models or weights, please use the features of the machine learning library you are using.
We recommend saving optuna.trial.Trial.number with a model in order to identify its corresponding trial. For
example, you can save SVM models trained in the objective function as follows:
def objective(trial):
    svc_c = trial.suggest_float("svc_c", 1e-10, 1e10, log=True)
    clf = sklearn.svm.SVC(C=svc_c)
    clf.fit(X_train, y_train)

    # Save the trained model to a file, keyed by the trial number.
    with open(f"{trial.number}.pickle", "wb") as fout:
        pickle.dump(clf, fout)
    return clf.score(X_val, y_val)
study = optuna.create_study()
study.optimize(objective, n_trials=100)
To make the parameters suggested by Optuna reproducible, you can specify a fixed random seed via the seed argument of a sampler instance as follows:

sampler = optuna.samplers.TPESampler(seed=10)  # Make the sampler behave in a deterministic way.
study = optuna.create_study(sampler=sampler)
study.optimize(objective)
To make pruning by HyperbandPruner reproducible as well, specify a fixed study_name for the Study and fix the hash seed.
Trials that raise exceptions without catching them will be treated as failures, i.e. with the FAIL status.
By default, all exceptions except TrialPruned raised in objective functions are propagated to the caller of optimize(). In other words, studies are aborted when such exceptions are raised. It might be desirable to continue a study with the remaining trials. To do so, you can specify in optimize() which exception types to catch using the catch argument. Exceptions of these types are caught inside the study and will not propagate further.
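The semantics of catch can be sketched without Optuna. Here run_trials is a hypothetical stand-in for optimize(), shown only to illustrate which exceptions stop the loop and which mark a single trial as failed:

```python
def run_trials(objective, n_trials, catch=()):
    # Exceptions listed in `catch` mark the trial as failed; others propagate
    # and abort the whole loop, mirroring the behavior of optimize().
    states = []
    for i in range(n_trials):
        try:
            objective(i)
            states.append("COMPLETE")
        except catch:  # an empty tuple catches nothing
            states.append("FAIL")
    return states


def flaky(i):
    if i == 1:
        raise ValueError("bad trial")


print(run_trials(flaky, 3, catch=(ValueError,)))  # → ['COMPLETE', 'FAIL', 'COMPLETE']
```

Without catch=(ValueError,), the second trial's exception would propagate and end the run after one completed trial.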
You can find the failed trials in log messages.
You can also find the failed trials by checking the trial states as follows:
study.trials_dataframe()
See also:
The catch argument in optimize().
Trials that return NaN (float('nan')) are treated as failures, but they will not abort studies.
Trials that return NaN are reported as failed trials in the log messages.
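The reason a NaN objective value has to be treated as a failure is that NaN does not compare meaningfully with any number, so no sampler could ever rank such a trial. A quick standalone check:

```python
import math

nan = float("nan")

# NaN compares false against everything, including itself, so a NaN
# objective value could never be ranked as "best" among trials.
print(nan < 1.0, nan > 1.0, nan == nan)  # → False False False
print(math.isnan(nan))  # → True
```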
Since parameter search spaces are specified in each call to the suggestion API, e.g. suggest_float() and suggest_int(), it is possible, within a single study, to alter the range by sampling parameters from different search spaces in different trials. The behavior when the range is altered is defined by each sampler individually.
6.4.11 How can I use two GPUs for evaluating two trials simultaneously?
If your optimization target supports GPU (CUDA) acceleration and you want to specify which GPU is used in your
script, main.py, the easiest way is to set CUDA_VISIBLE_DEVICES environment variable:
# On a terminal.
#
# Specify to use the first GPU, and run an optimization.
$ export CUDA_VISIBLE_DEVICES=0
$ python main.py
# On another terminal.
#
# Specify to use the second GPU, and run another optimization.
$ export CUDA_VISIBLE_DEVICES=1
$ python main.py
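The same effect can be achieved from inside main.py itself, provided the variable is set before any CUDA-using library (PyTorch, TensorFlow, etc.) first initializes the GPU; a minimal sketch:

```python
import os

# Must run before any library touches the GPU, i.e. before importing
# torch/tensorflow or creating any CUDA context.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only the first GPU to this process
```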
When you test objective functions, you may prefer fixed parameter values to sampled ones. In that case, you can use
FixedTrial, which suggests fixed parameter values based on a given dictionary of parameters. For instance, you can
input arbitrary values of x and y to the objective function x + y as follows:

def objective(trial):
    x = trial.suggest_float("x", -1.0, 1.0)
    y = trial.suggest_int("y", -5, 5)
    return x + y


assert objective(optuna.trial.FixedTrial({"x": 1.0, "y": 0})) == 1.0
assert objective(optuna.trial.FixedTrial({"x": -1.0, "y": 0})) == -1.0
6.4.13 How do I avoid running out of memory (OOM) when optimizing studies?
If the memory footprint increases as you run more trials, try to periodically run the garbage collector. Specify
gc_after_trial to True when calling optimize() or call gc.collect() inside a callback.
def objective(trial):
    x = trial.suggest_float("x", -1.0, 1.0)
    y = trial.suggest_int("y", -5, 5)
    return x + y


study = optuna.create_study()
study.optimize(objective, n_trials=10, gc_after_trial=True)
There is a performance trade-off for running the garbage collector, which could be non-negligible depending on how fast
your objective function otherwise is. Therefore, gc_after_trial is False by default. Note that the above examples
are similar to running the garbage collector inside the objective function, except for the fact that gc.collect() is
called even when errors, including TrialPruned are raised.
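The callback route mentioned above can be sketched as follows; (study, trial) is the signature optimize() passes to its callbacks:

```python
import gc


def gc_callback(study, trial):
    # Invoked after every trial when passed via
    # study.optimize(..., callbacks=[gc_callback]).
    gc.collect()
```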
Note: ChainerMNStudy does not currently provide gc_after_trial or callbacks for optimize(). When using this class, you will have to call the garbage collector inside the objective function.
6.4.14 How can I output a log only when the best value is updated?
Here's how to replace the logging feature of Optuna with your own logging callback function. The implemented callback can be passed to optimize(). Here's an example:
import optuna


def logging_callback(study, frozen_trial):
    previous_best_value = study.user_attrs.get("previous_best_value", None)
    if previous_best_value != study.best_value:
        study.set_user_attr("previous_best_value", study.best_value)
        print(f"Trial {frozen_trial.number} finished with best value: {frozen_trial.value} and parameters: {frozen_trial.params}.")


def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    return x**2


optuna.logging.set_verbosity(optuna.logging.WARNING)
study = optuna.create_study()
study.optimize(objective, n_trials=100, callbacks=[logging_callback])
Note that this callback may show incorrect values when you try to optimize an objective function with n_jobs!=1 (or
other forms of distributed optimization) due to its reads and writes to storage that are prone to race conditions.
6.4.15 How do I suggest variables which represent the proportion, that is, are in
accordance with Dirichlet distribution?
When you want to suggest n variables which represent a proportion, that is, p[0], p[1], ..., p[n-1] which satisfy 0 <= p[k] <= 1 for every k and p[0] + p[1] + ... + p[n-1] = 1, try the approach below. For example, these variables can be used as weights when interpolating loss functions. The variables follow the flat Dirichlet distribution.
import numpy as np
import matplotlib.pyplot as plt
import optuna


def objective(trial):
    n = 5
    x = []
    for i in range(n):
        x.append(-np.log(trial.suggest_float(f"x_{i}", 0, 1)))

    p = []
    for i in range(n):
        p.append(x[i] / sum(x))

    for i in range(n):
        trial.set_user_attr(f"p_{i}", p[i])

    return 0


study = optuna.create_study(sampler=optuna.samplers.RandomSampler())
study.optimize(objective, n_trials=1000)

n = 5
p = []
for i in range(n):
    p.append([trial.user_attrs[f"p_{i}"] for trial in study.trials])

axes = plt.subplots(n, n, figsize=(20, 20))[1]

for i in range(n):
    for j in range(n):
        axes[j][i].scatter(p[i], p[j], marker=".")
        axes[j][i].set_xlim(0, 1)
        axes[j][i].set_ylim(0, 1)

plt.savefig("sampled_ps.png")
This method is justified in the following way: First, if we apply the transformation x = -log(u) to a variable u sampled from the uniform distribution Uni(0, 1) on the interval [0, 1], the variable x will follow the exponential distribution Exp(1) with scale parameter 1. Furthermore, for n variables x[0], ..., x[n-1] that independently follow the exponential distribution with scale parameter 1, normalizing them with p[i] = x[i] / (x[0] + ... + x[n-1]) makes the vector p follow the Dirichlet distribution Dir(alpha) with parameter alpha = (1, ..., 1). You can verify the transformation by calculating the elements of the Jacobian.
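The transformation itself can be checked in isolation. This standalone sketch (plain random, no Optuna) reproduces the x = -log(u) trick and confirms that the result lies on the probability simplex:

```python
import math
import random


def sample_flat_dirichlet(n, rng):
    # u ~ Uniform(0, 1)  →  x = -log(u) ~ Exp(1); normalizing x gives Dir(1, ..., 1).
    x = [-math.log(1.0 - rng.random()) for _ in range(n)]  # 1 - u avoids log(0)
    total = sum(x)
    return [xi / total for xi in x]


p = sample_flat_dirichlet(5, random.Random(0))
print(sum(p))  # ≈ 1.0, with every entry in [0, 1]
```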
When you want to optimize a model with constraints, you can use the following classes: TPESampler, NSGAIISampler or BoTorchSampler. The following example is a benchmark of the Binh and Korn function, a multi-objective optimization problem with constraints, using NSGAIISampler. It has two constraints, c0 = (x - 5)^2 + y^2 - 25 <= 0 and c1 = -(x - 8)^2 - (y + 3)^2 + 7.7 <= 0, and finds the optimal solution satisfying these constraints.
import optuna


def objective(trial):
    # Binh and Korn function with constraints.
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)

    # Constraints which are considered feasible if less than or equal to zero.
    c0 = (x - 5) ** 2 + y**2 - 25
    c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7

    # Store the constraints as user attributes so that they can be restored after optimization.
    trial.set_user_attr("constraint", (c0, c1))

    v0 = 4 * x**2 + 4 * y**2
    v1 = (x - 5) ** 2 + (y - 5) ** 2
    return v0, v1


def constraints(trial):
    return trial.user_attrs["constraint"]


sampler = optuna.samplers.NSGAIISampler(constraints_func=constraints)
study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=sampler,
)
study.optimize(objective, n_trials=32)
print("Pareto front:")
If you are interested in an example for BoTorchSampler, please refer to this sample code.
There are two kinds of constrained optimization: one with soft constraints and the other with hard constraints. Soft constraints do not have to be satisfied, but an objective function is penalized if they are unsatisfied. Hard constraints, on the other hand, must be satisfied.
Optuna adopts the soft approach and DOES NOT support hard constraints. In other words, Optuna DOES NOT have built-in samplers for hard constraints.
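To see concretely what values a constraint function reports, the two Binh and Korn constraints can be evaluated standalone; a point is feasible when both values are at most zero:

```python
def binh_korn_constraints(x, y):
    # Both values must be <= 0 for the point (x, y) to be feasible.
    c0 = (x - 5) ** 2 + y ** 2 - 25
    c1 = -((x - 8) ** 2) - (y + 3) ** 2 + 7.7
    return c0, c1


def is_feasible(x, y):
    return all(c <= 0 for c in binh_korn_constraints(x, y))


print(is_feasible(5, 0))  # → True
print(is_feasible(5, 6))  # → False  (c0 = 11 > 0)
```

A sampler given constraints_func sees exactly these numbers and treats positive values as (soft) violations.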
This can be achieved by using JournalFileStorage or client/server RDBs (such as PostgreSQL and MySQL).
For more information about 2., see TutorialEasyParallelization.
This can be achieved by using client/server RDBs (such as PostgreSQL and MySQL). However, if you are in an environment where you cannot install a client/server RDB, you cannot run multi-processing parallelization with multiple nodes.
For more information about 3., see TutorialEasyParallelization.
6.4.18 How can I solve the error that occurs when performing parallel optimization
with SQLite3?
We would never recommend SQLite3 for parallel optimization, for the following reasons.
• To concurrently evaluate trials enqueued by enqueue_trial(), RDBStorage uses SELECT ... FOR UPDATE syntax, which is unsupported in SQLite3.
• As described in SQLAlchemy's documentation, SQLite3 (and the pysqlite driver) does not support a high level of concurrency. You may get a "database is locked" error, which occurs when one thread or process has an exclusive lock on a database connection (in reality a file handle) and another thread times out waiting for the lock to be released. You can, however, increase the default timeout value, e.g. optuna.storages.RDBStorage("sqlite:///example.db", engine_kwargs={"connect_args": {"timeout": 20.0}}).
• For distributed optimization via NFS, SQLite3 does not work, as described in the FAQ section of sqlite.org.
• For distributed optimization via NFS, SQLite3 does not work as described at FAQ section of sqlite.org.
If you want to use a file-based Optuna storage for these scenarios, please consider using JournalFileStorage instead.
import optuna
from optuna.storages import JournalStorage, JournalFileStorage
storage = JournalStorage(JournalFileStorage("optuna-journal.log"))
study = optuna.create_study(storage=storage)
...
6.4.19 Can I monitor trials and make them failed automatically when they are killed
unexpectedly?
A process running a trial could be killed unexpectedly, typically by a job scheduler in a cluster environment. If trials are killed unexpectedly, they will be left in the storage with their state RUNNING until we remove or update them manually. For such a case, Optuna supports monitoring trials using a heartbeat mechanism. With heartbeats, if a process running a trial is killed unexpectedly, Optuna will automatically change the state of the trial that was running on that process from RUNNING to FAIL.
import optuna


def objective(trial):
    ...  # Very time-consuming computation.


storage = optuna.storages.RDBStorage(url="sqlite:///example.db", heartbeat_interval=60, grace_period=120)
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=100)
Note: The heartbeat is supposed to be used with optimize(). If you use ask() and tell(), please change the state
of the killed trials by calling tell() explicitly.
You can also execute a callback function to process the failed trial. Optuna provides a callback to retry failed trials: RetryFailedTrialCallback. Note that the callback is invoked at the beginning of each trial, which means RetryFailedTrialCallback will retry failed trials when a new trial starts to evaluate.
import optuna
from optuna.storages import RetryFailedTrialCallback

storage = optuna.storages.RDBStorage(
    url="sqlite:///:memory:",
    heartbeat_interval=60,
    grace_period=120,
    failed_trial_callback=RetryFailedTrialCallback(max_retry=3),
)

study = optuna.create_study(storage=storage)
Although it is not straightforward to deal with combinatorial search spaces like permutations with the existing API, there exists a convenient technique for handling them. It involves re-parametrizing a permutation search space of n items as an independent n-dimensional integer search space. This technique is based on the concept of the Lehmer code.
The Lehmer code of a sequence is a sequence of integers of the same length, whose i-th entry denotes how many inversions the i-th entry of the permutation has after itself. In other words, the i-th entry of the Lehmer code represents the number of entries that are located after and are smaller than the i-th entry of the original sequence. For instance, the Lehmer code of the permutation (3, 1, 4, 2, 0) is (3, 1, 2, 1, 0).
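The encoding and its inverse are easy to write down; a small standalone sketch (independent of Optuna):

```python
def lehmer_encode(perm):
    # i-th entry: number of later elements smaller than perm[i].
    return [sum(1 for later in perm[i + 1:] if later < v) for i, v in enumerate(perm)]


def lehmer_decode(code):
    # Pick the code[i]-th smallest of the remaining items at each step.
    remaining = list(range(len(code)))
    return [remaining.pop(c) for c in code]


print(lehmer_encode([3, 1, 4, 2, 0]))  # → [3, 1, 2, 1, 0]
print(lehmer_decode([3, 1, 2, 1, 0]))  # → [3, 1, 4, 2, 0]
```

In an Optuna objective, the i-th code entry would be drawn with suggest_int over [0, n - 1 - i] and then decoded into the permutation to evaluate.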
Not only does the Lehmer code provide a unique encoding of permutations into an integer space, but it also has some desirable properties. For example, the sum of the Lehmer code entries is equal to the minimum number of adjacent transpositions necessary to transform the corresponding permutation into the identity permutation. Additionally, the lexicographical order of the encodings of two permutations matches that of the original sequences. Therefore, the Lehmer code preserves "closeness" among permutations in some sense, which is important for optimization algorithms. An Optuna implementation example to solve the Euclidean TSP is as follows:
import numpy as np
import optuna
)
return total_distance
study = optuna.create_study()
study.optimize(objective, n_trials=10)
lehmer_code = study.best_params.values()
print(decode(lehmer_code))
Optuna may sometimes suggest parameters that have already been evaluated in the past. If you would like to avoid this, you can try the following workaround:
import optuna
from optuna.trial import TrialState
def objective(trial):
    # Sample parameters.
    x = trial.suggest_int("x", -5, 5)
    y = trial.suggest_int("y", -5, 5)

    # Fetch all the trials to consider.
    # In this example, we use only completed trials, but users can specify other states
    # such as TrialState.PRUNED and TrialState.FAIL.
    states_to_consider = (TrialState.COMPLETE,)
    trials_to_consider = trial.study.get_trials(deepcopy=False, states=states_to_consider)

    # Check whether we already evaluated the sampled `(x, y)`.
    for t in reversed(trials_to_consider):
        if trial.params == t.params:
            # Use the existing value, as the trial duplicated the parameters.
            return t.value

    # Compute the objective function if the parameters are not duplicated.
    return x**2 + y**2
study = optuna.create_study()
study.optimize(objective, n_trials=100)
SEVEN
INDICES AND TABLES
• genindex
• modindex
• search
PYTHON MODULE INDEX
optuna
optuna.artifacts
optuna.cli
optuna.distributions
optuna.exceptions
optuna.importance
optuna.integration
optuna.logging
optuna.pruners
optuna.samplers
optuna.samplers.nsgaii
optuna.search_space
optuna.storages
optuna.study
optuna.terminator
optuna.trial
optuna.visualization
optuna.visualization.matplotlib
317
Optuna Documentation, Release 3.5.0.dev
A method), 149
acquire() (optuna.storages.JournalFileOpenLock after_trial() (optuna.samplers.PartialFixedSampler
method), 203 method), 144
acquire() (optuna.storages.JournalFileSymlinkLock after_trial() (optuna.samplers.QMCSampler
method), 202 method), 161
add_note() (optuna.exceptions.CLIUsageError after_trial() (optuna.samplers.RandomSampler
method), 41 method), 129
add_note() (optuna.exceptions.DuplicatedStudyError after_trial() (optuna.samplers.TPESampler
method), 41 method), 135
add_note() (optuna.exceptions.OptunaError method), append_logs() (optuna.storages.JournalFileStorage
39 method), 201
add_note() (optuna.exceptions.StorageInternalError append_logs() (optuna.storages.JournalRedisStorage
method), 41 method), 204
add_note() (optuna.exceptions.TrialPruned method), as_integer_ratio() (optuna.study.StudyDirection
40 method), 227
add_note() (optuna.TrialPruned method), 21 as_integer_ratio() (optuna.trial.TrialState method),
add_trial() (optuna.study.Study method), 206 256
add_trials() (optuna.study.Study method), 207 ask() (optuna.study.Study method), 208
after_iteration() (op-
tuna.integration.CatBoostPruningCallback B
method), 55 Backoff (class in optuna.artifacts), 25
after_trial() (optuna.integration.BoTorchSampler BaseCrossover (class in optuna.samplers.nsgaii), 169
method), 48 BaseErrorEvaluator (class in optuna.terminator), 234
after_trial() (optuna.integration.CmaEsSampler BaseImprovementEvaluator (class in op-
method), 83 tuna.terminator), 232
after_trial() (optuna.integration.PyCmaSampler BasePruner (class in optuna.pruners), 106
method), 80 BaseSampler (class in optuna.samplers), 121
after_trial() (optuna.integration.SkoptSampler BaseTerminator (class in optuna.terminator), 230
method), 98 before_trial() (optuna.integration.BoTorchSampler
after_trial() (optuna.samplers.BaseSampler method), 48
method), 123 before_trial() (optuna.integration.CmaEsSampler
after_trial() (optuna.samplers.BruteForceSampler method), 84
method), 165 before_trial() (optuna.integration.PyCmaSampler
after_trial() (optuna.samplers.CmaEsSampler method), 81
method), 141 before_trial() (optuna.integration.SkoptSampler
after_trial() (optuna.samplers.GridSampler method), 98
method), 126 before_trial() (optuna.samplers.BaseSampler
after_trial() (optuna.samplers.MOTPESampler method), 123
method), 156 before_trial() (optuna.samplers.BruteForceSampler
after_trial() (optuna.samplers.NSGAIIISampler method), 165
method), 152 before_trial() (optuna.samplers.CmaEsSampler
after_trial() (optuna.samplers.NSGAIISampler method), 141
319
Optuna Documentation, Release 3.5.0.dev
before_trial() (optuna.samplers.GridSampler C
method), 127 calculate() (optuna.samplers.IntersectionSearchSpace
before_trial() (optuna.samplers.MOTPESampler method), 168
method), 157 calculate() (optuna.search_space.IntersectionSearchSpace
before_trial() (optuna.samplers.NSGAIIISampler method), 177
method), 153 CatBoostPruningCallback (class in op-
before_trial() (optuna.samplers.NSGAIISampler tuna.integration), 54
method), 149 CategoricalDistribution (class in op-
before_trial() (optuna.samplers.PartialFixedSampler tuna.distributions), 37
method), 145 check_distribution_compatibility() (in module
before_trial() (optuna.samplers.QMCSampler optuna.distributions), 39
method), 162 check_pruned() (optuna.integration.CatBoostPruningCallback
before_trial() (optuna.samplers.RandomSampler method), 55
method), 130 check_pruned() (optuna.integration.PyTorchLightningPruningCallback
before_trial() (optuna.samplers.TPESampler method), 87
method), 136 check_trial_is_updatable() (op-
best_estimator_ (op- tuna.integration.DaskStorage method), 57
tuna.integration.OptunaSearchCV attribute), check_trial_is_updatable() (op-
92 tuna.storages.JournalStorage method), 193
best_index_ (optuna.integration.OptunaSearchCV check_trial_is_updatable() (op-
property), 93 tuna.storages.RDBStorage method), 180
best_params (optuna.integration.lightgbm.LightGBMTunerchoices (optuna.distributions.CategoricalDistribution
property), 70 attribute), 37
best_params (optuna.integration.lightgbm.LightGBMTunerCVclasses_ (optuna.integration.OptunaSearchCV prop-
property), 73 erty), 93
best_params (optuna.study.Study property), 209 CLIUsageError, 41
best_params_ (optuna.integration.OptunaSearchCV CmaEsSampler (class in optuna.integration), 83
property), 93 CmaEsSampler (class in optuna.samplers), 138
best_score (optuna.integration.lightgbm.LightGBMTuner COMPLETE (optuna.trial.TrialState attribute), 255
property), 70 conjugate() (optuna.study.StudyDirection method),
best_score (optuna.integration.lightgbm.LightGBMTunerCV 227
property), 73 conjugate() (optuna.trial.TrialState method), 257
best_score_ (optuna.integration.OptunaSearchCV copy_study() (in module optuna), 17
property), 93 copy_study() (in module optuna.study), 222
best_trial (optuna.study.Study property), 209 create_new_study() (optuna.integration.DaskStorage
best_trial (optuna.study.StudySummary attribute), 229 method), 57
best_trial_ (optuna.integration.OptunaSearchCV create_new_study() (optuna.storages.JournalStorage
property), 93 method), 193
best_trials (optuna.study.Study property), 209 create_new_study() (optuna.storages.RDBStorage
best_value (optuna.study.Study property), 209 method), 180
BestValueStagnationEvaluator (class in op- create_new_trial() (optuna.integration.DaskStorage
tuna.terminator), 233 method), 57
bit_count() (optuna.study.StudyDirection method), create_new_trial() (optuna.storages.JournalStorage
227 method), 194
bit_count() (optuna.trial.TrialState method), 256 create_new_trial() (optuna.storages.RDBStorage
bit_length() (optuna.study.StudyDirection method), method), 181
227 create_study() (in module optuna), 14
bit_length() (optuna.trial.TrialState method), 256 create_study() (in module optuna.study), 219
BLXAlphaCrossover (class in optuna.samplers.nsgaii), create_trial() (in module optuna.trial), 258
171 crossover() (optuna.samplers.nsgaii.BaseCrossover
Boto3ArtifactStore (class in optuna.artifacts), 23 method), 170
BoTorchSampler (class in optuna.integration), 46 crossover() (optuna.samplers.nsgaii.BLXAlphaCrossover
BruteForceSampler (class in optuna.samplers), 164 method), 172
320 Index
Optuna Documentation, Release 3.5.0.dev
crossover() (optuna.samplers.nsgaii.SBXCrossover E
method), 174 enable_default_handler() (in module op-
crossover() (optuna.samplers.nsgaii.SPXCrossover tuna.logging), 104
method), 173 enable_propagation() (in module optuna.logging),
crossover() (optuna.samplers.nsgaii.UNDXCrossover 105
method), 176 enqueue_trial() (optuna.study.Study method), 210
crossover() (optuna.samplers.nsgaii.UniformCrossover evaluate() (optuna.importance.FanovaImportanceEvaluator
method), 171 method), 44
crossover() (optuna.samplers.nsgaii.VSBXCrossover evaluate() (optuna.importance.MeanDecreaseImpurityImportanceEvalua
method), 175 method), 45
CrossValidationErrorEvaluator (class in op- evaluate() (optuna.terminator.CrossValidationErrorEvaluator
tuna.terminator), 234 method), 234
cv_results_ (optuna.integration.OptunaSearchCV
property), 93 F
FAIL (optuna.trial.TrialState attribute), 255
D fail_stale_trials() (in module optuna.storages),
DaskStorage (class in optuna.integration), 55 191
datetime_complete (optuna.trial.FrozenTrial at- FanovaImportanceEvaluator (class in op-
tribute), 252 tuna.importance), 43
datetime_start (optuna.study.StudySummary at- FastAIPruningCallback (in module op-
tribute), 229 tuna.integration), 67
datetime_start (optuna.trial.FrozenTrial attribute), FastAIV1PruningCallback (class in op-
252 tuna.integration), 65
datetime_start (optuna.trial.Trial property), 237
decision_function (optuna.integration.OptunaSearchCV property), 93
delete_study() (in module optuna), 17
delete_study() (in module optuna.study), 222
delete_study() (optuna.integration.DaskStorage method), 58
delete_study() (optuna.storages.JournalStorage method), 194
delete_study() (optuna.storages.RDBStorage method), 181
denominator (optuna.study.StudyDirection attribute), 227
denominator (optuna.trial.TrialState attribute), 257
direction (optuna.study.Study property), 210
direction (optuna.study.StudySummary attribute), 229
directions (optuna.study.Study property), 210
directions (optuna.study.StudySummary attribute), 229
disable_default_handler() (in module optuna.logging), 103
disable_propagation() (in module optuna.logging), 104
DiscreteUniformDistribution (class in optuna.distributions), 32
distribution_to_json() (in module optuna.distributions), 38
distributions (optuna.trial.FrozenTrial attribute), 252
distributions (optuna.trial.Trial property), 238
DuplicatedStudyError, 41
duration (optuna.trial.FrozenTrial property), 253
FastAIV2PruningCallback (class in optuna.integration), 66
FileSystemArtifactStore (class in optuna.artifacts), 22
fit() (optuna.integration.OptunaSearchCV method), 94
FixedTrial (class in optuna.trial), 247
FloatDistribution (class in optuna.distributions), 27
from_bytes() (optuna.study.StudyDirection method), 227
from_bytes() (optuna.trial.TrialState method), 257
FrozenTrial (class in optuna.trial), 250

G

GCSArtifactStore (class in optuna.artifacts), 24
get_all_studies() (optuna.integration.DaskStorage method), 58
get_all_studies() (optuna.storages.JournalStorage method), 194
get_all_studies() (optuna.storages.RDBStorage method), 181
get_all_study_names() (in module optuna), 19
get_all_study_names() (in module optuna.study), 224
get_all_study_summaries() (in module optuna), 19
get_all_study_summaries() (in module optuna.study), 224
get_all_trials() (optuna.integration.DaskStorage method), 58
get_all_trials() (optuna.storages.JournalStorage method), 194
get_all_trials() (optuna.storages.RDBStorage method), 181
sample_relative() (optuna.integration.BoTorchSampler method), 50
sample_relative() (optuna.integration.CmaEsSampler method), 85
sample_relative() (optuna.integration.PyCmaSampler method), 82
sample_relative() (optuna.integration.SkoptSampler method), 100
sample_relative() (optuna.samplers.BaseSampler method), 125
sample_relative() (optuna.samplers.BruteForceSampler method), 167
sample_relative() (optuna.samplers.CmaEsSampler method), 143
sample_relative() (optuna.samplers.GridSampler method), 128
sample_relative() (optuna.samplers.MOTPESampler method), 159
sample_relative() (optuna.samplers.NSGAIIISampler method), 154
sample_relative() (optuna.samplers.NSGAIISampler method), 151
sample_relative() (optuna.samplers.PartialFixedSampler method), 146
sample_relative() (optuna.samplers.QMCSampler method), 163
sample_relative() (optuna.samplers.RandomSampler method), 131
sample_relative() (optuna.samplers.TPESampler method), 138
sample_train_set() (optuna.integration.lightgbm.LightGBMTuner method), 70
sample_train_set() (optuna.integration.lightgbm.LightGBMTunerCV method), 73
save_snapshot() (optuna.storages.JournalRedisStorage method), 204
SBXCrossover (class in optuna.samplers.nsgaii), 173
score() (optuna.integration.OptunaSearchCV method), 95
score_samples (optuna.integration.OptunaSearchCV property), 95
scorer_ (optuna.integration.OptunaSearchCV attribute), 92
set_fit_request() (optuna.integration.OptunaSearchCV method), 95
set_metric_names() (optuna.study.Study method), 213
set_params() (optuna.integration.OptunaSearchCV method), 96
set_study_system_attr() (optuna.integration.DaskStorage method), 62
set_study_system_attr() (optuna.storages.JournalStorage method), 199
set_study_system_attr() (optuna.storages.RDBStorage method), 186
set_study_user_attr() (optuna.integration.DaskStorage method), 62
set_study_user_attr() (optuna.storages.JournalStorage method), 199
set_study_user_attr() (optuna.storages.RDBStorage method), 187
set_system_attr() (optuna.integration.TorchDistributedTrial method), 88
set_system_attr() (optuna.study.Study method), 214
set_system_attr() (optuna.trial.FixedTrial method), 248
set_system_attr() (optuna.trial.FrozenTrial method), 253
set_system_attr() (optuna.trial.Trial method), 239
set_trial_intermediate_value() (optuna.integration.DaskStorage method), 63
set_trial_intermediate_value() (optuna.storages.JournalStorage method), 199
set_trial_intermediate_value() (optuna.storages.RDBStorage method), 187
set_trial_param() (optuna.integration.DaskStorage method), 63
set_trial_param() (optuna.storages.JournalStorage method), 200
set_trial_param() (optuna.storages.RDBStorage method), 187
set_trial_state_values() (optuna.integration.DaskStorage method), 63
set_trial_state_values() (optuna.storages.JournalStorage method), 200
set_trial_state_values() (optuna.storages.RDBStorage method), 188
set_trial_system_attr() (optuna.integration.DaskStorage method), 64
set_trial_system_attr() (optuna.storages.JournalStorage method), 200
set_trial_system_attr() (optuna.storages.RDBStorage method), 188
set_trial_user_attr() (optuna.integration.DaskStorage method), 64
set_trial_user_attr() (optuna.storages.JournalStorage method), 201
set_trial_user_attr() (op-