Commit fbf7a0e · Veghit/scikit-learn · GitHub

DOC document that last step is never cached in pipeline (scikit-learn#25995)

windiana42 authored and glemaitre committed
Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com>
1 parent 3cbc887 commit fbf7a0e

File tree

2 files changed, +15 −15 lines changed

doc/modules/compose.rst (1 addition, 1 deletion)

@@ -198,7 +198,7 @@ after calling ``fit``.
 This feature is used to avoid computing the fit transformers within a pipeline
 if the parameters and input data are identical. A typical example is the case of
 a grid search in which the transformers can be fitted only once and reused for
-each configuration.
+each configuration. The last step will never be cached, even if it is a transformer.
 
 The parameter ``memory`` is needed in order to cache the transformers.
 ``memory`` can be either a string containing the directory where to cache the

sklearn/pipeline.py (14 additions, 14 deletions)

@@ -80,13 +80,13 @@ class Pipeline(_BaseComposition):
         estimator.
 
     memory : str or object with the joblib.Memory interface, default=None
-        Used to cache the fitted transformers of the pipeline. By default,
-        no caching is performed. If a string is given, it is the path to
-        the caching directory. Enabling caching triggers a clone of
-        the transformers before fitting. Therefore, the transformer
-        instance given to the pipeline cannot be inspected
-        directly. Use the attribute ``named_steps`` or ``steps`` to
-        inspect estimators within the pipeline. Caching the
+        Used to cache the fitted transformers of the pipeline. The last step
+        will never be cached, even if it is a transformer. By default, no
+        caching is performed. If a string is given, it is the path to the
+        caching directory. Enabling caching triggers a clone of the transformers
+        before fitting. Therefore, the transformer instance given to the
+        pipeline cannot be inspected directly. Use the attribute ``named_steps``
+        or ``steps`` to inspect estimators within the pipeline. Caching the
         transformers is advantageous when fitting is time consuming.
 
     verbose : bool, default=False
@@ -858,13 +858,13 @@ def make_pipeline(*steps, memory=None, verbose=False):
         List of the scikit-learn estimators that are chained together.
 
     memory : str or object with the joblib.Memory interface, default=None
-        Used to cache the fitted transformers of the pipeline. By default,
-        no caching is performed. If a string is given, it is the path to
-        the caching directory. Enabling caching triggers a clone of
-        the transformers before fitting. Therefore, the transformer
-        instance given to the pipeline cannot be inspected
-        directly. Use the attribute ``named_steps`` or ``steps`` to
-        inspect estimators within the pipeline. Caching the
+        Used to cache the fitted transformers of the pipeline. The last step
+        will never be cached, even if it is a transformer. By default, no
+        caching is performed. If a string is given, it is the path to the
+        caching directory. Enabling caching triggers a clone of the transformers
+        before fitting. Therefore, the transformer instance given to the
+        pipeline cannot be inspected directly. Use the attribute ``named_steps``
+        or ``steps`` to inspect estimators within the pipeline. Caching the
         transformers is advantageous when fitting is time consuming.
 
     verbose : bool, default=False
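The behavior this docstring describes can be exercised with a short sketch. The cache directory below is an arbitrary scratch path, not one mandated by scikit-learn; `memory` accepts any directory string (or a `joblib.Memory` object). Intermediate transformers are cached and cloned, while the last step is always refit:

```python
import tempfile

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scratch directory used as the joblib.Memory cache location (arbitrary path).
cache_dir = tempfile.mkdtemp()

X, y = make_classification(n_samples=100, random_state=0)

# The intermediate StandardScaler is eligible for caching; the final
# LogisticRegression step is the last step and is never cached.
pipe = make_pipeline(StandardScaler(), LogisticRegression(), memory=cache_dir)
pipe.fit(X, y)

# Enabling caching clones the transformers before fitting, so inspect the
# fitted steps through named_steps rather than the original instances.
scaler = pipe.named_steps["standardscaler"]
print(scaler.mean_.shape)
```

Because caching triggers a clone, the `StandardScaler()` instance passed to `make_pipeline` is never fitted itself; only the clone stored in `named_steps` carries fitted attributes such as `mean_`.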
