A Bug at the `inverse_transform` of the `KernelPCA` · Issue #18902 · scikit-learn/scikit-learn · GitHub

A Bug at the inverse_transform of the KernelPCA #18902
Closed
@kstoneriv3

Description

I have a question about the following line in KernelPCA. It appears in the reprojection of sample points in inverse_transform, but it seems unnecessary to me. There, kernel ridge regression is used to (approximately) reproject the transformed samples (i.e., the coefficients of the principal components) back to the original space, using precomputed dual coefficients. Although alpha must be added to the diagonal elements of the kernel matrix when computing the dual coefficients (as is done in _fit_inverse_transform), as far as I understand, alpha should not be added to the kernel again at the prediction stage.

K.flat[::n_samples + 1] += self.alpha

Is this line really necessary? If so, I would appreciate it if someone could explain why this line is needed.
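A minimal NumPy sketch of the dual-form kernel ridge regression being described (function and variable names are mine, not scikit-learn's): alpha is added to the training kernel's diagonal only when solving for the dual coefficients, and prediction then uses the rectangular test-vs-train kernel unmodified. Applying the questioned flat-index update to that rectangular matrix instead perturbs one entry per test row, shifting each prediction by an alpha-scaled dual coefficient of an arbitrarily paired training point.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, then the RBF kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_fit = rng.normal(size=(40, 2))       # inputs of the ridge regression
K_fit = rbf_kernel(X_fit, X_fit)
C = rng.normal(size=(40, 3))
Y = K_fit @ C                          # targets realizable by the kernel model
alpha = 1e-3

# Fit stage: alpha enters ONLY here, on the diagonal of the training
# kernel (this mirrors what _fit_inverse_transform does).
K_reg = K_fit.copy()
K_reg.flat[::K_reg.shape[0] + 1] += alpha
dual_coef = np.linalg.solve(K_reg, Y)

# Prediction stage: the (n_test, n_train) cross-kernel is used as-is;
# the regularization was already absorbed into dual_coef.
X_new = rng.normal(size=(7, 2))
K_new = rbf_kernel(X_new, X_fit)
Y_pred = K_new @ dual_coef             # prediction without the extra alpha

# What the questioned line does instead: on this rectangular matrix, a
# stride of (n_train + 1) through the flat buffer hits K_new[i, i] for
# each test row i, so prediction i is shifted by alpha * dual_coef[i],
# pairing test point i with training point i arbitrarily.
K_bad = K_new.copy()
K_bad.flat[::K_fit.shape[0] + 1] += alpha
Y_bad = K_bad @ dual_coef
```

Under this sketch, `Y_bad - Y_pred` equals `alpha * dual_coef[:7]` exactly, which is the bias the question is pointing at.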
