⚠️ CI failed on Linux_Docker.debian_32bit (last failure: Apr 19, 2025) ⚠️ · Issue #29802 · scikit-learn/scikit-learn


Open
scikit-learn-bot opened this issue Sep 8, 2024 · 3 comments

Comments

scikit-learn-bot (Contributor) commented Sep 8, 2024

CI is still failing on Linux_Docker.debian_32bit (Apr 19, 2025)

  • test_format_agnosticism[30-csr_matrix-RadiusNeighbors-float32]
  • test_format_agnosticism[30-csr_array-RadiusNeighbors-float32]
github-actions bot added the Needs Triage label on Sep 8, 2024
scikit-learn-bot (Contributor, Author) commented Sep 9, 2024

CI is no longer failing! ✅

Successful run on May 21, 2025

jeremiedbb (Member) commented:

Probably a test that is too sensitive to the random seed.
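For context on why such a test can be seed-sensitive (an illustration of the general mechanism, not a diagnosis of this specific failure): float32 comparisons under tight tolerances depend on accumulation order, and the size of the rounding error varies with the random data drawn for a given seed. A plain-NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
a = rng.random(10_000, dtype=np.float32)

# The same mathematical sum computed in two accumulation orders.
# float32 rounding makes the results differ slightly, so a strict
# tolerance check can pass or fail depending on the data drawn.
naive = np.float32(0.0)
for v in a:
    naive += v  # sequential left-to-right accumulation

vectorized = a.sum()  # NumPy uses pairwise summation internally

print(abs(float(naive) - float(vectorized)))
```

With a different seed the discrepancy changes, which is how a tolerance chosen against one seed can fail on another.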

jeremiedbb removed the Needs Triage label on Sep 9, 2024
scikit-learn-bot updated the title's last-failure date to Sep 27, 2024 (on Sep 27, 2024)
scikit-learn-bot updated the title's last-failure date to Jan 08, 2025 (on Jan 8, 2025)
scikit-learn-bot updated the title's last-failure date to Mar 06, 2025 (on Mar 6, 2025)
scikit-learn-bot updated the title's last-failure date to Mar 13, 2025 (on Mar 13, 2025)
scikit-learn-bot updated the title's last-failure date to Mar 28, 2025 (on Mar 28, 2025)
lesteve (Member) commented Mar 28, 2025

The March 28 failure build log is genuine. All the PRs with a recent push seem broken, e.g. a polars-related one (#31095) or a pytest-related one (#31074). A separate issue was created: #31098.

No idea why this would happen off the top of my head 😱 maybe a Debian Docker image upgrade somehow?

Failing tests:

FAILED tests/test_common.py::test_estimators[LinearRegression(positive=True)-check_sample_weight_equivalence_on_dense_data] - AssertionError: 
FAILED utils/tests/test_estimator_checks.py::test_check_estimator_clones - AssertionError: 

The error message is something like:

Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.
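The invariant behind this check can be sketched with plain NumPy (deliberately avoiding scikit-learn itself, so this is an illustration of the property being tested, not the project's actual check): for ordinary least squares, fitting with integer sample weights should give the same solution as fitting on data where each row is physically repeated `weight` times.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((15, 4))
y = rng.random(15)
sw = rng.integers(0, 5, size=15)  # integer weights; zero means "drop the row"

# Weighted OLS: scaling rows by sqrt(weight) before solving least squares
# minimizes sum_i sw[i] * (X[i] @ coef - y[i])**2.
w_sqrt = np.sqrt(sw)
coef_weighted, *_ = np.linalg.lstsq(X * w_sqrt[:, None], y * w_sqrt, rcond=None)

# Equivalent fit: physically repeat row i exactly sw[i] times.
coef_repeated, *_ = np.linalg.lstsq(
    np.repeat(X, sw, axis=0), np.repeat(y, sw), rcond=None
)

np.testing.assert_allclose(coef_weighted, coef_repeated, rtol=1e-6, atol=1e-9)
print("weighted and repeated fits agree")
```

The failing check in the traceback performs the analogous comparison on `LinearRegression(positive=True)` predictions; with `positive=True` the fit goes through a non-negative least squares solver, which may make the equivalence harder to satisfy numerically.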
Stacktraces for the two failing tests
=================================== FAILURES ===================================
_ test_estimators[LinearRegression(positive=True)-check_sample_weight_equivalence_on_dense_data] _

estimator = LinearRegression(positive=True)
check = functools.partial(<function check_sample_weight_equivalence_on_dense_data at 0xd91bde88>, 'LinearRegression')
request = <FixtureRequest for <Function test_estimators[LinearRegression(positive=True)-check_sample_weight_equivalence_on_dense_data]>>

    @parametrize_with_checks(
        list(_tested_estimators()), expected_failed_checks=_get_expected_failed_checks
    )
    def test_estimators(estimator, check, request):
        # Common tests for estimator instances
        with ignore_warnings(
            category=(FutureWarning, ConvergenceWarning, UserWarning, LinAlgWarning)
        ):
>           check(estimator)

check      = functools.partial(<function check_sample_weight_equivalence_on_dense_data at 0xd91bde88>, 'LinearRegression')
estimator  = LinearRegression(positive=True)
request    = <FixtureRequest for <Function test_estimators[LinearRegression(positive=True)-check_sample_weight_equivalence_on_dense_data]>>

/io/sklearn/tests/test_common.py:122: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/io/sklearn/utils/estimator_checks.py:1570: in check_sample_weight_equivalence_on_dense_data
    _check_sample_weight_equivalence(name, estimator_orig, sparse_container=None)
        estimator_orig = LinearRegression(positive=True)
        name       = 'LinearRegression'
/io/sklearn/utils/_testing.py:145: in wrapper
    return fn(*args, **kwargs)
        args       = ('LinearRegression', LinearRegression(positive=True))
        fn         = <function _check_sample_weight_equivalence at 0xd91bdde8>
        kwargs     = {'sparse_container': None}
        self       = _IgnoreWarnings(record=True)
/io/sklearn/utils/estimator_checks.py:1566: in _check_sample_weight_equivalence
    assert_allclose_dense_sparse(X_pred1, X_pred2, err_msg=err_msg)
        X          = array([[0.37454012, 0.95071431, 0.73199394, 0.59865848, 0.15601864,
        0.15599452, 0.05808361, 0.86617615, 0.6011..., 0.98663958, 0.3742708 , 0.37064215, 0.81279957,
        0.94724858, 0.98600106, 0.75337819, 0.37625959, 0.08350072]])
        X_pred1    = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
        X_pred2    = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
        X_repeated = array([[0.37454012, 0.95071431, 0.73199394, 0.59865848, 0.15601864,
        0.15599452, 0.05808361, 0.86617615, 0.6011..., 0.98663958, 0.3742708 , 0.37064215, 0.81279957,
        0.94724858, 0.98600106, 0.75337819, 0.37625959, 0.08350072]])
        X_weighted = array([[0.60754485, 0.17052412, 0.06505159, 0.94888554, 0.96563203,
        0.80839735, 0.30461377, 0.09767211, 0.6842..., 0.69673717, 0.62894285, 0.87747201, 0.73507104,
        0.80348093, 0.28203457, 0.17743954, 0.75061475, 0.80683474]])
        err_msg    = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
        estimator_orig = LinearRegression(positive=True)
        estimator_repeated = LinearRegression(positive=True)
        estimator_weighted = LinearRegression(positive=True)
        method     = 'predict'
        n_samples  = 15
        name       = 'LinearRegression'
        rng        = RandomState(MT19937) at 0xCCEBACE8
        sparse_container = None
        sw         = array([3, 4, 0, 3, 1, 0, 4, 4, 0, 3, 0, 0, 3, 2, 0])
        y          = array([0, 1, 2, 2, 1, 1, 2, 2, 1, 2, 0, 0, 1, 1, 1])
        y_repeated = array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       1, 1, 1, 1, 1])
        y_weighted = array([1, 2, 1, 2, 1, 1, 2, 1, 0, 2, 0, 2, 0, 1, 1])
/io/sklearn/utils/_testing.py:283: in assert_allclose_dense_sparse
    assert_allclose(x, y, rtol=rtol, atol=atol, err_msg=err_msg)
        atol       = 1e-09
        err_msg    = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
        rtol       = 1e-07
        x          = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
        y          = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

actual = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
desired = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
rtol = 1e-07, atol = 1e-09, equal_nan = True
err_msg = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
verbose = True

    def assert_allclose(
        actual, desired, rtol=None, atol=0.0, equal_nan=True, err_msg="", verbose=True
    ):
        """dtype-aware variant of numpy.testing.assert_allclose
    
        This variant introspects the least precise floating point dtype
        in the input argument and automatically sets the relative tolerance
        parameter to 1e-4 float32 and use 1e-7 otherwise (typically float64
        in scikit-learn).
    
        `atol` is always left to 0. by default. It should be adjusted manually
        to an assertion-specific value in case there are null values expected
        in `desired`.
    
        The aggregate tolerance is `atol + rtol * abs(desired)`.
    
        Parameters
        ----------
        actual : array_like
            Array obtained.
        desired : array_like
            Array desired.
        rtol : float, optional, default=None
            Relative tolerance.
            If None, it is set based on the provided arrays' dtypes.
        atol : float, optional, default=0.
            Absolute tolerance.
        equal_nan : bool, optional, default=True
            If True, NaNs will compare equal.
        err_msg : str, optional, default=''
            The error message to be printed in case of failure.
        verbose : bool, optional, default=True
            If True, the conflicting values are appended to the error message.
    
        Raises
        ------
        AssertionError
            If actual and desired are not equal up to specified precision.
    
        See Also
        --------
        numpy.testing.assert_allclose
    
        Examples
        --------
        >>> import numpy as np
        >>> from sklearn.utils._testing import assert_allclose
        >>> x = [1e-5, 1e-3, 1e-1]
        >>> y = np.arccos(np.cos(x))
        >>> assert_allclose(x, y, rtol=1e-5, atol=0)
        >>> a = np.full(shape=10, fill_value=1e-5, dtype=np.float32)
        >>> assert_allclose(a, 1e-5)
        """
        dtypes = []
    
        actual, desired = np.asanyarray(actual), np.asanyarray(desired)
        dtypes = [actual.dtype, desired.dtype]
    
        if rtol is None:
            rtols = [1e-4 if dtype == np.float32 else 1e-7 for dtype in dtypes]
            rtol = max(rtols)
    
>       np_assert_allclose(
            actual,
            desired,
            rtol=rtol,
            atol=atol,
            equal_nan=equal_nan,
            err_msg=err_msg,
            verbose=verbose,
        )
E       AssertionError: 
E       Not equal to tolerance rtol=1e-07, atol=1e-09
E       Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.
E       Mismatched elements: 6 / 15 (40%)
E       Max absolute difference among violations: 2.51014256
E       Max relative difference among violations: 2.17024526
E        ACTUAL: array([ 8.881784e-16,  1.000000e+00,  2.000000e+00,  1.185498e+00,
E               4.062418e+00,  1.000000e+00,  2.000000e+00,  2.000000e+00,
E               4.105658e+00,  2.000000e+00, -2.799363e-02, -8.906428e-01,
E              -8.008100e-01,  1.000000e+00,  1.000000e+00])
E        DESIRED: array([0.      , 1.      , 2.      , 0.941865, 1.726709, 1.      ,
E              2.      , 2.      , 1.872389, 2.      , 1.508778, 0.761074,
E              1.709333, 1.      , 1.      ])

actual     = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
atol       = 1e-09
desired    = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
dtypes     = [dtype('float64'), dtype('float64')]
equal_nan  = True
err_msg    = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
rtol       = 1e-07
verbose    = True

/io/sklearn/utils/_testing.py:237: AssertionError
_________________________ test_check_estimator_clones __________________________

    def test_check_estimator_clones():
        # check that check_estimator doesn't modify the estimator it receives
    
        iris = load_iris()
    
        for Estimator in [
            GaussianMixture,
            LinearRegression,
            SGDClassifier,
            PCA,
            MiniBatchKMeans,
        ]:
            # without fitting
            with ignore_warnings(category=ConvergenceWarning):
                est = Estimator()
                set_random_state(est)
                old_hash = joblib.hash(est)
>               check_estimator(
                    est, expected_failed_checks=_get_expected_failed_checks(est)
                )

Estimator  = <class 'sklearn.linear_model._base.LinearRegression'>
est        = LinearRegression()
iris       = {'data': array([[5.1, 3.5, 1.4, 0.2],
       [4.9, 3. , 1.4, 0.2],
       [4.7, 3.2, 1.3, 0.2],
       [4.6, 3.1, 1.5,... width (cm)', 'petal length (cm)', 'petal width (cm)'], 'filename': 'iris.csv', 'data_module': 'sklearn.datasets.data'}
old_hash   = 'fdcbee8ed611695d1e19a9bdabd615ac'

/io/sklearn/utils/tests/test_estimator_checks.py:919: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/io/sklearn/utils/_param_validation.py:218: in wrapper
    return func(*args, **kwargs)
        args       = (LinearRegression(),)
        func       = <function check_estimator at 0xd91bd668>
        func_sig   = <Signature (estimator=None, generate_only=False, *, legacy: 'bool' = True, expected_failed_checks: 'dict[str, str] | N...al['warn'] | None" = 'warn', on_fail: "Literal['raise', 'warn'] | None" = 'raise', callback: 'Callable | None' = None)>
        global_skip_validation = False
        kwargs     = {'expected_failed_checks': {}}
        parameter_constraints = {'callback': [<built-in function callable>, None], 'expected_failed_checks': [<class 'dict'>, None], 'generate_only': ['boolean'], 'legacy': ['boolean'], ...}
        params     = {'callback': None, 'estimator': LinearRegression(), 'expected_failed_checks': {}, 'generate_only': False, ...}
        prefer_skip_nested_validation = False
        to_ignore  = ['self', 'cls']
/io/sklearn/utils/estimator_checks.py:856: in check_estimator
    check(estimator)
        callback   = None
        check      = functools.partial(<function check_sample_weight_equivalence_on_dense_data at 0xd91bde88>, 'LinearRegression')
        check_result = {'check_name': 'check_sample_weight_equivalence_on_dense_data', 'estimator': LinearRegression(), 'exception': None, 'expected_to_fail': False, ...}
        estimator  = LinearRegression(positive=True)
        expected_failed_checks = {}
        generate_only = False
        legacy     = True
        name       = 'LinearRegression'
        on_fail    = 'raise'
        on_skip    = 'warn'
        reason     = 'Check is not expected to fail'
        test_can_fail = False
        test_results = [{'check_name': 'check_estimator_cloneable', 'estimator': LinearRegression(), 'exception': None, 'expected_to_fail': F...k_no_attributes_set_in_init', 'estimator': LinearRegression(), 'exception': None, 'expected_to_fail': False, ...}, ...]
/io/sklearn/utils/estimator_checks.py:1570: in check_sample_weight_equivalence_on_dense_data
    _check_sample_weight_equivalence(name, estimator_orig, sparse_container=None)
        estimator_orig = LinearRegression(positive=True)
        name       = 'LinearRegression'
/io/sklearn/utils/_testing.py:145: in wrapper
    return fn(*args, **kwargs)
        args       = ('LinearRegression', LinearRegression(positive=True))
        fn         = <function _check_sample_weight_equivalence at 0xd91bdde8>
        kwargs     = {'sparse_container': None}
        self       = _IgnoreWarnings(record=True)
/io/sklearn/utils/estimator_checks.py:1566: in _check_sample_weight_equivalence
    assert_allclose_dense_sparse(X_pred1, X_pred2, err_msg=err_msg)
        X          = array([[0.37454012, 0.95071431, 0.73199394, 0.59865848, 0.15601864,
        0.15599452, 0.05808361, 0.86617615, 0.6011..., 0.98663958, 0.3742708 , 0.37064215, 0.81279957,
        0.94724858, 0.98600106, 0.75337819, 0.37625959, 0.08350072]])
        X_pred1    = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
        X_pred2    = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
        X_repeated = array([[0.37454012, 0.95071431, 0.73199394, 0.59865848, 0.15601864,
        0.15599452, 0.05808361, 0.86617615, 0.6011..., 0.98663958, 0.3742708 , 0.37064215, 0.81279957,
        0.94724858, 0.98600106, 0.75337819, 0.37625959, 0.08350072]])
        X_weighted = array([[0.60754485, 0.17052412, 0.06505159, 0.94888554, 0.96563203,
        0.80839735, 0.30461377, 0.09767211, 0.6842..., 0.69673717, 0.62894285, 0.87747201, 0.73507104,
        0.80348093, 0.28203457, 0.17743954, 0.75061475, 0.80683474]])
        err_msg    = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
        estimator_orig = LinearRegression(positive=True)
        estimator_repeated = LinearRegression(positive=True)
        estimator_weighted = LinearRegression(positive=True)
        method     = 'predict'
        n_samples  = 15
        name       = 'LinearRegression'
        rng        = RandomState(MT19937) at 0xC9302628
        sparse_container = None
        sw         = array([3, 4, 0, 3, 1, 0, 4, 4, 0, 3, 0, 0, 3, 2, 0])
        y          = array([0, 1, 2, 2, 1, 1, 2, 2, 1, 2, 0, 0, 1, 1, 1])
        y_repeated = array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
       1, 1, 1, 1, 1])
        y_weighted = array([1, 2, 1, 2, 1, 1, 2, 1, 0, 2, 0, 2, 0, 1, 1])
/io/sklearn/utils/_testing.py:283: in assert_allclose_dense_sparse
    assert_allclose(x, y, rtol=rtol, atol=atol, err_msg=err_msg)
        atol       = 1e-09
        err_msg    = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
        rtol       = 1e-07
        x          = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
        y          = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

actual = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
desired = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
rtol = 1e-07, atol = 1e-09, equal_nan = True
err_msg = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
verbose = True

    def assert_allclose(
        actual, desired, rtol=None, atol=0.0, equal_nan=True, err_msg="", verbose=True
    ):
        """dtype-aware variant of numpy.testing.assert_allclose
    
        This variant introspects the least precise floating point dtype
        in the input argument and automatically sets the relative tolerance
        parameter to 1e-4 float32 and use 1e-7 otherwise (typically float64
        in scikit-learn).
    
        `atol` is always left to 0. by default. It should be adjusted manually
        to an assertion-specific value in case there are null values expected
        in `desired`.
    
        The aggregate tolerance is `atol + rtol * abs(desired)`.
    
        Parameters
        ----------
        actual : array_like
            Array obtained.
        desired : array_like
            Array desired.
        rtol : float, optional, default=None
            Relative tolerance.
            If None, it is set based on the provided arrays' dtypes.
        atol : float, optional, default=0.
            Absolute tolerance.
        equal_nan : bool, optional, default=True
            If True, NaNs will compare equal.
        err_msg : str, optional, default=''
            The error message to be printed in case of failure.
        verbose : bool, optional, default=True
            If True, the conflicting values are appended to the error message.
    
        Raises
        ------
        AssertionError
            If actual and desired are not equal up to specified precision.
    
        See Also
        --------
        numpy.testing.assert_allclose
    
        Examples
        --------
        >>> import numpy as np
        >>> from sklearn.utils._testing import assert_allclose
        >>> x = [1e-5, 1e-3, 1e-1]
        >>> y = np.arccos(np.cos(x))
        >>> assert_allclose(x, y, rtol=1e-5, atol=0)
        >>> a = np.full(shape=10, fill_value=1e-5, dtype=np.float32)
        >>> assert_allclose(a, 1e-5)
        """
        dtypes = []
    
        actual, desired = np.asanyarray(actual), np.asanyarray(desired)
        dtypes = [actual.dtype, desired.dtype]
    
        if rtol is None:
            rtols = [1e-4 if dtype == np.float32 else 1e-7 for dtype in dtypes]
            rtol = max(rtols)
    
>       np_assert_allclose(
            actual,
            desired,
            rtol=rtol,
            atol=atol,
            equal_nan=equal_nan,
            err_msg=err_msg,
            verbose=verbose,
        )
E       AssertionError: 
E       Not equal to tolerance rtol=1e-07, atol=1e-09
E       Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.
E       Mismatched elements: 6 / 15 (40%)
E       Max absolute difference among violations: 2.51014256
E       Max relative difference among violations: 2.17024526
E        ACTUAL: array([ 8.881784e-16,  1.000000e+00,  2.000000e+00,  1.185498e+00,
E               4.062418e+00,  1.000000e+00,  2.000000e+00,  2.000000e+00,
E               4.105658e+00,  2.000000e+00, -2.799363e-02, -8.906428e-01,
E              -8.008100e-01,  1.000000e+00,  1.000000e+00])
E        DESIRED: array([0.      , 1.      , 2.      , 0.941865, 1.726709, 1.      ,
E              2.      , 2.      , 1.872389, 2.      , 1.508778, 0.761074,
E              1.709333, 1.      , 1.      ])

actual     = array([ 8.88178420e-16,  1.00000000e+00,  2.00000000e+00,  1.18549798e+00,
        4.06241761e+00,  1.00000000e+00,  2...5767e+00,  2.00000000e+00, -2.79936287e-02, -8.90642835e-01,
       -8.00809991e-01,  1.00000000e+00,  1.00000000e+00])
atol       = 1e-09
desired    = array([0.        , 1.        , 2.        , 0.94186541, 1.72670876,
       1.        , 2.        , 2.        , 1.8723887 , 2.        ,
       1.50877777, 0.76107365, 1.70933257, 1.        , 1.        ])
dtypes     = [dtype('float64'), dtype('float64')]
equal_nan  = True
err_msg    = 'Comparing the output of LinearRegression.predict revealed that fitting with `sample_weight` is not equivalent to fitting with removed or repeated data points.'
rtol       = 1e-07
verbose    = True

/io/sklearn/utils/_testing.py:237: AssertionError

lesteve closed this as completed on Mar 30, 2025
lesteve reopened this on Mar 30, 2025
scikit-learn-bot updated the title's last-failure date to Apr 19, 2025 (on Apr 19, 2025)