SystemError: PY_SSIZE_T_CLEAN macro must be defined for '#' formats · Issue #5919 · triton-lang/triton · GitHub
Description

@famiu

Describe the bug

I'm trying to use Unsloth to finetune a model. When running it, I get the following error:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/famiu/Documents/Dev/discord_cloner/src/discord_cloner/__main__.py", line 115, in <module>
    main()
  File "/home/famiu/Documents/Dev/discord_cloner/src/discord_cloner/__main__.py", line 108, in main
    trainer.train()
  File "<string>", line 157, in train
  File "<string>", line 382, in _fast_inner_training_loop
  File "<string>", line 31, in _unsloth_training_step
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth/models/_utils.py", line 1069, in _unsloth_pre_compute_loss
    return self._old_compute_loss(model, inputs, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/transformers/trainer.py", line 3731, in compute_loss
    outputs = model(**inputs)
              ^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/_compile.py", line 32, in inner
    return disable_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth/models/llama.py", line 1183, in PeftModelForCausalLM_fast_forward
    return self.base_model(
           ^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/peft/tuners/tuners_utils.py", line 197, in forward
    return self.model.forward(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth/models/llama.py", line 1043, in _CausalLM_fast_forward
    outputs = self.model(
              ^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth/models/llama.py", line 836, in LlamaModel_fast_forward
    hidden_states = Unsloth_Offloaded_Gradient_Checkpointer.apply(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/amp/autocast_mode.py", line 503, in decorate_fwd
    return fwd(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth_zoo/gradient_checkpointing.py", line 147, in forward
    output = forward_function(hidden_states, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth/models/llama.py", line 522, in LlamaDecoderLayer_fast_forward
    hidden_states = fast_rms_layernorm(self.input_layernorm, hidden_states)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 745, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth/kernels/rms_layernorm.py", line 210, in fast_rms_layernorm
    out = Fast_RMS_Layernorm.apply(X, W, eps, gemma)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/torch/autograd/function.py", line 575, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib/python3.12/site-packages/unsloth/kernels/rms_layernorm.py", line 156, in forward
    fx[(n_rows,)](
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/triton/runtime/jit.py", line 330, in <lambda>
    return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/triton/runtime/jit.py", line 653, in run
    kernel.run(grid_0, grid_1, grid_2, stream, kernel.function, kernel.packed_metadata, launch_metadata,
    ^^^^^^^^^^
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/triton/compiler/compiler.py", line 395, in __getattribute__
    self._init_handles()
  File "/home/famiu/Documents/Dev/discord_cloner/.venv/lib64/python3.12/site-packages/triton/compiler/compiler.py", line 390, in _init_handles
    self.module, self.function, self.n_regs, self.n_spills = driver.active.utils.load_binary(
SystemError: PY_SSIZE_T_CLEAN macro must be defined for '#' formats

While searching existing GitHub issues, I found #5529, but the solution provided there was not relevant to my case. Searching elsewhere for the error also did not turn up anything useful.

I've tried:

  • Deleting the venv and reinstalling everything.
  • Ensuring that only one of triton and pytorch-triton is installed instead of both (it's the former).

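For reference, the check for a triton / pytorch-triton clash can be done with a small stdlib-only script along these lines (a sketch; nothing here is specific to Triton's own API, it just enumerates installed distributions):

```python
from importlib import metadata

def triton_distributions():
    """Collect (name, version) for every installed distribution whose
    name mentions 'triton', to spot a triton / pytorch-triton clash."""
    found = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or ""
        if "triton" in name.lower():
            found.append((name, dist.version))
    return found

# A healthy environment should print at most one entry here.
print(triton_distributions())
```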
Environment details

OS: Fedora 41
Python version: 3.12.9
Triton version: 3.2.0
PyTorch version: 2.6.0
GPU: NVIDIA GeForce RTX 3060 Laptop GPU

I'm also using Poetry for package management, if that's important.
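The version numbers above can be reproduced with a short script like the following (a sketch; it only reports whatever is importable in the active virtual environment):

```python
import sys

def environment_report():
    """Report the interpreter version plus torch/triton versions,
    skipping anything that is not installed."""
    lines = [f"Python: {sys.version.split()[0]}"]
    for mod_name in ("torch", "triton"):
        try:
            mod = __import__(mod_name)
            lines.append(f"{mod_name}: {mod.__version__}")
        except ImportError:
            lines.append(f"{mod_name}: not installed")
    return lines

print("\n".join(environment_report()))
```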
