Unable to use reasoning models with tool calls using LitellmModel #678
Open
@ukayani

Description

Please read this first

  • Have you read the custom model provider docs, including the 'Common issues' section? Yes
  • Have you searched for related issues? Yes

Describe the question

When the agent attempts to submit a tool call result via the LitellmModel abstraction, the request fails with a 400 Bad Request if reasoning is enabled. I've tested this in particular with Anthropic Claude 3.7 Sonnet with reasoning effort set to high.

Debug information

  • Agents SDK version: v0.0.14
  • LiteLLM version: 1.67.4
  • Python version: 3.13

Repro steps

Here is a modified version of the litellm_provider.py example. The only changes I've made, aside from hardcoding the model, are to pass a reasoning effort. This value is correctly sent to the underlying LiteLLM call and enables reasoning on Claude; however, after Claude decides to use the get_weather tool, the tool submission message chain loses all information related to the thinking blocks, which causes the Anthropic API to return a 400:

litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected `thinking` or `redacted_thinking`, but found `tool_use`. When `thinking` is enabled, a final `assistant` message must start with a thinking block (preceeding the lastmost set of `tool_use` and `tool_result` blocks). We recommend you include thinking blocks from previous turns. To avoid this requirement, disable `thinking`. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}

Example:

import asyncio

from agents import Agent, Runner, function_tool, set_tracing_disabled, ModelSettings
from agents.extensions.models.litellm_model import LitellmModel
from openai.types.shared.reasoning import Reasoning

@function_tool
def get_weather(city: str):
    print(f"[debug] getting weather for {city}")
    return f"The weather in {city} is sunny."


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=LitellmModel(model="anthropic/claude-3-7-sonnet-20250219"),
        tools=[get_weather],
        model_settings=ModelSettings(reasoning=Reasoning(effort="high")),
    )

    result = await Runner.run(agent, "What's the weather in Tokyo?")
    print(result.final_output)

if __name__ == "__main__":
    asyncio.run(main())

Expected behavior

I would expect the agent to return the weather in Tokyo without failure.
When using LiteLLM directly, I'm able to accomplish tool calling with a reasoning model, as sketched below.
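
For reference, here is a rough sketch of the kind of direct LiteLLM usage that works for me. Treat it as illustrative: it assumes LiteLLM's reasoning_effort pass-through for Anthropic and that appending the returned assistant message object back into the message list preserves the thinking blocks; exact field handling may differ by version.

import json

import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

# First turn: Claude emits thinking blocks plus a tool call.
response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=messages,
    tools=tools,
    reasoning_effort="high",
)
assistant_message = response.choices[0].message

# Append the assistant message as-is so provider-specific fields
# (the thinking blocks) survive into the follow-up request.
messages.append(assistant_message)

# Run the tool and submit its result.
tool_call = assistant_message.tool_calls[0]
city = json.loads(tool_call.function.arguments)["city"]
messages.append({
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": f"The weather in {city} is sunny.",
})

# Second turn succeeds because the thinking blocks were never dropped.
final = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",
    messages=messages,
    tools=tools,
    reasoning_effort="high",
)
print(final.choices[0].message.content)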

This issue unfortunately makes it impossible to properly use non-OpenAI reasoning models with the Agents SDK.

Cause

I believe the cause of this error is that the message conversion steps strip model-provider-specific details, such as thinking blocks, from the message chain. These fields are present on the LiteLLM message type but are lost during the following conversion steps performed in LitellmModel:

input item -> chat completion -> LiteLLM
LiteLLM -> chat completion -> output item

To properly support other model providers via LiteLLM, I think there needs to be a way to preserve model-specific message properties across these message models.
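
To make that concrete, here is a rough illustration of what I believe gets dropped. The thinking_blocks field name follows LiteLLM's Anthropic message shape, and the "conversion" below is a paraphrase of the round-trip, not the actual SDK converter code.

# Roughly what LiteLLM returns for a reasoning + tool-call turn on Anthropic
# (field names follow LiteLLM's Anthropic support; values are illustrative):
litellm_assistant_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "toolu_123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'},
    }],
    "thinking_blocks": [
        {"type": "thinking", "thinking": "The user wants the weather...", "signature": "..."},
    ],
}

# LitellmModel round-trips this through the OpenAI chat completion / input item
# schemas, which only have slots for role, content and tool calls, so the
# provider-specific fields fall away (paraphrased, not the real converter):
converted_message = {
    "role": litellm_assistant_message["role"],
    "content": litellm_assistant_message["content"],
    "tool_calls": litellm_assistant_message["tool_calls"],
    # no counterpart for "thinking_blocks" -> lost
}

# When the tool result is sent back, the reconstructed assistant turn starts
# with a tool_use block instead of a thinking block, and Anthropic rejects it
# with the 400 shown above.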

Metadata

Labels: bug (Something isn't working)