Describe the bug
Currently, if I stop the stream mid-response, or if a tool throws an error before finishing, the session becomes unusable. Any new message returns:
litellm.BadRequestError: OpenAIException - An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'.
It seems the session expects unfinished tool calls to be completed, even after failure or interruption. Would be great if the session could recover or reset tool_call state in such cases.
To Reproduce
Steps to reproduce the behavior:
- Trigger a tool in the agent.
- Either raise an exception inside the tool, or stop the stream midway through the tool response.
- Send a new user message.
- The error above occurs: the assistant's tool_call_id was never responded to.
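To make the failure mode concrete, here is a minimal illustration (plain Python, not ADK code) of the history shape that triggers the OpenAI-side validation: an assistant message carrying tool_calls that is not followed by a tool message for each tool_call_id. The message dicts follow the OpenAI chat format; the helper function and the history content are hypothetical.

```python
def unresolved_tool_call_ids(messages):
    """Return the tool_call_ids that have no matching tool response message."""
    pending = set()
    for msg in messages:
        if msg.get("role") == "assistant":
            for call in msg.get("tool_calls", []):
                pending.add(call["id"])
        elif msg.get("role") == "tool":
            pending.discard(msg.get("tool_call_id"))
    return pending

history = [
    {"role": "user", "content": "What's the weather?"},
    # Stream was stopped / the tool raised before a tool response was appended.
    {"role": "assistant", "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_weather", "arguments": "{}"}},
    ]},
    {"role": "user", "content": "Never mind."},  # this next user message fails
]

print(unresolved_tool_call_ids(history))  # → {'call_1'}
```

As long as `call_1` stays unresolved in the session history, every subsequent request re-sends the invalid history and hits the same BadRequestError.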
Expected behavior
I expect to be able to stop the stream at any point without breaking the session.
Also, if a tool throws an exception, the session should remain usable; it shouldn't get stuck or throw tool_call_id errors on the next user message.
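As a workaround until the session recovers on its own, one option is to repair the history before sending the next message: insert a synthetic error tool message for every unresolved tool_call_id, directly after the assistant message that issued it. This is a hypothetical helper on plain OpenAI-style message dicts, not an ADK API; `repair_history` and the error string are my own names.

```python
def repair_history(messages):
    """Insert synthetic tool responses for tool_calls that were never answered."""
    repaired = []
    i = 0
    while i < len(messages):
        msg = messages[i]
        repaired.append(msg)
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            ids = [call["id"] for call in msg["tool_calls"]]
            # Keep any tool responses that do immediately follow.
            j = i + 1
            answered = set()
            while j < len(messages) and messages[j].get("role") == "tool":
                answered.add(messages[j].get("tool_call_id"))
                repaired.append(messages[j])
                j += 1
            # Fill in a stub response for every call left unanswered.
            for call_id in ids:
                if call_id not in answered:
                    repaired.append({
                        "role": "tool",
                        "tool_call_id": call_id,
                        "content": "Error: tool call was interrupted before completing.",
                    })
            i = j
        else:
            i += 1
    return repaired

history = [
    {"role": "user", "content": "What's the weather?"},
    {"role": "assistant", "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_weather", "arguments": "{}"}},
    ]},
    {"role": "user", "content": "Never mind."},
]

fixed = repair_history(history)
# The stub tool message now sits between the assistant turn and the next
# user turn, so the history satisfies the API's tool_call contract.
print([m["role"] for m in fixed])  # → ['user', 'assistant', 'tool', 'user']
```

The model then sees the interruption as an ordinary tool error and can answer the next user message normally; a built-in version of this (or a way to reset pending tool_call state) in the session itself would be the ideal fix.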
Desktop (please complete the following information):
- OS: Linux Ubuntu
- Python version(python -V): 3.12
- ADK version(pip show google-adk): 1.3.0
How can I fix this?