Tags: coder/aibridge
fix: ensure injected tool invocation results are appended to request body (#189)

* chore: ensure injected tool invocation results are appended to request body
* chore: review comments

Signed-off-by: Danny Kopping <danny@coder.com>
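The fix above appends an interceptor-injected tool result to the outgoing request body rather than dropping it. A minimal sketch of that pattern, assuming the OpenAI chat/completions wire format; `appendToolResult` is a hypothetical helper, not aibridge's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// appendToolResult adds an injected tool invocation result to the request's
// "messages" array so the upstream provider sees it on the next turn.
// It patches the raw JSON body directly instead of round-tripping through
// typed SDK structs, which could drop fields the structs don't model.
func appendToolResult(raw []byte, toolCallID, content string) ([]byte, error) {
	var body map[string]any
	if err := json.Unmarshal(raw, &body); err != nil {
		return nil, err
	}
	msgs, _ := body["messages"].([]any)
	msgs = append(msgs, map[string]any{
		"role":         "tool",
		"tool_call_id": toolCallID,
		"content":      content,
	})
	body["messages"] = msgs
	return json.Marshal(body)
}

func main() {
	raw := []byte(`{"messages":[{"role":"user","content":"hi"}]}`)
	out, err := appendToolResult(raw, "call_1", "42")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Patching the decoded `map[string]any` keeps every key the client originally sent, which is the property this fix and #185 below both rely on.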
fix: use request's byte payload in messages interceptor (#185)

Adds the `WithRequestBody` option to the messages interceptor. Any structured request modifications are mirrored onto the original request payload. Fixes adaptive thinking going missing from the request after re-marshaling, because the Go SDK does not support it.

Fixes: #177
fix: fixes double `v1` prefix in passthrough openai routes (#174)

Bug introduced in #159. With the default OpenAI base URL `https://api.openai.com/v1/`, passthrough routes had the `v1` prefix added twice (once from the base URL path and once from the passthrough route prefix), resulting in requests being forwarded to `https://api.openai.com/v1/v1/`. This PR fixes the issue.

Fixes: #176
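The double-prefix bug above is a classic base-URL/route joining mistake. A minimal sketch of the bug class and one way to avoid it, assuming a hypothetical `joinPassthroughURL` helper (not aibridge's actual code):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// joinPassthroughURL appends a passthrough route to the provider base URL.
// Naive concatenation of base path + route produces "/v1" + "/v1/models" =
// "/v1/v1/models" when the base URL already carries the API version prefix;
// here we strip the route's leading prefix if the base path already ends
// with it.
func joinPassthroughURL(base, route string) (string, error) {
	u, err := url.Parse(base)
	if err != nil {
		return "", err
	}
	basePath := strings.TrimSuffix(u.Path, "/")
	const prefix = "/v1"
	if strings.HasSuffix(basePath, prefix) && strings.HasPrefix(route, prefix+"/") {
		route = strings.TrimPrefix(route, prefix)
	}
	u.Path = basePath + route
	return u.String(), nil
}

func main() {
	got, err := joinPassthroughURL("https://api.openai.com/v1/", "/v1/models")
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // https://api.openai.com/v1/models
}
```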
fix: preserve the stream property for chat/completions calls (#164)

* fix: preserve the stream property for chat/completions calls
* test: add request body validation to mock server
* document aibridge stream marshalling behaviour for chat completions

Co-authored-by: Susana Cardoso Ferreira <susana@coder.com>
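Losing `stream` (like losing adaptive thinking in #185) is what happens when a proxy re-marshals a request through typed structs that don't model every field. A minimal sketch of the safer pattern, assuming a hypothetical `patchBody` helper that applies interceptor changes to the original raw body:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// patchBody merges interceptor overrides into the client's original JSON
// body. Because we unmarshal into a generic map rather than an SDK struct,
// fields the struct doesn't carry (e.g. "stream") survive the round trip.
func patchBody(raw []byte, overrides map[string]any) ([]byte, error) {
	var body map[string]any
	if err := json.Unmarshal(raw, &body); err != nil {
		return nil, err
	}
	for k, v := range overrides {
		body[k] = v // apply interceptor changes without discarding other keys
	}
	return json.Marshal(body)
}

func main() {
	raw := []byte(`{"model":"gpt-4o","stream":true}`)
	out, err := patchBody(raw, map[string]any{"model": "gpt-4o-mini"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"model":"gpt-4o-mini","stream":true}
}
```

The trade-off is losing compile-time typing on the body, which is why validating the forwarded payload against a mock server (as the test in this PR does) matters.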
perf: skip redundant openai request serialization (#160)

* use dannykopping/anthropic-sdk-go to avoid appendCompact
* make fmt
* perf: use the more efficient sasswart/openai-go
* perf: avoid an unnecessary json unmarshal when we intercept chat completions requests
* perf: reduce allocations when creating chat completions interceptors
* uncomment benchmark
* make fmt
* update openai-go dependency
* chore: document why we replace llm provider sdks