Releases · TanStack/ai · GitHub

Releases: TanStack/ai

@tanstack/solid-ai-devtools@0.2.11

06 Mar 14:23
3f1b9a7

Patch Changes

  • fix solid bundling of devtools (#334)

  • Bump up package versions (#334)

  • Updated dependencies [d40adfe, d40adfe]:

    • @tanstack/ai-devtools-core@0.3.7

@tanstack/react-ai-devtools@0.2.11

06 Mar 14:23
3f1b9a7

Patch Changes

  • Bump up package versions (#334)

  • Updated dependencies [d40adfe, d40adfe]:

    • @tanstack/ai-devtools-core@0.3.7

@tanstack/preact-ai-devtools@0.1.11

06 Mar 14:23
3f1b9a7

Patch Changes

  • Bump up package versions (#334)

  • Updated dependencies [d40adfe, d40adfe]:

    • @tanstack/ai-devtools-core@0.3.7

@tanstack/ai-devtools-core@0.3.7

06 Mar 14:24
3f1b9a7

Patch Changes

  • fix solid bundling of devtools (#334)

  • Bump up package versions (#334)

@tanstack/ai-gemini@0.7.0

02 Mar 14:57
e549268

Minor Changes

  • Add NanoBanana native image generation with up to 4K image output, routing all gemini-* native image models through the generateContent API (#321)

  • Fix SDK property names (imageGenerationConfig → imageConfig, outputImageSize → imageSize) and rename NanoBanana types to GeminiNativeImage

  • Add Gemini 3.1 Pro model support for text generation

@tanstack/solid-ai-devtools@0.2.10

28 Feb 01:16
81d0ac0

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.6

@tanstack/react-ai-devtools@0.2.10

28 Feb 01:16
81d0ac0

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.6

@tanstack/preact-ai-devtools@0.1.10

28 Feb 01:16
81d0ac0

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.6

@tanstack/ai@0.6.1

28 Feb 01:15
81d0ac0

Patch Changes

  • Fix chat stall when server and client tools are called in the same turn. (#323)

    When the LLM requested both a server tool and a client tool in the same response, the server tool's result was silently dropped. The processToolCalls and checkForPendingToolCalls methods returned early to wait for the client tool, skipping the emitToolResults call entirely — so the server result was never emitted or added to the message history, causing the session to stall indefinitely.

    The fix emits completed server tool results before returning early to wait for the client tool or approval.

    Also fixes the smoke-test harness and test fixtures to use chunk.value instead of chunk.data for CUSTOM events, following the rename introduced in #307.
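    The shape of the fix can be sketched as follows. This is a minimal illustration, not the actual @tanstack/ai internals: the names processToolCalls and emitToolResults come from the changelog entry above, but the types and control flow here are assumptions made for the example.

    ```typescript
    // Hypothetical tool-call shape; the real library's types differ.
    type ToolCall = { id: string; kind: "server" | "client"; result?: string };

    // Collects emitted result IDs so the behavior is observable.
    const emitted: string[] = [];

    function emitToolResults(calls: ToolCall[]): void {
      for (const call of calls) emitted.push(call.id);
    }

    function processToolCalls(calls: ToolCall[]): "waiting" | "done" {
      // Server tools that already finished executing this turn.
      const completedServer = calls.filter(
        (c) => c.kind === "server" && c.result !== undefined,
      );
      const hasPendingClient = calls.some((c) => c.kind === "client");

      if (hasPendingClient) {
        // The fix: flush finished server results *before* the early
        // return that waits on the client tool, so they are no longer
        // silently dropped from the message history.
        emitToolResults(completedServer);
        return "waiting";
      }

      emitToolResults(completedServer);
      return "done";
    }
    ```

    With the buggy early return (no emit before "waiting"), a turn containing one completed server tool and one pending client tool would never record the server result, stalling the session once the client tool resolved.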

@tanstack/ai-vue@0.6.1

28 Feb 01:16
81d0ac0

Patch Changes

  • Updated dependencies [d8678e2]:
    • @tanstack/ai@0.6.1
    • @tanstack/ai-client@0.5.1