
Releases: langroid/langroid

0.16.6

24 Sep 18:55

fix: set litellm.modify_params=True to accommodate quirks of non-OpenAI APIs without crashing.

(E.g. Anthropic APIs don't support a first message with the system role.)
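To illustrate the kind of quirk `modify_params` smooths over, here is a rough, pure-Python sketch of the Anthropic case: folding a leading system-role message into the first user message. This is a hypothetical helper for illustration, not langroid or litellm code.

```python
def adapt_messages_for_anthropic(messages):
    """Hypothetical sketch: Anthropic-style APIs reject a leading
    system-role message, so fold it into the first user message."""
    if messages and messages[0].get("role") == "system":
        system, rest = messages[0], messages[1:]
        if rest and rest[0].get("role") == "user":
            # Prepend the system text to the first user message
            rest[0] = {
                "role": "user",
                "content": system["content"] + "\n\n" + rest[0]["content"],
            }
            return rest
        # No user message to merge into: downgrade the role instead
        return [{"role": "user", "content": system["content"]}] + rest
    return messages

msgs = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Hi"},
]
adapted = adapt_messages_for_anthropic(msgs)
```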

0.16.5

23 Sep 19:57

fix: Further enhancements to JSON parsing of tool-generation output from weak LLMs

0.16.4

23 Sep 15:14

fix: Improve JSON parsing (e.g. code) from weak LLMs

Uses the excellent, lightweight json-repair lib.

One example where this helps: when using tools, weak LLMs sometimes generate JSON containing un-escaped newlines within strings.
Simply discarding these newlines is problematic when the strings contain newline-sensitive code (e.g. Python, TOML).
Instead we should escape them, but ONLY the newlines that appear within string-valued fields of the JSON
(newlines outside these should definitely NOT be escaped, or the result is inaccurate JSON).
Fortunately, the json-repair lib has a good solution for this and other pesky JSON issues, using CFGs.
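To make the problem concrete, here is a toy, self-contained sketch of one such repair: a small scanner that escapes raw newlines only when they occur inside a JSON string literal. This is purely illustrative; json-repair itself is far more thorough.

```python
import json

def escape_newlines_in_strings(text):
    """Toy sketch: escape raw newlines, but only when they occur
    inside a JSON string literal (newlines between tokens stay raw)."""
    out, in_string, escaped = [], False, False
    for ch in text:
        if escaped:
            out.append(ch)
            escaped = False
            continue
        if ch == "\\":
            out.append(ch)
            escaped = in_string  # next char in a string is escaped
            continue
        if ch == '"':
            in_string = not in_string
            out.append(ch)
            continue
        if ch == "\n" and in_string:
            out.append("\\n")  # escape only inside strings
            continue
        out.append(ch)
    return "".join(out)

# A string value containing a raw newline: json.loads rejects this as-is
broken = '{\n  "code": "x = 1\ny = 2"\n}'
fixed = escape_newlines_in_strings(broken)
data = json.loads(fixed)
```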

0.16.3

17 Sep 16:32

fix: in logging.py, escape markup when using rich.console.print(...)

0.16.2

17 Sep 14:45

fix: switch to Langroid tools for o1 models since they don't yet support tools/fns in the API

When using o1 models, the ChatAgent automatically sets ChatAgentConfig.use_functions_api to False
and ChatAgentConfig.use_tools to True, so that Langroid's prompt-based ToolMessage mechanism is used instead.
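The special-casing described above can be sketched as a simple model-name check. The function below is a hypothetical illustration, not the actual ChatAgent code:

```python
def tool_settings(chat_model):
    """Hypothetical sketch of the o1 special-casing: o1 models get
    Langroid's prompt-based tools instead of the OpenAI functions API."""
    if chat_model.startswith("o1"):
        return {"use_functions_api": False, "use_tools": True}
    return {"use_functions_api": True, "use_tools": False}

settings = tool_settings("o1-mini")
```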

feat: TaskConfig.recognize_string_signals bool flag (default True); set to False to disallow string-based signals like DONE, PASS, etc.

This is useful when we want to avoid "accidental prompt injection" (e.g. "DONE" may appear in normal text, and we don't want that to trigger task completion).
In general it is preferable to use the task orchestration tools (DoneTool etc) rather than string-based signals.
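The effect of the flag can be sketched as follows; this is a hypothetical illustration of the idea, not the actual Task orchestration code:

```python
DONE_SIGNAL = "DONE"

def task_is_done(response, recognize_string_signals=True):
    """Hypothetical sketch: with the flag off, an incidental 'DONE'
    in normal text no longer terminates the task."""
    return recognize_string_signals and response.strip().startswith(DONE_SIGNAL)

# With the flag on, a leading DONE ends the task; with it off, it never does
risky = "DONE deal, as they say."
```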

0.16.1

13 Sep 21:41

fix: handle max_tokens/max_completion_tokens variation to support groq, o1, other LLMs
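The gist of such a fix can be sketched as choosing the right parameter name per model family. This is a hypothetical illustration, not the actual langroid code:

```python
def completion_token_param(chat_model, max_tokens):
    """Hypothetical sketch: newer OpenAI models (o1 family) expect
    'max_completion_tokens', while most other APIs expect 'max_tokens'."""
    key = "max_completion_tokens" if chat_model.startswith("o1") else "max_tokens"
    return {key: max_tokens}

params = completion_token_param("o1-mini", 2048)
```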

0.16.0

13 Sep 14:18

feat: Support OpenAI o1-preview, o1-mini

To use these you can set the LLM config as follows:

    from langroid.language_models import OpenAIChatModel, OpenAIGPTConfig

    config = OpenAIGPTConfig(
        chat_model=OpenAIChatModel.O1_MINI  # or OpenAIChatModel.O1_PREVIEW
    )

Or in many example scripts you can directly specify the model using -m o1-preview or -m o1-mini, e.g.:

python3 examples/basic/chat.py -m o1-mini

Also any pytest that runs against a real (i.e. not MockLM) LLM can be run with these models using --m o1-preview or --m o1-mini, e.g.

pytest -xvs tests/main/test_llm.py --m o1-mini

Note these models (as of Sep 12 2024):

  • do not support streaming, so Langroid sets stream to False even if you request streaming,
  • do not support a system message, so Langroid maps any supplied system message to a message with role user, and
  • do not allow a temperature setting, so any temperature setting is ignored when using Langroid (the models use the default temperature of 1).
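The three accommodations above can be sketched together as one request-adjusting function; this is a hypothetical illustration of the behavior, not the actual langroid code:

```python
def adjust_for_o1(params):
    """Hypothetical sketch of the o1 accommodations: disable streaming,
    drop temperature, and map system-role messages to user role."""
    p = dict(params)
    p["stream"] = False               # o1 does not support streaming
    p.pop("temperature", None)        # temperature is ignored (default 1)
    p["messages"] = [
        {**m, "role": "user"} if m.get("role") == "system" else m
        for m in p.get("messages", [])
    ]
    return p

request = {
    "stream": True,
    "temperature": 0.2,
    "messages": [{"role": "system", "content": "Be brief."}],
}
adjusted = adjust_for_o1(request)
```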

0.15.2

12 Sep 16:26

chore: Neo4jChatAgent: add field to access current retrieval query

0.15.1

11 Sep 14:20

chore: add empty py.typed file to notify other pkgs that types are available.

See #558

0.15.0

10 Sep 01:37

feat: Cerebras API support.

You can now run Langroid with LLMs hosted on Cerebras by setting CEREBRAS_API_KEY in your environment,
and specifying the chat_model in the OpenAIGPTConfig as cerebras/<model_name>, e.g. cerebras/llama3.1-8b.
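The provider-prefixed chat_model convention can be sketched as a simple split; this hypothetical helper illustrates the idea, and is not the actual langroid routing code:

```python
def split_provider(chat_model):
    """Hypothetical sketch: a 'provider/model' chat_model string like
    'cerebras/llama3.1-8b' names a provider and the model it hosts."""
    if "/" in chat_model:
        provider, model = chat_model.split("/", 1)
        return provider, model
    return None, chat_model  # no prefix: default provider

provider, model = split_provider("cerebras/llama3.1-8b")
```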

Cerebras docs: https://inference-docs.cerebras.ai/introduction

Guide to using Langroid with Cerebras-hosted LLMs: https://langroid.github.io/langroid/tutorials/local-llm-setup/