This is a bastion server for the Chatify notebook explanation service, which uses LLMs to produce natural-language explanations of code notebooks.
To install the dependencies, run:

poetry install
Before you can use the server, you must either (1) change the assignment and response stores in server.py to run locally (e.g., JSON stores), or (2) update the AWS credentials in the config.py file. Note that no config.py is provided; you must create this file yourself. We do, however, provide a config.example.py file that you can use as a template; see that file for documentation of each configuration option.
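If you take the config.py route, a simple starting point is to copy the template into place and then fill in your own values (filenames as referenced above):

cp config.example.py config.py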
For production server usage, run:
poetry run uvicorn chatify_server.server:app --port 9910
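For local development, you may also want uvicorn's standard --reload flag (a generic uvicorn option, not specific to this project) so the server restarts when source files change:

poetry run uvicorn chatify_server.server:app --port 9910 --reload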
Made with 💚 at the Kording Lab