---
title: Recipe Insights
emoji: 📉
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.49.1
app_file: src/main.py
pinned: false
python_version: "3.13"
license: apache-2.0
short_description: AI-powered insights into Recipe Dependencies!
---
A tool to help hobbyist chefs discover and better understand the implicit dependencies in recipes they’d like to cook or bake.
Notice: Includes generated sample recipes that have not been tested/verified in the real world and may contain mistakes or improper instructions. Please use common sense and follow all applicable food and appliance safety recommendations when cooking!
This project takes a hybrid approach to LLM integration, combining structured prompting with agentic workflows for complex recipe analysis:
🔍 Two-Stage AI Pipeline:
- Stage 1: Direct LLM calls with structured prompts to extract ingredients, equipment, and cooking actions from recipe text (this plays to the LLM's strong semantic understanding)
- Stage 2: Agentic workflow using smolagents framework with specialized tools to analyze dependencies between extracted entities
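As a rough illustration of Stage 1, the sketch below builds a structured extraction prompt and parses the model's JSON reply into entity lists. The prompt wording, JSON schema, and function names here are hypothetical, not the app's actual implementation:

```python
import json

# Illustrative prompt template; the real app's schema may differ.
EXTRACTION_PROMPT = """Extract entities from the recipe below.
Reply with JSON: {{"ingredients": [...], "equipment": [...], "actions": [...]}}

Recipe:
{recipe_text}"""


def build_extraction_prompt(recipe_text: str) -> str:
    return EXTRACTION_PROMPT.format(recipe_text=recipe_text)


def parse_extraction_reply(reply: str) -> dict:
    """Parse the LLM's JSON reply, tolerating a fenced code block wrapper."""
    cleaned = reply.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")
        # Drop an optional language tag such as "json" on the first line
        first_newline = cleaned.find("\n")
        cleaned = cleaned[first_newline + 1:]
    return json.loads(cleaned)


reply = '{"ingredients": ["flour", "water"], "equipment": ["bowl"], "actions": ["mix"]}'
entities = parse_extraction_reply(reply)
print(entities["ingredients"])  # ['flour', 'water']
```

Tolerating a fenced wrapper matters in practice, since instruct models often return JSON inside a markdown code block even when asked for raw JSON.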
🛠️ Key Technical Features:
- Hybrid NLP approach combining spaCy methods with LLM reasoning for robust parsing
- Interactive network visualization using graph theory to display relationships between ingredients→actions→equipment
- State management with progress tracking and error handling throughout the parsing pipeline
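The dependency structure behind the network view can be sketched as a plain directed adjacency list. This is a minimal illustration, not the app's actual graph code; the node names and helper functions are hypothetical:

```python
def build_dependency_graph(edges):
    """Build a directed adjacency list from (source, target) pairs."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])  # ensure sinks appear as nodes too
    return graph


def downstream(graph, node):
    """All nodes reachable from `node`, e.g. everything an ingredient feeds into."""
    seen, stack = set(), [node]
    while stack:
        current = stack.pop()
        for nxt in graph.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen


# Hypothetical edges: ingredients feed actions, actions require equipment
edges = [("flour", "mix"), ("water", "mix"), ("mix", "bowl"), ("mix", "knead")]
g = build_dependency_graph(edges)
print(sorted(downstream(g, "flour")))  # ['bowl', 'knead', 'mix']
```

A reachability query like `downstream` is what makes implicit dependencies visible: substituting one ingredient immediately shows every action and piece of equipment it touches.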
📋 User Experience: The "How To" tab provides a comprehensive walkthrough of the application flow: sample recipe selection → parsing → dependency analysis → interactive graph visualization → export capabilities. This creates an intuitive experience for hobbyist chefs to understand their recipe's hidden structure and dependencies.
The application showcases how agentic AI workflows can break down complex NLP tasks into manageable, tool-assisted steps while maintaining transparency and user control throughout the process.
Unfortunately, sound capture wasn't working properly with Gradio's screen recording, so these videos have no audio. See the description above, as well as the "How To" tab in the application, to get an idea of how to use the app.
Part 1
gradio-screen-recording-recipe-insights-pt-1.mp4
Part 2
gradio-screen-recording-recipe-insights-pt-2.mp4
Internally, the project uses uv, so setting it up looks like:

```shell
# Install dependencies
uv sync

# Run the application
uv run recipe-insights
```

Set up pre-commit with:

```shell
uv run pre-commit
```

Lastly, you will need to provide at least the following variables in the .env file (you must create the file):
```
# API Token with permissions for inference on Hugging Face
HF_TOKEN=<your token here>

# Name of model on Hugging Face
HF_MODEL=<model name here>
```
The app was developed with Qwen/Qwen2.5-Coder-32B-Instruct.
Models in the Qwen instruct family continue to deliver reliable results (newer releases have also been tested to work, e.g. Qwen/Qwen3-30B-A3B-Instruct-2507).
We've also had promising initial results with the GPT-OSS family (though the 20B model sometimes times out or returns empty results, so you may need to retry a few times).
The app uses the chat.completion call for inference, so you might start by looking at the models tagged "conversational": https://huggingface.co/models?other=conversational
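A hedged sketch of what such a call might look like with `huggingface_hub`, assuming `HF_TOKEN` and `HF_MODEL` are set in the environment (e.g. via the .env file above); the system prompt and helper names are illustrative, not the app's actual code:

```python
import os


def make_messages(recipe_text: str) -> list[dict]:
    # Illustrative system prompt; the app's real prompting differs.
    return [
        {"role": "system", "content": "You analyze recipe dependencies."},
        {"role": "user", "content": recipe_text},
    ]


def run_inference(recipe_text: str) -> str:
    # Imported lazily so the helper above stays usable without the package.
    from huggingface_hub import InferenceClient

    client = InferenceClient(model=os.environ["HF_MODEL"], token=os.environ["HF_TOKEN"])
    response = client.chat_completion(messages=make_messages(recipe_text))
    return response.choices[0].message.content


messages = make_messages("Mix flour and water.")
print(messages[1]["content"])  # Mix flour and water.
```

Any model exposing the chat completion task should slot in via `HF_MODEL` without code changes, which is why the "conversational" tag is a good starting filter.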
The project uses uv for package management internally. To run on Hugging Face, however, a requirements.txt is currently required. You can update it with the following command:

```shell
uv pip freeze | grep -v "file://" > requirements.txt
```

Note that updates to gradio and python also need to be reflected in the Hugging Face Space configuration metadata at the top of this file.
The project source is hosted on GitHub, with updates pushed to a Hugging Face Space.