This is the backend API for the Fashion Chatbot project, built using FastAPI. It processes user queries and returns intelligent fashion-related suggestions using LLMs and vector search.
- FastAPI-based RESTful API
- Handles chat interactions via POST requests
- Uses LangChain, ChromaDB, and OpenAI's GPT models
- Modular design that leaves room for future improvements (e.g., conversation memory, swappable vector stores)
- Ready to deploy on Render
- Language: Python 3.10+
- Framework: FastAPI
- Vector Store: ChromaDB
- LLM Interface: LangChain + OpenAI API
- Deployment: Render / Localhost
Install dependencies with:
```bash
pip install -r requirements.txt
```
Before running, set the following environment variable:
```bash
export OPENAI_API_KEY=your_openai_api_key_here
```
Alternatively, store the key in a `.env` file and load it with a package such as `python-dotenv`.
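For reference, this is roughly what `.env` loading does under the hood. The sketch below uses only the standard library; in practice `python-dotenv`'s `load_dotenv()` handles the same job plus quoting, comments, and variable interpolation.

```python
import os

# Minimal sketch of .env loading using only the standard library;
# python-dotenv's load_dotenv() is the production-ready equivalent.
def load_env_file(path: str = ".env") -> None:
    """Read KEY=VALUE lines from `path` into os.environ (existing keys win)."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```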
Start the server with:

```bash
uvicorn api_server:app --reload
```
By default, the app runs at: http://127.0.0.1:8000
Request Body:
```json
{
  "user_id": "example_user",
  "message": "What should I wear to a beach party?"
}
```
Response:
```json
{
  "answer": "You can go for a floral shirt with white shorts and sandals."
}
```
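The request above can be built with the standard library's `urllib`, as sketched below. The endpoint path `/chat` is an assumption; check `api_server.py` for the actual route before sending.

```python
import json
import urllib.request

# Hypothetical client call; assumes the server is running locally and that
# the chat route is POST /chat (verify against api_server.py).
payload = {
    "user_id": "example_user",
    "message": "What should I wear to a beach party?",
}
req = urllib.request.Request(
    "http://127.0.0.1:8000/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["answer"])
```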
```
fashion-chatbot-backend/
├── api_server.py       # FastAPI app
├── chatbot_engine.py   # Handles LLM + vector DB interaction
├── requirements.txt    # Dependencies
├── .env (optional)     # Environment variable storage
└── start.sh            # Script for deployment
```
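A typical `start.sh` for Render might look like the sketch below. This is an assumed script, not the repository's actual contents; the key detail is binding to `0.0.0.0` and the `$PORT` Render injects.

```shell
#!/usr/bin/env bash
# Hypothetical start.sh for Render deployment.
# Render sets $PORT at runtime; default to 8000 for local runs.
uvicorn api_server:app --host 0.0.0.0 --port "${PORT:-8000}"
```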
- Push your backend code to GitHub
- Go to https://render.com
- Create a new Web Service
- Use `api_server:app` as the ASGI entry point
- Add the environment variable `OPENAI_API_KEY`
- Choose Python 3.10+ and deploy
For backend-related queries, connect via LinkedIn.
MIT License