This repository collects Qdrant-based RAG evaluation reference material, notebooks, and supporting content.

It provides several implementations of Retrieval-Augmented Generation (RAG) built on Qdrant, an open-source vector search engine, and showcases different approaches to building RAG pipelines for efficient and effective information retrieval and generation. It also includes evaluation tooling to assess the performance of the implemented RAG applications.
- workshop-rag-eval-qdrant-arize: Naive RAG (dense vectors only) vs. Hybrid RAG (sparse and dense vectors) built with Qdrant and LlamaIndex, and evaluated with Arize Phoenix; a minimal sketch of the dense-vs-hybrid retrieval idea follows this list. YouTube: https://www.youtube.com/watch?v=m_J0nFmnrPI
- workshop-rag-eval-qdrant-quotient: Naive RAG implemented with Qdrant and LangChain, incrementally evaluated and improved through rapid experimentation with chunk size, embedding model, and LLM using Quotient AI. YouTube: https://www.youtube.com/watch?v=3MEMPZR1aZA | Article: https://qdrant.tech/articles/rapid-rag-optimization-with-qdrant-and-quotient/
- workshop-rag-eval-qdrant-ragas: Naive RAG implemented with Qdrant and LangChain; experiments with retrieval window size evaluated through RAGAS. Article: https://superlinked.com/vectorhub/articles/retrieval-augmented-generation-eval-qdrant-ragas
- workshop-rag-eval-qdrant-ragas-haystack: Naive RAG implemented with Qdrant and Haystack, improved through the MixedBread AI embedding and reranker models plus retrieval window size experiments, and evaluated through RAGAS. YouTube: https://www.youtube.com/watch?v=6NTZqpc4V-k
- agentic_rag_with_unify/notebook: Naive RAG implemented with Qdrant and improved through agent routing with Unify.
- synthetic_qna/notebook: Synthetic evaluation question generation; see also https://www.fiddlecube.ai/
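For readers new to the dense-vs-hybrid distinction explored in the Arize workshop above, the sketch below illustrates the underlying Qdrant mechanics with the plain Python client: each point carries a named dense vector and a named sparse vector, and a hybrid query prefetches candidates from both and fuses them with reciprocal rank fusion (RRF). This is a minimal illustration under stated assumptions, not the workshop code itself: the workshop drives Qdrant through LlamaIndex, the 4-dimensional vectors, collection name, and sparse indices here are toy placeholders, and the Query API shown assumes a reasonably recent qdrant-client release.

```python
from qdrant_client import QdrantClient, models

# In-process Qdrant instance for experimentation; a real setup points at a Qdrant server.
client = QdrantClient(":memory:")

# Hybrid-ready collection: one named dense vector and one named sparse vector per point.
client.create_collection(
    collection_name="docs",  # placeholder collection name
    vectors_config={"dense": models.VectorParams(size=4, distance=models.Distance.COSINE)},
    sparse_vectors_config={"sparse": models.SparseVectorParams()},
)

# Toy data: in the workshops, dense vectors come from an embedding model and sparse
# vectors from a BM25/SPLADE-style encoder.
client.upsert(
    collection_name="docs",
    points=[
        models.PointStruct(
            id=1,
            vector={
                "dense": [0.1, 0.2, 0.3, 0.4],
                "sparse": models.SparseVector(indices=[7, 42], values=[0.8, 0.3]),
            },
            payload={"text": "Qdrant supports hybrid search."},
        ),
    ],
)

# Naive RAG retrieval: dense vectors only.
dense_hits = client.query_points(
    collection_name="docs",
    query=[0.1, 0.2, 0.3, 0.4],
    using="dense",
    limit=3,
)

# Hybrid retrieval: prefetch dense and sparse candidates, then fuse the rankings with RRF.
hybrid_hits = client.query_points(
    collection_name="docs",
    prefetch=[
        models.Prefetch(query=[0.1, 0.2, 0.3, 0.4], using="dense", limit=10),
        models.Prefetch(
            query=models.SparseVector(indices=[7, 42], values=[0.8, 0.3]),
            using="sparse",
            limit=10,
        ),
    ],
    query=models.FusionQuery(fusion=models.Fusion.RRF),
    limit=3,
)

print(dense_hits)
print(hybrid_hits)
```

The Arize workshop compares exactly these two retrieval modes, with Arize Phoenix used to evaluate and compare the results.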
Each example integrates with Qdrant to leverage its vector search capabilities. Detailed instructions and code examples for each integration are provided in the respective directories.
We provide a suite of RAG evaluation tools to assess the performance of the implemented RAG pipelines. These tools measure different aspects of a RAG system, such as retrieval quality and answer quality, to support a thorough and robust evaluation process.
The RAG pipelines are built from a source dataset containing Qdrant's documentation and assessed against a corresponding evaluation dataset.
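To make the evaluation step concrete, the sketch below shows the general shape of a RAGAS run as used in the RAGAS-based workshops: each evaluation record pairs a question with the generated answer, the retrieved contexts, and a ground-truth reference, and RAGAS scores them with metrics such as faithfulness and answer relevancy. The sample record, column names, and metric selection are illustrative only; the exact dataset schema and imports vary across ragas releases (the notebooks pin their own versions), and running it requires an LLM backend, by default an OpenAI API key.

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, faithfulness

# Illustrative evaluation records; the workshops build these from Qdrant's documentation
# and from the answers produced by the RAG pipeline under test.
eval_records = {
    "question": ["What is Qdrant?"],
    "answer": ["Qdrant is an open-source vector search engine."],
    "contexts": [["Qdrant is an open-source vector similarity search engine and vector database."]],
    "ground_truth": ["Qdrant is an open-source vector search engine and vector database."],
}

dataset = Dataset.from_dict(eval_records)

# Each metric targets a different failure mode: faithfulness catches hallucinated answers,
# answer relevancy catches off-topic answers, context precision catches noisy retrieval.
results = evaluate(dataset, metrics=[faithfulness, answer_relevancy, context_precision])
print(results)
```

Re-running the same evaluation over each pipeline variant (different chunk sizes, embedding models, retrieval window sizes, or rerankers) is the experimentation loop the workshops walk through.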
Follow the instructions in the respective directories to run the RAG implementations and perform evaluations using the provided tools.
We welcome contributions from the community! If you have improvements or a new RAG evaluation cookbook to add, please submit a pull request or open an issue.
We would like to thank the contributors and the open-source community for their support and collaboration.