❤️ Community¶
“Alone we can do so little; together we can do so much.” - Helen Keller
Our project thrives on the vibrant energy, diverse skills, and shared passion of our community. It’s not just about code; it’s about people coming together to create something extraordinary. This space celebrates every contribution, big or small, and features the amazing people who make it all happen.
Note
📅 Upcoming Events
Greg Loughnane’s YouTube live event on RAG evaluation with LangChain and Ragas on Feb 7
🌟 Contributors¶
Meet some of our outstanding members who have made significant contributions!
📚 Blog & Insights¶
Explore insightful articles, tutorials, and stories written by and for our community members.
Shanthi Vardhan shares how his team at Atomicwork uses Ragas to improve their AI system’s ability to accurately identify and retrieve more precise information for enhanced service management.
Pinecone’s study on how RAG can enhance the capabilities of LLMs, “RAG makes LLMs better and equal”, uses Ragas to show that context retrieval makes LLMs provide significantly better results, even when increasing the data size to 1 billion.
Aishwarya Prabhat shares her expertise on advanced RAG techniques in her comprehensive guide, “Performing, Evaluating & Tracking Advanced RAG (ft. AzureML, LlamaIndex & Ragas)”.
Leonie (aka @helloiamleonie) offers her perspective in the detailed article, “Evaluating RAG Applications with RAGAs”.
The joint efforts of Erika Cardenas and Connor Shorten are showcased in their collaborative piece, “An Overview on RAG Evaluation | Weaviate”, and their podcast with the Ragas team.
Erika Cardenas further explores the “RAG performance of hybrid search weightings (alpha)” in her recent experiment tuning the Weaviate alpha parameter using Ragas.
LangChain’s tutorial, Evaluating RAG pipelines with RAGAs and LangSmith, shows how to leverage both tools to evaluate RAG pipelines.
Plaban Nayak shares his work, Evaluate RAG Pipeline using RAGAS, on building and evaluating a simple RAG pipeline using LangChain and RAGAS.
Stephen Kurniawan compares different RAG elements: Chunk Size, Vector Stores (FAISS vs ChromaDB), Vector Stores 2 (Multiple Documents), and Similarity Searches / Distance Metrics / Index Strategies.
Discover Devanshu Brahmbhatt’s insights on optimizing RAG systems in his article, Enhancing LLM’s Accuracy with RAGAS. Learn about RAG architecture, key evaluation metrics, and how to use RAGAS scores to improve performance.
Suzuki and Hwang conducted an experiment to investigate whether Ragas’ performance is language-dependent by comparing its performance (the correlation coefficient between human labels and Ragas scores) on datasets with the same content in Japanese and English. They wrote a blog post about the results of the experiment and the basic algorithm of Ragas.
Atita Arora writes about Evaluating Retrieval Augmented Generation using RAGAS, an end-to-end tutorial on building RAG using Qdrant and Langchain and evaluating it with RAGAS.
Bonus content: learn how to create an evaluation dataset that serves as a reference point for evaluating our RAG pipeline, understand the RAGAS evaluation metrics and how to make sense of them, and put them into action to test a naive RAG pipeline and measure its performance using RAGAS metrics.
Code walkthrough: https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-ragas
Code walkthrough using Deepset Haystack and Mixedbread.ai: https://github.com/qdrant/qdrant-rag-eval/tree/master/workshop-rag-eval-qdrant-ragas-haystack
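Several of the tutorials above start from the same step: assembling an evaluation dataset that pairs questions with retrieved contexts, generated answers, and reference answers. As a rough sketch (the example rows below are invented, and the exact column names and `evaluate` signature can vary across Ragas versions), such a dataset looks like this:

```python
# A minimal evaluation dataset in the row layout Ragas-style tutorials
# use: one row per question, carrying the retrieved contexts, the
# generated answer, and a reference ("ground truth") answer.
eval_rows = [
    {
        "question": "What does Ragas measure?",
        "contexts": [
            "Ragas provides metrics such as faithfulness and "
            "context precision for RAG pipelines."
        ],
        "answer": "Ragas scores RAG pipelines on metrics like faithfulness.",
        "ground_truth": "Ragas evaluates RAG pipelines with LLM-assisted metrics.",
    },
]

# Every row must carry the same keys, and "contexts" must be a
# list of strings (one per retrieved chunk).
required = {"question", "contexts", "answer", "ground_truth"}
for row in eval_rows:
    assert required <= row.keys()
    assert isinstance(row["contexts"], list)

# With `ragas` and `datasets` installed (and an LLM API key configured),
# the rows can then be scored roughly like this:
#
#   from datasets import Dataset
#   from ragas import evaluate
#   from ragas.metrics import faithfulness, answer_relevancy
#
#   result = evaluate(Dataset.from_list(eval_rows),
#                     metrics=[faithfulness, answer_relevancy])
#   print(result)
```

The scoring step is left as a comment because it calls out to an LLM; the tutorials linked above walk through that part end to end.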
📅 Events¶
Stay updated with our latest gatherings, meetups, and online webinars.
OpenAI engineers share their RAG tricks and feature Ragas at DevDay.
LangChain’s “RAG Evaluation” webinar with the Ragas team