# Griptape Integration
If you're familiar with Griptape's RAG Engine and want to start evaluating your RAG system's performance, you're in the right place. In this tutorial we'll explore how to use Ragas to evaluate the responses generated by your Griptape RAG Engine.
## Griptape Setup
### Setting Up Our Environment
First, let's make sure we have all the required packages installed:
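A minimal install might look like the following, assuming the only packages needed are the ones imported in this tutorial (`ragas`, `griptape`, and `langchain-openai`):

```python
# Notebook install; in a shell, use `pip install` instead
%pip install ragas griptape langchain-openai
```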
### Creating Our Dataset
We'll use a small dataset of text chunks about major LLM providers and set up a simple RAG pipeline:
```python
chunks = [
    "OpenAI is one of the most recognized names in the large language model space, known for its GPT series of models. These models excel at generating human-like text and performing tasks like creative writing, answering questions, and summarizing content. GPT-4, their latest release, has set benchmarks in understanding context and delivering detailed responses.",
    "Anthropic is well-known for its Claude series of language models, designed with a strong focus on safety and ethical AI behavior. Claude is particularly praised for its ability to follow complex instructions and generate text that aligns closely with user intent.",
    "DeepMind, a division of Google, is recognized for its cutting-edge Gemini models, which are integrated into various Google products like Bard and Workspace tools. These models are renowned for their conversational abilities and their capacity to handle complex, multi-turn dialogues.",
    "Meta AI is best known for its LLaMA (Large Language Model Meta AI) series, which has been made open-source for researchers and developers. LLaMA models are praised for their ability to support innovation and experimentation due to their accessibility and strong performance.",
    "Meta AI with its LLaMA models aims to democratize AI development by making high-quality models available for free, fostering collaboration across industries. Their open-source approach has been a game-changer for researchers without access to expensive resources.",
    "Microsoft's Azure AI platform is famous for integrating OpenAI's GPT models, enabling businesses to use these advanced models in a scalable and secure cloud environment. Azure AI powers applications like Copilot in Office 365, helping users draft emails, generate summaries, and more.",
    "Amazon's Bedrock platform is recognized for providing access to various language models, including its own models and third-party ones like Anthropic's Claude and AI21's Jurassic. Bedrock is especially valued for its flexibility, allowing users to choose models based on their specific needs.",
    "Cohere is well-known for its language models tailored for business use, excelling in tasks like search, summarization, and customer support. Their models are recognized for being efficient, cost-effective, and easy to integrate into workflows.",
    "AI21 Labs is famous for its Jurassic series of language models, which are highly versatile and capable of handling tasks like content creation and code generation. The Jurassic models stand out for their natural language understanding and ability to generate detailed and coherent responses.",
    "In the rapidly advancing field of artificial intelligence, several companies have made significant contributions with their large language models. Notable players include OpenAI, known for its GPT Series (including GPT-4); Anthropic, which offers the Claude Series; Google DeepMind with its Gemini Models; Meta AI, recognized for its LLaMA Series; Microsoft Azure AI, which integrates OpenAI's GPT Models; Amazon AWS (Bedrock), providing access to various models including Claude (Anthropic) and Jurassic (AI21 Labs); Cohere, which offers its own models tailored for business use; and AI21 Labs, known for its Jurassic Series. These companies are shaping the landscape of AI by providing powerful models with diverse capabilities.",
]
```
#### Ingesting Data into the Vector Store
```python
import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

from griptape.drivers.embedding.openai import OpenAiEmbeddingDriver
from griptape.drivers.vector.local import LocalVectorStoreDriver

# Set up a simple vector store with our data
vector_store = LocalVectorStoreDriver(embedding_driver=OpenAiEmbeddingDriver())
vector_store.upsert_collection({"major_llm_providers": chunks})
```
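Optionally, we can sanity-check the ingestion by querying the store directly. A minimal sketch, assuming the `query` method on Griptape's vector store drivers (which returns scored entries):

```python
# Optional sanity check: fetch the two closest chunks for a probe query
entries = vector_store.query(
    "open-source language models",
    count=2,
    namespace="major_llm_providers",
)
for entry in entries:
    print(round(entry.score, 3))
```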
#### Setting Up the RAG Engine
```python
from griptape.engines.rag import RagContext, RagEngine
from griptape.engines.rag.modules import (
    PromptResponseRagModule,
    VectorStoreRetrievalRagModule,
)
from griptape.engines.rag.stages import (
    ResponseRagStage,
    RetrievalRagStage,
)

# Create a basic RAG pipeline
rag_engine = RagEngine(
    # Stage for retrieving relevant chunks
    retrieval_stage=RetrievalRagStage(
        retrieval_modules=[
            VectorStoreRetrievalRagModule(
                name="VectorStore_Retriever",
                vector_store_driver=vector_store,
                query_params={"namespace": "major_llm_providers"},
            ),
        ],
    ),
    # Stage for generating a response
    response_stage=ResponseRagStage(
        response_modules=[
            PromptResponseRagModule(),
        ]
    ),
)
```
### Testing Our RAG Pipeline
Let's make sure our RAG pipeline works by testing it with a sample query:
```python
rag_context = RagContext(query="What makes Meta AI's LLaMA models stand out?")
rag_context = rag_engine.process(rag_context)
rag_context.outputs[0].to_text()
```

Output:

```
"Meta AI's LLaMA models stand out for their open-source nature, which makes them accessible to researchers and developers. This accessibility supports innovation and experimentation, allowing for collaboration across industries. By making high-quality models available for free, Meta AI aims to democratize AI development, which has been a game-changer for researchers without access to expensive resources."
```
## Ragas Evaluation
### Creating a Ragas Evaluation Dataset
```python
questions = [
    "Who are the major players in the large language model space?",
    "What is Microsoft's Azure AI platform known for?",
    "What kind of models does Cohere provide?",
]

references = [
    "The major players include OpenAI (GPT Series), Anthropic (Claude Series), Google DeepMind (Gemini Models), Meta AI (LLaMA Series), Microsoft Azure AI (integrating GPT Models), Amazon AWS (Bedrock with Claude and Jurassic), Cohere (business-focused models), and AI21 Labs (Jurassic Series).",
    "Microsoft's Azure AI platform is known for integrating OpenAI's GPT models, enabling businesses to use these models in a scalable and secure cloud environment.",
    "Cohere provides language models tailored for business use, excelling in tasks like search, summarization, and customer support.",
]

griptape_rag_contexts = []
for que in questions:
    rag_context = RagContext(query=que)
    griptape_rag_contexts.append(rag_engine.process(rag_context))
```

```python
from ragas.integrations.griptape import transform_to_ragas_dataset

ragas_eval_dataset = transform_to_ragas_dataset(
    grip_tape_rag_contexts=griptape_rag_contexts, references=references
)
```
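To inspect the resulting dataset, we can convert it to a pandas DataFrame (Ragas evaluation datasets expose a `to_pandas` helper):

```python
ragas_eval_dataset.to_pandas()
```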
|   | user_input | retrieved_contexts | response | reference |
|---|---|---|---|---|
| 0 | Who are the major players in the large languag... | [In the rapidly advancing field of artificial ... | The major players in the large language model ... | The major players include OpenAI (GPT Series),... |
| 1 | What is Microsoft's Azure AI platform known for? | [Microsoft's Azure AI platform is famous for i... | Microsoft's Azure AI platform is known for int... | Microsoft's Azure AI platform is known for int... |
| 2 | What kind of models does Cohere provide? | [Cohere is well-known for its language models ... | Cohere provides language models tailored for b... | Cohere provides language models tailored for b... |
### Running the Ragas Evaluation

Now, let's evaluate our RAG system using Ragas metrics.
#### Evaluating Retrieval
To evaluate retrieval performance, we can use Ragas' built-in metrics or create custom metrics tailored to our specific needs (a short custom-metric sketch follows the retrieval results below). For a comprehensive list of available metrics and customization options, see the documentation.
We will use `ContextPrecision`, `ContextRecall`, and `ContextRelevance` to measure retrieval performance:
- `ContextPrecision`: Measures how well the retriever ranks relevant chunks at the top of the retrieved context for a given query, calculated as the mean precision@k across all chunks (see the formula sketch after this list).
- `ContextRecall`: Measures the proportion of relevant information successfully retrieved from the knowledge base.
- `ContextRelevance`: Measures how well the retrieved contexts address the user's query by evaluating their pertinence through dual LLM judgments.
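For intuition, here is the mean precision@k formulation, following the definition in the Ragas documentation (the notation is ours):

$$
\text{Context Precision@K} = \frac{\sum_{k=1}^{K} \left( \text{Precision@}k \times v_k \right)}{\text{total number of relevant items in the top } K},
\qquad
\text{Precision@}k = \frac{\text{relevant items in the top } k}{k}
$$

where $v_k \in \{0, 1\}$ indicates whether the item at rank $k$ is relevant.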
```python
from ragas import evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import ContextPrecision, ContextRecall, ContextRelevance
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
evaluator_llm = LangchainLLMWrapper(llm)

ragas_metrics = [
    ContextPrecision(llm=evaluator_llm),
    ContextRecall(llm=evaluator_llm),
    ContextRelevance(llm=evaluator_llm),
]

retrieval_results = evaluate(dataset=ragas_eval_dataset, metrics=ragas_metrics)
retrieval_results.to_pandas()
```
|   | user_input | retrieved_contexts | response | reference | context_precision | context_recall | nv_context_relevance |
|---|---|---|---|---|---|---|---|
| 0 | Who are the major players in the large languag... | [In the rapidly advancing field of artificial ... | The major players in the large language model ... | The major players include OpenAI (GPT Series),... | 1.000000 | 1.0 | 1.0 |
| 1 | What is Microsoft's Azure AI platform known for? | [Microsoft's Azure AI platform is famous for i... | Microsoft's Azure AI platform is known for int... | Microsoft's Azure AI platform is known for int... | 1.000000 | 1.0 | 1.0 |
| 2 | What kind of models does Cohere provide? | [Cohere is well-known for its language models ... | Cohere provides language models tailored for b... | Cohere provides language models tailored for b... | 0.833333 | 1.0 | 1.0 |
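As mentioned above, you aren't limited to the built-in metrics. Here is a minimal custom-metric sketch using `AspectCritic`, a binary LLM-judged metric from `ragas.metrics`; the metric name and definition below are illustrative assumptions, not part of the original tutorial:

```python
from ragas import evaluate
from ragas.metrics import AspectCritic

# Illustrative custom metric: does the response name at least one concrete
# model or model family? (The name and definition here are our own.)
model_specificity = AspectCritic(
    name="model_specificity",
    definition="Return 1 if the response names at least one specific model or model family, otherwise return 0.",
    llm=evaluator_llm,  # reuses the evaluator LLM defined earlier
)

custom_results = evaluate(dataset=ragas_eval_dataset, metrics=[model_specificity])
custom_results.to_pandas()
```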
#### Evaluating Generation
To measure generation performance, we will use `FactualCorrectness`, `Faithfulness`, and `ResponseGroundedness`:
- `FactualCorrectness`: Checks whether the claims made in a response are supported by the reference answer.
- `Faithfulness`: Measures how factually consistent a response is with the retrieved context (see the formula sketch after this list).
- `ResponseGroundedness`: Measures whether the response is grounded in the provided context, helping to identify hallucinated or made-up information.
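For reference, `Faithfulness` reduces to a claim-level ratio, following the definition in the Ragas documentation:

$$
\text{Faithfulness} = \frac{\text{number of claims in the response supported by the retrieved context}}{\text{total number of claims in the response}}
$$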
```python
from ragas.metrics import FactualCorrectness, Faithfulness, ResponseGroundedness

ragas_metrics = [
    FactualCorrectness(llm=evaluator_llm),
    Faithfulness(llm=evaluator_llm),
    ResponseGroundedness(llm=evaluator_llm),
]

generation_results = evaluate(dataset=ragas_eval_dataset, metrics=ragas_metrics)
generation_results.to_pandas()
```
|   | user_input | retrieved_contexts | response | reference | factual_correctness(mode=f1) | faithfulness | nv_response_groundedness |
|---|---|---|---|---|---|---|---|
| 0 | Who are the major players in the large languag... | [In the rapidly advancing field of artificial ... | The major players in the large language model ... | The major players include OpenAI (GPT Series),... | 1.00 | 1.000000 | 1.0 |
| 1 | What is Microsoft's Azure AI platform known for? | [Microsoft's Azure AI platform is famous for i... | Microsoft's Azure AI platform is known for int... | Microsoft's Azure AI platform is known for int... | 0.57 | 0.833333 | 1.0 |
| 2 | What kind of models does Cohere provide? | [Cohere is well-known for its language models ... | Cohere provides language models tailored for b... | Cohere provides language models tailored for b... | 0.57 | 1.000000 | 1.0 |
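Beyond the per-row DataFrames, the object returned by `evaluate` also carries aggregate scores; printing it shows the per-metric averages, which is handy for quick before/after comparisons while iterating on the pipeline:

```python
# Aggregate (averaged) scores per metric
print(retrieval_results)
print(generation_results)
```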
## Conclusion
Congratulations! You've successfully set up a Ragas evaluation pipeline for your Griptape RAG system. This evaluation provides valuable insights into how well your system retrieves relevant information and generates accurate responses.
Remember that RAG evaluation is an iterative process. Use these metrics to identify weaknesses in your system, make improvements, and re-evaluate until you achieve the performance level you need.
Happy RAGging!