Bring your own LLMs

Ragas uses Langchain under the hood to connect to LLMs for the metrics that require them. This means you can swap out the default LLM we use (gpt-3.5-turbo-16k) for any of the hundreds of APIs supported out of the box by Langchain.

This guide will show you how to use another LLM or LLM API for evaluation.

Note

If you're looking to use Azure OpenAI for evaluation, check out this guide.

Evaluating with GPT-4

Ragas uses gpt-3.5 by default, but using GPT-4 for evaluation can improve the results, so let's use that for the Faithfulness metric.

To start off, we initialise the GPT-4 chat model from Langchain.

# make sure you have your OpenAI API key ready
import os

os.environ["OPENAI_API_KEY"] = "your-openai-key"
from langchain.chat_models import ChatOpenAI

gpt4 = ChatOpenAI(model_name="gpt-4")

In order to use a Langchain LLM you have to wrap it with the RagasLLM wrapper. This helps the Ragas library specify the interface that will be used internally by the metrics while keeping what is exposed via the Langchain library separate. You can also use LLM APIs from tools like LlamaIndex and LiteLLM by creating your own implementation of RagasLLM that supports them.

from ragas.llms import LangchainLLM

gpt4_wrapper = LangchainLLM(llm=gpt4)
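
The same wrapper pattern works for other providers as well. As a purely illustrative sketch, if your installed Langchain version ships the LiteLLM integration (ChatLiteLLM), any LiteLLM-supported model can be routed through the same LangchainLLM wrapper; the model name below is only an example:

# hypothetical example: route a LiteLLM-supported provider through the same wrapper
from langchain.chat_models import ChatLiteLLM
from ragas.llms import LangchainLLM

claude = ChatLiteLLM(model="claude-2")  # illustrative model name
claude_wrapper = LangchainLLM(llm=claude)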

Substitute the llm in the Metric instance with the newly created GPT-4 wrapper.

from ragas.metrics import faithfulness

faithfulness.llm = gpt4_wrapper

That’s it! faithfulness will now be using GPT-4 under the hood for evaluations.

Now let's run the evaluations using the example from the quickstart.

# data
from datasets import load_dataset

fiqa_eval = load_dataset("explodinggradients/fiqa", "ragas_eval")
fiqa_eval
Found cached dataset fiqa (/home/jjmachan/.cache/huggingface/datasets/explodinggradients___fiqa/ragas_eval/1.0.0/3dc7b639f5b4b16509a3299a2ceb78bf5fe98ee6b5fee25e7d5e4d290c88efb8)
DatasetDict({
    baseline: Dataset({
        features: ['question', 'ground_truths', 'answer', 'contexts'],
        num_rows: 30
    })
})
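
Each row carries the four columns the metrics expect: question, contexts, answer and ground_truths. You can peek at a single sample to confirm the schema:

# inspect one sample to see the expected schema
sample = fiqa_eval["baseline"][0]
print(sample["question"])
print(sample["contexts"][0][:200])
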
# evaluate
from ragas import evaluate

result = evaluate(
    fiqa_eval["baseline"].select(range(5)),  # showing only 5 for demonstration
    metrics=[faithfulness],
)

result
evaluating with [faithfulness]
100%|████████████████████████████████████████████████████████████| 1/1 [06:03<00:00, 363.87s/it]
{'faithfulness': 0.7667}
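
If you want per-sample scores rather than just the aggregate, the result can be exported to a pandas DataFrame; this assumes the to_pandas() helper used in the quickstart is available in your Ragas version:

# per-sample faithfulness scores alongside the original rows
df = result.to_pandas()
df.head()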

Evaluating with Open-Source LLMs

You can also use any open-source LLM for evaluation. Ragas supports most of the deployment methods like HuggingFace TGI, Anyscale, vLLM and many more through Langchain.

When it comes to selecting open-source language models, there are some rules of thumb to follow, given that the quality of evaluation metrics depends heavily on the model’s quality:

  1. Opt for models with more than 7 billion parameters. This choice ensures a minimum level of quality in the results for ragas metrics. Models like Llama-2 or Mistral can be an excellent starting point.

  2. Always prioritize finetuned models over base models. Finetuned models tend to follow instructions more effectively, which can significantly improve their performance.

  3. If your project focuses on a specific domain, such as science or finance, prioritize models that have been pre-trained on a larger volume of tokens from your domain of interest. For instance, if you are working with research data, consider models pre-trained on a substantial number of tokens from platforms like arXiv or Semantic Scholar.

Note

Choosing the right open-source LLM for evaluation can be tricky. You can also fine-tune these models to get even better performance on Ragas metrics. If you need some help/advice on that, feel free to talk to us.

In this example we are going to use vLLM to host HuggingFaceH4/zephyr-7b-alpha. Check out the quickstart for more details on how to get started with vLLM.

# start the vLLM server
!python -m vllm.entrypoints.openai.api_server \
    --model HuggingFaceH4/zephyr-7b-alpha \
    --host 0.0.0.0 \
    --port 8080
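
Before pointing Ragas at the server, it helps to confirm the OpenAI-compatible endpoint is reachable. A minimal sanity check, assuming the server is running locally on port 8080:

# quick check that the vLLM OpenAI-compatible server is up
import requests

print(requests.get("http://localhost:8080/v1/models").json())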

Now let's create a Langchain LLM instance and wrap it with the RagasLLM class. Because vLLM can run in OpenAI compatibility mode, we can use the ChatOpenAI class as-is with small tweaks.

from langchain.chat_models import ChatOpenAI
from ragas.llms import LangchainLLM

inference_server_url = "http://localhost:8080/v1"

# create vLLM Langchain instance
chat = ChatOpenAI(
    model="HuggingFaceH4/zephyr-7b-alpha",
    openai_api_key="no-key",
    openai_api_base=inference_server_url,
    max_tokens=256,  # the metrics generate full statements and verdicts, so leave room for output
    temperature=0,
)

# use the Ragas LangchainLLM wrapper to create a RagasLLM instance
vllm = LangchainLLM(llm=chat)
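
As a quick smoke test, you can send a single prompt through the Langchain chat model before wiring it into the metrics; this is just an illustrative check, not part of the evaluation itself:

# make sure the hosted model actually responds
print(chat.predict("Hello, are you there?"))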

Now let's import all the metrics you want to use and change the LLM.

from ragas.metrics import (
    context_precision,
    answer_relevancy,
    faithfulness,
    context_recall,
)
from ragas.metrics.critique import harmfulness

# change the LLM

faithfulness.llm = vllm
answer_relevancy.llm = vllm
context_precision.llm = vllm
context_recall.llm = vllm
harmfulness.llm = vllm

Now you can run the evaluations and analyse the results.

# evaluate
from ragas import evaluate

result = evaluate(
    fiqa_eval["baseline"].select(range(5)),  # showing only 5 for demonstration
    metrics=[faithfulness],
)

result
evaluating with [faithfulness]
100%|████████████████████████████████████████████████████████████| 1/1 [06:25<00:00, 385.74s/it]
{'faithfulness': 0.7167}
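
Since all the imported metrics now point at the vLLM-backed model, you can also score them in a single call, mirroring the quickstart; a minimal sketch:

# evaluate all the swapped metrics at once
result = evaluate(
    fiqa_eval["baseline"].select(range(5)),
    metrics=[
        faithfulness,
        answer_relevancy,
        context_precision,
        context_recall,
        harmfulness,
    ],
)

result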