Schemas

TestsetSample

Bases: BaseSample

Represents a sample in a test set.

Attributes:

eval_sample (Union[SingleTurnSample, MultiTurnSample]):
    The evaluation sample, which can be either a single-turn or multi-turn sample.

synthesizer_name (str):
    The name of the synthesizer used to generate this sample.
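A minimal construction sketch with illustrative values; SingleTurnSample is assumed to be importable from ragas.dataset_schema, and the synthesizer name below is a placeholder, not a guaranteed built-in:

from ragas.dataset_schema import SingleTurnSample
from ragas.testset.synthesizers.testset_schema import TestsetSample

sample = TestsetSample(
    eval_sample=SingleTurnSample(
        user_input="What is the capital of France?",
        reference="Paris is the capital of France.",
    ),
    synthesizer_name="single_hop_query_synthesizer",  # placeholder name
)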

Testset dataclass

Testset(samples: List[TestsetSample], cost_cb: Optional[CostCallbackHandler] = None)

Bases: RagasDataset[TestsetSample]

Represents a test set containing multiple test samples.

Attributes:

samples (List[TestsetSample]):
    A list of TestsetSample objects representing the samples in the test set.

cost_cb (Optional[CostCallbackHandler]):
    Optional callback handler that tracks token usage, enabling total_tokens()
    and total_cost().
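A short sketch of building a Testset directly, reusing `sample` from the TestsetSample example above:

from ragas.testset.synthesizers.testset_schema import Testset

testset = Testset(samples=[sample])
print(len(testset.samples))  # 1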

to_evaluation_dataset

to_evaluation_dataset() -> EvaluationDataset

Converts the Testset to an EvaluationDataset.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def to_evaluation_dataset(self) -> EvaluationDataset:
    """
    Converts the Testset to an EvaluationDataset.
    """
    return EvaluationDataset(
        samples=[sample.eval_sample for sample in self.samples]
    )
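Usage is a one-liner; as the source above shows, the conversion keeps each eval_sample and drops the synthesizer_name metadata:

eval_dataset = testset.to_evaluation_dataset()
# eval_dataset collects the eval_sample of every TestsetSample and can then
# be fed into Ragas evaluation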

to_list

to_list() -> List[Dict]

Converts the Testset to a list of dictionaries.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def to_list(self) -> t.List[t.Dict]:
    """
    Converts the Testset to a list of dictionaries.
    """
    list_dict = []
    for sample in self.samples:
        sample_dict = sample.eval_sample.model_dump(exclude_none=True)
        sample_dict["synthesizer_name"] = sample.synthesizer_name
        list_dict.append(sample_dict)
    return list_dict
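For example, with the testset built above (field names depend on the sample type; None-valued fields are excluded by model_dump):

rows = testset.to_list()
# e.g. [{'user_input': 'What is the capital of France?',
#        'reference': 'Paris is the capital of France.',
#        'synthesizer_name': 'single_hop_query_synthesizer'}]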

from_list classmethod

from_list(data: List[Dict]) -> Testset

Converts a list of dictionaries to a Testset.

Source code in src/ragas/testset/synthesizers/testset_schema.py
@classmethod
def from_list(cls, data: t.List[t.Dict]) -> Testset:
    """
    Converts a list of dictionaries to a Testset.
    """
    # first create the samples
    samples = []
    for sample in data:
        synthesizer_name = sample["synthesizer_name"]
        # remove the synthesizer name from the sample
        sample.pop("synthesizer_name")
        # the remaining sample is the eval_sample
        eval_sample = sample

        # a string user_input marks a SingleTurnSample; a list of
        # messages (or a missing user_input) marks a MultiTurnSample
        if "user_input" in eval_sample and not isinstance(
            eval_sample.get("user_input"), list
        ):
            eval_sample = SingleTurnSample(**eval_sample)
        else:
            eval_sample = MultiTurnSample(**eval_sample)

        samples.append(
            TestsetSample(
                eval_sample=eval_sample, synthesizer_name=synthesizer_name
            )
        )
    # then create the testset
    return Testset(samples=samples)
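A round-trip sketch. Note that from_list pops "synthesizer_name" from each input dict, so it mutates the list you pass in:

rows = testset.to_list()
restored = Testset.from_list(rows)
assert len(restored.samples) == len(testset.samples)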

total_tokens

total_tokens() -> Union[List[TokenUsage], TokenUsage]

Compute the total tokens used in the evaluation.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def total_tokens(self) -> t.Union[t.List[TokenUsage], TokenUsage]:
    """
    Compute the total tokens used in the evaluation.
    """
    if self.cost_cb is None:
        raise ValueError(
            "The Testset was not configured for computing cost. Please provide a token_usage_parser function to TestsetGenerator to compute cost."
        )
    return self.cost_cb.total_tokens()
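A hedged usage sketch; this only succeeds when the Testset was produced with a token_usage_parser configured, otherwise the ValueError above is raised. The import path and TokenUsage field names are assumptions:

from ragas.cost import TokenUsage  # assumed import path

usage = testset.total_tokens()
if isinstance(usage, TokenUsage):
    print(usage.input_tokens, usage.output_tokens)  # assumed field names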

total_cost

total_cost(cost_per_input_token: Optional[float] = None, cost_per_output_token: Optional[float] = None) -> float

Compute the total cost of the evaluation.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def total_cost(
    self,
    cost_per_input_token: t.Optional[float] = None,
    cost_per_output_token: t.Optional[float] = None,
) -> float:
    """
    Compute the total cost of the evaluation.
    """
    if self.cost_cb is None:
        raise ValueError(
            "The Testset was not configured for computing cost. Please provide a token_usage_parser function to TestsetGenerator to compute cost."
        )
    return self.cost_cb.total_cost(
        cost_per_input_token=cost_per_input_token,
        cost_per_output_token=cost_per_output_token,
    )
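A usage sketch with illustrative per-token prices (not real model pricing); the same cost_cb precondition as total_tokens() applies:

cost = testset.total_cost(
    cost_per_input_token=5e-6,    # illustrative price
    cost_per_output_token=15e-6,  # illustrative price
)
print(f"generation cost: ${cost:.4f}")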

QueryLength

Bases: str, Enum

Enumeration of query lengths. Available options are: LONG, MEDIUM, SHORT

QueryStyle

Bases: str, Enum

Enumeration of query styles. Available options are: MISSPELLED, PERFECT_GRAMMAR, POOR_GRAMMAR, WEB_SEARCH_LIKE
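Both enums subclass str, so members can be passed wherever plain strings are expected. A small sketch (the import path is an assumption):

from ragas.testset.synthesizers.base import QueryLength, QueryStyle  # assumed path

style = QueryStyle.WEB_SEARCH_LIKE
length = QueryLength.SHORT
print(style.name, length.name)        # WEB_SEARCH_LIKE SHORT
print([m.name for m in QueryLength])  # ['LONG', 'MEDIUM', 'SHORT']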

BaseScenario

Bases: BaseModel

Base class for representing a scenario for generating test samples.

Attributes:

nodes (List[Node]):
    List of nodes involved in the scenario.

style (QueryStyle):
    The style of the query.

length (QueryLength):
    The length of the query.
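A hedged construction sketch; Node is assumed to come from ragas.testset.graph and to accept a free-form properties dict, and the import paths here are assumptions:

from ragas.testset.graph import Node  # assumed import path
from ragas.testset.synthesizers.base import BaseScenario  # assumed path

scenario = BaseScenario(
    nodes=[Node(properties={"page_content": "Paris is the capital of France."})],
    style=QueryStyle.PERFECT_GRAMMAR,
    length=QueryLength.MEDIUM,
)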

SpecificQueryScenario

Bases: BaseScenario

Represents a scenario for generating specific queries. Also inherits attributes from BaseScenario.

Attributes:

keyphrase (str):
    The keyphrase of the specific query scenario.

AbstractQueryScenario

Bases: BaseScenario

Represents a scenario for generating abstract queries. Also inherits attributes from BaseScenario.

Attributes:

theme (str):
    The theme of the abstract query scenario.
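Each subclass adds a single field on top of BaseScenario. A sketch reusing the scenario built earlier, assuming both classes are importable alongside BaseScenario; the keyphrase and theme values are illustrative:

specific = SpecificQueryScenario(
    nodes=scenario.nodes,
    style=scenario.style,
    length=scenario.length,
    keyphrase="capital of France",  # illustrative value
)

abstract = AbstractQueryScenario(
    nodes=scenario.nodes,
    style=scenario.style,
    length=scenario.length,
    theme="European geography",     # illustrative value
)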