
Schemas

TestsetSample

Bases: BaseSample

Represents a sample in a test set.

Attributes:

eval_sample (Union[SingleTurnSample, MultiTurnSample]): The evaluation sample, which can be either a single-turn or multi-turn sample.

synthesizer_name (str): The name of the synthesizer used to generate this sample.
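A minimal construction sketch; the import paths below are assumptions about the installed ragas layout, so adjust them to your version:

from ragas.dataset_schema import SingleTurnSample
from ragas.testset.synthesizers.testset_schema import TestsetSample

# Wrap a single-turn evaluation sample together with the name of the
# synthesizer that (hypothetically) produced it.
sample = TestsetSample(
    eval_sample=SingleTurnSample(
        user_input="What is the capital of France?",
        reference="Paris",
    ),
    synthesizer_name="single_hop_specific_query_synthesizer",
)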

Testset

Bases: RagasDataset[TestsetSample]

Represents a test set containing multiple test samples.

Attributes:

samples (List[TestsetSample]): A list of TestsetSample objects representing the samples in the test set.

to_evaluation_dataset

to_evaluation_dataset() -> EvaluationDataset

Converts the Testset to an EvaluationDataset.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def to_evaluation_dataset(self) -> EvaluationDataset:
    """
    Converts the Testset to an EvaluationDataset.
    """
    return EvaluationDataset(
        samples=[sample.eval_sample for sample in self.samples]
    )
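A usage sketch: a generated testset can be converted before evaluation. Here testset is assumed to be a Testset instance, for example one returned by a TestsetGenerator run.

# Convert the synthetic testset into an EvaluationDataset so it can be
# passed to evaluation along with the metrics of your choice.
eval_dataset = testset.to_evaluation_dataset()

# The conversion keeps one evaluation sample per testset sample;
# the synthesizer_name metadata is dropped.
assert len(eval_dataset.samples) == len(testset.samples)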

to_list

to_list() -> List[Dict]

Converts the Testset to a list of dictionaries.

Source code in src/ragas/testset/synthesizers/testset_schema.py
def to_list(self) -> t.List[t.Dict]:
    """
    Converts the Testset to a list of dictionaries.
    """
    list_dict = []
    for sample in self.samples:
        sample_dict = sample.eval_sample.model_dump(exclude_none=True)
        sample_dict["synthesizer_name"] = sample.synthesizer_name
        list_dict.append(sample_dict)
    return list_dict
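A usage sketch of the resulting structure; the exact field names depend on the underlying evaluation samples, and the row shown here is illustrative only.

rows = testset.to_list()
# Each row is the eval_sample's non-None fields plus the synthesizer name, e.g.:
# {
#     "user_input": "What is the capital of France?",
#     "reference": "Paris",
#     "synthesizer_name": "single_hop_specific_query_synthesizer",
# }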

from_list classmethod

from_list(data: List[Dict]) -> Testset

Converts a list of dictionaries to a Testset.

Source code in src/ragas/testset/synthesizers/testset_schema.py
@classmethod
def from_list(cls, data: t.List[t.Dict]) -> Testset:
    """
    Converts a list of dictionaries to a Testset.
    """
    # first create the samples
    samples = []
    for sample in data:
        synthesizer_name = sample["synthesizer_name"]
        # remove the synthesizer name from the sample
        sample.pop("synthesizer_name")
        # the remaining sample is the eval_sample
        eval_sample = sample

        # if user_input is a list it is MultiTurnSample
        if "user_input" in eval_sample and not isinstance(
            eval_sample.get("user_input"), list
        ):
            eval_sample = SingleTurnSample(**eval_sample)
        else:
            eval_sample = MultiTurnSample(**eval_sample)

        samples.append(
            TestsetSample(
                eval_sample=eval_sample, synthesizer_name=synthesizer_name
            )
        )
    # then create the testset
    return Testset(samples=samples)
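A round-trip sketch, assuming testset is an existing Testset. Note that from_list infers the sample type from user_input: a list value means MultiTurnSample, anything else means SingleTurnSample.

rows = testset.to_list()
restored = Testset.from_list(rows)

# The restored testset has the same number of samples and the same
# synthesizer names as the original.
assert len(restored.samples) == len(testset.samples)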

QueryLength

Bases: str, Enum

Enumeration of query lengths. Available options are: LONG, MEDIUM, SHORT

QueryStyle

Bases: str, Enum

Enumeration of query styles. Available options are: MISSPELLED, PERFECT_GRAMMAR, POOR_GRAMMAR, WEB_SEARCH_LIKE
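Because both classes inherit from str and Enum, their members are also plain strings. A small sketch, assuming they are importable from ragas.testset.synthesizers.base:

from ragas.testset.synthesizers.base import QueryLength, QueryStyle

length = QueryLength.MEDIUM
style = QueryStyle.WEB_SEARCH_LIKE

# str-based enums: members behave like strings in serialization and type checks.
assert isinstance(style, str)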

BaseScenario

Bases: BaseModel

Base class for representing a scenario for generating test samples.

Attributes:

nodes (List[Node]): List of nodes involved in the scenario.

style (QueryStyle): The style of the query.

length (QueryLength): The length of the query.

SpecificQueryScenario

Bases: BaseScenario

Represents a scenario for generating specific queries. Also inherits attributes from BaseScenario.

Attributes:

keyphrase (str): The keyphrase of the specific query scenario.

AbstractQueryScenario

Bases: BaseScenario

Represents a scenario for generating abstract queries. Also inherits attributes from BaseScenario.

Attributes:

theme (str): The theme of the abstract query scenario.
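A rough construction sketch for both scenario types; in normal use the synthesizers build these internally. The import paths and the Node constructor shown here are assumptions about the installed ragas version:

from ragas.testset.graph import Node
from ragas.testset.synthesizers.base import QueryLength, QueryStyle

# A single knowledge-graph node standing in for a document chunk.
node = Node(properties={"page_content": "The Eiffel Tower was completed in 1889."})

specific = SpecificQueryScenario(
    nodes=[node],
    style=QueryStyle.PERFECT_GRAMMAR,
    length=QueryLength.SHORT,
    keyphrase="Eiffel Tower",
)

abstract = AbstractQueryScenario(
    nodes=[node],
    style=QueryStyle.WEB_SEARCH_LIKE,
    length=QueryLength.MEDIUM,
    theme="landmark construction history",
)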