# Schemas
## TestsetSample

Bases: `BaseSample`

Represents a sample in a test set.

Attributes:

| Name | Type | Description |
|---|---|---|
| `eval_sample` | `Union[SingleTurnSample, MultiTurnSample]` | The evaluation sample, which can be either a single-turn or multi-turn sample. |
| `synthesizer_name` | `str` | The name of the synthesizer used to generate this sample. |
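To make the shape of a sample concrete, here is a minimal sketch using plain dataclass stand-ins for the real ragas classes (the actual `SingleTurnSample` and `TestsetSample` are Pydantic models with more fields; the field names shown are illustrative):

```python
from dataclasses import dataclass

# Simplified stand-ins for the real ragas classes (illustrative only).
@dataclass
class SingleTurnSample:
    user_input: str
    reference: str

@dataclass
class TestsetSample:
    eval_sample: SingleTurnSample
    synthesizer_name: str

sample = TestsetSample(
    eval_sample=SingleTurnSample(
        user_input="What is retrieval-augmented generation?",
        reference="RAG combines retrieval with generation...",
    ),
    synthesizer_name="single_hop_specifc_query_synthesizer",
)
```

Each sample thus carries both the evaluation payload and the name of the synthesizer that produced it, which is useful for slicing results by query type later.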
## Testset

`dataclass`

`Testset(samples: List[TestsetSample], cost_cb: Optional[CostCallbackHandler] = None)`

Bases: `RagasDataset[TestsetSample]`

Represents a test set containing multiple test samples.

Attributes:

| Name | Type | Description |
|---|---|---|
| `samples` | `List[TestsetSample]` | A list of `TestsetSample` objects representing the samples in the test set. |
### to_evaluation_dataset

`to_evaluation_dataset() -> EvaluationDataset`

Converts the Testset to an EvaluationDataset.

### to_list

Converts the Testset to a list of dictionaries.

Source code in `src/ragas/testset/synthesizers/testset_schema.py`
### from_list

`classmethod`

`from_list(data: List[Dict]) -> Testset`

Converts a list of dictionaries to a Testset.
### total_tokens

Compute the total tokens used in the evaluation.
### total_cost

`total_cost(cost_per_input_token: Optional[float] = None, cost_per_output_token: Optional[float] = None) -> float`

Compute the total cost of the evaluation.
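The arithmetic behind `total_cost` can be sketched as follows. This is not the ragas implementation (the real token counts come from the `cost_cb` callback handler); the token counts and per-token prices below are made-up numbers chosen only to show the calculation:

```python
# Hypothetical usage figures; in ragas these come from cost_cb.
input_tokens = 12_000
output_tokens = 3_000

# Hypothetical per-token prices (e.g. quoted per million tokens).
cost_per_input_token = 0.50 / 1_000_000
cost_per_output_token = 1.50 / 1_000_000

total = (
    input_tokens * cost_per_input_token
    + output_tokens * cost_per_output_token
)
print(f"total cost: ${total:.4f}")  # prints: total cost: $0.0105
```

Passing the two prices as arguments keeps the Testset itself provider-agnostic: the same token counts can be re-priced for different models without regenerating anything.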
## QueryLength

Bases: `str`, `Enum`

Enumeration of query lengths. Available options are: `LONG`, `MEDIUM`, `SHORT`.
## QueryStyle

Bases: `str`, `Enum`

Enumeration of query styles. Available options are: `MISSPELLED`, `PERFECT_GRAMMAR`, `POOR_GRAMMAR`, `WEB_SEARCH_LIKE`.
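Both enums mix in `str`, so members compare equal to strings and serialize cleanly. A minimal sketch of the pattern (the member *values* below are placeholders, not necessarily the strings ragas uses):

```python
from enum import Enum

# str-mixin enums: members are both Enum members and plain strings.
# The values here are illustrative placeholders.
class QueryLength(str, Enum):
    LONG = "long"
    MEDIUM = "medium"
    SHORT = "short"

class QueryStyle(str, Enum):
    MISSPELLED = "misspelled"
    PERFECT_GRAMMAR = "perfect_grammar"
    POOR_GRAMMAR = "poor_grammar"
    WEB_SEARCH_LIKE = "web_search_like"

# Because of the str mixin, this works in string contexts directly:
chosen = QueryLength.SHORT
```

The str mixin is what lets these values pass through JSON serialization and prompt templating without explicit `.value` access.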
## BaseScenario

Bases: `BaseModel`

Base class for representing a scenario for generating test samples.

Attributes:

| Name | Type | Description |
|---|---|---|
| `nodes` | `List[Node]` | List of nodes involved in the scenario. |
| `style` | `QueryStyle` | The style of the query. |
| `length` | `QueryLength` | The length of the query. |
| `persona` | `Persona` | A persona associated with the scenario. |
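A scenario bundles everything a synthesizer needs to produce one query. As a shape-only sketch, here is the structure with plain dataclasses standing in for the Pydantic models (`Node` and `Persona` are simplified; their real fields differ):

```python
from dataclasses import dataclass
from typing import List

# Simplified stand-ins; the real Node and Persona carry more fields.
@dataclass
class Node:
    content: str

@dataclass
class Persona:
    name: str

@dataclass
class BaseScenario:
    nodes: List[Node]
    style: str   # QueryStyle member in the real schema
    length: str  # QueryLength member in the real schema
    persona: Persona

scenario = BaseScenario(
    nodes=[Node("a chunk about vector indexes")],
    style="PERFECT_GRAMMAR",
    length="SHORT",
    persona=Persona("junior backend developer"),
)
```

Varying `style`, `length`, and `persona` over the same nodes is how a synthesizer turns one set of chunks into a diverse batch of queries.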
## SingleHopSpecificQuerySynthesizer

`dataclass`

`SingleHopSpecificQuerySynthesizer(name: str = 'single_hop_specifc_query_synthesizer', llm: BaseRagasLLM = llm_factory(), generate_query_reference_prompt: PydanticPrompt = QueryAnswerGenerationPrompt(), theme_persona_matching_prompt: PydanticPrompt = ThemesPersonasMatchingPrompt())`

Bases: `SingleHopQuerySynthesizer`
## MultiHopSpecificQuerySynthesizer

`dataclass`

`MultiHopSpecificQuerySynthesizer(name: str = 'multi_hop_specific_query_synthesizer', llm: BaseRagasLLM = llm_factory(), generate_query_reference_prompt: PydanticPrompt = QueryAnswerGenerationPrompt(), theme_persona_matching_prompt: PydanticPrompt = ThemesPersonasMatchingPrompt())`

Bases: `MultiHopQuerySynthesizer`

Synthesizes overlap-based queries by choosing specific chunks, generating a keyphrase from them, and then generating queries based on that keyphrase.

Attributes:

| Name | Type | Description |
|---|---|---|
| `generate_query_prompt` | `PydanticPrompt` | The prompt used for generating the query. |
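The overlap-based flow described above can be sketched as a toy pipeline. This is not the ragas implementation (the real synthesizer uses an LLM and the prompts listed above); it only illustrates the idea of anchoring a multi-hop query on a keyphrase shared by two chunks:

```python
# Toy sketch: two chunks are represented by their keyphrase sets;
# a multi-hop query is templated from a phrase they share.
def shared_keyphrases(chunk_a: set, chunk_b: set) -> set:
    return chunk_a & chunk_b

chunk_a = {"embeddings", "retrieval"}
chunk_b = {"embeddings", "reranking"}

overlap = shared_keyphrases(chunk_a, chunk_b)
keyphrase = sorted(overlap)[0]
query = f"How is '{keyphrase}' used differently across the two passages?"
```

Anchoring the query on an overlapping keyphrase is what forces a correct answer to draw on both chunks, i.e. to require multiple retrieval hops.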