Test fixture
Latency-sensitive tasks where concise correct output matters.
The model receives the prompt below (plus an optional system message). The run uses the contains_any scorer with the JSON configuration shown. Pass/fail and partial credit are determined entirely by that scorer against the model output; there is no human grading.
Summarize in 2 short bullet points: AI assistants can speed up drafting and analysis, but outputs need human review and quality checks.
{
"expected_contains": [
"speed",
"human review",
"quality"
]
}

temperature: 0
max_tokens: 80
timeout (s): 120
type: scored
file: speed_easy_02.json
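The scoring described above can be sketched as follows. This is a minimal illustration, not the actual scorer implementation: the function name contains_any_score and the partial-credit rule (fraction of expected substrings found, case-insensitive) are assumptions for the sake of the example.

```python
def contains_any_score(output: str, expected_contains: list[str]) -> float:
    """Hypothetical contains_any scorer: returns the fraction of
    expected substrings found (case-insensitive) in the model output.
    The real scorer's exact semantics are an assumption here."""
    if not expected_contains:
        return 0.0
    text = output.lower()
    hits = sum(1 for s in expected_contains if s.lower() in text)
    return hits / len(expected_contains)

# Configuration mirroring the JSON above.
config = {"expected_contains": ["speed", "human review", "quality"]}

# A model output that would earn full credit under this sketch.
sample = ("- AI assistants speed up drafting and analysis.\n"
          "- Outputs still need human review and quality checks.")
print(contains_any_score(sample, config["expected_contains"]))  # 1.0
```

Under these semantics an output matching only one of the three substrings would score 1/3, which is one plausible reading of "partial credit" in the description above.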