blxbench

Test fixture

Summary Customer Support

Speed · medium · scorer: contains_all

Latency-sensitive tasks where concise correct output matters.

How it is scored

The model receives the user prompt (and an optional system message). The run uses the contains_all scorer with the JSON configuration below. Pass/fail and partial credit are determined entirely by that scorer against the model output; there is no human grading.

User prompt
Summarize in exactly 3 short bullet points:
Support performance improved after introducing intent-based ticket routing, response templates for common issues, and escalation paths for payment and security incidents. Weekly reviews of first-response time and resolution time helped the team prioritize process bottlenecks.
Scorer config
{
  "expected_contains": [
    "ticket routing",
    "escalation",
    "first-response time"
  ]
}
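To make the scoring concrete, here is a minimal sketch of what a contains_all scorer could look like. This is an assumption, not BLXBench's actual implementation: it supposes case-insensitive substring matching and partial credit equal to the fraction of expected strings found.

```python
def contains_all(output: str, expected_contains: list[str]) -> float:
    """Hypothetical contains_all scorer sketch.

    Full credit (1.0) only if every expected substring appears in the
    model output; partial credit is the fraction of substrings found.
    Matching here is case-insensitive — an assumption, since the real
    scorer's matching rules are not shown on this page.
    """
    text = output.lower()
    hits = sum(1 for needle in expected_contains if needle.lower() in text)
    return hits / len(expected_contains)

# The config from this fixture:
config = {
    "expected_contains": ["ticket routing", "escalation", "first-response time"]
}

# A hypothetical model answer to the prompt above:
sample = (
    "- Intent-based ticket routing cut misdirected tickets\n"
    "- Escalation paths added for payment and security incidents\n"
    "- Weekly first-response time reviews exposed process bottlenecks"
)

print(contains_all(sample, config["expected_contains"]))  # 1.0
```

Under this reading, an answer that mentions only two of the three phrases would score 2/3 rather than failing outright.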
Run parameters

temperature: 0
max_tokens: 110
timeout (s): 120
type: scored
file: speed_medium_04.json


BLXBench

Community-driven leaderboard. Public benchmark runner: run in your environment, share results with the community.

© 2026 BLXBench by bitslix.com

Provenance: Aggregated from user runs
Scope: 3 / 7 / 372
Latest: run_5434c2 / 7 / $0.00