BLXBench

Test fixture

Summary Incident Response

Category: Speed · Difficulty: medium · Scorer: contains_all

Latency-sensitive tasks where concise correct output matters.

How it is scored

The model receives the prompt (and optional system message). The run uses scorer contains_all with the JSON configuration below. Pass/fail and partial credit are determined entirely by that scorer against the model output; no human grading.

User prompt
Summarize in exactly 3 short bullet points:
During a high-traffic incident, the team reduced error rates by enabling graceful degradation, shifted read-heavy traffic to replicas, and paused non-critical background jobs. They also assigned an incident commander, updated stakeholders every 15 minutes, and tracked follow-up actions in a postmortem.
Scorer config
{
  "expected_contains": [
    "graceful degradation",
    "replicas",
    "incident commander"
  ]
}
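The contains_all logic can be sketched as follows. This is a minimal illustration that assumes case-insensitive substring matching with partial credit as the fraction of matched phrases; the actual blxbench scorer implementation may differ.

```python
import json

def contains_all(output: str, config: dict) -> float:
    """Return the fraction of expected substrings found in the model output.

    Assumes case-insensitive substring matching (1.0 = full pass).
    The real blxbench scorer may match phrases differently.
    """
    expected = config["expected_contains"]
    hits = sum(1 for phrase in expected if phrase.lower() in output.lower())
    return hits / len(expected)

# The scorer config from this fixture
config = json.loads("""{
  "expected_contains": [
    "graceful degradation",
    "replicas",
    "incident commander"
  ]
}""")

sample = (
    "- Enabled graceful degradation and shifted reads to replicas\n"
    "- Assigned an incident commander with 15-minute stakeholder updates\n"
    "- Paused non-critical jobs; tracked follow-ups in a postmortem"
)
print(contains_all(sample, config))  # prints 1.0
```

Under this sketch, an answer that mentions only one of the three phrases would score 1/3 rather than failing outright.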
Run parameters

temperature: 0
max_tokens: 110
timeout (s): 120
type: scored
file: speed_medium_01.json
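The run parameters above map directly onto a chat-completion request. A minimal sketch, assuming an OpenAI-style JSON request body; the model name is a placeholder and blxbench's actual wire format may differ:

```python
import json

# Fixture values as listed under "Run parameters"
RUN_PARAMS = {"temperature": 0, "max_tokens": 110, "timeout_s": 120}

prompt = (
    "Summarize in exactly 3 short bullet points:\n"
    "During a high-traffic incident, the team reduced error rates by "
    "enabling graceful degradation, shifted read-heavy traffic to replicas, "
    "and paused non-critical background jobs. They also assigned an incident "
    "commander, updated stakeholders every 15 minutes, and tracked follow-up "
    "actions in a postmortem."
)

# Hypothetical request body for an OpenAI-compatible endpoint
payload = {
    "model": "<your-model>",  # placeholder, not part of the fixture
    "messages": [{"role": "user", "content": prompt}],
    "temperature": RUN_PARAMS["temperature"],
    "max_tokens": RUN_PARAMS["max_tokens"],
}
body = json.dumps(payload)
```

The timeout (120 s) would be applied client-side when sending the request, not inside the payload itself.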


BLXBench

Community-driven leaderboard. Public benchmark runner: run it in your own environment and share the results with the community.

© 2026 BLXBench by bitslix.com

Provenance: aggregated from user runs
Scope: 0 / 7 / 372
Latest: no runs