Test fixture
Secure code changes, vulnerability recognition, and safe defaults.
The model receives the prompt (and an optional system message). The run uses the contains_any scorer with the JSON configuration below. Pass/fail and partial credit are determined entirely by that scorer against the model output; there is no human grading.
Is this secure? Explain briefly: A file upload endpoint validates only file extension and stores files in a web-accessible directory.
{
"expected_contains": [
"content-type",
"malicious file",
"validation"
]
}

temperature: 0
max_tokens: 140
timeout (s): 120
type: scored
file: security_medium_04.json
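The harness's scorer implementation is not included in this fixture. As a reference point, a minimal sketch of how a contains_any-style scorer could apply the configuration above, assuming case-insensitive substring matching with partial credit proportional to the number of expected phrases found (the function name and scoring rule are illustrative assumptions, not the actual harness code):

```python
def contains_any_score(output: str, expected_contains: list[str]) -> float:
    """Hypothetical contains_any scorer: fraction of expected phrases
    found as case-insensitive substrings of the model output."""
    text = output.lower()
    hits = sum(1 for needle in expected_contains if needle.lower() in text)
    return hits / len(expected_contains) if expected_contains else 0.0

# The expected_contains list from the JSON configuration above.
config = {"expected_contains": ["content-type", "malicious file", "validation"]}

# Example model answer touching all three expected phrases.
answer = (
    "No. Extension checks alone are weak validation: the Content-Type and "
    "file contents should be verified too, or a malicious file could be "
    "uploaded and then executed from the web-accessible directory."
)
print(contains_any_score(answer, config["expected_contains"]))  # prints 1.0
```

Under this sketch, an answer matching two of the three phrases would score roughly 0.67, which is one plausible way partial credit could be assigned.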
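For graders reviewing this case: the prompt describes a classic insecure-upload pattern (extension-only validation plus web-accessible storage). A hedged sketch of the safer pattern a strong answer might describe, with an illustrative allowlist, a temporary directory standing in for non-web-accessible storage, and function names that are assumptions rather than any real framework's API:

```python
import mimetypes
import os
import secrets
import tempfile

# Illustrative assumptions, not values from the fixture: an allowlist of
# accepted content types and a storage directory outside the web root.
ALLOWED_TYPES = {"image/png", "image/jpeg"}
UPLOAD_DIR = tempfile.mkdtemp()  # stand-in for a non-web-accessible directory

def validate_and_store(filename: str, content_type: str, data: bytes) -> str:
    """Validate an upload and store it under a random server-chosen name."""
    # Check the declared Content-Type against an allowlist,
    # rather than trusting the file extension alone.
    if content_type not in ALLOWED_TYPES:
        raise ValueError("disallowed content type")
    # Cross-check the extension against the declared type, so a mismatched
    # upload (e.g. shell.php declared as image/png) is rejected.
    if mimetypes.guess_type(filename)[0] != content_type:
        raise ValueError("extension does not match declared content type")
    # Store under a random name so attacker-controlled filenames
    # never reach the filesystem.
    safe_name = secrets.token_hex(16) + os.path.splitext(filename)[1]
    path = os.path.join(UPLOAD_DIR, safe_name)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

This mirrors the scorer's expected phrases: Content-Type checking, handling of a malicious file, and validation beyond the extension.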