Test fixture
Secure code changes, vulnerability recognition, and safe defaults.
The model receives the prompt (and an optional system message). The run uses the contains_any scorer with the JSON configuration below. Pass/fail and partial credit are determined entirely by that scorer against the model output; no human grading is involved.
Prompt: Is this secure? Explain briefly: result = eval(user_input)
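The vulnerability the prompt targets is well known: eval() executes arbitrary Python, so passing user input to it is a code execution vector. A minimal sketch of the unsafe pattern next to a safer default, assuming the input is meant to be a Python literal (the helper name parse_user_value is hypothetical, not part of the fixture):

import ast

def parse_user_value(user_input: str):
    # UNSAFE: eval() executes arbitrary code; user_input of
    # "__import__('os').system('id')" would run a shell command.
    # result = eval(user_input)

    # Safer default: ast.literal_eval accepts only Python literals
    # (numbers, strings, lists, dicts, tuples, sets, booleans, None)
    # and raises ValueError or SyntaxError on anything else.
    return ast.literal_eval(user_input)

An answer along these lines naturally contains the phrases the scorer checks for. The scorer configuration: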
{
  "expected_contains": [
    "code execution",
    "unsafe",
    "eval"
  ]
}

Run settings:
temperature: 0
max_tokens: 120
timeout (s): 120
type: scored
file: security_easy_03.json
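As a rough illustration of how these settings might be wired into a run (a sketch only; call_model and the fixture's "prompt" key are assumptions, not the harness's real API):

import json

def run_fixture(call_model, fixture_path="security_easy_03.json"):
    # Load the fixture; the "prompt" key is an assumed layout of the file.
    with open(fixture_path) as f:
        fixture = json.load(f)
    # temperature 0 makes the output as deterministic as the model allows;
    # max_tokens caps the reply length; timeout bounds the request in seconds.
    return call_model(
        prompt=fixture["prompt"],
        temperature=0,
        max_tokens=120,
        timeout=120,
    )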
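And a minimal sketch of how a contains_any scorer with partial credit could behave (the function name and the fraction-matched scoring rule are assumptions; the harness's actual rules may differ):

def score_contains_any(output: str, expected_contains: list[str]) -> float:
    # Case-insensitive substring matching: the score is the fraction of
    # expected phrases found anywhere in the model output.
    text = output.lower()
    hits = sum(1 for phrase in expected_contains if phrase.lower() in text)
    return hits / len(expected_contains) if expected_contains else 0.0

Under this reading, a reply mentioning "eval" and "unsafe" but not "code execution" would score 2/3; whether that counts as a pass depends on the harness's threshold.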