Agent Skills: evaluating-code-models
Evaluates code generation models on HumanEval, MBPP, MultiPL-E, and 15+ other benchmarks using pass@k metrics. Use it when benchmarking code models, comparing coding ability, testing multi-language support, or measuring code generation quality. It is an industry standard from the BigCode Project, used by HuggingFace leaderboards.
Category: Uncategorized
ID: davila7/claude-code-templates/evaluating-code-models
Install this agent skill to your local environment.
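The pass@k metric mentioned above estimates the probability that at least one of k sampled completions for a problem passes its unit tests. Below is a minimal sketch of the standard unbiased estimator (from the Codex/HumanEval paper) that benchmark tooling of this kind typically reports; the `pass_at_k` function name and the `results` data are illustrative, not the skill's actual code.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total generations sampled for a problem
    c: number of generations that pass all unit tests
    k: number of samples considered
    """
    if n - c < k:
        return 1.0
    # 1 - probability that all k drawn samples are incorrect
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative (n, c) pairs per problem; the benchmark-level score is
# the mean of the per-problem estimates.
results = {"problem_1": (200, 37), "problem_2": (200, 4)}
for k in (1, 10, 100):
    score = np.mean([pass_at_k(n, c, k) for n, c in results.values()])
    print(f"pass@{k}: {score:.3f}")
```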
Skill Files
Browse the full folder contents for evaluating-code-models.