Agent Skills: evaluating-code-models

Evaluates code generation models across HumanEval, MBPP, MultiPL-E, and 15+ benchmarks with pass@k metrics. Use it when benchmarking code models, comparing coding abilities, testing multi-language support, or measuring code generation quality. An industry standard from the BigCode Project, used by Hugging Face leaderboards.
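The pass@k metric mentioned above measures the probability that at least one of k sampled generations for a problem passes that problem's unit tests. It is typically computed with the unbiased estimator introduced alongside HumanEval. A minimal sketch in Python, for illustration only (the function name is not part of this skill):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c pass the
    tests, is correct."""
    if n - c < k:
        # Fewer than k failing samples, so any draw of k contains a pass.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 200 generations per problem, 10 of them pass -> pass@1 = 0.05
print(pass_at_k(n=200, c=10, k=1))
```

Reported benchmark scores are typically this estimator averaged over all problems in the benchmark.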

Category: Uncategorized
ID: davila7/claude-code-templates/evaluating-code-models

Install this agent skill to your local environment:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/evaluating-code-models

Skill Files

Browse the full folder contents for evaluating-code-models in the repository linked above.
