Agent Skills: evaluating-llms-harness

Evaluates LLMs across 60+ academic benchmarks (MMLU, HumanEval, GSM8K, TruthfulQA, HellaSwag). Use it when benchmarking model quality, comparing models, reporting academic results, or tracking training progress. An industry-standard harness used by EleutherAI, HuggingFace, and major labs. Supports HuggingFace, vLLM, and API-based backends.
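
Assuming this skill wraps EleutherAI's lm-evaluation-harness (as the name and benchmark list suggest), a minimal run against a local HuggingFace model looks roughly like the sketch below; the model and task choices are placeholders, not part of the skill's files:

# Hedged sketch: pythia-160m and the task list are placeholder choices
lm_eval --model hf \
  --model_args pretrained=EleutherAI/pythia-160m \
  --tasks hellaswag,gsm8k \
  --num_fewshot 5 \
  --batch_size 8 \
  --output_path results/

The harness writes per-task scores as JSON under --output_path, which makes it straightforward to compare models or successive training checkpoints.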

Category: Uncategorized
ID: davila7/claude-code-templates/evaluating-llms-harness

Install this agent skill to your local environment:

pnpm dlx add-skill https://github.com/davila7/claude-code-templates/evaluating-llms-harness

Skill Files

Browse the full folder contents for evaluating-llms-harness.
