Agent Skills: evaluating-llms-harness
Evaluates LLMs across 60+ academic benchmarks (MMLU, HumanEval, GSM8K, TruthfulQA, HellaSwag). Use when benchmarking model quality, comparing models, reporting academic results, or tracking training progress. Industry standard used by EleutherAI, HuggingFace, and major labs. Supports HuggingFace Transformers, vLLM, and API-based backends.
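Assuming this skill wraps EleutherAI's lm-evaluation-harness (which the name and backends suggest), a typical run looks like the minimal sketch below, scoring a HuggingFace model on two of the listed benchmarks through the library's Python entry point. The model name, few-shot setting, and batch size are illustrative placeholders, not values prescribed by the skill.

```python
# Minimal sketch, assuming the skill drives EleutherAI's lm-evaluation-harness
# (pip install lm-eval). Model name, few-shot count, and batch size are
# illustrative only; swap in the model and tasks you want to benchmark.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                   # HuggingFace Transformers backend
    model_args="pretrained=meta-llama/Llama-3.1-8B,dtype=bfloat16",
    tasks=["mmlu", "gsm8k"],                      # any of the supported benchmarks
    num_fewshot=5,
    batch_size=8,
)

# Per-task metrics (accuracy, exact match, etc.) live under "results".
for task, metrics in results["results"].items():
    print(task, metrics)
```

The same run can target a vLLM or API backend by changing the `model` and `model_args` values; the task list and reporting stay identical, which is what makes the harness useful for apples-to-apples model comparison.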
Tags: llm-evaluation, benchmarking, academic-benchmarks, huggingface, model-comparison, evaluation
ID: ovachiever/droid-tings/evaluating-llms-harness