Agent Skills: ai-evals

Help users create and run AI evaluations. Use this skill when someone is building evals for LLM products, creating test cases, designing rubrics, or systematically measuring the quality of model outputs.
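To illustrate the kind of eval this skill helps build, here is a minimal sketch of an eval harness: a set of test cases, a simple keyword-based rubric, and an aggregate pass rate. The `model_fn` callable is a hypothetical stand-in for a real LLM call; real evals often use model-based graders rather than keyword matching.

```python
# Minimal eval-harness sketch: test cases, a keyword rubric, and a pass rate.
# Assumption: model_fn is any callable mapping a prompt string to an output
# string (in practice, an LLM API call).
from dataclasses import dataclass
from typing import Callable


@dataclass
class TestCase:
    prompt: str
    expected_keywords: list[str]  # rubric: the output must mention all of these


def grade(output: str, case: TestCase) -> bool:
    # Case-insensitive keyword rubric; swap in a model-based grader for
    # open-ended tasks.
    return all(k.lower() in output.lower() for k in case.expected_keywords)


def run_eval(model_fn: Callable[[str], str], cases: list[TestCase]) -> float:
    # Fraction of test cases whose output passes the rubric.
    passed = sum(grade(model_fn(c.prompt), c) for c in cases)
    return passed / len(cases)


# Usage with a stub "model" that returns a canned answer:
cases = [
    TestCase("What is the capital of France?", ["Paris"]),
    TestCase("Name a primary color.", ["red"]),
]
stub = lambda prompt: "Paris is the capital; red is a primary color."
print(run_eval(stub, cases))  # 1.0 with this stub
```

The pass rate is a deliberately simple aggregate; in practice you would also log per-case failures so regressions can be traced to specific prompts.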

Category: Uncategorized
ID: majiayu000/claude-skill-registry/ai-evals

Install this agent skill to your local machine:

pnpm dlx add-skill https://github.com/majiayu000/claude-skill-registry/ai-evals

Skill Files

Browse the full folder contents for ai-evals.
