Agent Skills: Rubric Design and Validation

Develop clear scoring rubrics with defined criteria, performance levels, and anchor examples, ensuring inter-rater reliability.

Category: Uncategorized
ID: a5c-ai/babysitter/rubric-design-validation

Install this agent skill locally:

pnpm dlx add-skill https://github.com/a5c-ai/babysitter/tree/HEAD/plugins/babysitter/skills/babysit/process/specializations/domains/social-sciences-humanities/education/skills/rubric-design-validation

Skill Files




plugins/babysitter/skills/babysit/process/specializations/domains/social-sciences-humanities/education/skills/rubric-design-validation/SKILL.md

Skill Metadata

Name
rubric-design-validation
Description
Develop clear scoring rubrics with defined criteria, performance levels, and anchor examples, ensuring inter-rater reliability.

Rubric Design and Validation

Develop clear scoring rubrics with defined criteria, performance levels, and anchor examples, ensuring inter-rater reliability.

Overview

This skill enables the development and validation of scoring rubrics for educational assessment. It encompasses criteria definition, performance level articulation, anchor example selection, and reliability validation to ensure consistent and fair evaluation of student work.

Capabilities

Criteria Development

  • Identify essential performance dimensions
  • Define observable indicators
  • Weight criteria appropriately
  • Ensure comprehensiveness
  • Avoid overlap between criteria

Performance Level Definition

  • Articulate distinct levels
  • Write clear descriptors
  • Ensure progressive differentiation
  • Define score points
  • Create level labels

Anchor Examples

  • Select representative samples
  • Document exemplars for each level
  • Annotate scoring rationale
  • Create training materials
  • Validate with raters

Reliability Validation

  • Conduct inter-rater reliability studies
  • Calculate agreement statistics
  • Identify scoring inconsistencies
  • Revise for clarity
  • Train and calibrate raters
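
The agreement statistics above are typically chance-corrected. A minimal sketch of Cohen's kappa for two raters scoring the same samples (the function name and sample scores are illustrative, not part of this skill):

```python
from collections import Counter

def cohens_kappa(rater_a: list[int], rater_b: list[int]) -> float:
    """Chance-corrected agreement between two raters scoring the same samples."""
    n = len(rater_a)
    # Observed agreement: proportion of samples given identical scores.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal score distribution.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[s] * counts_b[s] for s in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two raters score ten essays on a 1-4 rubric.
# Observed agreement 0.80, chance agreement 0.30, kappa = 0.50 / 0.70 ≈ 0.71.
print(cohens_kappa([3, 2, 4, 3, 1, 2, 3, 4, 2, 3],
                   [3, 2, 4, 2, 1, 2, 3, 4, 3, 3]))
```

Kappa values above roughly 0.6 are often read as substantial agreement; lower values usually signal descriptors that raters interpret differently.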

Usage Guidelines

Rubric Development Process

  1. Define purpose and use
  2. Identify assessment criteria
  3. Describe performance levels
  4. Draft rubric descriptors
  5. Select and annotate anchors
  6. Validate with multiple raters
  7. Revise based on feedback

Descriptor Writing

  • Use concrete, observable language
  • Avoid vague qualifiers
  • Ensure parallel structure
  • Include critical attributes
  • Distinguish adjacent levels clearly

Validation Process

  • Select diverse raters
  • Provide calibration training
  • Score common samples
  • Calculate reliability statistics
  • Revise unclear criteria
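
For the reliability-statistics step, performance assessments commonly report exact and adjacent agreement alongside kappa. A minimal sketch (scores are illustrative):

```python
def agreement_rates(rater_a: list[int], rater_b: list[int]) -> tuple[float, float]:
    """Exact and adjacent (within one score point) agreement for two raters."""
    n = len(rater_a)
    exact = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / n
    return exact, adjacent

# Six common samples scored by two raters on a 1-4 rubric.
print(agreement_rates([4, 3, 3, 2, 4, 1], [4, 3, 2, 2, 3, 3]))
```

A large gap between adjacent and exact agreement often points to adjacent performance levels whose descriptors need sharper differentiation.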

Integration Points

Related Processes

  • Rubric Development and Validation
  • Summative Assessment Development
  • Formative Assessment Design

Collaborating Skills

  • assessment-item-development
  • learning-objectives-writing
  • quality-assurance-review

References

  • Stevens and Levi, Introduction to Rubrics
  • Brookhart, How to Create and Use Rubrics
  • Inter-rater reliability methods (e.g., Cohen's kappa, percent agreement)
  • Performance assessment standards