model-evaluation
Evaluates machine learning models for performance, fairness, and reliability using appropriate metrics and validation techniques. Trigger keywords: model evaluation, metrics, accuracy, precision, recall, F1, ROC, AUC, cross-validation, ML testing.
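For concreteness, a minimal sketch of the metrics and validation this skill covers, using scikit-learn; the synthetic dataset and logistic-regression model are illustrative assumptions, not part of the skill itself:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Illustrative synthetic data; a real evaluation would use held-out data.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]  # probability scores for ROC/AUC

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1       :", f1_score(y_test, y_pred))
print("ROC AUC  :", roc_auc_score(y_test, y_prob))

# 5-fold cross-validation gives a more reliable estimate than a single split.
cv_scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print("CV F1    :", cv_scores.mean(), "+/-", cv_scores.std())
```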
ethics-safety-impact
Use when decisions could affect groups differently and you need to anticipate harms/benefits, assess fairness and safety concerns, identify vulnerable populations, propose risk mitigations, or define monitoring metrics; also use when the user mentions ethical review, impact assessment, differential harm, safety analysis, vulnerable groups, bias audit, or responsible AI/tech.
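As one hedged illustration of a monitoring metric this skill might define, a per-group recall (true-positive rate) comparison; the groups, labels, and predictions below are fabricated for illustration:

```python
import numpy as np

# Fabricated illustrative data: true labels, model predictions, group tags.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def recall_for(g):
    mask = (group == g) & (y_true == 1)  # actual positives in group g
    return y_pred[mask].mean() if mask.any() else float("nan")

rates = {g: recall_for(g) for g in np.unique(group)}
print(rates)

# A large gap in true-positive rate between groups (an "equal opportunity"
# difference above some chosen threshold) would flag potential differential
# harm for mitigation and ongoing monitoring.
print("TPR gap:", abs(rates["A"] - rates["B"]))
```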
reference-class-forecasting
Use when starting a forecast to establish a statistical baseline (base rate) before analyzing specifics. Invoke when you need to anchor predictions in historical reality, avoid "this time is different" bias, or establish the outside view before inside-view analysis. Use when the user mentions base rates, reference classes, outside view, or starting a new prediction.
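A toy sketch of the outside-view-first workflow; the reference-class counts and the inside-view estimate are fabricated assumptions for illustration:

```python
# Step 1: establish the base rate from a reference class of comparable cases.
reference_class = {"attempts": 40, "successes": 9}  # e.g. similar past projects
base_rate = reference_class["successes"] / reference_class["attempts"]
print(f"outside view (base rate): {base_rate:.0%}")

# Step 2: only after anchoring on the base rate, adjust for case specifics,
# here shown as a simple weighted blend with an inside-view estimate.
inside_view = 0.60    # assumed case-specific estimate
weight_on_base = 0.7  # how much to trust the reference class
forecast = weight_on_base * base_rate + (1 - weight_on_base) * inside_view
print(f"blended forecast: {forecast:.0%}")
```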
scout-mindset-bias-check
Use to detect and remove cognitive biases from reasoning. Invoke when a prediction feels emotional or stuck at 50/50, or when you want to validate your forecasting process. Use when the user mentions scout mindset, soldier mindset, bias check, reversal test, scope sensitivity, or cognitive distortions.
advanced-evaluation
This skill should be used when the user asks to "implement LLM-as-judge", "compare model outputs", "create evaluation rubrics", "mitigate evaluation bias", or mentions direct scoring, pairwise comparison, position bias, evaluation pipelines, or automated quality assessment.
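A minimal sketch of pairwise comparison with position-bias mitigation, judging each pair in both orderings and counting a verdict only when the orderings agree; call_judge is a hypothetical placeholder, not a real API:

```python
def call_judge(output_1: str, output_2: str) -> str:
    """Hypothetical judge call: returns '1' or '2' for the better output."""
    raise NotImplementedError("wire this to your LLM-as-judge prompt")

def compare(a: str, b: str) -> str:
    first = call_judge(a, b)   # candidate A shown in position 1
    second = call_judge(b, a)  # positions swapped to expose position bias
    if first == "1" and second == "2":
        return "A"    # A wins under both orderings
    if first == "2" and second == "1":
        return "B"    # B wins under both orderings
    return "tie"      # verdicts flip with position: treat as no preference
```

Requiring agreement across both orderings is one simple mitigation; randomizing candidate order across many trials and averaging is another common choice.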