
Agent Skills with tag: feature-importance

4 skills match this tag. Use tags to discover related Agent Skills and explore similar workflows.

shap

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

shap · model-interpretability · explainable-ai · feature-importance
ovachiever
81
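
For orientation, a minimal sketch of the workflow the shap skill above describes, assuming the shap and xgboost packages are installed; the dataset and model below are illustrative choices, not part of the skill itself:

```python
# Hedged sketch: explain an XGBoost classifier with SHAP.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = xgboost.XGBClassifier(n_estimators=100).fit(X_train, y_train)

# TreeExplainer is the fast, exact path for tree ensembles
# (XGBoost, LightGBM, Random Forest).
explainer = shap.TreeExplainer(model)
shap_values = explainer(X_test)

# Waterfall: how each feature pushes one prediction away from the base value.
shap.plots.waterfall(shap_values[0])

# Beeswarm: global feature importance across the whole test set.
shap.plots.beeswarm(shap_values)
```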

ml-model-explainer

Explain ML model predictions using SHAP values, feature importance, and decision paths with visualizations.

shap · feature-importance · model-interpretability · visualization
dkyazzentwatwa
3
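
The feature-importance and decision-path ideas named by ml-model-explainer can be sketched with scikit-learn alone; the iris dataset and tree depth here are illustrative assumptions:

```python
# Hedged sketch: impurity-based feature importances and the
# decision path one sample follows through a fitted tree.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
X, y, feature_names = data.data, data.target, data.feature_names

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global importance: how much each feature reduces impurity across splits.
for name, score in zip(feature_names, tree.feature_importances_):
    print(f"{name}: {score:.3f}")

# Decision path: the sequence of node tests the first sample passes through.
node_indicator = tree.decision_path(X[:1])
for node_id in node_indicator.indices:
    if tree.tree_.children_left[node_id] == -1:  # -1 marks a leaf
        print(f"leaf node {node_id}")
    else:
        f = tree.tree_.feature[node_id]
        t = tree.tree_.threshold[node_id]
        print(f"node {node_id}: {feature_names[f]} <= {t:.2f}")
```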

ML Model Explanation

Interpret machine learning models using SHAP, LIME, feature importance, partial dependence, and attention visualization for explainability.

machine-learning · explainable-ai · shap · lime
aj-geddes
301
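
Of the techniques this skill lists, partial dependence is one scikit-learn ships directly; a hedged sketch, assuming scikit-learn >= 1.0 and matplotlib, with the diabetes dataset as an arbitrary stand-in:

```python
# Hedged sketch: partial dependence of a gradient-boosted model
# on two features.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence: the model's average response as one feature
# varies, marginalizing over all the others.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.show()
```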

shap

Model interpretability and explainability using SHAP (SHapley Additive exPlanations). Use this skill when explaining machine learning model predictions, computing feature importance, generating SHAP plots (waterfall, beeswarm, bar, scatter, force, heatmap), debugging models, analyzing model bias or fairness, comparing models, or implementing explainable AI. Works with tree-based models (XGBoost, LightGBM, Random Forest), deep learning (TensorFlow, PyTorch), linear models, and any black-box model.

machine-learning · explainable-ai · shap · feature-importance
K-Dense-AI
3,233 · 360
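
This second shap skill also covers arbitrary black-box models, which SHAP handles via the model-agnostic KernelExplainer; a sketch assuming shap and scikit-learn are installed, with an SVM chosen purely as an example of a model that has no fast tree explainer:

```python
# Hedged sketch: model-agnostic SHAP via KernelExplainer. It is much
# slower than TreeExplainer, so the background data is summarized first.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SVC(probability=True).fit(X_train, y_train)

# Summarize the background set to keep the Shapley sampling tractable.
background = shap.kmeans(X_train, 10)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Explain a handful of predictions; the result holds per-feature
# contributions for each output class.
shap_values = explainer.shap_values(X_test.iloc[:5])
```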