Feature Importance (Global Explainability)
Permutation-based feature importance: the drop in model performance when each feature's values are randomly shuffled, measuring that feature's impact on model predictions.
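A minimal sketch of the permutation-importance idea behind this panel, using a hypothetical rule-based model and toy data (nothing here is the dashboard's actual model):

```python
import random

# Toy dataset: rows of (x0, x1) features with binary labels. The
# hypothetical model below predicts 1 when x0 > 0.5 and ignores x1,
# so permuting x0 should hurt accuracy while permuting x1 should not.
X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3], [0.7, 0.5], [0.3, 0.8]]
y = [1, 1, 0, 0, 1, 0]

def model_predict(rows):
    return [1 if r[0] > 0.5 else 0 for r in rows]

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def permutation_importance(X, y, n_repeats=20, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(model_predict(X), y)
    importances = []
    for col in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            # Shuffle one column, keep the others fixed, re-score.
            shuffled = [row[col] for row in X]
            rng.shuffle(shuffled)
            X_perm = [row[:col] + [v] + row[col + 1:]
                      for row, v in zip(X, shuffled)]
            drops.append(baseline - accuracy(model_predict(X_perm), y))
        importances.append(sum(drops) / n_repeats)
    return importances

imp = permutation_importance(X, y)
```

Here `imp[0]` comes out positive (the model depends on `x0`) while `imp[1]` is exactly zero (the model ignores `x1`), which is the ranking the bar chart in this panel visualizes.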
SHAP Beeswarm Plot (Top 12 Features)
Each dot is one sample's SHAP value for one feature. X-axis = SHAP value (impact on model output); color = feature value (red = high, blue = low).
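For a linear model the SHAP values plotted in a beeswarm have a closed form: phi_i = w_i * (x_i - mean_i), where mean_i is the feature mean over the background data. A minimal sketch with hypothetical weights and background data:

```python
# Hypothetical linear model f(x) = bias + sum_i weights[i] * x[i].
weights = [2.0, -1.0]
bias = 0.5
background = [[1.0, 3.0], [3.0, 1.0], [2.0, 2.0]]  # reference samples

means = [sum(row[i] for row in background) / len(background)
         for i in range(len(weights))]
base_value = bias + sum(w * m for w, m in zip(weights, means))  # E[f(x)]

def shap_values(x):
    # Exact SHAP values for a linear model: w_i * (x_i - mean_i).
    return [w * (xi - m) for w, xi, m in zip(weights, x, means)]

x = [3.0, 1.0]
phi = shap_values(x)
f_x = bias + sum(w * xi for w, xi in zip(weights, x))
# Additivity property: base_value + sum(phi) equals the model output f(x).
```

The beeswarm scatters one such `phi` vector per sample, one row per feature; the additivity check at the end is the property that makes SHAP decompositions consistent.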
Partial Dependence Plots (PDP) & Individual Conditional Expectation (ICE)
PDP (blue line) shows the model's average response to the selected feature. ICE curves (gray lines) show how individual predictions change. Use the dropdown to explore different features.
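The PDP/ICE construction can be sketched in a few lines: sweep a grid for one feature, hold each sample's other features fixed (one ICE line per sample), then average the ICE lines into the PDP. The model and data below are hypothetical stand-ins:

```python
# Hypothetical fitted model; any predict() function works here.
def model_predict(x0, x1):
    return 2.0 * x0 + 0.5 * x1

samples = [(0.0, 1.0), (0.0, 3.0)]   # observed (x0, x1) rows
grid = [0.0, 1.0, 2.0]               # grid of values to sweep for x0

# ICE: one curve per sample, varying x0 while keeping that sample's x1.
ice_lines = [[model_predict(g, x1) for g in grid] for _, x1 in samples]
# PDP: pointwise average of the ICE curves over all samples.
pdp = [sum(line[j] for line in ice_lines) / len(ice_lines)
       for j in range(len(grid))]
```

When the ICE lines are parallel (as here), the feature's effect is additive; crossing ICE lines would signal interactions that the averaged PDP hides.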
Data Drift Heatmap (PSI Over Time)
Population Stability Index (PSI) per feature across time periods. Red = high drift (PSI > 0.25), yellow = moderate drift (0.1 to 0.25), green = low drift (< 0.1).
Population Stability Index (Current vs Reference)
PSI thresholds: > 0.1 (yellow, moderate shift), > 0.25 (red, significant shift). Green bars (PSI < 0.1) indicate stable distributions.
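The PSI values driving both panels come from a simple formula over binned distributions: PSI = sum over bins of (current - reference) * ln(current / reference). A minimal sketch with hypothetical bin fractions:

```python
import math

def psi(ref_fracs, cur_fracs, eps=1e-6):
    """PSI between two pre-binned distributions (fractions summing to 1).

    Common reading: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift.
    """
    total = 0.0
    for p, q in zip(ref_fracs, cur_fracs):
        p = max(p, eps)  # guard against empty bins before taking the log
        q = max(q, eps)
        total += (q - p) * math.log(q / p)
    return total

ref = [0.25, 0.25, 0.25, 0.25]       # training-time bin fractions
same = [0.25, 0.25, 0.25, 0.25]      # identical distribution -> PSI 0
shifted = [0.10, 0.20, 0.30, 0.40]   # production distribution, drifted

drift = psi(ref, shifted)
```

With these hypothetical numbers `drift` lands in the 0.1 to 0.25 band, i.e. a yellow cell in the heatmap above and a yellow bar here.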
Distribution Comparison: Reference vs Current
KDE curves comparing training (blue) vs production (orange) distributions for each feature.
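The overlaid curves can be produced with a basic Gaussian kernel density estimate, f(x) = (1 / (n*h)) * sum_i K((x - x_i) / h) with a standard-normal kernel K. A pure-Python sketch with hypothetical samples and an assumed bandwidth h:

```python
import math

def gaussian_kde(samples, h):
    """Return a density function estimated from samples with bandwidth h."""
    n = len(samples)
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2)
                   for s in samples) / (n * h * math.sqrt(2 * math.pi))
    return density

reference = [0.0, 0.1, -0.1, 0.05]   # e.g. training values of one feature
current = [0.8, 0.9, 1.0, 0.85]      # production values, shifted right

ref_kde = gaussian_kde(reference, h=0.2)
cur_kde = gaussian_kde(current, h=0.2)

# Evaluate both densities on a shared grid to draw the two curves.
grid = [i / 10 for i in range(-10, 21)]
ref_curve = [ref_kde(x) for x in grid]
cur_curve = [cur_kde(x) for x in grid]
```

The peak of the current (production) curve sitting well to the right of the reference peak is exactly the visual signature of drift this panel is meant to surface.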
Fairness & Bias Analysis
Monitoring demographic parity, equal opportunity, and disparate impact ratios across protected groups.
| Age Group | Positive Rate | TPR | TNR | DI Ratio |
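The per-group columns above can be computed from labels, predictions, and group membership. A minimal sketch with hypothetical data (the age-group labels and values are illustrative, not the dashboard's real numbers); the disparate impact (DI) ratio divides each group's positive rate by the most favored group's, and the common four-fifths rule flags DI ratios below 0.8:

```python
def group_metrics(y, pred, groups):
    out = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        # Demographic parity: rate of positive predictions in the group.
        pos_rate = sum(pred[i] for i in idx) / len(idx)
        tp = sum(1 for i in idx if y[i] == 1 and pred[i] == 1)
        fn = sum(1 for i in idx if y[i] == 1 and pred[i] == 0)
        tn = sum(1 for i in idx if y[i] == 0 and pred[i] == 0)
        fp = sum(1 for i in idx if y[i] == 0 and pred[i] == 1)
        # Equal opportunity compares TPR across groups.
        tpr = tp / (tp + fn) if tp + fn else None
        tnr = tn / (tn + fp) if tn + fp else None
        out[g] = {"positive_rate": pos_rate, "tpr": tpr, "tnr": tnr}
    ref = max(m["positive_rate"] for m in out.values())
    for m in out.values():
        m["di_ratio"] = m["positive_rate"] / ref if ref else None
    return out

# Hypothetical labels, predictions, and protected-group membership.
y     = [1, 0, 1, 0, 1, 0, 1, 0]
pred  = [1, 0, 1, 1, 1, 0, 0, 0]
group = ["18-30", "18-30", "18-30", "18-30",
         "31-50", "31-50", "31-50", "31-50"]
m = group_metrics(y, pred, group)
```

With these toy inputs the "31-50" group's DI ratio comes out at one third, far below 0.8, which is the kind of disparity this table is designed to flag.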
Prediction Explorer & What-If Analysis
Adjust input features to see real-time predictions. Waterfall chart shows local explainability (feature contributions to this specific prediction).
Current Model: Customer Churn Predictor
Predicted churn probability: 35.2% (Confidence: High)
Local Explainability (SHAP-style Feature Contributions)
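The waterfall view walks from the base value (the model's average output) through each feature's contribution, largest magnitude first, ending at this customer's prediction. A sketch with hypothetical contribution values (not the 35.2% case above):

```python
base_value = 0.50                 # hypothetical average churn probability
contributions = {                 # hypothetical per-feature contributions
    "tenure_months": -0.20,       # long tenure pushes churn risk down
    "monthly_charges": +0.12,     # high charges push churn risk up
    "support_tickets": -0.05,
}

# Build the waterfall: sort by absolute contribution, accumulate a
# running total from the base value down to the final prediction.
steps = []
running = base_value
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    running += c
    steps.append((name, c, round(running, 4)))
prediction = running
```

By construction the last bar of the waterfall lands exactly on the model's output for this customer, which is what makes the chart a faithful local decomposition rather than a loose ranking.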
Model Comparison
Side-by-side comparison of different model versions. Select models to compare across multiple metrics.
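A sketch of the comparison logic: score every selected model version on the same held-out labels and tabulate the metrics side by side. The version names and predictions below are hypothetical stand-ins:

```python
def metrics(y, pred):
    """Accuracy, precision, and recall for one model's predictions."""
    tp = sum(1 for yi, pi in zip(y, pred) if yi == 1 and pi == 1)
    fp = sum(1 for yi, pi in zip(y, pred) if yi == 0 and pi == 1)
    fn = sum(1 for yi, pi in zip(y, pred) if yi == 1 and pi == 0)
    acc = sum(1 for yi, pi in zip(y, pred) if yi == pi) / len(y)
    precision = tp / (tp + fp) if tp + fp else None
    recall = tp / (tp + fn) if tp + fn else None
    return {"accuracy": acc, "precision": precision, "recall": recall}

# Shared held-out labels and hypothetical predictions per model version.
y = [1, 0, 1, 1, 0, 0, 1, 0]
preds = {
    "churn_v1": [1, 0, 0, 1, 0, 1, 1, 0],
    "churn_v2": [1, 0, 1, 1, 0, 0, 1, 1],
}
comparison = {name: metrics(y, p) for name, p in preds.items()}
```

Scoring all versions against identical labels is what makes the side-by-side numbers comparable; mixing evaluation sets across versions would invalidate the comparison.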