Prediction explanations are generated by running
ML_EXPLAIN_ROW or
ML_EXPLAIN_TABLE on unlabeled
data. The data must have the same feature columns as the data
used to train the model. The target column is not required.
Prediction explanations are similar to model explanations, but rather than explaining the model as a whole, they explain the predictions made for individual rows of data. See Explanations Overview to learn more.
You can train the following prediction explainers:
- The Permutation Importance prediction explainer, specified as permutation_importance, is the default prediction explainer. It explains the prediction for a single row or table. Right after training and loading a model, you can run ML_EXPLAIN_ROW and ML_EXPLAIN_TABLE with this prediction explainer directly, without having to run ML_EXPLAIN first.
- The SHAP prediction explainer, specified as shap, uses feature importance values to explain the prediction for a single row or table. To run this prediction explainer with ML_EXPLAIN_ROW and ML_EXPLAIN_TABLE, you must run ML_EXPLAIN first, as shown in the sketch after this list.
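For example, as a minimal sketch, assuming a trained model whose handle is stored in @census_model and a training table ml_data.census_train with target column revenue (all placeholder names), the SHAP prediction explainer could be trained as follows:

    -- Load the trained model into memory before explaining
    -- (handle and names below are illustrative).
    CALL sys.ML_MODEL_LOAD(@census_model, NULL);

    -- Train the SHAP prediction explainer for the loaded model.
    CALL sys.ML_EXPLAIN('ml_data.census_train', 'revenue', @census_model,
                        JSON_OBJECT('prediction_explainer', 'shap'));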
ML_EXPLAIN_ROW generates
explanations for one or more rows of data.
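As a sketch with illustrative column names and the same placeholder model handle, a single unlabeled row can be explained with the default permutation_importance explainer:

    -- Explain the prediction for one unlabeled row; the JSON keys
    -- must match the feature columns used to train the model.
    CALL sys.ML_EXPLAIN_ROW(
        JSON_OBJECT('age', 39, 'workclass', 'State-gov', 'hours-per-week', 40),
        @census_model,
        JSON_OBJECT('prediction_explainer', 'permutation_importance'));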
ML_EXPLAIN_TABLE generates
explanations for an entire table of data and saves the results to
an output table. ML_EXPLAIN_* routines limit
explanations to the 100 most relevant features.
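A sketch of a table-level call, again with placeholder table names; using the shap explainer here assumes it was already trained with ML_EXPLAIN as shown above:

    -- Explain predictions for every row of the input table and write
    -- the results to the named output table.
    CALL sys.ML_EXPLAIN_TABLE('ml_data.census_test', @census_model,
                              'ml_data.census_explanations',
                              JSON_OBJECT('prediction_explainer', 'shap'));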