Explanations are generated by running ML_EXPLAIN_ROW or ML_EXPLAIN_TABLE on unlabeled data; that is, the data must have the same feature columns as the data used to train the model but no target column.
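As an illustration of this requirement, the following sketch builds an unlabeled table by dropping the target column from a copy of the test data. The schema, table, and column names (heatwaveml_bench, census_test, revenue) are hypothetical names used for illustration only:

    -- Copy the test data, then remove the target column so that only
    -- the feature columns used to train the model remain.
    CREATE TABLE heatwaveml_bench.census_unlabeled
        AS SELECT * FROM heatwaveml_bench.census_test;
    ALTER TABLE heatwaveml_bench.census_unlabeled DROP COLUMN revenue;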
Explanations help you understand which features have the most influence on a prediction. Feature importance is presented as a value ranging from -1 to 1:

- A positive value indicates that the feature contributed toward the prediction.
- A negative value indicates that the feature contributed toward a different prediction. For example, if a feature in a loan approval model with two possible predictions ('approve' and 'reject') has a negative value for an 'approve' prediction, that feature would have a positive value for a 'reject' prediction.
- A value of 0 or near 0 indicates that the feature value has no impact on the prediction to which it applies.
ML_EXPLAIN_ROW generates explanations for one or more rows of data.
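For example, the following is a minimal sketch of explaining a single row, supplied as a JSON object of feature columns. The model handle variable @census_model and the feature names and values are hypothetical, and the model is assumed to be loaded with ML_MODEL_LOAD before the call:

    -- Load the trained model referenced by the handle in @census_model.
    CALL sys.ML_MODEL_LOAD(@census_model, NULL);

    -- Explain one row; the result includes per-feature attributions.
    SELECT sys.ML_EXPLAIN_ROW(
        JSON_OBJECT('age', 39, 'workclass', 'State-gov',
                    'education', 'Bachelors', 'hours_per_week', 40),
        @census_model,
        JSON_OBJECT('prediction_explainer', 'permutation_importance'));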
ML_EXPLAIN_TABLE generates explanations on an entire table of data and saves the results to an output table. ML_EXPLAIN_* routines limit explanations to the 100 most relevant features.
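A sketch of a table-level call, reusing the hypothetical names from the earlier examples; the explanations are written to the output table named in the third argument:

    -- Explain every row of the unlabeled input table and save the
    -- results to heatwaveml_bench.census_explanations.
    CALL sys.ML_EXPLAIN_TABLE(
        'heatwaveml_bench.census_unlabeled',
        @census_model,
        'heatwaveml_bench.census_explanations',
        JSON_OBJECT('prediction_explainer', 'permutation_importance'));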
After running the ML_TRAIN routine, use the ML_EXPLAIN routine to train prediction explainers and model explainers for HeatWave AutoML. You must train a prediction explainer in order to use ML_EXPLAIN_ROW and ML_EXPLAIN_TABLE. In earlier releases, the ML_TRAIN routine trained the default Permutation Importance model and prediction explainers. See Section 3.6, “Training Explainers”.