ads.evaluations package
Submodules
ads.evaluations.evaluation_plot module
- class ads.evaluations.evaluation_plot.EvaluationPlot
Bases:
object
EvaluationPlot holds the data and methods used to build and output evaluation plots
- baseline(bool)
whether to plot the null model or zero information model
- baseline_kwargs(dict)
keyword arguments for the baseline plot
- color_wheel(dict)
color information used by the plot
- font_sz(dict)
dictionary of font sizes used in the plots
- perfect(bool)
determines whether a “perfect” classifier curve is displayed
- perfect_kwargs(dict)
parameters for the perfect classifier for precision/recall curves
- prob_type(str)
model type, i.e. classification or regression
- get_legend_labels(legend_labels)
Renders the legend labels on the plot
- plot(evaluation, plots, num_classes, perfect, baseline, legend_labels)
Generates the evaluation plot
- baseline = None
- baseline_kwargs = {'c': '.2', 'ls': '--'}
- color_wheel = ['teal', 'blueviolet', 'forestgreen', 'peru', 'y', 'dodgerblue', 'r']
- double_overlay_plots = ['pr_and_roc_curve', 'lift_and_gain_chart']
- font_sz = {'l': 14, 'm': 12, 's': 10, 'xl': 16, 'xs': 8}
- classmethod get_legend_labels(legend_labels)
Gets the legend labels, resolves any conflicts such as length, and renders the labels for the plot
- Parameters:
legend_labels (dict) – key/value dictionary containing legend label data
- Return type:
Nothing
Examples
EvaluationPlot.get_legend_labels({'class_0': 'green', 'class_1': 'yellow', 'class_2': 'red'})
- perfect = None
- perfect_kwargs = {'color': 'gold', 'label': 'Perfect Classifier', 'ls': '--'}
- classmethod plot(evaluation, plots, num_classes, perfect=False, baseline=True, legend_labels=None)
Generates the evaluation plot
- Parameters:
evaluation (DataFrame) – DataFrame with models as columns and metrics as rows.
plots (str) – The plot type based on class attribute prob_type.
num_classes (int) – The number of classes for the model.
perfect (bool, optional) – Whether to display the curve of a perfect classifier. Default value is False.
baseline (bool, optional) – Whether to display the curve of the baseline, featureless model. Default value is True.
legend_labels (dict, optional) – Legend labels dictionary. Default value is None. If legend_labels is not specified, class names will be used for plots.
- Return type:
Nothing
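Examples
A hypothetical direct call, assuming evaluation is the metrics DataFrame described above and drawing plot names from the single_overlay_plots attribute below; in practice this classmethod is typically driven by ADSEvaluator.show_in_notebook:
>>> EvaluationPlot.plot(evaluation, plots=['roc_curve', 'pr_curve'],
...     num_classes=2, perfect=True)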
- prob_type = None
- single_overlay_plots = ['lift_chart', 'gain_chart', 'roc_curve', 'pr_curve']
ads.evaluations.evaluator module
- class ads.evaluations.evaluator.ADSEvaluator(test_data, models, training_data=None, positive_class=None, legend_labels=None, show_full_name=False)
Bases:
object
ADS Evaluator class. This class holds the fields and methods for creating and using ADS evaluator objects.
- evaluations
list of evaluations.
- Type:
list[DataFrame]
- is_classifier
Whether the model has a non-empty classes_ attribute indicating the presence of class labels.
- Type:
bool
- legend_labels
Dictionary of legend labels. Defaults to None.
- Type:
dict
- metrics_to_show
Names of metrics to show.
- Type:
list[str]
- models
The object built using ADSModel.from_estimator().
- Type:
list[ads.common.model.ADSModel]
- positive_class
The class to report metrics for in a binary dataset, treated as the positive class.
- Type:
str or int
- show_full_name
Whether to show the name of the evaluator in relevant contexts.
- Type:
bool
- test_data
Test data to evaluate model on.
- Type:
ads.common.data.ADSData
- training_data
Training data to evaluate the model on.
- Type:
ads.common.data.ADSData
- Positive_Class_Names
Class attribute listing the ways to represent positive classes
- Type:
list
- add_metrics(funcs, names)
Adds the listed metrics to the evaluator object it is called on
- del_metrics(names)
Removes listed metrics from the evaluator object it is called on
- add_models(models, show_full_name)
Adds the listed models to the evaluator object
- del_models(names)
Removes the listed models from the evaluator object
- show_in_notebook(plots, use_training_data, perfect, baseline, legend_labels)
Visualizes evaluation plots in the notebook
- calculate_cost(tn_weight, fp_weight, fn_weight, tp_weight, use_training_data)
Returns a cost associated with the input weights
Creates an ADSEvaluator object.
- Parameters:
test_data (ads.common.data.ADSData instance) – Test data to evaluate model on. The object can be built using ADSData.build().
models (list[ads.common.model.ADSModel]) – The object can be built using ADSModel.from_estimator(). Maximum length of the list is 3
training_data (ads.common.data.ADSData instance, optional) – Training data to evaluate model on and compare metrics against test data. The object can be built using ADSData.build()
positive_class (str or int, optional) – The class to report metrics for in a binary dataset. If the target classes are True and False, positive_class will be set to True by default. If the dataset is multiclass or multilabel, this will be ignored.
legend_labels (dict, optional) – Dictionary of legend labels. Defaults to None. If legend_labels is not specified, class names will be used for plots.
show_full_name (bool, optional) – Show the name of the evaluator object. Defaults to False.
Examples
>>> train, test = ds.train_test_split()
>>> model1 = MyModelClass1.train(train)
>>> model2 = MyModelClass2.train(train)
>>> evaluator = ADSEvaluator(test, [model1, model2])

>>> legend_labels={'class_0': 'one', 'class_1': 'two', 'class_2': 'three'}
>>> multi_evaluator = ADSEvaluator(test, models=[model1, model2],
...     legend_labels=legend_labels)
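A fuller end-to-end sketch, assuming a scikit-learn estimator wrapped with the ADSModel.from_estimator() and ADSData.build() helpers referenced above; X_train, y_train, X_test, and y_test are placeholder arrays, and ADSData.build is assumed to accept X and y keyword arguments:
>>> from sklearn.linear_model import LogisticRegression
>>> from ads.common.model import ADSModel
>>> from ads.common.data import ADSData
>>> clf = LogisticRegression().fit(X_train, y_train)  # any fitted sklearn estimator
>>> model = ADSModel.from_estimator(clf)              # wrap the estimator for ADS
>>> test = ADSData.build(X=X_test, y=y_test)          # assumed X/y keyword arguments
>>> evaluator = ADSEvaluator(test, models=[model])
>>> evaluator.metrics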
- class EvaluationMetrics(ev_test, ev_train, use_training=False, less_is_more=None, precision=4)
Bases:
object
Class holding evaluation metrics.
- ev_test
evaluation test metrics
- Type:
list
- ev_train
evaluation training metrics
- Type:
list
- use_training
use training data
- Type:
bool
- less_is_more
list of metrics for which a lower value indicates better performance
- Type:
list
- show_in_notebook()
Visualizes evaluation metrics as a color coded table
- DEFAULT_LABELS_MAP = {'accuracy': 'Accuracy', 'auc': 'ROC AUC', 'f1': 'F1', 'hamming_loss': 'Hamming distance', 'kappa_score_': "Cohen's kappa coefficient", 'precision': 'Precision', 'recall': 'Recall'}
- property precision
- show_in_notebook(labels={'accuracy': 'Accuracy', 'auc': 'ROC AUC', 'f1': 'F1', 'hamming_loss': 'Hamming distance', 'kappa_score_': "Cohen's kappa coefficient", 'precision': 'Precision', 'recall': 'Recall'})
Visualizes evaluation metrics as a color coded table.
- Parameters:
labels (dict) – Dictionary mapping metric names to the labels used for display
- Return type:
Nothing
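Examples
An illustrative relabeling call, assuming evaluator.metrics returns an EvaluationMetrics instance as constructed by ADSEvaluator:
>>> evaluator.metrics.show_in_notebook(labels={'accuracy': 'Accuracy',
...     'auc': 'Area Under Curve'})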
- Positive_Class_Names = ['yes', 'y', 't', 'true', '1']
- add_metrics(funcs, names)
Adds the listed metrics to the evaluator object it is called on.
- Parameters:
funcs (list) – The list of metric functions to be added. Each function will be provided y_true and y_pred, the true and predicted values for each model.
names (list[str]) – The list of metric names corresponding to the functions.
- Return type:
Nothing
Examples
>>> def f1(y_true, y_pred):
...     return np.max(y_true - y_pred)
>>> evaluator = ADSEvaluator(test, [model1, model2])
>>> evaluator.add_metrics([f1], ['Max Residual'])
>>> evaluator.metrics

The output table will include the added metric.
- add_models(models, show_full_name=False)
Adds the listed models to the evaluator object it is called on.
- Parameters:
models (list[ADSModel]) – The list of models to be added
show_full_name (bool, optional) – Whether to show the full model name. Defaults to False. ** NOT USED **
- Return type:
Nothing
Examples
>>> evaluator = ADSEvaluator(test, [model1, model2])
>>> evaluator.add_models([model3])
- calculate_cost(tn_weight, fp_weight, fn_weight, tp_weight, use_training_data=False)
Returns a cost associated with the input weights.
- Parameters:
tn_weight (int, float) – The weight to assign true negatives in calculating the cost
fp_weight (int, float) – The weight to assign false positives in calculating the cost
fn_weight (int, float) – The weight to assign false negatives in calculating the cost
tp_weight (int, float) – The weight to assign true positives in calculating the cost
use_training_data (bool, optional) – Use training data to pull the metrics. Defaults to False
- Returns:
DataFrame with the cost calculated for each model
- Return type:
pandas.DataFrame
Examples
>>> evaluator = ADSEvaluator(test, [model1, model2])
>>> costs_table = evaluator.calculate_cost(0, 10, 1000, 0)
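A sketch of the implied arithmetic, assuming the cost is the weighted sum of confusion-matrix counts for each model; sklearn's confusion_matrix is used here only for illustration:
>>> from sklearn.metrics import confusion_matrix
>>> # binary case: ravel() yields tn, fp, fn, tp in that order
>>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
>>> cost = tn * tn_weight + fp * fp_weight + fn * fn_weight + tp * tp_weight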
- del_metrics(names)
Removes the listed metrics from the evaluator object it is called on.
- Parameters:
names (list[str]) – The list of names of metrics to be deleted. Names can be found by calling evaluator.test_evaluations.index.
- Returns:
None
- Return type:
None
Examples
>>> evaluator = ADSEvaluator(test, [model1, model2])
>>> evaluator.del_metrics(['mse'])
>>> evaluator.metrics

The output table will exclude the deleted metric.
- del_models(names)
Removes the listed models from the evaluator object it is called on.
- Parameters:
names (list[str]) – The list of model names to be deleted. Names are the model names by default, and are assigned internally when conflicts exist. Actual names can be found using evaluator.test_evaluations.columns
- Return type:
Nothing
Examples
>>> model3.rename("model3")
>>> evaluator = ADSEvaluator(test, [model1, model2, model3])
>>> evaluator.del_models(['model3'])
- property metrics
Returns evaluation metrics
- Returns:
HTML representation of a table comparing relevant metrics.
- Return type:
metrics
Examples
>>> evaluator = ADSEvaluator(test, [model1, model2])
>>> evaluator.metrics

Outputs a table displaying the metrics.
- property raw_metrics
Returns the raw metric numbers
- Parameters:
metrics (list, optional) – Request metrics to pull. Defaults to all.
use_training_data (bool, optional) – Use training data to pull metrics. Defaults to False
- Returns:
The requested raw metrics for each model. If metrics is None return all.
- Return type:
dict
Examples
>>> evaluator = ADSEvaluator(test, [model1, model2])
>>> raw_metrics_dictionary = evaluator.raw_metrics()
- show_in_notebook(plots=None, use_training_data=False, perfect=False, baseline=True, legend_labels=None)
Visualize evaluation plots.
- Parameters:
plots (list, optional) –
Filter the plots that are displayed. Defaults to None. The name of the plots are as below:
regression - residuals_qq, residuals_vs_fitted
binary classification - normalized_confusion_matrix, roc_curve, pr_curve
multi class classification - normalized_confusion_matrix, precision_by_label, recall_by_label, f1_by_label
use_training_data (bool, optional) – Use training data to generate plots instead of the test data. Defaults to False.
perfect (bool, optional) – Whether to display the curve of a perfect classifier. Defaults to False.
baseline (bool, optional) – Whether to display the curve of the baseline, featureless model. Defaults to True.
legend_labels (dict, optional) – Rename the legend labels used for multiclass classification plots. Defaults to None. The legend_labels dict keys are class names and the values are display strings. If legend_labels is not specified, class names will be used for plots.
- Returns:
Nothing. Outputs several evaluation plots as specified by plots.
- Return type:
None
Examples
>>> evaluator = ADSEvaluator(test, [model1, model2])
>>> evaluator.show_in_notebook()

>>> legend_labels={'class_0': 'green', 'class_1': 'yellow', 'class_2': 'red'}
>>> multi_evaluator = ADSEvaluator(test, [model1, model2],
...     legend_labels=legend_labels)
>>> multi_evaluator.show_in_notebook(plots=["normalized_confusion_matrix",
...     "precision_by_label", "recall_by_label", "f1_by_label"])
ads.evaluations.statistical_metrics module
- class ads.evaluations.statistical_metrics.ModelEvaluator(y_true, y_pred, model_name, classes=None, positive_class=None, y_score=None)
Bases:
object
ModelEvaluator takes in the true and predicted values and returns a pandas DataFrame of metrics
- y_true
Array-like object holding the true values for the model.
- Type:
array-like
- y_pred
Array-like object holding the predicted values for the model.
- Type:
array-like
- model_name
The name of the model.
- Type:
str
- classes
List of target classes.
- Type:
list
- positive_class
Label for the positive outcome from the model.
- Type:
str
- y_score
Array-like object holding the scores for the true values for the model.
- Type:
array-like
- metrics
Dictionary object holding the model metrics data.
- Type:
dict
- get_metrics()
Gets the metrics information in a dataframe based on the number of classes
- safe_metrics_call(scoring_functions, *args)
Applies sklearn scoring functions to parameters in args
- get_metrics()
Gets the metrics information in a dataframe based on the number of classes
- Parameters:
self (ModelEvaluator) – The ModelEvaluator instance with the metrics.
- Returns:
Pandas dataframe containing the metrics
- Return type:
pandas.DataFrame
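Examples
A minimal usage sketch with illustrative binary labels (values shown are placeholders only):
>>> from ads.evaluations.statistical_metrics import ModelEvaluator
>>> y_true = [0, 1, 1, 0]
>>> y_pred = [0, 1, 0, 0]
>>> me = ModelEvaluator(y_true, y_pred, model_name='my_model', classes=[0, 1])
>>> metrics_df = me.get_metrics()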
- safe_metrics_call(scoring_functions, *args)
Applies the sklearn function in scoring_functions to parameters in args.
- Parameters:
scoring_functions (dict) – Scoring functions dictionary
args (keyword arguments) – Arguments passed to the sklearn function from metrics
- Returns:
Nothing
- Raises:
Exception – If an error is encountered applying an sklearn scoring function to the arguments.
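Examples
An illustrative call, assuming the positional arguments are forwarded to each sklearn scoring function (here sklearn.metrics.accuracy_score):
>>> from sklearn import metrics as sk_metrics
>>> scoring_functions = {'accuracy': sk_metrics.accuracy_score}
>>> me.safe_metrics_call(scoring_functions, y_true, y_pred)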