AdaBoost is a successful and popular classification method, but it is not geared towards cost-sensitive classification problems, i.e. problems where different types of erroneous prediction incur unequal costs. In our 2016 paper, cited below, we reviewed all cost-sensitive variants of AdaBoost in the literature, along with our own adaptations. Below we provide code for the method that achieved the best empirical results without any need for parameter tuning, while satisfying all desirable theoretical properties. The method, 'Calibrated AdaMEC', is described in detail and motivated in the paper:
Cost-sensitive boosting algorithms: Do we really need them?
Nikolaos Nikolaou, Narayanan U. Edakunni, Meelis Kull, Peter A. Flach, Gavin Brown
Machine Learning, 104(2), pages 359-384, 2016
The Python code, 'CalibratedAdaMEC.py', can be downloaded from MLOSS. If you make use of it, please cite the above paper.
Please direct any questions or feedback to Nikolaos Nikolaou or Gavin Brown.
The following code shows an example of how to use Calibrated AdaMEC. Do not let the name intimidate you: this is simply AdaBoost with properly calibrated probability estimates and a cost-sensitive decision threshold for classifying new data (a sketch of these two ingredients follows the example). The code showcases how to train Calibrated AdaMEC and how to generate scores and predictions with it. The syntax follows the conventions of the AdaBoost implementation of scikit-learn.
from sklearn.ensemble import AdaBoostClassifier
from CalibratedAdaMEC import CalibratedAdaMECClassifier  # Our Calibrated AdaMEC implementation

# The code below assumes the user has already split the binary classification data (classes
# denoted 0 and 1) into training and test sets, defined the cost of a false positive C_FP and
# the cost of a false negative C_FN, and selected the weak learner base_estimator and the
# ensemble size n_estimators.

# Create and train an AdaBoostClassifier
AdaBoost = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=n_estimators)
AdaBoost = AdaBoost.fit(X_train, y_train)

# Create and train a CalibratedAdaMECClassifier -- being cost-sensitive, it takes C_FP and C_FN as arguments
CalAdaMEC = CalibratedAdaMECClassifier(base_estimator, n_estimators, C_FP, C_FN)
CalAdaMEC = CalAdaMEC.fit(X_train, y_train)
# Produce AdaBoost and Calibrated AdaMEC classifications
labels_AdaBoost = AdaBoost.predict(X_test)
labels_CalibratedAdaMEC = CalAdaMEC.predict(X_test)

# Produce AdaBoost and Calibrated AdaMEC scores (probability estimates) -- keep only positive class scores
scores_AdaBoost = AdaBoost.predict_proba(X_test)[:, 1]
scores_CalibratedAdaMEC = CalAdaMEC.predict_proba(X_test)[:, 1]
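For intuition, the sketch below shows roughly what such a classifier does under the hood: it calibrates AdaBoost's scores (here via Platt scaling, using scikit-learn's CalibratedClassifierCV on cross-validation folds) and then thresholds the calibrated probabilities at the minimum-expected-cost decision point. This is an illustrative sketch reusing the variables defined above, not the actual code from CalibratedAdaMEC.py.

import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import AdaBoostClassifier

# Illustrative sketch only -- not the implementation from CalibratedAdaMEC.py.
# Step 1: calibrate AdaBoost's probability estimates with Platt scaling (sigmoid),
# fitting the calibrator on held-out folds via cross-validation.
booster = AdaBoostClassifier(base_estimator=base_estimator, n_estimators=n_estimators)
calibrated = CalibratedClassifierCV(booster, method='sigmoid', cv=3)
calibrated = calibrated.fit(X_train, y_train)

# Step 2: classify by minimum expected cost -- predict positive whenever
# p(1|x)*C_FN >= (1 - p(1|x))*C_FP, i.e. whenever p(1|x) >= C_FP/(C_FP + C_FN).
threshold = C_FP / (C_FP + C_FN)
p_pos = calibrated.predict_proba(X_test)[:, 1]
labels_sketch = (p_pos >= threshold).astype(int)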
You can evaluate the two algorithms in terms of probability estimation using the Brier score or the log-loss, found e.g. in the metrics module of scikit-learn.
from sklearn import metrics
brier_score_AdaBoost = metrics.brier_score_loss(y_test, scores_AdaBoost)
brier_score_CalibratedAdaMEC = metrics.brier_score_loss(y_test, scores_CalibratedAdaMEC)
log_loss_AdaBoost = metrics.log_loss(y_test, scores_AdaBoost)
log_loss_CalibratedAdaMEC = metrics.log_loss(y_test, scores_CalibratedAdaMEC)
You will see that Calibrated AdaMEC achieves lower values for both losses (i.e. better probability estimates).
You can evaluate the cost-sensitive behaviour of the classifications produced by the two algorithms in terms of the total cost-sensitive loss (empirical risk):
import numpy as np
from sklearn import metrics

Pos = np.sum(y_train == 1)  # Number of positive training examples
Neg = len(y_train) - Pos  # Number of negative training examples
skew = C_FP*Neg / (C_FN*Pos + C_FP*Neg)  # Skew (combined asymmetry due to both cost and class imbalance)

# Entry [0,1] of the confusion matrix counts false positives, entry [1,0] false negatives
conf_mat_AdaBoost = metrics.confusion_matrix(y_test, labels_AdaBoost)
cost_AdaBoost = conf_mat_AdaBoost[0,1]*skew + conf_mat_AdaBoost[1,0]*(1-skew)  # Skew-sensitive cost
conf_mat_CalibratedAdaMEC = metrics.confusion_matrix(y_test, labels_CalibratedAdaMEC)
cost_CalibratedAdaMEC = conf_mat_CalibratedAdaMEC[0,1]*skew + conf_mat_CalibratedAdaMEC[1,0]*(1-skew)  # Skew-sensitive cost
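To make the skew formula concrete, here is a hypothetical worked example (the numbers are invented purely for illustration):

# Hypothetical numbers: 900 negatives, 100 positives, and a false negative ten
# times costlier than a false positive (C_FP = 1, C_FN = 10)
skew_example = 1*900 / (10*100 + 1*900)  # = 900/1900, approximately 0.474
# In the cost above, false positives then get weight ~0.474 and false negatives ~0.526:
# the 9:1 class imbalance almost cancels out the 10:1 cost asymmetry.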
In expectation, the misclassification cost should be lower for Calibrated AdaMEC on asymmetric problems: the further the skew deviates from 0.5 (the balanced, cost-insensitive case), the greater the expected performance gain of Calibrated AdaMEC over AdaBoost.
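Putting the pieces together, a self-contained end-to-end run might look like the sketch below. The synthetic dataset and all parameter choices are arbitrary and only for illustration, and 'CalibratedAdaMEC.py' is assumed to be importable from your working directory.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
from CalibratedAdaMEC import CalibratedAdaMECClassifier

# Synthetic, imbalanced binary problem (all choices here are arbitrary)
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

C_FP, C_FN = 1.0, 10.0  # a false negative costs ten times more than a false positive
base_estimator = DecisionTreeClassifier(max_depth=1)  # decision stumps as weak learners
n_estimators = 100

CalAdaMEC = CalibratedAdaMECClassifier(base_estimator, n_estimators, C_FP, C_FN)
CalAdaMEC = CalAdaMEC.fit(X_train, y_train)
labels = CalAdaMEC.predict(X_test)

# Evaluate with the skew-sensitive cost defined above
Pos = np.sum(y_train == 1)
Neg = len(y_train) - Pos
skew = C_FP*Neg / (C_FN*Pos + C_FP*Neg)
conf_mat = metrics.confusion_matrix(y_test, labels)
print('Skew-sensitive cost:', conf_mat[0,1]*skew + conf_mat[1,0]*(1-skew))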
An extended IPython tutorial is also available, providing a summary of the paper and interactive code that allows you to reproduce our experiments and run your own; every aspect of these (problem setup, calibration options, ensemble parameters, base learner parameters, evaluation measures) can be modified.