I have been talking about performance metrics recently, and I wanted to post a review of something from chapter 8 of Data Science for Business.
Profit curves are a way of assigning a benefit value to correct predictions and a cost value to incorrect predictions. I really like them because they give us a way to evaluate the business impact of a model. We can often gain epsilon increases in a performance metric that make no meaningful difference to the bottom line. Profit curves allow us to see this.
Profit curves also allow us to estimate the optimal threshold, that is, the fraction of the strongest predictions to label positive, in order to maximize the business impact.
Profit curves are also constructed in a similar way to ROC curves. An ROC plot shows the True Positive Rate against the False Positive Rate in order of prediction strength; a profit curve shows the expected profit in order of prediction strength.
The confusion matrix shows the counts of target classes versus predicted classes in a matrix. The True Positive Rate and False Positive Rate are target-based metrics: they are normalized by the total count of target positives or target negatives, respectively.
An ROC curve plots the top row of the rate matrix, the True Positive Rate and False Positive Rate, in order of prediction strength.
Starting from the confusion matrix above and normalizing by the target counts, we can estimate the rate at which a given prediction is correct or incorrect. The next step in constructing a profit curve is to weight the matrix by the population proportions of positive and negative targets. If we know those proportions, this gives us a feel for the actual rates of correct classification and misclassification in the population we care about.
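As a minimal sketch of these two steps, here is the normalization in numpy. The confusion matrix values below are made up for illustration, and the layout (rows are predictions, columns are targets, positive class first) is an assumption:

```python
import numpy as np

# Hypothetical confusion matrix of counts: rows are predictions
# (positive first), columns are targets (positive first),
# so the entries are [[TP, FP], [FN, TN]].
cm = np.array([[40., 10.],
               [10., 40.]])

# Step 1: normalize by target counts (column sums). The top row is then
# the True Positive Rate and False Positive Rate.
rates = cm / cm.sum(axis=0)

# Step 2: weight each column by the population proportion of its class
# (assumed here: 1/3 positive, 2/3 negative).
proportions = np.array([1/3, 2/3])
weighted = rates * proportions

print(rates)
print(weighted.sum())  # the weighted entries account for the whole population
```

Each column of `rates` sums to 1, and all of `weighted` sums to 1, which is what lets us read the entries as per-prediction rates in the population.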
Profit curves are created by assigning each term in the altered confusion matrix above a numerical value: a cost or a benefit.
For illustrations of the calculations, and to develop some intuition, I will use the following example cost-benefit matrix (rows are predictions, columns are targets, positive class first):

$$\begin{pmatrix} 10 & -2 \\ -2 & 6 \end{pmatrix}$$

This matrix says a correct positive prediction is worth $10, a correct negative prediction is worth $6, and a misclassification costs $2.
I want to do two toy calculations using the example matrix: one where the model predicts everything positive, and one where it predicts everything negative. The counts of positive and negative targets in the population set the class proportions used below.

Predicting everything positive gives a True Positive Rate and False Positive Rate of 1, which is the upper-right corner of an ROC curve.
If our population has positive and negative example proportions of 1/3 and 2/3, the weighted rate matrix looks like:

$$\begin{pmatrix} \frac{1}{3} & \frac{2}{3} \\ 0 & 0 \end{pmatrix}$$
The element-wise multiplication with our cost matrix gives:

$$\begin{pmatrix} \frac{10}{3} & -\frac{4}{3} \\ 0 & 0 \end{pmatrix}$$
We sum all the elements of this matrix together to get the expected profit per prediction:

$$\frac{10}{3} - \frac{4}{3} = \$2$$
This strategy is profitable. If we look at the other extreme and predict everything negative, the weighted rate matrix becomes:

$$\begin{pmatrix} 0 & 0 \\ \frac{1}{3} & \frac{2}{3} \end{pmatrix}$$
The element-wise multiplication with our cost matrix produces:

$$\begin{pmatrix} 0 & 0 \\ -\frac{2}{3} & 4 \end{pmatrix}$$

Summing gives an expected profit of $-\frac{2}{3} + 4 \approx \$3.33$ per prediction.
So we have two extreme strategies: predict everything positive and make $2 per prediction, or predict everything negative and make $3.33 per prediction.
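These two baseline numbers can be checked with a few lines of numpy. This is a sketch assuming the example cost-benefit matrix above and class proportions of 1/3 positive and 2/3 negative:

```python
import numpy as np

# Cost-benefit matrix: rows are predictions, columns are targets,
# positive class first: [[TP benefit, FP cost], [FN cost, TN benefit]].
cost = np.array([[10., -2.],
                 [-2.,  6.]])

# Assumed class proportions: 1/3 positive, 2/3 negative.
proportions = np.array([1/3, 2/3])

def expected_profit(rates, cost, proportions):
    """Weight the column-normalized rate matrix by the class proportions,
    multiply element-wise by the cost matrix, and sum."""
    return (rates * proportions * cost).sum()

# Predict everything positive: TPR = FPR = 1.
all_pos = np.array([[1., 1.],
                    [0., 0.]])
# Predict everything negative: TPR = FPR = 0.
all_neg = np.array([[0., 0.],
                    [1., 1.]])

print(expected_profit(all_pos, cost, proportions))  # 2.0
print(expected_profit(all_neg, cost, proportions))  # 3.33...
```

This reproduces the $2 and $3.33 figures from the worked examples.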
Models can help improve on these baselines, and we would like to do two things:
- Forecast increase in profits
- Choose the model parameters that optimize profits
For this post I am going to make a profit curve for a flower shop that sells at two locations, where I am trying to automate the shipping. If we get our best flowers to the right location, we make 10 dollars; if the prediction is incorrect, we pay 2 dollars to ship them to the correct place. If we get the other flowers to the other location, we make 6 dollars, but again we pay 2 dollars in shipping if the prediction is incorrect. This is the cost-benefit matrix I have used above.
We already know the two extremes in profit, but let's fit some models on the iris data set and make some profit curves.
I am using the iris data set fit with two models: Logistic Regression and a Support Vector Machine. I have hamstrung both models: I am not performing a train/test split (in the interest of focusing on making profit curves), and I have tuned them to artificially give both the same Area Under the ROC Curve (AUC).
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_curve, roc_auc_score
%matplotlib inline

iris = datasets.load_iris()
X = iris.data
Y = (iris.target == 1).astype(int)

svm = SVC(C=.1, probability=True, kernel='linear')
log = LogisticRegression(C=1)
log.fit(X, Y)
svm.fit(X, Y)

probs = []
p = log.predict_proba(X)[:, 1]
probs.append(p)
fpr, tpr, thresholds = roc_curve(Y, p)
plt.figure(figsize=(16, 8))
plt.plot(fpr, tpr, label='Log ROC Curve - AUC: %.2f' % roc_auc_score(Y, p),
         color='steelblue', lw=2)

p = svm.predict_proba(X)[:, 1]
probs.append(p)
fpr, tpr, thresholds = roc_curve(Y, p)
plt.plot(fpr, tpr, label='SVM ROC Curve - AUC: %.2f' % roc_auc_score(Y, p),
         color='firebrick', lw=2)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```
The SVM has a higher true positive rate at lower false positive rates, while the logistic regression model has a higher true positive rate at larger false positive rates. Everything predicted positive is the upper-right point, and everything predicted negative is the lower-left point.
The AUC is the same for both models (and even if it were not, AUC alone would not tell us the business impact), so we need a way to evaluate that impact directly.
Profit Curve Calculations
Now that I have models and have scored all the probabilities, we can look at the confusion matrix. Because I am using sklearn's method, I want to point out that its confusion matrix has the following format:

$$\begin{pmatrix} TN & FP \\ FN & TP \end{pmatrix}$$
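A quick check of that layout on a tiny made-up example (label 0 is the negative class):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 1, 1]
y_pred = [0, 1, 0, 1, 0]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# Row 0 holds the true negatives and false positives, row 1 holds the
# false negatives and true positives: [[TN, FP], [FN, TP]].
# Here: 2 TN, 1 FP, 1 FN, 1 TP.
```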
The rates, normalized by the target counts (row sums in this layout), then become:

$$\begin{pmatrix} \frac{TN}{TN+FP} & \frac{FP}{TN+FP} \\ \frac{FN}{FN+TP} & \frac{TP}{FN+TP} \end{pmatrix}$$
Each row is then weighted by its target class proportion, and the profit matrix (written in the same layout) is multiplied element-wise with the result. Summing every element of that matrix gives the expected profit per prediction.
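Putting those steps together, the whole calculation can be wrapped in one small function. This is a sketch; the cost matrix here is the example one from earlier, rewritten in sklearn's [[TN, FP], [FN, TP]] layout:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Example cost-benefit matrix in sklearn's [[TN, FP], [FN, TP]] layout.
cost = np.array([[ 6., -2.],
                 [-2., 10.]])

def expected_profit_per_prediction(y_true, y_pred, cost):
    cm = confusion_matrix(y_true, y_pred).astype(float)
    # Normalize each row by its target count to get rates...
    rates = cm / cm.sum(axis=1).reshape(2, 1)
    # ...weight each row by its target class proportion...
    weighted = rates * (cm.sum(axis=1) / cm.sum()).reshape(2, 1)
    # ...then multiply element-wise by the cost matrix and sum.
    return (weighted * cost).sum()

# Sanity check against the all-positive baseline: with 1/3 positives,
# predicting everything positive should give $2 per prediction.
y_true = np.array([1] * 50 + [0] * 100)
y_pred = np.ones_like(y_true)
print(expected_profit_per_prediction(y_true, y_pred, cost))  # 2.0
```

This matches the $2 baseline from the toy calculation, which is a useful check that the layout and normalization are consistent.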
Making Profit Curves
To make the profit curve, we go through the predictions ordered from strongest to weakest. For each prediction, we label everything as strong or stronger than it positive and everything weaker negative, then calculate the expected profit as shown above.
```python
# Cost-benefit matrix in sklearn's [[TN, FP], [FN, TP]] layout.
cost = np.array([[ 6., -2.],
                 [-2., 10.]])

names = ['Log', 'SVM']
colors = ['steelblue', 'firebrick']
plt.figure(figsize=(16, 8))
for i, p in enumerate(probs):
    order = np.argsort(p)
    cost_vals = []
    for pp in p[order]:
        cm = confusion_matrix(Y, (p >= pp))
        # Make rates
        cmr = cm / cm.sum(axis=1).astype(float).reshape(2, 1)
        # Multiply by target proportions
        acmr = cmr * (cm.sum(axis=1).astype(float) / cm.sum()).reshape(2, 1)
        # Element-wise multiplication with the cost matrix
        cost_vals.append((acmr * cost).sum())
    plt.plot(np.array(range(len(p), 0, -1)).astype(float) / len(p), cost_vals,
             label='%s Curve - AUC: %.2f - Max Profit %.2f' %
                   (names[i], roc_auc_score(Y, p), max(cost_vals)),
             color=colors[i], lw=2)
plt.xlabel("Percent Predicted Positive")
plt.ylabel("Expected Profit Per Prediction")
plt.legend()
plt.show()
```
The profit curve shows that logistic regression achieves a slightly higher maximum profit than the SVM, though in practice we should use train/test splits and bootstrapped error estimates. As a first-order estimate, this gives a feel for the business value a model can have. In this case, it allows us to increase profits significantly.
There are other business considerations that have to be weighed in model choice, such as technical debt, model debt, and data debt. Models do have to be maintained in practice.
While at Galvanize, we did a churn analysis similar to this for a cell phone provider with a costly intervention strategy. The best-performing models are also the most costly to implement, train, and maintain. Using the wrong threshold can also cost money.
There are significant gains to be had in this case, but the human and infrastructure costs have to be considered as well.
As always, thanks for reading. I hope you enjoyed it or gained something.