- 1. Overview
- 2. Installation
- 3. Data description
- 4. Train a classifier
- 5. Algorithmic fairness metrics
- 6. Closing words
How do you measure the fairness of a machine learning model?
To date, a number of algorithmic fairness metrics have been proposed. Demographic parity, proportional parity and equalized odds are among the most commonly used metrics to evaluate fairness across sensitive groups in binary classification problems. Multiple other metrics have been proposed based on performance measures extracted from the confusion matrix (e.g., false positive rate parity, false negative rate parity).
Together with Tibor V. Varga, we developed the fairness package for R. The package provides tools to calculate algorithmic fairness metrics across different sensitive groups. It also makes it possible to visualize and compare other prediction metrics between the groups.
The package provides functions to compute the commonly used metrics of algorithmic fairness:
- Demographic parity
- Proportional parity
- Equalized odds
- Predictive rate parity
In addition, the following comparisons are also implemented:
- False positive rate parity
- False negative rate parity
- Accuracy parity
- Negative predictive value parity
- Specificity parity
- ROC AUC comparison
- MCC comparison
```r
#collapse-show
install.packages('fairness')
library(fairness)
```
You may also install the development version from GitHub:
```r
#collapse-show
devtools::install_github('kozodoi/fairness')
library(fairness)
```
In this tutorial, you will work with a simplified version of the landmark COMPAS data set containing the criminal history of defendants from Broward County. You can read more about the data here. To load the data set, all you need to do is:
```r
#collapse-show
data('compas')
head(compas)
```
The data set contains nine variables. The outcome variable is Two_yr_Recidivism, a binary indicator showing whether an individual committed a crime within the two-year period. The data also includes features on prior criminal record (Misdemeanor) and other features describing age (Age_Below_TwentyFive), sex (Female) and ethnicity (ethnicity).
You don’t really need to delve into the data much. To simplify the illustration, we have already trained a classifier that uses all available features to predict Two_yr_Recidivism and appended the predicted probabilities (probability) and predicted classes (predicted) to the data frame. You can use these prediction columns directly in your analysis to test different metrics before working with a real model.
The second data set included in the package is a credit scoring data set labeled germancredit. It includes 20 features describing the loan applicants and an outcome variable named BAD, a binary indicator showing whether the applicant defaulted on a loan. Similarly to the compas data set, this data also includes two columns with model predictions named probability and predicted.
Feel free to play with this data as well. You can load it with:
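```r
# load the germancredit data set shipped with the package
data('germancredit')
head(germancredit)
```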
For the purpose of this tutorial, we will train two new models using different sets of features:
- a model that uses all features as input
- a model that uses all features except for ethnicity

We partition the COMPAS data into training and validation subsets and use logistic regression as the base classifier.
```r
#collapse-show
# extract data
compas <- fairness::compas
df     <- compas[, !(colnames(compas) %in% c('probability', 'predicted'))]

# partitioning params
set.seed(77)
val_percent <- 0.3
val_idx     <- sample(1:nrow(df))[1:round(nrow(df) * val_percent)]

# partition the data
df_train <- df[-val_idx, ]
df_valid <- df[ val_idx, ]

# check dim
print(nrow(df_train))
print(nrow(df_valid))
```
```
[1] 4320
[1] 1852
```
```r
#collapse-show
# fit logit models
model1 <- glm(Two_yr_Recidivism ~ .,            data = df_train, family = binomial(link = 'logit'))
model2 <- glm(Two_yr_Recidivism ~ . -ethnicity, data = df_train, family = binomial(link = 'logit'))
```
Let's append the model predictions to the validation set. Later, we will evaluate the fairness of the two models based on these predictions.
```r
#collapse-show
# produce predictions
df_valid$prob_1 <- predict(model1, df_valid, type = 'response')
df_valid$prob_2 <- predict(model2, df_valid, type = 'response')
head(df_valid)
```
The package currently includes nine fairness metrics and two other performance comparisons. Many of these metrics are mutually incompatible: a given classifier most often cannot be fair in terms of all group fairness metrics at once. Depending on the context, it is important to select an appropriate metric to evaluate fairness.
Below, we introduce the functions used to compute the implemented metrics. Every function has a similar set of arguments:
- data: a data.frame containing the features and the predictions
- outcome: name of the outcome variable
- group: name of the sensitive group, which needs to be a factor variable included in the data.frame
- base: name of the base group (a factor level of group) that serves as the reference for the fairness metrics
We also need to supply model predictions. Depending on the metric, we provide either probabilistic predictions as probs or class predictions as preds. The model predictions can be appended to the original data.frame or provided as a vector. In this tutorial, we will use probabilistic predictions with all functions.
When working with probabilistic predictions, some metrics also require a cutoff value to convert probabilities into class predictions, supplied as cutoff. Finally, we also need to specify factor levels to indicate the reference classes using the preds_levels argument. The first level refers to the base class, whereas the second level indicates the predicted class for which probabilities are provided.
Most fairness metrics are calculated based on the confusion matrix produced by a classification model. The confusion matrix consists of four distinct classes:
- True positives (TP): the true class is positive and the prediction is positive (correct classification)
- False positives (FP): the true class is negative and the prediction is positive (incorrect classification)
- True negatives (TN): the true class is negative and the prediction is negative (correct classification)
- False negatives (FN): the true class is positive and the prediction is negative (incorrect classification)
The fairness metrics are calculated by comparing one or more of these measures across sensitive subgroups (e.g., male and female). For a detailed overview of the various measures derived from the confusion matrix and their precise definitions, please click here or here.
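To make these definitions concrete, here is a minimal sketch (not part of the package) that tabulates the four classes for model 1, using a 0.5 cutoff analogous to the cutoff argument:

```r
# illustrative sketch: confusion matrix of model 1 at a 0.5 cutoff
pred_class <- ifelse(df_valid$prob_1 > 0.5, 'yes', 'no')
table(actual = df_valid$Two_yr_Recidivism, predicted = pred_class)
```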
Let's demonstrate the fairness pipeline using predictive rate parity as an example. Predictive rate parity is achieved if the precisions (or positive predictive values) in the subgroups are close to each other. Precision stands for the number of true positives divided by the total number of examples predicted positive within a group.
Formula: TP / (TP + FP)
Let's compute predictive rate parity for the first model that uses all features:
```r
#collapse-show
res1 <- pred_rate_parity(data         = df_valid,
                         outcome      = 'Two_yr_Recidivism',
                         group        = 'ethnicity',
                         probs        = 'prob_1',
                         preds_levels = c('no', 'yes'),
                         cutoff       = 0.5,
                         base         = 'Caucasian')
res1$Metric
```
|                        | Caucasian | African_American | Asian     | Hispanic  | Native_American | Other     |
|------------------------|-----------|------------------|-----------|-----------|-----------------|-----------|
| Predictive Rate Parity | 1.0000000 | 0.9734910        | 1.1448291 | 1.0151614 | 0.9713701       | 0.9455358 |
The first row of the full output shows the raw precision values for all ethnicities. The second row (the one shown above) displays the relative precisions compared to Caucasian defendants.
In a perfect world, all predictive rate parities would be equal to one, meaning that the precision in every group is the same as in the base group. In practice, the values will differ. A parity above one indicates that the precision in that group is relatively higher, whereas a lower parity implies a lower precision. Observing a large variance in parities should hint that the model is not performing equally well for different sensitive groups.
If another ethnic group is set as the base group (e.g., Hispanic), the raw precision values do not change; only the relative metrics do:
```r
#collapse-show
res1h <- pred_rate_parity(data         = df_valid,
                          outcome      = 'Two_yr_Recidivism',
                          group        = 'ethnicity',
                          probs        = 'prob_1',
                          preds_levels = c('no', 'yes'),
                          cutoff       = 0.5,
                          base         = 'Hispanic')
res1h$Metric
```
|                        | Hispanic  | Caucasian | African_American | Asian     | Native_American | Other     |
|------------------------|-----------|-----------|------------------|-----------|-----------------|-----------|
| Predictive Rate Parity | 1.0000000 | 0.9850650 | 0.9589520        | 1.1277311 | 0.9568627       | 0.9314143 |
Overall, the results suggest that the model precision varies between 0.6489 and 0.7857. Apart from the "Other" category, the lowest precision is observed for African-American defendants. This implies that there are more cases where the model mistakenly predicts that a person will commit a crime among African-Americans than among, e.g., Asian defendants.
A standard output of every fairness metric function includes a bar chart that visualizes the relative metrics for all subgroups. Some fairness metrics do not require probabilistic predictions and can also work with class predictions. When predicted probabilities are supplied, an extra density plot is produced, displaying the distributions of probabilities for all subgroups together with the user-defined cutoff.
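Assuming the result list structure used by the package (with the plots stored as separate elements), both plots can be retrieved from the result object:

```r
# assuming the result list exposes the plots as Metric_plot and Probability_plot
res1$Metric_plot       # bar chart of relative metrics per subgroup
res1$Probability_plot  # density plot of predicted probabilities per subgroup
```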
Let's now compare the results to the second model that does not use ethnicity as a feature:
```r
#collapse-show
# model 2
res2 <- pred_rate_parity(data         = df_valid,
                         outcome      = 'Two_yr_Recidivism',
                         group        = 'ethnicity',
                         probs        = 'prob_2',
                         preds_levels = c('no', 'yes'),
                         cutoff       = 0.5,
                         base         = 'Caucasian')
res2$Metric
```
|                        | Caucasian | African_American | Asian     | Hispanic | Native_American | Other     |
|------------------------|-----------|------------------|-----------|----------|-----------------|-----------|
| Predictive Rate Parity | 1.000000  | 0.9570029        | 1.1846068 | 1.022795 | 0.9652352       | 0.9592025 |
We can see two things. First, removing ethnicity from the features slightly increases precision for African-American defendants but results in a lower precision for a number of other groups. This illustrates that improving a model for one group may come at the cost of a drop in predictive performance for the general population. Depending on the context, it is the task of the decision-maker to decide what is best.

Second, removing ethnicity does not bring the predictive rate parities substantially closer to one. This illustrates another important research finding: removing a sensitive variable does not guarantee that a model stops discriminating. Ethnicity correlates with other features and is therefore still implicitly present in the input data. To make the classifier fairer, one would need to consider more sophisticated techniques than simply dropping the sensitive attribute.
In the rest of this tutorial, we will go through the functions covering the remaining implemented fairness metrics, illustrating the corresponding equations and outputs. You can find more details on each of the fairness metric functions in the package documentation. Please don't hesitate to use the built-in help to see further details and examples for the implemented metrics:
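For example, to open the help page of the predictive rate parity function:

```r
?pred_rate_parity
```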
Demographic parity is one of the most popular fairness indicators in the literature. It is achieved if the absolute numbers of positive predictions in the subgroups are close to each other. This measure does not take the true class into consideration and depends only on the model predictions.
Formula: (TP + FP)
```r
#collapse-show
res_dem <- dem_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'ethnicity',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      cutoff       = 0.5,
                      base         = 'Caucasian')
res_dem$Metric
```
Of course, comparing the absolute numbers of positive predictions will show a high disparity whenever the numbers of cases within the groups differ, which artificially inflates the disparity. This is true in our case.
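The group sizes can be checked with a simple frequency table:

```r
# number of validation cases per ethnicity
table(df_valid$ethnicity)
```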
```
       Caucasian African_American            Asian         Hispanic 
             622              962               16              144 
 Native_American            Other 
               4              104 
```
To address this, we can use proportional parity.
Proportional parity is very similar to demographic parity but addresses the issue discussed above. It is achieved if the proportions of positive predictions in the subgroups are close to each other. Like demographic parity, this measure does not depend on the true labels.
Formula: (TP + FP) / (TP + FP + TN + FN)
```r
#collapse-show
res_prop <- prop_parity(data         = df_valid,
                        outcome      = 'Two_yr_Recidivism',
                        group        = 'ethnicity',
                        probs        = 'prob_1',
                        preds_levels = c('no', 'yes'),
                        cutoff       = 0.5,
                        base         = 'Caucasian')
res_prop$Metric
```
Proportional parity still shows that African-American defendants are treated unfairly by our model. At the same time, the disparity is lower than the one observed with demographic parity.
All the remaining fairness metrics account for both model predictions and the true labels.
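Equalized odds (as implemented in the package) is achieved if the sensitivities, i.e. the true positive rates, in the subgroups are close to each other.

Formula: TP / (TP + FN)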
```r
#collapse-show
res_eq <- equal_odds(data         = df_valid,
                     outcome      = 'Two_yr_Recidivism',
                     group        = 'ethnicity',
                     probs        = 'prob_1',
                     preds_levels = c('no', 'yes'),
                     cutoff       = 0.5,
                     base         = 'African_American')
res_eq$Metric
```
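Accuracy parity is achieved if the accuracies (the proportions of correctly classified examples) in the subgroups are close to each other.

Formula: (TP + TN) / (TP + FP + TN + FN)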
```r
#collapse-show
res_acc <- acc_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'ethnicity',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      cutoff       = 0.5,
                      base         = 'African_American')
res_acc$Metric
```
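False negative rate parity is achieved if the false negative rates (the ratio between the number of false negatives and the total number of actual positives) in the subgroups are close to each other.

Formula: FN / (TP + FN)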
```r
#collapse-show
res_fnr <- fnr_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'ethnicity',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      cutoff       = 0.5,
                      base         = 'African_American')
res_fnr$Metric
```
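False positive rate parity is achieved if the false positive rates (the ratio between the number of false positives and the total number of actual negatives) in the subgroups are close to each other.

Formula: FP / (TN + FP)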
```r
#collapse-show
res_fpr <- fpr_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'ethnicity',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      cutoff       = 0.5,
                      base         = 'African_American')
res_fpr$Metric
```
Negative predictive value parity is achieved if the negative predictive values in the subgroups are close to each other. The negative predictive value is computed as the ratio between the number of true negatives and the total number of predicted negatives. This function can be considered the ‘inverse’ of predictive rate parity.
Formula: TN / (TN + FN)
```r
#collapse-show
res_npv <- npv_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'ethnicity',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      cutoff       = 0.5,
                      base         = 'African_American')
res_npv$Metric
```
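Specificity parity is achieved if the specificities (the ratio between the number of true negatives and the total number of actual negatives) in the subgroups are close to each other.

Formula: TN / (TN + FP)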
```r
#collapse-show
res_sp <- spec_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'ethnicity',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      cutoff       = 0.5,
                      base         = 'African_American')
res_sp$Metric
```
Apart from the parity-based metrics presented above, two additional comparisons are implemented: ROC AUC comparison and Matthews correlation coefficient comparison.
```r
#collapse-show
res_auc <- roc_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'Female',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      base         = 'Male')
res_auc$Metric
```
```
Setting direction: controls < cases
Setting direction: controls < cases
```
|                | Male      | Female    |
|----------------|-----------|-----------|
| ROC AUC Parity | 1.0000000 | 0.9959731 |
Apart from the standard outputs, the function also returns ROC curves for each of the subgroups.
The Matthews correlation coefficient (MCC) takes all four classes of the confusion matrix into consideration. MCC is sometimes referred to as the single most powerful metric in binary classification problems, especially for data with class imbalances.
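Formula: (TP × TN − FP × FN) / √((TP + FP) × (TP + FN) × (TN + FP) × (TN + FN))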
```r
#collapse-show
res_mcc <- mcc_parity(data         = df_valid,
                      outcome      = 'Two_yr_Recidivism',
                      group        = 'Female',
                      probs        = 'prob_1',
                      preds_levels = c('no', 'yes'),
                      cutoff       = 0.5,
                      base         = 'Male')
res_mcc$Metric
```
You have read through the fairness R package tutorial! By now, you should have a solid grip on algorithmic group fairness metrics.
We hope that you will be able to use the R package in your data analysis! Please let us know if you run into any issues while working with the package in the comments below or on GitHub. Please also feel free to contact the authors if you have any feedback.