A classifier satisfies this definition if all groups defined by the sensitive attribute have equal Positive Rate.
A classifier satisfies this definition if all groups defined by the sensitive attribute have equal True Positive Rate and equal False Positive Rate.
A classifier satisfies this definition if all groups defined by the sensitive attribute have equal True Positive Rate.
A classifier satisfies this definition if all groups defined by the sensitive attribute have equal Positive Predictive Value.
This metric measures whether the rate of positive predictions (those with value "{{label_list[1]}}") is equal between the chosen group ("{{referenceGroup}}") and the other groups defined by the sensitive attribute.
The measure does not consider whether the positive predictions were correct, only what fraction of each group receives a positive prediction.
Disparity values close to or equal to zero indicate parity between the groups, while values close to one (or negative one) indicate an imbalance in how often the positive class is predicted for each group.
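As a minimal sketch of how this disparity could be computed (toy data; the group labels and array names are hypothetical, not part of the dashboard):

```python
import numpy as np

# Hypothetical predictions and sensitive-attribute groups (illustrative only).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def positive_rate(y_pred, mask):
    """Fraction of predictions in the positive class for one group."""
    return y_pred[mask].mean()

ref_rate   = positive_rate(y_pred, group == "A")  # reference group: 3/4 = 0.75
other_rate = positive_rate(y_pred, group == "B")  # other group:     1/4 = 0.25

# Difference of positive rates: 0 means parity, values near +/-1 mean imbalance.
disparity = other_rate - ref_rate                 # -0.5
print(disparity)
```

Because each rate lies in [0, 1], the difference naturally falls in [-1, 1], matching the range described above.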
This metric measures whether the true positive rates and false positive rates are equal between the chosen group ("{{referenceGroup}}") and the other groups defined by the sensitive attribute.
It assesses the classifier’s ability to correctly predict positive values (those with value "{{label_list[1]}}") and the likelihood of incorrectly predicting positive values.
A disparity value close to zero means the classifier identifies true positives at a similar rate across groups and incorrectly assigns the positive class at a similar rate across groups.
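A minimal sketch of the per-group rates behind this metric (toy data; the group labels and helper name are hypothetical):

```python
import numpy as np

# Hypothetical labels, predictions, and sensitive-attribute groups.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def tpr_fpr(y_true, y_pred, mask):
    """True positive rate and false positive rate for one group."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = yp[yt == 1].mean()  # correct positives / actual positives
    fpr = yp[yt == 0].mean()  # wrong positives / actual negatives
    return tpr, fpr

tpr_a, fpr_a = tpr_fpr(y_true, y_pred, group == "A")  # 0.5, 0.5
tpr_b, fpr_b = tpr_fpr(y_true, y_pred, group == "B")  # 1.0, 0.0

# Equalized Odds holds when both gaps are zero.
print(tpr_b - tpr_a, fpr_b - fpr_a)
```

Equality of Opportunity, the relaxed version described below, checks only the TPR gap and ignores the FPR gap.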
This metric is a relaxed version of Equalized Odds that only measures the true positive rate between the chosen group ("{{referenceGroup}}") and the other groups defined by the sensitive attribute.
When the disparity metric is close to zero, it means the classifier predicts true positive values (those with value "{{label_list[1]}}") at the same rate across groups.
A value farther from zero indicates that the classifier is better at predicting true positives for one group than for another.
This metric measures the difference in precision between the chosen group ("{{referenceGroup}}") and the other groups defined by the sensitive attribute.
Precision is measured as the percentage of all predicted positives (those predicted with value "{{label_list[1]}}") that are actually positive.
When the disparity metric is farther from zero, it indicates that positive predictions are more likely to be correct for one group than for another; that is, one group's positive predictions contain a larger share of false positives.
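A minimal sketch of per-group precision and its disparity (toy data; the group labels and helper name are hypothetical):

```python
import numpy as np

# Hypothetical labels, predictions, and sensitive-attribute groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def precision(y_true, y_pred, mask):
    """Fraction of a group's predicted positives that are actually positive."""
    yt, yp = y_true[mask], y_pred[mask]
    return yt[yp == 1].mean()

prec_a = precision(y_true, y_pred, group == "A")  # 2 of 3 predicted positives correct
prec_b = precision(y_true, y_pred, group == "B")  # 1 of 1 predicted positives correct

# Difference in precision: 0 means Predictive Rate Parity holds.
print(prec_b - prec_a)
```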
Demographic Parity
Equalized Odds
Equality of Opportunity
Predictive Rate Parity
Fraction of all cases predicted to be in the positive class ({{label_list[1]}})
Fraction of negative cases ({{label_list[0]}}) incorrectly predicted to be in the positive class out of all actual negative cases
Fraction of positive cases ({{label_list[1]}}) correctly predicted to be in the positive class out of all actual positive cases
Fraction of positive cases ({{label_list[1]}}) correctly predicted to be in the positive class out of all predicted positive cases