Metrics for Measuring Machine Learning Models

In our previous article, we gave an in-depth review of how to explain biases in data. The next step in our fairness journey is to dig into how to detect biased machine learning models.

However, before we can detect (un)fairness in machine learning, we first need to be able to define it. Fairness is an equivocal notion: it can be expressed in various ways to reflect the specific circumstances of a use case or the ethical perspectives of the stakeholders. Consequently, there is no consensus in the research community about what fairness in machine learning actually means.

In this article, we will explain the main fairness definitions used in research and highlight their practical limitations. We will also underscore that these definitions are mutually exclusive and that, consequently, there is no “one-size-fits-all” fairness definition.

Notations

To simplify the exposition, we will consider a single protected attribute in a binary classification setting. This can be generalized to multiple protected attributes and all types of machine learning tasks.

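To make this setting concrete, here is a minimal Python sketch of a binary classification task with a single protected attribute. The column names A, Y, and Y_hat and the random toy data are illustrative assumptions, not taken from the article:

```python
import numpy as np
import pandas as pd

# Toy data for the hiring example (all names are hypothetical):
# A     - binary protected attribute (e.g., two demographic groups)
# Y     - ground-truth label (1 = promising candidate)
# Y_hat - the classifier's binary prediction
rng = np.random.default_rng(seed=42)
n = 1_000
df = pd.DataFrame({
    "A": rng.integers(0, 2, size=n),
    "Y": rng.integers(0, 2, size=n),
    "Y_hat": rng.integers(0, 2, size=n),
})

# Per-group selection rate P(Y_hat = 1 | A = a): the basic quantity
# that many fairness definitions compare across groups.
print(df.groupby("A")["Y_hat"].mean())
```

Generalizing to multiple protected attributes simply means grouping by several columns at once, and the same idea carries over to non-binary tasks.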

Throughout the article, we will consider the identification of promising candidates for a job, using the following notations:
