Explainability and Counterfactuals: Algorithmic Decisions and the Individual

Brent Mittelstadt

Tonii Leach

How do AI algorithms make decisions? Why might we need to
understand this, and to what extent? How can we conceptualise ‘explainable’ AI?
And when we ask for an explanation, what is it that we really want to know? On
27th February, Dr Brent Mittelstadt gave a talk at De Montfort University
(Leicester, UK) on ‘Governance of AI with explanations’ to address some of
these questions.

Explainability of complex AI algorithms has become a highly contested topic
since the introduction of the General Data Protection Regulation (GDPR) across
the EU on 25th May 2018. Whilst it is often claimed that the GDPR confers a
‘right to explanation’ of algorithmic decisions made about an individual, Dr
Mittelstadt argued that this is not the case. According to the paper Why a
Right to Explanation of Automated Decision-Making Does Not Exist in the General
Data Protection Regulation (Wachter, Mittelstadt and Floridi, 2017), the GDPR
simply provides a right to be informed about the existence of automated
processing and about system functionality where a decision is based solely on
automated processing and has legal or similarly significant effects for the
individual. No explanation of the rationale behind an individual decision is
required. Whilst this curbs the legal requirement for a detailed explanation,
in situations where legal or significant effects for the individual are likely,
a clear explanation of the decision remains key.

In this context, then, it is vital to understand what explainability means and
how it could work in practice. A distinction was made between explainability
(pertaining to how a specific decision is made) and interpretability (focusing
on a general understanding of how the system or algorithm works).

Explanation Relevance

Where someone is presented with an explanation of how a decision was made, the
accuracy, usefulness and relevance of that explanation were discussed as key
elements. The case of social media adverts was considered in relation to the
impact of explainability on everyday life.

When an individual sees a targeted, paid-for advert, an explanation of why they
were selected is offered, typically a combination of age, gender and location
that meets the pre-selected target audience for a specified product. However,
these same characteristics are offered as an explanation even for adverts that
are quite clearly based on browsing history, regardless of whether the product
relates to those characteristics in any way. This was discussed in relation to
the usefulness of explanations when the information provided is not central,
and can be merely tangential, to the way in which the decision was made.

Counterfactuals

It was discussed that, when an explanation is requested, what the individual
often actually wants is an understanding of why a particular outcome occurred
instead of an alternative one. This is of particular relevance in situations
where, for example, a person was turned down for credit when they did not
expect to be. In this scenario, counterfactuals can be a useful form of
explanation. A counterfactual explanation describes how a decision depends on
the external factors fed into the system. It addresses an alternative question:
rather than asking ‘what was the cause of this decision?’, counterfactuals aim
to answer ‘how can I get the decision I wanted?’. In this way, the
counterfactual identifies the external factors that would need to be different
in order to obtain the desired outcome.
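
This idea can be made concrete with a small sketch. The snippet below is purely
illustrative and is not the optimisation-based method proposed by Wachter,
Mittelstadt and Russell (2018): it assumes an invented credit-scoring rule and
performs a brute-force search for the smallest input changes that would turn a
refusal into an approval.

```python
from itertools import product

def approve_credit(income, debt, years_employed):
    """Hypothetical scoring rule standing in for an opaque model (all numbers invented)."""
    score = 0.5 * income - 0.8 * debt + 2.0 * years_employed
    return score >= 5

def counterfactuals(original, steps, top_n=3):
    """Enumerate small changes to the inputs and keep those that flip the
    decision, ranked by how little they alter the original application."""
    income, debt, years = original
    flips = []
    for di, dd, dy in product(steps["income"], steps["debt"], steps["years"]):
        candidate = (income + di, debt + dd, years + dy)
        if approve_credit(*candidate):
            effort = abs(di) + abs(dd) + abs(dy)  # simple L1 'distance' from the original
            flips.append((effort, candidate))
    return sorted(flips)[:top_n]

applicant = (60, 40, 1)             # income (k), debt (k), years employed
print(approve_credit(*applicant))   # False: credit denied

for effort, (income, debt, years) in counterfactuals(
    applicant,
    steps={"income": range(0, 21, 5), "debt": range(-20, 1, 5), "years": range(0, 6)},
):
    print(f"approved if income={income}, debt={debt}, years_employed={years}")
```

Run as-is, the hypothetical applicant is refused, and the sketch then lists a
handful of nearby applications that would have been approved, ordered by how
little they differ from the original, which is the intuition behind offering
several equally valid counterfactuals.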

The usefulness of counterfactuals was discussed, both in relation to the
existence of multiple, equally valid counterfactual explanations and in
relation to how they can meet individuals’ expectations of an explanation. A
key benefit is that several counterfactual explanations can often be provided
for a single decision. Counterfactuals can identify a range of different factor
combinations that would achieve the desired outcome, giving the individual an
understanding of the variety of approaches they could take to remedy the issue
that led to the unwanted decision.

By providing counterfactual explanations for algorithmic decisions, the goals
of the individual can be met. They are able to understand the decision by
having some of its key factors and logic revealed to them. Counterfactuals also
support the implementation of Article 22(3) of the GDPR, as the individual can
challenge a decision if, for example, the input factors are not accurate. They
also give the individual the opportunity to alter future decisions by revealing
the key factors that would need to be addressed to achieve the desired outcome.
The benefits of counterfactuals in AI and machine learning contexts are further
explored in Counterfactual Explanations without Opening the Black Box:
Automated Decisions and the GDPR (Wachter, Mittelstadt and Russell, 2018).

This talk provided a fascinating and useful insight into the issues currently
being explored around the provision of explanations, and the alternative
approaches being considered to address them. It also provided a strong overview
of the tensions between industry, regulators, and the individual over
explanations for algorithmic decisions.

Dr Brent Mittelstadt is a Research Fellow and British Academy Postdoctoral
Fellow in data ethics at the Oxford Internet Institute, a Turing Fellow at the
Alan Turing Institute, and a member of the UK National Statistician’s Data
Ethics Advisory Committee. He is an ethicist focusing on auditing,
interpretability, and ethical governance of complex algorithmic systems.

References

Wachter, S., Mittelstadt, B. & Floridi, L. 2017, “Why a Right to Explanation of
Automated Decision-Making Does Not Exist in the General Data Protection
Regulation”, International Data Privacy Law, vol. 7, no. 2, pp. 76-99.
https://doi.org/10.1093/idpl/ipx005

Wachter, S., Mittelstadt, B. & Russell, C. 2018, “Counterfactual Explanations
without Opening the Black Box: Automated Decisions and the GDPR”, Harvard
Journal of Law & Technology, vol. 31, no. 2, pp. 841-887.


Tonii Leach holds the ‘Frontrunner in Responsible Artificial Intelligence’
internship at the Centre for Computing and Social Responsibility, De Montfort
University (DMU, Leicester, UK) and contributes to the DMU team of Ethics
Support in the Human Brain Project. With a Masters in Modern Literature, she is
currently undertaking her PhD with DMU on the topic of ethics and human rights
in next-generation Artificial Intelligence.
