Abstract

As artificial intelligence technologies are increasingly deployed by government and commercial entities to make automated and semi-automated decisions, the right to an explanation for such decisions has become a critical legal issue. Because the internal logic of machine learning algorithms is typically opaque, the absence of a right to explanation can weaken an individual’s ability to challenge such decisions. This article considers the merits of enacting a statutory right to explanation for automated decisions. To this end, it begins by considering a theoretical justification for a right to explanation, examining consequentialist and deontological approaches to protection, and then considers the appropriate ambit of such a right, comparing absolute transparency with partial transparency and counterfactual explanations. The article then analyses insights provided by the European Union’s General Data Protection Regulation before concluding with a recommended option for reform to protect the legitimate interests of individuals affected by automated decisions.
