ABSTRACT

The article discusses the human rights implications of algorithmic decision-making in the social welfare sphere. It does so against the background of the 2020 judgment of the District Court of The Hague in a case challenging the Dutch government’s use of System Risk Indication—an algorithm designed to identify potential social welfare fraud. Digital welfare state initiatives are likely to fall short of meeting basic requirements of legality and protecting against arbitrariness. Moreover, the intentional opacity surrounding the implementation of algorithms in the public sector not only hampers the effective exercise of human rights but also undermines proper judicial oversight. The analysis unpacks the relevance and complementarity of three legal/regulatory frameworks governing algorithmic systems: data protection, human rights law and algorithmic accountability. Notwithstanding these frameworks’ invaluable contribution, the discussion casts doubt on whether they are well-suited to address the legal challenges pertaining to the discriminatory effects of the use of algorithmic systems.

1. INTRODUCTION

In February 2020, the District Court of The Hague rendered its judgment in NCJM et al. and FNV v The State of the Netherlands (‘SyRI’).1 The case challenged the Dutch government’s use of System Risk Indication (SyRI)—an algorithm designed to identify potential social welfare fraud. The Court ruled that neither the legislation governing SyRI nor its use met the requirements laid down in Article 8(2) of the European Convention on Human Rights (ECHR)2 for an interference with the exercise of the right to private life to be necessary and proportionate. This is one of the first judgments in the world addressing the human rights implications of the use of artificial intelligence (AI) in the public sector and states’ respective obligations to ensure transparency of AI processes. The use of algorithmic (automated) decision-making spans different areas in the public sector3 and is aimed at, among other things, determining the risk represented by visa applicants4 or grading exams in secondary education.5 For instance, in the United Kingdom, the R (Johnson and others) v SSWP judgment raised important issues arising from the implementation of an AI system making benefit and welfare decisions for the new system of universal credit.6 Against this background, the Hague District Court’s analysis and findings set a highly relevant legal precedent in an area that is now starting to receive judicial scrutiny. The ruling will likely prove to have global impact, considering its pioneering role and the significant attention it received in the international press and from international bodies, such as the United Nations Special Rapporteur on extreme poverty and human rights.

The article places the challenges encountered by the Dutch court in deciding the case within the corpus of the evolving legal and regulatory landscape pertaining to algorithmic decision-making in the social welfare sphere. The discussion is structured as follows. First, the analysis explains the Court’s judgment and legal reasoning (Section 2). Section 3 examines more closely how the Court and parties framed the dispute with reference to three distinct legal regimes and approaches currently governing algorithmic systems: data protection, human rights law and algorithmic accountability/transparency. Notwithstanding the different priorities, vocabularies and structures of these regimes, in the SyRI case they complemented one another successfully, though some valuable legal questions remained unasked or unanswered by the plaintiffs and the Court, respectively. Section 4 engages with the implications that the intentional opacity surrounding SyRI, maintained in order to prevent people from ‘gaming the system’, has not only for the effective exercise of human rights but also for proper judicial oversight. The Court openly acknowledged on multiple occasions that the absence of transparency and information about how the AI system worked hampered its ability to address some of the claims brought before it. Section 5 highlights the difficulties of substantiating the discriminatory effects of the use of algorithmic systems. The discussion casts doubt on whether international human rights law and data protection are well-suited frameworks, as they stand, to fully address contemporary challenges, including evidence and the burden of proof for substantiating a (risk of a) violation, new types of discriminatory (collective) harm or questions of legal standing. Section 6 concludes.

2. THE DISTRICT COURT OF THE HAGUE'S JUDGMENT

A. Background

SyRI was a big-data analysis system that ran under the auspices of the Dutch Ministry of Social Affairs and Employment. The aim of the system was to prevent and combat fraud in the areas of income-dependent schemes, taxes and social security. It could be used upon request by one of the so-called ‘cooperation associations’, namely governmental bodies and certain Dutch municipalities. Following the linkage of many siloed datasets held by government agencies, the aggregated data were fed into the SyRI algorithm. The algorithm’s risk model used several undisclosed risk indicators (for example, related to taxes, health insurance, residence, education), on the basis of which it flagged an increased risk of irregularities and generated risk reports for cases suspected of presenting a higher likelihood of fraud. The submission of such a risk report could result in further investigation by relevant authorities. Risk notifications were included in a register for 2 years. Individuals were not informed when a risk report had been created for them and were not able to gain any insight into how a decision was reached. SyRI’s legal basis lay in two main domestic instruments: Section 65 of the Work and Income (Implementation Organisation Structure) Act (SUWI Act)7 taken together with Chapter 5a of the Decree concerning the rules for tackling fraud by exchanging data and the effective use of data known within the government with the use of SyRI (SUWI Decree).8
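Although the state never disclosed SyRI’s risk model, the general architecture described above (records from linked datasets scored against predetermined indicators, with scores above a threshold producing a risk report that the data subject is not told about) can be sketched schematically. The following minimal Python illustration is purely hypothetical: the indicator names, weights and threshold are invented for the purpose of illustration and do not correspond to any actual SyRI indicators, which were never made public.

```python
# Purely illustrative sketch of an indicator-based risk model.
# All indicator names, weights and the threshold are hypothetical:
# SyRI's actual risk model and indicators were never disclosed.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class LinkedRecord:
    """A person's record after linking previously siloed government datasets."""
    person_id: str
    declared_income: float      # e.g. from tax data (hypothetical field)
    benefits_received: float    # e.g. from social security data (hypothetical field)
    registered_residents: int   # residents registered at the person's address (hypothetical field)


def risk_score(rec: LinkedRecord) -> float:
    """Sum of predetermined indicators, each contributing a fixed weight."""
    score = 0.0
    if rec.benefits_received > 0 and rec.declared_income > 20_000:
        score += 0.4   # hypothetical indicator: benefits alongside substantial declared income
    if rec.registered_residents >= 5:
        score += 0.3   # hypothetical indicator: unusually many residents registered at one address
    return score


def generate_risk_reports(records: Iterable[LinkedRecord], threshold: float = 0.5) -> List[str]:
    """Flag records whose score meets the threshold; the flagged persons are not notified."""
    return [r.person_id for r in records if risk_score(r) >= threshold]
```

The point of the sketch is not the specific rules but the structure: once datasets are linked, a small set of undisclosed indicators determines who is flagged, and the flagged person has no way of knowing which indicator, or which piece of underlying data, triggered the report.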

B. The Claims of the Plaintiffs and Admissibility

The case was brought by a coalition of Dutch civil society organisations and two Dutch citizens challenging the Dutch government’s use of SyRI. The plaintiffs submitted that the legislation governing SyRI and the algorithm’s use violated the right to private life, as protected under Article 8 of the ECHR, Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (EU Charter)9 and Article 17 of the International Covenant on Civil and Political Rights (ICCPR).10 They also claimed a violation of the right to a fair trial and the right to effective remedy under Articles 6 and 13 of the ECHR, respectively, and Articles 5, 6, 13, 14, 22 and 28 of the EU General Data Protection Regulation (GDPR).11

Of its own motion, the Court reviewed the admissibility of the claims brought before it. Most of the plaintiffs were civil society interest groups and, under the Dutch Civil Code, they had a legal interest in bringing proceedings seeking to protect the interests of their support base. However, two of the claimants were Dutch citizens complaining that the use of SyRI violated their right to privacy. The Court found that the citizens lacked standing because they were unable to show a ‘sufficiently concrete and personal interest’.12

C. The Standard of Assessment

The Court made clear from the outset that the standard of assessment for reviewing the SyRI legislation would be the requirements of Article 8 of the ECHR concerning the right to private life.13 Although the right to protection of personal data is not laid down in the ECHR, the case law of the European Court of Human Rights (ECtHR) has recognised it as part of the protective scope of Article 8 of the ECHR. Furthermore, core aspects of the right to private life encompass the right to a personal identity and protection against discrimination and stereotyping in the context of data processing. The Court decided not to discuss the applicability of Article 17 of the ICCPR by opining that since it ‘offers the same protection of private life as Article 8 ECHR, [it] has no independent significance in this case’.14

The Court held that it would interpret Article 8 of the ECHR in light of the detailed rights and principles concerning data protection enshrined in EU law. In addition to Article 7 of the EU Charter, which provides for the right to respect for private and family life, Article 8 of the EU Charter stipulates the right to the protection of personal data and also details a series of specific rights, including fairness in the processing of personal data, which needs to take place for specified purposes and on the basis of the data subject’s consent or other legitimate basis laid down by law. According to Article 52(3) of the EU Charter, the substance and scope of the rights in the EU Charter are the same as those of the ECHR rights, insofar as the Charter contains rights that correspond with those of the ECHR. The ECHR provides the minimum level of protection under the EU Charter, unless the EU Charter provides more extensive protection, in which case the latter prevails.15 Moreover, the GDPR sets out a comprehensive framework of principles regarding data protection, including the principles of transparency, purpose limitation and data minimisation (Article 5(1)(a) of the GDPR) to be applied when processing personal data. The Court relied heavily upon these principles stating that it would ‘interpret Article 8 paragraph 2 ECHR on the basis of these principles’.16

The main questions before the Court were whether the SyRI legislation and the use of SyRI (1) constituted an interference with Article 8 of the ECHR, (2) pursued a legitimate aim, (3) had a basis in law and (4) were necessary and proportionate restrictions of the right to private life.

D. The Intrusiveness of the Interference

It was not disputed that the SyRI legislation authorised data processing for the benefit of cross-agency collaboration and, thus, constituted an interference with the exercise of the right to private life. The Court paid particular attention to the extent and seriousness of such interference as a factor that weighs heavily in the assessment of the necessity and proportionality of a restriction of the right to privacy. The Court had to clarify two issues in this regard: first, the nature of SyRI and, second, the legal effect of a risk report.

Starting with the nature of SyRI, the plaintiffs argued that SyRI was a proactive system involving the large-scale, unstructured and automated linking of files pertaining to large groups of citizens and the secret processing of personal data. The state disputed that SyRI was a self-learning system used for predictive analytics. The Court held that it was ‘unable to assess … the precise nature of SyRI because the State has not disclosed the risk model and the indicators of which the risk model is composed or may be composed’.17 During the court proceedings, the state did not provide the Court with objectively verifiable information, arguing that disclosure could lead to citizens adjusting their conduct in order to avoid detection of fraud. The Court maintained that this was a deliberate choice on the part of the state, which was also reflected in the absence of transparency in the SyRI legislation as to how the system’s decision model functioned.18

The Court went on to find that, contrary to the plaintiffs’ submissions, ‘the SyRI legislation does not provide room for unstructured (“ad random”) data collection with the use of SyRI’.19 Although the amount of data that could be used was substantial, the data categories were exhaustively enumerated.20 The Court accepted that, as currently implemented, SyRI did not entail any use of deep learning and data mining. Nonetheless, links between data sets were established, which, in turn, led to results suggestive of an increased risk of committing fraud.21 In addition to this, the law not only failed to preclude the use of predictive analyses, deep learning and data mining, but also expressly allowed for the adjustment of a risk model and the development of models with new indicators.22 Therefore, in the Court’s view, the application of SyRI was compatible with deep learning and self-learning systems.23 As regards risk profiles, it was not possible for the Court to ascertain whether these were being developed, although it found that risk profiles based on existing factual data were intrinsic to SyRI’s application.

The Court emphasised that the ‘SyRI legislation does not provide for a duty of disclosure to those whose data are processed in SyRI’.24 Similarly, the law did not provide for an obligation to notify data subjects individually when a risk report had been submitted. Data subjects were not informed unless an investigation had been initiated in response to a risk report, which did not happen as a matter of course.25 The Court did not find it satisfactory that the only statutory obligation was to announce the start of a SyRI project by way of a publication in the Government Gazette26 or that access to the register of risk reports was only granted upon request after the data processing had taken place.27

The second matter to be elucidated in assessing the intrusiveness of the interference was the legal effect of a risk report. The state accepted that the submission of a risk report based on the application of SyRI constituted profiling within the meaning of Article 4(4) of the GDPR. The Court suggested that, although the use of SyRI in and of itself was not intended to have legal effect, ‘a risk report does have a similarly significant effect on the private life of the person to whom the risk report pertains’.28 A risk report could be stored for 2 years and could be used by participants in the SyRI project for a period of 20 months.29 Other investigative authorities could also be notified of the report upon request. Even if a risk report never resulted in an investigation or sanctions, the effect on the private life of the data subject would remain pronounced. This effect, in conjunction with the data subject’s inability to be reasonably aware of the processing of their data, led the Court to conclude that the interference with the right to private life was extensive and serious.30

E. The Necessity and Proportionality of the Restriction

The Court proceeded to examine whether the SyRI legislation pursued a legitimate aim. It was not disputed that combating fraud in social security and welfare is a legitimate and important goal.

The third main question that the Court had to answer was whether the interference with the right to private life had a sufficiently accessible and foreseeable legal basis. Accessibility and foreseeability are the so-called quality of law requirements and refer to the requirement that legislation be formulated with sufficient precision to enable individuals to regulate their conduct. In other words, individuals must be able to access and foresee, to a degree that is reasonable in the circumstances, the consequences that a given action may entail.31 Interestingly, the accessibility and foreseeability criteria take on renewed relevance in the context of rapid technological developments. This is because the domestic legislator struggles to sufficiently regulate such developments and may be intentionally obscure or vague in doing so.32 More specifically, in the SyRI case, following the ECtHR’s approach in S. and Marper v United Kingdom,33 the Court chose to ‘leav[e] undiscussed in its review whether the SyRI legislation is sufficiently accessible and foreseeable and as such affords an adequate legal basis’,34 as required under Article 8(2) of the ECHR. This choice was justified on the basis ‘that the SyRI legislation in any case contains insufficient safeguards for the conclusion that it is necessary in a democratic society’, and an assessment of the adequacy of the legal basis was thus not made.35 Given that the Court had reservations about whether the legislation governing SyRI met the accessibility and foreseeability criteria, it is unfortunate that it drew no formal conclusion on this matter.

The final and crucial question addressed by the Court was whether the use of SyRI and the legislation pertaining to it were necessary and proportionate restrictions on the right to private life. In order to resolve this question, the Court relied heavily upon the EU law principles of transparency, purpose limitation and data minimisation. It maintained that the legislation was insufficiently transparent and verifiable and that the use of SyRI entailed an interference with the right to respect for private life, which was unnecessary and disproportionate to the purpose of combating fraud. This conclusion was grounded on three findings.36

First, the Court found that neither the legislation nor the use of SyRI respected the principle of transparency. The principle is grounded in Article 8(2) of the EU Charter and Articles 5(1)(a) and 12–15 of the GDPR. The normative scope of the principle encompasses a right for data subjects to access their data and obligations imposed upon data controllers to inform data subjects of data collection and processing activities. Crucially, transparency also serves an enabling function for the effective enjoyment of other data subject rights.37 The SyRI legislation provided no information on the objective factual data that could justify an inference of increased risk in individual cases. Nor was there any clear insight into the functioning of the risk model, such as the algorithms or risk-analysis method used.38 The Court thus questioned ‘how a data subject could be able to defend themselves against the fact that a risk report has been submitted about him or her’39 or ‘be aware that their data were processed on correct grounds’.40

Second, the importance of transparency in connection with the ability to verify the risk model and risk indicators was all the greater since the use of the risk model entailed the risk of discriminatory effects—unintentional or otherwise. The plaintiffs submitted that SyRI was used to investigate neighbourhoods known as ‘problem areas’. This use increased the chances of discovering irregularities in those neighbourhoods compared to others and further contributed to the stereotyping of their residents. The UN Special Rapporteur on extreme poverty and human rights, who submitted a third-party intervention before the Court, stressed that the use of SyRI had a discriminatory and stigmatising effect.41 In the hearing, the state admitted that SyRI had only been used to assess ‘problem districts’. The Court accepted that due to the large amounts of data that qualify for processing and the use of risk profiles ‘there is in fact a risk that SyRI inadvertently creates links based on bias, such as a lower socio-economic status or an immigration background’.42 No evidence was presented to suggest that safeguards were put in place to neutralise the risk of discriminatory effects.43
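The dynamic the Court pointed to can be illustrated with a deliberately simplified numerical sketch. The figures below are invented for illustration only; the point is that if only ‘problem districts’ are ever run through a risk model, irregularities will only ever surface there, even where the true rate of irregularities is identical across neighbourhoods, which in turn appears to confirm the stereotype that motivated the targeting in the first place.

```python
# Illustrative only: invented numbers showing how screening only some
# neighbourhoods concentrates fraud findings there, even when the true
# irregularity rate is identical everywhere.

import random

random.seed(0)

TRUE_IRREGULARITY_RATE = 0.02       # assumed identical in every neighbourhood
POPULATION_PER_DISTRICT = 10_000

# Only the first district is screened with the risk model.
districts = {"problem_district": True, "affluent_district": False}

findings = {}
for name, is_screened in districts.items():
    irregular_residents = sum(
        random.random() < TRUE_IRREGULARITY_RATE
        for _ in range(POPULATION_PER_DISTRICT)
    )
    # Irregularities only surface as risk reports where screening takes place.
    findings[name] = irregular_residents if is_screened else 0

print(findings)  # roughly {'problem_district': ~200, 'affluent_district': 0}
```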

Third, the law contained insufficient safeguards relating to the principles of purpose limitation and data minimisation. Although the legislation provided an exhaustive list of data categories to be used in SyRI projects, the personal data eligible for processing in SyRI were essentially limitless. For this reason, the assessments conducted by national authorities on whether a given interference with private life was necessary and proportionate in light of the specific purpose of a SyRI project were critical. However, the Court found that there was no legal requirement for either a review by an independent third party prior to data processing approval or a comprehensive review of the necessity of using SyRI in specific instances.44

Considering the lack of respect for the principle of transparency in conjunction with inadequate safeguards, the Court thus found a violation of Article 8 of the ECHR and declared that the SyRI legislation ‘ha[s] no binding effect with respect to [the admissible claimants] and on the individuals whose interests these parties promote’.45 The state did not appeal the judgment,46 which is now final. The District Court of the Hague’s judgment has been lauded as a ‘landmark ruling’47 in the context of the digital welfare state, an area to which little attention has been paid thus far.48

3. FRAMING THE DISPUTE: THE QUESTIONS NEVER ASKED AND THE QUESTIONS NEVER ANSWERED

Notwithstanding the significant finding that SyRI legislation and projects were in violation of Article 8 of the ECHR, it is worth unpacking what the SyRI case was really about. This section discusses, first, how the plaintiffs chose to argue the case and, second, how the Court chose to frame it.

Starting with the first question, the claims by the plaintiffs focused on whether the data processing and the prediction analysis pursued by SyRI complied with the requirements of the right to privacy and data protection. The plaintiffs also alleged a violation of the rights to effective remedy and fair trial under the ECHR. These complaints and arguments bring together three different approaches to addressing the adverse impacts of the use of AI: the data protection legal regime, the human rights law regime and the algorithmic accountability approach.49 Each approach triggers distinct bodies of law, prioritises different concerns and entails different vocabularies and mindsets, but they can, and should, be complementary.50 The pertinence of the data protection regime lies in its detailed principles for lawful data processing and the rights of data subjects. The value of human rights law to the function of algorithmic systems is that it offers an organising framework for assessing algorithms, bringing the language of law and human rights back to the fore (instead of loosely used terms such as bias or harm) and emphasising the obligations of states.51 The algorithmic accountability approach focuses on how transparency, explainability and understandability in the design and implementation of algorithms enable individuals to exercise their rights.52 In the SyRI case, human rights law and data protection law formed the standard of assessment, but what made the case persuasive in the eyes of the Court were the serious shortcomings regarding algorithmic accountability and transparency (the absence of transparency into how SyRI worked in conjunction with the lack of any alternative safeguards).

Having said this, the plaintiffs did not bring the full force of the human rights law applicable to Dutch authorities into play. Due to the monistic nature of the Dutch legal system, rights under human rights treaties binding on the Netherlands can be directly invoked before national courts.53 It is curious that, although the effective exercise of the right to social security, as protected under Article 9 of the International Covenant on Economic, Social and Cultural Rights (ICESCR),54 lies at the core of SyRI’s functioning, the plaintiffs did not bring this matter to the foreground. There was neither a claim concerning a possible violation of the right to social security nor an argument for a possible breach of the right to social security in conjunction with the principle of non-discrimination. A reasonable argument could have been made that the deployment of SyRI unduly restricts people’s access to social benefits to which they have a fundamental entitlement.55 Beneficiaries of social security schemes must be able to participate in the administration of the social security system. Individuals also have the right to seek, receive and impart information on all social security entitlements in a clear and transparent manner,56 especially if the imposition of technological requirements poses additional barriers.57 The lack of transparency surrounding SyRI interferes with these rights. Moreover, no claim was raised regarding the right not to be discriminated against (Article 2(2) of the ICESCR and/or Article 14 of the ECHR) in connection with the right to privacy, even though the interference especially concerned citizens of low economic status, which could constitute discrimination on the basis of social origin, property or other status.

The plaintiffs could potentially have invoked a violation of the right to science under Article 15(1)(b) of the ICESCR, a right highly neglected in both scholarship and judicial practice. While framed as a positive obligation, it is arguable that the right to science also includes a negative component: a right to protection from abuse or adverse effects of scientific progress.58 This interpretation is supported by a series of views and documents.59 It could have been argued that the use of SyRI for (semi)automated data processing for the purpose of generating risk profiles relies on recent scientific advances in the field of information technology. As the Court’s analysis showed, SyRI interferes with the right to privacy and potentially the right to social security and the prohibition of discrimination. SyRI can thus be considered an application of ‘scientific and technical progress […] contrary to the enjoyment of human dignity and human rights’. Considering that the right to protection from abuses or adverse effects of scientific progress is yet to be tested in practice, SyRI could have presented a chance to invoke it and thus advance associated definitional and interpretational questions regarding its relevance and protective scope.

In addition to how the plaintiffs chose to frame their claims, it is of equal interest how the Court construed its standard of assessment and framed the case. It conducted no separate analysis of the different claims raised under human rights and data protection law. Instead, it focused primarily on Article 8 of the ECHR and, in its interpretation, brought the detailed principles and rights fleshed out under the GDPR into the scope of Article 8. In this way, the Court pursued a human rights law analysis, which was greatly strengthened by EU law guarantees. Yet, its engagement with the GDPR was selective, since it did not address all claims raised by the plaintiffs, thereby considerably narrowing the scope of the complaints submitted to it. In view of its finding that the SyRI legislation violated the right to privacy, the Court deemed it unnecessary to assess whether the legislation was also in breach of various provisions of the GDPR, including Article 22.60 It would have been welcome had the Court opined, for example, on whether an individual could effectively exercise his or her right to remedy when challenging a risk report, in light of the absence of any explanation as to how SyRI worked and any notification that such a report had been generated. The Court’s disinclination to answer the question of whether the submission of the risk report qualified as automated decision-making deserves special mention here. The Court stated that, for the purposes of reviewing the SyRI legislation against Article 8 of the ECHR guarantees, this question was irrelevant.61 However, the Court’s contribution on this matter would have been particularly useful in terms of the scope and nature of Article 22 of the GDPR, especially since courts rarely have the opportunity to interpret and apply this provision.62 In a recent opinion, the European Commission for Democracy through Law stressed the need for the legislator to justify that it abides by the requirements of Article 22 of the GDPR.63 It should be noted that Article 22 of the GDPR and the Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data,64 which is the only existing legally binding international treaty with global relevance (although not yet in force), set out a global benchmark for automated decision-making.65

Overall, the Court’s choice to narrow the scope of the dispute may be understandable in the interest of procedural economy, but the unresolved complaints left much to be desired. Courts, pursuant to procedural economy, may choose not to rule on all submissions by the parties or to address all possible legal bases of the complaint(s). The rationale of procedural economy may be underpinned by different considerations, including time and resource constraints or a court’s reluctance to address a complicated issue when it is not indispensable to resolving a dispute. However, such an approach may be unduly reductive, if it risks not serving justice to the plaintiffs or remedying one of their core complaints.66 The Court missed an opportunity to convey an important message to Dutch authorities as to what is permissible under human rights and data protection law.67 This would have ensured legal certainty in the circumstances of this case and, more broadly, in the deployment of similar algorithmic systems.

One final point that should be highlighted concerns the admissibility stage of the case and the limitations of international human rights law in effectively grasping the challenges posed by algorithmic decision-making.68 The Court dismissed the complaints brought on behalf of the two individuals due to a lack of ‘sufficiently concrete and personal interest’69 but it did hear the same complaints brought by the civil society interest groups. This is because the Dutch Civil Code contains a unique provision that offers the possibility of public interest litigation.70 A well-known example of how this provision has been put into use in the past is the Urgenda case.71 Therefore, under the Dutch Civil Code, civil society groups had a legal interest in bringing proceedings seeking to protect the interests of their support base. If this unique provision did not exist in Dutch legislation, the civil society plaintiffs—who lacked the necessary victim status to bring a human rights complaint—would also have lacked standing and, therefore, the case would have been inadmissible in its entirety. However, in other jurisdictions that do not provide an avenue for public interest litigation by civil society groups, the opacity of such algorithmic systems could pose a serious challenge to their judicial review, since individual citizens will most likely be unable to prove victim status and will thus be unable to raise admissible claims.72

4. THE IMPLICATIONS OF THE SECRECY SURROUNDING SYRI FOR THE EXERCISE OF HUMAN RIGHTS AND EFFECTIVE JUDICIAL OVERSIGHT

A prominent and overarching theme underpinning all aspects of the SyRI case was how the lack of transparency, first, inhibited data subjects from effectively exercising their rights and, second, undermined the exercise of the judicial function. Algorithmic transparency, in this instance, was not merely complementary to the human rights law claims but provided strong, if not the strongest, grounds for finding a violation of the right to privacy. This confirms how AI systems challenge the traditional concepts of transparency and accountability and, at the same time, reinforce the need for ‘radical transparency about the impact of an AI system in the information environment’.73

More specifically, the Court found that the legislation provided little, if any, insight into the risk model and risk indicators used, the objective factual data that could justifiably lead to the inference of an increased risk or the data processed in SyRI projects.74 Crucial information concerning the algorithm’s use was deliberately kept secret—an instance of intentional opacity.75 The Netherlands refused to disclose additional information on the grounds that citizens would otherwise ‘game the system’, an argument invoked by many countries with regard to different uses of algorithmic systems in different areas.76 The UN Special Rapporteur on extreme poverty and human rights, in the amicus curiae brief submitted to the Court, strongly maintained that disclosing information on how AI processes and socio-technical systems function is a matter of public interest serving transparent decision-making.77 Although there is merit in protecting the public interest in investigating and prosecuting crimes, as well as in the inspecting and monitoring work carried out by public authorities, there is equally significant merit, to say the least, in protecting the public interest in the ability of those concerned to obtain information about how these systems function and how their rights are affected.78 Algorithmic risk models constitute an essential part of applying and enforcing the law—one might even argue that in this instance SyRI is the law79—by determining whose right to social security may be affected and who may be the subject of intrusive scrutiny of their data and person. In this sense, the models are not merely a matter of internal concern to welfare bureaucracies but a matter of public interest. It is arguable that the same standard of the rule of law concerning the publicness and transparency of law also needs to be applied to algorithmic systems used by public authorities, so that citizens know what is expected of them.80

Furthermore, the absence of information on SyRI hampered the Court’s ability to pass judgment on many crucial points and, therefore, to exercise proper judicial oversight of the application of the law in accordance with international human rights. On multiple occasions, the Court expressly held that it was unable to answer legal questions such as the legal nature of SyRI, whether risk profiles were developed in the course of its use or whether the risk of discrimination was sufficiently neutralised.81 The question then arises as to how the lack of information is to be appreciated when assessing a potential human rights violation. According to the UN Special Rapporteur, such a lack of insight entails that the burden of proof falls upon the government to explain convincingly why more openness about such an important system is impossible.82 The Court seems to have duly considered the arguments made by the Special Rapporteur. On the one hand, its task was to decide the questions raised as objectively as possible with the limited information available; on the other hand, where the lack of transparency and the absence of safeguards affected the rights of individuals, the state’s scant justification led the Court to conclude that neither the legal basis nor the use of SyRI met the requirements of Article 8 of the ECHR. Therefore, in this instance, the state’s failure to provide a convincing explanation for the lack of transparency and alternative safeguards to protect data subjects’ rights was critical for affirming a violation of the right to privacy. This sets a laudable precedent for courts around the world on how to deal with a lack of transparency surrounding algorithmic decision-making tools.

Alternatively, the Court could have considered the option of holding a closed hearing, similar to the approach taken in the domestic proceedings of Big Brother Watch and Others v United Kingdom.83 This could allow courts to gain a deeper understanding of algorithms deployed in the area of social welfare and assess their compliance with data protection and human rights law without exposing information to the public that would allow citizens to ‘game the system’.

It should be clarified that the Court did not find that the unlawfulness of the SyRI legislation, insofar as the use of SyRI was concerned, entailed an obligation on the state to disclose the inner workings of the risk model.84 It did note, however, that proceedings were pending before administrative courts entrusted with deciding this matter.

5. SYRI'S DISCRIMINATORY EFFECTS: INTERNATIONAL HUMAN RIGHTS LAW AND DATA PROTECTION AS WELL-SUITED FRAMEWORKS?

An important finding in this case was that the state had employed hidden algorithmic risk models by specifically and exclusively targeting neighbourhoods inhabited by low-income and minority-background residents. Predictive analytics, algorithms and other forms of AI are highly likely to reproduce and exacerbate biases reflected in existing data and policies. Identifying and counteracting such biases in designing the digital welfare state is important and requires precisely what was found to be lacking in the Dutch legislation: transparency in law and in practice about how an AI system works and broad-based inputs into policymaking processes.85 The public, and especially those affected by the welfare system, need to be able to understand and evaluate the processes and outcomes buried deep within the algorithms.86 Those targeted by algorithmic systems are the least likely to be able to defend themselves against intrusions and the ensuing negative consequences.87 There is already research to substantiate the creation of ‘digital poorhouses’,88 referring to the adverse impacts of automated decision-making on vulnerable communities.

The Court ruled that the SyRI legislation contained no safeguards to neutralise the risk of discriminatory and stereotyping effects. Notwithstanding the significance of this finding, there was no separate examination of a possible violation of Article 8 of the ECHR read together with Article 14 of the ECHR, nor of a possible violation of Article 9 of the ICESCR in combination with Article 2(2) of the ICESCR. The right to social security encompasses the right to access and maintain benefits, without discrimination—whether in law or in fact, direct or indirect—on grounds such as race, colour, sex, age, language or national or social origin,89 especially with regard to individuals belonging to disadvantaged and marginalised groups.90 Curiously, none of these claims were raised by the plaintiffs and, even if they had been raised, it would have been difficult for the Court to entertain them precisely due to the lack of information and evidence. This may have been the reason that the plaintiffs did not raise the complaints in the first place.

Proving potential and/or indirect discrimination is a challenging task in any context, and all the more so when algorithmic systems and prediction analytics come into play. The UN Special Rapporteur’s suggestion was that the burden lay with the government to provide evidence dispelling the suspicion that SyRI’s singular focus on poor and marginalised groups in Dutch society was discriminatory.91 While this is a viable path, it is unclear what it entails in terms of evidence and burden of proof in cases where algorithmic systems do not necessarily have a singular focus. In such cases it will be impossible for plaintiffs to successfully argue for (a risk of) discrimination without insight into the risk factors used by the algorithm.

The difficulties of substantiating the (potential) discriminatory effect of algorithmic systems cast doubt on whether data protection rules and international human rights law, as they currently stand, are well-suited to address risks posed to specific groups. The same concern also applies to conceptualising new types of discriminatory harm and/or societal harm, some of which we may not even be able to anticipate at this point in time. In cases where no specific individual is necessarily discriminated against, but rather groups, people and neighbourhoods are being targeted, anti-discrimination laws and human rights provisions arguably fall short of grasping and articulating the issues at stake.92 The European Data Protection Supervisor (EDPS) calls for AI tools and respective regulation to be geared towards protecting individuals, collectives and society as a whole from any negative impacts.93 Therefore the notion of harm should be construed as inclusively as possible in order to capture collective harm.94 The assessment of the level of risk of a given use of AI should be based not only on the impact on the affected parties but also on wider societal considerations, including the impact on the democratic process, due process and the rule of law and the public interest.95 Many scholars now discuss threats to group privacy, in response to the fact that the individualistic notion of privacy, as protected under data protection regimes, is not apt for grasping problems raised by sophisticated data analytics, including inferences and predictions made on a large scale.96 These types of challenges bring into play not only existing limitations of substantive human rights law (for example, the protective scope of the principle of non-discrimination or other human rights) but also crucial questions around admissibility (for instance, the inability to claim a violation of group rights under current human rights law and, therefore, lack of standing97) or the evidence and burden of proof required to substantiate such claims.

6. CONCLUSION: THE LEGISLATOR'S RESPONSIBILITY TO UPHOLD THE RULE OF LAW WHEN DEPLOYING NEW TECHNOLOGIES

Although the rapid deployment of AI-based products and services in public administration is a high-level political priority for many states,98 legal, ethical and socio-economic concerns are increasingly being voiced as to the responsible stewardship of these products and services.99 There is a notable lack of prior scrutiny, democratic oversight and public debate. In the case of SyRI, the government introduced a legal basis for its functioning years after its initial deployment. There was very little parliamentary debate about the system’s introduction, and the government proceeded with its plans despite repeated warnings from the data protection authority and cautionary advice from the Advisory Division of the Council of State.100

It should not go unnoticed that the use of new technologies, including AI, machine learning and (semi)automated algorithmic systems, appears to create and sustain its own powerful claim to self-referential exceptional legitimacy and authority. The use of AI is regularly being portrayed as self-evident and self-justified, even though there is limited empirical evidence to suggest that its use in government is achieving the intended results.101 SyRI’s ability to achieve the purported objective of reducing benefit fraud has been seriously disputed.102 AI is not necessarily the most appropriate technology for all contexts and needs;103 this is a matter that lawmakers need to seriously reflect upon before deploying AI in the digital welfare state.

Many important digital welfare state initiatives fall short of meeting basic requirements of legality.104 Yet the design and implementation of AI systems by public authorities are not exempt from the rule of law and legality.105 The District Court of the Hague in SyRI stressed the need for the state to articulate a clear, accessible and foreseeable legal basis for AI before deploying it in public service delivery. The legislator bears a special responsibility to provide effective safeguards protecting against abuse and arbitrariness when developing and applying new technologies.106

The deployment of AI in the social welfare state must also be consistent with states’ existing obligations under national and international law.107 The discussion brought to the fore and analysed three different approaches/regimes for addressing the adverse impacts of the use of AI: human rights law, data protection and algorithmic accountability. Each approach may prioritise different concerns and entail different vocabularies, but they should be seen as complementary. In SyRI, human rights law and data protection law formed the Court’s standard of assessment, while algorithmic accountability and transparency (or, more accurately, the absence thereof) weighed heavily in the Court’s legal reasoning. Interestingly, the principle of transparency is an apt example of how the previously mentioned legal/regulatory frameworks complement each other. Different variations of transparency came together at different steps of the Court’s evaluation. First, transparency (alongside accessibility and foreseeability) was a significant factor in assessing the intrusiveness of the interference with Article 8 of the ECHR and SyRI’s legality. Second, the principle of transparency under the GDPR and the detailed relevant data subjects’ rights and data controllers’ obligations were used by the Court to assess the necessity and proportionality of the restriction of the right protected under Article 8 of the ECHR. Third, the Court emphasised that the absence of algorithmic transparency as to how SyRI worked inhibited individuals from being able to claim and effectively exercise their rights.

It is without doubt that human rights law provides an invaluable organising framework of concrete rights and respective obligations for assessing the use of AI systems in the digital welfare state. At the same time, the discussion raised certain questions on whether and, if so, how international human rights law (and data protection rules), as they currently stand, are well-suited to address the challenges posed by the use of AI systems. Some of these difficulties include the substantiation of (the risk of) indirect discrimination when prediction analytics come into play; the allocation of the burden of proof when the state refuses to disclose information about a given AI system; and the discriminatory risks posed not only to individuals but also to groups and, consequently, the relevance of human rights law to conceptualising group privacy and new types of harm for groups and/or societal harm. It remains to be seen how human rights law will rise to the challenge by progressively and creatively developing new avenues to reach its potential.

Footnotes

1

District Court of the Hague, 6 March 2020, ECLI:NL:RBDHA:2020:865, available in English at: uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBDHA:2020:1878 [last accessed 14 January 2022].

2

1950, ETS 5.

3

See Algorithm Watch, Automating Society Report 2020, 1 October 2020, available at: algorithmwatch.org/en/automating-society-2020/ [last accessed 14 January 2022].

4

Crider, ‘Home Office Says It Will Abandon Its Racist Visa Algorithm’, 4 August 2020, available at: www.foxglove.org.uk/2020/08/04/home-office-says-it-will-abandon-its-racist-visa-algorithm-after-we-sued-them/ [last accessed 14 January 2022].

5

Tiffany, ‘Algorithmic Grading Is Not an Answer to the Challenges of the Pandemic’, 12 August 2020, available at: algorithmwatch.org/en/uk-algorithmic-grading-gcse/?fbclid=IwAR0LsQ4d1g-KLHO2GRX4p1yCuJoKVYiTbXNtxD6YbyE_2ZsqYoq9A3BpCJ0 [last accessed 14 January 2022]. See further Marsh, ‘Councils Scrapping Use of Algorithms in Benefit and Welfare Decisions’, Guardian, 24 August 2020, available at: www.theguardian.com/society/2020/aug/24/councils-scrapping-algorithms-benefit-welfare-decisions-concerns-bias [last accessed 14 January 2022].

6

[2020] EWCA Civ 778 at paras 45, 77–83, 107. The AI system calculated a given Universal Credit entitlement automatically and exclusively according to one’s assessment period. In many cases this led to the (wrong) assessment that a claimant would earn twice as much in their upcoming assessment period as they had in their most recent assessment period and, thus, to a corresponding reduction in their pending Universal Credit payment. The Court of Appeal found that the Secretary of State’s failure to provide an alternative method of calculating the claimants’ earned income was irrational.

7

Act of 9 October 2013 to amend the Act of 29 November 2001 on Work and Income (Implementation Organisation Structure) and any other acts pertaining to tackling fraud by exchanging data and the effective use of data known within the government, Bulletin of Acts and Decrees 2013, 405 available at: wetten.overheid.nl/BWBR0013060/2021-01-01 (in Dutch) [last accessed 14 January 2022].

8

Decree of 1 September 2014 to amend the Decree of 20 December 2001 in connection with rules for tackling fraud by exchanging data and the effective use of data known within the government with the use of SyRI, Bulletin of Acts and Decrees 2014, 320, available at: wetten.overheid.nl/BWBR0013267/2021-07-01 (in Dutch) [last accessed 14 January 2022].

9

Charter of Fundamental Rights of the European Union [2012] OJ C 326/391.

10

1966, 999 UNTS 171.

11

Regulation 2016/679/EU of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data [2016] OJ L 119. See SyRI, supra n 1 at para 6.2.

12

SyRI, supra n 1 at paras 6.15, 7.1. See below at section 3 for further discussion.

13

Ibid., at paras 6.19–6.41.

14

Ibid., at para 6.21.

15

Ibid., at para 6.37.

16

Ibid., at para 6.41.

17

Ibid., at para 6.49.

18

Ibid.

19

Ibid., at para 6.50.

20

Article 5a.1(3) of the SUWI Decree, supra n 8 provides 17 data categories.

21

For the very vague description of the linking of the files/data in SyRI and what this entails see Article 5a.2 of the SUWI Decree, supra n 8.

22

SyRI, supra n 1 at paras 4.24, 6.51. Under Article 1.1(z) of the SUWI Decree, supra n 8 a risk model is defined as ‘a model that consists of predetermined indicators’ leaving open the possibility that different risk models may be introduced and used (translation by the authors).

23

SyRI, supra n 1 at paras 6.50–6.51.

24

Ibid., at para 6.54.

25

According to Article 5a.5(1) of the SUWI Decree, supra n 8, one of the aims of the risk reporting register, in which data on risk reports are processed, is to ‘inform subjects of risk reports on request whether his data is included in the register’ (emphasis added). Moreover, ‘subjects will not be informed separately after the investigation about the risk reports that are processed in the register’ (Article 5a.5(4)) (translation by the authors).

26

Article 5a.4(1) of the SUWI Decree, supra n 8.

27

SyRI, supra n 1 at para 6.54.

28

Ibid., at para 6.59.

29

Article 65(5) and (6) of the SUWI Act, supra n 7; Article 5a.5(5) of the SUWI Decree, supra n 8.

30

SyRI, supra n 1 at paras 6.59–6.60.

31

Rainey, McCormick and Ovey, Jacobs, White and Ovey: The European Convention on Human Rights, 8th edn (2021) at 350–4, 424–8.

32

Ibid., at 352.

33

Application Nos 30562/04 and 30566/04, Merits and Just Satisfaction, 4 December 2008.

34

SyRI, supra n 1 at para 6.72.

35

Ibid.

36

Ibid., at paras 6.82, 6.86, 6.95.

37

Polcák, ‘Article 12. Transparent Information, Communication and Modalities for the Exercise of the Rights of the Data Subject’ in Kuner, Bygrave and Docksey (eds), The EU General Data Protection Regulation (GDPR) A Commentary (2020) 398, at 401–2.

38

SyRI, supra n 1 at 6.87, 6.89.

39

Ibid., at para 6.90.

40

Ibid.

41

Brief by the United Nations Special Rapporteur on extreme poverty and human rights as Amicus Curiae in the case of NJCM c.s./De Staat der Nederlanden (SyRI) before the District Court of The Hague (case number: C/09/550982/HA ZA 18/388), available at: www.ohchr.org/Documents/Issues/Poverty/Amicusfinalversionsigned.pdf [last accessed 14 January 2022].

42

SyRI, supra n 1 at para 6.93.

43

Ibid., at paras 6.91–6.94.

44

Ibid., at paras 6.99–6.102.

45

Ibid., at para 7.2. See also at paras 6.110–6.111.

46

Letter to the President of the House of Representatives from the State Secretary for Social Affairs and Employment, Tamara van Ark, on a court judgment regarding SyRI, 23 April 2020, available at: www.rijksoverheid.nl/binaries/rijksoverheid/documenten/publicaties/2020/04/23/vertaling-kamerbrief-naar-aanleiding-van-vonnis-rechter-inzake-syri/Engelse+vertaling+Kamerbrief+nav+vonnis+SyRI.pdf [last accessed 14 January 2022] (in Dutch).

47

Alston, ‘Landmark Ruling by Dutch Court Stops Government Attempts to Spy on the Poor’, 5 February 2020, available at: www.ohchr.org/en/NewsEvents/Pages/DisplayNews.aspx?NewsID=25522&LangID=E [last accessed 14 January 2022].

48

Report of the Special Rapporteur on extreme poverty and human rights, A/74/493, 11 October 2019.

49

McGregor, Murray and Ng, ‘International Human Rights Law as a Framework for Algorithmic Accountability’ (2019) 68 International and Comparative Law Quarterly 314; Yeung, Howes and Pogrebna, ‘AI Governance by Human Rights–Centered Design, Deliberation, and Oversight: An End to Ethics Washing’ in Dubber, Pasquale and Das (eds), The Oxford Handbook of Ethics of AI (2020) 77.

50

Report of the Special Rapporteur on the promotion and protection of the rights to freedom of opinion and expression, A/73/348, 29 August 2018 at para 48.

51

Cf McGregor, Murray and Ng, supra n 49 at 320, 325 (infra n 85), 327 who argue that human rights law is up to the task.

52

Ibid., at 320; Diakopoulos, ‘Algorithmic Accountability: Journalistic Investigation of Computational Power Structures’ (2015) 3 Digital Journalism 398; Wachter, Mittelstadt and Russell, ‘Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR’ (2018) 31 Harvard Journal of Law & Technology 841.

53

See Articles 93, 94 of the Constitution of the Kingdom of the Netherlands (24 August 1815, amended as of 21 December 2018) available in English at: www.government.nl/binaries/government/documents/reports/2019/02/28/the-constitution-of-the-kingdom-of-the-netherlands/WEB_119406_Grondwet_Koninkrijk_ENG.pdf [last accessed 14 January 2022].

54

1966, 993 UNTS 3.

55

See UNSR Amicus Curiae, supra n 41 at paras 19–27 regarding the argument on the applicability of right to social security.

56

Ibid., at para 26.

57

Report of the Special Rapporteur on extreme poverty and human rights, supra n 48 at para 51.

58

Saul, Kinley and Mowbray, The International Covenant on Economic, Social and Cultural Rights: Commentary, Cases, and Materials (2016) at 1219; Müller, ‘Remarks on the Venice Statement on the Right to Enjoy the Benefits of Scientific Progress and its Applications (Article 15(1)(b) ICESCR)’ (2010) 10 Human Rights Law Review 765 at 773.

59

See, UNESCO, Venice Statement on the Right to Enjoy the Benefits of Scientific Progress and its Applications (2009) at paras 14(d), 15(a), available at: www.aaas.org/sites/default/files/VeniceStatement_July2009.pdf [last accessed 14 January 2022]; Article 8, UN Declaration on the Use of Scientific and Technological Progress in the Interests of Peace and for the Benefit of Mankind, Resolution 3384 (XXX) (1975) A/RES/3384(XXX); UN Committee on Economic Social and Cultural Rights, Guidelines on the Treaty-Specific Documents to be Submitted by State Parties under Articles 16 and 17 of the International Covenant on Economic, Social and Cultural Rights (2009) E/C.12/2008/224 at para 70(b).

60

SyRI, supra n 1 at para 6.107.

61

Ibid., at para 6.60.

62

Gantchev, ‘Data Protection in the Age of Welfare Conditionality: Respect for Basic Rights or a Race to the Bottom?’ (2019) 21 European Journal of Social Security 3 at 10–11; Bygrave, ‘Article 22. Automated Decision-making, Including Profiling’ in Kuner, Bygrave and Docksey supra n 37, 522.

63

European Commission for Democracy through Law, The Netherlands—Opinion on the Legal Protection of Citizens, Opinion No 1031/2021, 18 October 2021 at para 95, available at: www.venice.coe.int/webforms/documents/default.aspx?pdffile=CDL-AD(2021)031-e [last accessed 14 January 2022].

64

2018, ETS 223 (not in force); Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, 1981, ETS 108.

65

It should be mentioned here that the Court’s disinclination to assess possible violations of rights under the GDPR led to leaving unaddressed the potentially applicable remedies and penalties provided specifically under the GDPR (Articles 77–84). On the other hand, the choice of either the ECHR and/or the GDPR as the Court’s standard of assessment does not seem to make a difference as to whether Dutch courts may declare a piece of legislation invalid or with no binding effect. Dutch courts may declare domestic laws invalid when they conflict with the ECHR or secondary EU law. See Taekema, ‘Introducing Dutch law’ in Taekema, de Roo and Elion-Valter, Understanding Dutch Law (2020) 15, at 19, 22 (supra n 15).

66

Kudla v Poland Application No 30210/96, Merits and Just Satisfaction, 26 October 2000 at paras 46–156; Palombino, ‘Judicial Economy and Limitation of the Scope of the Decision in International Adjudication’ (2010) 23 Leiden Journal of International Law 909, at 918, 927–8.

67

Harris et al. (eds), Harris, O’Boyle & Warbrick: Law of the European Convention on Human Rights, 4th edn (2018) at 804.

68

The informative works by McGregor, Murray and Ng, supra n 49 at 320 and Yeung, Howes and Pogrebna, supra n 49, besides a very vague recognition that international human rights law is not a panacea, do not acknowledge or discuss such limitations.

69

SyRI, supra n 1 at paras 6.15, 7.1.

70

Article 305a of Book 3, Dutch Civil Code, available at: wetten.overheid.nl/jci1.3:c:BWBR0005291&boek=3&titeldeel=11&artikel=305a [last accessed 14 January 2022] (in Dutch). See Spijkers, ‘Public Interest Litigation Before Domestic Courts in The Netherlands on the Basis of International Law: Article 3:305a Dutch Civil Code’, EJIL: Talk!, Blog of the European Journal of International Law, 6 March 2020, available at: www.ejiltalk.org/public-interest-litigation-before-domestic-courts-in-the-netherlands-on-the-basis-of-international-law-article-3305a-dutch-civil-code/ [last accessed 14 January 2022].

71

Supreme Court of the Netherlands (civil division), The State of the Netherlands and Stichting Urgenda, 20 December 2019, ECLI:NL:HR:2019:2007.

72

But see from a data protection point of view the potential of Article 80 of the GDPR on representation of data subjects.

73

Report of the Special Rapporteur on the promotion and protection of the rights to freedom of opinion and expression, supra n 50 at para 51. Also at paras 35, 48–52.

74

SyRI, supra n 1 at paras 6.49, 6.87–6.89, 6.86, 6.65.

75

Niklas, ‘Human Rights-Based Approach to AI and Algorithms Concerning Welfare Technologies’ in Barfield (ed), The Cambridge Handbook of the Law of Algorithms (2021) 517 at 521–2.

76

As far as Europe is concerned, see the excellent work by Algorithm Watch, Automating Society Report 2020, supra n 3.

77

UNSR Amicus Curiae, supra n 41 at paras 24–25.

78

Ibid., at para 25.

79

Schartum, ‘From Legal Sources to Programming Code—Automatic Individual Decisions in Public Administration and Computers under the Rule of Law’ in Barfield, supra n 75, 301.

80

UNSR Amicus Curiae, supra n 41 at para 26.

81

SyRI, supra n 1 at paras 6.49, 6.53, 6.94.

82

UNSR Amicus Curiae, supra n 41 at para 27.

83

Application Nos 58170/13, 62322/14, 24960/15, Merits and Just Satisfaction, 25 May 2021 at paras 31–36.

84

SyRI, supra n 1 at para 6.115.

85

Report of the Special Rapporteur on the promotion and protection of the rights to freedom of opinion and expression, supra n 50 at paras 49–54.

86

Report of the Special Rapporteur on extreme poverty and human rights, supra n 48 at para 82.

87

UNSR Amicus Curiae, supra n 41 at para 36.

88

Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018). For further discussion see Niklas, supra n 75 at 526.

89

Report of the Special Rapporteur on extreme poverty and human rights, supra n 48 at para 51; Committee on Economic Social and Cultural Rights, General Comment No 19: The right to social security (Art. 9 of the Covenant), 4 February 2008 at paras 29, 30.

90

General Comment 19, supra n 89 at para 23.

91

UNSR Amicus Curiae, supra n 41 at para 37.

92

Cf McGregor, Murray and Ng, supra n 49 at 320, 325 (supra n 85), 327, who argue that human rights law is up to this task.

93

European Data Protection Supervisor, Opinion 4/2020 on the European Commission’s White Paper on Artificial Intelligence—A European Approach to Excellence and Trust, 29 June 2020 at 9, available at: edps.europa.eu/sites/edp/files/publication/20-06-19_opinion_ai_white_paper_en.pdf [last accessed 14 January 2022].

94

Ibid., at 12.

95

Ibid.

96

Niklas, supra n 75 at 524. See Mittelstadt, ‘From Individual to Group Privacy in Big Data Analytics’ (2017) 30 Philosophy & Technology 478; Taylor, Floridi and van der Sloot, ‘Introduction: A New Perspective on Privacy’ in Taylor, Floridi and van der Sloot (eds), Group Privacy: New Challenges of Data Technologies (2017) 4.

97

See discussion earlier in Section 3 on the difficulties for the plaintiffs to prove concrete and personal interest as a requirement for their legal standing and the potential from a data protection point of view of Article 80 of the GDPR.

98

European Commission, White Paper, On Artificial Intelligence: A European Approach to Excellence and Trust, COM(2020) 65 final, 19 February 2020 at 8, available at: ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf [last accessed 14 January 2022].

99

For example, see: digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence [last accessed 14 January 2022].

100

Council of Europe, Parliamentary Assembly, Committee on Equality and Non-Discrimination, Report, Preventing Discrimination Caused by the Use of Artificial Intelligence, Doc 15151, 29 September 2020 at para 33, available at: pace.coe.int/en/files/28715/html [last accessed 14 January 2022]. For a detailed account, see Gantchev, supra n 62 at 16–19.

101

Ibid., at 51, 80.

102

UNSR Amicus Curiae, supra n 41 at paras 33–34.

103

EDPS Opinion 4/2020, supra n 93 at 6.

104

Report of the Special Rapporteur on extreme poverty and human rights, supra n 48 at para 42; Committee on Standards in Public Life, Artificial Intelligence and Public Standards: Report, 10 February 2020, at 8, available at: www.gov.uk/government/publications/artificial-intelligence-and-public-standards-report [last accessed 14 January 2022].

105

Misuraca and van Noordt, AI Watch—Artificial Intelligence in Public Services (2020) at 49, available at: publications.jrc.ec.europa.eu/repository/handle/JRC120399 [last accessed 14 January 2022]. See also Automating Society Report, supra n 3 for overview of instances of deployment of automated decision-making in the public sector with questionable records of adhering to transparency standards and effective judicial review. In the case of SyRI, the system had been deployed before its legal basis was introduced with the SyRI legislation in 2014.

106

SyRI, supra n 1 at paras 6.5, 6.85.

107

Council of Europe, Ad Hoc Committee on Artificial Intelligence, Feasibility Study on a Legal Framework on AI Design, Development and Application Based on Council of Europe’s Standards, CAHAI(2020)23, 17 December 2020 at 20–6, available at: rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da [last accessed 14 January 2022].

Author notes

Assistant Professor of International Law, Faculty of Law, University of Groningen, the Netherlands ([email protected]).

Postgraduate Student, Scottish Research Centre for IP and Technology Law, School of Law, University of Edinburgh, United Kingdom ([email protected]).