Mario Pasquale Amoroso, Intelligent Borders: Exploring the Suitability of Artificial Intelligence Systems in Refugee Status Determination Under International Law, Refugee Survey Quarterly, Volume 43, Issue 4, December 2024, Pages 410–426, https://doi.org/10.1093/rsq/hdae021
Abstract
This article assesses the potential use of Artificial Intelligence (AI) in Refugee Status Determination, exploring how AI systems, including biometrics, predictive analytics, and emotion recognition, could support or replace human decision-making in determining refugee status. While AI could speed up and improve the objectivity of Refugee Status Determination procedures, the article raises concerns about its limitations in assessing the subjective fear of persecution, a critical element of refugee claims. The inability of AI systems to fully account for emotional and personal nuances leads to the possibility of discriminatory practices, particularly when AI is used without human oversight. The study highlights how AI could exacerbate power imbalances between asylum-seekers and States, transforming these technologies into tools for anti-immigration policies. It therefore calls for caution in the use of AI in Refugee Status Determination procedures and stresses that human interaction remains essential for the fair assessment of refugee claims. This article concludes that while AI can assist in certain aspects of the Refugee Status Determination procedures, its full implementation without careful legal and human rights considerations poses significant risks.
1. INTRODUCTION
In recent decades, Artificial Intelligence (AI) has proved to be particularly influential in a variety of contexts, often leading to benefits in terms of making processes originally governed by human activity easier and faster. However, despite the potential of this technology, its widespread use in the everyday lives of human communities has raised concerns about the impact it could have on fundamental human rights, such as the right to privacy. In particular, in the context of migration, AI has been increasingly used as a tool to support administrative procedures and decision-making,1 potentially leading to summary assessments and collective judgments that could undermine fundamental rights and violate principles of international migration law (hereinafter IML).
The variety of technologies and the multitude of functions that AI can perform have so far made any attempt at comprehensive regulation of the subject hard to achieve. Only recently, on 13 March 2024, the European Parliament approved the Artificial Intelligence Act (hereinafter EU AI Act), containing harmonised rules aimed at ensuring that these technologies are used in a safe and non-discriminatory way.2 In this ambitious instrument, the protection of human rights is a key objective,3 the realisation of which relies on the establishment of differentiated obligations for both providers and users, identified according to a risk-based approach grounded in a classification of the potential harm that AI systems could cause to the rights of individuals.4 One of the applications of AI that could have a significant impact on fundamental rights is the use of these technologies in Refugee Status Determination (RSD) procedures and border control activities. The EU AI Act highlights how systems used in migration and border control affect vulnerable people and how their accuracy and transparency are particularly important to ensure respect for fundamental rights.5 In particular, AI systems used for border control management are identified in Annex III of the final text as "high-risk AI", that is technologies that should only be put into service if they comply with mandatory requirements,6 including the implementation of a risk management system and human oversight in relation to these AI systems.7 In addition, some of the technologies that could be used in border control activities could be prohibited in any case, as potentially falling within the AI practices prohibited for posing an unacceptable risk to fundamental rights, such as "real-time" remote biometric identification systems for the purposes of law enforcement or biometric categorisation systems inferring race for the purpose of categorising natural persons.8
However, the potential uses of AI in migration are numerous, and the complex regulatory effort realised with the EU AI Act is partly hampered by the general exemption in Article 2(3) of the final text, which states that the Act does not affect Member States' competences in the field of national security and does not apply to AI systems used exclusively for national security purposes. The presence of such a broad exception in the AI Act increases the risks of the application of AI systems to RSD procedures, making it necessary to address a fundamental question: are these systems suitable to assess the existence of the criteria for evaluating applications for international protection? The aim of this article is to inquire into the legal and personal consequences that recourse to such technologies would imply in this context, given their potential to be more precise than human agents in assessing the objective element for the establishment of refugee status, that is the well-founded risk of persecution, while being unable to assess the subjective one, that is the fear of persecution. In particular, should these technologies develop and become more accurate in their evaluations, asylum-seekers would risk having their claims assessed without any form of human interaction, which is essential in assessing an emotional element such as the fear of persecution. This, combined with the inability of human agents to judge the correctness of AI outcomes, could aggravate the inequality between States and asylum-seekers and easily turn AI systems into anti-migration and discriminatory tools.
In light of the above, the article will first identify what is at stake by examining the international refugee protection regime, focusing in particular on the criteria for granting refugee status set out in Article 1 of the Convention relating to the Status of Refugees (hereafter Refugee Convention) (Section 2). Secondly, following a descriptive examination of the various AI systems used in border control activities (Section 3), the suitability of the use of these technologies in RSD procedures will be explored in order to verify whether unmanned systems are adequate to assess the “fear of persecution” as a central criterion for refugee status under the Refugee Convention. In particular, the need for a subjective element in refugee status determination will be analysed to determine whether AI is an adequate alternative to human judgment, which is often biased by the lack of credibility of evidence provided by asylum-seekers (Section 4).
2. THE HUMAN DIMENSION: EVALUATING REFUGEE STATUS BEYOND PURE OBJECTIVITY
Human migration has always been a global phenomenon, as the need to move is an inherent human trait that has led large masses of individuals to areas different from those of origin in search of better living conditions or to escape dangers in their homeland.9 The complexity of international relations, together with social, economic, and technological developments, has affected migration flows, not only increasing their scale but also changing the factors that drive people to mobility. Over the last 20 years, people have fled their countries not only driven by conflict or poverty but also because of new predisposing factors, that is circumstances that contribute to creating a context in which migration is more likely to occur, such as environmental degradation or processes of demographic growth and uncontrolled urbanisation, which have made some areas uninhabitable due to a lack of resources and employment opportunities.10
These changes have not been fully reflected in the 1951 UN Convention relating to the Status of Refugees, whose aim is to provide international protection when the State of origin fails to fulfil its duty to protect its own citizens.11 In particular, the Convention, in defining refugees, does not cover all types of forced migration, as it focuses on persecution and excludes many other typical drivers, such as famine, natural disasters, or pandemics.12 Therefore, for the purposes of this section, it is important to examine the definitional framework to understand which individuals are covered by the definition of refugee (Section 2.1), focusing in particular on the subjective element of the definition, that is the fear of persecution (Section 2.2).
2.1. The definition of refugee: limits and different regional approaches
The 1951 Refugee Convention was designed to identify migrants in need of protection within an international legal framework that did not recognise general freedom of movement and where migration control was the paradigm.13 This is clearly reflected in the definition of a refugee, which includes only selected categories of migrants based on the motivations that led them to leave their homes. Specifically, a refugee is considered to be a person outside his or her country of origin who is unable or unwilling to avail himself or herself of that country's protection, owing to a well-founded fear of being persecuted on five limited grounds (namely, race, religion, nationality, membership of a particular social group, or political opinion).14 Nonetheless, the definition is even more selective, as Article 1 of the Refugee Convention provides two exclusion clauses that prevent migrants who fulfil the general conditions from availing themselves of any form of refugee protection if they already benefit from some kind of international or national protection or if they have committed serious crimes (such as those listed in the Rome Statute).15
The limitation of this definition lies in its focus on persecution, which excludes many other typical drivers of forced migration, such as economic reasons, natural disasters, or armed conflicts,16 and therefore fails to provide protection to migrants in the event of inability of the State of origin to do so. This restrictive approach has also influenced regional regimes, notably the EU legal regime, where the refugee definition mirrors that of the Refugee Convention,17 with the exception of some limited forms of subsidiary protection.18 In the African region, on the other hand, a broader and more protective definition of a refugee has been adopted, referring to persons forced to leave their country as a result of armed conflict or “events seriously disturbing public order”, potentially covering all contemporary drivers of forced migration.19 The same can be said for Latin America, where a similarly comprehensive clause has been included in the definition of refugee enshrined in the Cartagena Declaration on Refugees,20 reflecting a more urgent need to provide protection to migrants in countries where most of the current flows originate.
These different definitions of refugee therefore highlight how the appropriateness of AI systems in determining refugee status will need to be assessed differently in different regional contexts. At the same time, there is another element of the general definition of refugee in the Refugee Convention that needs to be examined before moving to the next step, namely the emotional element of fear of persecution.
2.2. The subjective element of fear of persecution and its relevance in RSD Procedures
Before migrants can be granted international protection far from an inhospitable homeland, their refugee status must be assessed through the so-called RSD procedures. These are legal or administrative processes carried out by various authorities, usually UNHCR staff acting on behalf of or in cooperation with government agents and other State authorities, to determine whether a person seeking international protection can be considered a refugee.21 In the European context, for example, it is up to each Member State to designate the authorities responsible for examining asylum applications,22 and in other regional contexts too there are no uniform standards for determining who should be responsible for carrying out such procedures.23 The diversity of the authorities involved in these procedures in each national context could influence the outcome of RSD procedures and calls for a contextual examination, based on how the assessment of the criteria for recognition of international protection is performed case by case. In addition, the authorities responsible for the examination of asylum claims must be clearly distinguished from those who are only responsible for receiving applications for international protection without processing them, such as immigration authorities, including police and border guards,24 whose role is mainly to carry out checks with a view to repatriation. However, while national differences are a relevant factor to be taken into account, the main task of RSD procedures is to verify that the criteria for recognition of refugee status are met, and the ability of the competent authorities charged with this task to carry it out adequately could be compromised by excessive reliance on AI systems.
More in-depth, RSD procedures aim to examine all elements of the refugee definition for each asylum-seeker, including the subjective element of a well-founded fear of persecution, which requires assessing the credibility of the claimant. In order to do so, competent authorities gather information from testimonies, reports, and country of origin information (COI), that is information on the socio-political situation in the country of origin of applicants for international protection, selecting the facts that are credible and relevant for each case.25 However, things are not as simple as they seem. In fact, a twofold examination of both the objective element of a well-founded risk of persecution and the subjective element of fear is to be performed, since the migrant's mental state must always be supported by a factual situation.26 In order to guide decision-makers in this evaluation, some indicators based on material facts have been identified, aiming to avoid speculative reasoning based on the subjective perspective of the decision-maker.27 This assessment seems to bypass any examination of the subjective element, leading some authors to argue that the "well-founded fear" standard does not contain a subjective element, also considering that such an element imposes an excessive burden on the applicant, who has to convince the examiner of his or her emotional state.28
Nevertheless, a closer look at the credibility indicators reveals that in reality, it is impossible to avoid some form of speculative assessment in RSD procedures. In fact, the three criteria identified by the UNHCR (sufficiency of detail, consistency, and plausibility) do not seem to eliminate the subjective element in the assessment that the interviewer has to make in order to establish refugee status.29 Moreover, various factors call into question the reliability of these indicators, starting with the accuracy of migrants’ memories and the influence of external factors. Indeed, trauma and disturbing events can alter an individual’s reconstruction of past events,30 making it difficult for applicants to provide an adequate level of detail about relevant events,31 to refer to statements consistently,32 and to adopt a demeanour that gives the appearance of truthfulness and plausibility to their responses.
While it is true that the element of fear may lead to a biased decision influenced by the lack of credibility and plausibility of the evidence, the adoption of a purely objective approach in the RSD procedure is not the solution to the problem. In fact, while it can be affirmed that a subjective assessment may alter the material facts demonstrating the existence of a risk of persecution, on the other side of the coin, allowing examiners to take into account the emotional state of the applicant allows more flexibility in determining that risk. Even where material facts are uncertain or presented inconsistently, the decision-maker can infer from the context of the interview and the migrant's fearful behaviour the reality of a situation of danger in the home country that is not clearly apparent from the evidence presented. In fact, an essential feature of the RSD procedure is the relational element, that is the need to establish a relationship between the assessor and the assessed in order to have a full understanding of the circumstances of the migration, which could not be observed without "looking into the eyes of the applicant".
In light of the foregoing, it is essential to understand whether AI systems, technologies that could be useful in speeding up RSD procedures, are an adequate alternative to human decision-making, in particular examining whether they would allow for an adequate and complete assessment of the element of fear. However, in order to proceed with this analysis, an overview of the different AI systems used (or that could potentially be used in the future) in border control activities is provided in order to check whether their implementation would “objectify” RSDs in terms of the way they are programmed and operate.
3. THE POTENTIAL USES OF AI IN RSD PROCEDURES
If the measure of intelligence is the ability to adapt to change, recent decades have put international lawyers’ “intelligence” to the test, requiring them to adapt to constant developments. But overcoming the “cultural lag” between society and rapid technological development is not always as easy as it seems.
Since the first forms of cognitive AI were developed in the 1950s, with the increase in computing power and the availability of large amounts of data,33 these systems have evolved and taken on a variety of functions designed to replace human intelligence, making any attempt to establish a common definition futile. The most recent attempt to harmonise the different definitions describing the sectoral functions performed by AI systems can be found in the EU AI Act, where they are described as software developed using one or more techniques that, for a given set of human-defined objectives, can generate outputs such as content, predictions, recommendations, or decisions.34 In particular, these technologies operate in the physical or digital dimension, perceiving their environment through data collected by sensors or input devices, and processing the information derived from the data – that is interpreting and simplifying them into more concise information35 – in order to decide, with a degree of autonomy, on the best action to take to achieve the human-defined goal.36
Based on their level of sophistication, AI systems can be classified according to their learning capabilities and the various techniques associated with them. The most commonly used technique is supervised machine learning, in which computers learn from labelled data under human supervision in order to make predictions or classifications.37 This technique must be distinguished from unsupervised learning methods, which discover patterns in unlabelled data38 – that is raw data that did not undergo a process of labelling, the external human intervention that provides context so that an algorithm can learn from it.39 Pre-labelling implies that the AI system carries within it human biases; these do not, on the other hand, affect unsupervised machine learning methods, which draw inferences about future behaviour from a human input consisting of raw data alone.40 The predictions made by AI systems can be used to inform decisions, including in migration and border control, where the application of AI to RSD may soon become widespread due to the need to manage a large number of claims rapidly.
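By way of purely illustrative example (the code and data are hypothetical and do not correspond to any system discussed in this article), the distinction just drawn between supervised and unsupervised learning can be sketched in a few lines of Python: the first function predicts a label only because a human annotator supplied labelled examples, while the second finds structure in the same raw numbers with no labels at all.

```python
# Illustrative sketch only: supervised vs unsupervised learning in miniature.

def nearest_neighbour_classify(labelled, point):
    """Supervised: predict a label for `point` from human-labelled examples."""
    closest = min(labelled, key=lambda ex: abs(ex[0] - point))
    return closest[1]

def two_means_cluster(data, iterations=10):
    """Unsupervised: split raw, unlabelled numbers into two groups (toy k-means)."""
    a, b = min(data), max(data)  # initial centroids
    for _ in range(iterations):
        group_a = [x for x in data if abs(x - a) <= abs(x - b)]
        group_b = [x for x in data if abs(x - a) > abs(x - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return sorted(group_a), sorted(group_b)

# Supervised: the labels ("low"/"high") were defined by a human annotator,
# so any bias in the labelling is carried into every prediction.
labelled = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
print(nearest_neighbour_classify(labelled, 1.5))  # → low

# Unsupervised: the same numbers, no labels; the algorithm finds the groups itself.
print(two_means_cluster([1.0, 2.0, 8.0, 9.0]))  # → ([1.0, 2.0], [8.0, 9.0])
```

The point of the contrast is the one made in the text: the supervised function can only reproduce categories a human chose in advance, whereas the unsupervised one infers its own groupings from raw data.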
Therefore, AI systems are designed to perform human tasks by learning from data with the assistance of algorithms and through different techniques that allow them to act "intelligently", that is to adapt to changing circumstances and learn from experience. The development of AI can therefore be interpreted as an attempt to transfer the innate human trait of "intelligence" to unmanned systems that could gradually replace humans in various activities of our daily lives. However, intelligence is not always synonymous with moral and legal correctness, and humans still retain an emotional component that influences their judgment and allows them to act differently from AI systems.
This ability can be observed when analysing the potential use of AI in border control and RSD procedures, where the need for human interaction is particularly strong given the kind of assessments that competent authorities are asked to make. Indeed, while AI systems are (and will potentially be) employed in border control activities, that is with the aim of limiting migration flows, the prospect of resorting to these technologies in procedures for international protection is not so far-fetched, and their use in pre-screening or border management procedures might influence the outcome of RSD procedures. In this context, AI systems perform a variety of tasks that need to be examined to understand how these technologies might affect the accuracy of assessments made during RSD procedures. Therefore, in this section, the main characteristics and functions of the different AI systems that are currently or potentially used in border control activities will be described, in order to provide a useful roadmap that will be employed in the following section to verify their adequacy in determining refugee status.
3.1. The potential uses of AI in border control and RSD procedures
International crises, such as the ongoing wars in Ukraine and the Gaza Strip, often lead to mass displacement, making it difficult for border authorities in neighbouring countries to manage large numbers of migrants. Conflicts and similar scenarios, together with the rapid development of AI technologies, could accelerate the widespread automation of borders, which is already a reality in certain contexts, even in the absence of ongoing hostilities41 or national security justifications.42
Automated border control systems are immigration control systems used to expedite the processing of travellers at the border and to determine eligibility for border crossing according to pre-defined rules.43 In most cases, these systems work only on the basis of historical data, collecting and using them to infer outcomes through a quantitative approach, without the AI being trained for its specific use in migration management.44 More in detail, these AI systems may play a role in detecting and identifying potential threats through technologies such as biometric scanning and facial recognition.45 Biometric scanning consists of automated methods of identifying and verifying the identity of individuals based on their physiological and behavioural characteristics.46 These characteristics typically consist of fingerprints, facial features, voice, and iris patterns, which, once collected, could be used in the reception and registration phases of RSD procedures to establish the evidential link between asylum-seekers and their statements.47 However, supervised and unsupervised AI systems could go further and use the data collected through biometric scanning to categorise individuals or groups of individuals in order to assist migration authorities in assessing the credibility of the claim,48 which, as explained above, is an essential step in the decision-making process to assess the well-founded fear of persecution. Indeed, by helping to establish the identity of the asylum-seeker, biometrics contribute to the assessment of the claimant's credibility, which is strictly dependent on the possession of acceptable identity documents.49
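The mechanics of the biometric verification described above can be reduced to a hypothetical sketch (the feature vectors, threshold, and function names are invented for illustration; real systems use specialised feature extractors and far higher-dimensional templates): a fresh capture is compared with an enrolled template, and the match is accepted only if their distance falls below a tuned threshold.

```python
# Toy sketch of biometric verification (illustrative only): a captured sample
# is matched against an enrolled template by Euclidean distance.

import math

def euclidean(a, b):
    """Distance between two feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(template, sample, threshold=0.5):
    """Accept the sample only if it is close enough to the enrolled template."""
    return euclidean(template, sample) < threshold

enrolled = [0.12, 0.80, 0.35]   # hypothetical feature vector from enrolment
genuine  = [0.15, 0.78, 0.33]   # same person, slightly different capture
impostor = [0.90, 0.10, 0.60]   # a different person

print(verify(enrolled, genuine))   # → True
print(verify(enrolled, impostor))  # → False
```

Even this toy makes the legal concern visible: everything turns on a threshold chosen by the system's designers, and a poorly tuned or biased threshold silently converts a matching error into a finding about a person's identity.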
Even more, AI systems could also be used directly in RSD procedures to assess the credibility of the claimant. In particular, automated credibility assessments are carried out by AI systems on the basis of material data (written or oral statements, country of origin information, or witness testimonies) in order to draw conclusions on the truthfulness and plausibility of the claims made by asylum-seekers.50 More in detail, the ability of AI systems to make predictions more quickly than human-controlled calculations has led to further developments in these technologies to enable them to interpret human emotions from oral testimonies. AI could be employed during interviews to perform emotion recognition, that is to understand and describe the claimant's current mental state or mental condition.51 Using machine learning algorithms, AI systems could be able to read people's emotions through voice tones, facial expressions, and gestures,52 information that could help speed up credibility assessments during RSD procedures. While the extent to which these AI systems will be used in RSD procedures is still unclear, machine learning could support decision-makers through "expert systems", that is data-mining systems that analyse historical information by structuring it into clusters and are, therefore, trained to discriminate between sources.53 Despite the lack of examples of the use of automated credibility systems in RSD procedures, their use in border control activities shows that the development of automated interview systems for use in RSD procedures is not so far from reality, also taking into account the need to speed up the processing of claims due to large migration flows.
However, those just described are not the only ways in which AI systems could be used to assist border authorities and those competent to examine requests for international protection. Migration is an unpredictable and uncertain process whose random factors are context-dependent and show little persistence over time.54 Advances in computational technologies have led to the development of new AI systems capable of modelling and predicting social processes, that is AI-based predictive analytics systems. In addition to biometric data, international organisations such as the UNHCR and monitoring centres collect a range of other data in different categories, such as mobile, economic, environmental, conflict, and political data.55 Using these data, AI-based predictive analytics systems make predictions about forced displacement,56 using techniques such as machine learning or data mining (the process of extracting patterns and information from large datasets using algorithms).57 These predictions can then be used in RSD procedures to anticipate migration flows and the needs of displaced persons, so that resources can be planned and prepared adequately before arrivals. The utility of predictive analytics in the context of refugee protection has therefore led to the rapid development and implementation of AI systems whose function is to predict migration flows.
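At its simplest, the predictive analytics described above amounts to fitting a model to historical displacement data and extrapolating it forward. A minimal sketch, assuming invented figures (these are not UNHCR data, and real forecasting systems use far richer models and covariates), fits a least-squares trend to past monthly arrivals and projects one month ahead:

```python
# Minimal sketch of predictive analytics on historical displacement data
# (illustrative only; the arrival figures are invented).

def fit_trend(ys):
    """Ordinary least squares for y = a + b*t over t = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def forecast_next(ys):
    """Extrapolate the fitted trend one step beyond the observed series."""
    a, b = fit_trend(ys)
    return a + b * len(ys)

arrivals = [100, 120, 140, 160]   # hypothetical monthly arrivals
print(forecast_next(arrivals))    # → 180.0, the extrapolated next month
```

The sketch also illustrates the fragility noted in the text: because migration drivers show little persistence over time, a trend fitted to yesterday's data can be confidently wrong about tomorrow, and decisions built on such forecasts inherit that error.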
Before proceeding with the delineation of the potential consequences of the use of AI in RSD procedures, in order to give concrete meaning to the analysis presented in this section, it is essential to examine the current (and past) use of AI systems by border authorities, as this examination could also help to understand the risks that asylum-seekers are exposed to upon arrival in a country that relies on automated decision-making systems.
3.2. Current and past uses of AI in RSD procedures and border control activities
The prospect of integrating AI systems into RSD procedures is not that far off. In 2015, the International Business Machines (IBM) Corporation announced that it had developed technology that could “separate genuine refugees from imposters”.58 The potential use of AI in RSD procedures becomes even more apparent when looking at the recent development and testing of these systems in border control activities which will be presented in this section. Taking into account the need to ensure efficiency and rapidity in the assessment of asylum applications, the use of AI in border control could lead some States to implement similar technologies in RSD procedures.
Starting with biometrics, fingerprints and other similar data have been collected by States and the UNHCR for migration purposes for more than 10 years. In particular, the UNHCR launched the Biometric Identity Management System (BIMS) in 2013, which stores biometric data, including facial images, with the purpose of facilitating registration procedures and issuing digital identities to asylum-seekers.59 While these systems were developed by the UNHCR with the aim of enhancing the legal protection of asylum-seekers, States have used biometric data, including that collected by the UNHCR, as input for machine learning systems to perform automated identification with the aim of enhancing border security, potentially leading to discriminatory outcomes with implications for international protection.
The US Department of Homeland Security, among others, has created a database of biometric information of individuals who have entered, attempted to enter, or left the United States (IDENT) to provide automated identity verification for the purposes of immigration enforcement and border control.60 However, this system is not isolated, as fingerprint and facial scanning have been used for migration identity checks in both the United Kingdom and Greece, where there have been complaints of racial bias associated with the misuse of these systems by police authorities.61
In addition, the EU is also involved in the development of new AI technologies that use biometrics for border control. For example, TRESPASS is a now-concluded project that aimed to develop an automated border management system to prevent crimes by assessing individuals' risk through biometric data, which could potentially be used for other purposes, that is to assess the credibility of the asylum-seeker in RSD procedures.62 EU countries have also been at the forefront of implementing automated credibility assessment systems, notably through a project that ran from 2016 to 2019, the well-known iBorderCtrl, which was tested at border crossing points by some EU members.63 The project was based on an Automated Deception Detection System, whose ostensible role was to assess whether a migrant or asylum-seeker was behaving differently than expected, showing deceptive behaviour aimed at hiding information.64 After a pre-registration to collect personal data, the automated credibility assessment consisted of an interview with a lie-detecting avatar using AI to analyse micro-gestures and assess whether the interviewee was being deceptive.65 At the end of the interview, a QR code was issued for the traveller to bring to the border, where it was scanned by border agents to categorise the traveller's risk based on the number of presumed false answers given during the interview. On this basis, the border agent could decide to deny entry or proceed with further checks.66 A similar border technology that has also been tested in practice is the Automated Virtual Agent Truth Assessment in Real Time (AVATAR), developed by researchers associated with the University of Arizona, consisting of a virtual human agent that performs an automated analysis of travellers' credibility.67
These projects show that automated credibility assessment systems are a reality that, although not yet fully implemented in RSD procedures, could potentially be used in the future to assess whether asylum-seekers meet the criteria for refugee status. In particular, they could be used to assess whether applicants risk persecution, an assessment that relies heavily on the credibility, consistency, and plausibility of testimony. The examination of these indicators could be deferred to virtual agents, especially where States, driven by the need to speed up RSD procedures and to pursue anti-migration policies, decide to progressively implement automated credibility assessment systems.
Predictive analytics AI systems have also recently attracted the interest of several border management authorities, such as Frontex, the European Border and Coast Guard Agency, which published a research report outlining numerous potential applications of predictive analytics.68 In particular, the agency was interested in developing a social media monitoring service that would also be programmed to provide forms of “sentiment analysis” to assess what people are feeling in relation to irregular migration, with the aim of predicting flows and supporting joint operations coordinated by Frontex.69 Similarly, IBM has partnered with the Danish Refugee Council to provide AI technologies to forecast migration flows from Ethiopia to six different countries using data collected by the UNHCR or gathered through interviews.70
Finally, a number of AI systems that perform predictive analytics are already being tested or deployed, such as the Jetson Engine, developed by UNHCR to predict the displacement of people in sub-Saharan Africa, in particular Somalia,71 and the Displacement Tracking Matrix, developed by the International Organisation for Migration (IOM) to predict displacement during crises and provide border authorities with information on the mobility and vulnerability of migrants to ensure better context-specific assistance.72 To these systems should be added the recent development of an AI-powered algorithm programmed to predict potential employment outcomes using historical data, demonstrating how predictive analytics can prove to be a tool not only for improving the preparedness of border authorities, but also for resettlement purposes.73
These projects demonstrate the growing interest in the use of AI in border control and migration forecasting, which could lead to a gradual automation of RSD procedures in the short term. Although the current state of development of these systems means that they would initially be implemented under human control, the risk of almost complete automation cannot be ignored, especially if automation comes to be seen as a means of speeding up RSD procedures and implementing anti-migration policies on national security grounds. The following section therefore provides a fuller picture of the legal and personal implications associated with the use of these technologies.
4. BEYOND BORDERS: REFLECTIONS ON THE ADEQUACY OF AI IN REFUGEE STATUS DETERMINATION PROCEDURES
As highlighted in the introduction to this article, while the EU AI Act qualifies AI technologies used for border control management as “high risk”, it provides for a broad exemption from the application of the Act where it would interfere with Member States’ competences in the area of national security. The Regulation does not provide a specific definition of what is to be understood as national security, and the exception is justified by the simple fact that national security is the sole responsibility of each Member State.74 This exception could open the door to the use of AI systems, even those that pose an unacceptable risk and are prohibited by the AI Act, solely on the basis of acts of Member States declaring the existence of a national security risk linked to the economic and social pressure resulting from mass migration or to the political instability and security threats existing in the countries of origin of migrants.75 Recourse to AI systems could therefore be justified by the need to speed up asylum procedures in order to protect national security from the risks associated with migration. Similar considerations apply to another recently adopted instrument, the Council of Europe Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. Indeed, the establishment of a blanket national security exception in this instrument,76 together with the exclusion of the private sector, has the effect of weakening the protection of human rights from harmful activities that may occur during the life cycle of AI.
It is thus essential, in light of the foregoing, to examine whether the technologies presented in this article are able to adequately support or replace human controllers in carrying out RSD assessments. In fact, credibility assessment AI systems, especially if used inappropriately and as a tool of anti-immigration policies, could allow decision-makers to reject claims for international protection based on behavioural or risk assessments grounded in physical appearance and the ability of claimants to manage their emotions. Similarly, even though predictive analytics and biometric identification systems can only be used as support technologies providing insightful information to decision-makers, risks remain attached to widespread recourse to them, given the possibility of converting them into anti-immigration tools. Against this background, this section will show how the use of AI systems in RSD procedures is a risky prospect, as decision-making in these procedures should always rest not only on factual elements but also on the human ability of decision-makers to interpret people's emotions.
4.1. Trapped in the border matrix: risks and potentials of the use of AI in RSD procedures
As the analysis presented in this article has attempted to show, various machine learning techniques can potentially be used to assist border authorities or, in some cases, partially replace them in carrying out assessments that were previously solely human-based. Among these techniques, some could actually contribute to the task for which they are programmed, that is, speeding up border control and RSD procedures, while others, in particular automated credibility assessment, could affect the rights of migrants and asylum-seekers by making incorrect evaluations that could lead to the denial of international protection. This section will therefore examine whether the technologies previously presented would enable migration authorities to adequately assess the objective indicators of refugee status and the well-founded fear of persecution.
4.1.1. Biometrics and identification systems
Starting with biometrics, the use of physiological traits, in the form of data collected by States and international agencies, could ensure faster identity verification, which is the first step in assessing the credibility of asylum-seekers. With biometric data at their disposal, decision-makers can rely on more precise information that is essential to establish a link between claimants and their testimony as proof of their credibility.77 In particular, these data could fill gaps in information that the claimant was unable to provide with a sufficient degree of certainty, owing to inconsistencies in testimony relating to a traumatic event or an inability to recall relevant events.
However, there are still risks associated with the use of these systems. First, they could easily be used to engage in discriminatory practices, as they could facilitate racial profiling by categorising individuals on the basis of physical data in order to infer suspicion of irregular migration and thus influence RSD procedures.78 The correct outcome of RSD procedures could therefore be affected by “function creep”, that is, the use of a technology for new purposes beyond its original one.79 Data collected through biometrics for identity verification could be used to speed up border checks and provide more reliable information to border authorities; however, the same data could be used for purposes other than those intended, potentially compromising asylum-seekers’ access to international protection. In particular, biometric data could be shared with asylum-seekers’ countries of origin where there is a risk of persecution, increasing this risk if the refugee returns to the country of origin and potentially resulting in refoulement.80 Function creep in the use of biometrics for migration purposes is not far from reality: Rohingya refugees in Bangladesh protested against the UNHCR project to collect their data, which could be shared with Myanmar and increase the risk of persecution and discrimination in the event of forced return to their country of origin, as planned by Bangladesh.81 Therefore, if biometric data were to fall into the wrong hands, the risk of persecution could actually increase, with serious consequences for the refugee in the case of cessation of international protection previously granted, or where international protection is refused at the outset following an unsuccessful RSD procedure.
4.1.2. Predictive analytics
As with biometrics, predictive analytics, when used in border control activities, provides controllers with information on mass movements, both on the number and the type of migrants, which could be useful in human assessments and RSD procedures. Indeed, these technologies could be used to prepare the resources needed for border management or RSD procedures by anticipating the needs of migrants before their arrival,82 thus integrating information that the applicant is not able to provide and helping to ensure that applications are sufficiently substantiated. Moreover, the output of predictive AI systems, consisting of information about political, economic, and geographical circumstances, such as an ongoing armed conflict in the home country, could prove essential in assessing the plausibility of migrants’ testimonies in RSD procedures,83 as these data would allow more precise country of origin information (COI) to be obtained. An appropriate use of biometrics and predictive analytics could therefore indeed prove useful in speeding up RSD procedures.
Nevertheless, although the correct functioning of these systems could facilitate RSD procedures, the risk of function creep still needs to be dealt with when examining the potential uses of these technologies. Indeed, it must be taken into account that predictions made using AI are not always reliable, and the need for these systems to “learn from experience” might lead to incorrect assessments of migration trends, at least in the first phases of their implementation.84 Incorrect predictions could have serious consequences for the preparedness of decision-makers involved in RSD procedures, and wrong inferences from social media sources could seriously impair access to international protection, as they could influence the assessment of the credibility of claimants. In a similar vein, even where the outputs of these AI systems are correct, they could be employed as anti-immigration tools, in particular to facilitate preventive responses aimed at denying entry solely on the basis of predictions of the risk of irregular migration, exposing asylum-seekers to refoulement.85
4.1.3. Automated credibility assessment
The picture described so far with regard to predictive analytics and biometrics becomes even more worrying when we turn to the implications that credibility assessment systems might have for RSD procedures. Indeed, on the one hand, it cannot be denied that, thanks to progress in the development of AI systems, some of the risks associated with biased human decisions in RSD procedures could be avoided. Human-conducted RSD procedures often consist of an assessment of the credibility indicators based mostly (or exclusively) on historical data presented by claimants during their testimonies, especially when COI is not available.86 However, the outcome of the interview relies heavily on a subjective assessment, which may be grounded in unconscious bias and is characterised by a high degree of variability linked to tangential factors.87 In addition, the risk of subjective and biased conclusions from human decision-making would also be exacerbated, as already observed, by the unreliability of claimants’ memories and the influence of institutional narratives, which could lead to a misinterpretation of the credibility indicators and of the emotional element of the subjective fear.88
It is therefore clear that human-led RSD procedures are not flawless, and the risk of discriminatory outcomes for asylum-seekers whose claims for international protection are considered only by human decision-makers should not be underestimated. In this sense, automated credibility assessment AI systems, even when programmed by human actors, can identify and remove unconscious bias through the creation of balanced datasets before their implementation.89 These technologies could thus help screeners provide a more accurate assessment of the individual characteristics collected through biometrics and testimonies, and take proper account of them in the RSD. However, over-reliance on AI in RSD procedures is not the solution to the shortcomings of human decision-making, and in some cases it could exacerbate the vulnerability of individuals seeking international protection.
In fact, because they work on human input, AI systems are not fully capable of assessing the plausibility of migrants’ testimonies, and they lack the innate capacity for self-reflexivity that allows human decision-makers to question their personal assumptions and recognise their biases.90 Most credibility assessment systems use supervised machine learning expert systems, which organise relevant data on the basis of human input and are therefore programmed to reflect an institutional narrative, without being able to determine refugee status adequately and independently of the intervention of programmers, who often design algorithms according to political considerations.91 Moreover, should automated credibility assessment systems be programmed to infer the mental state of migrants from personal characteristics, as in the iBorderCtrl project, further problems would arise. A considerable amount of data, impossible to quantify in advance, would be required for the algorithm to learn how claimants express emotions that would be obvious to humans.92
In light of the above, even if these technologies were able to provide adequate credibility assessments, their full implementation in RSD procedures, without human intervention aimed at verifying the plausibility of migrants’ testimonies in the context of an individual interview, would lead to worryingly dehumanising results. Indeed, a closer look at the wording of Article 1 of the Refugee Convention is useful in understanding why AI systems are unable to adequately assess migrants’ fear of persecution and emotions. By establishing persecution as the objective element of the refugee definition, Article 1 rests on an assessment that is always forward-looking and therefore likely to be incorrect if based only on past events, without giving due weight to the subjective perception of the risk of persecution expressed by the applicant. In fact, even if certain events could constitute objective evidence of the risk of persecution in case of return to the country of origin, the assessment that decision-makers in RSD procedures are called upon to make consists of predictions that rely heavily on the emotional element. Relying on AI systems to assess emotions would therefore lead to an automatic denial of refugee status to all individuals who do not fulfil, at first sight, the objective conditions for the granting of international protection.
This is also related to the inability of AI systems to engage in presumptive reasoning, that is, to presume the existence of refugee status when only some of the criteria in Article 1 of the Refugee Convention can be clearly inferred from the claimant’s statements. Indeed, the likelihood of persecution should be presumed if the claimant's fear is established, and this presumption “goes to the heart” of the inquiry on which RSD procedures are based.93 The presumptive nature of this assessment justifies the permanent relevance of the subjective element of well-founded fear, as mere proof of the latter could lead to a presumption of the objective element of persecution. This is very much in keeping with the nature of RSD procedures, which should be primarily aimed at providing protection to individuals fleeing their country of origin for one of the reasons of persecution set out in Article 1. In this vein, even if the element of fear were devalued, AI systems would still not be able to form a clear picture of the circumstances of the migration and the risk of persecution, because for this purpose “human contact” with the asylum-seeker remains fundamental to assessing the credibility and plausibility of the information he/she provides.
Finally, since they work mostly on the basis of historical facts, such as evidence of past persecution, or testimonies, these technologies are not able in all circumstances to assess the current risk of persecution that would entitle claimants to protection upon arrival in a new country.94 Although AI is capable (and will be even more so in the near future) of judging human beings with a high degree of accuracy, its outcomes are the result of abductive reasoning, that is, a process of logical inference based on both certain and uncertain premises and leading to equally uncertain results. Therefore, even if the repeated use of AI systems gradually reduced inaccuracy, even a small percentage of errors, especially when these technologies are first implemented, cannot be accepted in RSD procedures, where AI decisions would affect the lives of individuals. It can therefore be concluded that, although human-conducted RSD procedures are not flawless, excessive recourse to AI systems for credibility assessment is not the solution to biased decision-making, as it could only increase the risk of arbitrary outcomes. Be that as it may, credibility assessment systems are not capable of carrying out RSD procedures on their own, and human supervision, both during and after the use of these technologies, cannot be dispensed with if decisions are not to be taken on the basis of hasty information that could affect the future of individuals.
5. CONCLUSIONS
The analysis presented in this article has shown that AI systems currently used in border control activities do not seem suitable for determining refugee status under the Refugee Convention. In light of the conclusions drawn above, some observations can be made on how States should approach the current legal regime when regulating RSD procedures.
Starting with biometrics and predictive analytics, their potential contribution to more effective identification procedures for RSD cannot be denied, as they could effectively help speed up preliminary checks and ensure better preparedness of border authorities by allowing them to anticipate migration flows. However, these technologies could easily be turned into anti-migration tools, as they could facilitate racial profiling and preventive responses by decision-makers involved in RSD procedures. An assessment of the benefits of using AI systems in these procedures cannot ignore the potential users of these technologies and the ease with which they could be used for discriminatory purposes. Moreover, a comprehensive examination cannot be carried out without taking into account the backdrop of the “crises of solidarity” that States are currently facing, linked to the unwillingness of developed countries to accept as many refugees as they are able to.95 The rise of anti-refugee populism, not only in Europe but also in countries such as the United States, Brazil, and India, affects the ability of States to express international solidarity and makes the prospect of the proliferation of AI systems in RSD procedures, which could be used for discriminatory purposes, particularly worrying.
In addition, even if the EU AI Act has begun both to spread awareness of the risks associated with the use of these systems and to subject them to strict limitations before their potential use, immigration, especially if substantial, could still be invoked as a national security issue falling under the broad exception recognised in Article 3 of the EU AI Act. The tendency of powerful States to resist, or at least try to circumvent, international obligations could find fertile ground in the national security exception, which could easily be used to justify discriminatory technologies. It should also be noted that the EU AI Act and the Convention on Artificial Intelligence adopted by the Council of Europe, both instruments with a regional scope, are the only existing legal instruments that impose restrictions on States in the development and use of AI systems. This means that countries not bound by these instruments face no restrictions on the use of AI systems in border control and RSD procedures and can pursue discriminatory policies through these technologies.
Finally, with a view to limiting recourse to these technologies beyond existing obligations, the role of UNHCR as the principal UN agency charged with protecting the rights of refugees should not be overlooked. In this sense, UNHCR guidelines and documents must be taken into account when determining how to proceed with expulsion decisions, irrespective of the policies of States. Even if these instruments are not binding per se, when States decide to establish a body whose role is to monitor the correct application of the Refugee Convention,96 they should at least try to take into account the procedural guarantees established by that body. In fact, its role is not only to advocate for refugees but also to facilitate States’ policies while guaranteeing fundamental rights.97 Notably, the right to be heard, in a personal interview or otherwise, is clearly established by UNHCR in its guidelines,98 making contact between the asylum-seeker and the decision-maker a necessary step in RSD procedures. If the possibility of establishing a relationship between these two subjects is excluded through recourse to credibility assessment AI systems, UNHCR’s guidance in this respect loses all relevance. For this reason, it is only by partially depoliticising migration control and giving sufficient weight to the position of experts that dangerous developments towards the depersonalisation of RSD assessments can be adequately counteracted, avoiding arbitrary decisions and the risks to fundamental rights that the denial of refugee status entails.
Footnotes
A. Beduschi & M. McAuliffe, “Artificial Intelligence, Migration and Mobility: Implications for Policy and Practice”, in M. McAuliffe & A. Triandafyllidou (eds.), World Migration Report 2022, International Organization for Migration (IOM), 2021, 1.
European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, TA (2024) 0138, 13 Mar. 2024.
Ibid., Recital 6.
Ibid., Recital 26.
Ibid., Recital 60.
Ibid., Recital 46.
Ibid., Arts. 9, 14.
Ibid., Art. 5.
F. Castelli, “Drivers of Migration: Why Do People Move”, Journal of Travel Medicine, 25(1), 2018, 2.
N. Van Hear, O. Bakewell & K. Long, “Push-Pull Plus: Reconsidering the Drivers of Migration”, Journal of Ethnic and Migration Studies, 44(6), 2018, 936.
House of Lords, Horvath v Secretary of State for the Home Department, Judgment, 1 AC 489, 6 Jul. 2000, 497.
V. Chetail, International Migration Law, Oxford University Press, 2019, 174.
J.C. Hathaway, The Law of Refugee Status, 1st ed., Butterworths, 1991, 231.
Convention relating to the Status of Refugees, 189 UNTS 150, 28 Jul. 1951 (entry into force: 22 Apr. 1954), Art. 1(A)(2).
Ibid., Art. 1(D)(E)(F).
Supreme Court of Canada, Canada (Attorney General) v. Ward, Judgment, 2 S.C.R. 689, 30 Jun. 1993, 67–68.
Directive 2011/95/EU of the European Parliament and of the Council of 13 December 2011 on standards for the qualification of third-country nationals or stateless persons as beneficiaries of international protection, for a uniform status for refugees or for persons eligible for subsidiary protection, and for the content of the protection granted, OJ L 337/9, 13 Dec. 2011, Art. 2(d).
Ibid., Art. 15. Subsidiary protection is granted in cases of risk of death penalty or execution; torture or inhuman or degrading treatment or punishment of a national in the country of origin; serious and individual threat to a civilian’s life or person by reason of indiscriminate violence in situations of international or internal armed conflict.
Convention governing the Specific Aspects of Refugee Problems in Africa, 1001 UNTS 45, 10 Sep. 1969 (entry into force: 20 Jun. 1974), Art. 1(2).
Colloquium on the International Protection of Refugees in Central America, Mexico and Panama, Cartagena Declaration on Refugees, Colloquium on the International Protection of Refugees in Central America, Mexico and Panama, Cartagena, 1984, Conclusion No. 3, available at: https://www.unhcr.org/media/cartagena-declaration-refugees-adopted-colloquium-international-protection-refugees-central (last visited 23 Nov. 2023).
United Nations High Commissioner for Refugees, Procedural Standards for Refugee Status Determination under the UNHCR’s Mandate, Geneva, UNHCR, 2020, 14, available at: https://www.unhcr.org/media/procedural-standards-refugee-status-determination-under-unhcrs-mandate (last visited 23 Nov. 2023).
Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection, L 180/60, 29 June 2013, Art. 4.
Asylum authorities are usually part of a ministry, or an independent authority charged with examining asylum cases. In a more limited number of cases, asylum authorities are part of law enforcement agencies.
Ibid., Art. 6.1.
N. Kinchin, “Technology, Displaced? The Risks and Potential of Artificial Intelligence for Fair, Effective, and Efficient Refugee Status Determination”, Law in Context, 37(45), 2021, 52.
Supreme Court of Canada, Canada (Attorney General) v. Ward, 53.
United Nations High Commissioner for Refugees, Beyond Proof: Credibility Assessment in the EU Asylum System, UNHCR, Geneva, 2013, 77, available at: https://www.unhcr.org/media/full-report-beyond-proof-credibility-assessment-eu-asylum-systems (last visited 23 Nov. 2023).
J.C. Hathaway & W.S. Hicks, “Is There a Subjective Element in the Refugee Convention’s Requirement of Well-Founded Fear?”, Michigan Journal of International Law, 26(505), 2005.
UNHCR, Procedural Standards for Refugee Status Determination under the UNHCR’s Mandate, Ch. 5.
H. Evans Cameron, Refugee Law’s Fact-Finding Crisis: Truth, Risk and the Wrong Mistake, Cambridge University Press, 2018.
H. Evans Cameron, “Refugee Status Determination and the Limits of Memory”, International Journal of Refugee Law, 22(469), 2010, 491.
Demeanour has to be taken into account with a “great deal of caution” in the credibility assessment, as affirmed in: Federal Court of Australia, SAAK v. Minister for Immigration and Multicultural Affairs, Judgment, FCA 367, 28 Mar. 2002, para. 27. On the influence of the plausibility of statements see: G. McFadyen, “Memory, Language and Silence: Barriers to Refuge within the British Asylum System”, Journal of Immigrant and Refugee Studies, 17(168), 2018, 174.
S. Russell & P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, 2010, Ch. 1.
EU AI Act, Art. 3, 39.
Frontex, Artificial Intelligence-based Capabilities for the European Border and Coast Guard: Final Report, Warsaw, Mar. 2021, 10.
European Commission, Definition of AI: Main capabilities and disciplines, Brussels, Apr. 2019, 6.
B. Mahesh, “Machine Learning Algorithms—A Review”, International Journal of Science and Research, 2018, 381.
S. Naeem, A. Ali, S. Anam, & M. Munawar Ahmed, “An Unsupervised Machine Learning Mechanism: Comprehensive Review”, International Journal of Computing and Digital System, 13(1), 911.
N. Kudan, The Difference Between Labeled and Unlabeled Data, Toloka, 3 Mar. 2023, available at: https://toloka.ai/blog/labelled-data-vs-unlabelled-data/
Z. Ghahramani, “Unsupervised Learning”, in O. Bousquet, U. von Luxburg & G. Rätsch (eds.), Advanced Lectures on Machine Learning, Springer, 2004, 73.
Israel deployed AI surveillance technologies in the West Bank years before the Hamas attack on the 7 October 2023: S. Goodfriend, “Algorithmic State Violence: Automated Surveillance and Palestinian Dispossession in Hebron’s Old City”, International Journal of Middle East Studies, 55(3), 2023.
E. Baptista, “Insight: China Uses AI Software to Improve its Surveillance Capabilities”, Reuters, 8 Apr. 2022, available at: https://www.reuters.com/world/china/china-uses-ai-software-improve-its-surveillance-capabilities-2022-04-08/ (last visited 25 Aug. 2024).
Frontex, Best Practice Operational Guidelines for Automated Border Control (ABC) Systems, Warsaw, 31 Aug. 2012, 7.
A. Beduschi, “International Migration Management in the Age of Artificial Intelligence”, Migration Studies, 9(3), 2021, 584.
Frontex, Artificial Intelligence-based Capabilities for the European Border and Coast Guard: Final Report, 28.
M. Abomhara, S. Yildirim Yayilgan, A.H. Nymoen, M. Shalaginova, Z. Székely, & O. Elezaj, “How to do it Right: A Framework for Biometrics Supported Border Control”, in E-Democracy—Safeguarding Democracy and Human Rights in the Digital Age, 2019, 94.
N. Kinchin & D. Mougouei, “What can Artificial Intelligence do for Refugee Status Determination? A Proposal for Removing Subjective Fear”, International Journal of Refugee Law, 34(3–4), 2022, 386.
Access Now, Uses of AI in Migration and Border Control: A Fundamental Rights Approach to the Artificial Intelligence Act, 2022, 2, available at: https://www.accessnow.org/press-release/joint-statement-ai-act-people-on-the-move/ (last visited 20 Apr. 2024).
Immigration and Refugee Protection Act (Canada), S.C. 2001, c. 27, 1 Nov. 2001, Art. 106.
Kinchin, “Technology, Displaced?”, 52.
R. Vempati & L.D. Sharma, “A Systematic Review on Automated Human Emotion Recognition Using Electroencephalogram Signals and Artificial Intelligence”, Science Direct, 18, 2023, 1.
N. Alkhaldi, “Emotional AI: Are Algorithms Smart Enough to Decipher Human Emotions?” Iot For All, 5 May 2022, available at: https://www.iotforall.com/emotional-ai-are-algorithms-smart-enough-to-decipher-human-emotions (last visited 22 Nov. 2023).
Kinchin & Mougouei, “What can Artificial Intelligence do for Refugee Status Determination?”, 385.
M. Carammia, S.M. Iacus, & T. Wilkin, “Forecasting Asylum-Related Migration Flows with Machine Learning and Data at Scale”, Scientific Reports, 12(1457), 2022, 1.
K.H. Pham & M. Luengo-Oroz, Predictive Modeling of Movements of Refugees and Internally Displaced People: Towards a Computational Framework, 2022, 4–6.
Ibid., 2.
D. Andre, What is Data Mining?, All About AI, 6 Dec. 2023, available at: https://www.allaboutai.com/ai-glossary/data-mining/.
P. Tucker, Refugee or Terrorist? IBM Thinks Its Software Has the Answer, Defense One, 27 Jan. 2016, available at: https://www.defenseone.com/technology/2016/01/refugee-or-terrorist-ibm-thinks-its-software-has-answer/125484/.
UNHCR, Modernizing Registration and Identity Management in UNHCR: Introducing PRIMES, UNHCR Blogs, 15 Dec. 2017, available at: https://www.unhcr.org/blogs/modernizing-registration-identity-management-unhcr/.
DHS’s Automated Biometric Identification System IDENT – the Heart of Biometric Visitor Identification in the USA, THALES, 19 Jan. 2021, available at: https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/customer-cases/ident-automated-biometric-identification-system.
The Racial Justice Network, STOP THE SCAN: Police Use of Mobile Fingerprinting Technology for Immigration Enforcement, Bradford, Mar. 2021; Human Rights Watch, Greece: New Biometrics Policing Program Undermines Rights, 18 Jan. 2022, available at: https://www.hrw.org/news/2022/01/18/greece-new-biometrics-policing-program-undermines-rights.
European Commission, robusT Risk basEd Screening and Alert System for PASSengers and Luggage, Fact Sheet, 11 Dec. 2023, available at: https://cordis.europa.eu/project/id/787120.
P. Breyer, EU Funded Technology Violates Fundamental Human Rights, about:intel, 22 Apr. 2021, available at: https://aboutintel.eu/transparency-lawsuit-iBorderCtrl/.
J. Sánchez-Monedero & L. Dencik, “The Politics of Deceptive Borders: ‘Biomarkers of Deceit’ and the Case of iBorderCtrl”, Information, Communication & Society, 25(3), 2022, 418–419.
D. Boffey, “EU Border ‘Lie Detector’ System Criticised As Pseudoscience”, The Guardian, 2 Nov. 2018, available at: https://www.theguardian.com/world/2018/nov/02/eu-border-lie-detection-system-criticised-as-pseudoscience.
R. Gallagher & L. Jona, We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive, The Intercept, 26 Jul. 2019, available at: https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/.
BORDERS, Appraising the AVATAR for Automated Border Control, Tucson, 1 Oct. 2014.
Frontex, Artificial Intelligence-based Capabilities for the European Border and Coast Guard, Final Report, Warsaw, Mar. 2021.
C. Dumbrava, Artificial Intelligence at EU borders: Overview of applications and key issues, Brussels, European Parliament Research Service, PE 690.706, 21 Jun. 2021.
R.E. Curzon et al., “A Unique Approach to Corporate Disaster Philanthropy Focused on Delivering Technology and Expertise”, IBM Journal of Research and Development, 64(1–2), 2020, 2:9.
UNHCR, UNHCR’s Newest Artificial Intelligence Engineer on Bias, Coding, and Representation, Medium, 14 May 2019, available at: https://medium.com/unhcr-innovation-service/unhcrs-newest-artificial-intelligence-engineer-on-bias-coding-and-representation-3363c432dd98.
IOM, A Thematic Evaluation of Displacement Tracking Matrix (DTM), Geneva, Apr. 2018.
Stanford Momentum, Using Machine Learning to Help Refugees Succeed: How GeoMatch is Revolutionising Resettlement Efforts, 1 Feb. 2024, available at: https://momentum.stanford.edu/stories/using-machine-learning-to-help-refugees-succeed.
Consolidated Version of the Treaty on European Union, OJEU C 326/13, 26 Oct. 2012, Art. 4.2.
J. Estevens, “Migration Crisis in the EU: Developing a Framework for Analysis of National Security and Defence Strategies”, Comparative Migration Studies, 6(28), 2018.
Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, CETS No. 225, 17 May 2024, Art. 3.2.
Kinchin & Mougouei, “What can Artificial Intelligence do for Refugee Status Determination?”, 386.
B. Goodwin, The EU AI Act Must Protect People on the Move, European Civic Forum, 6 Dec. 2022, available at: https://civic-forum.eu/statement/joint-statement-the-eu-ai-act-must-protect-people-on-the-move.
B. Koops, “The Concept of Function Creep”, Law, Innovation and Technology, 13(1), 2021, 36.
UNGA, Contemporary Forms of Racism, Racial Discrimination, Xenophobia and Related Intolerance, Geneva, 2020, §35.
“Bangladesh tells UN that Rohingya Refugees Must Return to Myanmar”, Aljazeera, 17 Aug. 2022, available at: https://www.aljazeera.com/news/2022/8/17/rohingya-refugees-have-to-be-taken-back-bangladesh-pm-says.
Kinchin, “Technology, Displaced?”, 59.
UK Border Agency, Assessing Credibility and Refugee Status in Asylum Claims Lodged on or After 28 June 2022, 28 Sep. 2023, 51, available at: https://www.gov.uk/government/publications/considering-asylum-claims-and-assessing-credibility-instruction/assessing-credibility-and-refugee-status-in-asylum-claims-lodged-before-28-june-2022-accessible (last visited 23 Nov. 2023).
Access Now, Uses of AI in Migration and Border Control: A Fundamental Rights Approach to the Artificial Intelligence Act, 8.
B. Goodwin, The EU AI Act Must Protect People on the Move.
UNHCR, Handbook on Procedures and Criteria for Determining Refugee Status and Guidelines on International Protection, Geneva, Feb. 2019, 183, available at: https://www.unhcr.org/media/handbook-procedures-and-criteria-determining-refugee-status-under-1951-convention-and-1967
K.A. Houser, “Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making”, Stanford Technology Law Review, 22(290), 2019, 317.
L. Smith-Khan, “Why Refugee Visa Credibility Assessments Lack Credibility: A Critical Discourse Analysis”, Griffith Law Review, 28(4), 2019, 425.
Kinchin & Mougouei, “What can Artificial Intelligence do for Refugee Status Determination?”, 388.
Houser, “Can AI Solve the Diversity Problem in the Tech Industry?”, 317.
Smith-Khan, “Why Refugee Visa Credibility Assessments Lack Credibility: A Critical Discourse Analysis”, 407.
D. Watson, “The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence”, Minds and Machines, 29(417), 2019, 423.
Supreme Court of Canada, Canada (Attorney General) v. Ward.
House of Lords, R (Saber) v. Secretary of State for the Home Department, Judgment, [2007] UKHL 57, 12 Dec. 2007, para. 2.
O.C. Okafor, Cascading toward “De-Solidarity”? The Unfolding of Global Refugee Protection, TWAILR Reflections, 30 Aug. 2019, available at: https://twailr.com/cascading-toward-de-solidarity-the-unfolding-of-global-refugee-protection/.
United Nations Economic and Social Council, Resolution 319 (XI): Refugees and stateless persons, E/RES/319 (XI), ECOSOC, Geneva, 1950, available at: https://digitallibrary.un.org/record/212490 (last visited 23 Nov. 2023).
G. Loescher, The UNHCR and World Politics: A Perilous Path, Oxford University Press, 2001, 2.
United Nations High Commissioner for Refugees, Aide-Memoire & Glossary of Case Processing Modalities, Terms and Concepts Applicable to Refugee Status Determination (RSD) under UNHCR’s Mandate, Geneva, UNHCR, 2020, 4, available at: https://www.refworld.org/docid/5a2657e44.html (last visited 23 Nov. 2023).