AI in the asylum procedure: Risk of wrong decisions and fundamental rights violations
The Austrian Federal Office for Immigration and Asylum (BFA) plans to introduce artificial intelligence (AI) into asylum procedures in order to shorten processing times and increase efficiency. In an increasingly digitized procedure, AI-based systems such as chatbots and data-analysis tools are to be used to process country-of-origin information. Both established programs such as DeepL and Complexity and specially developed applications are to be deployed, reports Exxpress.at.
These technologies are intended to help assess persecution risks and check the credibility of statements. However, experts such as the political scientist Laura Jung and the lawyer Angelika Adensamer have voiced serious concerns. They warn of the systems' susceptibility to errors and lack of transparency, as well as of possible violations of fundamental rights. Particularly worrying is the so-called black-box problem: the AI's decision-making process cannot be traced, which conflicts with rule-of-law principles.
Concerns about the technology
As part of the research project A.I.Syl at the University of Graz, significant risks were identified. Jung pointed out that language models such as ChatGPT can produce "hallucinations", which could lead to serious wrong decisions. Moreover, the new analytical methods can convey an impression of knowledge and objectivity while in fact resting on a weak factual basis. These concerns are reinforced by the BFA's plans to read out mobile phones for identity verification, which is regarded as a massive intrusion into privacy, according to Uni-graz.at.
Another risk is that AI models could adopt biases from their training data. Adensamer stresses that not only asylum seekers but all third-country nationals are affected by this expanded data processing. Skepticism about the promised cost reductions and efficiency gains is considerable, because flawed procedures could endanger fundamental rights and the rule of law.
The EU's AI Act and its implications
The current debate over the use of AI is intensified by the EU's planned AI Act. Although originally also intended to protect people fleeing persecution, the regulation pursues a risk-based approach that disadvantages refugees. Migration and security authorities, for example, frequently receive exemptions allowing them to use high-risk technologies. According to gwi-boell.de, technologies prohibited in other areas remain permitted in migration contexts and could deepen existing social discrimination.
All of this raises the question of how far the use of AI in asylum procedures is compatible with human rights and rule-of-law principles. The fear is that the technology will accelerate human rights violations rather than improve the protection of refugees. Civil-society organizations therefore call for stricter regulation and more transparency in the handling of these technologies.
On Monday, May 12, 2025, the results of the research project A.I.Syl will be presented and discussed at Forum Stadtpark in Graz. The discussion could prove decisive for how the use of AI in the sensitive area of asylum procedures is regulated in the future.
| Details | |
| --- | --- |
| Location | Graz, Austria |