News
https://odysee.com/@ovalmedia:d/mwgfd-impf-symposium:9
https://totalityofevidence.com/dr-david-martin/
Barely noticed by the world public, the first international criminal trial against those responsible for and the string-pullers behind the Corona "p(l)andemic" is taking shape. A complaint alleging "crimes against humanity" has been filed with the International Criminal Court (ICC) in The Hague, on behalf of the British people, against high-ranking and well-known elites. Corona vaccination: complaint before the International Criminal Court for crimes against humanity! UPDATE
Libera Nos A Malo (Deliver us from evil)
Transition News
Feed title: Homepage - Transition News
Federal government: black-green "definitely an option" for Ricarda Lang
According to Greens leader Ricarda Lang, the CDU/CSU and the Greens would be suitable coalition partners from 2025. Functioning coalitions already exist in three federal states. Baden-Württemberg's Minister President Winfried Kretschmann hopes for a "combination of ecology and economy".
Dengue fever outbreak in Brazil: is the healthcare system collapsing?
Brazil is battling its worst dengue outbreak in decades. A state of emergency has been declared in several areas.
Bank of America invests in fossil fuels again
Bank of America has walked back its pledge to support the green agenda and stop investing in hydrocarbons (coal, oil and natural gas) […]
Tucker Carlson officially confirms for the first time that there will be an interview with President Putin, and explains at length why it is necessary. Twitter/X
(As soon as a German translation is available, this will again be...
Bertelsmann Stiftung survey: many young Germans distrust government and parliament
Many young Germans doubt that politics can solve future challenges. Experts see this as a warning sign for democracy.
Peter Mayer
Feed title: tkp.at – the blog for science & politics
Core elements of the new WHO treaties entail a loss of national sovereignty for member states
Amendments to the International Health Regulations (IHR) are due to be adopted by the end of May, which would give the WHO a massive expansion of its binding powers under international law. […]
Hardware vulnerability in Apple's M-series chips makes it possible to crack encryption
Apple computers have long differed from Windows PCs in being harder to hack. That is one reason why some security-conscious computer and smartphone users […]
25 years less life expectancy for the "fully" vaccinated
A disturbing study has found that people "fully" vaccinated against Covid with mRNA injections face a loss of up to 25 years of their […]
Easter marches and warnings against peace
Easter is also the season of pacifist and anti-militarist Easter marches. Reason enough to warn against them.
Death after Covid shot: doctors in the sights of the judiciary
In Italy, five doctors face charges after the death of a young woman following the "vaccination".
NZZ
Feed title: Science - news and background on knowledge & research | NZZ
Fatty liver from sweeteners? Why you shouldn't let medical studies spoil your fun too quickly
Knowledge about the brain is growing, and that threatens psychiatry
INTERVIEW - "Today, at 71, I am stronger than ever before," says the researcher on ageing
People once believed that migratory birds flew to the moon for the winter. Thanks to researchers like these, we now know better
A new lunar lander is needed: the USA is launching a new competition to land on the moon before China
Verfassungsblog
Feed title: Verfassungsblog
Introduction to the Symposium on Algorithmic Fairness for Asylum Seekers and Refugees
AFAR (Algorithmic Fairness for Asylum Seekers and Refugees) is a collaborative research project based at the Centre for Fundamental Rights at the Hertie School, Berlin. It emerged out of conversations with Dr Derya Ozkul, then a postdoctoral researcher on the RefMig project. In her research on refugee recognition and reception practices, she was encountering increasing use of digital and algorithmic systems, in particular forms of automated (mostly part-automated) decision-making in the refugee regime, which we referred to as "newtech" for short. Derya Ozkul and Cathryn Costello developed the application for a collaborative project, drawing together a team from across Europe: Professor Thomas Gammeltoft-Hansen, assistant professor William Hamilton Byrne and doctoral researcher Asta Sofie Stage Jarlner at MOBILE, University of Copenhagen; Professor Iris Goldner-Lang and doctoral researchers Matija Kontak and Ana Kršinić, University of Zagreb; and Professor Martin Ruhs and Dr Lenka Dražanová at the Migration Policy Centre, European University Institute. The project launched in 2021, funded by the VW Foundation through the "Challenges for Europe" program.
Origins, Approaches and Concepts
Around the same time as we turned to study newtech, Petra Molnar and Lex Gill published "Bots at the Gate", a report on the automation of borders in Canada. In 2019, Philip Alston had published his important report on the automated welfare state, under the headline "World stumbling zombie-like into a digital welfare dystopia", followed by E. Tendayi Achiume's critical 2020 report on race discrimination in automated borders. By 2020, many human rights scholars were examining algorithmic systems, Lorna McGregor and Daragh Murray amongst others. The UK's post-Brexit settlement scheme, heavily automated, had been the focus of important studies and scholarship by Joe Tomlinson. Ana Beduschi's and Niovi Vavoula's work was also emerging at that time, as the Covid pandemic swept in greater digitization and automation of borders.
Meanwhile, data science was exploding as a field, and new tools of big data analytics enabled empirical legal scholars and political scientists to study entire corpora of asylum decisions, demonstrating high degrees of arbitrariness and discrimination within these systems. Studies such as Chen & Eagel's 2017 paper on the US system and Mathilde Emeriau's 2019 award-winning paper on the French system stood out. Preference-matching tools were emerging that promised to enable better allocation of asylum seekers and refugees to their places of refuge, including by enabling their choice in these matters. Newtech seemed like both the problem and, potentially, a solution to the problems of migration-asylum governance.
The central normative concept in AFAR was "fairness", including but not limited to the concept of procedural fairness familiar to lawyers. We also aimed to consider allocative and distributive fairness, and wider questions about the human rights and rule of law impacts of AI. At times, even this broad normative lens seemed too constraining. AI's impacts on human cognition, working life, climate and planetary justice are also screaming for greater attention. The term "AI" itself seems profoundly misleading: not particularly intelligent, and absolutely material, built on energy- and water-guzzling infrastructure and on armies of underpaid human workers in the Global South. Many AI experts and global leaders, including Nobel laureate Geoffrey Hinton, have warned of a possible AI apocalypse in open letters, including one presented to the UN General Assembly in September 2025. The ongoing environmental, labour and cognitive degradations attract less attention. The AFAR project's relatively modest aims felt at times provincial and legalistic; and yet treating AI and other digital technologies in their material, infrastructural context, and focusing on their actual workings within governance and public administration, seems less hype-inflected and probably wiser overall.
Contributions & Development
Derya Ozkul led the AFAR mapping of the use of newtech in the field, and the research on asylum seekers' and refugees' fairness perceptions. In the course of the project, she took up an assistant professorship at the University of Warwick, just one of the project's many placement successes. She has continued to research digital technologies in their wider context, and co-edited a recent special issue aiming to look "behind, beyond and around the black box".
The first postdoctoral researcher to join the project, Francesca Palmiotto, researched and published on AI and fairness in asylum, the concept of automated decision-making, and (with Derya Ozkul) the hurdles to strategic litigation. She also took the lead in establishing the TechLitigation Database. At the end of her term, she went on to an assistant professorship at IE Law School in Madrid. Upon completing his postdoctoral research at the EUI, Mirko Ćuković joined the AFAR team, where he has been working on technology regulation and non-discrimination.
Within the framework of this collaborative project, the Zagreb team investigated specific questions of fundamental rights protection at the EU's external borders, focusing respectively on the design and practice of national independent monitoring mechanisms and on the legality of Frontex's biometric practices. Drawing on his recent PhD thesis, Matija Kontak's blogpost "Biometric Technologies, Frontex and Fundamental Rights" demonstrates that many biometric border practices, including some of Frontex's novel ones, lack a sufficient legal basis and justification to be lawful under EU law.
The EUI team led the work on fairness perceptions, making important contributions on public perceptions of asylum in general, as well as on the role of AI. In her blogpost here, Dr Lenka Dražanová shows that public perceptions of the fairness of biometric border checks are fragmented and context-dependent, which challenges policy narratives of "objective" smart borders and underscores the need to ground biometric systems in the EU fundamental rights standards of necessity, proportionality and effective redress. The Copenhagen team broke new conceptual ground with their infrastructural turn in this field, as well as with work on credibility assessment in asylum, mobile phone data extraction and digital evidence. With many data science projects already underway at MOBILE, we were able to benefit from their expertise.
Challenges
The research faced three main challenges:
Firstly, the use of newtech is often shrouded in secrecy, both governmental and commercial. This made the mapping challenging, as is any attempt to understand the workings of newtech. As Ludivine Stewart and Deirdre Curtin explore in their blogpost in this symposium, "Beyond AI Secrecy: The Struggle for Transparency in European Migration Governance", the transparency obligations in the AI Act need urgent clarification to curtail the potentially sweeping exception for migration governance. Over time, the project clarified the diversity of "newtech" tools and use cases. While we still focus on automated (and part-automated) decision-making, our focus also widened to include various forms of automated and digital evidence, as explored by Francesca Palmiotto in her work clarifying "when is a decision automated", and by Thomas Gammeltoft-Hansen and William Hamilton Byrne in their contribution on Digital Evidence in Refugee Status Determination. Natalie Welfen's blogpost "Privatised Digital Borders" identifies the privatised digital borders in this field, and the additional regulatory and accountability challenges posed by the role of private commercial actors. In her new research project, she plans to explore participatory design as a way to reconfigure digital border tools.
Secondly, the project coincided with the "generative AI wave" and "AI hype". We started the project with certain tools in mind (admittedly, some based on junk science), which governments bought or developed to meet specific needs in specific contexts; mobile phone data extraction tools are the exemplar here. In November 2022, ChatGPT was launched, and the generative AI wave took off. It now seems that AI's takeover of administrative practices is inexorable. In joint, ongoing work, Francesca Palmiotto and Cathryn Costello have sought to clarify just which administrative tasks are suitable for automation. This proved no mere preliminary question, but rather necessitated a deep dive into the capacities of AI, bursting the AI hype bubble. Engaging with the work of computer scientists who also excel at public communication (Arvind Narayanan and Sayash Kapoor stand out) has proved a vital antidote to the "AI snake oil" peddled in policy circles. Some scholars' work stood out for not only looking at newtech with the lawyer's gaze, but also demonstrating a deep understanding of the tech itself. The work of Sandra Wachter and her team continues to inspire, bringing together law, ethics and computer science to great effect. In hindsight, a shortcoming of the project was the lack of a formal role for colleagues from computer science, and our one recommendation for any legal scholars embarking on new work in this field is to draw in computer scientists and philosophers from the outset. In this symposium, Angelika Adensamer and Laura Jung, in their contribution "Navigating Technologies in Asylum Procedures in Austria", reveal Austria's use of a range of AI tools, including commercial LLMs in country-of-origin research, with profound implications for the reliability of evidential assessment in asylum.
A third challenge relates to the legal framework: over the course of the project, an EU framework of great complexity emerged (data protection law, the AI Act, and new asylum measures), a giant legal mess. In the AFAR project, Francesca Palmiotto contributed important work on the legislative history of the EU AI Act. The application of the AI Act has barely started, and, under great pressure from big tech, the EU has already announced the "digital omnibus package", which will "simplify existing rules on Artificial Intelligence, cybersecurity, and data." The Center for AI and Digital Policy warned that the package will "let loose unsafe AI systems in the EU that will threaten public safety and fundamental rights, the very interests the EU AI Act was designed to protect", reminding readers that many jurisdictions have started to follow the EU's approach to classifying AI systems.
To make sense of the legislative complexity, the lawyer's tendency is often to search for principles that lend doctrinal clarity to complex, overlapping legislation. In this vein, Herwig Hofmann's blogpost here argues for "Rethinking the Notion of the File – Access, Fair Hearing and Effective Remedies in the Age of Automation". His approach is to reassert legal fundamentals in the face of technological change. In our own contribution, "Digital Visas – Deepening Discriminatory Borders?", we draw on the body of work on algorithmic systems' propensity to exacerbate discrimination at vast scale, as well as on the key features of proposed digital visas, to sound a warning call. In this work, we draw on the best of the scholarship that seeks to use the principles underlying EU equality law to challenge algorithmic practices, rather than diluting equality into the thin, data-science-driven concept of "debiasing".
Going deeper, in their contribution "What 'Real Risk' Means For AI-Assisted Refugee Status Determination", Maya Ellen Hertz, William Hamilton Byrne and Thomas Gammeltoft-Hansen argue against any wholesale automation in the asylum field, given its intrinsic lack of ground truth. They do, however, see some possible support roles for AI: nudging decision-makers away from common errors, or supporting applications through the process. Whether such tools are likely to be commissioned or developed in the current political environment remains to be seen.
Looking beyond EU law to international human rights law, Ben Hayes presents "Human Rights and Digital Border Governance". The AFAR team engaged with OHCHR guidance at different stages. As in the Council of Europe's HUDERIA methodology, there is much emphasis here on ex ante scrutiny of algorithmic systems. By integrating the EU AI Act's high-risk checklist with the contextual HUDERIA methodology of the Council of Europe's AI Convention, EU migration and asylum authorities can transform formal risk compliance into genuine, enforceable protection of fundamental rights.
Conclusion
Although the project nears its formal end, the AFAR research continues. Fairness remains an important normative consideration for assessing the workings of AI in public decision-making, and our work in progress continues to examine its implications for both asylum and visa decision-making. The generative AI wave has brought with it an unprecedented increase in corporate power, and an alignment between big tech and the Trump administration. This symposium emerged out of the AFAR final conference, held at the Hertie School on 18–19 September 2025. The conference opened with a keynote by Dr Matt Mahmoudi, "Algorithms as Borders: Race, Border and Capital Entanglements", which, drawing on his recent book, situated current deployments of digital systems within broader political economies of migration control and urged closer attention to distributive and procedural consequences for affected communities. Such approaches, attuned to the political economy of big tech and borders, remain pressing lest the march of automation and that of autocracy fall further into lockstep.
The post Introduction to the Symposium on Algorithmic Fairness for Asylum Seekers and Refugees appeared first on Verfassungsblog.