News
https://odysee.com/@ovalmedia:d/mwgfd-impf-symposium:9
https://totalityofevidence.com/dr-david-martin/
Largely unnoticed by the world public, the first international criminal trial against those responsible for, and the string-pullers behind, the Corona p(l)andemic is taking shape: a complaint alleging "crimes against humanity" has been filed with the International Criminal Court (ICC) in The Hague, on behalf of the British people, against high-ranking and prominent elites.

Corona vaccination: charges before the International Criminal Court for crimes against humanity! – UPDATE
Libera Nos A Malo (Deliver us from evil)
Transition News

Feed title: Homepage - Transition News

Federal government: black-green "definitely an option" for Ricarda Lang
In the view of Green Party leader Ricarda Lang, the CDU/CSU and the Greens would be suitable coalition partners from 2025. Functioning coalitions already exist in three federal states. Baden-Württemberg's Minister President Winfried Kretschmann hopes for a "union of ecology and economy".

Dengue fever outbreak in Brazil: is the healthcare system collapsing?
Brazil is battling its worst dengue outbreak in decades. A state of emergency has been declared in several regions.

Bank of America invests in fossil fuels again
Bank of America has walked back its promise to support the green agenda and to stop investing in hydrocarbons – coal, crude oil, and natural gas – […]

Tucker Carlson officially confirms for the first time that there will be an interview with President Putin, and explains at length why it is necessary. Twitter/X
(As soon as a German translation is available, this will again be...
Bertelsmann Stiftung survey: many young Germans distrust government and parliament
Many young Germans doubt that politics can solve the challenges ahead. Experts see this as a warning signal for democracy.

Peter Mayer

Feed title: tkp.at – Der Blog für Science & Politik

Core pieces of the new WHO treaties entail a loss of national sovereignty for the member states
As is well known, amendments to the International Health Regulations (IHR) are to be adopted by the end of May that would give the WHO a massive expansion of its legally binding powers under international law. […]

Hardware vulnerability in Apple's M-series chips makes it possible to crack encryption
Apple computers have long differed from Windows PCs in being harder to hack. That is one reason why some security-conscious computer and smartphone users […]

25 years less life expectancy for the "fully" vaccinated
A disturbing study has found that people "fully" vaccinated against Covid with mRNA injections face a loss of up to 25 years of their […]

Easter marches and warnings against peace
Easter is also the time of pacifist and anti-militarist Easter marches. Reason enough to warn against them.

Death after the Covid shot: doctors in the sights of the justice system
In Italy, five doctors face charges after the death of a young woman following the "vaccination".
NZZ
Feed title: Wissenschaft – News und Hintergründe zu Wissen & Forschung | NZZ
Help, it's asparagus season! Why some people produce foul-smelling urine after eating asparagus, and why not everyone can perceive the odour
Grain between apple trees and fodder hedges for cows: so-called agroforestry makes fields climate-resilient – and is even supposed to pay off for farmers
Science? No thanks. Trump wants massive cuts to research budgets
There it blows! Ancient Chinese poems reveal where the endangered Yangtze finless porpoise once lived
The international shipping industry is planning financial incentives for green fuels. Critics say this does not go far enough
Verfassungsblog
Feed title: Verfassungsblog
Anatomy of a Fall
On 11 February 2025, the Commission published its 2025 work programme and revealed the likely withdrawal of the Proposal for an Artificial Intelligence Liability Directive ("AILD proposal"), citing "no foreseeable agreement" among Member States. The announcement followed the AI Action Summit in Paris, where U.S. Vice President JD Vance criticised the EU's regulatory stance on AI. Although the U.S. opposition may have influenced the Commission's decision, the future of the AILD proposal did not look so bright before the summit.
The AILD proposal, initially approved by the Commission in September 2022, sought to strengthen procedural protection for plaintiffs in non-contractual liability claims against AI providers and users. It introduced mechanisms to compel the disclosure of technical documentation and ease the burden of proof on claimants. Alongside the revised Product Liability Directive ("PLD"), the AILD was intended to complement the regulatory framework established by the AI Act (analysed here in this blog). By facilitating compensation for harm caused by AI systems, the proposal aimed to reinforce the accountability of AI providers and deployers, ultimately fostering trustworthy AI.
The upcoming withdrawal elicited mixed reactions in the European Parliament. Notably, members of the Internal Market and Consumer Protection (IMCO) committee expressed divergent views: some welcomed the withdrawal, labelling the proposal as "unnecessary and premature", while others continued to support the harmonisation of AI liability. Civil society organisations expressed concerns over the anticipated withdrawal in an open letter to the Commission and advocated for a robust AI liability framework.
This blog post highlights the proposed AILD's main merits and shortcomings and explores the implications of its likely withdrawal for EU tech regulation by clarifying the interplay between AI liability rules, the AI Act, and the PLD. As AI technology impacts several aspects of our lives, effective legal redress mechanisms are essential to promote accountability and safety in AI deployment. Conversely, the absence of clear liability rules and the persistence of information asymmetries between victims and companies may discourage litigation and lead to opaque settlements that obscure responsibility. Despite its weaknesses, the AILD proposal was an important initiative to enhance victims' legal protection and promote the objectives of EU tech regulation. The post concludes by considering potential future pathways.
Brief critical remarks on the AILD proposal
Litigation involving technology-related harm often faces considerable legal uncertainty, especially when AI systems are involved (see the Commission's Expert Group 2019 Report). Due to their inherent complexity, opacity, and vulnerability, AI systems complicate the judicial assessment of core elements of tort law – namely, causation, damage, fault, and duty of care. The fragmentation of tort law across EU Member States leads to significantly different outcomes in compensation claims, depending on national evidentiary rules, definitions of essential elements, and the applicable liability regimes (fault-based, vicarious, or strict).
Against this backdrop, the AILD proposal aims to ease the burden of proof for victims through targeted procedural mechanisms, leaving the determination of liability regimes to Member States. It primarily focuses on one specific element of tort law, namely causation. Rather than tackling standards of proof or substantive causation rules – which vary significantly across Member States – the proposal introduces a rebuttable presumption of causation, contingent on the demonstration of the remaining elements of tort (Article 4).
Additionally, national courts can order tortfeasors to release relevant technical documentation if the claimant establishes the "plausibility" of a damage claim (Article 3). Remarkably, the disclosure order can be requested even before a liability claim is brought. However, disclosure must be limited to what is "necessary and proportionate", taking into account trade secrets and confidentiality.
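To make the interplay between these two procedural mechanisms concrete, the sketch below models them as simple decision logic. It is an illustrative simplification under stated assumptions: the class and function names are hypothetical, and the boolean flags compress legal tests (such as "plausibility" or "necessity and proportionality") that in reality require judicial assessment.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    # Hypothetical, heavily simplified view of an AI liability claim under
    # the AILD proposal; each flag stands in for a judicial finding.
    plausible: bool                   # Art. 3: the damage claim is "plausible"
    fault_shown: bool                 # claimant established fault / breach of a duty of care
    damage_shown: bool                # claimant established damage
    causation_rebutted: bool = False  # defendant rebutted the presumption

def disclosure_order_available(claim: Claim) -> bool:
    """Art. 3 (simplified): courts may order disclosure of technical
    documentation once the claimant shows the claim is plausible; the order
    must remain limited to what is necessary and proportionate."""
    return claim.plausible

def causation_presumed(claim: Claim) -> bool:
    """Art. 4 (simplified): causation is rebuttably presumed once the
    remaining elements of tort (fault and damage) are demonstrated."""
    return claim.fault_shown and claim.damage_shown and not claim.causation_rebutted

claim = Claim(plausible=True, fault_shown=True, damage_shown=True)
print(disclosure_order_available(claim))  # True: disclosure may be ordered, even pre-trial
print(causation_presumed(claim))          # True: the burden shifts to the defendant
```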
Hurdles related to transparency and explainability in AI litigation are well recognised. Victims frequently face significant information asymmetries, which can be exacerbated by IP rights that shield tortfeasors from information requests. National courts, data protection authorities, and the European Court of Justice (ECJ; C-634/21 Schufa Holding; C-203/22 Dun & Bradstreet Austria) have addressed these concerns in various disputes involving automated decision-making – with mixed outcomes. Comparisons between the approaches of the German Federal Supreme Court (BGH 28.01.2014 – VI ZR 156/13, granting trade secret protection to the formula of a private company's credit scoring algorithm) and the District Court of The Hague (Rechtbank Den Haag, 05-02-2020, finding that ethnicity bias in automated risk assessment for targeted social fraud investigation violates Article 8 of the ECHR) illustrate the variability in court attitudes. In this context, the procedural measures introduced by the AILD proposal represent a positive step towards enhancing victims' access to evidence and redress.
Nonetheless, critics argue that the proposal risks over-regulation. As AI becomes increasingly pervasive (especially in interconnected IoT environments), the scope of AI liability rules may expand beyond expectations. However, the proposalâs focus on procedural rather than substantive rules arguably mitigates the risk of excessive regulatory reach.
At the same time, the vague wording of key procedural thresholds jeopardises the objective of legal harmonisation. For example, the presumption of causation for non-high-risk AI systems applies only when proving causation entails "excessive difficulty". Moreover, the requirement to establish the "plausibility" of the claim for obtaining a disclosure order risks overwhelming potential claimants, thereby undermining the utility of that provision (see here, p. 713). Furthermore, the proposal leaves fundamental substantive questions unanswered, such as the definition of fault, duties of care, or foreseeability in the context of AI-related accidents (see here, p. 171 f.).
Moreover, Recital 15 suggested that the directive's scope under Article 2(5) did not include damage "caused by a human assessment" when the AI system only provides information or a recommendation to the operator making the final assessment (a decision-support system). This provision significantly watered down the directive's impact, since "mere" decision-support systems can still cause significant harm. Human operators may rely heavily on preliminary assessments by AI systems when making final decisions, failing to scrutinise the automated input effectively. Nevertheless, this pitfall could have been overcome by case law. Drawing on the ECJ ruling in the Schufa Holding case on profiling (para. 50 of the judgment), it could have been argued that harm is not "caused by a human assessment" within the meaning of Recital 15 when that assessment has been substantially shaped by information or a recommendation provided by an AI system.
The AILD proposal struck a balance between harmonisation and national discretion. Yet, as a compromise instrument, it left numerous critical issues unresolved and contained severe loopholes. Furthermore, harmonising AI liability rules seemed ambitious considering the long-standing difficulty in achieving consensus on EU tort law, which includes piecemeal provisions and does not constitute a comprehensive normative framework.
Implications of the AILD proposal's anticipated withdrawal for EU technology regulation
The withdrawal of the AILD must be assessed in light of the existing regulatory landscape. The AI Act focuses essentially on ex ante safety measures and post-market surveillance. Yet, despite compliance with safety requirements, accidents may still occur, making effective ex post remedies critical to promoting public trust in AI. Sanctions under the AI Act may not sufficiently prompt responsible innovation, as the AI Act does not provide individuals with actionable rights beyond the right to explanation and the right to lodge complaints with market surveillance authorities (as pointed out here).
Thus, AI liability rules are meant to incentivise compliance with the AI Act. Liability rules are indeed intimately intertwined with safety legislation. Obligations under the AI Act and related AI standards provide benchmarks for judicial assessments of fault and duties of care under national tort law (as explained here at p. 233). Therefore, liability can be a catalyst of responsible innovation.
In this respect, the revised PLD marks a significant step by incorporating software into the legal definition of "product", thereby extending product liability to non-tangible AI systems. Still, the PLD regime applies alongside national tort law. The former covers only certain types of harm – personal injury, property damage, and data loss – excluding infringements of personality rights and pure financial loss. Moreover, it does not include damage to commercially (as opposed to privately) used property. It also provides a "state-of-the-art" defence shielding manufacturers from liability for unknown defects that could not be discovered through the scientific and technical knowledge available when the product was placed on the market or put into service. In such cases, national tort law may step in, making the procedural alleviations envisaged by the AILD proposal particularly valuable.
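The resulting division of labour between the revised PLD and national tort law can be pictured as a coverage check. The sketch below is a hypothetical simplification of the scope rules just described: its labels are invented, and it deliberately collapses both the scope exclusions and the state-of-the-art defence into a single "no recovery under the PLD" outcome.

```python
from dataclasses import dataclass

# Harm types covered by the revised PLD, as described above.
PLD_HARM_TYPES = {"personal_injury", "property_damage", "data_loss"}

@dataclass
class HarmEvent:
    harm_type: str                               # e.g. "pure_financial_loss", "personality_right"
    property_used_commercially: bool = False     # commercially used property is excluded
    defect_discoverable_at_release: bool = True  # False triggers the "state-of-the-art" defence

def recoverable_under_pld(event: HarmEvent) -> bool:
    if event.harm_type not in PLD_HARM_TYPES:
        return False  # e.g. personality rights, pure financial loss
    if event.harm_type == "property_damage" and event.property_used_commercially:
        return False  # damage to commercially used property is excluded
    return event.defect_discoverable_at_release  # otherwise the defence shields the manufacturer

# Gaps fall back on national tort law, where the AILD's procedural
# alleviations would have mattered most:
print(recoverable_under_pld(HarmEvent("pure_financial_loss")))  # False
print(recoverable_under_pld(HarmEvent("personal_injury")))      # True
```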
From a business perspective, harmonising liability rules would reduce compliance costs inflated by legal uncertainty and favour cross-border commerce. Yet the AILD proposal was limited to procedural measures and did not alter substantive liability rules, so companies would still have needed the support of national tort law experts. Hence, it would have benefited victims more than business actors; consequently, its withdrawal also hits victims harder than businesses.
Prospects and alternative approaches
The European Parliament or the Council could challenge the Commission's withdrawal decision before the ECJ. However, unless the withdrawal decision lacks adequate justification, the Court would likely uphold it based on prior case law (see the ECJ ruling in C-409/13 Council v Commission).
Nevertheless, the Commission has not formally withdrawn the proposal yet and has signalled the possibility of introducing a new legislative initiative. One alternative could entail adopting a regulation instead of a directive (as suggested by Hacker). Another possibility would be to expand harmonisation by regulating liability regimes for AI providers and users, in addition to, or as a replacement for, procedural rules. However, the top-down harmonisation of liability regimes interferes significantly with national laws. A more tailored, sector-specific regulation of liability based on particular use cases may offer a more feasible path, aligning with both existing EU tort law and the AI Act's risk-based classification (as suggested here and here).
The European Parliament's 2020 Resolution on civil liability for AI already recommended harmonising liability regimes through a two-tiered system: strict liability for high-risk AI systems and fault-based liability for operators of other systems. However, imposing strict liability on all front-end users of high-risk AI systems, including non-expert users, may prove disproportionate.
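Schematically, the Resolution's allocation reduces to a single branch on the system's risk classification, which is precisely why it treats expert and non-expert operators alike. A minimal sketch with hypothetical labels (the Resolution itself contains no such formalisation):

```python
from enum import Enum

class Regime(Enum):
    STRICT = "strict liability"            # operator liable irrespective of fault
    FAULT_BASED = "fault-based liability"  # claimant must prove the operator's fault

def regime_for_operator(high_risk_system: bool) -> Regime:
    # Simplified reading of the EP's 2020 Resolution: strict liability for
    # operators of high-risk AI systems, fault-based liability otherwise.
    return Regime.STRICT if high_risk_system else Regime.FAULT_BASED

print(regime_for_operator(high_risk_system=True))   # Regime.STRICT
print(regime_for_operator(high_risk_system=False))  # Regime.FAULT_BASED
```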
The material scope of future liability legislation could also be expanded beyond AI to encompass all software. Non-sophisticated technologies falling outside the AI Act's definition of AI systems may still cause significant harm and pose comparable evidentiary challenges.
Finally, the AILD proposal has been criticised for lacking robust scientific and empirical foundations (see pp. 34 ff.), similar to the AI Act. A more interdisciplinary regulatory approach, by increasing the involvement of domain-specific experts such as computer scientists, could arguably enhance the quality and relevance of future legislative proposals.
Lawsuits about state actions and policies in response to the coronavirus (COVID-19) pandemic