
News


https://odysee.com/@ovalmedia:d/mwgfd-impf-symposium:9
https://totalityofevidence.com/dr-david-martin/



Largely unnoticed by the global public, the first international criminal trial against those responsible for, and pulling the strings behind, the Corona p(l)andemic is taking shape: a complaint alleging “crimes against humanity” has been filed with the International Criminal Court (ICC) in The Hague, in the name of the British people, against high-ranking and well-known elites. Corona-Impfung: Anklage vor Internationalem Strafgerichtshof wegen Verbrechen gegen die Menschlichkeit! – UPDATE


Libera Nos A Malo (Deliver us from evil)

Transition News

Feed title: Homepage - Transition News



Peter Mayer

Feed title: tkp.at – Der Blog für Science & Politik



NZZ

Feed title: Wissenschaft - News und Hintergründe zu Wissen & Forschung | NZZ


Verfassungsblog

Feed title: Verfassungsblog


Anatomy of a Fall

On 11 February 2025, the Commission published its 2025 work programme and revealed the likely withdrawal of the Proposal for an Artificial Intelligence Liability Directive (‘AILD proposal’), citing “no foreseeable agreement” among Member States. The announcement followed the AI Action Summit in Paris, where U.S. Vice President JD Vance criticised the EU’s regulatory stance on AI. Although the U.S. opposition may have influenced the Commission’s decision, the prospects of the AILD proposal had not looked bright even before the summit.

The AILD proposal, initially approved by the Commission in September 2022, sought to strengthen procedural protection for plaintiffs in non-contractual liability claims against AI providers and users. It introduced mechanisms to compel the disclosure of technical documentation and ease the burden of proof on claimants. Alongside the revised Product Liability Directive (‘PLD’), the AILD was intended to complement the regulatory framework established by the AI Act (analysed here in this blog). By facilitating compensation for harm caused by AI systems, the proposal aimed to reinforce the accountability of AI providers and deployers, ultimately fostering trustworthy AI.

The upcoming withdrawal elicited mixed reactions in the European Parliament. Notably, members of the Internal Market and Consumer Protection (IMCO) committee expressed divergent views: some welcomed the withdrawal, labelling the proposal as “unnecessary and premature”, while others continued to support the harmonisation of AI liability. Civil society organisations expressed concerns over the anticipated withdrawal in an open letter to the Commission and advocated for a robust AI liability framework.

This blog post highlights the proposed AILD’s main merits and shortcomings and explores the implications of its likely withdrawal for EU tech regulation by clarifying the interplay between AI liability rules, the AI Act, and the PLD. As AI technology impacts several aspects of our lives, effective legal redress mechanisms are essential to promote accountability and safety in AI deployment. Conversely, the absence of clear liability rules and the persistence of information asymmetries between victims and companies may discourage litigation and lead to opaque settlements that obscure responsibility. Despite its weaknesses, the AILD proposal was an important initiative to enhance victims’ legal protection and promote the objectives of EU tech regulation. The post concludes by considering potential future pathways.

Brief critical remarks on the AILD proposal

Litigation involving technology-related harm often faces considerable legal uncertainty, especially when AI systems are involved (see the Commission’s Expert Group 2019 Report). Due to their inherent complexity, opacity, and vulnerability, AI systems complicate the judicial assessment of core elements of tort law – namely, causation, damage, fault, and duty of care. The fragmentation of tort law across EU Member States leads to significantly different outcomes in compensation claims, depending on national evidentiary rules, definitions of essential elements, and the applicable liability regimes (fault-based, vicarious, or strict).

Against this backdrop, the AILD proposal aims to ease the burden of proof for victims through targeted procedural mechanisms, leaving the determination of liability regimes to Member States. It primarily focuses on one specific element of tort law, namely causation. Rather than tackling standards of proof or substantive causation rules – which vary significantly across Member States – the proposal introduces a rebuttable presumption of causation, contingent on the demonstration of the remaining elements of tort (Article 4).

Additionally, national courts can order tortfeasors to release relevant technical documentation if the claimant establishes the “plausibility” of a damage claim (Article 3). Remarkably, a disclosure order can be requested even before a liability claim is brought. However, disclosure must be limited to what is “necessary and proportionate”, taking trade secrets and confidentiality into account.

Hurdles related to transparency and explainability in AI litigation are well recognised. Victims frequently face significant information asymmetries, which can be exacerbated by IP rights that shield tortfeasors from information requests. National courts, data protection authorities, and the European Court of Justice (ECJ; C-634/21 Schufa Holding; C-203/22 Dun & Bradstreet Austria) have addressed these concerns in various disputes involving automated decision-making – with mixed outcomes. Comparisons between the approaches by the German Federal Supreme Court (BGH 28.01.2014 – VI ZR 156/13, granting trade secret protection to the formula of a private company’s credit scoring algorithm) and the District Court of The Hague (Rechtbank Den Haag, 05-02-2020, finding that ethnicity bias in automated risk assessment for targeted social fraud investigation violates Article 8 of the ECHR) illustrate the variability in court attitudes. In this context, the procedural measures introduced by the AILD proposal represent a positive step towards enhancing victims’ access to evidence and redress.

Nonetheless, critics argue that the proposal risks over-regulation. As AI becomes increasingly pervasive (especially in interconnected IoT environments), the scope of AI liability rules may expand beyond expectations. However, the proposal’s focus on procedural rather than substantive rules arguably mitigates the risk of excessive regulatory reach.

At the same time, the vague wording of key procedural thresholds jeopardises the objective of legal harmonisation. For example, the presumption of causation for non-high-risk AI systems applies only when proving causation entails “excessive difficulty”. Moreover, the requirement to establish the “plausibility” of the claim in order to obtain a disclosure order risks overwhelming potential claimants, thereby undermining the utility of the provision (see here, p. 713). Furthermore, the proposal leaves fundamental substantive questions unanswered, such as the definition of fault, duties of care, or foreseeability in the context of AI-related accidents (see here, p. 171 f.).

Moreover, Recital 15 suggested that the directive’s scope under Article 2(5) did not include damage “caused by a human assessment” when the AI system only provides information or a recommendation to the operator making the final assessment (a decision-support system). This provision significantly watered down the directive’s impact, since “mere” decision-support systems can still cause significant harm. Human operators may rely heavily on preliminary assessments by AI systems when making final decisions, failing to scrutinise the automated input effectively. Nevertheless, this pitfall could have been overcome by case law. Drawing on the ECJ ruling in the Schufa Holding case on profiling (para. 50 of the judgment), it could have been argued that harm is not “caused by a human assessment” within the meaning of Recital 15 when that assessment has been substantially shaped by information or a recommendation provided by an AI system.

The AILD proposal struck a balance between harmonisation and national discretion. Yet, as a compromise instrument, it left numerous critical issues unresolved and contained severe loopholes. Furthermore, harmonising AI liability rules seemed ambitious considering the long-standing difficulty in achieving consensus on EU tort law, which includes piecemeal provisions and does not constitute a comprehensive normative framework.

Implications of the AILD proposal’s anticipated withdrawal for EU technology regulation

The withdrawal of the AILD must be assessed in light of the existing regulatory landscape. The AI Act focuses essentially on ex ante safety measures and post-market surveillance. Yet, despite compliance with safety requirements, accidents may still occur, making effective ex post remedies critical to promoting public trust in AI. Sanctions under the AI Act may not sufficiently prompt responsible innovation, as the AI Act does not provide individuals with actionable rights beyond the right to explanation and the right to lodge complaints with market surveillance authorities (as pointed out here).

Thus, AI liability rules are meant to incentivise compliance with the AI Act. Liability rules are indeed intimately intertwined with safety legislation. Obligations under the AI Act and related AI standards provide benchmarks for judicial assessments of fault and duties of care under national tort law (as explained here at p. 233). Therefore, liability can be a catalyst of responsible innovation.

In this respect, the revised PLD marks a significant step by incorporating software into the legal definition of “product”, thereby extending product liability to non-tangible AI systems. Still, the PLD regime applies alongside national tort law. The former covers only certain types of harm – personal injury, property damage, and data loss – excluding infringements of personality rights and pure financial loss. Moreover, it does not include damage to commercially (as opposed to privately) used property. It also provides a “state-of-the-art” defence shielding manufacturers from liability for unknown defects that could not be discovered through the scientific and technical knowledge available when the product was placed on the market or put into service. In such cases, national tort law may step in, making the procedural alleviations envisaged by the AILD proposal particularly valuable.

From a business perspective, harmonising liability rules would reduce compliance costs inflated by legal uncertainty and favour cross-border commerce. Yet, the AILD proposal was limited to procedural measures without altering substantive liability rules, making the support of national tort law experts still essential for companies. Hence, it would have impacted victims more positively than business actors. Consequently, its withdrawal also affects victims more than businesses.

Prospects and alternative approaches

The European Parliament or the Council could challenge the Commission’s withdrawal decision before the ECJ. However, unless the withdrawal decision lacks adequate justification, the Court would likely uphold it based on prior case law (see the ECJ ruling in C-409/13 Council v Commission).

Nevertheless, the Commission has not formally withdrawn the proposal yet and has signalled the possibility of introducing a new legislative initiative. One alternative could entail adopting a regulation instead of a directive (as suggested by Hacker). Another possibility would be to expand harmonisation by regulating liability regimes for AI providers and users, in addition to, or as a replacement for, procedural rules. However, the top-down harmonisation of liability regimes interferes significantly with national laws. A more tailored, sector-specific regulation of liability based on particular use cases may offer a more feasible path, aligning with both existing EU tort law and the AI Act’s risk-based classification (as suggested here and here).

The European Parliament’s 2020 Resolution on civil liability for AI already recommended harmonising liability regimes through a two-tiered system: strict liability for high-risk AI systems and fault-based liability for operators of other systems. However, imposing strict liability on all front-end users of high-risk AI systems, including non-expert users, may prove disproportionate.

The material scope of future liability legislation could also be expanded beyond AI to encompass all software. Non-sophisticated technologies falling outside the AI Act’s definition of AI systems may still cause significant harm and pose comparable evidentiary challenges.

Finally, the AILD proposal has been criticised for lacking robust scientific and empirical foundations (see pp. 34 ff.), similar to the AI Act. A more interdisciplinary regulatory approach, by increasing the involvement of domain-specific experts such as computer scientists, could arguably enhance the quality and relevance of future legislative proposals.

