News
https://odysee.com/@ovalmedia:d/mwgfd-impf-symposium:9
https://totalityofevidence.com/dr-david-martin/
Barely noticed by the global public, the first international criminal trial against those responsible for and pulling the strings of the Corona p(l)andemic is taking shape: a complaint alleging "crimes against humanity" has been filed with the International Criminal Court (ICC) in The Hague on behalf of the British people against high-ranking and prominent elites. Corona vaccination: charges before the International Criminal Court for crimes against humanity! – UPDATE
Libera Nos A Malo (Deliver us from evil)
Transition News

Feed title: Homepage - Transition News
Federal government: for Ricarda Lang, black-green is "definitely an option"
According to Green Party leader Ricarda Lang, the CDU/CSU and the Greens would make suitable coalition partners from 2025. Functioning coalitions already exist in three federal states. Baden-Württemberg's Minister-President Winfried Kretschmann hopes for a "combination of ecology and economy". Dengue fever outbreak in Brazil: is the health system collapsing?
Brazil is battling its worst dengue outbreak in decades. A state of emergency has been declared in several areas. Bank of America invests in fossil fuels again
Bank of America has walked back its pledge to support the green agenda and to stop investing in hydrocarbons – coal, oil and natural gas – […] Tucker Carlson officially confirms for the first time that an interview with President Putin will take place, and explains in detail why it is necessary. Twitter/X
(As soon as a German translation is available, this will be...)
Bertelsmann Stiftung survey: many young Germans distrust government and parliament
Many young Germans doubt whether politics can solve future challenges. Experts see this as a warning signal for democracy.

Peter Mayer

Feed title: tkp.at – Der Blog für Science & Politik
Core provisions of the new WHO treaties entail a loss of national sovereignty for member states
As is well known, amendments to the International Health Regulations (IHR) are due to be adopted by the end of May, massively expanding the WHO's binding powers under international law. […] Hardware vulnerability in Apple's M-series chips makes it possible to crack encryption
Apple computers have long differed from Windows PCs in that they are harder to hack. That is one reason why some security-conscious computer and smartphone users […] 25 years less life expectancy for the "fully" vaccinated
A disturbing study has found that people "fully" vaccinated against Covid with mRNA injections face a loss of up to 25 years of their […] Easter marches and warnings against peace
Easter is also the season of pacifist and anti-militarist Easter marches. Reason enough to warn against them. Death after Covid shot: doctors in the sights of the judiciary
In Italy, five doctors face charges after the death of a young woman caused by the "vaccination".
NZZ

Feed title: Wissenschaft - News und Hintergründe zu Wissen & Forschung | NZZ
Astronomers observe a merger of two black holes that should not actually exist
80 years of the atomic bomb: a history war is raging in the secret American city of Los Alamos
COMMENT - The era of climate policy purism is over. That is good news for climate protection
EXPLAINED - Harmless change or cancer? What awaits women who have an abnormal cervical smear
Chikungunya fever is rampant on La Réunion. Now the virus could make the leap to Europe
Verfassungsblog

Feed title: Verfassungsblog
The GPAI Code of Practice
On 10 July 2025, the European Commission published the final version of its Code of Practice for General-Purpose AI (GPAI) – a voluntary rulebook developed by a group of independent experts and more than 1,400 stakeholders from industry, academia, civil society, and rightsholders.
Under Articles 53 to 55 of the AI Act, powerful AI models – such as GPT-4, Gemini, Llama or Mistral – will be subject to binding obligations starting in August 2025 (for new models) and August 2027 (for models already on the market). The Code is meant to prepare providers for what’s ahead: it offers a straightforward way to start complying with these future obligations under the AI Act.
While not all stakeholders are satisfied with the final outcome, several have already expressed interest in signing the Code. Its success will ultimately depend on whether it manages to reduce compliance burdens and provide legal certainty. Even if not universally adopted, it could still serve as a regulatory benchmark under the AI Act.
Regulation of GPAI models under the AI Act
The Commission’s initial proposal for the AI Act did not specifically address GPAI models. However, the attention that these powerful models received during the legislative process led to the inclusion of specific obligations. These obligations address the substantial risks GPAI models pose – risks not adequately captured by the proposed rules on AI systems and practices.
With this inclusion, the AI Act treats GPAI models as a distinct category with its own risk classification criteria. The AI Act classifies AI systems and practices into unacceptable-risk AI practices (Art. 5), which are prohibited; high-risk AI systems (Art. 6), which are subject to mandatory requirements; and minimal-risk AI systems, which are not subject to any mandatory requirements.
Meanwhile, the AI Act imposes horizontal mandatory requirements on all GPAI models (Art. 53(1)), particularly with regard to transparency obligations and copyright. Furthermore, GPAI models posing a systemic risk are subject to additional safety and security obligations (Art. 55(1)).
Notably, the classification criteria for GPAI models are independent of those for AI systems and practices. The classification of AI systems and practices rests on lists of prohibited practices (Art. 5) and critical domains (particularly Annex III). In contrast, classifying a GPAI model as posing systemic risk requires assessing whether it has "a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain" (Art. 3(65) AI Act).
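To make the two independent classification tracks easier to compare, here is a minimal Python sketch; the type and function names are illustrative assumptions, not terms from the AI Act:

```python
from enum import Enum, auto

class AISystemRisk(Enum):
    """Risk tiers for AI systems and practices under the AI Act."""
    UNACCEPTABLE = auto()  # prohibited practices (Art. 5)
    HIGH = auto()          # mandatory requirements (Art. 6, Annex III)
    MINIMAL = auto()       # no mandatory requirements

class GPAIClassification(Enum):
    """Separate, independent classification for GPAI models."""
    STANDARD = auto()       # transparency and copyright duties (Art. 53(1))
    SYSTEMIC_RISK = auto()  # additional safety and security duties (Art. 55(1))

def gpai_duties(classification: GPAIClassification) -> list[str]:
    """Map a GPAI classification to the obligation sets discussed above."""
    duties = ["transparency (Art. 53(1)(a)-(b))", "copyright policy (Art. 53(1)(c))"]
    if classification is GPAIClassification.SYSTEMIC_RISK:
        duties.append("safety and security obligations (Art. 55(1))")
    return duties

print(gpai_duties(GPAIClassification.SYSTEMIC_RISK))
```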
Code of Practice for GPAI models
To facilitate compliance with the obligations imposed on GPAI model providers, the AI Act calls for the creation of codes of practice through a co-regulatory procedure (Art. 56), thereby establishing clear parameters for ensuring and demonstrating compliance.
The recently published final version of the Code of Practice is divided into three chapters: Transparency, Copyright and Safety and Security. While the Transparency and Copyright chapters are relevant to all GPAI providers, the Safety and Security chapter only applies to GPAI models with systemic risks. This three-chapter structure first appeared in the third draft and reflects the scope of the obligations, given that the strict safety and security obligations under Art. 55(1) of the AI Act apply only to GPAI models that pose systemic risks.
Transparency Chapter
The Transparency chapter outlines how providers must make information available to the AI Office and the national competent authorities upon request pursuant to Art. 53(1)(a), Annex XI, Section I, and proactively to downstream providers under Art. 53(1)(b) and Annex XII. In particular, the chapter defines the specific information that providers must document. It introduces a Model Documentation Form to help compile the relevant information; the form also indicates whether each element is relevant to the AI Office, national competent authorities, or downstream providers, and should help GPAI providers make information available in a timely manner.
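Purely as a hypothetical illustration of the idea behind the Model Documentation Form (the real form is a fixed template published alongside the Code), each documented item might be tagged with its intended audience, for example:

```python
from dataclasses import dataclass

@dataclass
class DocumentationItem:
    """One entry of a documentation form, tagged with its audiences."""
    name: str                               # e.g. "intended tasks", "licence"
    content: str                            # the documented information itself
    for_ai_office: bool = False             # available to the AI Office on request
    for_national_authorities: bool = False  # available to national authorities on request
    for_downstream_providers: bool = False  # shared proactively downstream

form = [
    DocumentationItem("acceptable use policy", "...",
                      for_ai_office=True, for_downstream_providers=True),
    DocumentationItem("training process description", "...",
                      for_ai_office=True, for_national_authorities=True),
]

# Items to be passed proactively to downstream providers (Art. 53(1)(b)):
downstream_view = [item.name for item in form if item.for_downstream_providers]
print(downstream_view)
```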
The chapter also clarifies the limits of these transparency obligations. Specifically, it designates the AI Office as the point of contact for national competent authorities requesting information from GPAI providers. National authorities must therefore route their requests through the AI Office, which then obtains the relevant information in accordance with Art. 91(4) AI Act. Furthermore, the chapter reiterates that requests for information must specify the legal basis and purpose, and that the information requested must be strictly necessary for the tasks performed by the AI Office or the national authority.
Copyright Chapter
The Copyright chapter outlines what a provider's copyright policy must include under Art. 53(1)(c). In particular, the provider may only reproduce and extract lawfully accessible, copyright-protected content, and may not circumvent effective technological measures (as defined in Art. 6(3) of Directive 2001/29/EC). Providers must exclude content from a dynamic list of websites that will be compiled by EU authorities. Additionally, web crawling must respect rights reservations, including machine-readable reservations (e.g. via robots.txt) or other appropriate methods.
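As a minimal sketch of what honouring a machine-readable rights reservation could look like in practice, assuming a crawler written in Python and using only the standard library (the crawler name is hypothetical; a real pipeline would also check the EU-maintained exclusion list and other reservation methods):

```python
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleGPAICrawler"  # hypothetical crawler name

def may_crawl(page_url: str, robots_url: str) -> bool:
    """Return True only if the site's robots.txt does not reserve
    rights against this crawler for the given URL."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse robots.txt
    return parser.can_fetch(USER_AGENT, page_url)

if may_crawl("https://example.com/article", "https://example.com/robots.txt"):
    print("rights not reserved: page may be considered for the corpus")
else:
    print("rights reserved via robots.txt: skip this page")
```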
To mitigate the risk of copyright infringement, providers must implement appropriate technical safeguards and prohibit copyright-infringing uses in their acceptable use policy. Finally, the GPAI provider must designate a point of contact for affected rightsholders and implement a complaint mechanism for non-compliance.
Safety and Security Chapter
The Safety and Security chapter focuses on the specific obligations of providers of GPAI models posing systemic risk under Art. 55(1) AI Act. Providers must establish and commit to a Safety and Security Framework, including a full systemic risk assessment and mitigation process that continues after the model is placed on the market, along with updates to the model report. If the model poses systemic risk, the provider must monitor it before and after release and allow external evaluators to assess it – a controversial provision of the Code.
Appendix 1 of the chapter supports providers by outlining the types, nature and sources of systemic risk they must assess. Furthermore, the chapter provides a list of default systemic risks, including chemical, biological, radiological and nuclear risks; loss of control; cyber offences; and harmful manipulation risks.
Once the systemic risks have been identified, the GPAI provider must analyse whether they are acceptable according to its own pre-determined acceptance criteria. If a systemic risk is unacceptable, the provider must implement mitigation measures until the acceptance criteria are met. Finally, the chapter details how providers must track, document and report serious incidents and corrective measures pursuant to Art. 55(1)(c) AI Act.
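The assess-compare-mitigate cycle described above can be sketched schematically; the function names, mitigation labels and the single numeric "risk score" are illustrative assumptions rather than terminology from the Code:

```python
from typing import Callable, Iterable

def manage_systemic_risk(assess: Callable[[list], float],
                         mitigations: Iterable[str],
                         acceptance_threshold: float) -> tuple[list, float]:
    """Apply mitigations until the assessed risk meets the pre-determined
    acceptance criteria (reduced here to a single numeric threshold)."""
    applied: list = []
    risk = assess(applied)
    for mitigation in mitigations:
        if risk <= acceptance_threshold:
            break  # acceptance criteria met
        applied.append(mitigation)
        risk = assess(applied)  # re-assess after each mitigation
    if risk > acceptance_threshold:
        # Residual risk still unacceptable: the model must not be placed
        # (or kept) on the market in this state.
        raise RuntimeError("residual systemic risk above acceptance criteria")
    return applied, risk

# Toy usage: each mitigation halves an entirely fictitious risk score.
score = lambda applied: 1.0 * (0.5 ** len(applied))
print(manage_systemic_risk(score, ["filtering", "red-teaming", "access controls"], 0.2))
```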
The right path towards an effective regulation of GPAI models?
Although delayed by a few months – it was originally due in May – the Code of Practice is a necessary step to ensure the effectiveness of the AI Act. Given the critical stage at which the AI Act currently stands, with companies pressing the EU to introduce a two-year pause in enforcing its obligations, the Code of Practice could clarify the complex obligations for GPAI models, which currently remain considerably vague.
In an effort to boost interest in the Code, the Commission has informally recognized a one-year grace period for signatories, within which "the AI Office will not consider them to have broken their commitments under the Code and will not reproach them for violating the AI Act. Instead, in such cases, the AI Office will consider them to act in good faith and will be ready to collaborate to find ways to ensure full compliance."
Still, further action is needed to ensure effective AI regulation that protects fundamental rights and values. Regarding the Code of Practice, the AI Office and the AI Board will now assess its adequacy, in accordance with Art. 56(6) AI Act, and the Commission may approve it via an implementing act. Once approved, GPAI providers can voluntarily adhere to the Code (Art. 57(7)).
It remains unclear whether the Commission will require full adoption of the Code or allow selective adherence. Art. 57(7) suggests that only providers of GPAI models that do not present systemic risk may limit adherence to the Transparency and Copyright chapters – “unless they declare explicitly their interest to join the full code.” While requiring full adoption would help to avoid fragmentation, this approach would conflict with the Code’s voluntary nature and could discourage potential signatories, resulting in lower levels of adherence.
Yet some fundamental elements to encourage adherence are still missing. Over the next few weeks, the Commission will publish guidelines to clarify how the rules for GPAI providers apply, as these were not covered by the Code. It is expected that these guidelines will clarify the classification of models as GPAI, including modified models, as well as the classification of GPAI with systemic risks – according to the consultation conducted by the Commission in April 2025. Additionally, the Commission still needs to provide the template for the public disclosure of information about training data in accordance with Art. 53(1)(d), which may be a sensitive topic for providers.
Overall, the publication of the final version of the Code of Practice is a promising step forward, but effectively regulating GPAI will require completion of these remaining steps. While there is still a long way to go, the Commission appears to be heading in the right direction.