
Posts by LK Seiling

Assessing the Operation of EU Social Media Data Access Mechanisms via the DSA40 Collaboratory | Snurblog — Axel Bruns The post-lunch session at the

And @lkseiling.bsky.social and Sophia Graf presenting lots of super useful work by the @dsa40collaboratory.bsky.social snurb.info/index.php/no... #smad2026 #digitalservicesact

1 month ago

The issue with data access under DSA Art. 40(4): only successful requests are published on the Commission's Data Access Portal.
When it comes to rejected requests, researchers have to share information amongst themselves.
That's what our data access tracker is for šŸ‘‡

1 month ago

It’s not a privilege, it’s a researchers’ right! #DSA40

@lkseiling.bsky.social and I summarize what the European Commission’s decision in its case against X (I feel we should follow John Oliver and from now on call it Twitter again) means for researchers’ right to platform data access [in German]

1 month ago
Quote card with GFF lawyer Joschka Selinger: "Even the big digital corporations are not above the law. The DSA is a sharp sword for digital rights and platform regulation."

Breakthrough for digital rights – and for the fight against #Desinformation and #Wahlbeeinflussung online: the Kammergericht Berlin ruled today that the platform #X must give our partner organisation @democracyreporting.bsky.social access to #Forschungsdaten. šŸ’Ŗ

2 months ago

Today the Kammergericht Berlin is hearing whether @democracyreporting.bsky.social will get platform data from #X to research election interference and disinformation. Although DRI has a right to this data under the EU's #DSA, X refuses to hand it over.

More info in the quote post.

2 months ago

Increasingly shrill accusations have been levelled against the EU over its digital legislation, charges of "censorship" from defenders of "free speech" -- including, so it appears, the right to peddle an AI app that seemingly produces child sexual abuse material (CSAM).
1/9

2 months ago
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.


Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).


Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.


Protecting the Ecosystem of Human Knowledge: Five Principles


Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n

7 months ago

Anyone interested in drafting an Art. 40(4) DSA access request to make use of this moment? Looks like a great chance to figure out how the @grok account fits into X’s internal governance/moderation structures.

3 months ago

"Why Are Grok and X Still Available in App Stores?
Elon Musk’s chatbot has been used to generate thousands of sexualized images of adults and apparent minors. Apple and Google have removed other ā€œnudifyā€ apps—but continue to host X and Grok." www.wired.com/story/x-grok...

3 months ago
Grok turns off image generator for most users after outcry over sexualised AI imagery X to limit editing function to paying subscribers after platform threatened with fines and regulatory action

The Guardian doing inexplicable voluntary free public relations work for a corporation profiting from sexual abuse by headlining this as the image gen being "turned off". It isn't off: they've just monetised it. What the fuck are we doing here people, come on.

www.theguardian.com/technology/2...

3 months ago

liked undressing any woman or child on the platform? throw us some cash and you can keep doing it

3 months ago
Fuse – 39C3: Power Cycles Streaming Live streaming from the 39th Chaos Communication Congress

About to start at #39c3: in "Hacking Karlsruhe - 10 years later", Simone and Jürgen take stock of ten years of GFF. Stream here: streaming.media.ccc.de/39c3/fuse

3 months ago

#Trump has been accusing the #EU of #censorship via its #DigitalServicesAct, but is any of it true?

Let's investigate šŸ§µšŸ”½
#EUpol #USpol

3 months ago

Did you know that the new European Omnibus threatens research based on data donations?

We wrote an open letter highlighting the problematic amendment (see below).

Please consider reading and signing it. If the amendment goes through, this might well be the end of data donation research...

4 months ago

We recently concluded a special article series, ā€œSeeing the Digital Sphere: The Case for Public Platform Data,ā€ in collaboration with the Knight-Georgetown Institute, in which experts explored why access to public platform data is critical. Here’s a snapshot: (1/9)

5 months ago
Despite criticism: when the BW police may use Palantir software. Despite much criticism and a petition, the state parliament has amended the police law and approved the use of the Palantir data analysis software.

The Greens are only a civil rights party when they are not in government. www.swr.de/swraktuell/b...

5 months ago
In Critical Condition – How To Stabilize Researcher Data Access? | TechPolicy.Press Mark Scott and LK Seiling discuss the struggle for researcher access to social media data and an alternative future where transparency is seen as a civic good.

@lkseiling.bsky.social & @markscott.bsky.social plead in @techpolicypress.bsky.social for a new data access regime emerging from a "generation of decentralized platforms that treat data transparency not as a regulatory burden but as a civic and scientific good"

www.techpolicy.press/in-critical-...

5 months ago

The most transparent, de-gamified way to do social media would be a ā€œmore like this / less like thisā€ interface that nobody else sees. The ā€œlikeā€ is pointless here anyway, while sharing is everything.
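A minimal sketch of what such an interface might store, purely hypothetical (the class name, topic labels, and step size are illustrative, not any existing platform's design):

```python
from collections import defaultdict

# Hypothetical sketch of a private "more like this / less like this" signal:
# per-topic weights that only the user's own ranking sees -- no public counter.
class PrivateFeedPreferences:
    def __init__(self, step: float = 0.1):
        self.weights = defaultdict(float)  # topic -> private weight
        self.step = step

    def more_like_this(self, topics):
        for topic in topics:
            self.weights[topic] += self.step

    def less_like_this(self, topics):
        for topic in topics:
            self.weights[topic] -= self.step

    def score(self, topics):
        # Rank a candidate post by summing its topics' private weights.
        return sum(self.weights[topic] for topic in topics)

prefs = PrivateFeedPreferences()
prefs.more_like_this(["platform-regulation", "data-access"])
prefs.less_like_this(["engagement-bait"])
print(prefs.score(["platform-regulation"]) > prefs.score(["engagement-bait"]))  # prints: True
```

Sharing would remain the only visible action; the weights above never leave the user's own profile.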

5 months ago
How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise Here are seven unusual financial agreements helping to drive the ambitions of the poster child of the A.I. revolution.

"Many of the deals OpenAI has struck — with chipmakers, cloud computing companies and others — are strangely circular. OpenAI receives billions from tech companies before sending those billions back to the same companies to pay for computing power and other services." www.nytimes.com/interactive/...

5 months ago

two for one, nice šŸ˜Ž

5 months ago
Deportation despite training contract: taken from their beds, put on a plane. Rouaa and Ibrahim had signed their training contracts and would have been allowed to stay until completing them. They were deported anyway.

"At around four in the morning, officials entered the apartment of the Syrian family Seleman, led away the siblings Rouaa (24) and Ibrahim (28), and put them on a plane. Both were about to start vocational training. The Schleswig-Holstein Refugee Council calls this 'bureaucratic madness'."

5 months ago

Since Meta and TikTok have been asked to respond individually, achieving such harmonisation seems unlikely — unless both researchers and regulators actively push for it. I have high hopes šŸ˜‰

5 months ago

Researchers need a BASELINE STANDARD OF PUBLICLY ACCESSIBLE DATA: a minimum set of comparable, high-quality data from all platforms, along with robust quality checks to ensure validity.

5 months ago

2) Burdensome tools
The data access tools provided under Article 40(12) are also inadequate. Both Meta and TikTok reportedly supply incomplete data with questionable accuracy.

5 months ago

Researchers need STANDARDISED APPLICATION FORMS and FAIR, TRANSPARENT TERMS ACROSS PLATFORMS. Without them, access to data will remain extremely resource-intensive, discouraging cross-platform research and keeping much of this work in a legal grey area.

5 months ago

For example, both platforms request detailed information about researchers’ qualifications, while Meta even asks for a date of birth and phone number. On top of that, researchers must agree to contradictory or restrictive terms just to apply for data access.

5 months ago

1) Burdensome procedures
In practice, this means that the application processes set up by these platforms may violate the DSA’s provisions. TikTok’s application form reportedly includes around 40 required fields, while Meta’s goes up to 50, many without connection to the requirements in Art. 40(8).

5 months ago

The key terms here are:
1) burdensome procedures
2) burdensome tools

While the findings themselves are not public, let's take a closer look šŸ§µšŸ‘‡

5 months ago

I’ll be presenting this work at #CSCW2025 in Bergen on Tuesday at 2:30 PM! We will be part of the session ā€œCore Concepts in Privacy Researchā€ (in the Bekken room) chaired by @emtseng.bsky.social ā˜ŗļø

6 months ago

The parsing code and info on the schema based on Activity Streams 2.0 we suggested: gitlab.weizenbaum-institut.de/lukas.seilin...

@lionw.bsky.social's latest paper on data donations: doi.org/10.12758/mda... (follow him for more to come soon!)
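For readers who haven't clicked through: an Activity Streams 2.0 activity is a JSON object with fields such as `actor`, `type`, `object`, and `published`. The sketch below is a hypothetical minimal parser for such objects, not the code at the GitLab link:

```python
import json

# Minimal sketch (not the linked GitLab code): pull common fields from an
# Activity Streams 2.0 activity, e.g. as exported in a data donation.
def parse_as2_activity(raw: str) -> dict:
    """Extract actor, activity type, object type, and timestamp from an AS2 JSON string."""
    activity = json.loads(raw)
    obj = activity.get("object", {})
    if isinstance(obj, str):  # AS2 allows the object to be a bare IRI
        obj = {"id": obj}
    return {
        "actor": activity.get("actor"),
        "type": activity.get("type"),
        "object_type": obj.get("type"),
        "published": activity.get("published"),
    }

example = json.dumps({
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://example.org/users/alice",
    "object": {"type": "Note", "id": "https://example.org/notes/1"},
    "published": "2025-01-01T12:00:00Z",
})
print(parse_as2_activity(example)["type"])  # prints: Like
```

Real donation exports nest these activities in collections, so a full parser would also walk `orderedItems`; the per-activity step stays the same.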

6 months ago