
Posts by Lukas Warode


Mr. President, a second pipe has just hit the tidyverse

1 month ago 6 0 0 1
GIF: a man with a mustache saying, "Attention ladies and gentlemen, can I please have your attention?"

A few years ago, when parlgov.org discontinued its dynamic website, I created my own minimalistic ParlGov Lookup to quickly show which cabinet governed when.

Since ParlGov itself is now kinda discontinued, I wanted to move my web app to rely on www.repdem.org. How? ChatGPT Codex to the rescue!

2 months ago 5 2 1 0

Perhaps you received a mysterious noreply email asking you to evaluate some publications 'for novelty'. Looked kinda dubious? Yup, that's the one.

So what's up with this 'metascience novelty indicators challenge'? 🧡

2 months ago 31 17 7 7

partycoloR is now on CRAN! It started as a simple idea 6 years ago; now it's a full-featured package. Extract party colors and logos from Wikipedia with one line of code. It's already powering the ParlGov Dashboard.

install.packages("partycoloR")

2 months ago 99 20 0 2

The app shows how German politicians associate words with "left" or "right" based on ideological in- and out-group narratives and contested concepts. For example, both ideological sides claim the term "freedom."

3 months ago 2 0 0 0

This question became the topic of my 2nd dissertation paper. I also considered creating an app to communicate the results efficiently and let you explore the patterns yourself. I've used Shiny for years, but "AI-assisted agentic engineering" (aka vibe coding 😂) really helped a lot here.

3 months ago 5 0 1 0

Words like "patriotism" and "racism" are often associated with the right, while "solidarity" and "socialism" are associated with the left.

But who uses these associations, and how do political positions matter?

πŸ“Š App: lukas-warode.shinyapps.io/lr-words-map/
πŸ“„ Paper: www.nature.com/articles/s41...

3 months ago 13 3 1 0

"When Conservatives See Red but Liberals Feel Blue: Labeler Characteristics and Variation in Content Annotation" by
Nora Webb Williams, Andreu Casas, Kevin Aslett, and John Wilkerson.
www.journals.uchicago.edu/doi/10.1086/...

3 months ago 2 1 0 0

Congrats!!

3 months ago 2 0 0 0

Series recommendations: Fargo, The Sopranos, maybe also Narcos :)

3 months ago 1 0 0 0

The Call for Papers and Panels for #COMPTEXT2026 in Birmingham (23-25 April) is out; feel free to circulate: shorturl.at/gRg0p
Deadline: January 16!

4 months ago 21 15 1 4
4 months ago 4 0 0 0

Life and Death of DiD

5 months ago 1 0 0 0

Dissertation track? πŸ˜‰

5 months ago 1 0 1 0
Slavoj Žižek meme image

β€œYou see, the endless renovation of the Stuttgart train station is a symbol of our late-capitalist condition: the project is always β€˜in progress,’ yet nothing ever progresses. The construction site itself becomes the true destination.”

5 months ago 55 7 2 1

www.instagram.com/p/DQPf_pJiG8...

Is it a fit?

5 months ago 2 0 2 0

Job Alert! We are hiring two post-docs (full time, 4+ years) in our project SCEPTIC - Social, Computational and Ethical Premises of Trust and Informational Cohesion with @annanosthoff.bsky.social @guzoch.bsky.social and Prof. Andreas Peters (uol.de/informatik/s...)

6 months ago 28 32 3 2

πŸ“£ New Preprint!
Have you ever wondered what political content is in LLMs' training data? What political opinions are expressed? What is the proportion of left- vs right-leaning documents in the pre- and post-training data? Do these proportions correlate with the political biases reflected in the models?

6 months ago 47 13 2 1
Link preview: "The threat of analytic flexibility in using large language models to simulate human data: A call to attention". Social scientists are now using large language models to create "silicon samples" (synthetic datasets intended to stand in for human respondents), aimed at revolutionising human subjects research. How...

Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧡

7 months ago 343 159 12 61
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking: incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.


🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
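The mechanism is easy to see in a toy simulation. The sketch below is illustrative only (not the paper's code or data, and the error rates are invented): when an annotator's mistakes correlate with the covariate being tested, a regression on the annotations can show a "significant" effect that does not exist in the ground truth.

```python
# Toy illustration of "LLM hacking" (illustrative only, not the paper's code
# or data): annotation errors that correlate with the tested covariate can
# turn a true null effect into a spurious "significant" finding.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                # covariate tested in the regression
truth = rng.binomial(1, 0.5, size=n)  # ground-truth labels, independent of x

def slope_pvalue(x, y):
    """Two-sided p-value for the OLS slope of y ~ x (normal approximation)."""
    xc = x - x.mean()
    beta = (xc @ y) / (xc @ xc)
    resid = y - y.mean() - beta * xc
    se = sqrt((resid @ resid) / (len(x) - 2) / (xc @ xc))
    z = abs(beta) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# Annotator config A: purely random errors (flip 5% of labels uniformly).
flip = rng.random(n) < 0.05
config_a = np.where(flip, 1 - truth, truth)

# Annotator config B: systematic errors (labels pushed towards 1 when x > 0),
# a stylised stand-in for an LLM whose mistakes correlate with the covariate.
push = rng.random(n) < np.where(x > 0, 0.35, 0.05)
config_b = np.where(push, 1, truth)

for name, y in [("ground truth", truth), ("config A", config_a), ("config B", config_b)]:
    print(f"{name}: significant at 5%? {slope_pvalue(x, y) < 0.05}")
```

With these settings, config B reliably yields a tiny p-value on a true null; random errors (config A) merely add noise. Which side of 0.05 a given configuration lands on is exactly the "researcher degree of freedom" the preprint quantifies.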

7 months ago 303 106 6 23

Implications for political behaviour, communication, and representation are manifold, as 'left' and 'right' are central categories in polarised public discourse – which is particularly evident in pejorative usage, such as labelling political opponents as 'racist' or 'socialist'.

7 months ago 1 0 0 0

Both in- and out-ideological associations are externally validated by serving as seed words to scale parliamentary speeches. The resulting ideal points reflect party ideology across different specifications in the German Bundestag.

7 months ago 1 0 1 0

The mapping is based on associations from open-ended survey responses in German candidate surveys. Words are mapped into a semantic space using word embeddings and weighted by frequency. Construct validity is ensured by using alternative embeddings and frequency weightings.
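For intuition, here is a minimal sketch of the general technique (semantic-axis projection) with made-up toy vectors; the paper's actual pipeline uses pretrained German word embeddings and frequency weighting, and every number below is invented for illustration.

```python
# Toy sketch of semantic-axis projection (not the paper's pipeline):
# score each word by its projection onto a left-right axis built from seeds.
import numpy as np

# Hypothetical 3-d "embeddings", invented purely for illustration.
emb = {
    "justice":    np.array([ 0.9, 0.1, 0.2]),
    "patriotism": np.array([-0.8, 0.2, 0.1]),
    "freedom":    np.array([ 0.1, 0.9, 0.3]),
    "socialism":  np.array([ 0.7, 0.0, 0.4]),
    "racism":     np.array([-0.9, 0.1, 0.3]),
}

left_seeds, right_seeds = ["justice", "socialism"], ["patriotism", "racism"]

# Axis: difference between the mean left-seed and mean right-seed vectors.
axis = (np.mean([emb[w] for w in left_seeds], axis=0)
        - np.mean([emb[w] for w in right_seeds], axis=0))
axis /= np.linalg.norm(axis)

def left_right_score(word):
    """Cosine-style projection: positive = closer to the left pole."""
    v = emb[word]
    return float(v @ axis / np.linalg.norm(v))

for w in emb:
    print(f"{w:12s} {left_right_score(w):+.2f}")
```

With these toy vectors, "freedom" lands near the semantic centre (score close to zero) while the seed words score strongly positive or negative, mirroring the mapping described above.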

7 months ago 1 0 1 0

Words associated with both the left and the right are mapped to the semantic centre, where connotations can vary: 'freedom' has a positive connotation (it is primarily used by the respective in-group on both the left and the right), while 'politics' has a rather neutral connotation.

7 months ago 1 0 1 0

This framework yields associations that are driven by positive (in-ideology) and negative (out-ideology) associations. Examples: 'justice' (left) and 'patriotism' (right) are in-ideological associations; 'socialism' (left) and 'racism' (right) are out-ideological associations.

7 months ago 1 0 1 0

Left and right are essential poles in political discourse, yet we know little about how they are associated across the spectrum. I propose a 2-dimensional model that accounts for both semantics (is a term left or right?) and position (do the associations come from the left or the right?).

7 months ago 1 0 1 0

My 2nd dissertation paper is out in @nature.com Humanities and Social Sciences Communications: www.nature.com/articles/s41...

I study and explore how associations with 'left' and 'right' vary systematically by semantic and political position.

7 months ago 22 3 1 0

Yeah, the golden Twitter era is sadly over

7 months ago 1 0 0 0
Sitcom Laugh Track (YouTube video by SamGordonRHK)

youtu.be/4VTBMznLrWs?...

7 months ago 1 0 1 0

πŸ“’ New Publication Alert!
Our (@msaeltzer.bsky.social) latest article, "Issue congruence between candidates' Twitter communication and constituencies in an MMES: Migration as an exemplary case", has just been published in Parliamentary Affairs.
academic.oup.com/pa/advance-a...

8 months ago 23 11 1 0