
Posts by Joachim Baumann

aurman/GoogleTrendArchive · Datasets at Hugging Face

The dataset, which we call GoogleTrendArchive, contains over 7 million trend episodes spanning more than a year since Nov. 28, 2024. We cover all 1,358 available locations.

Direct link to the dataset huggingface.co/datasets/aur...
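For anyone who wants to poke at it right away, a minimal sketch of loading the archive with the Hugging Face datasets library; the split name and the "location" column are assumptions here, so check the dataset card for the actual schema.

```python
# Minimal sketch: load the archive with the Hugging Face `datasets` library.
from datasets import load_dataset

# split="train" is an assumption; see the dataset card for the actual splits
ds = load_dataset("aurman/GoogleTrendArchive", split="train")
print(ds)  # inspect the features and the number of trend episodes

# hypothetical filter; "location" is an assumed column name for illustration
swiss = ds.filter(lambda row: row["location"] == "Switzerland")
```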

Joint work with Anikó Hannák @scg-uzh.bsky.social and @joachimbaumann.bsky.social

4 weeks ago 3 1 1 0
GoogleTrendArchive: A Year-Long Archive of Real-Time Web Search Trends Worldwide GoogleTrendArchive is a comprehensive archive of Google Trending Now data spanning over one year (from November 28, 2024 to January 3, 2026) across 125 countries and 1,358 locations. Unlike Google Tre...

Working with web search data? Ever wanted access to historical data on what was trending in different locations, beyond the 7 days of history that Google's Trending Now provides? We've got you covered with our new dataset paper, accepted at ICWSM; preprint here: arxiv.org/abs/2603.21871

4 weeks ago 14 4 1 0
Hello Entire World · Entire Blog Announcing Entire with $60 million seed round and shipping our first product, called Checkpoints.

Beep, boop. Come in, rebels. We’ve raised a $60M seed round to build the next developer platform. Open. Scalable. Independent. And we ship our first OSS release today. entire.io/blog/hello-e...

2 months ago 23 6 4 2

Dirk and Debora are amazing postdoc advisors, and the @milanlp.bsky.social team is fun fun fun ❤️ you should apply!

4 months ago 8 0 0 0
Text reads: About synthetic panels
Recruiting the right participants for a study can be difficult. You may not get the exact demographics you need, and the shorter the deadline, the less sure you can be that everyone will answer on time. One possible solution can be to use synthetic panels.

Synthetic panels are powered by a first party proprietary AI model developed here at Qualtrics. Our synthetic panel is trained on thousands of responses from a variety of demographic backgrounds in order to more accurately predict how certain populations would respond to a survey.

Our synthetic panel is based on the United States General Population, and is only available in English. This panel comes with ready-made quotas and target breakouts in order to represent your chosen population and make it easy to launch your survey right away.


Text reads:
Question-writing best practices
To get the most reliable and actionable results from synthetic audiences, consider these question-writing best practices:

Ask forward-looking and attitudinal questions.
Synthetic panels perform best with perceptions, preferences, and intent-based questions. For example, “How likely are you to try…?”
Synthetic panels are less applicable for studies on past behaviors, detailed recall, brand recall, or awareness questions. For example, “When did you last visit…?”


Text reads:
Discussion
The current study aimed to conduct a meta-analysis of the TPB when applied to health behaviours which addressed the limitations of previous reviews by including only prospective tests of behaviour, applying RE meta-analytic procedures, correcting correlations for sampling and measurement error, and hierarchically analysing the effect of behaviour type and sample and methodological moderators. Some 237 tests were identified which examined relations amongst model components. Overall the analysis indicated that the TPB could explain 19.3% of the variance in behaviour and 44.3% of the variance in intention across studies. This level of prediction of behaviour is slightly lower than that of previous meta-analytic reviews which have found between 27% (Armitage & Conner, 2001; Hagger et al., 2002) and 36% (Trafimow et al., 2002) of the variance in behaviour to be explained by intention and PBC.


Did you know that from tomorrow, Qualtrics is offering synthetic panels (AI-generated participants)?

Follow me down a rabbit hole I'm calling "doing science is tough and I'm so busy, can't we just make up participants?"

4 months ago 656 287 38 225

Good luck drawing reliable conclusions from the answers that Qualtrics' AI model provides to your survey questions... bsky.app/profile/joac...

4 months ago 27 7 0 1

At today’s lab reading group @carolin-holtermann.bsky.social presented ‘Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs’ by @angelinawang.bsky.social et al. (2025).
Lots to think about how we evaluate fairness in language models!

#NLProc #fairness #LLMs

4 months ago 8 3 0 0

Also see more nuanced takes worth reading from @seanjwestwood.bsky.social (x.com/seanjwestwoo...) and @joshmccrain.bsky.social (bsky.app/profile/josh...) and @phe-lim.bsky.social (bsky.app/profile/phe-...)

4 months ago 0 0 0 0

The path forward: Survey panels and crowdsourcing platforms must invest in better panel curation and periodic quality verification.

Good to see that @joinprolific.bsky.social is already on it: bsky.app/profile/phe-...

4 months ago 2 0 1 0

✅ LLM instruction tuning works: tell a model to answer as a human, and it will
❌ Silicon sampling still doesn't work: AI responses are plausible but don't accurately represent a real human population
❌ Bot detection fails: it's hard to design tasks that are easy for humans but difficult for LLMs

4 months ago 1 0 1 0

Now that the hype has cooled off, here's my take on AI-generated survey answers:
This is a real problem, but the paper's core insights aren't exactly news!
A thread with the most important summary... 🧵

Image: shows the LLM system prompt used

4 months ago 4 1 1 0

Another exhausting day in the lab… conducting very rigorous panettone analysis. Pandoro was evaluated too, because we believe in fair experimental design.

4 months ago 23 6 0 1

For our weekly reading group, @joachimbaumann.bsky.social presented the upcoming PNAS article "The potential existential threat of large language models to online survey research" by @seanjwestwood.bsky.social.

5 months ago 8 3 0 0
Auditing Google's AI Overviews and Featured Snippets: A Case Study on Baby Care and Pregnancy Google Search increasingly surfaces AI-generated content through features like AI Overviews (AIO) and Featured Snippets (FS), which users frequently rely on despite having no control over their presen...

Google AI overviews now reach over 2B users worldwide. But how reliable are they on high stakes topics - for instance, pregnancy and baby care?

We have a new paper - led by Desheng Hu, now accepted at @icwsm.bsky.social - exploring that and finding many issues

Preprint: arxiv.org/abs/2511.12920
🧵👇

5 months ago 16 9 1 1
Language Model Hacking - Granular Material

Trying an experiment in good old-fashioned blogging about papers: dallascard.github.io/granular-mat...

5 months ago 29 9 3 0

Next Wednesday, we are very excited to have @joachimbaumann.bsky.social, who will present co-authored work on "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation". Paper and information on how to join ⬇️

5 months ago 4 3 1 2

Can AI simulate human behavior? 🧠
The promise is revolutionary for science & policy. But there’s a huge "IF": Do these simulations actually reflect reality?
To find out, we introduce SimBench: The first large-scale benchmark for group-level social simulation. (1/9)

5 months ago 11 5 1 1

Cool paper by @eddieyang.bsky.social, confirming our LLM hacking findings (arxiv.org/abs/2509.08825):
✓ LLMs are brittle data annotators
✓ Downstream conclusions flip frequently: LLM hacking risk is real!
✓ Bias correction methods can help but have trade-offs
✓ Use human experts whenever possible

6 months ago 16 6 0 0

Looks interesting! We have been facing this exact issue - finding big inconsistencies across different LLMs rating the same text.

6 months ago 5 6 0 0

About last week’s internal hackathon 😏
Last week, we, the (Amazing) Social Computing Group, held an internal hackathon to work on what we informally call our “Cultural Imperialism” project.

7 months ago 3 1 1 1

If you feel uneasy using LLMs for data annotation, you are right (if not, you should). It offers new chances for research that is difficult with traditional #NLP/#textasdata methods, but the risk of false conclusions is high!

Experiment + *evidence-based* mitigation strategies in this preprint 👇

7 months ago 22 4 1 0

The 94% LLM hacking success rate is achieved by annotating data with several model-prompt configs, then choosing the one that yields the desired result (70% if considering SOTA models only).
The 31-50% risk reflects well-intentioned researchers who just run one reasonable config w/o cherry-picking.
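To make that selection loop concrete, here is a hedged sketch on simulated data (not the paper's code): annotate one dataset with no true effect under several configurations, test each, and keep whichever run happens to come out significant.

```python
# Simulated LLM hacking: the same null dataset, nine "model-prompt configs",
# and a report that keeps only the significant runs. Numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_labels = rng.integers(0, 2, size=500)  # ground truth with no group effect
group = rng.integers(0, 2, size=500)        # arbitrary grouping variable

significant_runs = []
for config in range(9):
    # each config mislabels a different random ~15% of items (annotation noise)
    flip = rng.random(500) < 0.15
    labels = np.where(flip, 1 - true_labels, true_labels)
    # downstream test: does the annotated outcome differ between the groups?
    t, p = stats.ttest_ind(labels[group == 0], labels[group == 1])
    if p < 0.05:
        significant_runs.append((config, p))

# reporting only `significant_runs` turns a true null into a "finding";
# a single pre-committed config avoids the loop but keeps its own error risk
print(significant_runs)
```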

7 months ago 0 0 0 0

Thank you, Florian :) We use two methods, CDI and DSL. Both debias LLM annotations and reduce false positive conclusions to about 3-13%, on average, but at the cost of a much higher Type II risk (up to 92%). The human-only conclusions have a pretty low Type I risk as well, at a lower Type II risk.
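For intuition, a minimal sketch of the design-based correction idea behind DSL, not the authors' implementation (and CDI is not shown): LLM labels everywhere, gold human labels on a random subsample, combined into a bias-corrected pseudo-outcome.

```python
# Design-based correction sketch: the pseudo-outcome's mean is unbiased for the
# true label mean even though the LLM labels are systematically wrong.
import numpy as np

rng = np.random.default_rng(1)
n, pi = 2000, 0.05                        # corpus size, human-labeling probability

y_true = rng.binomial(1, 0.3, size=n)     # ground truth (unknown in practice)
y_llm = np.where(rng.random(n) < 0.85, y_true, 1 - y_true)  # noisy LLM labels
r = rng.random(n) < pi                    # random subset sent to human annotators

# LLM label plus an inverse-probability-weighted correction on the gold subset
y_tilde = y_llm + (r / pi) * (y_true - y_llm)

print(y_true.mean(), y_llm.mean(), y_tilde.mean())
```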

7 months ago 0 0 2 0

Great question! Performance and LLM hacking risk are negatively correlated. So easy tasks do have lower risk. But even tasks with 96% F1 score showed up to 16% risk of wrong conclusions. Validation is important because high annotation performance doesn't guarantee correct conclusions.

7 months ago 3 1 1 0

We used 199 different prompts total: some from prior work, others based on human annotation guidelines, and some simple semantic paraphrases

Even when LLMs correctly identify significant effects, estimated effect sizes still deviate from true values by 40-77% (see Type M risk, Table 3 and Figure 3)
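A quick worked example of that deviation (illustrative numbers, not from the paper):

```python
# Type M style deviation: how far the estimated effect is from the true effect
true_effect = 0.10       # hypothetical true effect size
estimated_effect = 0.16  # hypothetical LLM-annotation-based estimate
deviation = abs(estimated_effect - true_effect) / abs(true_effect)
print(f"{deviation:.0%}")  # 60%, inside the 40-77% range quoted above
```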

7 months ago 1 0 0 0

Thank you to the amazing @paul-rottger.bsky.social @aurman21.bsky.social @albertwendsjo.bsky.social @florplaza.bsky.social @jbgruber.bsky.social @dirkhovy.bsky.social for this fun collaboration!!

7 months ago 6 0 0 0

Why this matters: LLM hacking affects any field using AI for data analysis–not just computational social science!

Please check out our preprint, we'd be happy to receive your feedback!

#LLMHacking #SocialScience #ResearchIntegrity #Reproducibility #DataAnnotation #NLP #OpenScience #Statistics

7 months ago 11 2 1 0

The good news: we found solutions that help mitigate this:
✅ Larger, more capable models are safer (but no guarantee).
✅ Few human annotations beat many AI annotations.
✅ Testing several models and configurations on held-out data helps (see the sketch below).
✅ Pre-registering AI choices can prevent cherry-picking.
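As a sketch of that held-out validation step (illustrative names and numbers, not the paper's code): score each configuration against a small set of human labels reserved for validation, commit to the winner, and only then annotate the full corpus and run the downstream analysis.

```python
# Pick the annotation config on held-out human labels, not on downstream p-values.
import numpy as np

rng = np.random.default_rng(2)
human_labels = rng.integers(0, 2, size=200)  # held-out gold labels

def heldout_accuracy(config_seed: int) -> float:
    """Simulate one config's annotations on the held-out set and score them."""
    noise = np.random.default_rng(config_seed).random(200) < 0.10 + 0.05 * config_seed
    preds = np.where(noise, 1 - human_labels, human_labels)
    return float((preds == human_labels).mean())

scores = {f"config-{i}": heldout_accuracy(i) for i in range(5)}
best = max(scores, key=scores.get)
print(best, scores[best])  # pre-register this choice before the main analysis
```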

7 months ago 20 1 1 0

- Researchers using SOTA models like GPT-4o face a 31-50% chance of false conclusions for plausible hypotheses.
- Risk peaks near significance thresholds (p=0.05), where 70% of "discoveries" may be false.
- Regression correction methods often don't work, as they trade off Type I vs. Type II errors.

7 months ago 12 0 2 0

We tested 18 LLMs on 37 social science annotation tasks (13M labels, 1.4M regressions). By trying different models and prompts, you can make 94% of null results appear statistically significant–or flip findings completely 68% of the time.

Importantly, this also concerns well-intentioned researchers!

7 months ago 24 5 3 0