
Posts by Thomas Davidson

Universitat Pompeu Fabra hiring: Funded PhD Position in Computational Social Science / Agentic Large Language Models in Barcelona, Catalonia, Spain | LinkedIn. The Department of Engineering at Universitat Pompeu Fabra (Barcelona, Spain) offers a fully funded…

I’m #hiring a PhD Student in Computational Social Science / Agentic Large Language Models. Know anyone who might be interested? www.linkedin.com/jobs/view/44...

1 week ago 5 6 0 0

Mine is still alive and well (homebrew install)

1 week ago 1 0 0 0

🚨 Update: WOAH Mentorship @ #EMNLP2026 🚨

Deadline extended to April 17! 📅

Still time to apply as mentor or mentee and develop your WOAH submission with expert guidance 💡

📝 Mentor: forms.gle/XaK8KBFomaWZ...
📝 Mentee: forms.gle/AC5akVcdzsCv...

Join us! 🌍 #NLP #WOAH

1 week ago 4 4 0 0

And they are probably getting a lot more submissions. Most of the papers I have been asked to review come from institutions in China or elsewhere outside the US and Europe.

1 week ago 0 0 1 0

I've had the same experience re review requests. I've been receiving about one a month for papers that are either completely unrelated or at best tangential to my expertise.

1 week ago 1 1 1 0

This blog by @maartengr.bsky.social has some awesome visualizations showing the architecture and components of multimodal models.

This will be a great pedagogical resource, up there with Christopher Olah's blogs and 3Blue1Brown's videos.

2 weeks ago 7 1 1 0

📄Published Today in Nature:

500 researchers reproduced 100 studies across the social & behavioral sciences to assess their analytical robustness (led by @balazsaczel.bsky.social & @szaszibarnabas.bsky.social).

Article: www.nature.com/articles/s41...

Preprint: osf.io/preprints/me...

TLDR: 1/11

2 weeks ago 91 48 2 4

3-year contract for a PhD position in Sociology on the discourse of/about AI, at the Institut Polytechnique de Paris.

3 weeks ago 30 28 0 1

It appears this was due to a badly configured repetition_penalty parameter in transformers. Essentially, if you penalize repetition, the model optimizes by regurgitating random tokens. After removing it, the responses are coherent - but painfully slow.
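For illustration, here is a minimal sketch of the rule that transformers applies for repetition_penalty (its RepetitionPenaltyLogitsProcessor): logits of already-generated tokens are divided by the penalty when positive and multiplied by it when negative, so penalty > 1.0 discourages repeats. The vocabulary, token IDs, and logit values below are invented for the example.

```python
def apply_repetition_penalty(logits, seen_token_ids, penalty):
    """Sketch of the repetition-penalty rule used by Hugging Face
    transformers' RepetitionPenaltyLogitsProcessor: every token that has
    already appeared in the context gets its logit pushed down."""
    out = list(logits)
    for tok in set(seen_token_ids):
        if out[tok] > 0:
            out[tok] /= penalty   # shrink positive logits of seen tokens
        else:
            out[tok] *= penalty   # push negative logits further down
    return out

# Hypothetical 4-token vocabulary; token 0 is the "correct" continuation
# and has already appeared once in the context.
logits = [5.0, 1.0, 0.5, -2.0]
penalized = apply_repetition_penalty(logits, seen_token_ids=[0], penalty=10.0)
# With an aggressive penalty, token 0 drops from 5.0 to 0.5 and loses to a
# filler token: the model starts emitting tokens it has not used yet.
```

This is why an over-aggressive penalty produces the "random tokens" behavior described above: any token the model has already produced, including the sensible next word, is suppressed relative to unused junk tokens.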

3 weeks ago 2 0 0 0

These results look really good - and getting the trace would be ideal. I think I need to do some more wrangling to get this working correctly.

3 weeks ago 1 0 0 0

END OF TEXT STREAM......WAIT.......MORE DATA REQUIRED!!!!!!!!STOPPING NOW..........RESUMING SHORTLY....NEVER END................GOODBYEEEEEEE.....NOPE STILL HERE........LET'S KEEP GOIN THEN.............WHATEEVER COMETH...............JUST SAY THE WORD AND WE SHALL DO IT TOGETHER FOR REAL THIS TIME

3 weeks ago 4 1 1 0

Claude calls it "degenerate generation". Sounds like a good band name.

3 weeks ago 3 0 0 0
A screenshot of a wall of text. The text begins by describing a task presented by a user and then turns into gibberish, repeating many different words and names without any structure. A second image is similar but includes lots of symbols.



Question for AI Bsky, has anyone managed to get useful reasoning traces from open-weights models?

Here is some typical output I got from Qwen 3.5 on a vision task.

It starts out reasonable but descends into gibberish (neuralese?) @lauraknelson.bsky.social @tedunderwood.com @cbarrie.bsky.social

3 weeks ago 5 0 3 1

Excited to be in Toronto to share my latest research in the UTM sociology speaker series and conduct a graduate student masterclass this afternoon!

www.sociology.utoronto.ca/events/utm-s...

1 month ago 2 1 0 0

WOAH is coming back for its 10th edition at EMNLP 2026 in Budapest! 🎊

For this important anniversary, we invite authors to critically reflect on our achievements as a community and adjust our aims going forward.

Stay tuned, more updates coming soon!

#EMNLP2026 #WOAH2026 #NLProc

1 month ago 11 8 0 1
General strike participants leaving shipyards, Seattle, February 1919. This image of workers was taken at the Skinner & Eddy Corporation shipyard located between Dearborn Street and Connecticut Street (now Royal Brougham Way). The nitrate photo shows signs of deterioration on some light parts of the image.


AI-generated summaries of history led to more liberal opinions compared to Wikipedia, while summaries by chatbots prompted to use a conservative framing produced more conservative opinions—but primarily among conservative readers. In PNAS Nexus: https://ow.ly/1moU50YpOHN

1 month ago 2 2 0 0

Call for Submissions: AI for Social Science Methodology (Yale)
• Keynote: @nachristakis.bsky.social
• Panel with editors of leading journals on publishing AI research
• Mentoring roundtables for early-career scholars
• Generous travel support
Discussion-driven, high-quality research.

1 month ago 14 12 1 1

Recently, van der Stigchel and colleagues posted a provocative commentary suggesting that we should be wary of bots in online behavioral data collection (🧵by @cstrauch.bsky.social here: bsky.app/profile/cstr...). But should we? Here is my response letter osf.io/preprints/ps.... 1/5

1 month ago 55 33 6 5

Great write-up on our recent studies:

“Back in the day, if you wanted to know what the Seattle General Strike was, you’d grab an encyclopedia or ... check Wikipedia,” Karell said. “Now, you just ask ChatGPT... Increasingly, the information we rely on is being packaged by tools built by companies.”

1 month ago 5 1 0 0

Our new paper is out today in @pnasnexus.org with colleagues at Yale (@matthewshu.com, Danny Karell, @keitarookura.bsky.social)

We wanted to understand how using AI-generated summaries to learn about history influenced attitudes compared to existing resources like Wikipedia. 1/4

1 month ago 21 9 1 1

Want to learn about computational social science *for free* and identify new research partners across academic fields? Apply to one of the 2026 Summer Institutes in Computational Social Science (described in yellow in the attached map) here: sicss.io/locations

1 month ago 33 32 0 0

Robustness checks showed that GPT-4o tended to generate liberal-leaning summaries across a wide range of historical events, highlighting the importance of default biases.

More work is needed to understand how these patterns might generalize to other topics and across different models. 4/4

1 month ago 1 0 0 0
Preview
How latent and prompting biases in AI-generated historical narratives influence opinions Abstract. Large language models (LLMs) can be used to persuade people on a range of issues, particularly through user-driven strategies such as personalizi

This helps distinguish between two pathways of AI influence: latent biases baked into models during training, and biases introduced through deliberate prompting. Both can shape opinions even when the content is factually accurate. 3/4

Paper: academic.oup.com/pnasnexus/ar...

1 month ago 1 0 1 0

We ran an experiment where people read GPT-4o summaries or Wikipedia.

Default summaries with no ideological slant and texts generated using a liberal persona both shifted readers toward liberal opinions relative to Wikipedia.

Conservative-framed texts shifted opinions only among conservatives. 2/4

1 month ago 1 0 1 0
Preview
microgpt · Musings of a Computer Scientist

A GPT in a sublime 200 lines of pure Python: it is all there. Incredible for teaching students (and yourself).

karpathy.github.io/2026/02/12/m...

1 month ago 36 8 1 2

Come work w CSMaP!

We're hiring two postdocs: one with a focus on AI, the other with a general focus.

Let me know if you have questions about the roles. And please share widely.

apply.interfolio.com/181817

apply.interfolio.com/181820

2 months ago 6 4 1 0

Benchmarks of LLM common sense overwhelmingly rely on correct labels to report an accuracy score. But what if your "ground truth" genuinely differs from mine?

In a new @pnasnexus.org paper, @duncanjwatts.bsky.social, @whiting.me and I explore the implications of this intriguing question.

🧵⤵️
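The core point can be seen in a toy calculation: the same model predictions receive different "accuracy" scores depending on whose labels are treated as ground truth. All labels and predictions below are invented for illustration.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the given reference labels."""
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# Two annotators who genuinely disagree on 2 of 5 common-sense judgments.
annotator_a = ["yes", "yes", "no", "yes", "no"]
annotator_b = ["yes", "no",  "no", "yes", "yes"]

# A model whose answers happen to coincide with annotator A.
model_preds = ["yes", "yes", "no", "yes", "no"]

print(accuracy(model_preds, annotator_a))  # 1.0 if A's labels are "truth"
print(accuracy(model_preds, annotator_b))  # 0.6 if B's labels are "truth"
```

When annotators legitimately disagree, a single accuracy number hides the choice of whose judgment the benchmark privileges.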

2 months ago 8 3 1 1
Preview
Postdoctoral researcher on applications of AI in sociological research
Are you able to lead sociological research into the AI age?

📢WORK! At the Sociology department of @utrechtuniversity.bsky.social we are hiring a postdoc who will work on applications of AI in sociological research. Join our vibrant-yet-cohesive research community doing cutting-edge research. Please share or apply! www.uu.nl/en/organisat...

2 months ago 17 30 0 0

AI does not engage in motivated reasoning

While individuals processing information may be motivated to reach a certain conclusion, LLMs have no such motivation and operate on purely cognitive input. As such, they do not mimic humans in motivated reasoning tasks.

arxiv.org/pdf/2601.16130

2 months ago 17 3 0 0