I’m #hiring a PhD Student in Computational Social Science / Agentic Large Language Models. Know anyone who might be interested? www.linkedin.com/jobs/view/44...
Posts by Thomas Davidson
Mine is still alive and well (homebrew install)
🚨 Update: WOAH Mentorship @ #EMNLP2026 🚨
Deadline extended to April 17! 📅
Still time to apply as mentor or mentee and develop your WOAH submission with expert guidance 💡
📝 Mentor: forms.gle/XaK8KBFomaWZ...
📝 Mentee: forms.gle/AC5akVcdzsCv...
Join us! 🌍 #NLP #WOAH
And they are probably getting a lot more submissions. Most of the papers I have been asked to review are from institutions in China or elsewhere outside the US and Europe.
I've had the same experience re review requests. I've been receiving about one a month for papers that are either completely unrelated or at best tangential to my expertise.
This blog by @maartengr.bsky.social has some awesome visualizations showing the architecture and components of multimodal models.
This will be a great pedagogical resource, up there with Christopher Olah's blogs and 3Blue1Brown's videos.
📄Published Today in Nature:
500 researchers reproduced 100 studies across the social & behavioral sciences to assess their analytical robustness (led by @balazsaczel.bsky.social & @szaszibarnabas.bsky.social).
Article: www.nature.com/articles/s41...
Preprint: osf.io/preprints/me...
TLDR: 1/11
3-year contract for a PhD position in Sociology on the discourse of/about AI, at the Institut polytechnique de Paris.
It appears this was due to a badly configured repetition_penalty parameter in transformers. Essentially, if you penalize repetition too heavily, the model optimizes by regurgitating random tokens. The responses are now coherent after removing the penalty - but painfully slow.
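For context, a minimal sketch of the reweighting that Hugging Face transformers applies when repetition_penalty is set (mirroring the logic of its RepetitionPenaltyLogitsProcessor; the helper name here is my own, for illustration). It shows why a large penalty can push a model into gibberish: every token that has already appeared gets its logit suppressed, so once the plausible tokens have all been used, only implausible ones remain.

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    """Rescale logits of already-generated tokens, HF-style:
    divide positive logits by the penalty, multiply negative ones.
    penalty > 1.0 discourages repeats; penalty == 1.0 is a no-op."""
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] = out[tok] / penalty  # make a likely repeat less likely
        else:
            out[tok] = out[tok] * penalty  # make an unlikely repeat even less likely
    return out

# Tokens 0 and 1 were already generated; with penalty=2.0 their
# logits are pushed down while unseen token 2 is untouched.
print(apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1], 2.0))
```

In transformers itself the knob is the `repetition_penalty` argument to `model.generate(...)`; leaving it at its default of 1.0 disables the rescaling entirely.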
These results look really good - and getting the trace would be ideal. I think I need to do some more wrangling to get this working correctly.
END OF TEXT STREAM......WAIT.......MORE DATA REQUIRED!!!!!!!!STOPPING NOW..........RESUMING SHORTLY....NEVER END................GOODBYEEEEEEE.....NOPE STILL HERE........LET'S KEEP GOIN THEN.............WHATEEVER COMETH...............JUST SAY THE WORD AND WE SHALL DO IT TOGETHER FOR REAL THIS TIME
Claude calls it "degenerate generation". Sounds like a good band name.
A screenshot of a wall of text. The text begins by describing a task presented by a user and then turns into gibberish, repeating many different words and names without any structure. A second image is similar but includes lots of symbols.
Question for AI Bsky, has anyone managed to get useful reasoning traces from open-weights models?
Here is some typical output I got from Qwen 3.5 on a vision task.
It starts out reasonable but descends into gibberish (neuralese?) @lauraknelson.bsky.social @tedunderwood.com @cbarrie.bsky.social
Excited to be in Toronto to share my latest research in the UTM sociology speaker series and conduct a graduate student masterclass this afternoon!
www.sociology.utoronto.ca/events/utm-s...
WOAH is coming back for its 10th edition at EMNLP 2026 in Budapest! 🎊
For this important anniversary, we invite authors to critically reflect on our achievements as a community and to consider how our aims should evolve going forward.
Stay tuned, more updates coming soon!
#EMNLP2026 #WOAH2026 #NLProc
General strike participants leaving shipyards, Seattle, February 1919. This image of workers was taken at the Skinner & Eddy Corporation shipyard located between Dearborn Street and Connecticut Street (now Royal Brougham Way). The nitrate photo shows signs of deterioration on some light parts of the image.
AI-generated summaries of history led to more liberal opinions compared to Wikipedia, while summaries by chatbots prompted to use a conservative framing produced more conservative opinions—but primarily among conservative readers. In PNAS Nexus: https://ow.ly/1moU50YpOHN
Call for Submissions: AI for Social Science Methodology (Yale)
• Keynote: @nachristakis.bsky.social
• Panel with editors of leading journals on publishing AI research
• Mentoring roundtables for early-career scholars
• Generous travel support
• Discussion-driven, high-quality research
Recently, van der Stigchel and colleagues posted a provocative commentary suggesting that we should be wary of bots in online behavioral data collection (🧵by @cstrauch.bsky.social here: bsky.app/profile/cstr...). But should we? Here is my response letter osf.io/preprints/ps.... 1/5
Great write-up on our recent studies:
“Back in the day, if you wanted to know what the Seattle General Strike was, you’d grab an encyclopedia or ... check Wikipedia,” Karell said. “Now, you just ask ChatGPT... Increasingly, the information we rely on is being packaged by tools built by companies.”
Our new paper is out today in @pnasnexus.org with colleagues at Yale (@matthewshu.com, Danny Karell, @keitarookura.bsky.social)
We wanted to understand how using AI-generated summaries to learn about history influenced attitudes compared to existing resources like Wikipedia. 1/4
Want to learn about computational social science *for free* and identify new research partners across academic fields? Apply to one of the 2026 Summer Institutes in Computational Social Science (described in yellow in the attached map) here: sicss.io/locations
Robustness checks showed that GPT-4o tended to generate liberal-leaning summaries across a wide range of historical events, highlighting the importance of default biases.
More work is needed to understand how these patterns might generalize to other topics and across different models. 4/4
This helps distinguish between two pathways of AI influence: latent biases baked into models during training, and framing biases introduced through deliberate prompting. Both can shape opinions even when the content is factually accurate. 3/4
Paper: academic.oup.com/pnasnexus/ar...
We ran an experiment where people read GPT-4o summaries or Wikipedia.
Default summaries with no ideological slant and texts generated using a liberal persona both shifted readers toward liberal opinions relative to Wikipedia.
Conservative-framed texts only shifted among conservatives. 2/4
A GPT in a sublime 200 lines of pure Python; it is all there. Incredible for teaching students (and yourself)
karpathy.github.io/2026/02/12/m...
Come work w CSMaP!
We're hiring two postdocs: one with a focus on AI, the other with a general focus.
Let me know if you have questions about the roles. And please share widely.
apply.interfolio.com/181817
apply.interfolio.com/181820
Benchmarks of LLM common sense overwhelmingly rely on correct labels to report an accuracy score. But what if your "ground truth" genuinely differs from mine?
In a new @pnasnexus.org paper, @duncanjwatts.bsky.social, @whiting.me and I explore the implications of this intriguing question.
🧵⤵️
📢WORK! At the Sociology department of @utrechtuniversity.bsky.social we are hiring a postdoc who will work on applications of AI in sociological research. Join our vibrant-yet-cohesive research community doing cutting-edge research. Please share or apply! www.uu.nl/en/organisat...
AI does not engage in motivated reasoning
While individuals processing information may be motivated to reach a certain conclusion, LLMs have no such motivation and operate on purely cognitive input. As such, they do not mimic humans in motivated reasoning tasks.
arxiv.org/pdf/2601.16130