And another Quarto announcement; I've alluded to it before, but we're making it "official".
We've started work on Quarto 2. The blog post has an overview: quarto.org/docs/blog/po...
We'll share more in future blog posts, but here's what you can expect from the Quarto 2 dev effort:
(1/)
Posts by Dustin Stoltz
We have a new paper in the April issue of Political Psychology. The front page of the paper can be read at: https://onlinelibrary.wiley.com/doi/epdf/10.1111/pops.70056
In this (admittedly quite lovely) new paper, @stephenvaisey.com, @pablobellode.bsky.social and I make a simple point: using panel data to understand belief change is very hard.
We highlight an empirical intractability in the process:
A screenshot of an Android app. Text: ▲ Nearby Glasses — Smart Glasses are probably nearby. Device: Unknown. RSSI: -42 dBm. Reason: Meta Company ID (0x058E). Company: Meta Platforms, Inc. This app notifies you when smart glasses are nearby. It uses company identifiers in the Bluetooth data these devices send out, so false positives (e.g. from VR headsets) are likely. Hence, please proceed with caution when approaching a nearby person wearing glasses: they might just be regular glasses, despite this app's warning. Below, a debug log shows repeated detections between 19:08:40 and 19:08:45 of an unknown device broadcasting the Meta Company ID (0x058E) at signal strengths ranging from -34 to -63 dBm.
I made an app.
play.google.com/store/apps/d...
Nearby Glasses is here to warn you when smart glasses are nearby.
I hope it's useful for someone.
Nearby Glasses is open source, free, and rather simple:
github.com/yjeanrenaud/...
It's also downloadable outside the Play Store. An iOS port is in the works.
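The app's own source isn't shown here, but the mechanism described in the screenshot — flagging devices whose Bluetooth advertisements carry Meta's assigned company identifier (0x058E) — can be sketched. This is a minimal illustration, not the app's actual code: it parses the manufacturer-specific AD structure (type 0xFF) of a raw BLE advertisement payload, where the company ID occupies the first two data bytes in little-endian order.

```python
META_COMPANY_ID = 0x058E  # Bluetooth SIG company identifier assigned to Meta Platforms, Inc.

def manufacturer_ids(adv_payload: bytes) -> list[int]:
    """Extract company IDs from the manufacturer-specific AD structures
    (AD type 0xFF) of a raw BLE advertisement payload.

    Each AD structure is laid out as [length][type][data...], where
    `length` counts the type byte plus the data bytes.
    """
    ids = []
    i = 0
    while i < len(adv_payload):
        length = adv_payload[i]
        if length == 0:
            break  # a zero-length structure terminates the payload
        ad_type = adv_payload[i + 1]
        data = adv_payload[i + 2 : i + 1 + length]
        if ad_type == 0xFF and len(data) >= 2:
            # Company ID is the first two data bytes, little-endian
            ids.append(int.from_bytes(data[:2], "little"))
        i += 1 + length
    return ids

def probably_meta_glasses(adv_payload: bytes) -> bool:
    """True if any manufacturer block in the advertisement matches Meta's ID."""
    return META_COMPANY_ID in manufacturer_ids(adv_payload)
```

As the app itself warns, this check only identifies the manufacturer, not the product — any Meta Bluetooth device (e.g. a VR headset) would match, which is exactly why false positives occur.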
Are our evaluations actually measuring any stable properties of LLMs? 🧵
We recently updated R Basics, our introductory workshop about research computing in R! 🎉 The reader (a mini-textbook) is here:
ucdavisdatalab.github.io/workshop_r_b...
THE SWITCHEROO
Hey, I have a new WP out now on @socarxiv.bsky.social !
In the paper, I study the full network of partnerships in Norway from 1967 to today. I examine the general structure of the partnership network and the existence of a specific network pattern: the partner switcheroo. 1/x
Jacob Hibel and I have organized an upcoming conference on AI and Social Inequality, which will be held on 3/17 at the UC Student and Policy Center in Sacramento. This event is open to social scientists, computer scientists, and the policy community. poverty.ucdavis.edu/event/artifi...
Here’s a full draft of the upcoming second edition of my “Data Visualization: A Practical Introduction”: socviz.co
The Science, Knowledge, and Technology Section of the American Sociological Association is hosting a pre-conference here at City College (in our FiDi division)!
Our call for abstracts is open until April 1, 2026.
asaskat.com/skattoday/
Now out in the American Sociological Review
We present the first large-scale assessment of the structure and evolution of temporalities expressed in U.S. climate change news coverage (2000 to 2021). For this, we analyzed more than 23,000 statements about climate change effects and actions. 🧵 1/
🧵 on my new paper "Synthetic personas distort the structure of human belief systems" with Roberto Cerina, which I'm very excited about...
🚨 Do synthetic samples look like human samples?
We compare 28 LLMs to the 2024 General Social Survey (GSS) to find out, and develop a host of diagnostics...
NEW: A hobbyist has created Nearby Glasses, an app that warns you if someone close by is wearing smart glasses. 404 Media spoke to the creator who said he was inspired by our coverage that uncovers how men are wearing Meta's Ray-Bans to covertly film massage parlor workers.
If Odo converted to Judaism he would have to be liquid on Shabbat
Well this story blew up. Went to the front page of reddit, hacker news, Gizmodo. People donated thousands of dollars to the legal fund of Jeff Sovern, who dismantled 13 Flock surveillance cameras. The response was near-universal enthusiasm for the project of smashing invasive, exploitative tech.
Oh, amassing large enough datasets with provenance for language model training is totally doable. It's just that when you do, you feel lonely (and unpaid), since people don't really care.
LLM cloud inference dominates usage, but should it? Local models and accelerators have improved massively over recent years.
Perfect routing to the best local model can "reduce energy consumption by 80.4%, compute by 77.3%, and cost by 73.8% versus cloud-only deployment".
arxiv.org/pdf/2511.07885
Yesterday, those who teach Intro to Sociology at Florida colleges (as opposed to universities) received a ready-made curriculum from the state and were ordered to teach it.
Yes, you read that correctly. The *state* is enforcing a curriculum on college profs, complete w/ the following restrictions:
Currently, there are thousands of large pretrained language models (LLMs) available to social scientists. How do we select among them? Using validity, reliability, reproducibility, and replicability as guides, we explore the significance of: (1) model openness, (2) model footprint, (3) training data, and (4) model architectures and fine-tuning. While ex ante tests of validity (i.e., benchmarks) are often privileged in these discussions, we argue that social scientists cannot altogether avoid validating computational measures ex post. Replicability, in particular, is a more pressing guide for selecting language models. Being able to reliably replicate a particular finding that entails the use of a language model necessitates reliably reproducing a task. To this end, we propose starting with smaller, open models, and constructing delimited benchmarks to demonstrate the validity of the entire computational pipeline.
Here's a little working paper with Marshall Taylor and Sanuj Kumar:
Selecting Language Models for Social Science
arxiv.org/abs/2601.10926
Thoughts welcome!
📢 In this Social Forces article, I introduce occupational elitism as a novel measure of social closure: the share of upper-class background workers within an occupation.
Its consequences for earnings stratification can be examined using a social closure theory lens.
🔓 doi.org/10.1093/sf/s...
Why does a worse candidate win? Or an inferior song dominate?
New article with @alexgelas.bsky.social, @pantelispa.bsky.social & Gaël Le Mens.
We show that often, once A becomes even slightly more popular than B, people choose A much more often.
www.science.org/doi/pdf/10.1...
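The pattern described — a slight popularity lead producing a much larger choice share — resembles a nonlinear social-influence rule. The following is an illustrative sketch, not the authors' model: when the exponent gamma exceeds 1, small differences in popularity are amplified into large differences in choice probability.

```python
def choice_prob(pop_a: float, pop_b: float, gamma: float = 3.0) -> float:
    """Probability of choosing option A under a nonlinear influence rule.

    With gamma = 1 choice shares simply track popularity shares;
    with gamma > 1, even a small popularity lead for A is amplified.
    """
    w_a, w_b = pop_a ** gamma, pop_b ** gamma
    return w_a / (w_a + w_b)
```

For example, under gamma = 3, a narrow 55/45 popularity split yields a choice probability for A of roughly 0.65, illustrating how a marginal lead can translate into dominance.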
I’m looking for three PhD students for my new ERC project, starting 1 September. The goal is to understand how firms shape inequality in workers’ careers—using population registers.
Please spread the word! Deadline is March 8, more info here (see projects 4-6):
ics-graduateschool.nl/vacancies/
📢WORK! At the Sociology department of @utrechtuniversity.bsky.social we are hiring a postdoc who will work on applications of AI in sociological research. Join our vibrant-yet-cohesive research community doing cutting-edge research. Please share or apply! www.uu.nl/en/organisat...
About 10 years ago, I set out to better understand the drivers of radicalization into and deradicalization from white supremacy. Work from our endeavors is starting to come out, and I am no longer concerned about sharing it.
I want to share the findings from one of these studies, published last March. 🧵
✨ We’re excited to announce the Spring 2026 IAS Seminar Series, featuring a stellar lineup of speakers and thought-provoking talks. Open to all! #AcademicSky
I took the Colbert Questionert!
Watch the full interview here: youtu.be/HPONVyNiWsU?...
The race of new uses associated with generative models is putting the scientific professions to a severe test.
A valuable article for tracking the state of #GenAI use in sociology: sociologicalscience.com/articles-v13... by @oms279.bsky.social, @ajalvero.bsky.social, @dustinstoltz.com, and M. Taylor