Sure, I think this is right. But I'm definitely not a senior dev, and you clearly are not used to working with code written by social scientists.
Posts by Sol Messing
You have to maintain any open source software package no?
bsky.app/profile/sam-...
Google Trends chart showing interest in commercial coding agents increasing dramatically in early 2026
You can just research things. New from @jatucker.bsky.social & me at @brookings: Coding agents like Claude Code and Codex will likely accelerate research AND undermine institutional structures we built to support it.
PS: all you have to do is say "please take out common AI-isms in that text" and poof, no more obsessive negation and em dashes. "99 percent human writing"
No one goes viral without unnecessary provocation
NB - the thread itself was AI assisted even if the original Brookings piece was not bsky.app/profile/solm...
bsky.app/profile/akou...
Fixed! bsky.app/profile/solm...
Works now! bsky.app/profile/solm...
Fixed! bsky.app/profile/solm...
P.S. No AI was used to draft the Brookings piece. But this thread was posted by Claude Code using a Bluesky MCP server + API.
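For the curious: a Bluesky post is ultimately just a JSON record shipped over the AT Protocol. Here's a stdlib-only sketch of that record's shape; the MCP server and API wiring are not shown, and this is not the authors' actual setup, just an illustration of the `app.bsky.feed.post` lexicon.

```python
import json
from datetime import datetime, timezone

def build_post_record(text: str) -> dict:
    """Build a minimal app.bsky.feed.post record per the AT Protocol lexicon."""
    return {
        "$type": "app.bsky.feed.post",
        "text": text,
        # Lexicon datetimes are ISO 8601 in UTC, conventionally with a "Z" suffix.
        "createdAt": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
    }

record = build_post_record("Posted by an agent via the Bluesky API.")
print(json.dumps(record, indent=2))
```

An MCP server or SDK would wrap this record in an authenticated createRecord call; the record itself is all the agent really authors.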
Here's the paper: brookings.edu/articles/the-train-has-left-the-station-agentic-ai-and-the-future-of-social-science-research/
Andy Hall thinks we'll have new institutions: living research that auto-updates, auto-verified replication, hyperscaled descriptive work. Senior scholars directing dozens of agents. Compute funding + ambitious people is all you need.
freesystems.substack.com/p/the-100x-research-institution
But even if AI progress stopped today, folks like @akoustov.bsky.social argue that the changes already in motion will transform academic research beyond recognition. Hard to disagree!
alexanderkustov.substack.com/p/academics-need-to-wake-up-on-ai
Bar chart from Simon P. Couch comparing Claude Code session energy use to everyday activities like running a dishwasher, driving an electric car, and microwaving
On energy: the story is more nuanced than the headlines suggest. 25 LLM prompts use less energy than one hour of Netflix. But a full-day coding agent session may use ~1 kWh.
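A back-of-envelope sketch of that comparison. Only the ~1 kWh session figure comes from this thread; the per-prompt and streaming numbers below are rough outside assumptions for illustration, not measurements.

```python
# Rough, assumed estimates (not from the thread):
STREAMING_KWH_PER_HOUR = 0.08   # assumed: commonly cited estimate for an hour of video streaming
PROMPT_WH = 3.0                 # assumed: rough per-prompt energy for a chat LLM
AGENT_SESSION_KWH = 1.0         # from the thread: full-day coding-agent session

chat_prompts_kwh = 25 * PROMPT_WH / 1000  # energy for 25 chat prompts, in kWh
session_prompt_equiv = AGENT_SESSION_KWH * 1000 / PROMPT_WH

print(f"25 prompts: {chat_prompts_kwh:.3f} kWh vs 1 hr streaming: {STREAMING_KWH_PER_HOUR} kWh")
print(f"A full-day agent session is like ~{session_prompt_equiv:.0f} chat prompts")
```

The point of the arithmetic: per-prompt chat use is tiny, but an agentic session multiplies prompts by a few hundred, which is why the two headlines can both be true.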
We also worry researchers will 'look for nails': it's so much faster to study data that's already there. Research on non-agentic AI already shows it increases productivity but narrows focus. If hundreds of researchers use agents to mine the same dataset, the odds of "finding" signal in noise go up & up.
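The multiple-comparisons arithmetic behind that worry can be sketched in a few lines: simulate many independent 'studies' of the same pure-noise data and count how many clear p < .05. The group sizes, test, and thresholds here are illustrative choices, not from the thread.

```python
import random
random.seed(42)

def fake_study(n=100):
    """One 'study': test whether two pure-noise groups differ in mean."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = abs(sum(a) / n - sum(b) / n)
    # Under the null, the SE of the mean difference is sqrt(2/n) ~ 0.141,
    # so |diff| > 1.96 * 0.141 ~ 0.277 is 'significant at p < .05'.
    return diff > 0.277

# 500 researchers mine the same null data-generating process:
hits = sum(fake_study() for _ in range(500))
print(f"{hits} of 500 null studies found a 'significant' effect")
```

With a 5% false-positive rate per study, a few dozen 'findings' from pure noise is the expected outcome, and the chance that at least one researcher hits is essentially 1.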
Anakin/Padme 4-panel meme. Anakin: Claude Code. Padme: In sandboxed environment, with no credential access, right? Anakin: --dangerously-skip-permissions from root with .envs
Security risks too. Claude deleted half a dataset during one of my sessions (recovered, thankfully). Agents can ingest security keys from local directories, and users often tell Claude to skip permission requests.
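A minimal sketch of why this matters (not any agent's actual safeguard): a few lines of stdlib Python can sweep a project tree for credential-looking entries in `.env` files, which is roughly what an unsandboxed agent running from that directory can read too. The file-name convention and regex are assumptions for illustration.

```python
import os
import re

# Heuristic: uppercase variable names containing KEY/TOKEN/SECRET/PASSWORD
# assigned in .env-style files look like credentials.
SECRET_PATTERN = re.compile(
    r"^\s*([A-Z0-9_]*(?:KEY|TOKEN|SECRET|PASSWORD)[A-Z0-9_]*)\s*=", re.M
)

def find_exposed_secrets(root: str) -> list[str]:
    """Return credential-looking variable names found in .env files under root."""
    found = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.startswith(".env"):
                with open(os.path.join(dirpath, name)) as f:
                    found.extend(SECRET_PATTERN.findall(f.read()))
    return found
```

Running something like this before launching an agent session (or just moving credentials out of the working tree) is a cheap hedge against the --dangerously-skip-permissions failure mode.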
Spiderman pointing meme: AI coding, AI math assist, AI packet eval, AI cover letter and CV
How do you evaluate a job candidate when an audit study can be built in days? Math rigor, tech chops, strong methods, familiarity w/ the lit: these traditional signals get harder to assess when agents do the heavy lifting.
What happens to junior scholar training when agents do lit review, data labeling, and data vis for a fraction of the cost? Levels the field across institutions, but the RA apprenticeship model is in trouble.
ModelSlant.com β interactive dashboard for LLM political slant
But agents make it faster to build interactive websites for public engagement, what funders/admin have long wanted. I used CC to build one that demos LLM outputs across languages (findings coming soon). @seanjwestwood.bsky.social has done amazing work here:
modelslant.com
Turtles stacked on top of each other - turtles all the way down
And who reviews all these papers? If AI writes the manuscripts and AI reviews them, it's "AI turtles all the way down." Journals need to figure out what human judgment still means in this pipeline.
@causalinf.bsky.social: 'The supply curve has already shifted. The demand curve for publication slots hasn't moved.' It's one thing in econ journals that charge fees, but most journals don't + aren't built for this volume.
causalinf.substack.com/p/claude-code-27-research-and-publishing
This means more papers. Preprint submissions jumped 6-13% above seasonal expectations after Claude Code launched. One editor predicts 50% more submissions to top poli sci journals this year.
bsky.app/profile/solmg.bsky.socia...
HetSL R package GitHub repository page
With CC, we turned replication code into a v1, well-documented R package in a day (more work to do, bug reports welcome). We produced a 20-page analysis of corporate responses to the Ukraine invasion in under an hour. We would have never done this w/o CC.
https://github.com/SolomonMg/HetSL
Yes indeed