Bumping this post about a PhD position I have open.
Please re-share or re-skeet or re-sky! 🙏
Posts by Emir Efendić
We are inviting applications for a two-year postdoctoral position in a collaborative meta-science project on the effectiveness of data and code sharing policies in research-performing organizations. www.tue.nl/en/working-a...
There’s discourse; you just have to be on this site more 🙂
Is the discourse now that this website is dying? I really can’t handle another migration.
I need a website to skulk and look at what people are talking about while being too afraid to engage myself.
Stay alive Bluesky.
For all the benefits of the Artemis mission many people are disregarding the obvious: the amount of sick wallpapers this mission is going to produce.
Spend 2 years in Prague and then 2 years in Maastricht. Get the full quaint European cobblestone experience.
Here's a short description of the project:
Do you want to do a PhD in Judgment and Decision Making on human-AI interaction, in two beautiful European locations?
Well have I got news for you.
@bahniks.bsky.social and I are recruiting a candidate to start around September 2026.
See here: decisionlab.vse.cz/english/we-a...
SCORE, a collaboration of 865 researchers, is now released as three papers in Nature, six preprints, and a lot of data (cos.io/score/). SCORE examined repeatability of findings from the social-behavioral sciences and tested whether human and automated methods could predict replicability.
Teachers going from “Wikipedia is not a resource to use in your citations” to “Wikipedia may be the only resource to use in your citations”.
I’ve launched a website on measurement, experimentation, and causal inference:
danrschley.github.io/Measurement-...
I built it to share methods ideas that are often taught separately, but are deeply connected in practice.
Now published in Psych Science: doi.org/10.1177/0956...
We explored cultural differences in how people across six different countries attribute moral standing.
One thing I keep seeing again and again is how all the democratization claims of social media are falling apart and how the whole system seems to be a successful example of minority influence for the worst of humanity’s qualities.
Remember that brief period when everyone’s presentations had those hyper-realistic, cartoonish AI-generated pictures of people working in libraries of Babel?
Apropos, we have a preprint that attempts to leverage disagreement from an LLM to help people with making predictions. Could be of interest.
www.researchgate.net/publication/...
I’ll just gpt code it to something fancy 😀
Looking at all these fancy people here making jokes about R and pipes and tibbles while I'm just doing basic Qualtrics experiments and my code to clean the data has been the same for the last 4 years.
Online Studies

Psychological Science requires that authors who use samples from online data collection include a statement in the Method section explicitly addressing their approach to preventing and detecting automated or AI-generated responses.

Rationale

As large language models and other generative AI tools become more accessible, the risk of data contamination by non-human respondents has increased dramatically in research. Psychological science (and the social sciences generally) is particularly susceptible to this issue given its growing reliance on online data collection. Preventing automated responses during data collection and detecting them afterward often involve methodological trade-offs. For instance, technical barriers that aim to prevent LLM use (e.g., blocking copy-pasting functionalities) may eliminate behavioral indicators needed for detection (e.g., pasting rather than typing). This policy aims to enhance transparency and reproducibility of reported results by requiring authors to articulate their approach across both prevention and detection dimensions, enabling readers and reviewers to assess the likelihood of reported data being influenced by automated responses.

Scope

This policy applies to any submission with at least one study that includes data collected online without direct human supervision (e.g., via crowdsourcing platforms, student participants who complete the study online, online recruitment ads, or remote survey distribution tools).

Required Reporting

Authors must include in the Methods section either: a statement confirming that procedures were in place to prevent and/or detect and exclude automated or AI-generated responses, including a description of those procedures (e.g., explicit participant instructions against LLM use, disabled copy–paste functionality, CAPTCHA use, IP filtering, consistency checks, attention checks, adversarial prompting) as well as the types of automated responses that these procedures are suitable …
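The detection side of the policy can be sketched in a few lines. This is a hypothetical screening step, not anything from the journal or any specific platform: it flags respondents who failed an attention check, pasted their answers, or finished implausibly fast. All field names and thresholds are illustrative.

```python
def flag_suspect_responses(responses):
    """Return IDs of respondents flagged as possibly automated or AI-generated.

    Each response is a dict with illustrative keys:
    passed_attention_check, paste_events, completion_seconds.
    """
    flagged = []
    for r in responses:
        failed_attention = not r.get("passed_attention_check", False)
        pasted_answer = r.get("paste_events", 0) > 0
        too_fast = r.get("completion_seconds", float("inf")) < 30  # implausibly fast
        if failed_attention or pasted_answer or too_fast:
            flagged.append(r["id"])
    return flagged

sample = [
    {"id": "p1", "passed_attention_check": True, "paste_events": 0, "completion_seconds": 240},
    {"id": "p2", "passed_attention_check": False, "paste_events": 0, "completion_seconds": 180},
    {"id": "p3", "passed_attention_check": True, "paste_events": 3, "completion_seconds": 95},
]
print(flag_suspect_responses(sample))  # → ['p2', 'p3']
```

In a real Method section you would report each criterion, its threshold, and how many responses it excluded, per the policy's transparency aim.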
Maybe of interest: The submission guidelines of Psychological Science now demand an explicit statement on measures taken to reduce the risk of AI-generated responses for all online studies!
www.psychologicalscience.org/publications...
Many are appropriately outraged by Altman’s comments here implying that raising a human child is akin to “training” an AI model.
This is part of a broader pattern where AI industry leaders use language that collapses the boundary between human and machine.
🧵/
Built out in the last couple of years. We added mounds upon mounds.
Spent the day skiing. On actual snow.
I see people are talking about pipes here while I’m skiing this half pipe. (Note: no pipes or half pipes were harmed.)
This is fascinating. I always used these chills and goosebumps to tell me what I liked, but also, when I play music, to figure out which tones work with each other even before knowing theory (e.g., scales).
Could be interesting to: @dgrand.bsky.social @gordpennycook.bsky.social @tomcostello.bsky.social
There are a lot of nuances to this finding. Some improvement was also observed in the agreeing-LLM condition, and conversations weren't uniformly beneficial.
In fact, when the first prediction was pretty accurate, we saw a slight decrease in accuracy.
Comments are welcome!!!
Now for the cool stuff: when people talked to a disagreeing LLM they revised their predictions more and were more likely to revise them in the right direction (upper panels). This improved accuracy (lower panels) and the improvement occurred much more when initial predictions were inaccurate.
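"Revised in the right direction" can be made concrete with a tiny scoring rule. This is a minimal sketch under my own assumptions, not the paper's code: a revision counts as moving in the right direction if the second prediction is strictly closer to the true value than the first.

```python
def revised_toward_truth(first, second, truth):
    """True if the revised prediction is strictly closer to the true value."""
    return abs(second - truth) < abs(first - truth)

# Illustrative numbers: suppose the true value is 100.
print(revised_toward_truth(60, 85, 100))  # revising 60 -> 85 moves toward it: True
print(revised_toward_truth(60, 40, 100))  # revising 60 -> 40 moves away: False
```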
We had people make predictions and either converse with an agreeing LLM or a disagreeing LLM.
They had to explain the reasoning behind their prediction, and after the conversation they could take another shot at it.
Disagreement (having one's views challenged) is a really good way to improve decisions. But, people avoid it because it's uncomfortable (among other things).
But LLMs are really good at conversation, so we thought: why not leverage this to deliver disagreement without the social consequences?
We have a new pre-print! 📝🖨️
We find that conversing with a disagreeing LLM helped improve people's inaccurate predictions!
osf.io/preprints/ps...
Let me tell you all about it:
📣 Applications for the 23rd Summer Institute on Bounded Rationality are now open!
✨Join us in Berlin @arc-mpib.bsky.social June 08–16, 2026, to explore the topic of “Decision Making in the Age of AI”.
✏️ More details + application form (deadline: March 16): www.mpib-berlin.mpg.de/research/res...
Sir Ian McKellen performing a monologue from Shakespeare’s Sir Thomas More on the Stephen Colbert show. Never have I heard this monologue performed with such a keen sense of prescience. Nor have I ever been in this exact historical moment. Thank you, Sir Ian, for reaching us once again.
#Pinks #ProudBlue
New paper (forthcoming in Cognition): “Context-dependent effects of branches in decisions under risk.” authors.elsevier.com/a/1mXL%7E2Hx...
Key finding: when people choose between risky options, they’re more likely to pick the one with more distinct probabilistic outcomes (“more pathways to winning”).