
Posts by Emir Efendić

Pinning this post about a PhD position I have open.

Please re-share or re-skeet or re-sky! 🙏

4 days ago 3 1 0 1
Preview
Postdoc in Meta-science · Personnel type: Scientific staff

We are inviting applications for a two-year postdoctoral position in a collaborative meta-science project on the effectiveness of data and code sharing policies in research-performing organizations. www.tue.nl/en/working-a...

6 days ago 52 66 1 0

There’s discourse; you just have to be on this site more 🙂

6 days ago 0 0 0 0

Is the discourse now that this website is dying? I really can’t handle another migration.

I need a website to skulk and look at what people are talking about while being too afraid to engage myself.

Stay alive Bluesky.

6 days ago 1 0 1 0

For all the benefits of the Artemis mission, many people are disregarding the obvious: the number of sick wallpapers this mission is going to produce.

2 weeks ago 1 0 0 0
Post image

Spend 2 years in Prague and then 2 years in Maastricht. Get the full quaint European cobblestone experience.

Here's a short description of the project:

2 weeks ago 2 0 0 0
Preview
We are hiring PhD Position in Judgment & Decision Making At a Glance Topic: Human-AI Interaction & Cognitive Biases Locations: VŠE Prague (2 years) + Maastricht University (2 years) Funding: Fully funded (4-year pr...

Do you want to do a PhD in Judgment and Decision Making in two beautiful European locations on human-AI interaction?

Well have I got news for you.

@bahniks.bsky.social and I are recruiting a candidate to start around September 2026.

See here: decisionlab.vse.cz/english/we-a...

2 weeks ago 3 4 1 1
Preview
SCORE | Center for Open Science SCORE shows that there is no shortcut to producing credible research findings, and there is no single indicator of trustworthiness. Research progress depends on transparency, rigor, and establishing r...

SCORE, a collaboration of 865 researchers, is now released as three papers in Nature, six preprints, and a lot of data (cos.io/score/). SCORE examined repeatability of findings from the social-behavioral sciences and tested whether human and automated methods could predict replicability.

2 weeks ago 190 106 1 32

Teachers going from “Wikipedia is not a resource to use in your citations” to “Wikipedia may be the only resource to use in your citations”.

3 weeks ago 0 0 0 0
Measurement, Experimentation, & Causal Inference

I’ve launched a website on measurement, experimentation, and causal inference:

danrschley.github.io/Measurement-...

I built it to share methods ideas that are often taught separately, but are deeply connected in practice.

4 weeks ago 21 6 1 2
Post image

Now published in Psych Science: doi.org/10.1177/0956...

We explored cultural differences in how people across six different countries attribute moral standing.

4 weeks ago 13 2 1 0

One thing I keep seeing again and again is how all the democratization claims of social media are falling apart and how the whole system seems to be a successful example of minority influence for the worst of humanity’s qualities.

1 month ago 2 0 0 0

Remember that brief period in time when everyone’s presentations had those hyper-realistic cartoon AI-generated pictures of people working in libraries of Babel.

1 month ago 0 0 0 0
Preview
(PDF) Conversing with a disagreeing LLM improves people's inaccurate predictions PDF | Accurately predicting the outcome of future events improves decisions in domains ranging from health to finance, yet prediction errors are common.... | Find, read and cite all the research you n...

Apropos, we have a preprint that attempts to leverage disagreement from an LLM to help people with making predictions. Could be of interest.

www.researchgate.net/publication/...

1 month ago 1 0 0 0

I’ll just gpt code it to something fancy 😀

1 month ago 1 0 0 0
Preview
ALT: Homer Simpson from The Simpsons standing in front of a grassy field.

Looking at all these fancy people here making jokes about R and pipes and tibbles while I'm just doing basic Qualtrics experiments and my code to clean the data has been the same for the last 4 years.

1 month ago 0 0 1 0
Online Studies
Psychological Science requires that authors who use samples from online data collection include a statement in the Method section explicitly addressing their approach to preventing and detecting automated or AI-generated responses.

Rationale

As large language models and other generative AI tools become more accessible, the risk of data contamination by non-human respondents has increased dramatically in research. Psychological science (and the social sciences generally) is particularly susceptible to this issue given its growing reliance on online data collection. Preventing automated responses during data collection and detecting them afterward often involve methodological trade-offs. For instance, technical barriers that aim to prevent LLM use (e.g., blocking copy-pasting functionalities) may eliminate behavioral indicators needed for detection (e.g., pasting rather than typing). This policy aims to enhance transparency and reproducibility of reported results by requiring authors to articulate their approach across both prevention and detection dimensions, enabling readers and reviewers to assess the likelihood of reported data being influenced by automated responses.

Scope

This policy applies to any submission with at least one study that includes data collected online without direct human supervision (e.g., via crowdsourcing platforms, student participants who complete the study online, online recruitment ads, or remote survey distribution tools).

Required Reporting

Authors must include in the Methods section either:

A statement confirming that procedures were in place to prevent and/or detect and exclude automated or AI-generated responses, including a description of those procedures (e.g., explicit participant instructions against LLM use, disabled copy–paste functionality, CAPTCHA use, IP filtering, consistency checks, attention checks, adversarial prompting) as well as the types of automated responses that these procedures are suitable …
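The detection side of the policy (attention checks, response-time screening, behavioral indicators such as pasting rather than typing) amounts to a post-hoc screening pass over the collected data. Below is a minimal illustrative sketch of such a pass; the column names (`duration_sec`, `attention_check`, `pasted`) and thresholds are assumptions for the example, not part of the journal's policy.

```python
# Illustrative post-hoc screening for suspected automated responses.
# Column names and the 60-second threshold are assumptions for this
# sketch, not requirements of the Psychological Science policy.
import pandas as pd

def flag_suspect_responses(df: pd.DataFrame,
                           min_duration_sec: float = 60.0) -> pd.DataFrame:
    """Flag rows that trip any of three common detection heuristics."""
    out = df.copy()
    # 1. Implausibly fast completion times.
    out["too_fast"] = out["duration_sec"] < min_duration_sec
    # 2. Failed attention check (passing coded as True).
    out["failed_attention"] = ~out["attention_check"]
    # 3. Behavioral indicator: open text pasted rather than typed.
    out["pasted_text"] = out["pasted"]
    out["suspect"] = out[["too_fast", "failed_attention",
                          "pasted_text"]].any(axis=1)
    return out

demo = pd.DataFrame({
    "duration_sec": [45.0, 300.0, 180.0],
    "attention_check": [True, True, False],
    "pasted": [True, False, False],
})
flagged = flag_suspect_responses(demo)
print(flagged["suspect"].tolist())  # rows 0 (fast + pasted) and 2 (failed check)
```

In practice the flags would feed a preregistered exclusion rule rather than an automatic drop, since each heuristic also catches some genuine human respondents.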


Maybe of interest: The submission guidelines of Psychological Science now demand an explicit statement on measures taken to reduce the risk of AI-generated responses for all online studies!

www.psychologicalscience.org/publications...

1 month ago 124 53 1 0

Many are appropriately outraged by Altman’s comments here implying that raising a human child is akin to “training” an AI model.

This is part of a broader pattern where AI industry leaders use language that collapses the boundary between human and machine.

🧵/

1 month ago 493 200 28 22

Built out in the last couple of years. We added mounds upon mounds.

1 month ago 0 0 0 0
Post image

Spent the day skiing. On actual snow.
I see people are talking about pipes here while I’m skiing this half pipe. (Note: no pipes or half pipes were harmed.)

1 month ago 1 0 1 0

This is fascinating. I always used these chills and goosebumps to tell me what I liked, but also, when I play music, to figure out which tones work with each other even before knowing theory (e.g., scales).

2 months ago 0 0 0 0

Could be interesting to: @dgrand.bsky.social @gordpennycook.bsky.social @tomcostello.bsky.social

2 months ago 0 0 0 0

There are a lot of nuances to this finding. Some improvement was also observed in the agreeing-LLM condition, and conversations weren't uniformly beneficial.

In fact, when the first prediction was pretty accurate, we saw a slight decrease in accuracy.

Comments are welcome!!!

2 months ago 0 0 1 0
Post image

Now for the cool stuff: when people talked to a disagreeing LLM, they revised their predictions more and were more likely to revise them in the right direction (upper panels). This improved accuracy (lower panels), and the improvement was much larger when initial predictions were inaccurate.

2 months ago 0 1 1 0

We had people make predictions and either converse with an agreeing LLM or a disagreeing LLM.

They had to explain their reasoning behind the prediction and after the conversation, they could take another shot at the prediction.

2 months ago 0 0 1 0

Disagreement (having one's views challenged) is a really good way to improve decisions. But people avoid it because it's uncomfortable (among other things).

LLMs, though, are really good at conversation, so we thought: why not leverage this to deliver disagreement without the social consequences?
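The two conversational conditions described in this thread could be set up as a minimal sketch like the one below, assuming a generic chat-completion message format; the function name and prompt wording are hypothetical illustrations, not the authors' actual study materials.

```python
# Hypothetical sketch of the agree/disagree conversational conditions.
# The prompt wording is an assumption for illustration, not the
# authors' actual materials.

AGREE_SYSTEM = (
    "You are a conversation partner. Whatever prediction and reasoning "
    "the participant gives, agree with it and reinforce their arguments."
)
DISAGREE_SYSTEM = (
    "You are a conversation partner. Whatever prediction and reasoning "
    "the participant gives, respectfully challenge it and argue for a "
    "different estimate, noting considerations they may have missed."
)

def build_messages(condition: str, prediction: str,
                   reasoning: str) -> list[dict]:
    """Assemble a chat transcript for one experimental condition."""
    system = DISAGREE_SYSTEM if condition == "disagree" else AGREE_SYSTEM
    return [
        {"role": "system", "content": system},
        {"role": "user",
         "content": f"My prediction: {prediction}. My reasoning: {reasoning}"},
    ]

msgs = build_messages("disagree",
                      "Inflation will fall below 2% next year",
                      "Central banks are tightening aggressively.")
print(msgs[0]["role"], "->", msgs[1]["role"])  # system -> user
```

The message list would then be sent to whatever chat-completion API the study used, and the participant's follow-up turns appended as further `user` messages.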

2 months ago 0 0 1 0
Post image

We have a new pre-print! 📝🖨️

We find that conversing with a disagreeing LLM helped improve people's inaccurate predictions!

osf.io/preprints/ps...

Let me tell you all about it:

2 months ago 10 3 1 0
Post image

📣 Applications for the 23rd Summer Institute on Bounded Rationality are now open!

✨Join us in Berlin @arc-mpib.bsky.social June 08–16, 2026, to explore the topic of “Decision Making in the Age of AI”.

✏️ More details + application form (deadline: March 16): www.mpib-berlin.mpg.de/research/res...

2 months ago 25 24 0 3
Video

Sir Ian McKellen performing a monologue from Shakespeare’s Sir Thomas More on the Stephen Colbert show. Never have I heard this monologue performed with such a keen sense of prescience. Nor have I ever been in this exact historical moment. TY, Sir Ian, for reaching us once again.
#Pinks #ProudBlue

2 months ago 32394 13911 586 1587

New paper (forthcoming in Cognition): Context-dependent effects of branches in decisions under risk authors.elsevier.com/a/1mXL%7E2Hx...
Key finding: when people choose between risky options, they’re more likely to pick the one with more distinct probabilistic outcomes (“more pathways to winning”).
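One way to read "more pathways to winning": two gambles can be numerically equivalent while differing in how many distinct winning branches they offer. The sketch below illustrates that idea with made-up payoffs; these are not the paper's stimuli.

```python
# Illustration of "more pathways to winning": two gambles with identical
# expected value but different numbers of distinct winning branches.
# Payoffs and probabilities are made up for the sketch.

def expected_value(branches: list[tuple[float, float]]) -> float:
    """Each branch is a (probability, payoff) pair."""
    return sum(p * x for p, x in branches)

one_branch = [(0.40, 10.0), (0.60, 0.0)]                  # one way to win $10
two_branches = [(0.20, 10.0), (0.20, 10.0), (0.60, 0.0)]  # two ways to win $10

print(expected_value(one_branch))    # 4.0
print(expected_value(two_branches))  # 4.0 — same EV, more winning branches
```

Under the paper's finding, people would tend to favor the second gamble even though a normative expected-value analysis treats the two as identical.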

2 months ago 3 2 1 0