
Posts by Christoph Strauch

Nothing not to love here.
Some extra nerd love for the joyful pupil dilation at 14:32 (which is not remotely related to ours!). Maybe it's me projecting that it's joy, but you can't really stop me from that, can you?

1 day ago 4 0 0 0

❤️‍🔥 Exciting new preprint ❤️‍🔥 #Pupil constriction causes, by itself and independently of visual stimulation, activity in the human #retina and #visual system. w/ @anavili.bsky.social @veerahelmisofia.bsky.social @hakankarsilar.bsky.social @olaf.dimigen.de 1/4 🧵
www.biorxiv.org/content/10.6...

4 days ago 26 12 1 2

A pop-science article on our recent synesthesia paper, how nice! We hope to get the reply to reviews & updated manuscript out soon :)

2 weeks ago 0 0 0 0
Synesthesia isn't just in your mind. The body reacts as if the colors were real. Pupil size in people with synesthesia changed depending on how bright or dark the perceived colors were. www.livescience.com/health/synes...

3 weeks ago 10 4 3 1

Led by the one and only Koert Stribos (master's thesis!). Thanks to: @yuqingc.bsky.social, @dkoevoet.bsky.social & reviewers and editor for their helpful suggestions!

3 weeks ago 0 1 0 0

The package is available via GitHub and fully open, including example data. You can use it with MNE, for instance! We see our method as useful primarily for tasks involving few and large eye movements (antisaccade, memory-guided saccade tasks, etc.). Hopefully it opens opportunities for reanalyses, too!

3 weeks ago 0 1 1 0
Snip & Stitch: a simple and accessible correction for the pupil foreshortening error

For those interested in (re)analyzing (video-based) pupil data and looking into a solution for the foreshortening error (resulting from changes in the angle between camera and eye), check our open access Behavior Research Methods paper! rdcu.be/faNse
link.springer.com/article/10.3...
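The geometry behind the foreshortening error can be illustrated with a minimal cosine model (a deliberate simplification for intuition, not the authors' Snip & Stitch method): the pupil area the camera records shrinks roughly with the cosine of the angle between the camera axis and gaze direction, so a naive correction divides the measurement by that cosine.

```python
import numpy as np

def naive_foreshortening_correction(apparent_area, gaze_angle_deg):
    """Divide apparent pupil area by cos(angle between camera axis and gaze).

    A toy model of why the foreshortening error arises; real corrections
    (such as Snip & Stitch) are more involved than a single cosine term.
    """
    angle = np.deg2rad(np.asarray(gaze_angle_deg, dtype=float))
    return np.asarray(apparent_area, dtype=float) / np.cos(angle)

# A pupil of constant true area 10 looks smaller at eccentric gaze angles:
angles = [0.0, 15.0, 30.0]
apparent = 10.0 * np.cos(np.deg2rad(angles))   # what the camera "sees"
corrected = naive_foreshortening_correction(apparent, angles)
# corrected recovers ~[10, 10, 10] under this toy model
```

In this toy setup the correction is exact by construction; with video-based trackers, noise in both the area estimate and the gaze angle makes the problem considerably harder, which is what motivates dedicated methods.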

3 weeks ago 15 2 1 2
Screenshot of a manuscript title page. Title: “An AI agent can complete the Attention Network Test with human-like behavioral signatures: Implications for the bot-or-not debate.” Authors: Richard Huskey, Ziyu Zhao, Douglas A. Parry, and Jacob T. Fisher, with university affiliations listed below. The abstract says an autonomous AI agent completed the Attention Network Test in real time and produced mostly human-like behavioral data. Across seven code revisions, the bot achieved attention network scores within published human norms, 95.8% accuracy, and reaction-time patterns showing positive skew and trial-to-trial autocorrelation. Compared with 796 human participants, the bot fell within the human range on several measures but showed elevated autocorrelation and a bimodal reaction-time distribution due to intermittent detection failures. The paper argues this makes simple bot-vs-human detection harder in online reaction-time studies.


1/n

New preprint with Ziyu Zhao, @dougaparry.bsky.social, & @jacobtfisher.online

Can an AI bot complete a live online reaction-time task & produce data that passes as human?

We built an autonomous bot to take the Attention Network Test (ANT) in real time

Preprint:
doi.org/10.31234/osf...

3 weeks ago 21 12 2 1

No, probably not worth it if you do it for one account alone.
We tailored prompts a bit but expanded from a common base prompt, so some, but not unlimited, scalability. And indeed, I'd hope that human-like behavior (like we show for Stroop) is not so easy to achieve for non-experts.

1 month ago 1 0 1 0

Probably no reason to abandon online studies yet, but caution is likely appropriate.
I'm wondering where the edge cases are: heavily text-based tasks with slow RTs (e.g., a slow economic decision-making task). What would you think here?

1 month ago 1 0 1 0

We'd argue, though, that a network of profiles is only the most sophisticated/organized way. Agentic AI primarily produces scripts, and such scripts could be sold. Similarly, people might sell scripts that pass attention checks, for instance. Whether that's happening is another question.

1 month ago 1 0 1 0

Thanks for sharing your thoughts. We didn't find it too hard to get agentic AI to do a couple of behavioral tasks (our reply, in turn, basically demonstrates 1).
Indeed, for psych, the economics & barriers will probably be the key things to watch out for.

1 month ago 2 0 1 0

Probably interesting for @talha-ozudogru.bsky.social

1 month ago 0 0 0 0

Interesting, no, haven't heard of their work!

1 month ago 0 0 1 0

Cool! @achetverikov.bsky.social showed that we don't have good reason to assume that bots were already polluting our data on Prolific. We then showed that bots can be built through agentic AI with prompting alone. So the danger is real, but it hasn't been shown that we're affected (on Prolific)

1 month ago 2 0 1 0

A reply to my reply about bots in online studies - and a very cool demonstration! I summarized my thoughts in the comments to this post.

1 month ago 8 3 2 0

Thanks also to @achetverikov.bsky.social for discussing his reply & our reply to the reply upfront.

1 month ago 1 0 1 0
Responses on an online Stroop task. A bot achieved average RT and accuracy that is in line with recent papers on the Stroop task.


Learnings for us (besides not relying on shitty bot-detectors):
- building bots that solve specific tasks is relatively easy with a base prompt
- human-like performance is harder, but not impossible (Figure: Stroop data)
- text-heavy, slow-paced experiments are more vulnerable
Caution is appropriate.
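The RT signatures discussed in the thread (positive skew, trial-to-trial autocorrelation) are cheap to compute as a first-pass screen. A minimal sketch, illustrative only and not the paper's actual analysis pipeline:

```python
import numpy as np

def rt_signatures(rts):
    """Return (skewness, lag-1 autocorrelation) of a reaction-time series."""
    rts = np.asarray(rts, dtype=float)
    z = (rts - rts.mean()) / rts.std()
    skew = np.mean(z ** 3)                       # human RTs: clearly positive
    lag1 = np.corrcoef(rts[:-1], rts[1:])[0, 1]  # trial-to-trial dependence
    return skew, lag1

# Toy ex-Gaussian-like RTs (ms): normal base plus exponential tail,
# which produces the positive skew typical of human responding
rng = np.random.default_rng(0)
rts = 400 + 50 * rng.standard_normal(500) + rng.exponential(100, 500)
skew, lag1 = rt_signatures(rts)
# skew comes out clearly positive; lag1 near zero for independent trials
```

Note that, as the thread points out, such simple statistics no longer separate bots from humans once a bot reproduces them, so they are at best one signal among several.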

1 month ago 0 1 1 0

Led by @talha-ozudogru.bsky.social, we used a more elaborate base prompt that was then adjusted per experiment. For now, we have used it on Posner cueing, Stroop, and a copy task.

1 month ago 0 0 1 0

We recently warned of bots in online behavioral research. @achetverikov.bsky.social showed there is no evidence for that in our @joinprolific.bsky.social data - but that doesn't mean we're safe. Agentic AI can do behavioral tasks through prompting alone. Reply & videos: osf.io/3cztr/overview

1 month ago 20 12 3 3

As written above, we will reply.

1 month ago 0 0 0 0

Perhaps it's the head. @dkoevoet.bsky.social and I once paid two euros (we didn't get reimbursed) to legally use it; it's gotta work for its money now! ;-)

1 month ago 3 0 1 0

So overall it works well. But then again, we had 120 trials (although substantially fewer may end up in either bin, as we can't control how bright the colors people experience are); on a single-trial basis, the pupil responses are likely very noisy, I would think. Hope that answers your point!

1 month ago 2 0 0 0

Thanks! I quickly made this plot for you, a reviewer also asked a question in that direction. Light gray is average pupil response to colors in the bright bin, dark gray average pupil response to colors in the dark bin.
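The binned-average idea behind such a plot can be sketched as follows (hypothetical variable names and a median split for simplicity; not the study's actual pipeline): split trials by the luminance of the experienced color, then average the pupil time courses within each bin.

```python
import numpy as np

def binned_pupil_averages(pupil_traces, luminances):
    """Average pupil time courses for bright vs. dark trials.

    pupil_traces: (n_trials, n_samples) baseline-corrected pupil size
    luminances:   (n_trials,) luminance of the color on each trial
    Trials are split at the median luminance (a simplification).
    """
    pupil_traces = np.asarray(pupil_traces, dtype=float)
    luminances = np.asarray(luminances, dtype=float)
    bright = luminances >= np.median(luminances)
    return pupil_traces[bright].mean(axis=0), pupil_traces[~bright].mean(axis=0)

# Toy data: bright trials constrict (negative deflection), dark trials dilate
traces = np.vstack([np.full((5, 4), -1.0), np.full((5, 4), 1.0)])
lums = np.array([1.0] * 5 + [0.0] * 5)
bright_avg, dark_avg = binned_pupil_averages(traces, lums)
```

With real data the per-trial traces would of course be noisy, which is why averaging within bins (as in the plot) is needed before the brightness effect becomes visible.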

1 month ago 0 0 1 0

We show that synesthesia is sensory and automatic in nature: the pupil scales with the brightness of experienced synesthetic colors. doi.org/10.7554/eLif...
Now in its new dress @elife.bsky.social (convincing & valuable in round 1).
If anyone wants to pick up the method, happy to share & explain!

1 month ago 86 25 4 0

Thanks Andrey, sensible analyses. In short, we mostly share the notion that our markers do not sufficiently establish that bots were in our data. That said, we do think there is sufficient reason to worry; we'll respond a bit more elaborately in a week or two!

1 month ago 6 0 2 0

Either technical approach more likely affects the scalability/economics of fraud and the degree of fabricated data than its existence; either way, we as a community need to find strategies to respond and to treat online behavioral data with appropriate care.

1 month ago 1 0 0 0

That's fair. Obviously, we don't know how the data were produced, through agents or more autonomous approaches. Given that our task was relatively short and the financial compensation limited, I would expect some sophistication/generalization in the approach (for it to be a sensible business case).

1 month ago 0 0 1 0

@belekedezwart.bsky.social is this something you could check?

1 month ago 1 0 0 0

Does that answer your point? Otherwise, Talha can maybe elaborate a bit more, as he has played with how bots (can) achieve this more than I/we have.

1 month ago 0 0 1 0