Deeply honored that our paper was recognized with the #SANS2026 Award!
SANS was an important part of this paper's journey! @jadynpark.bsky.social first presented this work at SANS2023, and again in SANS2025, and we benefited greatly from the feedback and discussion!
Posts by Richard Huskey
And that's a wrap!
We hope you had a great conference!
Join us for #SANS2027 in Montreal! With program co-chairs @p1sh.bsky.social and Nina Lauharatanahirun
Can’t wait to see some of you at my poster today! #SANS2026
Research poster by Kee, Madgula, and Huskey introducing Inoxity, an open-source iOS platform for high-throughput data collection. Inoxity integrates passive sleep tracking, Apple Screen Time capture, and EMA surveys to study sleep–media dynamics in real-world settings. An N=1 proof-of-concept shows strong agreement with Apple Health sleep stage data.
Last day of #SANS2026! Today, @rachaelkee.bsky.social will present her poster introducing Inoxity. It’s an open source iOS app that allows for high-throughput collection of EMA and bio-behavioral data. What can you do with high-throughput data? Check her preprint! doi.org/10.33767/osf...
Interesting question! We’re pretty stumped by this one, too. Prior work has shown correlations between ADHD and executive control using the ANT, so we were surprised not to see that in our data. It isn’t a code bug, either. Caveat: this is self-reported rather than a clinical ADHD assessment
Get your media neuro dose at #SANS26!
We will have a poster session every day at SANS, presenting on both morality and media addiction. See our poster details below 👇👇👇
Today, Dr. Weber was recognized for his pioneering work using naturalistic paradigms in fMRI research. We’re proud to see the Media Neuroscience Lab’s work honored through his recognition as a SANS Founding Fellow. #SANS2026
Congratulations, Rene! Remarkable to see a career-long focus on content and naturalistic approaches in media neuroscience. And exciting to see the new things coming from your lab!
Brilliant work, @elisabaek.bsky.social !! Congratulations on this well deserved recognition of your scholarship!
Are you a trainee at #SANS2026, or a trainee in social or affective neuroscience? Fill out this quick survey to help with a game at COMICSANS. More data = better games.
forms.gle/b5B8y3m6LEFu...
Research poster by Zhao, Fisher, Parry, and Huskey presenting a large-scale, multi-site, pre-registered study on TikTok use and attention. Key finding: near-null effects on executive control and orienting; TikTok users show higher alerting efficiency than non-users, suggesting a stimulus-reactive attentional profile rather than impairment.
Also today at #SANS2026, Ziyu Zhao investigates the “brain rot” hypothesis. When studied rigorously, in large multi-national samples, we find a whole lot of nothing. Conclusion: previously observed alarming results might be driven by sampling bias and/or measurement error (P1-C-20)
Research poster by Klein, Gong, Eden, and Huskey using the drift-diffusion model to examine how depression, anxiety, and loneliness shape media selection in 313 undergraduates. Key finding: anxiety and depression are linked to adaptive, affect-regulating selection; loneliness is linked to maladaptive use.
We’re at #SANS2026! Today, Valerie Klein will share her undergraduate capstone project using drift diffusion models to understand how mental health status influences affective media selection. Find her at P1-F-32, and check out the preprint, here doi.org/10.21203/rs....
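For readers less familiar with drift diffusion models, here is a minimal toy simulation of a single two-choice trial. The function name, parameter values, and choice labels are illustrative only; this is not the analysis from the poster or preprint.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence accumulates at rate `drift` plus Gaussian noise until it
    crosses +boundary (choose one media option) or -boundary (choose the
    other). Returns (choice, rt_in_seconds). Parameters are illustrative.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        # Euler step: deterministic drift plus scaled Gaussian noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else -1), t
```

In this framework, one common approach is to ask whether individual differences (e.g., depression, anxiety, loneliness scores) shift parameters such as drift rate or boundary separation during media selection.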
Grad school: to be successful in a future career you're going to need to focus on THIS ONE THING for the next 5 years.
The actual career in question: you can't focus on one thing for more than 30 minutes at a time and you have to keep switching between 1000 things endlessly
A waste of grant proposals
This can probably be further refined fairly easily, although the engineering solution remains highly task- and environment-specific
Rereading our preprint, the “fallback failure” section needs a rewrite. The bot used 2 engineering pathways: on no/center-cue trials it detected the target directly; on spatial/double-cue trials it detected the cue & inferred target onset ~500ms later. That pathway asymmetry produced the bimodal RT pattern. So…
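A toy simulation of that pathway asymmetry (all timing parameters here are hypothetical, not the bot’s actual values) shows how two latency profiles mix into a bimodal RT distribution:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500  # trials per pathway (hypothetical count)

# Pathway 1 (no/center cue): detect the target pixels directly.
# RT = detection latency + motor time.
direct_rts = rng.normal(loc=450, scale=50, size=n)

# Pathway 2 (spatial/double cue): detect the cue, then wait an inferred
# ~500 ms cue-to-target interval before responding; timer jitter adds noise.
inferred_rts = 500 + rng.normal(loc=150, scale=40, size=n)

all_rts = np.concatenate([direct_rts, inferred_rts])
# The mixture has two modes (here ~450 ms and ~650 ms): a bimodal RT
# distribution, unlike the single right-skewed mode typical of humans.
```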
Absolutely! A few things. First: we could have kept going and probably further optimized the bot. Had there been an economic incentive like the one @cstrauch.bsky.social and colleagues mentioned, we probably would have. Instead, we just wanted to show what was feasible. And second…
10/n
This preprint is meant as a complement to the conversation started by @cstrauch.bsky.social & @achetverikov.bsky.social
We need better detection. We also need a better model of what bots & agents can actually do
Code/data/manuscript:
github.com/richardhuske...
9/n
Unfortunately, a productive path forward probably requires collecting both in-lab & online data while also building a task-specific bot. That gives us 3 benchmarks to test both whether bots can do the task & what signatures they leave, rather than relying on distributional assumptions alone
8/n
What does this all mean?
We don’t think the lesson is: online behavioral research is doomed
And we don’t think the lesson is: we’re safe
Instead: aggregate RT summaries alone are neither sufficient to detect bots nor specific enough to separate bots from atypical humans
Psychometric profiles Six-panel figure comparing Bot v7.3 (top, red) with a human participant (bottom, blue) on three reaction-time markers: QQ plot, SD-versus-mean RT, and autocorrelation. The bot and human show similarly right-skewed RT distributions and positive SD–mean relationships, but the bot’s QQ plot bends more sharply at the upper tail and its autocorrelation stays positive across many lags. The human shows a strong lag-1 autocorrelation that drops close to zero afterward, while the bot shows more sustained serial dependence.
Marker distributions across humans with bot overlaid Four histograms show human distributions in light blue with a red vertical line marking the bot’s value for A) QQ correlation, B) skewness, C) SD–mean slope, and D) mean autocorrelation. The bot falls well inside the human range for QQ correlation, skewness, and SD–mean slope, usually near the middle of the distribution. For mean autocorrelation, the bot’s value is shifted to the high end, indicating more trial-to-trial dependence than most humans.
RT distributions Two-panel figure of reaction-time histograms. Left panel overlays all human trials (blue, N=208,104) and bot trials (red, N=273): both peak around 400–600 ms, but the bot has a broader spread, including more very fast responses and a heavier slow tail extending past 1,000 ms. Right panel breaks the bot RTs down by flanker condition: congruent, neutral, and incongruent distributions overlap heavily, with incongruent trials tending slower overall and all conditions showing occasional very slow responses.
Cue by flanker interaction Two line charts compare mean RT across cue conditions (No Cue, Center, Double, Spatial) for human participants and Bot v7.3, with separate lines for congruent, neutral, and incongruent flankers. In humans, RT drops steadily as cues become more informative, and incongruent trials are consistently slowest. The bot shows the same general pattern (faster responses with better cues and slower incongruent trials) but with a noisier profile, especially at the spatial cue, where congruent trials become much faster while neutral and incongruent trials remain higher.
7/n
The bot got surprisingly close on a lot of the usual benchmarks: QQ normality, skewness, & ANT network scores.
But its clearest tells were (a) bimodal RT distribution caused by intermittent pixel-detection fallback, & (b) an atypical cue x flanker interaction
6/n
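The benchmark markers above (QQ correlation, skewness, SD–mean slope, mean autocorrelation) could be computed from a raw RT vector roughly like this. A sketch only: the function name, block count, and lag window are our choices, not the paper’s analysis code.

```python
import numpy as np
from statistics import NormalDist

def rt_markers(rts):
    """Four illustrative RT markers for one participant or bot.

    `rts`: 1-D array of reaction times (ms). Sketch of the general
    idea only; not the authors' analysis code.
    """
    rts = np.asarray(rts, dtype=float)
    n = len(rts)
    centered = rts - rts.mean()

    # (A) QQ correlation: sorted RTs vs. standard normal quantiles.
    quantiles = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
    qq_corr = np.corrcoef(np.sort(rts), quantiles)[0, 1]

    # (B) Skewness: human RTs are typically right-skewed.
    skew = np.mean(centered ** 3) / rts.std() ** 3

    # (C) SD-mean slope: across blocks, SD tends to rise with mean RT.
    blocks = np.array_split(rts, 10)
    slope = np.polyfit([b.mean() for b in blocks],
                       [b.std(ddof=1) for b in blocks], 1)[0]

    # (D) Mean autocorrelation over lags 1-10: sustained positive values
    # signal more serial dependence than typical human data.
    denom = np.dot(centered, centered)
    mean_ac = np.mean([np.dot(centered[:-k], centered[k:]) / denom
                       for k in range(1, 11)])

    return {"qq_corr": qq_corr, "skew": skew,
            "sd_mean_slope": slope, "mean_autocorr": mean_ac}
```

Comparing a bot’s marker values against the human distributions (as in the figures above) is then a matter of checking where each value falls in the human range.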
Answer: quite a lot.
Our bot had to complete an unmodified ANT on a live Pavlovia experiment
Getting there took 7 major code revisions, experiment-specific reverse engineering, pixel-level stimulus detection, & repeated comparison against human output
5/n
We engage this debate by taking a slightly different approach
Rather than ask if summary statistics detect bots after the fact, we ask: what does it actually take to build a bot that can do a real task, live, through a browser, under real timing constraints?
4/n
Although newer work by @cstrauch.bsky.social & colleagues suggests prompt-only agents can already do behavioral tasks
bsky.app/profile/cstr...
3/n
Whereas @achetverikov.bsky.social convincingly argued that unusual RTs are not, by themselves, evidence of bots
bsky.app/profile/ache...
2/n
Our work sits in the middle of the current bot-or-not debate in online behavioral research
Recently, @cstrauch.bsky.social & colleagues raised the alarm about possible AI contamination in RT data
bsky.app/profile/cstr...
Screenshot of a manuscript title page. Title: “An AI agent can complete the Attention Network Test with human-like behavioral signatures: Implications for the bot-or-not debate.” Authors: Richard Huskey, Ziyu Zhao, Douglas A. Parry, and Jacob T. Fisher, with university affiliations listed below. The abstract says an autonomous AI agent completed the Attention Network Test in real time and produced mostly human-like behavioral data. Across seven code revisions, the bot achieved attention network scores within published human norms, 95.8% accuracy, and reaction-time patterns showing positive skew and trial-to-trial autocorrelation. Compared with 796 human participants, the bot fell within the human range on several measures but showed elevated autocorrelation and a bimodal reaction-time distribution due to intermittent detection failures. The paper argues this makes simple bot-vs-human detection harder in online reaction-time studies.
1/n
New preprint with Ziyu Zhao, @dougaparry.bsky.social, & @jacobtfisher.online
Can an AI bot complete a live online reaction-time task & produce data that passes as human?
We built an autonomous bot to take the Attention Network Test (ANT) in real time
Preprint:
doi.org/10.31234/osf...
Oh!!!! I can’t wait for this!
We recently warned of bots in online behavioral research. @achetverikov.bsky.social showed there is no evidence for that in our @joinprolific.bsky.social data - but that doesn't mean we're safe. Agentic AI can do behavioral tasks through prompting alone. Reply & videos: osf.io/3cztr/overview