@noamchompers.bsky.social is right that we are right about him being right about perceived animacy. (more on this project coming soon!)
Posts by Tal Boger
Several entries from a references section of a manuscript. The title of the first entry is "The relationship between aesthetic preference and visual complexity in absract art".
Screenshot of the original paper, whose title is in fact "The relationship between aesthetic preference and visual complexity in absract art"
Conundrum of the day:
The references section for your accepted-in-principle manuscript has an obvious typo in it ('absract' should clearly be 'abstract'). But no, the typo is in the original paper title! WHAT DO YOU DO
Is core knowledge actually core *perception*? In a forthcoming piece in BBS, @shariliu.bsky.social, Lisa Feigenson, & I comment on @daweibai.bsky.social et al.’s target article. osf.io/preprints/psyarxiv/vnbep
What does it mean for culture to “shape” cognition?
In our new TiCS paper, @benjaminpitt.bsky.social & I offer a typology of four possible effects: culture can Privilege one cognitive process over others, Prune out disfavored ones, Produce new ones, or have no effect.
www.cell.com/trends/cogni...
🥁Now announcing the winner of the 2026 Stanton Prize:
Congratulations, Melissa Kibbe @levelsof.bsky.social!
⭐️ ⭐️ ⭐️ ⭐️ ⭐️
This honor will be celebrated at the upcoming meeting of the SPP
Algorithmic complexity for dice randomness and 3x3 grid randomness (z-scored)
Results from a permutation test computing 10,000 iterations of shuffled mean absolute error (within-person, across-tasks) vs. the observed mean absolute error.
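The permutation test described above can be sketched in a few lines. This is a minimal illustration, not the authors' analysis code: the data here are randomly generated stand-ins for per-person complexity scores on the three tasks, and the sample size and score distribution are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one algorithmic-complexity score per person per task
# (coin, dice, grid), z-scored within task. 200 people is an arbitrary choice.
n_people = 200
scores = rng.standard_normal((n_people, 3))

def mean_abs_error(x):
    # Within-person, across-task mean absolute error: how far each person's
    # per-task scores deviate from that person's own mean score.
    return np.mean(np.abs(x - x.mean(axis=1, keepdims=True)))

observed = mean_abs_error(scores)

# Null distribution: shuffle each task's column independently, which breaks
# the within-person pairing while preserving each task's marginal distribution.
n_iter = 10_000
null = np.empty(n_iter)
for i in range(n_iter):
    shuffled = np.column_stack(
        [rng.permutation(scores[:, j]) for j in range(3)]
    )
    null[i] = mean_abs_error(shuffled)

# Cross-task stability shows up as an observed error smaller than
# nearly all shuffled errors (small p).
p = (np.sum(null <= observed) + 1) / (n_iter + 1)
print(f"observed MAE = {observed:.3f}, p = {p:.4f}")
```

With real data from people whose randomization style is consistent across tasks, the observed mean absolute error sits far in the left tail of the shuffled distribution; with the random placeholder data above, it will not.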
Their data only include each sequence’s algorithmic complexity score (not the raw sequences), but even so, the same patterns emerge. Pairwise correlations are significant (especially in dice + grid), and a permutation test shows just how strong this stability is.
Our work used longer (250-trial) lab tasks with a smaller sample. But the pudding.cool article collects data from tons of people, and the sequences it collects are extremely short (10-12 items), making it a super strong test for this stability.
Beyond being a great read, the article collected within-subject randomization data for over 52,000 people across these 3 tasks. Last year, I (+ @samiyousif.bsky.social and others) put out work demonstrating that random behavior is stable across tasks and time. talboger.github.io/files/Boger_...
A few years ago, my favorite website (@puddingviz.bsky.social) put out this great piece analyzing a study of how randomizing ability changes with age. It includes demos where readers produce sequences of random coin flips, dice rolls, and locations in a 3x3 grid. pudding.cool/2022/04/rand...
I am very excited to announce that over the holidays, my first ever paper (w/ @samiyousif.bsky.social) was published in Cognitive Science! Here, we describe a new illusion of *number*: The Crowd Size Illusion!
onlinelibrary.wiley.com/doi/10.1111/...
Well this is exciting!
The Department of Psychological & Brain Sciences at Johns Hopkins University (@jhu.edu) invites applications for a full-time tenured or tenure-track faculty member in Cognitive Psychology, in any area and at any rank!
Application + more info: apply.interfolio.com/178146
Congratulations (and thank you) to @talboger.bsky.social, who lectured in front of nearly 500 @jhu.edu undergraduates today on the psychology of music! They didn’t see it coming, and then they loved it :)
(from lapidow & @ebonawitz.bsky.social's awesome 2023 explore-exploit paper)
methods from lapidow & bonawitz, 2023. children are "dropped"
a falling child
can't believe the IRB approved this part — hope the children are ok!
What a lovely 'spotlight' of @talboger.bsky.social's work on style perception! Written by @aennebrielmann.bsky.social in @cp-trendscognsci.bsky.social.
See Aenne's paper below, as well as Tal's original work here: www.nature.com/articles/s41...
When a butterfly becomes a bear, perception takes center stage.
Research from @talboger.bsky.social, @chazfirestone.bsky.social and the Perception & Mind Lab.
Out today!
www.cell.com/current-biol...
important question for dev people: when reporting demographics for a paper involving both kids and adults, we want some consistency in how we report that information. so do you call the kids "men" and "women", or do you call the adults "boys" and "girls"?
sami is such a creative, thoughtful, and fun mentor. anyone who gets to work with him is so lucky!
Visual adaptation is viewed as a test of whether a feature is represented by the visual system.
In a new paper, Sam Clarke and I push the limits of this test. We show spatially selective, putatively "visual" adaptation to a clearly non-visual dimension: Value!
www.sciencedirect.com/science/arti...
It's true: This is the first project from our lab that has a "Merch" page!
Get yours @ www.perceptionresearch.org/anagrams/mer...
The present work thus serves as a ‘case study’ of sorts. It yields concrete discoveries about real-world size, and it also validates a broadly applicable tool for psychology and neuroscience. We hope it catches on!
Though we manipulated real-world size, you could generate anagrams of happy faces and sad faces, tools and non-tools, or animate and inanimate objects, overcoming low-level confounds associated with such stimuli. Our approach is perfectly general.
Overall, our work confronts the longstanding challenge of disentangling high-level properties from their lower-level covariates. We found that, once you do so, most (but not all) of the relevant effects remain.
(Never fear, though: As we say in our paper, that last result is consistent with the original work, which suggested that mid-level features — the sort preserved in ‘texform’ stimuli — may well explain these search advantages.)
whereas previous work shows efficient visual search for real-world size, we did not find a similar effect with anagrams. our study included a successful replication of these previous findings with ordinary objects (i.e., non-anagram images).
Finally, visual search. Previous work shows targets are easier to find when they differ from distractors in their real-world size. However, in our experiments with anagrams, this was not the case (even though we easily replicated this effect with ordinary, non-anagram images).
people prefer real-world large objects displayed larger than real-world small objects, even with visual anagrams.
Next, aesthetic preferences. People think real-world large objects look better when displayed large, and vice versa for small objects. Our experiments show that this is true with anagrams too!
results from the real-world size Stroop effect with anagrams. performance is better when displayed size is congruent with real-world size.
First, the “real-world size Stroop effect”. If you have to say which of two images is larger (on the screen, not in real life), it’s easier if displayed size is congruent with real-world size. We found this to be true even when the images were perfect anagrams of one another!
Then, we placed these images in classic experiments on real-world size, to see if observed effects arise even under such highly controlled conditions.
(Spoiler: Most of these effects *did* arise with anagrams, confirming that real-world size per se drives many of these effects!)
anagrams we generated, where rotating the object changes its real-world size.
We generated images using this technique (see examples). Each pair differs in real-world size but is otherwise identical* in lower-level features, because the two are the same image down to the last pixel.
(*avg orientation, aspect-ratio, etc, may still vary. ask me about this!)