New paper in Child Development!
When we enter others' homes, we learn about them from the placement of their belongings. This requires integrating multiple social factors (social context, preferences). We find 6+yo succeed at this integration & 'read the room' in this way!
academic.oup.com/chidev/artic...
Beautiful use of the BabyView dataset to train a visual learning model!
I'm hiring a new lab manager for my lab @ UCSD! For more info on the lab, check out our website: lillab.ucsd.edu
Target start date is June 1 (flexible) and application deadline is March 26. Please share with anyone you think might be a good fit!
Apply here: employment.ucsd.edu/laboratory-c...
Talks and poster presentations:
- Rodney Tompkins: Thursday preconference; Friday talk in the symposium on when helping backfires; poster in the Saturday lunch session
- Bill Pepe: poster, Friday evening
- Coxi Jiang: poster, Friday evening
- Tori Hennessy: poster, Saturday evening (to be presented by co-author Angela Liu)
Members of the SoCal lab are presenting their work at #CDS2026! Check out where to find us below
If you’re at CDS this weekend, stop by our posters! I’m sad to miss it this year but so proud of this fantastic group ✨
okay, so, what kind of data *do* we learn from?
at the very least: visual sequences, depth, and self-motion cues. @brialong.bsky.social, @mcxfrank.bsky.social and Linda Smith have done incredible work characterizing these experiences with head-mounted cameras
(thank you Bria for the video! 🙇)
excited to share some recent work!
neural networks trained on multi-view sensory data are the first to match human-level 3D shape perception
we predict human accuracy, error patterns, and reaction time—all zero-shot, no training on experimental data
arxiv.org/abs/2602.17650
1/🧠
Applications accepted on a rolling basis, with the deadline now extended to April 9th: employment.ucsd.edu/laboratory-c...
Come join us in sunny San Diego!!
We are still hiring for our computational focus lab coordinator position!
If you have a computational / software engineering background and are looking for more research experience before applying to graduate school, this would be a great fit.
Read more about our lab here!
www.vislearnlab.org
Our department is hiring an Assistant Teaching Professor!! This is a joint-appointed position with Computational Social Sciences (css.ucsd.edu). It's 75+ degrees F and sunny today, just thought I'd mention. apol-recruit.ucsd.edu/JPF04461
New preprint with @SamJung @timbrady.bsky.social and @violastoermer.bsky.social: osf.io/preprints/ps.... Here we uncover what might be driving the “meaningfulness benefit” in visual working memory. Studies show that real objects are remembered better in VWM tasks than abstract stimuli. But why? 1/
Come join our team!
For more details & official postings: www.vislearnlab.org/join-the-lab
Recent publications & projects at:
www.vislearnlab.org/publications
Feel free to reach out directly with questions!
Position 1: Developmental focus—work closely with postdoc @ajhaskins.bsky.social. Hands-on data collection with kids in our lab & at children's museums.
Position 2: Computational focus—manage eye-tracking studies, video data analysis, & lab software infrastructure.
The Visual Learning Lab is hiring TWO lab coordinators!
Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!)—with flexible summer start dates.
a red building on UPENN's campus photographed during the fall
the Philadelphia skyline, with clear skies and autumn trees
starting fall 2026 i'll be an assistant professor at @upenn.edu 🥳
my lab will develop scalable models/theories of human behavior, focused on memory and perception
currently recruiting PhD students in psychology, neuroscience, & computer science!
reach out if you're interested 😊
screenshots from the nine tasks
Very excited to share the first empirical paper from LEVANTE: we describe the LEVANTE core tasks, a set of nine open-source tasks for measuring learning and development in kids ages 5-12 years.
osf.io/preprints/ps...
🧵
This platform is so important! It allows researchers to do research regardless of their university's prestige! So important for leveling the playing field for many developmental psychologists from under-resourced institutions. Please consider donating - donations will be matched!
This resource has been such a boon to the developmental science community broadly and to my lab specifically. One needn’t look any further than the publications that have come out of the platform to see this (lookit.readthedocs.io/en/develop/p...).
Please consider donating!
Just out in Infancy! "Time to talk", by our great team (Janet Bang, Mónica Munévar, Arlyn Mora, and Anne Fernald), uses day-long recordings of English- and Spanish-speaking families in the US to explore what caregivers are doing when talking the most with their children.
Thrilled to start 2026 as faculty in Psych & CS
@ualberta.bsky.social + Amii.ca Fellow! 🥳 Recruiting students to develop theories of cognition in natural & artificial systems 🤖💭🧠. Find me at #NeurIPS2025 workshops (speaking at coginterp.github.io/neurips2025 & organising @dataonbrainmind.bsky.social)
For our last talk of the workshop, we are honored to have Prof. @brialong.bsky.social speak on “The BabyView Dataset: Learning from and about young children's everyday experiences” #NeurIPS2025
📍Location: Upper Level Room 10.
🗓️ data-brain-mind.github.io
In the running for greatest human accomplishment.
What do kids choose to do when they think that someone will help them? What about when no one will help?
New paper: "Young children strategically adapt to unreliable social partners" - led by Kat Shannon, with @hyogweon.bsky.social and Willem Frankenhuis.
osf.io/preprints/ps...
We’re recruiting a postdoctoral fellow to join our team! 🎉
I’m happy to share that I’ve opened back up the search for this position (it was temporarily closed due to funding uncertainty).
See lab page and doc below for details!
At #COLM2025 and would love to chat all things cogsci, LMs, & interpretability 🍁🥯 I'm also recruiting!
👉 I'm presenting at two workshops (PragLM, Visions) on Fri
👉 Also check out "Language Models Fail to Introspect About Their Knowledge of Language" (presented by @siyuansong.bsky.social Tue 11-1)
A first blogpost from the LEVANTE team, introducing our global project perspective.
A good intro if you're interested in learning more about cross-cultural developmental data collection using LEVANTE.
levante-network.org/global-colla...
🧠 New preprint: Why do deep neural networks predict brain responses so well?
We find a striking dissociation: it’s not shared object recognition. Alignment is driven by sensitivity to texture-like local statistics.
📊 Study: n=57, 624k trials, 5 models. doi.org/10.1101/2025...
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text.
In our #Interspeech2025 paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech.
Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
Josh Tenenbaum's inspiring keynote at #cogsci2025 on growing vs scaling AI, the big questions of cognitive science, and the many open questions for the field.