Posts by Adam Morris

If internal inattentional blindness is real, people could plausibly learn to see more clearly into their own mind through attentional training, which would have huge implications both for our scientific understanding of the mind and for interventions to improve well-being.
In this paper, I ground the idea in theories of perception and internal attention, synthesize existing indirect evidence for it, and chart a roadmap for how to test it directly.
This idea of "internal inattentional blindness" is implicitly assumed in much of clinical psychology and public discourse, but it has not been given precise theoretical or empirical treatment.
In other words, many high-level mental processes may not be *intrinsically* or permanently unconscious. Rather, people may be experiencing an internal analogue of inattentional blindness – they fail to see what's going on in their mind because they're not paying attention.
A widespread view in psychology is that most cognitive processes are unconscious. In a new paper, I argue that many of these processes may evade consciousness for the same reason the "invisible gorilla" did: People fail to pay attention to them.
🧵
direct.mit.edu/opmi/article...
These results challenge the idea that most decision processes are unconscious, and instead suggest that – at least in simple contexts – people are often aware of how they’re choosing. Next, we’re testing awareness in more realistic contexts, and how that awareness can be improved. Stay tuned!
On the other hand, awareness was inconsistent and varied enormously across participants, suggesting that some people may be better at introspecting on their choice processes than others. Awareness was not predicted by participants’ self-reported confidence or self-reported introspective ability.
We also recruited “observers” who were matched with the original “actor” participants, shown their choices, and asked to predict the actors’ choice process. Actors were more accurate than observers, suggesting that their accuracy came from some kind of first-person introspection.
Before the study, we asked decision scientists from SJDM to predict participants’ accuracy. Participants were *much* more accurate than these experts predicted; experts thought people would only show an r of 0.44. (Thanks to everyone at @sjdm-tweets.bsky.social who participated!)
The main result: Participants’ self-reports were often highly correlated with their actual choice processes. For instance, in one study, participants had an average correlation of 0.8 between their self-reported attribute weights and the best-fit weights from the computational model.
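The accuracy metric above is a per-participant correlation between self-reported and best-fit weights. As a minimal illustration with made-up numbers (not data from the paper), such a correlation can be computed as:

```python
# Hypothetical numbers, for illustration only: one participant's
# self-reported attribute weights vs. weights recovered by a model fit.
self_reported = [0.50, 0.30, 0.15, 0.05]
model_fit     = [0.45, 0.35, 0.12, 0.08]

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(self_reported, model_fit), 2))  # → 0.97
```

A high r here means the participant's report of *how much each attribute mattered* tracks what the fitted model says actually drove their choices.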
We then fit computational models to their choices to identify their actual attribute weights & choice strategy. Critically, our models accounted for participants’ diverse choice strategies (e.g., rather than assume everyone computed full expected values, we accounted for people using heuristics).
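The model-fitting step above can be sketched as follows. This is a minimal, hypothetical illustration — a weighted-sum utility with a logistic choice rule, fit by gradient ascent on simulated data — not the actual models from the paper, and all numbers are invented:

```python
import math
import random

# Hypothetical sketch, NOT the paper's actual model: recover a simulated
# participant's attribute weights from binary choices, assuming a
# weighted-sum utility and a logistic (softmax) choice rule.
random.seed(0)

TRUE_WEIGHTS = [0.7, 0.2, 0.1]   # e.g., size, kitchen quality, commute
SLOPE = 2.0                      # choice noise (inverse temperature)

def utility(option, weights):
    return sum(w * a for w, a in zip(weights, option))

# Simulate 2000 choices between random pairs of multi-attribute options.
trials = []
for _ in range(2000):
    a = [random.gauss(0, 1) for _ in TRUE_WEIGHTS]
    b = [random.gauss(0, 1) for _ in TRUE_WEIGHTS]
    u_diff = utility(a, TRUE_WEIGHTS) - utility(b, TRUE_WEIGHTS)
    p_a = 1 / (1 + math.exp(-SLOPE * u_diff))
    trials.append((a, b, 1 if random.random() < p_a else 0))

# Fit weights (the slope is absorbed into their scale) by gradient
# ascent on the logistic log-likelihood.
w = [0.0] * len(TRUE_WEIGHTS)
for _ in range(300):
    grad = [0.0] * len(w)
    for a, b, chose_a in trials:
        diff = [ai - bi for ai, bi in zip(a, b)]
        p = 1 / (1 + math.exp(-utility(diff, w)))
        for j, d in enumerate(diff):
            grad[j] += (chose_a - p) * d
    w = [wj + gj / len(trials) for wj, gj in zip(w, grad)]

# Rescale to sum to 1 so the fit is comparable to TRUE_WEIGHTS.
fitted = [wj / sum(w) for wj in w]
print("true:  ", TRUE_WEIGHTS)
print("fitted:", [round(wj, 2) for wj in fitted])
```

The recovered weights can then be compared with a participant's self-reported weights; handling heuristic strategies (as in the paper) would require richer models than this single weighted-sum rule.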
In our studies, participants made value-guided choices (e.g., between homes to rent) that varied on many attributes at once (size, kitchen quality, etc.), and then reported how they think they made those choices: how much weight they placed on each attribute, and which choice strategies they used.
Are we “strangers to ourselves”? Classic theories say people have limited insight into how they decide. Our new paper at @natcomms.nature.com challenges this view. With @rcarl.bsky.social, @hedykober.bsky.social, and @mjcrockett.bsky.social
www.nature.com/articles/s41...
🧵
A common view in behavioral science is that people cannot introspect directly on the mental processes underlying their choices. Here, in a new preprint, we provide evidence that they can.
osf.io/preprints/ps...
Work with Ryan Carlson, Hedy Kober, and @mjcrockett.bsky.social