🧠 ECVP Symposia Spotlight
Featuring:
• Beyond local features: Spatiotemporal structure in perception and neural processing — David Pascucci & Michael H. Herzog
• The rhythmic nature of perception and attention: Evidence, challenges, and open questions — Maëlan Q. Menétrey
#ECVP2026 #VisionScience
🎯 ECVP Symposia Spotlight
Featuring:
• What vision scientists can learn from continuous movement tracking during decision making — Elahe’ Yargholi
• Vision on the Move: From Eye Movements to Visual Encoding (and back) — Antonella Pomè & Alessandro Benedetto
#ECVP2026 #VisionScience
🌀ECVP Symposia Spotlight
Featuring:
• Probing the Visual System: Illusions as Windows into Typical and Atypical Cognition — Erez Freud & Elisabeth Hein
• Visuomotor transforms in prostheses, virtual reality, and teleoperation — Emily Crowe
#ECVP2026 #VisionScience
🔍 ECVP Symposia Spotlight
Featuring:
• Perception as Inference Across Scales — Guido Maiello & Veronica Pisu
• Strategies for searching: better understanding visual foraging — Anna E. Hughes & Jérôme Tagu
#ECVP2026 #VisionScience
🙂 ECVP Symposia Spotlight
Featuring:
• Recent Advances in Face Perception and Identification — Alejandro J. Estudillo & Christel Devue
• Artificial Intelligence as a Window into Material Perception — Masataka Sawayama & Filipp Schmidt
#ECVP2026 #VisionScience
How to join zillions of lexical norms to each word in your language sample the easy way: a quick tutorial and demo reilly-lab.github.io/Jamie_JoinLe...
Science is good. We should fund it.
TAKE-HOME:
- VIVAS measures imagery using visual dimensions
- Color imagery dissociates from structural clarity
- Food shows enhanced color imagery
- Novel, unfamiliar objects elicit consistently weaker imagery
- VIVAS and VVIQ correlate moderately, with striking individual-level dissociations
9/9
These findings suggest that visually anchored and verbally prompted imagery measures capture overlapping but distinct components of imagery, and that anchoring imagery judgments to perceptual dimensions reveals structure that standard self-report tools cannot easily assess.
8/9
Cross-tabulation by percentiles: we split both VVIQ and VIVAS scores into five percentile bands based on each score’s position within the sample: 0-10%, 10-25%, 25-75% (the middle half, i.e., the interquartile range), 75-90%, and 90-100%; tied (equal) scores stay in the same band. A minimal banding sketch follows this post.
VIVAS correlates only moderately with the VVIQ, well below the ceiling set by their respective reliabilities, and shows striking individual-level dissociations: some individuals scoring in the bottom 10% on one measure fall in the top quartile on the other.
7/9
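For anyone wanting to reproduce the banding step, here is a minimal Python sketch under stated assumptions; it is not the authors' analysis code, and the variable names, pandas-based approach, and example data are hypothetical, for illustration only.

```python
import numpy as np
import pandas as pd

# Percentile cut points and band labels from the thread above.
BANDS = [0, 10, 25, 75, 90, 100]
LABELS = ["0-10%", "10-25%", "25-75%", "75-90%", "90-100%"]

def percentile_band(scores):
    """Assign each score to one of five percentile bands.

    Percentile ranks use average ranks, so tied (equal) scores get the
    same rank and therefore land in the same band (ties kept together).
    """
    pct = pd.Series(scores).rank(method="average", pct=True) * 100
    return pd.cut(pct, bins=BANDS, labels=LABELS, include_lowest=True)

# Hypothetical example scores, only to demonstrate the cross-tabulation step.
rng = np.random.default_rng(0)
vviq_scores = rng.normal(50, 10, size=205)
vivas_scores = 0.5 * vviq_scores + rng.normal(25, 10, size=205)

print(pd.crosstab(percentile_band(vviq_scores),
                  percentile_band(vivas_scores),
                  rownames=["VVIQ band"], colnames=["VIVAS band"]))
```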
We administered VIVAS alongside the VVIQ to a probability-based sample drawn from the National Registry of Iceland (N = 205 after exclusions). Complete aphantasia was observed in 4% of participants on VIVAS and 2% on VVIQ, and complete hyperphantasia in 9% and 4%, respectively.
6/9
VIVAS color saturation distributions show that food imagery is particularly colorful, while imagery for novel objects is less vivid than for other categories.
Imagery varied systematically by object category in ways that mirror known functional and neural specializations: novel objects elicited uniformly weaker imagery, which underscores the central role of familiarity, while food objects showed selectively enhanced color imagery.
5/9
VIVAS dimensions: opacity, sharpness, and color saturation
We developed the Visual Imagery Visually Anchored Scale (VIVAS), in which people reconstruct mental images of objects from multiple semantic categories along perceptually anchored dimensions. Individual differences in visual imagery partially dissociated into structural clarity and chromaticity.
4/9
This may partly be due to the overreliance on verbal self-report of imagery strength. The most widely used instrument, the Vividness of Visual Imagery Questionnaire (VVIQ), collapses the richness of imagery experience into a single vividness dimension and provides limited perceptual anchoring.
3/9
Visual mental imagery varies widely across individuals, from aphantasia to hyperphantasia, and may play a significant role in cognition, emotion, and mental health. Yet our understanding of imagery's structure remains limited, and its relationship to other constructs is still poorly characterized.
2/9
Photo of an eye by Beel coor on Unsplash: https://unsplash.com/photos/brown-and-black-eye-illustration-1AIHIjtuNCI
🚨Preprint Alert🚨and Thread 🧵
The Visual Imagery Visually Anchored Scale (VIVAS) reveals dissociable perceptual dimensions and category-specific structure: osf.io/preprints/ps...
Authors: @heidasigurdar.bsky.social, Árnason, Mäekalle, Vésteinsdóttir, @arnig.bsky.social
1/9
New Paper! w. Léa Entzmann and Árni Kristjánsson
TL;DR: Endpoint deviations are determined by the difference between the current target and the previous distractor, but do not reflect the shape of the previous distractor distribution. Saccadic latencies do reflect these distributions.
jov.arvojournals.org/Article.aspx...
The newly minted Dr. Dr. (medical and now Ph.D.) @antonlukashevich.bsky.social is pictured here with his proud advisor @heidasigurdar.bsky.social. Not pictured are the newly minted Ph.D.'s advisor @utochkin.bsky.social and doctoral committee member @shansmann-roth.bsky.social. Congratulations! 🥳🥳🥳🥳
Open tenure-track position at the University of Akureyri, Iceland.
Question for my fellow vision researchers: anyone know of work where people looked at dynamic ensemble perception for lots of boxes on Zoom? Think "what's the average emotion of these people on a group call?"
Paper alert 💥 This project took considerable effort; great to see the first paper out! @bpitchford.bsky.social Hélène Devillez @heidasigurdar.bsky.social #dyslexia #visionscience #neuroskyence. Free to read until November 2nd: authors.elsevier.com/c/1lmUB6TBG5...
Did you know @icevislab.bsky.social has curated and shared a list of animal visual stimuli?
Love finding lists like this when searching for visual stimuli for experiments 🦜 🐅 🐕
Thank you for sharing!
#reproducible #science #openscience #replication #open
👀 IVL Wednesday ECVP 👀
12:15PM Talk: Atrium Maximum: Choose your own prosopagnosia index
3:30PM Poster: The role of memory load and inter-item similarity on serial dependence
3:30PM Poster: A conceptual replication of target selection during conjunction foraging
#ECVP2025 #ECVP @ecvp.bsky.social
Tuesday, after lunch at #ECVP
Symposium Session 8 – Active vision in embodied interaction
14.30 – 15.30 (Audimax)
Probabilistic attention templates guide visual selection
Árni Kristjánsson
#ecvp2025
👀 IVL Tuesday ECVP 👀
10 AM Poster: No Attention - No Ensembles
10 AM Poster: No Evidence for Enhanced Sensory Imagery in Synaesthetes using Psi-Q Assessment
12 PM Talk: Visual Search & Foraging HS 19: Object Discrimination is an Independent Predictor of Reading
#ecvp2025 #ecvp @ecvp.bsky.social
👀 IVL Monday ECVP 👀
10 AM Poster: Foraging for Biological Motion
11:30 AM Talk: Development & Aging I RW 1: The Development of High-Level Vision
4 PM Poster: The Role of Priming and Distractor Suppression in Ensemble Perception
#visionscience #ecvp2025 #ecvp @ecvp.bsky.social
The Icelandic Vision Lab in toto is coming to #ECVP2025 and will be reunited with prior lab members, honorary lab members, and lab friends from all around the world. Looking forward to lots of science and camaraderie @ecvp.bsky.social #ECVP #visionscience
Interested in starting EEG Research? Join Our Hands-On Workshop This December!
✍️ Before we finalise plans, we’d love to hear from you. Please fill out our short Expression of Interest form — even if you can't attend, your feedback will help shape future events.
run.pavlovia.org/pavlovia/sur...
New paper by @icevislab.bsky.social graduate Dr. Aleksei Iakovlev w. @khvo100v.bsky.social , @utochkin.bsky.social and Árni Kristjánsson.
link.springer.com/article/10.3...
JOB ALERT: Computational Cognitive Neuroscience Postdoc position in Osaka, Japan! Possible start in October 2025 (contact me ASAP), or from April 2026. PLEASE REPOST! #postdocjobs #neuroskyence #neuroscience #psychscisky #compneurosky #neurojobs 1/