
Posts by Icelandic Vision Lab

🧠 ECVP Symposia Spotlight

Featuring:
• Beyond local features: Spatiotemporal structure in perception and neural processing — David Pascucci & Michael H. Herzog

• The rhythmic nature of perception and attention: Evidence, challenges, and open questions — Maëlan Q. Menétrey

#ECVP2026 #VisionScience

6 days ago 3 3 0 0

🎯 ECVP Symposia Spotlight

Featuring:

• What vision scientists can learn from continuous movement tracking during decision making — Elahe’ Yargholi

• Vision on the Move: From Eye Movements to Visual Encoding (and back) — Antonella Pomè & Alessandro Benedetto

#ECVP2026 #VisionScience

6 days ago 1 2 0 0

🌀 ECVP Symposia Spotlight

Featuring:

• Probing the Visual System: Illusions as Windows into Typical and Atypical Cognition — Erez Freud & Elisabeth Hein

• Visuomotor transforms in prostheses, virtual reality, and teleoperation — Emily Crowe

#ECVP2026 #VisionScience

6 days ago 0 2 0 0

🔍 ECVP Symposia Spotlight

Featuring:

• Perception as Inference Across Scales — Guido Maiello & Veronica Pisu

• Strategies for searching: better understanding visual foraging — Anna E. Hughes & Jérôme Tagu

#ECVP2026 #VisionScience

6 days ago 1 3 0 0

🙂 ECVP Symposia Spotlight

Featuring:

• Recent Advances in Face Perception and Identification — Alejandro J. Estudillo & Christel Devue

• Artificial Intelligence as a Window into Material Perception — Masataka Sawayama & Filipp Schmidt

#ECVP2026 #VisionScience

6 days ago 0 2 0 0
Join Lexical Norms to Your Word List

Your word list should ideally be in tidy format: one word per row, in a single column of a dataframe. Your word vector should be a chr, NOT a factor. Set up your word list like this; you can also split/unlist a language sample to get it into this format.
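The linked tutorial is in R; the same tidy-then-join workflow can be sketched in Python with pandas. The norm table and its column names here are made up for illustration, not taken from any real norm set:

```python
import pandas as pd

# Tidy word list: one word per row, in a single string column (not categorical).
sample = "the cat sat on the mat"
words = pd.DataFrame({"word": sample.split()})  # split/unlist equivalent

# Hypothetical norm table for illustration; real norm sets also key on a word column.
norms = pd.DataFrame({
    "word": ["cat", "mat", "the"],
    "frequency": [52.3, 4.1, 69971.0],
    "concreteness": [4.9, 4.5, 1.4],
})

# Left join keeps every token in the sample; words without norms get NaN.
merged = words.merge(norms, on="word", how="left")
print(merged)
```

A left join (rather than an inner join) preserves the full language sample, so you can see which words lack norm coverage.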

How to join zillions of lexical norms to each word in your language sample the easy way: a quick tutorial and demo reilly-lab.github.io/Jamie_JoinLe...

1 week ago 17 9 1 0

Science is good. We should fund it.

1 week ago 12637 2530 135 65

TAKE-HOME:

- VIVAS measures imagery using visual dimensions
- Color imagery dissociates from structural clarity
- Food shows enhanced color imagery
- Novel, unfamiliar objects elicit consistently weaker imagery
- VIVAS and VVIQ correlate moderately, with striking individual-level dissociations

9/9

1 week ago 2 0 0 0

These findings suggest that visually anchored and verbally prompted imagery measures capture overlapping but distinct components of imagery, and that anchoring imagery judgments to perceptual dimensions reveals structure that standard self-report tools cannot easily assess.

8/9

1 week ago 1 0 1 0
Cross-tabulation by percentiles: we split both VVIQ and VIVAS into five percentile bands based on each score’s position within the sample: 0-10%, 10-25%, 25-75% (the middle half, i.e. the interquartile range, IQR), 75-90%, and 90-100%; tied scores stay in the same band.
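The banding rule above can be sketched in Python with NumPy. The five cutoffs follow the post; the example scores are invented, and banding by value guarantees that tied scores share a band:

```python
import numpy as np

def percentile_bands(scores, cuts=(10, 25, 75, 90)):
    """Assign each score to one of five bands: 0-10%, 10-25%,
    25-75% (middle half / IQR), 75-90%, and 90-100%.
    Banding by value means equal scores always land in the same band."""
    scores = np.asarray(scores, dtype=float)
    edges = np.percentile(scores, cuts)
    # side="right": a score exactly on a cutoff goes to the upper band,
    # and all tied scores get the same band index (0..4).
    return np.searchsorted(edges, scores, side="right")

scores = [3, 7, 7, 12, 18, 21, 25, 30, 30, 42]
bands = percentile_bands(scores)
```

Here the two 7s and the two 30s each receive identical band labels, which is the tie-handling the post describes.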

VIVAS correlates only moderately with the VVIQ, well below the ceiling set by their respective reliabilities, and shows striking individual-level dissociations: some individuals scoring in the bottom 10% on one measure fall in the top quartile on the other.
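The "ceiling set by their respective reliabilities" refers to the classical correction-for-attenuation bound from test theory: the observable correlation between two measures cannot exceed the geometric mean of their reliabilities. A minimal sketch with hypothetical reliability values (not figures from the preprint):

```python
from math import sqrt

def attenuation_ceiling(rel_x, rel_y):
    """Classical test theory: the maximum correlation observable between
    two measures is bounded by sqrt(rel_x * rel_y), the geometric mean
    of their reliabilities."""
    return sqrt(rel_x * rel_y)

# Hypothetical reliabilities, for illustration only (not from the preprint):
ceiling = attenuation_ceiling(0.90, 0.85)
# An observed correlation well below this ceiling (~0.87) indicates the two
# scales share variance only partially, beyond what measurement error explains.
```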

7/9

1 week ago 0 0 1 0

We administered VIVAS alongside the VVIQ to a probability-based sample drawn from the National Registry of Iceland (N = 205 after exclusions). Complete aphantasia was observed in 4% of participants on VIVAS and 2% on VVIQ, with complete hyperphantasia in 9% and 4% respectively.

6/9

1 week ago 0 0 1 0
VIVAS color saturation distributions show that food imagery is particularly colorful, while imagery for novel objects is less vivid than for other categories.

Imagery varied systematically by object category in ways that mirror known functional and neural specializations: novel objects elicited uniformly weaker imagery, which underscores the central role of familiarity, while food objects showed selectively enhanced color imagery.

5/9

1 week ago 1 0 1 0
VIVAS dimensions: Opacity, sharpness, color saturation

We developed the Visual Imagery Visually Anchored Scale (VIVAS) where people reconstruct mental images of objects from multiple semantic categories using perceptually anchored dimensions. Individual differences in visual imagery partially dissociated into structural clarity and chromaticity.

4/9

1 week ago 1 0 1 0

This may partly be due to the overreliance on verbal self-report of imagery strength. The most widely used instrument, the Vividness of Visual Imagery Questionnaire (VVIQ), collapses the richness of imagery experience into a single vividness dimension and provides limited perceptual anchoring.

3/9

1 week ago 0 0 1 0

Visual mental imagery varies widely across individuals, from aphantasia to hyperphantasia, and may play a significant role in cognition, emotion, and mental health. Yet our understanding of imagery's structure remains limited, and its relationship to other constructs remains murky.

2/9

1 week ago 2 0 1 0
Photo of an eye by Beel coor on Unsplash: https://unsplash.com/photos/brown-and-black-eye-illustration-1AIHIjtuNCI

🚨 Preprint Alert 🚨 and Thread 🧵
The Visual Imagery Visually Anchored Scale (VIVAS) reveals dissociable perceptual dimensions and category-specific structure: osf.io/preprints/ps...

Authors: @heidasigurdar.bsky.social, Árnason, Mäekalle, Vésteinsdóttir, @arnig.bsky.social

1/9

1 week ago 16 8 1 1
Saccade endpoints reflect attentional templates in visual search: Evidence from feature distribution learning | JOV | ARVO Journals

New Paper! w. Léa Entzmann and Árni Kristjánsson

TL;DR: Endpoint deviations are determined by the difference between the current target and the previous distractors, but they do not reflect the shape of the previous distractor distribution. Saccadic latencies do reflect these distributions.

jov.arvojournals.org/Article.aspx...

4 months ago 2 1 1 0

The newly minted Dr. Dr. (medical and now Ph.D.) @antonlukashevich.bsky.social is pictured here with his proud advisor @heidasigurdar.bsky.social -- not pictured are the newly minted Ph.D.'s advisor @utochkin.bsky.social and doctoral committee member @shansmann-roth.bsky.social. Congratulations! 🥳🥳🥳🥳

4 months ago 15 1 0 1

Open tenure track position at the University of Akureyri, Iceland.

5 months ago 4 2 0 1

Question for my fellow vision researchers: anyone know of work where people looked at dynamic ensemble perception for lots of boxes on Zoom? Think "what's the average emotion of these people on a group call?"

6 months ago 10 4 6 0

Paper alert 💥 This project took considerable effort, great to see the first paper out! @bpitchford.bsky.social Hélène Devillez @heidasigurdar.bsky.social #dyslexia #visionscience #neuroskyence Free to read until November 2nd: authors.elsevier.com/c/1lmUB6TBG5...

6 months ago 8 1 0 0
Animals – Icelandic Vision Lab

Did you know @icevislab.bsky.social have curated and shared a list of Animal visual stimuli?

Love finding lists like this when searching for visual stimuli for experiments 🦜 🐅 🐕

Thank you for sharing!

#reproducible #science #openscience #replication #open

7 months ago 13 2 0 0

👀 IVL Wednesday ECVP 👀

12:15PM Talk: Atrium Maximum: Choose your own prosopagnosia index

3:30PM Poster: The role of memory load and inter-item similarity on serial dependence

3:30PM Poster: A conceptual replication of target selection during conjunction foraging
#ECVP2025 #ECVP @ecvp.bsky.social

7 months ago 6 1 0 0

Tuesday after lunch at #ECVP

Symposium Session 8 – Active vision in embodied interaction

14.30 – 15.30 (Audimax)

Probabilistic attention templates guide visual selection
Árni Kristjánsson

#ecvp2025

7 months ago 3 1 0 0

👀 IVL Tuesday ECVP 👀

10 AM Poster: No Attention - No Ensembles

10 AM Poster: No Evidence for Enhanced Sensory Imagery in Synaesthetes using Psi-Q Assessment

12 PM Talk: Visual Search & Foraging HS 19: Object Discrimination is an Independent Predictor of Reading
#ecvp2025 #ecvp @ecvp.bsky.social

7 months ago 2 1 0 0

👀 IVL Monday ECVP 👀

10 AM Poster: Foraging for Biological Motion

11:30 AM Talk: Development & Aging I RW 1: The Development of High-Level Vision

4 PM Poster: The Role of Priming and Distractor Suppression in Ensemble Perception

#visionscience #ecvp2025 #ecvp @ecvp.bsky.social

7 months ago 2 2 0 0

The Icelandic Vision Lab in toto is coming to #ECVP2025 and will be reunited with prior lab members, honorary lab members, and lab friends from all around the world. Looking forward to lots of science and camaraderie @ecvp.bsky.social #ECVP #visionscience

7 months ago 16 0 0 0

Interested in starting EEG Research? Join Our Hands-On Workshop This December!

✍️ Before we finalise plans, we’d love to hear from you. Please fill out our short Expression of Interest form — even if you can't attend, your feedback will help shape future events.

run.pavlovia.org/pavlovia/sur...

8 months ago 7 5 0 0
Amplification from saliency affects explicit but not implicit ensemble representations - Attention, Perception, & Psychophysics

New paper by @icevislab.bsky.social graduate Dr. Aleksei Iakovlev w. @khvo100v.bsky.social , @utochkin.bsky.social and Árni Kristjánsson.

link.springer.com/article/10.3...

8 months ago 2 2 0 0

JOB ALERT: Computational Cognitive Neuroscience Postdoc position in Osaka, Japan! Possible start in October 2025 (contact me ASAP), or from April 2026. PLEASE REPOST! #postdocjobs #neuroskyence #neuroscience #psychscisky #compneurosky #neurojobs 1/

9 months ago 85 75 1 3