The assumption is more that people could form/use them to different degrees. Then, based on their behaviour when the environment is misaligned, we can calculate how much they rely on allocentric cues and compare this "weight" across conditions and experiments.
Posts by Meaghan McManus
Of the first environment, yes.
Thanks! I'll check this out.
We tried something like that in Exp. 4 with our baseline. They see the real door, put on the headset and see nothing, then turn and point to the real door. They could do this accurately. After seeing the visual environment, they then pointed to the visual door. So it did not seem to affect anything.
Exactly yes! If you haven't read it yet, I think you would really enjoy the General Discussion section of the paper :)
If you'd like to do that, you should stay tuned for our follow-up :D
It sure does! But it is a bit nuanced. In the case of VR, you might want to keep track of the previous real-world environment (so you don't walk into a wall). Instead, people seem to just rely on the current environment. We suggest that maybe you can hold multiple allocentric representations but only one egocentric representation.
Have you ever been using VR and walked into a wall? Check out our latest paper that investigates this topic!
jov.arvojournals.org/article.aspx...
a visual illusion: what appear to be some strangely shaped pink pieces of cloth are actually the background on which several forks are lying
these are forks
What's the relationship between our feeling of motion (vection) and our perception of motion (distance traveled)? Are they related? Find out in our latest paper! @royalsociety.org doi.org/10.1098/rsos...
Tilting people relative to gravity affects our perception of visual size. But what about haptic size? Check out our latest paper on how the vestibular system might be involved in encoding a representation of space with respect to gravity to find out! rdcu.be/eiUlS #scientificreports