Adaptive VR Gallery: Device-Aware Interaction | Ayaz Ismail posted on the topic | LinkedIn
I’ve been working on an adaptive VR art gallery that explores how interaction can change across spatial computing devices without changing the space itself. The idea is simple: the same virtual gallery adapts how you interact with it based on the device. On Quest-style headsets, interaction is controller-based; on Vision Pro, it shifts to gaze and gesture.

The current prototype is a museum-style gallery with classical artworks, interactive 3D objects, adjustable lighting, and contextual info overlays. It’s still evolving, but the core interactions are working.

The project is built with WebXR, A-Frame, Three.js, and JavaScript, and runs directly in the browser. For this prototype, 3D models were sourced from Sketchfab (glTF), and artwork images were pulled from online references for demonstration. The system is designed to support custom 3D assets created in tools like Blender or Maya and to integrate them into the same environment.

This work connects to my broader interest in adaptive interfaces and accessibility in spatial computing. There’s a live demo on my portfolio site, and Dan Dao and I will be leading a hands-on XR workshop at Dallas College focused on building virtual environments from scratch. Dates will be announced soon.

Would love to hear thoughts from others working in XR.

https://lnkd.in/gip9qdE2 [Demo & Portfolio]

#XR #SpatialComputing #WebXR #InteractionDesign #VirtualReality #AFrame #ThreeJS #AppleVisionPro #MetaQuest3

Dr. Justin H. Lonon Dr. Shawnda Navarro Floyd Dr. Madeline Burillo-Hopkins Ahava Silkey-Jones Brett Dyer LaTanya Ceren, M. Ed. Brianna Cattell Scott Burkey Mary Lee Carter, MS, CHSE Jenny Taylor
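For anyone curious how device-aware interaction like this can be detected in WebXR: a minimal sketch is to inspect the session's input sources and branch on `targetRayMode`. Quest-style controllers report `"tracked-pointer"` with a gamepad attached, while Vision Pro's gaze-and-pinch input surfaces as `"transient-pointer"`. The `pickInteractionMode` helper and mode names below are illustrative assumptions, not the project's actual code.

```javascript
// Hedged sketch: choose an interaction scheme from WebXR input sources.
// pickInteractionMode is a hypothetical helper, not part of the real project.
function pickInteractionMode(inputSources) {
  // Quest-style headsets expose tracked-pointer controllers with a gamepad.
  const hasController = inputSources.some(
    (s) => s.targetRayMode === "tracked-pointer" && s.gamepad
  );
  // Vision Pro's gaze + pinch arrives as transient-pointer input;
  // older gaze-only devices report "gaze".
  const hasGaze = inputSources.some(
    (s) => s.targetRayMode === "transient-pointer" || s.targetRayMode === "gaze"
  );
  if (hasController) return "controller";
  if (hasGaze) return "gaze-gesture";
  return "fallback"; // e.g. mouse/touch in a flat 2D browser view
}

// In a real app, inputSources comes from xrSession.inputSources or the
// "inputsourceschange" event; here we simulate both device profiles.
const questLike = [{ targetRayMode: "tracked-pointer", gamepad: {} }];
const visionProLike = [{ targetRayMode: "transient-pointer", gamepad: null }];

console.log(pickInteractionMode(questLike));     // "controller"
console.log(pickInteractionMode(visionProLike)); // "gaze-gesture"
```

Branching on input capabilities rather than user-agent strings keeps the same scene working unchanged when new devices appear, which fits the post's goal of adapting interaction without adapting the space.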