Or even way before that: when the experiments are being designed and the analyses are being decided. These days, delegating experimental design and analysis to AI is not uncommon.
Posts by Nima Zargarnezhad
Don't forget to tune in to Neuroscience Research Day on February 19th and 20th! 🧠
Explore cutting-edge neuroscience research from Western University students & London researchers. Free event open to all curious minds.
See you there!
songsuwo.ca/nrd2026
Brainhack Western is coming! This fun event, March 27-29, includes talks, collaboration, workshops, and more. For more details or to register, visit: brainhackwestern.github.io
🚨🚨Applications for the OHBM Mentoring Program have opened! 🚨🚨
Use this chance to connect with mentors and mentees from all around the world.
Register: docs.google.com/forms/d/e/1F...
More details: www.ohbmtrainees.com/mentoring-pr...
Excited to present my recent work at the International Conference on Auditory Cortex ( #ICAC2025 ) this Tuesday afternoon!
If you're interested in fMRI naturalistic paradigms, homotopic coupling, intersubject synchrony, or stimulus-driven dynamic connectivity, come find me at Poster Session (187).
#SoundScapes #Ambisonics #VirtualAuditoryEnvironments #Perception #SpatialHearing #OpenScience #JASA #SpecialIssue #FirstFirstAuthor
the AudioDome Python module and our Head-And-Torso Simulator recordings used in estimating localization cues publicly available.
🖥️ GitHub – AudioDome Module & Tutorial: github.com/NimaZN/Audio...
🔓 HATS data on OSF: osf.io/k9684
/5-end
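For anyone curious what "localization cues" from HATS (Head-And-Torso Simulator) recordings look like in practice, here is a toy sketch of the two classic binaural cues, interaural time difference (ITD) and interaural level difference (ILD). This is an illustrative example on synthetic signals, not code from the paper or the AudioDome module; the function names and the cross-correlation approach are my own minimal choices:

```python
import numpy as np

def itd_samples(left, right):
    """Estimate interaural time difference (in samples) via cross-correlation.

    A negative lag means the left-ear signal leads the right-ear signal.
    """
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

def ild_db(left, right):
    """Interaural level difference in dB, from the RMS ratio of the two ears."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms(left) / rms(right))

# Synthetic check: the right channel is the left channel delayed by 20 samples,
# so the left ear "leads" and the estimated lag should come out as -20.
rng = np.random.default_rng(0)
sig = rng.standard_normal(4410)
left = sig
right = np.concatenate([np.zeros(20), sig[:-20]])
print(itd_samples(left, right))  # -20
```

With real HATS recordings you would run this on band-limited ear signals, since ITD dominates localization at low frequencies and ILD at high frequencies.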
This project was part of my MSc thesis at @westernu.bsky.social, aiming to clarify the strengths and limitations of the AudioDome, laying the foundation for future experiments at @westernuwin.bsky.social. To support continued work in this area, we’ve made both
/4
We found that low-frequency sounds are reproduced with high focality, accurately simulating location. However, spectral distortions in higher frequencies disrupt elevation cues, leading to misperceived sound source position. /3
In this study we investigated how well the ninth-order ambisonics algorithm, implemented via the AudioDome (a loudspeaker array at the Center for Brain and Mind), can reproduce spatial soundscapes for human perception research. /2
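For readers new to ambisonics, here is a minimal first-order sketch of the encode/decode idea (a mono source panned by spherical-harmonic weights, then decoded to a loudspeaker ring). The AudioDome uses ninth-order ambisonics and its own decoder, so this toy example is only an assumption-laden illustration of the principle, not the actual implementation:

```python
import numpy as np

def encode_foa(signal, azimuth_rad):
    """Encode a mono signal into horizontal first-order ambisonics (W, X, Y)."""
    w = signal / np.sqrt(2)           # omnidirectional component
    x = signal * np.cos(azimuth_rad)  # front-back component
    y = signal * np.sin(azimuth_rad)  # left-right component
    return np.stack([w, x, y])

def decode_foa(bformat, speaker_azimuths_rad):
    """Basic mode-matching decode to a horizontal ring of loudspeakers."""
    w, x, y = bformat
    n = len(speaker_azimuths_rad)
    feeds = [(np.sqrt(2) * w + np.cos(az) * x + np.sin(az) * y) / n
             for az in speaker_azimuths_rad]
    return np.stack(feeds)

# A 1 kHz tone placed at 45 degrees, decoded to 8 equally spaced speakers:
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
bfmt = encode_foa(tone, np.deg2rad(45))
feeds = decode_foa(bfmt, np.deg2rad(np.arange(0, 360, 45)))
# The speaker at 45 degrees (index 1) receives the strongest feed.
```

Higher orders add more spherical-harmonic terms, which sharpens the reproduced source (higher focality) but, as the paper's results suggest, the benefit is frequency-dependent in a physical loudspeaker array.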
📢#PublicationAlert
Excited to share that my first first-author publication, coauthored with @bmesquit.bsky.social, Dr. Ewan Macpherson, and @ingridjohnsrude.bsky.social, is now officially published in JASA as part of a special issue on Advances in Soundscape
📜 Read the paper: doi.org/10.1121/10.0...
🧵/1
Turing Sour Snakes on Bluesky!
@zrmor.bsky.social @thoughtdrifter.bsky.social
We will be presenting our findings on the advantage of biological architectures in the computational cost-performance trade-off for reinforcement learning agents.
Our team (Turing Sour Snakes🐍) is presenting in Monday's second session (5 p.m. UTC). Please join us if you're interested!