Doing a PhD, master's, or postdoc in computer vision, NLP, audio & speech processing, or social robotics?
Interested in multimodal deep learning?
Join us in our beautiful mountains for a unique opportunity to dive into state-of-the-art research across all these disciplines!
Spread the word!
Posts by Thomas Hueber
Six keynote speakers and two hands-on sessions with TIAGo!
What else could you wish for?
Meals and accommodation? Yes, they are included.
Register at the early-bird rate before December 23rd:
project.inria.fr/soraim/
@grenoble-inp.fr @ugrenoblealpes.bsky.social @cnrs.fr @inria-grenoble.bsky.social
Interested in multimodal conversational AI and social robots?
Join us at SoRAIM 2026, our winter school in Autrans, Feb 9–13, 2026!
project.inria.fr/soraim/
Great speakers in vision, NLP & robotics, hands-on sessions, and the beautiful Vercors mountains!
Congratulations to the whole team, and especially to Marc-Antoine Georges and Marvin Lavechin!
This work was conducted at GIPSA-lab (@cnrs.fr / Université Grenoble Alpes) and is supported by the MIAI Cluster IA institute.
Why does this matter?
These works contribute to the development of computational models that learn acoustic, articulatory, and linguistic structure with minimal supervision and can be used to study the mechanisms underlying speech acquisition in children.
2) From perception to production
How acoustic invariance facilitates articulatory learning in a self-supervised vocal imitation model
Gather Session 1 – 5 Nov 2025 @ 08:00 (online)
Full text: arxiv.org/abs/2509.05849
Demo & code: marvinlvn.github.io/projects/fro...
Check out our two papers presented today at #EMNLP2025!
1. Decode, Move and Speak! Self-supervised vocal imitation linking acoustics, articulation & discrete speech units
(originally published in Comp. Ling.)
Gather Session 2 – 5 Nov @ 18:00
Full text + code + demo: tinyurl.com/kraxhpjd
Had the pleasure of visiting Okko Räsänen's lab at Tampere University. I gave a talk on our work on computational models of speech acquisition and served as the opponent in the PhD defense of María Andrea Cruz Blandón, an impressive body of work. Congrats to her and the group, a great place for research!
Open PhD Position – Grenoble, France
Join us at GIPSA-lab to explore how Speech Language Models can learn like children: through physical and social interaction. Think AI, robots, development.
Fully funded (3 yrs) • @cnrs.fr / @ugrenoblealpes.bsky.social
Details: tinyurl.com/bde988b3
DevAI&Speech involves researchers and engineers from GIPSA-lab (CNRS, Université Grenoble Alpes), the Laboratoire de Psychologie et de NeuroCognition (Mathilde Fort), Tampere University, Inria, and Atos (6/6)
Several fully funded PhD positions will be announced soon, but feel free to reach out already if you're interested! (5/6)
embedding SpeechLMs in our humanoid robots and training them through natural interaction with humans
better understanding the underlying mechanisms of speech acquisition through experimental studies involving parents, children, and robots at Grenoble's Babylab (4/6)
Key goals include:
integrating knowledge of biomechanics into SpeechLMs
equipping SpeechLMs with multimodal input/output processing (3/6)
Weโll be developing Speech Language Models (SpeechLMs) that learn like children do: through multimodal sensory input (audio, images) and interactive experiences with both their speech production system and social environment. (2/6)
I'm excited to announce the launch of our new research chair DevAI&Speech (2025–2029), funded by the Grenoble AI Institute MIAI Cluster IA!
The project explores how human developmental processes can inspire more grounded and socially aware conversational AI. (1/6)