
Posts by Thomas Hueber

Doing a PhD, master's, or postdoc in computer vision, NLP, audio & speech processing, or social robotics?
Interested in multimodal deep learning?

Join us in our beautiful mountains for a unique opportunity to dive into state-of-the-art research across all these disciplines.

Spread the word!

4 months ago

👀 Six keynote speakers and two hands-on sessions with TIAGo 🤖
⁉️ What else could you wish for?
🥖 Meals, 🏕️ accommodation? Yes, they are included.
🎯 Register at the early-bird rate before December 23:
🔗 project.inria.fr/soraim/

4 months ago

@grenoble-inp.fr @ugrenoblealpes.bsky.social @cnrs.fr @inria-grenoble.bsky.social

5 months ago
SoRAIM'26 / Autrans, 9–13 February – Winter School on Social Robotics, Artificial Intelligence, and Multimedia

Interested in multimodal conversational AI and social robots? 🤖
Join us at SoRAIM 2026, our winter school in Autrans, Feb 9–13, 2026!
👉 project.inria.fr/soraim/
Great speakers in vision, NLP & robotics, hands-on sessions, and the beautiful Vercors mountains!

5 months ago

๐Ÿ‘ Congratulations to the whole team, and especially to Marc-Antoine Georges and Marvin Lavechin!

This work has been conducted at GIPSA-lab (@cnrs.fr / Grenoble Alpes University) and is supported by the MIAI Cluster AI institute.

5 months ago

🧠 Why does this matter?

These papers contribute to the development of computational models that learn acoustic, articulatory, and linguistic structure with minimal supervision, and that can be used to study the mechanisms underlying speech acquisition in children.

5 months ago
From perception to production: how acoustic invariance facilitates articulatory learning in a self-supervised vocal imitation model Human infants face a formidable challenge in speech acquisition: mapping extremely variable acoustic inputs into appropriate articulatory movements without explicit instruction. We present a computati...

2) From perception to production
How acoustic invariance facilitates articulatory learning in a self-supervised vocal imitation model
📍 Gather Session 1 – 5 Nov 2025 @ 08:00 (online)
🔗 Full text: arxiv.org/abs/2509.05849
🎧 Demo & code: marvinlvn.github.io/projects/fro...

5 months ago
The 2025 Conference on Empirical Methods in Natural Language Processing, November 4–9, Suzhou, China

🎙️ Check out our two papers presented today at #EMNLP2025!
1. Decode, Move and Speak! Self-supervised vocal imitation linking acoustics, articulation & discrete speech units
(originally published in Comp. Ling.)
📍 Gather Session 2 – 5 Nov @ 18:00
🔗 Full text + Code + Demo: tinyurl.com/kraxhpjd

5 months ago

Had the pleasure of visiting Okko Räsänen's lab at Tampere Univ. I gave a talk on our work on computational models of speech acquisition and served as the opponent in the PhD defense of María Andrea Cruz Blandón, an impressive body of work. Congrats to her & the group, a great place for research!

7 months ago

🚨 Open PhD Position – Grenoble, France 🚨

Join us at GIPSA-lab to explore how Speech Language Models can learn like children: through physical and social interaction. Think AI, robots, development 🧠🤖🎙️
Fully funded (3 yrs) • @cnrs.fr / @ugrenoblealpes.bsky.social
Details 👉 tinyurl.com/bde988b3

7 months ago

DevAI&Speech involves researchers and engineers from GIPSA-lab (CNRS, Université Grenoble Alpes), the Laboratoire de Psychologie et de NeuroCognition (Mathilde Fort), Tampere University, Inria, and Atos (6/6)

9 months ago

📢 Several fully funded PhD positions will be announced soon – but feel free to reach out already if you're interested! (5/6)

9 months ago

🤖 embedding SpeechLMs in our humanoid robots and training them through natural interaction with humans
👶 better understanding the underlying mechanisms of speech acquisition through experimental studies involving parents, children, and robots at Grenoble's Babylab (4/6)

9 months ago

Key goals include:
🧠 integrating knowledge of biomechanics into SpeechLMs
👁️ enabling SpeechLMs with multimodal input/output processing (3/6)

9 months ago

We'll be developing Speech Language Models (SpeechLMs) that learn like children do: through multimodal sensory input (audio, images) and interactive experiences with both their speech production system and social environment. (2/6)

9 months ago

🚀 I'm excited to announce the launch of our new research chair DevAI&Speech (2025–2029), funded by the Grenoble AI institute MIAI Cluster IA!

The project explores how human developmental processes can inspire more grounded and socially aware conversational AI. (1/6)

9 months ago