
Posts by Sho Akamine

A proofed version of our commentary is now available here pure.mpg.de/rest/items/i...

Also, the authors have posted their response to the full set of 25 commentaries bsky.app/profile/kmah...

1 day ago 5 4 0 0

Very happy to see this publication out by our great student @shoakamine.bsky.social . This will advance the way we measure whether two mimicking behaviors are the same or not: kinematic measures applied to gesture, validated by hand coding and big data 👏👏👏

4 days ago 7 1 0 0
Validating dynamic time warping as a measure of gesture form similarity - Behavior Research Methods Dynamic time warping (DTW) is a well-known algorithm used to assess the similarity between signals of varying lengths. Initially developed for automatic speech recognition, DTW has found applications in psycholinguistics, particularly in analyzing gesture form similarity. An open question in this domain is how effectively DTW captures gesture form similarity. Here, we validate DTW against human annotations of gesture form similarity across two multimodal interaction corpora and explore its utility as an automatic, continuous measure of gesture form similarity. Our findings reveal weak to moderate correlations between DTW distance and the number of similar gesture features – such as handshape, movement, orientation, and position – suggesting that DTW serves as a useful proxy for gesture form similarity. Additionally, we highlight the importance of qualitative analysis of raw data and DTW predictions in enhancing DTW’s predictive accuracy. Our study offers a rigorous validation of DTW as a measure of gesture form similarity and presents a detailed framework for preprocessing motion tracking data and calculating DTW distance. While none of the methods is perfect, the combination of automatic and manual measures provides a comprehensive approach to understanding and measuring gesture form similarity.
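The abstract above describes DTW only conceptually. As a minimal illustration of the algorithm (not the authors' preprocessing pipeline, which operates on motion-tracking data), here is the classic dynamic-programming recurrence for DTW on two 1-D signals; the example signals are invented:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D signals
    of possibly different lengths."""
    n, m = len(a), len(b)
    # cost[i, j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy "gesture trajectories": b is a time-warped copy of a,
# c has a different shape, so its DTW distance from a is larger.
a = [0, 1, 2, 1, 0]
b = [0, 1, 1, 2, 1, 0]
c = [0, 3, 0, 3, 0]
print(dtw_distance(a, b), dtw_distance(a, c))
```

Lower DTW distance corresponds to higher form similarity, which is the mapping the paper validates against human annotations.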

Validating dynamic time warping as a measure of gesture form similarity. New paper by @shoakamine.bsky.social , @dingemansemark.bsky.social & @asliozyurek.bsky.social
doi.org/10.3758/s13428-026-02975-5

4 days ago 9 4 0 1

Link to the paper on how one can quantify whether two communicative movements are similar or not 👐👐🙌🙌

1 week ago 5 2 0 0

📣Congratulations @shoakamine.bsky.social for the great work #BBM!! Quantifying and coming up with objective measures (grounded in human coding) of how we repeat each other's communicative movements (visual alignment) is very difficult!! Sho used DTW to do this AND validated it with human coding 👇🏼👇🏼

1 week ago 4 1 1 0

Huge thanks to my supervisors @asliozyurek.bsky.social and @dingemansemark.bsky.social for their support😊

1 week ago 2 0 0 0
DTW correctly predicted the left gesture pair to be similar (lower distance) and the right pair to be dissimilar (higher distance)

💡New publication💡
In our new paper in Behavior Research Methods, we validated and demonstrated the utility of dynamic time warping (DTW) as an efficient, continuous measure of gesture form similarity.

Read the full (open access!) paper here: 🔗 link.springer.com/article/10.3...

1 week ago 12 3 1 0

Looks very useful!! Thank you for your contribution!!

1 week ago 1 0 0 0

📦 My first #RStats package on CRAN:

{readelan}

A package dedicated to reading all files associated with ELAN: eaf, etf, ecv. Reads annotations, metadata, controlled vocabularies. Relevant for many in #linguistics perhaps?

More info here:
borstell.github.io/misc/readelan/
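{readelan} is an R package and its exact API is documented at the link above. For readers outside R: ELAN's .eaf files are plain XML, with time slots in a TIME_ORDER element and time-aligned annotations in TIER elements, so a minimal reader is straightforward to sketch. The Python function below is an illustration of that file structure, not readelan's interface:

```python
import xml.etree.ElementTree as ET

def read_eaf(path):
    """Read time-aligned annotations from an ELAN .eaf file (plain XML).
    Returns a list of (tier_id, start_ms, end_ms, value) tuples."""
    root = ET.parse(path).getroot()
    # Map time-slot ids (e.g. "ts1") to millisecond values
    times = {ts.get("TIME_SLOT_ID"): int(ts.get("TIME_VALUE"))
             for ts in root.iter("TIME_SLOT")
             if ts.get("TIME_VALUE") is not None}
    rows = []
    for tier in root.iter("TIER"):
        tier_id = tier.get("TIER_ID")
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            start = times.get(ann.get("TIME_SLOT_REF1"))
            end = times.get(ann.get("TIME_SLOT_REF2"))
            value = ann.findtext("ANNOTATION_VALUE", default="")
            rows.append((tier_id, start, end, value))
    return rows
```

This handles only alignable annotations; .etf templates, .ecv controlled vocabularies, and reference annotations (which readelan also covers) would need additional handling.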

2 weeks ago 57 21 5 0

People re-use each other's words, syntax and semantics in conversation, arguably to facilitate common ground. In a new Proceedings of the Royal Society B paper, we assess whether this tendency can be used as an individually reliable trait (e.g. to correlate it with something else): doi.org/10.1098/rspb... 1/

1 month ago 66 19 1 2

📌 Curious to know more? Visit my website 👉 lnkd.in/d59v_TpH

📌 Looking for a PhD opportunity? The Spring admissions round at SISSA is now open and I’d be delighted to hear from prospective candidates (deadline: March 20, 1pm CET) 👉 lnkd.in/dvf6RuQd

1 month ago 6 2 0 0

Congratulations!!!

1 month ago 1 0 0 0

Our new review paper (w/ Anna Kuhlen) is out, emphasising the cognitive complexities involved in everyday face-to-face conversation and calling for scaling up traditional psycholinguistic paradigms to better capture these complexities, especially multimodality, addressee responses and multiparty interactions.

1 month ago 15 3 0 0
Multimodal Minds: Language, Gesture & Sign for Human and Non-Human Interaction | Radboud University In this summer course, you'll explore how speech, gesture, and sign languages shape meaning and cognition in interaction.

Excited to co-teach Multimodal Minds this June with @anitaslonimska.bsky.social! See below for more information.

www.ru.nl/en/education...

1 month ago 10 4 2 0

📣📣📣Job alert Multimodal Language Department Max Planck Institute for Psycholinguistics MAX PLANCK RESEARCH GROUP LEADER POSITION (W2 BBESG) lnkd.in/eaq5MW9a

1 month ago 17 20 1 2
Psycholinguistic perspectives on face-to-face conversation Nature Reviews Psychology - Language production and comprehension are often studied as separate processes, but they are intertwined in naturalistic conversation. In this Review, Holler and Kuhlen...

Psycholinguistic perspectives on face-to-face conversation. New paper by @judithholler.bsky.social & Anna K. Kuhlen
doi.org/10.1038/s44159-026-00538-1

rdcu.be/e4rlV

1 month ago 13 6 1 1
Adults mark the communicative relevance of their gestures more for children than for other adults According to relevance theory, communication relies on speakers’ ability to signal relevant information, which addressees use to infer meaning efficiently. Most research within the relevance theore...

New open-access paper with @asliozyurek.bsky.social & Emanuela Campisi
We extend relevance theory to a multimodal view of language, demonstrating that speakers explicitly highlight iconic gestures as communicatively relevant in a knowledge transmission context.
🔗 www.tandfonline.com/doi/full/10....

1 month ago 11 5 2 1
Abstract of the paper

Figure 1 - experimental setup

Figure 2 - accuracy over time

Figure 3 - semantic similarity within/across games

I always thought preschoolers were too egocentric to do well on communication tasks where they had to talk about novel referents. Old papers reported they'd say stuff like "this one looks like my uncle's hat."

@vboyce.bsky.social shows that this is wrong!

osf.io/preprints/ps...

2 months ago 29 9 0 0

You can read the full chapter here: Iconicity in simultaneous constructions in sign languages | Max Planck Institute share.google/rzwFU9EpdXJb...

2 months ago 3 2 0 0

I confirm this is not Japanese!

2 months ago 2 0 1 0

I'm honored to have been awarded an NWO Rubicon grant! ⭐ Later this year, I will be joining the Spoken Language group at the @bcbl.bsky.social. Looking forward to more research on the cognitive mechanisms involved in perceptual learning, with @effiekapnoula.bsky.social and Arthur Samuel 🤩

3 months ago 14 1 1 0
矢野雅貴 (ed.); 伊藤愛音, 大石衡聴, 大関洋平, 小野創, 小泉政利, 峰見一輝, 門馬将太 (authors), 『ひとが言葉を理解・産出する仕組みー心理言語学入門』 (How People Comprehend and Produce Language: An Introduction to Psycholinguistics)

📢 An introductory book on psycholinguistics (sentence processing) is out!

Psycholinguistics is rarely covered in university linguistics courses, so there are few opportunities to learn it. This book takes you from classic findings on how people comprehend and produce language all the way to the latest research (predictive processing, computational modeling using natural language processing techniques, and more). Chapter 1 also introduces the basic mindset and practical tips you'll want when doing research, so undergraduate and graduate students are welcome too.

I'd be delighted if this book reaches many people interested in psycholinguistics and becomes an occasion for exchange with various fields inside and outside linguistics 🍀

3 months ago 16 8 1 1

ECRs in the Spotlight: @shoakamine.bsky.social from @mpi-nl.bsky.social

Sho's PhD is all about #multimodality in online video-mediated communication (such as #Zoom) 📹

Read the interview 👇

medal.ut.ee/news/ecrs-in...

4 months ago 3 1 0 0
A busy figure showing some time series and timing distributions of movement versus speech

Team science (all shared authorship), pre-registered, diamond open access paper now accepted at Open Mind: "Foreign Language Learners Show a Kinematic Accent in Their Co-speech Hand Movements".

with Bosker, Marieke Hoetjes, Doenja Hustin, Lieke van Maastricht

www.wimpouw.com/files/POSTPR...

4 months ago 21 7 1 1
GitHub - nickduran/align2-linguistic-alignment: ALIGN 2.0: Modern Python package for multi-level linguistic alignment analysis. Faster, streamlined, and feature-rich while maintaining full compatibili... ALIGN 2.0: Modern Python package for multi-level linguistic alignment analysis. Faster, streamlined, and feature-rich while maintaining full compatibility with the original ALIGN methodology (Duran...

For all of you using the ALIGN library (to measure lexical, syntactic and semantic alignment in conversations): Nick Duran has put together a great refactoring, ALIGN 2.0 (github.com/nickduran/al...), now integrated with spaCy and BERT.
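To illustrate what "lexical alignment" means at its simplest, here is a toy overlap measure between two utterances. This is a sketch of the concept only, not ALIGN's actual computation (ALIGN works with n-grams across lexical, syntactic, and semantic levels):

```python
def lexical_alignment(utt_a, utt_b):
    """Toy lexical alignment: Jaccard overlap of word types
    between two utterances in a conversation."""
    a = set(utt_a.lower().split())
    b = set(utt_b.lower().split())
    if not a or not b:
        return 0.0
    # Shared vocabulary relative to combined vocabulary
    return len(a & b) / len(a | b)

# A speaker re-using the other's words scores higher:
print(lexical_alignment("the red circle", "the red square"))
```

Real alignment measures additionally control for baseline word frequency and chance re-use, which is part of what makes packages like ALIGN valuable.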

4 months ago 15 6 0 0
EnvisionBOX overview2025 — YouTube video by Wim Pouw

www.envisionbox.org has been shortlisted for the Leo Waaijers Open Science prize: ukb.nl/en/news/shor...

@babajideowoyele.bsky.social @jamestrujillo.bsky.social @sarkadava.bsky.social @DavideAhmar @acwiek.bsky.social

Amazing Markus Küpper made an animated video:
www.youtube.com/watch?v=HduI...

6 months ago 18 11 2 2

Some of the popular chain restaurant recommendations: Sushiro, Kurasushi, Tsurutontan (udon noodles), Ohtoya (Japanese), Yayoi Ken (Japanese), Gyoza no Ohsho (Chinese), Coco curry. Hope you enjoy Japan!!

7 months ago 1 0 1 0
ERC Starting Grants for research into language, money circulation and medieval songs | Radboud University Three researchers at Radboud University will receive a Starting Grant from the European Research Council (ERC). They will receive a grant of roughly 1.5 million euros.

👀 👂 How does the brain merge what we hear & see? @lindadrijvers.bsky.social got an ERC Starting Grant (≈ €1.5M) for HANDWAVE, studying how we integrate audiovisual signals.

Vital for understanding language disorders & improving diagnostics.👇

www.ru.nl/en/research/...

7 months ago 19 5 0 0

You are right. That's why I'm trying to develop a good understanding so that I can make my judgments! And your summer school really helped me understand stats better :)

8 months ago 0 0 0 0

I think it’d be a great addition! Especially because I saw recommendations against using BF due to its sensitivity to priors, favouring CI or HDI for NHST instead. That’s why I got confused when I read the statement in your book. I’ll read more papers/books and try to get a full understanding of this.

8 months ago 0 0 1 0