
Posts by Christopher Carignan

SHaPS People

I suggest everyone visits our webpage and looks at how beautiful we all are thanks to @carignan.bsky.social's amazing photography skills ;-) www.ucl.ac.uk/pals/researc...

1 year ago 9 2 0 0

One more cohort of PALS0047 R programmers receives their rubber ducks! Everyone say, "Hello World!"

1 year ago 6 0 0 0

Currently updating the slides for the debugging lecture in my Star-Wars-meme-laden "Programming in R" module to discuss using LLMs in the debugging process. The prompt was to "create a function in R that plots vowels in different colors".

1 year ago 3 0 0 0
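For context, here is a minimal base-R sketch of the kind of function that prompt describes. This is hypothetical, not the actual slide code; the function name, colour scheme, and formant inputs are all assumptions:

```r
# Plot vowels in F1/F2 space, one colour per vowel category (base R only).
plot_vowels <- function(f1, f2, vowel) {
  vowel <- factor(vowel)
  cols <- rainbow(nlevels(vowel))[vowel]
  # Phonetician's convention: both axes reversed (high/front vowels top-left)
  plot(f2, f1, col = cols, pch = 19,
       xlim = rev(range(f2)), ylim = rev(range(f1)),
       xlab = "F2 (Hz)", ylab = "F1 (Hz)")
  legend("topleft", legend = levels(vowel),
         col = rainbow(nlevels(vowel)), pch = 19)
}

# Example usage with made-up formant values:
# plot_vowels(f1 = c(300, 700, 500), f2 = c(2300, 1200, 900),
#             vowel = c("i", "a", "u"))
```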

Speech science meets Hollywood 🎬

1 year ago 17 3 0 0

Filming is officially underway for our upcoming promotional video for the SHaPS department and MSc Language Sciences programme!

1 year ago 7 1 0 0

Very many congratulations, DOCTOR Nagamine!! 🥳

1 year ago 5 0 0 0
Language-specific and individual variation in anticipatory nasal coarticulation: A comparative study of American English, French, and German Anticipatory contextual nasalization, whereby an oral segment (usually a vowel) preceding a nasal consonant becomes partially or fully nasalized, has …

Out now in Journal of Phonetics! In comparing nasal coarticulation across three languages, our results suggest that vowel nasalization has become a *source* of coarticulation in English, adding further evidence of an ongoing sound change in the language.

www.sciencedirect.com/science/arti...

1 year ago 8 0 0 0
Co-speech head nods are used to enhance prosodic prominence at different levels of narrow focus in French Previous research has shown that prosodic structure can regulate the relationship between co-speech gestures and speech itself. Most co-speech studies have focu…

Out now in JASA! Using EMA to track head motion, we find evidence that co-speech head nod gestures are used in French as a way of enhancing and magnifying different levels of prosodic prominence.

pubs.aip.org/asa/jasa/art...

1 year ago 4 0 0 0

It is cool! These coefficients are the products of PCA loadings, the proportion of PC variance explained, and estimates from Bayesian models that include by-speaker random effects. So it's not really possible to tease apart speaker-wise values from these specific coefficients, unfortunately.

1 year ago 1 0 0 0
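For anyone curious how a composite coefficient like that comes about, here is an illustrative R sketch. It uses the built-in mtcars data as a stand-in and a placeholder model estimate (beta_pc1); none of this is the actual analysis code:

```r
# PCA on a few numeric variables, standardized
pca <- prcomp(scale(mtcars[, c("mpg", "disp", "hp", "wt")]))
prop_var <- pca$sdev^2 / sum(pca$sdev^2)  # proportion of variance per PC

# Hypothetical fixed-effect estimate for PC1 from a Bayesian model with
# by-speaker random effects; a placeholder value here.
beta_pc1 <- 0.8

# Composite coefficient per variable:
# loading x proportion of variance explained x model estimate
composite <- pca$rotation[, 1] * prop_var[1] * beta_pc1

# Once multiplied through, the speaker-level information folded into
# beta_pc1 can no longer be recovered from `composite` alone.
```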

Sneak peek! Cross-linguistic lip rounding across 86 speakers and three languages shows evidence for a trading relation between protrusion and area, regardless of the phonological status of vowel rounding.

So much time, effort, and computation for the six numbers in this graph!

1 year ago 3 0 1 0

Have you ever been interested in using the "earbuds method" to measure acoustic nasalance, but were perhaps unsure of how the accuracy compares to a traditional nasometer? Then check out my new #OpenAccess article out now in JASA!

pubs.aip.org/asa/jasa/art...

1 year ago 2 1 0 0

Aaaaannnd... accepted! 🥳 Looks like needing to shave my beard paid off in the end.

Coming soon to an Open Access JASA publication near you.

1 year ago 2 0 0 0
Assessing ultrasound probe stabilization for quantifying speech production contrasts using the Adjustable Laboratory Probe Holder for UltraSound (ALPHUS) Ultrasound imaging of the tongue is biased by the probe movements relative to the speaker’s head. Two common remedies are restricting or algorithmical…

Out now in Journal of Phonetics! We introduce our new and improved open source 3D-printable Adjustable Laboratory Probe Holder for UltraSound (ALPHUS), a highly modular and adaptable solution for different research/clinical needs in ultrasound tongue imaging:

www.sciencedirect.com/science/arti...

1 year ago 2 0 0 0

I shaved my beard this morning in order to collect some data this afternoon (which requires adhesive tape on my face)... only to forget the crucial equipment at home when I left for my train. So how's your day starting off?

2 years ago 0 0 0 1

Don't forget our special SSF panel today at 16:00 GMT. If you're a speech/language scientist and you've ever been curious about switching to/from industry, you won't want to miss it!

2 years ago 1 0 0 0

Are you a phonetician / speech scientist interested in transitioning to/from industry? Then come join our next Speech Science Forum this Thursday, where we have a panel of five speakers who will share their experiences and the lessons they have learned:

www.ucl.ac.uk/pals/events/...

2 years ago 9 9 0 1

And good coffee! And Tears of the Kingdom!

2 years ago 2 0 1 0

Any colleagues who have made the switch from academia to industry: would you be interested in being part of a panel discussion for students in our MSc Language Sciences programme at UCL? Please reply if you would like to help!

2 years ago 0 0 0 0

*Starts working on article at 9:00*

*Looks at clock and sees it's now 14:00*

"Hmmm, I should probably eat something today."

2 years ago 0 0 0 0

Clearing up this point *early* in my new methods article, so that there's no doubt:

"The aim of this research is not to denigrate or undermine the earbuds method but, rather, to provide a context by which to understand more accurately the measurements that arise from its usage."

2 years ago 2 0 0 0

This year's PALS0047 intro R programming students have been the absolute best! This nearly made me cry like a baby 😭

2 years ago 2 0 0 0

PALS0047 students are officially code-debugging masters, now that they have the most important, crucial tool in their debugging toolbox... rubber duckies! 🦆🦆🦆

2 years ago 1 0 0 0

I'm currently writing up the results for my new paper, "Ground-truth validation of the 'earbuds method' for measuring acoustic nasalance", and the conclusion is essentially that I can potentially recommend the method... but only after proper consideration of 8 (!!!) caveats.

2 years ago 4 0 0 0

New in Frontiers in Communication: using neural US and UK English TTS voices as perception stimuli, our findings suggest that vowel nasality helps listeners correctly identify coda nasality, but at the same time it hinders identification of vowel quality:

www.frontiersin.org/articles/10....

2 years ago 1 0 0 0

Yes, that's the plan! There's just a lot of ground work to do, but I'm getting there slowly...

2 years ago 1 0 0 0

Advice for the "earbuds method" of measuring nasalance: bandpass filter your signal to approximately the 400–700 Hz range before calculating nasalance, regardless of the earbud type used. Incidentally, this is very close to the 300–750 Hz filter range already recommended for nasometers!

2 years ago 3 3 1 0
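A hedged sketch of that filter-then-measure step in R, assuming two time-aligned nasal/oral channels and the signal package. The function name and the RMS-based nasalance formula are illustrative, not the paper's code:

```r
library(signal)  # for butter() and filtfilt()

nasalance <- function(nasal, oral, fs, lo = 400, hi = 700) {
  # 4th-order Butterworth bandpass; cutoffs normalized to the Nyquist rate
  bp <- butter(4, c(lo, hi) / (fs / 2), type = "pass")
  n <- filtfilt(bp, nasal)  # zero-phase filtering of the nasal channel
  o <- filtfilt(bp, oral)   # ...and the oral channel
  # RMS-amplitude nasalance (%): nasal energy over total energy
  100 * sqrt(mean(n^2)) / (sqrt(mean(n^2)) + sqrt(mean(o^2)))
}
```

With identical nasal and oral signals this returns 50%, which is a quick sanity check when wiring up the two channels.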

I actually get to work on not just *one* but *two* new papers today 😍

2 years ago 0 0 0 0

WHY does AUR praat take so long to update??

2 years ago 1 0 1 0

install.packages("brms")

*hits enter*

*dies from old age*

2 years ago 2 0 1 0