whoops, yes
Posts by Morgan Sonderegger
At least some of the papers I'm an author on that measure English VOT (Stuart-Smith et al., LabPhon; Sonderegger et al., Language ×2) are large corpus studies that consider stress. Not word position, iirc.
🚨🚨🚨 I'm hiring a Postdoc for a 3-year position to come work with me at the University of Oslo using iterative learning experiments to understand the evolution of sound symbolism. Please share widely.
🔴 Deadline is May 31st '25
⏲️ Desired starting date is Fall '25
shorturl.at/vSTZx
We're delighted to announce that #LabPhon20 will be held in Montréal, Québec, Canada, in the (North American) summer of 2026, with the theme “Looking back and looking forward.” Dates, thematic sessions, invited speakers and further information will be announced by the organizing committee. #LabPhon
Linguists, speech scientists, cognitive scientists: Meghan Clayards and I are hiring a postdoc to work at McGill Linguistics, deadline February 16. Come work with us!
mcgill.wd3.myworkdayjobs.com/McGill_Caree...
Linguists! We're hiring an Assistant Professor (tenure track) in phonology (secondary specializations welcome) at McGill linguistics, deadline January 13. Come work with us!
mcgill.wd3.myworkdayjobs.com/en-US/McGill...
I sympathize with the problem, though, and have seen people in this situation just exclude all words with frequency < some_constant, which seems worse.
So there'd be a pseudo-word called "low_frequency_word"? I think I wouldn't do this, because it loses a lot of information... but I'm not sure what harm it'd do, if any. Probably affect the random-effect variance estimate and thus maybe SEs for word-level predictors? Curious what you find.
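The pooling idea above can be sketched in a few lines. This is a hypothetical illustration only: the words, frequencies, and cutoff are made up, and the point is just the transform (collapse every rare word onto one pseudo-word level before fitting a by-word random intercept), not any particular model.

```python
# Sketch of pooling rare words into a single pseudo-word level.
# All names, counts, and the cutoff here are invented for illustration.
freq = {"the": 5000, "cat": 120, "sat": 80, "zyzzyva": 2, "quincunx": 1}
cutoff = 10  # hypothetical frequency threshold

def pool_word(word: str) -> str:
    """Map words below the cutoff onto one 'low_frequency_word' level."""
    return word if freq[word] >= cutoff else "low_frequency_word"

tokens = ["the", "cat", "zyzzyva", "sat", "quincunx"]
pooled = [pool_word(w) for w in tokens]
print(pooled)
# Distinct grouping levels drop from 5 to 4 in this toy case; with real
# corpus data the reduction is much larger, which is exactly why the
# by-word variance estimate (and hence SEs for word-level predictors)
# could shift.
```

Compared with dropping rare words outright, this keeps the observations but pretends all rare words share one intercept, so it trades missing data for a distorted grouping structure.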
could you add me? Thanks for doing this!
not sure if you're doing Bayesian or frequentist models, but I've also used the neutralization dataset for different topics in Bayesian models here (feel free to take anything)
people.linguistics.mcgill.ca/~morgan/ling...
"smallest effect size of interest" connects nicely to ROPE
I like your incomplete neutralization dataset! There are extensive exercises with it in my book. The 'english' and 'french_cdi' (word-learning) datasets there are also simple and intuitive; I've used them in data science courses.
Ooh, agree. There's a section on this in my book -- random intercepts for "nuisance variables" with many levels. Shamelessly mentioning in case it helps more than 2 people read it...
You can kind of do this using webR "Line-by-line Execution":
quarto-webr.thecoatlessprofessor.com/qwebr-code-c...
not exactly what you're saying, but close?
Deadline extension! Abstracts for CorpusPhon are now due Wednesday, March 13 AoE.
We are also excited to announce Dr. Michael McAuliffe @mmcauliffe.bsky.social as our invited speaker.
Hope you can join us!