One thing I like about Typst is that no matter how much writing or code you put in it, you never have to wait more than a second for it to compile :D Once I get my stuff together, I am gonna try to recreate Mal's SynTex with Typst.
Posts by utku turk
Is anyone using Typst? I realized it is a lot easier to create documents with and much faster to compile. Following what Mal did with SynTex, I want to create something with Typst; here's a lousy draft for semantics (github.com/utkuturk/typ...). Let me know what you would like to see more of.
Most of what I know about stats I learned first from my MA advisor Pavel, and then from the advanced Bayesian course at SMLP. I wish the world were a better place and travel were easier. But please apply, it is awesome!
I gave a talk at HSP 2026 @mit.edu on agreement attraction in Turkish! We showed evidence for the role of structural positional associations in memory. This was joint work with @utkuturk.com, Duygu Demiray and @linguistbrian.bsky.social (that's me at the very back!)
Will be in Boston until Sunday for HSP; would be happy to meet with folks who are around!
Not sure what "entirely generated" means, exactly, but I feel we need new norms around this. Here's a blog I like about "etiquette", "consent", and what should be considered "rude" when sharing AI-generated content. Disclosures after the fact do not cut it, IMO. distantprovince.by/posts/its-ru...
New preprint! 🚨 Does surface overlap affect dependency resolution? Using evidence from Turkish agreement, I show the answer is no. I argue that "phonological modulation" of memory is better explained by a more parsimonious mechanism: statistical controllerhood association. osf.io/preprints/ps...
Illusion alert 🚨 We looked at NPI and NCI illusions in 🇨🇿. We aimed to distinguish between them by using negation. In English, it can't cause NPI illusions, but with NCIs it might, because those are licensed in the syntax. Surprisingly, we found strong illusions with both! 1/2 osf.io/preprints/ps...
www.utkuturk.com/posts/audio-... I made the mistake of recording my own voice for an experiment. Luckily, no one heard it, because it was really easy (and more pleasant for participants to listen to) to use text-to-speech models instead, with some tricks to make them more realistic (by varying some values).
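The "varying some values" part can be sketched in a few lines. This is not the pipeline from the blog post, just a minimal illustration of the idea: jitter each item's speaking rate and pitch a little so the synthesized stimuli do not sound robotically identical. The parameter names (`rate`, `pitch_semitones`) are placeholders to be mapped onto whatever TTS API you actually use.

```python
import random

def sample_prosody(n_items, seed=7, rate_jitter=0.06, pitch_jitter=1.5):
    """Sample per-item prosody settings around neutral defaults.

    rate is a multiplier around 1.0; pitch is a shift in semitones
    around 0. These names are placeholders for whatever parameters
    your TTS engine exposes. A fixed seed keeps the stimulus list
    reproducible across runs.
    """
    rng = random.Random(seed)
    settings = []
    for _ in range(n_items):
        settings.append({
            "rate": 1.0 + rng.uniform(-rate_jitter, rate_jitter),
            "pitch_semitones": rng.uniform(-pitch_jitter, pitch_jitter),
        })
    return settings
```

Each stimulus then gets synthesized with its own slightly different settings, which is enough to break the unnatural sameness across items.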
Hahahahha, okay that is amazing then :D
We also looked at what might be happening in canonical cases of polar questions, if we were right. It looks like we need to constrain alternatives not just with context, and not with syntactic complexity, but with syntactic identity, which was already there in Katzir's thesis. ling.auf.net/lingbuzz/009...
First, we looked at what happens when you embed these monsters in Turkish. We saw that in certain cases, exactly where wh-like alternatives are introduced, the question reading survives in the matrix sentence: ling.auf.net/lingbuzz/009...
Have been working (with Aron Hirsch) on two interesting cases for alternatives in semantics, in a domain that was taken for granted until very recently: polar questions. Turkish presents an interesting case, because the question morpheme can attach to almost any constituent and can also form canonical polar questions.
Hahahahhaha, that decision and this post force us to only speak about base 6 now :D
I do not understand why people are so mean to her. I love intermediate objects too, and I love pipes in R too, but of course you do not use either of those all the time.
please update this post with every nth character 😅
Have you ever wondered when Lunar New Year, Ramadan, and Ash Wednesday are all going to fall on the same day? It is 3571. And after next year, the next time they will all be in the same week is 2127. Here's the code for it: colab.research.google.com/drive/1olp7f...
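One piece of that computation fits in pure Python: Ash Wednesday is a movable feast, 46 days before Easter Sunday, and Gregorian Easter can be computed with the well-known Meeus/Jones/Butcher computus. This is only a sketch of that half; Ramadan and Lunar New Year additionally need Islamic and Chinese lunisolar calendar conversions (third-party libraries), which the linked notebook handles and this snippet does not.

```python
from datetime import date, timedelta

def easter(year):
    """Gregorian Easter Sunday via the Meeus/Jones/Butcher computus."""
    a = year % 19
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return date(year, month, day + 1)

def ash_wednesday(year):
    """Ash Wednesday falls 46 days before Easter Sunday."""
    return easter(year) - timedelta(days=46)
```

With the two lunar calendars converted to Gregorian dates the same way, the coincidence search is just a loop over years comparing the three dates.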
I love creating wikis or step-by-step guides for every procedure because I get to roleplay as that guy in the opening scene of a post-apocalyptic movie who just records everything.
Dark blue hexagon, with a light blue frame. Features a brain facing the left, with the front half as a wire-frame mesh of light blue, and the back half with several polygon segments of different colours. Below is written "ggseg" in light blue.
The ggseg ecosystem finally has a proper home! 🧠
For those who don't know, ggseg is an R package ecosystem for visualizing brain atlas data. Think ggplot2, but for brains.
#rstats #neuroimaging #openscience
Lastly, I was also part of another poster (with Eva Neu, @ozgebakay.bsky.social, Gaja Jarosz, and @linguistbrian.bsky.social). We will present some findings on how frequency might affect the structural choices in Turkish non-local morphology!
I will also be presenting two more posters! The first one discusses the time course of agreement planning in production (with Ellen Lau and @colinphillips.bsky.social) and the second one is on the role of surface-level heuristics in agreement!
Another year, another agreement talk in HSP 😅! We (@ozgebakay.bsky.social, me, Duygu Demiray, and @linguistbrian.bsky.social) will be talking about the role of cues related to subjecthood and probabilistic inferences on strings!
(Side note: the Bayesian analysis, including an ordered model, was run with Pyro.)
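Not the actual Pyro model, but the likelihood that an ordered model is built on is compact enough to sketch: category probabilities come from differences of sigmoids at ordered cutpoints (Pyro wraps this as `dist.OrderedLogistic`; below is just the underlying math in plain Python, as an illustration).

```python
import math

def ordered_logistic_probs(eta, cutpoints):
    """P(y = k) for an ordered-logistic model with latent score eta.

    cutpoints must be sorted; K cutpoints give K + 1 categories.
    P(y = k) = sigmoid(c_k - eta) - sigmoid(c_{k-1} - eta),
    with c_{-1} = -inf and c_K = +inf implied.
    """
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Cumulative probability of falling at or below each cutpoint.
    cdf = [sigmoid(c - eta) for c in cutpoints] + [1.0]
    probs = []
    prev = 0.0
    for p in cdf:
        probs.append(p - prev)  # mass of each category is a CDF difference
        prev = p
    return probs
```

In the Bayesian version, `eta` is a linear predictor and the cutpoints get priors; inference then runs over both.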
I do not want to get anyone hyped about what LLMs are doing or what they can tell us. As I say in the blog post, what these models are doing and what humans are doing are of course different.
Unfortunately, it looks like it is more difficult to extract the subjects from unaccusative pictures compared to unergatives. Also, text-to-image similarity was lower in unaccusatives. HOWEVER, extraordinary arguments require extraordinary evidence, and this (CLIP similarity) is **not** it.
New blog post: www.utkuturk.com/posts/clip/ Ran an analysis on whether some of the pictures used in previous advance-planning studies are sus. Used a CLIP similarity model to quantify picture<->target-sentence similarity and how difficult it is to get the subject from the picture.
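CLIP itself needs model weights (via a library such as `transformers` or `open_clip`), but the score at the end of that pipeline is just a cosine between the picture's embedding and the sentence's embedding. A minimal sketch of that final step, with toy placeholder vectors standing in for the real encoder outputs:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# In the real pipeline, img_emb and txt_emb would come from CLIP's
# image and text encoders; these short vectors are placeholders.
img_emb = [0.2, 0.9, 0.4]
txt_emb = [0.1, 0.8, 0.5]
score = cosine_similarity(img_emb, txt_emb)
```

Higher scores mean the picture and the target sentence live closer together in CLIP's joint space, which is the quantity the blog post compares across verb types.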
They don’t. Across two experiments, nominal plural attractors reliably induced attraction, while plural verbal attractors did not, despite the identical surface -lAr. Takeaway: agreement retrieval is gated by controllerhood, and surface heuristics do not bleed into retrieval. (3/3)
An alternative: it’s not phonology, but controller-eligibility. Turkish is a clean test: the same suffix -lAr appears on nouns and verbs, both can be subjects, but only nouns are possible agreement controllers. If “looks plural” drives retrieval, plural-marked verbs should attract too. (2/3)