"Mission: Impossible" was featured in Quanta Magazine! Big thank you to @benbenbrubaker.bsky.social for the wonderful article covering our work on impossible languages. Ben was so thoughtful and thorough in all our conversations, and it really shows in his writing!
Posts by Aryaman Arora
I've posted the practice run of my LSA keynote. My core claim is that LLMs can be useful tools for doing close linguistic analysis. I illustrate with a detailed case study, drawing on corpus evidence, targeted syntactic evaluations, and causal intervention-based analyses: youtu.be/DBorepHuKDM
What are the broad open problems in your view?
i am now going to write a massive reply that will have no effect on this score you have given me
hmm bluesky feed is 80% reviewing complaints. twitter is slightly better in this regard
I was reading Tim Bodt's new book on Proto-Western Kho-Bwa while waiting for code to run
www.ling.sinica.edu.tw/item/en?act=...
very nice work
e.g. Proto-Western Kho-Bwa *n̥a-jʷa-kʰa "chin" (> Khoina Sartang nyjukʰu) seems to be composed of
- PWKB *n̥a- "lower face" < Proto-Sino-Tibetan *s-na ~ s-naːr
- ?PST *g(j/w)ar "cheek, chin, jaw"
- PST *m/s-k(w)a-j "mouth, opening"
very interesting derivational morphology strat
it's pretty interesting how (some?) Sino-Tibetan languages, when sound change eroded their words too far, just decided to build compounds out of synonymous elements or stick semantic prefixes on everything
The cat is the mech interp researcher right
@avzaagzonunaada.bsky.social
added 🫡
added 🫡
made this thing, reply to be added
go.bsky.app/AKGJ82V
sus
caleb is so cool pls follow
Title: "Characterizing the Role of Similarity in the Property Inferences of Language Models"
Authors: Juan Diego Rodriguez, Aaron Mueller, Kanishka Misra
Left figure: "Given that dogs are daxable, is it true that corgis are daxable?" A language model could answer this either using taxonomic relations, illustrated by a taxonomy dog-corgi, dog-mutt, canine-wolf, etc., or by similarity relations (dogs are more similar to corgis than to cats, wolves, or Shar Peis).
Right figure: illustration of the causal model (and an example intervention) for distributed alignment search (DAS), which we used to find a subspace in the network responsible for property inheritance behavior. The bottom nodes are "property", "premise concept (A)", and "conclusion concept (B)"; the middle nodes are "A has property P" and "B is a kind of A"; and the top node is "B has property P".
How do language models organize concepts and their properties? Do they use taxonomies to infer new properties, or infer based on concept similarities? Apparently, both!
🌟 New paper with my fantastic collaborators @amuuueller.bsky.social and @kanishka.bsky.social
so true
the fact that you are posting here (and not there) has significantly increased my desire to use this platform
i do like my username here
wow i have so many followers here somehow, is it time to start posting here too
I decided what we need to make blueskAI happen is a feed. Reply here to get added to the whitelist! Whitelisted users can post to the feed by adding the following keywords to a post:
🤖
bskAI
blueskAI
😮
Hey @noviscl.bsky.social should we start shitposting here too
lol I think we talked about it the first time I reviewed (SIGMORPHON)
There will never be a random Burushaski speaker from Pakistan in my mentions here
So Threads got all the AI influencer accounts. This seems to be getting more of the linguists. But I don't think anything will be like Twitter tbh
Do I really need to read more than 600 posts a day? The answer is apparently yes
South Asian ling twitter check in 🫡