
Posts by Maria Ryskina

A Moroccan tea service tray: a metal teapot and two glasses of tea

I'm in Rabat, Morocco, attending #EACL2026 and enjoying tea!

I will present our recent work – comparing how new words emerge in books/articles and on social media – at the LChange workshop: aclanthology.org/2026.lchange...

4 weeks ago 12 1 0 1

I knew from the first sentence that it would be MACE! Excited to see it revisited!

3 months ago 2 0 0 0

Congrats Dr Vagrant!!!

3 months ago 1 0 0 0

The must-read paper on LLMs, language, and thought that I reference here:

Dissociating language and thought in large language models
arxiv.org/abs/2301.06627
by @kmahowald.bsky.social @neuranna.bsky.social Idan Blank @nancykanwisher.bsky.social @joshtenenbaum.bsky.social @evfedorenko.bsky.social

3 months ago 16 4 0 0

Huge thanks to @wiair.bsky.social for hosting me -- I had an absolutely wonderful time chatting with @j-novikova-nlp.bsky.social and @malikeh97.bsky.social 🤩

3 months ago 3 0 0 0

New book! I've written a book called Syntax: A cognitive approach, published by MIT Press.

This is open access; MIT Press will post a link soon, but until then, the book is available on my website:
tedlab.mit.edu/tedlab_websi...

3 months ago 124 41 2 3

Hiring a postdoc for the Normativity Lab at Johns Hopkins (2026 start). Looking for multiagent systems expertise (RL/generative agents) + interdisciplinary background in AI and cognitive science/econ/cultural evolution.
apply.interfolio.com/177701

4 months ago 6 11 0 1

🧑‍🔬I’m recruiting PhD students in Natural Language Processing @unileipzig.bsky.social Computer Science, together with @scadsai.bsky.social!

Topics include, but aren’t limited to:

🔎Linguistic Interpretability
🌍Multilingual Evaluation
📖Computational Typology

Please share!

#NLProc #NLP

4 months ago 41 25 1 3

I thought it was very good! Some people strongly prefer Babel for its perspective (the POV character of BoBH is a white woman), but I had the same criticisms as you and I liked BoBH better, especially in terms of character development. It also talks a lot more about research as a career!

4 months ago 1 0 0 0

Have you read Blood over Bright Haven? (No translation magic there, unfortunately, but much better on both other points IMO)

4 months ago 1 0 1 0

Surprising to me that on the chart it's labelled as being darker than The Secret History!

4 months ago 1 0 1 0
References to two papers next to one another in a bibliography section:

Making FETCH! happen: Finding emergent dog whistles through common habitats by Kuleen Sasse, Carlos Alejandro Aguirre, Isabel Cachola, Sharon Levy, and Mark Dredze. ACL 2025.

Making “fetch” happen: The influence of social and linguistic context on nonstandard word growth and decline by Ian Stewart and Jacob Eisenstein. EMNLP 2018.

Accidental bibliography achievement unlocked!
(I highly recommend checking out both papers)

4 months ago 6 1 0 0

Congratulations!!!

5 months ago 1 0 0 0
Gillian Hadfield - Alignment is social: lessons from human alignment for AI
Current approaches conceptualize the alignment challenge as one of eliciting individual human preferences and training models to choose outputs that satisfy those preferences. To the extent…

The recording of my keynote from #COLM2025 is now available!

5 months ago 10 3 0 0

Btw the PI of this work, Dr Kelly Lambert, has a cool book called "The Lab Rat Chronicles" that describes lots of behavioral findings from rat experiments! (Written pre-driving rats, unfortunately)

5 months ago 1 0 0 0
two rats in cars from the University of Richmond study where they trained rats to drive tiny cars to get to treats and concluded that the rats love driving so much they'll do it without any incentive

the only kind of Rat Race I'm down for

5 months ago 18 1 2 0

Congratulations! Took me a second to understand you weren't talking about Lexical Functional Grammar though...

5 months ago 3 0 1 0

Canadian researchers should be aware that there is a motion before the Parliamentary Standing Committee on Science and Research to force the Tri-Councils to hand over disaggregated peer review data on all applications:
Applicant names, profiles, demographics
Reviewers' names, profiles, comments, and scores

5 months ago 143 169 13 50
Incomplete Contracting and AI Alignment We suggest that the analysis of incomplete contracting developed by law and economics researchers can provide a useful framework for understanding the AI alignment problem and help to generate a syste...

Isn't mis- (or at least under-)specification inevitable? (I'm thinking of arxiv.org/abs/1804.04268)

6 months ago 3 0 1 0

Finally out in TACL:
🌎EWoK (Elements of World Knowledge)🌎: A cognition-inspired framework for evaluating basic world knowledge in language models

tl;dr: LLMs learn basic social concepts way easier than physical&spatial concepts

Paper: direct.mit.edu/tacl/article...
Website: ewok-core.github.io

6 months ago 69 10 1 2

🚀 Excited to share a major update to our “Mixture of Cognitive Reasoners” (MiCRo) paper!

We ask: What benefits can we unlock by designing language models whose inner structure mirrors the brain’s functional specialization?

More below 🧠👇
cognitive-reasoners.epfl.ch

6 months ago 31 9 2 2

DM'd you, thanks!

6 months ago 1 0 0 0

The organizers mentioned that the videos will be up a few weeks after the conference! I expect it'll be at www.youtube.com/@colm_conf

6 months ago 1 0 1 0

I still have that card! Still working on that second ice cream 🥲

6 months ago 1 0 0 0

It used to be 5 "no"s for ice cream/pizza! Has the exchange rate gone up?

6 months ago 1 0 1 0

I'm on the job market looking for CS/ischool faculty and related positions! I'm broadly interested in doing research with policymakers and communities impacted by AI to inform and develop mitigations to harms and risks. If you've included any of my work in syllabi or policy docs please let me know!

6 months ago 7 6 2 0

Grateful to keynote at #COLM2025. Here's what we're missing about AI alignment: humans don't cooperate just by aggregating preferences; we build social processes and institutions to generate norms that make it safe to trade with strangers. AI needs to play by these same systems, not replace them.

6 months ago 15 3 1 0

Inspired to share some papers that I found at #COLM2025!

"Register Always Matters: Analysis of LLM Pretraining Data Through the Lens of Language Variation" by Amanda Myntti et al. arxiv.org/abs/2504.01542

6 months ago 26 8 1 0
Title: Large Language Models Assume People are More Rational than We Really are
Authors: Ryan Liu*, Jiayi Geng*, Joshua C. Peterson, Ilia Sucholutsky, Thomas L. Griffiths
Affiliations: Department of Computer Science & Department of Psychology, Princeton University; Computing & Data Sciences, Boston University; Center for Data Science, New York University
Email: ryanliu at princeton.edu and jiayig at princeton.edu

LLMs Assume People Are More Rational Than We Really Are by Ryan Liu* & Jiayi Geng* et al.:

LMs are bad (too rational) at predicting human behaviour, but aligned with humans in assuming rationality in others’ choices.

arxiv.org/abs/2406.17055

6 months ago 4 0 0 0
Title: Neologism Learning for Controllability and Self-Verbalization
Authors: John Hewitt, Oyvind Tafjord, Robert Geirhos, Been Kim
Affiliation: Google DeepMind
Email: {johnhew, oyvindt, geirhos, beenkim} at google.com

Neologism Learning by John Hewitt et al.:

Training new token embeddings on examples with a specific property (e.g., short answers) leads to finding "machine-only synonyms" for these tokens that elicit the same behaviour (for short answers, the synonym is 'lack').

arxiv.org/abs/2510.08506

6 months ago 0 0 1 0