Don't be shy to take on a little two-week side project. These five months will be the most precious three years of your academic journey.
Posts by Fausto Carcassi
Really enjoyed chatting about *A Drive to Survive* with Carrie Figdor for the @newbooksnetwork.bsky.social and @mitpress.bsky.social podcast!
newbooksnetwork.com/kathryn-nave...
trying to learn more about Approximate Bayesian Computation, to fit less tractable models (say ABMs, or rule-based categorization systems) to data; or fit models to aggregate data reported in other papers. but so many approaches out there! what are your favorite tutorials/papers?
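The simplest entry point is rejection ABC: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands near the observed one. A minimal NumPy sketch with a toy normal model and invented numbers (purely to illustrate the accept/reject loop, not any of the models mentioned above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed data: suppose we only have an aggregate summary (a mean)
# reported in a paper -- a typical use case for ABC.
observed_mean = 4.2
n_per_dataset = 50

def simulate(theta):
    """Stand-in for an intractable simulator (here just a normal model)."""
    return rng.normal(theta, 1.0, size=n_per_dataset)

# Rejection ABC: sample from the prior, simulate, and accept draws whose
# summary statistic falls within epsilon of the observed summary.
prior_draws = rng.uniform(0, 10, size=100_000)
epsilon = 0.1
accepted = [t for t in prior_draws
            if abs(simulate(t).mean() - observed_mean) < epsilon]

posterior = np.array(accepted)
print(len(posterior), posterior.mean())  # posterior mean should sit near 4.2
```

Shrinking epsilon sharpens the approximate posterior at the cost of acceptance rate, which is why most of the fancier methods (ABC-SMC, synthetic likelihood, neural approaches) exist.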
I think part of the reason why people find this controversial is that they read into it "therefore there is no difference". But as my intro-to-philosophy professor said in the first year of my BA: "The fact that there are unclear cases does not mean there aren't clear cases"
There is no straightforward way, or any recognized method, to demarcate cleanly between science and pseudoscience.
Thanks to François Chollet and Mike Knoop, my interest in AI is back and I'm really looking into the ARC Prize; this looks interesting. I went to the resources tab and started learning about program synthesis.
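Program synthesis in its most basic enumerative form is easy to sketch: search over compositions of DSL primitives until one matches all the input/output examples. A toy illustration with a hypothetical three-primitive DSL (nothing like the actual ARC setup, just the core idea):

```python
from itertools import product

# A toy enumerative synthesizer over a tiny DSL of unary integer
# functions (hypothetical primitives, chosen only for illustration).
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "neg": lambda x: -x,
}

def synthesize(examples, max_len=3):
    """Search compositions of primitives consistent with all I/O examples."""
    for length in range(1, max_len + 1):
        for prog in product(PRIMITIVES, repeat=length):
            def run(x, prog=prog):
                for name in prog:      # apply primitives left to right
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return prog
    return None

# f(x) = 2x + 1 is expressible as double-then-inc.
print(synthesize([(1, 3), (2, 5), (5, 11)]))  # → ('double', 'inc')
```

Real systems replace the brute-force loop with pruning, types, or learned guidance, but the spec-as-examples framing is the same.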
I'll go through this mini course from @faustocarcassi.bsky.social
Update: we've extended our timeline! Review of applications will now begin March 24. Still plenty of time to put together an app!
disi.org/apply/
This is *very cool*
Can LLMs use ToM to genuinely persuade you, or do they just use good rhetoric? In our new preprint, we use the MINDGAMES framework to test this. Surprisingly, LLMs like o3 can be incredibly effective persuaders *without* actually understanding your mental states. 🧵👇
Come to the course Tom Stephen and I are teaching at ESSLLI 2026! 2026.esslli.eu/courses-work... We'll consider typological data to formulate tightly fitting empirical constraints on the operation of semantic composition in natural language, in the tradition of Generalized Quantifier Theory
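The flavor of constraint at issue can be illustrated with conservativity, a classic universal from that tradition: natural-language determiners denote relations Q between sets satisfying Q(A, B) ⟺ Q(A, A ∩ B). A brute-force check over a small domain (illustrative only, not from the course materials):

```python
from itertools import combinations

def subsets(domain):
    """All subsets of a finite domain."""
    return [set(c) for r in range(len(domain) + 1)
            for c in combinations(domain, r)]

def conservative(Q, domain=(1, 2, 3)):
    """Check Q(A, B) == Q(A, A & B) for all A, B over the domain."""
    return all(Q(A, B) == Q(A, A & B)
               for A in subsets(domain) for B in subsets(domain))

# Set-theoretic denotations of some determiner-like relations.
def every(A, B): return A <= B
def some(A, B): return bool(A & B)
def only(A, B): return B <= A   # "only" is the textbook non-conservative case

print(conservative(every), conservative(some), conservative(only))
# → True True False
```

That "only" fails the check is the standard argument that it isn't a determiner, which is the kind of tight typological constraint the course description is gesturing at.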
my course notes on a bayesian workflow for (single agent) cognitive modeling are now fully revised and online: fusaroli.github.io/AdvancedCogn...
Predictive checks, updating checks, sensitivity analyses, and simulation-based calibration in @mc-stan.org
Feedback is very welcome!
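As a taste of what simulation-based calibration checks: if the fitting procedure is correct, the rank of the true parameter among posterior draws is uniform across simulated datasets. A self-contained sketch using a conjugate normal-normal model, so the closed-form posterior stands in for an MCMC sampler (toy numbers, not taken from the course notes):

```python
import numpy as np

rng = np.random.default_rng(1)

# Conjugate model: mu ~ N(0, 1), y_i ~ N(mu, 1). The exact posterior is
# available in closed form, so SBC should come out clean by construction.
n_obs, n_draws, n_sims = 20, 99, 1000
ranks = []
for _ in range(n_sims):
    mu_true = rng.normal(0, 1)                      # draw from the prior
    y = rng.normal(mu_true, 1, size=n_obs)          # simulate a dataset
    post_var = 1 / (1 + n_obs)                      # conjugate update
    post_mean = post_var * y.sum()
    draws = rng.normal(post_mean, np.sqrt(post_var), size=n_draws)
    ranks.append((draws < mu_true).sum())           # rank in 0..n_draws

# For a correct implementation the ranks are uniform on 0..99;
# a histogram of `ranks` should look flat.
ranks = np.array(ranks)
print(ranks.mean())  # should sit near n_draws / 2 = 49.5
```

Swapping the closed-form `draws` line for draws from an actual sampler (e.g. a Stan fit) turns this into the real diagnostic: a skewed or U-shaped rank histogram flags a bug or miscalibration.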
A new preprint, co-authored with @johnwkrakauer.bsky.social:
The Deliberation Taboo
Cognitive science is, nominally, the science of thinking. We argue that the field has no theory of what thinking is and, even worse, that the topic has largely dropped out of focus. 1/
osf.io/preprints/ps...
**Postdoc position in human category learning**
@thecharleywu.bsky.social, Frank Jäkel and I are seeking a postdoctoral fellow to lead a joint project on human category learning at the Centre for Cognitive Science @tuda.bsky.social.
www.career.tu-darmstadt.de/tu-darmstadt...
New preprint
"Human-Like Coarse Object Representations in Vision Models"
arxiv.org/pdf/2602.12486
Drop an album that was important to you when you were nineteen.
Come work with us!!
Two full substitute professorships for Computational Linguistics (1 year) and General Linguistics (1.5 years) at the University of Tübingen. @unituebingen.bsky.social
uni-tuebingen.de/universitaet...
uni-tuebingen.de/universitaet...
I binged @tonytula.com's two books and want more so now I'm forced to read "Remote Research: Real Users, Real Time, Real Research" as a piece of narrative in the Tulathimutte-verse
you know there is one thing everyone could have that billionaires now have: "a meaningful say over their work, their lives, and the places they live". don't think Altman will like the story of how we get that for everyone tho
TIL Fats Domino's eight children were called: Antoine III, Anatole, Andre, Antonio, Antoinette, Andrea, Anola, and Adonica.
I agree! I am more worried for students. I guess I should have said: the fear is AI will absorb anyone who *could learn* what a function application is
I think the fear might be that formal semantics will disappear because AI will absorb everyone who knows what a function application is
Sadly, as a field it's just slightly too close to the star of AI research. Whether it will orbit or fall and crash is unclear (a lot of pessimism around though afaict).
Personally in teaching it I think we should emphasise more: (1) How strangely language behaves even in apparently simple domains like Boolean connectives (2) events events events!
So I think that, much like logic, truth-conditional semantics (TCS) lives a split but stable existence between a topic students learn to make their thinking about language a bit more precise, and a research field that connects more and more with other fields (typology, cognitive science)
The field has matured so that the intro course (usually H&K, using S at the top) is quite far removed from the standard picture in the research lit (some kind of neo-Davidsonian event semantics w/ a rich verbal spine), though there are some attempts to realign (the Coppock & Champollion textbook)
(1) a tool to state typological generalisations or describe underdocumented langs, (2) a framework to formulate precise empirical predictions to test (Jacopo Romoli has great stuff here), (3) a way to study how certain kinds of meanings are realized and dealt with (e.g. degrees), ...
In my experience most people aren't as committed to (/interested in?) the big foundations that motivated it in the old days (I think Davidson (?) somewhere sums it up as: a systematic account of compositionality+entailment patterns+logical form), and instead use TCS in more applied ways e.g., as: