Nowadays, LLMs are widely used for reasoning. On familiar tasks (more specifically, those covered in the model's training data), they do fine. However, using the models with new information, new rules, and new capabilities requires more caution. 5/n. n=5
Posts by Zining Zhu
Can LLMs reason? When the problems are grounded in the real world, performance is good. Otherwise, there's a huge performance drop. 4/n
These are common properties of formal reasoning datasets, but they have been very hard to incorporate into commonsense reasoning (which is usually considered a type of informal reasoning). 3/n
ACCORD allows (1) controllable reasoning path length and (2) controllable distractor items on the reasoning tree. These controls are (3) automatic and (4) scalable. 2/n
Let's bring more formal reasoning properties into commonsense reasoning datasets! Introducing ACCORD arxiv.org/abs/2406.02804, to be presented at #NAACL2025 w/ François Roewer-Després, Jinyue Feng and Frank Rudzicz. 1/n
A uniquely interesting book with a lot of new information, and I feel the urge to take notes (either to echo or to debate) while reading. Highly recommend.
Nature Biotechnology
Behind the graduate mental health crisis in science
www.nature.com/articles/s41...
I know there are already plenty of tips out there on how to write an effective rebuttal, but I thought I'd share mine as well. I'm not claiming to be an expert or to have a perfect success rate, but I hope these suggestions are helpful for anyone who could use them.
What are some recent papers that show making models explainable can also make them safer?
Hi, I'm starting to use Bluesky!