📘 Paper: aclanthology.org/2024.emnlp-m...
🎬 YouTube: www.youtube.com/watch?v=RKIv...
🎧 Spotify: open.spotify.com/episode/3IvN...
🍎 Apple Podcasts: podcasts.apple.com/ca/podcast/w...
#WiAIR #AIResearch #VLM #EMNLP2024 #MulticulturalAI #GLOBALRG
(8/8🧵)
Ana and her co-authors dive deep in “On Evaluating Explanation Utility for Human-AI Decision Making in NLP” (Findings of #EMNLP2024) 🧠 — asking whether explanations truly help humans make better decisions, or just make us feel more confident. (2/8🧵)
The final entry in my #EMNLP2024 fav papers was this paper aclanthology.org/2024.finding... from Thomas L Griffiths' keynote. Used rotational ciphers like ROT-13 and ROT-3 to disentangle forms of reasoning in Chain-of-Thought. Good cipher joke in the keynote! (see p. 24 arxiv.org/abs/2309.13638)
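For anyone unfamiliar with the probe: a rotational cipher just shifts each letter by a fixed offset, so ROT-13 (which models have seen often in pretraining) can be contrasted with rare shifts like ROT-3. A minimal sketch (`rot_n` is my own illustrative helper, not from the paper):

```python
def rot_n(text: str, n: int) -> str:
    # Shift alphabetic characters by n positions, wrapping around the alphabet;
    # leave punctuation and spaces untouched.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

# ROT-13 is its own inverse (13 + 13 = 26); ROT-3 is not.
assert rot_n(rot_n("Chain-of-Thought", 13), 13) == "Chain-of-Thought"
```

Because every ROT-N is equally easy algorithmically but they differ wildly in pretraining frequency, comparing model accuracy across shifts separates memorized pattern-matching from genuine step-by-step reasoning.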
This #EMNLP2024 best paper aclanthology.org/2024.emnlp-m... had large gains over their (somewhat weak) baseline in trying to determine whether a given document was in an LLM's pre-training data. Progress on an important problem.
This #EMNLP2024 outstanding paper (aclanthology.org/2024.emnlp-m..., underline.io/events/469/s...) showed that LMs can learn a rare grammatical construction like "a beautiful five days", even without any examples in the training data, by generalizing from more common phenomena.
This #EMNLP2024 paper (aclanthology.org/2024.emnlp-m..., underline.io/events/469/p...) was about avoiding hallucination without human feedback. Compare an answer sampled at a higher temperature to a beam-search generation: the latter tends to be more factual, which yields preference pairs for DPO.
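The pairing logic above is easy to sketch: for each prompt, treat the beam-search output as "chosen" and the high-temperature sample as "rejected". A minimal illustration (the function name and dict keys are my own assumptions, chosen to match the common DPO dataset convention, not taken from the paper):

```python
def make_dpo_pairs(prompts, sampled, beam):
    """Build DPO preference pairs: for each prompt, prefer the beam-search
    generation (assumed more factual) over the high-temperature sample."""
    pairs = []
    for p, s, b in zip(prompts, sampled, beam):
        if s != b:  # only keep prompts where the two decoding strategies disagree
            pairs.append({"prompt": p, "chosen": b, "rejected": s})
    return pairs
```

The appeal is that both sides of each pair come from the model itself, so no human annotation is needed.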
This paper underline.io/events/469/s... at #EMNLP2024 had one of my favorite takeaways: if you fine-tune an LLM on new knowledge it doesn't know, you encourage hallucinations.
I wanted to post a few of my favorite #EMNLP2024 papers, starting with a couple on tokenization. Fishing for Magikarp explores the problem of undertrained "glitch" tokens and how they can be identified from their embedding vectors. aclanthology.org/2024.emnlp-m...
At this Year’s #EMNLP2024 we presented 13 papers
bsky.app/profile/ukpl...
▪️ 11 papers authored or co-authored by UKP members have been accepted for publication at this year's #ACL2024NLP in Bangkok 🇹🇭!
(2/🧵)
#EMNLP2024 Accepted to EMNLP 2024 Main Conference 🎉
"A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution"
It works so well that a 7B open-source LLM outperforms GPT-4 with prompting at identifying unique writing style. 🔍
👉 Learn more: arxiv.org/pdf/2410.217...
Had fun presenting my favourite #EMNLP2024 papers today at our secret reading group + bonus raccoon pics from Miami 🦝
Will follow up with favourite papers in blog post form soon!
Exciting research on an AI-driven mnemonic generator for easier vocabulary memorization by @nbalepur.bsky.social, Jordan Boyd-Graber, Rachel Rudinger, & @alexanderhoyle.bsky.social. Part of 21 CLIP projects at #EMNLP2024. 👉 Read more: go.umd.edu/1u48 #AI
⚕️What if evaluation #metrics for text simplification focused on understanding the gist of biomedical texts?
We present “SciGisPy,” a gist-based metric for biomedical text evaluation.
📄: shorturl.at/dss4Z
#EMNLP2024 #nlp #nlpproc #biomedical #clinical #textsimplification #gist #metric #evaluation
I really enjoyed #EMNLP2024. It was an honor to present our tokenization paper aclanthology.org/2024.emnlp-m.... I’m planning to post about some of my favorite papers soon, but here is a nice write up.
🩺 What if #simplifying medical texts could be a collaborative effort among #agents?
See how our “Society of Medical Simplifiers” makes it possible!
📄: aclanthology.org/2024.tsar-1.7/
#nlpproc #nlp #textsimplification #ats #biomedical #EMNLP2024
#EMNLP2024 "event page is now open to public. Invite your colleagues to view content, leave comments and share by sending them this link" Link: underline.io/events/469/r...
Throwback to #EMNLP2024 in sunny Miami 🌞
The UKP Lab had an amazing time at this year’s
@emnlpmeeting.bsky.social in Florida 🌴!
Our team presented 13 papers, including 11 in the Main track and 2 in the Findings track, showcasing our latest research to a vibrant international audience.
(1/🧵)
Officially migrated from X! 🕊️
Super grateful for our outstanding paper award at #EMNLP2024 ✨
📝 dill-lab.github.io/oath-frames/
You can find out more about my work here - jr4fs.github.io :-)
Made it to northern Sweden (Kiruna) by train from London. Freezing cold with northern lights 😍
Just over a week ago I was in the crazy Miami heat for #EMNLP2024
Had a lot of fun teaching a tutorial on Human-Centered Evaluation of Language Technologies at #EMNLP2024, w/ @ziangxiao.bsky.social, Su Lin Blodgett, and Jackie Cheung
We just posted the slides on our tutorial website: human-centered-eval.github.io
1/6 A lot of us are grappling with peer review these days, but its worst manifestation is when prestigious conference awards overlook critical flaws.
Case in point: #EMNLP2024 ’s Best Paper Award.
I & @iamgroot42.bsky.social wrote a blog on what went wrong: www.anshumansuri.com/blog/2024/ca... 🧵
Had a blast in Miami presenting our Stanford paper at #EMNLP2024 and catching up with old friends and meeting new ones ;)
More details about our paper in the thread below!
Paper: arxiv.org/abs/2408.03617
Code & data: github.com/styfeng/Tiny...
@stanfordnlp.bsky.social @emnlpmeeting.bsky.social
From left to right: Haw-Shiuan, Thomas, Violet and Sijia standing in front of the poster.
Thomas standing in front of his poster in the conference room
Excited to kick off my Bluesky sharing that I just presented our LLM Self-Correction paper at #EMNLP2024! 🎉 We propose a benchmark and a solution for LLMs on multi-constrained instruction following.
Check it: bit.ly/DeCRIM
Super exciting discussion there and lots of new ideas coming out from it!
Last week at #EMNLP2024 in Miami, I had the privilege of presenting DocEdit-v2: Document Structure Editing via Multimodal LLM Grounding. 🌟
Our work advances document editing by using multimodal LLMs to seamlessly ground and execute structural edits on PDF documents.
#NLP #AI #MultimodalAI #EMNLP
We had a great experience presenting our work on ImageInWords to the community #EMNLP2024 . Thank you everyone for stopping by🙏! Looking forward to future work and seeing image descriptions as a foundational multi-modal task! @emnlpmeeting.bsky.social @deep-mind.bsky.social #NLProc #Multimodal
I'm really encouraged by how much the AI research community is already on 🦋. Just look at the #EMNLP / #EMNLP2024 hashtag, and you can almost see old #AIresearch community of years ago reassembling in realtime.
personal stuff? sure. as my bio suggests, i like karaoke (went three times during last week's #emnlp2024). i run long-distance. i am a father of two. i live in jerusalem. i have zero tolerance for antisemitism including all the super-clever dog whistles.
oh yeah and here's the MeLeL explanation:
Check our latest cultural survey paper presented in the #EMNLP2024 last week!
with Prof. Monojit and @sagnikmukherjee.bsky.social
LTI researchers capped off a great week at #EMNLP2024 with not one, but *two* Best Paper awards. Read about a new model that expands speech technologies to 1000s of languages, and how "transcreation" tools can generate culturally-appropriate images, here: lti.cs.cmu.edu/news-and-eve...
#EMNLP2024 was a fun time to reconnect with old friends and meet new ones! Reflecting on the conference program and in-person discussions, I believe we're seeing the "Google Moment" to #IR research play out in #NLProc.
1/n