Next week I'll be at #ACL2024NLP to present our work on the interplay between SES and LLM usage 🇦🇹
I will present it on Wednesday morning and also take part in a panel at the end of the session.
🤗: shorturl.at/W77Kq
If you're interested in the topic, feel free to reach out for a chat!
#NLProc
▪️ At #ACL2024NLP in Bangkok, @igurevych.bsky.social gave a keynote, and the #UKPLab won two outstanding paper awards.
Congratulations to the authors Indraneil Paul, Goran Glavaš, Jan-Christoph Klie, Rahul N., Juan Haladjian, Marc Kirchner, and @igurevych.bsky.social
x.com/UKPLab/statu...
(4/🧵)
At this year’s #EMNLP2024 we presented 13 papers
bsky.app/profile/ukpl...
▪️ 11 papers authored or co-authored by UKP members have been accepted for publication at this year's #ACL2024NLP in Bangkok 🇹🇭!
(2/🧵)
My #acl2024nlp Presidential Address is now publicly available. If you saw the slides & discussion of them in August, especially, please have a listen to the actual talk. It starts at 43'50" in this video:
underline.io/events/466/s...
Theme of the talk: ACL is not an AI conference
»[O]ur results do not mean that AI is not a threat at all« emphasized Iryna Gurevych. »[But future research should] focus on other risks posed by the models, such as their potential to be used to generate fake news.« (3/🧵)
Full press release: nachrichten.idw-online.de/2024/08/12/i...
#ACL2024NLP
The 2024 study, authored by Sheng Lu, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi & Iryna Gurevych (BathNLP Lab | UKP Lab), was just presented at #ACL2024NLP. It found no evidence of emergent abilities in LLMs that go beyond in-context learning. (2/🧵)
📄 arxiv.org/abs/2309.01809
Our colleagues Iryna Gurevych, Yufang Hou and Preslav Nakov presenting the work of Max Glockner on #Missci at #ACL2024NLP 🇹🇭, a collaboration with IBM Research Ireland and MBZUAI.
arxiv.org/abs/2406.03181
Today at #ACL2024NLP: “1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)” and the proceedings are online: aclanthology.org/volumes/2024... #NLProc
And consider following the authors Fengyu Cai (UKP Lab), Xinran Zhao (Carnegie Mellon University), Hongming Zhang (Tencent AI), Iryna Gurevych, and Heinz Koeppl (@cs-tudarmstadt.bsky.social, @tuda.bsky.social) for more information or an exchange of ideas.
See you at #ACL2024NLP 🇹🇭!
Check our paper and code!
📰 Paper: arxiv.org/abs/2407.12512
💻 Code: github.com/TRUMANCFY/ge...
(8/🧵) #ACL2024NLP #NLProc
We demonstrate that, with knowledge of class-wise hardness, reorganizing classes leads to a more coherent class-wise hardness distribution and further improves model performance.
(7/🧵)
#ACL2024NLP #NLProc
☝️ Moreover, we prove theoretically that intra-class hardness is associated with overfitting, which degrades performance during training.
(6/🧵) #ACL2024NLP #NLProc
🧠 GeoHard is stable across different semantic encoders and NLP tasks. This means that it generalizes well in measuring class-wise hardness.
(5/🧵) #ACL2024NLP #NLProc
We compare GeoHard with baseline metrics, namely aggregations of instance-level hardness, by correlating each hardness measure with model performance on eight NLU datasets.
GeoHard outperforms instance-level aggregation by more than 59%! 🤯
(4/🧵)
#ACL2024NLP #NLProc
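The evaluation described above boils down to rank-correlating a hardness score with observed per-class performance. A minimal sketch with synthetic numbers (purely illustrative, not the paper's data):

```python
from scipy.stats import spearmanr

# Hypothetical per-class hardness scores and the accuracy a model
# achieves on each class. If the hardness measure is informative,
# harder classes should get lower accuracy, i.e. the rank
# correlation should be strongly negative.
hardness = [0.8, 0.5, 0.9, 0.3]
accuracy = [0.62, 0.75, 0.58, 0.88]

rho, p_value = spearmanr(hardness, accuracy)
```

With these toy values the ranks are perfectly inverted, so `rho` comes out as -1.0; on real data one would compare the (absolute) correlations achieved by GeoHard against those of the instance-level baselines.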
🤔 How do we measure the hardness of a class?
GeoHard to the rescue! It combines inter-class and intra-class measures derived from class-wise semantics.
In the embedding space, greater diversity within a class and closer distances between classes indicate higher hardness.
(3/🧵) #ACL2024NLP #NLProc
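The intuition above — a class is harder when its instances are spread out and its neighbors are close — can be sketched as a toy score over sentence embeddings. This is an illustrative approximation, not the paper's exact formulation, and all names are hypothetical:

```python
import numpy as np

def classwise_hardness(class_embeddings):
    """Toy class-wise hardness in the spirit of GeoHard.

    class_embeddings: dict mapping class label -> (n_i, d) array of
    instance embeddings. Higher intra-class diversity and smaller
    inter-class centroid distances both raise the score.
    """
    centroids = {c: e.mean(axis=0) for c, e in class_embeddings.items()}
    scores = {}
    for c, emb in class_embeddings.items():
        # Intra-class measure: mean distance of instances to their centroid.
        intra = np.linalg.norm(emb - centroids[c], axis=1).mean()
        # Inter-class measure: mean distance to the other class centroids.
        inter = np.mean([np.linalg.norm(centroids[c] - centroids[o])
                         for o in centroids if o != c])
        # A diverse class sitting close to its neighbors is harder.
        scores[c] = intra / inter
    return scores
```

For example, a tightly clustered class gets a lower score than a widely scattered one at the same distance from its neighbors.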
🎯 We've found a consistent pattern of class-wise difficulty across various language models, paradigms and human annotations on eight NLU datasets.
• Fine-tuned LMs: RoBERTa/OPT/Flan-T5
• In-context learning: Llama/OPT
Class-wise difficulty is an intrinsic feature! 🔍🤖
(2/🧵) #ACL2024NLP #NLProc
Are all classes in #NLProc tasks equally difficult to learn? 🤔
In our #ACL2024NLP paper, we analyze why this is not the case!
Please meet #GeoHard, a metric to measure class-wise difficulty 🔍📊! 🧵(1/9)
📆 Poster: Tue, Aug 13, 12:15 ICT
📰 arxiv.org/abs/2407.12512
Meet our fellow researchers representing UKP Lab at #ACL2024NLP: Iryna Gurevych, Qian Ruan, Justus-Jonas Erker, Indraneil Paul, Fengyu Cai, Sheng Lu, Haishuo Fang, Furkan Şahinuç, Kexin Wang, Haau-Sing Li, Andreas Waldis, and a very special guest from BathNLP Lab, Harish Tayyar Madabushi!
Interested in our research? At these times you can see the presentations of papers co-authored by our colleagues at #ACL2024NLP 🇹🇭!
Excited to share my ACL 2024 presentation on my second-to-last PhD paper! 🎓📚
Watch it here if you are also interested in LLM self-explanations: 🤖 youtu.be/b3wbTOZXRyI
Are you joining ACL 2024 in Bangkok? Ping me—let's chat! #ACL2024NLP #PhDLife
⭐️🗞️ Accepted to ACL 2024 main conference! #ACL2024NLP
Neural nets can in theory learn formal languages such as aⁿbⁿ & Dyck. Yet no one ever finds such nets using standard techniques. Why?
We suggest that the culprit might have been the objective function all along 👇
arxiv.org/abs/2402.10013
Congratulations to all authors! We look forward to seeing you in Bangkok 🇹🇭 this August!
➡️ www.informatik.tu-darmstadt.de/ukp/ukp_home...
(17/17) #ACL2024NLP #NLProc
»GeoHard: Towards Measuring Class-wise Hardness through Modelling Class Semantics« by Fengyu Cai, Xinran Zhao, Hongming Zhang, Iryna Gurevych and Heinz Koeppl (16/🧵) #ACL2024NLP (arXiv coming soon)
»DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs« by Haishuo Fang, Xiaodan Zhu and Iryna Gurevych (15/🧵) #ACL2024NLP (arXiv coming soon)
Also, 2 papers have been accepted to the Findings section of #ACL2024NLP. They are:
»On Efficient and Statistical Quality Estimation for Data Annotation« by Jan-Christoph Klie, Rahul Nair, Juan Haladjian and Marc Kirchner (13/🧵) #ACL2024NLP
arxiv.org/abs/2405.11919
»DAPR: A Benchmark on Document-Aware Passage Retrieval« by Kexin Wang, Nils Reimers and Iryna Gurevych (12/🧵) #ACL2024NLP
arxiv.org/abs/2305.13915
»Re3: A Holistic Framework and Dataset for Modeling Collaborative Document Revision« by Qian Ruan, Ilia Kuznetsov and Iryna Gurevych (11/🧵) #ACL2024NLP (arXiv coming soon)
»Dismantling the Misleading Narratives: Reconstructing the Fallacies in Misrepresented Science« by Max Glockner, Yufang Hou, Preslav Nakov and Iryna Gurevych (10/🧵) #ACL2024NLP (arXiv coming soon)