
Next week I'll be at #ACL2024NLP to present our work on the interplay between SES and LLM usage 🇹🇭
I will present it on Wednesday morning and also take part in a panel at the end of the session.

🤗: shorturl.at/W77Kq

If you're interested in the topic, feel free to reach out for a chat!

#NLProc


▪️ At #ACL2024NLP in Bangkok, @igurevych.bsky.social gave a keynote and the #UKPLab won two outstanding paper awards.
Congratulations to the authors Indraneil Paul, Goran Glavaš, Jan-Christoph Klie, Rahul N., Juan Haladjian, Marc Kirchner, and @igurevych.bsky.social
x.com/UKPLab/statu...
(4/🧵)


At this year’s #EMNLP2024 we presented 13 papers
bsky.app/profile/ukpl...

▪️ 11 papers authored or co-authored by UKP members have been accepted for publication at this year's #ACL2024NLP in Bangkok 🇹🇭!
(2/🧵)


My #acl2024nlp Presidential Address is now publicly available. Especially if you saw the slides & the discussion of them in August, please have a listen to the actual talk. It starts at 43'50" in this video:

underline.io/events/466/s...

Theme of the talk: ACL is not an AI conference


Independent, complex thinking not (yet) possible after all: Study led by TU shows limitations of ChatGPT & co.

»[O]ur results do not mean that AI is not a threat at all« emphasized Iryna Gurevych. »[But future research should] focus on other risks posed by the models, such as their potential to be used to generate fake news.« (3/🧵)

Full press release: nachrichten.idw-online.de/2024/08/12/i...

#ACL2024NLP

Are Emergent Abilities in Large Language Models just In-Context Learning? Large language models, comprising billions of parameters and pre-trained on extensive web-scale corpora, have been claimed to acquire certain capabilities without having been specifically trained...

The 2024 study, authored by Sheng Lu, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi & Iryna Gurevych (BathNLP Lab | UKP Lab), was just presented at #ACL2024NLP. It found no evidence of emergent abilities in LLMs that go beyond in-context learning. (2/🧵)

📄 arxiv.org/abs/2309.01809


Our colleagues Iryna Gurevych, Yufang Hou and Preslav Nakov presenting the work of Max Glockner on #Missci at #ACL2024NLP 🇹🇭, a collaboration with IBM Research Ireland and MBZUAI.

arxiv.org/abs/2406.03181


Today at #ACL2024NLP: “1st Workshop on Machine Learning for Ancient Languages (ML4AL 2024)” and the proceedings are online: aclanthology.org/volumes/2024... #NLProc


And consider following the authors Fengyu Cai (UKP Lab), Xinran Zhao (Carnegie Mellon University), Hongming Zhang (Tencent AI), Iryna Gurevych, and Heinz Koeppl (@cs-tudarmstadt.bsky.social, @tuda.bsky.social) for more information or an exchange of ideas.

See you at #ACL2024NLP 🇹🇭!

1 0 0 0
Preview
GitHub - TRUMANCFY/geohard Contribute to TRUMANCFY/geohard development by creating an account on GitHub.

Check our paper and code!

📰 Paper: arxiv.org/abs/2407.12512
💻 Code: github.com/TRUMANCFY/ge...

(8/🧵) #ACL2024NLP #NLProc


We demonstrate that, with knowledge of class-wise hardness, class reorganization leads to a more coherent class-wise hardness distribution and further improves model performance.

(7/🧵)

#ACL2024NLP #NLProc


☝️ Moreover, we theoretically show that intra-class hardness is associated with overfitting, which degrades performance during training.

(6/🧵) #ACL2024NLP #NLProc


🧠 GeoHard is stable across different semantic encoders and NLP tasks. This means that it generalizes well in measuring class-wise hardness.

(5/🧵) #ACL2024NLP #NLProc


By computing the correlation between hardness measures and performance, we compare GeoHard with baseline metrics, specifically the aggregation of instance-level hardness metrics on eight NLU datasets.

GeoHard outperforms the instance-level aggregation by more than 59%! 🤯

(4/🧵)

#ACL2024NLP #NLProc
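The comparison in this step amounts to correlating a class-wise hardness score with observed per-class performance. A minimal sketch with made-up numbers (purely illustrative; these values are not from the paper):

```python
import numpy as np

# Hypothetical per-class hardness scores and per-class accuracies on
# some NLU task (illustrative assumptions, not the paper's data).
hardness = np.array([0.42, 0.65, 0.31, 0.80, 0.55])
accuracy = np.array([0.88, 0.74, 0.93, 0.61, 0.79])

# A strongly negative Pearson correlation means the metric ranks harder
# classes as lower-performing, i.e. it is predictive of class difficulty.
r = np.corrcoef(hardness, accuracy)[0, 1]
```

A rank correlation such as Spearman's would serve the same purpose if only the ordering of classes matters.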


🤔 How do we measure the hardness of a class?
GeoHard to the rescue! It incorporates both inter-class and intra-class measures from class-wise semantics.
In the embedding space, greater diversity within a class and closer distances between classes indicate higher hardness.

(3/🧵) #ACL2024NLP #NLProc
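The intuition in this step (more diversity within a class, smaller distances to other classes ⇒ harder) can be sketched as a toy score over embeddings. The function name and the intra/inter ratio below are illustrative assumptions, not GeoHard's exact formulation:

```python
import numpy as np

def class_hardness(embeddings, labels):
    """Toy class-wise hardness score in embedding space.

    For each class c:
      intra(c) = mean distance of class-c embeddings to their centroid
                 (within-class diversity)
      inter(c) = mean distance from c's centroid to the other centroids
                 (class separation)
    Higher intra and lower inter suggest a harder class, so we score
    hardness(c) = intra(c) / inter(c).
    """
    classes = np.unique(labels)
    centroids = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    scores = {}
    for c in classes:
        members = embeddings[labels == c]
        intra = np.linalg.norm(members - centroids[c], axis=1).mean()
        others = [centroids[o] for o in classes if o != c]
        inter = np.mean([np.linalg.norm(centroids[c] - o) for o in others])
        scores[c] = intra / inter
    return scores
```

On synthetic data, a diffuse class scores higher than a tightly clustered one, matching the intuition above.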


🎯 We've found a consistent pattern of class-wise difficulty across various language models, paradigms and human annotations on eight NLU datasets.

• Fine-tuned LMs: RoBERTa/OPT/Flan-T5
• In-context learning: Llama/OPT

Class-wise difficulty is an intrinsic feature! 🔍🤖

(2/🧵) #ACL2024NLP #NLProc


Are all classes in #NLProc tasks equally difficult to learn? 🤔
In our #ACL2024NLP paper, we analyze why this is not the case!
Please meet #GeoHard, a metric to measure class-wise difficulty 🔍📊 ! 🧵(1/9)

📆 Poster: Tue, Aug 13, 12:15 ICT

📰 arxiv.org/abs/2407.12512


Meet our fellow researchers representing UKP Lab at #ACL2024NLP: Iryna Gurevych, Qian Ruan, Justus-Jonas Erker, Indraneil Paul, Fengyu Cai, Sheng Lu, Haishuo Fang, Furkan Şahinuç, Kexin Wang, Haau-Sing Li, Andreas Waldis, and a very special guest from BathNLP Lab, Harish Tayyar Madabushi!


Interested in our research? At these times you can see the presentations of papers co-authored by our colleagues at #ACL2024NLP 🇹🇭!

[Own work] On Measuring Faithfulness or Self-consistency of Natural Language Explanations

Excited to share my ACL 2024 presentation on my second-to-last PhD paper! 🎓📚
Watch it here if you are also interested in LLM self-explanations: 🤖 youtu.be/b3wbTOZXRyI

Are you joining ACL 2024 in Bangkok? Ping me—let's chat! #ACL2024NLP #PhDLife


⭐️🗞️ Accepted to ACL 2024 main conference! #ACL2024NLP

Neural nets can in theory learn formal languages such as aⁿbⁿ & Dyck. Yet no one ever finds such nets using standard techniques. Why?

We suggest that the culprit might have been the objective function all along 👇

arxiv.org/abs/2402.10013

ACL 2024 accepts 11 UKP papers: We are happy to announce that 11 papers authored or co-authored by UKP members have been accepted for publication at the 62nd Annual Meeting of the Association for Computational Linguistics (ACL) in B...

Congratulations to all authors! We look forward to seeing you in Bangkok 🇹🇭 this August!

➡️ www.informatik.tu-darmstadt.de/ukp/ukp_home...

(17/17) #ACL2024NLP #NLProc


»GeoHard: Towards Measuring Class-wise Hardness through Modelling Class Semantics« by Fengyu Cai, Xinran Zhao, Hongming Zhang, Iryna Gurevych and Heinz Koeppl (16/🧵) #ACL2024NLP (arXiv coming soon)


»DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs« by Haishuo Fang, Xiaodan Zhu and Iryna Gurevych (15/🧵) #ACL2024NLP (arXiv coming soon)


Also, 2 papers have been accepted to the Findings section of #ACL2024NLP. They are:


»On Efficient and Statistical Quality Estimation for Data Annotation« by Jan-Christoph Klie, Rahul Nair, Juan Haladjian and Marc Kirchner (13/🧵) #ACL2024NLP

arxiv.org/abs/2405.11919


»DAPR: A Benchmark on Document-Aware Passage Retrieval« by Kexin Wang, Nils Reimers and Iryna Gurevych (12/🧵) #ACL2024NLP

arxiv.org/abs/2305.13915


»Re3: A Holistic Framework and Dataset for Modeling Collaborative Document Revision« by Qian Ruan, Ilia Kuznetsov and Iryna Gurevych (11/🧵) #ACL2024NLP (arXiv coming soon)


»Dismantling the Misleading Narratives: Reconstructing the Fallacies in Misrepresented Science« by Max Glockner, Yufang Hou, Preslav Nakov and Iryna Gurevych (10/🧵) #ACL2024NLP (arXiv coming soon)
