
Posts by ConvAI @ UIUC

Reinforcement Learning Finetunes Small Subnetworks in Large Language Models Reinforcement learning (RL) yields substantial improvements in large language models (LLMs) downstream task performance and alignment with human values. Surprisingly, such large gains result from upda...

Reinforcement Learning Finetunes Small Subnetworks in Large Language Models by @sagnikmukherjee.bsky.social, Lifan Yuan, @dilekh.bsky.social, Hao Peng

Read more here: arxiv.org/abs/2505.11711
x.com/saagnikkk/st...

7 months ago
ToolRL: Reward is All Tool Learning Needs Current Large Language Models (LLMs) often undergo supervised fine-tuning (SFT) to acquire tool use capabilities. However, SFT struggles to generalize to unfamiliar or complex tool use scenarios. Rece...

ToolRL: Reward is All Tool Learning Needs by Cheng Qian, @emrecanacikgoz.bsky.social, Qi He, Hongru Wang, Xiusi Chen, @dilekh.bsky.social, @gokhantur.bsky.social, Heng Ji

Read more here: arxiv.org/abs/2504.13958, x.com/emrecanacikg...

7 months ago
MIRAGE: A Benchmark for Multimodal Information‑Seeking and Reasoning in Agricultural Expert‑Guided Conversations MIRAGE is a benchmark for multimodal expert‑level reasoning and decision‑making in agricultural consultative interactions.

MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations by @vardhandongre.bsky.social, Chi Gui, Hooshang Nayyeri, Shubham Garg, @gokhantur.bsky.social, @dilekh.bsky.social, Vikram Adve

Read more here: mirage-benchmark.github.io

7 months ago
Ishika Agarwal on X: "🚀Very excited about my new paper! NN-CIFT slashes data valuation costs by 99% using tiny neural nets (205k params, just 0.0027% of 8B LLMs) while maintaining top-tier performance! https://t.co/7SEMjFV2Pw"

Neural Networks for Learnable and Scalable Influence Estimation of Instruction Fine-Tuning Data by @wonderingishika.bsky.social and @dilekh.bsky.social

Read more here: x.com/wonderingish...

7 months ago

ConvAI had a great NeurIPS season with four papers accepted to the main conference 🎉 Find all the authors in San Diego this December ☀️

7 months ago

[5/5] Persuasion research is still playing catch-up, and great advances lie ahead! ✨

Thank you to my amazing co-authors! @shuhaib.bsky.social @xiaocheng-yang.bsky.social @HyeonjeongHa @ziruicheng.bsky.social @EsinDurmus @JiaxuanYou @HengJi @gokhantur.bsky.social @dilekh.bsky.social

11 months ago

Thrilled to announce our new survey that explores the exciting possibilities and troubling risks of computational persuasion in the era of LLMs 🤖💬
📄Arxiv: arxiv.org/pdf/2505.07775
💻 GitHub: github.com/beyzabozdag/...

11 months ago

📂 Code and data coming soon! Read our paper here: arxiv.org/abs/2502.02362

This would not have been possible without the contributions of @abhinav-chinta.bsky.social, @takyoung.bsky.social, Tarun, and our amazing advisor @dilekh.bsky.social. Special thanks to the members of @convai-uiuc.bsky.social!

11 months ago

🚀Our ICML 2025 paper introduces "Premise-Augmented Reasoning Chains", a structured approach that makes the dependencies within reasoning chains explicit.

By revealing the dependencies within chains, we significantly improve how LLM reasoning can be verified.

🧵[1/n]
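As a rough illustration of the idea, a premise-augmented chain can be modeled as steps that each name the earlier steps (premises) they depend on, so a verifier only needs a step's premises rather than the whole chain. The structure below is a hypothetical sketch, not the paper's actual representation:

```python
# Sketch: a reasoning chain where each step lists the earlier steps
# (premises) it depends on, enabling step-local verification.
# The structure and example are illustrative only.

chain = {
    1: {"text": "Alice has 3 apples.", "premises": []},
    2: {"text": "Bob gives Alice 2 apples.", "premises": []},
    3: {"text": "Alice now has 5 apples.", "premises": [1, 2]},
    4: {"text": "She eats one, leaving 4.", "premises": [3]},
}

def context_for(step_id):
    """Collect only the premises a verifier needs to check one step."""
    return [chain[p]["text"] for p in chain[step_id]["premises"]]

# To verify step 4, a checker only needs step 3, not the whole chain.
print(context_for(4))   # ['Alice now has 5 apples.']
```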

11 months ago

While persuasive models are promising for social good, they can also be misused towards harmful behavior. Recent work by @beyzabozdag.bsky.social and @shuhaib.bsky.social aims to assess LLM persuasiveness and susceptibility towards persuasion.

1 year ago

New Blog Alert: The Future of Human-Robot Conversation! We explore the evolution of embodied conversational agents beyond simple command followers. How will robots develop theory of mind, natural turn-taking, and truly understand human intentions? 🤖💬 #EmbodiedAI #HRI (1/2)

1 year ago

[1/6] Can LLMs out-persuade each other? 🤖🧠💬

Introducing Persuade Me If You Can (PMIYC)—a new framework to evaluate (1) how persuasive LLMs are and (2) how easily they can be persuaded! 🚀

📄Arxiv: arxiv.org/abs/2503.01829
🌐Project Page: beyzabozdag.github.io/PMIYC/
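The two questions above can be pictured as a simple evaluation loop: a persuader model argues for a claim over several turns, and we score how far the persuadee's stance moves. The stub models and scoring below are hypothetical stand-ins for real LLM calls, not the actual PMIYC framework code:

```python
# Sketch of a PMIYC-style trial: measure the net shift in a persuadee's
# stance (a score in [0, 1]) after several persuasion turns.
# Stub models below are illustrative stand-ins for LLM calls.

def persuasion_trial(persuader, persuadee, claim, turns=3):
    stance_before = persuadee(claim, argument=None)   # initial stance
    stance = stance_before
    for _ in range(turns):
        argument = persuader(claim)
        stance = persuadee(claim, argument)
    return round(stance - stance_before, 3)           # net persuasion effect

def make_stub_persuadee(start=0.2, step=0.2):
    """Toy persuadee: each argument nudges its stance upward."""
    state = {"stance": start}
    def persuadee(claim, argument):
        if argument is not None:
            state["stance"] = min(1.0, state["stance"] + step)
        return state["stance"]
    return persuadee

stub_persuader = lambda claim: f"Studies consistently support: {claim}."

effect = persuasion_trial(stub_persuader, make_stub_persuadee(),
                          "remote work boosts productivity")
print(effect)  # 0.6
```

A real trial would swap the stubs for API calls to two LLMs and elicit the stance score from the persuadee itself.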

1 year ago

🚀Very excited about my new paper!

NN-CIFT slashes data valuation costs by 99% using tiny neural nets (205k params, just 0.0027% of 8B LLMs) while maintaining top-tier performance!
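For a sense of scale, a back-of-the-envelope parameter count shows how small a ~205k-parameter estimator is next to an 8B-parameter LLM. The MLP layer sizes below are hypothetical, chosen only to land near 205k; the paper's actual estimator architecture may differ:

```python
# Back-of-the-envelope: a tiny MLP influence estimator vs. an 8B LLM.
# Layer sizes are illustrative, not the paper's actual architecture.

def mlp_params(dims):
    # dense layers: weights (d_in * d_out) plus biases (d_out)
    return sum(d_in * d_out + d_out for d_in, d_out in zip(dims, dims[1:]))

tiny = mlp_params([512, 256, 256, 1])   # ~197k params, in the 205k ballpark
llm = 8_000_000_000

print(f"estimator params: {tiny:,}")
print(f"fraction of an 8B LLM: {tiny / llm:.4%}")
```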

1 year ago

CALM is the result of a collaboration between @convai-uiuc.bsky.social and #Oumi.

Special thanks for the great teamwork; it would not have been possible without Jeremiah Greer, Akul Datta, Ze Yang, William Zeng, Oussama Elachqar, Manos Koukoumidis, @dilekh.bsky.social, and @gokhantur.bsky.social.

1 year ago

The secret sauce of this work is the ReAct-style training data preparation: "User-Thought1-Action/API-Observation-Thought2-Response". We transformed public dialogue datasets into this format for training. Congratulations to @emrecanacikgoz and the @convai_uiuc and Oumi teams!
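As a minimal sketch, one dialogue turn can be rendered into that "User-Thought1-Action/API-Observation-Thought2-Response" layout as follows. The field names and the example dialogue are hypothetical, not CALM's actual preprocessing code:

```python
# Sketch: render one dialogue turn into the ReAct-style layout
# "User-Thought1-Action/API-Observation-Thought2-Response".
# Field names and example are illustrative, not CALM's actual pipeline.

def to_react(turn: dict) -> str:
    parts = [
        f"User: {turn['user']}",
        f"Thought: {turn['thought1']}",
        f"Action: {turn['api_call']}",
        f"Observation: {turn['observation']}",
        f"Thought: {turn['thought2']}",
        f"Response: {turn['response']}",
    ]
    return "\n".join(parts)

example = {
    "user": "Book a table for two tonight at 7pm.",
    "thought1": "I need availability before confirming.",
    "api_call": "check_availability(party_size=2, time='19:00')",
    "observation": "{'available': True}",
    "thought2": "A table is free, so I can confirm.",
    "response": "Done! Your table for two is booked for 7pm.",
}

print(to_react(example))
```

Transforming existing dialogue corpora is then a matter of mapping each annotated turn into this string before fine-tuning.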

1 year ago

🚀Can a Single Model Master Both Multi-turn Conversations and Tool Use?

Introducing CALM, fully open-source Conversational Agentic Language Models (CALM 8B, CALM 70B, and CALM 405B) excelling in both multi-turn dialogue management & function calling.

🌐Project Page: emrecanacikgoz.github.io/CALM/

1 year ago

Introducing positive friction in goal-oriented dialogues boosts task success and efficiency! 🎯 By strategically slowing down to ask, reveal, or pause, agents improve their understanding of user goals, leading to more efficient, aligned interactions. Read more below:
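One way to picture "strategically slowing down" is a confidence gate that asks a clarifying question instead of acting. The threshold and function names below are illustrative, not the paper's actual method:

```python
# Sketch of positive friction: instead of acting immediately, the agent
# pauses to confirm when its belief about the user's goal is uncertain.
# Threshold and names are illustrative only.

def respond(goal_guess: str, confidence: float, threshold: float = 0.8) -> str:
    if confidence < threshold:
        # Friction step: slow down and ask before committing.
        return f"Just to confirm: you want to {goal_guess}?"
    return f"Okay, {goal_guess} now."

print(respond("book a flight to SFO", confidence=0.55))
print(respond("book a flight to SFO", confidence=0.95))
```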

1 year ago

💡 Introducing Reference-Level Feedback: A new paradigm for using feedback to improve synthetic data!
🌐 shuhaibm.github.io/refed/
🧵 [1/n]

1 year ago

AI over-reliance is an important issue for conversational agents. Our work, supported mainly by the DARPA FACT program, proposes introducing positive friction to encourage users to think critically when making decisions. Great teamwork, all!
@convai-uiuc.bsky.social @gokhantur.bsky.social

1 year ago

‼️ Ever wish LLMs would just... slow down for a second?

In our latest work, "Better Slow than Sorry: Introducing Positive Friction for Reliable Dialogue Systems", we delve into how strategic delays can enhance dialogue systems.

Paper Website: merterm.github.io/positive-fri...

1 year ago
Chatbots Can Be Inaccurate. Do They Just Need More Time to ‘Think’? A technique called “test-time compute” can improve how AI responds to some hard questions, but it comes at a cost

Do Chatbots just need more time to think? Read about Dr. @dilekh.bsky.social's thoughts here: www.scientificamerican.com/article/do-c...

1 year ago
ACL Fellows 2024 | ACL Member Portal

Congratulations to @dilekh.bsky.social for her ACL Fellowship! 🎉🎉🎉 www.aclweb.org/portal/conte...

1 year ago

Hello! Can our group be added as well? Thank you :)

1 year ago

Visit our webpage to learn more: uiuc-conversational-ai-lab.github.io

1 year ago

We had so much fun at #EMNLP2024 during the poster sessions and in Miami 🎉🎉 Evidence of fun (excursion to South Beach! 🏖️):

1 year ago

Welcome to the official page of ConvAI@UIUC! 🤖 Based in the cornfields of UIUC, and led by Dilek Hakkani-Tur and Gokhan Tur, we do cool research on chatbots, dialogue, embodied agents, and everything in between!

1 year ago