
Posts by Ishika Agarwal

6/6 For more details, see:

Paper: arxiv.org/pdf/2502.09969
Code: github.com/agarwalishik...

Thank you so much to @dilekh.bsky.social and @convai-uiuc.bsky.social for their guidance and support during this project 🎉🎉


5/6 Finally, using our influence values, we pick a small subset & fine-tune the model. In our evaluation, we use 4 SOTA influence functions -- NN-CIFT achieves the same performance while using a model 34,000x smaller!
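(Not from the thread: a toy sketch of this selection step, assuming the estimated influence values are arranged in a matrix; the actual NN-CIFT selection procedure is in the linked code.)

import numpy as np

def select_subset(influence: np.ndarray, k: int) -> np.ndarray:
    # influence[i, j]: estimated influence of training point i on val point j.
    # Keep the k training points with the highest total influence.
    totals = influence.sum(axis=1)
    return np.argsort(-totals)[:k]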


4/6 Second, we train the InfluenceNetwork using basic mini-batch gradient descent, then let it estimate the influence for the remaining data. It has a very low error of 0.067!
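(A minimal PyTorch sketch of this training step, assuming the InfluenceNetwork is a small MLP that maps a pair of example embeddings to a scalar influence score; the exact architecture and features are described in the paper.)

import torch
import torch.nn as nn

class InfluenceNetwork(nn.Module):
    # Tiny MLP: (train embedding, val embedding) -> scalar influence estimate.
    def __init__(self, emb_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, e_train, e_val):
        return self.net(torch.cat([e_train, e_val], dim=-1)).squeeze(-1)

def train(model, loader, epochs: int = 10, lr: float = 1e-3):
    # Plain mini-batch gradient descent against the ground-truth influence
    # values computed for the small labeled fraction.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for e_train, e_val, target in loader:
            opt.zero_grad()
            loss_fn(model(e_train, e_val), target).backward()
            opt.step()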


3/6 First, the neural network (called the "InfluenceNetwork") needs to be trained. We compute influence values using existing methods -- but only for a tiny fraction of data (just 0.25%-5%).
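(An illustrative sketch of this labeling step, not the paper's exact procedure; influence_fn stands in for any existing influence function.)

import random

def label_small_fraction(train_set, val_set, influence_fn, frac=0.01, seed=0):
    # Run the expensive influence function on only ~frac of all
    # (train, val) pairs -- 0.25%-5% in the paper.
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(len(train_set)) for j in range(len(val_set))]
    sample = rng.sample(pairs, max(1, int(frac * len(pairs))))
    return {(i, j): influence_fn(train_set[i], val_set[j]) for i, j in sample}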


2/6 Estimating the value of data is expensive.

Past works use LLMs to estimate the influence of data -- we use small neural networks to *learn to estimate* influence, instead. This reduces costs and adapts to new data without heavy recomputation.

Here's how it works:


🚀 Very excited about my new paper!

NN-CIFT slashes data valuation costs by 99% using tiny neural nets (205k params, just 0.0027% of 8B LLMs) while maintaining top-tier performance!


Elated to announce that DELIFT has been accepted to ICLR'25 🎉 Looking forward to discussing it in Singapore!


Congratulations to @dilekh.bsky.social for her ACL Fellowship! 🎉🎉🎉 www.aclweb.org/portal/conte...

Gemini - Challenges and Solutions for Aging Adults | Created with Gemini

The last response from Gemini in this thread may shock you: gemini.google.com/share/6d141b...


Thank you Guneet! Would love to hear more about these stress tests :)


👋


Hey! Would love to be added :)


Can LLMs make us critical thinkers?

TreeInstruct reorients assistant-like LLMs to be instructors that guide students towards understanding their mistakes, without providing direct/indirect answers.

Check out aclanthology.org/2024.finding... (w/ @wonderingishika.bsky.social) to learn more!


All around the theme of data-efficient NLP:

(1) using influence functions to improve language model performance from less data
(2) enabling language models to generate queries for things they don't know


For more details, see:
Paper: arxiv.org/pdf/2411.04425
Code: github.com/agarwalishik...

Thank you so much to Krishnateja, Lucian, and Marina for their help, mentorship, and guidance during this project! 🎉🎉


3. Continual fine-tuning: given a fine-tuned model, enabling it to integrate new and complementary information while mitigating catastrophic forgetting. We find that reducing the dataset helps remove samples that hinder performance, surpassing the performance of the full dataset.


2. Task-specific fine-tuning: given an instruction-tuned model, refining the LLM's expertise in specific domains. We find that pruning the dataset removes noise and keeps relevant examples, achieving better performance than fine-tuning on the full dataset.


1. Instruction tuning: given a base model, fine-tuning a model to follow general instructions. We find that performance drops are minimal when reducing the dataset by 70%.


DELIFT quantifies the information present in a sample wrt an LLM's capabilities. Using submodular functions, DELIFT can automatically adapt the chosen subset based on the objectives in the 3 stages of language model fine-tuning:
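(A generic sketch of greedy submodular selection, using facility location purely as an example objective; DELIFT's actual per-stage utility functions are defined in the paper.)

import numpy as np

def greedy_facility_location(sim: np.ndarray, budget: int) -> list:
    # sim[i, j]: similarity of candidate sample i to target point j.
    # Greedily add the sample with the largest marginal coverage gain.
    selected, cover = [], np.zeros(sim.shape[1])
    for _ in range(budget):
        gains = np.maximum(sim, cover).sum(axis=1) - cover.sum()
        gains[selected] = -np.inf  # never re-pick a chosen sample
        best = int(np.argmax(gains))
        selected.append(best)
        cover = np.maximum(cover, sim[best])
    return selected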


I'm so excited to share my latest paper called DELIFT along with Krishnateja Killamsetty, Lucian Popa, and Marina Danilevsky at IBM Research 🎉

We tackle expensive fine-tuning by selecting a small subset of informative data that targets a model's weaknesses.


TreeInstruct is preferred 78.43% of the time. It solves 14.09% more bugs across all settings, and our questions are 14.18% better at addressing bugs, maintaining relevance, and ensuring logical conversation flow. TreeInstruct also adapts to human students of varying backgrounds.


TreeInstruct estimates the knowledge a student needs to debug their code and devises a conversation plan. It then dynamically constructs a question tree based on its interactions with the student, navigating the knowledge state space till the student comprehends & fixes all bugs.
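(A highly simplified sketch of that loop, with made-up data structures; in the actual system an LLM plans the questions and estimates the knowledge state from the dialogue.)

from dataclasses import dataclass, field

@dataclass
class QuestionNode:
    question: str
    targets: set  # concepts this question probes
    children: list = field(default_factory=list)

def instruct(root: QuestionNode, missing: set, ask):
    # Walk the question tree; ask(q) poses a question to the student and
    # returns the set of concepts their answer demonstrates.
    frontier = [root]
    while frontier and missing:
        node = frontier.pop(0)
        if node.targets & missing:  # skip questions that are no longer relevant
            missing -= ask(node.question)
        frontier.extend(node.children)
    return missing  # empty => student understands and has fixed the bugs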

GitHub - agarwalishika/TreeInstruct: TreeInstruct is a novel method that uses state space estimation and dynamic tree-based questioning for multi-turn Socratic instruction, applied to code debugging.

github.com/agarwalishik...
We apply TreeInstruct to code debugging. Prior works directly give away bugs/fixes, assume single-turn conversations, and only work for one bug. We create a realistic, multi-bug dataset, where the bugs are mutually dependent.

Instruct, Not Assist: LLM-based Multi-Turn Planning and Hierarchical Questioning for Socratic Code Debugging

Can LLMs make us critical thinkers?

TreeInstruct reorients LLMs to be instructors that guide students Socratically to solve problems, instead of assistants that provide direct answers.

Check out our EMNLP 2024 paper at arxiv.org/abs/2406.11709 (w/ @pkargupta.bsky.social) to learn more!


I'd love to be added - thank you!!
