
Posts by Preetha Chatterjee

LLMs can repair code, but often miss the broader context developers use every day.
We propose a 3-layer knowledge injection framework that incrementally feeds LLMs with bug, repository, and project knowledge.

Preprint of our ASE '25 paper: arxiv.org/pdf/2506.24015

7 months ago

Error analysis reveals that unresolved bugs are not randomly distributed; they cluster around specific bug types and higher complexity profiles. In particular, Program Anomaly, Network, and GUI bugs remain the most challenging for both models.

7 months ago

Evaluated on 314 real-world Python bugs, we observed consistent gains in both #fixed and Pass@k scores for Llama 3.3 and GPT-4o-mini, demonstrating a 23% improvement over prior work.
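For readers unfamiliar with the metric: Pass@k is usually computed with the standard unbiased estimator (generate n patches per bug, count the c that pass, estimate the chance that at least one of k sampled patches passes). A minimal sketch of that estimator — the paper's own evaluation script may differ:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations (c correct), passes."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples exist, so some draw must succeed
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 candidate patches per bug, 3 of which pass the tests
print(round(pass_at_k(10, 3, 1), 2))  # 0.3
```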

7 months ago

This layered approach offers several advantages:
✅ Allows simpler bugs to be fixed with minimal input, conserving tokens and computation
✅ Scales context progressively, injecting more information only when necessary
✅ Enables analysis by bug type & complexity

7 months ago

1️⃣ Bug Knowledge (e.g., immediate code and test context)
2️⃣ Repository Knowledge (e.g., related files, dependencies, commit history)
3️⃣ Project Knowledge (e.g., documentation, past bug fixes)
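The escalation loop these layers describe can be sketched roughly as follows. All helper names (`llm_fix`, `tests_pass`) are illustrative stand-ins, not the paper's artifact:

```python
# Hypothetical sketch of 3-layer knowledge injection: start with bug
# knowledge only, and add repository then project knowledge only if the
# candidate patch still fails the tests.

def build_prompt(bug_report: str, layers: list) -> str:
    context = "\n\n".join(layers)
    return f"{context}\n\nFix this bug:\n{bug_report}"

def layered_repair(bug_report, knowledge_layers, llm_fix, tests_pass):
    """Escalate through knowledge layers until a candidate patch passes."""
    accumulated = []
    for layer in knowledge_layers:  # bug -> repository -> project
        accumulated.append(layer)
        patch = llm_fix(build_prompt(bug_report, accumulated))
        if tests_pass(patch):
            return patch, len(accumulated)  # layers actually consumed
    return None, len(accumulated)

# Toy demo with a stubbed model: the "LLM" only succeeds once it has
# seen the repository layer, so two layers are consumed.
layers = ["[bug] failing test + code", "[repo] related files", "[project] docs"]
fix = lambda prompt: "patch" if "[repo]" in prompt else "bad"
ok = lambda patch: patch == "patch"
print(layered_repair("off-by-one in parser", layers, fix, ok))  # ('patch', 2)
```

This is also where the token-saving advantage comes from: simple bugs exit the loop after the first (cheapest) layer.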

7 months ago


🌍 The future of #icse is global!
🇧🇷 ICSE 2026 – Brazil #icse2026
🇮🇪 ICSE 2027 – Ireland #icse2027
🌺 ICSE 2028 – Hawaii #icse2028
We can't wait to see you there! Pack your ideas and your passport. 🧳✈️

11 months ago

💡 If you are building, evaluating, or relying on LLMs for software development, please ask yourself: Did it warn you about the hidden security risk?

1 year ago

As a preliminary solution to this problem, we built a CLI tool prototype that integrates static analysis with LLM prompting, aiming to make AI code suggestions more secure by design.
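One way such a prototype could wire static analysis into prompting — a toy sketch only, since the tool's actual checks and interface aren't shown here: an AST pass flags risky calls, and any findings are prepended to the prompt so the model has to address them.

```python
# Illustrative only: a tiny AST-based check for dangerous calls, whose
# findings are injected into the LLM prompt. A real tool would use a
# full static analyzer, not this two-entry denylist.
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
    return findings

def secure_prompt(source: str, question: str) -> str:
    """Prepend static-analysis findings so the LLM must address them."""
    warnings = find_risky_calls(source)
    header = ""
    if warnings:
        header = "Security findings to address:\n" + "\n".join(warnings) + "\n\n"
    return f"{header}{question}\n\n```python\n{source}```"

snippet = "user_input = input()\nresult = eval(user_input)\n"
print(secure_prompt(snippet, "Review this code.").splitlines()[0])
# Security findings to address:
```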

1 year ago

However, when LLMs do warn you, they tend to offer more complete explanations, including potential causes of the vulnerability, exploits, and even fixes.

1 year ago

We evaluated GPT-4, Claude 3, and Llama 3 across 300 real-world Stack Overflow posts containing vulnerable code.

The results?
⚠️ <40% of vulnerabilities flagged
⚠️ As low as 12.6% when code was obfuscated
⚠️ Common issues (e.g., unsanitized input) often missed unless explicitly prompted

1 year ago

LLMs are great at generating code, but are they silently spreading vulnerabilities? TLDR: Yes.

In our latest EMSE paper, we look into: when developers unknowingly share vulnerable code with LLMs, do these models proactively raise security red flags? 🧵

👉 Read the paper: arxiv.org/abs/2502.14202

1 year ago
Post image

Delighted to share that our paper, led by my PhD advisee Ramtin Ehsani, “Towards Detecting Prompt Knowledge Gaps for Improved LLM-guided Issue Resolution,” has been accepted to the Research Track of MSR 2025.

Preprint: soar-lab.github.io//papers/MSR2...

1 year ago

I can now run a GPT-4 class model on my laptop

(The exact same laptop that could just about run a GPT-3 class model 20 months ago)

The new Llama 3.3 70B is a striking example of the huge efficiency gains we've seen in the last two years
simonwillison.net/2024/Dec/9/l...

1 year ago

Congrats!!

1 year ago
Post image

#NeurIPS2024 paper 3, Assemblage - the dataset of source-to-binary projects compiled from GitHub that you've dreamed of but never had before! Collab with @krismicinski.bsky.social and a multi-year effort to get to @NeurIPSConf @BoozAllen arxiv.org/abs/2405.03991

1 year ago
Post image

🎉 Thrilled to share that our paper (with Ramtin Ehsani and @rezapour.bsky.social) has been accepted at NLBSE'25, co-located with @icseconf.bsky.social! 🎉

Our work shows promise in improving toxicity detection in OSS using moral values & psycholinguistic cues. Preprint coming soon.

1 year ago

Can you please add me here?

1 year ago