
Posts by Samyak Rawlekar

@neuripsconf.bsky.social

Does the "Potential Positive and Negative Societal Impacts" section count toward the 9-page limit?

Thanks!

11 months ago

Hi, I would love to be added to it if possible. I am a PhD student at UIUC working on vision-language models.

1 year ago
WACV 2025 Open Access Repository

(8/8)
Paper: openaccess.thecvf.com/content/WACV...

Project Page: samyakr99.github.io/PositiveCoOp/

#WACV2025 #AI #MachineLearning #ComputerVision #CLIP #MultiLabelRecognition #PromptLearning

1 year ago

(7/8) This work was done at UIUC with
@shubhangb.bsky.social and Prof. Narendra Ahuja

Excited to discuss more at WACV 2025! Come find us at Poster Session 3, 2 March, 11:15 AM to 1 PM

1 year ago

(6/8) TL;DR: If you're using VLMs for MLR, skip negative prompts and use learned embeddings instead!
This saves compute and parameters, and improves performance.

1 year ago

(5/8) Why is Negative Prompting Ineffective?
πŸ” We analyze the LAION-400M dataset and find that less than 0.5% of captions contain negative words.
❌ CLIP simply doesn’t learn meaningful representations for class absence!
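The caption analysis above can be sketched as a simple word-level scan. This is a toy illustration, not the paper's pipeline: the negation word list and the captions below are assumptions for demonstration, and real LAION-400M analysis would stream millions of captions.

```python
# Toy sketch of counting captions that contain negation words.
# NEGATION_WORDS is an illustrative assumption, not the paper's exact list.
NEGATION_WORDS = {"no", "not", "without", "none", "never", "neither", "nor"}

def has_negation(caption: str) -> bool:
    """True if any whitespace-separated token is a negation word."""
    return any(tok in NEGATION_WORDS for tok in caption.lower().split())

captions = [
    "a dog playing in the park",
    "a kitchen with no windows",
    "a man riding a bicycle",
    "a plate without any food on it",
]
frac = sum(map(has_negation, captions)) / len(captions)
print(f"{frac:.1%} of captions contain negation words")
```

On web-scraped alt-text the fraction is tiny (the thread reports under 0.5%), which is why a contrastively trained model sees almost no supervision for absence.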

1 year ago

(4/8) Results on COCO & VOC2007
βœ… PositiveCoOp outperforms existing dual-prompt methods (like DualCoOp)
βœ… A simple vision-only baseline performs surprisingly well, showing prompting isn't always necessary!
βœ… NegativeCoOp performs the worst, showing negative prompting is not optimal

1 year ago

(3/8) We introduce PositiveCoOp and NegativeCoOp:
πŸ”Ή PositiveCoOp learns only positive prompts via CLIP and replaces negative prompts with learned embeddings
πŸ”Ή NegativeCoOp does the opposite.
πŸ”Ή Which one works better? (Spoiler: PositiveCoOp wins! πŸ†)
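The asymmetry can be sketched in a few lines of NumPy. This is a minimal illustration of the idea under stated assumptions, not the paper's implementation: `frozen_text_encoder` stands in for CLIP's text encoder, and all shapes, names, and the softmax-over-two-scores readout are illustrative.

```python
# Minimal sketch: per class, the "positive" embedding comes from learnable
# prompt tokens passed through a frozen text encoder, while the "negative"
# side is a directly learned embedding (no prompt, no text encoder).
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 3  # embedding dim, number of classes (toy sizes)

def frozen_text_encoder(prompt_tokens):
    # Stand-in for CLIP's frozen text encoder: a fixed map tokens -> embedding.
    return np.tanh(prompt_tokens.sum(axis=0))

pos_prompts = rng.normal(size=(C, 4, D))  # learnable positive prompt tokens
neg_embed = rng.normal(size=(C, D))       # directly learned negative embeddings
img = rng.normal(size=D)                  # image feature (from vision encoder)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

pos_emb = np.stack([frozen_text_encoder(p) for p in pos_prompts])
for c in range(C):
    s_pos, s_neg = cos(img, pos_emb[c]), cos(img, neg_embed[c])
    # presence probability from a 2-way softmax over (positive, negative) scores
    p_present = np.exp(s_pos) / (np.exp(s_pos) + np.exp(s_neg))
    print(f"class {c}: P(present) = {p_present:.2f}")
```

NegativeCoOp is the mirror image: the negative side goes through the text encoder as learned prompts, and the positive side becomes a directly learned embedding.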

1 year ago

(2/8) We show that negative prompts hurt MLR performance:
πŸ‘‰ VLMs like CLIP are trained on image-caption data that focuses on what’s present, not what’s absent.
πŸ‘‰ As a result, negative prompts often highlight the same regions as positive ones!

1 year ago

(1/8) Vision-language models like CLIP have been used for multi-label recognition (MLR) by learning both positive and negative prompts, associated with the presence and absence of each class.
But is learning negative prompts actually helping detect absence? πŸ€”

1 year ago

Excited to present our paper "PositiveCoOp: Rethinking Prompting Strategies for Multi-Label Recognition with Partial Annotations" at WACV 2025! @wacvconference.bsky.social πŸ“’

🧡 A thread on what we found! 🧡

1 year ago