
Posts by Shahroz Butt


Automatic differentiation in forward mode computes derivatives by breaking functions down into elementary operations and propagating derivatives alongside values. It's efficient for functions with fewer inputs than outputs and for Jacobian-vector products, and can be implemented, for instance, with dual numbers.
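The dual-number idea above can be sketched in a few lines: each quantity carries a value together with its derivative, and the arithmetic rules (sum rule, product rule) update both at once. This is a minimal illustration, not a full AD library.

```python
# Forward-mode AD via dual numbers: each number carries (value, derivative).
# Minimal sketch for illustration only.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val   # primal value
        self.dot = dot   # derivative w.r.t. the seeded input

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        # sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # product rule: (u v)' = u' v + u v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

# Seed dot = 1.0 to differentiate w.r.t. x; one forward pass
# gives both f(x) and f'(x).
x = Dual(2.0, 1.0)
y = f(x)
print(y.val, y.dot)  # 17.0 14.0
```

Note that a single forward pass yields the derivative with respect to one input; this is exactly why forward mode shines when inputs are few.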

1 year ago 36 10 2 0
A figure from the attached paper showing the difference in output between a benchmark model, and one with the super weight removed. The benchmark model generates a reasonable answer, the one where the weight is missing generates complete gibberish

#ai, #ml or #llm people here, what do you think about the “super weight” paper?

TLDR: deleting a single weight from a 7B model makes it completely incoherent, destroying its ability to generate legible text.

arxiv.org/pdf/2411.07191
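The ablation the TLDR describes can be mimicked on a toy model: zero out one weight and compare outputs before and after. Everything below (the 2-layer tanh MLP, the weight coordinate `(0, 0)`) is a made-up illustration of the procedure; the paper identifies its "super weight" in real 7B transformers, and the dramatic collapse is specific to those models.

```python
import numpy as np

# Toy sketch of single-weight ablation (illustrative only; the model
# and the ablated coordinate are arbitrary, not the paper's).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, W1, W2):
    h = np.tanh(x @ W1)    # hidden layer
    return h @ W2          # output logits

x = rng.normal(size=(1, 4))
baseline = forward(x, W1, W2)

W1_ablated = W1.copy()
W1_ablated[0, 0] = 0.0     # "delete" a single weight
ablated = forward(x, W1_ablated, W2)

# In this toy net the change is just a small perturbation; the paper's
# finding is that for one specific weight in a real LLM it is catastrophic.
print(np.abs(baseline - ablated).max())
```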

1 year ago 33 7 3 0

Add me please

1 year ago 1 0 0 0

What are these starter packs? What are the requirements to get in?

1 year ago 0 0 1 0
Alt: a panda bear is rolling around in the grass in a zoo enclosure.

No one can explain stochastic gradient descent better than this panda.

1 year ago 216 32 10 6

I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux

Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!

1 year ago 176 54 101 4