Automatic differentiation in forward mode computes derivatives by breaking functions down into elementary operations and propagating derivatives alongside values. It’s efficient for functions with fewer inputs than outputs and for Jacobian-vector products, and can be implemented, for instance, with dual numbers.
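A minimal sketch of the dual-number approach, assuming a hypothetical `Dual` class (not from any particular library): each number carries a value and a tangent, and every elementary operation propagates both via the usual calculus rules.

```python
import math

class Dual:
    """A number carrying a value and its derivative (tangent)."""
    def __init__(self, val, der=0.0):
        self.val = val
        self.der = der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u * v)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x):
    # Chain rule through an elementary function: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# Differentiate f(x) = x*sin(x) + 3x at x = 2 by seeding the tangent with 1.
x = Dual(2.0, 1.0)
y = x * sin(x) + 3 * x
print(y.val, y.der)  # value f(2) and derivative f'(2) = sin(2) + 2*cos(2) + 3
```

Seeding `der = 1.0` on the input corresponds to a Jacobian-vector product with the unit vector; for multiple inputs you would seed one input at a time (one forward pass per input), which is why forward mode favors few-input functions.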
Posts by Shahroz Butt
A figure from the attached paper showing the difference in output between a benchmark model and one with the super weight removed: the benchmark model generates a reasonable answer, while the one missing the weight generates complete gibberish.
#ai, #ml or #llm people here, what do you think about the “super weight” paper?
TL;DR: deleting a single weight from a 7B model makes it completely incoherent, destroying its ability to generate legible text.
arxiv.org/pdf/2411.07191
Add me please
What are these starter packs? What are the requirements to get in?
I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux
Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!