
Posts by Yoav Goldberg

they are useful.

1 month ago

re the question about trust: I don't know if it's the right question for me. getting access to sources is relevant only in a small fraction of my LLM usages (irrelevant, for example, when it paraphrases, styles, translates, codes, ...), and even there, for me it's not about trust; the sources serve as anchors.

1 month ago

i would note that the entities who serve LLMs do have access to the training data, so they can implement your form of answer grounding, and users still do not need to know the training data composition for the LLM system to be useful.

1 month ago

(re attribution of claims to supporting text: this does not require access to the training data, only to a pile of data that contains texts supporting the claim, or to a search interface over such data)

1 month ago

they did not say "unreliable", they said incorrect sentence structure and odd word choices.

1 month ago

i see why transparency of data is important for research. this was not Emily's claim, though. I am asking why it is important (or even needed) for users?

1 month ago

i think the fruitful aspects of your position on LLMs (which do exist, and some of which you mention in a recent thread) would be much better served if it weren't for the non-stochastic repeated parroting of the "not useful for almost anything" mantra.

1 month ago

bsky.app/profile/yoav...

1 month ago

which usages do you find beneficial?

and why do you think they *require* transparency into the training data?

1 month ago

interestingness is subjective

1 month ago

yes, but some changes are to interesting directions and others less so

1 month ago

they blocked me before i got to blocking them

1 month ago

aaaand i am blocked

1 month ago

lol how i missed bsky

1 month ago

it is not easy, and frankly the research became in many ways less fun due to LLMs, but hey, they work, they are here, and they are amazing and fascinating

1 month ago

i mean, not really? yes, LLMs made 95% of my technical knowledge as an NLP researcher obsolete, and pretty much solved the things we were all working on. most of us said "wow, this is kinda cool" and adapted.

1 month ago

yes. (i mean, comparatively. they perform just as well as they did back then, maybe slightly better)

1 month ago

(linguists mostly ignore it because it is indeed mostly irrelevant for them. yes, one could use it to learn about language, and i think it does potentially highlight some fascinating phenomena, but this is on the fringe of linguistics research and that's ok.)

1 month ago

DL was dismissed/ignored by academia because until the 2010s or so other methods worked much better and were easier to analyze (this is the ML academia, not the Linguistics/NLP academia).

DL was then adopted by NLP academia (albeit a bit late). And some weirdos like emily kept deluding themselves.

1 month ago

i heard it is possible to scrutinize and/or criticize a technology without being delusional

1 month ago

idk, sounds plausible to me that a person who actually uses the technology will understand it and its limitations much better than a person who only reads about it. don't you think?

1 month ago

are these *the only* things LLMs are useful for?

1 month ago

play this

www.puzzlescript.net/play.html?p=...

4 months ago

the trailer mentions the full names of "Mirror Isle" and "Skipping Stones to Lonely Homes", and "Heroes" for Heroes of Sokoban, so I'd be surprised if the intention is to hide anything.

4 months ago

for the record i don't think language is "solved". the parts i cared about solving, though, are to a large extent "solved", to the extent that the remaining "non-solved" parts are imo not linguistic

4 months ago

why?

4 months ago

what's the difference, in your view?

4 months ago

i discuss this in the gist text. this is the more correct way to frame it imo (the env provides observations, which the agent interprets as rewards based on its goals), and it also opens up possible variations in how to think about learning from the env.

4 months ago
Preview: rl-wrong-about-rewards.md

I complain a lot about RL lately, and here we go again.

The CS view of RL is wrong in how it thinks about rewards, already at the setup level. Briefly, the reward computation should be part of the agent, not part of the environment.

More at length here:

gist.github.com/yoavg/3eb3e7...

4 months ago
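The framing in the post above can be sketched in code. This is a minimal illustration (all class and variable names are hypothetical, not taken from the gist): instead of the standard CS setup where the environment's step function returns a reward, the environment emits only observations, and each agent derives its own reward from them according to its goals.

```python
class ObservationOnlyEnv:
    """Environment in the proposed framing: emits observations, never rewards."""

    def __init__(self):
        self.position = 0

    def step(self, action):
        # action is -1 or +1; the env returns only what the agent can observe
        self.position += action
        return self.position


class GoalAgent:
    """Agent that computes its own reward signal from observations."""

    def __init__(self, goal):
        self.goal = goal

    def reward(self, observation):
        # Reward computation lives inside the agent: the same observation
        # yields different rewards for agents with different goals.
        return 1.0 if observation == self.goal else 0.0


env = ObservationOnlyEnv()
agent_a = GoalAgent(goal=2)
agent_b = GoalAgent(goal=-2)

obs = None
for action in (+1, +1):
    obs = env.step(action)

print(agent_a.reward(obs))  # 1.0: the observation matches agent_a's goal
print(agent_b.reward(obs))  # 0.0: same observation, different goal
```

One consequence of this framing, as the post suggests, is that "learning from the environment" becomes separable from "what counts as rewarding": two agents sharing one environment can pursue different goals without the environment knowing about either.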

yes it sucks to be the ICLR organizers today, totally agree

4 months ago