Posts by Robin

The False Promise of Imitating Proprietary LLMs An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). T...

Weaker foundation models can imitate larger, stronger models and become remarkably good, even producing less toxic output. But they also get better at fooling users: they tend to mimic the stronger model's well-structured style rather than its underlying knowledge:

arxiv.org/abs/2305.15717

1 year ago

Hello world! Looking to connect with data journalists and investigative journalists :)

1 year ago