These two assumptions are enough to prove convergence bounds! More generally, this view presents a theoretical framework that unifies existing elements of synthetic data approaches, facilitating reasoning about when they might succeed or fail.
Posts by Sergei Vassilvitskii
1 year ago
Instead of a weak learner, we assume access to models that can perfectly fit an input distribution, which we call strong learners.
But instead of iid samples, we have access only to weak information about the target distribution, i.e., weak data.
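To make the inversion concrete, here is a minimal toy sketch (my own illustration, not the paper's construction): `weak_signal` and `strong_learner` are hypothetical names, the target is a fixed Gaussian, and each weak signal is just a single noisy draw. Individually each signal says little, but aggregating many of them lets a learner that can represent the distribution exactly converge toward the truth.

```python
import random

random.seed(0)
TRUE_MEAN, TRUE_STD = 3.0, 1.0

def weak_signal():
    # Weak data: one noisy draw from the target distribution,
    # nowhere near enough to pin the distribution down on its own.
    return random.gauss(TRUE_MEAN, TRUE_STD)

def strong_learner(mean_estimate, std_estimate=TRUE_STD):
    # Strong learner: can represent any Gaussian in its class exactly
    # once parameter estimates are supplied.
    return (mean_estimate, std_estimate)

# Aggregate many weak signals, then let the strong learner fit the aggregate.
signals = [weak_signal() for _ in range(10_000)]
fitted_mean, fitted_std = strong_learner(sum(signals) / len(signals))
print(abs(fitted_mean - TRUE_MEAN) < 0.1)
```

The aggregation step here is just averaging, so the convergence is the plain law of large numbers; the point is only to show the role reversal relative to boosting, where the learner is weak and the data is strong.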