induction vs transduction point holds
with induction you can search because you have a metric to optimise (% train examples correct)
with transduction there is no clear metric to guide search / brute force, so the model needs to get it right in one shot, or come up with a way to guide its own search
Posts by cedric
just checked, on the semi-private set ryan got 43% (not that far, i admit)
ok he did use for loops so he didn't hill climb, but you can 'filter good candidates' by keeping the solutions that solve 100% of training examples, and submit only these as solutions
with transduction you can't filter
the challenge rules say you can submit only 2 (3?) solutions per pb
2) he used program synthesis which allows hill climbing on the % of training examples correct
if the o3 prompt that circulates is correct, the o3 score uses transduction (predicting output grid directly), and you can't hill climb there
you can ensemble, but that doesn't help much for hard pbs
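the 'filter good candidates' trick only works with induction: you generate many candidate programs, score each on the training pairs, and submit only the ones that get everything right. a minimal sketch of that filtering step (hypothetical helper names, toy non-ARC example):

```python
def train_accuracy(program, train_pairs):
    """Fraction of training (input, output) pairs the candidate program solves."""
    correct = sum(1 for inp, out in train_pairs if program(inp) == out)
    return correct / len(train_pairs)

def filter_candidates(programs, train_pairs, max_submissions=2):
    """Keep only candidates solving 100% of training examples,
    then truncate to the submission limit (2 or 3 per problem)."""
    perfect = [p for p in programs if train_accuracy(p, train_pairs) == 1.0]
    return perfect[:max_submissions]

# toy example: learn f(x) = x + 1 from two training pairs
train = [(1, 2), (2, 3)]
candidates = [lambda x: x + 1, lambda x: x * 2, lambda x: x + 1]
picked = filter_candidates(candidates, train)  # drops x * 2, keeps the rest
```

with transduction the model emits the test output grid directly, so there is no program to score against the training examples and this filter has nothing to operate on.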
this is a misleading comparison for two reasons
1) that guy got 50% on the public test set, which is easier than the private test set where o3 reached 85% (87?)
the official testing procedure is 2 or 3 solutions per problem iirc, don't think chollet would have let them brute force it
it seems they don't use program induction, so they can't hill climb on training examples either
hmm are there no bookmarks over here? or did i miss them?
imol-workshop.github.io
hope to see you all at the IMOL workshop on sunday!
my work: scholar.google.com/citations?user=VBz8gZ4AAAAJ
in vancouver for @neuripsconf.bsky.social
looking forward to catching up with friends and meeting new ones!
reach out to chat about:
> open-ended learning
> intrinsic motivations
> exploration and diversity search
> social and cultural learning
> llm agents
> other?
hi Melanie,
we have a cool workshop on intrinsically motivated open-ended learning with a blend of cogsci and ai on dec 15
@IMOLNeurIPS2024 on X
see program here: imol-workshop.github.io/pages/program/
oh cool, what's the paper? i've been thinking it could be the case and was wondering who wrote about it
balancing exploration and exploitation with autotelic rl
autotelic rl is usually concerned with open-ended exploration in the absence of external reward
how should we conduct an open-ended exploration *at the service* of an external task?
deep rl skills required
llm-mediated cultural evolution
we wanna study how llm-based agents can be used to facilitate collective intelligence in controlled human experiments where groups of participants collectively find solutions to problems
this requires some background in cogsci + llms
we are recruiting interns for a few projects with @pyoudeyer
in bordeaux
> studying llm-mediated cultural evolution with @nisioti_eleni
@Jeremy__Perez
> balancing exploration and exploitation with autotelic rl with @ClementRomac
details and links in 🧵
please share!