
Posts by cedric

induction vs transduction point holds

with induction you can search because you have a metric to optimise (% train examples correct)

with transduction there is no clear metric to guide search / brute force, so the model needs to get it right in one shot, or come up with a way to guide its own search
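to make the induction point concrete, here's a minimal sketch (all names illustrative, not from anyone's actual solution): if a candidate is a program mapping grids to grids, the fraction of training pairs it solves is a metric you can hill-climb on.

```python
# hypothetical sketch: with induction, each candidate is a program
# (grid -> grid), so its accuracy on the training pairs is a search metric.
# `train_score`, `train_pairs` and the toy task are illustrative only.

def train_score(program, train_pairs):
    """Fraction of training examples the candidate program gets right."""
    correct = sum(1 for inp, out in train_pairs if program(inp) == out)
    return correct / len(train_pairs)

# toy task: "add 1 to every cell"
train_pairs = [([[1, 2]], [[2, 3]]), ([[0]], [[1]])]

candidate = lambda grid: [[c + 1 for c in row] for row in grid]
print(train_score(candidate, train_pairs))  # -> 1.0
```

a transductive model that emits the output grid directly has no such per-candidate score to climb, which is the asymmetry above.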

1 year ago 0 0 0 0

just checked, on the semi-private set ryan got 43 (not that far off, i admit)

1 year ago 1 0 1 0

ok he did use for loops so he didn't hill climb, but you can 'filter good candidates' by taking the solutions that solve 100% of the training examples, and submit only those as solutions

with transduction you can't filter

the challenge rules say you can submit only 2 (3?) solutions per problem
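the filtering step can be sketched in a few lines (a hypothetical illustration, assuming candidates are callables grid -> grid; `filter_candidates` and the cap of 2 submissions follow the rules mentioned above, the rest is made up):

```python
# hypothetical sketch: keep only candidate programs that solve 100% of
# the training examples, then submit at most the allowed number.
# names and the toy task are illustrative only.

def filter_candidates(candidates, train_pairs, max_submissions=2):
    """Return up to `max_submissions` programs that solve every training pair."""
    perfect = [
        prog for prog in candidates
        if all(prog(inp) == out for inp, out in train_pairs)
    ]
    return perfect[:max_submissions]

# toy task: "reverse each row"
train_pairs = [([[1, 2, 3]], [[3, 2, 1]])]
candidates = [
    lambda g: g,                                   # identity: fails the filter
    lambda g: [row[::-1] for row in g],            # reverses rows: passes
    lambda g: [list(reversed(row)) for row in g],  # also passes
]
picked = filter_candidates(candidates, train_pairs)
print(len(picked))  # -> 2
```

this is exactly what transduction can't do: a directly predicted output grid can't be checked against the training pairs, so there's nothing to filter on.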

1 year ago 1 0 1 0

2) he used program synthesis which allows hill climbing on the % of training examples correct

if the o3 prompt that circulates is correct, the o3 score uses transduction (predicting output grid directly), and you can't hill climb there

you can ensemble, but that doesn't help much for hard problems

1 year ago 1 0 1 0

this is a misleading comparison for two reasons

1) that guy got 50% on the public test set, which is easier than the private test set where o3 reached 85% (87?)

1 year ago 1 0 2 0

the official testing procedure is 2 or 3 solutions per problem iirc, don't think chollet would have let them brute force it

it seems they don't use program induction, so they can't hill climb on training examples either

1 year ago 2 0 0 0

hmm are there no bookmarks over here? or did i miss them?

1 year ago 0 0 1 0

imol-workshop.github.io

1 year ago 1 0 0 0

hope to see you all at the IMOL workshop on sunday!

1 year ago 1 0 1 0

my work: scholar.google.com/citations?user=VBz8gZ4AAAAJ

1 year ago 1 0 1 0

in vancouver for @neuripsconf.bsky.social

looking forward to catching up with friends and meeting new ones!

reach out to chat about:
> open-ended learning
> intrinsic motivations
> exploration and diversity search
> social and cultural learning
> llm agents
> other?

1 year ago 1 0 1 0

hi Melanie,
we have a cool workshop on intrinsically motivated open-ended learning with a blend of cogsci and ai on dec 15

@IMOLNeurIPS2024 on X

see program here: imol-workshop.github.io/pages/program/

1 year ago 2 0 1 0

oh cool, what's the paper? i've been thinking it could be the case and was wondering who wrote about it

1 year ago 1 0 1 0
Jobs - Flowers Laboratory FLOWing Epigenetic Robots and Systems

find more info at flowers.inria.fr/jobs/ (other positions are open)

1 year ago 1 0 0 0

balancing exploration and exploitation with autotelic rl

autotelic rl is usually concerned with open-ended exploration in the absence of external reward

how should we conduct an open-ended exploration *at the service* of an external task?

deep rl skills required

1 year ago 2 0 1 0

llm-mediated cultural evolution

we wanna study how llm-based agents can be used to facilitate collective intelligence in controlled human experiments where groups of participants collectively find solutions to problems

this requires some background in cogsci + llms

1 year ago 4 0 1 0

we are recruiting interns for a few projects with @pyoudeyer
in bordeaux
> studying llm-mediated cultural evolution with @nisioti_eleni
@Jeremy__Perez

> balancing exploration and exploitation with autotelic rl with @ClementRomac

details and links in 🧵
please share!

1 year ago 6 6 1 0