
Posts by Brian Cheung

Over the past year, my lab has been working on fleshing out theory + applications of the Platonic Representation Hypothesis.

Today I want to share two new works on this topic:

Eliciting higher alignment: arxiv.org/abs/2510.02425
Unpaired learning of unified reps: arxiv.org/abs/2510.08492

1/9

6 months ago
https://representational-alignment.github.io/2025/

Learn about aligning minds and machines.

Call for papers is live now—join us at
@iclr_conf
#ICLR2025.

representational-alignment.github.io/2025/

1 year ago
[Post image]

🚨Call for Papers🚨
The Re-Align Workshop is coming back to #ICLR2025

Our CfP is up! Come share your representational alignment work at our interdisciplinary workshop at
@iclr-conf.bsky.social

Deadline is 11:59 pm AoE (Anywhere on Earth) on Feb 3rd

representational-alignment.github.io

1 year ago

Is there any downside to having diversity among competing methods of alignment?

If "good" and "ethics" are universal, having many different ways of arriving at them seems safer than unipolar governance.

1 year ago
[Video]

Introducing ASAL: Automating the Search for Artificial Life with Foundation Models

Blog: sakana.ai/asal/

We propose a new method, Automated Search for Artificial Life (ASAL), which uses foundation models to automate the discovery of the most interesting and open-ended artificial lifeforms!

1 year ago