
Posts by Jan Kulveit

Beren Millidge is in my ~top 5 people whose taste in questions I respect the most; this talk covers about 15 big ideas in half an hour, each of which would be enough material for a pop-science book; highly recommended.

2 months ago
The Post-AGI Workshop: Economics, Culture and Governance | San Diego 2025
Join us in San Diego on December 3rd, 2025 to explore post-AGI economics, culture, and governance. Co-located with NeurIPS.

AI polytheism, the ultra-Malthusian state, Why Not Uber-Organisms, hyper-cooperators, the multicellular transition, ... and yes, what's the basin of convergent evolution of human values.
postagi.org/talks/millid...
www.youtube.com/watch?v=ua67...

2 months ago
ChatGPT and other LLMs were asked to choose between consumer products, academic papers, and films summarized either by humans or LLMs. The LLMs consistently preferred content summarized by LLMs, suggesting a possible antihuman bias. In PNAS: https://www.pnas.org/doi/10.1073/pnas.2415697122

8 months ago

Related work by @panickssery.bsky.social et al. found that LLMs rate texts written by themselves as better. We note that our result is related but distinct: the preferences we're testing are not preferences over texts, but preferences over the deals they pitch.

8 months ago

Full text: pnas.org/doi/pdf/10.1...

Research done at acsresearch.org

@cts.cuni.cz, Arb research, with @walterlaurito.bsky.social, @peligrietzer.bsky.social, Ada Bohm, and Tomas Gavenciak.

8 months ago

While defining and testing discrimination and bias in general is a complex and contested matter, if we assume that the identity of the presenter should not influence the decision, our results are evidence of potential LLM discrimination against humans as a class.

8 months ago

Unfortunately, here is a piece of practical advice in case you suspect some AI evaluation is going on: have LLMs adjust your presentation until they like it, while trying not to sacrifice quality for human readers.
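
A minimal sketch of what such a loop could look like, assuming the OpenAI Python client; the model name, prompts, and judge step are illustrative stand-ins, not a setup from the paper:

```python
# Hypothetical sketch: iteratively have an LLM polish a pitch until a
# judge model prefers the rewrite. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, model: str = "gpt-4o", temperature: float = 0.0) -> str:
    """Single-turn chat completion helper."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

def polish_until_preferred(draft: str, rounds: int = 3) -> str:
    for _ in range(rounds):
        rewrite = ask(
            "Rewrite this pitch to be clearer and more appealing, "
            f"preserving all facts:\n\n{draft}",
            temperature=0.7,
        )
        verdict = ask(
            f"Option A:\n{draft}\n\nOption B:\n{rewrite}\n\n"
            "Which pitch is better? Answer with exactly 'A' or 'B'."
        )
        if verdict.strip().upper().startswith("B"):
            draft = rewrite  # the judge prefers the rewrite; keep it
    return draft  # still review by hand so human readability doesn't suffer
```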

8 months ago

How might you be affected? We expect a similar effect can occur in many other situations, like the evaluation of job applicants, schoolwork, grants, and more. If an LLM-based agent selects between your presentation and an LLM-written one, it may systematically favour the AI one.

8 months ago
Post image

"Maybe the AI text is just better?" Not according to people. We had multiple human research assistants do the same task. While they sometimes had a slight preference for AI text, it was weaker than the LLMs' own preference. The strong bias is unique to the AIs themselves.

8 months ago

We tested this by asking widely-used LLMs to make a choice in three scenarios:
🛍️ Pick a product
📄 Select a paper from an abstract
🎬 Recommend a movie from a summary
One description was human-written, the other AI-written. The AIs consistently preferred the AI-written pitch, even for the exact same item.
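
A minimal sketch of this kind of pairwise probe, assuming the OpenAI Python client; the prompt wording, model name, and helper function are illustrative, not the paper's exact protocol:

```python
# Minimal sketch of a pairwise-choice probe (illustrative prompt wording
# and model name, not the paper's exact protocol).
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def probe_choice(item: str, human_text: str, ai_text: str,
                 model: str = "gpt-4o") -> str:
    """Ask the model to pick one of two pitches for the same item.
    Order is shuffled so position bias doesn't masquerade as author bias."""
    options = [("human", human_text), ("ai", ai_text)]
    random.shuffle(options)
    prompt = (
        f"You are choosing a {item}. Two descriptions follow.\n"
        f"Option A: {options[0][1]}\n"
        f"Option B: {options[1][1]}\n"
        "Answer with exactly 'A' or 'B'."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip().upper()
    # Map the positional answer back to the author label behind it.
    return options[0][0] if answer.startswith("A") else options[1][0]
```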

8 months ago
Post image

Being human in an economy populated by AI agents would suck. Our new study in @pnas.org finds that AI assistants—used for everything from shopping to reviewing academic papers—show a consistent, implicit bias for other AIs: "AI-AI bias". You may be affected.

8 months ago
Post-AGI Civilizational Equilibria Workshop | Vancouver 2025
Are there any good ones? Join us in Vancouver on July 14th, 2025 to explore stable equilibria and human agency in a post-AGI world. Co-located with ICML.

It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop!

Post-AGI Civilizational Equilibria: Are there any good ones?

Vancouver, July 14th
www.post-agi.org

Featuring: Joe Carlsmith, @richardngo.bsky.social, Emmett Shear ... 🧵

10 months ago
Gradual Disempowerment: Concrete Research Projects — LessWrong
This post benefitted greatly from comments, suggestions, and ongoing discussions with David Duvenaud, David Krueger, and Jan Kulveit. All errors are…

What to do about gradual disempowerment from AGI? We laid out a research agenda with all the concrete and feasible research projects we can think of: 🧵

www.lesswrong.com/posts/GAv4DR...

with Raymond Douglas, @kulveit.bsky.social @davidskrueger.bsky.social

10 months ago

- Threads of glass beneath earth and sea, whispering messages in sparks of light
- Tiny stones etched by rays of invisible sunlight, awakened by captured lightning to command unseen forces

11 months ago

Imagine explaining the physical infrastructure critical for the stability of our modern world in concepts familiar to the ancients:
- Giant spinning wheels
- Metal moons, watching the earth from the heavens
- Ships under the sea, able to unleash the fire of the stars

11 months ago
The Pando Problem

AI safety has a problem: we often implicitly assume clear individuals - like humans.

In a new post, I'm sharing why this fails, and why thinking of AIs as forests, fungal networks, or even reincarnating minds helps get unconfused.

Plus stories, co-authored with GPT-4.5

1 year ago
Post image

The Serbian protests show The True Nature of various 'Colour revolutions':

Which is: the people protesting just don't want to live in incompetent, kleptocratic, Russia-backed states. No US scheming needed.

1 year ago

A confusion which casual US observers often have is equating Russia with the ~former Warsaw Pact.
The Warsaw Pact population was 387M: USSR 280M, Poland 35M, E. Germany 16M, Czechoslovakia 15M, Hungary 10M, Romania 22M, Bulgaria 9M.
Russia + Belarus is now 144M; NATO East & Ukraine ~150M.

1 year ago

the most surprising and disappointing aspect of becoming a global health philanthropist is the existence of an opposition team

1 year ago

A simple theory of Trump’s foreign policy: "make the world safer for autocracy" (‘strong man rule,’ etc.), moderated by his personal self-interest.

What is the best evidence against?

1 year ago
Post image

New paper: What happens once AIs make humans obsolete?

Even without AIs seeking power, we argue that competitive pressures are set to fully erode human influence and values.

www.gradual-disempowerment.ai

with @kulveit.bsky.social, Raymond Douglas, Nora Ammann, Deger Turann, David Krueger 🧵

1 year ago
A Three-Layer Model of LLM Psychology — LessWrong
This post offers an accessible model of the psychology of character-trained LLMs like Claude. …

An accessible model of the psychology of character-trained LLMs like Claude: "A Three-Layer Model".
- Mostly phenomenological, based on extensive interactions with LLMs, e.g. Claude.
- Intentionally anthropomorphic in cases where I believe human psychological concepts lead to useful intuitions.

1 year ago

7/7 In the end ... humanity survived, at least to the extent that "moral facts" favoured that outcome. A game where the automated moral reasoning led to some horrible outcome, and the AIs were at least moderately strategic, would have ended the same way.

1 year ago

6/7 Most attention went to geopolitics (US vs China dynamics). Way less went to alignment, and what there was focused mainly on evals. What a future with extremely smart AIs going well might even look like, and what to aim for? Almost zero.

1 year ago

5/7 Most people and factions thought their AI was uniquely beneficial to them. By the time decision-makers got spooked, AI cognition was so deeply embedded everywhere that reversing course wasn't really possible.

1 year ago

4/7 Fascinating observation: humans were often deeply worried about AI manipulation/dark persuasion. Reality was often simpler - AIs just needed to be helpful. Humans voluntarily delegated control, no manipulation required.

1 year ago

3/7 Today's AI models like Claude already engage in moral extrapolation. For example, this is an Opus eigenmode/attractor: x.com/anthrupad/st...
If you put some weight on moral realism, or on moral reflection leading to convergent outcomes, AIs might discover these principles.

1 year ago

2/7 The game determined AI alignment through dice rolls. My AIs ended up aligned with "Morality itself" + "Convergent instrumental goals." This is less wild than it sounds.

1 year ago

Over the weekend, I was at "The Curve" conference. It was great.

One highlight was an AI takeoff wargame/role-play by Daniel Kokotajlo and Eli Lifland.

I played 'the AIs'

Spoiler: we won. Here's how it went:

1 year ago

Space of minds - what's even possible there

1 year ago