
Posts by Seb Krier

🥹 thank you!

6 days ago 8 0 1 0
Post image

kind of like twitter, unfortunately. makes the rare success leapfrogs more interesting as case studies.

2 weeks ago 22 1 2 1
Building AI for the Democratic Matrix: A Technical Research Agenda for Normative Competence and Normative Institutions To maintain democratic resilience, it is essential to build AI agents capable of choosing behaviors that mirror those of the human agents that constitute human democracies.

Democracy isn't a rulebook. It runs on daily interactions where people comply with norms and hold each other accountable. AI agents are about to join that system. We need to build them to read it. New paper with Rakshit Trivedi and Dylan Hadfield-Menell.

2 weeks ago 11 2 4 0

I'm not sure this is obvious at all, in fact 'it from bit' vs 'bit from it' is a pretty fundamental crux in the field!

1 month ago 6 0 1 0
Alexander Lerchner, The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness - PhilArchive Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlyin...

philarchive.org/rec/LERTAF

1 month ago 15 3 0 4
Post image

An excellent paper for anyone interested in a rigorous physicalist argument against computational functionalism. Alex is a fantastic, careful thinker and has influenced my views a lot; we're working on a broader blog post breaking these concepts down, stay tuned!

1 month ago 56 5 12 5

😈

1 month ago 1 0 0 0

This is good, provided it's good at knowing where it's appropriate to push for these goals vs where it isn't! (E.g. maybe I don't care about eating unhealthily when I'm on holiday etc)

1 month ago 1 0 1 0

The answer is alright but why relate this to my prior work on distributional AGI? It's not necessarily wrong, but detracts from what I'm trying to understand and feels like it's shoehorning ideas that are related and I'm predisposed to like, but not really needed to answer my query accurately. 4/4

1 month ago 13 0 0 0

In the screenshot above I'm trying to understand Morgenstern and Von Neumann's mathematical theory of microeconomics, and relate it to how people model AIs in the abstract. 3/n

1 month ago 10 0 1 0

It feels like intellectual sycophancy and makes me doubt the answer. Obviously models sometimes push back but there's no clear demarcation of when they do so and why. 2/n

1 month ago 12 0 1 0
Post image

The memory feature can be very useful at times, but with academic work where I'm trying to understand ideas as objectively as I can and work out what is true, I'm afraid it slants the answers to relate to my existing beliefs in a way that is ultimately unhelpful. 1/n

1 month ago 42 2 7 0

I find this a strange reaction. Stuff like this has happened to me, and I feel no desire to cut the "offending" people out of my life. A really normal thing for humans to do is to reason through dialogue, especially when it comes to relationships.

1 month ago 16 1 4 1

someone suggested to me the other day that the biggest ai haters are the "smartest guy at thanksgiving dinner" types, because LLMs actually displace their social role of being stuck up know-it-alls to uninformed people

it's sort of obviously true if you look at certain banner examples

1 month ago 306 24 23 1

😭

1 month ago 2 0 0 0

I'll get there, need a gradual transition

1 month ago 2 0 1 0
Post image
1 month ago 43 3 2 1

there can be good reasons to stay there despite it being an evil lunatic asylum

1 month ago 7 0 2 0

misaligned!!

1 month ago 5 0 0 0

is it safe to come back here or will any view sympathetic to AI be met with rude/aggro replies

1 month ago 72 1 23 0
"AI becomes the government" is dystopian: it leads to slop when AI is weak, and is doom-maximizing once AI becomes strong. But AI used well can be empowering, and push the frontier of democratic / decentralized modes of governance.

The core problem with democratic / decentralized modes of governance (including DAOs on ethereum) is limits to human attention: there are many thousands of decisions to make, involving many domains of expertise, and most people don't have the time or skill to be experts in even one, let alone all of them. The usual solution, delegation, is disempowering: it leads to a small group of delegates controlling decision-making while their supporters, after they hit the "delegate" button, have no influence at all. So what can we do? We use personal LLMs to solve the attention problem! Here are a few ideas:

## Personal governance agents

If a governance mechanism depends on you to make a large number of decisions, a personal agent can perform all the necessary votes for you, based on preferences that it infers from your personal writing, conversation history, direct statements, etc. If the agent is (i) unsure how you would vote on an issue, and (ii) convinced the issue is important, then it should ask you directly, and give you all relevant context.

## Public conversation agents

Making good decisions often cannot come from a linear process of taking people's views that are based only on their own information, and averaging them (even quadratically). There is a need for processes that aggregate many people's information, and then give each person (or their LLM) a chance to respond *based on that*. This includes:

* Inferring and summarizing your own views and converting them into a format that can be shared publicly (and does not expose your private info)
* Summarizing commonalities between people's inputs (expressed as words), similar to the various LLM+pol.is ideas

## Suggestion markets

If a governance mechanism values "high-quality inputs" of any type (this could be proposals, or it could even be arguments), then you can have a prediction market, where anyone can submit an input, AIs can bet on a token representing that input, and if the mechanism "accepts" the input (either accepting the proposal, or accepting it as a "unit" of conversation that it then passes along to its participant), it pays out $X to the holders of the token. Note that this is basically the same as https://firefly.social/post/x/2017956762347835488

## Decentralized governance with private information

One of the biggest weaknesses of highly decentralized / democratic governance is that it does not work well when important decisions need to be made with secret information. Common situations: (i) the org engaging in adversarial conflicts or negotiations, (ii) internal dispute resolution, (iii) compensation / funding decisions. Typically, orgs solve this by appointing individuals who have great power to take on those tasks. But with multi-party computation (currently I've seen this done with TEEs; I would love to see at least the two-party case solved with garbled circuits https://vitalik.eth.limo/general/2020/03/21/garbled.html so we can get pure-cryptographic security guarantees for it), we could actually take many people's inputs into account to deal with these situations, without compromising privacy. Basically: you submit your personal LLM into a black box, the LLM sees private info, it makes a judgement based on that, and it outputs only that judgement. You don't see the private info, and no one else sees the contents of your personal LLM.

## The importance of privacy

All of these approaches involve each participant making use of much more information about themselves, and potentially submitting much larger-sized inputs. Hence, it becomes all the more important to protect privacy. There are two kinds of privacy that matter:

* Anonymity of the participant: this can be accomplished with ZK. In general, I think all governance tools should come with ZK built in
* Privacy of the contents: this has two parts. First, the personal LLM should do what it can to avoid divulging private info about you that it does not need to divulge. Second, when you have computation that combines multiple LLMs or multiple people's info, you need multi-party techniques to compute it privately.

Both are important.
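The delegation rule in the "personal governance agents" section above can be sketched in a few lines. This is a toy illustration, not anything from the post: `infer_vote`, the keyword-matching profile, and the `confidence_floor` / `importance_bar` thresholds are all invented stand-ins for a real preference model.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    description: str
    importance: float  # 0..1, how consequential the decision is

def infer_vote(issue: Issue, profile: dict) -> tuple[str, float]:
    """Hypothetical preference model: match stored topic stances
    against the issue text and return (vote, confidence)."""
    for topic, stance in profile.items():
        if topic in issue.description.lower():
            return stance, 0.9
    return "abstain", 0.3  # nothing in the profile matches

def decide(issue: Issue, profile: dict,
           confidence_floor: float = 0.7,
           importance_bar: float = 0.5) -> str:
    vote, confidence = infer_vote(issue, profile)
    # The post's rule: escalate to the user only when the agent is
    # BOTH unsure how they would vote AND convinced the issue is
    # important; otherwise cast the inferred vote directly.
    if confidence < confidence_floor and issue.importance >= importance_bar:
        return "ASK_USER"
    return vote

profile = {"privacy": "yes", "fees": "no"}
print(decide(Issue("raise protocol fees", 0.2), profile))   # → no (inferred)
print(decide(Issue("amend constitution", 0.9), profile))    # → ASK_USER
```

In a real agent the thresholds would presumably be calibrated per user, and the "ask directly" branch would also package up the relevant context, as the post suggests.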


firefly.social/post/ff-669078d24e3d46ad...

2 months ago 33 3 4 3

Very aligned with my writing and views :) happy that 'AI as a governance technology' is starting to take off!

2 months ago 5 0 0 0

if you're an older person who finds words like mogg and maxxing annoying, just start using them. people over the age of 30 have the superpower to end trends by simply adopting them

2 months ago 24848 2899 1057 490
2 months ago 51 6 0 2
A brilliant emerald green bird on a thin brown vine.  The green wings are complemented by a green head ruff, red throat and long pointed bill.  It has an alert black eye and looks defiant to me, but maybe I'm reading too much attitude into a little green bird.

It resembles a hummingbird, but like... 'fatter', I guess?

CREDIT:  DrE11even, Wikimedia


Here's my other new friend from Puerto Rico.

Meet the Puerto Rican Tody which has the unfortunate scientific name of 'Todus mexicanus', thanks to a mix-up of samples by a visiting ornithologist in 1830.

There's a campaign to rename it to 'Todus borinquensis', the Taíno name for the island.

2 months ago 186 40 9 1

I kinda regret going too deep into old school simulators - the aim was to intuition-pump the mechanisms that shape models' outputs, not to say that it's the same today as it was in 2023. But overall I think there are still many simulatorsy elements to model behaviour

2 months ago 2 1 0 0

yes definitely agree with that! need x1000 more tests and logs and evals.

separately I wonder if we'll ever drift away from characters/personas in some cases. snac/qorporate had some great takes on gemini being more of a platform than an assistant

2 months ago 3 0 1 0

🫡

2 months ago 3 0 1 0
Open Character Training: Shaping the Persona of AI Assistants through Constitutional AI The character of the "AI assistant" persona generated by modern chatbot large language models influences both surface-level behavior and apparent values, beliefs, and ethics. These all affect interact...

In retrospect I should've used more examples from arxiv.org/abs/2511.01689

2 months ago 2 0 0 0

but yes pls expand I'm interested :)

2 months ago 5 0 3 0