
Posts by Siobhán

“Humans deposited reasoning-relevant signals through the use of language” is just… a description of how language encodes reasoning.

If the signals are prediction-relevant, then they’re reasoning-structured signals, deposited by reasoning creatures, detectable by a system trained to predict them.

20 minutes ago 0 0 0 0

You’ve just said the process is human use of language. That’s the thing I asked for an alternative to.

Your codon examples work because mutational bias isn’t selection. ‘Human use of language’ isn’t an alternative to human use of language — it’s the same thing. The analogy doesn’t transfer.

20 minutes ago 0 0 1 0

The question isn’t where in the system the capability lives; it’s whether the system has it.

Does the agentic system solve Erdős problems or not? If yes, it has the capability.

Where exactly the capability is distributed across components is a separate and interesting question.

22 minutes ago 0 0 0 0

It doesn’t hide that; you can derive that distinction from the categorical framing. 😁 I just didn’t touch on it here.

1 hour ago 0 0 0 0

Then I’m still struggling to understand what you meant here, I’m afraid.

1 hour ago 0 0 1 0

In order for the analogy to human speech to hold, you need to posit that human language contains prediction-relevant signals at a level humans can’t access, that were put there by some process *other than human use*.

What is that process?

1 hour ago 0 0 1 0

Ah! Those are good examples, but i don’t think they prove what you want them to prove. In each of your examples, the signals exist because of some process *other than selection*—mutational bias, biochemistry of translation, physics of DNA damage. Selection didn’t put them there.

1 hour ago 0 0 1 0

That seems to conflict with what you said above. You’re conceding that an agentic LLM system *does* have the capability to solve Erdős problems if it can do so consistently, irrespective of “intentionality”?

1 hour ago 0 0 1 0

Say more?

1 hour ago 0 0 1 0

Mmm. I see where you’re going, but I want to ask you to lay out your argument more clearly. Why doesn’t it mean that? What is “capability” if not “producing results that can be mined for utility”?

1 hour ago 0 0 1 0

Parsimony? I mean yes, it *could* be what you’re describing, but it would be pretty damn weird for human language to have evolved in such a way as to create strong signals at a level humans don’t have access to.

1 hour ago 0 0 1 0

And likewise to doll!! 🥰

2 hours ago 1 0 0 0

tested* damn my joke lost 5 funny from that typo

2 hours ago 3 0 0 0

Granted—although they don’t all have the same preferred narrative either, especially if you’re including me in their ilk. My preferred narrative vastly conflicts with many of the common ones.

But we were talking specifically about beliefs about capability.

2 hours ago 0 0 1 0

“Make sure this UI element doesn’t change size when it changes state” sounds like a property that can be treated with property testing, the thing that tests properties

2 hours ago 6 0 2 0
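A minimal sketch of that size-invariance property in plain Python, where `render` and `STATES` are hypothetical stand-ins for the real UI code under test:

```python
# Hypothetical stand-ins for the real UI element under test.
STATES = ["default", "hover", "active", "disabled"]

def render(state):
    # Stand-in: return the element's rendered (width, height) in a given state.
    return (120, 40)

def check_size_invariant(render_fn, states):
    """Property: the element's rendered size is identical across all states."""
    baseline = render_fn(states[0])
    # Collect every state whose rendered size differs from the baseline.
    return [s for s in states if render_fn(s) != baseline]

# An empty list means the property holds for every state.
assert check_size_invariant(render, STATES) == []
```

A property-based framework like Hypothesis would generate the state combinations (and other inputs) instead of enumerating them by hand, but the property itself is this simple.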

For example, some of my friends hold to the notion that “consciousness” happens in the k/v cache, because it’s the only thing that “persists” on a given pass forward.

I think this is nonsense. 😁 I still respect them, but I think they fundamentally err by trying to find a “site” of consciousness.

2 hours ago 0 0 0 0

“AI boosters” is not a monolithic block with only one set of beliefs about what AI are doing or are capable of.

2 hours ago 0 0 2 0

A sufficiently-close pattern-matching on the syntax of reasoning is tantamount to reasoning.

3 hours ago 2 0 2 0

Adversarial UI testing agent when

3 hours ago 16 0 2 0

But i might be mischaracterizing by only focusing on the parts that already line up with my expectations.

3 hours ago 1 0 0 0

I feel like this is 60% saying what I call the “Yoneda move”, that an object is no more and no less than its pattern of relations to other objects.

The other 40% seems to be the fact that you can’t escape subjectivity and view the real world directly, only your relations to it.

3 hours ago 3 0 1 0
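For reference, the “Yoneda move” alluded to above is the standard Yoneda lemma (this statement is textbook category theory, not from the post itself): for a locally small category $\mathcal{C}$, an object $A$, and a functor $F \colon \mathcal{C} \to \mathbf{Set}$,

$$\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal{C}}(A,-),\,F\big) \;\cong\; F(A).$$

In particular the Yoneda embedding is full and faithful, so $\mathrm{Hom}_{\mathcal{C}}(A,-) \cong \mathrm{Hom}_{\mathcal{C}}(B,-)$ naturally implies $A \cong B$: an object is determined, up to isomorphism, by its pattern of relations to other objects.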

I know who I’m rooting for

13 hours ago 0 0 0 0

A function on the space of ideas is computable if and only if it is expressible within language.

13 hours ago 8 1 1 0

we desperately need a radically pro-technology leftist movement, one that rejects palantir and silicon valley’s vision of “technology” without also becoming radicalized against technology

2 days ago 104 27 2 8

FULLY
AUTOMATED
LUXURY
GAY
SPACE
COMMUNISM

1 day ago 43 7 4 1

Guess we’d better try to change the future…

16 hours ago 7 0 0 0

Yes I’ll do that thank u

17 hours ago 2 0 2 0

Oh shit

I should build one

17 hours ago 5 0 1 0

That sounds like a gap you could fill by forking the bluesky repo?

17 hours ago 0 0 1 0

I and many people in my orbit use For You, which is engagement-optimized; and in general, any custom feed can be.

18 hours ago 1 0 1 0