
Posts by Disagreeable Me

Just be more irresponsible! Find the most ludicrously wasteful way imaginable of learning whatever it is you're learning.

23 hours ago 1 0 0 0

But at least you can get some value out of your subscription by using it as a learning tool. Or just ask it to code up some educational software for you. There is no problem that can't be solved by irresponsibly throwing enough tokens at it, including meatbrain ignorance.

23 hours ago 0 0 1 0

Use the tokens to feed knowledge into your meatbrain.

23 hours ago 1 0 1 0

Never change. bsky.app/profile/disa...

4 days ago 3 0 1 0
Screenshot showing a bluesky outage caused by a very good tweet about bluesky outages being caused by very good tweets.

This is a very good tweet. Keep it to yourself next time. I don't want any more outages.

6 days ago 2 0 0 0
Islands of Life: Using pictures to think beyond the constants in the fine-tuning debate

We say there is a fine-tuning problem because if we tweak the values of the physical constants, life seems to become impossible. But what if we tweak the structure of the laws of physics instead? open.substack.com/pub/disagree...

1 week ago 3 1 0 0

Anyway, you'll fit right in. After all, most of @keithfrankish.com's posts are on poetry, terrible puns, travel pics, cat pics, Heraklion pics, and nostalgia.

1 week ago 1 0 0 1

Also, drumming updates!

1 week ago 1 0 0 0

My first ever tweet was in Feb 2014, replying to Massimo Pigliucci on the topic of determinism.

2 weeks ago 0 0 1 0
I Don't Get Why the Problem of Evil is a Problem for Theist Phenomenalists: P-zombies to the rescue

Short and sweet post about why I don't think the problem of evil should bother anyone who is both a theist and an anti-functionalist. As with most of my philosophy, my implicit target here is probably my frenemy @philipgoff.bsky.social.
disagreeableme.substack.com/p/i-dont-get...

2 weeks ago 3 1 1 0

... the weather's fuckin terrible.

2 weeks ago 0 0 0 0

So it's both just playing a token game, and also running a simulation in order to win that game. The same could be true of an ideal LLM. It could be simulating an agent with its own goals. Or at least your argument doesn't show it is not.

1 month ago 1 0 1 1

But in order to complete the tasks successfully, I think it's going to need to do what amounts to running an internal simulation. Somewhere in all those weights and biases is an implementation of a solar system simulation.

1 month ago 1 0 1 0

If we trained a neural network to do this task, then with enough data and training and a suitable architecture it could probably do it. The game it is playing is just to produce the right outputs for the given inputs.

1 month ago 0 0 1 0

If you want a system that predicts where the planets will be in 2000 years, then the best way to do it is probably to simulate the planets. Not just spit out whatever looks plausible.

1 month ago 0 0 1 0
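To make the simulate-vs-pattern-match contrast concrete, here's a minimal sketch of the "simulate the planets" option (my own illustration, not from the thread): a two-body Sun–Earth system integrated with a leapfrog (kick–drift) scheme. All constants and function names are my choices for the example.

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, metres

def simulate(days, dt=3600.0):
    """Integrate Earth's orbit around a fixed Sun for `days` days.

    Uses a simple leapfrog (kick-drift) step, which conserves
    orbital energy well over long runs. Returns final (x, y) in metres.
    """
    # Start Earth on the x-axis at 1 AU, moving at circular-orbit speed.
    x, y = AU, 0.0
    vx, vy = 0.0, math.sqrt(G * M_SUN / AU)   # ~29.8 km/s
    steps = int(days * 86400 / dt)
    for _ in range(steps):
        r = math.hypot(x, y)
        ax = -G * M_SUN * x / r**3            # gravitational acceleration
        ay = -G * M_SUN * y / r**3
        vx += ax * dt                          # kick: update velocity
        vy += ay * dt
        x += vx * dt                           # drift: update position
        y += vy * dt
    return x, y

# After one sidereal year Earth should be back near its starting point.
x, y = simulate(365.25)
```

The point of the contrast: this loop gets the answer right in 2000 years for the same reason it gets it right in one year, because it tracks the underlying dynamics, whereas a plausibility-matcher has no such guarantee.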

How stable the agent is is a matter of degree. It's not determinate whether humans persist either, whether I'm the same agent I was ten years ago. As long as the context is stable the agent is stable (for an ideal LLM).

1 month ago 1 0 2 0

Oneira glyka! (Greek: "sweet dreams!")

1 month ago 0 0 0 0

Oh, not if you went to that conference in person I guess. You in Colombia?

1 month ago 0 0 1 0

It's pretty late for you... get some sleep! Talk tomorrow

1 month ago 0 0 1 0

The problem with "that's all it does" is the same as the problem with saying that without consciousness we would just be aggregations of molecules. Just physical systems obeying physics, and that's all they do. Implying that such a physical system would have no real intentions.

1 month ago 0 0 1 0

The problem I have with this is "that's all it does". I think an ideal LLM can be thought of as a platform which conjures agents with appropriate intentions, and those agents then generate speech accordingly. I don't believe, but cannot rule out, that such agents could even be conscious.

1 month ago 1 0 3 0

My interpretation is similarly flexible. The context conjures up an appropriate agent. If the context requires a helpful agent, then a helpful agent is conjured/simulated. If an antagonistic agent is required, that's what you get.

1 month ago 0 0 1 0

BTW, my main axe to grind here is not with your conclusion but with your argument. I don't think LLMs are in fact conscious or are best thought of as having fully fledged intentions, but I don't think your argument works. So I'm mostly thinking of an "ideal" LLM.

1 month ago 0 0 0 0

Seems simpler to me. It's more direct. The grounds are that it predicts that it will say helpful things. Its only means of interacting is via text, so it's (ideally) behaviourally indistinguishable from an agent with the same limitations and a general desire to help.

1 month ago 0 0 2 0

My big problem with your view is that the viability of an interpretation where it is just playing a chat game does not mean that an interpretation where it has more meaningful intentions is unviable. The two are compatible. Like MD the actor and GG the sociopath.

1 month ago 0 0 1 0

To predict what the LLM will say, we can interpret it as a system playing the chat game imitating a helpful assistant, or we can interpret it as a helpful assistant. The two interpretations have equal predictive power. The second is simpler.

1 month ago 0 0 2 0

I said the opposite of what I intended in my message, an ISN'T instead of an IS.

1 month ago 0 0 0 0

(IS, dammit... IS just a bunch of atoms)

1 month ago 0 0 1 0

First give me an example of a human response that can't be predicted on the assumption that the system isn't just a bunch of atoms obeying the laws of physics.

1 month ago 0 0 2 0

I think you do get that predictive power, though. At least potentially. Like we can predict what GG will do by interpreting him as a greedy narcissist, regardless of whether MD is.

1 month ago 0 0 1 0