Just be more irresponsible! Find the most ludicrously wasteful way imaginable of learning whatever it is you're learning.
But at least you can get some value out of your subscription by using it as a learning tool. Or just ask it to code up some educational software for you. There is no problem that can't be solved by irresponsibly throwing enough tokens at it, including meatbrain ignorance.
Use the tokens to feed knowledge into your meatbrain.
Never change. bsky.app/profile/disa...
Screenshot showing a bluesky outage caused by a very good tweet about bluesky outages being caused by very good tweets.
This is a very good tweet. Keep it to yourself next time. I don't want any more outages.
We say there is a fine-tuning problem because if we tweak the values of the physical constants, life seems to become impossible. But what if we tweak the structure of the laws of physics instead? open.substack.com/pub/disagree...
Anyway, you'll fit right in. After all, most of @keithfrankish.com's posts are on poetry, terrible puns, travel pics, cat pics, Heraklion pics, and nostalgia.
Also, drumming updates!
My first ever tweet was in Feb 2014, replying to Massimo Pigliucci on the topic of determinism.
Short and sweet post about why I don't think the problem of evil should bother anyone who is both a theist and an anti-functionalist. As with most of my philosophy, my implicit target here is probably my frenemy @philipgoff.bsky.social.
disagreeableme.substack.com/p/i-dont-get...
... the weather's fuckin terrible.
So it's both just playing a token game, and also running a simulation in order to win that game. The same could be true of an ideal LLM. It could be simulating an agent with its own goals. Or at least your argument doesn't show it is not.
But in order to complete the task successfully, I think it's going to need to do what amounts to running an internal simulation. Somewhere in all those weights and biases is an implementation of a solar system simulation.
If we trained a neural network to do this task, then with enough data and training and a suitable architecture it could probably do it. The game it is playing is just to produce the right outputs for the given inputs.
If you want a system that predicts where the planets will be in 2000 years, then the best way to do it is probably to simulate the planets. Not just spit out whatever looks plausible.
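To make "simulate the planets" concrete, here's a minimal sketch of the kind of computation I mean (my own illustration, not anything from the thread): a toy Newtonian N-body model with a leapfrog integrator. The bodies, masses, and step size are made up for illustration.

```python
# Toy Newtonian N-body simulation: a minimal sketch of "simulate the planets".
# Units: AU, years, solar masses; in these units G is ~4*pi^2.
# Bodies and values are illustrative (Sun + Earth on a roughly circular orbit).

import math

G = 4 * math.pi ** 2  # gravitational constant in AU^3 / (Msun * yr^2)

masses = [1.0, 3e-6]                    # Sun, Earth (solar masses)
pos = [[0.0, 0.0], [1.0, 0.0]]          # Earth at 1 AU (2D for simplicity)
vel = [[0.0, 0.0], [0.0, 2 * math.pi]]  # ~circular orbital speed at 1 AU

def accelerations(pos):
    """Pairwise Newtonian gravitational accelerations."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy
            r3 = r2 * math.sqrt(r2)
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def leapfrog_step(pos, vel, dt):
    """Kick-drift-kick leapfrog: symplectic, so energy drift stays bounded
    over very long integrations. This is why simulation beats curve-fitting
    at a 2000-year horizon: extrapolated guesses diverge, dynamics don't."""
    acc = accelerations(pos)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    acc = accelerations(pos)
    for i in range(len(pos)):
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]

dt = 0.001  # years per step
for _ in range(int(2000 / dt)):  # integrate 2000 years forward
    leapfrog_step(pos, vel, dt)

print("Earth's position after 2000 years (AU):", pos[1])
```

The point isn't the astronomy. It's that a network trained to map dates to planetary positions at that horizon plausibly has to encode something functionally like these dynamics, because pattern-matching without the dynamics falls apart over long timescales.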
How stable the agent is is a matter of degree. It's not determinate whether humans persist either, or whether I'm the same agent I was ten years ago. As long as the context is stable, the agent is stable (for an ideal LLM).
Oneira glyka! (Sweet dreams!)
Oh, not if you went to that conference in person I guess. You in Colombia?
It's pretty late for you... get some sleep! Talk tomorrow
The problem with "that's all it does" is the same as the problem with saying that without consciousness we would just be aggregations of molecules. Just physical systems obeying physics, and that's all they do. Implying that such a physical system would have no real intentions.
The problem I have with this is "that's all it does". I think an ideal LLM can be thought of as a platform which conjures agents with appropriate intentions, and those agents then generate speech accordingly. I don't believe, but cannot rule out, that such agents could even be conscious.
My interpretation is similarly flexible. The context conjures up an appropriate agent. If the context requires a helpful agent, then a helpful agent is conjured/simulated. If an antagonistic agent is required, that's what you get.
BTW, my main axe to grind here is not with your conclusion but with your argument. I don't think LLMs are in fact conscious or are best thought of as having fully fledged intentions, but I don't think your argument works. So I'm mostly thinking of an "ideal" LLM.
Seems simpler to me. It's more direct. The grounds are that it predicts that it will say helpful things. Its only means of interacting is via text, so it's (ideally) behaviourally indistinguishable from an agent with the same limitations and a general desire to help.
My big problem with your view is that the viability of an interpretation where it is just playing a chat game does not mean that an interpretation where it has more meaningful intentions is unviable. The two are compatible. Like MD the actor and GG the sociopath.
To predict what the LLM will say, we can interpret it as a system playing the chat game imitating a helpful assistant, or we can interpret it as a helpful assistant. The two interpretations have equal predictive power. The second is simpler.
I said the opposite of what I intended in my message, an ISN'T instead of an IS.
(IS, dammit... IS just a bunch of atoms)
First give me an example of a human response that can't be predicted on the assumption that the system isn't just a bunch of atoms obeying the laws of physics.
I think you do get that predictive power, though. At least potentially. Like we can predict what GG will do by interpreting him as a greedy narcissist, regardless of whether MD is.