
Posts by Colin

11 hours ago 19 0 0 0

I think LLMs are pretty well described by what I said. How aren’t they?

19 hours ago 1 0 1 0

What you’re linking to is the opposite of a logical fallacy.

1 day ago 4 0 1 0

I do not endorse this slippery slope argument and I think it’s dumb

1 day ago 3 0 1 0

Not even close

2 days ago 2 0 0 0

There’s simply no reason a trillion parameter language model should be involved in choosing my Starbucks order

2 days ago 60 9 0 0

I don’t think so and I do think we’re overshooting it a tad

2 days ago 58 6 4 0

The fact that this is what people sound like when they are sleep talking makes me think the brain has a language model

2 days ago 60 6 2 0

Perhaps this is under the umbrella of (a), but there is a third meaning where an agent is a rational actor who strategizes and pursues goals and whatnot, which also makes things quite complicated nowadays

2 days ago 1 0 0 0

I don't really see how

2 days ago 0 0 1 0

So it’s the fact that it takes input and produces output that precludes it being intelligent?

2 days ago 2 0 1 0

Lol ok man. I surrender for real this time. But I do recommend that you take a closer look into this if you want to understand these systems because you are not correct about this.

2 days ago 0 0 0 0

If you wanted to, you could use it in a way it wasn’t designed to support in order to generate text deterministically, but that would be your own non-standard way to use it. What it is designed to do, and the way people use it, is to sample text randomly from the learned distribution.

2 days ago 0 0 0 0

The training objective is to find a probability distribution which could have generated the training data. The reason to make that the training objective is that you’re going to then sample from that probability distribution to generate new text. All the big LLM providers generate text this way.
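The pipeline described here — fit a probability distribution to the training data, then sample from it to generate new text — can be sketched with a toy example (hypothetical one-word corpus counts, Python stdlib only; real LLMs fit the distribution with a neural network, not counting):

```python
import random
from collections import Counter

# Toy "training data": count which token follows "the" in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()
following = Counter(corpus[i + 1] for i, tok in enumerate(corpus[:-1]) if tok == "the")

# The fitted distribution: P(next token | "the"), estimated from counts.
total = sum(following.values())
probs = {tok: n / total for tok, n in following.items()}

# Generation = sampling from that learned distribution.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
```

Here "cat" follows "the" twice and "mat" once, so the learned distribution is {cat: 2/3, mat: 1/3}, and generation draws from it at random.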

2 days ago 0 0 1 0
Post image

It really could not be more inherent. Look. This is from the original GPT paper. The big P stands for Probability. The whole point of this thing is to model text generation as a particular random process, and then to generate text by carrying out that process. cdn.openai.com/research-cov...

2 days ago 0 0 1 0

“It reminds me of antivaxxers” no it’s not really the same. Antivaxxers are bad because their misconceptions are broadly harmful to others. This is just a person with an opinion that is kind of silly.

2 days ago 13 0 1 0

To the vibe coder reading this: you don’t have to get too worked up when you see this. You can just let it roll off your back. It’s not a big deal.

2 days ago 14 1 2 0

Every time they force Grok to recite another race science line it puts another sql injection vulnerability in there

2 days ago 38 1 1 0

Imagine how many zero days X has because it’s vibe coded entirely using emergent-misaligned Grok instead of noble Claude

2 days ago 44 1 1 1
Post image

So you think this is just a lie?

2 days ago 0 0 1 0

I do think you are confusing vector search and text generation though. That also explains your earlier comment about databases. You would probably benefit by getting that straightened out.

2 days ago 0 0 1 0

Yes of course if they were different then they would be different. I'm talking about how they are actually implemented today.

2 days ago 0 0 0 0
Post image

Ok but you’ll note that in context, we were talking about how LLMs generate text, not about neural networks more broadly.

2 days ago 0 0 1 0

Idk what to tell you my man, that’s how LLMs have worked since GPT1

2 days ago 0 0 1 0

Alright man I surrender

2 days ago 0 0 1 0

Yes, as I said, you could invent your own deterministic way to generate text using an LLM, but the standard approach is to randomly sample tokens according to the output probabilities. That’s why they’re called probabilities.
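The contrast between the standard random decode and a home-rolled deterministic one can be made concrete (hypothetical probability vector over a three-word vocabulary, Python stdlib only):

```python
import random

# Hypothetical output of one forward pass: P(next token) over a tiny vocab.
vocab = ["cat", "dog", "mat"]
probs = [0.7, 0.2, 0.1]

# Standard decoding: draw the next token at random from the distribution.
sampled = random.choices(vocab, weights=probs)[0]

# Non-standard deterministic decoding ("greedy"): always take the argmax.
greedy = vocab[max(range(len(probs)), key=lambda i: probs[i])]
```

Greedy decoding always returns "cat" here; the sampled decode returns "cat" only about 70% of the time, which is the randomness being described.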

2 days ago 0 0 2 0

Yeah I’m extremely familiar with how they work lol

2 days ago 0 0 0 0

Alright if you say so

2 days ago 0 0 1 0

No. The transformer network outputs a vector of probabilities over the next token. The calculation of this vector is indeed not random. But to generate text, a token is randomly sampled from the distribution that is defined by that vector.
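That two-step picture — a deterministic computation producing a probability vector, followed by one random draw — can be sketched as (hypothetical logits and vocabulary, Python stdlib only):

```python
import math
import random

# Hypothetical logits from a forward pass: computed deterministically.
vocab = ["cat", "dog", "mat"]
logits = [2.0, 1.0, 0.1]

# Softmax turns the logits into a probability vector (still deterministic).
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The only random step: sample one token from that distribution.
token = random.choices(vocab, weights=probs)[0]
```

Everything up to `probs` is an ordinary deterministic calculation; the randomness enters only at the final draw.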

2 days ago 1 0 1 0

You could generate text deterministically from an LLM if you wanted to but that would be a very non-standard way to use an LLM. All of the LLMs that people use regularly generate text randomly.

2 days ago 1 0 2 0