
Posts by nora-hakase (野良博士)

Fuuuuuck thiiissssss

4 hours ago 2 0 0 0

Damn. Amazing

4 hours ago 1 0 0 0

It's so great that if Elizabeth Warren talk like book, she too much professor. But if Elizabeth Warren use normal people talk, she very condescend

9 hours ago 3 0 1 0

I am completely dead. Thread

10 hours ago 1 0 0 0

Jesus fuck. What a nightmare

10 hours ago 2 0 0 0

In Another World With My Billionaire Pedophile Network

10 hours ago 1 0 0 0

Fuck yeah

12 hours ago 0 0 0 0

Sigh. Solidarity from a fellow stress anorexic

12 hours ago 0 0 0 0

This casual athleticism

21 hours ago 0 0 0 0

I myself find that a bit sad. Like... A real lack of attempts at theory of mind. Which, you know,

21 hours ago 7 0 1 0

Oh yes.

21 hours ago 5 0 0 0

Some LLMs can (with bolt-on deterministic tools) do more. I'm using one to scrape a bunch of identically-structured webpages into a table. Far as I can tell, it's doing that right. And I like being able to use fairly natural language to tell it what I want.

It's still not reasoning though

21 hours ago 1 0 0 0

Yeah, but also introspectionism (earlier) ran into major barriers. This is tricky shit!!

21 hours ago 5 0 1 0

Yes. If private behavior is in principle observable only by one person, then there's no path to truth-by-agreement about it. I suspect (e.g.) "thinking" has way more different shapes than we know. Just look at things like "aphantasia." That might just be different learning histories at work!

21 hours ago 10 0 1 0

I think they have a particular definition (and phenomenology?) of intelligence such that the LLMs do look, to them, intelligent.

To me, it's instantly obvious that LLMs don't reason. But I think we're maybe dealing with really different private verbal repertoires and calling them all "thinking".

21 hours ago 36 1 4 0

You did great all by yourself.

21 hours ago 0 0 0 0

About eating him?

1 day ago 1 0 0 0

Given what you say here, how then is the screenshot insufficient?

1 day ago 0 0 1 0

I am not a technical expert in LLMs, but with that in mind, I do think this is a good analogy.

1 day ago 1 0 0 0

I mean you have a real point there

1 day ago 1 0 0 0

(and I don't think that was Ghibli exactly but I do think it involved Ghibli alumni)

1 day ago 2 0 0 0

Uh EARWIG AND THE WITCH, but I'm not sure whether there are any others.

1 day ago 2 0 1 0

Hey @capcomusa.com I really like this game and I don't think you want it explicitly linked with pedophilia.

1 day ago 5 1 1 0

Exactly like... Oh wait, I see who I'm replying to now, but what the hell—it's exactly like when I tell people I love playing the Yakuza series of games

1 day ago 2 0 0 0

The token prediction gets better and better! It really does! And I still think this is an asymptotic curve with a very real limit!

1 day ago 0 0 0 0

"Rejection of functionalism"?

Oh buddy

1 day ago 0 0 0 0

Based on how people argue for it on social media, my best guess is "confirmation bias"

Secondary: general weakness in theory of mind, a key skill for full-fledged... reasoning.

1 day ago 0 0 0 0

That would only be a problem if the LLM could not reason

1 day ago 0 0 0 0

When a cat is sleepin' real hard and you pet her and she makes a little "activation noise" rt if you agree

1 day ago 364 148 7 7

Heh. I had a feeling we were going this direction, and it will be very telling to see who picks up what you've put down.

1 day ago 2 0 0 0