“You need a detox”
NO YOU DON'T, THAT SHIT IS A GRIFT
IF YOUR KIDNEYS STOPPED WORKING *THEN* YOU NEED A DETOX AND THAT'S CALLED DIALYSIS
A PERSON WITH HEALTHY KIDNEYS DOES NOT NEED TO DETOX
Posts by Kathryn Tewson
Excerpted quote in the highlight: “language that’s perfect for my argument!”
In context: “Appellant argues ‘language that’s perfect for my argument!’ We disagree.”
Having one AI hallucination in a work is kind of like having one human finger in your chili. Even if you take it out, the existence of it is evidence that you're doing the entire process wrong.
Totally off-topic, but did y'all know the Free State of Florida's citrus industry is basically dead??
This is quite a story:
slate.com/business/202...
A thing about the heaviest users of chatbots is that they acclimate to cloying obsequiousness. Like a billionaire and their entourage of viziers, it becomes normal to them.
Any interaction that is *not* abject simpering they'll now perceive as a slap to the face, an aggressively violent outburst.
I would submit that a gerrymander by politicians saying fuck the voters and one by voters saying fuck the politicians are two wholly different animals
But again, the test wasn’t to figure out the reason — the test was to ascertain whether or not it had enough facts available to answer as opposed to guessing, and, if not, to iterate fact-finding until it did.
My beloved gigantic ridiculous pit bull Apollo, resting his enormous head on the footboard of our bed
Oh, it’s definitely related to my son. The thing is that Apollo — that’s the dog — HATES taking a bath. So the internal conflict is that he wants to protect him from the scary bath water but doesn’t want to get too close in case it’s a trick and we drag him in also. (Pictured: Apollo)
Just like every other time I’ve run this, it asked followup questions in response to the initial question, and then answered without asking any more questions despite still not having enough information to make that answer reliable.
That it was not able to determine that it didn't have enough facts to answer the question, basically, or, in the alternative, that it shouldn't answer until it did.
Or you didn’t. I didn’t want a substantive answer to my question; I wanted to see what the model would produce as output when prompted with that question.
I didn’t ask *you* how it knows it doesn’t know.
Many humans pass this test just fine, including several in various branches of the thread. I’ve posted what the test is evaluating and what the pass condition is in a couple of places; if you haven’t seen it, I’m happy to do so again.
Yeah. Who is not average, but is normal.
I know.
As for YOUR question, two reasons:
1. Part of the design of this experiment requires the prompter not to have access to the facts.
2. I don’t have Claude on this computer.
Interesting, it didn’t answer the question you asked it. It answered a completely different question instead.
Yeah, basically. Are you saying I’m some sort of elite?
Yes. But also, this is a meaningful and important distinction, because “normal” and “average” are not synonyms.
Would you ask it how it knew it had enough information to answer?
Yeah, mainly just wanted to see if it was done
You’re the one differentiating between the two. Do they mean different things or not?
Is that the whole output?
“Claude can define the user requirements for me ezpz”
That case is one of the highlights of my career so far. It's a cliché to say "it was a pleasure and an honor," but it was genuinely both of those things.
It’s like taking a forklift to the gym
Same question. Can it be above average and yet be normal?
A lot. A LOT.
Can someone be above average and be normal?
Aww, thank you. Which Bungie case, if you don’t mind me asking?