Posts by Jeffrey M. Binder
First thought in my head: 1964 or 2064?
I'm personally skeptical that hallucination can be solved in the current paradigm, but the Computerworld article is basically misinformation, and people are jumping on it because it confirms their biases.
Plausibly, they might. I am not defending OpenAI's position. My point is that the Computerworld article is misrepresenting its source, and we should raise our standards.
This seems to be making the rounds again, and the framing is highly misleading. The mathematical result only applies to “base” models that haven’t gone through RLHF. The study actually argues that hallucination is _not_ inevitable. www.computerworld.com/article/4059...
Clearly, this shows a man soldering his own finger to the soldering iron’s power supply
This is your annual reminder that Sun Ra made a Christmas song www.youtube.com/watch?v=HEpF...
In 2021, I developed PromptArray, which lets you muck around with the internals of GPT models. I moved on because this method doesn't work with closed-source models like GPT-3, but GPT-OSS makes it possible again. Read if you miss the weirdness of the GPT-2 era! jeffreymbinder.net?p=480
Still in New York, continuing to keep on. We should catch up some time!
Chocolate-covered Leibniz biscuits
There’s also Choco Leibniz, named after the inventor of choco calculus
“Large language models like ChatGPT produce shallow, unoriginal ‘predictive text-y ideas’ and I worry that my students and others will increasingly believe that that’s okay—that there’s nothing better than that to aspire to.”
I do not see how we’re going to have productive academic discussions of "AI" until we stop accepting the marketing that lumps all machine learning methods and technologies into one amorphous thing called "AI." That framing invites minimalist and maximalist responses that merely echo the marketing. We don’t have to do this.
Just thought of one. My mom saw a picture of a corpse-painted black metal band—I think it was Immortal—and said, "I remember bands like that. Like Kiss."
Adam Smith, An Inquiry into the Nature and Causes of the Wealth of Nations

Quoted post by Stella (@antlervel.vet), Sep 28, 2023: "Do I need to spend $100 on bottles of high-end skincare product? Many people would say no. However, the manufacturers of $100 bottles of high-end skincare product would say yes. And it's important to consider all viewpoints ..."
“Griping in the Guts”
Checks out
Is it possible for numbers to get too big? We’ll tell you about a shocking new development, after this.
The Theories and Methodologies cluster on #criticalAI co-edited for PMLA by @ritaraley.bsky.social and myself is now online, open access. It consists of eight interventions by leading voices, as well as our framing essay “AI and the University as a Service.” www.cambridge.org/core/journal...
Today I learned the word "ultracrepidarianism"
Not sure about Llama3, but have you looked at the structured output modes offered by some LLMs? python.langchain.com/docs/how_to/...
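For what it's worth, the core idea behind those structured output modes is just to constrain or validate the model's reply against a declared schema instead of accepting free text. A minimal stdlib-only sketch of the validation half (the Movie schema and the JSON reply are invented for illustration, not real model output or the LangChain API):

```python
import json
from dataclasses import dataclass

# Hypothetical schema we want the model's reply to conform to.
@dataclass
class Movie:
    title: str
    year: int

def parse_movie(raw: str) -> Movie:
    """Validate a model's JSON reply against the Movie schema.

    Raises ValueError / KeyError if the reply is malformed,
    which is the point: bad output fails loudly instead of
    silently flowing downstream.
    """
    data = json.loads(raw)
    return Movie(title=str(data["title"]), year=int(data["year"]))

# Simulated structured reply from an LLM (example text only).
reply = '{"title": "Metropolis", "year": 1927}'
movie = parse_movie(reply)
print(movie.year)  # 1927
```

Libraries like LangChain layer schema declaration and retries on top of this, but the contract is the same: typed fields in, typed fields out.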
A page from the book Language and the Rise of the Algorithm reproduced as a figure in the Chicago Manual of Style.
My book made it to the Chicago Manual of Style! I don't know how I'm going to top this.
*raises hand*
If you see this, quote skeet with a famous landmark you’ve seen.
Album cover of "Momentum" by Monolake.
"Jonathan Sings!" by Jonathan Richman
Thinking of the time when I bought two CDs from a stand at the WFMU record fair—"Momentum" by Monolake and "Jonathan Sings!" by Jonathan Richman—and the seller seemed to have trouble comprehending that the same person could want both of these albums.
This post goes really well with the Wittgenstein I was just reading
A Reddit post with the title "How some of you look like." A meme showing a man pointing at his reflection in the mirror. The text reads "My prompt is not the problem. Claude 3.5 is the problem."
I'm talking about this kind of thing
I’ve been noticing a tendency among LLM enthusiasts to blame the prompt, not the model, whenever things go wrong. It’s snake oil salesman stuff: if my potion doesn’t cure your wounds, you just didn’t have enough faith. It makes the effectiveness of the technology itself unfalsifiable.
Congratulations, Josh!
Got the Elden Ring DLC—and just like that, my acid reflux is back
This bit, and particularly the last sentence I’ve highlighted, is exactly where that “ChatGPT is Bullshit” paper would have benefited from engagement with a discipline that actually looks at questions of meaning and intention, namely literary theory. See critinq.wordpress.com/2023/06/26/a...