Same with ai literacy or fluency
Posts by Ben Tsai
“Designers will never have influence without understanding how organizations learn”
productpicnic.beehiiv.com/p/designers-will-never-h...
> what everyone is suddenly obsessed with producing (via vibe coding) is not actually prototypes
Diagram showing how the prediction/error/correction loop is how people learn and acquire memories
Practice means encountering more opportunities for error. Being wrong and then correcting is how we learn. Simply being presented with the right answer, or memorising that answer, won’t stick. The direct path AI offers does not help.
Please make this mandatory in all critical workflows immediately.
you say "i asked chatgpt"
i hear "i asked [an improv comedy group]"
an improv group wrote this report
instead of a therapist i use an improv comedy group
The fun thing about any automation is that it is *most* tempting to use in situations with tight deadlines and cognitive fatigue & *most* impactful for people who don't know how to do it the manual way.
What this means is that when it fails, the operator is structurally *guaranteed* to be fucked.
🚨New preprint and our results are rather concerning...
We find the "boiling frog" equivalent of AI use. Using large-scale RCTs, we provide *causal* evidence that AI assistance reduces persistence and hurts independent performance.
And these effects emerge after just 10–15 minutes of AI use!
1/
oh wow, so SHOCKING, "AI" doesn't exist actually (Apple paper proves LLMs can't math or think or understand a problem)
nitter.poast.org/heynavtoor/s...
Arne here has a great summary of reasons why developers should reject the vibe codist death cult project.
mastodon.social/@plexus@toot...
a cool thing about "cd's" is you push one button and it plays an entire album of music ad free and then (unless you request otherwise) it just stops and awaits further instruction. a very elegant system.
Reflections on the Claude Code source code leak from @techtrenches.dev
“The leak isn’t the story.
The code is the story.”
The chaconne by JS Bach for solo violin
Amy's Dictum:
“AI” doesn’t reveal anything about what computers can do, but it reveals a lot about what humans can’t do
such as a good, thorough, professional job
that’s the most distressing thing about ai code/interface: so many people checked out of doing the real work a long time ago, so…
I tell my students that writing is an exercise in figuring out what you think; it's not a place to deposit what you've already worked out.
If you step on a frictionless surface, momentum takes over and you just slide; there's nothing to interact with, which means there's nowhere to stand.
This is so, so well-articulated.
Highlighted section of the Copilot ToS which says "Copilot is for entertainment purposes only"
www.microsoft.com/en-us/micros... is it good to build an entire economy and software infrastructure on this?
Thank you for this piece. Everything resonates. I am working on a framework for teams to assess AI risk and plan to heavily reference your insights. Thank you.
"If No One Pays for Proof, Everyone Will Pay for the Loss" freakonometrics.hypotheses.org/89367 (back on "AI still doesn’t work very well in business, businesses are faking it, and a reckoning is coming" www.theregister.com/2026/03/17/a...)
A marketing email from Slack for a Webinar, with the subject: “Tomorrow: See how Anthropic, MrBeast, and Salesforce are using Slack to move faster”
Legion of Doom levels of “nightmare blunt rotation” here
To add to the list of things we don't talk about enough, driverless taxis don't work for people who need a little assistance.
people view themselves as computers, like they can load in information, but that’s not how it works at all.
all learning - even book learning - is experiential. handwriting, restating, organizing are all learning tasks.
you can’t “buy” it done by someone else & expect it to work at all
I was not surprised to learn that Georgetown, like most American colleges and universities, has succumbed to the pressure to appear part of the “AI” in-crowd (and to the temptation of the resources being made available to those in that crowd). But even though everything I have done in my professional life has been in some way based on the expectation that institutions will tend towards corruption, corrosion and capture, when I think about what this particular instance of that phenomenon signifies for you, the students of Georgetown, I feel very sad and angry. And I decided that the best thing to do with that sadness and anger would be to write to you all directly about why this decision by your university, which may seem on the surface to be an example of garden-variety corporate thoughtlessness, should disturb you deeply, and provoke you to fight back.
Emily Tucker, the Executive Director of @georgetownprivacy.bsky.social, remains stellar and GOATed.
medium.com/center-on-pr...
The paradox is: to get better at ai, you need to not use ai
The danger to my job from AI isn't that AI can do my job, it's that my job is made even more precarious by the way AI is shaping ideas of the value of work. It can't do my job, but it can be part of convincing people (incorrectly) that my job isn't necessary.
I wrote about how the nature of LLM output is performative. It's always, "what would the response be if I wanted to sound like a person answering your prompt?"
bentsai.org/posts/perfor...
perfect quip
I read this post by the Tailscale CEO describing the coordination problem in software and thought of your piece
apenwarr.ca/log/20260316
hearing Microsoft is reorganizing its AI team under the banner of "the Copilot System." Also hearing that teams are under pressure to *reduce* AI token use; the remit is that there needs to be "fiscal responsibility in AI ops" and that Claude Code usage is being reduced in favour of Copilot CLI.
Screenshot from paper: 6.1 Coherence in the Eye of the Beholder. Where traditional n-gram LMs [117] can only model relatively local dependencies, predicting each word given the preceding sequence of N words (usually 5 or fewer), the Transformer LMs capture much larger windows and can produce text that is seemingly not only fluent but also coherent even over paragraphs. For example, McGuffie and Newhouse [80] prompted GPT-3 with the text in bold in Figure 1, and it produced the rest of the text, including the Q&A format.21 This example illustrates GPT-3’s ability to produce coherent and on-topic text; the topic is connected to McGuffie and Newhouse’s study of GPT-3 in the context of extremism, discussed below. We say seemingly coherent because coherence is in fact in the eye of the beholder. Our human understanding of coherence derives from our ability to recognize interlocutors’ beliefs [30, 31] and intentions [23, 33] within context [32]. That is, human language use
Screenshot continued: takes place between individuals who share common ground and are mutually aware of that sharing (and its extent), who have communicative intents which they use language to convey, and who model each others’ mental states as they communicate. As such, human communication relies on the interpretation of implicit meaning conveyed between individuals. The fact that human-human communication is a jointly constructed activity [29, 128] is most clearly true in co-situated spoken or signed communication, but we use the same facilities for producing language that is intended for audiences not co-present with us (readers, listeners, watchers at a distance in time or space) and in interpreting such language when we encounter it. It must follow that even when we don’t know the person who generated the language we are interpreting, we build a partial model of who they are and what common ground we think they share with us, and use this in interpreting their words.
Screenshot continued: Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind. It can’t have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that. This can seem counter-intuitive given the increasingly fluent qualities of automatically generated text, but we have to account for the fact that our perception of natural language text, regardless of how it was generated, is mediated by our own linguistic competence and our predisposition to interpret communicative acts as conveying coherent meaning and intent, whether or not they do [89, 140]. The problem is, if one side of the communication does not have meaning, then the comprehension of the implicit meaning is an illusion arising from our singular human understanding of language (independent of the model).22 Contrary
Final part of screenshot: to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.
Footnote 22, from the screenshot: Controlled generation, where an LM is deployed within a larger system that guides its generation of output to certain styles or topics [e.g. 147, 151, 158], is not the same thing as communicative intent. One clear way to distinguish the two is to ask whether
Went back to Sec 6 of Stochastic Parrots today (in the context of answering a query from a journalist) and was reminded how thoroughly we grounded that part in a discussion of language use -- y'know as communication between people.
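To make the quoted passage concrete: here is a toy illustration (mine, not from the paper or any post above) of the n-gram setup the excerpt mentions, where each word is predicted from the preceding N−1 words using nothing but co-occurrence counts. The corpus and function names are invented for the sketch; it stitches forms together from probabilistic information about how they combine, with no model of meaning or intent, which is the "stochastic parrot" point in miniature.

```python
import random
from collections import Counter, defaultdict

N = 3  # trigram model: predict each word from the preceding N - 1 words

def train(tokens):
    """Count how often each word follows each (N-1)-word context."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - N + 1):
        context = tuple(tokens[i : i + N - 1])
        counts[context][tokens[i + N - 1]] += 1
    return counts

def generate(counts, seed, length=15):
    """Sample each next word in proportion to how often it followed
    the current context in training. Pure form statistics: no meaning,
    no world model, no communicative intent."""
    out = list(seed)
    for _ in range(length):
        context = tuple(out[-(N - 1):])
        followers = counts.get(context)
        if not followers:  # unseen context: nothing left to stitch together
            break
        words, weights = zip(*followers.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = ("the cat sat on the mat and the dog saw "
          "the cat on the mat and the cat ran").split()
model = train(corpus)
print(generate(model, seed=["the", "cat"]))
```

Transformer LMs replace these literal counts with learned statistics over much larger windows, as the excerpt notes, but the generation step is still next-token sampling over forms.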