Complacencies of the peignoir, and late
Coffee and oranges in a sunny chair,
And the green freedom of a cockatoo
Upon a rug mingle to dissipate
The holy hush of ancient sacrifice.
She dreams a little, and she feels the dark
Encroachment of that old catastrophe,
As a calm darkens among water-lights.
Posts by robert p. baird
I liked that book—maybe even loved it. I think she’s brilliant. I think her difficulties are many and I feel bad for her. I think it’s good for her and the world if she can keep writing without too many hassles. I also don’t think we have to pretend that she’s a shining hero here.
With respect, I couldn’t disagree more. Inefficiency is fine, but blessing difficulty is how you get writers insisting that their genius requires being an asshole (or worse) to others. It was a lie even for the Olde Gods, let alone the people leaping to take that trade without the requisite talent.
…But I at least am going to spare some additional sympathy for the people at Deep Vellum. I don’t have any special insight, but small-press publishing is a hard business even in the best of times. As she herself acknowledges (belatedly & insufficiently), they would seem to deserve better than this.
If you’ve ever worked in proximity to an artist like DeWitt—and yes, in this respect, she is a Type, and not a unique butterfly—you recognize that the situation in which she entangled herself is not easily resolved by appeal to high-minded universal principles. It’s tough all around, for everyone…
So much is so bad right now—like, specifically today, the great global It is just appallingly awful—but this is truly wonderful news. I loved ON TRAILS, and I have no doubt that I’m going to love this book just as much. Bet you will, too!
A copy of the book Skepticism and Impersonality in Modern Poetry, by V. Joshua Adams
Just finished this very rewarding read by @vjoshuaadams.bsky.social and can’t recommend it enough. The poems chosen to discuss are lovely to read, and the close readings taught me so much.
It's easy to get depressed about the state of serious book coverage in the US these days, but as long as Parul Sehgal is writing regularly, all is not lost: www.nytimes.com/2026/04/03/m...
Patrick Radden Keefe is one of the few truly greats in our industry:
It's possible that LLMs and AI will break that pattern, and will prove uniquely and universally corrosive to our cognitive abilities, just as Socrates thought writing would be back in the 5th century BCE. I am, to put it mildly, unconvinced.
…Some of those people, and some of those arguments, were correct! The internet, and blogging, and smartphones, and social media, really did have negative effects on our ability to think and think well. I'm enough of a Postmanite to think that's just true. But they didn't only have negative effects.
By all means let's try to maintain our sanity and our judgment. But let's not be stupid or insincere about the baseline we're working from. I was on the internet in 1996, and in 2006, and in 2016, and at each of those points there were people making versions of the same arguments we're hearing now…
And perhaps the most exquisitely ironic aspect of these claims is watching them be advanced by means of a technology, social media, that has done more to unravel our collective epistemic fabric than any other I can think of in my lifetime.
I promise I'm not being blasé when I say that how to think for oneself, and how to know the true from the false, are and always have been two of the hardest challenges in life. But I've seen respectable people literally saying things like, "I don't need LLMs—I have trusty old Google and Wikipedia!"
I will also say that one of the most important things you learn as a TNY fact checker—as I was for two blissful years—is that no medium, certainly not books, has a universal claim on the epistemological high ground. The libraries are full of volumes that are full of wrong information.
I think it's all to the good that people are working out their own ways to use (or not) this new technology. But I will say that one of the three people whose experiments with LLMs convinced me to give them another try last fall is one of the best reporters and writers on staff at TNY.
Yeah, sorry, no: if you can’t come up with 1000 interesting words about a book you’ve been assigned to review, you should probably not be in the business of book reviewing. At the very least you should refuse the assignment. open.substack.com/pub/samleith...
(I’ll also just say that I’m not optimistic about these current abilities sticking around for long. The same forces that made the internet balkanized and proprietary in the first place are still very much with us, and I fully expect they’re going to shape the way AI agents move on the internet too.)
(That you may not want to, for other reasons—ethical or otherwise—is completely understandable. But that’s beside the point I’m making, which is that these things really can and do work for serious work, which is not something I believed even six months ago.)
Again I don’t want to dismiss the other-directed concerns or the constant temptation to let LLMs do more & more for you. But if you’re a grown-up, and can maintain a baseline respect for your intelligence & craft, along with the discipline those demand, you can genuinely use this tech productively.
It’s possible that your interpretation of Robert Caro’s “read every page” is that you should read every page of search results and spend days or weeks trying to find the actual internet location of the material you want to read. That is not my interpretation, and it’s here that LLMs can do wonders.
…up to and including hallucinations, though that’s honestly less of an issue for the frontier models than it was even a year ago. Why bother? you might ask. The answer is that the internet as it currently exists is actively hostile to serious research: everything is balkanized and proprietary.
Another mental model is to think of it as a lazy research assistant: it needs to be told where to go, how to get access, what to look for, and what to bring back. And you need to assume that it will always be trying to find shortcuts that will let it convince itself that it’s accomplished its task…
…use the LLM to write you a python script that counts the Rs in strawberry. Absurd overkill in this case, obviously, but that basic process model works for vastly more complicated tasks. I can’t promise that it eliminates all LLM error; it doesn’t. But it does let them play to their strengths.
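A minimal sketch of the kind of script described above—the deterministic "hard layer" that does the actual counting, rather than the LLM guessing at it—might look something like this (the function name and structure are illustrative, not anything from the thread):

```python
# Hypothetical sketch: the deterministic half of the composite model.
# An LLM drafts a tiny script like this; ordinary code then does the
# counting exactly, the same way every time, instead of the model
# estimating the answer statistically.
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of `letter` in `word`."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # prints 3
```

The point of the overkill example is the division of labor: the soft layer (the LLM) translates intent into code, and the hard layer (the script) produces a verifiable, repeatable answer.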
For me, so far—again, early days—the most helpful way to design processes involving LLMs is a composite model: you use the AI as the soft, flexible, interface layer, and you use traditional deterministic software for the hard deterministic layer. Eg: don’t ask an LLM to count the Rs in “strawberry”…
LLMs are a fundamentally different kind of software. I think people sort of get that they’re statistical inference machines, but it’s really hard to get used to working with a non-deterministic piece of software after you’ve been trained by your computer to work in the deterministic mode.
In this as in so many areas I think a lot of people get bad results from LLMs because they don’t really get the way they differ from traditional software. We’ve been trained by our technologies to think of computers as deterministic and logic-driven: the same input gives the same output, every time…
I’ve found ways to make AI remarkably useful for research, but for anything even marginally important to my work these absolutely do *not* involve treating it like a Google search (“tell me about X”). That’s the fastest way to get hallucinations and/or regurgitated (plagiarized) primary sources.
These are all live concerns worth hard thinking. But, with the stipulation that it’s early days, two ground rules have served me pretty well so far: 1/ AI doesn’t write for me: no drafting, no editing, no proofreading; 2/ AI doesn’t “read” for me anything I need to know or even halfway care about.
A lot of AI panic, understandably, is about what *other* people are doing with the tech: how it’s being misused/abused by students, companies, scammers, etc. A smaller but growing concern (e.g. in E. Klein’s and M. O’Rourke’s recent NYT pieces) is about how LLMs affect their primary users.…