The cover of a recording of Beethoven’s Middle Quartets by the Budapest String Quartet, showing a black and white photo of a quartet of men with string instruments.
A whole bunch of multicolored potatoes.
Basically the same:
THE ORIGIN OF THE WORK OF ART

A painting -- for example van Gogh's portrayal of a pair of peasant shoes -- travels from one exhibition to another. Works are shipped like coal from the Ruhr or logs from the Black Forest. During the war Hölderlin's hymns were packed in the soldier's knapsack along with cleaning equipment. Beethoven's quartets lie in the publisher's storeroom like potatoes in a cellar.
I guess I was really meant to study Heidegger. He draws attention to the thingly character of works of art by comparing Beethoven quartets to potatoes, and I just go, “Yup, checks out.”
Sending happy thoughts!!
Did you know Heidegger was kind of obsessed with nuclear weapons? New article out in Gatherings has both historical and philosophical research on this weird rabbit hole I’ve been living in the last two years: academia.edu/resource/wor...
I can’t wait to read this, Ben. I’m coming back to Heidegger’s critique of technology in my diss reading, perfect timing!
Transcription by Ben Lerner. Quick and interesting so far. I just finished Sea of Rust, which was good but not great.
That is beautiful, Shea!
They can be quite “creative” as partners, though truly out-of-domain thinking is a lot less impressive.
This from Ted is really insightful. I haven’t (yet) seen it with academic work or corp strategy, and maybe I’m just not skilled enough. But if you treat LMs as thinking partners instead of answer machines, the results can be stellar. I’ve seen it in comms, finance, & more.
This is incredible. I am a recovering em-dash abuser, but I will never clean up my act with commas.
Brilliant marketing, but I imagine this is also roughly true in substance. AI coding agents will have gone from “bad even with human supervision” to “it isn’t safe for humans to write code themselves” in the space of ~6-12 months.
I'm from the Country by Tracy Byrd or It's a Great Day to Be Alive by Travis Tritt? Something about the self-conception of the "country" American would be the idea. (There are probably hundreds of songs that would answer that brief!)
Sitting on the metro with my 6 yo right now. Hard agree.
Your perspective is thoughtful and helpful, Berna. For myself, I feel like I can’t even guess at the outlines of what the future looks like, which feels unnerving.
“Take away the agent, and Bob is still a first-year student who hasn't started yet. The year happened around him but not inside him.”
Great description of any use of an LLM that takes away formative struggles in education (reading papers, fixing early mistakes, outlining arguments).
It feels like they’re all over fitting to the pre-LLM data and then claiming that has relevance to the post-LLM era (or even pre-LLM outside their training set). Lots of abuse of statistics and probability too.
It doesn’t seem like any of them are. I ran a (definitely me-generated!) paper through one and it said “53% confidence AI generated” in bold red. Trying valiantly to turn a coin flip into an accusation.
This WSJ op-ed on Pangram was worth reading.
Really insightful, and I love the idea of trusting our (future) selves to preserve games as living things. Especially on the precipice of code no longer being a precious, expensive commodity for many.
Oh man I loved that program.
This was pretty entertaining.
Our 13 yo’s view from the Ponte Sant’Angelo in Rome. She’s getting some good reps in with her camera.
A sign that human creativity is a bottleneck is that this year everyone can generate almost any image or video they can think of for nearly free and the April Fools posts are basically just as bad as any other year.
When I did the algorithmic trust paper in academic finance, it was wild. Required little editing, because I had specified the ideas very carefully and given it my own notes on the literature. And the genre is pretty formulaic. Impossible for me to use when I’m feeling out ideas I can’t specify.
I think friendship is quite useful for humans, just not for bots. And it’s worth investigating how human friendships are changing. For the bots, we could use some new relational concepts. Philosophers need to have things to keep us busy. ;-)
Yes, that’s helpful. I think the typical ways of critiquing human-AI “friendship” idealize the human-human variety and don’t get at what’s troubling about delegating part of our happiness and mental health to bots. Or get at what’s going wrong w/ human friendship nowadays.
I think that is one of Minsky’s “suitcase words” that has outlived its usefulness in many contexts. I suspect “friendship” is such a word, for troubling reasons when it comes to many human relationships and conceptual weakness with bots. That’s what I would write if I do something on this.
I don’t think it’s at all clear that friends are best when they are at their most emotive. Not even when they’re most responsive to *our* emotions. But care and response to the friend’s normative “tug” feels like a promising place to mine further.
The AI friendship lit is an attractive nuisance for me. I keep wanting to contribute something, and a lot of existing approaches don’t think quite deeply enough about ontology. Jake’s paper looks great, focusing on the constitutive carelessness of AI models. Emotion is a red herring.
This is fascinating. Technology is part of the story of why we’ve become so polarized for sure. (Including democracy as a tech, as @alexgphilosophy.bsky.social describes in Lottocracy.) Anti-polarization tech could be part of the solution.
Oh but I LOVE that book! You're in for a treat.