You might imagine that an event celebrating 75 years of the Turing Test would be all "Wasn't he prescient, and look, now we really do have machines that think!" Mercifully, this event yesterday was close to the opposite. /1
royalsociety.org/science-even...
Posts by Andries du Toit - School of Government, UWC
An Austrian theologian named "Wolfgang Palaver" unwillingly guiding Thiel down the road to Apocalypse is more Thomas Pynchon than Thomas Pynchon. www.wired.com/story/the-re...
I'm coming to realise that the primary purpose of a doctoral thesis is that it's essentially a nicely-organised citation repository for my specific interests.
I am not sure. Perhaps he rushed into print when he should’ve given himself time.
I think Ezra’s position on Kirk’s politics was silly and badly articulated. But his view that the Dems need to get better at politics seems to me correct. And his insistence on the need for a ‘third space’ beyond the spiralling polarisation seems to me courageous.
Thanks for all the discussion on this. I'm pleased to see the debate. Also pleased (and interested) that my 19-yr-old daughter, a classicist, is much more hardline than me in refusing to use AI, for a variety of reasons.
None of this is to deny that the technology has some uses in these capacities (though not ones I've felt any need for myself). But I worry that we are starting to see intellectual labour as just a means to an end, and as something that can be shortcut.
And as for writing, don't get me started. Most often, the process of writing is not a laborious business of transcription, but a process of thinking itself. I'm not about to outsource my thinking to a machine.
...I mean that even with careful prompts I would not be confident LLMs could perform the function I'd want. How do I know that any bland AI summary of a paper is going to extract what might be of value to *me* in it? I don't even know that myself before I read it...
I'm often surprised at how much researchers are using LLMs for intellectual work. It's not that I disapprove, or at least not on principle. It's more that I'm surprised at the trust being placed in the process. And by that I don't just mean LLMs can give wrong information...
The little groups around fires will be in the former US.
In about 40 years, some history postgrad class (probably in China) will run seminars trying to understand why the US liberal establishment, at one moment so smug and arrogant, allowed a bunch of right-wing authoritarian nationalist idiots to take over their country, apparently with zero resistance.
For mine it is the actual knives 😮
We are currently in the procyclical phase of the investment bubble.
The moment of transition to the anticyclical phase is not a simple matter of reality asserting itself. It comes when the underlying political conditions driving the bubble falter, even for a moment.
The folks at Globalizations have managed to get their hands on one of those devices from Wm Gibson's novel The Peripheral that allows you to send messages to people in the past. This one arrived today, 19 September from, let me see, 23 October 2025.
Reviewer 2 is presumably in a different stub.
How many sleeps?
Yes. Once you begin with the ‘slippery slope’ argument there’s not much to hold you back
[READ] Professor John-Mark Iyi promised his older brother, Godwin, that he would complete his PhD before getting married and starting a family. He kept that promise and worked hard to achieve academic success that far exceeded his family's expectations.
#IAmUWC
The tendency of US liberals to think their adversaries dumb is one of their most serious weaknesses
The distinction is irrelevant. You should just fight them
Scary
I think the world, or at least my part of it, was better when we didn’t have 24 hour access to Americans
Since I love collecting questionable analogies for LLMs, here's a new one I just came up with: an LLM is a lossy encyclopedia. They have a huge array of facts compressed into them, but that compression is lossy (see also Ted Chiang).

The key thing is to develop an intuition for which questions it can usefully answer vs which questions are at a level of detail where the lossiness matters.

This thought was sparked by a comment on Hacker News asking why an LLM couldn't "Create a boilerplate Zephyr project skeleton, for Pi Pico with st7789 spi display drivers configured". That's more of a lossless encyclopedia question!

My answer: the way to solve this particular problem is to make a correct example available to it. Don't expect it to just know extremely specific facts like that; instead, treat it as a tool that can act on facts presented to it.
An LLM is a lossy encyclopedia simonwillison.net/2025/Aug/29/...
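A toy sketch of the "lossy encyclopedia" intuition (my illustration, not how any real model works; the facts and threshold are made up): compress a fact store by keeping only what occurs often. Common knowledge survives the compression; one-off specifics, like that Zephyr/st7789 config, fall below the threshold and are simply gone.

```python
from collections import Counter

# Hypothetical "fact store": common facts appear many times,
# highly specific ones appear once.
facts = [
    "water boils at 100C",
    "water boils at 100C",
    "water boils at 100C",
    "zephyr pico st7789 needs CONFIG_DISPLAY=y",  # seen once, very specific
]

def lossy_compress(facts, min_count=2):
    # Keep only facts attested at least min_count times;
    # everything rarer is lost in compression.
    counts = Counter(facts)
    return {f for f, c in counts.items() if c >= min_count}

kb = lossy_compress(facts)
print("water boils at 100C" in kb)     # common pattern retained
print(any("st7789" in f for f in kb))  # rare specific lost
```

The common fact is answerable from the compressed store; the specific one is not, which is exactly the kind of question where you need to hand the model a correct example instead.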
My book, A Blow to the Head, in which I reflect on my own experience of the violence of white supremacy and the complexities of repair, is on the Longlist for the #Canex Prize for publishing in Africa.
What a surprise, and what an amazing honour!
✅
Time flies. I remember Zohran as a little boy with tousled curls, cute as a button, the pride and joy of Mahmood and Mira. They must be beaming
Or, as we say in Afrikaans, 'n stervis (a starfish).
Well it totally is 🦆🦆🦆
Chatbots — LLMs — do not know facts and are not designed to be able to accurately answer factual questions. They are designed to find and mimic patterns of words, probabilistically. When they’re “right” it’s because correct things are often written down, so those patterns are frequent. That’s all.
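The point above can be made concrete with a toy sketch (a minimal bigram counter, nothing like a real LLM; the corpus is invented): the model learns nothing but which word tends to follow which, then emits the most frequent continuation. It is "right" only because the right pattern happens to be the frequent one.

```python
from collections import defaultdict

# Tiny made-up training corpus: the correct fact is written down
# twice, a wrong variant once.
corpus = (
    "paris is the capital of france . "
    "paris is the capital of france . "
    "paris is the capital of texas . "
).split()

# Count which word follows which -- pure pattern frequency, no "facts".
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Emit the most frequent follower of this word.
    followers = counts[word]
    return max(followers, key=followers.get)

print(most_likely_next("of"))  # "france" wins 2-to-1 over "texas"
```

Shift the frequencies in the corpus and the "answer" shifts with them; the mechanism never consulted a fact, only the statistics of what was written down.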