Sixty years ago, Jennie Lee's vision created The Open University 🎓 Welcoming all backgrounds, millions have started life-changing journeys. #OUfamily #TheOpenUniversity #OU60 💙
Posts by Stuart Watt
I know I wrote it, but I truly think this should be a massive story
The Secretary of Defense told an AI company to remove its bot's moral guardrails so it can operate surveillance and lethal weapons autonomously
When warned that could endanger our own troops, he didn't care
Totally agree. This is why I don’t think of AI as a technology but as part of the unfolding of reflexive modernization. Other technologies (surveillance, IoT, crypto…) are all part of the same unfolding. It is the re-modernization of all of industry. bsky.app/profile/moru...
In fact, these are precisely the same arguments that were made about latent semantic analysis in the 1990s. Although the "latent" there might have been a little more honest.
Exactly. A lot of it is just straight knowledge. What Google was completely missing when their AI told me an osprey was a wading bird with a long thin beak.
There’s usually a little group of them around the lake where I hike during summer. Beautiful things.
This isn’t a diss on that article by the way. Not entirely. It’s more a lament. What will we lose by automating science? By focusing on the surface, on measures and metrics. Is what we lose valuable? To some, probably not. But to me, it was the craft of building science that made it all worthwhile.
In short, for me at least, it’s not research at all. It’s a simulacrum of research. It’s Wile E. Coyote research: trebling the effort while the point becomes increasingly distant.
Not only is it not my thing, I genuinely have no idea how I’d have mentored researchers to handle it.
I read this and I am glad I’m not an academic any more.
It feels like the entire nature of academia (a community, curiosity, inquiry, all of that) has been replaced by a drive for productivity, strength, impact. Quality is no longer usefulness; it is a constantly shifting set of metrics.
I did look at this a little, because it crossed wires with my work on the ascription of mentality. In the end, I came to believe that perceived similarity is one factor that promotes that ascription. As so often, frameworks like these are fascinating insights into human psychology.
I found a great article a while ago which did a deep dive into the actual transcripts from three cases — not quite as catastrophic, but still ending very badly. There were consistent patterns. What this tech needs is some old fashioned qualitative research, but that’s mostly been cut, globally.
I’m not using that as an argument for blame legally. As a society, if something like that is a factor in causing harms, we do have a duty to address them and mitigate them. In effect, we need to be able to regulate it, as we do other media.
I don’t think reporting is the only issue. There is accumulating evidence that chatbots can, under some circumstances, reinforce harmful thought patterns. That has definitely happened in other cases. So reporting aside, it’s not unlikely the tech is a contributing factor.
This.👇👏
How can we have a sense of achievement, of fulfillment, without working for it? We cannot self-actualize for free.
I am not sure I can make it there myself, and there are plenty of others who would benefit more and give more than I can, but damn, this is so tempting.
This looks like an *AMAZING* event!
First, COGS at Sussex does outstanding cog sci work.
Second, Andy Clark's ideas have blown my mind, positively, on many occasions. I rate his work extremely highly.
Third, workshops are awesome for developing good research communities. (And I hate conferences.)
I’d add absence of leadership and strategy, just reacting to events. From my experiences of being the first rat off sinking ships of employment, that was a surprisingly big factor in toxicity.
If they genuinely think people are going to let AIs have access to their credit cards, they're more delusional than I thought. A few rich bros, sure, but everyday folks on a budget? No chance.
That's a possibility, but the only way growth in retail can happen in aggregate is if smaller stores are driven out, effectively leaving all retail run by a cartel of global megacorporations. It can happen, and arguably is happening, but I don't see how AI enables it beyond what the internet has already done.
Maybe I am naive, but “AI-fueled growth” puzzles me.
Generally automation doesn’t do growth — especially when you are already hyperscale. It may cut costs, usually by transferring them, e.g., to consumers. But… growth? Someone will have to explain that to me. It sounds all hopey-wishy.
Supporters

Our work is supported by a variety of foundations, charities, and individuals who share our commitment to high-quality journalism about AI (grouped by lifetime giving):

$1M+
- Coefficient Giving (formerly Open Philanthropy) (2023, 2024, 2025)
- Survival and Flourishing Fund (2024, 2025)

$100k – $1M
- The Casey & Family Foundation (2025)
- EA Infrastructure Fund (2023)
- Future of Life Institute (2024)

$10k – $100k
- ACX Grants (2024)
- AI Safety Tactical Opportunities Fund (2024)
- Cullen O'Keefe (2025)
- Hazel Browne (2024)
- Newman Family Charitable Fund (2025)
- Robert and Virginia Shiller Foundation (2023, 2024)

We have also received donations of less than $10,000 from a variety of generous individual donors. Our donors have no editorial control over the work of Tarbell, our fellows, or our grantees. Tarbell does not accept anonymous donations greater than $10,000. For details, see the Donor relations section of our ethics and standards policies.
Dude who wrote about how "the left is missing out on AI" is on here. Do you see who they are funded by? The biggest EA funders, the longtermist institutes we've been writing about and documenting, including FLI where Muskrat is still an advisor.
Our ability to make inferences about the behaviour of a system is more a property of *us* than of the system. So it is at ourselves and our reactions we need to look.
I don’t think that’s it. “What we think of as computing” was itself framed on an abstracted version of human behaviour. There are many computing-like things we’ve had which were very different: cellular automata, GAs, etc. Computing isn’t some magical rational god-phenomenon.
Dennis Nedry Jurassic Park "see, nobody cares" meme, captioned: "Hey everybody, this guy still posts on X!" See? Everyone is horrified and disappointed. They feel it speaks directly to your values.
The basic setup is that it is driven by ranked constraints that evaluate candidates. Each person can have different constraints and rankings, but they all have to “work” well enough to be useful.
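That ranked-constraint evaluation can be sketched in a few lines of Python. This is a toy illustration of my own, with made-up syllable-structure constraints, not anything from a real OT implementation: each constraint returns a violation count, profiles are compared lexicographically in rank order, and reranking the same constraints can flip the winner.

```python
def evaluate(candidates, ranked_constraints):
    """Pick the OT-style winner: the candidate whose violation profile
    is best under the constraint ranking. Tuples compare lexicographically,
    so a higher-ranked constraint dominates everything below it."""
    return min(candidates,
               key=lambda cand: tuple(c(cand) for c in ranked_constraints))

# Hypothetical toy constraints over single syllables:
def onset(syllable):
    # Onset: a syllable should begin with a consonant.
    return 0 if syllable and syllable[0] not in "aeiou" else 1

def no_coda(syllable):
    # NoCoda: a syllable should not end with a consonant.
    return 0 if syllable and syllable[-1] in "aeiou" else 1

# Reranking the same two constraints changes the winner:
evaluate(["a", "tat"], [no_coda, onset])  # NoCoda >> Onset picks "a"
evaluate(["a", "tat"], [onset, no_coda])  # Onset >> NoCoda picks "tat"
```

The point of the sketch is the second half of the post: everyone shares the constraint inventory, but different rankings yield different, still-workable grammars.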
So, so many; that's the problem.
But if I were you I’d take a look at optimality theory, which intriguingly arose from attempts to bridge connectionism and universal grammars. Much of it is on phonology, sadly, which is quite technical, but it’s also valid for grammar, and it’s quite intriguing.
First, thank you for this service.
Second, this thread is gold. It’s a real insight into how HRM’s management is so dysfunctional. But I’m not sure how the city can save itself from it.
I mean, that’s slightly flippant, but what we do know is this: we do not communicate by predicting likely words to put into sentences. That’s essentially a behaviourist retrospective reconstruction. We now know better.
That’s like saying “how does human language work?”
Language is big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to language. (With apologies to Douglas Adams.)
PUT IT BACK. PEBBLE IS NOT FRIEND.