Probably not in a good way.
Posts by Paul Harland
Thanks, @stephenkb.bsky.social. I'm sure there's still room for him to make the situation worse. Worse for them, worse for us. Will he be remembered for taking a solid majority and dissolving it into a blob? (That would be a bit unfair but he's making rescuing his lumpy-looking legacy harder work.)
First page of the article "Measuring issue salience for political parties using LLMs" by Kenneth Benoit and Michael Laver, published online first in West European Politics. Shows the title, authors, and abstract.
Box plots showing the distribution of 'budgeted' issue salience scores across six policy dimensions (economic, social, environment, EU, decentralization, immigration), comparing expert surveys, Manifesto Project codings, and LLM-based estimates.
🎉 Online first:
LLMs estimate party positions well. How about issue salience?
@kenbenoit.bsky.social and Michael Laver show it's harder: salience is inherently relative and more implicit. LLMs' salience estimates are usable but track experts less closely than positions.
🔗 doi.org/10.1080/0140...
Is there much work on assessing the likelihood of AI that has in some way acquired consciousness ending up with a disordered variety, as seems plausible to me? The SEP deals with disorders in a splintered way, e.g. plato.stanford.edu/entries/cons... @metzinger.bsky.social or @eschwitz.bsky.social..?
No need to apologise, I know you're very busy - more need for me to apologise for the snarky tone in my post! But it's my observation of your writing behaviour. I think consciousness discourse is a particularly knotty tangle, but anaesthesiology seems a reasonably scientific practice, aware of known unknowns.
I liked this post -- it makes points really similar to ones I've made a couple of times about technologies and effectiveness, and implementation/intervention science vs idealized estimates of gains in RCTs (& even the RCTs in software spaces are troubled designs)
aleximas.substack.com/p/who-uses-a...
Getting back to obscene problems, for me that's something like schizophrenia. No conclusive definition, treatment or explanation. Diagnosis is by observation, not testing - an expert knows it when they see it (and it's not something else), but the person mostly doesn't see it themselves at the time.
Leafing through... Isn't consciousness a bit like a moving smear around a unifying instant? So - not sure about that. And I still wasn't clear how to experience "red" or drawing a clock lopsidedly (en.wikipedia.org/wiki/Allochi...). But lots of good ideas on how to avoid building conscious machines.
Usefully, his 2025 PhD thesis, How to Build Conscious Machines, as well as being nicely laid out and smoothly written, has a tiny bit more on Derrida, so it might be worth you leafing through, David.
Looking at the first cites of the above paper scholar.google.com/citations?vi... led me to @michaeltbennett.bsky.social's www.authorea.com/users/684323... which refs Derrida, Pattee and others, probably a better overall argument for me (particularly given I think evolution is change that persists).
I enjoyed recent discussions on LinkedIn about Lerchner's (DeepMind) "The abstraction fallacy: Why AI can simulate but not instantiate consciousness", which I partly agree with but felt could be better argued. If anything, it made me doubt myself more: maybe a computing mechanism can be made conscious?
Thanks. Not sure, from a look through, but different from (and more thorough-going than) the way I have loosely been thinking about social fields, which is good! You may find this worth reading: osf.io/preprints/os... by @michaeltbennett.bsky.social Are Biological Systems More Intelligent Than AI?
Consciousness vexes me. It seems reasonable that evolution should find the possibility of vivid presence useful for an agent. It just doesn't seem reasonable that it's possible in that way! Perhaps this is why we get everything from panpsychism to eliminativism. There's a need to own it.
New radio op ed on WNIJ Northern Public Radio
www.northernpublicradio.org/wnij-perspec...
A good, and needed, perspective! Lots of things are like this, in the news and elsewhere in our lives: what is measurable, observable, proximate stands in for something happening out of sight. See, for instance, @add-hawk.bsky.social on value capture www.theguardian.com/books/2026/j....
Chris Bateman interviewing Kendall Walton on games of make-believe in the representative arts. web.archive.org/web/20120508...
Interesting discussions in sub-threads.
I want to agree with you, but now you put it like that, they sound quite similar!
Gripping stuff!
HoP 6-month update post!
I've added Zeno of Citium, Epictetus, Marcus Aurelius, Skinner, Plantinga, Block, Strawson, and Oppy; updated Hume, Rousseau, Brentano, Husserl, Wittgenstein, & others; drawn hundreds of new connections. Details & links here:
www.denizcemonduygu.com/philo/zeno-o...
Out now! Mind as Metaphor (OUP, 2023) t.co/eBlYtDd5lY
Reading other minds is its own hard problem, so this may simply be what's on my mind, but I'm wondering if you're looking for a snappy angle for your Différance book on your usual consciousness-indifferent positions, one that allows you to maintain your avoidant writing behaviour in its presence?
I do have (I think? Yes, I do) the Boys' Book of Airfix.
Thanks - but I might give that nightmare a miss!
Thread - on Failure.
Now out in PTRSB!🔥
Wherein Francesco d'Errico, Ivan Colage & I track the emergence & trajectory of hominin epistemic #nicheconstruction through material culture—the alteration of the informational landscape via spaces, bodily ornaments, & artificial memory systems.
#evosky #archeosky #philsky
Going to split replies! While looking for something else by @raphaelmilliere.com, I noticed "Language Models as Models of Language" arxiv.org/abs/2408.07144. It's not about how we, vs LLMs, begin to learn world "models" alongside learning language, but '4.3 Language models as model learners' is still relevant.
My new paper 'Mapping profiles of animal affect' is now out in Biology and Philosophy. In it, I explore why understanding species-specific “affect profiles” matters for animal welfare research, and sketch some possible ways we might begin to map them.
Available open access: doi.org/10.1007/s105...
In terms of autonomy... the rules are voluntary; the amount of restriction in them doesn't provide any more possibilities, it focuses them. Just thinking aloud here!
Well, as the article said, "Project Ceti has set a goal of being able to comprehend 20 different vocalized expressions, relating to actions such as diving and sleeping, within the next five years." Wittgenstein's builders, etc. (Thinking about LLMs learning from mass of text but not development.)