Posts by Adrian Chan

A recursively self-improving AI: its input is its own output. Its training data is its own generation. Its evaluation is its own judgment. Nothing from outside enters. This isn't exploration. This is confirmation. The most sophisticated echo chamber ever built.
Those #LLM reward models like sycophancy even more than you do!
Researchers find preferences for verbosity, listicles, vagueness, and jargon even higher among LLM-based reward models (synthetic data) than among us humans.
#AI #AIalignment
arxiv.org/abs/2506.05339
Everybody talking about the "new" Apple paper might find this MLST interview with @rao2z.bsky.social interesting. "Reasoning" and "inner thoughts" of LLMs were exposed as self-mumblings and fumblings long ago. #LLMs #AI
www.youtube.com/watch?v=y1Wn...
Yes - people will still need a phone, and a lot of AI products, services, and UI will need a screen. And a touchable one at that.
This is interesting, published yesterday. CoT-type reasoning shifts attention away from instruction tokens. The paper proposes "constraint attention" to keep models attentive to instructions when doing CoT. (Rough sketch of the metric below.)
#AI #LLM
www.arxiv.org/abs/2505.11423
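A rough sketch of the metric idea as I read it: measure how much attention mass the generated tokens put on the instruction (constraint) tokens, and watch whether it decays as the chain of thought grows. The function name, tensor shapes, and averaging scheme are my assumptions, not the paper's exact definition.

```python
import torch

def constraint_attention(attn: torch.Tensor, instr_positions: list[int]) -> float:
    """Mean attention mass that generated tokens place on instruction tokens.

    attn: [heads, tgt_len, src_len] attention weights from one decoder layer.
    instr_positions: indices of the instruction/constraint tokens in src.
    """
    mass = attn[:, :, instr_positions].sum(dim=-1)  # [heads, tgt_len]
    return mass.mean().item()

# Toy check: 2 heads, 3 generated tokens, 5 source tokens.
attn = torch.softmax(torch.randn(2, 3, 5), dim=-1)
print(constraint_attention(attn, instr_positions=[0, 1]))
```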
"What's the best way to think about this?" #LLM research produces encyclopedia of reasoning strategies, allowing models to select the best way to reason through problems.
arxiv.org/abs/2505.10185
Clarifying questions w #LLMs increase user satisfaction when users can see the point of answering them. Specific questions beat generic ones.
But I wonder if this changes when #agents are personal assistants, & are more personal & more aware.
#UX #AI #Design
arxiv.org/abs/2402.01934
Interesting - could #LLMs in search capture context missed when googling?
"backtracing ... retrieve the cause of the query from a corpus. ... targets the information need of content creators who wish to improve their content in light of questions from information seekers."
arxiv.org/abs/2403.03956
They mostly test whether they can steer pos/neg responses. But given Shakespeare was also a test case, it would be interesting to extract style vectors from any number of authors and then compare generations. (Is this approach used in those "historical avatars"? No idea.)
@tedunderwood.me In case you haven't seen this paper, you might find it interesting. Researchers extract style vectors (incl. from Shakespeare) and apply them to an LLM's internal layers instead of training on the original texts. Generations can then be "steered" to a desired style.
arxiv.org/abs/2402.01618
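If you want to play with the idea, here is a minimal sketch of activation steering in the spirit of the paper, assuming a GPT-2-style model from Hugging Face transformers. The layer index, steering scale, and the tiny styled/neutral text sets are illustrative stand-ins, not the paper's setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # assumption: any causal LM with exposed blocks works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6    # illustrative middle layer
SCALE = 4.0  # illustrative steering strength

def mean_hidden(texts):
    """Mean hidden state at LAYER over a handful of example texts."""
    vecs = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        vecs.append(out.hidden_states[LAYER].mean(dim=1))  # mean over tokens
    return torch.cat(vecs).mean(dim=0)

# Tiny stand-ins for a real styled/neutral corpus.
styled = ["Shall I compare thee to a summer's day?",
          "But soft, what light through yonder window breaks?"]
neutral = ["The meeting is at three o'clock.",
           "Please send the report by Friday."]
style_vec = mean_hidden(styled) - mean_hidden(neutral)

def steer(module, inputs, output):
    """Forward hook: add the style direction to every token at this layer."""
    hidden = output[0]
    return (hidden + SCALE * style_vec,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("The weather today is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30)[0]))
handle.remove()
```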
But design will need to focus on tweaking model interactions so that they track conversational content and turns over time. For example, with bi-directional prompting, models prompt users back to keep conversations on track (hypothetical sketch below).
This seems a rich opportunity for interaction design #UX #IxD #LLMs #AI
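A hypothetical sketch of what that could look like in practice: a standing instruction that has the model track the user's goal and prompt the user back when a turn drifts. The OpenAI chat-completions call shape is real; the model name, system wording, and the drift heuristic itself are mine, not a reference design.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM = (
    "Track the user's stated goal across turns. If a new message is "
    "ambiguous or drifts from that goal, ask one short clarifying "
    "question before answering."
)
history = [{"role": "system", "content": SYSTEM}]

def turn(user_msg: str, model: str = "gpt-4o-mini") -> str:
    """One conversational turn; the model may answer or prompt the user back."""
    history.append({"role": "user", "content": user_msg})
    reply = client.chat.completions.create(model=model, messages=history)
    msg = reply.choices[0].message.content
    history.append({"role": "assistant", "content": msg})
    return msg
```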
It's quite another to sustain dialog. Social interaction, face to face or online, is already vulnerable to misunderstandings and failures, and we have countless signals, gestures, etc. w which to rescue our interactions.
A communication-first approach to LLMs for conversation makes sense, as talk is not writing.
"when LLMs take a wrong turn in a conversation, they get lost and do not recover."
Interaction design is going to be necessary to scaffold LLMs for talk, be it voice or single user chat or multi-user (e.g. social media).
It's one thing to read/summarize written documents, quite another ...
"LLMs tend to (1) generate overly verbose responses, leading them to (2) propose final solutions prematurely in conversation, (3) make incorrect assumptions about underspecified details, and (4) rely too heavily on previous (incorrect) answer attempts."
arxiv.org/abs/2505.06120
"LLMs ... recognize graph-structured data... However... we found that even when the topological connection information was randomly shuffled, it had almost no effect on the LLMs’ performance... LLMs did not effectively utilize the correct connectivity information."
www.arxiv.org/abs/2505.02130
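That control condition, as I read it, is easy to reproduce: serialize an edge list into the prompt, randomly rewire it, and check whether the model's answers change. The prompt wording and helper functions here are my own illustration, not the paper's code.

```python
import random

def edges_to_prompt(edges, question):
    """Serialize an edge list into a plain-text prompt."""
    lines = [f"{u} -- {v}" for u, v in edges]
    return "Graph edges:\n" + "\n".join(lines) + f"\n{question}"

def shuffle_edges(edges, nodes, seed=0):
    """Randomly rewire, keeping the same node set and edge count."""
    rng = random.Random(seed)
    return [tuple(rng.sample(nodes, 2)) for _ in edges]

nodes = ["A", "B", "C", "D"]
edges = [("A", "B"), ("B", "C"), ("C", "D")]
q = "Is there a path from A to D? Answer yes or no."
print(edges_to_prompt(edges, q))
print(edges_to_prompt(shuffle_edges(edges, nodes), q))
# If the model's accuracy is unchanged between the two prompts, it is not
# actually using the connectivity information.
```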
Perhaps one could fine-tune on Lewis Carroll, then feed the model philosophical paradoxes, and see whether it produces more imaginative generations.
I think it's because this isn't making the model trip, synesthetically, but simply giving it juxtapositions. So what is studied is a response to these paradoxical and conceptually incompatible prompts, not a measure of any latent conceptual activations or features.
Let's dose an LLM and study its hallucinations!
LLMs were fed "blended" prompts, impossible conceptual combinations, meant to elicit hallucinations. Models did not trip, but instead tried to reason their way through their responses.
arxiv.org/abs/2505.00557
Yes, and the label applied says as much about the person as it does about the model. In the world of creatives, the most-used term now is "slop," derived perhaps from "enshittification," the latter capturing corporate malice where the "slop" is an AI-generated byproduct unfit for human consumption...
Thread started w your second post so yes I missed the initial post. Never mind.
Assuming alignment using synthetic data is undesirable, one route is to complement global alignment (alignment to some "universally" preferred human values) w local, contextualized alignment via the user's own feedback and use. Tune the LLM's behavior to user preferences.
Customized LLMs use the feedback obtained from individual user interactions and align to it.
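One minimal way to do this without retraining, sketched below: log the user's feedback as natural-language preferences and fold them into the system prompt on each call. Everything here (file names, storage, the in-context approach itself) is my assumption of how a lightweight version could work, not a reference design.

```python
import json
from pathlib import Path

PREFS = Path("user_prefs.json")  # hypothetical per-user preference store

def load_prefs() -> list[str]:
    return json.loads(PREFS.read_text()) if PREFS.exists() else []

def record_feedback(note: str) -> None:
    """e.g. 'prefers short answers', 'dislikes bullet lists'."""
    prefs = load_prefs()
    prefs.append(note)
    PREFS.write_text(json.dumps(prefs, indent=2))

def system_prompt(base: str = "You are a helpful assistant.") -> str:
    """Fold locally learned preferences into the globally aligned base prompt."""
    prefs = load_prefs()
    if not prefs:
        return base
    return base + " This user's preferences: " + "; ".join(prefs) + "."

record_feedback("prefers short, direct answers")
print(system_prompt())
```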
Staying power of ceasefires becoming a proxy for multilateral resilience amid baseline rivalries?
I think this will be one accelerant for individualized/personally customized AI - e.g. personal assistants. The verifiers can use the user's preferences and tune to those rather than apply globally aligned behavioral rules.
It's also a problem of use cases and user adoption. Though it may turn out that Transformer-based AI does indeed fail to meet expectations.
There's a lot of misunderstanding and anthropomorphism of AI's reasoning, for example, that might not turn out well.
Coincidentally, many startups of that time set up in loft & warehouse spaces w exposed concrete & steel beams... I like this analogy especially for Social Interaction Design/Social UX, where "social architecture" is exposed for users to take up in norms, behaviors, and expectations for how to engage.
I can't disagree w that. Reflection through reading employs more critical thinking skills than conversation; bots solicit unserious interaction & even attempts to "hack" guardrails. I'm a huge reader but I do have lengthy convos w ChatGPT, likely because I read/reflect.
Agree w you. Tariffs as targeted protections of domestic industry, as reciprocity, as reshoring incentives, as embargoes - all these are different & neglect unintended consequences, as we're seeing in markets & bonds & the dollar.
Regardless of motives it's now a matter of game theory - who moves, when, etc
For now I can see that chatbots likely would fail to provide accurate or probable reasoning if prompted for explanations of historical choices, actions, etc., for lack of proper historical context. But this too could be improved w training on the secondary literature.
It's admittedly all rather Black Mirror.
To learn a historical figure from a book, however, is to imagine their reasons, motives, and actions in the abstract. (Which is fine.) To have them personified as chatbots seems absurd and kitschy - but might reach some students who simply don't engage by reading.