
Posts by Kieron

CDC won’t publish report showing covid shots cut likelihood of hospital visits. The report, which had cleared the agency’s scientific-review process, had been delayed. It now won’t be published at all, people familiar with the decision told The Post.

This isn’t a scientific debate.

This is government censorship.

This is Lysenkoism, as @gregggonsalves.bsky.social pointed out last year.

And this will not end well.

www.washingtonpost.com/health/2026/...

2 hours ago 240 70 4 3

I implore the people of the US to really understand what is happening here—and I know that most people reading this do.

But this isn’t “routine”.

Coupled with RFK Jr. saying to Congress that immigrants are to blame for U.S. infectious disease outbreaks, we are in a dangerous situation.

2 hours ago 236 28 1 1

This action should be raising alarm bells across the country.

It’s not an annoyance.

It’s a willful and malicious action to deprive the people of the United States of critical health information in order to prop up a government disinformation campaign against vaccines.

2 hours ago 279 38 1 0

Jay Bhattacharya has ordered that a paper showing the effectiveness of COVID-19 vaccines in preventing severe disease and hospitalization be suppressed and hidden.

This man has been crying nonstop about political censorship for 6 years.

But he’s the one who is actually doing it.

2 hours ago 1768 649 30 28
Coyote vs. ACME | Official Trailer (YouTube video by Coyote vs. Acme)

I'm in. 🧨

1 hour ago 51 8 3 3
Pluralistic: Daily links from Cory Doctorow – No trackers, no ads. Black type, white background. Privacy policy: we don't collect or retain any data at all ever period.

If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

pluralistic.net/2026/04/20/p...

2/

1 day ago 12 8 3 1
A Soviet propaganda poster featuring Lenin pointing angrily into the distance. It has been altered. Lenin now has Trump's hair and his skin in orange. The hammer/sickle logo behind him has been replaced with a cross.

There aren't a lot of things I agree with Mark Carney about, but there's one area where he and I are in *total* accord: the old, US-dominated, "rules-based international order" was total bullshit:

www.weforum.org/stories/2026...

1/

1 day ago 93 26 2 12

If you can, read it (article format in the second post).

It will make you cry with emotion. We are winning. It is small and sometimes hard to see, but we are winning.

We won't recognise the planet, for the better, in the next 30 years, if we survive the Fascist International.

#hopepunk #solarpunk

2 hours ago 4 2 0 0

The claim that LLMs will “decimate white‑collar work” is not merely wrong; it is inverted.

LLMs increase the amount of white‑collar work required to keep systems coherent.

They generate artefacts faster than institutions can metabolise them. The bottleneck tightens, not loosens.

(37)

16 hours ago 10 1 2 0

Organisations that mistake the surface for the substrate will discover the difference at the worst possible moment: the failure mode will always be structural. The same process hollows out the organisation, drastically increasing the risk of failure.

(36)

16 hours ago 9 1 1 0

The irony is that LLMs do not automate the work; they automate the appearance of work. They generate the surface layer of competence without the critical substrate, which will be cut or will otherwise atrophy.

(35)

16 hours ago 11 1 1 0

This is why LLM‑generated code may feel deceptively competent until you try to integrate it. Integration is where hidden assumptions collide with real constraints.

A human knows the constraints. A model “knows” only statistical residue and whatever could be provided as context.

(34)

16 hours ago 9 1 1 0

The review of the draft (for understanding, verification, realignment, correction…) is not a linear process.

Review is a reconstruction of the entire decision tree that should have produced the artefact. Without access to one, the reviewer must invent one. This is cognitively very costly.

(33)

16 hours ago 8 2 1 0

The “first draft” framing also hides a deeper epistemic trap: the model’s output is optimised to appear correct. Humans are optimised to trust things that appear correct.

This is not augmentation of human cognition; it is adversarial alignment against human cognition.

(32)

16 hours ago 10 3 1 0

Maintenance is not a minor phase; it is the dominant phase. If something only “breaks at maintenance,” it was broken at creation. The defect was merely deferred.

LLMs specialise in deferring defects into the most expensive part of the lifecycle.

(31)

16 hours ago 9 1 1 0

A human draft carries intent, constraints, and tacit knowledge, even when imperfect.

An LLM draft carries none of these. So before you can even evaluate it, you must reconstruct the missing scaffolding. This reconstruction is the work. The draft is incidental.

(30)

16 hours ago 7 1 1 0

This is already maintenance work, though a degenerate form of it.

The hard part of white-collar work is maintaining invariants across time, teams, and systems. It is wide, thinking work anchored in implicit knowledge. A model that does not know the invariants cannot produce a draft that respects them.

(29)

16 hours ago 8 1 1 0

The “good for a first draft” argument only works if the draft is cheaper to verify than to produce. That is not the case here. A draft without context is not a draft; it is an orphaned artefact.

The cost of re‑parenting it back into the system exceeds the cost of writing it correctly.

(28)

16 hours ago 8 1 1 0

Third section - Generating a first draft

This section is a slightly roundabout way of addressing the maintenance and complexity aspects.

(27)

16 hours ago 7 1 1 0

Because LLMs are built in a way that obscures their own errors, they make that work (incl. verification) harder, not easier.

The artefact produced by an LLM is structurally mismatched because an LLM is not a work-producing system; it is an artefact-as-plausible-product-of-work producing system.

(26)

16 hours ago 8 1 1 0

These objectives are orthogonal, and the gap between them is where all the cognitive debt accumulates.

The work is the structure that ensures the work product is correct, consistent, and safe to integrate.

The LLM generating a plausible work product hasn’t done much of the actual work.

(25)

16 hours ago 8 1 1 0

This is why “just check the output” is not a sufficient validation strategy. Subtle biases may have been silently introduced, for instance.

The model’s objective is to produce something that looks fine, with unknown caveats. The human’s objective is to determine whether it is actually fine.

(24)

16 hours ago 8 1 1 0
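The point above can be made concrete with a small Python sketch (not from the thread; the names and scenario are illustrative). A naive shuffle produces outputs that each "look fine", since every single result is a valid permutation, yet the process is subtly biased, and only analysing the process over many runs reveals it:

```python
import random
from collections import Counter

def naive_shuffle(xs, rng):
    """Every output is a valid permutation -- but the algorithm is
    subtly biased (the classic 'swap with any index' mistake)."""
    xs = list(xs)
    n = len(xs)
    for i in range(n):
        j = rng.randrange(n)  # bug: should draw from range(i, n) (Fisher-Yates)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

rng = random.Random(0)  # fixed seed so the demo is reproducible

# Superficial check: any one output passes inspection.
one = naive_shuffle([0, 1, 2], rng)
assert sorted(one) == [0, 1, 2]

# Process check: over many runs, the six permutations of [0, 1, 2]
# are not equally likely (their probabilities are 4/27 or 5/27,
# never the uniform 1/6), so the counts spread far beyond noise.
counts = Counter(tuple(naive_shuffle([0, 1, 2], rng)) for _ in range(27000))
spread = max(counts.values()) - min(counts.values())
print(spread)  # on the order of 1000, vs ~150 for an unbiased shuffle
```

Each artefact is "plausible"; the defect only exists at the level of the generating process, which is exactly what a per-output check never sees.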

Similarly, the architecture of an LLM is such that you cannot judge the correctness of its product by simply inspecting it.

Both systems are explicitly designed to make superficial inspection meaningless.
For that reason, CSPRNGs are made available for analysis, whilst LLMs are black boxes.

(23)

16 hours ago 8 2 1 0

Random‑looking is not random. Plausible‑looking is not correct.

The entire point of a Cryptographically Secure Pseudo-Random Number Generator (CSPRNG) is that you cannot judge its security by inspecting its product. The generating process must be analysed.

(22)

16 hours ago 12 3 1 0
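A minimal Python sketch of this point (illustrative, not from the thread): a crude "does it look random?" bit-frequency check cannot separate Python's Mersenne Twister, whose internal state is recoverable from its outputs and which is documented as not cryptographically secure, from the OS-backed CSPRNG in `secrets`. Both sail through output inspection; only analysis of the generating process distinguishes them.

```python
import random
import secrets

def ones_fraction(data: bytes) -> float:
    """Fraction of 1-bits: a crude, monobit-style 'looks random' check."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (8 * len(data))

# Mersenne Twister: statistically excellent output, but predictable
# once enough outputs are observed -- not secure.
mt = random.Random(0).randbytes(100_000)

# A CSPRNG: designed so that inspecting output tells you nothing.
cs = secrets.token_bytes(100_000)

# Both pass the superficial check equally well.
assert abs(ones_fraction(mt) - 0.5) < 0.01
assert abs(ones_fraction(cs) - 0.5) < 0.01
```

The security judgement lives entirely in the analysis of the generator, not in any property visible in its product, which is the analogy the thread is drawing.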

A note - information generating processes

One might look at the output of an LLM and assume that because it appears coherent, the underlying process must also be coherent.

This is the same category error as evaluating a cryptographically broken RNG by glancing at the numbers it emits.

(22)

17 hours ago 9 2 1 0

As we can already observe, the automation story collapses on contact with reality. You cannot replace humans with a system that increases the amount of human cognition required to prevent the system from drifting into nonsense.

(21)

17 hours ago 9 2 1 0

A plausibility engine is not a labour-saving device. It is a cognitive-debt generator. Every output it produces must be given meaning (be understood) and be audited, and the audit is more expensive than the work it pretends to have replaced.

Fundamentally, a closed problem is now an open problem.

(20)

17 hours ago 14 4 1 1

This is why “AI will replace white‑collar work” is structurally incoherent.

White‑collar work is already dominated by specification, verification, reconciliation, exception‑handling, and cross‑checking.

LLMs make all of those harder by producing plausible-but-never-correct outputs.

(19)

17 hours ago 8 2 1 0

Once you introduce a system whose primary objective is to appear correct rather than to be correct, you create a validation problem harder than the original task.

You now have to reconstruct intent, check invariants, and detect mistakes and failures in an artefact generated to be plausible.

(18)

17 hours ago 14 5 1 0

Second section - specification as work

Accountability (or more precisely the lack thereof) is only one issue. The other is correctness. Not “does the output look fine” but “is the output coherent, compatible with the system it enters, with the designer's intent, and with maintaining its function”.

(17)

17 hours ago 8 1 1 0