I think she explains it well too ☺️
Posts by Olivia Guest · Ολίβια Γκεστ
no prob, see: bsky.app/profile/caro...
In a Python REPL, the following code is entered: '🍎' > '🍊'. The result is True.
People say you shouldn't compare apples and oranges but it seems to work fine for me in Python 3.14, I don't see what the issue is...
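(For anyone wondering why the joke works: Python compares strings lexicographically by Unicode code point, so the emoji with the larger code point sorts as "greater". A minimal sketch of what's going on under the hood:)

```python
# Python compares strings element by element, by Unicode code point.
# For single-character strings that's just a comparison of the two code points.
apple, orange = '🍎', '🍊'

print(hex(ord(apple)))   # 0x1f34e  (U+1F34E RED APPLE)
print(hex(ord(orange)))  # 0x1f34a  (U+1F34A TANGERINE)
print(apple > orange)    # True, because 0x1F34E > 0x1F34A
```

So apples really do compare greater than oranges, purely by accident of where the Unicode Consortium put the fruit.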
New blog post: I got tired of having repetitive arguments explaining why I think it’s OK to be skeptical of LLMs for coding, so I wrote six and a half thousand words on the topic that I will be referring people to from now on.
www.b-list.org/weblog/2026/...
> The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory[, which] dashed all hopes of magical solutions[:] progress would be made stepwise [with] persistent, unremitting care[.] So it is with software engineering today.
Agree. Also perpetuates the myth that so-called native speakers write with good grammar and syntax “naturally”. Writing is a skill developed through practice!! There are plenty of “native speakers” who write badly.
Exactly!
Yeah, depressing but needed knowledge
Fluent is better, yes
We use "fluent speaker/signer" not native/non-native or L1/L2. Cos neither map 1-1 onto fluency. Fun fact: a reviewer once recommended I ask a native English speaker to help with my English writing. I wrote back gently reminding the editor that I was actually using British not American English...
Sprinkle this on bsky.app/profile/oliv...
😐
> text composed by a large language model has made its way into an act of parliament. British laws are already being written by AI.
This is the worst of all possible worlds, bloody hell
It's hilarious too because I think almost everybody they replied to is a bilingual scientist who doesn't use LLMs to write.
My stupid thumb and other fingers holding up a Gilly and Billy enamel pin stuck to a red card that says "Hey kids, it's GILLY!"
A violent reminder that Gilly & Billy pins are back in stock and look great on your jean jacket, backpack, or lapel if you’re running for office.
www.mulebooks.com/store/gilly-...
a massive global overview of policies enacted in response to the latest fossil fuel crisis.
Terrifying how many of these will help lock in fossil fuel reliance, worsen climate pollution, suck money from governments and subsidise rich people's overconsumption
www.iea.org/data-and-sta...
Did you notice this parallel? bsky.app/profile/oliv...
So the UK is allowing nonsense AI slop to pass into law...
> Writing a law is not something for which there is a technological solution. It is not a perfectible process, it is a moral act that requires belief and responsibility. It is a process of debate.
It's nice to see a bit of ratioing here because it's a scary thing for academics to trust these systems
I had to sit IGCSE for English as a second language because I didn't meet the bar for English as a first, which was questions about where you learned English lol
See a bit here too bsky.app/profile/oliv...
I really identify with it too hence why I replied like this bsky.app/profile/oliv...
The reason I don't like L1 and L2 isn't the conceptual part you outlined but the testing of it. I never got L1 status on my English as a child due to the way it's operationalized.
In academia? For sure
Literally a star trek episode on this
When we offload translation surrounding novel material, we break the chain of shared novelty and appreciation for the diligence behind it, and therefore reinforce multiple barriers to understanding instead of breaking them down.
And, ironically, it is the act of struggling through human-born translation itself that helps to better explain the final text, both because it is human-to-human centered and because cultural barriers instead become cultural connections and bridges to new novel thoughts.
This creates a feedback loop where the commercial LLM will always cleave output towards previously understood logic, even if this would cause miscommunication, slowing editing at best, on average skewing meaning in lazy ways, and destroying novelty at worst.
Further, because commercial LLMs are rooted in common parlance, whereas academic material is inherently built from outliers that the LLM cannot train on due to their novel nature, the LLM is then tasked with novel, logical output by inference, which is the antithesis of its intended programming.