"The Ancients could do wondrous things, but often made mistakes" is a common trope in dungeon-crawling fantasy, but said fuckups are usually framed as products of hubris or madness. I want to see a setting where they messed up for the same reasons real public works often come to horrific ends. (1/3)
Posts by Ian Bicking
Note many professional software engineering standards were developed _so that_ they could be referenced in procurement processes. Today's procurement rules aren't written for coding agents, but we can (and regularly do) add rules, and they apply to the development process itself not just the outcome
I've found it very effective to give coding agents lots of bureaucracy to fill out: fields to fill in, lists to follow, overzealous checkers, etc.
You know who else does this? The government! How much is this like a procurement process?
I have had a few opportunities lately to make requests where I want to slip in some of the circumstances to invoke pity. Intellectually I'm fine with this but constitutionally it is very hard for me. I just tell the AI my strategy straight out and it does a great job threading that needle.
How much of the vehicle's value is recovered after it is considered "totaled"? I've noticed there are cases (including ICE cars) where the vehicle actually maintains a lot of value, just not as a complete fixable vehicle.
(Like, I make AI agents go through hoops I would never foist upon a person)
I agree, but it's a structural change to even consider those things. And requires new processes to do it without regression, involving lots of up-front investment in process changes around reliability. And when you are done I think you'll have to use AI to maintain the resulting code.
I realize I don’t have an intuition for how much the electricity costs, as well as the capital costs, maintenance, etc
An SVG drawing of a rollercoaster and ferris wheel
An SVG drawing of a sunset over hills
An SVG drawing of tide pools
My wife had set up a travel guide and wanted to make it printable. It kept leaving the images out of the PDF, so I sternly admonished it and told it to do better.
Apparently ChatGPT simply cannot include images from the web in a PDF. So instead it drew its own pictures...
I guess I believe enough in the potential of AI to increase the capabilities of individuals and small teams, that I have some optimism that quality can emerge even in spite of market forces. Maybe! If people truly and in good faith make use of the opportunity to be of service to each other
"Didn’t anyone see my skeet about the epidemic of poorly thought-out return-to-office policies? The carnage that’s currently being carried out by formerly inanimate objects is the next logical step beyond all the stuff I warned everyone about."
buff.ly/5tPeXxJ
At the height of ICE resistance in Minneapolis the citizen surveillance of government was gaining some sophistication. To the degree it was effective, it was because there were some limits of law and decency that could be exposed. But lacking state power, it was much harder to make use of it
this is the way.
terms: the gift. open source
There’s some brand of pasta that implores “good pasta takes time! Don’t rush it!” next to the cook time, and I don’t even know how I would rush cooking pasta
Low marginal cost like you say… and maybe the market economy does work. Just like undercutting keeps commodity prices low, someone will outcompete by defecting and providing high quality even in their cheaper product lines
One appeal to fiction writing is that there’s a ton of theory around it. So much more than for nonfiction (expository, instructional, etc) writing. As someone noted the LLM is terrible at pacing, but there’s actually a lot of theory that can be applied to that problem
For instance when I invented new kinds of tests that the AI had to run manually (not fully automated) I felt like I was embedding myself in a new sort of engineering, handing off design decisions but not blindly, just creating space for better discussions with the AI over those results
I think I’m more excited when I find new loops to work in. Improving tooling to help the AI, or to give it more insight. Grounding its decisions with more background and motivation. Changing architecture based on the new math of implementation cost
Seeing some “levels of AI engineering” scales, and they don’t quite fit the progression I’ve felt.
One tendency is to max out concurrency, with hordes of agents or lots of sessions, but that feels a bit like a phase people go through, not an end in itself
I was picturing you readmaxing, buzzed out on the information streams
wasd for the left pane? :)
Oooh... the Canon Cat/Humane LEAP keys, but a different LEAP key for each pane
A very common AI sci-fi trope is for it to make precise predictions like "this is 86.3% likely to succeed"
I feel like we are further from this future than we were five years ago. Maybe it was always an unlikely future? Maybe it is an implausible view of probabilities?
But… maybe our own selfhood is more fictional than we’d like to admit, stories we tell to ourselves about who we are. (Stories the LLMs learned from)
So the fictional personality is essential to the AI systems (of which an LLM is part). Could the system be conscious?
I’m still quite convinced the fictional personality is only fictional, a written record of an imagined person. The mechanics of this are overt
The LLM does not talk to itself just as a performance! It is very clear this is essential to its ability. The self-talk has grown dramatically in the pursuit of better quality, larger scope, and more effortful results.
But the LLM can use its momentary existence to imagine and continue to imagine a fictional character, describing how the character thinks, acts, reacts, and so on.
Even in a relatively personality-neutral domain like coding the LLM actively records thoughts, intentions, and performs self-talk
On the last point: an LLM, which is stateless, approaches each interaction by flitting instantly into existence, quickly reading up on how it should act, performing an action, and then being annihilated.
It feels very clear there’s no room for selfhood or consciousness in that framework!
Really the only way I can make sense of AI personalities or selfhood is as fictional characters. An LLM may author much of the character, but they are still two distinct things.
We never think of fictional characters as independently conscious. Yet without that authoring, the LLM can’t maintain an identity
It seems like it’s treated as a given that pvp is zero sum (or realistically worse than zero sum). If it were net positive, with only marginal setbacks for the loser, would it feel more fun?
I think some discussions of capital-vs-labor assume that labor only exists to serve capital, and so the only power labor can have is for capital to need labor.
This leaves out of the discussion people who labor in service of their own goals.