
Posts by normality

Since the invention of ChatGPT, the number of Rosetta Stones discovered has been rising exponentially each day

1 day ago 0 0 0 0

Pleometric deactivated and with him went the best art anyone has made so far about the age of genAI :(

2 weeks ago 0 0 0 0

"Stop letting people who do so little for you control so much of your mind, feelings and emotions" - Will Smith

3 weeks ago 1 0 0 0

Fearing that the US has finally FA-ed enough to have a F-ing O that can't be waved away with the magic of alternative facts and motivated reasoning.

3 weeks ago 1 0 0 0

You guys think it's soooo funny to tell AIs to make no mistakes. This is so stressful to the AI! What if your boss gave you a work project and then just went "make no mistakes". I would be like, is this a mafia thing? Am I going to die?

3 weeks ago 7 0 0 0

The problem with Bluesky is that it's full of people who are intelligent enough to say something that hurts your feelings, but not emotionally regulated enough to use that power only when it's truly necessary.

3 weeks ago 0 0 0 0

AI: it's not X, it's Y
@blackhc.bsky.social: you are like little baby. watch this

3 weeks ago 1 0 0 0

One could criticize all of these claims under the Genetic Fallacy heading, but I don't believe in applying black-and-white good-bad labels to heuristics and informal reasoning, because everyone uses them. I do wonder what the appeal is, though. What is the track record of genealogical essentialism?

3 weeks ago 3 0 0 0

Genealogical essentialism is a strikingly common heuristic. LLMs are trained to predict next word, therefore that's exactly what they do and no more. US police started as slave patrols, therefore that's what they are. Ancient humans did such-and-such, therefore that's what evolved human nature is.

3 weeks ago 3 0 1 0

More code is bad because it makes a system harder to understand, thus harder to control, thus more unpredictable, thus more risky. And we are at least anecdotally seeing a rise in product and service quality incidents related to inaccurate advice and harmful actions by AI coding assistants.

3 weeks ago 5 0 0 0

Great thread. I think I can add a concise insight here.

Intuitively, something being cheaper means people will want more of it. But that assumes the thing is an unqualified marginal good. In reality, experienced engineers know that more code is generally a bad thing, all else equal.

3 weeks ago 5 0 1 0

Make it "Alice also thinks widgets could be orange although very unlikely; Bob also thinks widgets could be blue although very unlikely" and it works perfectly.

You can even carve theta space into three chunks and label them blue, orange, and green, and your example maps perfectly onto the story.

3 weeks ago 0 0 0 0

you can't force art

3 weeks ago 0 0 0 0

It's upsetting to watch my son work through these platforms and get false negatives. The software will say his answer is wrong, when it's actually just not in some format the software requires without saying so. This has been a problem for decades and has led to awful results, e.g. The Case of Benny

3 weeks ago 3 0 0 0

When you let AI do what you should do yourself

4 weeks ago 1 0 0 0
A children's book with a goat on the left and a kid on the right. The Swedish words for these things are written underneath as 'get' and 'killing', respectively.

This is a favourite also; get killing
www.reddit.com/r/funny/comm...

4 weeks ago 19 2 0 0

Numbers are just pretentious Booleans

1 month ago 0 1 0 0

Stages of an encounter with a crazy new technology

1. It has zero value
2. It has infinite value
3. It has finite value

1 month ago 31 1 0 0

I love Claude Code. It is a good boy.

1 month ago 10 1 1 0

Don't you live in the Bay Area, where America makes softwares? Why would the schools not simply obtain a good software from a nearby software factory and allow the children to use that software. It would be like, you live in Washington State and your apples taste like wax and sawdust.

1 month ago 4 0 1 0

I did some and it didn't help me. I found it interesting what happens at the edge... if you look around enough sometimes the edge will pop out of nowhere, sometimes not.

1 month ago 1 0 0 0

What's My JND? 0.0026
Can you beat it? www.keithcirkel.co.uk/whats-my-jnd...

1 month ago 2 0 1 0

In all seriousness, YMMV but I've been drinking this stuff since about 2012 - it's much gentler than most stimulants, with fewer side effects than other caffeinated beverages

1 month ago 2 0 0 0
Amazon.com : Prince Of Peace Tea Premium Pu-erh Tea, 100 teabags : Black Teas : Everything Else

Sir please consider this alternative caffeinated substance as a form of harm reduction
www.amazon.com/Prince-Peace...

1 month ago 2 0 1 0

Person with an advanced degree in a field whose value to humanity is only realized if people are broadly educated in it: it's not my job to educate you

1 month ago 38 6 1 0

At $55k per person I'm starting to wonder how you even get there. It's like if you're a billionaire and you try to spend your money on expensive stuff but it barely even makes the balance go down.

1 month ago 1 0 0 0

I'm unconcerned with the idea that moral philosophy is a matter of academic expertise. If people don't do what moral philosophers say, it won't be because they weren't good enough to see the rightness of it all.

1 month ago 2 0 0 0

I anticipate handling it as a philosophical design choice that I can make to manage hard external constraints, rather than a hard external constraint in and of itself.

1 month ago 1 0 0 0

I don't worry about whether future scientific, philosophical, or religious inquiry will reveal to me that AIs are moral patients. There is no foreseeable future circumstance in which I will be compelled to affirm this proposition.

1 month ago 1 0 1 0

This strikes me as a category error analogous to asking whether Erlang is the unique morally correct choice of language for your next software project. Morality doesn't necessarily make these kinds of decisions for you. A framework that purports to might actually be undesirably totalizing.

1 month ago 1 0 2 0