
Posts by goldfish crackers ©

Post image
1 week ago 0 0 0 0
Enterprise AI: Private, Secure, Customizable | Cohere Cohere builds powerful models and AI solutions enabling enterprises to automate processes, empower employees, and turn fragmented data into actionable insights.

You are wrong. There's Cohere (cohere.com). But it's very hard to compete with the U.S. (or Chinese) companies. It's not going to happen, but I would like to see a CERN-like international effort to build open foundation models. If we pooled resources with other middle powers we could compete.

1 week ago 0 0 0 0

I think the lack of long-term memory consolidation is a dealbreaker. A lot of our compatibilist free will comes from the fact that over long timescales we influence the creation of our habits.

1 week ago 4 0 2 0

Defining abstraction via variables might be close to circular, but then you can start getting into Quine and stuff like that.

1 week ago 0 0 0 0

as a noun or as a verb?

re: noun, I feel like the universal generalization or lambda abstraction rules in logic correspond pretty closely to the folk meaning of the term? "a procedure or statement parameterized by (possibly typed) variables"?
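A toy sketch of that reading, in Python (the snippet and names are mine, not from the thread): lambda abstraction turns an open expression with a free variable into a procedure parameterized by that variable, and universal generalization is the logical analogue of a claim holding for an arbitrary value of the parameter.

```python
# Lambda abstraction: the open expression "x + 1" (x free) becomes
# a procedure parameterized by x.
succ = lambda x: x + 1      # abstraction: bind the free variable x
print(succ(41))             # application: substitute 41 for x -> 42

# Universal generalization reads "succ(x) > x" as holding for
# arbitrary x; here we spot-check the parameterized statement
# over a finite domain.
statement = lambda x: succ(x) > x
print(all(statement(x) for x in range(100)))  # -> True
```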

1 week ago 0 0 1 0
Post image
2 weeks ago 2 2 1 0

I'm increasingly in favour of requiring some kind of license to use the word "ontological"

2 weeks ago 1 0 0 0

… also, the emphasis on human judgement creates cost disease which puts the benefits of all of that nuance and human discretion, such as they are, out of reach of poor people.

4 weeks ago 4 0 0 0

i'm a pretty naive outsider to this stuff, but i've always liked legal probabilism because (1) I'm compelled by the comparison to the history of evidence-based medicine, (2) i'm suspicious of lawyers' epistemic intuitions, and (3) algorithmic decision-making is more auditable for bad biases.

4 weeks ago 3 0 1 0
From Turing's Computing Machinery and Intelligence paper: "The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles."

From the man himself

1 month ago 2 0 0 0

A creature that can do anything. Make a machine. Make a machine to make the machine, and that's why we're partnering with Grammarly to build ProseSlop, an AI-powered style coach that will make your communication so clear and effective that you'll never have to notice you're actually using language.

1 month ago 3 0 0 0
A passage from The Hitchhiker's Guide to the Galaxy:

"What's the problem Earthman?" said Zaphod, now transferring his attention to the animal's enormous rump.

"I just don't want to eat an animal that's standing there inviting me to," said Arthur, "It's heartless."

"Better than eating an animal that doesn't want to be eaten," said Zaphod.

"That's not the point," Arthur protested. Then he thought about it for a moment. "Alright," he said, "maybe it is the point. I don't care, I'm not going to think about it now. I'll just... er [...] I think I'll just have a green salad," he muttered.

"May I urge you to consider my liver?" asked the animal, "it must be very rich and tender by now, I've been force-feeding myself for months."

"A green salad," said Arthur emphatically.

"A green salad?" said the animal, rolling his eyes disapprovingly at Arthur.


1 month ago 4 2 0 0

ime so far a lot of the leverage seems to come from a combination of good writing skills, domain expertise and intuition for process/architecture. i'd guess the skills that make people good writers will become more valuable over time.

1 month ago 2 0 0 0

(Oh, shoutout to Andy Clark too)

1 month ago 0 0 0 0

Honestly this whole thing has been like a wrecking ball to most of 20th C. analytic philosophy of language imo. The Churchlands came the closest to getting it right afaict, maybe Quine to the extent he was hostile to the existing attempts to formalize intensional semantics.

1 month ago 0 0 1 0

Yeah as someone with leftwing politics it's really depressing. We're leaving the wealthy and the cultural right to have more-or-less uncontested influence over one of the most politically consequential technologies ever created because we got affectively polarized against computing. Shameful shit.

1 month ago 4 0 0 0
A graph showing AI capability as a function of time, with human performance also shown.


This understates the current situation (the data is out of date) but it gives you some idea.

hai.stanford.edu/ai-index/202...

1 month ago 0 0 0 0

Are your barely trained 22-year-olds better at software engineering than Opus 4.6?

1 month ago 0 0 1 0

This is exactly what I'm talking about. Citation verification is much easier than citation generation.

1 month ago 0 0 0 0

1) Failure rates on many tasks are human-level or better now.
2) The rate of progress so far has been more-or-less in line with the crazier hype scenarios.

1 month ago 1 0 1 0

I think that's almost never true (see P vs. NP in CS and "context of discovery" vs. "context of justification" in philosophy).
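A minimal illustration of that asymmetry in Python (a toy SAT instance of my own, not from the thread): checking a proposed solution is a single linear pass, while finding one may in the worst case mean searching all 2^n assignments — the shape of the P vs. NP distinction.

```python
from itertools import product

# A tiny CNF formula: each clause is a list of (variable_index, polarity)
# literals. (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
clauses = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]

def satisfies(assignment, clauses):
    # Verification: one cheap pass over the clauses.
    return all(any(assignment[v] == pol for v, pol in clause) for clause in clauses)

def find_assignment(clauses, n_vars):
    # Discovery: brute force over all 2^n candidate assignments.
    for bits in product([False, True], repeat=n_vars):
        if satisfies(bits, clauses):
            return bits
    return None

model = find_assignment(clauses, 3)
print(model)                      # -> (False, False, True)
print(satisfies(model, clauses))  # -> True
```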

1 month ago 0 0 1 0

Sure, tech amplifies capability, and depending on how that capability is used the results will be good or bad.

I'm just responding to your comment about usefulness. AI is extremely useful to both good and bad actors alike. I don't think anyone knows what the net benefit will end up being.

1 month ago 0 0 1 0

imo what you are seeing is a combination of the "radioactive toothpaste" effect (where people capitalize on hype to sell shitty products) and capability misjudgement driven by the pace of progress. You should expect this kind of thing even (especially) if the technology is genuinely revolutionary.

1 month ago 4 0 1 0

I've mostly given up trying to get people to understand that SWE isn't exceptional wrt automation susceptibility. They’ll find out soon enough anyway.

1 month ago 6 0 0 0
Post image
1 month ago 1 0 0 0
Decidability of first-order theories of the real numbers - Wikipedia

Even cooler imo: it matters which number system you use. Arithmetic over the real numbers is decidable, unlike over the natural numbers*. Naively you might expect reals to be at least as difficult to decide as the naturals, but it's not true.

*terms and conditions apply
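A toy Python sketch of the contrast (my own construction, not from the post): over the reals, Tarski-style quantifier elimination reduces "does ax² + bx + c = 0 have a solution?" to a quantifier-free condition on the coefficients (the discriminant test), while over the naturals there is no uniform decision method for arbitrary Diophantine equations, so a search bound here is ad hoc.

```python
def has_real_root(a, b, c):
    # Over R, "exists x: a*x^2 + b*x + c == 0" reduces to a
    # quantifier-free condition on the coefficients.
    if a == 0:
        return b != 0 or c == 0
    return b * b - 4 * a * c >= 0

def has_natural_root(a, b, c, bound=10_000):
    # Over N no analogous general procedure exists (Matiyasevich);
    # for this toy quadratic we just search a finite, ad hoc range.
    return any(a * x * x + b * x + c == 0 for x in range(bound))

print(has_real_root(1, 0, -2))     # x^2 = 2: True over the reals
print(has_natural_root(1, 0, -2))  # no natural solution: False
```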

1 month ago 4 1 0 0
Learning to Represent: Mathematics-first accounts of representation and their relation to natural language - PhilSci-Archive

philsci-archive.pitt.edu/23224/

1 month ago 6 0 0 0

I think it's partly that people are generally pretty terrible at thinking about representation. Often mathematical representation is unfavourably contrasted with some unspecified "real" kind of representation, which in practice just ends up being linguistic representation.

1 month ago 9 0 1 0

I basically agree with this. I do worry about what happens when you make some but not all parts of the legal system orders of magnitude more efficient. If the cost of litigation drops enormously but we're still nervous about AI on the adjudication side, presumably a lot of things will break.

1 month ago 0 0 0 0

there is going to be a major incident

1 month ago 1 0 0 0