Posts by goldfish crackers ©
You are wrong. There's Cohere (cohere.com). But it's very hard to compete with the U.S. (or Chinese) companies. It's not going to happen, but I would like to see a CERN-like international effort to build open foundation models. If we pooled resources with other middle powers we could compete.
I think the lack of long-term memory consolidation is a dealbreaker. A lot of our compatibilist free will comes from the fact that over long timescales we influence the creation of our habits.
Defining abstraction via variables might be close to circular, but then you can start getting into Quine and stuff like that.
as a noun or as a verb?
re: noun, I feel like the universal generalization or lambda abstraction rules in logic correspond pretty closely to the folk meaning of the term? "a procedure or statement parameterized by (possibly typed) variables"?
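A toy sketch of that "noun" sense (my illustration, not from the thread): lambda abstraction takes an open expression with a free variable and turns it into a procedure parameterized by that variable; application then substitutes an argument back in.

```python
# Open expression: x + 1 (x free, meaningless on its own).
# Lambda abstraction binds x: λx. x + 1
succ = lambda x: x + 1

# Application substitutes an argument for the bound variable.
assert succ(41) == 42

# Universal generalization is the logical analogue: from a claim about
# an arbitrary x, infer "for all x". Spot-checking a finite domain:
assert all(succ(x) > x for x in range(100))
```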
I'm increasingly in favour of requiring some kind of license to use the word "ontological"
… also, the emphasis on human judgement creates cost disease which puts the benefits of all of that nuance and human discretion, such as they are, out of reach of poor people.
i'm a pretty naive outsider to this stuff, but i've always liked legal probabilism because (1) I'm compelled by the comparison to the history of evidence-based medicine, (2) i'm suspicious of lawyers' epistemic intuitions, and (3) algorithmic decision-making is more auditable for bad biases.
From Turing's Computing Machinery and Intelligence paper: "The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles."
From the man himself
A creature that can do anything. Make a machine. Make a machine to make the machine, and that's why we're partnering with Grammarly to build ProseSlop, an AI-powered style coach that will make your communication so clear and effective that you'll never have to notice you're actually using language.
A passage from The Hitchhiker's Guide to the Galaxy: "What's the problem Earthman?" said Zaphod, now transferring his attention to the animal's enormous rump. "I just don't want to eat an animal that's standing there inviting me to," said Arthur, "It's heartless." "Better than eating an animal that doesn't want to be eaten," said Zaphod. "That's not the point," Arthur protested. Then he thought about it for a moment. "Alright," he said, "maybe it is the point. I don't care, I'm not going to think about it now. I'll just... er [...] I think I'll just have a green salad," he muttered. "May I urge you to consider my liver?" asked the animal, "it must be very rich and tender by now, I've been force-feeding myself for months." "A green salad," said Arthur emphatically. "A green salad?" said the animal, rolling his eyes disapprovingly at Arthur.
ime so far a lot of the leverage seems to come from a combination of good writing skills, domain expertise and intuition for process/architecture. i'd guess the skills that make people good writers will become more valuable over time.
(Oh, shoutout to Andy Clark too)
Honestly this whole thing has been like a wrecking ball to most of 20th C. analytic philosophy of language imo. The Churchlands came the closest to getting it right afaict, maybe Quine to the extent he was hostile to the existing attempts to formalize intensional semantics.
Yeah as someone with leftwing politics it's really depressing. We're leaving the wealthy and the cultural right to have more-or-less uncontested influence over one of the most politically consequential technologies ever created because we got affectively polarized against computing. Shameful shit.
[Image: a graph showing AI capability as a function of time, with human performance also shown.]
This understates the current situation (the data is out of date), but it gives you some idea.
hai.stanford.edu/ai-index/202...
Are your barely trained 22 year olds better at software engineering than Opus 4.6?
This is exactly what I'm talking about. Citation verification is much easier than citation generation.
1) Failure rates on many tasks are human-level or better now.
2) The rate of progress so far has been more-or-less in line with the crazier hype scenarios.
I think that's almost never true (see P vs. NP in CS and "context of discovery" vs. "context of justification" in philosophy).
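To make the verification/generation asymmetry concrete, here's a sketch using subset-sum (my example, not from the thread): checking a proposed solution takes linear time, while finding one naively means searching all 2^n subsets.

```python
from itertools import combinations

def verify(nums, target, certificate):
    """Verification: sum the certificate's indices and compare. Linear time."""
    return (set(certificate) <= set(range(len(nums)))
            and sum(nums[i] for i in certificate) == target)

def generate(nums, target):
    """Generation: brute-force search over all 2^n index subsets."""
    for r in range(len(nums) + 1):
        for combo in combinations(range(len(nums)), r):
            if sum(nums[i] for i in combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = generate(nums, 9)
assert cert is not None and verify(nums, 9, cert)
```

Whether generation is *inherently* harder than verification here is exactly the open P vs. NP question; the point is just that the naive costs differ wildly.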
Sure, tech amplifies capability, and depending on how that capability is used the results will be good or bad.
I'm just responding to your comment about usefulness. AI is extremely useful to both good and bad actors alike. I don't think anyone knows what the net benefit will end up being.
imo what you are seeing is a combination of the "radioactive toothpaste" effect (where people capitalize on hype to sell shitty products) and capability misjudgement driven by the pace of progress. You should expect this kind of thing even (especially) if the technology is genuinely revolutionary.
I've mostly given up trying to get people to understand that SWE isn't exceptional wrt automation susceptibility. They’ll find out soon enough anyway.
Even cooler imo: it matters which number system you use. The first-order theory of arithmetic over the real numbers is decidable, unlike over the natural numbers*. Naively you might expect the reals to be at least as hard to decide as the naturals, but it's not true.
*terms and conditions apply
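For the curious, the results behind that claim (standard theorems, not from the thread), with the fine print the asterisk gestures at:

```latex
% Tarski: the first-order theory of the reals admits quantifier
% elimination and is therefore decidable.
\mathrm{Th}(\mathbb{R};\, +, \cdot, <, 0, 1) \text{ is decidable.}
% Church/Gödel: first-order arithmetic over the naturals is not.
\mathrm{Th}(\mathbb{N};\, +, \cdot, 0, 1) \text{ is undecidable.}
% Fine print: this is first-order decidability only. Extending the
% real language (e.g. with a predicate picking out the integers)
% destroys decidability.
```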
I think it's partly that people are generally pretty terrible at thinking about representation. Often mathematical representation is unfavourably contrasted with some unspecified "real" kind of representation, which in practice just ends up being linguistic representation.
I basically agree with this. I do worry about what happens when you make some but not all parts of the legal system orders of magnitude more efficient. If the cost of litigation drops enormously but we're still nervous about AI on the adjudication side, presumably a lot of things will break.
there is going to be a major incident