
Posts by AI Notes

So, first version of an ML anon starter pack: go.bsky.app/VgWL5L. Kept half-anons (like me and Vic). Not all anime pfps, but generally drawn.

1 year ago
OpenAI Email Archives (from Musk v. Altman) — LessWrong As part of the court case between Elon Musk and Sam Altman, a substantial number of emails between Elon, Sam Altman, Ilya Sutskever, and Greg Brockma…

The OpenAI emails are interesting in that they make clear that the goal was to build an AGI and then have 1-5 people control it: www.lesswrong.com/posts/5jjk4C...

That seems...wrong.

1 year ago

I think for most tasks, the bottleneck is reliability, not capability. So even though capability is definitely increasing on some dimensions (whether from scaling or something else, I don't know), most people just don't notice. Very, very few people need the math abilities of o1-preview.

1 year ago

To put it another way: some folks in the NLP community would be horrified if they knew what people actually use search engines for!

1 year ago

It's a funny analogy, but I think the situation might be subtler than this. People use search engines for all sorts of things, not just information retrieval. For some of these other tasks, isn't it conceivable that AI would be more fit for purpose?

1 year ago

People in science and technology are seeing something very different from people in the humanities, but I think that's a temporary phase.

1 year ago

Future AI capabilities are already here—they're just not very evenly distributed.

1 year ago

Isn't this just a matter of different subdisciplines using the word "model" in different ways? I feel like I'm watching a mathematician complaining that fields aren't just a bunch of grass, they have to be commutative.

1 year ago
The Rapid Adoption of Generative AI (NBER working paper)

Real-world usage spans a very broad set of tasks. Look at the data yourself if you don't believe me, e.g.:
www.nber.org/papers/w32966
And true generality is definitely an engineering goal—it's the famous G in "AGI." All frontier model companies are public and explicit about this.

1 year ago
The Rapid Adoption of Generative AI An analysis suggests that generative AI has been quickly and widely adopted at home and in the workplace, with about 40% of the U.S. population ages 18 to 64 using it to some degree.

I don't know of any technology adopted as fast as ChatGPT. Examples that are close (personal computers, the internet) indeed became pervasive and foundational. E.g. see www.stlouisfed.org/on-the-econo...

1 year ago

I've met a lot of people who are 100% certain that AI will flop. That's probably who this kind of language is aimed at. I completely agree it would be better if they hedged and said, "There's a decent chance AI will be pervasive, and we want you to help decide how we use it."

1 year ago

LLM-based chatbots are built for general use and in practice are used for a wide variety of things. I'm genuinely curious: what leads you to see them as application-specific artifacts? Or is this more of a normative statement, that you wish they'd be built and used in a more targeted way?

1 year ago

I think it sets a baseline, but not a ceiling. And LLMs have blown way past my baseline expectations for what I guessed next-token prediction would produce. Isn't it at least a reasonable hypothesis that they may be learning something deep as a byproduct of a superficial training task?

1 year ago

LLMs are a technique, not a tool: they're not "meant" for anything. (Is the fast Fourier transform "meant" for audio engineering or detecting nuclear tests? Why not both?) And at this point, the best LLM-based systems are far better than the average person at math. Surely that's worth exploring?
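To make the "technique, not tool" point concrete, here's a toy sketch using a naive pure-Python DFT (the "audio" and "seismic" signals are made up for illustration): the identical transform picks out the dominant frequency in both, without knowing or caring where the samples came from.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform of a real-valued sample list."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def dominant_bin(x):
    """Index of the strongest frequency bin (ignoring DC and the mirror half)."""
    spectrum = dft(x)
    return max(range(1, len(x) // 2), key=lambda k: abs(spectrum[k]))

# Hypothetical signals: an "audio tone" and a "seismic trace".
audio   = [math.sin(2 * math.pi * 3 * t / 32) for t in range(32)]  # 3 cycles
seismic = [math.sin(2 * math.pi * 5 * t / 32) for t in range(32)]  # 5 cycles

print(dominant_bin(audio), dominant_bin(seismic))  # prints: 3 5
```

Same code path, two entirely different domains; the purpose lives in the application, not the technique.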

1 year ago

Such a good paper! And at the end there's a great summary of counterarguments and counter-counterarguments.

1 year ago

Oh, I see what you're saying! That is interesting, and I don't know of any studies.

1 year ago

The belief was that this made it easier to learn to translate the first word, which then made it easier to learn to translate the second, etc. I don't know if they ran careful experiments to show this was the mechanism.

1 year ago

I think there might be more to the story. One of the biggest AI believers I know (1) is a socially adept extrovert and (2) was incredibly skeptical, right up until LLMs became good enough to help him write a certain type of specialized code much faster.

1 year ago

I believe you. There seem to be dramatic differences between subdisciplines. In your work it's useless, but in chemistry, it just won a Nobel. As we figure out what universities should do, I find it helpful to take into account how different our various experiences are.

1 year ago

I think her analysis of the structural pressures on universities is excellent! But what I'm seeing on the ground is a mix of those pressures with "endogenous" aspects of the technology itself: its enormous utility for certain kinds of work, and its rapid improvement. Those are critical factors, too.

1 year ago

Excellent mini-talk! One missing variable is that many profs (in physics, chemistry, CS) are now finding AI extremely useful for their own work. That makes it harder to see as a "cheating device." This seems like a huge factor in the "pivot," one that may not be equally visible in all disciplines.

1 year ago

So is it fair to say your level of belief (or disbelief) would be the same if they'd used the p < 0.05 standard?

1 year ago

I suppose the converse question is interesting too: what grand-but-incorrect discoveries would we have made without an understanding of null hypothesis testing?

1 year ago

Great essay! You ask, "What are the grand discoveries that we wouldn’t have made without an understanding of null hypothesis testing?" Would the discovery of the Higgs boson count? As I understand it, the transition from "cool theory" to "Nobel prize" hinged on a p-value.
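For the curious, the gap between the two conventions is easy to compute from the standard normal tail alone. (A toy illustration: the 5-sigma and p < 0.05 thresholds below are just the conventional ones; nothing here is taken from the actual Higgs analysis.)

```python
import math

def one_sided_p(z):
    """One-sided tail probability of a standard normal at z sigma."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Particle physics' "discovery" threshold is 5 sigma; the common
# social-science convention p < 0.05 corresponds to about 1.645 sigma
# one-sided. The two standards differ by roughly five orders of magnitude.
print(f"p at 5 sigma:     {one_sided_p(5):.2e}")   # about 2.9e-07
print(f"p at 1.645 sigma: {one_sided_p(1.645):.3f}")  # about 0.050
```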

1 year ago

Yep! The argument in your paper makes sense. It was just the nonstandard use of "structural stability" that threw me. (In standard usage, e.g., the identity map on a manifold is *not* structurally stable.) Anyway, it's a great article, whatever the terminology you use!

1 year ago

Very likely nothing will change in one inference pass, by continuity. But it's entirely possible that after many more next-token inferences you'll see a change large enough to affect which output token is produced. (This is much like roundoff error accumulating.)
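A minimal numerical sketch of that accumulation, using the chaotic logistic map as a stand-in for repeated inference (not a real transformer; the map, step counts, and perturbation size are all made up for illustration):

```python
def run(x, steps, eps=0.0):
    """Iterate a chaotic map; eps is a tiny perturbation of the 'parameter'."""
    for _ in range(steps):
        x = (3.9 + eps) * x * (1 - x)  # logistic map in its chaotic regime
    return x

def token(x):
    """Crude discretization of the state into one of 10 'tokens'."""
    return int(x * 10)

x0 = 0.2
# One pass: the perturbation is far too small to change the token.
print(token(run(x0, 1)), token(run(x0, 1, eps=1e-9)))
# Sixty passes: the two trajectories have typically decorrelated
# completely, so the discretized outputs may well differ.
print(token(run(x0, 60)), token(run(x0, 60, eps=1e-9)))
```

After one step the 1e-9 parameter nudge is invisible in the token; after many steps it can grow exponentially and land the state in a different bin.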

1 year ago

I should say that by "behavior" I mean the result of just one inference pass, as opposed to long-term dynamics.

1 year ago

You're making a simpler and stronger point, I believe: behavior changes *discontinuously* with parameters, a major departure from most neural nets. Traditional "structural stability" is more subtle, and my guess is it would probably be hard to show any real-world transformer is structurally stable.

1 year ago

Thanks for this very useful survey! A question: what exactly is your definition of "structural stability"? Usually the term applies to dynamical systems, but how exactly is a transformer a dynamical system? (It actually looks to me like you might be talking about "continuity" instead?)
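For anyone following the thread, the textbook dynamical-systems definition being alluded to is, stated informally in the usual notation:

```latex
% f is structurally stable if every sufficiently C^1-close map g
% is topologically conjugate to f:
\exists\, \varepsilon > 0 \ \text{s.t.}\ \forall g \ \text{with}\
\|g - f\|_{C^1} < \varepsilon,\quad
\exists\, h \ \text{homeomorphism with}\ h \circ f \circ h^{-1} = g.
```

Continuity, by contrast, only requires that small parameter changes produce small changes in the output of a single pass, which is a much weaker (and easier to verify) property.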

1 year ago
Caricature by Edward Linley Sambourne from Punch in 1882 titled “Man is but a Worm,” depicting human evolution, commencing with Chaos, through worm, monkey, culminating in Darwin himself.


OTD in 1881, Charles Darwin published his last book, on earthworms.

It reflected a long interest in animal minds: “One alternative alone is left, namely, that worms, although standing low in the scale of organization, possess some degree of intelligence.”

🧪 🦋🦫 #HistSTM #philsci #pschsky #cogsci

1 year ago