
Posts by Sea Person

The people I follow on 𝕏 share way more derpy Leftist outrage-bait from Bluesky than the people I follow on Bluesky. There's no symmetry w.r.t. derpy right-wing outrage-bait either: 𝕏 has full-spectrum derp supremacy.

11 months ago 0 0 0 0

Email bombing? I was email bombed recently by identity thieves trying to prevent me from noticing the emails pertaining to their fraudulent credit card application in my name. Most of the emails seemed to be from legit mailing lists.

1 year ago 1 0 1 0

"Never interrupt your enemy when he is making a mistake," I holler in the face of my enemy who is making a mistake while my buddies all point and laugh at the mistake he’s making.

1 year ago 0 0 0 0

And they're right.

1 year ago 3 2 0 0

IMO the deeper ground truth is "What works pragmatically to solve collective action problems?" Moral intuitions are adaptive for that, but so is deference to norms, and it's easier to have shared norms if they can be derived from simple principles in a transparent way.

1 year ago 1 0 0 0

Consistency and legibility are desirable because morality is an evolved mechanism for solving collective action problems. That's easier when allies can assume common knowledge of norms. Relying on opaque, highly context-dependent moral intuitions leaves more hiding places for defection to evolve.

1 year ago 4 0 0 1

A contrived way to do this would be to take a union of discrete log and SAT, weighted so that most instances are discrete log. E.g., say the 2nd half of the input is interpreted as a SAT problem if the 1st half is all 0s, and as discrete log otherwise. Unenlightening, but it's at least an existence proof.
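
A toy sketch of that dispatch (the bit decodings and brute-force "solvers" here are made-up placeholders, not a real reduction; the point is only the biased routing):

```python
from itertools import product

def sat_branch(bits):
    # Decode a 3-variable CNF: 6 bits per clause, 2 bits per variable
    # ("01" = positive literal, "10" = negated, anything else = absent).
    clauses = []
    for i in range(0, len(bits) - 5, 6):
        clause = []
        for v in range(3):
            pair = bits[i + 2 * v : i + 2 * v + 2]
            if pair == "01":
                clause.append(v + 1)
            elif pair == "10":
                clause.append(-(v + 1))
        if clause:
            clauses.append(clause)
    # Brute-force all 2^3 assignments.
    return any(
        all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for a in product([False, True], repeat=3)
    )

def dlog_branch(bits):
    # Decode (g, h) mod a small fixed prime; brute-force whether
    # g^x = h (mod p) is solvable.
    p, half = 251, len(bits) // 2
    g = int(bits[:half] or "0", 2) % (p - 2) + 2
    h = int(bits[half:] or "0", 2) % (p - 1) + 1
    return any(pow(g, x, p) == h for x in range(p))

def union_instance(bits):
    # The weighting: only the all-zeros prefix (1 in 2^(n/2) of inputs)
    # routes to SAT; everything else routes to discrete log.
    n = len(bits) // 2
    first, second = bits[:n], bits[n:]
    return sat_branch(second) if set(first) <= {"0"} else dlog_branch(second)

print(union_instance("00000000" + "01000000"))  # SAT branch: clause (x1) -> True
print(union_instance("11111111" + "01000000"))  # dlog branch: 6^0 = 1 (mod 251) -> True
```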

1 year ago 3 0 1 0
Screen shot of a gallery image with "previous" button partially obscuring text, with a red arrow and the word "DIE!" pointing at it

Why did every UI designer simultaneously decide that every image gallery/video should have transparent controls partially obscuring the images/video? I hate it.

1 year ago 1 0 0 0

... "We have to get a back-up planet set up because killer asteroids," as if that couldn't wait until after we have the tech for Dyson spheres or whatever a more advanced civilization would do with our solar system.

1 year ago 1 0 0 0

Or maybe it's not entirely the people involved consciously wanting prestige, but evolution itself, which rewards us for exploration and prestigious achievement with feelings of wonder and romance, but also causes us to confabulate irrational motives for doing those things, like ...

1 year ago 0 0 1 0

Yes. I haven't seen any sensible justification for e.g. a manned Mars mission with pre-Singularity tech. I think people just want the prestige of being one of the first n people on Mars for small n, or of contributing to that achievement in some way. Or they want to admire people who do that.

1 year ago 2 0 2 0

For all that, I still put *some* nontrivial credence on another AI winter, but I'm baffled that so many people predict it with confidence. Why couldn't the current trend just keep going until it blows right past human-level AGI?

1 year ago 0 0 0 0
My view on this has not changed in the past eight years: I have given many talks and wrote a position paper in 2019 (link below). Progress is faster than my past expectation. My target date used to be ~2029 back then; now it is 2026 for a superhuman AI mathematician. While a stretch, even 2025 is possible.

https://drive.google.com/file/d/1RucT6EMVtMnmmuROBimudZKneasyRRFA

People with inside knowledge of the frontier labs, like Ilya Sutskever, Leopold Aschenbrenner, and Christian Szegedy, keep telling us we'll have further amazing progress in the next few years. These seem like serious people who wouldn't just bullshit us about that.
x.com/ChrSzegedy/s...

1 year ago 0 0 1 0

Progress on reasoning seems good, judging by recent work on coding agents, the Reuters article above, and DeepMind's AlphaProof IMO results. Progress in robotics seems good and will be a source of new training data.

1 year ago 0 0 1 0

Why tho? What's the obstacle that's going to stop the rapid progress we've seen over the past decade? AFAICT there's still plenty of room to spend more and use better hardware for bigger training runs. Maybe we're approaching a training data wall, but I doubt we're optimally using the data we have …

1 year ago 0 0 1 0
Strawberry has similarities to a method developed at Stanford in 2022 called "Self-Taught Reasoner" or "STaR", one of the sources with knowledge of the matter said. STaR enables AI models to "bootstrap" themselves into higher intelligence levels via iteratively creating their own training data, and in theory could be used to get language models to transcend human-level intelligence, one of its creators, Stanford professor Noah Goodman, told Reuters.
"I think that is both exciting and terrifying…if things keep going in that direction we have some serious things to think about as humans," Goodman said. Goodman is not affiliated with OpenAI and is not familiar with Strawberry.
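
For anyone curious, the loop from the 2022 STaR paper is roughly: sample a rationale, keep it when the answer checks out, otherwise regenerate with the correct answer as a hint ("rationalization"), then fine-tune on what you kept and repeat. A toy sketch (the model, `generate`, and `finetune` here are invented stand-ins, not any real API):

```python
def generate(model, problem, hint=None):
    # Return a (rationale, answer) pair. A real LLM would sample a chain
    # of thought; given a hint it "rationalizes" toward the known answer.
    answer = hint if hint is not None else model.get(problem, "?")
    return f"because ({problem})", answer

def finetune(model, dataset):
    # Toy fine-tune: absorb problem -> answer from the kept rationales.
    new_model = dict(model)
    for problem, _rationale, answer in dataset:
        new_model[problem] = answer
    return new_model

def star_round(model, problems, answers):
    dataset = []
    for problem, answer in zip(problems, answers):
        rationale, guess = generate(model, problem)
        if guess == answer:
            # Keep self-generated reasoning that reached the right answer.
            dataset.append((problem, rationale, answer))
        else:
            # Rationalization: retry with the true answer as a hint, so
            # failures still yield usable self-generated training data.
            rationale, _ = generate(model, problem, hint=answer)
            dataset.append((problem, rationale, answer))
    return finetune(model, dataset)  # train on the model's own outputs

model = {}  # toy "model": a dict memorizing problem -> answer
for _ in range(3):  # each round bootstraps on the previous round's model
    model = star_round(model, ["2+2", "3*3"], ["4", "9"])
print(model)  # {'2+2': '4', '3*3': '9'}
```

The "bootstrapping" is just that each round's training data comes from the previous round's model, which is why in principle it isn't capped at the quality of any human-written corpus.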

I want more people to take seriously the possibility that we will soon have AIs that are useful in *exactly* the ways human artists and scientists are useful, and much, much more besides. I find the implications staggering and terrifying.

www.reuters.com/technology/a...

1 year ago 0 0 1 0