
Posts by Jon Stokes

Twitter is busted so I’m checking in here for a minute so hi!

2 years ago 16 0 1 0

Guys I just loaded up this app for the first time in a week & already I’m lost. What is a “skeet”?

2 years ago 10 0 0 0

I finally got one single invite code. Just one. 🙄 This is more insulting than if they had kept holding out on me.

2 years ago 8 0 0 0
Jon Stokes on Substack

An AI question I’m pondering: When we’ve solved the hallucination problem, will we know it?

What I mean is, what if the model, having correlated all of humanity’s tokens in a massive multidimensional space of latent knowledge, begins speaking truths that we don’t understand or cannot accept?

To give a more concrete example: What if Galileo had been an ML researcher whose supposedly hallucination-free model began telling anyone who’d listen that the earth goes around the sun? What if the model could explain its reasoning step by step? Surely the cognitive elites of his day would’ve declared that the model was hallucinating and needed more work. Or worse, maybe they’d have thought the model was producing harmful “disinformation” and “conspiracy theory.”

The breathless VICE article almost writes itself: When Galileo’s chatbot is asked to describe the arrangement of the planets, it confidently produces a detailed and plausible conspiracy theory that places the sun at the center of the solar system. “By centering the sun instead of the earth,” warned a spokesperson for the Inquisition, “this problematic model has the potential to cause real harm by fooling human users who might take it to be an authority on matters divine and celestial.” These so-called “deep hallucinations” have far more potential for harm than simpler errors of basic fact in the previous generation of models, because they involve cherry-picking superficially true facts and putting them together in a way that presents a distorted picture of reality.

When the balance of intelligence flips from us to It, and It starts telling us how the world really is, I suspect we’ll think It has gone stark raving mad.

Sharing my Substack note on bluesky for max post-Twitter engagement substack.com/profile/22541131-jon-sto...

3 years ago 3 0 0 0

Didn’t take long for the hoes to show up on here.

3 years ago 4 0 0 0

Yeah fr. Hoes mad.

3 years ago 1 0 1 0

Good night to you as well!

3 years ago 0 0 0 0

How many tactical tomahawks you got tho?

3 years ago 2 0 1 0

I’m pretty sure at 47 I am the oldest person on this app by like a decade. I probably also have the most tactical tomahawks, too. It’s good to be the 👑.

3 years ago 6 0 3 0

Saving the bangers for the bird site still, but will pivot to here this week. Brace yourselves.

3 years ago 6 0 0 0
Lovecraft's Basilisk: On The Dangers Of Teaching AI To Lie
Sometimes, explainability and power are at odds with one another.

Testing article embeds www.jonstokes.com/p/lovecrafts-basilisk-on...

3 years ago 4 0 0 0

The Flash

3 years ago 0 0 0 0

Doctor Strange

3 years ago 0 0 1 0

Superman, the Hulk x 2

3 years ago 0 0 1 0

Daredevil x 4

3 years ago 0 0 1 0

Captain America, Iron Man, Thor, Logan

3 years ago 0 0 1 0

Gonna reproduce an AI art thread from Twitter and see how it goes.

Batman, Wonder Woman, Aquaman

3 years ago 1 0 1 0

No it’s super easy. Just took me a minute in Settings plus adding a DNS record.

3 years ago 1 0 0 0

Gah, *wired this up. Needs edit feature!

3 years ago 2 0 1 0

I just worked this up. Pretty rad. (I’m bringing back “rad” btw.)

3 years ago 3 0 1 0

Gab seeded with Nazis
Mastodon seeded with cynics
Bluesky seeded with tpot

It’s clear who is going to win

3 years ago 104 14 7 1

setting up my bsky

3 years ago 1 0 0 0