
Posts by Matt Beane

Computer-mediated carcinisation

11 months ago 1 0 1 0

This includes many of my papers, too. The point I am making is that the findings in careful academic research likely represent a lower bound of AI capabilities at this point.

11 months ago 51 4 3 1

I can’t

i just …

i can’t

www.404media.co/anthropic-cl...

1 year ago 1062 322 35 91

I bet if someone *has* succeeded, it's via spinning up an elicitation-GPT that just drilled you for critical intel, wouldn't let you weasel out via under/overspecified output, then dumped it all back to you in standardized format so you could think faster - basically exporting your extraction algo.

1 year ago 1 0 0 0

Exactly. If we overheard Dario, Sam, and Demis chatting about certain well-known AI critics, I'd be willing to bet they'd be expressing gratitude. Proving a grouch wrong is a real motivator.

1 year ago 0 0 0 0

Hi Everyone!

We're hosting our Wharton AI and the Future of Work Conference on 5/21-22. Last year was a great event with some of the top papers on AI and work.

Paper submission deadline is 3/3. Come join us! Submit papers here: forms.gle/ozJ5xEaktXDE...

1 year ago 16 15 2 2

Exciting new hobby project in the offing related to AI and skill. Involves a childhood passion, a wild leap into the unknown, made real via an order from Amazon just now. Will be 100% cool, I will be documenting things, sharing eventually. Feels like April 2023 again!

1 year ago 2 0 0 0

Silo is so good. Just superb. This generation's answer to the BSG remake.

1 year ago 2 0 0 0

My hobby horse. You can simulate a rocket all you want, and use more energy on computation than the actual rocket would, but you won't get to orbit until you ignite rocket fuel. What if all the energy we are spending on simulating learning is not the juice we really need to make intelligence?

1 year ago 58 11 8 0

Here's my end-of-year review of things we learned about LLMs in 2024 - we learned a LOT of things simonwillison.net/2024/Dec/31/...

Table of contents:

    The GPT-4 barrier was comprehensively broken
    Some of those GPT-4 models run on my laptop
    LLM prices crashed, thanks to competition and increased efficiency
    Multimodal vision is common, audio and video are starting to emerge
    Voice and live camera mode are science fiction come to life
    Prompt driven app generation is a commodity already
    Universal access to the best models lasted for just a few short months
    “Agents” still haven’t really happened yet
    Evals really matter
    Apple Intelligence is bad, Apple’s MLX library is excellent
    The rise of inference-scaling “reasoning” models
    Was the best currently available LLM trained in China for less than $6m?
    The environmental impact got better
    The environmental impact got much, much worse
    The year of slop
    Synthetic training data works great
    LLMs somehow got even harder to use
    Knowledge is incredibly unevenly distributed
    LLMs need better criticism
    Everything tagged “llms” on my blog in 2024

1 year ago 651 149 28 46

In 2024 we learned a lot about how AI is impacting work. People report that they're saving 30 minutes a day using AI (aka.ms/nfw2024), and randomized controlled trials reveal they’re creating 10% more documents, reading 11% fewer e-mails, and spending 4% less time on e-mail (aka.ms/productivity...).

1 year ago 16 4 1 0

Independent evaluations of OpenAI’s o3 suggest that it passed math & reasoning benchmarks that were previously considered far out of reach for AI, including achieving a score on ARC-AGI that was associated with actually achieving AGI (though the creators of the benchmark don’t think o3 is AGI)

1 year ago 141 30 13 8

Just *one* of the reasons that Blindsight was ahead of its time. Way ahead.

1 year ago 1 0 1 0

Massive congrats!! So excited to check it out.

1 year ago 3 0 3 1

Wow!

1 year ago 0 0 0 0

Join me by the fireside this Friday with Matt Beane as we dive into one of today’s biggest workforce challenges: upskilling at scale. 📈

Link below to hear the full discussion on Friday, December 13 at 11 am EST!

linktr.ee/RitaMcGrath

@mattbeane.bsky.social

1 year ago 4 2 1 0

I propose a workshop.

Most engineers/CS working on AI presume away well established, profound brakes on AI diffusion.

Most social scientists presume away how AI use could reshape those brakes.

Let's gather these groups, examine these brakes 1-by-1, make grounded predictions.

1 year ago 2 0 0 0

Models like o1 suggest that people won’t generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed

Most folks don’t regularly have a lot of tasks that bump up against the limits of human intelligence, so won’t see it

1 year ago 155 26 8 2

Grateful for the opportunity to visit and learn from the professionals at the L&DI conference. And very glad to hear you found my talk so valuable, Garth! Means a lot.

1 year ago 1 1 2 0

I made an HRI Starter Pack!

If you are a Human-Robot Interaction or Social Robotics researcher and I missed you while scrolling through bsky's suggestions, just ping me and I'll add ya.

go.bsky.app/CsnNn3s

1 year ago 42 14 11 2
The Avatar Economy: Are remote workers the brains inside tomorrow’s robots?

Wrote a little something on this in 2012, though I didn't anticipate the main reason for hiring such workers - training data.

www.technologyreview.com/2012/07/18/1...

1 year ago 1 0 0 0

Ohmydeargod.

1 year ago 0 0 0 0

David Meyer (v.) /ˈdeɪvɪd ˈmaɪ.ər/

To attribute complex, intentional design or deeper meaning to simple emergent behaviors of large language models, especially when such behaviors are more likely explained by straightforward technical constraints or training artifacts.

1 year ago 2 0 0 0

They did NOT. Wow. Sign of the times.

And I can verify on your rule! I was so flabbergasted and honored. Your feedback was rich and so helpful. Remain grateful.

1 year ago 1 0 0 0

I remember *treasuring* the previews. I'd fight to get there on time. Was part of the thrill.

But ads? F*ck that noise. Seriously, straight up evil.

1 year ago 0 0 1 0

Never occurred to me there'd be an algo under the hood that could reliably learn to provide content I'd value more than a straight read of my hand-curated list of people. My solution has been following people if they post high signal stuff all the time.

1 year ago 2 0 1 0

I have never used the feed page. What a horror, can't quite understand why folks would try.

Only/ever the "following" page. Even there things got pretty intolerable towards/around the election, now settled down.

1 year ago 1 0 1 0
Kurt Vonnegut, Joe Heller, and How to Think Like a Mensch
This story remains my favorite Thanksgiving message; it reminds me to be grateful for what I have and of the evils of jealousy and destructive competition. I first posted it on my work matters blog mo...

My Thanksgiving post. A Kurt Vonnegut poem. He talks with Joe Heller (Catch 22 fame) about a billionaire. Key part:

Joe said, "I've got something he can never have"

And I said, "What on earth could that be, Joe?"

And Joe said, "The knowledge that I've got enough"

www.linkedin.com/pulse/kurt-v...

1 year ago 12 2 0 1

Oh my dear god this is an incredible study.

1 year ago 0 0 0 0

I think there's likely an effect there!

1 year ago 0 0 0 0