

#richardfeynman on brushing teeth 🪥 🤯 #nobelprize (YouTube video by Wise Owl Wealth (WoW))

Yep, Richard Feynman said this, and it's at least influential in some parts of the manosphere. E.g. m.youtube.com/watch?v=IaWf...

I'm sorry to report that reality is beyond parody.

1 year ago 3 0 1 0

100% agree about wasted human potential. Automating all forms of toil and letting humans do the interesting bits is a vision I can get behind.

And my views on the state of ~most learning (and what I'm trying to do about it) are well-documented so I'll spare you a repetition of that rant.

1 year ago 0 0 0 0

Is that 80% stat from research someone's actually done? That'd be awesome (and it seems like an evidently doable study for someone to run)

1 year ago 0 0 1 0

Honestly, I was trying to do the opposite. I wanted to be impressed! It got the logic right, but translated it into maths (integers up to 10) incorrectly.

And I thought the ARC benchmarking involved pre-training on related examples, which seems like... cheating?

1 year ago 0 0 0 0

Solving problems of increased complexity with arbitrary values (which you couldn't learn by searching) seems like a big hurdle yet to be overcome.

1 year ago 0 0 1 0

Again, I really appreciate the replies. I tried O4 myself - asked it two (simple) questions, and it got them both wrong.

I think the thing that is hard is the interface between explicit / rule-based reasoning (like doing novel maths) and implicit reasoning (language).

1 year ago 0 0 1 0

Super interesting, thanks for the reply. What do you mean when you say "We've discovered the optimisation function"? And what makes you determine that it's skewing to AGI?

1 year ago 0 0 1 0

I find AI interesting, and I continue to think it is world-changing. But I really don't see it with current generative AI / LLMs. They're obviously impressive, but I don't find them exciting, and I can't find a use-case for them in my life.

What am I missing?

1 year ago 2 0 1 0

I've done a few talks about the Post Office/Horizon scandal over the last couple of years, including the point that their latest rebuild took a better approach (more psych safety, more user contact, though no evidence of shifting security/QA left).

I feel genuinely sad to hear that it's being canned. What a waste!

1 year ago 1 0 0 0

Hard agree. Similarly, I find myself talking about it to try to push back against the hype.

1 year ago 1 0 0 0
Skiller Whale Fast, Flexible, Live Learning for Engineering Teams

I definitely have a horse in this race (skillerwhale.com)

If by struggle, we mean "understand, try, fail, improve" then that's an excellent way to learn skills. If we mean "try, fail, try, fail, bang head on wall, admit defeat, get peer to do it", then the learning is "I can't" rather than "I can".

1 year ago 2 0 1 0

Is it? The dialogue aspect seems weaker for coding, and seems mainly useful for giving information in the flow of work, rather than providing a skill.

1 year ago 0 0 1 0

It's important to distinguish knowledge, process knowledge, and skills. Google replaces knowledge, Intellisense implements process knowledge, but coding assistants are trying to emulate a skill. That's a big difference, one that goes beyond automating boilerplate.

1 year ago 1 0 1 0

What makes you think that? And is it just a problem that's solved by better UX?

1 year ago 0 0 1 0

Or that previous iterations of search are dead. I can imagine a thesis that "the whole web is better with liberal AI sprinkles".

Agree about it being essential infrastructure - and the Chromium project exists! - but I think paid-for solutions that build on it can win the market (cf Ubuntu, maybe?)

1 year ago 0 0 1 0

I can imagine OpenAI putting in a bid

1 year ago 0 0 1 0

All GenAI? I'd agree with the Transformer model being hard to extend...

1 year ago 0 0 2 0

Thanks for sharing this link. I definitely agree that we're at least one breakthrough away from genuinely world-changing generative AI.

1 year ago 1 0 1 0

My answer is the same though - how do you get good enough to supervise a code-writing AI if you never write code? Or a contract-writing AI if you never write contracts? Or a car-driving AI if you never drive a car?

1 year ago 0 0 1 0

If juniors using AI tools can go straight to "applying" because the tools are so good, how do we make sure the juniors don't skip the "understanding" phase themselves?

How do you get good enough to be a supervisor if you never get to do the thing yourself?

1 year ago 0 0 1 0

Until someone develops a workable form of generative AI that can "understand" the "world", we need people to supervise the stuff they produce.

But that creates a new problem.

1 year ago 0 0 1 0

It's hard to test for understanding. But we can see its absence when an AI generates something that is obviously, to us, daft.

The third hand with 6 fingers in the generated picture, the assertion about a legal precedent that doesn't exist.

Humans only make those mistakes intentionally.

1 year ago 0 0 1 0

Not so for a generative AI.

"Knowing" maps neatly onto information recall. They're great at that!

"Applying" maps pretty well onto generating new content. They can do that too!

But they skip a step that people have to pass through - "Understanding".

1 year ago 0 0 1 0

So you, a presumed human, have to learn and understand some ideas before you can apply them to a problem.

And you have to be able to apply them to a problem before you can, say, evaluate what someone else has done.

1 year ago 0 0 1 0

AI is interesting. It subverts our ideas on learning.

I'll explain.

Bloom's taxonomy is a model for how people learn, which probably fits your intuition. Higher levels of learning depend on lower levels.

Specifically, creating > evaluating > analyzing > applying > understanding > knowing.
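The dependency claim can be sketched in a few lines (a toy illustration of mine, not anything from Bloom's actual framework):

```python
# Bloom's levels from lowest to highest; reaching a level
# requires everything below it.
BLOOM = ["knowing", "understanding", "applying",
         "analyzing", "evaluating", "creating"]

def prerequisites(level: str) -> list[str]:
    """All levels you must pass through before this one."""
    return BLOOM[:BLOOM.index(level)]

print(prerequisites("evaluating"))
# → ['knowing', 'understanding', 'applying', 'analyzing']
```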

1 year ago 0 0 1 0

AI struggles to spot that there's a single dog behind the lamppost, struggles to interpret sarcasm, and has no sense of "taste" or "quality" - things that are obvious to humans, who have way less processing power.

That makes it easy to both overestimate and underestimate AI, in different ways.

1 year ago 0 0 0 0

AI is interesting.

AI suffers from categories of errors that humans don't, and is resilient to categories of errors that humans suffer from all the time.

Humans get taken in by the spam email linking to "PayPa1". AI pretty much can't fail to spot that.
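That claim doesn't even need AI to demonstrate - a few lines of homoglyph folding (a toy sketch of mine; the mapping and names are illustrative) catch the swap that fools human eyes:

```python
# Fold common digit-for-letter substitutions back to letters.
HOMOGLYPHS = {"1": "l", "0": "o", "3": "e", "5": "s"}

def normalize(name: str) -> str:
    """Lowercase and replace lookalike digits with their letters."""
    return "".join(HOMOGLYPHS.get(c, c) for c in name.lower())

def is_spoof(candidate: str, brand: str) -> bool:
    """Same name after folding, but not literally the same name."""
    return (normalize(candidate) == normalize(brand)
            and candidate.lower() != brand.lower())

print(is_spoof("PayPa1", "PayPal"))  # → True
print(is_spoof("PayPal", "PayPal"))  # → False
```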

1 year ago 0 0 1 0

The first post on a new social media site feels hard to write, so I'm just going to throw one away like this. Takes the pressure off.

1 year ago 6 0 0 0