Yep, Richard Feynman said this, and it's at least influential in some parts of the manosphere. E.g. m.youtube.com/watch?v=IaWf...
I'm sorry to report that reality is beyond parody.
100% agree about wasted human potential. Automating all forms of toil and letting humans do the interesting bits is a vision I can get behind.
And my views on the state of ~most learning (and what I'm trying to do about it) are well-documented so I'll spare you a repetition of that rant.
Is that 80% stat from research someone's actually done? That'd be awesome (and seems like a study someone could feasibly run).
Honestly, I was trying to do the opposite. I wanted to be impressed! It got the logic right, but translated it into maths (integers up to 10) incorrectly.
And I thought the ARC benchmarking involved pre-training on related examples, which seems like... cheating?
Solving problems of increased complexity with arbitrary values (which you couldn't learn by searching) seems like a big hurdle yet to be overcome.
Again, I really appreciate the replies. I tried O4 myself - asked it two (simple) questions, and it got them both wrong.
I think the thing that is hard is the interface between explicit / rule-based reasoning (like doing novel maths) and implicit reasoning (language).
Super interesting, thanks for the reply. What do you mean when you say "We've discovered the optimisation function"? And what makes you determine that it's skewing to AGI?
I find AI interesting, and I continue to think it is world-changing. But I really don't see it with current generative AI / LLMs. They're obviously impressive, but I don't find them exciting, and I can't find a use-case for them in my life.
What am I missing?
I've done a few talks about the Post Office/Horizon over the last couple of years, including how their latest rebuild had a better approach (more psych safety, more user contact, tho no evidence of shifting security/QA left).
I feel genuinely sad to hear that it's being canned. What a waste!
Hard agree. Similarly, I find myself talking about it to try to push back against the hype.
I definitely have a horse in this race (skillerwhale.com)
If by struggle, we mean "understand, try, fail, improve" then that's an excellent way to learn skills. If we mean "try, fail, try, fail, bang head on wall, admit defeat, get peer to do it", then the learning is "I can't" rather than "I can".
Is it? The dialogue aspect seems weaker for coding, and seems mainly useful for giving information in the flow of work, rather than providing a skill.
It's important to distinguish knowledge, process knowledge, and skills. Google replaces knowledge, Intellisense implements process knowledge, but coding assistants are trying to emulate a skill. That's a big difference, one that goes beyond automating boilerplate.
What makes you think that? And is it just a problem that's solved by better UX?
Or that previous iterations of search are dead. I can imagine a thesis that "the whole web is better with liberal AI sprinkles".
Agree about it being essential infrastructure - and the Chromium project exists! - but I think paid-for solutions that build on it can win the market (cf Ubuntu, maybe?)
I can imagine OpenAI putting in a bid
All GenAI? I'd agree with the Transformer model being hard to extend...
Thanks for sharing this link. I definitely agree that we're at least one breakthrough away from genuinely world-changing generative AI.
My answer is the same though - how do you get good enough to supervise a code-writing AI if you never write code? Or a contract-writing AI if you never write contracts? Or a car-driving AI if you never drive a car?
If juniors using AI tools can go straight to "applying" because the tools are so good, how do we make sure the juniors don't skip the "understanding phase" themselves?
How do you get good enough to be a supervisor if you never get to do the thing yourself?
Until someone develops a workable form of generative AI that can "understand" the "world", we need people to supervise the stuff they produce.
But that creates a new problem.
It's hard to test for understanding. But we can see its absence when an AI generates something that is obviously, to us, daft.
The third hand with 6 fingers in the generated picture, the assertion about a legal precedent that doesn't exist.
Humans only make those mistakes intentionally.
Not so for a generative AI.
"Knowing" maps neatly onto information recall. They're great at that!
"Applying" maps pretty well onto generating new content. They can do that too!
But they skip a step that people have to pass through - "Understanding".
So you, a presumed human, have to learn and understand some ideas before you can apply them to a problem.
And you have to be able to apply them to a problem before you can, say, evaluate what someone else has done.
AI is interesting. It subverts our ideas on learning.
I'll explain.
Bloom's taxonomy is a model for how people learn, which probably fits your intuition. Higher levels of learning depend on lower levels.
Specifically, creating > evaluating > analyzing > applying > understanding > knowing.
AI struggles to spot that there's a single dog behind the lamppost, to interpret sarcasm, and doesn't have a sense of "taste" or "quality" - things that are obvious to humans, with way less processing power.
That makes it easy to both overestimate and underestimate AI, in different ways.
AI is interesting.
AI suffers from categories of errors that humans don't, and is resilient to categories of errors that humans suffer from all the time.
Humans get taken in by the spam email linking to "PayPa1". AI pretty much can't fail to spot that.
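(To make that concrete: software doesn't "read" a brand name the way we do, it compares characters. Here's a minimal Python sketch of that idea; the trusted-domain list and homoglyph map are illustrative assumptions, not any real spam filter's rules.)

```python
# Minimal sketch: why "PayPa1" can't sneak past a character-level check.
# The trusted-brand set and homoglyph map are illustrative assumptions.

TRUSTED = {"paypal", "google", "amazon"}

# Look-alike characters commonly swapped in to fool human readers.
HOMOGLYPHS = {"1": "l", "0": "o", "3": "e", "5": "s", "@": "a"}

def normalise(name: str) -> str:
    """Lower-case the name and undo common look-alike substitutions."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in name.lower())

def looks_like_spoof(name: str) -> bool:
    """Flag names that match a trusted brand only after normalisation."""
    return name.lower() not in TRUSTED and normalise(name) in TRUSTED

print(looks_like_spoof("PayPa1"))  # True  - '1' masquerading as 'l'
print(looks_like_spoof("PayPal"))  # False - the genuine spelling
```

A human skims the shape of the word and sees "PayPal"; the machine sees the exact byte sequence, so the substitution is trivial to catch.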
First post on a new social media site feels hard to write, so I'm just going to throw it away like this. Take the pressure off.