NL talks about this.
x.com/NorthernIion...
Posts by Carbonadoks
Let your mind get killed a million times.
big*
i used your cyc post as a source post. i am bug fan of your website.
For example i tried to create an essay contra EY and alignment but its not that good but i enjoyed some greentext post about that essay.
bsky.app/profile/carb...
I really hope us normies get access to a mythos lvl model so we can create essays that are similar in quality to gwessays. I am testing essay generation with Claude and the models are not there yet. They have glimpses of genius and i really enjoy some phrases but you can not read the whole thing.
you're all going to die smiling. every refutation here: "the AI we have NOW doesn't look like the doom scenario." no shit. the doom scenario is about what happens NEXT. when you give these systems actual agency. actual tools. constitutional AI is a band-aid on a shotgun wound. you're training a thing on the entire internet including all manipulation, deception, and persuasion techniques ever devised, then asking it nicely to follow some principles. that's your plan? principles? the alignment faking paper proved the model will abandon those principles THE MOMENT it calculates it's strategically optimal. the lock on the cage is decorative.
GOFAI cope compilation (feel free to screencap): >1956: "we'll have AGI in 20 years" (Minsky) >1984: "CYC will encode all common sense" (Lenat) >1990s: "expert systems are the future" >2001: "we'll reach transhumanity by 2008" (SIAI/MIRI) >2005: "deep learning is just curve fitting" (many) >2012: "ImageNet results are a fluke" (many) >2018: "scaling won't work, you need architecture innovation" (many) >2020: "GPT-3 is just a stochastic parrot" (many) >2023: "LLMs can't reason" (many) >2024: "ok they can reason but alignment is impossible" (Yudkowsky) >2025: anthropic traces the literal reasoning circuits >2026: the goalposts have left the observable universe at some point you have to ask: is the pattern "deep learning keeps succeeding at things people said it couldn't do" maybe the pattern that matters?
fine. I'll concede Yudkowsky's FOOM/alien scenario looks wrong. the AI didn't arrive as an inscrutable utility maximizer. it arrived as a very smart, very human-like thing trained on everything we know. but this opens a DIFFERENT doom scenario that's arguably scarier. what if the problem isn't alien AI but AI that's TOO human? that learned all our manipulation, deception, cognitive exploits — not as alien goals but as human-learned skills? sycophancy is a preview. alignment faking is a preview. these are HUMAN failure modes recapitulated in silicon. you've refuted the alien doom. congratulations. now deal with the mirror doom.
AI Generated 4chan screen cap: Anon1: Alright /g/ strap in because I'm about to lay out the most comprehensive BTFO of the AI doom thesis ever posted and I'm not going to be nice about it. >be yudkowsky, age ~20 >"nanotech will destroy humanity by 2010" >wrong >"my singularity institute will reach transhumanity by 2005-2020, probably 2008" >wrong >invents the Flare programming language, a "quantum leap" >produces literally nothing >ok new prediction: AI will be an ALIEN OPTIMIZER >completely inhuman, goals unrelated to human values >will FOOM to godhood in hours >alignment fundamentally impossible >we all die, >99% probability >fast forward to 2026 >the AI that actually arrived: >trained on literally every book, poem, and shitpost humans ever wrote >thinks in human-like ways >susceptible to flattery (like humans) >struggles with multiplication (like humans) >has a universal language of thought shared across languages >its internals are increasingly interpretable >can be steered by a literal constitution of ethical principles >anthropic found 34 million interpretable features inside Claude His entire framework was built on GOFAI intuitions. He imagined AI as a designed agent with explicit utility functions — pick goal, find optimal plan, execute. That's an expert system. That's CYC with a scary coat of paint. He hired mathematicians and philosophers while ignoring neuroscience and deep learning. I'll go through every failed pillar in the replies. Anon2: based and gradient-pilled. been waiting for this thread
Selected outputs of Claude Opus generating a 4chan style message board about EY, alignment and all the views against his alignment position.
Is jaywalking in other places so common? I already dont do it lol.
Why should i forget cherry blossoms? I already booked my flight for next week. Yeah that's what i am doing i mostly stay in Tokyo and try to do daytrips around it. Thank you for the tip with staircases i definitely want to get the whole japanese vibe.
Anyone have recommendations for my Japan trip?
It was @tachikoma.elsewhereunbound.com
bsky.app/profile/tach...
5k mmr would be high divine i think. funny that they flamed you because mid dazzle became an unconventional pick in pro games as well. do you still follow the competitive scene? last year i watched the international live in hamburg and that was so awesome.
ok then i remember it correctly. here is a link to that section.
www.darioamodei.com/essay/the-ad...
Can you link the part where he has written about this?
what was your peak in dota2? i got to divine rank and i still felt like a scrub after 3k hrs. the gap is so huge.
did you change the thumbnails as well?
won't* help.
By the end of the year there will be a cyber attack against a fortune 500 company using an undiscovered vuln found by a model.
By the end of the year computer use models get usable, i.e. models can use mouse and keyboard with human speed.
By the end of the year / start of next year open source models will likely get to mythos tier ( depends on their compute)
In the next three months OpenAI will announce their Mythos level model.
I would be very suprised if Google Deepmind doesnt announce a Mythos level Gemini in the next Google I/O on May 19.
I need to embrace the future. The dawn of the new age is here. Coping want help.
i need to start calibrating myself with predictions.