Posts by build-er-berg workshop
Noodling on some lab documentation for LLM usage, but at a high level:
1. LLMs still make bad design decisions. Do not let them make design choices.
2. You're still responsible for your code's correctness. This means reading a lot of code. Hence, bullet point 1.
4. "An LLM made a mistake" is a verboten sentence. You made a mistake: no offloading responsibility.
nah.
bsky.app/profile/werr...
I mean, these days the whiteboard is almost always replaced by an actual coding exercise
Larry Johnson is a fringe sleazeball who advanced a whole slew of false claims against Obama in the 2008 primary, and you would be wise not to listen to anything he alone claims no matter how much it reinforces your priors
I can't believe this — not because they wouldn't do it, but because I don't believe those who would be involved are competent enough to keep it quiet
Mahan drop tf out challenge
this is Millervoice
So what they're saying is they can be automated and should be let go for cost efficiencies.
that could also just be moderation getting more savvy, though
just carefully engineered in the ally-alienation laboratory, like so many Israeli moves before it
damn i cant believe mastodon was vibe coded :/
in a similar way to decapitation being an extremely effective treatment for facial acne
I swear being normal cynical about job stuff is like a superpower these days. Everyone else is like "this is all a plot to end the Vital Essence of Humanity and/or prevent the inevitable rise of communism" no it's not. they want to fire people to get bonuses next quarter.
‘Simply stop giving millionaires water for literally free’ is one of the most insane sounding but actually serious policy solutions out there
they're already kinda doing this just through sheer market forces but I don't think they're yet doing it intentionally
on another note, here's a grim prediction: AI companies will use their power to limit your access to computational power in the future, manipulating policy more aggressively under safety and mercantilist concerns to restrict access and production
(this is a danger to a few specific companies too, of course)
I think it's dangerous to assume both the methods and hardware won't advance to a point where it becomes reasonable to run extremely powerful models locally through a relatively simple interface
it's interesting thinking of all the sci-fi out there where AI was still effectively an uncommon novelty - e.g. maybe one or two robots puttering around on a whole spaceship - rather than the ubiquitous platform it seems like it might be moving toward
let's be fair, a lot of it is data. truly unfathomable amounts of data.
the mayor of Oklahoma City, which is about half white, is a Bush admin appointee who worked for Dennis Hastert and James Inhofe
I mean, if you can't even come close to electing a left-leaning mayor in your state's largest city ...
yes, palmer luckey and his company too. everybody who’d name a company after some nerd-ass lotr thing gets the wall
i support whichever candidate will vow to eradicate palantir and its owners
the skysite cannot handle these bold truths
it is frustrating that we know much of the C-suite work will be automatable, and yet we also know that's generally not going to be on the table for the near future
enjoying the empty seat next to me because a billionaire dipshit wanted an extra fifteen minutes to talk to his doomsday bunker contractor
I am generally Team ETTA but the problem is that my spouse is Team Really, Really Early To The Airport due to some sort of very specific anxiety and I would rather drill a hole in my head than spend two hours waiting at the gate
probably correct but there are so many honorable mentions, like the Pillow Factory gag
youtu.be/pEb_BV5TAfA?...