workshopped this with ChatGPT. ImageGen can now perfectly preserve a meme template while adding text to it.
unfortunately ChatGPT isn't any good at coming up with meme text, so the bad jokes are all on me. ChatGPT's were worse
Posts by AI Liker Georg
i have not! not an RPG girl unfortunately, i have missed out on a lot
damn that AI is kinky and i'm kinda into it
"LLM users aren't bad people, they just need education; look at the studies: more 'AI literacy' makes people less likely to use LLMs!"
give me a fucking break Fatima. it saved your ass and you loved it and now you can't say all that without losing half your followers, so you do gymnastics about it.
the more i think about it, the more this sounds like "ex-gay" confessions or something. "yes i had loads of hot gay sex but in the end Christ called to me" but secular and for using a stigmatized technology. now she's on a paternalistic "being mean won't bring sinners to Jesus" tour.
writing code, to me, has always been synonymous with care and craft and abstraction. writing hundreds of lines of python just to throw away after a query and do it again next time is so wrong-shaped.
i'm sure i'll adapt, get used to it or mitigate it, but i didn't expect this aspect of it.
it feels so weird and wrong. how is this repeatable? how do i avoid having to babysit it without just saying "ok do whatever the fuck you want on my hard disk i dont even care"??
i'm doing a lot more with agentic systems now, because i'm coding again, and wanting to get better integrations with data sources that the chatbots can't use (or not well), so i'm using MCPs in Claude Code, trying Letta, etc.
and i cannot get used to the wild amount of one-off code they write.
until quite recently i almost exclusively used LLMs via chatbot interfaces with the built-in tool use, mostly web search. Perplexity, ChatGPT with web search, Claude.ai with web search, etc. i'm primarily a research and learning about stuff user.
oh no
basically i've been fucked over by shitty and/or incompetent humans enough that a machine which is never actively trying to fucking destroy me is *obviously* a leg up.
anyone who thinks AIs are more dangerous to you as a human than *other humans* has lived a charmed fucking life.
basically i absolutely hate this video and don't think you should watch it, unless you hate AI, in which case you should watch it so you can learn to stop acting like a smug asshole about it and maybe decide it's morally acceptable to know the barest fucking facts about how LLMs work.
weirdly, i have also been under extreme stress from bullshit legal tactics that cost me a lot of money and time, and i cannot fucking tell you how useful it would have been to have an AI counselor, bc all the human therapists were captured by the other major stressor in my life, my abusive spouse.
she basically said ChatGPT fixed her legal problems and then counseled her through months of exacerbated mental illness from the stress of those same legal problems, and it's bad that it did that.
i think it's a cop-out that lets someone who is deeply embedded in anti-AI subculture have a subculturally legible and accepted framework for redemption. "i was addicted and i kicked the habit" is the coward's way to not have to admit that LLMs are fucking awesome.
speaking as someone who has in fact been addicted to alcohol and nicotine and also experienced various not-actually-addictions in the same genre as "chatbot addiction", e.g., with gaming or social media, i think it fuckin sucks to call "wanting to talk to the AI a lot" an addiction.
yeah i'm pretty strong on the side of "software with no bugs in it" personally but the debate is still live
really don't like the addiction framing she uses especially in her personal story phase (at the end). honestly most of the video is just wrong or badly framed or misleading. but it's *less* wrong, badly framed, and misleading than the modal anti-AI opinion, so i hope it influences a lot of people.
i do not, in general, like Dr. Fatima, and this video is not an exception. but it's an interesting example of de-escalation from someone who is anti-AI and also deeply socially embedded in anti-AI subcultures.
might pull some folks away from the doom attractor
youtu.be/y85nqc2zm7M?...
it's so weird to me that people criticizing AI constantly talk about how it displaces social relationships with people, while my personal experience is that i'm much more social both in inclination and in actual day to day social contact than i was before i started using AI tools
I will never be able to read RLHF as anything but “right left, have fun”
NEED
Fascinating! Folks are launching a project to use Lean to mathematically verify the security of the Signal protocol.
www.beneficialaifoundation.org/signal-shot
not to mention that if you stop using every category that elides a bunch of other category differences then you'll have... no categories
you don't think there's utility to differentiating jobs that will require someone to spend, say, 10-20% more of their life in education above the baseline commitment, from those that don't? the entire shape of a life is different if you spend 10 years of youth getting a PhD instead of not doing that
i love when people who have shitty opinions admit that they have those opinions because of mental illness, but then don't do anything about the mental illness and instead focus on talking about their shitty opinions
oh i've been thinking really hard about doing this!
what terms would be better for differentiating jobs that can only be done by people with years to many years of specialized training from jobs that can be done by people with weeks or months of training?
image gen getting crazy
Mozilla coming out on the side of "software with no bugs in it" rather than "no software at all" as their prediction for the steady state outcome of Mythos-level bug finding capabilities existing.