Another "fun" fact: Robert Mercer (major funder of Cambridge Analytica) was formerly a speech / MT researcher, and received the ACL Lifetime Achievement Award in 2014. I remember a lot of talk at NAACL 2018 over whether ACL should revoke the award.
Posts by suhr
4. "An LLM made a mistake" is a verboten sentence. You made a mistake: no offloading responsibility.
The tragedy of Computer Science is that it is really a rich discipline concerned with logic, structure, and the boundaries of knowledge and computation, but it was mutilated into a Silicon Valley internship mill.
We already KNOW how to make super babies, you eugenicist creeps! And it's all the things you fucking fucks are fucking up! THREAD!
Oh just saw your other comment about caffeine :( that's really too bad, it cuts out a lot of nice drinks
I switched to matcha lattes. (Also sadly had to give up coffee two years ago, I miss it every day)
www.youtube.com/watch?v=urkV...
unironically a beautiful and effective way to keep grifters out. if you get filtered by an anime catgirl popping up for 2 seconds because it's too cringe and gay for you, was your heart really in the systems programming?
“Silicon Valley would not exist without [taxpayer-] funded research.”
www.thenation.com/article/soci...
Time to #TalkAboutHumanities -- Linguistics is the study of how language works and how we work with language, and linguists end up very sensitized to language use and how it shapes our social world.
Motors and engines can be more or less powerful in the very literal sense that they can generate more or less physical power.
Language can be powerful in that it gives us the power to move people.
>>
Language shapes everything, from how we think to how we connect and innovate.
Linguists are essential to understanding communication, and to creating ethical, responsible AI. We need more linguists.
#TalkAboutSocialSciences #TalkAboutHumanities
When I say that this is where I've been while I am sleeping, I am not exaggerating
I've been having weird New York dreams where I'm back on Roosevelt Island and something is Wrong. Cornell Tech has turned into some monstrous edifice (and it's become the same thing in nearly every dream.)
I think I have been accessing some alternate reality: expandedenvironment.org/dragonfly-ve...
I suspect the public won't buy this in the end, but I'm curious where you think this fits in, if it does
curious what you think about whether the narratives that Anthropic (this corporation in particular) constructs and uses are an attempt at rehabilitation? (e.g., "machines of loving grace", they call themselves a "public benefit corporation", broader narratives/framing coming from the rationalists)
AI doesn't fit a precise technical definition because its researchers haven't been serious about a precise technical project. in the 50s john mccarthy invented the term to get funding from the military for some summer project he wanted.
Amazing! (Did you learn about him in state history classes by chance??)
Yea just looking for role models lol
(I misread your "you can't call everybody a crackpot" at first, oops. I thought you meant, "you can't" because of the consequences, not "you can't" because it's somehow logically impossible for everyone around you to be a crackpot
just thinking out loud, maybe "dipshit" is better than "crackpot")
Do you have any suggestions for biographies/examples of particular scientists who've called everyone a crackpot, and succeeded at being listened to?
The end game is not reasonable regulatory measures. The end game is regulatory capture. The end game is total corporate control of government. And they are doing a very, very good job of making this vision a reality.
There is a powerful pressure among academic philosophers nowadays to take seriously claims of artificial general intelligence or AI sentience. Serious engagement with such propositions requires many layers of intellectual dishonesty combined with a willingness to watch the world burn.
I love working at a university
extract from page 12 https://arxiv.org/pdf/2507.19960
Something I hinged on to get to what I describe: the Marxian fetishisation of artefacts is so complete in the case of AI that not only do we somehow conclude machines think, but we accept them thinking, speaking, drawing instead of us, while also believing these are (expressions of) our thoughts.
all of which presupposes the legitimacy or reality of the measure itself. measurements justify themselves
maybe it's both. entirely based on the project of measuring some sort of intrinsic superiority, with the natural conclusion of legitimizing the authority of the optimum of this measurement (nazism) and a deep insecurity over failing to personally achieve this optimization oneself (inceldom)
it's eugenics