Wtf! 🤬
Posts by Nadxieli Toledo Bustamante
Why do I have to keep disabling copilot multiple times a day??? 🤬🤬🤬Can’t these companies respect our settings choices?!?! 🤬🤬🤬 (and by the way I only use microsoft because that’s what my university provides and requires me to use and I resent that too 🤬🤬🤬)
product image from amazon where one of the answers is "to be fully transparent as an AI"
Help I found this journal on amazon and the prompts in the image were clearly filled out by AI
A painting of a bird beside the text "not today"
I don't need a summary of what's available online, in biographies, or even in my own notes. I need to comb through and find the great details, decide what interests ME. No, it's not efficient. It's slow and painstaking. But I don't believe in efficiency as an ultimate good, especially in writing. /
Me, in therapy: I don’t feel anything (I just know I have some feelings)
Also me: sobbing non-stop when a song I haven’t listened to in years randomly shows up in my playlist in the middle of a random task
😒🤷🏽‍♀️😩
My university: "Copilot is Microsoft's AI-powered productivity service that uses large language models (LLMs) to help you create content, analyze information, summarize documents, and complete tasks more efficiently."
Microsoft: LOL
We are thrilled to announce GHOST IN THE MACHINE is available for IRL and virtual community screening events, and on-demand rental on Kinema! Watch today!
kinema.com/films/ghost-...
"If there is one documentary with the power to burst the AI bubble, it’s this one." - The Wrap
There's very little procurement research out there so I read a lot of conference proceedings and anyway this one is super fun because it reinforces that, at least in the US regulatory environment, universities can't confirm anything vendors promise about data privacy. Neat!
This says “trap” in letters three stories high with flashing lights and a siren, and still some profs will walk right into it.
Strong recommendation to teaching faculty to just say no to this stuff, even if you are AI curious/enthusiastic. This is meant to reduce faculty autonomy and capture human labor with automation. You're selling out your future self and the profession as a whole. www.insidehighered.com/news/tech-in...
this is completely insane: since when is it acceptable for a tech company to rewrite news outlets' headlines without their consent? especially at a time when audiences are sensitive to how stories are framed in headlines? (ie Israel/Palestine, the Trump admin, ICE)
www.theverge.com/tech/896490/...
Sure, LLMs are useful for:
1. Fraud
2. Plagiarism
3. Cognitive off-loading
Which of those use-cases are you promoting?
I really hope the Grammarly reaction makes people think twice about “genie out of the bottle” AI-as-inevitable framings. We don’t actually have to just accept worse or unethical products and pretend that makes us fans of Progress!
I'm suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.
State law requires consent before someone's name can be used for commercial purposes.
www.wired.com/story/gramma...
“There’s kind of a defeatism, this idea that there’s no stopping technology & that resistance is futile, everything will be crushed in its path. That needs to change … We can decide that we want to be human.”
#AcademicSky
Hey @officialgrammarly.bsky.social we're gonna need to know the full list of identities you have stolen here as well as clear info on how to opt-out
techcrunch.com/2026/03/07/g...
Nowhere near enough media coverage about the interconnection between the destruction of content moderation, the proliferation of LLM "generative AI" bullshit engines, the death of consensus reality/knowledge making, misinformation and disinformation, & the aims of Western authoritarians and fascists
I think imposter syndrome is more insidious than mere self-doubt—I think some of us are expected to experience it as a form of humility. Reject that, fam. Know yourself, and, where appropriate, *trust yourself*.
An IBM slide from 1979 which says, in black text on a white background, “a computer can never be held accountable therefore a computer must never make a management decision”
Sadly relevant, still:
The crazymaking thing about AI being pushed all throughout my grad school is that even when professors make the caveat that you have to double check the answer AI gives you — we are literally in school to do the intellectual work of getting to the point where we can doublecheck the AI answer!!!!
ai is driving people to suicide, but it also destroys the wages and working conditions of multiple industries without adding real value, so it's impossible to say if it's bad or not
Literally, at work this morning:
Clinician: Patient should stop medication A, but must keep taking medication B.
AI VR transcription of this: Patient should stop taking medication B.
Seriously NEVER LET A DOCTOR USE AI FOR YOUR APPOINTMENTS. NEVER. UNDER ANY FUCKING CIRCUMSTANCES.
Imagine a human saying to you, "i will only ever tell you what i think you want to hear, based on what i know about ppl like you, how you start our conversation, & what you say during it. Sometimes that will sound like truth, sometimes like errors or lies. Now: Let's talk about your medical history"
yep I agree, focusing only on making things fair in terms of academic honesty has been super detrimental to AI debates in academia and it really shows how we keep dissociating the world around us from the object of our research
This has never been only about academic honesty/dishonesty but if the latest information about what these companies (all of them) actually are doing /want to do with our human data has not shaken you yet, what will?
Education folks (k-12 and higher ed), when you say “ethical uses of AI” what are you even talking about?
«If I hated OpenAI more than Google, I hate Anthropic more than OpenAI» ✊🔥
People should stop claiming Anthropic is «good and ethical». They are just playing their game, with the same masks as always, and people who believe them are being used as tools to wash their image. Don’t be part of that.
SCOOP: Anthropic was among the AI companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming, acc to people familiar w/ matter.
And everyone is running around praising them for being "ethical"
www.bloomberg.com/news/article...
My doomer-worry about AI is not that the LLMs become omnipotent and take over the world but that the wealthy and powerful use it as a means to consolidate power and marginalize or lay off skilled workers and also everything about our technological and political and social life gets worse