This is what I yell at my university, at least once a week. Every time I'm told we need to teach students how to use AI. We don't even teach them how to use tools like word processing and spreadsheets that we know will be useful!
And you know why? Because we don't know how! It's not our expertise!
Posts by Jérôme Melançon
It's 4/20! Let me remind you why marijuana was criminalized to begin with: "We knew we couldn't make it illegal to be either against the war or blacks, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities. We could arrest their leaders, raid their homes, break up their meetings, and vilify them night after night on the evening news. Did we know we were lying about the drugs? Of course we did." - John Ehrlichman, senior advisor to Nixon
Happy 4/20 to those who celebrate!
I mean, my 7 followers
Lol this AI grifter followed me, trying to get access to a broader public. Blocking feels nice.
Baudripape
I think this is part of what's driving me potty at the moment. Seeing ostensibly intelligent people exercising zero critical thinking and falling for what is clearly a scam because they've not taken time to do the reading (ha, literally in many cases)
If you repost this it means you are also doing it. I don't make the rules.
This is making me think of the 1980s French serialized TV cartoon adaptation, "Sous le signe des mousquetaires," that melts together a few of Dumas' novels... where Aramis is eventually discovered to be a woman.
Just standing here eating cheese
I'd rather yell "NONE OF THIS IS AI IT'S ALL JUST MACHINES"
But yes, using a neural network and explaining to participants how it's used is just basic ethics. The same way I have to say I'm using say Qualtrics or NVivo. Which also have their dehumanizing aspects.
Right, it's difficult to address the discourse because there are no clear boundaries between kinds of AIs, and because the industry also conflates all of them. So there's criticism of the whole (critical AI literacy etc) and then criticism of the specific, which can still involve deliberate use.
Read this, liked, and now I am closing the app 😄
I keep pushing the president and vps at my institution about this. This is always the last argument when all other arguments are shown not to work. It's fear. And a lack of fear for what it does to us, and especially to already marginalized and vulnerabilized groups.
This is where principled stances are necessary - to see which principles have to come first. Being against AI can't be a principle; being against exploitation and dehumanization, yes. So the critical view of AI has to include a willingness to refuse it, but can also entail building around it.
I'd been resisting before AI became like this by looking at other methodologies that require more time. Which I could do, because I had tenure already! But most people don't have time to do what feels like endless methodological work - especially where you work with large ensembles of data.
Exactly! Of course I get not to worry about that because of the fields I work in. In the end, one of the drivers of AI use is the fear and reality of getting left behind or losing out to others, given a specific framing of productivity and the state of competition for jobs. It coopts us all.
The more I read about AI, the more I think about the Marxist idea of the dialectical change of quantity into quality: most of the problems with AI are already present where it's used. But it pushes them so far that they effectively become a different problem altogether. Not just a matter of degree.
There's a web of threads here about use of LLMs in research, it's worth starting here and going back through the rest!
We have an automated machine abstracting us into data and leaving us out of the possible results and outcomes. State statistics (yes, I know) at least have some kind of feedback loop over what ought to be counted, gathered, abstracted, because "democracy" which can rehumanize data, and serve groups.
The problems with the goals of research that we already see (serves careers, universities, etc., over participants) are also exacerbated: it becomes solely or mostly about producing academic publications.
Of course, this is a case of pushing what we already do with research software and tools. Which already presents these problems. But when there is this much machine processing and automation, we get into a deeper problem where researchers don't want to interact with the data, or the people, at all.
I absolutely would not participate in those cases. Research demands care for participants and care for responses. If little or no attention is given to what I (or my child!) take the time and effort to share, and our words are simply processed and remanufactured, then we are exploited & dehumanized.
As someone who has Done Okay at this contest in the past but hasn't written a poem in months or more, I certainly picked the right year to not take part.
I 100% look forward to all the attempts though!
#TalkAboutHumanities
We need scholars across the humanities, because these are the fields where we study what it is to be human, to inhabit different identities, and to connect with each other, to be human together.
youtu.be/T7Lc6QNxolQ...
Ok, this one I can like
Yes. The brain doesn’t work like AI. It’s not a computer. We use metaphors for it from technology b/c it’s really hard to understand how the brain works and we don’t entirely understand how a pile of electrified jello does all this. But it’s definitely much cooler than AI.
"If"
Yup I can imagine that, sounds good!
Do you live near a data center?
Do you know anyone who lives near a data center?
I want to talk to you!
jael@heatmp.news
“You gotta learn AI or get left behind!” is a bit galling given that the entire purpose of AI seems to be to steal our work and fuck our brains and shit up the planet, which is to say, its GOAL is to leave us all behind. That, in order to make its owners rich.