A light blue poster with black text and a dark purple title, with the same info about the meet-up as found on this website: https://aisupplychainresearchandhci.wordpress.com/how-could-ai-supply-chain-research-shape-hci-inquiries-and-vice-versa/
‼️We @inhacha.bsky.social @amabalayn.bsky.social @blairaf.com are hosting a CHI 2026 meet-up on AI supply chains × HCI!
Lightning talks + panel + open discussion
Drop by anytime 👇
🗓 April 13 (Monday) | ⏰ 2:15–3:45 PM | 📍 Room 133
aisupplychainresearchandhci.wordpress.com/how-could-ai...
5 days ago
Then why is it baked into all of Microsoft’s *professional* and industry-standard software?
5 days ago
This is something tech people don't grasp about nonfiction writing: the process of deep research, of deciding which little details are interesting or which obscure anecdotes have historical value, is all part of the creative work of writing. It's not grunt work. It can't just be automated.
1 week ago
Not a lot to say about TDOV this year other than being visibly trans has been very exhausting this past year in particular
1 week ago
A Kalshi ad that says "We don't do death markets"
My "We don't allow bets on assassinations and murder" ad campaign has people asking a lot of questions already answered by the ad campaign
1 week ago
Key reminders: The number of things that are understandable as "AI" that can be used in the warfare pipeline is far larger than chatbots and LLMs. It doesn't have to be Claude or ChatGPT for it to be a bad automation which launders human biases, assumptions, and errors into death at speed & scale.
2 weeks ago
does anyone know what’s happening anymore
2 weeks ago
Photograph of a poster on cream-colored paper. "Dear President Ambar,

We are writing to you on a typewriter that is over 70 years old. This is a machine that we all know well. With it, we misspell words without the crutch of spell check or generative AI, and we think intently about every phrase we pound out. As we force ourselves, for once, to slow down, we engage in a cognitive dialogue with ourselves. We do not seek perfection because we know that education is about the growing and challenging of our young minds' potential, not the chasing of institutional 'gold-star' approval.

We do not believe that your so-called 'Year of AI Exploration,' providing enterprise ChatGPT and Google Gemini subscriptions to every Oberlin student, aligns with our college's founding principles. You claim that this year will be one of experimentation, not adoption. But even just one semester of accepted (encouraged, even) chatbot use will jettison our student body down a lazy and irredeemable tunnel of intellectual destruction. We are a college grounded in learning and labor, which now risks straying from these rooted ideals. With ChatGPT at the helm, our emails, essays, and discussion posts will be generated for us, not by us. And let's not fool ourselves. This is precisely what these platforms will be used for by our busy, anxious student body.

We see your vision for this year as advancing the college's 'businessification,' an alarming trend also seen in the takeover of our beloved library cafe by a 'bookstore' with no books in stock and an app replacing customer service. In one instance, the college assumes we want efficiency at all costs through automated rather than hand-pulled coffee. In the other lies the false belief that we simply desire to turn in an essay, regardless of how little we've written of it." There's more that doesn't fit in the 2,000-character limit :(
OH MY HEART...the Oberlin Luddites Reject "The Year of AI Exploration"! 💚
2 weeks ago
Wait I’ve seen this one before
2 weeks ago
this is a huge deal and a sign of the changing legal tides for big tech. the plaintiffs attorneys here were early adopters of a novel legal strategy that uses product liability law to sidestep tech companies' go-to defense (section 230) and hold them accountable for defective or negligent design
2 weeks ago
This is utterly bonkers, and simultaneously completely unsurprising, entirely foreseeable, and long warned about by experts (and common sense) in Canada and around the world.
2 weeks ago
a reminder that sora, like chatgpt, is a commercial product that can come and go like any other website, which is another reason why a lot of us remain critical of “AI” — and why we focus more on its political economy than its technological mechanisms
2 weeks ago
An interesting feature of Edmonton Alberta is that they put extra chlorine in the drinking water at the start of each spring so you can constantly have swimming pool aura in your mouth for a couple of weeks
2 weeks ago
"hey wokesters your data center resistance is actually the opposite of woke" a 100% real take brought to you by Palantir's AI implementation and government relations heads
2 weeks ago
Washington Post op-ed entitled "Halting data center construction will entrench inequality"
The op-ed authors are Palantir's head of AI implementation and head of government affairs
actually burst out laughing when I saw the author bios
2 weeks ago
This has evolved into listening to radio news while I eat dinner. Officially entering my old lady era
2 weeks ago
This is the head of product development at Canvas essentially declaring that the choices Canvas makes will determine the shape of classroom instruction. This is not happening in partnership with professors or students or institutions. This is a profit-seeking 3rd party.
2 weeks ago
Spun out three abstracts this aft: one about the political economy of AI governance, one about AI countergovernance, & one weird experimental take that AI governance doesn't actually exist - many fun things you can do with new data!
2 weeks ago
One of the fun parts of collecting piles of new research interview data is spinning off the preliminary findings into a bunch of workshop abstracts
2 weeks ago
Canvas Unrolls AI Teaching Agent
The new AI agent aims to save faculty time on “low-value tasks,” but stops short of fully automating grading. But some experts worry that the rise of agentic AI could lead to a dead classroom, where c...
Strong recommendation to teaching faculty to just say no to this stuff, even if you are AI curious/enthusiastic. This is meant to reduce faculty autonomy and capture human labor with automation. You're selling out your future self and the profession as a whole. www.insidehighered.com/news/tech-in...
2 weeks ago
Once more - no amount of AI wonder or unrelenting claims of productivity, abundance, and inevitability can overcome the hard realities of material supply chains. An AI-dependent world is hugely more vulnerable, not less, because it makes almost no account of, nor has resilience to, these realities.
2 weeks ago
Carney has made reshaping Canada’s place in the world a signature priority, but the federal government plans to cut the budget of the diplomats charged with implementing that vision by $1.83-billion, according to government documents.
www.hilltimes.com?p=495776
3 weeks ago
Maybe I will make a longer thread on this later but the overall issue is of course accuracy & reliability of where the sources are being drawn from and how inclusion criteria are being applied - to quality control this properly is a huge time sink & requires a lot of manual lit review experience
3 weeks ago
Many ppl saying Claude has gotten good at lit review recently, so for the last 2 weeks I have been making a good faith effort to try integrating Claude into my data collection workflow for a systematic lit review, and my verdict is that Claude 100% cannot be trusted to do a systematic lit review
3 weeks ago
Table listing Carney's massive cuts.
3 weeks ago
The Pentagon Is Using Palantir AI to Bomb Thousands of Targets in Iran
The system, known as Project Maven, also incorporates the AI model Claude built by Anthropic.
The Pentagon is using AI technology by Palantir and the model Claude built by Anthropic to help speed up the “kill chain,” the process of identifying, approving and striking targets. “You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes,” says Craig Jones.
3 weeks ago
"sometimes the robot causes mass psychosis and we just have to wait for the devs to shut it down" is prob not the best regulatory norm!
1 month ago