I didn't think I was a bad person but I'm about to the point of bringing signs to heckle all-in GenAI presenters -> | Now share what happened the other 99 times | [citation needed] | "After hoc, therefore something else hoc"| AI CEO Quote...Drink! | Will their lawyer sign off on that claim? |
Posts by Brian Maass
This problem is actually pretty easy to solve:
scientists who include false citations in scientific papers should get fired for committing scientific fraud
I don't understand the passive "what can be done" framing here
Including false citations is scientific fraud. Treat it as such.
new quinnipiac poll on ai suggests bluesky users are not the ones living in a bubble over this shit. nobody trusts it, nobody’s excited about it (6 percent say they’re “very excited” about it; more believe in mermaids). 21-point majority thinks it does more harm than good poll.qu.edu/poll-release...
Great. The federal cafeterias are going to have to find another name for Spanish rice...
Would love to see this boilerplate phrase excised from articles and op-eds on AI:
"AI chatbots have become a common part of many of our daily lives"
It's vague, generalizing, likely overstating, and normalizing -- and appears even in critical pieces.
Yesterday I found out a student at my institution penned an open letter to the Board of Trustees complaining about a professor using AI to run their class. The student argued they're being denied the professor's expertise.
Goldman Sachs reports that 300 million full-time jobs could be replaced by AI by 2030. Labor turnover is high and hiring has slowed. 71% of Americans worry that AI will cause permanent job loss. As young people about to enter the workforce for the first time, the fear of unemployment is understandable, but we cannot save ourselves with the very tool that is putting us at risk.

The irony is that as Penn pours endless money and energy into AI advancement in its attempt to get ahead, the University is only quickening its own demise. AI cannot coexist with education — it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought. With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.

Editorials represent the majority view of members of The Daily Pennsylvanian Editorial Board who meet regularly to discuss issues relevant to the Penn community. This body is led by Editorial Board Chair Jack Lakis and is entirely separate from the newsroom. Questions or comments should be directed to letters@thedp.com.
An unaccounted for part of the economy is how much young people virulently hate AI, despite how aggressively it's being forced on them. They realize it's making their friends dumber and ruining the world and they want nothing to do with it.
From the Penn student paper:
www.thedp.com/article/2026...
"This study provides crucial empirical evidence that, without proper safeguards, the harm caused by AI-generated falsehoods in this population and task is more potent and robust than the benefit derived from correct guidance."
A common informal definition of general intelligence, and the starting point of our discussions, is a system that can do almost all cognitive tasks that a human can do [6,7]. What tasks should be on that list engenders a lot of debate, but the phrase ‘a human’ also conceals a crucial ambiguity. Does it mean a top human expert for each task? Then no individual qualifies — Marie Curie won Nobel prizes in chemistry and physics but was not an expert in number theory. Does it mean a composite human with competence across the board? This, too, seems a high bar — Albert Einstein revolutionized physics, but he couldn’t speak Mandarin.
to be clear, this definition, by limiting intelligence to *cognitive tasks*, conveniently ignores embodied, relational, social and cultural nature of intelligence
9/
It starts by misinterpreting Turing. The Turing test only assesses whether a human can be fooled by a machine (a question that has been rendered meaningless, imo)
(claim from the paper and the first page of Turing’s 1950 paper side by side)
2/
As the Atlantic’s Franklin Foer explains: Almost no other foreign-policy question has been studied harder over the past 20 years or so than the likely effect of U.S. military strikes on Iran. The many years spent pondering and preparing for a potential attack on Iran are the reason that the first days of the war were, for the most part, a bravura display of American power. Yet all of that study also pointed out the risks: spiking oil prices, the spread of violence throughout the Middle East, civilian casualties of the sort now evidenced by an apparent U.S. missile strike near an Iranian elementary school. When past presidents balked at the possibility of war with Iran, they weren’t just dodging a hard choice; they were deterred by all of the obvious reasons a conflict could perilously spiral. Nobody should be shocked that the expected is now coming to pass.
From @dandrezner.bsky.social's newsletter this AM. These are the same assholes who showed up to class never having done the reading, yet supremely confident they could bullshit their way through discussion before leaving early to do keg stands at the KA house while catcalling the women who walk by
remember when the IRS put out a free online tax filing system where you could just do your taxes through the IRS and it was actually well made and was pretty well received and then the tax filing industry and Republicans killed it and now you have to use TurboTax again
I screwed up the very last part, sorry. It should say...
To read more click the essay: pressthink.org/2012/03/im-t...
Ed tech expert Neil Selwyn argues those in “industry and policy circles…hostile to the idea of expensively trained expert professional educators who have [tenure], pension rights and union protection… [welcome] AI replacement as a way of undermining the status of the professional teacher.”
@thegodpodcast.com
At my workplace, we do quick share-back presentations about conferences we attend. Every year, half my presentation is about what orgs and conferences could learn from #c4l26 by prioritizing accessibility. Also, it's volunteer run, supportive to new speakers, and not an association fundraiser.
New Book! Out Now!
Slow Librarianship: Reflections and Practices
Authors describe what slow librarianship means to them in their work and roles, sharing concrete practices and ways to enact its tenets in your own work.
https://litwinbooks.com/books/slow-librarianship/
"In our work, we found that none of the tested language models were ready for deployment in direct patient care."
#medlibs
www.nature.com/articles/s41...
Ack-chually (to the wisc•edu headline writer)...he didn't "need" an expert. He could have done anything and most 🇺🇲 wouldn't have known the difference. He "WANTED" an expert, because he cared about getting things right. 🫡 to 🐰.
Whatever the productivity gains promised by LLMs, they result in heavier workloads—and that leads to workers experiencing “cognitive fatigue, burnout, and weakened decision-making.”
All this from the notoriously pro-worker rag [checks notes] Harvard Business Review: hbr.org/2026/02/ai-d...
When AI was added to a tool for sinus surgery: “Cerebrospinal fluid leaked from one patient’s nose. In another… a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients suffered strokes after a major artery was accidentally injured”
www.reuters.com/investigatio...
The devil doesn't need any more advocates, but also: do you see the dehumanization in the move you made there?
We don't "sample", we interact. Language is fundamentally social and reducing people to their "output" is the basic problematic move of the AI boosters.
Would be interesting to compare the results on more recent models - but this problem won’t go away. LLMs are always going to be extrapolating from what has already, and often, been thought, which is why they aren’t windows to the future but anchors to the past.
No joke: I got angry hate mail today for writing an obituary of a Black woman scientist—because the person felt she didn't deserve the recognition.
Which just makes me want to share it again: www.nature.com/articles/d41...
A great thread on AI-driven epistemic contamination