🚨 New preprint, and our results are rather concerning...
We find the "boiling frog" equivalent of AI use. Using large-scale RCTs, we provide *causal* evidence that AI assistance reduces persistence and hurts independent performance.
And these effects emerge after just 10–15 minutes of AI use!
1/
Posts by Padhraic Smyth
This is really, really good.
If you have any interest in the history of ML from the '70s/'80s onwards, this podcast series by Tom Mitchell is well worth your attention: www.youtube.com/playlist?lis...
if your specific niche is also "bizarre intertwingled anecdotes about 1950s-1990s computer scientists" i have built your holy grail:
just took a thousand oral histories from the @computerhistory.bsky.social museum and made them fully searchable and deeply interconnected
🎉welcome to f0lkl0r3.dev
There's growing evidence that LLMs can p-hack.
But p-hacking also points to something bigger: a data science multiverse of defensible analytical choices.
We wrote a paper (arxiv.org/abs/2602.18710) on using LLM agents to map this multiverse systematically. 🧵
🥁🥁🥁 Newly out from us today in Science Advances: “Biased AI Writing Assistants Shift Users’ Attitudes on Societal Issues”.
Large Language Models are providing users with autocomplete writing suggestions on many platforms. Could these suggestions shift users’ own attitudes? (spoiler: YES) (1/7)
Michael (@mkearnsphilly.bsky.social) and I wrote a blog post about our experiences using AI for research, and our thoughts on what these developments will mean for research, publication, and education: www.amazon.science/blog/how-ai-...
This is worth considering, but I'm skeptical. Nobody likes being criticized, and authors are likely to take it out on the criticizer. I expect that this will inflate the positivity of reviews. The "prestige" of being listed as a reviewer is currently so minimal that it's not worth the danger.
I was lucky to get a sneak preview of Advait’s talk and it is SO GOOD. I wish everyone building AI would watch this.
The problem is us, with our Paleolithic vulnerabilities, our FOMO, our susceptibility to snake oil salesmen and the ELIZA effect. Say no to anthropomorphized tech solutionism and yes to stronger human institutions, fortified by ordinary technology. (11/11)
“To leave our students to their own devices — which is to say, to the devices of AI companies — is to deprive them of...the means to understand the world they live in or navigate it effectively,” Anastasia Berg writes. www.nytimes.com/2025/10/29/o...
Well done to everyone involved behind the scenes at WiML over the past 20 years - it's been a very positive influence in the ML research community!
I cannot wait to celebrate TWENTY YEARS of #WiML at #NeurIPS in San Diego this December! 🎉🥳
Fun fact: The first #WiML was held in San Diego back in 2006! ❤️
Share your memories below, and come hang out on Dec 2! I will be there! I will be speaking! And have I mentioned I AM EXCITED?!?!?!
Nice writeup in @caltech.edu news about the impact of the #Visipedia project in Computer Vision and Citizen Science
Inspiring article... this work has had fantastic impact. I remember Pietro trying to convince me in the 1990s at some point that machine learning could revolutionize computer vision... it seemed a very long way off at the time... glad that he and you and many others persisted :)
I'm not an AI doomer, but having experienced in my own department over the past 7 years a steady erosion of faculty governance norms and diminished prioritization of research-oriented pedagogy, this tracks. Let faculty and faculty interests lead the way, rather than administrators/regents.
We (UC Irvine) are hiring for a faculty position (any level) in AI/ML/vision/NLP/etc. If you would like to work in a great department with great colleagues please apply! and please distribute to students, researchers, faculty who may be interested. More info here: drive.google.com/file/d/1ZyY5...
Now that school is starting for lots of folks, it's time for a new release of Speech and Language Processing! Jim and I added all sorts of material for the August 2025 release! With slides to match! Check it out here: web.stanford.edu/~jurafsky/sl...
Florida taken over by rabid frequentists?
📣 Please share: We invite submissions to the 29th International Conference on Artificial Intelligence and Statistics (#AISTATS 2026) and welcome paper submissions at the intersection of AI, machine learning, statistics, and related areas. [1/3]
Every rich person is going to tell *you* how great AI teaching is while sending *their* kids to the kind of schooling the Ancient Greeks would recognize. I just wish everyone would think about why that is.
Our computer vision textbook is now available for free online here:
visionbook.mit.edu
We are working on adding some interactive components like search and (beta) integration with LLMs.
Hope this is useful and feel free to submit Github issues to help us improve the text!
I've heard this personally from multiple PMs at AI companies. Students are one of the biggest demographics and they need to "break in" and have even more usage to improve their metrics. Classic corporate economic incentives
This Senate proposal advanced by Sen Cruz would cancel new and existing State laws on any aspect of tech use including civil rights, consumer protection, privacy, fraud, safety for kids, accessibility, and more. In short, we’d lose the few laws we have that ensure responsible AI use. #killthebill
People keep plugging AI "Co-Scientists," so what happens when you ask them to do an important task like finding errors in papers?
We built SPOT, a dataset of STEM manuscripts across 10 fields annotated with real errors to find out.
(tl;dr not even close to usable) #NLProc
arxiv.org/abs/2505.11855
Llama 3.1 70B contains copies of nearly the entirety of some books. Harry Potter is just one of them. I don't know if this means it's an infringing copy. But the first question to answer is whether it's a copy in the first place. That's what our new results suggest:
arxiv.org/abs/2505.12546
A call for scientists to stand up for scientific freedom as well as funding: www.nature.com/articles/d41...
@candicemorey.bsky.social and I were just talking about this. Students, I think, are still (rightly) nervous about submitting LLM-produced work, but they are using it to summarise papers they struggle to read. And it shows in their subsequent writing. It's "just reading the abstracts", but worse.
Because we must build good things while we scream about the bad, I have started a "Data for Good" team @data-for-good-team.bsky.social that partners with organizations needing short-term data science help. We have three projects ongoing & will add more as our capacity grows.
data-for-good-team.org