The Abomination of AI – part 1 – setting the scene. First of a series of posts based on my ICoSCI 2026 keynote. The AI industry seems out of control, as digital tech and AI undermine the assumptions of market economics, driving massive inequality.
alandix.com/blog/2026/03...
Posts by Tommaso Turchi
Yessss please. Even for academics, my inbox is full of emails that expired days ago but are still haunting my storage and my brain! 😅
www.zerocarbon.email/
Late to the party, but if you missed it: a genuinely inspiring take on using AI to make you think, rather than think for you. Well done Advait! We need more of this, instead of the usual "all-in or all-out" narrative around AI.
www.ted.com/talks/advait_sark...
We keep testing doctors on AI's turf — showing an image, asking "where's the tumor?", and calling it science.
But diagnosis isn't image labeling. It's context, time, uncertainty, and reasoning.
Our new paper argues that AI-in-healthcare needs ecological validity.
📄 doi.org/10.1145/3750069.3750072
This new Stanford study is another alarm bell: AI is hitting entry-level jobs hardest, with a 13% drop for young workers in exposed fields like software dev. The bottom rung is vanishing.
Humans are terrible at explaining their own decision-making. We confabulate logical stories for intuitive choices. New research shows LLMs do the same - with serious implications for medicine, law, and other high-stakes applications.
It's not AI that's undermining science (or education, or so many other fields) - it's the relentless push for more output with little reward for quality. AI just amplifies a pre-existing problem by offering a quick, sloppy way out.
This piece is a wake-up call: entry-level devs can't get the on-the-job training opportunities past generations had. AI eliminated the bottom rung of the ladder. Universities need to step up and provide the practical experience industry won't.
Can you spare 10–15 min for an #AI research project? My student is studying how explanations impact trust in AI decisions. No AI experience needed!
👉English: https://survey.trx.li/index.php/193548?lang=en
👉Italiano: https://survey.trx.li/index.php/193548?lang=it
RTs greatly appreciated! #Explainab
“Nobody expects a computer simulation of a hurricane to generate real wind and real rain.” Neuroscientist Anil Seth argues we overestimate the odds of conscious AI—intelligence ≠ consciousness. We should be cautious about trying to create conscious machines. bigthink.com/neuropsych/t...
If you're in AI or education—or simply affected by them (so everyone at this point)—you should watch this Veritasium talk.
It connects Kahneman's System 1 & 2 to why "education revolutions" keep falling short, and what AI might change (for better or worse).
https://www.youtube.com/watch?v=0xS68sl2
Stop trying to make LLMs run your app logic. They're language models—use them to interpret what the user meant, then hand off to real code. Great piece: https://sgnt.ai/p/hell-out-of-llms/
TLDR: Get in, get meaning, get the hell out.
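A minimal sketch of that pattern in Python (the handler names and the JSON intent schema here are invented for illustration; a real system would prompt the LLM to emit only the structured intent, never to execute the logic itself):

```python
import json

def handle_refund(order_id: str) -> str:
    # Real, deterministic business logic -- no LLM involved.
    return f"Refund started for order {order_id}"

def handle_status(order_id: str) -> str:
    return f"Order {order_id} is in transit"

# The LLM's only job is to map free text onto one of these actions.
HANDLERS = {"refund": handle_refund, "status": handle_status}

def dispatch(llm_output: str) -> str:
    """Parse the LLM's structured reply, then hand off to real code."""
    intent = json.loads(llm_output)  # e.g. '{"action": "refund", "order_id": "A17"}'
    action = intent.get("action")
    if action not in HANDLERS:
        return "Sorry, I didn't understand that request."
    return HANDLERS[action](intent["order_id"])

# In practice this JSON would come from the model, prompted with
# something like: 'Reply ONLY with {"action": ..., "order_id": ...}'.
print(dispatch('{"action": "refund", "order_id": "A17"}'))
```

Everything after `json.loads` is ordinary, testable code — the model never touches state or control flow.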
🎉 Wrapping up an incredible workshop on Adaptive eXplainable AI (#AXAI)! Huge thanks to everyone who joined, shared insights, and sparked exciting conversations. Already counting down to next year's edition—expect even more innovation and collaboration! Stay tuned! #XAI
Just landed in Cagliari! 📍 Tomorrow's the big day—AXAI Workshop is almost here. We've just uploaded the camera-ready papers of all accepted contributions. Check them out here: https://axai.trx.li/accepted-papers/ #AXAI #XAI #IUI2025
Ever wondered how to make your data say exactly what you want? This tool lets you tweak variables like political affiliation and economic metrics to achieve that elusive "statistical significance". Surely that's just fictional... p-hacking at its finest!
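The arithmetic behind why this works is depressingly simple: test enough variable combinations at a 5% threshold and a "significant" result is almost guaranteed. A quick back-of-the-envelope in Python:

```python
# Probability of at least one false positive under the null,
# when running k independent tests at significance level alpha.
alpha = 0.05
for k in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:3d} comparisons -> P(at least one 'significant') = {p_any:.2f}")
```

With 20 tweakable variable combinations you already have a ~64% chance of a spurious hit; at 100 it is a near certainty.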
Can we stop with research papers titled '... is all you need'? Seriously, I think we've had enough…
📢 Excited to announce the accepted papers for #AXAI2025! From digital twins and emotion recognition to LLM-powered explanations and adaptive interfaces - join us in Cagliari to explore the future of explainable AI. Check out the full list: https://axai.trx.li/accepted-papers/ #IUI2025
Some of this attitude comes from the fact that the people making AI tools are engineers viewing everything from an engineering perspective, but it's also that, as a culture, we have adopted this way of thinking as the default.
The Irony
"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process."
Why do you want to work at Anthropic?
— Anthropic, online job application form
Prototype high-fidelity spatial apps.
Funding scientific research… are we doing it well?
🔄 Deadline Extension!
AXAI Workshop @ ACM IUI 2025 submissions now due Jan 15th AoE.
Still time to contribute to shaping adaptive & explainable AI interfaces!
Submit at: https://cmt3.research.microsoft.com/AXAI2025/
More info: https://axai.trx.li
#XAI #HCI #IUI2025
Sometimes magic is just someone spending more time on something than anyone else might reasonably expect.
🎄 Season's Greetings from the Adaptive XAI Workshop organizers!
As we wrap up 2024, a friendly reminder: Just 2 weeks left to submit your work! Deadline: Jan 13, 2025
Join us at #IUI2025 in Sardinia to shape the future of explainable AI 🔍
Details: https://axai.trx.li
#XAI #HCI #AI