The Patrick Henry Winston Outstanding Educator Award went to Alan Mackworth (@alanmackworth.bsky.social UBC) and David Poole (@davpoole.bsky.social UBC). At #AAAI2026, they'll be giving a talk "The Essence of Intelligence is Appropriate Action..." Sunday Jan 25 8:30AM @aaai.org @hadihoss.bsky.social
Announcement image for the winners of the 2026 AAAI/EAAI Patrick Henry Winston Outstanding Educator Award. Photos and names of David L. Poole and Alan K. Mackworth are shown below the title.
Cover of the book titled "Artificial Intelligence: Foundations of Computational Agents, Third Edition" by David L. Poole and Alan K. Mackworth. The background features green neural network-like structures.
Congratulations to authors @davpoole.bsky.social and @alanmackworth.bsky.social on receiving the 2026 AAAI/EAAI Patrick Henry Winston Outstanding Educator Award 🎉
Learn more about 'Artificial Intelligence: Foundations of Computational Agents' 🔗 https://cup.org/4sjPDI8
Drs. Mackworth and Poole recognized for making AI education accessible to students worldwide
UBC Computer Science Professors Emeriti Alan Mackworth and David Poole were awarded the AAAI/EAAI Patrick Henry Winston Outstanding Educator Award for developing free online resources for learning the foundations of AI. Congratulations! Read more: www.cs.ubc.ca/news/2026/02...
Delighted to receive this award with @davpoole.bsky.social. @cs.ubc.ca @aaai.org Talk details here aaai.org/conference/a... #ai
"Generating text from LLMs can be seen as plagiarism, one word at a time" is a way to explain the "stochastic parrots" idea of @emilymbender.bsky.social, @timnitgebru.bsky.social, Angelina McMillan-Major and @mmitchell.bsky.social
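The "one word at a time" framing can be made concrete with a toy model. The sketch below is illustrative only: it builds a bigram table from a tiny corpus and samples the next word from observed continuations. Real LLMs use neural networks over tokens rather than word-count tables, and the corpus and function names here are made up, but the generation loop has the same shape.

```python
import random
from collections import defaultdict

# Toy corpus; any text would do.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which word in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, n_words, seed=0):
    """Generate text one word at a time by sampling observed continuations."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(n_words):
        options = following.get(words[-1])
        if not options:  # dead end: word never appears mid-corpus
            break
        words.append(rng.choice(options))  # stochastic next-word pick
    return " ".join(words)

print(generate("the", 6))
```

Every word the toy model emits is copied from somewhere in its training data, which is the sense of "plagiarism, one word at a time" the quoted post gestures at.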
Why AI can’t take over creative writing theconversation.com/why-ai-cant-...
I wrote this piece on AI and creative writing after reading articles by creative writers lamenting the impact of AI theconversation.com/why-ai-cant-...
The 2024 Turing Award winners announced this morning: Barto & Sutton for "developing the conceptual and algorithmic foundations of reinforcement learning". Well deserved. Built on Donald Michie's 1959 idea: matchboxes and colored beads learning tic-tac-toe. awards.acm.org/about/2024-t...
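Michie's matchbox scheme (MENACE) can be sketched in a few lines. This is an assumption-laden toy, not Michie's actual design: each matchbox stands for a game state and holds colored beads for the legal moves; a move is chosen by drawing a bead at random, and beads are added after wins and removed after losses. The class and method names (Matchbox, reinforce) and the three-move example are invented for illustration.

```python
import random

class Matchbox:
    """One matchbox = one game state; beads weight the legal moves."""

    def __init__(self, moves, initial_beads=3):
        # Start with equal numbers of beads for every legal move.
        self.beads = {m: initial_beads for m in moves}

    def draw(self, rng):
        # Probability of a move is proportional to its bead count.
        pool = [m for m, n in self.beads.items() for _ in range(n)]
        return rng.choice(pool)

    def reinforce(self, move, won):
        # Win: add a bead for the chosen move. Loss: remove one,
        # keeping at least one bead so the move stays possible.
        if won:
            self.beads[move] += 1
        else:
            self.beads[move] = max(1, self.beads[move] - 1)

rng = random.Random(0)
box = Matchbox(["corner", "edge", "center"])
for _ in range(100):
    move = box.draw(rng)
    # Pretend "center" always wins, to show the bead counts adapt.
    box.reinforce(move, won=(move == "center"))
print(box.beads)
```

After a hundred plays the winning move has accumulated beads while the others shrink toward one, which is the same trial-and-error credit assignment that modern reinforcement learning formalizes.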
AI is like plastic — a lot of people hate it because it often comes across as fake and tacky, but it’s flexible and it’s cheap and there’s so many things that would be impossible or impractical without it.
And yes, a lot of AI will be junk. Just like everything else. (cf. Sturgeon's Law.)
The "Real World" is unfair. It is biased. And when it comes to estimating treatment effects, the "Big Data" can't fix a bias that's baked into how the data are collected (I would say "design", but there is usually no design involved). So pardon me if I prefer boring old "Useful Evidence".
The Washington Post, Exclusive: Police use facial recognition as it was never intended: as a shortcut to finding and arresting suspects without further evidence. Confident in unproven facial recognition technology, investigators sometimes skip steps, and at least eight Americans have been wrongfully arrested, a Washington Post investigation found.
I’m sorry, this might be a great investigation, but “as it was never intended” is obscene. This is precisely how many people specifically and repeatedly told us — warned us — it would be used.
An explanation of how the idea of the market as "the natural expression of human freedom" is, well, a myth.
Since right-wing libertarians enjoy spreading this myth, we thought it important to share this explanation.
I genuinely cannot believe Google is now showing Gemini-generated medical snippets including *drug summaries and medical advice* to treat serious health conditions.
I checked. It does. With barely a tiny disclaimer about GenAI. How can a (formerly?) trusted company be so incredibly reckless?
This passage from the book, Bullshit Jobs, is worth a read.
Autonomous vehicles accelerate the trend begun by cars of isolating travellers from their neighbourhoods, reducing neighbours to mere obstacles our sensors try to avoid, and hence dissolving the social bonds that hold communities together.
🎊 Happy new year folks! 🎊
👀 ready to start working on that paper deadline? 👀
I have created a list of open research problems from my work and from writing our textbook. Something to work on for the new year! Comments please! #relationallearning #causality #AI #aifca #starAI www.cs.ubc.ca/~poole/resea...
For a textbook introduction to agentic AI see our recent Cambridge University Press AI textbook. We are much less bullish about LLMs and don't use the term "agentic". artint.info
Google AI reporting “no drug interactions” for two drugs that definitely have drug interactions. When can we collectively decide the AI experiment is over?
The UK is consulting on plans to favour AI firms over creators when it comes to copyright. For @newscientist.bsky.social, I wrote about what that means www.newscientist.com/article/2461...
Pleased to share the latest version of my paper with Arthur Spirling and @lexipalmer.bsky.social on replication using LMs
We show:
1. current applications of LMs in political science research *don't* meet basic standards of reproducibility...
Screenshot of Table of Contents (Part 1):
1 Introduction (p. 217)
2 Positionality (p. 221)
3 Overview of Risks and Harms Associated with Computer Vision Systems and Proposed Mitigation Strategies (p. 223)
3.1 Representational Harms (p. 223)
3.2 Quality-of-Service and Allocative Harms (p. 229)
3.3 Interpersonal Harms (p. 237)
3.4 Societal Harms: System Destabilization and Exacerbating Inequalities (p. 245)
4 Frameworks and Principles for Computer Vision Researchers (p. 266)
4.1 Guidelines for Responsible Data and Model Development (p. 267)
4.2 Measurement Modeling (p. 271)
4.3 Reflexivity (p. 273)
5 Reorientations of Computer Vision Research (p. 276)
5.1 Grounded in Historical Context and Considering Power Dynamics (p. 276)
5.2 Small, Task Specific (p. 279)
5.3 Community-Rooted (p. 280)
Screenshot of Table of Contents (Part 2):
6 Systemic Change (p. 285)
6.1 Collective Action and Whistleblowing (p. 285)
6.2 Refusal/The Right not to Build Something (p. 287)
6.3 Independent Funding Outside of Military and Multinational Corporations (p. 289)
7 Conclusion (p. 291)
References (p. 293)
Dear computer vision researchers, students & practitioners🔇🔇🔇
Remi Denton & I have written what I consider to be a comprehensive paper on the harms of computer vision systems reported to date & how people have proposed addressing them, from different angles.
PDF: cdn.sanity.io/files/wc2kmx...
Josh Tenenbaum on scaling up vs growing up and the path to human-like reasoning #NeurIPS2024
2) how frustrated student (university) researchers are. In discussions at poster sessions, many said they couldn't attempt to answer scientific questions because they couldn't afford the compute time. This is a problem for the future of the field, and is one way big-data AI will hit a wall.
Two things I learned from #neurips2024 1) transcriptions are still terrible. The simultaneous transcriptions of the talks didn't take into account the vocabulary of the papers being presented. It's probably the fault of one-size-fits-all language models. There were slides with the vocabulary... 2/
2/ decision networks, MDPs, reinforcement learning, multiagent systems, logic programming, knowledge graphs, relational learning. A release candidate for version 1.0, so comments please! Based on our AI textbook artint.info See aipython.org @davpoole.bsky.social and @alanmackworth.bsky.social
We are pleased to announce the latest version of AIPython.org: open-source, runnable pseudocode (in Python) for all your favorite AI algorithms, including search, CSPs, logic, planning, supervised machine learning, neural networks, graphical models, unsupervised learning, causality, /2
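For a flavor of what "runnable pseudocode" means, here is a hedged sketch of a generic breadth-first graph search. This is NOT the AIPython API (see aipython.org for the real code); the graph, the function name, and the dict-of-lists representation are invented for illustration.

```python
from collections import deque

def breadth_first_search(neighbors, start, goal):
    """Return a shortest start->goal path in an unweighted graph, or None.

    neighbors: dict mapping each node to a list of adjacent nodes.
    """
    frontier = deque([[start]])        # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors.get(node, []):
            if nxt not in visited:     # expand each node at most once
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(breadth_first_search(graph, "a", "e"))  # ['a', 'b', 'd', 'e']
```

The point of code in this style is that it reads like the textbook pseudocode while still executing, so students can trace and modify it directly.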
Human minds, intelligences and states of consciousness are beautifully diverse. I just don't buy that the right approach for AI is "all you need is more compute", pretending AI has an objective view from nowhere, and being owned by a small number of homogeneously white-bread tech firms.
Truly the stupidest idea I have ever seen in journalism.