Posts by Learning Informatics Lab

2025 ImBlaze X LIL Graduate Research Competition logo

We're excited to announce the semifinalists for the 2025 ImBlaze x LIL Graduate Research Competition. Click the link below for the full announcement, and congrats to all our semifinalists!
innovation.umn.edu/informatics/...

10 months ago
Cognitive Assessment of LLMs

Check out our latest LIL colloquium from Karin de Langis, all about her work assessing the emergent cognitive abilities in large language models (LLMs): youtu.be/YGbRYEL7S40

#EduSky #LLM #AI

1 year ago
SmartPal: Augmenting Learning Management Systems with LLM Chatbots and Gamification

Check out our spring LIL colloquium from Dr. De Liu, all about his work with SmartPal, a digital learning assistant that combines gamification and LLM chatbots to enhance student learning in online education spaces:
youtu.be/8j8LLoxwTMs

#EduSky #LLM #Gamification

1 year ago
A description of our fall colloquium talk by Dr. Harmanpreet Kaur. The main body of the text reads as follows: 
"Human-AI partnerships are increasingly commonplace. Yet, systems that rely on these partnerships are unable to effectively capture the dynamic needs of people, or explain complex AI reasoning and outputs. This results in a propagation of biases and missed edge cases in all kinds of applications. My work follows the belief that for human-AI interaction to be effective and safe, technical development in AI must come in concert with an understanding of human-centric cognitive, social, and organizational phenomena. Using human-AI interaction in the context of ML-based decision-support systems as a case study, in this talk, I will discuss my research that explains why explainable AI does not work in practice. I will also share design ideas—both completed and current work—to help people with varying expertise understand AI outputs."

We're excited to announce our Fall Colloquium entitled "Leveraging Social Theories to Enhance Human-AI Interaction".
Join Dr. Harmanpreet Kaur this Friday (12/6) at 4 in Ed Sci 325 as she delves into her work on human-centered and explainable AI!

#AI #Explainability #LearningInformatics

1 year ago

Going to #psynom24? Check out great work from our lab on misinformation, discourse and memory processes, and code comprehension! @kendeou.bsky.social @hwangphd.bsky.social

1 year ago