New episode in the #WiAIR @ EACL 2026 series is out!
We speak with Maor Juliet Lavi about her new paper:
"Detecting (Un)answerability in Large Language Models with Linear Directions"
Watch it here: youtu.be/CCPE58A_FCQ
#EACL2026 #WiAIRpodcast
🎙️ New #WiAIR Episode Out!
In the new #WiAIRpodcast episode with Hila Gonen, we talk about semantic leakage, interventional analysis of LLMs, and the line between bias, hallucination, and leakage.
YouTube: youtu.be/Lsq3UzM8wIg
🎙️ New #WiAIR Episode Out!
Are reasoning models actually reasoning, or just producing convincing stories?
In the new #WiAIRpodcast episode with Letitia Parcalabescu, we talk about faithfulness, hallucinations, RAG, and CoT.
YouTube: youtu.be/gzQiDCG_j7A
We're kicking off Season 2 of the #WiAIRpodcast with @swabhs.bsky.social (USC), discussing hidden system prompts, LLM safety and alignment.
Full episode coming soon. Subscribe on YouTube: youtu.be/DDjBG_AhUjQ
What does neuroscience say about how language models represent meaning, and why isn't scale enough?
In this #WiAIRpodcast episode, we speak with @mryskina.bsky.social on neuroscience × AI, evaluation limits, interpretability, and why community shapes better research.
youtu.be/PQx4IvJR8Bg
Do LLMs really understand, or are we mistaking language for thought?
In the next #WiAIRpodcast episode, @mryskina.bsky.social explores language vs. thought in LLMs, what AI can learn from cognitive science, and why model internals matter.
Full conversation coming soon.
youtu.be/1N-Cdts6Y7k
We're excited to welcome @mryskina.bsky.social, CIFAR AI Safety postdoc at Vector Institute for AI @vectorinstitute.ai, as our next guest on Women in AI Research.
#wiair #wiairpodcast
Check out this trailer of our next #WiAIRpodcast episode with @mariaa.bsky.social
youtu.be/n0kL7p7gELA
🎙️ New WiAIR Episode Incoming!
We're excited to welcome @mariaa.bsky.social, Assistant Professor of Computer Science at @colorado.edu, as our next guest on #WiAIRpodcast.
Subscribe to our YouTube channel and don't miss the upcoming episode: www.youtube.com/@WomeninAIRe...
#wiair #wiairpodcast
If you're not yet following my podcast on YouTube, it's time to subscribe. I'm covering the #NeurIPS2025 conference in San Diego there:
youtu.be/1vAJdWiWLAg?...
#wiair #wiairpodcast
WiAIR is heading to @neuripsconf.bsky.social 2025 in San Diego! Will you be there next week?
Subscribe to our YouTube channel for daily highlights during the conference: www.youtube.com/@WomeninAIRe...
#neurips2025 #wiair #wiairpodcast
In Cantonese & Taiwanese, the greeting isn't "hello" - it's "Have you eaten?"
Not about food, but about care.
Dr. Annie Lee explains why LLMs often miss these cultural meanings - and why multilingual AI needs more than translation. More in the full episode!
#wiair #wiairpodcast
🎙️ New #WiAIR episode!
We talk with Dr. Annie En-Shiun Lee (@ontariotechu.bsky.social) about multilingual & multicultural AI: the language gap, missing benchmarks, and why domain-specific data matters.
#wiairpodcast
🎙️ New #WiAIR episode coming soon!
We talk with Dr. Annie En-Shiun Lee (@ontariotechu.bsky.social & @utoronto.ca) about multilingual AI, inclusion in research - and proving you can build an amazing career while raising a family.
#wiairpodcast
AI models are built on human values - but whose values, exactly?
Vered Shwartz highlights that diverse teams - across gender, culture, and discipline - are essential for building fair and trustworthy AI systems.
#llms #wiair #wiairpodcast
Can LLMs respect culture and facts?
We want AI systems that understand diverse cultures 𝗮𝗻𝗱 stay grounded in factual truth.
But can we really have both?
Vered Shwartz explains this core challenge of modern LLMs.
#llms #wiair #wiairpodcast
Listen now to explore how culture shapes AI, and why building culturally aware models is key to a fairer, more inclusive future.
YouTube: youtu.be/RKIvrESep-g
#wiair #wiairpodcast
🎙️ New episode of Women in AI Research (WiAIR) out now!
We sit down with @veredshwartz.bsky.social (Asst Prof and CIFAR AI Chair) to talk about an important challenge in AI: cultural bias.
#nlproc #wiair #wiairpodcast
LLMs are shaping hiring, healthcare, and law, but can they truly understand users from every culture?
In our latest #WiAIRpodcast episode, Dr. Vered Shwartz explores how cultural bias impacts fairness and inclusivity in AI.
Watch here:
www.youtube.com/watch?v=9x2Q...
#wiair
Do large language models really reason the way their chains of thought suggest?
This week on #WiAIRpodcast, we talk with Ana Marasović (@anamarasovic.bsky.social) about her paper "Chain-of-Thought Unfaithfulness as Disguised Accuracy." (1/6🧵)
Paper: arxiv.org/pdf/2402.14897
How do we really know when and how much to trust large language models?
In this week's #WiAIRpodcast, we talk with Ana Marasović (Asst Prof @ University of Utah; ex @ Allen AI, UWNLP) about explainability, trust, and human-AI collaboration. (1/8🧵)
Our new guest at #WiAIRpodcast is @anamarasovic.bsky.social
(Asst Prof @ University of Utah, ex @ Allen AI). We'll talk with her about faithfulness, trust, and robustness in AI.
The episode is coming soon, don't miss:
www.youtube.com/@WomeninAIRe...
#WiAIR #NLProc
Don't miss the full episode:
YouTube: www.youtube.com/watch?v=DPhq...
Spotify: open.spotify.com/episode/7aHP...
#WiAIRpodcast #WiAIR
Can open science beat closed AI? Tülu 3 makes a powerful case. In our new #WiAIRpodcast, we speak with Valentina Pyatkin (@valentinapy.bsky.social) of @ai2.bsky.social and the University of Washington about a fully open post-training recipe: models, data, code, evals, and infra. #WomenInAI 1/8🧵
New WiAIR Podcast Episode!
Can open-source LLMs really outperform closed ones like Claude 3.5?
We asked Valentina Pyatkin (AI2, UW), and you'll be interested to hear her answers.
#NLProc #WiAIR #WiAIRpodcast
"๐๐๐ ๐๐จ๐ฌ๐ญ-๐ญ๐ซ๐๐ข๐ง๐ข๐ง๐ : ๐๐ฉ๐๐ง ๐๐๐ข๐๐ง๐๐ ๐๐ก๐๐ญ ๐๐จ๐ฐ๐๐ซ๐ฌ ๐๐ซ๐จ๐ ๐ซ๐๐ฌ๐ฌ " ๐๏ธ
On Sept 17, the #WiAIRpodcast speaks with @valentinapy.bsky.social (@ai2.bsky.social & University of Washington) about open science, post-training, mentorship, and visibility
#WiAIR #NLProc
LLMs are still black boxes, but two research directions stand out:
• Natural language explanations
• Mechanistic interpretability
Both reshape how we think about trust in AI.
Which approach do you believe has more impact, and why?
#AI #LLM #Interpretability #Explainability #WiAIR #WiAIRpodcast
New WiAIR podcast episode is out!
Guest: Sophia Simeng Han (Yale PhD, Meta FAIR, ex-DeepMind/AWS).
We dive into:
• Evaluating LLM reasoning beyond accuracy
• Lessons from human reasoning chains
• CogSci insights for AI
#WiAIRpodcast #WiAIR
Accuracy isn't enough.
To advance LLMs, we need to look beyond outputs & examine reasoning traces.
Full convo with Simeng Han dropping Aug 27 on #WiAIRPodcast:
YouTube: www.youtube.com/@WomeninAIRe...
Spotify: open.spotify.com/show/51RJNlZ...
Early struggles and rejection are normal in academia. Your value as a scientist is not about how quickly things happen; it is about your persistence and passion. Keep going!
Stay tuned: www.youtube.com/@WomeninAIRe...
#Science #Research #AcademicLife #WiAIRpodcast