Why LLMs Hallucinate, and How to Make Them Say "I Don't Know" (EACL 2026)

New episode in the #WiAIR @ EACL 2026 series is out!

We speak with Maor Juliet Lavi about her new paper:
"Detecting (Un)answerability in Large Language Models with Linear Directions"

👉 Watch it here: youtu.be/CCPE58A_FCQ

#EACL2026 #WiAIRpodcast


๐ŸŽ™๏ธ ๐๐ž๐ฐ #๐–๐ข๐€๐ˆ๐‘ ๐„๐ฉ๐ข๐ฌ๐จ๐๐ž ๐Ž๐ฎ๐ญ!

In the new #WiAIRpodcast episode with Hila Gonen, we talk about semantic leakage, interventional analysis of LLMs, and the line between bias, hallucination, and leakage.

📷 YouTube: youtu.be/Lsq3UzM8wIg


๐ŸŽ™๏ธ ๐๐ž๐ฐ #๐–๐ข๐€๐ˆ๐‘ ๐„๐ฉ๐ข๐ฌ๐จ๐๐ž ๐Ž๐ฎ๐ญ!
Are reasoning models actually reasoning โ€” or just producing convincing stories?
In the new #WiAIRpodcast episode with Letitia Parcalabescu, we talk about faithfulness, hallucinations, RAG, and CoT.

🎬 YouTube: youtu.be/gzQiDCG_j7A


We're kicking off Season 2 of the #WiAIRpodcast with @swabhs.bsky.social (USC), discussing hidden system prompts, LLM safety and alignment.

🎧 Full episode coming soon; subscribe on YouTube: youtu.be/DDjBG_AhUjQ


What does neuroscience say about how language models represent meaning, and why isn't scale enough?

In this #WiAIRpodcast episode, we speak with @mryskina.bsky.social on neuroscience × AI, evaluation limits, interpretability, and why community shapes better research.

🎬 youtu.be/PQx4IvJR8Bg


Do LLMs really understand, or are we mistaking language for thought?

In the next #WiAIRpodcast episode, @mryskina.bsky.social explores language vs. thought in LLMs, what AI can learn from cognitive science, and why model internals matter.

Full conversation coming soon.
youtu.be/1N-Cdts6Y7k


We're excited to welcome @mryskina.bsky.social, CIFAR AI Safety postdoc at Vector Institute for AI @vectorinstitute.ai, as our next guest on Women in AI Research.

#wiair #wiairpodcast


Check out this trailer of our next #WiAIRpodcast episode with @mariaa.bsky.social

👉 youtu.be/n0kL7p7gELA


๐ŸŽ™๏ธ New WiAIR Episode Incoming!

We're excited to welcome @mariaa.bsky.social, Assistant Professor of Computer Science at @colorado.edu, as our next guest on #WiAIRpodcast.

Subscribe to our YouTube channel and don't miss the upcoming episode: www.youtube.com/@WomeninAIRe...

#wiair #wiairpodcast

Day 1 at NeurIPS 2025 - WiML workshop

If you're not yet following my podcast on YouTube, it's time to subscribe now: I'm covering the #neurips2025 conference in San Diego there.

youtu.be/1vAJdWiWLAg?...

#wiair #wiairpodcast


WiAIR is heading to @neuripsconf.bsky.social 2025 in San Diego! Will you be there next week?

Subscribe to our YouTube channel for daily highlights during the conference: www.youtube.com/@WomeninAIRe...

#neurips2025 #wiair #wiairpodcast


In Cantonese & Taiwanese, the greeting isn't "hello" - it's "Have you eaten?"
Not about food, but about care. 💛

Dr. Annie Lee explains why LLMs often miss these cultural meanings - and why multilingual AI needs more than translation. More in the full episode!

#wiair #wiairpodcast


๐ŸŽ™๏ธ New #WiAIR episode!

We talk with Dr. Annie En-Shiun Lee (@ontariotechu.bsky.social) about multilingual & multicultural AI: the language gap, missing benchmarks, and why domain-specific data matters.

#wiairpodcast


๐ŸŽ™๏ธ New #WiAIR episode coming soon!

We talk with Dr. Annie En-Shiun Lee (@ontariotechu.bsky.social & @utoronto.ca) about multilingual AI, inclusion in research - and proving you can build an amazing career while raising a family.

#wiairpodcast


AI models are built on human values - but whose values, exactly? 🌍

Vered Shwartz highlights that diverse teams - across gender, culture, and discipline - are essential for building fair and trustworthy AI systems.

#llms #wiair #wiairpodcast


🤖 Can LLMs respect culture and facts?

We want AI systems that understand diverse cultures 𝘢𝘯𝘥 stay grounded in factual truth.
But can we really have both?

Vered Shwartz explains this core challenge of modern LLMs.

#llms #wiair #wiairpodcast


Listen now to explore how culture shapes AI, and why building culturally aware models is key to a fairer, more inclusive future.

🔗 YouTube: youtu.be/RKIvrESep-g
#wiair #wiairpodcast

/4


๐ŸŽ™๏ธ New episode of Women in AI Research (WiAIR) out now!

We sit down with @veredshwartz.bsky.social (Asst prof and CIFAR AI Chair) to talk about the important challenge in AI โ€” cultural bias. ๐ŸŒ

#nlproc #wiair #wiairpodcast

/1


LLMs are shaping hiring, healthcare, and law, but can they truly understand users from every culture?

In our latest #WiAIRpodcast episode, Dr. Vered Shwartz explores how cultural bias impacts fairness and inclusivity in AI.

🎧 Watch here
👉 www.youtube.com/watch?v=9x2Q...

#wiair


👉 Do large language models really reason the way their chain-of-thoughts suggest?
This week on #WiAIRpodcast, we talk with Ana Marasović (@anamarasovic.bsky.social) about her paper "Chain-of-Thought Unfaithfulness as Disguised Accuracy." (1/6🧵)
📄 Paper: arxiv.org/pdf/2402.14897


How do we really know when and how much to trust large language models? 🤔
In this week's #WiAIRpodcast, we talk with Ana Marasović (Asst Prof @ University of Utah; ex @ Allen AI, UWNLP) about explainability, trust, and human–AI collaboration. (1/8🧵)


Our new guest at #WiAIRpodcast is @anamarasovic.bsky.social
(Asst Prof @ University of Utah, ex @ Allen AI). We'll talk with her about faithfulness, trust, and robustness in AI.
The episode is coming soon, don't miss:
www.youtube.com/@WomeninAIRe...

#WiAIR #NLProc

2 1 0 0
Open Science and LLMs, with Dr. Valentina Pyatkin
Can open-source large language models really outperform closed ones like Claude 3.5? 🤔 In this episode of the Women in AI Research podcast, Jekaterina Novikova and Malikeh Ehghaghi engage with…

Don't miss the full episode:
🎬 YouTube: www.youtube.com/watch?v=DPhq...
🎙 Spotify: open.spotify.com/episode/7aHP...

#WiAIRpodcast #WiAIR


🚀 Can open science beat closed AI? Tülu 3 makes a powerful case. In our new #WiAIRpodcast, we speak with Valentina Pyatkin (@valentinapy.bsky.social) of @ai2.bsky.social and the University of Washington about a fully open post-training recipe: models, data, code, evals, and infra. #WomenInAI 1/8🧵


🚀 New WiAIR Podcast Episode!
Can open-source LLMs really outperform closed ones like Claude 3.5? 🤔

We asked Valentina Pyatkin (AI2, UW), and you'll be interested to hear her answers.

#NLProc #WiAIR #WiAIRpodcast


"๐‹๐‹๐Œ ๐๐จ๐ฌ๐ญ-๐ญ๐ซ๐š๐ข๐ง๐ข๐ง๐ : ๐Ž๐ฉ๐ž๐ง ๐’๐œ๐ข๐ž๐ง๐œ๐ž ๐“๐ก๐š๐ญ ๐๐จ๐ฐ๐ž๐ซ๐ฌ ๐๐ซ๐จ๐ ๐ซ๐ž๐ฌ๐ฌ " ๐ŸŽ™๏ธ

On Sept 17, the #WiAIRpodcast speaks with @valentinapy.bsky.social (@ai2.bsky.social & University of Washington) about open science, post-training, mentorship, and visibility

#WiAIR #NLProc


LLMs are still black boxes, but two research directions stand out:
✨ Natural language explanations
🔎 Mechanistic interpretability

Both reshape how we think about trust in AI.
💬 Which approach do you believe has more impact, and why?

#AI #LLM #Interpretability #Explainability #WiAIR #WiAIRpodcast


🚀 New WiAIR podcast episode is out!
Guest: Sophia Simeng Han (Yale PhD, Meta FAIR, ex-DeepMind/AWS).

We dive into:
• Evaluating LLM reasoning beyond accuracy
• Lessons from human reasoning chains
• CogSci insights for AI

#WiAIRpodcast #WiAIR


✨ Accuracy isn't enough.
To advance LLMs, we need to look beyond outputs & examine reasoning traces.

Full convo with Simeng Han dropping Aug 27 on #WiAIRPodcast:
🎬 YouTube: www.youtube.com/@WomeninAIRe...
🎙️ Spotify: open.spotify.com/show/51RJNlZ...


Early struggles and rejection are normal in academia. Your value as a scientist is not about how quickly things happen; it is about your persistence and passion. Keep going 💪✨

Stay tuned: www.youtube.com/@WomeninAIRe...

#Science #Research #AcademicLife #WiAIRpodcast
