
Posts by Srishti

Post image

Which, whose, and how much knowledge do LLMs represent?

I'm excited to share our preprint answering these questions:

"Epistemic Diversity and Knowledge Collapse in Large Language Models"

📄 Paper: arxiv.org/pdf/2510.04226
💻 Code: github.com/dwright37/ll...

1/10

6 months ago 89 27 2 1

Happy to share that our work on multi-modal framing analysis of news was accepted to #EMNLP2025!

Understanding news output and its embedded biases is especially important in today's environment, and it's imperative to examine them holistically.

Looking forward to presenting it in Suzhou!

8 months ago 25 6 1 0
Microsoft Forms

🎓 Looking for PhD opportunities in #NLProc for a start in Spring 2026?

🗒️ Add your expression of interest to join @copenlu.bsky.social here by 20 July: forms.office.com/e/HZSmgR9nXB

Selected candidates will be invited to submit a DARA fellowship application with me: daracademy.dk/fellowship/f...

9 months ago 14 13 0 0
Dara

📣 I am happy to support Ph.D. applications to the Danish Advanced Research Academy. My main areas of research include multimodal learning and tokenization-free language processing. Feel free to reach out if you have similar interests! Applications are due August 29: www.daracademy.dk/fellowship/f...

9 months ago 4 1 0 0
Post image

Congratulations Andrew Rabinovich (PhD '08) on winning the Longuet-Higgins Prize at #CVPR2025! (1/2)

10 months ago 17 5 2 0
Post image

My favorite part of going to conferences: @belongielab.org alumni get-togethers! A big thank you to Menglin for coordinating the lunch at @cvprconference.bsky.social 🙏

Left: Tsung-Yi Lin, Guandao Yang, Katie Luo, Boyi Li; Right: Menglin Jia, Subarna Tripathi, Ph.D., Srishti, Xun Huang

10 months ago 19 1 0 0
Post image Post image

Panel talk happening right now at @vlms4all.bsky.social ! Come join us at #CVPR25 (room: 104E)

10 months ago 3 1 0 0
Preview
[EvalEval Infra] Better Infrastructure for LM Evals

Welcome to the EvalEval Working Group Infrastructure! Please help us get set up by filling out this form. This is an interest form to contribute/collaborate on a research project building standardized infrastructure for AI evaluation.

Status quo: The AI evaluation ecosystem currently lacks standardized methods for storing, sharing, and comparing evaluation results across different models and benchmarks. This fragmentation leads to unnecessary duplication of compute-intensive evaluations, challenges in reproducing results, and barriers to comprehensive cross-model analysis.

What's the project? We plan to address these challenges by developing a standardized format for capturing the complete evaluation lifecycle: a clear, extensible structure for documenting evaluation inputs (hyperparameters, prompts, datasets), outputs, metrics, and metadata. This standardization enables efficient storage, retrieval, sharing, and comparison of evaluation results across the AI research community. Building on this foundation, we will create a centralized repository with both raw data access and API interfaces that allow researchers to contribute evaluation runs and access cached results. The project will integrate with popular evaluation frameworks (LM-eval, HELM, Unitxt), provide SDKs to simplify adoption, and be populated with evaluation results from leading AI models across diverse benchmarks, reducing computational redundancy and facilitating deeper comparative analysis.

Tasks? As a collaborator, you would be expected to:
- Work towards merging/integrating popular evaluation frameworks (LM-eval, HELM, Unitxt)
- Group 1, Extend to Any Task: design universal metadata schemas that work for ANY NLP task, extending beyond current frameworks like lm-eval/DOVE to support specialized domains (e.g., machine translation)
- Group 2, Save the Relevant: develop efficient query/download systems for accessing only relevant data subsets from massive repositories (DOVE: 2 TB; HELM: extensive metadata)

The result will be open infrastructure for the AI research community, plus an academic publication.

When? We're looking for researchers who can join ASAP and work with us for at least 5 to 7 months, taking this on as an active project (8+ hours/week).

🚀 Technical practitioners & grads – join to build an LLM evaluation hub!
Infra goals:
🔧 Share evaluation outputs & params
📊 Query results across experiments

Perfect for 🧰 hands-on folks ready to build tools the whole community can use

Join the EvalEval Coalition here 👇
forms.gle/6fEmrqJkxidy...
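As a rough illustration of the standardized evaluation-record idea described above, one could capture a single run as a small serializable record. This is a minimal sketch, not the working group's actual schema; all field names here (`model`, `benchmark`, `hyperparameters`, `metrics`, `metadata`) are hypothetical placeholders:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class EvalRun:
    """One evaluation run: inputs, outputs, and metadata.

    Illustrative placeholder fields, not an agreed-upon schema.
    """
    model: str                                            # model identifier
    benchmark: str                                        # benchmark/dataset name
    hyperparameters: dict = field(default_factory=dict)   # e.g. temperature, shots
    metrics: dict = field(default_factory=dict)           # metric name -> score
    metadata: dict = field(default_factory=dict)          # framework, date, etc.

    def to_json(self) -> str:
        # A shared serialization makes runs easy to store, share, and compare
        return json.dumps(asdict(self), sort_keys=True)

run = EvalRun(
    model="example-7b",
    benchmark="example-qa",
    hyperparameters={"temperature": 0.0, "num_fewshot": 5},
    metrics={"accuracy": 0.42},
    metadata={"framework": "lm-eval"},
)
print(run.to_json())
```

The point of fixing a format like this is that a repository can cache and deduplicate runs keyed on the serialized inputs, which is exactly the redundancy the project aims to eliminate.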

10 months ago 3 1 0 0

Please join us for the FGVC workshop at CVPR 2025 @cvprconference.bsky.social on Wed 11th of June. The full schedule and list of fantastic speakers can be found on our website:
sites.google.com/view/fgvc12

10 months ago 10 4 0 0
Post image

Can you train a performant language model using only openly licensed text?

We are thrilled to announce the Common Pile v0.1, an 8TB dataset of openly licensed and public domain text. We train 7B models for 1T and 2T tokens and match the performance of similar models like LLaMA 1 & 2

10 months ago 146 60 2 2
Post image

"Large [language] models should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated." henryfarrell.net/wp-content/u...

10 months ago 80 18 2 5
Preview
NeurIPS participation in Europe We seek to understand if there is interest in being able to attend NeurIPS in Europe, i.e. without travelling to San Diego, US. In the following, assume that it is possible to present accepted papers ...

Would you present your next NeurIPS paper in Europe instead of traveling to San Diego (US) if this was an option? Søren Hauberg (DTU) and I would love to hear the answer through this poll: (1/6)

1 year ago 280 161 6 12
Preview
"I don't want to outsource my brain": How political cartoonists are bringing AI into their work Pulitzer-winning cartoonists are experimenting with AI image generators.

"I don't want to just be entering text prompts for the rest of my life."

I spoke to political cartoonists, including Pulitzer-winner Mark Fiore, about how they are using AI image generators in their work. My latest for @niemanlab.org.
www.niemanlab.org/2025/06/i-do...

10 months ago 6 3 0 0
Culture is not trivia: sociocultural theory for cultural NLP. By Naitian Zhou and David Bamman from the Berkeley School of Information and Isaac L. Bleaman from Berkeley Linguistics.


There's been a lot of work on "culture" in NLP, but not much agreement on what it is.

A position paper by me, @dbamman.bsky.social, and @ibleaman.bsky.social on cultural NLP: what we want, what we have, and how sociocultural linguistics can clarify things.

Website: naitian.org/culture-not-...

1/n

1 year ago 122 35 5 4
Post image

Check out our new preprint TensorGRaD.
We use a robust decomposition of the gradient tensors into low-rank + sparse parts to reduce optimizer memory for Neural Operators by up to 75%, while matching the performance of Adam, even on turbulent Navier–Stokes (Re 10^5).
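As a rough sketch of the idea (not the authors' implementation, which operates on higher-order gradient tensors), a gradient matrix can be split into a truncated-SVD low-rank part plus a sparse part that keeps only the largest-magnitude residual entries; the `rank` and `sparse_frac` parameters below are illustrative choices:

```python
import numpy as np

def lowrank_plus_sparse(grad, rank=4, sparse_frac=0.01):
    """Split a gradient matrix into low-rank + sparse parts (toy sketch)."""
    # Low-rank part from a truncated SVD
    U, s, Vt = np.linalg.svd(grad, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]

    # Sparse part: keep only the largest-magnitude residual entries
    residual = grad - low_rank
    k = max(1, int(sparse_frac * residual.size))
    threshold = np.partition(np.abs(residual).ravel(), -k)[-k]
    sparse = np.where(np.abs(residual) >= threshold, residual, 0.0)
    return low_rank, sparse

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 64))
L, S = lowrank_plus_sparse(G)
```

Storing optimizer state only for the low-rank factors and the few sparse entries, rather than the full dense gradient, is where the memory saving comes from under this sketch's assumptions.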

10 months ago 30 7 2 2

PhD student Srishti Yadav and her collaborators are out with new, interdisciplinary work 👇

10 months ago 3 1 0 0

Check out our new paper led by @srishtiy.bsky.social and @nolauren.bsky.social! This work brings together computer vision, cultural theory, semiotics, and visual studies to provide new tools and perspectives for the study of ~culture~ in VLMs.

10 months ago 26 8 1 0

A delight to work with great colleagues to bring theory around visual culture and cultural studies to how we think about visual language models.

10 months ago 16 5 0 0

This work was an amazing collaboration with @nolauren.bsky.social @mariaa.bsky.social @taylor-arnold.bsky.social @jiaangli.bsky.social Siddhesh Pawar, Antonia Karamolegkou, @scfrank.bsky.social @zhaochongan.bsky.social Negar Rostamzadeh, @danielhers.bsky.social @serge.belongie.com Ekaterina Shutova

10 months ago 5 0 0 0

We find that decades of visual cultural studies offer powerful ways to decode cultural meaning in images! Rather than proposing yet another benchmark, our goal with this paper was to revisit and re-contextualize foundational theories of culture so that they can pave the way for more inclusive frameworks.

10 months ago 2 0 1 0
Post image

We then propose 5 frameworks to evaluate cultures in VLMs:
1️⃣ Processual Grounding – who defines culture?
2️⃣ Material Culture – what is represented?
3️⃣ Symbolic Encoding – how is meaning layered?
4️⃣ Contextual Interpretation – who understands and frames meaning?
5️⃣ Temporality – when is culture situated?

10 months ago 2 2 1 0

In this paper, we call for integrating methods from 3 fields:
📚 Cultural Studies – how values, beliefs & identities are shaped through cultural forms like images
🔍 Semiotics – how signs & symbols convey meaning
🎨 Visual Studies – how visuals communicate across time & place

10 months ago 3 1 1 0
Post image

Modern Vision-Language Models (VLMs) often fail at cultural understanding. But culture isn't just recognizing things like food, clothes, and rituals. It's how meaning is made and understood; it's also about symbolism, context, and how these things evolve over time.

10 months ago 2 1 1 0
Paper title "Cultural Evaluations of Vision-Language Models Have a Lot to Learn from Cultural Theory"

I am excited to announce our latest work 🎉 "Cultural Evaluations of Vision-Language Models Have a Lot to Learn from Cultural Theory". We review recent works on culture in VLMs and argue for deeper grounding in cultural theory to enable more inclusive evaluations.

Paper 🔗: arxiv.org/pdf/2505.22793

10 months ago 57 18 3 5
Post image

This morning at P1 a handful of lucky lab members got to see the telescope while centre secretary Björg had the dome open for a building tour 🔭 (1/7)

11 months ago 16 3 1 1
Post image

🚀 New Preprint 🚀
Can Multimodal Retrieval Enhance Cultural Awareness in Vision-Language Models?

Excited to introduce RAVENEA, a new benchmark aimed at evaluating cultural understanding in VLMs through RAG.
arxiv.org/abs/2505.14462

More details: 👇

10 months ago 17 7 1 2

When you have a lot of work before the deadline push, you keep thinking of other things (distractions) you'd like to do. The day you get free, those things suddenly don't seem important anymore. And you kind of miss work! 🙄

10 months ago 1 0 0 0

This is amazing!! I saw that the dataset's original webpage was being archived this month. I was wondering what'll happen to this data.

11 months ago 3 0 0 0
Screenshot of the dataset viewer on the Hugging Face Hub. Shows a set of metadata for the newspaper navigator dataset. It also has previews of a few rows showing images alongside metadata columns.


๐Ÿ—ž๏ธ Just released a Parquet version of the Newspaper Navigator dataset on @hf.co!

- 3M+ visual elements from historic US newspapers โ€” photos, maps, cartoons, OCR + metadata.
- Parquet = fast filters, easier analysis.
- Great for ML + cultural research.

๐Ÿ‘‰ huggingface.co/datasets/big...

11 months ago 14 7 1 0

We work under this telescope and sometimes get to visit it!

11 months ago 10 1 0 0