
Posts by merve

Post image

llama.cpp has vision language model support now! ❤️‍🔥

get started with sota VLMs (gemma 3, Qwen2.5VL, InternVL3 & more) and serve them wherever you want 🤩
learn more github.com/ggml-org/lla... 📖
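To serve one of these, you can point any OpenAI-compatible client at llama-server. A minimal sketch, assuming the server was already started with a vision model and its mmproj file (e.g. `llama-server -m model.gguf --mmproj mmproj.gguf`; filenames here are illustrative) on the default port:

```python
# Query a running llama-server through its OpenAI-compatible chat endpoint.
import base64
import requests

with open("photo.png", "rb") as f:  # illustrative image file
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```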

11 months ago 45 5 2 0
Post image

If you want to ✨ speed-up & harden ✨ your RAG pipelines, use visual document retrieval models ⬇️

We have shipped a how-to guide for VDR models in Hugging Face transformers 🤗📖 huggingface.co/docs/transfo...
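As a taste of the guide, here's a minimal retrieval sketch following the documented ColPali pattern in transformers; the vidore/colpali-v1.2-hf checkpoint and the page filenames are just illustrative:

```python
import torch
from PIL import Image
from transformers import ColPaliForRetrieval, ColPaliProcessor

model = ColPaliForRetrieval.from_pretrained(
    "vidore/colpali-v1.2-hf", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = ColPaliProcessor.from_pretrained("vidore/colpali-v1.2-hf")

pages = [Image.open("page_1.png"), Image.open("page_2.png")]
queries = ["What was Q3 revenue?"]

page_inputs = processor(images=pages, return_tensors="pt").to(model.device)
query_inputs = processor(text=queries, return_tensors="pt").to(model.device)

with torch.no_grad():
    page_emb = model(**page_inputs).embeddings
    query_emb = model(**query_inputs).embeddings

# Late-interaction (MaxSim) scores: one row per query, one column per page.
scores = processor.score_retrieval(query_emb, page_emb)
print(scores)
```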

11 months ago 27 3 3 0
Preview
Visually Multilingual: Introducing mcdse-2b A Blog post by Marco Cimolai on Hugging Face

here's a good blog on MCDSE, a successful DSE model, covering compression and more huggingface.co/blog/marco/a...

1 year ago 3 0 0 0
Post image

Why do people sleep on DSE multimodal retrieval models? 👀

They're just like ColPali, but highly scalable and fast, and you can make them even more efficient with binarization or matryoshka embeddings with little degradation (sketch below) 🪆⚡️

I collected some here huggingface.co/collections/...
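A toy sketch of those two compression tricks on DSE-style dense embeddings (random vectors stand in for real model outputs):

```python
import numpy as np

emb = np.random.randn(1000, 1536).astype(np.float32)  # stand-in for model output

# Matryoshka: keep only the first k dimensions, then re-normalize.
k = 512
mrl = emb[:, :k]
mrl /= np.linalg.norm(mrl, axis=1, keepdims=True)

# Binarization: 1 bit per dimension, packed into uint8 (32x smaller than fp32).
binary = np.packbits((emb > 0).astype(np.uint8), axis=1)

# Binary vectors can be compared with Hamming distance (popcount of XOR).
hamming = np.unpackbits(binary[0] ^ binary[1]).sum()
print(mrl.shape, binary.shape, hamming)
```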

1 year ago 12 1 1 0
Video

I'm so hooked on @hf.co Inference Providers (specifically Qwen2.5-VL-72B) for multimodal agentic workflows with smolagents 🥹

get started ⤵️
> filter models provided by different providers
> test them through widget or Python/JS/cURL
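A minimal sketch of the workflow, assuming a recent smolagents where the provider-backed model class is named InferenceClientModel (earlier releases called it HfApiModel):

```python
from PIL import Image
from smolagents import CodeAgent, InferenceClientModel

# Route requests to Qwen2.5-VL-72B through Inference Providers.
model = InferenceClientModel(model_id="Qwen/Qwen2.5-VL-72B-Instruct")
agent = CodeAgent(tools=[], model=model)

# agent.run accepts PIL images alongside the task text.
result = agent.run(
    "Describe what is happening in this screenshot.",
    images=[Image.open("screenshot.png")],  # illustrative filename
)
print(result)
```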

1 year ago 10 2 0 0
Post image

my weekly summary on what's released in open AI is up on @hf.co huggingface.co/posts/merve/...

collection is here huggingface.co/collections/...

1 year ago 18 1 0 1
Post image

fan-favorite open-source PDF text-extraction model OlmOCR gets faster and more efficient ⚡️

RolmOCR-7B follows the same recipe as OlmOCR: it builds on Qwen2.5VL with training set modifications, and improves accuracy & performance 🤝

huggingface.co/reducto/Rolm...
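A sketch of running it with transformers: since RolmOCR builds on Qwen2.5VL, the Qwen2.5-VL classes should load it. The full repo id (assumed here to be reducto/RolmOCR) and the multimodal chat-template handling can vary by transformers version:

```python
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "reducto/RolmOCR"  # assumption: full id behind the shortened link above
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "path": "page.png"},  # illustrative scan of a PDF page
        {"type": "text", "text": "Return the plain text of this page."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

out = model.generate(**inputs, max_new_tokens=1024)
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)[0])
```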

1 year ago 16 0 0 0

Hello friends 👋🏼

If you visit Turkey this summer, know that millions of Turkish people are boycotting: once a week they buy nothing, and the rest of the week they buy only necessities

if you have plans, here's a post that summarizes where you should buy stuff from www.instagram.com/share/BADrkS...

1 year ago 28 1 0 0

SmolVLM paper is out and it's packed with great findings on training a good smol vision LM!

Andi summarized them below, give it a read if you want to see more insights 🤠

1 year ago 29 4 0 0
Post image

the model also has impressive OCR capabilities ⬇️

1 year ago 5 0 0 0
Post image

we'll give this model a test on agentic capabilities, but here's an example from the paper:

1 year ago 2 0 1 0
Post image

This model consists of a MoonViT encoder with dynamic resolution handling, a projection layer, and a 16B MoE decoder (with 2.8B active params)

the paper introduces an interesting pre-training pipeline to handle long context; the model saw 4.4T tokens arxiv.org/pdf/2504.07491
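Schematically, the flow looks like this (an illustrative sketch, not Kimi-VL's actual code; all class and argument names are made up):

```python
import torch
import torch.nn as nn

class VLMSketch(nn.Module):
    """Illustrative encoder -> projector -> MoE decoder wiring."""
    def __init__(self, vision_encoder, projector, moe_decoder):
        super().__init__()
        self.vision_encoder = vision_encoder  # MoonViT-style, dynamic resolution
        self.projector = projector            # maps vision dim to LLM hidden dim
        self.moe_decoder = moe_decoder        # 16B total params, ~2.8B active

    def forward(self, pixel_values, text_embeds):
        image_tokens = self.projector(self.vision_encoder(pixel_values))
        # Real models splice image tokens at placeholder positions in the text;
        # simple prefixing keeps the sketch short.
        sequence = torch.cat([image_tokens, text_embeds], dim=1)
        return self.moe_decoder(inputs_embeds=sequence)
```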

1 year ago 1 0 1 0
Post image

DO NOT SLEEP ON THIS MODEL

Kimi-VL-A3B-Thinking is the first-ever capable open-source reasoning VLM with an MIT license ❤️
> it has only 2.8B activated params 👏
> it's agentic 🔥 works on GUIs
> surpasses gpt-4o

I've put it to test (see below ⤵️) huggingface.co/spaces/moons...

1 year ago 30 2 1 0
Post image

InternVL3 is out 💥

> 7 ckpts with various sizes (1B to 78B)
> Built on InternViT encoder and Qwen2.5VL decoder, improves on Qwen2.5VL
> Can do reasoning, document tasks, extending to tool use and agentic capabilities 🤖
> easily use with Hugging Face transformers 🤗 huggingface.co/collections/...
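For example, via the image-text-to-text pipeline; the -hf checkpoint id is an assumption based on the usual naming convention:

```python
from transformers import pipeline

# Assumed checkpoint id; swap in any size from the collection.
pipe = pipeline("image-text-to-text", model="OpenGVLab/InternVL3-1B-hf")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # illustrative URL
        {"type": "text", "text": "What does this chart show?"},
    ],
}]
out = pipe(text=messages, max_new_tokens=128, return_full_text=False)
print(out[0]["generated_text"])
```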

1 year ago 12 2 0 0
Preview
Model Context Protocol has prompt injection security problems As more people start hacking around with implementations of MCP (the Model Context Protocol, a new standard for making tools available to LLM-powered systems) the security implications of tools built ...

Model Context Protocol has prompt injection security problems
simonwillison.net/2025/Apr/9/m...

1 year ago 116 21 9 3
Preview
From Chunks to Blocks: Accelerating Uploads and Downloads on the Hub We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Xet infra now backs 1000s of repos on @hf.co , which means we get to put on our researcher hats and peer into the bytes 👀 🤓

Xet clients chunk files (~64KB) and skip uploads of duplicate content, but what if those chunks are already in _another_ repo? We skip those too.
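A toy illustration of chunk-level dedup (the real client uses content-defined chunking and far more machinery; fixed-size chunks and an in-memory set keep the sketch short):

```python
import hashlib

CHUNK = 64 * 1024
seen: set[str] = set()  # stands in for chunks already stored anywhere on the Hub

def send_to_storage(digest: str, block: bytes) -> None:
    # Hypothetical upload call, just for the sketch.
    print(f"uploading chunk {digest[:12]}... ({len(block)} bytes)")

def upload(path: str) -> None:
    with open(path, "rb") as f:
        while block := f.read(CHUNK):
            digest = hashlib.sha256(block).hexdigest()
            if digest in seen:
                continue  # duplicate content: skipped, even across repos
            seen.add(digest)
            send_to_storage(digest, block)
```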

1 year ago 18 5 1 0


Because of X's policies, I'll also be sharing my work-related posts here. Feel free to follow 😊

1 year ago 30 1 1 0
Preview
smol-vision/Fine_tune_SmolVLM2_on_Video.ipynb at main · merveenoyan/smol-vision Recipes for shrinking, optimizing, customizing cutting edge vision models. 💜 - merveenoyan/smol-vision

icymi I shipped a tutorial on fine-tuning vision language models on videos ⏯️

learn how to fine-tune SmolVLM2 on Video Feedback dataset 📖 github.com/merveenoyan/...
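Before fine-tuning, a quick inference sanity check helps; a sketch following the documented SmolVLM2 usage (the model id and video filename are illustrative):

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "path": "clip.mp4"},
        {"type": "text", "text": "Rate this video's quality and explain why."},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```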

1 year ago 32 3 1 0
Post image

All the multimodal document retrieval models (ColPali, DSE et al) are now under visual document retrieval at @hf.co 📝🤗

take your favorite VDR model out for multimodal RAG 🤝
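You can browse them programmatically too; a short sketch assuming the new pipeline tag is "visual-document-retrieval":

```python
from huggingface_hub import HfApi

api = HfApi()
for m in api.list_models(pipeline_tag="visual-document-retrieval", limit=10):
    print(m.id)
```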

1 year ago 19 0 0 0
Post image

Smol but mighty:
• 256M delivers 80% of the performance of our 2.2B model.
• 500M hits 90%.
Both beat our SOTA 80B model from 17 months ago! 🎉

Efficiency 🤝 Performance

Explore the collection here: huggingface.co/collections/...
Blog: huggingface.co/blog/smolervlm

1 year ago 16 2 1 0
Post image

Introducing the smollest VLMs yet! 🤏
SmolVLM (256M & 500M) runs on <1GB GPU memory.
Fine-tune it on your laptop and run it on your toaster. 🚀
Even the 256M model outperforms our Idefics 80B (Aug '23).
How small can we go? 👀
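A minimal inference sketch following the documented SmolVLM pattern (the image filename is illustrative):

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "HuggingFaceTB/SmolVLM-256M-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)

image = Image.open("photo.jpg")
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```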

1 year ago 48 7 1 2
Post image

Everything that was released this past week in open AI 🤠

> Link to all models, datasets, demos huggingface.co/collections/...
> Text-readable version is here huggingface.co/posts/merve/...

1 year ago 32 3 1 1
Preview
Visual Document Retrieval Goes Multilingual We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Learn more from their blog post here huggingface.co/blog/vdr-2b-... 📖

1 year ago 8 1 1 0
Post image

there's a new multimodal retrieval model in town 🤠
@llamaindex.bsky.social released vdr-2b-multi-v1
> uses 70% fewer image tokens, yet outperforms other dse-qwen2-based models
> 3x faster inference with less VRAM 💨
> shrinkable with matryoshka 🪆
huggingface.co/collections/...
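The matryoshka shrinking is one line if the checkpoint loads with sentence-transformers (full repo id assumed to be llamaindex/vdr-2b-multi-v1):

```python
from sentence_transformers import SentenceTransformer

# truncate_dim keeps only the first 512 matryoshka dimensions.
model = SentenceTransformer("llamaindex/vdr-2b-multi-v1", truncate_dim=512)
emb = model.encode(["where is the quarterly revenue table?"])
print(emb.shape)  # (1, 512)
```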

1 year ago 46 2 1 1
Post image

What a week to open the year in open ML, all the things released at @hf.co 🤠

Here's everything released, find text-readable version here huggingface.co/posts/merve/...

All models are here huggingface.co/collections/...

1 year ago 21 1 0 0
Video

ViTPose -- best open-source pose estimation model -- just landed in @hf.co transformers 🕺🏻💃🏻

🔖 Model collection: huggingface.co/collections/...

🔖 Notebook on how to use: colab.research.google.com/drive/1e8fcb...

🔖 Try it here: huggingface.co/spaces/hysts...
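A sketch of the transformers usage; ViTPose is top-down, so it expects person boxes from a detector (a full-image box stands in here), and the checkpoint id is assumed:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, VitPoseForPoseEstimation

ckpt = "usyd-community/vitpose-base-simple"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(ckpt)
model = VitPoseForPoseEstimation.from_pretrained(ckpt)

image = Image.open("dancer.jpg")  # illustrative filename
# COCO-format [x, y, w, h] person boxes; normally these come from a detector.
person_boxes = [[0.0, 0.0, float(image.width), float(image.height)]]

inputs = processor(image, boxes=[person_boxes], return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

poses = processor.post_process_pose_estimation(outputs, boxes=[person_boxes])
print(poses[0][0]["keypoints"])  # 17 COCO keypoints for the first person
```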

1 year ago 67 8 1 0
Post image


The model is very interesting: it has a separate encoder for each modality (visual prompt, text prompt, image, and video), then concatenates these to feed into the LLM 💬

the output segmentation tokens are passed to SAM2 to match text (captions or semantic classes) to masks ⤵️
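Schematically (an illustrative sketch, not the actual Sa2VA code; all names are made up):

```python
import torch.nn as nn

class Sa2VASketch(nn.Module):
    """Illustrative wiring: LLM [SEG] hidden states prompt SAM2's decoder."""
    def __init__(self, vlm, seg_projector, sam2_decoder, seg_token_id):
        super().__init__()
        self.vlm = vlm
        self.seg_projector = seg_projector  # LLM hidden dim -> SAM2 prompt dim
        self.sam2_decoder = sam2_decoder
        self.seg_token_id = seg_token_id

    def forward(self, inputs, image_features):
        out = self.vlm(**inputs, output_hidden_states=True)
        hidden = out.hidden_states[-1]                     # (batch, seq, dim)
        seg_mask = inputs["input_ids"] == self.seg_token_id
        prompts = self.seg_projector(hidden[seg_mask])     # one per [SEG] token
        return self.sam2_decoder(image_features, prompts)  # predicted masks
```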

1 year ago 11 3 0 0
Post image

ByteDance just dropped SA2VA: a new family of vision LMs combining Qwen2VL/InternVL and SAM2 with MIT license 💗

The models are capable of tasks involving vision-language understanding and visual referrals (referring segmentation) both for images and videos ⏯️

1 year ago 59 8 3 2
Post image

see the blog and our docs for more insights around native agentic skills of LLMs and getting started with smolagents, courtesy of the amazing
@m--ric.bsky.social

> Blog: hf.co/blog/smolage...
> Quickstart: huggingface.co/docs/smolage...
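The quickstart boils down to a few lines; a sketch assuming a recent smolagents (InferenceClientModel was called HfApiModel in early releases):

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel

# With no model_id, this defaults to a capable model on Inference Providers.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=InferenceClientModel())
print(agent.run("How many seconds are there in a leap year?"))
```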

1 year ago 11 2 0 0