
Posts by Daniël de Kok

International Transgender Day of Visibility - Wikipedia

Happy Trans Day of Visibility! 🏳️‍⚧️ The mere act of existing and being visible shouldn’t be as fraught as it is today. Let’s keep fighting to set things right.

en.wikipedia.org/wiki/Interna...

2 weeks ago

I got tables working nicely

2 weeks ago

More embedding models and an even more reliable inference engine are what you get with @hf.co Text Embeddings Inference v1.9.0 💥

More in the thread 🧵

1 month ago
Custom Kernels for All from Codex and Claude We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Kernels now has an agent skill to write custom Hub kernels: huggingface.co/blog/custom-...

Awesome work by @benburtenshaw.bsky.social and Sayak Paul! 🔥

2 months ago

And degoogle your phone.

2 months ago
Release v0.12.0 · huggingface/kernels New features Merge of kernels and kernel-builder repositories kernel-builder has been merged into the kernels repository. This makes it easier for us to coordinate changes that affect both the kern...

kernels 0.12 is out! 🎉

Changes:

* Support for kernel version branches to gracefully roll out kernel API changes.
* Support for PyTorch 2.10.
* kernel-builder is now merged into the kernels repo.
* Initial support for standardized kernel benchmarks.

github.com/huggingface/...

2 months ago

Zed has been great for me, is very fast, and has a single 'turn all AI off' toggle.

2 months ago
One Year Since the “DeepSeek Moment” A Blog post by Hugging Face on Hugging Face

DeepSeek R1 dropped one year ago 🐳 and a lot has changed.

With Irene Solaiman, we’re launching a blog series on @hf.co about how that moment reshaped AI + open source in 2025, starting with strategic shifts and the explosion of new open models in China!

huggingface.co/blog/hugging...

2 months ago

🔥I am super excited for the official release of an open-source library we've been working on for about a year!

🪄interpreto is an interpretability toolbox for HF language models🤗. In both generation and classification!

Why do you need it, and for what?

1/8 (links at the end)

2 months ago

T-Head, it uses a fork of the 0.7 draft of the RISC-V Vector extension.

3 months ago
Natural Language Processing How do you build Large Language Models? How do humans experience Natural Language Processing (NLP) applications in their daily lives? And how can we...

👀 Look what 🎅 has brought just before Christmas 🎁: a brand new Research Master in Natural Language Processing at @facultyofartsug.bsky.social @rug.nl

Program: www.rug.nl/masters/natu...

Applications (2026/2027) are open! Come and study with us (you will also learn why we have a 🐮 in our logo)

3 months ago

We are currently doing a reading group on RISC-V and its vector extension. I actually got to implement it using the fast inverse square root because the T-Head board that we use does not have the vfrsqrt7.v instruction. So, full-circle I guess.

github.com/danieldk/low...

4 months ago

It started out as a joke with @kadarakos.bsky.social in 2022 when we worked at @explosion.ai: that we should make an activation function using the fast inverse sqrt of Kahan/Walsh, famously used in Quake 3.
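For readers who don't know the trick: the Quake 3 fast inverse square root approximates 1/√x by reinterpreting the float's bits as an integer, applying a magic-constant shift hack, and polishing with one Newton-Raphson step. A minimal Python sketch:

```python
import struct


def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) using the Quake 3 bit hack."""
    # Reinterpret the single-precision float's bits as a 32-bit integer.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    # The famous magic constant; the shift halves the exponent.
    i = 0x5F3759DF - (i >> 1)
    # Reinterpret back as a float to get the initial guess.
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    # One Newton-Raphson iteration refines it to ~0.2% relative error.
    return y * (1.5 - 0.5 * x * y * y)
```

With one refinement step the result is accurate to a fraction of a percent, e.g. `fast_inv_sqrt(4.0)` is close to 0.5.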

4 months ago
Benchmarks comparing RISC-V vectorized activation functions on a Milk-V Duo 256M. Dish is the fastest with 110M elements per second, followed by Swish with 57M elements per second and the slowest is the Cook GELU approximation coming in at 39M elements per second.

I finally made a page on my Dish activation function, replacing my deleted Tweet: danieldk.eu/Dish-Activat...

It's a non-monotonic function similar to GELU/SiLU, but does not require elementary functions, making it faster on various hardware.

I'll leave the empirical evaluation to someone else 😁.
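The exact definition is on the linked page; as an illustration of the idea only (a GELU/SiLU-like curve built from an inverse square root alone, not necessarily the published Dish formula), one could gate the input with a softsign-based stand-in for the sigmoid:

```python
import math


def dish_like(x: float) -> float:
    """Illustrative GELU/SiLU-style activation needing only an
    (inverse) square root, no exp or erf.

    Hypothetical stand-in for the actual Dish definition: the gate
    0.5 * (1 + x / sqrt(1 + x^2)) plays the role that sigmoid(x)
    plays in SiLU, and 1/sqrt can use the fast inverse sqrt trick.
    """
    gate = 0.5 * (1.0 + x / math.sqrt(1.0 + x * x))
    return x * gate
```

Like GELU and SiLU, this curve passes through zero, dips below zero for moderately negative inputs, and returns toward zero as x goes to minus infinity, so it is non-monotonic.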

4 months ago

Training LLMs end to end is hard. But way more people should, and will, be doing it in the future.

The @hf.co Research team is excited to share their new e-book that covers the full pipeline:
· pre-training,
· post-training,
· infra.

200+ pages of what worked and what didn’t. ⤵️

5 months ago
Graph showing the conversion of Hugging Face repositories from LFS storage to Xet storage.

The Hub is 100% on Xet. 🚀

A little over a year ago, @hf.co acquired XetHub to unlock the next phase of growth in models and datasets. huggingface.co/blog/xethub-...

In April, there were 1,000 Hugging Face repos on Xet. Now every repo (over 6M) on the Hub is on Xet.

6 months ago
From Zero to GPU: A Guide to Building and Scaling Production-Ready CUDA Kernels We’re on a journey to advance and democratize artificial intelligence through open source and open science.

We made a blog post on how you can use kernel-builder to develop and build compute kernels for the @hf.co Kernel Hub:

huggingface.co/blog/kernel-...

7 months ago
kernels-community (kernels-community) Org profile for kernels-community on Hugging Face, the AI community building the future.

Also a huge shout-out to @nixos-org.bsky.social! All the kernels in huggingface.co/kernels-comm... are built using kernel-builder, which uses Nix under the hood to build ABI3 kernels for all the supported Torch configurations (various CUDA/ROCm versions, Metal):

github.com/huggingface/...

8 months ago
Welcome GPT OSS, the new open-source model family from OpenAI! We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Yesterday we released support for GPT OSS (the new OpenAI open weight model) across the @hf.co ecosystem. The latest Transformers now integrates support for the kernels package and uses kernels from the HF Kernel Hub to run models like GPT OSS as fast as possible. 🚀

huggingface.co/blog/welcome...

8 months ago
Hugging Face Kernel Builder Walkthrough | Image to Grayscale CUDA Kernel (YouTube video by David Holtz)

David Holtz made an introduction video showing how to make your own kernels with kernel-builder:

www.youtube.com/watch?v=HS5P...

8 months ago

The kernel ecosystem is completely open: you can make your own kernels with kernel-builder, upload them to the Hub, and register a mapping with the kernels package so that Transformers picks them up.

github.com/huggingface/...
github.com/huggingface/...

8 months ago
Release v4.54.0: Kernels, Transformers Serve, Ernie, Voxtral, LFM2, DeepSeek v2, ModernBERT Decoder... · huggingface/transformers Important news! In order to become the source of truth, we recognize that we need to address two common and long-heard critiques about transformers: transformers is bloated transformers is slow O...

Transformers 4.54.0 is out! This release adds support for compute kernels hosted on the Hub. When enabled, Transformers can replace PyTorch layer implementations with fast, specialized kernels from the Hub.

github.com/huggingface/...

8 months ago
GitHub - koaning/mktestdocs: Run pytest against markdown files/docstrings. Run pytest against markdown files/docstrings. Contribute to koaning/mktestdocs development by creating an account on GitHub.

Just released a new version of mktestdocs. It now also supports huggingface docstrings!

github.com/koaning/mkt...

8 months ago

Some of the ModernBERT team is back with new encoder models: Ettin, ranging from tiny to large: 17M, 32M, 68M, 150M, 400M & 1B parameters. They also trained decoder models & checked if decoders could classify & if encoders could generate.

Details in 🧵:

8 months ago
Your open-source companion - Reachy Mini (YouTube video by Pollen Robotics)

So excited to finally release our first robot today: Reachy Mini

A dream come true: cute and low priced, hackable yet easy to use, powered by open-source and the infinite community.

Read more and order now at huggingface.co/blog/reachy-...

9 months ago
SUSE Refines, Releases Open-Source LLM to Fuel Community Collaboration Today, SUSE has released a new fine-tuned version of the language model, Cavil-Qwen3-4B, as open source on openSUSE’s Hugging Face in order to make legal com...

SUSE has released Cavil-Qwen3-4B, a fine-tuned, #opensource #LLM on #HuggingFace. Built to detect #legal text like license declarations, it empowers #devs to stay #compliant. #fast #efficiently. #openSUSE #AI #Licenses news.opensuse.org/2025/06/24/s...

9 months ago
Learn the Hugging Face Kernel Hub in 5 Minutes We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Over the past few months, we have worked on the @hf.co Kernel Hub. Kernel Hub allows you to get cutting-edge compute kernels directly from the Hub in a few lines of code.

David Holtz made a great writeup of how you can use kernels in your projects: huggingface.co/blog/hello-h...

9 months ago

Hi Berlin people! @hugobowne.bsky.social is in town & we're celebrating by hosting a meetup together 🎉 This one is all about building with AI & we'll also open the floor for lightning talks. If you're around, come hang out with us!

📆 June 16, 18:00
📍 Native Instruments (Kreuzberg)
🎟️ lu.ma/d53y9p2u

10 months ago
Release v3.3.1 · huggingface/text-generation-inference This release updates TGI to Torch 2.7 and CUDA 12.8. What's Changed change HPU warmup logic: seq length should be with exponential growth by @kaixuanliu in #3217 adjust the round_up_seq logic to a...

TGI v3.3.1 is released! This version switches to Torch 2.7 and CUDA 12.8. This should improve support for GPUs with compute capabilities 10.0 (B200) and 12.0 (RTX50x0 and NVIDIA RTX PRO Blackwell GPUs).

github.com/huggingface/...

10 months ago