
Posts by Valentin Wagner

Post image

The keynote and award talks at 3DV 2026 have been made available, with talks by Christian Rupprecht ("3D Computer Vision: Done and Dusted?"), Angela Dai, Alec Jacobson, Jitendra Malik, Songyou Peng, and Michael Niemeyer.

www.youtube.com/playlist?lis...

3 weeks ago 20 6 0 0

Today NeurIPS is announcing our official satellite event in Paris.

After responding to the call from ELLIS following the success of EurIPS in December, we are pleased to reach a new milestone by joining forces with the NeurIPS organizing committee for the 2026 edition.

4 weeks ago 89 32 1 9
Video

Flash-KMeans

An IO-aware implementation of exact k-means that rethinks the algorithm around modern GPU bottlenecks. Flash-KMeans achieves 30x speedup over cuML and 200x speedup over FAISS.
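For reference, the exact Lloyd-iteration k-means that such GPU implementations accelerate can be sketched as a naive NumPy baseline. This is a CPU illustration under my own naming, not the Flash-KMeans code; the point of an IO-aware kernel is precisely to avoid materializing the full n×k distance buffer computed below.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Naive exact k-means (Lloyd's algorithm). An IO-aware GPU version
    computes the same assignments, but tiles the distance computation so
    intermediates stay in fast on-chip memory."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct data points.
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Pairwise squared distances, shape (n, k). Materializing this
        # full buffer is the memory traffic Flash-style kernels avoid.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Update each center to the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels
```

Exactness matters here: unlike approximate variants, every iteration assigns each point to its true nearest center, so the result matches the textbook algorithm bit-for-bit up to tie-breaking.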

Paper: arxiv.org/abs/2603.09229
Code: github.com/svg-project/...

1 month ago 28 6 0 1
Post image

And now something positive:

Solar and wind energy production in the EU surpasses fossil energy for the first time.

☀️ 💨

#TippingPoint

Source: dr.dk

2 months ago 1789 521 26 30
A bar chart showing the electricity use of several daily activities with the subtitle "The 'typical query' is not a useful way to think about coding agents' energy use." The bar for a 'typical ChatGPT query' is not even visible. My median Claude Code session is somewhere between the average US household per minute and toasting bread for three minutes. My median day with Claude Code is something like running a dishwasher.

Whenever I read discourse on AI energy/water use that focuses on the "median query," I can't help but feel misled. Coding agents like Claude Code send hundreds of longer-than-median queries every session, and I run dozens of sessions a day.

On my blog: www.simonpcouch.com/blog/2026-01...

3 months ago 373 80 20 22
SVG Filters - Clickjacking 2.0: A novel and powerful twist on an old classic.

Developer attempts to replicate "Liquid Glass" in CSS, and once finished realizes that what she'd actually created is an exploit for a fundamental, previously unknown, and rather serious browser vulnerability.

lyra.horse/blog/2025/12...

"CSS hack accidentally becomes regular hack"

4 months ago 2033 579 24 37
ICLR 2026 Response to Security Incident – ICLR Blog

Latest details about the OpenReview leak by #ICLR2026

blog.iclr.cc/2025/12/03/i...

4 months ago 11 4 0 3
Post image

The NeurIPS Test of Time Award goes to Faster R-CNN.

4 months ago 32 4 2 0

When were #ICCV2025 papers available on arXiv? 👇

5 months ago 9 1 0 0
Video

Super excited to introduce

✨ AnyUp: Universal Feature Upsampling 🔎

Upsample any feature - really any feature - with the same upsampler, no need for cumbersome retraining.
SOTA feature upsampling results while being feature-agnostic at inference time.

🌐 wimmerth.github.io/anyup/

6 months ago 28 5 2 2
Post image

The #ICCV2025 main conference open access proceedings are up:

openaccess.thecvf.com/ICCV2025

Workshop papers will be posted shortly. Aloha!

6 months ago 25 9 0 0
Andrej Karpathy @karpathy
X.com

Excited to release new repo: nanochat! (It's among the most unhinged I've written.)

Unlike my earlier similar repo nanoGPT, which only covered pretraining, nanochat is a minimal, from-scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script, and as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.

It weighs ~8,000 lines of imo quite clean code to:
- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb, evaluate CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple choice questions, tool use
- SFT, evaluate the chat model on world knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), code (HumanEval)
- RL the model optionally on GSM8K with "GRPO"
- Efficient inference of the model in an Engine with KV cache, simple prefill/decode, tool use (Python interpreter in a lightweight sandbox), talk to it over CLI or a ChatGPT-like WebUI
- Write a single markdown report card, summarizing and gamifying the whole thing

Even for as low as ~$100 in cost (~4 hours on an 8XH100 node), you can train a little ChatGPT clone that you can kind of talk to, and which can write stories/poems and answer simple questions. About ~12 hours surpasses the GPT-2 CORE metric.

As you further scale up towards ~$1000 (~41.6 hours of training), it quickly becomes a lot more coherent and can solve simple math/code problems and take multiple choice tests. E.g. a depth-30 model trained for 24 hours (about equal to the FLOPs of GPT-3 Small 125M, i.e. 1/1000th of GPT-3) gets into the 40s on MMLU, 70s on ARC-Easy, 20s on GSM8K, etc.

My goal is to get the full "strong baseline" stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed). I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it. It is by no means finished, tuned or optimized (actually I think there's likely quite a bit of low-hanging fruit), but I think it's at a place where the overall skeleton is ok enough that it can go up on GitHub, where all the parts of it can be improved.

Link to repo and a detailed walkthrough of the nanochat speedrun is in the reply.

Karpathy: nanochat

A small training+inference pipeline for creating your own LLM from scratch

$100 will get you a somewhat functional model

$1000 is more coherent & solves math

detailed walkthrough: github.com/karpathy/nan...

repo: github.com/karpathy/nan...

6 months ago 94 20 3 2
Post image

Working on representation learning for Earth Observation?
Come join the discussion at the EurIPS workshop "REO: Advances in Representation Learning for Earth Observation"

Call for papers deadline: October 15, AoE
Workshop site: sites.google.com/view/reoeurips

@euripsconf.bsky.social @esa.int

6 months ago 8 4 0 1
Post image

🐝 The #3DV2026 Nectar Track is open for submissions!
1️⃣ Spotlight Track: Showcase your strong paper from recent venues with an oral presentation + poster.
2️⃣ Exploration Edge Track: Present preliminary ideas, research hurdles, or demos in a poster session.
More details: ⬇️

6 months ago 3 2 1 0
Post image

This marks the kick-off of our #CVPR2026 coverage! I’m @csprofkgd.bsky.social, and more publicity chairs will be joining in soon.

The key #CVPR2026 dates are now posted. One highlight: the supplementary materials deadline comes a full week AFTER the main paper deadline.

6 months ago 15 7 1 0
Post image Post image Post image Post image

MapAnything: Universal Feed-Forward Metric 3D Reconstruction

@nikv9.bsky.social et al.

tl;dr: flexible input & metric output version of VGGT

arxiv.org/abs/2509.13414

7 months ago 2 2 0 0

Great to see more initiatives to bring conferences/events to Europe 🇪🇺, reducing unnecessary CO₂ emissions caused by long-distance flights while simultaneously strengthening Europe as a research location.

9 months ago 2 0 0 0
Post image

#ICML2025 test of time award

9 months ago 27 3 0 3
from gremllm import Gremllm

# Be sure to tell your gremllm what sort of thing it is
counter = Gremllm('counter')
counter.value = 5
counter.increment()
print(counter.value)  # 6?
print(counter.to_roman_numerals()) # VI?

This is diabolical... a Python object that hallucinates method implementations on demand any time you call them, using my LLM Python library github.com/awwaiid/grem...
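The dispatch trick underneath this is ordinary Python: `__getattr__` fires only when normal attribute lookup fails, so unknown method calls can be intercepted and synthesized on the fly. A minimal offline sketch of the idea, with a hard-coded lookup table standing in for the LLM call (`DynamicObject` and `_FAKE_LLM` are hypothetical names for illustration, not gremllm's API):

```python
class DynamicObject:
    """Toy sketch of the gremllm trick: __getattr__ intercepts unknown
    method calls. The real library asks an LLM to write the method body;
    here a fixed table stands in so the example runs offline."""

    # Hypothetical stand-in for LLM-generated implementations.
    _FAKE_LLM = {
        "increment": lambda self: setattr(self, "value", self.value + 1),
        "to_roman_numerals": lambda self: "VI" if self.value == 6 else str(self.value),
    }

    def __init__(self, kind):
        self.kind = kind

    def __getattr__(self, name):
        # Called only for attributes NOT found by normal lookup.
        impl = self._FAKE_LLM.get(name)
        if impl is None:
            raise AttributeError(name)
        # Bind self so the result behaves like a method.
        return lambda *args: impl(self, *args)

counter = DynamicObject("counter")
counter.value = 5
counter.increment()
print(counter.value)                # 6
print(counter.to_roman_numerals())  # VI
```

Because `__getattr__` is a last-resort hook, ordinary attributes like `value` and `kind` behave normally; only genuinely missing names reach the (here faked) generation step.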

9 months ago 221 36 12 11
Post image Post image Post image

We just released COLMAP v3.12, which adds long-awaited, end-to-end support for multi-camera rigs and 360° panoramas 👀 COLMAP just got better at handling your robotics, AR/VR, or 360 data - try it yourself and let us know! github.com/colmap/colma... Kudos to Johannes & team for this great work 🚀

9 months ago 22 6 1 0
Post image Post image Post image Post image

#CVPR2025 paper statistics

10 months ago 6 3 2 0
Post image

The #CVPR2025 main conference open access proceedings are up:

openaccess.thecvf.com/CVPR2025

Workshop papers will be posted shortly. Stay tuned...

10 months ago 32 14 0 1
A screenshot of the important dates for the WACV2026 submission process.

The #WACV2026 Call for Papers is live at wacv.thecvf.com/Conferences/...! First round paper registration is coming up on July 11th, with the submission deadline on July 18th (all deadlines are 23:59 AoE).

10 months ago 10 7 0 0
Camera Calibration and Pose Estimation (CALIPOSE) Workshop Information When: October 19th or 20th, 2025 Where: Honolulu, Hawai'i, ICCV 2025 Time: TBD Preliminary Schedule Opening [all organizers, 5 mins] Invited Talk I: Richard Hartley [30 mins] Inv...

Working on camera calibration or camera pose estimation? Want to finally know how calibrations and poses are obtained for the data you are using?
Come join us at the CALIPOSE workshop at @iccv.bsky.social

Details are on the workshop website: sites.google.com/view/calipos...

10 months ago 24 10 2 1
Heat-map style graphic of Arctic sea ice extent anomalies for every day from 1 January 1979 through 31 December 2024. Blue shading is shown for greater sea ice, and red shading is shown for less sea ice. The baseline is 1981 to 2010. The graphic is in units of million square kilometers from -3 to +3. Data from NSIDC.

Mosaic of daily #Arctic sea-ice extent anomalies over the last four decades or so. Another way of visualizing the long-term trend.

Graphic from zacklabe.com/arctic-sea-i...

11 months ago 91 27 1 1
Video

🚀 Join the ScanNet++ Challenge @ CVPR 2025!
Think your method can handle large-scale 3D scenes?
Put it to the test:
kaldir.vc.in.tum.de/scannetpp/cv...

Updates:
✅ Preprocessed, undistorted DSLR images
✅ 3DGS demo: github.com/scannetpp/3D...

by Yueh-Cheng Liu, @cyeshwanth.bsky.social

11 months ago 11 4 1 0

Great thread about the evolution of radiance fields!

11 months ago 1 0 0 0
Video

🌍 Guessing where an image was taken is a hard and often ambiguous problem. Introducing diffusion-based geolocation: we predict global locations by refining random guesses into trajectories across the Earth's surface!

πŸ—ΊοΈ Paper, code, and demo: nicolas-dufour.github.io/plonk

1 year ago 97 32 8 5
Announcing the Test of Time Award Winners from ICLR 2015 – ICLR Blog

Announcing the Test of Time awards for ICLR 2025! This award recognizes papers published ten years ago at ICLR 2015 that have had a lasting impact on the field. Congratulations to the authors!

blog.iclr.cc/2025/04/14/a...

1 year ago 38 5 1 1

Great tool to keep up to date with papers and for finding related works 📚

1 year ago 3 0 0 0