
Posts by Gilles Puy

πŸš—πŸŒ Working on domain adaptation for 3D point clouds / LiDAR?

We'll present MuDDoS at BMVC: a method that boosts multimodal distillation for 3D semantic segmentation under domain shift.

πŸ“ BMVC
πŸ•š Monday, Poster Session 1: Multimodal Learning (11:00–12:30)
πŸ“Œ Hadfield Hall #859

4 months ago

PhD graduation season in the team continues!
Today, Corentin Sautier is defending his PhD on "Learning Actionable LiDAR Representations without Annotations".
Good luck! πŸš€

6 months ago

It’s PhD graduation season in the team!

Today, @bjoernmichele.bsky.social is defending his PhD on "Domain Adaptation for 3D Data"
Best of luck! πŸš€

6 months ago

The accounts/authors were probably fake; I couldn't match them to real profiles. I think the goal was to inflate the citation counts of certain papers while hiding it by citing many others. With a few exceptions, the same ~150 papers were cited each time, none from these authors.

6 months ago

Update: ResearchGate has investigated the case, and, as far as I can see, all the suspicious papers (~200) have now been removed. Many thanks to the @researchgate.bsky.social team!

6 months ago
Preview: 3D Human Pose and Shape Estimation from LiDAR Point Clouds: A Review. "In this paper, we present a comprehensive review of 3D human pose estimation and human mesh recovery from in-the-wild LiDAR point clouds. We compare existing approaches across several key dimensions, ..."

If you're interested in human pose estimation and mesh recovery from LiDAR data, we have this massive survey: arxiv.org/abs/2509.12197
Salma and Nermin put a tremendous amount of work into it; it covers everything: the tasks, all the methods organized, datasets, numbers, challenges, and opportunities.

7 months ago

The issue was reported yesterday to ResearchGate. By the time I created the report, Google Scholar had indexed about 30 additional papers (uploaded by the same 4 accounts).

Curious to know if you’ve seen similar suspicious citations with your papers and what you did about it.
[4/]

7 months ago

But also:
(d) Reference lists are very similar across papers.
(e) All papers cover almost identical topics (segmentation/detection/recognition in medical imaging), far from the topic of our work.
(f) Our paper appears in the bibliography, but I never saw it cited in the text.
[3/]

7 months ago

Some elements that raised suspicion:
(a) Papers uploaded from only 4 accounts.
(b) At least one paper claims a publication date in 2021, before RangeViT, yet cites papers from 2025.
(c) The rate of uploads is accelerating: from 1 paper on August 20 to 19 papers on September 9.
[2/]
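Checks like (b) and (d) are easy to automate once you have reference metadata for the citing papers. A minimal sketch, assuming papers come as simple metadata dicts (the format, the function names, and the 0.8 similarity threshold are my own illustrative choices, not from any tool mentioned in this thread):

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two sets of reference identifiers."""
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_suspicious(papers, sim_threshold=0.8):
    """Flag papers with anachronistic citations or near-duplicate
    reference lists.

    `papers` maps a paper id to {'year': claimed publication year,
    'refs': {cited-paper id: its publication year}}.
    Returns {paper id: [reasons]}.
    """
    flags = {}
    # Check (b): a paper citing work published after its own claimed date.
    for pid, p in papers.items():
        if p["refs"] and max(p["refs"].values()) > p["year"]:
            flags.setdefault(pid, []).append("anachronistic citation")
    # Check (d): near-identical bibliographies across different papers.
    for a, b in combinations(papers, 2):
        if jaccard(set(papers[a]["refs"]), set(papers[b]["refs"])) >= sim_threshold:
            flags.setdefault(a, []).append(f"bibliography ~= {b}")
            flags.setdefault(b, []).append(f"bibliography ~= {a}")
    return flags
```

In practice the metadata would have to be pulled from Google Scholar or ResearchGate listings by hand or by script; a loose threshold is enough to separate clusters as blatant as the one described here.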

7 months ago

Discovered that our RangeViT paper keeps being cited in what might be LLM-generated papers. The number of citations has increased rapidly in recent weeks. Too good to be true.

Papers popped up on different platforms, but mainly on ResearchGate with ~80 papers in just 3 weeks.
[1/]

7 months ago

1/ Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with a ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, and DINOv2 on various benchmarks, setting a new standard for open-source research.

9 months ago

We just released the code of #LiDPM, go ahead and play with it (and don't forget to star 🀭🀩)!

Training and inference code available, along with the model checkpoint.

Github repo: github.com/astra-vision...

#IV2025

9 months ago

1/n πŸš€New paper out - accepted at #ICCV2025!

Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding

Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!

9 months ago

Presenting our project #LiDPM in the afternoon oral session at #IV2025!

Project page: astra-vision.github.io/LiDPM/

w/ @gillespuy.bsky.social, @alexandreboulch.bsky.social, Renaud Marlet, Raoul de Charette

Also, see our poster at 3pm in the Caravaggio room and AMA πŸ˜‰

9 months ago

🚨 New preprint!
How far can we go with ImageNet for Text-to-Image generation? w/ @arrijitghosh.bsky.social @lucasdegeorge.bsky.social @nicolasdufour.bsky.social @vickykalogeiton.bsky.social
TL;DR: Train a text-to-image model using 1000Γ— less data in 200 GPU hrs!

πŸ“œ https://arxiv.org/abs/2502.21318
πŸ§΅πŸ‘‡

1 year ago
Preview: 2025 Internship proposals at IMAGINE. "IMAGINE is a top research group on computer vision and machine learning. It is part of the LIGM lab and hosted at Γ‰cole des Ponts ParisTech (ENPC), about 25 min f..."

We @imagineenpc.bsky.social are slowly but surely entering our proposals for master's degree internships here: docs.google.com/document/d/1...
These are 6-month projects that typically correspond to the end-of-study project in the French curriculum.
More offers are probably coming, so check it regularly.

1 year ago