
Posts by Bart Duisterhof

RaySt3R was accepted to NeurIPS! Check out the Hugging Face demo for image-to-3D in cluttered scenes: huggingface.co/spaces/bartd...

7 months ago 5 2 0 0

In "hearing the slide"πŸ‘‚ (led by @yuemin-mao.bsky.social ) we estimate *loss* of contact with a contact microphone, and use it to learn dynamic constraints.⚑ It allows moving multiple intricate objects🍷 efficiently, even objects that would otherwise be hard to grasp. fast-non-prehensile.github.io
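The core signal here is simple: a sliding object vibrates, so the contact microphone goes quiet when contact is lost. A minimal sketch of that idea (short-time energy thresholding; the function name, window size, and threshold are illustrative, not the paper's actual detector):

```python
import numpy as np

def contact_lost(audio, window, threshold):
    """Flag loss of contact per window of a contact-microphone signal.

    A sliding object emits vibration, so when the short-time energy of
    the signal drops below a threshold, contact has likely broken.
    """
    n = len(audio) // window
    energy = (audio[: n * window].reshape(n, window) ** 2).mean(axis=1)
    return energy < threshold

# Toy signal: vibration while sliding, then silence after contact is lost.
t = np.arange(2000)
sliding = np.sin(0.3 * t[:1000])   # in contact: vibration present
silence = np.zeros(1000)           # contact lost: no vibration
flags = contact_lost(np.concatenate([sliding, silence]),
                     window=200, threshold=0.1)
# first 5 windows read "in contact", last 5 read "contact lost"
```

A real detector would of course run on streaming audio and be more robust to background noise, but the energy drop is the cue being exploited.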

10 months ago 4 0 0 0

For which the code is also available github.com/naver/pow3r

10 months ago 6 2 0 1
GitHub - naver/dune: Code repository for "DUNE: Distilling a Universal Encoder from Heterogeneous 2D and 3D Teachers"

Thanks Christian for the advertisement.

github link: github.com/naver/dune

10 months ago 15 3 0 0

πŸ”— Project Website: rayst3r.github.io
πŸ“„ arXiv: arxiv.org/abs/2506.05285
πŸš€ Code: github.com/Duisterhof/...
πŸ€— HF Demo: Coming (very) soon!

@CMU_Robotics @SCSatCMU @nvidia @NVIDIAAI @NVIDIARobotics

10 months ago 2 0 0 0

Big thanks to the awesome contributors to this project!πŸ‘ Jan Oberst, @bowenwen_me, @BirchfieldStan, @RamananDeva and @jeff_ichnowski. Also thanks to OctMAE author @s1wase, @nvidia for sponsoring compute πŸ–₯️, and the scientists at @naverlabseurope for the inspiration! πŸ§—β€β™‚οΈ

10 months ago 1 0 1 0

We also study the impact of the confidence threshold on reconstruction quality. Our ablations suggest a higher confidence threshold improves accuracy and reduces edge bleeding, at the cost of completeness. Users can tune the threshold for application-specific requirements πŸŽ›οΈ.
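The accuracy/completeness trade-off boils down to masking predicted points by their confidence. A minimal illustration (names and values are hypothetical, not RaySt3R's API):

```python
import numpy as np

def filter_by_confidence(points, conf, threshold):
    """Keep only predicted 3D points whose confidence exceeds the threshold.

    A higher threshold keeps fewer, more reliable points (better accuracy,
    lower completeness); a lower threshold keeps more points but admits
    noisy ones, e.g. points bleeding over object edges.
    """
    return points[conf >= threshold]

# Toy predictions: 4 points with per-point confidence.
points = np.array([[0.0, 0.0, 1.0],
                   [0.1, 0.0, 1.1],
                   [0.0, 0.1, 0.9],
                   [2.0, 2.0, 5.0]])   # low-confidence outlier near an edge
conf = np.array([0.9, 0.8, 0.7, 0.2])

strict = filter_by_confidence(points, conf, threshold=0.75)  # keeps 2 points
loose = filter_by_confidence(points, conf, threshold=0.5)    # keeps 3 points
```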

10 months ago 0 0 1 0

We evaluate RaySt3R against the baselines on synthetic and real-world datasets. The results suggest RaySt3R achieves zero-shot generalization to the real world, and outperforms all baselines by up to 44% in 3D chamfer distance πŸš€.
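For reference, chamfer distance scores a reconstruction by nearest-neighbor distances between predicted and ground-truth point sets in both directions. A brute-force sketch (real evaluations use a KD-tree and the exact formula varies by paper):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (N,3) and b (M,3).

    For each point in one set, take the squared distance to its nearest
    neighbour in the other set; average both directions and sum.
    """
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise, (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
shifted = gt + np.array([0.5, 0.0, 0.0])

exact = chamfer_distance(pred, gt)       # 0.0 for a perfect match
worse = chamfer_distance(pred, shifted)  # grows with misalignment
```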

10 months ago 0 0 1 0

We train RaySt3R on a newly curated dataset of 12 million views πŸ“· rendered from Objaverse and GSO objects. The ablations πŸ” suggest that both more data and more diverse data improve RaySt3R's performance. RaySt3R does not require GT meshes, paving the way for training on real-world data.

10 months ago 0 0 1 0

πŸ’‘ Our key insight is that 3D object shape completion can be recast as a novel-view synthesis problem. RaySt3R takes a masked RGB-D image as input, and predicts depth maps and object masks for novel views. We query multiple views and merge the predictions into a consistent point cloud.
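The merge step is classical geometry: back-project each predicted depth map through the pinhole model of its query camera into world space, then concatenate. A minimal sketch, with made-up intrinsics and identity camera poses for illustration:

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift a depth map (H, W) to world-space 3D points via a pinhole camera."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]   # (u - cx) * z / fx
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]   # (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)  # homogeneous
    return (pts_cam @ cam_to_world.T)[:, :3]

# Two hypothetical query views merged into one point cloud.
K = np.array([[100.0, 0.0, 2.0],
              [0.0, 100.0, 2.0],
              [0.0,   0.0, 1.0]])
depth_a = np.full((4, 4), 1.0)
depth_b = np.full((4, 4), 2.0)
cloud = np.concatenate([
    backproject(depth_a, K, np.eye(4)),
    backproject(depth_b, K, np.eye(4)),
])
# 2 views * 16 pixels each -> 32 merged points
```

In the actual pipeline the per-view predictions would additionally be filtered by the predicted object masks and confidences before merging.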

10 months ago 1 0 1 0

We focus on multi-object 3D shape completion for robotics. Robots are commonly equipped with an RGB-D camera πŸ“·, but their measurements are noisy and incomplete.

Using only DINOv2 features πŸ¦– as pretraining, we train a new model (RaySt3R) to produce accurate geometry.

10 months ago 1 0 1 0

Imagine if robots could fill in the blanks in cluttered scenes.

✨ Enter RaySt3R: a single masked RGB-D image in, complete 3D out.
It infers depth, object masks, and confidence for novel views, and merges the predictions into a single point cloud. rayst3r.github.io

10 months ago 24 3 1 2

Do you think Europe will take the opportunity? The Netherlands is even cutting research funds under the new administration... It feels like there are still significantly more opportunities in the US.

1 year ago 1 0 0 0

Thanks Chris! This was a push with the entire dust3r team @naverlabseurope.bsky.social, congrats everyone!

1 year ago 8 0 0 0

The Best Student Paper Award goes to MASt3R-SfM! #3DV2025

1 year ago 42 8 0 2

πŸŽ‰Excited to share that our paper was a finalist for best paper at #HRI2025! We introduce MOE-Hair, a soft robot system for hair care πŸ’‡πŸ»πŸ’†πŸΌ that uses mechanical compliance and visual force sensing for safe, comfortable interaction. Check our work: moehair.github.io @cmurobotics.bsky.social 🧡1/7

1 year ago 10 5 1 1

MUSt3R: Multi-view Network for Stereo 3D Reconstruction

Yohann Cabon, Lucas Stoffl, Leonid Antsfeld, Gabriela Csurka, Boris Chidlovskii, Jerome Revaud, @vincentleroy.bsky.social

tl;dr: make DUSt3R symmetric and iterative + a multi-layer memory mechanism β†’ multi-view DUSt3R

arxiv.org/abs/2503.01661

1 year ago 25 4 1 0

Great news, CMU's Center for Machine Learning and Health (CMLH) decided to fund another year of our research! If you're a PhD student at CMU, consider applying for the next iterations of the fellowship - the funding is generous and relatively unconstrained :)

1 year ago 3 0 0 0

πŸ˜†

1 year ago 1 0 0 0

Is the book as good as or better than the show for "The Three-Body Problem"?

1 year ago 0 0 2 0

Watch Professor Jeff Ichnowski's RI seminar talk: "Learning for Dynamic Robot Manipulation of Deformable and Transparent Objects" πŸ¦ΎπŸ€–

@jeff-ichnowski.bsky.social closed out our Fall seminar series. Keep an eye out for the Spring schedule in the new year!

www.youtube.com/watch?v=DvvF...

1 year ago 15 2 0 0

Intro Post
Hello World!
I'm a 2nd year Robotics PhD student at CMU, working on distributed dexterous manipulation, accessible soft robots and sensors, sample efficient robot learning, and causal inference.

Here are my cute robots:
PS: Videos are old and sped up. They move slower in the real world :3

1 year ago 15 3 0 0

My growing list of #computervision researchers on Bsky.

Did I miss you? Let me know.

go.bsky.app/M7HGC3Y

1 year ago 131 42 88 9

My advisor @jeff-ichnowski.bsky.social! For example: github.com/BerkeleyAuto...

1 year ago 2 0 0 0

For international students: renewing your visa asap might be a good idea.

1 year ago 1 0 1 0

My lab mate @yuemin-mao.bsky.social :)

1 year ago 2 0 0 0

Welcome to all new arrivals here on Bluesky! :) Here's a starter pack of people working on computer vision.
go.bsky.app/PkAKJu5

1 year ago 96 34 21 4

Now that my general computer vision starter pack is full (150/150 entries reached), here is one specific to 3D vision: go.bsky.app/Cfm9XFe

1 year ago 105 29 10 1

Check out this work by my lab mates: learning dynamic tasks using a soft robotic hand!

1 year ago 12 1 0 0

Thank you for making the list! Could you add me as well? I work on vision for robot manipulation :)

1 year ago 1 0 0 0