
Posts by Jane Wu

One thing about humanoid robots I think is underappreciated: you don't have to make some super crazy training set for a specialized arm on wheels. A humanoid can learn from humans performing actions, which is incredibly useful.

8 months ago 39 5 3 0

Video recordings from our workshop on Embodied Intelligence and tutorial on Robotics 101 @cvprconference.bsky.social are now up, just in time to catch up with things over the summer.

Enjoy! #CVPR2025

9 months ago 8 3 0 0

VGGT for the masses 🤘! #cvpr2025

10 months ago 25 3 0 0
SGP 2025 - Submit page

The Symposium on Geometry Processing is an amazing venue for geometry research: meshes, point clouds, neural fields, 3D ML, etc. Reviews are quick and high-quality.

The deadline is in ~10 days. Consider submitting your work; I'm planning to submit!

sgp2025.my.canva.site/submit-page-...

1 year ago 42 10 0 0

📢 We present CWGrasp, a framework for generating 3D Whole-body Grasps with Directional Controllability 🎉
Specifically:
👉 given an object to grasp (shown in red) placed on a receptacle (brown),
👉 we aim to generate a body (gray) that grasps the object.

🧵 1/10

1 year ago 11 1 1 1

📢📢📢 Submit to our workshop on Physics-inspired 3D Vision and Imaging at #CVPR2025!

Speakers 🗣️ include Ioannis Gkioulekas, Laura Waller, Berthy Feng, @shwbaek.bsky.social and Gordon Wetzstein!

🌐 pi3dvi.github.io

You can also just come hang out with us at the workshop @cvprconference.bsky.social!

1 year ago 10 3 0 0

I will not lie: having the supplementary material deadline on the same day as the main-paper deadline (as ICLR and NeurIPS have always done, of course) does not have the best impact on the stress component of the paper-submission crunch.

1 year ago 9 2 1 0
'Flow' wins best animated feature film Oscar LOS ANGELES, March 2 (Reuters) - The independent film "Flow" won the best animated feature film Oscar on Sunday, securing the first Academy Award for Latvia and its Latvian director Gints Zilbalodis.

A huge congrats to Flow for winning the Oscar for Best Animated Feature! It was made by a tiny crew entirely in Blender and rendered with Eevee. IMO everyone in the wider animation industry has lessons to learn from Flow.

www.reuters.com/lifestyle/fl...

1 year ago 38 3 0 1

Paper: arxiv.org/pdf/2311.16042
Code: github.com/janehwu/clot...

1 year ago 2 0 0 0

This project started as a cold email back in 2020, and from it came a wonderful new collaboration and immense personal growth. It's not every day that my research requires writing CUDA kernels...

Thank you to Diego Thomas (who will also be at WACV) and Ron Fedkiw for guiding me every step of the way!

1 year ago 2 0 1 0
Advertisement

Our method reconstructs a unified human mesh from in-the-wild images, recovering high-frequency details like cloth wrinkles even in the absence of any ground-truth 3D data.

1 year ago 3 0 1 0

In this paper, we introduce a low-cost, optimization-based method for 3D human reconstruction guided by inferred 2D normal maps.

Aiming for end-to-end differentiability, we derive analytical gradients to backpropagate from predicted normal maps to network-inferred SDF values on a tetrahedral mesh.
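The paper's gradients run through a tetrahedral mesh and a renderer; as a much smaller stand-in (my own construction, not the paper's derivation), here is the analytic Jacobian of a unit surface normal, computed from an SDF via central differences, with respect to the six SDF samples it depends on, checked against numerical differentiation:

```python
import numpy as np

def normal_from_sdf(phi, h):
    # phi: SDF samples on a 6-point stencil around a surface point,
    # ordered [x+, x-, y+, y-, z+, z-]; h: grid spacing.
    # The (unnormalized) normal is the central-difference SDF gradient.
    g = np.array([phi[0] - phi[1], phi[2] - phi[3], phi[4] - phi[5]]) / (2 * h)
    return g / np.linalg.norm(g)

def dnormal_dphi(phi, h):
    # Analytic Jacobian d(normal)/d(phi), shape (3, 6), via the chain
    # rule through the central differences and the normalization.
    g = np.array([phi[0] - phi[1], phi[2] - phi[3], phi[4] - phi[5]]) / (2 * h)
    norm = np.linalg.norm(g)
    # d(g/|g|)/dg = (I - n n^T) / |g|, with n = g/|g|
    P = (np.eye(3) - np.outer(g, g) / norm**2) / norm
    # dg/dphi: each gradient component touches one +/- pair of samples.
    dg = np.zeros((3, 6))
    for i in range(3):
        dg[i, 2 * i] = 1 / (2 * h)
        dg[i, 2 * i + 1] = -1 / (2 * h)
    return P @ dg
```

A loss on a rendered normal map would backpropagate through a Jacobian of this kind before reaching the SDF values; the paper's version additionally has to account for tetrahedral interpolation and the camera projection.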

1 year ago 3 0 1 0

It all started with a question that can be best characterized as "born out of resource scarcity": can we reconstruct humans from consumer-grade cameras without using *any* 3D training data? 🫠

(Half a PhD later) Yes, we can! 😮‍💨

1 year ago 2 0 1 0

I'll be presenting "Sparse-View 3D Reconstruction of Clothed Humans via Normal Maps" tomorrow morning at #WACV2025 Oral Session 1.1. Excited to share the final project of my PhD! A brief story 🧵

1 year ago 8 0 2 0

What happens when vision 🤝 robotics meet? 🚨 Happy to share our new work on Pretraining Robotic Foundational Models! 🔥

ARM4R is an Autoregressive Robotic Model that leverages low-level 4D Representations learned from human video data to yield a better robotic model.

BerkeleyAI 😊

1 year ago 16 5 1 0

Full quality video here: www.youtube.com/watch?v=uVcB...

1 year ago 3 1 1 0

GPUDrive got accepted to ICLR 2025!

With that, we release GPUDrive v0.4.0! 🚨 You can now install the repo and run your first fast PPO experiment in under 10 minutes.

I'm honestly so excited about the new opportunities and research the sim makes possible. 🚀 1/2

1 year ago 45 4 2 1

Just found a new winner for the most hype-baiting, unscientific plot I have seen. (From the recent Figure AI release)

1 year ago 37 6 1 1

Really excited to put together this #CVPR2025 workshop on "4D Vision: Modeling the Dynamic World" -- one of the most fascinating areas in computer vision today!

We've invited incredible researchers who are leading fantastic work in various related fields.

4dvisionworkshop.github.io

1 year ago 23 3 1 3
MULA 2025 Eighth Multimodal Learning and Applications Workshop

Paper submission is now open for the 8th Multimodal Learning and Applications Workshop at #CVPR2025!

Call For Papers: mula-workshop.github.io

#computervision #cvpr #multimodal #ai

1 year ago 6 1 0 0
EgoVis 2023/2024 Distinguished Paper Awards

๐Ÿ… Call for Nominations EgoVis 2023/2024 Distinguished Paper Awards

Did you publish a paper contributing to Ego Vision in 2023 or 2024?
Innovative & advancing Ego Vision?
Worthy of a prize?

Deadline: 1 April 2025

Decisions
@cvprconference.bsky.social
#CVPR2025
egovis.github.io/awards/2023_...

1 year ago 9 4 0 2

(1/n)
📢📢 NeRSemble v2 Dataset Release 📢📢

Head captures of 7.1 MP from 16 cameras at 73 fps:
* More recordings (425 people)
* Better color calibration
* Convenient download scripts

github.com/tobias-kirsc...

1 year ago 14 7 1 0

Announcing Diffusion Forcing Transformer (DFoT), our new video diffusion algorithm that generates ultra-long videos of 800+ frames. DFoT enables History Guidance, a simple add-on to any existing video diffusion models for a quality boost. Website: boyuan.space/history-guidance (1/7)
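DFoT's actual mechanism is more involved than one post can convey; as a guess at the core of a guidance add-on over history conditioning (a classifier-free-guidance-style combination, with `predict_eps` and the simplified update rule as my placeholder assumptions), one guided denoising step might look like:

```python
import numpy as np

def history_guided_step(x, history, predict_eps, w):
    """One guided denoising step for a video diffusion model.

    Runs the noise predictor twice, with and without the past frames,
    and extrapolates between the two predictions (classifier-free
    guidance over the history conditioning). The noise schedule and
    stochastic terms are omitted for brevity.
    """
    eps_cond = predict_eps(x, history)  # conditioned on past frames
    eps_uncond = predict_eps(x, None)   # history dropped
    eps = eps_uncond + w * (eps_cond - eps_uncond)
    return x - eps  # simplified update
```

Under this sketch, w = 0 ignores the history entirely, w = 1 reproduces the plain conditional model, and w > 1 pushes generations to agree more strongly with the observed history, which is what would let a guidance knob bolt onto an existing video diffusion model without retraining.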

1 year ago 35 6 1 0

We can usually only get partial observations of scenes, but getting complete object information could be helpful for many tasks in robotics and graphics. Our new ICLR 2025 paper extends point-based single object completion models to completing multiple objects in a scene, (1/3) 🧵

1 year ago 9 2 1 0

🛑📢
HD-EPIC: A Highly-Detailed Egocentric Video Dataset
hd-epic.github.io
arxiv.org/abs/2502.04144
Newly collected videos
263 annotations/min: recipe, nutrition, actions, sounds, 3D object movement & fixture associations, masks.
26K VQA benchmark to challenge current VLMs
1/N

1 year ago 34 6 2 4
DexGen

Seeing some of the early results from DexterityGen was definitely a wow moment for me!

It doesn't take a lot to realize all the new opportunities a strong teleop system like this enables! 🚀

X thread: x.com/zhaohengyin/...
Link: zhaohengyin.github.io/dexteritygen/

1 year ago 2 1 0 0

Our new work makes a big leap, moving from depth-based end-to-end policies to end-to-end policies on raw RGB pixels. We have two versions, mono and stereo, both trained entirely in simulation (IsaacLab).

1 year ago 21 2 1 1