
Posts by Long Le

Cool! I love your BlenderProc code!

1 year ago
[Post images]

🚀 How well does it work?

Articulate-Anything is much better than the baselines, both quantitatively and qualitatively. This is possible due to (1) leveraging richer input modalities, (2) modeling articulation as high-level program synthesis, and (3) a closed-loop actor-critic system.
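Here is a minimal sketch of that closed-loop actor-critic idea. The `propose` (actor) and `critique` (critic) stubs stand in for VLM calls; the names and signatures are illustrative assumptions, not the actual Articulate-Anything API:

```python
from typing import Optional, Tuple

def propose(observation: str, feedback: Optional[str]) -> str:
    """Actor stub: stands in for a VLM that writes a high-level
    articulation program from the input (and any critic feedback)."""
    revision = f" (revised per: {feedback})" if feedback else ""
    return f"articulation program for {observation}{revision}"

def critique(observation: str, candidate: str) -> Tuple[bool, str]:
    """Critic stub: stands in for a VLM that renders the candidate
    and compares it against the input observation."""
    return True, "rendering matches the input"  # toy critic always accepts

def articulate(observation: str, max_iters: int = 5) -> str:
    """Closed loop: propose, critique, and revise until accepted."""
    feedback: Optional[str] = None
    candidate = ""
    for _ in range(max_iters):
        candidate = propose(observation, feedback)              # actor step
        accepted, feedback = critique(observation, candidate)   # critic step
        if accepted:
            break  # stop once the critic is satisfied
    return candidate

print(articulate("a video of a cabinet door opening"))
```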

1 year ago
[Video]

🧩 How does it work?

Articulate-Anything breaks the problem into three steps: (1) Mesh retrieval, which finds a 3D mesh for each part, (2) Link placement, which spatially arranges the parts, and (3) Joint prediction, which determines the kinematic movement between parts. Take a look at a video explaining this pipeline!
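A rough sketch of what those three steps might produce, using assumed `Link`/`Joint`/`ArticulatedObject` dataclasses for illustration rather than the project's real data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    name: str
    mesh: str                      # (1) Mesh retrieval: a mesh found for this part
    xyz: tuple = (0.0, 0.0, 0.0)   # (2) Link placement: pose relative to the parent

@dataclass
class Joint:
    parent: str
    child: str
    joint_type: str                # (3) Joint prediction: e.g. "revolute" for a door,
    axis: tuple = (0.0, 0.0, 1.0)  #     "prismatic" for a drawer

@dataclass
class ArticulatedObject:
    links: list = field(default_factory=list)
    joints: list = field(default_factory=list)

def build_cabinet() -> ArticulatedObject:
    """Assemble a toy two-part object the way the pipeline would."""
    body = Link("body", "retrieved/body.obj")
    door = Link("door", "retrieved/door.obj", xyz=(0.3, 0.0, 0.0))
    hinge = Joint(parent="body", child="door", joint_type="revolute")
    return ArticulatedObject(links=[body, door], joints=[hinge])

print(build_cabinet())
```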

1 year ago
[Post image]

🙀 Why does this matter?

Creating interactable 3D models of the world is hard. An artist has to model the physical appearance of the object to create a mesh. Then a roboticist needs to manually annotate the kinematic joints in URDF to give the object movement.
But what if we could automate all of these steps?
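For a sense of what that manual annotation looks like, here is a minimal hand-written URDF for a hypothetical two-part cabinet, with one revolute joint giving the door its movement (filenames, poses, and limits are placeholders):

```python
# A minimal URDF for a hypothetical two-part cabinet: the <joint> element is
# the kinematic annotation a roboticist would normally write by hand.
urdf = """<?xml version="1.0"?>
<robot name="cabinet">
  <link name="body">
    <visual><geometry><mesh filename="body.obj"/></geometry></visual>
  </link>
  <link name="door">
    <visual><geometry><mesh filename="door.obj"/></geometry></visual>
  </link>
  <!-- The door swings about a vertical hinge on the body's edge. -->
  <joint name="door_hinge" type="revolute">
    <parent link="body"/>
    <child link="door"/>
    <origin xyz="0.3 0 0"/>
    <axis xyz="0 0 1"/>
    <limit lower="0" upper="1.57" effort="10" velocity="1"/>
  </joint>
</robot>
"""

with open("cabinet.urdf", "w") as f:
    f.write(urdf)
```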

1 year ago
[Video]

📦 Can frontier AI transform ANY physical object from ANY input modality into a high-quality digital twin that also MOVES?

Excited to share our work, Articulate-Anything 🐵, exploring how VLMs can bridge the gap between the physical and digital worlds.

Website: articulate-anything.github.io

1 year ago

Hello, can I be added please 🙏? I work on robot/reinforcement learning and we have a cool sim2real RL paper in submission!

1 year ago

Hello! Can I be added please 🙏? I work on 3D vision for robot learning.

1 year ago