
Posts by Mahi Shafiullah

If you don’t have the capacity to distinguish between what’s true and what’s not, your truths are just as incidental as your lies.

1 year ago 3 0 0 0
Post image

Reading comprehension is an important but easily overlooked quality IMO

1 year ago 3 0 1 0
Video

Ever struggled with multi-sensor data from cameras, depth sensors, and other custom sensors? Meet AnySense—an iPhone app for effortless data acquisition and streaming. Working with multimodal sensor data will never be a chore again!

1 year ago 5 2 1 0
Video

We just released AnySense, an iPhone app for effortless data acquisition and streaming for robotics. We leverage Apple’s development frameworks to record and stream:

1. RGBD + Pose data
2. Audio from the mic or custom contact microphones
3. Seamless Bluetooth integration for external sensors
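A minimal sketch of one chore such a stream removes: aligning samples from sensors that tick at different rates. Everything here is hypothetical illustration, not AnySense's actual output format or API.

```python
import bisect

def align_streams(rgbd_ts, sensor_ts, tol=0.02):
    """Pair each RGBD frame timestamp with the nearest external-sensor
    timestamp, dropping pairs further apart than `tol` seconds."""
    pairs = []
    for t in rgbd_ts:
        i = bisect.bisect_left(sensor_ts, t)
        # candidates: the neighbor on each side of the insertion point
        best = min(
            (c for c in (i - 1, i) if 0 <= c < len(sensor_ts)),
            key=lambda c: abs(sensor_ts[c] - t),
        )
        if abs(sensor_ts[best] - t) <= tol:
            pairs.append((t, sensor_ts[best]))
    return pairs

# RGBD at ~30 Hz, a Bluetooth sensor at ~50 Hz with jitter
rgbd = [0.000, 0.033, 0.066, 0.100]
ble = [0.001, 0.021, 0.041, 0.059, 0.081, 0.099]
print(align_streams(rgbd, ble))
```

With hardware timestamps recorded on-device, this nearest-neighbor matching is usually enough; without them, clock drift between sensors becomes the real problem.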

1 year ago 35 10 2 0
Post image

Just found a new winner for the most hype-baiting, unscientific plot I have seen. (From the recent Figure AI release)

1 year ago 37 6 1 1

One reason to be intolerant of misleading hype in tech and science is that tolerating the small lies and deception is how you get tolerance of big lies

1 year ago 185 27 4 0
Video

Can we extend the power of world models beyond just online model-based learning? Absolutely!

We believe the true potential of world models lies in enabling agents to reason at test time.
Introducing DINO-WM: World Models on Pre-trained Visual Features for Zero-shot Planning.
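As a toy illustration of test-time reasoning with a world model (this is not DINO-WM's code): sample candidate action sequences, roll each through the model, and keep the one ending nearest the goal. The one-dimensional `toy_model` is a stand-in for learned latent dynamics.

```python
import random

def plan(world_model, state, goal, horizon=5, samples=200):
    """Zero-shot planning sketch: no policy training, just search
    through the learned dynamics at test time."""
    best_seq, best_cost = None, float("inf")
    for _ in range(samples):
        seq = [random.uniform(-1, 1) for _ in range(horizon)]
        s = state
        for a in seq:
            s = world_model(s, a)  # imagine the rollout, never act
        cost = abs(s - goal)
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq, best_cost

random.seed(0)
toy_model = lambda s, a: s + a  # stand-in for learned latent dynamics
seq, cost = plan(toy_model, state=0.0, goal=2.0)
```

Swapping the random search for CEM or gradient-based optimization keeps the same structure: the model is fixed, and only the action sequence changes.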

1 year ago 20 8 1 1

My advisor warned me that academics trend towards bitterness. He encouraged me to intentionally resist this, remember where I came from, and never forget the privilege of getting to spend a life working with knowledge and ideas. He also said that bitterness and resentment are easy.

1 year ago 251 37 1 5

This is super helpful for a non-sim person, thanks for the perspective!

1 year ago 12 1 0 0
Video

New paper! We show that by using keypoint-based image representation, robot policies become robust to different object types and background changes.

We call this method Prescriptive Point Priors for robot Policies, or P3-PO for short. The full project is here: point-priors.github.io
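A toy version of why keypoints help: the policy sees only a small vector of tracked points, so backgrounds and textures never enter the observation. The function and point names below are hypothetical, not P3-PO's interface.

```python
def keypoint_observation(keypoints, image_size):
    """Flatten tracked 2D keypoints into a normalized observation
    vector; everything else in the image is discarded."""
    w, h = image_size
    obs = []
    for (x, y) in keypoints:
        obs += [x / w, y / h]
    return obs

# Hypothetical points annotated once on a reference object, then
# tracked frame-to-frame by an off-the-shelf point tracker.
mug_keypoints = [(320, 240), (350, 260), (310, 280)]
obs = keypoint_observation(mug_keypoints, (640, 480))
```

Because two visually different mugs can share the same annotated points, a policy trained on this representation transfers across object instances for free.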

1 year ago 37 7 1 2
Video

Modern policy architectures are unnecessarily complex. In our #NeurIPS2024 project called BAKU, we focus on what really matters for good policy learning.

BAKU is modular, language-conditioned, compatible with multiple sensor streams & action multi-modality, and importantly fully open-source!
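A rough sketch of the modularity idea, with trivial stand-in encoders rather than BAKU's actual networks: each sensor stream gets its own encoder, and the fused features feed one action head, so streams can be added or swapped without touching the rest of the pipeline.

```python
# Hypothetical per-modality encoders (real ones would be neural nets).
def encode_image(pixels):
    return [sum(pixels) / len(pixels)]

def encode_proprio(joints):
    return list(joints)

def policy(observation, encoders, action_head):
    """Modular policy: encode each stream separately, fuse, act."""
    features = []
    for name, data in observation.items():
        features += encoders[name](data)
    return action_head(features)

obs = {"image": [0.1, 0.9, 0.5], "proprio": (0.2, -0.3)}
encoders = {"image": encode_image, "proprio": encode_proprio}
action = policy(obs, encoders, lambda f: [x * 0.1 for x in f])
```

Adding a tactile sensor then means registering one more encoder in the dict, not rewriting the policy.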

1 year ago 30 9 1 2
Video

Since we are nearing the end of the year, I'll revisit some of our work I'm most excited about from the last year and maybe a sneak peek of what we are up to next.

To start off: Robot Utility Models, which enable zero-shot deployment. In the video below, the robot hasn't seen these doors before.

1 year ago 36 8 2 3

I agree, the paper could definitely be clearer. My assumption is “same training loop” ≈ “all else being equal”, but that can be totally incorrect.

1 year ago 1 0 0 0
Post image

AFAIK it's the same dataset, they just use the larger pretrained model as the teacher model. Screenshot is from the DinoV2 paper section 5: arxiv.org/abs/2304.07193

1 year ago 1 0 1 0
Preview
GitHub - facebookresearch/dinov2: PyTorch code and models for the DINOv2 self-supervised learning method. - facebookresearch/dinov2

DINOv2 is a good recent example in vision –

1 year ago 1 0 1 0
Video

I'd like to introduce what I've been working at @hellorobot.bsky.social: Stretch AI, a set of open-source tools for language-guided autonomy, exploration, navigation, and learning from demonstration.

Check it out: github.com/hello-robot/...

Thread ->

1 year ago 132 23 6 4
Video

Turns out Aria glasses are a very useful tool for demonstrating actions to robots: based on egocentric video, we track dynamic changes in a scene graph and use that representation to replay or plan interactions for robots.
🔗 behretj.github.io/LostAndFound/
📄 arxiv.org/abs/2411.19162
📺 youtu.be/xxMsaBSeMXo
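A toy version of the scene-graph diffing idea (the format here is hypothetical; the paper's graphs carry far richer state than positions):

```python
def scene_graph_changes(before, after, tol=0.01):
    """Diff two scene-graph snapshots (object name -> 3D position) and
    report objects that appeared, disappeared, or moved beyond `tol`."""
    changes = {"added": [], "removed": [], "moved": []}
    for node in after:
        if node not in before:
            changes["added"].append(node)
        else:
            dist = sum((a - b) ** 2
                       for a, b in zip(after[node], before[node])) ** 0.5
            if dist > tol:
                changes["moved"].append(node)
    changes["removed"] = [n for n in before if n not in after]
    return changes

before = {"cup": (0.2, 0.1, 0.9), "book": (0.5, 0.3, 0.9)}
after = {"cup": (0.6, 0.4, 0.9), "keys": (0.1, 0.1, 0.9)}
print(scene_graph_changes(before, after))
```

Replaying a human demonstration then reduces to executing the "moved" entries in order: pick each object up from its old pose and place it at its new one.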

1 year ago 27 6 1 2

A reminder for folks in financial need: many PhD programs offer application fee waivers, applying for one is not onerous, and waivers are usually granted (at least at the two schools I'm familiar with). Please take advantage of them.

1 year ago 26 10 1 0

I wish it were only podcasts, I am seeing form steamrolling over content in academic papers more and more these days.

1 year ago 1 0 0 0

👋

1 year ago 0 0 1 0

Would like to be added!

1 year ago 1 0 1 0

I collected some folk knowledge for RL and stuck them in my lecture slides a couple weeks back: web.mit.edu/6.7920/www/l... See Appendix B... sorry, I know, appendix of a lecture slide deck is not the best for discovery. Suggestions very welcome.

1 year ago 114 18 3 3
Preview
Learn Git Branching An interactive Git visualization tool to educate and challenge!

On one of the first projects I supervised in my PhD, a student repeatedly ignored suggestions to commit and then accidentally deleted the project at the end of the semester. Please use git! There are even "fun" games you can use to learn it:
learngitbranching.js.org
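The habit itself is tiny. A minimal workflow, sketched below (assumes git is installed; the file and identity names are placeholders):

```shell
# Initialize once, then commit early and often. A deleted working
# copy can be restored from any clone of the repository.
git init project && cd project
git config user.name "Your Name"
git config user.email "you@example.com"
echo "print('hello')" > train.py
git add train.py
git commit -m "Add initial training script"
git log --oneline   # one line per commit you can roll back to
```

Pushing to a remote (e.g. a private GitHub repo) adds the off-machine backup that would have saved that project.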

1 year ago 60 6 5 0
Preview
Supervised Policy Learning for Real Robots RSS 2024 Tutorial on Supervised Policy Learning for Real Robots. Friday, July 19 Afternoon (2 PM - 6 PM Central European Time, 8 AM - 12 PM Eastern Time).

We took a bunch of them in robot learning and made a tutorial about them! I tried to put everything that I find myself regularly telling my students in there somewhere. I really think it can save days to months of a new grad student's life.

supervised-robot-learning.github.io

1 year ago 13 2 0 1

Interesting article but the author drank the Kool-Aid and never sought out other viewpoints: “Foundation models like GPT-4 have largely subsumed [previous] models that help robots with planning and vision, and locomotion and dexterity will probably soon be subsumed, too.”

1 year ago 27 4 1 0
Video

I'll be presenting AnySkin at the Stanford Center for Design Research today at 2pm! Stop by for a chat and try the sensor out!

More info: any-skin.github.io

1 year ago 6 2 0 0
Post image
1 year ago 37 4 0 2

A reminder that many feeds here are non-algorithmic, so reposting is more helpful than it is on Twitter.

1 year ago 23 4 1 0

I was presenting this at NEMS, yes :)

1 year ago 0 0 0 0
Robots for Humanity: In-Home Deployment of Stretch RE2 (YouTube video by Vinitha Ranganeni)

This week's #PaperILike is "Robots for Humanity: In-Home Deployment of Stretch RE2" (Ranganeni et al., HRI 2024).
This is probably the most inspiring robot video/demo that I've ever seen.

Video: www.youtube.com/watch?v=K2U7...
Paper: dl.acm.org/doi/abs/10.1...

1 year ago 12 2 0 1