
Posts by Josh Susskind

I saw an interview with Terence Tao talking about this -- that people need to treat deep thinking as a form of workout.

2 weeks ago

Cogsci peeps! This is a great opportunity! @sineadwilliamson.bsky.social is a great mentor and scientist, along with a wonderful team :)

5 months ago

We have been working with Michal Klein on pushing a module to train *flow matching* models using JAX. This is shipped as part of our new release of the OTT-JAX toolbox (github.com/ott-jax/ott)

The tutorial to do so is here: ott-jax.readthedocs.io/tutorials/ne...
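For intuition, here's a minimal NumPy sketch of the conditional flow-matching objective the module trains (illustrative only -- this is not the OTT-JAX API, and the zero-velocity "model" is a stand-in for a real network):

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x0, x1, rng):
    """Regress model(x_t, t) onto the straight-line velocity x1 - x0."""
    n = x0.shape[0]
    t = rng.uniform(size=(n, 1))       # one random timestep per sample
    x_t = (1.0 - t) * x0 + t * x1      # point on the linear interpolation path
    v_target = x1 - x0                 # constant velocity along that path
    v_pred = model(x_t, t)
    return np.mean((v_pred - v_target) ** 2)

# Toy "model": always predicts zero velocity (a placeholder, not a trained net).
zero_model = lambda x_t, t: np.zeros_like(x_t)

x0 = rng.normal(size=(128, 2))         # source samples (e.g. Gaussian noise)
x1 = rng.normal(size=(128, 2)) + 5.0   # target samples
loss = flow_matching_loss(zero_model, x0, x1, rng)
```

A real setup would minimize this loss over network parameters; at sampling time you integrate the learned velocity field from x0 to x1.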

5 months ago
Sharded Sinkhorn — ott 0.5.1.dev34+g3462f28 documentation

Scaling up the computation of optimal transport couplings to hundreds of thousands of 3k-dimensional vectors, made easy using sharding and OTT-JAX! Check out this notebook; it only takes a few lines of code thanks to JAX's native sharding abilities: ott-jax.readthedocs.io/en/latest/tu...
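To see what's being scaled up, here's a minimal dense Sinkhorn iteration in NumPy (a textbook sketch for intuition only -- OTT-JAX's actual solver is more numerically robust and is what handles the sharding):

```python
import numpy as np

def sinkhorn(a, b, cost, eps=0.5, n_iters=200):
    """Minimal dense Sinkhorn: alternate scaling of the Gibbs kernel so the
    transport plan's row/column sums match the marginals a and b."""
    K = np.exp(-cost / eps)             # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)               # rescale to match column marginals
        u = a / (K @ v)                 # rescale to match row marginals
    return u[:, None] * K * v[None, :]  # entropic-regularized transport plan

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))
y = rng.normal(size=(60, 3))
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared distances
a = np.full(50, 1 / 50)
b = np.full(60, 1 / 60)
P = sinkhorn(a, b, cost)
```

The kernel matrix here is tiny; the point of the sharded version is that K never has to live on one device when the point clouds reach hundreds of thousands of vectors.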

8 months ago

Wow. Thank you for your bravery whoever you are.

10 months ago

How about "Machine Learning and Computer Science"? MLCS. 😁

11 months ago
A Call for Constructive Engagement | AAC&U

What say you @stanford.edu?
www.aacu.org/newsroom/a-c...

11 months ago

Check out our Apple research work on scaling laws for native multimodal models! Combined with mixtures of experts, native models develop both specialized and multimodal representations! Lots of rich findings and opportunities for follow-up research!

1 year ago
https://arxiv.org/pdf/2502.18435

Paper link: t.co/Z2FZ6YSpbA
Code/Model checkpoints: t.co/bXYHZOONOm

1 year ago

My colleagues in #Apple ML Research posted a fun paper investigating how autoregressive design choices affect reasoning (in this case, multi-choice question answering), showing a benefit to R2L ordering. Reminds me of similar findings for reverse order addition in arxiv.org/abs/2310.16028!
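The reverse-order-addition finding cited above has a simple mechanical intuition: if digits are emitted least-significant-first, each output digit depends only on the inputs seen so far plus a carry, which lines up with autoregressive generation. A small sketch (my own illustration, not code from either paper):

```python
def add_reversed(a_digits, b_digits):
    """Add two numbers given least-significant-digit-first, emitting digits in
    the same order. Each output digit needs only the current input digits and
    the running carry -- no lookahead, unlike most-significant-first addition."""
    out, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        s = carry
        s += a_digits[i] if i < len(a_digits) else 0
        s += b_digits[i] if i < len(b_digits) else 0
        out.append(s % 10)   # emit this digit immediately
        carry = s // 10      # propagate carry to the next step
    if carry:
        out.append(carry)
    return out

# 47 + 85 = 132, with digits written least-significant first:
result = add_reversed([7, 4], [5, 8])  # [2, 3, 1]
```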

1 year ago

Permanent Hellcountry is a badass name for a band! Too bad it's also us. Stranger than fiction.

1 year ago

My colleague Shuangfei Zhai is looking for a summer research intern to work on improving TarFlow at Apple. If interested, send your CV to szhai at apple.com by this week.

1 year ago
A Sad Moment in American History (YouTube video by Senator Bernie Sanders)

youtu.be/rKBM2kS6B8o

Thank you @sanders.senate.gov for speaking up

1 year ago

Is there an article associated with this thread?

1 year ago

Here's a great paper on scaling laws for teacher-student neural network distillation led by @dbusbridge.bsky.social and Apple colleagues. I've often seen people struggle to get distillation working well enough in practical settings, and I expect the insights in this paper can really help!
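For anyone who hasn't tried distillation, the classic objective is simple: match the student's softened output distribution to the teacher's. A NumPy sketch of the temperature-scaled KL loss (the standard Hinton-style recipe, not code from the paper itself):

```python
import numpy as np

def softmax(z, T):
    """Temperature-scaled, numerically stable softmax."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) at temperature T, scaled by T^2 so gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))

teacher = np.array([[2.0, 1.0, 0.0]])
student = np.array([[0.0, 0.0, 0.0]])
loss = distill_loss(student, teacher)
```

The loss is zero exactly when the student reproduces the teacher's distribution; much of the scaling-laws question is how this behaves as you vary teacher and student capacity and data.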

1 year ago

Here's a fun Apple research paper seeking to understand when/why diffusion models can be composed to generate images containing multiple independent concepts. For example, composing images from a model trained on Preetum's dog and a model trained on hats. Because why wouldn't you want to do that?!!
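One common composition recipe (offered here as general background, not necessarily the paper's exact method): sum the score estimates of independently trained models at each denoising step, since the score of a product density is the sum of the scores. A toy 1-D sketch with closed-form Gaussian scores standing in for trained networks:

```python
import numpy as np

def composed_score(score_fns, x, t):
    """Compose independently trained diffusion models by summing their scores:
    grad log(p1(x) * p2(x)) = grad log p1(x) + grad log p2(x)."""
    return sum(f(x, t) for f in score_fns)

# Toy stand-ins for trained models: scores of two 1-D unit Gaussians.
score_a = lambda x, t: -(x - 1.0)   # score of N(1, 1), e.g. "dog" model
score_b = lambda x, t: -(x + 1.0)   # score of N(-1, 1), e.g. "hat" model

x = np.linspace(-3.0, 3.0, 7)
s = composed_score([score_a, score_b], x, t=0.0)  # equals -2 * x
```

The composed score vanishes at x = 0, the mode of the product of the two Gaussians. For real diffusion models this summing is a heuristic at intermediate noise levels, which is exactly why the when/why question the paper studies is interesting.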

1 year ago

Yeah! That's what I said when I saw it too :) Better than any dog-horse I could make!

1 year ago

If you are interested in doing an internship in ML research at Apple, I highly recommend talking with Etai Littwin (and Vimal Thilak is pretty awesome too!)

1 year ago

I think it's really important for more of this kind of work to be published openly rather than be walled off due to corporate greed -- scientific inquiry benefits us all. Hopefully we will continue to see lots and lots more of this!

1 year ago

This work was born from an Apple internship with Harshay Shah. Samira provided excellent direction and technical contributions along with Vimal, and the entire team was incredibly helpful! I'm intrigued that reading comprehension tasks do not follow pre-training scaling curves -- gotta follow this up!

1 year ago

Missing the deep learning part? Go check out the follow-up work @neuripsconf.bsky.social (tinyurl.com/yvf72kzf) and @iclr-conf.bsky.social (tinyurl.com/4vh8vuzk)

1 year ago

Too disgusted by the Twitter/X vomit and could not justify keeping my account there. Hoping this platform steers clear of disinformation and hate -- and remains a positive place to share science and other good things.

1 year ago

Here's a really cool cross-institution study leveraging optimal transport techniques developed by my Apple ML Research colleagues! It's great to see basic research in machine learning translate into scientific tools like this. Cuts into the AI hype a bit ;)

1 year ago

Excited about vision-language models? 🚀 Check out our latest work on FastVLM, a new family of efficient vision-language models that balances the tradeoff between high-resolution image understanding and latency without compromising accuracy!

arxiv.org/abs/2412.13303

1 year ago

If you're looking for research scientist roles in Europe, check out Marco's post! The Paris team is fantastic, and does diverse idea-driven and impactful research. In addition, MLR is highly collaborative across timezones, so you'd have a chance to work with many others too.

1 year ago
Apple Machine Learning Research at NeurIPS 2024: Apple researchers are advancing the field of ML through fundamental research that improves the world’s understanding of this technology and…

Last but not least, please check out the flurry of papers being presented at #NeurIPS2024, highlighted in this post: machinelearning.apple.com/research/neu... It showcases work from many teams at Apple and their academic collaborators.

Thanks for making it to the end ;-)

1 year ago

EC-IJEPA makes the JEPA approach less brittle and further unlocks its use in diverse planning and reasoning tasks that leverage pre-trained visual representations as a world model. We're excited to see others build on this work with us!
12/n

1 year ago

Returning to the theme of powerful visual representation learning, please check out Vimal Thilak, Etai Littwin, and Anand Gopalakrishnan's EC-IJEPA paper, on improving JEPA models with spatial conditioning:
x.com/AggieInCA/st...
11/n

1 year ago

WVD is impressive because it enables a range of downstream 3D tasks, including 3D view synthesis and depth estimation at inference time, by training a good generative model of RGB + XYZ values. Many directions to follow up on here, including modeling dynamics!
10/n

1 year ago

Moving from images to video and 3D generation, I'm also excited to highlight Jiatao Gu and collaborators' work on WVD (world video diffusion), which jointly models multi-view images and 3D geometry: x.com/thoma_gu/sta...
9/n

1 year ago