I saw an interview with Terence Tao talking about this -- that people need to treat deep thinking as a form of workout.
Posts by Josh Susskind
Cogsci peeps! This is a great opportunity! @sineadwilliamson.bsky.social is a great mentor and scientist, along with a wonderful team :)
We have been working with Michal Klein on pushing a module to train *flow matching* models using JAX. This is shipped as part of our new release of the OTT-JAX toolbox (github.com/ott-jax/ott)
The tutorial to do so is here: ott-jax.readthedocs.io/tutorials/ne...
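To give a rough sense of the idea (this is a hypothetical plain-JAX sketch of the flow matching objective, not the OTT-JAX module itself; the tutorial above is the reference):

```python
# Minimal flow-matching sketch in plain JAX (hypothetical; not the OTT-JAX API).
# A small MLP learns the velocity field v(x_t, t) that transports Gaussian
# noise samples x0 toward data samples x1 along straight-line paths.
import jax
import jax.numpy as jnp

def mlp(params, x, t):
    # Tiny velocity-field network: concatenate the point and the time.
    h = jnp.concatenate([x, t[..., None]], axis=-1)
    for w, b in params[:-1]:
        h = jax.nn.relu(h @ w + b)
    w, b = params[-1]
    return h @ w + b

def init_params(key, dim, hidden=128):
    keys = jax.random.split(key, 3)
    sizes = [(dim + 1, hidden), (hidden, hidden), (hidden, dim)]
    return [(jax.random.normal(k, s) * 0.02, jnp.zeros(s[1]))
            for k, s in zip(keys, sizes)]

def flow_matching_loss(params, key, x1):
    # Straight-line (rectified-flow style) interpolation between noise and data.
    k0, kt = jax.random.split(key)
    x0 = jax.random.normal(k0, x1.shape)            # source: standard Gaussian
    t = jax.random.uniform(kt, (x1.shape[0],))      # random time in [0, 1]
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1  # point on the path
    target = x1 - x0                                # constant target velocity
    pred = mlp(params, xt, t)
    return jnp.mean((pred - target) ** 2)

# Jit-compiled loss + gradient, ready for any optimizer loop.
loss_and_grad = jax.jit(jax.value_and_grad(flow_matching_loss))
```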
Scaling up the computation of optimal transport couplings to hundreds of thousands of 3k-dimensional vectors, made easy using sharding and OTT-JAX! Check out this notebook: it only takes a few lines of code thanks to JAX's native sharding abilities ott-jax.readthedocs.io/en/latest/tu...
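As a rough illustration of what those few lines look like, here's a hedged sketch of JAX's native sharding applied to two large point clouds; the mesh layout, array shapes, and axis names are my own assumptions, and the notebook above shows the actual OTT-JAX solver calls.

```python
# Sketch: sharding large point clouds across devices with JAX's native
# sharding API (assumptions noted in comments; the linked OTT-JAX notebook
# is the authoritative recipe).
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Lay out all available devices along a single mesh axis named "batch".
mesh = Mesh(np.array(jax.devices()), axis_names=("batch",))

# Two point clouds; in the notebook these are hundreds of thousands of
# 3k-dimensional vectors (kept smaller here so the sketch runs anywhere).
x = jnp.ones((8192, 3072))
y = jnp.ones((8192, 3072))

# Shard the leading (points) axis across devices; feature axis stays replicated.
shard = NamedSharding(mesh, P("batch", None))
x = jax.device_put(x, shard)
y = jax.device_put(y, shard)

# From here, the sharded arrays can be handed to an OTT-JAX Sinkhorn solver as
# in the notebook; jit-compiled code inherits the sharding, so the coupling
# computation is distributed without further changes (see the tutorial for the
# exact solver calls).
```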
Wow. Thank you for your bravery whoever you are.
How about "Machine Learning and Computer Science"? MLCS. 😁
Check out our Apple research work on scaling laws for native multimodal models! Combined with mixtures of experts, native models develop both specialized and multimodal representations! Lots of rich findings and opportunities for follow-up research!
My colleagues in #Apple ML Research posted a fun paper investigating how autoregressive design choices affect reasoning (in this case, multi-choice question answering), showing a benefit to R2L ordering. Reminds me of similar findings for reverse order addition in arxiv.org/abs/2310.16028!
Permanent Hellcountry is a badass name for a band! Too bad it's also us. Stranger than fiction.
My colleague Shuangfei Zhai is looking for a summer research intern to work on improving TarFlow at Apple. If interested, send your CV to szhai at apple.com by this week.
youtu.be/rKBM2kS6B8o
Thank you @sanders.senate.gov for speaking up
Is there an article associated with this thread?
Here's a great paper on scaling laws for teacher-student neural network distillation led by @dbusbridge.bsky.social and Apple colleagues. I've often seen people struggle to get distillation working well enough in practical settings, and I expect the insights in this paper can really help!
Here's a fun Apple research paper seeking to understand when/why diffusion models can be composed to generate images containing multiple independent concepts. For example, composing images from a model trained on Preetum's dog and a model trained on hats. Because why wouldn't you want to do that?!!
Yeah! That's what I said when I saw it too :) Better than any dog-horse I could make!
If you are interested in doing an internship in ML research at Apple, I highly recommend talking with Etai Littwin (and Vimal Thilak is pretty awesome too!)
I think it's really important for more of this kind of work to be published openly rather than be walled off due to corporate greed -- scientific inquiry benefits us all. Hopefully we will continue to see lots and lots more of this!
This work was born from an Apple internship with Harshay Shah. Samira provided excellent direction and technical contributions along with Vimal, and the entire team was incredibly helpful! I'm intrigued that reading comprehension tasks do not follow pre-training scaling curves -- gotta follow this up!
Missing the deep learning part? Go check out the follow-up work @neuripsconf.bsky.social (tinyurl.com/yvf72kzf) and @iclr-conf.bsky.social (tinyurl.com/4vh8vuzk)
Too disgusted by the Twitter/X vomit and could not justify keeping my account there. Hoping this platform steers clear of disinformation and hate -- and remains a positive place to share science and other good things.
Here's a really cool cross-institution study leveraging optimal transport techniques developed by my Apple ML Research colleagues! It's great to see basic research in machine learning translate into scientific tools like this. Cuts into the AI hype a bit ;)
Excited about vision-language models? 🚀 Check out our latest work on FastVLM, a new family of efficient vision-language models that balances the tradeoff between high-resolution image understanding and latency without compromising accuracy!
arxiv.org/abs/2412.13303
If you're looking for research scientist roles in Europe, check out Marco's post! The Paris team is fantastic, and does diverse idea-driven and impactful research. In addition, MLR is highly collaborative across timezones, so you'd have a chance to work with many others too.
Last but not least, please check out the flurry of papers being presented at #NeurIPS2024, highlighted here in this post machinelearning.apple.com/research/neu... that showcases work from many teams at Apple and their academic collaborators.
Thanks for making it to the end ;-)
EC-IJEPA makes the JEPA approach less brittle, and also further unlocks its use in diverse planning and reasoning tasks that leverage pre-trained visual representations as a world model. We're excited to see others build on this work with us!
12/n
Returning to the theme of powerful visual representation learning, please check out Vimal Thilak, Etai Littwin, and Anand Gopalakrishnan's EC-IJEPA paper, on improving JEPA models with spatial conditioning:
x.com/AggieInCA/st...
11/n
WVD is impressive because it enables a range of downstream 3D tasks including 3D view synthesis and depth estimation at inference time by training a good generative model of RGB + XYZ values. Many directions to follow up on here including modeling dynamics!
10/n
Moving from images to video and 3D generation, I'm also excited to highlight Jiatao Gu and collaborators' work on WVD (world video diffusion), which jointly models multi-view images and 3D geometry: x.com/thoma_gu/sta...
9/n