The AI for Content Creation workshop is kicking off today at #CVPR2025 - Grand Ballroom A1 - with @magrawala.bsky.social, Kai Zhang (Adobe), Charles Herrmann (Google), Mark Boss (Stability AI), Yutong Bai (UC Berkeley), Cherry Zhao (Adobe), Ishan Misra (Meta), and @jonbarron.bsky.social! See you soon!
Posts by Jun-Yan Zhu
AI for Content Creation workshop @ #CVPR2025 - Grand Ballroom A1 - 4pm - panel on "Open Source in AI and the Creative Industry" - with @magrawala.bsky.social (Stanford), Cherry Zhao (Adobe), Ishan Misra (Meta), and @jonbarron.bsky.social (Google) - go go!
[2/2] Work led by @avalovelace.bsky.social, @kangledeng.bsky.social, Ruixuan Liu, and CMU faculty Changliu Liu and Deva Ramanan. LegoGPT is a small first step towards generative manufacturing of physical objects. Current version is limited to 20x20x20, 21 object categories, and simple brick types.
[1/2] We've released the code for LegoGPT. Our autoregressive model generates physically stable and buildable designs from text prompts by integrating physical laws and assembly constraints into LLM training and inference.
Code: github.com/AvaLovelace1...
Website: avalovelace1.github.io/LegoGPT/
Reve Image is our first step towards world-class image generation, and you can experience it for free today!
The AI for Content Creation workshop #CVPR2025 is accepting paper submissions. ai4cc.net Deadline: March 21st, 2025, midnight PST. We welcome 4-page extended abstracts, 8-page papers, and previously published work (ECCV, NeurIPS, even CVPR)! Many topics - come spend the day with us!
Can we generate a training dataset of the same object in different contexts for customization? Check out our work SynCD, which uses Objaverse assets and shared attention in text-to-image models to build exactly such a dataset.
cs.cmu.edu/~syncd-proje...
w/ Xi Yin, @junyanz.bsky.social, Ishan Misra, and Samaneh Azadi
One day, walking home, I noticed a dry cleaner's across the street. "Was that always there?" I thought. A little Googling revealed that it had been on my street longer than I had.
Here's a blog post on why we often miss what's right in front of us. #visionscience
aaronhertzmann.com/2024/05/09/i...
Excited to bring the 5th CV4Animals Workshop to #CVPR2025
We welcome submissions in 2 tracks:
1) unpublished work up to 4 pages
2) papers published within last 2 years
Submit by Mar 28 & join us with amazing speakers in Nashville:
www.cv4animals.com
@cvprconference.bsky.social
3D content creation with touch!
We exploit tactile sensing to enhance geometric details for text- and image-to-3D generation.
Check out our #NeurIPS2024 work on Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation: ruihangao.github.io/TactileDream...
1/3
Huge congratulations to RI Ph.D. Sheng-Yu Wang for receiving a 2024 Google Fellowship!
The two-year fellowship supports Wang's work in data attribution for text-to-image models.
Read about his achievement on our news site: www.ri.cmu.edu/robotics-ins...
I created a Hugging Face space for my recent work, PairCustomization. You can choose from a set of pretrained LoRAs trained with our method and run inference with our novel style guidance:
huggingface.co/spaces/pairc...
I demoed this at #SIGGRAPHASIA2024 and it went great! :)
3/3
Check out Maxwell et al.'s recent SIGGRAPH Asia paper on model customization with a single image pair. The code is available at github.com/PairCustomiz...
#JobAlert! Come join me at Carnegie Mellon's School of Art - we're hiring an open-rank tenure-track professor in "Experimental Animation and Emerging Media Practices"! Deadline is Jan 5: art.cmu.edu/employment/#...
Introducing Generative Omnimatte:
A method for decomposing a video into complete layers, including objects and their associated effects (e.g., shadows, reflections).
It enables a wide range of cool applications, such as video stylization, composition, moment retiming, and object removal.
TTIC building. Photo credit: TTIC.
I am recruiting exceptional PhD students & postdocs with an adventurous soul for my new TTIC AI lab! We aim to understand intelligence, one pixel at a time, inspired by psychology, neuroscience, language, robotics, and the arts. Apply: www.ttic.edu/studentappli...
sites.google.com/ttic.edu/ope...