Brown CS is looking for a professor of the practice in AI - feel free to reach out if you have any questions.
jobs.chronicle.com/job/37970844...
Posts by James Tompkin
Why submit?
Non-archival & flexible formats: 8-page full paper or 4-page extended abstract
Best student paper award with prizes
Travel grants available
Track for already-accepted works, even at CVPR 2026!
Come hang out with us!
Work on generative AI or creative content?
AI for Content Creation Workshop @ CVPR! @cvprconference.bsky.social
Submit by March 23!
ai-for-content-creation.github.io
• Image, video, 3D, world
• Multi-modal
• Generation, editing, transfer
• VFX, games, fashion, design, architecture & more
Need 3D scene flow from _two_ images, single camera, with the goal of generalized performance? Inference code and model weights now out for the CVPR 2025 ZeroMSF method github.com/NVlabs/zero-...
By Yiqing Liang lynl7130.github.io with Abhishek Badki, Hang Su, and Orazio Gallo at NVIDIA.
AI for Content Creation workshop @ #CVPR2025 - Grand Ballroom A1 - 4pm - panel on "Open Source in AI and the Creative Industry" - with @magrawala.bsky.social (Stanford), Cherry Zhao (Adobe), Ishan Misra (Meta) and @jonbarron.bsky.social (Google) - go go!
The AI for Content Creation workshop is kicking off today at #CVPR2025 - Grand Ballroom A1 - @magrawala.bsky.social Kai Zhang (Adobe), Charles Herrmann (Google), Mark Boss (Stability AI), Yutong Bai (UC Berkeley), Cherry Zhao (Adobe), Ishan Misra (Meta) and @jonbarron.bsky.social ! See you soon!
Thanks to the org team: @junyanz.bsky.social @lingjieliu.bsky.social Deqing Sun, Lu Jiang, Fitsum Reda, and Krishna Kumar Singh!
The AI for Content Creation workshop #CVPR2025 is accepting paper submissions. ai4cc.net Deadline March 21st 2025 midnight PST. 4-page extended abstracts, 8-page papers, and previously published work (ECCV, NeurIPS, even CVPR)! Many topics - come spend the day with us!
Submit to our workshop on Physics-inspired 3D Vision and Imaging at #CVPR2025!
Speakers include Ioannis Gkioulekas, Laura Waller, Berthy Feng, @shwbaek.bsky.social and Gordon Wetzstein!
pi3dvi.github.io
You can also just come hang out with us at the workshop @cvprconference.bsky.social!
ICCV 2025 #ICCV2025 Workshop proposals deadline is tomorrow midnight anywhere on earth! iccv.thecvf.com/Conferences/... If you have any questions, send us an email! The chairs are happy to help. See you in Hawaii?
Thanks, but I just twiddle my thumbs - it's all Nick and Aaron : )
arXiv: arxiv.org/abs/2501.05441
HuggingFace: huggingface.co/papers/2501....
OpenReview: openreview.net/forum?id=Ort...
We prioritize simplicity and performance over functionality. As a minimal baseline, our model does only basic image generation, lacking many features required for downstream tasks. Think of it as DCGAN in 2025 rather than something feature-rich like StyleGAN. We hope this helps further GAN research!
Given the well-behaved loss, we move away from the 2015-ish architecture in StyleGAN and implement G and D with a minimalist yet modern architecture---a simplified ConvNeXt. With the two components combined, we obtain a simple GAN baseline that is stable to train and surpasses StyleGAN performance.
To further GAN research, we first improve the GAN loss to alleviate mode dropping and non-convergence. This makes GAN optimization sufficiently easy that we can now discard existing GAN tricks w/o training failure. The dependence on outdated GAN-specific architectures is also eliminated.
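The improved loss described above pairs a relativistic objective with zero-centered gradient penalties on both reals and fakes (R1 and R2). Here's a rough, stdlib-only sketch of those pieces; the function names and the gamma value are my own for illustration, and a real implementation would compute the per-sample gradient norms via autograd rather than take them as inputs:

```python
import math

def softplus(x):
    # Numerically stable softplus: log(1 + e^x)
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def rpgan_d_loss(real_logits, fake_logits):
    # Relativistic pairing discriminator loss: each fake logit is
    # penalized relative to its paired real logit.
    n = len(real_logits)
    return sum(softplus(f - r) for r, f in zip(real_logits, fake_logits)) / n

def rpgan_g_loss(real_logits, fake_logits):
    # The generator's loss flips the sign of the logit difference.
    n = len(real_logits)
    return sum(softplus(r - f) for r, f in zip(real_logits, fake_logits)) / n

def zero_centered_penalty(grad_norms, gamma=10.0):
    # R1/R2-style zero-centered gradient penalty, given per-sample
    # gradient norms of D (w.r.t. real inputs for R1, fakes for R2).
    return (gamma / 2.0) * sum(g * g for g in grad_norms) / len(grad_norms)

# When D scores reals above their paired fakes, D's loss is small
# and G's loss is large; the penalties pull D's gradients toward zero.
d_loss = rpgan_d_loss([3.0, 2.0], [-3.0, -2.0])
g_loss = rpgan_g_loss([3.0, 2.0], [-3.0, -2.0])
```

Because the loss is relativistic (it only ever compares paired real/fake scores), the discriminator can't "win" globally while ignoring parts of the data distribution, which is the intuition for why mode dropping is alleviated.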
GANs are often criticized for their training instability, and it is often believed that GANs cannot work w/o many engineering tricks. They use outdated network architectures without modern backbone advances. These supposed weaknesses resulted in the abandonment of GAN research in favor of diffusion.
Can GANs compete in 2025? In 'The GAN is dead; long live the GAN! A Modern GAN Baseline', we show that a minimalist GAN w/o any tricks can match the performance of EDM with half the size and one-step generation - github.com/brownvc/r3gan - work of Nick Huang, @skylion.bsky.social, Volodymyr Kuleshov
Need evaluation and insight into why monocular dynamic scene reconstruction is difficult, especially with Gaussian splats? Need an apples-to-apples comparison of basic motion models on a scene with controlled camera and object motion? Here you go.
Hey that's us! Let me know if anyone has any questions : )
But what if you _really_ like reflections? Local Gaussian Density Mixtures updates lumigraphs by optimizing mixtures of per-view volumes for maximum shine #SIGGRAPHAsia2024 xchaowu.github.io/papers/lgdm/... First author Xiuchao Wu is graduating soon and is looking for a job!
Created a starter pack for researchers working in inverse graphics, 3D vision, and geometry processing.
Would love your help to expand this list!
go.bsky.app/9uEdjzb
Welcome to all new arrivals here on Bluesky! :) Here's a starter pack of people working on computer vision.
go.bsky.app/PkAKJu5
Converted my Graphics Research list to a starter pack (not sure what's the difference though). Let me know who we are missing here :)
Here goes! go.bsky.app/ApQNTt2