
Posts by Erkut Erdem

A huge thanks to the incredible team behind HyperGAN-CLIP! Abdul Basit Anees, Ahmet Canberk Baykal, Muhammed Burak Kizil, Duygu Ceylan @aykuterdem.bsky.social 🙌 (4/n)


🏮Curious to learn more? Catch us at #SIGGRAPHAsia 2024 in Tokyo or check out our paper for all the details.

Project page: cyberiada.github.io/HyperGAN-CLIP
arXiv: arxiv.org/abs/2411.12832
(3/n)


Our approach extends a pre-trained StyleGAN by integrating CLIP space via hypernetworks. This allows us to dynamically adapt it to new domains using reference images or text descriptions.

It’s flexible, efficient, and unlocks new applications for GANs. 🌍✨ (2/n)
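The hypernetwork idea described above can be sketched in a few lines: a small network maps a CLIP embedding (of a reference image or a text prompt) to a residual weight update for a frozen, pre-trained generator layer. This is a toy illustration under assumed dimensions, not the paper's actual architecture; the layer shapes, low-rank factorization, and `HyperNetwork` module here are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Toy dimensions (assumptions, not the paper's real sizes)
CLIP_DIM = 512                 # CLIP joint embedding size
OUT_CH, IN_CH, K = 64, 64, 3   # shape of one generator conv layer

class HyperNetwork(nn.Module):
    """Hypothetical sketch: predicts a low-rank residual update
    for one frozen generator conv weight from a CLIP embedding."""
    def __init__(self, clip_dim=CLIP_DIM, rank=4):
        super().__init__()
        self.rank = rank
        self.to_u = nn.Linear(clip_dim, OUT_CH * rank)
        self.to_v = nn.Linear(clip_dim, rank * IN_CH * K * K)

    def forward(self, clip_emb):
        u = self.to_u(clip_emb).view(OUT_CH, self.rank)
        v = self.to_v(clip_emb).view(self.rank, IN_CH * K * K)
        return (u @ v).view(OUT_CH, IN_CH, K, K)  # residual delta-W

# Frozen pre-trained layer (stand-in for a StyleGAN conv)
conv = nn.Conv2d(IN_CH, OUT_CH, K, padding=1)
for p in conv.parameters():
    p.requires_grad_(False)

hyper = HyperNetwork()
clip_emb = torch.randn(CLIP_DIM)   # stand-in for a real CLIP image/text embedding
delta_w = hyper(clip_emb)

x = torch.randn(1, IN_CH, 8, 8)
# Adapted forward pass: original frozen weights plus CLIP-driven residual
y = nn.functional.conv2d(x, conv.weight + delta_w, conv.bias, padding=1)
print(y.shape)
```

Because only the hypernetwork is trained while the generator stays frozen, swapping the conditioning embedding at inference time is what lets a single model switch between target domains.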

A Unified Framework for Domain Adaptation, Image Synthesis and Manipulation (SIGGRAPH Asia'24)

GANs like StyleGAN generate highly realistic images. But adapting them to new domains or tasks like text-guided editing or reference-guided synthesis with limited data is challenging! 🖼️✨

Our #SIGGRAPHAsia 2024 paper, HyperGAN-CLIP, tackles this: youtu.be/X0VOYFhPWxQ (1/n)
