
Posts by David Garrido

[3 images]

It's crazy how well img2img with an IC LoRA works for injecting a logo into an image. The game has changed.

Try it out here -> glif.app/glifs/cm3o7d...

1 year ago 11 2 0 0
[4 images]

None of these pictures are real.

Last weekend I trained a LoRA of my dog locally thanks to Fluxgym in Pinokio.

It took almost 3 hours, but I’m pretty happy with the outputs.

1 year ago 0 0 0 0

👋

1 year ago 1 0 1 0
[2 images]

We have a new toy to play with in Magnific: style reference with Mystic.

This is my very first test, using the same prompt with different reference images👇

1 year ago 0 0 0 0

Mmm, it seems like Bluesky changed the order of the images…

First v5.1
Second v5
Third v5.1 Raw

🤷🏻‍♂️

2 years ago 0 0 0 0
[3 images]

A little test with Elon Musk.

Same prompt, different versions of Midjourney 5.

“a portrait photography of Elon Musk looking at camera --seed 12345”

- Regular v5
- New v5.1
- New v5.1 RAW

#midjourney #IA

2 years ago 0 0 1 0
[1 image]

Midjourney v 5.1 is here!

#midjourney

2 years ago 4 0 1 0
[4 images]

The four styles you can try with #nijijourney!

Neutral, cute, expressive, and scenic.

Same prompt, same seed.

“A portrait of Elon Musk looking at camera --niji 5 --seed 12345”

2 years ago 0 0 0 0
[2 images]

Made with the new style in Nijijourney, "cute".

You can select it in settings.

3 years ago 0 0 0 0
Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models
"We develop Video Latent Diffusion Models (Video LDMs) for computationally efficient, high-resolution video synthesis. We first pre-train an LDM on images only; then, we turn the image generator into a video generator by introducing a temporal dimension to the latent-space diffusion model and fine-tuning on encoded image sequences, i.e., videos. We demonstrate high-resolution text-to-video synthesis and in-the-wild driving video generation."

This could be the next step in text2video. From Nvidia👇

https://research.nvidia.com/labs/toronto-ai/VideoLDM/
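The core trick in the abstract above is that the pre-trained image model's spatial layers keep treating every frame as an independent batch item, while new temporal layers regroup the same activations so information can flow along the time axis. A minimal shape-only sketch of that alternation in NumPy (the layer bodies are stand-ins, and all names and dimensions here are illustrative assumptions, not NVIDIA's implementation):

```python
import numpy as np

# A batch of 2 videos, 8 frames each, with 4 latent channels at 16x16 resolution.
B, T, C, H, W = 2, 8, 4, 16, 16
video_latents = np.random.randn(B, T, C, H, W)

# Spatial layers from the pre-trained image LDM see frames as extra batch items.
as_images = video_latents.reshape(B * T, C, H, W)
spatial_out = as_images * 1.0  # stand-in for a frozen spatial layer

# New temporal layers regroup the frames so each spatial location becomes a
# length-T sequence that can be mixed along the time axis.
as_video = spatial_out.reshape(B, T, C, H, W)
per_pixel_sequences = as_video.transpose(0, 2, 3, 4, 1).reshape(B * C * H * W, T)
temporal_out = per_pixel_sequences * 1.0  # stand-in for a trainable temporal layer

# Restore the original video layout after the temporal pass.
restored = temporal_out.reshape(B, C, H, W, T).transpose(0, 4, 1, 2, 3)
assert restored.shape == (B, T, C, H, W)
```

Because the spatial layers never see the time dimension, the image generator's weights can stay frozen while only the interleaved temporal layers are fine-tuned on videos.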

3 years ago 0 0 0 0