Sneeze Fetish Forum

Stable Diffusion Technique - "Easy" sneezify an existing image


baron_von_gotta


Here's a repeatable technique I discovered to convert an existing image into a sneezy one. I realized I was psyching myself out with my previous techniques: I'd sink hours into an image trying to get every detail right, and end up with something so overproduced that it wasn't any good. At least for me, in order to make something good, the AI has to do it right with relatively few human inputs.

So, I developed a simple technique to take an existing image and make it sneezy. Here are the results:

[image: GfwEDnp.png]

[image: OJ8zFNn.png]

[image: RxqUKGt.png]

[image: HY8kREL.png]

[image: kEs33La.png]

[image: pn5X3RK.png]

It's not perfect; hopefully one day AI will be able to add a sneeze face to an image without messing with the details too much. The process can probably be refined to get better results even with existing technology. I've also switched back to the SamDoesSexy model; I like that the cartoonish style is somewhat "forgiving" of minor mistakes by the AI.

So, without further ado, here's the process:

1. Get the WebUI working with ControlNet. This is the "Easy" part that I referenced in the title. Technically all the information can be found here: https://github.com/Mikubill/sd-webui-controlnet. It's not terribly easy to follow this guide though. If you're looking to get this set up on your own, I'll be happy to answer any questions.

2. Install the "depth" ControlNet model.

3. Get a perfectly square source image. I grabbed photos from Instagram, and used GIMP to crop them to the right size. If your image isn't perfectly square, Stable Diffusion will crop it for you, and it won't always do a good job:

[image: kEs33La.png]
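
If you'd rather not crop by hand in GIMP, a few lines of Python will do the same center crop. This is just a sketch using Pillow; the filenames are placeholders.

    # center_crop.py -- minimal center-crop-to-square sketch using Pillow
    # (pip install Pillow). Filenames are placeholders.
    from PIL import Image

    def center_crop_square(path_in, path_out):
        img = Image.open(path_in)
        side = min(img.size)                       # length of the shorter edge
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        img.crop((left, top, left + side, top + side)).save(path_out)

    center_crop_square("source.jpg", "source_square.png")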

4. Open the WebUI and make sure you're on the "txt2img" tab. Ensure "Restore Faces" is on.

5. Open the ControlNet subpanel. "Enable" it. Select "depth" for the preprocessor. For the model file, select the model you installed in step 2. 

6. Click "Preview Annotator Result". This step isn't strictly necessary, but it will let you know that your inputs are working. If you did it right, you should see something like this:

[image: YJjNX4F.png]

7. For the prompt, try your best to describe the image, focusing on whatever details are important to you. For this image, I used "beautiful brunette, leaning against wall, baseball cap, enormous ass, tight white jeans". Then add sneezing stuff, with high emphasis. I used "((((desperate sneeze, terrible allergies, sneezing violently, eyes closed, angry scream sneeze))))". Add a negative prompt; I used: "deformed, bad anatomy, disfigured, poorly drawn face, mutation, mutated, extra limb, ugly, disgusting, poorly drawn hands, missing limb, floating limbs, disconnected limbs, malformed hands, blurry, ((((mutated hands and fingers)))), watermark, watermarked, oversaturated, censored, distorted hands, amputation, missing hands, obese, doubled face, double hands, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, mutation, deformed, dehydrated, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck"
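
A quick note on all those parentheses: in the WebUI's prompt syntax, each pair of parentheses bumps the attention on the enclosed text by a factor of about 1.1, so four pairs work out to roughly 1.1^4 ≈ 1.46. You can also write the weight explicitly, e.g. "(desperate sneeze, terrible allergies, sneezing violently, eyes closed, angry scream sneeze:1.46)", which does the same thing.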

The prompting is probably one place where this technique could really be improved. I'll play around with it more at some point, I'm sure.

8. Pick a seed. Run it with the Euler a sampler, 8 steps, and a batch count of 16. This will create 16 images to play with:
[image: QsrJrSy.jpg]
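
As an aside: if you'd rather drive steps 4-8 from a script than click through the UI, the WebUI exposes an API when you launch it with --api. Here's a rough sketch of the same settings as a request. The ControlNet model string and filenames are placeholders (GET /controlnet/model_list tells you the exact model names on your install), and depending on your ControlNet version the image key may be "image" instead of "input_image".

    # txt2img_sketch.py -- rough sketch of steps 4-8 against the WebUI API.
    # Launch the WebUI with --api first. Filenames and the ControlNet model
    # string are placeholders; copy the real model name from your install.
    import base64
    import requests

    URL = "http://127.0.0.1:7860"

    with open("source_square.png", "rb") as f:
        source_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "prompt": ("beautiful brunette, leaning against wall, baseball cap, "
                   "enormous ass, tight white jeans, "
                   "((((desperate sneeze, terrible allergies, sneezing violently, "
                   "eyes closed, angry scream sneeze))))"),
        "negative_prompt": "deformed, bad anatomy, ...",  # the full list from step 7
        "sampler_name": "Euler a",
        "steps": 8,
        "batch_size": 1,
        "n_iter": 16,            # "batch count" in the UI
        "restore_faces": True,
        "seed": -1,              # -1 = random
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": source_b64,
                    "module": "depth",
                    "model": "control_sd15_depth [fef5e48e]",  # placeholder name
                }]
            }
        },
    }

    r = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload)
    for i, img in enumerate(r.json()["images"]):
        with open(f"candidate_{i:02}.png", "wb") as f:
            f.write(base64.b64decode(img))

Depending on your settings, the returned list may also include the depth map ControlNet used, so don't be surprised by an extra image.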

9. Choose one. Focus on one that's in a good pose and has something resembling a face. I chose the third row, first column:

[image: EnqifIq.png]

10. Type that image's seed in to overwrite your current seed. Change the sampler to "Heun", the batch count to 1, and the step count to 60:

[image: T8Uz4Dd.png]
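
If you're using the API sketch from step 8, this step is just a few fields changed on the same payload. The seed below is made up; use the one from the image you picked (it's shown in the image's generation parameters / PNG info).

    # Re-run the candidate you picked at higher quality: reuse its seed and
    # change only the sampler, step count, and batch count.
    payload.update({
        "seed": 1234567890,        # placeholder -- seed of the candidate you picked
        "sampler_name": "Heun",
        "steps": 60,
        "n_iter": 1,
    })
    r = requests.post(f"{URL}/sdapi/v1/txt2img", json=payload)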

11. OK, making some progress. Now click "Send to inpaint". Paint over just the facial details, grabbing as much of the face as you can without grabbing anything outside it. Essentially: DO grab the eyes, nose, and mouth; DON'T grab the ears, hair, hat, or neck.

[image: JJJgsVz.png]

12. For the inpainting settings, ensure "Restore Faces" is on and the inpaint area is set to "Only masked". Use the Euler a sampler, a batch count of 16, and a step count of 8:

[image: ojoIv91.png]
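
For the script-minded, inpainting goes through the img2img endpoint instead, with a mask image that's white where you painted and black everywhere else. The field names below are my best reading of the API, and the values the post doesn't specify are guesses at the UI defaults, so treat it as a sketch. Note there are no ControlNet args here (see my follow-up below).

    # inpaint_sketch.py -- rough sketch of steps 11-12 via the img2img endpoint.
    # base_image.png is the step-10 result; mask.png is white over the eyes,
    # nose, and mouth and black elsewhere. Filenames are placeholders.
    import base64
    import requests

    URL = "http://127.0.0.1:7860"

    def b64(path):
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [b64("base_image.png")],
        "mask": b64("mask.png"),
        # Same prompt/negative as step 7 -- a guess; "Send to inpaint" carries
        # the prompt over in the UI anyway.
        "prompt": ("beautiful brunette, leaning against wall, baseball cap, "
                   "enormous ass, tight white jeans, "
                   "((((desperate sneeze, terrible allergies, sneezing violently, "
                   "eyes closed, angry scream sneeze))))"),
        "negative_prompt": "deformed, bad anatomy, ...",  # the full list from step 7
        "sampler_name": "Euler a",
        "steps": 8,
        "n_iter": 16,
        "restore_faces": True,
        "inpaint_full_res": True,      # "Only masked" in the UI
        "inpainting_fill": 1,          # 1 = keep original content under the mask
        "mask_blur": 4,                # UI default
        "denoising_strength": 0.75,    # UI default; the post doesn't specify
    }

    r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload)
    for i, img in enumerate(r.json()["images"]):
        with open(f"inpaint_{i:02}.png", "wb") as f:
            f.write(base64.b64decode(img))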

13. Pick your favorite, focusing on proportions. As you can see, a lot of them don't really follow the contours of the face. I chose row 4, column 2:

[image: jO0at41.png]

14. Overwrite your seed with that image's seed, and switch to Heun, 60 steps, batch count 1. Sometimes this produces a better result, sometimes not; looking at it now, I might have stuck with the Euler a version.

[image: 81Ko6Xm.png]

15. Click "Send to extras" and choose "Scale to" -> 2196x2196. Choose the R-ESRGAN 4x+ Anime6B upscaler (at least for SamDoesSexy).
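
And the same step via the API, if you're scripting: the extras endpoint takes one image and an upscaler name that has to match the dropdown exactly. Again a sketch, with placeholder filenames.

    # upscale_sketch.py -- rough sketch of step 15 via the extras endpoint.
    import base64
    import requests

    URL = "http://127.0.0.1:7860"

    with open("inpaint_final.png", "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()

    payload = {
        "image": img_b64,
        "resize_mode": 1,                 # 1 = "Scale to" (explicit width/height)
        "upscaling_resize_w": 2196,
        "upscaling_resize_h": 2196,
        "upscaler_1": "R-ESRGAN 4x+ Anime6B",   # copy the exact string from the dropdown
    }

    r = requests.post(f"{URL}/sdapi/v1/extra-single-image", json=payload)
    with open("final.png", "wb") as f:
        f.write(base64.b64decode(r.json()["image"]))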

16. And, done:

[image: pn5X3RK.png]


Oh, I forgot to mention something important: ControlNet is OFF once you switch to inpainting. This should happen automatically, since the "Send to inpaint" button doesn't copy your ControlNet settings.


This is a good workflow! Thanks for sharing. I'm trying to do something similar with the pose editors in ControlNet and some online tools that use posable 3D human models. It might work better than me trying to crudely sketch something with my mouse...

Keep up the learning and keep sharing the great work!


I'd like a method that's laptop-friendly, though. I can't run the client UI or ControlNet because I don't meet the requirements.
