using Stable Diffusion feels a bit like giving instructions to a guy who just can't picture in his head what you're trying to describe to him.

i'm noticing that it can combine elements it has seen before, but if it's something new, something an artist could still picture in his head if you described it carefully enough, the model kind of breaks down.

like, if you tried to tell it to make the blue people from Avatar, it would probably struggle, because it hasn't seen blue people before.


@thor (1/2) That depends heavily on which model you're using. Most models out there are merges, and when you merge models, defects in one model get passed down. A very common defect is ignoring tokens in the prompt, like ignoring the second token, or the fifth. This is lowkey intentional. Moving up the CFG scale a little will also help with prompt adherence. Another problem is that retraining makes a model lose information, for example, if you...
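The CFG scale point can be sketched with the classifier-free guidance formula: the sampler mixes the unconditional and prompt-conditioned noise predictions, and a higher scale pushes harder toward the prompt. This is a toy numpy version, the real inputs are u-net output tensors:

```python
import numpy as np

def cfg_mix(eps_uncond, eps_cond, cfg_scale):
    # Classifier-free guidance: start from the unconditional prediction
    # and push it toward the prompt-conditioned one. A higher cfg_scale
    # makes the model follow the prompt tokens more strictly.
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)

# Stand-ins for real noise predictions from the u-net
uncond = np.array([0.0, 1.0])
cond = np.array([1.0, 0.0])

guided = cfg_mix(uncond, cond, 7.5)  # typical SD scales are roughly 7-12
```

At scale 1.0 this just returns the conditioned prediction; pushing the scale above 1 is what amplifies the prompt, which is why bumping it up can rescue a merge that has started dropping tokens.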

@thor (2/2) re-train a working model on improperly tagged pictures of corgis, now all dogs are going to start looking a little more like corgis.

@thor Loss of information also gets passed down when merging.

@thor Almost forgot: nowadays, every single model out there is a merge of some damaged model. If you want something good and up to date, you may have to do weighted u-net layer merging of older models with newer ones, and pray it all comes out right, or at the very least you'll gain more insight into what exactly you're doing.
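A rough sketch of what weighted u-net layer merging looks like. Real checkpoints are PyTorch state dicts; here plain dicts of numpy arrays stand in, and the `unet.` key prefix and helper name are made up for illustration:

```python
import numpy as np

def merge_unet_layers(old_model, new_model, alpha=0.5, prefix="unet."):
    """Interpolate only the u-net weights between two checkpoints.

    alpha = 0.0 keeps the old u-net, alpha = 1.0 keeps the new one.
    Everything outside the prefix (text encoder, VAE, ...) is taken
    from new_model unchanged.
    """
    merged = dict(new_model)
    for key, new_w in new_model.items():
        if key.startswith(prefix) and key in old_model:
            merged[key] = (1 - alpha) * old_model[key] + alpha * new_w
    return merged

# Toy checkpoints: two tensors each, only the u-net key gets blended
old = {"unet.block1": np.zeros(2), "text_encoder.w": np.ones(2)}
new = {"unet.block1": np.ones(2), "text_encoder.w": np.full(2, 2.0)}

out = merge_unet_layers(old, new, alpha=0.25)
```

In practice people vary alpha per u-net block rather than using one global weight, which is where the "pray it comes out all right" part comes in.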
