How can we change the sample model (checkpoint) when creating samples during LoRA training? #3026
Unanswered
gituser123456789000 asked this question in Q&A
Replies: 2 comments
-
Or is there some flag to add to the sample prompts that changes the checkpoint used for sampling? I know you can put an ending like --w 1024 --h 1024 --l 2.0 --s 40 on a prompt, so is there something similar for pointing at a different model?
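For context, those endings are sd-scripts' per-prompt sample options: each line of the file passed via --sample_prompts is a prompt followed by optional flags, which as I understand them are --n (negative prompt), --w/--h (width/height), --d (seed), --l (CFG scale) and --s (steps). A minimal sketch of such a file (the file name and values are just examples, so double-check against your version):

```text
a photo of a sks dog on a beach --n lowres, blurry --w 1024 --h 1024 --d 42 --l 7.0 --s 28
a close-up portrait of a sks dog --w 1024 --h 1024 --l 7.0 --s 28
```

As far as I can tell these only control generation parameters; there is no per-prompt flag for swapping the checkpoint itself, which is what this question is after.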
-
Umm, the samples actually use the training model; you can't find it because you didn't save it every epoch.
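In other words, the checkpoint used for in-training samples is simply whatever you pass as the pretrained model for training. A hedged sketch of an SDXL LoRA run (flag names as I understand them from recent kohya-ss/sd-scripts; all paths are placeholders), showing where that model is set and how to save and sample every epoch:

```bash
# Hypothetical SDXL LoRA run; the model passed to --pretrained_model_name_or_path
# is both the training base and the model used to render the sample images.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path /models/sd_xl_base_1.0.safetensors \
  --train_data_dir /data/my_dataset \
  --network_module networks.lora \
  --output_dir /output \
  --output_name my_lora \
  --save_every_n_epochs 1 \
  --sample_prompts /config/sample_prompts.txt \
  --sample_every_n_epochs 1 \
  --sample_sampler euler_a
```

Saving every epoch also gives you LoRA files you can test outside the trainer against any checkpoint you like.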
-
The samples pretty clearly seem to be using the base SDXL model, which hardly anyone who knows what they're doing would still use; they'd be using some fine-tuned checkpoint.
As a result of using the base model, the sample results often look poor and give little indication of how the LoRA will actually perform once it's created. So how do we change which checkpoint is used when creating sample images during LoRA training?
I looked through kohya's files as best I know how and didn't see the base SDXL model file; no folder seems big enough to be holding an SDXL checkpoint. So I don't even know how kohya is able to create samples in the first place without the model being somewhere in the program.
If the model is there somewhere, I'd assume it would be fairly simple to delete the base model, put a different, more current, fine-tuned model in its place, and rename it to whatever filename the code expects.
Does anyone already know, or can figure out, a solution for this?
And for a feature suggestion... hmm, now that I think about it, I suppose it uses the model you set as the training model to create the samples.
But I assume most people train the LoRA on the base model so that it works on all or most fine-tunes, even though they don't use the base model when they later use the LoRA.
So the feature suggestion would be to add an option in the Sample section to set a different model/checkpoint to use when creating samples during LoRA training. That way users can train on the base model but still get genuinely useful sample results from a checkpoint they'd actually be using the LoRA with.
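Until an option like that exists, one workaround is to save the LoRA every epoch (as suggested above) and render your own previews outside the trainer against the fine-tuned checkpoint you actually intend to use. A rough sketch with diffusers; the checkpoint path and the epoch-numbered LoRA filename are assumptions for illustration, not anything sd-scripts guarantees:

```python
# Hypothetical preview script: render a sample from an intermediate LoRA save
# using a fine-tuned SDXL checkpoint instead of the base model.
import torch
from diffusers import StableDiffusionXLPipeline

CHECKPOINT = "/models/my_finetuned_sdxl.safetensors"   # assumed fine-tune path
LORA_FILE = "/output/my_lora-000004.safetensors"       # e.g. an epoch save

pipe = StableDiffusionXLPipeline.from_single_file(
    CHECKPOINT, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(LORA_FILE)

image = pipe(
    "a photo of a sks dog on a beach",
    negative_prompt="lowres, blurry",
    width=1024,
    height=1024,
    guidance_scale=7.0,
    num_inference_steps=28,
).images[0]
image.save("preview_epoch4.png")
```

This keeps training on the base model (so the LoRA stays portable) while letting the previews reflect the checkpoint you'd actually pair it with.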