[feat] Add strength to flux_fill pipeline (denoising strength for FluxFill) #10603
base: main
Conversation
I've left some comments. However, we should note that Flux Fill is intended to fill areas based on a text description and not for object removal.
 self.mask_processor = VaeImageProcessor(
     vae_scale_factor=self.vae_scale_factor * 2,
-    vae_latent_channels=latent_channels,
+    vae_latent_channels=self.vae.config.latent_channels,
We need to use latent_channels here, not the config directly. This allows pipelines to be used without the component, e.g. FluxFillPipeline(vae=None, ...).
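For reference, a minimal sketch of the pattern hlky is describing, assuming the usual diffusers fallback of 16 latent channels when no VAE is passed (the exact kwargs may differ from the real FluxFillPipeline.__init__):

```python
# Sketch only: derive latent_channels once, with a fallback, so the pipeline
# can still be constructed without the component, e.g. FluxFillPipeline(vae=None, ...).
latent_channels = self.vae.config.latent_channels if getattr(self, "vae", None) else 16

self.mask_processor = VaeImageProcessor(
    vae_scale_factor=self.vae_scale_factor * 2,
    vae_latent_channels=latent_channels,  # local variable, not self.vae.config directly
    do_normalize=False,
    do_binarize=True,
    do_convert_grayscale=True,
)
```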
I didn't know that. Thanks, I changed it. :)
@@ -627,6 +659,8 @@ def disable_vae_tiling(self):
    # Copied from diffusers.pipelines.flux.pipeline_flux.FluxPipeline.prepare_latents
Can we copy from FluxImg2ImgPipeline.prepare_latents or FluxInpaintPipeline.prepare_latents?
Sure, that would be cleaner. Thanks for the review :)
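For reference, the core of FluxImg2ImgPipeline.prepare_latents that the copy would bring in is roughly the following (a sketch, assuming the flow-matching scheduler's scale_noise is used to partially noise the encoded image; variable names mirror the existing pipelines, not the exact diff):

```python
# Sketch: instead of starting from pure noise, encode the input image and
# noise it only up to the first retained timestep, so `strength` controls
# how much of the original image is preserved.
image_latents = self._encode_vae_image(image=image, generator=generator)
noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
# scale_noise mixes image latents and noise according to the sigma at
# `timestep` (the first timestep kept after truncating the schedule).
latents = self.scheduler.scale_noise(image_latents, timestep, noise)
latents = self._pack_latents(latents, batch_size, num_channels_latents, height, width)
```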
@@ -809,6 +866,10 @@ def __call__(
    self._joint_attention_kwargs = joint_attention_kwargs
    self._interrupt = False

+   original_image = image
Unused?
Yep :) Maybe that came from the SDXL inpaint pipeline, but it is not used in this pipeline.
@@ -855,13 +952,13 @@ def __call__(
    if masked_image_latents is not None:
This should go above # 6. Prepare mask and masked image latents.
Thank you for the fast and kind review, I'll revise it and test :)
Co-authored-by: hlky <[email protected]>
Hmm, maybe I missed the intention then ;) @hlky Thanks for the review again, I changed the code that you reviewed.
Hi, you're saying you used lama to remove the object before doing this? If that's what you did, you're essentially using this pipeline as an img2img one and the model as just a refiner. The original Fill model was trained to do this without needing to change the strength, and this follows the original implementation. I'm not saying that what you're doing is wrong, though; people have used Redux in more creative ways and it works alright, but I want to understand your use case, since I haven't encountered the need to lower the strength to make it work. Did you try what you're doing here with just the regular Flux model and the img2img pipeline?
Thanks for your response :) I tried with an SDXL model and the img2img pipeline, but not Flux img2img. Since I use this pipeline for image editing, I want a single pipeline that works as an object adder or outpainter but also as an object remover. If I have to handle my case with the Flux img2img pipeline I can do that, but as you know the Flux model is quite large. If I can use the FluxFill pipeline with strength, I can cover it without loading another large model.
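If the change lands, usage would presumably look something like this (hypothetical example; file names and values are illustrative, and strength is the parameter this PR proposes):

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("lama_cleaned.png")   # image already pre-filled with lama
mask = load_image("object_mask.png")     # white where the object used to be

result = pipe(
    prompt="clean empty background",
    image=image,
    mask_image=mask,
    strength=0.6,            # proposed parameter: lower keeps more of the input
    num_inference_steps=30,
    guidance_scale=30.0,
).images[0]
result.save("removed.png")
```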
@Suprhimp To confirm, you're using lama cleaner to remove the object first?
@hlky Yep, I used lama cleaner beforehand in both cases (my pipeline and the default pipeline).
Lama with the refiner is very good at removing objects. I thought the whole point of Flux Fill was to replace or insert an object. Big difference there.
What does this PR do?
Allows the FluxFill pipeline to accept a denoising strength parameter.
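For context, strength in diffusers img2img-style pipelines (e.g. FluxImg2ImgPipeline.get_timesteps) typically truncates the timestep schedule as sketched below; this PR follows the same convention (a sketch, not the exact diff):

```python
def get_timesteps(self, num_inference_steps, strength, device):
    # strength=1.0 keeps the full schedule (equivalent to today's behavior);
    # strength=0.6 skips the first 40% of steps, so denoising starts from a
    # partially noised version of the input image instead of pure noise.
    init_timestep = min(num_inference_steps * strength, num_inference_steps)
    t_start = int(max(num_inference_steps - init_timestep, 0))
    timesteps = self.scheduler.timesteps[t_start * self.scheduler.order :]
    if hasattr(self.scheduler, "set_begin_index"):
        self.scheduler.set_begin_index(t_start * self.scheduler.order)
    return timesteps, num_inference_steps - t_start
```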
Who can review?
@yiyixuxu and @asomoza
I use the FluxFill pipeline to edit images and to outpaint. Yes, it works very well, but I don't agree with this line in the docs: "Flux Fill pipeline does not require strength as an input like regular inpainting pipelines." When removing an object from an image, we need a denoising strength, I think; without it I can't get a clean image with the object removed.
Let me give you an example I tested.
This is the mask and the original image that I want to edit.
And this is the result with a denoising strength of 0.6 (revised pipeline) versus none (default pipeline).
I applied lama inpainting first in both cases to remove the object more cleanly, but the outputs are different.
As you can see, with denoising strength we have more control over the image quality, so I think there is no reason not to support denoising strength in the FluxFill pipeline.
I changed the FluxFill pipeline code with the SDXL inpaint pipeline as a reference, so there may be things that still need to be fixed.
Thanks for reading :)