
The size of tensor a (577) must match the size of tensor b (257) at non-singleton dimension 1 #52

Open
coolgech1978 opened this issue Jul 22, 2024 · 11 comments

Comments

@coolgech1978

Error occurred when executing MZ_IPAdapterAdvancedKolors:

The size of tensor a (577) must match the size of tensor b (257) at non-singleton dimension 1

File "/home/chawk/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/custom_nodes/ComfyUI-Kolors-MZ/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 786, in apply_ipadapter
work_model, face_image = ipadapter_execute(work_model, ipadapter_model, clip_vision, **ipa_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/custom_nodes/ComfyUI-Kolors-MZ/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 334, in ipadapter_execute
img_cond_embeds = encode_image_masked(clipvision, image, batch_size=encode_batch_size, size=image_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/custom_nodes/ComfyUI-Kolors-MZ/ComfyUI_IPAdapter_plus/utils.py", line 177, in encode_image_masked
out = clip_vision.model(pixel_values=pixel_values, intermediate_output=-2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/comfy/clip_model.py", line 192, in forward
x = self.vision_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/comfy/clip_model.py", line 178, in forward
x = self.embeddings(pixel_values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/venv/comfyvenv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chawk/ComfyUI/comfy/clip_model.py", line 160, in forward
return torch.cat([self.class_embedding.to(embeds.device).expand(pixel_values.shape[0], 1, -1), embeds], dim=1) + self.position_embedding.weight.to(embeds.device)
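For context: 577 and 257 are ViT token counts, i.e. the number of image patches plus one class token, so the mismatch means the images are being encoded at a different resolution than the loaded position embeddings expect. A minimal sketch of the arithmetic, assuming the standard CLIP patch size of 14:

# Token count = (image_size / patch_size)^2 patches + 1 class token
def clip_vision_tokens(image_size, patch_size=14):
    return (image_size // patch_size) ** 2 + 1

print(clip_vision_tokens(336))  # 577 -> a 336px encoder such as clip-vit-large-patch14-336
print(clip_vision_tokens(224))  # 257 -> a 224px CLIP vision encoder

In other words, one side of the addition in clip_model.py comes from a 336px encoding while the position embeddings cover only a 224px grid.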



@wailovet
Contributor

Show me a screenshot of your workflow.

@coolgech1978
Author

The workflows I tried are as follows:
[workflow screenshot]
kolors_ipa_workflow1.json
JSON file from ComfyUI-Kolors-MZ

[workflow screenshot (2)]
ipadapter_kolors_example_in_ipadapter_plus.json
JSON file from the latest version of ComfyUI_IPAdapter_plus

@NyxWeigh

same error here

@YacratesWyh

same

@YacratesWyh

Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; maybe you need to update your example.

@wailovet
Contributor

Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; maybe you need to update your example.

Support for the 336 CLIP model in ComfyUI has actually been merged; see comfyanonymous/ComfyUI#4042.

This problem may only occur because the ComfyUI version is not the latest.

@yatoubusha

Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; maybe you need to update your example.

I used the MZ cliploader, but the error still exists.
[screenshot]

@yatoubusha

Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; maybe you need to update your example.

Support for the 336 CLIP model in ComfyUI has actually been merged; see comfyanonymous/ComfyUI#4042.

This problem may only occur because the ComfyUI version is not the latest.

I updated the ComfyUI version and restarted, but the error still exists. Should I load "openai/clip-vit-large-patch14-336" as the image encoder?

@wailovet
Contributor

Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; maybe you need to update your example.

Support for the 336 CLIP model in ComfyUI has actually been merged; see comfyanonymous/ComfyUI#4042.
This problem may only occur because the ComfyUI version is not the latest.

I updated the ComfyUI version and restarted, but the error still exists. Should I load "openai/clip-vit-large-patch14-336" as the image encoder?

The CLIP image encoder should be taken from pytorch_model.bin.

This is the official model repository of KolorsIPA.
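If it is unclear which resolution a given pytorch_model.bin expects, one way to check is to inspect the position-embedding shape stored in the checkpoint. A minimal sketch, with the file path and key-name filter as assumptions based on typical CLIP ViT checkpoints:

import torch

# Path is a placeholder; point it at the downloaded image-encoder checkpoint.
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# A 577-row position embedding corresponds to a 336px / patch-14 encoder,
# a 257-row one to a 224px encoder.
for key, tensor in state_dict.items():
    if "position_embedding" in key and hasattr(tensor, "shape"):
        print(key, tuple(tensor.shape))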

@yatoubusha

Solved: you need to use his CLIP loader instead of the default one. Maybe something is different in the implementation; maybe you need to update your example.

I used the MZ cliploader, but the error still exists. [screenshot]

Solved: you can use the "workflow_ipa_legacy.png" workflow. All related loaders must be changed to the author's own.

@coolgech1978
Author

There are some differences between the two files named 'pytorch_model.bin': one was downloaded from Hugging Face, the other from hf-mirror.
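A quick way to compare the two downloads is by checksum; a minimal sketch, with the two file paths as hypothetical placeholders for wherever the copies were saved:

import hashlib

def sha256sum(path, chunk_size=1 << 20):
    # Stream the file so large checkpoints don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local copies of the two downloads.
print(sha256sum("pytorch_model_huggingface.bin"))
print(sha256sum("pytorch_model_hf_mirror.bin"))

If the hashes match, the two files are byte-identical despite coming from different mirrors.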
