[BUG] TypeError: CPMTrainer.training_step() takes 3 positional arguments but 4 were given #698

FrancisFan98 opened this issue Dec 19, 2024 · 6 comments


@FrancisFan98

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

```
[rank0]: Traceback (most recent call last):
[rank0]:   File "/opt/MiniCPM-V-main/finetune/finetune.py", line 299, in <module>
[rank0]:     train()
[rank0]:   File "/opt/MiniCPM-V-main/finetune/finetune.py", line 289, in train
[rank0]:     trainer.train()
[rank0]:   File "/root/miniconda3/envs/transformers/lib/python3.10/site-packages/transformers/trainer.py", line 2164, in train
[rank0]:     return inner_training_loop(
[rank0]:   File "/root/miniconda3/envs/transformers/lib/python3.10/site-packages/transformers/trainer.py", line 2524, in _inner_training_loop
[rank0]:     tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank0]: TypeError: CPMTrainer.training_step() takes 3 positional arguments but 4 were given
```

I got this error when I tried to finetune the model. Does anyone know how to solve it?
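
For context: newer transformers releases (4.46+, as far as I can tell) pass an extra num_items_in_batch argument to training_step, so a custom trainer that still overrides the old two-argument signature breaks. A minimal sketch of a backward-compatible override, assuming CPMTrainer subclasses transformers.Trainer; the body here is illustrative, not the repo's actual loss computation:

```python
from transformers import Trainer

class CPMTrainer(Trainer):
    # transformers >= 4.46 calls training_step(model, inputs, num_items_in_batch);
    # older versions call it with two arguments. A keyword default keeps both working.
    def training_step(self, model, inputs, num_items_in_batch=None):
        # Illustrative only: delegate to the base implementation. The real
        # CPMTrainer computes its own loss here.
        return super().training_step(model, inputs)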

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

@Bravo5542

Same problem; it appeared recently.

@BugBugdie

Same here, has anybody solved it?

@owl-10 commented Dec 26, 2024

Same problem; it appeared recently.

@PhucNDA commented Dec 29, 2024

It is because of the transformers version:

- transformers==4.47.1: works with vLLM (used for inference)
- transformers==4.40.0: trainable (used for training)

My current solution is swapping between these two. Looking forward to more feasible approaches...
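
If it helps, here is a quick pre-flight check before launching a run. This is a hypothetical helper, not part of the repo, and it assumes the signature change landed in transformers 4.46.0:

```python
# Hypothetical pre-flight check: warn if the installed transformers will pass
# the extra num_items_in_batch argument to a custom training_step override.
import transformers
from packaging import version  # packaging ships as a transformers dependency

v = version.parse(transformers.__version__)
if v >= version.parse("4.46.0"):
    print(f"transformers {v}: Trainer passes num_items_in_batch to training_step; "
          f"patch CPMTrainer or downgrade (e.g. to 4.40.0) before training.")
else:
    print(f"transformers {v}: old two-argument training_step signature; "
          f"training should work as-is.")
```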

@yuting89830

Same problem

@PhucNDA commented Jan 2, 2025

The envs are a little bit messy, but this works for me.

Training:

```
pip uninstall vllm-flash-attn
pip uninstall xformers
pip uninstall openai
pip install -r ../requirements.txt
```

Testing:

```
pip install vllm==0.5.4
```

Flipping between these two works well for me.
