I'm running fine-tuning on an A100 40G and it reports insufficient GPU memory. Is there any way to work around this, or is 80G strictly required? The parameters are as follows:

#!/bin/bash
export CUDA_VISIBLE_DEVICES=0
GPUS_PER_NODE=1
NNODES=1
MASTER_ADDR="localhost"
MASTER_PORT=12345

OPTS=""
OPTS+=" --use-delta"
OPTS+=" --model-config ../cpm-bee-10b.json"
OPTS+=" --dataset /root/autodl-tmp/cpm_bee_training/voice_train/bin_data/adhoc_train_cpmbee0621_train_000000_0_claude2/"
#OPTS+=" --eval_dataset path/to/eval/dataset"
OPTS+=" --epoch 150"
OPTS+=" --batch-size 1"
#OPTS+=" --train-iters 100"
OPTS+=" --save-name cpm_bee_finetune"
OPTS+=" --max-length 128"
OPTS+=" --save results/"
OPTS+=" --lr 0.0001"
OPTS+=" --inspect-iters 100"
OPTS+=" --warmup-iters 1"
OPTS+=" --eval-interval 100"
OPTS+=" --early-stop-patience 5"
OPTS+=" --lr-decay-style noam"
OPTS+=" --weight-decay 0.01"
OPTS+=" --clip-grad 1.0"
OPTS+=" --loss-scale 32768"
OPTS+=" --start-step 0"
OPTS+=" --load /root/autodl-tmp/cpm-bee-10b/pytorch_model.bin"

CMD="torchrun --nnodes=${NNODES} --nproc_per_node=${GPUS_PER_NODE} ../finetune_cpm_bee.py ${OPTS}"
echo ${CMD}
$CMD
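With --batch-size 1 and --max-length 128, per-step activation memory should already be small, so the allocation failure most likely happens while loading the 10B checkpoint or setting up training state. A quick way to confirm where memory runs out (plain nvidia-smi, nothing specific to this repo) is to watch GPU memory from a second terminal as the job starts:

watch -n 1 nvidia-smi --query-gpu=memory.used,memory.total --format=csv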
You need 60GB+ even for LoRA fine-tuning.
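For context, a back-of-envelope estimate of why 40GB falls short for a 10B-parameter model. The numbers below are illustrative assumptions, not measurements of finetune_cpm_bee.py; the real footprint depends on BMTrain's internals and on whether gradient buffers for the frozen base weights are materialized during delta tuning:

# Rough VRAM math for a 10B-parameter model in fp16 (illustrative only)
PARAMS=10000000000
GIB=$((1024 * 1024 * 1024))
echo "fp16 weights:         $((PARAMS * 2 / GIB)) GiB"   # ~18 GiB
echo "weights + fp16 grads: $((PARAMS * 4 / GIB)) GiB"   # ~37 GiB

Activations, optimizer states for the delta parameters, and CUDA workspace come on top of that, so a single 40GB card ends up just short, which is consistent with the 60GB+ figure above. If more than one GPU is available, BMTrain (which CPM-Bee builds on) supports ZeRO-style partitioning across devices, though whether this particular script exposes such options is not confirmed here.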