- sft Parameters
- dpo Parameters
- merge-lora infer Parameters
- export Parameters
- eval Parameters
- app-ui Parameters
- deploy Parameters
- web-ui Parameters

## sft Parameters
- `--model_type`: Represents the selected model type, default is `None`. `model_type` specifies the default `lora_target_modules`, `template_type`, and other information for the corresponding model. You can fine-tune by specifying only `model_type`; the corresponding `model_id_or_path` will use the default settings, and the model will be downloaded from ModelScope into the default cache path. One of `model_type` and `model_id_or_path` must be specified. The list of available `model_type` values can be found here. You can set the `USE_HF` environment variable to download models and datasets from the HF Hub instead; see the HuggingFace Ecosystem Compatibility documentation.
- `--model_id_or_path`: Represents the `model_id` on the ModelScope/HuggingFace Hub, or a local path to the model, default is `None`. If the provided `model_id_or_path` has already been registered, the `model_type` will be inferred from it. If it has not been registered, both `model_type` and `model_id_or_path` must be specified, e.g. `--model_type <model_type> --model_id_or_path <model_id_or_path>`.
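For example, the two ways of selecting a model described above can be combined into a minimal fine-tuning command (a sketch; `qwen-7b-chat` and `alpaca-zh` are placeholder model/dataset names, and the ModelScope id `qwen/Qwen-7B-Chat` is illustrative):

```bash
# Select a registered model by type; model files are fetched from ModelScope.
swift sft --model_type qwen-7b-chat --dataset alpaca-zh

# Equivalent, but pointing at an explicit model id or local path.
swift sft --model_id_or_path qwen/Qwen-7B-Chat --dataset alpaca-zh
```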
- `--model_revision`: The revision of `model_id` on the ModelScope Hub, default is `None`. If `model_revision` is `None`, the revision registered in `MODEL_MAPPING` is used; otherwise, the `model_revision` passed on the command line is used.
- `--local_repo_path`: Some models rely on a GitHub repo for loading. To avoid network issues during `git clone`, you can use a local repo directly. This parameter takes the local repo path and defaults to `None`. These models include:
  - mPLUG-Owl model: https://github.com/X-PLUG/mPLUG-Owl
  - DeepSeek-VL model: https://github.com/deepseek-ai/DeepSeek-VL
  - YI-VL model: https://github.com/01-ai/Yi
  - LLAVA model: https://github.com/haotian-liu/LLaVA.git
- `--sft_type`: Fine-tuning method, default is `'lora'`. Options: 'lora', 'full', 'longlora', 'adalora', 'ia3', 'llamapro', 'adapter', 'vera', 'boft'. If using qlora, you need to set `--sft_type lora --quantization_bit 4`.
- `--packing`: Pack the dataset samples to `max_length`, default `False`.
- `--freeze_parameters`: When sft_type is 'full', freeze the bottommost parameters of the model. Range: 0. ~ 1., default is `0.`. This provides a compromise between lora and full fine-tuning.
- `--additional_trainable_parameters`: Parameters trained in addition to freeze_parameters; only allowed when sft_type is 'full'. Default is `[]`. For example, if you want to train the embedding layer in addition to 50% of the parameters, you can set `--freeze_parameters 0.5 --additional_trainable_parameters transformer.wte`; all parameters whose names start with `transformer.wte` will be activated. You can also set `--freeze_parameters 1 --additional_trainable_parameters xxx` to customize the trainable layers, as shown in the sketch below.
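A minimal sketch of partial-parameter full fine-tuning using the flags above (model and dataset names are placeholders; `transformer.wte` is the embedding prefix from the example above):

```bash
# Full fine-tuning, but freeze the bottom 50% of the model and
# additionally unfreeze every parameter whose name starts with transformer.wte.
swift sft \
  --model_type qwen-7b-chat \
  --sft_type full \
  --freeze_parameters 0.5 \
  --additional_trainable_parameters transformer.wte \
  --dataset alpaca-zh
```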
- `--tuner_backend`: Backend support for lora, qlora, default is `'peft'`. Options: 'swift', 'peft', 'unsloth'.
- `--template_type`: Type of dialogue template used, default is `'AUTO'`, i.e. look up `template` in `MODEL_MAPPING` based on `model_type`. Available `template_type` options can be found in `TEMPLATE_MAPPING.keys()`.
- `--output_dir`: Directory to store checkpoints, default is `'output'`. We will append `model_type` and a fine-tuning version number to this directory, allowing users to run multiple comparative experiments on different models without changing the `output_dir` command line argument. If you don't want this suffix appended, specify `--add_output_dir_suffix false`.
- `--add_output_dir_suffix`: Default is `True`, indicating that a suffix of `model_type` and fine-tuning version number will be appended to the `output_dir` directory. Set to `False` to avoid this behavior.
- `--ddp_backend`: Backend support for distributed training, default is `None`. Options: 'nccl', 'gloo', 'mpi', 'ccl'.
- `--seed`: Global seed, default is `42`. Used to reproduce training results.
- `--resume_from_checkpoint`: Used for resuming training from a checkpoint, default is `None`. You can set it to the path of a checkpoint, e.g. `--resume_from_checkpoint output/qwen-7b-chat/vx-xxx/checkpoint-xxx`, to resume training from that point. Supports `--resume_only_model` to read only the model file when resuming.
- `--resume_only_model`: Default is `False`, which means strict checkpoint resumption: the weights of the model, optimizer and lr_scheduler, as well as the random seeds stored on each device, are read, and training continues from the last paused step. If set to `True`, only the model weights are read.
- `--dtype`: torch_dtype when loading the base model, default is `'AUTO'`, i.e. intelligently select the dtype: if the machine does not support bf16, use fp16; if `MODEL_MAPPING` specifies a torch_dtype for the model, use that dtype; otherwise use bf16. Options: 'bf16', 'fp16', 'fp32'.
- `--dataset`: Used to select the training dataset, default is `[]`. The list of available datasets can be found here. To train with multiple datasets, separate them with ',' or ' ', e.g. `--dataset alpaca-en,alpaca-zh` or `--dataset alpaca-en alpaca-zh`. Supports ModelScope Hub/HuggingFace Hub/local paths, subset selection, and dataset sampling. The format for each dataset is: `[HF or MS::]{dataset_name} or {dataset_id} or {dataset_path}[:subset1/subset2/...][#dataset_sample]`. The simplest case requires specifying only dataset_name, dataset_id, or dataset_path. Customizing datasets is described in the Customizing and Extending Datasets document. See the sketch after this list for combined examples.
  - Supports the MS and HF hubs, as well as dataset_sample. For example, 'MS::alpaca-zh#2000', 'HF::jd-sentiment-zh#2000' (which hub is used by default is controlled by the `USE_HF` environment variable; the default is MS).
  - More fine-grained control over subsets: by default the subsets specified during registration are used (if none were specified, the subset 'default' is used), e.g. 'sharegpt-gpt4'. If subsets are specified, separated by '/', the corresponding subsets of the dataset are used, e.g. 'sharegpt-gpt4:default/V3_format#2000'.
  - Supports dataset_id. For example, 'AI-ModelScope/alpaca-gpt4-data-zh#2000', 'HF::llm-wizard/alpaca-gpt4-data-zh#2000', 'hurner/alpaca-gpt4-data-zh#2000', 'HF::shibing624/alpaca-zh#2000'. If the dataset_id has been registered, the preprocessing function, subsets, split, etc. specified during registration are used. Otherwise, `SmartPreprocessor` is used, which supports 5 dataset formats, with subset 'default' and split 'train'. The supported dataset formats can be found in the Customizing and Extending Datasets document.
  - Supports dataset_path. For example, '1.jsonl#5000' (a relative path is relative to the running directory).
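Putting the pieces of the format together (a sketch; the dataset names come from the examples above, and `1.jsonl` stands in for any local file):

```bash
# Mix a registered dataset, a hub dataset sampled to 2000 rows,
# a specific subset selection, and a local jsonl file in one run.
swift sft \
  --model_type qwen-7b-chat \
  --dataset alpaca-zh \
            HF::shibing624/alpaca-zh#2000 \
            sharegpt-gpt4:default/V3_format#2000 \
            1.jsonl#5000
```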
- `--val_dataset`: Specify separate validation datasets in the same format as the `dataset` argument, default is `[]`. If `val_dataset` is used, `dataset_test_ratio` will be ignored.
- `--dataset_seed`: Seed for dataset processing, default is `42`. Exists as a random_state and does not affect the global seed.
- `--dataset_test_ratio`: Ratio used to split each sub-dataset into training and validation sets. Default is `0.01`. If `--val_dataset` is set, this parameter has no effect.
- `--train_dataset_sample`: Number of samples from the training dataset, default is `-1`, which means using the complete training dataset. This parameter is deprecated; use `--dataset {dataset_name}#{dataset_sample}` instead.
- `--val_dataset_sample`: Used to sample the validation set, default is `None`, which automatically selects a suitable number of samples for validation. If you specify `-1`, the complete validation set is used. This parameter is deprecated; the number of validation samples is controlled by `--dataset_test_ratio` or `--val_dataset {dataset_name}#{dataset_sample}`.
- `--system`: System prompt used in the dialogue template, default is `None`, i.e. use the model's default system. If set to '', no system is used.
- `--tools_prompt`: Selects the tools system prompt for the tools field transformation. Options are ['react_en', 'react_zh', 'toolbench'], corresponding to the English ReAct format, the Chinese ReAct format, and the toolbench format, respectively. Default is the English ReAct format. For more information, refer to the Agent Deployment Best Practices.
- `--max_length`: Maximum token length, default is `2048`. Avoids OOM issues caused by individual overly long samples. When `--truncation_strategy delete` is specified, samples exceeding max_length are deleted. When `--truncation_strategy truncation_left` is specified, the leftmost tokens are truncated: `input_ids[-max_length:]`. If set to -1, there is no limit.
- `--truncation_strategy`: Default is `'delete'`, which removes samples exceeding max_length from the dataset. `'truncation_left'` truncates excess text from the left, which may truncate special tokens and affect performance; it is not recommended.
- `--check_dataset_strategy`: Default is `'none'`, i.e. no checking. If training an LLM, `'warning'` is recommended as the data check strategy. If your training target is sentence classification or similar, `'none'` is recommended.
- `--custom_train_dataset_path`: Default is `[]`. This parameter has been deprecated; use `--dataset {dataset_path}`.
- `--custom_val_dataset_path`: Default is `[]`. This parameter has been deprecated; use `--val_dataset {dataset_path}` instead.
- `--self_cognition_sample`: Number of samples from the self-cognition dataset. Default is `0`. If set to a value >0, you must also specify `--model_name` and `--model_author`. This parameter has been deprecated; use `--dataset self-cognition#{self_cognition_sample}` instead.
- `--model_name`: Default is `[None, None]`. If self-cognition dataset sampling is enabled (i.e., `--dataset self-cognition` is specified or self_cognition_sample>0), you need to provide two values, representing the Chinese and English names of the model, respectively. For example: `--model_name 小黄 'Xiao Huang'`. To learn more, refer to the Self-Cognition Fine-tuning Best Practices.
- `--model_author`: Default is `[None, None]`. If self-cognition dataset sampling is enabled, you need to provide two values, representing the author's Chinese and English names, respectively. For example: `--model_author 魔搭 ModelScope`.
- `--quant_method`: Quantization method, default is `None`. Options: 'bnb', 'hqq', 'eetq'.
- `--quantization_bit`: Specifies whether to quantize and the number of quantization bits, default is `0`, i.e. no quantization. To use 4bit qlora, set `--sft_type lora --quantization_bit 4`. hqq supports 1, 2, 3, 4, 8 bits; bnb supports 4, 8 bits. A qlora sketch follows this group of parameters.
- `--hqq_axis`: hqq argument; the axis along which grouping is performed. Supported values are 0 or 1, default is `0`.
- `--hqq_dynamic_config_path`: Parameters for dynamic configuration. The key is the name tag of the layer and the value is a quantization config. If set, each layer specified by its id uses its dedicated quantization configuration (see ref).
- `--bnb_4bit_comp_dtype`: When doing 4bit quantization, dequantization is needed during the model's forward and backward passes. This specifies the torch_dtype after dequantization. Default is `'AUTO'`, i.e. consistent with `dtype`. Options: 'fp16', 'bf16', 'fp32'. Has no effect when quantization_bit is 0.
- `--bnb_4bit_quant_type`: Quantization method for 4bit quantization, default is `'nf4'`. Options: 'nf4', 'fp4'. Has no effect when quantization_bit is 0.
- `--bnb_4bit_use_double_quant`: Whether to enable double quantization for 4bit quantization, default is `True`. Has no effect when quantization_bit is 0.
- `--bnb_4bit_quant_storage`: Default is `None`. Sets the storage type used to pack the quantized 4-bit parameters. Has no effect when quantization_bit is 0.
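A minimal qlora sketch combining the flags above (model and dataset names are placeholders):

```bash
# 4bit qlora: lora fine-tuning on top of a bnb 4bit-quantized base model.
swift sft \
  --model_type qwen-7b-chat \
  --sft_type lora \
  --quantization_bit 4 \
  --bnb_4bit_comp_dtype AUTO \
  --dataset alpaca-zh
```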
- `--lora_target_modules`: Specify the lora modules, default is `['DEFAULT']`. If lora_target_modules is `'DEFAULT'` or `'AUTO'`, look up `lora_target_modules` in `MODEL_MAPPING` based on `model_type` (by default this specifies qkv). If `'ALL'` is passed, all Linear layers (excluding the head) are specified as lora modules. If `'EMBEDDING'` is passed, the Embedding layer is specified as a lora module. If memory allows, setting 'ALL' is recommended. You can also set `['ALL', 'EMBEDDING']` to specify all Linear and embedding layers as lora modules. This parameter only takes effect when `sft_type` is 'lora'.
- `--lora_target_regex`: The lora target regex, an `Optional[str]`, default is `None`. If this argument is specified, `lora_target_modules` has no effect.
- `--lora_rank`: Default is `8`. Only takes effect when `sft_type` is 'lora'.
- `--lora_alpha`: Default is `32`. Only takes effect when `sft_type` is 'lora'.
- `--lora_dropout_p`: Default is `0.05`. Only takes effect when `sft_type` is 'lora'.
- `--init_lora_weights`: Method to initialize LoRA weights; can be `true`, `false`, `gaussian`, `pissa`, or `pissa_niter_[number of iters]`. Default is `true`.
- `--lora_bias_trainable`: Default is `'none'`, options: 'none', 'all'. Set to `'all'` to make all biases trainable.
- `--lora_modules_to_save`: Default is `[]`. If you want to train embedding, lm_head, or layer_norm, you can set this parameter, e.g. `--lora_modules_to_save EMBEDDING LN lm_head`. If `'EMBEDDING'` is passed, the Embedding layer is added to `lora_modules_to_save`. If `'LN'` is passed, `RMSNorm` and `LayerNorm` are added to `lora_modules_to_save`.
- `--lora_dtype`: Default is `'AUTO'`; specifies the dtype of the lora modules. If `AUTO`, follow the dtype of the original module. Options: 'fp16', 'bf16', 'fp32', 'AUTO'.
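A lora configuration sketch using these flags (placeholder model/dataset; the rank and alpha are the documented defaults, spelled out for clarity):

```bash
# Apply lora to every Linear layer, and additionally train the
# embedding and normalization layers alongside lm_head.
swift sft \
  --model_type qwen-7b-chat \
  --sft_type lora \
  --lora_target_modules ALL \
  --lora_rank 8 \
  --lora_alpha 32 \
  --lora_modules_to_save EMBEDDING LN lm_head \
  --dataset alpaca-zh
```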
- `--use_dora`: Default is `False`; whether to use `DoRA`.
- `--use_rslora`: Default is `False`; whether to use `RS-LoRA`.
- `--neftune_noise_alpha`: The noise coefficient added by `NEFTune`, which can improve instruction fine-tuning performance; default is `None`. Usually set to 5, 10, or 15. See the related paper.
- `--neftune_backend`: The backend of `NEFTune`; defaults to the `transformers` library. It may encounter incompatibilities when training VL models, in which case specifying `swift` is recommended.
- `--gradient_checkpointing`: Whether to enable gradient checkpointing, default is `True`. This saves memory at a slight cost in training speed, and is especially effective when max_length and batch_size are large.
- `--deepspeed`: Specifies the path to a deepspeed configuration file, or configuration information directly in JSON format; default is `None`, i.e. deepspeed is not enabled. Deepspeed can save memory. Default ZeRO-2 and ZeRO-3 configuration files are provided: specify 'default-zero2' to use the default zero2 config file, or 'default-zero3' to use the default zero3 config file.
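For instance, enabling the bundled ZeRO-2 configuration on a multi-GPU machine (a sketch; the `NPROC_PER_NODE` environment variable follows the convention used in swift's training examples):

```bash
# Data-parallel training over 4 GPUs with the default ZeRO-2 config.
NPROC_PER_NODE=4 swift sft \
  --model_type qwen-7b-chat \
  --dataset alpaca-zh \
  --deepspeed default-zero2
```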
- `--batch_size`: Batch size during training, default is `1`. Increasing batch_size can improve GPU utilization, but won't necessarily improve training speed, because within a batch, shorter sentences must be padded to the length of the longest sentence in the batch, introducing wasted computation.
- `--eval_batch_size`: Batch size during evaluation, default is `None`: set to 1 when `predict_with_generate` is True, and to `batch_size` when it is False.
- `--num_train_epochs`: Number of epochs to train, default is `1`. If `max_steps >= 0`, it overrides `num_train_epochs`. Usually set to 3 ~ 5.
- `--max_steps`: Max steps for training, default is `-1`. If `max_steps >= 0`, it overrides `num_train_epochs`.
- `--optim`: Default is `'adamw_torch'`.
- `--adam_beta1`: Default is `0.9`.
- `--adam_beta2`: Default is `0.999`.
- `--adam_epsilon`: Default is `1e-8`.
- `--learning_rate`: Default is `None`: set to 1e-4 if `sft_type` is lora, and to 1e-5 if `sft_type` is full.
- `--weight_decay`: Default is `0.01`.
- `--gradient_accumulation_steps`: Gradient accumulation, default is `None`, which sets it to `math.ceil(16 / self.batch_size / world_size)`. `total_batch_size = batch_size * gradient_accumulation_steps * world_size`. For example, with batch_size 1 on 4 GPUs, the default is ceil(16 / 1 / 4) = 4, giving a total batch size of 16.
- `--max_grad_norm`: Gradient clipping, default is `0.5`.
- `--predict_with_generate`: Whether to use generation for evaluation, default is `False`. If set to False, evaluate using `loss`. If set to True, evaluate using `ROUGE-L` and other metrics. Generative evaluation takes a long time; choose carefully.
- `--lr_scheduler_type`: Default is `'cosine'`, options: 'linear', 'cosine', 'constant', etc.
- `--warmup_ratio`: Proportion of warmup steps relative to total training steps, default is `0.05`.
- `--warmup_steps`: Number of warmup steps, default is `0`. If warmup_steps > 0 is set, it overrides warmup_ratio.
- `--eval_steps`: Evaluate every this many steps, default is `50`.
- `--save_steps`: Save every this many steps, default is `None`, i.e. set to `eval_steps`.
- `--save_only_model`: Whether to save only model parameters, without the intermediate states needed for resuming from a checkpoint. Default is `None`: if `sft_type` is 'lora' and deepspeed is not used (`deepspeed` is `None`), set to False; otherwise set to True (e.g. when using full fine-tuning or deepspeed).
- `--save_total_limit`: Number of checkpoints to keep, default is `2`, i.e. the best and the last checkpoint. If set to -1, all checkpoints are kept.
- `--logging_steps`: Print training information (e.g. loss, learning_rate) every this many steps, default is `5`.
- `--acc_steps`: How often to compute accuracy information during training; by default it is computed every `1` iteration.
- `--dataloader_num_workers`: Default is `None`: set to `0` on Windows machines, otherwise set to `1`.
- `--push_to_hub`: Whether to synchronously push the trained checkpoint to the ModelScope Hub, default is `False`.
- `--hub_model_id`: Model_id to push to on the ModelScope Hub, default is `None`, i.e. set to `f'{model_type}-{sft_type}'`. You can set this to a model_id or repo_name; the user_name is inferred from hub_token. If the target remote repository does not exist, a new one is created; otherwise the previous repository is reused. This parameter only takes effect when `push_to_hub` is set to True.
- `--hub_token`: SDK token needed for pushing. Can be obtained from https://modelscope.cn/my/myaccesstoken. Default is `None`, i.e. read from the `MODELSCOPE_API_TOKEN` environment variable. This parameter only takes effect when `push_to_hub` is set to True.
- `--hub_private_repo`: Whether the pushed model repository on the ModelScope Hub should be private, default is `False`. This parameter only takes effect when `push_to_hub` is set to True.
- `--push_hub_strategy`: Push strategy, default is `'push_best'`. Options: 'end', 'push_best', 'push_last', 'checkpoint', 'all_checkpoints'. 'push_best' means each time weights are saved, the best model is pushed, overwriting the previous one; 'push_last' means each time weights are saved, the last weights are pushed, overwriting the previous ones; 'end' means only the best model is pushed at the end of training. This parameter only takes effect when `push_to_hub` is set to True.
- `--test_oom_error`: Used to detect whether training will cause OOM, default is `False`. If set to True, the training set is sorted in descending order by max_length for easier OOM testing. This parameter is generally used for testing; use carefully.
- `--disable_tqdm`: Whether to disable tqdm, useful when launching a script with `nohup`. Default is `False`, i.e. tqdm is enabled.
- `--lazy_tokenize`: If set to False, preprocess all text before `trainer.train()`. If set to True, delay encoding the text, reducing preprocessing wait time and memory usage; useful when processing large datasets. Default is `None`, i.e. intelligently chosen based on the template type: usually False for LLMs, and True for multimodal models (to avoid excessive memory usage from loading images and audio).
- `--preprocess_num_proc`: Use multiprocessing when preprocessing the dataset (tokenizing text). Default is `1`. Like the `lazy_tokenize` argument, this addresses slow preprocessing, but it cannot reduce memory usage, so `lazy_tokenize` is recommended if the dataset is huge. Recommended values: 4, 8. Note: when using qwen-audio, this parameter is forced to 1, because qwen-audio's preprocessing function uses torch's multiprocessing, which causes compatibility issues.
- `--use_flash_attn`: Whether to use flash attn, default is `None`. Installation steps for flash_attn can be found at https://github.com/Dao-AILab/flash-attention. Models supporting flash_attn can be found in LLM Supported Models.
- `--ignore_args_error`: Whether to ignore errors thrown for command line argument mistakes, default is `False`. Set to True if you need to copy the code into a notebook to run it.
- `--check_model_is_latest`: Check whether the model is the latest version, default is `True`. Set this to `False` if you need to train offline.
- `--logging_dir`: Default is `None`, i.e. set to `f'{self.output_dir}/runs'`, the path where tensorboard files are stored.
- `--report_to`: Default is `['tensorboard']`. You can set `--report_to all` to report to all installed integrations.
- `--acc_strategy`: Default is `'token'`, options: 'token', 'sentence'.
- `--save_on_each_node`: Takes effect during multi-machine training, default is `False`.
- `--save_strategy`: Strategy for saving checkpoints, default is `'steps'`, options: 'steps', 'epoch', 'no'.
- `--evaluation_strategy`: Strategy for evaluation, default is `'steps'`, options: 'steps', 'epoch', 'no'.
- `--save_safetensors`: Default is `True`.
- `--include_num_input_tokens_seen`: Default is `False`. Tracks the number of input tokens seen throughout training.
- `--max_new_tokens`: Default is `2048`. This parameter only takes effect when `predict_with_generate` is set to True.
- `--do_sample`: Default is `True`. This parameter only takes effect when `predict_with_generate` is set to True.
- `--temperature`: Default is `0.3`. This parameter only takes effect when `do_sample` is set to True, and is used as the default value in the deployment parameters.
- `--top_k`: Default is `20`. This parameter only takes effect when `do_sample` is set to True, and is used as the default value in the deployment parameters.
- `--top_p`: Default is `0.7`. This parameter only takes effect when `do_sample` is set to True, and is used as the default value in the deployment parameters.
- `--repetition_penalty`: Default is `1.`. This parameter is used as the default value in the deployment parameters.
- `--num_beams`: Default is `1`. This parameter only takes effect when `predict_with_generate` is set to True.
- `--gpu_memory_fraction`: Default is `None`. This parameter aims to run training under a specified maximum fraction of available GPU memory, used for extreme testing.
- `--train_dataset_mix_ratio`: Default is `0.`. Defines how to mix datasets for training: when specified, the training dataset is mixed with `train_dataset_mix_ratio` times as much general-knowledge data from `train_dataset_mix_ds`. This parameter has been deprecated; use `--dataset {dataset_name}#{dataset_sample}` to mix datasets.
- `--train_dataset_mix_ds`: Default is `['ms-bench']`. The general-knowledge dataset used to prevent knowledge forgetting. This parameter has been deprecated; use `--dataset {dataset_name}#{dataset_sample}` to mix datasets.
- `--use_loss_scale`: Default is `False`. When enabled, strengthens the loss weight of some Agent fields (the Action/Action Input part) to enhance CoT; it has no effect in regular SFT scenarios.
- `--loss_scale_config_path`: Specifies a custom loss_scale configuration, applicable when use_loss_scale is enabled, such as in Agent training to amplify the loss weights of Action and other crucial ReAct fields. A sketch of such a configuration follows this list.
  - In the configuration file, loss_scale is set using a dictionary format. Each key represents a specific field name, and its value specifies the loss scaling factor for that field and the content that follows it. For instance, setting `"Observation:": [2, 0]` means that when the response contains `xxxx Observation:error`, the loss for the `Observation:` field is doubled, while the loss for the `error` portion is not counted. Besides literal matching, the configuration also supports regular expression rules for more flexible matching; for example, the pattern `'<.*?>':[2.0]` doubles the loss for any content enclosed in angle brackets. The loss scaling factors for field matching and regex matching are given as lists of length 2 and length 1, respectively.
  - Setting a loss_scale for the entire response based on matching the query is also supported, which is extremely useful for the fixed multi-turn dialogue queries described in the Agent-Flan paper. If the query includes any of the predefined keys, the corresponding response uses the associated loss_scale value. Refer to swift/llm/agent/agentflan.json for an example.
  - By default, loss scaling values are preset for fields such as Action:, Action Input:, Thought:, Final Answer:, and Observation:. Default configurations are also provided for alpha-umi and Agent-FLAN, which you can use by setting this parameter to alpha-umi or agent-flan, respectively. The default configuration files are located under swift/llm/agent.
  - Matching rules are applied with the following priority, from highest to lowest: query fields > specific response fields > regular expression matching rules.
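A minimal sketch of such a configuration file, written from the description above (the field names and factors mirror the examples in the list; the file path is arbitrary):

```bash
# Create a loss_scale config that doubles the loss on "Observation:",
# drops the loss on what follows it, and doubles any <...> span via regex.
cat > my_loss_scale.json <<'EOF'
{
    "Observation:": [2.0, 0.0],
    "<.*?>": [2.0]
}
EOF

swift sft \
  --model_type qwen-7b-chat \
  --dataset alpaca-zh \
  --use_loss_scale true \
  --loss_scale_config_path my_loss_scale.json
```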
- `--custom_register_path`: Default is `None`. Pass in a `.py` file used to register templates, models, and datasets.
- `--custom_dataset_info`: Default is `None`. Pass in the path to an external `dataset_info.json`, a JSON string, or a dictionary. Used to register custom datasets. A format example: https://github.com/modelscope/swift/blob/main/swift/llm/data/dataset_info.json
- `--device_map_config_path`: Manually configure the model's device map from a local file, default is `None`.
- `--device_max_memory`: The maximum memory each device may use for `device_map`, a `List`, default is `[]`. The number of values must equal the device count, e.g. `10GB 10GB`.
- `--rope_scaling`: Default is `None`. Supports `linear` and `dynamic` scaling of positional embeddings; use when `max_length` exceeds `max_position_embeddings`.
- `--fsdp`: Default is `''`, the FSDP type; please check this documentation for details.
- `--fsdp_config`: Default is `None`, the FSDP config file path.
- `--sequence_parallel_size`: Default is `1`. A positive value can be used to split a sequence across multiple GPUs to reduce memory usage; the value must divide the GPU count.
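For example, splitting long sequences across two GPUs (a sketch; the `NPROC_PER_NODE` variable follows the convention of swift's multi-GPU examples):

```bash
# Two GPUs, each holding half of every sequence.
NPROC_PER_NODE=2 swift sft \
  --model_type qwen-7b-chat \
  --dataset alpaca-zh \
  --max_length 8192 \
  --sequence_parallel_size 2
```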
### BOFT Parameters

- `--boft_block_size`: BOFT block size, default is 4.
- `--boft_block_num`: Number of BOFT blocks; cannot be used simultaneously with `boft_block_size`.
- `--boft_target_modules`: BOFT target modules. Default is `['DEFAULT']`. If `boft_target_modules` is set to `'DEFAULT'` or `'AUTO'`, look up `boft_target_modules` in `MODEL_MAPPING` based on `model_type` (by default this specifies qkv). If set to `'ALL'`, all Linear layers (excluding the head) are designated as BOFT modules.
- `--boft_dropout`: Dropout value for BOFT, default is 0.0.
- `--boft_modules_to_save`: Additional modules to be trained and saved, default is `None`.
### Vera Parameters

- `--vera_rank`: Size of Vera Attention, default is 256.
- `--vera_projection_prng_key`: Whether to store the Vera projection matrix, default is True.
- `--vera_target_modules`: Vera target modules. Default is `['DEFAULT']`. If `vera_target_modules` is set to `'DEFAULT'` or `'AUTO'`, look up `vera_target_modules` in `MODEL_MAPPING` based on `model_type` (by default this specifies qkv). If set to `'ALL'`, all Linear layers (excluding the head) are designated as Vera modules. Vera modules must share the same shape.
- `--vera_dropout`: Dropout value for Vera, default is 0.0.
- `--vera_d_initial`: Initial value for Vera's d matrix, default is 0.1.
- `--vera_modules_to_save`: Additional modules to be trained and saved, default is `None`.
### LoRA+ Parameters

- `--lora_lr_ratio`: Default is `None`, recommended value `10~16`. Specify this parameter when using lora to enable LoRA+.
### GaLore Parameters

- `--use_galore: bool`: Default is False, whether to use GaLore.
- `--galore_target_modules: Union[str, List[str]]`: Default is None; apply GaLore to attention and mlp when not passed.
- `--galore_rank: int`: Default is 128, the rank value for GaLore.
- `--galore_update_proj_gap: int`: Default is 50, the update interval of the decomposition matrix.
- `--galore_scale: int`: Default is 1.0, the matrix weight coefficient.
- `--galore_proj_type: str`: Default is `std`, the GaLore matrix decomposition type.
- `--galore_optim_per_parameter: bool`: Default is False, whether to set a separate optimizer for each GaLore target parameter.
- `--galore_with_embedding: bool`: Default is False, whether to apply GaLore to the embedding.
- `--galore_quantization`: Whether to use q-galore, default is `False`.
- `--galore_proj_quant`: Whether to quantize the SVD decomposition matrix, default is `False`.
- `--galore_proj_bits`: Number of bits for SVD quantization.
- `--galore_proj_group_size`: Number of groups for SVD quantization.
- `--galore_cos_threshold`: Cosine similarity threshold for updating the projection matrix, default is 0.4.
- `--galore_gamma_proj`: As the projection matrix gradually becomes similar, the coefficient by which the update interval is extended each time, default is 2.
- `--galore_queue_size`: Queue length for computing projection matrix similarity, default is 5.
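A GaLore sketch with the defaults listed above (placeholder model/dataset; with no target modules passed, GaLore is applied to attention and mlp):

```bash
# Memory-efficient full-parameter training with GaLore's default settings.
swift sft \
  --model_type qwen-7b-chat \
  --sft_type full \
  --use_galore true \
  --galore_rank 128 \
  --dataset alpaca-zh
```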
### LISA Parameters

Note: LISA only supports full training, i.e. `--sft_type full`.

- `--lisa_activated_layers`: Default is `0`, which means LISA is not used. Suggested values are `2` or `8`.
- `--lisa_step_interval`: Default is `20`; the number of iterations after which the set of layers that back-propagate is switched.
### UNSLOTH Parameters

unsloth has no new parameters; you can use the existing parameters to enable it: `--tuner_backend unsloth --sft_type full/lora --quantization_bit 4`.
### LLaMA-Pro Parameters

- `--llamapro_num_new_blocks`: Default is `4`, the total number of new layers inserted.
- `--llamapro_num_groups`: Default is `None`, the number of groups into which the new blocks are inserted. If `None`, it equals `llamapro_num_new_blocks`, i.e. each new layer is inserted into the original model separately.
### AdaLoRA Parameters

The following parameters take effect when `sft_type` is set to `adalora`. AdaLoRA's `target_modules` and other parameters inherit from the corresponding lora parameters, but the `lora_dtype` parameter has no effect.

- `--adalora_target_r`: Default is `8`, AdaLoRA's average rank.
- `--adalora_init_r`: Default is `12`, AdaLoRA's initial rank.
- `--adalora_tinit`: Default is `0`, AdaLoRA's initial warmup.
- `--adalora_tfinal`: Default is `0`, AdaLoRA's final warmup.
- `--adalora_deltaT`: Default is `1`, AdaLoRA's step interval.
- `--adalora_beta1`: Default is `0.85`, AdaLoRA's EMA parameter.
- `--adalora_beta2`: Default is `0.85`, AdaLoRA's EMA parameter.
- `--adalora_orth_reg_weight`: Default is `0.5`, AdaLoRA's regularization parameter.
### IA3 Parameters

The following parameters take effect when `sft_type` is set to `ia3`.

- `--ia3_target_modules`: Specify the IA3 target modules, default is `['DEFAULT']`. See `lora_target_modules` for the specific meaning.
- `--ia3_feedforward_modules`: Specify the Linear names of IA3's MLP; these names must be contained in `ia3_target_modules`.
- `--ia3_modules_to_save`: Additional modules participating in IA3 training. See the meaning of `lora_modules_to_save`.
## dpo Parameters

RLHF parameters are an extension of the sft parameters, with the addition of the following options:

- `--rlhf_type`: Choose the alignment algorithm; options are 'dpo', 'orpo', 'simpo', 'kto', 'cpo'. For training scripts for the different algorithms, please refer to the documentation. A dpo sketch follows this list.
- `--ref_model_type`: Select the reference model, similar to the model_type parameter; by default consistent with the model being trained. No selection is necessary for the `cpo` and `simpo` algorithms.
- `--ref_model_id_or_path`: Local cache path of the reference model, default is `None`.
- `--max_prompt_length`: The maximum length of the prompt. This parameter is passed to the corresponding Trainer to ensure the prompt length does not exceed the set value. Default is `1024`.
- `--beta`: Coefficient of the KL regularization term. For `simpo` the default is 2.0; for the other algorithms, the default is 0.1. For details, please check the documentation.
- `--label_smoothing`: Whether to use DPO smoothing; default is 0, normally set between 0 and 0.5.
- `--loss_type`: Type of loss, default is 'sigmoid'.
- `--sft_beta`: Whether to include the sft loss in DPO, default is 0.1, supporting the range $[0, 1)$. The final loss is `(1 - sft_beta) * KL_loss + sft_beta * sft_loss`.
- `--simpo_gamma`: The reward margin term in the SimPO algorithm; the paper recommends setting it to 0.5-1.5, default is 1.0.
- `--cpo_alpha`: The coefficient of the NLL loss in the CPO loss, default is 1.0. In SimPO, a mixed NLL loss is also employed to enhance training stability.
- `--desirable_weight`: The loss weight $\lambda_D$ for desirable responses in the KTO algorithm, default is 1.0.
- `--undesirable_weight`: The loss weight $\lambda_U$ for undesirable responses in the KTO paper, default is 1.0. Let $n_D$ and $n_U$ denote the number of desirable and undesirable examples in the dataset, respectively. The paper recommends keeping $\frac{\lambda_D n_D}{\lambda_U n_U} \in [1, \frac{4}{3}]$.
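A dpo sketch using these parameters (assuming the `swift rlhf` entrypoint; the model name and the `hh-rlhf` preference dataset are placeholders for illustration):

```bash
# DPO alignment on top of a chat model; the reference model defaults
# to the model being trained.
swift rlhf \
  --rlhf_type dpo \
  --model_type qwen-7b-chat \
  --dataset hh-rlhf \
  --beta 0.1 \
  --sft_beta 0.1 \
  --max_prompt_length 1024
```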
## merge-lora infer Parameters

- `--model_type`: Default is `None`, see `sft.sh command line arguments` for parameter details.
- `--model_id_or_path`: Default is `None`, see `sft.sh command line arguments` for parameter details. It is recommended to use model_type to specify the model.
- `--model_revision`: Default is `None`, see `sft.sh command line arguments` for parameter details. If `model_id_or_path` is None or a local model directory, this parameter has no effect.
- `--sft_type`: Default is `'lora'`, see `sft.sh command line arguments` for parameter details.
- `--template_type`: Default is `'AUTO'`, see `sft.sh command line arguments` for parameter details.
- `--infer_backend`: Options are 'AUTO', 'vllm', 'pt'. Default is 'AUTO', for intelligent selection: if `ckpt_dir` is not passed or full fine-tuning was used, and vllm is installed and the model supports vllm, the vllm engine is used; otherwise native torch is used for inference. vllm environment setup is described in VLLM Inference Acceleration and Deployment; vllm supported models are listed in Supported Models.
- `--ckpt_dir`: Required; the checkpoint path saved in the SFT stage, e.g. `'/path/to/your/vx-xxx/checkpoint-xxx'`.
- `--load_args_from_ckpt_dir`: Whether to read the model configuration info from the `sft_args.json` file in `ckpt_dir`. Default is `True`.
- `--load_dataset_config`: This parameter only takes effect when `--load_args_from_ckpt_dir true`. It controls whether to read the dataset-related configuration from the `sft_args.json` file in `ckpt_dir`. Default is `False`.
- `--eval_human`: Whether to evaluate manually instead of using the validation-set portion of the dataset. Default is `None`, for intelligent selection: if no datasets (including custom datasets) are passed, manual evaluation is used; if datasets are passed, dataset evaluation is used.
- `--device_map_config_path`: Manually configure the model's device map from a local file, default is `None`.
- `--device_max_memory`: The maximum memory each device may use for `device_map`, a `List`, default is `[]`. The number of values must equal the device count, e.g. `10GB 10GB`.
- `--seed`: Default is `42`, see `sft.sh command line arguments` for parameter details.
- `--dtype`: Default is `'AUTO'`, see `sft.sh command line arguments` for parameter details.
- `--dataset`: Default is `[]`, see `sft.sh command line arguments` for parameter details.
- `--val_dataset`: Default is `[]`, see `sft.sh command line arguments` for parameter details.
- `--dataset_seed`: Default is `42`, see `sft.sh command line arguments` for parameter details.
- `--dataset_test_ratio`: Default is `0.01`, see `sft.sh command line arguments` for parameter details.
- `--show_dataset_sample`: Number of validation set samples to evaluate and display, default is `10`.
- `--system`: Default is `None`, see `sft.sh command line arguments` for parameter details.
- `--tools_prompt`: Default is `react_en`, see `sft.sh command line arguments` for parameter details.
- `--max_length`: Default is `-1`, see `sft.sh command line arguments` for parameter details.
- `--truncation_strategy`: Default is `'delete'`, see `sft.sh command line arguments` for parameter details.
- `--check_dataset_strategy`: Default is `'none'`, see `sft.sh command line arguments` for parameter details.
- `--custom_train_dataset_path`: Default is `[]`. This parameter has been deprecated; use `--dataset {dataset_path}`.
- `--custom_val_dataset_path`: Default is `[]`. This parameter has been deprecated; use `--val_dataset {dataset_path}` instead.
- `--quantization_bit`: Default is 0, see `sft.sh command line arguments` for parameter details.
- `--quant_method`: Quantization method, default is None. Options: 'bnb', 'hqq', 'eetq'.
- `--hqq_axis`: hqq argument; the axis along which grouping is performed. Supported values are 0 or 1, default is `0`.
- `--hqq_dynamic_config_path`: Parameters for dynamic configuration. The key is the name tag of the layer and the value is a quantization config. If set, each layer specified by its id uses its dedicated quantization configuration (see ref).
- `--bnb_4bit_comp_dtype`: Default is `'AUTO'`, see `sft.sh command line arguments` for parameter details. If `quantization_bit` is set to 0, this parameter has no effect.
- `--bnb_4bit_quant_type`: Default is `'nf4'`, see `sft.sh command line arguments` for parameter details. If `quantization_bit` is set to 0, this parameter has no effect.
- `--bnb_4bit_use_double_quant`: Default is `True`, see `sft.sh command line arguments` for parameter details. If `quantization_bit` is set to 0, this parameter has no effect.
- `--bnb_4bit_quant_storage`: Default is `None`, see `sft.sh command line arguments` for parameter details. If `quantization_bit` is set to 0, this parameter has no effect.
- `--max_new_tokens`: Maximum number of new tokens to generate, default is `2048`.
- `--do_sample`: Whether to use sampling rather than greedy generation, default is `True`.
- `--temperature`: Default is `0.3`. This parameter only takes effect when `do_sample` is set to True, and is used as the default value in the deployment parameters.
- `--top_k`: Default is `20`. This parameter only takes effect when `do_sample` is set to True, and is used as the default value in the deployment parameters.
- `--top_p`: Default is `0.7`. This parameter only takes effect when `do_sample` is set to True, and is used as the default value in the deployment parameters.
- `--repetition_penalty`: Default is `1.`. This parameter is used as the default value in the deployment parameters.
- `--num_beams`: Default is `1`.
- `--use_flash_attn`: Default is `None`, i.e. 'auto'. See `sft.sh command line arguments` for parameter details.
- `--ignore_args_error`: Default is `False`, see `sft.sh command line arguments` for parameter details.
- `--stream`: Whether to use streaming output, default is `True`. This parameter only takes effect when using dataset evaluation and verbose is True.
- `--merge_lora`: Whether to merge the lora weights into the base model and save the full weights, default is `False`. Weights are saved in a directory at the same level as `ckpt_dir`, e.g. the `'/path/to/your/vx-xxx/checkpoint-xxx-merged'` directory.
- `--merge_device_map`: device_map used during merge-lora, default is `None`. To reduce memory usage, `auto` is used only during the merge-lora process; otherwise the default is `cpu`.
- `--save_safetensors`: Whether to save as a `safetensors` file or a `bin` file. Default is `True`.
- `--overwrite_generation_config`: Whether to save the generation_config used for evaluation as a `generation_config.json` file, default is `None`: if `ckpt_dir` is specified, set to `True`, otherwise `False`. The generation_config file saved during training will be overwritten.
- `--verbose`: If set to False, use tqdm-style inference. If set to True, output the inference query, response, and label. Default is `None`, for auto selection: when `len(val_dataset) >= 100`, set to False, otherwise True. This parameter only takes effect when using dataset evaluation.
- `--gpu_memory_utilization`: Parameter for initializing the vllm engine `EngineArgs`, default is `0.9`. This parameter only takes effect when using vllm. VLLM inference acceleration and deployment are described in VLLM Inference Acceleration and Deployment.
- `--tensor_parallel_size`: Parameter for initializing the vllm engine `EngineArgs`, default is `1`. This parameter only takes effect when using vllm.
- `--max_model_len`: Override the model's max_model_len, default is `None`. This parameter only takes effect when using vllm.
- `--disable_custom_all_reduce`: Whether to disable the custom all-reduce kernel and fall back to NCCL. Default is `True`, which differs from vLLM's default.
- `--enforce_eager`: Whether vllm uses PyTorch eager mode instead of building a CUDA graph. Default is `False`. Setting it to True can save memory, but may affect efficiency.
- `--vllm_enable_lora`: Default is `False`. Whether vllm supports lora.
- `--vllm_max_lora_rank`: Default is `16`. The lora rank in vllm.
- `--lora_modules`: Default is `[]`. The input format is `'{lora_name}={lora_path}'`, e.g. `--lora_modules lora_name1=lora_path1 lora_name2=lora_path2`. `ckpt_dir` is added as `f'default-lora={args.ckpt_dir}'` by default.
- `--custom_register_path`: Default is `None`. Pass in a `.py` file used to register templates, models, and datasets.
- `--custom_dataset_info`: Default is `None`. Pass in the path to an external `dataset_info.json`, a JSON string, or a dictionary. Used for expanding datasets.
- `--rope_scaling`: Default is `None`. Supports `linear` and `dynamic` scaling of positional embeddings; use when `max_length` exceeds `max_position_embeddings`.
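A typical inference sketch over a fine-tuned checkpoint (the checkpoint path follows the placeholder format used above):

```bash
# Evaluate the fine-tuned checkpoint on the dataset it was trained with,
# merging the lora weights into the base model first.
swift infer \
  --ckpt_dir output/qwen-7b-chat/vx-xxx/checkpoint-xxx \
  --load_dataset_config true \
  --merge_lora true
```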
## export Parameters

export parameters inherit from the infer parameters, with the following added parameters:

- `--to_peft_format`: Default is `False`. Convert the swift format of LoRA (`--tuner_backend swift`) to peft format.
- `--merge_lora`: Default is `False`. This parameter is already defined in InferArguments and is not a new parameter. Whether to merge the lora weights into the base model and save the full weights. Weights are saved in a directory at the same level as `ckpt_dir`, e.g. the `'/path/to/your/vx-xxx/checkpoint-xxx-merged'` directory.
- `--quant_bits`: Number of bits for quantization. Default is `0`, i.e. no quantization. If you set `--quant_method awq`, you can set this to `4` for 4-bit quantization. If you set `--quant_method gptq`, you can set this to `2`, `3`, `4`, or `8` for the corresponding bit width. When quantizing the original model, weights are saved in the `f'{args.model_type}-{args.quant_method}-int{args.quant_bits}'` directory. When quantizing a fine-tuned model, weights are saved in a directory at the same level as `ckpt_dir`, e.g. the `f'/path/to/your/vx-xxx/checkpoint-xxx-{args.quant_method}-int{args.quant_bits}'` directory.
- `--quant_method`: Quantization method, default is `'awq'`. Options: 'awq', 'gptq', 'bnb'.
- `--dataset`: This parameter is already defined in InferArguments; for export it specifies the quantization dataset. Default is `[]`. More details, including how to customize the quantization dataset, can be found in the LLM Quantization Documentation.
- `--quant_n_samples`: Quantization parameter, default is `256`. With `--quant_method awq`, if OOM occurs during quantization, you can moderately reduce `--quant_n_samples` and `--quant_seqlen`. `--quant_method gptq` generally does not encounter quantization OOM.
- `--quant_seqlen`: Quantization parameter, default is `2048`.
- `--quant_batch_size`: Calibration batch size, default is `1`.
- `--quant_device_map`: Default is `'cpu'`, to save memory. You can specify 'cuda:0', 'auto', 'cpu', etc., representing the device used to load the model during quantization. This parameter is independent of the device that actually performs the quantization; for example, AWQ and GPTQ carry out quantization on cuda:0.
- `--quant_output_dir`: Default is `None`; the default quant_output_dir is printed on the command line.
- `--push_to_hub`: Default is `False`. Whether to push the final `ckpt_dir` to the ModelScope Hub. If you specify `merge_lora`, the full parameters are pushed; if you also specify `quant_bits`, the quantized model is pushed.
- `--hub_model_id`: Default is `None`. Model_id to push to on the ModelScope Hub. If `push_to_hub` is set to True, this parameter must be set.
- `--hub_token`: Default is `None`. See `sft.sh command line arguments` for parameter details.
- `--hub_private_repo`: Default is `False`. See `sft.sh command line arguments` for parameter details.
- `--commit_message`: Default is `'update files'`.
- `--to_ollama`: Export to an ollama modelfile.
- `--ollama_output_dir`: ollama output dir. Default is `<modeltype>-ollama`.
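An export sketch that merges lora weights and then quantizes the result (the checkpoint path and calibration dataset are placeholders following the formats above):

```bash
# Merge lora into the base model, then produce a 4-bit AWQ model
# calibrated on the alpaca-zh dataset.
swift export \
  --ckpt_dir output/qwen-7b-chat/vx-xxx/checkpoint-xxx \
  --merge_lora true \
  --quant_method awq \
  --quant_bits 4 \
  --dataset alpaca-zh
```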
## eval Parameters

The eval parameters inherit from the infer parameters, and additionally include the following. (Note: the generation_config parameters from infer are ignored; generation is controlled by evalscope.)

- `--eval_dataset`: The official evaluation dataset(s), default is `None`, meaning all datasets. If `custom_eval_config` is specified, this argument is ignored. Currently supported datasets include: 'obqa', 'AX_b', 'siqa', 'nq', 'mbpp', 'winogrande', 'mmlu', 'BoolQ', 'cluewsc', 'ocnli', 'lambada', 'CMRC', 'ceval', 'csl', 'cmnli', 'bbh', 'ReCoRD', 'math', 'humaneval', 'eprstmt', 'WSC', 'storycloze', 'MultiRC', 'RTE', 'chid', 'gsm8k', 'AX_g', 'bustm', 'afqmc', 'piqa', 'lcsts', 'strategyqa', 'Xsum', 'agieval', 'ocnli_fc', 'C3', 'tnews', 'race', 'triviaqa', 'CB', 'WiC', 'hellaswag', 'summedits', 'GaokaoBench', 'ARC_e', 'COPA', 'ARC_c', 'DRCD'.
- `--eval_few_shot`: The few-shot number for each sub-dataset of the evaluation set, default is `None`, meaning use the dataset's default configuration. This parameter is currently deprecated.
- `--eval_limit`: The number of samples taken from each sub-dataset of the evaluation set, default is `None`, indicating full-scale evaluation. You can pass an integer (the number of samples from each eval dataset) or a str slice such as `[10:20]`.
- `--name`: Used to differentiate the result storage paths when evaluating the same configuration. Results are stored under `{eval_output_dir}/{name}`; by default this is `eval_outputs/defaults`, in which a timestamp-named folder holds each eval result.
- `--eval_url`: The OpenAI-standard model invocation interface, for example `http://127.0.0.1:8000/v1`. This needs to be set when evaluating a deployed model, and is usually not needed. Default is `None`. Example: `swift eval --eval_url http://127.0.0.1:8000/v1 --eval_is_chat_model true --model_type gpt4 --eval_token xxx`.
- `--eval_token`: The token for the OpenAI-standard model invocation interface, default is `'EMPTY'`, indicating no token.
- `--eval_is_chat_model`: If `eval_url` is not empty, this value must be passed to indicate whether the model is a "chat" model; False represents a "base" model. Default is `None`.
- `--custom_eval_config`: Used for evaluating with custom datasets; must be a locally existing file path. Details on the file format can be found in Custom Evaluation Set. Default is `None`.
- `--eval_use_cache`: Whether to reuse already generated evaluation caches, so that previously evaluated results are not rerun but only the evaluation results are regenerated. Default is `False`.
- `--eval_output_dir`: Output path for evaluation results, default is `eval_outputs` in the current folder.
- `--eval_batch_size`: Input batch size for evaluation, default is 8.
- `--deploy_timeout`: The timeout for waiting for model deployment before evaluation, default is 60, i.e. one minute.
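A local evaluation sketch (the dataset names come from the supported list above; the checkpoint path is a placeholder):

```bash
# Evaluate a fine-tuned checkpoint on 100 samples each of two benchmarks.
swift eval \
  --ckpt_dir output/qwen-7b-chat/vx-xxx/checkpoint-xxx \
  --eval_dataset gsm8k ARC_c \
  --eval_limit 100
```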
## app-ui Parameters

app-ui parameters inherit from the infer parameters, with the following added parameters:

- `--host`: Default is `'127.0.0.1'`. Passed to gradio's `demo.queue().launch(...)` function.
- `--port`: Default is `7860`. Passed to gradio's `demo.queue().launch(...)` function.
- `--share`: Default is `False`. Passed to gradio's `demo.queue().launch(...)` function.
## deploy Parameters

deploy parameters inherit from the infer parameters, with the following added parameters:

- `--host`: Default is `'127.0.0.1'`.
- `--port`: Default is `8000`.
- `--api_key`: Default is `None`, meaning requests are not subjected to api_key verification.
- `--ssl_keyfile`: Default is `None`.
- `--ssl_certfile`: Default is `None`.
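For example, starting a server and querying it through the OpenAI-compatible endpoint referenced in the eval parameters (a sketch; the model name in the request is assumed to match the deployed `model_type`):

```bash
# Start an OpenAI-compatible server on the default host and port.
swift deploy --model_type qwen-7b-chat --port 8000

# Query it from another shell.
curl http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen-7b-chat", "messages": [{"role": "user", "content": "Hello!"}]}'
```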
## web-ui Parameters

- `--host`: Default is `'127.0.0.1'`.
- `--port`: Default is `None`.
- `--lang`: Default is `'zh'`.
- `--share`: Default is `False`.