Download the pretrained checkpoints:

```shell
cd output_models
bash download.sh all
cd -
```

Contents of `output_models/` after running `bash download.sh all`:

```
output_models/
    coco_task_annotation.json
    download.sh
    pretrained_minigpt4_7b.pth
    pretrained_minigpt4_13b.pth
    task_tuned.pth
```
To obtain the original llama model, one may refer to this doc. To merge a lora model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
Questions:

1. How do I merge the robin lora model with the original llama model?
2. The directory `output_models/robin-7b` doesn't exist; do I need to create it?
3. Regarding "merge a lora model with a base model": is the llama model the base model? What does "base model" mean here?
4. How do I replace 'path/to/pretrained_linear_weights' in the config file with the real path?
5. I am trying to use the merge script but am not sure what to pass for {huggingface-model-name-or-path-to-base-model}, {path-to-lora-model}, and {path-to-merged-model}.
If you want to merge the robin lora model with the original llama model, please refer to this script: https://github.com/OptimalScale/LMFlow/blob/main/utils/apply_delta.py. Run it as:

```shell
python apply_delta.py \
    --base <path_to_llama_weights> \
    --target <path_to_save> \
    --delta <path_to_delta_weights>
```

After merging the weights, say you save the merged model to `<path_to_save>`; then `output_models/robin-7b` should be replaced by `<path_to_save>`.
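For intuition, a delta-weight merge is essentially element-wise addition of the delta weights onto the base weights, parameter by parameter. A toy sketch of that idea in plain Python (the names below are illustrative, not the actual internals of `apply_delta.py`, which operates on torch tensors):

```python
# Toy sketch: merge = base + delta for every parameter name.
# Real delta scripts do the same over torch state dicts.
def apply_delta(base_state, delta_state):
    assert base_state.keys() == delta_state.keys(), "weight names must match"
    return {name: base_state[name] + delta_state[name] for name in base_state}

base = {"linear.weight": 1.0, "linear.bias": 0.5}
delta = {"linear.weight": 0.25, "linear.bias": -0.5}
merged = apply_delta(base, delta)
print(merged)  # {'linear.weight': 1.25, 'linear.bias': 0.0}
```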
Yes, llama is the base model; "base model" here means the original llama weights.
`path/to/pretrained_linear_weights` should be replaced with http://lmflow.org:5000/detgpt/task_tuned.pth.
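For reference, the relevant entry in `detgpt/configs/models/detgpt_robin_7b.yaml` would then look roughly like this (a sketch; check the actual file for the exact keys and indentation, and note that `download.sh all` already places `task_tuned.pth` in `output_models/`, so a local path may also work if the loader does not fetch URLs):

```yaml
# Sketch of the edited config entry; surrounding structure is assumed.
ckpt: 'http://lmflow.org:5000/detgpt/task_tuned.pth'
# or, if you already ran download.sh:
# ckpt: 'output_models/task_tuned.pth'
```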
We noticed there are some leaked weights on the Hugging Face Hub; if you want to save time, you may use them.
Hi developers, thank you for this wonderful project!

I am trying to run DetGPT locally by following the README, but I feel the steps are not clear; please fix this.
```shell
git clone https://github.com/OptimalScale/DetGPT.git
cd DetGPT
conda create -n detgpt python=3.9 -y
conda activate detgpt
pip install -e .
```

Step 1: Completed
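As a quick sanity check after the install (a sketch, not part of the official instructions), you can confirm that the expected packages resolve without importing them:

```python
# Check whether the packages the README installs are discoverable.
# Prints MISSING rather than raising, so it is safe to run anywhere.
import importlib.util

packages = ("torch", "detgpt", "groundingdino")
statuses = {name: importlib.util.find_spec(name) is not None for name in packages}
for name, ok in statuses.items():
    print(f"{name}: {'found' if ok else 'MISSING'}")
```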
```shell
python -m pip install -e GroundingDINO
```

Download the pretrained checkpoints:

```shell
cd output_models
bash download.sh all
cd -
```

Contents of `output_models/` after running `bash download.sh all`:

```
output_models/
    coco_task_annotation.json
    download.sh
    pretrained_minigpt4_7b.pth
    pretrained_minigpt4_13b.pth
    task_tuned.pth
```

Step 2: Completed
I am having a problem here:

-> Merge the robin lora model with the original llama model and save the merged model to `output_models/robin-7b`, where the corresponding model path is specified in this config file: https://github.com/OptimalScale/DetGPT/blob/main/detgpt/configs/models/detgpt_robin_7b.yaml#L16
To obtain the original llama model, one may refer to this doc. To merge a lora model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
Questions:
I am trying to use this script but am not sure what to pass for {huggingface-model-name-or-path-to-base-model}, {path-to-lora-model}, and {path-to-merged-model}:

```shell
python examples/merge_lora.py \
    --model_name_or_path {huggingface-model-name-or-path-to-base-model} \
    --lora_model_path {path-to-lora-model} \
    --output_model_path {path-to-merged-model}
```
What path should I specify here? -> `ckpt: 'path/to/pretrained_linear_weights'`

Please help me run this locally with the required steps.