Feature branch finetune #1
base: main
Conversation
Changed the data_path and image_folder paths
Changed the path for train_mem.py to an absolute path
Uncommented the PROMPT_VERSION and MODEL_VERSION lines
Changed the paths for model_name_or_path and output_dir
Added --mm_projector_type mlp2x_gelu \ as per the wandb blog and commented out the pretrain_mm_mlp_adapter line
Changed the pretrain_mm_mlp_adapter path to match the llava-v1.5-7b path in the home dir and removed the --mm_projector_type flag
Commented out mm_vision_select_layer
Removed comments, suspecting they might be causing the recurrent "command not found" errors
Commented out the commands that were not found
Fixed a missing "\" after pretrain_mm_mlp_adapter
Fixed code spacing typos
Changed the file paths for the v1.5 script
Reverted the finetune_lora.sh file to the original
…VA into feature-branch-Finetune
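Taken together, the commits above amount to editing LLaVA's finetune launch script. A minimal sketch of what the resulting script might look like follows; every path, the PROMPT_VERSION value, and the adapter filename are assumptions inferred from the commit messages, not taken from the actual diff:

```shell
#!/bin/bash
# Hypothetical reconstruction of the edited finetune.sh; all paths below
# are placeholders, not the real ones from this branch.

PROMPT_VERSION=v1                 # uncommented per the commits (value assumed)
MODEL_VERSION=llava-v1.5-7b       # uncommented per the commits

# train_mem.py is referenced by absolute path; pretrain_mm_mlp_adapter points
# at the llava-v1.5-7b checkpoint in the home dir, with the trailing "\" that
# one commit fixed. --mm_projector_type mlp2x_gelu was added and later removed,
# and --mm_vision_select_layer was commented out, per the commit log.
deepspeed /home/user/LLaVA/llava/train/train_mem.py \
    --model_name_or_path /home/user/llava-v1.5-7b \
    --version $PROMPT_VERSION \
    --data_path /home/user/data/finetune.json \
    --image_folder /home/user/data/images \
    --pretrain_mm_mlp_adapter /home/user/llava-v1.5-7b/mm_projector.bin \
    --output_dir ./checkpoints/$MODEL_VERSION-finetune
    # --mm_projector_type mlp2x_gelu \   # tried, then removed
    # --mm_vision_select_layer -2 \      # commented out
```

Note the trailing "\" on every continued line: a missing one (as in the fixed commit) makes the shell treat the next flag as a separate command, which explains the recurrent "command not found" errors mentioned above.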
No description provided.