LoRA inference

Use the following command to run inference with the fine-tuned LoRA weights:

MODEL="openlm-research/open_llama_3b" # Or whichever base model was used for finetuning
python generate/lora_ui_gen.py --checkpoint_dir checkpoints/$MODEL --lora_path=<the latest checkpoint .pth file from the output dir of finetuning> --prompt "Apply a top navbar."
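
For example, assuming the finetuning run saved its weights to out/lora/ui/lit_model_lora_finetuned.pth (a hypothetical path; substitute the actual .pth file produced by your run), the invocation might look like:

MODEL="openlm-research/open_llama_3b"
python generate/lora_ui_gen.py \
  --checkpoint_dir checkpoints/$MODEL \
  --lora_path out/lora/ui/lit_model_lora_finetuned.pth \
  --prompt "Apply a top navbar."

The script should print the model's generated response for the prompt to the terminal.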