feat: Add deps to evaluate qLora tuned model (#312)
* Add support to load qLora tuned model in run_inference.py script
* Remove comment
* Disable gptq by default
* Remove the gptq-dev install in Dockerfile
* Rename gptq-dev package from gptq
* Add comments in run_inference.py
* Update device to cuda
* Add in the case that there's no adapter found
* Use torch.float16 for quantized

Signed-off-by: Angel Luu <[email protected]>
Showing 2 changed files with 59 additions and 15 deletions.
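The changes above describe loading a qLora-tuned (quantized LoRA) checkpoint for inference: the quantized base model is loaded in torch.float16 on a CUDA device, the LoRA adapter is attached if present, and the script falls back to the base model when no adapter is found. A minimal sketch of that flow is below; it is not the actual run_inference.py implementation, and the model and adapter paths are placeholder assumptions.

```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder paths -- substitute your own base model and qLora-tuned checkpoint.
BASE_MODEL = "bigscience/bloom-560m"      # assumption, not from this commit
ADAPTER_PATH = "output/qlora-checkpoint"  # assumption, not from this commit

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Quantized (qLora) checkpoints are loaded in float16 and placed on CUDA,
# mirroring the "Use torch.float16 for quantized" and "Update device to cuda"
# items in the commit message.
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="cuda",
)

# Attach the LoRA adapter produced by qLora tuning if one exists; otherwise
# fall back to the base model ("Add in the case that there's no adapter found").
if os.path.exists(os.path.join(ADAPTER_PATH, "adapter_config.json")):
    model = PeftModel.from_pretrained(base_model, ADAPTER_PATH)
else:
    model = base_model

model.eval()
inputs = tokenizer("Hello, world", return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

This sketch uses standard transformers and peft APIs; the commit's optional gptq-dev extra (disabled by default) would only be needed when the checkpoint was quantized with GPTQ.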