formatting fixes to README
Signed-off-by: 1000850000 user <[email protected]>
achew010 committed Sep 18, 2024
1 parent 8c71b12 commit f38c827
Showing 1 changed file with 5 additions and 5 deletions.
README.md
```diff
@@ -469,12 +469,12 @@ Notes:
 - pass `--fast_kernels True True True --auto_gptq triton_v2 --fused_lora auto_gptq True` for GPTQ-LoRA
 - pass `--fast_kernels True True True --bitsandbytes nf4 --fused_lora bitsandbytes True` for QLoRA
 * Notes on Padding Free
-  - works for both *single* and *multi-gpu*.
-  - works on both *pretokenized* and *untokenized* datasets
-  - verified against the version found in HF main, merged in via PR https://github.com/huggingface/transformers/pull/31629.
+  - works for both *single* and *multi-gpu*.
+  - works on both *pretokenized* and *untokenized* datasets
+  - verified against the version found in HF main, merged in via PR https://github.com/huggingface/transformers/pull/31629.
 * Notes on Multipack
-  - works only for *multi-gpu*.
-  - currently only includes the version of *multipack* optimized for linear attention implementations like *flash-attn*.
+  - works only for *multi-gpu*.
+  - currently only includes the version of *multipack* optimized for linear attention implementations like *flash-attn*.

 Activate `TRANSFORMERS_VERBOSITY=info` to see the huggingface trainer printouts and verify that `AccelerationFramework` is activated!
```
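For concreteness, here is how the first flag combination in the excerpt above might be passed on the command line; a minimal sketch, assuming a `tuning.sft_trainer` entrypoint with placeholder model/data arguments (neither the entrypoint nor the paths are taken from this commit):

```bash
# Hypothetical GPTQ-LoRA launch: only the --fast_kernels/--auto_gptq/--fused_lora
# flags are quoted from the README excerpt; the entrypoint and the remaining
# arguments are assumptions for illustration.
python -m tuning.sft_trainer \
  --model_name_or_path "$MODEL_PATH" \
  --training_data_path "$DATA_PATH" \
  --output_dir ./results \
  --fast_kernels True True True \
  --auto_gptq triton_v2 \
  --fused_lora auto_gptq True
```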

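The padding-free and multipack notes likewise translate into launch-time choices; a rough sketch, in which the `--padding_free` and `--multipack` flag names and the `torchrun` launcher are assumptions rather than anything shown in this diff (multipack, per the note above, requires multi-gpu):

```bash
# Assumed flags, not shown in this commit: padding-free attention plus
# multipack batching; multipack only works multi-gpu, hence torchrun.
torchrun --nproc_per_node=2 -m tuning.sft_trainer \
  --padding_free huggingface \
  --multipack 16 \
  --model_name_or_path "$MODEL_PATH" \
  --training_data_path "$DATA_PATH" \
  --output_dir ./results
```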

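Finally, the verbosity hint in the last context line is just an environment variable; for example, in bash:

```bash
# Raise Hugging Face logging to INFO so the trainer printouts,
# including the AccelerationFramework activation message, are shown.
export TRANSFORMERS_VERBOSITY=info
```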