From f38c8276081801cc7117b39e7e09a28e350c61d9 Mon Sep 17 00:00:00 2001
From: 1000850000 user
Date: Wed, 18 Sep 2024 16:24:30 +0000
Subject: [PATCH] formatting fixes to README

Signed-off-by: 1000850000 user
---
 README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/README.md b/README.md
index 6b3fb17a4..87241cdb4 100644
--- a/README.md
+++ b/README.md
@@ -469,12 +469,12 @@ Notes:
     - pass `--fast_kernels True True True --auto_gptq triton_v2 --fused_lora auto_gptq True` for GPTQ-LoRA
     - pass `--fast_kernels True True True --bitsandbytes nf4 --fused_lora bitsandbytes True` for QLoRA
 * Notes on Padding Free
-  - works for both *single* and *multi-gpu*.
-  - works on both *pretokenized* and *untokenized* datasets
-  - verified against the version found in HF main, merged in via PR https://github.com/huggingface/transformers/pull/31629.
+    - works for both *single* and *multi-gpu*.
+    - works on both *pretokenized* and *untokenized* datasets
+    - verified against the version found in HF main, merged in via PR https://github.com/huggingface/transformers/pull/31629.
 * Notes on Multipack
-  - works only for *multi-gpu*.
-  - currently only includes the version of *multipack* optimized for linear attention implementations like *flash-attn*.
+    - works only for *multi-gpu*.
+    - currently only includes the version of *multipack* optimized for linear attention implementations like *flash-attn*.
 
 Activate `TRANSFORMERS_VERBOSITY=info` to see the huggingface trainer printouts
 and verify that `AccelerationFramework` is activated!
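
For reference (not part of the patch), a minimal sketch of a launch command that exercises the verbosity note in the final context lines above. The `tuning/sft_trainer.py` entry point and the placeholder model/data/output arguments are assumptions for illustration; only the `--fast_kernels`, `--auto_gptq`, and `--fused_lora` flags (the GPTQ-LoRA variant) are quoted from the hunk itself:

    # Sketch, assuming a tuning/sft_trainer.py entry point and these argument
    # names; substitute real paths. With TRANSFORMERS_VERBOSITY=info, expect
    # "AccelerationFramework" in the trainer printouts when it is active.
    TRANSFORMERS_VERBOSITY=info python tuning/sft_trainer.py \
        --model_name_or_path <base-model> \
        --training_data_path <train-data> \
        --output_dir <output-dir> \
        --fast_kernels True True True \
        --auto_gptq triton_v2 \
        --fused_lora auto_gptq True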