From a96b8fbd43eb5a7ad1a3ff58031a24cebc24946f Mon Sep 17 00:00:00 2001
From: Felipe Mello
Date: Fri, 8 Nov 2024 09:42:34 -0800
Subject: [PATCH] typo

---
 docs/source/tutorials/memory_optimizations.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/tutorials/memory_optimizations.rst b/docs/source/tutorials/memory_optimizations.rst
index aa75024e6a..a0f6d16c91 100644
--- a/docs/source/tutorials/memory_optimizations.rst
+++ b/docs/source/tutorials/memory_optimizations.rst
@@ -167,7 +167,7 @@ In addition to :ref:`reducing model and optimizer precision
 
 All of our recipes support lower-precision optimizers from the `torchao <https://github.com/pytorch/ao>`_ library.
 For single device recipes, we also support `bitsandbytes <https://github.com/bitsandbytes-foundation/bitsandbytes>`_.
-A good place to start might be the :class:`torchao.prototype.low_bit_optim.torchao.AdamW8bit` and :class:`bitsandbytes.optim.PagedAdamW8bit` optimizers.
+A good place to start might be the :class:`torchao.prototype.low_bit_optim.AdamW8bit` and :class:`bitsandbytes.optim.PagedAdamW8bit` optimizers.
 Both reduce memory by quantizing the optimizer state dict. Paged optimizers will also offload to CPU if there isn't enough GPU memory available. In practice,
 you can expect higher memory savings from bnb's PagedAdamW8bit but higher training speed from torchao's AdamW8bit.
 
@@ -180,7 +180,7 @@ a low precision optimizer using the :ref:`cli_label`:
 
 .. code-block:: bash
 
     tune run <RECIPE> --config <CONFIG> \
-    optimizer=torchao.prototype.low_bit_optim.torchao.AdamW8bit
+    optimizer=torchao.prototype.low_bit_optim.AdamW8bit
 
 .. code-block:: bash
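
For context on what this one-character-class-path fix buys you, below is a minimal Python sketch of the corrected import path, ``torchao.prototype.low_bit_optim.AdamW8bit``, used outside the torchtune CLI. It is illustrative only, assuming torchao is installed with its prototype low-bit optimizers and a CUDA device is available; the tiny model and hyperparameters are stand-ins, not part of this patch or of torchtune's recipes.

.. code-block:: python

    # Sketch of the class path corrected by this patch:
    # torchao.prototype.low_bit_optim.AdamW8bit (no doubled "torchao").
    import torch
    from torchao.prototype.low_bit_optim import AdamW8bit

    # Stand-in model; any nn.Module's parameters work the same way.
    model = torch.nn.Linear(4096, 4096, device="cuda", dtype=torch.bfloat16)

    # AdamW8bit keeps its moment estimates (exp_avg, exp_avg_sq) quantized
    # to 8 bits, shrinking optimizer-state memory relative to fp32 AdamW.
    optimizer = AdamW8bit(model.parameters(), lr=3e-4)

    # One ordinary training step; the optimizer is a drop-in replacement.
    out = model(torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16))
    out.sum().backward()
    optimizer.step()
    optimizer.zero_grad()

Per the doc text above, the bitsandbytes alternative for single-device recipes, :class:`bitsandbytes.optim.PagedAdamW8bit`, follows the same drop-in constructor pattern, trading some training speed for the ability to page optimizer state to CPU when GPU memory runs out.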