typo (#1972)
Co-authored-by: Felipe Mello <[email protected]>
felipemello1 and Felipe Mello authored Nov 8, 2024
1 parent aa96cae commit bc6229b
Showing 1 changed file with 2 additions and 2 deletions.

docs/source/tutorials/memory_optimizations.rst
@@ -167,7 +167,7 @@ In addition to :ref:`reducing model and optimizer precision <glossary_precision>
 All of our recipes support lower-precision optimizers from the `torchao <https://github.com/pytorch/ao/tree/main/torchao/prototype/low_bit_optim>`_ library.
 For single device recipes, we also support `bitsandbytes <https://huggingface.co/docs/bitsandbytes/main/en/index>`_.

-A good place to start might be the :class:`torchao.prototype.low_bit_optim.torchao.AdamW8bit` and :class:`bitsandbytes.optim.PagedAdamW8bit` optimizers.
+A good place to start might be the :class:`torchao.prototype.low_bit_optim.AdamW8bit` and :class:`bitsandbytes.optim.PagedAdamW8bit` optimizers.
 Both reduce memory by quantizing the optimizer state dict. Paged optimizers will also offload to CPU if there isn't enough GPU memory available. In practice,
 you can expect higher memory savings from bnb's PagedAdamW8bit but higher training speed from torchao's AdamW8bit.

@@ -180,7 +180,7 @@ a low precision optimizer using the :ref:`cli_label`:
 .. code-block:: bash

     tune run <RECIPE> --config <CONFIG> \
-    optimizer=torchao.prototype.low_bit_optim.torchao.AdamW8bit
+    optimizer=torchao.prototype.low_bit_optim.AdamW8bit

 .. code-block:: bash
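For context, the fix matters because the optimizer path is resolved as a Python import: the extra ``.torchao`` segment in the old docs would fail at import time. A minimal sketch of using the corrected class path directly (not part of the commit; the toy model, learning rate, and dummy forward pass below are placeholder assumptions, and the 8-bit optimizers are typically run on a CUDA device):

.. code-block:: python

    # Sketch: the corrected import path from this commit is
    # torchao.prototype.low_bit_optim.AdamW8bit (not ...low_bit_optim.torchao.AdamW8bit).
    import torch
    from torchao.prototype.low_bit_optim import AdamW8bit

    model = torch.nn.Linear(128, 128)           # placeholder model, not from the docs
    optimizer = AdamW8bit(model.parameters(), lr=3e-4)  # optimizer state kept in 8-bit

    # One placeholder training step to show the optimizer is used like any other.
    loss = model(torch.randn(4, 128)).sum()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()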
