Releases: foundation-model-stack/fms-model-optimizer

v0.2.0

13 Dec 17:50
e8bc88e

This is the first release of FMS Model Optimizer. It provides the core functionality:

  • Python API to enable model quantization: Adding a few lines of code is enough to perform module-level and/or function-level operation replacement (see the sketch after this list).
  • Robust: Verified for 8-bit and 4-bit integer quantization on key vision, speech, NLP, object-detection, and large language models.
  • Flexible: Options to analyze the network using PyTorch Dynamo and to apply best practices during quantization, such as clip_val initialization, layer-level precision settings, and optimizer parameter-group settings.
  • State-of-the-art INT and FP quantization techniques for weights and activations, such as SmoothQuant, SAWB+, and PACT+.
  • Support for key compute-intensive operations such as Conv2d, Linear, LSTM, MM (matrix multiplication), and BMM (batched matrix multiplication).
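To illustrate what module-level operation replacement means in practice, here is a minimal, self-contained PyTorch sketch. It is not the FMS Model Optimizer API; it only shows the general pattern of swapping each nn.Linear for a wrapper that fake-quantizes its weights to symmetric INT8. The library packages this pattern, together with activation quantization and techniques such as PACT+ and SAWB+, behind its own Python API.

```python
# Generic illustration of module-level operation replacement for quantization.
# NOT the FMS Model Optimizer API; consult the repository's examples for that.
import torch
import torch.nn as nn


class FakeQuantLinear(nn.Module):
    """nn.Linear wrapper that fake-quantizes weights to symmetric INT8."""

    def __init__(self, linear: nn.Linear, n_bits: int = 8):
        super().__init__()
        self.linear = linear
        self.n_bits = n_bits

    def forward(self, x):
        w = self.linear.weight
        qmax = 2 ** (self.n_bits - 1) - 1            # 127 for INT8
        scale = w.abs().max().clamp(min=1e-8) / qmax  # per-tensor scale
        w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
        return nn.functional.linear(x, w_q, self.linear.bias)


def replace_linear_modules(model: nn.Module, n_bits: int = 8) -> nn.Module:
    """Recursively swap every nn.Linear for its fake-quantized counterpart."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, FakeQuantLinear(child, n_bits))
        else:
            replace_linear_modules(child, n_bits)
    return model


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    replace_linear_modules(model, n_bits=8)
    print(model)
    print(model(torch.randn(2, 16)).shape)  # torch.Size([2, 4])
```

In the real library the replacement also covers function-level operations (e.g. MM and BMM) and is driven by a quantization configuration rather than a hard-coded bit width.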

What's Changed

New Contributors

Full Changelog: https://github.com/foundation-model-stack/fms-model-optimizer/commits/v0.2.0