From e2fbae0bec3a3d14568447c48bd4f1624f04e054 Mon Sep 17 00:00:00 2001
From: Jing Xu
Date: Thu, 4 Aug 2022 15:11:05 +0900
Subject: [PATCH] add known issue of EmbeddingBag INT8 accuracy loss (#1017)

* add known issue

restate the embedding issue

* add release notes for 1.12.100
---
 docs/tutorials/performance_tuning/known_issues.md | 2 ++
 docs/tutorials/releases.md                        | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/docs/tutorials/performance_tuning/known_issues.md b/docs/tutorials/performance_tuning/known_issues.md
index 0dee3aba4..f67e09db9 100644
--- a/docs/tutorials/performance_tuning/known_issues.md
+++ b/docs/tutorials/performance_tuning/known_issues.md
@@ -1,6 +1,8 @@
 Known Issues
 ============
 
+- Support of EmbeddingBag with INT8 when bag size > 1 is a work in progress.
+
 - Compiling with gcc 11 might result in `illegal instruction` error.
 
 - `RuntimeError: Overflow when unpacking long` when a tensor's min max value exceeds int range while performing int8 calibration. Please customize QConfig to use min-max calibration method.
diff --git a/docs/tutorials/releases.md b/docs/tutorials/releases.md
index 512b2e9ed..258e4053c 100644
--- a/docs/tutorials/releases.md
+++ b/docs/tutorials/releases.md
@@ -1,6 +1,10 @@
 Releases
 =============
 
+## 1.12.100
+
+This is a patch release to fix the AVX2 issue that blocks running on non-AVX512 platforms.
+
 ## 1.12.0
 
 We are excited to bring you the release of Intel® Extension for PyTorch\* 1.12.0-cpu, closely following the PyTorch [1.12](https://github.com/pytorch/pytorch/releases/tag/v1.12.0) release. In this release, we matured the automatic INT8 quantization and made it a stable feature. We stabilized the runtime extension and added a MultiStreamModule feature to further boost throughput in offline inference scenarios. We also brought various enhancements in operators and graphs that benefit the performance of a broad set of workloads.
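
For context on the known-issues hunk above, which advises customizing QConfig to use the min-max calibration method, the sketch below shows one way that might look. It uses the stock `torch.ao.quantization` observers; the `ipex.quantization.prepare`/`convert` entry points and their arguments are assumptions about the extension's static quantization flow and may differ by version, and the model and inputs are toy placeholders.

```python
import torch
from torch.ao.quantization import MinMaxObserver, PerChannelMinMaxObserver, QConfig
import intel_extension_for_pytorch as ipex

# QConfig that calibrates activations and weights with plain min-max observers,
# avoiding observers that can overflow when a tensor's min/max exceeds int range.
qconfig = QConfig(
    activation=MinMaxObserver.with_args(qscheme=torch.per_tensor_affine, dtype=torch.quint8),
    weight=PerChannelMinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_channel_symmetric),
)

# Toy FP32 model and calibration input for illustration only.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
example_input = torch.rand(1, 3, 224, 224)

# Assumed static-quantization flow: prepare -> run calibration data -> convert.
prepared = ipex.quantization.prepare(model, qconfig, example_inputs=example_input, inplace=False)
with torch.no_grad():
    prepared(example_input)
quantized = ipex.quantization.convert(prepared)
```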