From 78c9e561daad6ec24116889a9746445e77b77a57 Mon Sep 17 00:00:00 2001
From: jatinwadhwa921
Date: Tue, 28 May 2024 00:05:05 -0700
Subject: [PATCH] Remove unattended-upgrades from Dockerfile due to security
 vulnerabilities

---
 docs/python/ReadMeOV.rst | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/docs/python/ReadMeOV.rst b/docs/python/ReadMeOV.rst
index 6ef16e1378139..ef5ad06d776d6 100644
--- a/docs/python/ReadMeOV.rst
+++ b/docs/python/ReadMeOV.rst
@@ -7,6 +7,7 @@ OpenVINO™ Execution Provider for ONNX Runtime accelerates inference across man
  - Intel® CPUs
  - Intel® integrated GPUs
  - Intel® discrete GPUs
+ - Intel® NPUs
 
 Installation
 ------------
@@ -14,27 +15,28 @@ Installation
 Requirements
 ^^^^^^^^^^^^
 
-- Ubuntu 18.04, 20.04, RHEL(CPU only) or Windows 10 - 64 bit
+- Ubuntu 18.04, 20.04, 22.04, RHEL (CPU only) or Windows 10 - 64 bit
 - Python 3.8 or 3.9 or 3.10 for Linux and only Python3.10 for Windows
 
 This package supports:
  - Intel® CPUs
  - Intel® integrated GPUs
  - Intel® discrete GPUs
+ - Intel® NPUs
 
 ``pip3 install onnxruntime-openvino``
 
 Please install OpenVINO™ PyPi Package separately for Windows.
 For installation instructions on Windows please refer to `OpenVINO™ Execution Provider for ONNX Runtime for Windows `_.
 
-**OpenVINO™ Execution Provider for ONNX Runtime** Linux Wheels comes with pre-built libraries of OpenVINO™ version 2023.0.0 eliminating the need to install OpenVINO™ separately. The OpenVINO™ libraries are prebuilt with CXX11_ABI flag set to 0.
+**OpenVINO™ Execution Provider for ONNX Runtime** Linux wheels come with pre-built libraries of OpenVINO™ version 2024.1.0, eliminating the need to install OpenVINO™ separately. The OpenVINO™ libraries are prebuilt with the CXX11_ABI flag set to 0.
 
 For more details on build and installation please refer to `Build `_.
 
 Usage
 ^^^^^
 
-By default, Intel® CPU is used to run inference. However, you can change the default option to either Intel® integrated or discrete GPU.
+By default, Intel® CPU is used to run inference. However, you can change the default option to Intel® integrated GPU, discrete GPU, or NPU.
 Invoke `the provider config device type argument `_ to change the hardware on which inferencing is done.
 
 For more API calls and environment variables, see `Usage `_.
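
For reference, the provider config device type usage that the README text above points to can be exercised from Python. Below is a minimal sketch, assuming ``onnxruntime-openvino`` is installed and that ``model.onnx`` is a placeholder path to some ONNX model (not a file from this patch); the ``device_type`` values shown (``CPU``, ``GPU``, ``NPU``) mirror the hardware list above, and the exact accepted strings depend on the installed package version::

    # Minimal sketch: pick the OpenVINO device at session-creation time.
    import onnxruntime as ort

    # "model.onnx" is a placeholder; substitute any ONNX model path.
    # device_type selects the hardware: "CPU" (the default), "GPU",
    # or "NPU" (the target this patch adds to the docs).
    session = ort.InferenceSession(
        "model.onnx",
        providers=["OpenVINOExecutionProvider"],
        provider_options=[{"device_type": "GPU"}],
    )

    # Confirm which execution providers are actually active.
    print(session.get_providers())

If the requested device is unavailable, onnxruntime typically falls back to the default CPU execution provider rather than failing outright, so checking ``session.get_providers()`` after creation is a useful sanity check.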