ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://onnxruntime.ai/docs/install/ |
| Last Crawled | 2026-04-05 17:42:20 (1 day ago) |
| First Indexed | 2021-09-16 16:40:53 (4 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Install ONNX Runtime \| onnxruntime |
| Meta Description | Instructions to install ONNX Runtime on your target platform in your environment |
| Meta Canonical | null |
| Boilerpipe Text | See the installation matrix for recommended instructions for desired combinations of target operating system, hardware, accelerator, and language.
Details on OS versions, compilers, language versions, dependent libraries, etc. can be found under Compatibility.
Contents
Requirements
CUDA and CuDNN
Python Installs
Install ONNX Runtime CPU
Install nightly
Install ONNX Runtime GPU (DirectML) - Sustained Engineering Mode
Install nightly
Install ONNX Runtime GPU (CUDA or TensorRT)
CUDA 12.x
Nightly for CUDA 13.x
Nightly for CUDA 12.x
CUDA 11.x
Install ONNX Runtime QNN
Install nightly
C#/C/C++/WinML Installs
Install ONNX Runtime
Install ONNX Runtime CPU
Install ONNX Runtime GPU (CUDA 12.x)
Install ONNX Runtime GPU (CUDA 11.8)
DirectML (sustained engineering - use WinML for new projects)
WinML (recommended for Windows)
Install on web and mobile
JavaScript Installs
Install ONNX Runtime Web (browsers)
Install ONNX Runtime Node.js binding (Node.js)
Install ONNX Runtime for React Native
Install on iOS
C/C++
Objective-C
Custom build
Install on Android
Java/Kotlin
C/C++
Custom build
Install for On-Device Training
Offline Phase - Prepare for Training
Training Phase - On-Device Training
Large Model Training
Inference install table for all languages
Training install table for all languages
Requirements
All builds require the English language package with the en_US.UTF-8 locale. On Linux, install the language-pack-en package by running locale-gen en_US.UTF-8 and update-locale LANG=en_US.UTF-8.
Windows builds require the Visual C++ 2019 runtime. The latest version is recommended.
CUDA and CuDNN
For the ONNX Runtime GPU package, CUDA and cuDNN must be installed. Check the CUDA execution provider requirements for compatible versions of CUDA and cuDNN.
Zlib is required by cuDNN 9.x on Linux only (zlib is statically linked into the cuDNN 9.x Windows dynamic libraries) and by cuDNN 8.x on both Linux and Windows. Follow the cuDNN 8.9 installation guide to install zlib on Linux or Windows.
On Windows, the CUDA bin and cuDNN bin directories must be added to the PATH environment variable.
On Linux, the CUDA lib64 and cuDNN lib directories must be added to the LD_LIBRARY_PATH environment variable.
The onnxruntime-gpu package can work with PyTorch without manual installation of CUDA or cuDNN. Refer to Compatibility with PyTorch for more information.
Python Installs
Install ONNX Runtime CPU
pip install onnxruntime
Install nightly
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime
Install ONNX Runtime GPU (DirectML) - Sustained Engineering Mode
Note: DirectML is in sustained engineering. For new Windows projects, consider WinML instead.
pip install onnxruntime-directml
Install nightly
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-directml
Install ONNX Runtime GPU (CUDA or TensorRT)
CUDA 12.x
The default CUDA version for onnxruntime-gpu on PyPI has been 12.x since version 1.19.0.
pip install onnxruntime-gpu
For previous versions, download them here: 1.18.1, 1.18.0
Nightly for CUDA 13.x
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-13-nightly/pypi/simple/ onnxruntime-gpu
Nightly for CUDA 12.x
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-gpu
CUDA 11.x
For CUDA 11.x, use the following instructions to install from the ORT Azure DevOps feed for 1.19.2 or later.
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install onnxruntime-gpu --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/
For previous versions, download them here: 1.18.1, 1.18.0
Install ONNX Runtime QNN
pip install onnxruntime-qnn
Install nightly
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-qnn
C#/C/C++/WinML Installs
Install ONNX Runtime
Install ONNX Runtime CPU
# CPU
dotnet add package Microsoft.ML.OnnxRuntime
Install ONNX Runtime GPU (CUDA 12.x)
The default CUDA version for ORT is 12.x.
# GPU
dotnet add package Microsoft.ML.OnnxRuntime.Gpu
Install ONNX Runtime GPU (CUDA 11.8)
Project Setup
Ensure you have installed the latest version of the Azure Artifacts keyring from its GitHub repo.
Add a nuget.config file to your project in the same directory as your .csproj file.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear/>
    <add key="onnxruntime-cuda-11" value="https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/nuget/v3/index.json"/>
  </packageSources>
</configuration>
Restore packages
Restore packages (using the interactive flag, which allows dotnet to prompt you for credentials):
dotnet add package Microsoft.ML.OnnxRuntime.Gpu
Note: You don't need --interactive every time. dotnet will prompt you to add --interactive if it needs updated credentials.
DirectML (sustained engineering - use WinML for new projects)
dotnet add package Microsoft.ML.OnnxRuntime.DirectML
Note: DirectML is in sustained engineering. For new Windows projects, use WinML instead:
WinML (recommended for Windows)
dotnet add package Microsoft.AI.MachineLearning
Install on web and mobile
The pre-built packages have full support for all ONNX opsets and operators.
If the pre-built package is too large, you can create a custom build. A custom build can include just the opsets and operators in your model(s) to reduce the size.
JavaScript Installs
Install ONNX Runtime Web (browsers)
# install latest release version
npm install onnxruntime-web
# install nightly build dev version
npm install onnxruntime-web@dev
Install ONNX Runtime Node.js binding (Node.js)
# install latest release version
npm install onnxruntime-node
Install ONNX Runtime for React Native
# install latest release version
npm install onnxruntime-react-native
Install on iOS
In your CocoaPods Podfile, add the onnxruntime-c or onnxruntime-objc pod, depending on which API you want to use.
C/C++
use_frameworks!
pod 'onnxruntime-c'
Objective-C
use_frameworks!
pod 'onnxruntime-objc'
Run pod install.
Custom build
Refer to the instructions for creating a custom iOS package.
Install on Android
Java/Kotlin
In your Android Studio Project, make the following changes to:
build.gradle (Project):
repositories { mavenCentral() }
build.gradle (Module):
dependencies { implementation 'com.microsoft.onnxruntime:onnxruntime-android:latest.release' }
C/C++
Download the onnxruntime-android AAR hosted at MavenCentral, change the file extension from .aar to .zip, and unzip it. Include the header files from the headers folder, and the relevant libonnxruntime.so dynamic library from the jni folder in your NDK project.
Custom build
Refer to the instructions for creating a custom Android package.
Install for On-Device Training
Unless stated otherwise, the installation instructions in this section refer to pre-built packages designed to perform on-device training.
If the pre-built training package supports your model but is too large, you can create a custom training build.
Offline Phase - Prepare for Training
python -m pip install cerberus flatbuffers h5py "numpy>=1.16.6" onnx packaging protobuf sympy "setuptools>=41.4.0"
pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training-cpu
Training Phase - On-Device Training
(Device | Language | Package name | Installation instructions)
Windows (C, C++, C#): Microsoft.ML.OnnxRuntime.Training
  dotnet add package Microsoft.ML.OnnxRuntime.Training
Linux (C, C++): onnxruntime-training-linux*.tgz
  Download the *.tgz file from here. Extract it. Move and include the header files in the include directory. Move the libonnxruntime.so dynamic library to a desired path and include it.
Linux (Python): onnxruntime-training
  pip install onnxruntime-training
Android (C, C++): onnxruntime-training-android
  Download the onnxruntime-training-android (full package) AAR hosted at Maven Central. Change the file extension from .aar to .zip, and unzip it. Include the header files from the headers folder. Include the relevant libonnxruntime.so dynamic library from the jni folder in your NDK project.
Android (Java/Kotlin): onnxruntime-training-android
  In your Android Studio Project, make the following changes to:
  build.gradle (Project): repositories { mavenCentral() }
  build.gradle (Module): dependencies { implementation 'com.microsoft.onnxruntime:onnxruntime-training-android:latest.release' }
iOS (C, C++): CocoaPods: onnxruntime-training-c
  In your CocoaPods Podfile, add the onnxruntime-training-c pod:
  use_frameworks!
  pod 'onnxruntime-training-c'
  Run pod install.
iOS (Objective-C): CocoaPods: onnxruntime-training-objc
  In your CocoaPods Podfile, add the onnxruntime-training-objc pod:
  use_frameworks!
  pod 'onnxruntime-training-objc'
  Run pod install.
Web (JavaScript, TypeScript): onnxruntime-web
  npm install onnxruntime-web
  Use either import * as ort from 'onnxruntime-web/training'; or const ort = require('onnxruntime-web/training');
Large Model Training
pip install torch-ort
python -m torch_ort.configure
Note: This installs the default versions of the torch-ort and onnxruntime-training packages, which are mapped to specific versions of the CUDA libraries. Refer to the install options in onnxruntime.ai.
Inference install table for all languages
The table below lists the build variants available as officially supported packages. Others can be built from source from each release branch.
In addition to general requirements, please note additional requirements and dependencies in the table below:
(Category: Official build / Nightly build / Requirements)
Python: if using pip, run pip install --upgrade pip prior to downloading.
  CPU: onnxruntime / onnxruntime (nightly)
  GPU (CUDA/TensorRT) for CUDA 12.x: onnxruntime-gpu / onnxruntime-gpu (nightly) / View
  GPU (DirectML, sustained engineering): onnxruntime-directml / onnxruntime-directml (nightly) / View
  OpenVINO: intel/onnxruntime (Intel managed) / View
  TensorRT (Jetson): Jetson Zoo (NVIDIA managed)
  Azure (Cloud): onnxruntime-azure
C#/C/C++:
  CPU: Microsoft.ML.OnnxRuntime / onnxruntime (nightly)
  GPU (CUDA/TensorRT): Microsoft.ML.OnnxRuntime.Gpu / onnxruntime (nightly) / View
  GPU (DirectML, sustained engineering): Microsoft.ML.OnnxRuntime.DirectML / onnxruntime (nightly) / View
  WinML (recommended for Windows): Microsoft.AI.MachineLearning / onnxruntime (nightly) / View
Java:
  CPU: com.microsoft.onnxruntime:onnxruntime / View
  GPU (CUDA/TensorRT): com.microsoft.onnxruntime:onnxruntime_gpu / View
Android: com.microsoft.onnxruntime:onnxruntime-android / View
iOS (C/C++): CocoaPods: onnxruntime-c / View
Objective-C: CocoaPods: onnxruntime-objc / View
React Native: onnxruntime-react-native (latest) / onnxruntime-react-native (dev) / View
Node.js: onnxruntime-node (latest) / onnxruntime-node (dev) / View
Web: onnxruntime-web (latest) / onnxruntime-web (dev) / View
Note: Nightly builds created from the main branch are available for testing newer changes between official releases. Please use these at your own risk. We strongly advise against deploying these to production workloads as support is limited for nightly builds.
Training install table for all languages
Refer to the Optimized Training getting started page for more fine-grained installation instructions. |
| Markdown |
# Install ONNX Runtime
See the [installation matrix](https://onnxruntime.ai/) for recommended instructions for desired combinations of target operating system, hardware, accelerator, and language.
Details on OS versions, compilers, language versions, dependent libraries, etc. can be found under [Compatibility](https://onnxruntime.ai/docs/reference/compatibility).
## Contents
- [Requirements](https://onnxruntime.ai/docs/install/#requirements)
- [CUDA and CuDNN](https://onnxruntime.ai/docs/install/#cuda-and-cudnn)
- [Python Installs](https://onnxruntime.ai/docs/install/#python-installs)
- [Install ONNX Runtime CPU](https://onnxruntime.ai/docs/install/#install-onnx-runtime-cpu)
- [Install nightly](https://onnxruntime.ai/docs/install/#install-nightly)
- [Install ONNX Runtime GPU (DirectML) - Sustained Engineering Mode](https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-directml---sustained-engineering-mode)
- [Install nightly](https://onnxruntime.ai/docs/install/#install-nightly-1)
- [Install ONNX Runtime GPU (CUDA or TensorRT)](https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-or-tensorrt)
- [CUDA 12.x](https://onnxruntime.ai/docs/install/#cuda-12x)
- [Nightly for CUDA 13.x](https://onnxruntime.ai/docs/install/#nightly-for-cuda-13x)
- [Nightly for CUDA 12.x](https://onnxruntime.ai/docs/install/#nightly-for-cuda-12x)
- [CUDA 11.x](https://onnxruntime.ai/docs/install/#cuda-11x)
- [Install ONNX Runtime QNN](https://onnxruntime.ai/docs/install/#install-onnx-runtime-qnn)
- [Install nightly](https://onnxruntime.ai/docs/install/#install-nightly-2)
- [C\#/C/C++/WinML Installs](https://onnxruntime.ai/docs/install/#cccwinml-installs)
- [Install ONNX Runtime](https://onnxruntime.ai/docs/install/#install-onnx-runtime-1)
- [Install ONNX Runtime CPU](https://onnxruntime.ai/docs/install/#install-onnx-runtime-cpu-1)
- [Install ONNX Runtime GPU (CUDA 12.x)](https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-12x)
- [Install ONNX Runtime GPU (CUDA 11.8)](https://onnxruntime.ai/docs/install/#install-onnx-runtime-gpu-cuda-118)
- [DirectML (sustained engineering - use WinML for new projects)](https://onnxruntime.ai/docs/install/#directml-sustained-engineering---use-winml-for-new-projects)
- [WinML (recommended for Windows)](https://onnxruntime.ai/docs/install/#winml-recommended-for-windows)
- [Install on web and mobile](https://onnxruntime.ai/docs/install/#install-on-web-and-mobile)
- [JavaScript Installs](https://onnxruntime.ai/docs/install/#javascript-installs)
- [Install ONNX Runtime Web (browsers)](https://onnxruntime.ai/docs/install/#install-onnx-runtime-web-browsers)
- [Install ONNX Runtime Node.js binding (Node.js)](https://onnxruntime.ai/docs/install/#install-onnx-runtime-nodejs-binding-nodejs)
- [Install ONNX Runtime for React Native](https://onnxruntime.ai/docs/install/#install-onnx-runtime-for-react-native)
- [Install on iOS](https://onnxruntime.ai/docs/install/#install-on-ios)
- [C/C++](https://onnxruntime.ai/docs/install/#cc)
- [Objective-C](https://onnxruntime.ai/docs/install/#objective-c)
- [Custom build](https://onnxruntime.ai/docs/install/#custom-build)
- [Install on Android](https://onnxruntime.ai/docs/install/#install-on-android)
- [Java/Kotlin](https://onnxruntime.ai/docs/install/#javakotlin)
- [C/C++](https://onnxruntime.ai/docs/install/#cc-1)
- [Custom build](https://onnxruntime.ai/docs/install/#custom-build-1)
- [Install for On-Device Training](https://onnxruntime.ai/docs/install/#install-for-on-device-training)
- [Offline Phase - Prepare for Training](https://onnxruntime.ai/docs/install/#offline-phase---prepare-for-training)
- [Training Phase - On-Device Training](https://onnxruntime.ai/docs/install/#training-phase---on-device-training)
- [Large Model Training](https://onnxruntime.ai/docs/install/#large-model-training)
- [Inference install table for all languages](https://onnxruntime.ai/docs/install/#inference-install-table-for-all-languages)
- [Training install table for all languages](https://onnxruntime.ai/docs/install/#training-install-table-for-all-languages)
## Requirements
- All builds require the English language package with the `en_US.UTF-8` locale. On Linux, install the [language-pack-en package](https://packages.ubuntu.com/search?keywords=language-pack-en) by running `locale-gen en_US.UTF-8` and `update-locale LANG=en_US.UTF-8`.
- Windows builds require [Visual C++ 2019 runtime](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170#latest-microsoft-visual-c-redistributable-version). The latest version is recommended.
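As a sanity check, you can verify the locale requirement from Python before installing. The `utf8_locale_ok` helper below is an illustrative sketch, not part of ONNX Runtime:

```python
# Illustrative check (not part of ONNX Runtime): verify that the active
# environment advertises a UTF-8 locale before installing.
import locale
import os

def utf8_locale_ok():
    """Return True if LANG or the preferred encoding indicates UTF-8."""
    lang = os.environ.get("LANG", "")
    encoding = locale.getpreferredencoding(False)
    # Normalize "UTF8" vs "UTF-8" spellings before comparing.
    normalized = lang.lower().replace("utf8", "utf-8")
    return "utf-8" in normalized or encoding.lower().replace("utf8", "utf-8") == "utf-8"

print(utf8_locale_ok())
```

If this prints `False` on Linux, run the `locale-gen`/`update-locale` commands above and restart your shell.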
### CUDA and CuDNN
The ONNX Runtime GPU package requires [CUDA](https://developer.nvidia.com/cuda-toolkit) and [cuDNN](https://developer.nvidia.com/cudnn). Check the [CUDA execution provider requirements](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) for compatible versions of CUDA and cuDNN.
- Zlib is required by cuDNN 9.x on Linux only (zlib is statically linked into the cuDNN 9.x Windows dynamic libraries) and by cuDNN 8.x on both Linux and Windows. Follow the [cuDNN 8.9 installation guide](https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn-890/install-guide/index.html) to install zlib on Linux or Windows.
- On Windows, the CUDA `bin` and cuDNN `bin` directories must be added to the `PATH` environment variable.
- On Linux, the CUDA `lib64` and cuDNN `lib` directories must be added to the `LD_LIBRARY_PATH` environment variable.
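As a sketch of what this means in practice, the snippet below registers the library directories from Python. The paths are hypothetical defaults, so adjust them to your installation; note that on Linux the dynamic loader reads `LD_LIBRARY_PATH` at process start, so it should normally be exported in the shell before launching Python.

```python
# Sketch with assumed default install paths -- adjust to your system.
import os
import sys

CUDA_BIN = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.x\bin"  # hypothetical path
CUDNN_BIN = r"C:\Program Files\NVIDIA\CUDNN\v9.x\bin"                       # hypothetical path

if sys.platform == "win32":
    # Python 3.8+ on Windows does not use PATH for dependent DLL resolution,
    # so register the directories explicitly before importing onnxruntime.
    for directory in (CUDA_BIN, CUDNN_BIN):
        if os.path.isdir(directory):
            os.add_dll_directory(directory)
else:
    # Setting the variable here only affects child processes; the current
    # process has already been resolved, so prefer exporting it in your shell.
    paths = ["/usr/local/cuda/lib64", os.environ.get("LD_LIBRARY_PATH", "")]
    os.environ["LD_LIBRARY_PATH"] = ":".join(p for p in paths if p)

print("library paths configured")
```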
The `onnxruntime-gpu` package can also work alongside PyTorch without a separate manual installation of CUDA or cuDNN. Refer to [Compatibility with PyTorch](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#compatibility-with-pytorch) for more information.
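The environment-variable requirements above can be sanity-checked with a short script. This is a minimal sketch using only the standard library; the CUDA path shown is a common default, not guaranteed on every machine:

```python
import os

def missing_from_search_path(var: str, required_dirs: list[str]) -> list[str]:
    """Return the directories from required_dirs that are not listed on the
    search-path environment variable `var` (e.g. PATH or LD_LIBRARY_PATH)."""
    entries = os.environ.get(var, "").split(os.pathsep)
    return [d for d in required_dirs if d not in entries]

# Example (Linux): warn if the usual CUDA library directory is not visible.
missing = missing_from_search_path("LD_LIBRARY_PATH", ["/usr/local/cuda/lib64"])
if missing:
    print("Add to LD_LIBRARY_PATH:", os.pathsep.join(missing))
```

On Windows, the same check applies with `var="PATH"` and the CUDA/cuDNN `bin` directories.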
## Python Installs
### Install ONNX Runtime CPU
```
pip install onnxruntime
```
#### Install nightly
```
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime
```
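One way to confirm that pip resolved the nightly feed rather than the PyPI release is to look for the PEP 440 `.dev` segment that nightly wheel versions carry. The version strings below are illustrative:

```python
def is_nightly_build(version: str) -> bool:
    # Nightly wheels use PEP 440 dev versions such as "1.20.0.dev20240904001";
    # official releases have no ".dev" segment.
    return ".dev" in version

# With the package installed, check: is_nightly_build(onnxruntime.__version__)
print(is_nightly_build("1.20.0.dev20240904001"))  # True
print(is_nightly_build("1.19.2"))                 # False
```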
### Install ONNX Runtime GPU (DirectML) - Sustained Engineering Mode
**Note**: DirectML is in sustained engineering. For new Windows projects, consider [WinML](https://onnxruntime.ai/docs/install/#winml-recommended-for-windows) instead.
```
pip install onnxruntime-directml
```
#### Install nightly
```
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-directml
```
### Install ONNX Runtime GPU (CUDA or TensorRT)
#### CUDA 12.x
The default CUDA version for [onnxruntime-gpu on PyPI](https://pypi.org/project/onnxruntime-gpu) has been 12.x since version 1.19.0.
```
pip install onnxruntime-gpu
```
Previous versions are available for download: [1.18.1](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.1), [1.18.0](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-12/PyPI/onnxruntime-gpu/overview/1.18.0)
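Once the GPU package is installed, a session can request the CUDA execution provider with an automatic CPU fallback. A minimal sketch; `model.onnx` is a placeholder for your own model file:

```python
def pick_providers(available: list[str]) -> list[str]:
    """Keep the preferred execution providers, in order, that are actually
    registered in this build: CUDA first, with CPU as the fallback."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# Usage (requires onnxruntime-gpu and a model file):
#   import onnxruntime as ort
#   providers = pick_providers(ort.get_available_providers())
#   session = ort.InferenceSession("model.onnx", providers=providers)
```

Passing an explicit `providers` list to `InferenceSession` is required for the GPU package; ordering the list this way keeps the session usable on machines without a CUDA-capable GPU.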
#### Nightly for CUDA 13.x
```
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-13-nightly/pypi/simple/ onnxruntime-gpu
```
#### Nightly for CUDA 12.x
```
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-gpu
```
#### CUDA 11.x
For CUDA 11.x, use the following instructions to install version 1.19.2 or later from the [ORT Azure DevOps feed](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/onnxruntime-cuda-11/PyPI/onnxruntime-gpu/overview).
```
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install onnxruntime-gpu --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/pypi/simple/
```
Previous versions are available for download: [1.18.1](https://pypi.org/project/onnxruntime-gpu/1.18.1/), [1.18.0](https://pypi.org/project/onnxruntime-gpu/1.18.0/)
### Install ONNX Runtime QNN
```
pip install onnxruntime-qnn
```
#### Install nightly
```
pip install coloredlogs flatbuffers numpy packaging protobuf sympy
pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-qnn
```
## C\#/C/C++/WinML Installs
### Install ONNX Runtime
#### Install ONNX Runtime CPU
```
# CPU
dotnet add package Microsoft.ML.OnnxRuntime
```
#### Install ONNX Runtime GPU (CUDA 12.x)
The default CUDA version for ORT is 12.x.
```
# GPU
dotnet add package Microsoft.ML.OnnxRuntime.Gpu
```
#### Install ONNX Runtime GPU (CUDA 11.8)
1. Project Setup
Ensure you have installed the latest version of the [Azure Artifacts Credential Provider](https://github.com/microsoft/artifacts-credprovider#azure-artifacts-credential-provider).
Add a `nuget.config` file to your project, in the same directory as your `.csproj` file.
```
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<clear/>
<add key="onnxruntime-cuda-11"
value="https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-11/nuget/v3/index.json"/>
</packageSources>
</configuration>
```
2. Restore packages
Restore the packages, passing the `--interactive` flag so that `dotnet` can prompt you for credentials:
```
dotnet add package Microsoft.ML.OnnxRuntime.Gpu --interactive
```
Note: You don't need `--interactive` every time. `dotnet` will prompt you to add `--interactive` if it needs updated credentials.
#### DirectML (sustained engineering - use WinML for new projects)
```
dotnet add package Microsoft.ML.OnnxRuntime.DirectML
```
**Note**: DirectML is in sustained engineering. For new Windows projects, use WinML instead.
#### WinML (recommended for Windows)
```
dotnet add package Microsoft.AI.MachineLearning
```
## Install on web and mobile
The pre-built packages have full support for all ONNX opsets and operators.
If the pre-built package is too large, you can create a [custom build](https://onnxruntime.ai/docs/build/custom.html). A custom build can include just the opsets and operators used by your model(s) to reduce the size.
### JavaScript Installs
#### Install ONNX Runtime Web (browsers)
```
# install latest release version
npm install onnxruntime-web
# install nightly build dev version
npm install onnxruntime-web@dev
```
#### Install ONNX Runtime Node.js binding (Node.js)
```
# install latest release version
npm install onnxruntime-node
```
#### Install ONNX Runtime for React Native
```
# install latest release version
npm install onnxruntime-react-native
```
### Install on iOS
In your CocoaPods `Podfile`, add the `onnxruntime-c` or `onnxruntime-objc` pod, depending on which API you want to use.
#### C/C++
```
use_frameworks!
pod 'onnxruntime-c'
```
#### Objective-C
```
use_frameworks!
pod 'onnxruntime-objc'
```
Run `pod install`.
#### Custom build
Refer to the instructions for creating a [custom iOS package](https://onnxruntime.ai/docs/build/custom.html#ios).
### Install on Android
#### Java/Kotlin
In your Android Studio project, make the following changes:
1. build.gradle (Project):
```
repositories {
mavenCentral()
}
```
2. build.gradle (Module):
```
dependencies {
implementation 'com.microsoft.onnxruntime:onnxruntime-android:latest.release'
}
```
#### C/C++
Download the [onnxruntime-android](https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-android) AAR hosted at Maven Central, change the file extension from `.aar` to `.zip`, and unzip it. Include the header files from the `headers` folder, and the relevant `libonnxruntime.so` dynamic library from the `jni` folder, in your NDK project.
#### Custom build
Refer to the instructions for creating a [custom Android package](https://onnxruntime.ai/docs/build/custom.html#android).
## Install for On-Device Training
Unless stated otherwise, the installation instructions in this section refer to pre-built packages designed to perform on-device training.
If the pre-built training package supports your model but is too large, you can create a [custom training build](https://onnxruntime.ai/docs/build/custom.html).
### Offline Phase - Prepare for Training
```
python -m pip install cerberus flatbuffers h5py "numpy>=1.16.6" onnx packaging protobuf sympy "setuptools>=41.4.0"
pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training-cpu
```
### Training Phase - On-Device Training
| Device | Language | PackageName | Installation Instructions |
|---|---|---|---|
| Windows | C, C++, C\# | [Microsoft.ML.OnnxRuntime.Training](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | `dotnet add package Microsoft.ML.OnnxRuntime.Training` |
| Linux | C, C++ | [onnxruntime-training-linux\*.tgz](https://github.com/microsoft/onnxruntime/releases) | Download the `*.tgz` file from [here](https://github.com/microsoft/onnxruntime/releases). Extract it. Move and include the header files in the `include` directory. Move the `libonnxruntime.so` dynamic library to a desired path and include it. |
| | Python | [onnxruntime-training](https://pypi.org/project/onnxruntime-training/) | `pip install onnxruntime-training` |
| Android | C, C++ | [onnxruntime-training-android](https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-training-android) | Download the [onnxruntime-training-android (full package)](https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-android) AAR hosted at Maven Central. Change the file extension from `.aar` to `.zip`, and unzip it. Include the header files from the `headers` folder. Include the relevant `libonnxruntime.so` dynamic library from the `jni` folder in your NDK project. |
| | Java/Kotlin | [onnxruntime-training-android](https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-android) | In your Android Studio Project, make the following changes to: build.gradle (Project): ` repositories { mavenCentral() }` build.gradle (Module): ` dependencies { implementation 'com.microsoft.onnxruntime:onnxruntime-training-android:latest.release' }` |
| iOS | C, C++ | **CocoaPods: onnxruntime-training-c** | In your CocoaPods `Podfile`, add the `onnxruntime-training-c` pod, then run `pod install`. |
| | Objective-C | **CocoaPods: onnxruntime-training-objc** | In your CocoaPods `Podfile`, add the `onnxruntime-training-objc` pod, then run `pod install`. |
| Web | JavaScript, TypeScript | onnxruntime-web | Use either `import * as ort from 'onnxruntime-web/training';` or `const ort = require('onnxruntime-web/training');` |
## Large Model Training
```
pip install torch-ort
python -m torch_ort.configure
```
**Note**: This installs the default versions of the `torch-ort` and `onnxruntime-training` packages, which are mapped to specific versions of the CUDA libraries. Refer to the install options on [onnxruntime.ai](https://onnxruntime.ai/).
## Inference install table for all languages
The table below lists the build variants available as officially supported packages. Others can be [built from source](https://onnxruntime.ai/docs/build/inferencing) from each [release branch](https://github.com/microsoft/onnxruntime/tags).
In addition to the general [requirements](https://onnxruntime.ai/docs/install/#requirements), note the additional requirements and dependencies in the table below:
| | Official build | Nightly build | Reqs |
|---|---|---|---|
| Python | If using pip, run `pip install --upgrade pip` prior to downloading. | | |
| | CPU: [**onnxruntime**](https://pypi.org/project/onnxruntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime/overview) | |
| | GPU (CUDA/TensorRT) for CUDA 12.x: [**onnxruntime-gpu**](https://pypi.org/project/onnxruntime-gpu) | [onnxruntime-gpu (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-gpu/overview/) | [View](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) |
| | GPU (DirectML) **sustained engineering**: [**onnxruntime-directml**](https://pypi.org/project/onnxruntime-directml/) | [onnxruntime-directml (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/onnxruntime-directml/overview/) | [View](https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider.html#requirements) |
| | OpenVINO: [**intel/onnxruntime**](https://github.com/intel/onnxruntime/releases/latest) - *Intel managed* | | [View](https://onnxruntime.ai/docs/build/eps.html#openvino) |
| | TensorRT (Jetson): [**Jetson Zoo**](https://elinux.org/Jetson_Zoo#ONNX_Runtime) - *NVIDIA managed* | | |
| | Azure (Cloud): [**onnxruntime-azure**](https://pypi.org/project/onnxruntime-azure/) | | |
| C\#/C/C++ | CPU: [**Microsoft.ML.OnnxRuntime**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | |
| | GPU (CUDA/TensorRT): [**Microsoft.ML.OnnxRuntime.Gpu**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.gpu) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly) | [View](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider) |
| | GPU (DirectML) **sustained engineering**: [**Microsoft.ML.OnnxRuntime.DirectML**](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview) | [View](https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider) |
| WinML **recommended for Windows** | [**Microsoft.AI.MachineLearning**](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) | [onnxruntime (nightly)](https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/NuGet/Microsoft.AI.MachineLearning/overview) | [View](https://docs.microsoft.com/en-us/windows/ai/windows-ml/port-app-to-nuget#prerequisites) |
| Java | CPU: [**com.microsoft.onnxruntime:onnxruntime**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) | | [View](https://onnxruntime.ai/docs/api/java) |
| | GPU (CUDA/TensorRT): [**com.microsoft.onnxruntime:onnxruntime\_gpu**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu) | | [View](https://onnxruntime.ai/docs/api/java) |
| Android | [**com.microsoft.onnxruntime:onnxruntime-android**](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime-android) | | [View](https://onnxruntime.ai/docs/install/#install-on-android) |
| iOS (C/C++) | CocoaPods: **onnxruntime-c** | | [View](https://onnxruntime.ai/docs/install/#install-on-ios) |
| Objective-C | CocoaPods: **onnxruntime-objc** | | [View](https://onnxruntime.ai/docs/install/#install-on-ios) |
| React Native | [**onnxruntime-react-native** (latest)](https://www.npmjs.com/package/onnxruntime-react-native) | [onnxruntime-react-native (dev)](https://www.npmjs.com/package/onnxruntime-react-native?activeTab=versions) | [View](https://onnxruntime.ai/docs/api/js) |
| Node.js | [**onnxruntime-node** (latest)](https://www.npmjs.com/package/onnxruntime-node) | [onnxruntime-node (dev)](https://www.npmjs.com/package/onnxruntime-node?activeTab=versions) | [View](https://onnxruntime.ai/docs/api/js) |
| Web | [**onnxruntime-web** (latest)](https://www.npmjs.com/package/onnxruntime-web) | [onnxruntime-web (dev)](https://www.npmjs.com/package/onnxruntime-web?activeTab=versions) | [View](https://onnxruntime.ai/docs/api/js) |
*Note: Nightly builds created from the main branch are available for testing newer changes between official releases. Please use these at your own risk. We strongly advise against deploying these to production workloads as support is limited for nightly builds.*
## Training install table for all languages
Refer to the [Optimized Training](https://onnxruntime.ai/getting-started) getting-started page for more fine-grained installation instructions.
***
For documentation questions, please [file an issue](https://github.com/microsoft/onnxruntime/issues/new?assignees=&labels=documentation&projects=&template=02-documentation.yml&title=%5BDocumentation%5D+).
| iOS (C/C++) | CocoaPods: **onnxruntime-c** | | [View](https://onnxruntime.ai/docs/install/#install-on-ios) |
| Objective-C | CocoaPods: **onnxruntime-objc** | | [View](https://onnxruntime.ai/docs/install/#install-on-ios) |
| React Native | [**onnxruntime-react-native** (latest)](https://www.npmjs.com/package/onnxruntime-react-native) | [onnxruntime-react-native (dev)](https://www.npmjs.com/package/onnxruntime-react-native?activeTab=versions) | [View](https://onnxruntime.ai/docs/api/js) |
| Node.js | [**onnxruntime-node** (latest)](https://www.npmjs.com/package/onnxruntime-node) | [onnxruntime-node (dev)](https://www.npmjs.com/package/onnxruntime-node?activeTab=versions) | [View](https://onnxruntime.ai/docs/api/js) |
| Web | [**onnxruntime-web** (latest)](https://www.npmjs.com/package/onnxruntime-web) | [onnxruntime-web (dev)](https://www.npmjs.com/package/onnxruntime-web?activeTab=versions) | [View](https://onnxruntime.ai/docs/api/js) |
*Note: Nightly builds created from the main branch are available for testing newer changes between official releases. Please use these at your own risk. We strongly advise against deploying these to production workloads as support is limited for nightly builds.*
## Training install table for all languages
Refer to the [Optimized Training](https://onnxruntime.ai/getting-started) getting started page for more fine-grained installation instructions.