⚠️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | FAIL | meta_canonical IS NULL OR = '' OR = src_unparsed | co,huggingface!/docs/transformers/installation s443 |
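The canonical filter fails because the declared canonical differs from the crawled URL in the same reversed-host "unparsed" form (the canonical omits the `/en/` segment). A minimal sketch of such a check, with the rule simplified to exact string equality (an assumption, not the crawler's actual code):

```python
def canonical_filter_passes(meta_canonical, src_unparsed):
    """PASS when no canonical is declared, or when it matches the crawled URL.
    Simplified: a real crawler would normalize both URLs before comparing."""
    if meta_canonical is None or meta_canonical == "":
        return True
    return meta_canonical == src_unparsed

# This record fails because the canonical omits the /en/ segment:
print(canonical_filter_passes(
    "co,huggingface!/docs/transformers/installation s443",
    "co,huggingface!/docs/transformers/en/installation s443",
))  # False
```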
| Property | Value |
|---|---|
| URL | https://huggingface.co/docs/transformers/en/installation |
| Last Crawled | 2026-04-06 08:17:38 (1 day ago) |
| First Indexed | 2023-02-07 01:57:45 (3 years ago) |
| HTTP Status Code | 200 |
| Meta Title | Installation · Hugging Face |
| Meta Description | We're on a journey to advance and democratize artificial intelligence through open source and open science. |
| Meta Canonical | co,huggingface!/docs/transformers/installation s443 |
| Boilerpipe Text | Transformers works with PyTorch. It has been tested on Python 3.10+ and PyTorch 2.4+.
Virtual environment
uv is an extremely fast Rust-based Python package and project manager. It requires a virtual environment by default to manage different projects, which avoids compatibility issues between dependencies. It can be used as a drop-in replacement for pip, but if you prefer to use pip, remove uv from the commands below.
Refer to the uv installation docs to install uv.
Create a virtual environment to install Transformers in.
uv venv .env
source .env/bin/activate
Python
Install Transformers with the following command. uv is a fast Rust-based Python package and project manager.
uv pip install transformers
For GPU acceleration, install the appropriate CUDA drivers for PyTorch. Run the command below to check if your system detects an NVIDIA GPU.
nvidia-smi
To install a CPU-only version of Transformers, run the following command.
uv pip install torch --index-url https://download.pytorch.org/whl/cpu
uv pip install transformers
Test whether the install was successful with the following command. It should return a label and score for the provided text.
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('hugging face is the best'))"
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
Source install
Installing from source installs the latest version rather than the stable version of the library. It ensures you have the most up-to-date changes in Transformers and it's useful for experimenting with the latest features or fixing a bug that hasn't been officially released in the stable version yet. The downside is that the latest version may not always be stable. If you encounter any problems, please open a GitHub Issue so we can fix it as soon as possible.
Install from source with the following command.
uv pip install git+https://github.com/huggingface/transformers
Check if the install was successful with the command below. It should return a label and score for the provided text.
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('hugging face is the best'))"
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
Editable install
An editable install is useful if you're developing locally with Transformers. It links your local copy of Transformers to the Transformers repository instead of copying the files. The files are added to Python's import path.
git clone https://github.com/huggingface/transformers.git
cd transformers
uv pip install -e .
You must keep the local Transformers folder to keep using it.
Update your local version of Transformers with the latest changes in the main repository with the following command.
cd ~/transformers/
git pull
conda
conda is a language-agnostic package manager. Install Transformers from the conda-forge channel in your newly created virtual environment.
conda install conda-forge::transformers
Set up
After installation, you can configure the Transformers cache location or set up the library for offline usage.
Cache directory
When you load a pretrained model with from_pretrained(), the model is downloaded from the Hub and locally cached. Every time you load a model, it checks whether the cached model is up-to-date. If it's the same, then the local model is loaded. If it's not the same, the newer model is downloaded and cached.
The default directory given by the shell environment variable HF_HUB_CACHE is ~/.cache/huggingface/hub. On Windows, the default directory is C:\Users\username\.cache\huggingface\hub.
Cache a model in a different directory by changing the path in the following shell environment variables (listed by priority).
HF_HUB_CACHE (default)
HF_HOME
XDG_CACHE_HOME + /huggingface (only if HF_HOME is not set)
Offline mode
Using Transformers in an offline or firewalled environment requires downloading and caching the files ahead of time. Download a model repository from the Hub with the snapshot_download method.
Refer to the Download files from the Hub guide for more options for downloading files from the Hub. You can download files from specific revisions, download from the CLI, and even filter which files to download from a repository.
from huggingface_hub import snapshot_download
snapshot_download(repo_id="meta-llama/Llama-2-7b-hf", repo_type="model")
Set the environment variable HF_HUB_OFFLINE=1 to prevent HTTP calls to the Hub when loading a model.
HF_HUB_OFFLINE=1 \
python examples/pytorch/language-modeling/run_clm.py --model_name_or_path meta-llama/Llama-2-7b-hf --dataset_name wikitext ...
Another option for only loading cached files is to set local_files_only=True in from_pretrained().
from transformers import LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained("./path/to/local/directory", local_files_only=True)
Update on GitHub |
| Markdown | # Installation
Transformers works with [PyTorch](https://pytorch.org/get-started/locally/). It has been tested on Python 3.10+ and PyTorch 2.4+.
## Virtual environment
[uv](https://docs.astral.sh/uv/) is an extremely fast Rust-based Python package and project manager. It requires a [virtual environment](https://docs.astral.sh/uv/pip/environments/) by default to manage different projects, which avoids compatibility issues between dependencies.
It can be used as a drop-in replacement for [pip](https://pip.pypa.io/en/stable/), but if you prefer to use pip, remove `uv` from the commands below.
> Refer to the uv [installation](https://docs.astral.sh/uv/guides/install-python/) docs to install uv.
Create a virtual environment to install Transformers in.
```
uv venv .env
source .env/bin/activate
```
## Python
Install Transformers with the following command.
[uv](https://docs.astral.sh/uv/) is a fast Rust-based Python package and project manager.
```
uv pip install transformers
```
For GPU acceleration, install the appropriate CUDA drivers for [PyTorch](https://pytorch.org/get-started/locally).
Run the command below to check if your system detects an NVIDIA GPU.
```
nvidia-smi
```
To install a CPU-only version of Transformers, run the following command.
```
uv pip install torch --index-url https://download.pytorch.org/whl/cpu
uv pip install transformers
```
Test whether the install was successful with the following command. It should return a label and score for the provided text.
```
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('hugging face is the best'))"
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
### Source install
Installing from source installs the *latest* version rather than the *stable* version of the library. It ensures you have the most up-to-date changes in Transformers and it's useful for experimenting with the latest features or fixing a bug that hasn't been officially released in the stable version yet.
The downside is that the latest version may not always be stable. If you encounter any problems, please open a [GitHub Issue](https://github.com/huggingface/transformers/issues) so we can fix it as soon as possible.
Install from source with the following command.
```
uv pip install git+https://github.com/huggingface/transformers
```
Check if the install was successful with the command below. It should return a label and score for the provided text.
```
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('hugging face is the best'))"
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
### Editable install
An [editable install](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs) is useful if you're developing locally with Transformers. It links your local copy of Transformers to the Transformers [repository](https://github.com/huggingface/transformers) instead of copying the files. The files are added to Python's import path.
```
git clone https://github.com/huggingface/transformers.git
cd transformers
uv pip install -e .
```
> You must keep the local Transformers folder to keep using it.
Update your local version of Transformers with the latest changes in the main repository with the following command.
```
cd ~/transformers/
git pull
```
## conda
[conda](https://docs.conda.io/projects/conda/en/stable/) is a language-agnostic package manager. Install Transformers from the [conda-forge](https://anaconda.org/conda-forge/transformers) channel in your newly created virtual environment.
```
conda install conda-forge::transformers
```
## Set up
After installation, you can configure the Transformers cache location or set up the library for offline usage.
### Cache directory
When you load a pretrained model with [from\_pretrained()](https://huggingface.co/docs/transformers/v5.5.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained), the model is downloaded from the Hub and locally cached.
Every time you load a model, it checks whether the cached model is up-to-date. If it's the same, then the local model is loaded. If it's not the same, the newer model is downloaded and cached.
The default directory given by the shell environment variable `HF_HUB_CACHE` is `~/.cache/huggingface/hub`. On Windows, the default directory is `C:\Users\username\.cache\huggingface\hub`.
Cache a model in a different directory by changing the path in the following shell environment variables (listed by priority).
1. [HF\_HUB\_CACHE](https://hf.co/docs/huggingface_hub/package_reference/environment_variables#hfhubcache) (default)
2. [HF\_HOME](https://hf.co/docs/huggingface_hub/package_reference/environment_variables#hfhome)
3. [XDG\_CACHE\_HOME](https://hf.co/docs/huggingface_hub/package_reference/environment_variables#xdgcachehome) + `/huggingface` (only if `HF_HOME` is not set)
### Offline mode
Using Transformers in an offline or firewalled environment requires downloading and caching the files ahead of time. Download a model repository from the Hub with the `snapshot_download` method.
> Refer to the [Download files from the Hub](https://hf.co/docs/huggingface_hub/guides/download) guide for more options for downloading files from the Hub. You can download files from specific revisions, download from the CLI, and even filter which files to download from a repository.
```
from huggingface_hub import snapshot_download
snapshot_download(repo_id="meta-llama/Llama-2-7b-hf", repo_type="model")
```
Set the environment variable `HF_HUB_OFFLINE=1` to prevent HTTP calls to the Hub when loading a model.
```
HF_HUB_OFFLINE=1 \
python examples/pytorch/language-modeling/run_clm.py --model_name_or_path meta-llama/Llama-2-7b-hf --dataset_name wikitext ...
```
Another option for only loading cached files is to set `local_files_only=True` in [from\_pretrained()](https://huggingface.co/docs/transformers/v5.5.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained).
```
from transformers import LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained("./path/to/local/directory", local_files_only=True)
```
[Update on GitHub](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md) |
| Readable Markdown | Transformers works with [PyTorch](https://pytorch.org/get-started/locally/). It has been tested on Python 3.10+ and PyTorch 2.4+.
## Virtual environment
[uv](https://docs.astral.sh/uv/) is an extremely fast Rust-based Python package and project manager. It requires a [virtual environment](https://docs.astral.sh/uv/pip/environments/) by default to manage different projects, which avoids compatibility issues between dependencies.
It can be used as a drop-in replacement for [pip](https://pip.pypa.io/en/stable/), but if you prefer to use pip, remove `uv` from the commands below.
> Refer to the uv [installation](https://docs.astral.sh/uv/guides/install-python/) docs to install uv.
Create a virtual environment to install Transformers in.
```
uv venv .env
source .env/bin/activate
```
## Python
Install Transformers with the following command.
[uv](https://docs.astral.sh/uv/) is a fast Rust-based Python package and project manager.
```
uv pip install transformers
```
For GPU acceleration, install the appropriate CUDA drivers for [PyTorch](https://pytorch.org/get-started/locally).
Run the command below to check if your system detects an NVIDIA GPU.
```
nvidia-smi
```
To install a CPU-only version of Transformers, run the following command.
```
uv pip install torch --index-url https://download.pytorch.org/whl/cpu
uv pip install transformers
```
Test whether the install was successful with the following command. It should return a label and score for the provided text.
```
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('hugging face is the best'))"
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
### Source install
Installing from source installs the *latest* version rather than the *stable* version of the library. It ensures you have the most up-to-date changes in Transformers and it's useful for experimenting with the latest features or fixing a bug that hasn't been officially released in the stable version yet.
The downside is that the latest version may not always be stable. If you encounter any problems, please open a [GitHub Issue](https://github.com/huggingface/transformers/issues) so we can fix it as soon as possible.
Install from source with the following command.
```
uv pip install git+https://github.com/huggingface/transformers
```
Check if the install was successful with the command below. It should return a label and score for the provided text.
```
python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('hugging face is the best'))"
[{'label': 'POSITIVE', 'score': 0.9998704791069031}]
```
### Editable install
An [editable install](https://pip.pypa.io/en/stable/topics/local-project-installs/#editable-installs) is useful if you're developing locally with Transformers. It links your local copy of Transformers to the Transformers [repository](https://github.com/huggingface/transformers) instead of copying the files. The files are added to Python's import path.
```
git clone https://github.com/huggingface/transformers.git
cd transformers
uv pip install -e .
```
> You must keep the local Transformers folder to keep using it.
Update your local version of Transformers with the latest changes in the main repository with the following command.
```
cd ~/transformers/
git pull
```
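After pulling, you can verify which copy of the library Python resolves. A small sketch using only the standard library (nothing here is specific to Transformers; the package name is just an argument):

```python
from importlib import metadata, util

def describe(package):
    """Return (version, file path) for an installed package,
    or (None, None) if it is not installed."""
    try:
        version = metadata.version(package)
    except metadata.PackageNotFoundError:
        return None, None
    spec = util.find_spec(package)
    return version, spec.origin if spec else None

# For an editable install, the path should point into your cloned folder
# rather than site-packages:
print(describe("transformers"))
```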
## conda
[conda](https://docs.conda.io/projects/conda/en/stable/) is a language-agnostic package manager. Install Transformers from the [conda-forge](https://anaconda.org/conda-forge/transformers) channel in your newly created virtual environment.
```
conda install conda-forge::transformers
```
## Set up
After installation, you can configure the Transformers cache location or set up the library for offline usage.
### Cache directory
When you load a pretrained model with [from\_pretrained()](https://huggingface.co/docs/transformers/v5.5.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained), the model is downloaded from the Hub and locally cached.
Every time you load a model, it checks whether the cached model is up-to-date. If it's the same, then the local model is loaded. If it's not the same, the newer model is downloaded and cached.
The default directory given by the shell environment variable `HF_HUB_CACHE` is `~/.cache/huggingface/hub`. On Windows, the default directory is `C:\Users\username\.cache\huggingface\hub`.
Cache a model in a different directory by changing the path in the following shell environment variables (listed by priority).
1. [HF\_HUB\_CACHE](https://hf.co/docs/huggingface_hub/package_reference/environment_variables#hfhubcache) (default)
2. [HF\_HOME](https://hf.co/docs/huggingface_hub/package_reference/environment_variables#hfhome)
3. [XDG\_CACHE\_HOME](https://hf.co/docs/huggingface_hub/package_reference/environment_variables#xdgcachehome) + `/huggingface` (only if `HF_HOME` is not set)
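The priority list above can be expressed as a small resolver. This is a simplified sketch of the documented lookup order, not `huggingface_hub`'s actual implementation (in particular, the exact `hub` subpaths are assumptions based on the defaults above):

```python
from pathlib import Path

def resolve_hub_cache(env):
    """Resolve the Hub cache directory from a mapping of environment
    variables, following the documented priority: HF_HUB_CACHE, then
    HF_HOME, then XDG_CACHE_HOME/huggingface (only when HF_HOME is
    unset), then the built-in default."""
    if env.get("HF_HUB_CACHE"):
        return Path(env["HF_HUB_CACHE"])
    if env.get("HF_HOME"):
        return Path(env["HF_HOME"]) / "hub"
    if env.get("XDG_CACHE_HOME"):
        return Path(env["XDG_CACHE_HOME"]) / "huggingface" / "hub"
    return Path.home() / ".cache" / "huggingface" / "hub"

print(resolve_hub_cache({"HF_HOME": "/data/hf"}))
```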
### Offline mode
Using Transformers in an offline or firewalled environment requires downloading and caching the files ahead of time. Download a model repository from the Hub with the `snapshot_download` method.
> Refer to the [Download files from the Hub](https://hf.co/docs/huggingface_hub/guides/download) guide for more options for downloading files from the Hub. You can download files from specific revisions, download from the CLI, and even filter which files to download from a repository.
```
from huggingface_hub import snapshot_download
snapshot_download(repo_id="meta-llama/Llama-2-7b-hf", repo_type="model")
```
Set the environment variable `HF_HUB_OFFLINE=1` to prevent HTTP calls to the Hub when loading a model.
```
HF_HUB_OFFLINE=1 \
python examples/pytorch/language-modeling/run_clm.py --model_name_or_path meta-llama/Llama-2-7b-hf --dataset_name wikitext ...
```
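The same switch can be flipped from Python, as long as it happens before any Hub-aware import reads it. A minimal sketch (the helper name is illustrative, not part of any library API):

```python
import os

def enable_hub_offline(env=None):
    """Set HF_HUB_OFFLINE=1 in the given environment mapping
    (os.environ by default). This must run before importing
    transformers or huggingface_hub, which read the flag at
    import time or first use."""
    env = os.environ if env is None else env
    env["HF_HUB_OFFLINE"] = "1"
    return env["HF_HUB_OFFLINE"]

print(enable_hub_offline({}))  # 1
```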
Another option for only loading cached files is to set `local_files_only=True` in [from\_pretrained()](https://huggingface.co/docs/transformers/v5.5.0/en/main_classes/model#transformers.PreTrainedModel.from_pretrained).
```
from transformers import LlamaForCausalLM
model = LlamaForCausalLM.from_pretrained("./path/to/local/directory", local_files_only=True)
```
[Update on GitHub](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md) |
| Shard | 70 (laksa) |
| Root Hash | 18270453918568933270 |
| Unparsed URL | co,huggingface!/docs/transformers/en/installation s443 |