🕷️ Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 70 (from laksa134)
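The shard assignment can be sketched as a hash-mod scheme. This is a minimal illustration only: the service's real hash function and shard count are internal and not shown by the inspector, so the names and constants below are assumptions.

```python
import hashlib

NUM_SHARDS = 128  # assumption: the real shard count is not shown by the inspector

def root_hash_for(host: str) -> int:
    # Illustrative stand-in for the crawler's internal hash: the first
    # 8 bytes of SHA-256 over the host, read as an unsigned 64-bit integer.
    digest = hashlib.sha256(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")

def shard_for(root_hash: int, num_shards: int = NUM_SHARDS) -> int:
    # One plausible scheme: 64-bit root hash modulo the shard count.
    # The inspector reports shard 70 for this URL; reproducing that exact
    # number would require the service's actual hash and shard count.
    return root_hash % num_shards
```

The point of the scheme is that the shard is a pure function of the host, so every lookup for a URL lands on the same shard server.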

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

ℹ️ Skipped: page is already crawled
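Taken together, steps 1 through 5 describe a fixed lookup order, with the seen check short-circuited once the page is known to be crawled. A sketch of that order, assuming a hypothetical per-shard key-value client (the stub below is filled with this page's reported values, not real service calls):

```python
class StubStore:
    """Hypothetical per-shard store client, stubbed with this page's values."""

    def shard_for(self, url: str) -> int:
        return 70  # matches the shard reported above

    def get(self, shard: int, table: str, url: str):
        # Stubbed responses standing in for the raw queries above.
        return {"crawled": True, "robots": "allowed", "spam_ban": False}[table]

def inspect_url(url: str, store) -> dict:
    shard = store.shard_for(url)                         # 1. shard calculation
    report = {"shard": shard}
    report["crawled"] = store.get(shard, "crawled", url)    # 2. crawled status check
    report["robots"] = store.get(shard, "robots", url)      # 3. robots.txt check
    report["spam_ban"] = store.get(shard, "spam_ban", url)  # 4. spam/ban check
    if not report["crawled"]:                               # 5. seen status check,
        report["seen"] = store.get(shard, "seen", url)      #    skipped when crawled
    return report
```

With the stub above, `inspect_url` returns shard 70, crawled status, and no `seen` entry, mirroring the skipped step 5.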

📄
INDEXABLE
✅
CRAWLED
17 hours ago
🤖
ROBOTS ALLOWED

Page Info Filters

Filter          Status  Condition                                          Details
HTTP status     PASS    download_http_code = 200                           HTTP 200
Age cutoff      PASS    download_stamp > now() - 6 MONTH                   0 months ago
History drop    PASS    isNull(history_drop_reason)                        No drop reason
Spam/ban        PASS    fh_dont_index != 1 AND ml_spam_score = 0           ml_spam_score=0
Canonical       PASS    meta_canonical IS NULL OR = '' OR = src_unparsed   Not set
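The five filters can be re-evaluated locally against a page record. A sketch, assuming record field names that mirror the conditions in the table (the field names are taken from the conditions, not from a documented schema):

```python
from datetime import datetime, timedelta

def page_info_filters(page: dict, now: datetime) -> dict:
    # Maps each filter name to True (PASS) / False (FAIL), following the
    # conditions listed in the table; field names are assumptions.
    return {
        "HTTP status": page["download_http_code"] == 200,
        "Age cutoff": page["download_stamp"] > now - timedelta(days=183),  # ~6 months
        "History drop": page["history_drop_reason"] is None,
        "Spam/ban": page["fh_dont_index"] != 1 and page["ml_spam_score"] == 0,
        "Canonical": page["meta_canonical"] in (None, "", page["src_unparsed"]),
    }
```

With the values from the Page Details section, every filter returns True, which is consistent with the page being marked INDEXABLE.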

Page Details

Property          Value
URL               https://huggingface.co/docs/transformers/index
Last Crawled      2026-04-09 22:05:20 (17 hours ago)
First Indexed     2021-12-02 23:53:46 (4 years ago)
HTTP Status Code  200
Meta Title        Transformers · Hugging Face
Meta Description  We’re on a journey to advance and democratize artificial intelligence through open source and open science.
Meta Canonical    null
Boilerpipe Text
Transformers Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training. It centralizes the model definition so that this definition is agreed upon across the ecosystem. transformers is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, …), inference engines (vLLM, SGLang, TGI, …), and adjacent modeling libraries (llama.cpp, mlx, …) which leverage the model definition from transformers . We pledge to help support new state-of-the-art models and democratize their usage by having their model definition be simple, customizable, and efficient. There are over 1M+ Transformers model checkpoints on the Hugging Face Hub you can use. Explore the Hub today to find a model and use Transformers to help you get started right away. Explore the Models Timeline to discover the latest text, vision, audio and multimodal model architectures in Transformers. Features Transformers provides everything you need for inference or training with state-of-the-art pretrained models. Some of the main features include: Pipeline : Simple and optimized inference class for many machine learning tasks like text generation, image segmentation, automatic speech recognition, document question answering, and more. Trainer : A comprehensive trainer that supports features such as mixed precision, torch.compile, and FlashAttention for training and distributed training for PyTorch models. generate : Fast text generation with large language models (LLMs) and vision language models (VLMs), including support for streaming and multiple decoding strategies. Design Read our Philosophy to learn more about Transformers’ design principles. Transformers is designed for developers and machine learning engineers and researchers. 
Its main design principles are: Fast and easy to use: Every model is implemented from only three main classes (configuration, model, and preprocessor) and can be quickly used for inference or training with Pipeline or Trainer . Pretrained models: Reduce your carbon footprint, compute cost and time by using a pretrained model instead of training an entirely new one. Each pretrained model is reproduced as closely as possible to the original model and offers state-of-the-art performance. Learn If you’re new to Transformers or want to learn more about transformer models, we recommend starting with the LLM course . This comprehensive course covers everything from the fundamentals of how transformer models work to practical applications across various tasks. You’ll learn the complete workflow, from curating high-quality datasets to fine-tuning large language models and implementing reasoning capabilities. The course contains both theoretical and hands-on exercises to build a solid foundational knowledge of transformer models as you learn. Update on GitHub
Markdown
[![Hugging Face's logo](https://huggingface.co/front/assets/huggingface_logo-noborder.svg) Hugging Face](https://huggingface.co/) - [Models](https://huggingface.co/models) - [Datasets](https://huggingface.co/datasets) - [Spaces](https://huggingface.co/spaces) - [Buckets new](https://huggingface.co/storage) - [Docs](https://huggingface.co/docs) - [Enterprise](https://huggingface.co/enterprise) - [Pricing](https://huggingface.co/pricing) - *** - [Log In](https://huggingface.co/login) - [Sign Up](https://huggingface.co/join) Transformers documentation Transformers # Transformers Search documentation Get started [Transformers](https://huggingface.co/docs/transformers/index)[Installation](https://huggingface.co/docs/transformers/installation)[Quickstart](https://huggingface.co/docs/transformers/quicktour) Base classes Models Preprocessors Inference Pipeline API Generate API Optimization Chat with models Serving Training Get started Customization [Parameter-efficient fine-tuning](https://huggingface.co/docs/transformers/peft) Distributed training Hardware Quantization Ecosystem integrations Resources Contribute API ![Hugging Face's logo](https://huggingface.co/front/assets/huggingface_logo-noborder.svg) Join the Hugging Face community and get access to the augmented documentation experience Collaborate on models, datasets and Spaces Faster examples with accelerated inference Switch between documentation themes [Sign Up](https://huggingface.co/join) to get started Copy page # Transformers ### ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/transformers_as_a_model_definition.png) Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training. It centralizes the model definition so that this definition is agreed upon across the ecosystem. 
`transformers` is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, …), inference engines (vLLM, SGLang, TGI, …), and adjacent modeling libraries (llama.cpp, mlx, …) which leverage the model definition from `transformers`. We pledge to help support new state-of-the-art models and democratize their usage by having their model definition be simple, customizable, and efficient. There are over 1M+ Transformers [model checkpoints](https://huggingface.co/models?library=transformers&sort=trending) on the [Hugging Face Hub](https://huggingface.com/models) you can use. Explore the [Hub](https://huggingface.com/) today to find a model and use Transformers to help you get started right away. Explore the [Models Timeline](https://huggingface.co/docs/transformers/models_timeline) to discover the latest text, vision, audio and multimodal model architectures in Transformers. ## Features Transformers provides everything you need for inference or training with state-of-the-art pretrained models. Some of the main features include: - [Pipeline](https://huggingface.co/docs/transformers/pipeline_tutorial): Simple and optimized inference class for many machine learning tasks like text generation, image segmentation, automatic speech recognition, document question answering, and more. - [Trainer](https://huggingface.co/docs/transformers/trainer): A comprehensive trainer that supports features such as mixed precision, torch.compile, and FlashAttention for training and distributed training for PyTorch models. - [generate](https://huggingface.co/docs/transformers/llm_tutorial): Fast text generation with large language models (LLMs) and vision language models (VLMs), including support for streaming and multiple decoding strategies. 
## Design > Read our [Philosophy](https://huggingface.co/docs/transformers/philosophy) to learn more about Transformers’ design principles. Transformers is designed for developers and machine learning engineers and researchers. Its main design principles are: 1. Fast and easy to use: Every model is implemented from only three main classes (configuration, model, and preprocessor) and can be quickly used for inference or training with [Pipeline](https://huggingface.co/docs/transformers/v5.5.3/en/main_classes/pipelines#transformers.Pipeline) or [Trainer](https://huggingface.co/docs/transformers/v5.5.3/en/main_classes/trainer#transformers.Trainer). 2. Pretrained models: Reduce your carbon footprint, compute cost and time by using a pretrained model instead of training an entirely new one. Each pretrained model is reproduced as closely as possible to the original model and offers state-of-the-art performance. [![HuggingFace Expert Acceleration Program](https://hf.co/datasets/huggingface/documentation-images/resolve/81d7d9201fd4ceb537fc4cebc22c29c37a2ed216/transformers/transformers-index.png)](https://huggingface.co/support) ## Learn If you’re new to Transformers or want to learn more about transformer models, we recommend starting with the [LLM course](https://huggingface.co/learn/llm-course/chapter1/1?fw=pt). This comprehensive course covers everything from the fundamentals of how transformer models work to practical applications across various tasks. You’ll learn the complete workflow, from curating high-quality datasets to fine-tuning large language models and implementing reasoning capabilities. The course contains both theoretical and hands-on exercises to build a solid foundational knowledge of transformer models as you learn. 
[Update on GitHub](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) [Installation→](https://huggingface.co/docs/transformers/installation) [Transformers](https://huggingface.co/docs/transformers/index#transformers) [Features](https://huggingface.co/docs/transformers/index#features) [Design](https://huggingface.co/docs/transformers/index#design) [Learn](https://huggingface.co/docs/transformers/index#learn)
Readable Markdown
## Transformers ### ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/transformers_as_a_model_definition.png) Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal models, for both inference and training. It centralizes the model definition so that this definition is agreed upon across the ecosystem. `transformers` is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, …), inference engines (vLLM, SGLang, TGI, …), and adjacent modeling libraries (llama.cpp, mlx, …) which leverage the model definition from `transformers`. We pledge to help support new state-of-the-art models and democratize their usage by having their model definition be simple, customizable, and efficient. There are over 1M+ Transformers [model checkpoints](https://huggingface.co/models?library=transformers&sort=trending) on the [Hugging Face Hub](https://huggingface.com/models) you can use. Explore the [Hub](https://huggingface.com/) today to find a model and use Transformers to help you get started right away. Explore the [Models Timeline](https://huggingface.co/docs/transformers/models_timeline) to discover the latest text, vision, audio and multimodal model architectures in Transformers. ## Features Transformers provides everything you need for inference or training with state-of-the-art pretrained models. Some of the main features include: - [Pipeline](https://huggingface.co/docs/transformers/pipeline_tutorial): Simple and optimized inference class for many machine learning tasks like text generation, image segmentation, automatic speech recognition, document question answering, and more. 
- [Trainer](https://huggingface.co/docs/transformers/trainer): A comprehensive trainer that supports features such as mixed precision, torch.compile, and FlashAttention for training and distributed training for PyTorch models. - [generate](https://huggingface.co/docs/transformers/llm_tutorial): Fast text generation with large language models (LLMs) and vision language models (VLMs), including support for streaming and multiple decoding strategies. ## Design > Read our [Philosophy](https://huggingface.co/docs/transformers/philosophy) to learn more about Transformers’ design principles. Transformers is designed for developers and machine learning engineers and researchers. Its main design principles are: 1. Fast and easy to use: Every model is implemented from only three main classes (configuration, model, and preprocessor) and can be quickly used for inference or training with [Pipeline](https://huggingface.co/docs/transformers/v5.5.3/en/main_classes/pipelines#transformers.Pipeline) or [Trainer](https://huggingface.co/docs/transformers/v5.5.3/en/main_classes/trainer#transformers.Trainer). 2. Pretrained models: Reduce your carbon footprint, compute cost and time by using a pretrained model instead of training an entirely new one. Each pretrained model is reproduced as closely as possible to the original model and offers state-of-the-art performance. [![HuggingFace Expert Acceleration Program](https://hf.co/datasets/huggingface/documentation-images/resolve/81d7d9201fd4ceb537fc4cebc22c29c37a2ed216/transformers/transformers-index.png)](https://huggingface.co/support) ## Learn If you’re new to Transformers or want to learn more about transformer models, we recommend starting with the [LLM course](https://huggingface.co/learn/llm-course/chapter1/1?fw=pt). This comprehensive course covers everything from the fundamentals of how transformer models work to practical applications across various tasks. 
You’ll learn the complete workflow, from curating high-quality datasets to fine-tuning large language models and implementing reasoning capabilities. The course contains both theoretical and hands-on exercises to build a solid foundational knowledge of transformer models as you learn. [Update on GitHub](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md)
Shard: 70 (laksa)
Root Hash: 18270453918568933270
Unparsed URL: co,huggingface!/docs/transformers/index s443
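The unparsed URL is a reversed-host key: the host labels are reversed and comma-joined so keys sort by domain hierarchy, the path follows a `!` separator, and a trailing marker records scheme and port. A sketch of the encoding, assuming `s443` denotes HTTPS on port 443:

```python
from urllib.parse import urlparse

def to_unparsed(url: str) -> str:
    # huggingface.co -> co,huggingface ; path follows '!' ; the suffix
    # encodes scheme and port ('s' meaning HTTPS is an assumption).
    parts = urlparse(url)
    host = ",".join(reversed(parts.hostname.split(".")))
    port = parts.port or (443 if parts.scheme == "https" else 80)
    marker = "s" if parts.scheme == "https" else ""
    return f"{host}!{parts.path} {marker}{port}"

# to_unparsed("https://huggingface.co/docs/transformers/index")
# -> "co,huggingface!/docs/transformers/index s443"
```

This reproduces the key shown above for this page; query-string and fragment handling are omitted since this URL has neither.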