ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
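For illustration, the filter chain above can be sketched as a single predicate. This is a hypothetical reimplementation, not the crawler's actual code; field names follow the table, and the 6-month cutoff is approximated in days.

```python
from datetime import datetime, timedelta

def should_skip_recrawl(page: dict, now: datetime) -> bool:
    """Return True when every filter in the table PASSes,
    i.e. the page is already crawled and still fresh."""
    return (
        page["download_http_code"] == 200                       # HTTP status
        and page["download_stamp"] > now - timedelta(days=183)  # age cutoff (~6 months)
        and page.get("history_drop_reason") is None             # no history drop
        and page.get("fh_dont_index") != 1                      # not banned
        and page.get("ml_spam_score", 0) == 0                   # not spam
        and page.get("meta_canonical") in (None, "", page["src_unparsed"])  # canonical
    )
```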
| Property | Value |
|---|---|
| URL | https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking |
| Last Crawled | 2026-04-09 14:09:23 (15 hours ago) |
| First Indexed | 2025-09-09 17:23:27 (7 months ago) |
| HTTP Status Code | 200 |
| Meta Title | baidu/ERNIE-4.5-21B-A3B-Thinking Β· Hugging Face |
| Meta Description | We're on a journey to advance and democratize artificial intelligence through open source and open science. |
| Meta Canonical | null |
| Boilerpipe Text | Model Highlights
Over the past three months, we have continued to scale the thinking capability of ERNIE-4.5-21B-A3B, improving both the quality and depth of reasoning, thereby advancing the competitiveness of ERNIE lightweight models in complex reasoning tasks. We are pleased to introduce ERNIE-4.5-21B-A3B-Thinking, featuring the following key enhancements:
- Significantly improved performance on reasoning tasks, including logical reasoning, mathematics, science, coding, text generation, and academic benchmarks that typically require human expertise.
- Efficient tool usage capabilities.
- Enhanced 128K long-context understanding capabilities.
Note: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.
Model Overview
ERNIE-4.5-21B-A3B-Thinking is a text MoE post-trained model, with 21B total parameters and 3B activated parameters for each token. The following are the model configuration details:
| Key | Value |
|---|---|
| Modality | Text |
| Training Stage | Posttraining |
| Params(Total / Activated) | 21B / 3B |
| Layers | 28 |
| Heads(Q/KV) | 20 / 4 |
| Text Experts(Total / Activated) | 64 / 6 |
| Shared Experts | 2 |
| Context Length | 131072 |
Quickstart
To align with the wider community, this model releases Transformer-style weights. Both PyTorch and PaddlePaddle ecosystem tools, such as vLLM, transformers, and FastDeploy, are expected to be able to load and run this model.
FastDeploy Inference
Quickly deploy services using FastDeploy as shown below. For more detailed usage, refer to the FastDeploy GitHub Repository.
Note: One 80GB GPU is required. Deploying this model requires FastDeploy version 2.2.
python -m fastdeploy.entrypoints.openai.api_server \
--model baidu/ERNIE-4.5-21B-A3B-Thinking \
--port 8180 \
--metrics-port 8181 \
--engine-worker-queue-port 8182 \
--load-choices "default_v1" \
--tensor-parallel-size 1 \
--max-model-len 131072 \
--reasoning-parser ernie_x1 \
--tool-call-parser ernie_x1 \
--max-num-seqs 32
The ERNIE-4.5-21B-A3B-Thinking model supports function calling.
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d $'{
  "messages": [
    {
      "role": "user",
      "content": "How \'s the weather in Beijing today?"
    }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Determine weather in my location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state e.g. San Francisco, CA"
            },
            "unit": {
              "type": "string",
              "enum": ["c", "f"]
            }
          },
          "additionalProperties": false,
          "required": ["location", "unit"]
        },
        "strict": true
      }
    }]
}'
vLLM inference
Requires vLLM >= 0.10.2 (excluding 0.11.0).
vllm serve baidu/ERNIE-4.5-21B-A3B-Thinking
The reasoning-parser and tool-call-parser for ERNIE require installing vLLM from the main branch.
Using transformers library
Note: You'll need the transformers library (version 4.54.0 or newer) installed to use this model.
The following code snippet illustrates how to use the model to generate content from given inputs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "baidu/ERNIE-4.5-21B-A3B-Thinking"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], add_special_tokens=False, return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# decode the generated ids
generate_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print("generate_text:", generate_text)
License
The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.
Citation
If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly cite our technical report:
@misc{ernie2025technicalreport,
title={ERNIE 4.5 Technical Report},
author={Baidu-ERNIE-Team},
year={2025},
primaryClass={cs.CL},
howpublished={\url{https://ernie.baidu.com/blog/publication/ERNIE_Technical_Report.pdf}}
} |
| Markdown |
# [](https://huggingface.co/baidu) [baidu](https://huggingface.co/baidu) / [ERNIE-4.5-21B-A3B-Thinking](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking) like 777 Follow  BAIDU 2\.06k
[Text Generation](https://huggingface.co/models?pipeline_tag=text-generation)
[Transformers](https://huggingface.co/models?library=transformers)
[Safetensors](https://huggingface.co/models?library=safetensors)
[English](https://huggingface.co/models?language=en)
[Chinese](https://huggingface.co/models?language=zh)
[ernie4\_5\_moe](https://huggingface.co/models?other=ernie4_5_moe)
[ERNIE4.5](https://huggingface.co/models?other=ERNIE4.5)
[conversational](https://huggingface.co/models?other=conversational)
License: apache-2.0
[Model card](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking)
[Files and versions](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking/tree/main)
[Community 10](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking/discussions)
- [ERNIE-4.5-21B-A3B-Thinking](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#ernie-45-21b-a3b-thinking "ERNIE-4.5-21B-A3B-Thinking")
- [Model Highlights](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#model-highlights "Model Highlights")
- [Model Overview](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#model-overview "Model Overview")
- [Quickstart](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#quickstart "Quickstart")
- [FastDeploy Inference](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#fastdeploy-inference "FastDeploy Inference")
- [vLLM inference](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#vllm-inference "vLLM inference")
- [Using `transformers` library](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#using-transformers-library "Using <code>transformers</code> library")
- [License](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#license "License")
- [Citation](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#citation "Citation")
[](https://ernie.baidu.com/) [](https://huggingface.co/baidu) [](https://github.com/PaddlePaddle/ERNIE) [](https://ernie.baidu.com/blog/ernie4.5) [](https://discord.gg/JPmZXDsEEK) [](https://x.com/PaddlePaddle)
[](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking#license)
# ERNIE-4.5-21B-A3B-Thinking
## Model Highlights
Over the past three months, we have continued to scale the **thinking capability** of ERNIE-4.5-21B-A3B, improving both the **quality and depth** of reasoning, thereby advancing the competitiveness of ERNIE **lightweight models** in complex reasoning tasks. We are pleased to introduce **ERNIE-4.5-21B-A3B-Thinking**, featuring the following key enhancements:
- **Significantly improved performance** on reasoning tasks, including logical reasoning, mathematics, science, coding, text generation, and academic benchmarks that typically require human expertise.
- **Efficient tool usage** capabilities.
- **Enhanced 128K long-context understanding** capabilities.
> Note: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.
[](https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking/blob/main/benchmark.png)
## Model Overview
ERNIE-4.5-21B-A3B-Thinking is a text MoE post-trained model, with 21B total parameters and 3B activated parameters for each token. The following are the model configuration details:
| Key | Value |
|---|---|
| Modality | Text |
| Training Stage | Posttraining |
| Params(Total / Activated) | 21B / 3B |
| Layers | 28 |
| Heads(Q/KV) | 20 / 4 |
| Text Experts(Total / Activated) | 64 / 6 |
| Shared Experts | 2 |
| Context Length | 131072 |
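As a quick sanity check on the configuration table, the activated-parameter ratio and per-token expert count follow from simple arithmetic. This is illustrative only; attention and shared parameters are why the routed-expert ratio alone does not determine the 3B figure.

```python
total_params, activated_params = 21e9, 3e9
routed_active, shared_experts = 6, 2

active_fraction = activated_params / total_params   # 3B of 21B total
experts_per_token = routed_active + shared_experts  # 6 routed + 2 shared

print(f"{active_fraction:.1%} of parameters active, {experts_per_token} experts per token")
```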
## Quickstart
> To align with the wider community, this model releases Transformer-style weights. Both PyTorch and PaddlePaddle ecosystem tools, such as vLLM, transformers, and FastDeploy, are expected to be able to load and run this model.
### FastDeploy Inference
Quickly deploy services using FastDeploy as shown below. For more detailed usage, refer to the [FastDeploy GitHub Repository](https://github.com/PaddlePaddle/FastDeploy).
**Note**: One 80GB GPU is required. Deploying this model requires FastDeploy version 2.2.
```
python -m fastdeploy.entrypoints.openai.api_server \
--model baidu/ERNIE-4.5-21B-A3B-Thinking \
--port 8180 \
--metrics-port 8181 \
--engine-worker-queue-port 8182 \
--load-choices "default_v1" \
--tensor-parallel-size 1 \
--max-model-len 131072 \
--reasoning-parser ernie_x1 \
--tool-call-parser ernie_x1 \
--max-num-seqs 32
```
The ERNIE-4.5-21B-A3B-Thinking model supports function calling.
```
curl -X POST "http://0.0.0.0:8180/v1/chat/completions" \
-H "Content-Type: application/json" \
-d $'{
"messages": [
{
"role": "user",
"content": "How \'s the weather in Beijing today?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_weather",
"description": "Determine weather in my location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": [
"c",
"f"
]
}
},
"additionalProperties": false,
"required": [
"location",
"unit"
]
},
"strict": true
}
}]
}'
```
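The same function-calling request can be assembled from Python with only the standard library. A minimal sketch, assuming the FastDeploy server above is listening on port 8180; the request is only sent when `call_chat` is invoked.

```python
import json
from urllib import request

# Same payload as the curl example above
payload = {
    "messages": [
        {"role": "user", "content": "How's the weather in Beijing today?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Determine weather in my location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["c", "f"]},
                },
                "additionalProperties": False,
                "required": ["location", "unit"],
            },
            "strict": True,
        },
    }],
}

def call_chat(base_url="http://0.0.0.0:8180"):
    """POST the payload to the OpenAI-compatible endpoint and return the parsed reply."""
    req = request.Request(
        base_url + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```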
### vLLM inference
Requires vLLM >= 0.10.2 (excluding 0.11.0).
```
vllm serve baidu/ERNIE-4.5-21B-A3B-Thinking
```
The `reasoning-parser` and `tool-call-parser` for ERNIE require installing vLLM from the main branch.
### Using `transformers` library
**Note**: You'll need the `transformers` library (version 4.54.0 or newer) installed to use this model.
The following code snippet illustrates how to use the model to generate content from given inputs.
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "baidu/ERNIE-4.5-21B-A3B-Thinking"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], add_special_tokens=False, return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=1024
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# decode the generated ids
generate_text = tokenizer.decode(output_ids, skip_special_tokens=True)
print("generate_text:", generate_text)
```
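Because this is a thinking model, the decoded text may contain a reasoning trace followed by the final answer. A small helper for separating the two, assuming the chat template closes the reasoning span with a `</think>` marker; that marker is an assumption, so check the model's chat template for the actual delimiter.

```python
def split_reasoning(text: str, marker: str = "</think>"):
    """Split decoded output into (reasoning, answer).
    The `marker` is assumed, not confirmed by this card."""
    head, sep, tail = text.partition(marker)
    if not sep:  # no marker found: treat everything as the answer
        return "", text.strip()
    return head.strip(), tail.strip()

reasoning, answer = split_reasoning(
    "Let me reason step by step...</think>LLMs are neural networks trained on text."
)
```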
## License
The ERNIE 4.5 models are provided under the Apache License 2.0. This license permits commercial use, subject to its terms and conditions. Copyright (c) 2025 Baidu, Inc. All Rights Reserved.
## Citation
If you find ERNIE 4.5 useful or wish to use it in your projects, please kindly cite our technical report:
```
@misc{ernie2025technicalreport,
title={ERNIE 4.5 Technical Report},
author={Baidu-ERNIE-Team},
year={2025},
primaryClass={cs.CL},
howpublished={\url{https://ernie.baidu.com/blog/publication/ERNIE_Technical_Report.pdf}}
}
```
Downloads last month: 645
Model size: 22B params (Safetensors)
Tensor types: F32 · BF16
## Model tree for baidu/ERNIE-4.5-21B-A3B-Thinking
Adapters
[2 models](https://huggingface.co/models?other=base_model:adapter:baidu/ERNIE-4.5-21B-A3B-Thinking)
Finetunes
[9 models](https://huggingface.co/models?other=base_model:finetune:baidu/ERNIE-4.5-21B-A3B-Thinking)
Merges
[1 model](https://huggingface.co/models?other=base_model:merge:baidu/ERNIE-4.5-21B-A3B-Thinking)
Quantizations
[24 models](https://huggingface.co/models?other=base_model:quantized:baidu/ERNIE-4.5-21B-A3B-Thinking)
## Spaces using baidu/ERNIE-4.5-21B-A3B-Thinking 10
[akhaliq/ERNIE-4.5-21B-A3B-Thinking](https://huggingface.co/spaces/akhaliq/ERNIE-4.5-21B-A3B-Thinking)
[jairwaal/image](https://huggingface.co/spaces/jairwaal/image)
[Jakob08/moneychatbot](https://huggingface.co/spaces/Jakob08/moneychatbot)
[jzhang533/ernie4.5\_21b\_a3b\_thinking\_demo](https://huggingface.co/spaces/jzhang533/ernie4.5_21b_a3b_thinking_demo)
[ERNIE-Community/DeepSite-Using-ERNIE](https://huggingface.co/spaces/ERNIE-Community/DeepSite-Using-ERNIE)
[Secondprinsipal/image](https://huggingface.co/spaces/Secondprinsipal/image)
[armaansingh752k1/Image\_generator\_Docker](https://huggingface.co/spaces/armaansingh752k1/Image_generator_Docker)
[synthetic-data-universe/synth](https://huggingface.co/spaces/synthetic-data-universe/synth)
[Merunes/HW\_4](https://huggingface.co/spaces/Merunes/HW_4)
[malekradwan130/EbWFsZWtyYWR3YW4xMzBAZ21haWwuY](https://huggingface.co/spaces/malekradwan130/EbWFsZWtyYWR3YW4xMzBAZ21haWwuY)
## Collection including baidu/ERNIE-4.5-21B-A3B-Thinking
[ERNIE 4.5 Collection: collection of ERNIE 4.5 models • 27 items • Updated Nov 11, 2025 • 186](https://huggingface.co/collections/baidu/ernie-45)
|
| Shard | 70 (laksa) |
| Root Hash | 18270453918568933270 |
| Unparsed URL | co,huggingface!/baidu/ERNIE-4.5-21B-A3B-Thinking s443 |
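The unparsed-URL form appears to reverse the host labels (comma-separated), introduce the path with `!`, and append the port as ` s443`. A hedged decoder, with the format inferred from this single record rather than documented; it reproduces the URL row above.

```python
def decode_unparsed(u: str) -> str:
    """Decode the crawler's reversed-host URL form (format inferred, not documented)."""
    host_part, rest = u.split("!", 1)            # "co,huggingface" / "/path s443"
    path, _, port = rest.rpartition(" s")        # strip the trailing port marker
    host = ".".join(reversed(host_part.split(",")))
    scheme = "https" if port == "443" else "http"
    return f"{scheme}://{host}{path}"

decode_unparsed("co,huggingface!/baidu/ERNIE-4.5-21B-A3B-Thinking s443")
# -> "https://huggingface.co/baidu/ERNIE-4.5-21B-A3B-Thinking"
```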