ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 0.2 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |
| Property | Value |
|---|---|
| URL | https://catboost.ai/docs/en/concepts/python-reference_catboostclassifier_modelcompare |
| Last Crawled | 2026-04-02 03:05:26 (5 days ago) |
| First Indexed | 2024-11-18 16:12:15 (1 year ago) |
| HTTP Status Code | 200 |
| Meta Title | compare \| CatBoost |
| Meta Description | Draw train and evaluation metrics in Jupyter Notebook for two trained models. Method call format. |
| Meta Canonical | null |
| Boilerpipe Text | Draw train and evaluation metrics in Jupyter Notebook for two trained models.
Method call format
compare(model,
        data=None,
        metrics=None,
        ntree_start=0,
        ntree_end=0,
        eval_period=1,
        thread_count=-1,
        tmp_dir=None,
        log_cout=sys.stdout,
        log_cerr=sys.stderr)
Parameters
model (CatBoost Model): The CatBoost model to compare with. Required parameter.
metrics (list of strings): The list of metrics to be calculated (see Supported metrics). For example, if the AUC and Logloss metrics should be calculated, use ['Logloss', 'AUC']. Required parameter.
data (catboost.Pool): A file or matrix with the input dataset on which the compared metric values should be calculated. Required parameter.
ntree_start (int): To reduce the number of trees to use when the model is applied or the metrics are calculated, set the range of the tree indices to [ntree_start; ntree_end) and the eval_period parameter to k to calculate metrics on every k-th iteration. This parameter defines the index of the first tree to be used when applying the model or calculating the metrics (the inclusive left border of the range). Indices are zero-based. Default: 0.
ntree_end (int): This parameter defines the index of the first tree not to be used when applying the model or calculating the metrics (the exclusive right border of the range). Indices are zero-based. Default: 0 (the index of the last tree to use equals the number of trees in the model minus one).
eval_period (int): This parameter defines the step to iterate over the range [ntree_start; ntree_end). For example, if ntree_start is set to 0, ntree_end is set to N (the total tree count), and eval_period is set to 2, the metrics are calculated for the tree ranges [0, 2), [0, 4), ..., [0, N). Default: 1 (the trees are applied sequentially: the first tree, then the first two trees, etc.).
thread_count (int): The number of threads to use. Optimizes the speed of execution; this parameter doesn't affect results. Default: -1 (the number of threads is equal to the number of processor cores).
tmp_dir (String): The name of the temporary directory for intermediate results. Default: None (the name is generated).
log_cout: Output stream or callback for logging. Possible types: callable Python object, or a Python object providing the write() method. Default: sys.stdout.
log_cerr: Error stream or callback for logging. Possible types: callable Python object, or a Python object providing the write() method. Default: sys.stderr.
Examples
from catboost import Pool, CatBoostClassifier
train_data = [[0, 3],
              [4, 1],
              [8, 1],
              [9, 1]]
train_labels = [0, 0, 1, 1]
eval_data = [[1, 3],
             [4, 2],
             [8, 2],
             [8, 3]]
eval_labels = [1, 0, 0, 1]
train_dataset = Pool(train_data, train_labels)
eval_dataset = Pool(eval_data, eval_labels)
model1 = CatBoostClassifier(iterations=100, learning_rate=0.1)
model1.fit(train_dataset, verbose=False)
model2 = CatBoostClassifier(iterations=100, learning_rate=0.3)
model2.fit(train_dataset, verbose=False)
model1.compare(model2, eval_dataset, ['Logloss'])
The following is a chart plotted with Jupyter Notebook for the given example. |
| Markdown | # compare
Draw train and evaluation metrics in [Jupyter Notebook](https://catboost.ai/docs/en/features/visualization_jupyter-notebook) for two trained models.
## Method call format
```
compare(model,
data=None,
metrics=None,
ntree_start=0,
ntree_end=0,
eval_period=1,
thread_count=-1,
tmp_dir=None,
log_cout=sys.stdout,
log_cerr=sys.stderr)
```
## Parameters
**Parameter:** `model`
**Possible types:** CatBoost Model
#### Description
The CatBoost model to compare with.
**Default value**
Required parameter
**Parameter:** `metrics`
**Possible types:** list of strings
#### Description
The list of metrics to be calculated.
[Supported metrics](https://catboost.ai/docs/en/references/custom-metric__supported-metrics)
For example, if the AUC and Logloss metrics should be calculated, use the following construction:
```
['Logloss', 'AUC']
```
**Default value**
Required parameter
**Parameter:** `data`
**Possible types:** catboost.Pool
#### Description
A file or matrix with the input dataset, on which the compared metric values should be calculated.
**Default value**
Required parameter
**Parameter:** `ntree_start`
**Possible types:** int
#### Description
To reduce the number of trees to use when the model is applied or the metrics are calculated, set the range of the tree indices to `[ntree_start; ntree_end)` and the `eval_period` parameter to *k* to calculate metrics on every *k*-th iteration.
This parameter defines the index of the first tree to be used when applying the model or calculating the metrics (the inclusive left border of the range). Indices are zero-based.
**Default value**
0
**Parameter:** `ntree_end`
**Possible types:** int
#### Description
To reduce the number of trees to use when the model is applied or the metrics are calculated, set the range of the tree indices to `[ntree_start; ntree_end)` and the `eval_period` parameter to *k* to calculate metrics on every *k*-th iteration.
This parameter defines the index of the first tree not to be used when applying the model or calculating the metrics (the exclusive right border of the range). Indices are zero-based.
**Default value**
0 (the index of the last tree to use equals the number of trees in the model minus one)
**Parameter:** `eval_period`
**Possible types:** int
#### Description
To reduce the number of trees to use when the model is applied or the metrics are calculated, set the range of the tree indices to `[ntree_start; ntree_end)` and the `eval_period` parameter to *k* to calculate metrics on every *k*-th iteration.
This parameter defines the step to iterate over the range `[ntree_start; ntree_end)`. For example, assume that the following parameter values are set:
- `ntree_start` is set to 0
- `ntree_end` is set to N (the total tree count)
- `eval_period` is set to 2
In this case, the metrics are calculated for the following tree ranges: `[0, 2)`, `[0, 4)`, ... , `[0, N)`
**Default value**
1 (the trees are applied sequentially: the first tree, then the first two trees, etc.)
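The tree-range rule above can be sketched in plain Python. The helper `evaluated_ranges` below is a hypothetical illustration of the indexing behavior, not part of the CatBoost API:

```python
# Hypothetical helper illustrating which tree ranges get metrics computed
# for given ntree_start / ntree_end / eval_period values (not a CatBoost API).
def evaluated_ranges(ntree_start, ntree_end, eval_period):
    """Return the [start, end) tree ranges on which metrics are calculated."""
    ends = list(range(ntree_start + eval_period, ntree_end, eval_period))
    if not ends or ends[-1] != ntree_end:
        ends.append(ntree_end)  # the full range is evaluated last
    return [(ntree_start, end) for end in ends]

# ntree_start=0, ntree_end=10, eval_period=2 -> [0, 2), [0, 4), ..., [0, 10)
print(evaluated_ranges(0, 10, 2))
```

With `eval_period=2` and ten trees, this reproduces the documented sequence `[0, 2)`, `[0, 4)`, ..., `[0, N)`.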
**Parameter:** `thread_count`
**Possible types:** int
#### Description
The number of threads to use.
Optimizes the speed of execution. This parameter doesn't affect results.
**Default value**
-1 (the number of threads is equal to the number of processor cores)
**Parameter:** `tmp_dir`
**Possible types:** String
#### Description
The name of the temporary directory for intermediate results.
**Default value**
None (the name is generated)
### log\_cout
Output stream or callback for logging.
**Possible types**
- callable Python object
- Python object providing the `write()` method
**Default value**
sys.stdout
### log\_cerr
Error stream or callback for logging.
**Possible types**
- callable Python object
- Python object providing the `write()` method
**Default value**
sys.stderr
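The `log_cout`/`log_cerr` contract described above (any object exposing a `write()` method) can be sketched with a small sink class. `CollectingLogger` is a hypothetical example, and CatBoost itself is not imported here:

```python
# Hypothetical sink satisfying the documented log_cout/log_cerr contract:
# any object providing a write() method can receive the training log text.
class CollectingLogger:
    def __init__(self):
        self.chunks = []

    def write(self, message):
        # CatBoost would call write() with chunks of log output.
        self.chunks.append(message)

    def text(self):
        return "".join(self.chunks)

log_sink = CollectingLogger()
log_sink.write("0:\tlearn: 0.6931\n")  # simulated log line
print(log_sink.text())
```

In a real call such a sink would be passed as, for example, `model1.compare(model2, eval_dataset, ['Logloss'], log_cout=log_sink)`.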
## Examples
```
from catboost import Pool, CatBoostClassifier
train_data = [[0, 3],
[4, 1],
[8, 1],
[9, 1]]
train_labels = [0, 0, 1, 1]
eval_data = [[1, 3],
[4, 2],
[8, 2],
[8, 3]]
eval_labels = [1, 0, 0, 1]
train_dataset = Pool(train_data, train_labels)
eval_dataset = Pool(eval_data, eval_labels)
model1 = CatBoostClassifier(iterations=100, learning_rate=0.1)
model1.fit(train_dataset, verbose=False)
model2 = CatBoostClassifier(iterations=100, learning_rate=0.3)
model2.fit(train_dataset, verbose=False)
model1.compare(model2, eval_dataset, ['Logloss'])
```
The following is a chart plotted with [Jupyter Notebook](https://catboost.ai/docs/en/features/visualization_jupyter-notebook) for the given example.

 |
| Readable Markdown | Draw train and evaluation metrics in [Jupyter Notebook](https://catboost.ai/docs/en/features/visualization_jupyter-notebook) for two trained models.
## Method call format
```
compare(model,
data=None,
metrics=None,
ntree_start=0,
ntree_end=0,
eval_period=1,
thread_count=-1,
tmp_dir=None,
log_cout=sys.stdout,
log_cerr=sys.stderr)
```
## Parameters
**Parameter:** `model`
**Possible types:** CatBoost Model
#### Description
The CatBoost model to compare with.
**Default value**
Required parameter
**Parameter:** `metrics`
**Possible types:** list of strings
#### Description
The list of metrics to be calculated.
[Supported metrics](https://catboost.ai/docs/en/references/custom-metric__supported-metrics)
For example, if the AUC and Logloss metrics should be calculated, use the following construction:
```
['Logloss', 'AUC']
```
**Default value**
Required parameter
**Parameter:** `data`
**Possible types:** catboost.Pool
#### Description
A file or matrix with the input dataset, on which the compared metric values should be calculated.
**Default value**
Required parameter
**Parameter:** `ntree_start`
**Possible types:** int
#### Description
To reduce the number of trees to use when the model is applied or the metrics are calculated, set the range of the tree indices to `[ntree_start; ntree_end)` and the `eval_period` parameter to *k* to calculate metrics on every *k*-th iteration.
This parameter defines the index of the first tree to be used when applying the model or calculating the metrics (the inclusive left border of the range). Indices are zero-based.
**Default value**
0
**Parameter:** `ntree_end`
**Possible types:** int
#### Description
To reduce the number of trees to use when the model is applied or the metrics are calculated, set the range of the tree indices to `[ntree_start; ntree_end)` and the `eval_period` parameter to *k* to calculate metrics on every *k*-th iteration.
This parameter defines the index of the first tree not to be used when applying the model or calculating the metrics (the exclusive right border of the range). Indices are zero-based.
**Default value**
0 (the index of the last tree to use equals the number of trees in the model minus one)
**Parameter:** `eval_period`
**Possible types:** int
#### Description
To reduce the number of trees to use when the model is applied or the metrics are calculated, set the range of the tree indices to `[ntree_start; ntree_end)` and the `eval_period` parameter to *k* to calculate metrics on every *k*-th iteration.
This parameter defines the step to iterate over the range `[ntree_start; ntree_end)`. For example, assume that the following parameter values are set:
- `ntree_start` is set to 0
- `ntree_end` is set to N (the total tree count)
- `eval_period` is set to 2
In this case, the metrics are calculated for the following tree ranges: `[0, 2)`, `[0, 4)`, ... , `[0, N)`
**Default value**
1 (the trees are applied sequentially: the first tree, then the first two trees, etc.)
**Parameter:** `thread_count`
**Possible types:** int
#### Description
The number of threads to use.
Optimizes the speed of execution. This parameter doesn't affect results.
**Default value**
-1 (the number of threads is equal to the number of processor cores)
**Parameter:** `tmp_dir`
**Possible types:** String
#### Description
The name of the temporary directory for intermediate results.
**Default value**
None (the name is generated)
### log\_cout
Output stream or callback for logging.
**Possible types**
- callable Python object
- Python object providing the `write()` method
**Default value**
sys.stdout
### log\_cerr
Error stream or callback for logging.
**Possible types**
- callable Python object
- Python object providing the `write()` method
**Default value**
sys.stderr
## Examples
```
from catboost import Pool, CatBoostClassifier
train_data = [[0, 3],
[4, 1],
[8, 1],
[9, 1]]
train_labels = [0, 0, 1, 1]
eval_data = [[1, 3],
[4, 2],
[8, 2],
[8, 3]]
eval_labels = [1, 0, 0, 1]
train_dataset = Pool(train_data, train_labels)
eval_dataset = Pool(eval_data, eval_labels)
model1 = CatBoostClassifier(iterations=100, learning_rate=0.1)
model1.fit(train_dataset, verbose=False)
model2 = CatBoostClassifier(iterations=100, learning_rate=0.3)
model2.fit(train_dataset, verbose=False)
model1.compare(model2, eval_dataset, ['Logloss'])
```
The following is a chart plotted with [Jupyter Notebook](https://catboost.ai/docs/en/features/visualization_jupyter-notebook) for the given example.
 |
| Shard | 169 (laksa) |
| Root Hash | 17435841955170310369 |
| Unparsed URL | ai,catboost!/docs/en/concepts/python-reference_catboostclassifier_modelcompare s443 |