# Ranking: objectives and metrics

Source: https://catboost.ai/docs/en/concepts/loss-functions-ranking

## Pairwise metrics
Pairwise metrics use special labeled information: pairs of dataset objects where one object is considered the winner and the other the loser. This information might not be exhaustive (not all possible pairs of objects are labeled in such a way). It is also possible to specify a weight for each pair.

If GroupId is specified and the dataset is used in pairwise modes, then both members of every pair must belong to the same group. GroupId is the identifier of the object's group: an arbitrary string, possibly representing an integer.

If labeled pairs are not specified for the dataset, pairs are generated automatically in each group from the per-object label values (labels must be specified and must be numerical). The object with the greater label value in a pair is considered the winner.
The following variables are used in the formulas of the described pairwise metrics:

- $p$ is the positive object in the pair.
- $n$ is the negative object in the pair.

See all common variables in Variables used in formulas.
### PairLogit

$$\displaystyle\frac{-\sum\limits_{p, n \in Pairs} w_{pn} \log\left(\displaystyle\frac{1}{1 + e^{-(a_{p} - a_{n})}}\right)}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
Note: the object weights are not used to calculate and optimize the value of this metric; the weights of object pairs are used instead.

Usage information: see more.

User-defined parameters:

- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
- max_pairs: the maximum number of generated pairs in each group. Takes effect if no pairs are given, in which case pairs are generated without repetition. Default: all possible pairs are generated in each group.
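As an illustration of the PairLogit formula, here is a minimal pure-Python sketch of the metric value for explicitly labeled pairs. This is not CatBoost's implementation; the function name and argument layout are chosen for this example.

```python
import math

def pair_logit(pairs, approx, weights=None):
    """Weighted PairLogit value for labeled (winner, loser) pairs.

    pairs   -- (p, n) index tuples, p = winner, n = loser
    approx  -- raw per-object model scores a_i
    weights -- per-pair weights w_pn (defaults to all ones)
    """
    if weights is None:
        weights = [1.0] * len(pairs)
    num = 0.0
    for (p, n), w in zip(pairs, weights):
        # log(sigmoid(a_p - a_n)) = -log(1 + exp(-(a_p - a_n)))
        num += w * math.log(1.0 / (1.0 + math.exp(-(approx[p] - approx[n]))))
    return -num / sum(weights)
```

A correctly ordered pair with a large score gap drives the value toward 0; equal scores contribute log 2 per pair.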
### PairLogitPairwise

$$\displaystyle\frac{-\sum\limits_{p, n \in Pairs} w_{pn} \log\left(\displaystyle\frac{1}{1 + e^{-(a_{p} - a_{n})}}\right)}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
This metric may give more accurate results on large datasets than PairLogit, but it is calculated significantly more slowly.

This technique is described in the paper Winning The Transfer Learning Track of Yahoo!'s Learning To Rank Challenge with YetiRank.

Usage information: see more.

Note: the object weights are not used to calculate and optimize the value of this metric; the weights of object pairs are used instead.

User-defined parameters:

- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
- max_pairs: the maximum number of generated pairs in each group. Takes effect if no pairs are given, in which case pairs are generated without repetition. Default: all possible pairs are generated in each group.
### PairAccuracy

$$\displaystyle\frac{\sum\limits_{p, n \in Pairs} w_{pn} \, [a_{p} > a_{n}]}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
Note: the object weights are not used to calculate the value of this metric; the weights of object pairs are used instead.

This metric can't be used for optimization. See more.

User-defined parameters:

- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
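The PairAccuracy formula reduces to a weighted count of correctly ordered pairs. A small pure-Python sketch (illustrative only, not CatBoost's code):

```python
def pair_accuracy(pairs, approx, weights=None):
    """Weighted fraction of (winner, loser) pairs where the winner
    strictly outscores the loser: sum w * [a_p > a_n] / sum w."""
    if weights is None:
        weights = [1.0] * len(pairs)
    hit = sum(w for (p, n), w in zip(pairs, weights) if approx[p] > approx[n])
    return hit / sum(weights)
```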
## Groupwise metrics

### YetiRank

The calculation of this metric is disabled by default for the training dataset to speed up training. Use the hints=skip_train~false parameter to enable it.

YetiRank is an approximation of ranking metrics (such as NDCG and PFound), which makes it possible to use ranking metrics for optimization.

The value of this metric itself cannot be calculated. The metric that is written to the output data when YetiRank is optimized depends on the range of all $N$ target values ($i \in [1; N]$) of the dataset:

- $target_{i} \in [0; 1]$ — PFound
- $target_{i} \notin [0; 1]$ — NDCG

This metric gives less accurate results on big datasets than YetiRankPairwise, but it is significantly faster.

Note: the object weights are not used to optimize this metric; the group weights are used instead.

This objective is used to optimize PairLogit. Automatically generated object pairs are used for this purpose; these pairs are generated independently for each object group. Use the Group weights file or the GroupWeight column of the Columns description file to change the group importance. In this case, the weight of each generated pair is multiplied by the value of the corresponding group weight.

Usage information: see more.

Since CatBoost 1.2.1, the meaning of YetiRank has been expanded to allow optimizing specific ranking loss functions by specifying the mode loss function parameter. The default YetiRank behavior can now also be referred to as mode=Classic.
User-defined parameters:

- mode: the mode of operation. Either Classic (the traditional YetiRank as described in Winning The Transfer Learning Track of Yahoo!'s Learning To Rank Challenge with YetiRank) or a specific ranking loss function to optimize, as described in the paper Which Tricks are Important for Learning to Rank? Possible loss function values are DCG, NDCG, MRR, ERR, and MAP. Non-Classic modes are supported only on CPU. Default: Classic.
- permutations: the number of permutations. Default: 10.
- decay: used only in Classic mode. The probability of search continuation after reaching the current object. Default: 0.85.
- top: used in all modes except Classic. The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Unlimited by default.
- dcg_type: used in the DCG and NDCG modes. Principle of calculation of *DCG metrics. Default: Base. Possible values: Base, Exp.
- dcg_denominator: used in the DCG and NDCG modes. Principle of calculation of the denominator in *DCG metrics. Default: Position. Possible values: LogPosition, Position.
- noise: type of noise to add to approxes. Default: Gumbel. Possible values: Gumbel, Gauss, No.
- noise_power: power (multiplier) of the noise to add. Currently used only for Gauss noise. Default: 1.
- num_neighbors: used in all modes except Classic. Number of neighbors used in the metric calculation. Default: 1.
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
### YetiRankPairwise

The calculation of this metric is disabled by default for the training dataset to speed up training. Use the hints=skip_train~false parameter to enable it.

YetiRankPairwise is an approximation of ranking metrics (such as NDCG and PFound), which makes it possible to use ranking metrics for optimization.

The value of this metric itself cannot be calculated. The metric that is written to the output data when YetiRankPairwise is optimized depends on the range of all $N$ target values ($i \in [1; N]$) of the dataset:

- $target_{i} \in [0; 1]$ — PFound
- $target_{i} \notin [0; 1]$ — NDCG

This metric gives more accurate results on big datasets than YetiRank, but it is significantly slower.

This technique is described in the paper Winning The Transfer Learning Track of Yahoo!'s Learning To Rank Challenge with YetiRank.

Note: the object weights are not used to optimize this metric; the group weights are used instead.

This objective is used to optimize PairLogit. Automatically generated object pairs are used for this purpose; these pairs are generated independently for each object group. Use the Group weights file or the GroupWeight column of the Columns description file to change the group importance. In this case, the weight of each generated pair is multiplied by the value of the corresponding group weight.

Usage information: see more.

Since CatBoost 1.2.1, the meaning of YetiRankPairwise has been expanded to allow optimizing specific ranking loss functions by specifying the mode loss function parameter. The default YetiRankPairwise behavior can now also be referred to as mode=Classic.
User-defined parameters:

- mode: the mode of operation. Either Classic (the traditional YetiRankPairwise as described in Winning The Transfer Learning Track of Yahoo!'s Learning To Rank Challenge with YetiRank) or a specific ranking loss function to optimize, as described in the paper Which Tricks are Important for Learning to Rank? Possible loss function values are DCG, NDCG, MRR, ERR, and MAP. Non-Classic modes are supported only on CPU. Default: Classic.
- permutations: the number of permutations. Default: 10.
- decay: used only in Classic mode. The probability of search continuation after reaching the current object. Default: 0.85.
- top: used in all modes except Classic. The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Unlimited by default.
- dcg_type: used in the DCG and NDCG modes. Principle of calculation of *DCG metrics. Default: Base. Possible values: Base, Exp.
- dcg_denominator: used in the DCG and NDCG modes. Principle of calculation of the denominator in *DCG metrics. Default: Position. Possible values: LogPosition, Position.
- noise: type of noise to add to approxes. Default: Gumbel. Possible values: Gumbel, Gauss, No.
- noise_power: power (multiplier) of the noise to add. Currently used only for Gauss noise. Default: 1.
- num_neighbors: used in all modes except Classic. Number of neighbors used in the metric calculation. Default: 1.
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
### LambdaMart

Directly optimizes the selected metric. The value of the selected metric is written to the output data.

Refer to the paper From RankNet to LambdaRank to LambdaMART for details.

Usage information: see more.

User-defined parameters:

- metric: the metric that should be optimized. Default: NDCG. Supported values: DCG, NDCG, MRR, ERR, MAP.
- sigma: general sigmoid parameter; see the paper From RankNet to LambdaRank to LambdaMART for details. Default: 1.0. Supported values: positive real values.
- norm: whether derivatives should be normalized. Default: True. Supported values: False, True.
### StochasticFilter

Directly optimizes the FilteredDCG metric, which is calculated for a predefined order of objects: the model learns to filter objects under a fixed ranking. As a result, the FilteredDCG metric can be used for optimization.

$$FilteredDCG = \sum\limits_{i=1}^{n} \displaystyle\frac{t_{i}}{i} \text{, where}$$

$t_{i}$ is the relevance of an object in the group, and the sum is computed over the documents with $a > 0$.

The filtration is defined via the raw formula value: zeros correspond to filtered-out instances and ones correspond to the remaining ones.

The ranking is defined by the order of objects in the dataset.

Warning: sort objects by the column you are interested in before training with this loss function, and use the --has-time option of the command-line version to avoid further reordering of objects.
For optimization, a distribution over filtrations is defined:

$$\mathbb{P}(\text{filter} \mid x) = \sigma(a) \text{, where } \sigma(z) = \displaystyle\frac{1}{1 + e^{-z}}$$

The gradient is estimated via REINFORCE.

Refer to the paper Learning to Select for a Predefined Ranking for calculation details.

Usage information: see more.

User-defined parameters:

- sigma: the scale for multiplying predictions. Default: 1.
- num_estimations: the number of gradient samples. Default: 1.
### StochasticRank

Directly optimizes the selected metric. The value of the selected metric is written to the output data.

Refer to the paper StochasticRank: Global Optimization of Scale-Free Discrete Functions for details.

Usage information: see more.

User-defined parameters.

Common parameters:

- metric: the metric that should be optimized. Default: obligatory parameter (no default value). Supported values: DCG, NDCG, PFound.
- num_estimations: the number of gradient estimation iterations. Default: 1.
- mu: controls the penalty for coinciding predictions (ties). Default: 0.
Metric-specific parameters (available if the corresponding metric is set in the metric parameter):

DCG:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- type: metric calculation principles. Default: Base. Possible values: Base, Exp.
- denominator: metric denominator type. Default: LogPosition. Possible values: LogPosition, Position.

NDCG:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- type: metric calculation principles. Default: Base. Possible values: Base, Exp.
- denominator: metric denominator type. Default: LogPosition. Possible values: LogPosition, Position.

PFound:

- decay: the probability of search continuation after reaching the current object. Default: 0.85.
- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
### QueryCrossEntropy

$$QueryCrossEntropy(\alpha) = (1 - \alpha) \cdot LogLoss + \alpha \cdot LogLoss_{group}$$

See the QueryCrossEntropy section for more details.

Usage information: see more.

User-defined parameters:

- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
- alpha: the mixing coefficient $\alpha$ in the formula above. Default: 0.95.
### QueryRMSE

$$\displaystyle\sqrt{\displaystyle\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} \left( t_{i} - a_{i} - \displaystyle\frac{\sum\limits_{j \in Group} w_{j} (t_{j} - a_{j})}{\sum\limits_{j \in Group} w_{j}} \right)^{2}}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i}}}$$

Usage information: see more.

User-defined parameters:

- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
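The QueryRMSE formula is an RMSE computed on residuals shifted by each group's weighted mean residual. A pure-Python sketch of that computation (illustrative only; the group encoding as an index dict is an assumption of this example):

```python
import math

def query_rmse(groups, targets, approx, weights=None):
    """QueryRMSE: RMSE of (t_i - a_i) after subtracting, per group,
    the weighted mean of the residuals within that group.

    groups -- dict mapping group id -> list of object indices
    """
    if weights is None:
        weights = [1.0] * len(targets)
    num = den = 0.0
    for idx in groups.values():
        wsum = sum(weights[j] for j in idx)
        # Weighted mean residual of the group (the inner fraction).
        shift = sum(weights[j] * (targets[j] - approx[j]) for j in idx) / wsum
        num += sum(weights[i] * (targets[i] - approx[i] - shift) ** 2 for i in idx)
        den += wsum
    return math.sqrt(num / den)
```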
### QuerySoftMax

$$-\displaystyle\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} t_{i} \log \left(\displaystyle\frac{w_{i} e^{\beta a_{i}}}{\sum\limits_{j \in Group} w_{j} e^{\beta a_{j}}}\right)}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} t_{i}}$$

Usage information: see more.

User-defined parameters:

- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
- beta: the input scale coefficient. Default: 1.
### GroupQuantile

$$\displaystyle\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} \left(\alpha - I(t_{i} \leq a_{i} - g_{Group\ mean})\right) (t_{i} - a_{i} - g_{Group\ mean})}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i}} \text{, where}$$

$$g_{Group\ mean} = \displaystyle\frac{\sum\limits_{j \in Group} w_{j} (t_{j} - a_{j})}{\sum\limits_{j \in Group} w_{j}}$$

Usage information: see more.

User-defined parameters:

- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
### PFound

The calculation of this metric is disabled by default for the training dataset to speed up training. Use the hints=skip_train~false parameter to enable it.

$$PFound(top, decay) = \sum\limits_{group \in groups} PFound(group, top, decay)$$

See the PFound section for more details.

This metric can't be used for optimization. See more.

User-defined parameters:

- decay: the probability of search continuation after reaching the current object. Default: 0.85.
- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
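One common formulation of the per-group PFound term walks the ranking from the top, tracking the probability that the user is still looking: after each document the "look" probability is multiplied by (1 - relevance) and by the decay. A pure-Python sketch under that assumption (illustrative, not CatBoost's code):

```python
def pfound_group(targets, approx, decay=0.85, top=-1):
    """PFound for one group; documents are ranked by approx descending.

    value = sum over positions of pLook_i * t_i, where pLook_1 = 1 and
    pLook_{i+1} = pLook_i * (1 - t_i) * decay.  Targets are in [0, 1].
    """
    order = sorted(range(len(approx)), key=lambda i: -approx[i])
    if top != -1:
        order = order[:top]
    p_look, value = 1.0, 0.0
    for i in order:
        value += p_look * targets[i]
        p_look *= (1.0 - targets[i]) * decay
    return value
```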
### NDCG

The calculation of this metric is disabled by default for the training dataset to speed up training. Use the hints=skip_train~false parameter to enable it.

$$nDCG(top) = \frac{DCG(top)}{IDCG(top)}$$

See the NDCG section for more details.

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- type: metric calculation principles. Default: Base. Possible values: Base, Exp.
- denominator: metric denominator type. Default: LogPosition. Possible values: LogPosition, Position.
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
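As a sketch of nDCG for one group, here is a pure-Python version with Base gain ($t_i$) and the LogPosition denominator ($\log_2(i+1)$). It is illustrative only; per-object weights and the Exp/Position variants are omitted.

```python
import math

def ndcg(targets, approx, top=-1):
    """nDCG for one group: DCG over the model ranking divided by the
    DCG of the ideal (target-sorted) ranking."""
    def dcg(order):
        k = len(order) if top == -1 else min(top, len(order))
        # Base gain t_i, LogPosition denominator log2(position + 1)
        return sum(targets[order[i]] / math.log2(i + 2) for i in range(k))
    by_approx = sorted(range(len(targets)), key=lambda i: -approx[i])
    ideal = sorted(range(len(targets)), key=lambda i: -targets[i])
    idcg = dcg(ideal)
    return dcg(by_approx) / idcg if idcg > 0 else 1.0
```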
### DCG

The calculation of this metric is disabled by default for the training dataset to speed up training. Use the hints=skip_train~false parameter to enable it.

$$DCG(top)$$

See the NDCG section for more details.

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- type: metric calculation principles. Default: Base. Possible values: Base, Exp.
- denominator: metric denominator type. Default: LogPosition. Possible values: LogPosition, Position.
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
### FilteredDCG

The calculation of this metric is disabled by default for the training dataset to speed up training. Use the hints=skip_train~false parameter to enable it.

See the FilteredDCG section for more details.

This metric can't be used for optimization. See more.

User-defined parameters:

- type: metric calculation principles. Default: Base. Possible values: Base, Exp.
- denominator: metric denominator type. Default: LogPosition. Possible values: LogPosition, Position.
### QueryAverage

Represents the average of the label values for the objects with the top $M$ predicted values in each group.

See the QueryAverage section for more details.

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: this parameter is obligatory (no default value is defined).
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: true.
### PrecisionAt

The calculation of this metric consists of the following steps:

1. The objects are sorted in descending order of predicted relevance ($a_{i}$).
2. The metric is calculated as follows:

$$PrecisionAt(top, border) = \frac{\sum\limits_{i=1}^{top} Relevant_{i}}{top} \text{, where}$$

$$Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{otherwise} \end{cases}$$

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- border: the label value border. If the label value is strictly greater than this threshold, the object is considered positive; otherwise it is considered negative. Default: 0.
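The two steps above can be sketched in a few lines of Python (illustrative only; the function name is hypothetical):

```python
def precision_at(targets, approx, top, border=0.0):
    """Fraction of the top-scored objects whose label exceeds border."""
    # Step 1: rank objects by predicted relevance, descending.
    order = sorted(range(len(approx)), key=lambda i: -approx[i])[:top]
    # Step 2: count relevant objects among the top and divide by top.
    relevant = sum(1 for i in order if targets[i] > border)
    return relevant / top
```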
### RecallAt

The calculation of this metric consists of the following steps:

1. The objects are sorted in descending order of predicted relevance ($a_{i}$).
2. The metric is calculated as follows:

$$RecallAt(top, border) = \frac{\sum\limits_{i=1}^{top} Relevant_{i}}{\sum\limits_{i=1}^{N} Relevant_{i}} \text{, where}$$

$$Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{otherwise} \end{cases}$$

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- border: the label value border. If the label value is strictly greater than this threshold, the object is considered positive; otherwise it is considered negative. Default: 0.
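Analogously to PrecisionAt, RecallAt divides the relevant hits in the top by the total number of relevant objects. A short illustrative sketch:

```python
def recall_at(targets, approx, top, border=0.0):
    """Share of all relevant objects that appear in the top positions."""
    order = sorted(range(len(approx)), key=lambda i: -approx[i])
    total = sum(1 for i in order if targets[i] > border)
    found = sum(1 for i in order[:top] if targets[i] > border)
    return found / total if total else 0.0
```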
### MAP

The calculation of this metric consists of the following steps:

1. The objects are sorted in descending order of predicted relevance ($a_{i}$).
2. The metric is calculated as follows:

$$MAP(top, border) = \frac{1}{N_{groups}} \sum\limits_{j=1}^{N_{groups}} AveragePrecisionAt_{j}(top, border) \text{, where}$$

$N_{groups}$ is the number of groups and

$$AveragePrecisionAt(top, border) = \frac{\sum\limits_{i=1}^{top} Relevant_{i} \cdot PrecisionAt_{i}}{\sum\limits_{i=1}^{top} Relevant_{i}}$$

is calculated individually for each $j$-th group, with

$$Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{otherwise} \end{cases} \qquad PrecisionAt_{i} = \frac{\sum\limits_{j=1}^{i} Relevant_{j}}{i}$$

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- border: the label value border. If the label value is strictly greater than this threshold, the object is considered positive; otherwise it is considered negative. Default: 0.
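The per-group average precision and its mean over groups can be sketched as follows (illustrative only; groups are passed as parallel lists, which is an assumption of this example):

```python
def average_precision_at(targets, approx, top, border=0.0):
    """AveragePrecisionAt for one group: mean of PrecisionAt_i taken at
    each relevant position i within the top."""
    order = sorted(range(len(approx)), key=lambda i: -approx[i])[:top]
    rel_seen, score = 0, 0.0
    for pos, i in enumerate(order, start=1):
        if targets[i] > border:
            rel_seen += 1
            score += rel_seen / pos  # PrecisionAt_i at this relevant hit
    return score / rel_seen if rel_seen else 0.0

def map_at(group_targets, group_approx, top, border=0.0):
    """MAP: mean of per-group AveragePrecisionAt values."""
    values = [average_precision_at(t, a, top, border)
              for t, a in zip(group_targets, group_approx)]
    return sum(values) / len(values)
```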
### ERR

$$ERR = \frac{1}{|Q|} \sum\limits_{q=1}^{|Q|} ERR_q \text{, where}$$

$$ERR_q = \sum\limits_{i=1}^{top} \frac{1}{i} \, t_{q,i} \prod\limits_{j=1}^{i-1} (1 - t_{q,j})$$

Targets must be from the range [0, 1]: $t_{q,i} \in [0, 1]$.

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
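The $ERR_q$ formula above translates directly into a loop that accumulates the probability of not having stopped at an earlier position. A pure-Python sketch for one query (illustrative only):

```python
def err_group(targets, approx, top=-1):
    """ERR_q: sum over positions of (1/i) * t_i * prod_{j<i}(1 - t_j),
    with documents ranked by approx descending and t in [0, 1]."""
    order = sorted(range(len(approx)), key=lambda i: -approx[i])
    if top != -1:
        order = order[:top]
    not_stopped, value = 1.0, 0.0
    for pos, i in enumerate(order, start=1):
        value += not_stopped * targets[i] / pos
        not_stopped *= 1.0 - targets[i]
    return value
```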
### MRR

$$MRR = \frac{1}{|Q|} \sum\limits_{q=1}^{|Q|} \frac{1}{rank_q}$$

where $rank_q$ is the rank position of the first relevant document for the $q$-th query.

This metric can't be used for optimization. See more.

User-defined parameters:

- top: the number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or, if approx values are equal, the ones with the lowest target values. Default: -1 (all label values are used).
- border: the label value border. If the label value is strictly greater than this threshold, the object is considered positive; otherwise it is considered negative. Default: 0.
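A pure-Python sketch of MRR over several queries, using border to decide relevance as in the parameter description above (illustrative only; groups are passed as parallel lists):

```python
def mrr(group_targets, group_approx, border=0.0):
    """Mean reciprocal rank of the first relevant document per query."""
    total = 0.0
    for targets, approx in zip(group_targets, group_approx):
        order = sorted(range(len(approx)), key=lambda i: -approx[i])
        for pos, i in enumerate(order, start=1):
            if targets[i] > border:
                total += 1.0 / pos  # 1 / rank_q
                break
    return total / len(group_targets)
```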
### AUC

The calculation of this metric is disabled by default for the training dataset to speed up training. Use the hints=skip_train~false parameter to enable it.

The type parameter defines the metric calculation principles.

Classic type:

$$\displaystyle\frac{\sum I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum w_{i} \cdot w_{j}}$$

The sum is calculated over all pairs of objects $(i, j)$ such that $t_{i} = 0$ and $t_{j} = 1$, with

$$I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}$$

Refer to the Wikipedia article for details.

If the target type is not binary, then every object with target value $t$ and weight $w$ is replaced with two objects for the metric calculation:

- $o_{1}$ with weight $t \cdot w$ and target value 1;
- $o_{2}$ with weight $(1 - t) \cdot w$ and target value 0.

Target values must be in the range [0; 1].

Ranking type:

$$\displaystyle\frac{\sum I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum w_{i} \cdot w_{j}}$$

The sum is calculated over all pairs of objects $(i, j)$ such that $t_{i} < t_{j}$, with the same $I(x, y)$ as above.

This metric can't be used for optimization. See more.

User-defined parameters:

- type: the type of AUC; defines the metric calculation principles. Default: Classic. Possible values: Classic, Ranking. Examples: AUC:type=Classic, AUC:type=Ranking.
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: False for the Classic type, True for the Ranking type. Example: AUC:type=Ranking;use_weights=False.
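The Classic AUC for binary targets reduces to a weighted count over (negative, positive) pairs, with ties scoring 0.5. A pure-Python sketch (illustrative only; it follows the common convention of crediting pairs where the positive object outscores the negative one, and it loops over all pairs, so it is quadratic in the number of objects):

```python
def auc_classic(targets, approx, weights=None):
    """Classic weighted AUC for binary targets with tie handling."""
    if weights is None:
        weights = [1.0] * len(targets)
    num = den = 0.0
    for i, (ti, ai) in enumerate(zip(targets, approx)):
        for j, (tj, aj) in enumerate(zip(targets, approx)):
            if ti == 0 and tj == 1:  # i negative, j positive
                w = weights[i] * weights[j]
                # 1 if positive outscores negative, 0.5 on a tie, else 0
                num += w * (1.0 if aj > ai else 0.5 if aj == ai else 0.0)
                den += w
    return num / den
```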
### QueryAUC

Classic type:

$$\displaystyle\frac{\sum\limits_{q} \sum\limits_{i, j \in q} I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum\limits_{q} \sum\limits_{i, j \in q} w_{i} \cdot w_{j}}$$

The sum is calculated over all pairs of objects $(i, j)$ within each query $q$ such that $t_{i} = 0$ and $t_{j} = 1$, with

$$I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}$$

Refer to the Wikipedia article for details.

If the target type is not binary, then every object with target value $t$ and weight $w$ is replaced with two objects for the metric calculation:

- $o_{1}$ with weight $t \cdot w$ and target value 1;
- $o_{2}$ with weight $(1 - t) \cdot w$ and target value 0.

Target values must be in the range [0; 1].

Ranking type:

$$\displaystyle\frac{\sum\limits_{q} \sum\limits_{i, j \in q} I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum\limits_{q} \sum\limits_{i, j \in q} w_{i} \cdot w_{j}}$$

The sum is calculated over all pairs of objects $(i, j)$ within each query such that $t_{i} < t_{j}$, with the same $I(x, y)$ as above.

This metric can't be used for optimization. See more.

User-defined parameters:

- type: the type of QueryAUC; defines the metric calculation principles. Default: Ranking. Possible values: Classic, Ranking. Examples: QueryAUC:type=Classic, QueryAUC:type=Ranking.
- use_weights: use object/group weights to calculate metrics if the specified value is true; set all weights to 1 regardless of the input data if the specified value is false. Default: False. Example: QueryAUC:type=Ranking;use_weights=False.
## Used for optimization

| Name | Optimization | GPU Support |
|---|---|---|
| PairLogit | + | + |
| PairLogitPairwise | + | + |
| PairAccuracy | - | - |
| YetiRank | + | + (Classic mode only) |
| YetiRankPairwise | + | + (Classic mode only) |
| LambdaMart | + | - |
| StochasticFilter | + | - |
| StochasticRank | + | - |
| QueryCrossEntropy | + | + |
| QueryRMSE | + | + |
| QuerySoftMax | + | + |
| GroupQuantile | + | - |
| PFound | - | - |
| NDCG | - | - |
| DCG | - | - |
| FilteredDCG | - | - |
| QueryAverage | - | - |
| PrecisionAt | - | - |
| RecallAt | - | - |
| MAP | - | - |
| ERR | - | - |
| MRR | - | - |
| AUC | - | - |
| QueryAUC | - | - |
# Ranking: objectives and metrics
## In this article:
- [Pairwise metrics](https://catboost.ai/docs/en/concepts/loss-functions-ranking#pairwise-metrics)
- [PairLogit](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairLogit)
- [PairLogitPairwise](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairLogitPairwise)
- [PairAccuracy](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairAccuracy)
- [Groupwise metrics](https://catboost.ai/docs/en/concepts/loss-functions-ranking#groupwise-metrics)
- [YetiRank](https://catboost.ai/docs/en/concepts/loss-functions-ranking#YetiRank)
- [YetiRankPairwise](https://catboost.ai/docs/en/concepts/loss-functions-ranking#YetiRankPairwise)
- [LambdaMart](https://catboost.ai/docs/en/concepts/loss-functions-ranking#LambdaMart)
- [StochasticFilter](https://catboost.ai/docs/en/concepts/loss-functions-ranking#StochasticFilter)
- [StochasticRank](https://catboost.ai/docs/en/concepts/loss-functions-ranking#StochasticRank)
- [QueryCrossEntropy](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryCrossEntropy)
- [QueryRMSE](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryRMSE)
- [QuerySoftMax](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QuerySoftMax)
- [GroupQuantile](https://catboost.ai/docs/en/concepts/loss-functions-ranking#GroupQuantile)
- [PFound](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PFound)
- [NDCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#ndcg)
- [DCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#dcg)
- [FilteredDCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#FilteredDCG)
- [QueryAverage](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryAverage)
- [PrecisionAt](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PrecisionAtK)
- [RecallAt](https://catboost.ai/docs/en/concepts/loss-functions-ranking#RecallAtK)
- [MAP](https://catboost.ai/docs/en/concepts/loss-functions-ranking#mapk)
- [ERR](https://catboost.ai/docs/en/concepts/loss-functions-ranking#err)
- [MRR](https://catboost.ai/docs/en/concepts/loss-functions-ranking#mrr)
- [AUC](https://catboost.ai/docs/en/concepts/loss-functions-ranking#AUC)
- [QueryAUC](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryAUC)
- [Used for optimization](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information)
## Pairwise metrics
Pairwise metrics use specially labeled information — pairs of dataset objects where one object is considered the "winner" and the other the "loser". This information might not be exhaustive (not all possible pairs of objects have to be labeled this way). It is also possible to specify a weight for each pair.
If GroupId is specified and the dataset is used in pairwise modes, then both members of every pair must belong to the same group. GroupId is the identifier of the object's group: an arbitrary string, possibly representing an integer.
If labeled pairs are not specified for the dataset, pairs are generated automatically within each group from per-object label values (labels must be specified and must be numerical). The object with the greater label value in a pair is considered the "winner".
The following variables are used in the formulas of the described pairwise metrics:
- $p$ is the positive object in the pair.
- $n$ is the negative object in the pair.
See all common variables in [Variables used in formulas](https://catboost.ai/docs/en/concepts/loss-functions-variables-used).
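The automatic pair generation described above can be sketched as follows. This is an illustrative, unweighted version, and the helper name is ours; the actual CatBoost implementation differs in details such as sampling limits:

```python
from itertools import combinations

def generate_pairs(labels):
    """For one group, emit (winner, loser) index pairs for every two
    objects whose numerical labels differ; the object with the greater
    label value is the winner. Equal labels produce no pair."""
    pairs = []
    for i, j in combinations(range(len(labels)), 2):
        if labels[i] > labels[j]:
            pairs.append((i, j))
        elif labels[j] > labels[i]:
            pairs.append((j, i))
    return pairs
```

For example, `generate_pairs([2, 1, 1])` produces pairs with object 0 as the winner against objects 1 and 2, while the tied objects 1 and 2 form no pair.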
### PairLogit
$$\displaystyle\frac{-\sum\limits_{p, n \in Pairs} w_{pn} \log\left(\displaystyle\frac{1}{1 + e^{-(a_{p} - a_{n})}}\right)}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
Note
The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
max\_pairs
The maximum number of generated pairs in each group. Takes effect only if no explicit pairs are given; in that case, pairs are generated automatically without repetition.
*Default:* All possible pairs are generated in each group
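A minimal sketch of the PairLogit value, assuming pairs are given as (winner, loser) index tuples. The function name and defaults are ours, not part of the CatBoost API:

```python
import math

def pair_logit(approxes, pairs, pair_weights=None):
    """Weighted mean of -log(sigmoid(a_p - a_n)) over (winner, loser)
    index pairs; object weights are not used, pair weights are."""
    if pair_weights is None:
        pair_weights = [1.0] * len(pairs)
    loss = sum(w * -math.log(1.0 / (1.0 + math.exp(-(approxes[p] - approxes[n]))))
               for (p, n), w in zip(pairs, pair_weights))
    return loss / sum(pair_weights)
```

With equal predictions the loss is $\log 2$ per pair, and it decreases as the winner's prediction pulls ahead of the loser's.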
### PairLogitPairwise
$$\displaystyle\frac{-\sum\limits_{p, n \in Pairs} w_{pn} \log\left(\displaystyle\frac{1}{1 + e^{-(a_{p} - a_{n})}}\right)}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
This metric may give more accurate results on large datasets than PairLogit, but it is calculated significantly more slowly.
This technique is described in the [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html) paper.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
Note
The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead.
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
max\_pairs
The maximum number of generated pairs in each group. Takes effect only if no explicit pairs are given; in that case, pairs are generated automatically without repetition.
*Default:* All possible pairs are generated in each group
### PairAccuracy
$$\displaystyle\frac{\sum\limits_{p, n \in Pairs} w_{pn} \, [a_{p} > a_{n}]}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
Note
The object weights are not used to calculate the value of this metric. The weights of object pairs are used instead.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
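The PairAccuracy formula above can be sketched directly; the function name is ours, and the sketch assumes pairs are (winner, loser) index tuples:

```python
def pair_accuracy(approxes, pairs, pair_weights=None):
    """Weighted share of (winner, loser) pairs in which the winner's
    prediction is strictly greater than the loser's."""
    if pair_weights is None:
        pair_weights = [1.0] * len(pairs)
    correct = sum(w for (p, n), w in zip(pairs, pair_weights)
                  if approxes[p] > approxes[n])
    return correct / sum(pair_weights)
```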
## Groupwise metrics
### YetiRank
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
An approximation of ranking metrics (such as NDCG and PFound) that makes it possible to use them for optimization.
The value of this metric cannot be calculated directly. The metric that is written to [output data](https://catboost.ai/docs/en/concepts/output-data) when YetiRank is optimized depends on the range of all $N$ target values ($i \in [1; N]$) of the dataset:
- $target_{i} \in [0; 1]$ — PFound
- $target_{i} \notin [0; 1]$ — NDCG
This metric gives less accurate results on big datasets compared to YetiRankPairwise but it is significantly faster.
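The rule above for which metric gets reported can be expressed as a one-liner (illustrative; the function name is ours):

```python
def reported_metric(targets):
    """YetiRank's own value cannot be computed, so the reported metric
    depends on whether every target lies in [0; 1]."""
    return "PFound" if all(0.0 <= t <= 1.0 for t in targets) else "NDCG"
```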
Note
The object weights are not used to optimize this metric. The group weights are used instead.
This objective is used to optimize PairLogit. Automatically generated object pairs are used for this purpose. These pairs are generated independently for each object group. Use the [Group weights](https://catboost.ai/docs/en/concepts/input-data_group-weights) file or the GroupWeight column of the [Columns description](https://catboost.ai/docs/en/concepts/input-data_column-descfile) file to change the group importance. In this case, the weight of each generated pair is multiplied by the value of the corresponding group weight.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
Since CatBoost 1.2.1, YetiRank has been expanded to allow optimizing specific ranking loss functions by specifying the `mode` loss-function parameter. The default YetiRank can now also be referred to as `mode=Classic`.
**User-defined parameters**
mode
The mode of operation: either `Classic`, the traditional YetiRank described in [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html), or a specific ranking loss function to optimize, as described in the [Which Tricks are Important for Learning to Rank?](https://arxiv.org/abs/2204.01500) paper. Possible loss function values are `DCG`, `NDCG`, `MRR`, `ERR`, `MAP`. Non-Classic modes are supported only on CPU.
*Default:* `Classic`
permutations
The number of permutations.
*Default:* 10
decay
Used only in `Classic` mode.
The probability of search continuation after reaching the current object.
*Default:* 0.85
top
Used in all modes except `Classic`.
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
Unlimited by default.
dcg\_type
Used in modes `DCG` and `NDCG`.
Principle of calculation of \*DCG metrics.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
dcg\_denominator
Used in modes `DCG` and `NDCG`.
Principle of calculation of the denominator in \*DCG metrics.
*Default*: Position.
*Possible values*: `LogPosition`, `Position`.
noise
Type of noise to add to approxes.
*Default*: `Gumbel`.
*Possible values*: `Gumbel`, `Gauss`, `No`.
noise\_power
Power of noise to add (multiplier). Used only for `Gauss` noise for now.
*Default*: 1.
num\_neighbors
Used in all modes except `Classic`.
Number of neighbors used in the metric calculation.
*Default*: 1.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### YetiRankPairwise
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
An approximation of ranking metrics (such as NDCG and PFound) that makes it possible to use them for optimization.
The value of this metric cannot be calculated directly. The metric that is written to [output data](https://catboost.ai/docs/en/concepts/output-data) when YetiRankPairwise is optimized depends on the range of all $N$ target values ($i \in [1; N]$) of the dataset:
- $target_{i} \in [0; 1]$ — PFound
- $target_{i} \notin [0; 1]$ — NDCG
This metric gives more accurate results on big datasets compared to YetiRank but it is significantly slower.
This technique is described in the [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html) paper.
Note
The object weights are not used to optimize this metric. The group weights are used instead.
This objective is used to optimize PairLogit. Automatically generated object pairs are used for this purpose. These pairs are generated independently for each object group. Use the [Group weights](https://catboost.ai/docs/en/concepts/input-data_group-weights) file or the GroupWeight column of the [Columns description](https://catboost.ai/docs/en/concepts/input-data_column-descfile) file to change the group importance. In this case, the weight of each generated pair is multiplied by the value of the corresponding group weight.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
Since CatBoost 1.2.1, YetiRankPairwise has been expanded to allow optimizing specific ranking loss functions by specifying the `mode` loss-function parameter. The default YetiRankPairwise can now also be referred to as `mode=Classic`.
**User-defined parameters**
mode
The mode of operation: either `Classic`, the traditional YetiRankPairwise described in [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html), or a specific ranking loss function to optimize, as described in the [Which Tricks are Important for Learning to Rank?](https://arxiv.org/abs/2204.01500) paper. Possible loss function values are `DCG`, `NDCG`, `MRR`, `ERR`, `MAP`. Non-Classic modes are supported only on CPU.
*Default:* `Classic`
permutations
The number of permutations.
*Default:* 10
decay
Used only in `Classic` mode.
The probability of search continuation after reaching the current object.
*Default:* 0.85
top
Used in all modes except `Classic`.
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
Unlimited by default.
dcg\_type
Used in modes `DCG` and `NDCG`.
Principle of calculation of \*DCG metrics.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
dcg\_denominator
Used in modes `DCG` and `NDCG`.
Principle of calculation of the denominator in \*DCG metrics.
*Default*: Position.
*Possible values*: `LogPosition`, `Position`.
noise
Type of noise to add to approxes.
*Default*: `Gumbel`.
*Possible values*: `Gumbel`, `Gauss`, `No`.
noise\_power
Power of noise to add (multiplier). Used only for `Gauss` noise for now.
*Default*: 1.
num\_neighbors
Used in all modes except `Classic`.
Number of neighbors used in the metric calculation.
*Default*: 1.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### LambdaMart
Directly optimizes the selected metric. The value of the selected metric is written to [output data](https://catboost.ai/docs/en/concepts/output-data).
Refer to the [From RankNet to LambdaRank to LambdaMART](https://www.microsoft.com/en-us/research/uploads/prod/2016/02/MSR-TR-2010-82.pdf) paper for details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
metric
The metric that should be optimized.
*Default*: `NDCG`
*Supported values*: `DCG`, `NDCG`, `MRR`, `ERR`, `MAP`.
sigma
General sigmoid parameter. See [From RankNet to LambdaRank to LambdaMART](https://www.microsoft.com/en-us/research/uploads/prod/2016/02/MSR-TR-2010-82.pdf) paper for details.
*Default*: 1.0
*Supported values*: Real positive values.
norm
Whether the derivatives should be normalized.
*Default*: True
*Supported values*: False, True.
### StochasticFilter
Directly optimizes object filtration under a fixed ranking: the FilteredDCG metric is calculated for a pre-defined order of objects, so it can be used for optimization.
$$FilteredDCG = \sum\limits_{i=1}^{n} \displaystyle\frac{t_{i}}{i} \text{, where}$$
$t_{i}$ is the relevance of an object in the group, and the sum is computed over the objects with $a > 0$.
The filtration is defined via the raw formula value $[a > 0]$: zeros correspond to filtered-out instances and ones to the remaining ones.
The ranking is defined by the order of objects in the dataset.
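A minimal sketch of FilteredDCG under these definitions, assuming positions are renumbered $1, 2, \dots$ over the objects that survive filtration, in dataset order (one plausible reading; the function name is ours):

```python
def filtered_dcg(targets, approxes):
    """Sum of t_i / i over objects kept by the filter (approx > 0),
    with positions i renumbered 1, 2, ... over the kept objects in
    the original dataset order."""
    kept = [t for t, a in zip(targets, approxes) if a > 0]
    return sum(t / i for i, t in enumerate(kept, start=1))
```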
Warning
Sort objects by the column you are interested in before training with this loss function, and use the `--has-time` option for the Command-line version to avoid further object reordering.
For optimization, a distribution of filtrations is defined:
$$\mathbb{P}(\text{filter} \mid x) = \sigma(a) \text{, where}$$
- $\sigma(z) = \displaystyle\frac{1}{1 + e^{-z}}$
- The gradient is estimated via REINFORCE.
Refer to the [Learning to Select for a Predefined Ranking](http://proceedings.mlr.press/v97/vorobev19a/vorobev19a.pdf) paper for calculation details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
sigma
The scale for multiplying predictions.
*Default:* 1
num\_estimations
The number of gradient samples.
*Default:* 1
### StochasticRank
Directly optimizes the selected metric. The value of the selected metric is written to [output data](https://catboost.ai/docs/en/concepts/output-data).
Refer to the [StochasticRank: Global Optimization of Scale-Free Discrete Functions](https://arxiv.org/abs/2003.02122v1) paper for details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
Common parameters:
metric
The metric that should be optimized.
*Default*: Obligatory parameter
*Supported values*: `DCG`, `NDCG`, `PFound`.
num\_estimations
The number of gradient estimation iterations.
*Default*: 1
mu
Controls the penalty for coinciding predictions (aka *ties*).
*Default*: 0
Metric-specific parameters:
Available if the corresponding metric is set in the metric parameter.
**DCG**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
**NDCG**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
**PFound**
decay
The probability of search continuation after reaching the current object.
*Default*: 0.85
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
### QueryCrossEntropy
$$QueryCrossEntropy(\alpha) = (1 - \alpha) \cdot LogLoss + \alpha \cdot LogLoss_{group}$$
See the [QueryCrossEntropy](https://catboost.ai/docs/en/references/querycrossentropy) section for more details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
alpha
The coefficient $\alpha$ in the formula above, which balances the pointwise and groupwise log loss terms.
*Default:* 0.95
### QueryRMSE
$$\sqrt{\displaystyle\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} \left( t_{i} - a_{i} - \displaystyle\frac{\sum\limits_{j \in Group} w_{j} (t_{j} - a_{j})}{\sum\limits_{j \in Group} w_{j}} \right)^{2}}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i}}}$$
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
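The formula above is a weighted RMSE in which each group's predictions are first shifted by the group's best constant (the weighted mean residual). A minimal sketch, with the function name and the (target, approx, weight) layout being ours:

```python
import math

def query_rmse(groups):
    """QueryRMSE over groups of (target, approx, weight) triples: each
    group's approxes are shifted by the group's weighted mean residual
    (the best constant) before computing the weighted RMSE."""
    num = den = 0.0
    for group in groups:
        wsum = sum(w for _, _, w in group)
        shift = sum(w * (t - a) for t, a, w in group) / wsum
        num += sum(w * (t - a - shift) ** 2 for t, a, w in group)
        den += wsum
    return math.sqrt(num / den)
```

For example, a group with targets 1 and 3 and both predictions at 0 has shift 2, so the residuals become $\pm 1$ and the metric is 1.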
### QuerySoftMax
$$- \displaystyle\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} t_{i} \log \left(\displaystyle\frac{w_{i} e^{\beta a_{i}}}{\sum\limits_{j \in Group} w_{j} e^{\beta a_{j}}}\right)}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} t_{i}}$$
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
beta
The input scale coefficient.
*Default:* 1
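A minimal sketch of the formula above, with the function name and the (target, approx, weight) layout being ours:

```python
import math

def query_softmax(groups, beta=1.0):
    """QuerySoftMax over groups of (target, approx, weight) triples:
    weighted cross-entropy between targets and the per-group softmax
    of beta * approx (weights enter both the softmax and the loss)."""
    num = den = 0.0
    for group in groups:
        z = sum(w * math.exp(beta * a) for _, a, w in group)
        num += sum(w * t * math.log(w * math.exp(beta * a) / z)
                   for t, a, w in group if t > 0)
        den += sum(w * t for t, _, w in group)
    return -num / den
```

For a two-object group with one positive target and equal predictions, the softmax assigns probability 0.5 to the positive object and the loss is $\log 2$.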
### GroupQuantile
$$\displaystyle\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} \left(\alpha - I(t_{i} \leq a_{i} - g_{Group\ mean})\right)\left(t_{i} - a_{i} - g_{Group\ mean}\right)}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i}} \text{,}$$
where $g_{Group\ mean} = \displaystyle\frac{\sum\limits_{j \in Group} w_{j} (t_{j} - a_{j})}{\sum\limits_{j \in Group} w_{j}}$.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### PFound
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
$$PFound(top, decay) = \sum\limits_{group \in groups} PFound(group, top, decay)$$
See the [PFound](https://catboost.ai/docs/en/references/pfound) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
decay
The probability of search continuation after reaching the current object.
*Default*: 0.85
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### NDCG
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
$$nDCG(top) = \displaystyle\frac{DCG(top)}{IDCG(top)}$$
See the [NDCG](https://catboost.ai/docs/en/references/ndcg) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### DCG
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
$DCG(top)$
See the [NDCG](https://catboost.ai/docs/en/references/ndcg) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
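The `type` and `denominator` options above can be made concrete with a small per-group DCG sketch. The function name, argument spelling, and single-group simplification are ours, not the CatBoost API:

```python
import math

def dcg(targets, approxes, top=-1, type_="Base", denominator="LogPosition"):
    """Per-group DCG: sort by descending approx, then sum the numerator
    (t_i for Base, 2^t_i - 1 for Exp) over the positional denominator
    (log2(pos + 1) for LogPosition, pos for Position)."""
    order = sorted(range(len(targets)), key=lambda i: -approxes[i])
    if top >= 0:
        order = order[:top]
    total = 0.0
    for pos, i in enumerate(order, start=1):
        gain = targets[i] if type_ == "Base" else 2.0 ** targets[i] - 1.0
        disc = math.log2(pos + 1) if denominator == "LogPosition" else pos
        total += gain / disc
    return total
```

NDCG then divides this value by the same sum computed for the ideal (target-sorted) ordering.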
### FilteredDCG
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
See the [FilteredDCG](https://catboost.ai/docs/en/references/filtereddcg) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
### QueryAverage
Represents the average of the label values for the objects with the defined top $M$ approx values.
See the [QueryAverage](https://catboost.ai/docs/en/references/queryaverage) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: This parameter is obligatory (the default value is not defined).
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
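The definition above can be sketched in pure Python (an illustrative helper, not the CatBoost implementation; the function name and tie handling are assumptions) for a single group:

```python
def query_average(approx, target, top):
    """Average label value over the `top` objects of one group,
    ranked by descending approx value (hypothetical helper)."""
    order = sorted(range(len(approx)), key=lambda i: -approx[i])
    head = order[:top]
    return sum(target[i] for i in head) / len(head)

# Top-2 objects by approx are indices 0 and 1, so the value is (3 + 1) / 2
print(query_average([0.9, 0.5, 0.1], [3, 1, 2], top=2))  # 2.0
```

The reported metric averages this per-group value over all groups.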
### PrecisionAt
The calculation of this function consists of the following steps:
1. The objects are sorted in descending order of predicted relevancies ($a_{i}$).
2. The metric is calculated as follows:
$$PrecisionAt(top, border) = \frac{\sum\limits_{i=1}^{top} Relevant_{i}}{top} \text{, where}$$
- $Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{in other cases} \end{cases}$
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
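The two steps above can be sketched in pure Python (illustrative only, not the CatBoost implementation; tie handling between equal approx values is simplified):

```python
def precision_at(approx, target, top=-1, border=0):
    """PrecisionAt sketch: share of relevant objects among the `top` predictions."""
    # Sort objects by predicted relevance, descending.
    order = sorted(range(len(approx)), key=lambda i: -approx[i])
    if top == -1:
        top = len(order)
    relevant = [1 if target[i] > border else 0 for i in order[:top]]
    return sum(relevant) / top

# One of the top-2 predictions has target > border
print(precision_at([0.9, 0.8, 0.1], [1, 0, 1], top=2))  # 0.5
```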
### RecallAt
The calculation of this function consists of the following steps:
1. The objects are sorted in descending order of predicted relevancies ($a_{i}$).
2. The metric is calculated as follows:
$$RecallAt(top, border) = \frac{\sum\limits_{i=1}^{top} Relevant_{i}}{\sum\limits_{i=1}^{N} Relevant_{i}} \text{, where}$$
- $Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{in other cases} \end{cases}$
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
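As a minimal sketch of the formula (illustrative only, not the CatBoost implementation):

```python
def recall_at(approx, target, top=-1, border=0):
    """RecallAt sketch: share of all relevant objects found among the `top` predictions."""
    order = sorted(range(len(approx)), key=lambda i: -approx[i])
    if top == -1:
        top = len(order)
    rel = [1 if t > border else 0 for t in target]
    total = sum(rel)                            # all relevant objects in the group
    found = sum(rel[i] for i in order[:top])    # relevant objects in the top
    return found / total if total else 0.0

# 1 of the 2 relevant objects appears in the top-2 predictions
print(recall_at([0.9, 0.8, 0.1], [1, 0, 1], top=2))  # 0.5
```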
### MAP
1. The objects are sorted in descending order of predicted relevancies ($a_{i}$).
2. The metric is calculated as follows:
$$MAP(top, border) = \frac{1}{N_{groups}} \sum\limits_{j=1}^{N_{groups}} AveragePrecisionAt_{j}(top, border) \text{, where}$$
- $N_{groups}$ is the number of groups
- $AveragePrecisionAt(top, border) = \displaystyle\frac{\sum\limits_{i=1}^{top} Relevant_{i} \cdot PrecisionAt_{i}}{\sum\limits_{i=1}^{top} Relevant_{i}}$ (the value is calculated individually for each *j*-th group)
- $Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{in other cases} \end{cases}$
- $PrecisionAt_{i} = \displaystyle\frac{\sum\limits_{j=1}^{i} Relevant_{j}}{i}$
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
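The per-group average precision and its mean over groups can be sketched as follows (illustrative only, not the CatBoost implementation):

```python
def average_precision_at(approx, target, top=-1, border=0):
    """AveragePrecisionAt sketch for one group."""
    order = sorted(range(len(approx)), key=lambda i: -approx[i])
    if top == -1:
        top = len(order)
    hits, num, den = 0, 0.0, 0
    for rank, i in enumerate(order[:top], start=1):
        rel = 1 if target[i] > border else 0
        hits += rel
        num += rel * hits / rank  # Relevant_i * PrecisionAt_i
        den += rel
    return num / den if den else 0.0

def map_metric(groups, top=-1, border=0):
    # groups: list of (approx, target) pairs, one entry per group
    return sum(average_precision_at(a, t, top, border) for a, t in groups) / len(groups)

# Relevant objects at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 5/6
print(map_metric([([0.9, 0.8, 0.7], [1, 0, 1])]))  # 0.8333...
```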
### ERR
$$ERR = \frac{1}{|Q|} \sum_{q=1}^{|Q|} ERR_{q}$$
$$ERR_{q} = \sum_{i=1}^{top} \frac{1}{i} t_{q,i} \prod_{j=1}^{i-1} (1 - t_{q,j})$$
Targets should be from the range $[0, 1]$: $t_{q,i} \in [0, 1]$.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
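The per-query sum above can be read as a cascade: the user scans positions in order and stops at position $i$ with probability $t_{q,i}$. A minimal sketch (illustrative only, not the CatBoost implementation):

```python
def err_group(targets, top=-1):
    """ERR for one query; `targets` are relevances t in [0, 1], in predicted order."""
    if top == -1:
        top = len(targets)
    keep_going, err = 1.0, 0.0  # probability the user reached this position
    for i, t in enumerate(targets[:top], start=1):
        err += keep_going * t / i
        keep_going *= (1.0 - t)
    return err

# Position 1: 1.0 * 0.5 / 1; position 2: (1 - 0.5) * 0.5 / 2
print(err_group([0.5, 0.5]))  # 0.625
```

The reported metric averages this value over all $|Q|$ queries.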
### MRR
$$MRR = \frac{1}{|Q|} \sum_{q=1}^{|Q|} \frac{1}{rank_{q}} \text{,}$$
where $rank_{q}$ refers to the rank position of the first relevant document for the *q*-th query.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
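A minimal sketch of the formula (illustrative only, not the CatBoost implementation; `groups` holds one `(approx, target)` pair per query):

```python
def mrr(groups, border=0):
    """MRR sketch: mean reciprocal rank of the first relevant document per query."""
    total = 0.0
    for approx, target in groups:
        order = sorted(range(len(approx)), key=lambda i: -approx[i])
        for rank, i in enumerate(order, start=1):
            if target[i] > border:  # first relevant document
                total += 1.0 / rank
                break
    return total / len(groups)

# The only relevant object is ranked second, so MRR = 1/2
print(mrr([([0.1, 0.9], [1, 0])]))  # 0.5
```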
### AUC
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
The `type` parameter defines the metric calculation principles:
#### Classic type
$$\frac{\sum I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum w_{i} \cdot w_{j}}$$
The sum is calculated on all pairs of objects $(i, j)$ such that:
- $t_{i} = 0$
- $t_{j} = 1$
- $I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}$
Refer to the [Wikipedia article](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) for details.
If the target type is not binary, then every object with target value $t$ and weight $w$ is replaced with two objects for the metric calculation:
- $o_{1}$ with weight $t \cdot w$ and target value 1
- $o_{2}$ with weight $(1 - t) \cdot w$ and target value 0.
Target values must be in the range $[0; 1]$.
#### Ranking type
$$\frac{\sum I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum w_{i} \cdot w_{j}}$$
The sum is calculated on all pairs of objects $(i, j)$ such that:
- $t_{i} < t_{j}$
- $I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}$
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
type
The type of AUC. Defines the metric calculation principles.
*Default*: `Classic`.
*Possible values*: `Classic`, `Ranking`.
*Examples*: `AUC:type=Classic`, `AUC:type=Ranking`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default*: `False` for Classic type, `True` for Ranking type.
*Examples*: `AUC:type=Ranking;use_weights=False`.
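For binary targets, the Classic formula amounts to the usual probability that a randomly chosen positive object is scored above a randomly chosen negative one, with ties counted as 0.5. A brute-force sketch (illustrative only, not the CatBoost implementation):

```python
def auc_classic(approx, target, weight=None):
    """Classic AUC sketch for binary targets (weights default to 1)."""
    if weight is None:
        weight = [1.0] * len(target)
    num = den = 0.0
    for i in range(len(target)):        # candidate negative objects: t_i = 0
        for j in range(len(target)):    # candidate positive objects: t_j = 1
            if target[i] == 0 and target[j] == 1:
                w = weight[i] * weight[j]
                if approx[j] > approx[i]:
                    num += w        # positive ranked above negative
                elif approx[j] == approx[i]:
                    num += 0.5 * w  # tie
                den += w
    return num / den

# 3 of the 4 (negative, positive) pairs are ordered correctly
print(auc_classic([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # 0.75
```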
### QueryAUC
#### Classic type
$$\frac{\sum\limits_{q} \sum\limits_{i, j \in q} I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum\limits_{q} \sum\limits_{i, j \in q} w_{i} \cdot w_{j}}$$
The sum is calculated on all pairs of objects $(i, j)$ such that:
- $t_{i} = 0$
- $t_{j} = 1$
- $I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}$
Refer to the [Wikipedia article](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) for details.
If the target type is not binary, then every object with target value $t$ and weight $w$ is replaced with two objects for the metric calculation:
- $o_{1}$ with weight $t \cdot w$ and target value 1
- $o_{2}$ with weight $(1 - t) \cdot w$ and target value 0.
Target values must be in the range $[0; 1]$.
#### Ranking type
$$\frac{\sum\limits_{q} \sum\limits_{i, j \in q} I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum\limits_{q} \sum\limits_{i, j \in q} w_{i} \cdot w_{j}}$$
The sum is calculated on all pairs of objects $(i, j)$ such that:
- $t_{i} < t_{j}$
- $I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}$
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
type
The type of QueryAUC. Defines the metric calculation principles.
*Default*: `Ranking`.
*Possible values*: `Classic`, `Ranking`.
*Examples*: `QueryAUC:type=Classic`, `QueryAUC:type=Ranking`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default*: `False`.
*Examples*: `QueryAUC:type=Ranking;use_weights=False`.
## Used for optimization
| Name | Optimization | GPU Support |
|---|---|---|
| [PairLogit](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairLogit) | + | + |
| [PairLogitPairwise](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairLogitPairwise) | + | + |
| [PairAccuracy](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairAccuracy) | - | - |
| [YetiRank](https://catboost.ai/docs/en/concepts/loss-functions-ranking#YetiRank) | + | + (Classic mode only) |
| [YetiRankPairwise](https://catboost.ai/docs/en/concepts/loss-functions-ranking#YetiRankPairwise) | + | + (Classic mode only) |
| [LambdaMart](https://catboost.ai/docs/en/concepts/loss-functions-ranking#LambdaMart) | + | - |
| [StochasticFilter](https://catboost.ai/docs/en/concepts/loss-functions-ranking#StochasticFilter) | + | - |
| [StochasticRank](https://catboost.ai/docs/en/concepts/loss-functions-ranking#StochasticRank) | + | - |
| [QueryCrossEntropy](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryCrossEntropy) | + | + |
| [QueryRMSE](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryRMSE) | + | + |
| [QuerySoftMax](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QuerySoftMax) | + | + |
| [GroupQuantile](https://catboost.ai/docs/en/concepts/loss-functions-ranking#GroupQuantile) | + | - |
| [PFound](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PFound) | - | - |
| [NDCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#ndcg) | - | - |
| [DCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#dcg) | - | - |
| [FilteredDCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#FilteredDCG) | - | - |
| [QueryAverage](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryAverage) | - | - |
| [PrecisionAt](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PrecisionAtK) | - | - |
| [RecallAt](https://catboost.ai/docs/en/concepts/loss-functions-ranking#RecallAtK) | - | - |
| [MAP](https://catboost.ai/docs/en/concepts/loss-functions-ranking#mapk) | - | - |
| [ERR](https://catboost.ai/docs/en/concepts/loss-functions-ranking#err) | - | - |
| [MRR](https://catboost.ai/docs/en/concepts/loss-functions-ranking#mrr) | - | - |
| [AUC](https://catboost.ai/docs/en/concepts/loss-functions-ranking#AUC) | - | - |
| [QueryAUC](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryAUC) | - | - |
## Pairwise metrics
Pairwise metrics use special labeled information — pairs of dataset objects where one object is considered the "winner" and the other is considered the "loser". This information might be not exhaustive (not all possible pairs of objects are labeled in such a way). It is also possible to specify the weight for each pair.
If GroupId is specified, then all pairs must have both members from the same group if this dataset is used in pairwise modes.
Read more about GroupId
The identifier of the object's group. An arbitrary string, possibly representing an integer.
If the labeled pairs data is not specified for the dataset, then pairs are generated automatically in each group using per-object label values (labels must be specified and must be numerical). The object with a greater label value in the pair is considered the "winner".
The following variables are used in formulas of the described pairwise metrics:
- $p$ is the positive object in the pair.
- $n$ is the negative object in the pair.
See all common variables in [Variables used in formulas](https://catboost.ai/docs/en/concepts/loss-functions-variables-used).
### PairLogit
$$\frac{-\sum\limits_{p, n \in Pairs} w_{pn} \log\left(\displaystyle\frac{1}{1 + e^{-(a_{p} - a_{n})}}\right)}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
Note
The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
max\_pairs
The maximum number of generated pairs in each group. Takes effect if no pairs are given and therefore are generated without repetition.
*Default:* All possible pairs are generated in each group
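The loss above can be sketched in pure Python (illustrative only, not the CatBoost implementation; `pairs` plays the role of the labeled pair data):

```python
import math

def pair_logit(approx, pairs, pair_weight=None):
    """PairLogit sketch; `pairs` holds (winner_index, loser_index) tuples."""
    if pair_weight is None:
        pair_weight = [1.0] * len(pairs)
    num = den = 0.0
    for (p, n), w in zip(pairs, pair_weight):
        # log-likelihood that the winner outscores the loser
        num += w * math.log(1.0 / (1.0 + math.exp(-(approx[p] - approx[n]))))
        den += w
    return -num / den

# Equal approxes give a loss of log(2) per pair
print(pair_logit([0.0, 0.0], [(0, 1)]))  # 0.6931...
```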
### PairLogitPairwise
$$\frac{-\sum\limits_{p, n \in Pairs} w_{pn} \log\left(\displaystyle\frac{1}{1 + e^{-(a_{p} - a_{n})}}\right)}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
This metric may give more accurate results on large datasets compared to PairLogit but it is calculated significantly slower.
This technique is described in the [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html) paper.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
Note
The object weights are not used to calculate and optimize the value of this metric. The weights of object pairs are used instead.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
max\_pairs
The maximum number of generated pairs in each group. Takes effect if no pairs are given and therefore are generated without repetition.
*Default:* All possible pairs are generated in each group
### PairAccuracy
$$\frac{\sum\limits_{p, n \in Pairs} w_{pn} \left[a_{p} > a_{n}\right]}{\sum\limits_{p, n \in Pairs} w_{pn}}$$
Note
The object weights are not used to calculate the value of this metric. The weights of object pairs are used instead.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
## Groupwise metrics
### YetiRank
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
An approximation of ranking metrics (such as NDCG and PFound). Allows ranking metrics to be used for optimization.
The value of this metric cannot be calculated. The metric that is written to [output data](https://catboost.ai/docs/en/concepts/output-data) if YetiRank is optimized depends on the range of all *N* target values ($i \in [1; N]$) of the dataset:
- $target_{i} \in [0; 1]$: PFound
- $target_{i} \notin [0; 1]$: NDCG
This metric gives less accurate results on big datasets compared to YetiRankPairwise but it is significantly faster.
Note
The object weights are not used to optimize this metric. The group weights are used instead.
This objective is used to optimize PairLogit. Automatically generated object pairs are used for this purpose. These pairs are generated independently for each object group. Use the [Group weights](https://catboost.ai/docs/en/concepts/input-data_group-weights) file or the GroupWeight column of the [Columns description](https://catboost.ai/docs/en/concepts/input-data_column-descfile) file to change the group importance. In this case, the weight of each generated pair is multiplied by the value of the corresponding group weight.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
Since CatBoost 1.2.1, YetiRank has been expanded to allow optimizing specific ranking loss functions by specifying the `mode` loss function parameter. The default YetiRank behavior can now also be referred to as `mode=Classic`.
**User-defined parameters**
mode
The mode of operation: either `Classic`, the traditional YetiRank described in [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html), or a specific ranking loss function to optimize, as described in the [Which Tricks are Important for Learning to Rank?](https://arxiv.org/abs/2204.01500) paper. Possible loss function values are `DCG`, `NDCG`, `MRR`, `ERR`, `MAP`. Non-Classic modes are supported only on CPU.
*Default:* `Classic`
permutations
The number of permutations.
*Default:* 10
decay
Used only in `Classic` mode.
The probability of search continuation after reaching the current object.
*Default:* 0.85
top
Used in all modes except `Classic`.
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
Unlimited by default.
dcg\_type
Used in modes `DCG` and `NDCG`.
Principle of calculation of \*DCG metrics.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
dcg\_denominator
Used in modes `DCG` and `NDCG`.
Principle of calculation of the denominator in \*DCG metrics.
*Default*: Position.
*Possible values*: `LogPosition`, `Position`.
noise
Type of noise to add to approxes.
*Default*: `Gumbel`.
*Possible values*: `Gumbel`, `Gauss`, `No`.
noise\_power
Power of noise to add (multiplier). Used only for `Gauss` noise for now.
*Default*: 1.
num\_neighbors
Used in all modes except `Classic`.
Number of neighbors used in the metric calculation.
*Default*: 1.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### YetiRankPairwise
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
An approximation of ranking metrics (such as NDCG and PFound). Allows ranking metrics to be used for optimization.
The value of this metric cannot be calculated. The metric that is written to [output data](https://catboost.ai/docs/en/concepts/output-data) if YetiRankPairwise is optimized depends on the range of all *N* target values ($i \in [1; N]$) of the dataset:
- $target_{i} \in [0; 1]$: PFound
- $target_{i} \notin [0; 1]$: NDCG
This metric gives more accurate results on big datasets compared to YetiRank but it is significantly slower.
This technique is described in the [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html) paper.
Note
The object weights are not used to optimize this metric. The group weights are used instead.
This objective is used to optimize PairLogit. Automatically generated object pairs are used for this purpose. These pairs are generated independently for each object group. Use the [Group weights](https://catboost.ai/docs/en/concepts/input-data_group-weights) file or the GroupWeight column of the [Columns description](https://catboost.ai/docs/en/concepts/input-data_column-descfile) file to change the group importance. In this case, the weight of each generated pair is multiplied by the value of the corresponding group weight.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
Since CatBoost 1.2.1, YetiRankPairwise has been expanded to allow optimizing specific ranking loss functions by specifying the `mode` loss function parameter. The default YetiRankPairwise behavior can now also be referred to as `mode=Classic`.
**User-defined parameters**
mode
The mode of operation: either `Classic`, the traditional YetiRankPairwise described in [Winning The Transfer Learning Track of Yahoo!’s Learning To Rank Challenge with YetiRank](http://proceedings.mlr.press/v14/gulin11a.html), or a specific ranking loss function to optimize, as described in the [Which Tricks are Important for Learning to Rank?](https://arxiv.org/abs/2204.01500) paper. Possible loss function values are `DCG`, `NDCG`, `MRR`, `ERR`, `MAP`. Non-Classic modes are supported only on CPU.
*Default:* `Classic`
permutations
The number of permutations.
*Default:* 10
decay
Used only in `Classic` mode.
The probability of search continuation after reaching the current object.
*Default:* 0.85
top
Used in all modes except `Classic`.
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
Unlimited by default.
dcg\_type
Used in modes `DCG` and `NDCG`.
Principle of calculation of \*DCG metrics.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
dcg\_denominator
Used in modes `DCG` and `NDCG`.
Principle of calculation of the denominator in \*DCG metrics.
*Default*: Position.
*Possible values*: `LogPosition`, `Position`.
noise
Type of noise to add to approxes.
*Default*: `Gumbel`.
*Possible values*: `Gumbel`, `Gauss`, `No`.
noise\_power
Power of noise to add (multiplier). Used only for `Gauss` noise for now.
*Default*: 1.
num\_neighbors
Used in all modes except `Classic`.
Number of neighbors used in the metric calculation.
*Default*: 1.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### LambdaMart
Directly optimizes the selected metric. The value of the selected metric is written to [output data](https://catboost.ai/docs/en/concepts/output-data).
Refer to the [From RankNet to LambdaRank to LambdaMART](https://www.microsoft.com/en-us/research/uploads/prod/2016/02/MSR-TR-2010-82.pdf) paper for details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
metric
The metric that should be optimized.
*Default*: `NDCG`
*Supported values*: `DCG`, `NDCG`, `MRR`, `ERR`, `MAP`.
sigma
General sigmoid parameter. See [From RankNet to LambdaRank to LambdaMART](https://www.microsoft.com/en-us/research/uploads/prod/2016/02/MSR-TR-2010-82.pdf) paper for details.
*Default*: 1.0
*Supported values*: Real positive values.
norm
Whether derivatives should be normalized.
*Default*: True
*Supported values*: False, True.
### StochasticFilter
Directly optimizes the filtration of objects under a fixed ranking, given a pre-defined order of objects. As a result, the FilteredDCG metric can be used for optimization.
$$FilteredDCG = \sum\limits_{i=1}^{n} \frac{t_{i}}{i} \text{, where}$$
$t_{i}$ is the relevance of an object in the group, and the sum is computed over the objects with $a > 0$.
The filtration is defined via the raw formula value: zeros correspond to filtered instances and ones correspond to the remaining ones.
The ranking is defined by the order of objects in the dataset.
Warning
Sort objects by the column you are interested in before training with this loss function and use the `--has-time` option of the Command-line version to avoid further reordering of objects.
For optimization, a distribution of filtrations is defined:
$$\mathbb{P}(\text{filter} \mid x) = \sigma(a) \text{, where}$$
- $\sigma(z) = \displaystyle\frac{1}{1 + e^{-z}}$
- The gradient is estimated via REINFORCE.
Refer to the [Learning to Select for a Predefined Ranking](http://proceedings.mlr.press/v97/vorobev19a/vorobev19a.pdf) paper for calculation details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
sigma
The scale for multiplying predictions.
*Default:* 1
num\_estimations
The number of gradient samples.
*Default:* 1
### StochasticRank
Directly optimizes the selected metric. The value of the selected metric is written to [output data](https://catboost.ai/docs/en/concepts/output-data).
Refer to the [StochasticRank: Global Optimization of Scale-Free Discrete Functions](https://arxiv.org/abs/2003.02122v1) paper for details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
Common parameters:
metric
The metric that should be optimized.
*Default*: Obligatory parameter
*Supported values*: `DCG`, `NDCG`, `PFound`.
num\_estimations
The number of gradient estimation iterations.
*Default*: 1
mu
Controls the penalty for coinciding predictions (aka *ties*).
*Default*: 0
Metric-specific parameters:
Available if the corresponding metric is set in the metric parameter.
**DCG**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
**NDCG**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
**PFound**
decay
The probability of search continuation after reaching the current object.
*Default*: 0.85
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
### QueryCrossEntropy
$$QueryCrossEntropy(\alpha) = (1 - \alpha) \cdot LogLoss + \alpha \cdot LogLoss_{group}$$
See the [QueryCrossEntropy](https://catboost.ai/docs/en/references/querycrossentropy) section for more details.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
alpha
The coefficient $\alpha$ that mixes the pointwise and groupwise LogLoss terms.
*Default:* 0.95
### QueryRMSE
$$\sqrt{\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} \left( t_{i} - a_{i} - \displaystyle\frac{\sum\limits_{j \in Group} w_{j} (t_{j} - a_{j})}{\sum\limits_{j \in Group} w_{j}} \right)^{2}}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i}}}$$
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
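The formula above is an RMSE computed on residuals centered by each group's weighted mean residual. A minimal sketch (illustrative only, not the CatBoost implementation; each group is given as parallel `approx`, `target`, `weight` lists):

```python
def query_rmse(groups):
    """QueryRMSE sketch over a list of (approx, target, weight) groups."""
    num = den = 0.0
    for approx, target, weight in groups:
        # Weighted mean residual of the group (the inner fraction in the formula)
        shift = sum(w * (t - a) for a, t, w in zip(approx, target, weight)) / sum(weight)
        for a, t, w in zip(approx, target, weight):
            num += w * (t - a - shift) ** 2
            den += w
    return (num / den) ** 0.5

# Residuals 1 and 3 have group mean 2, so centered residuals are -1 and 1
print(query_rmse([([0.0, 0.0], [1.0, 3.0], [1.0, 1.0])]))  # 1.0
```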
### QuerySoftMax
$$-\frac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} t_{i} \log \left(\displaystyle\frac{w_{i} e^{\beta a_{i}}}{\sum\limits_{j \in Group} w_{j} e^{\beta a_{j}}}\right)}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} t_{i}}$$
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
beta
The input scale coefficient.
*Default:* 1
### GroupQuantile
\dfrac{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i} \left( \alpha - I(t_{i} \leq a_{i} - g_{Group\ mean}) \right) \left( t_{i} - a_{i} - g_{Group\ mean} \right)}{\sum\limits_{Group \in Groups} \sum\limits_{i \in Group} w_{i}},
where g_{Group\ mean} = \dfrac{\sum\limits_{j \in Group} w_{j} (t_{j} - a_{j})}{\sum\limits_{j \in Group} w_{j}}.
**Usage information** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### PFound
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
PFound(top, decay) = \sum\limits_{group \in groups} PFound(group, top, decay)
See the [PFound](https://catboost.ai/docs/en/references/pfound) section for more details
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
decay
The probability of search continuation after reaching the current object.
*Default*: 0.85
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
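Per the PFound reference, the per-group score accumulates the probability that the user reaches each position and finds it relevant: the "look" probability starts at 1 and is multiplied by `(1 - relevance) * decay` after each position. A sketch under the assumption that labels are relevance probabilities already sorted by descending prediction; the function name and input layout are illustrative:

```python
def pfound_group(relevances, decay=0.85, top=-1):
    """relevances: per-object relevance probabilities in [0, 1],
    sorted by descending model score within one group."""
    if top == -1 or top > len(relevances):
        top = len(relevances)
    p_look, score = 1.0, 0.0
    for rel in relevances[:top]:
        score += p_look * rel
        # Probability that the user continues to the next position.
        p_look *= (1.0 - rel) * decay
    return score
```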
### NDCG
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
nDCG(top) = \dfrac{DCG(top)}{IDCG(top)}
See the [NDCG](https://catboost.ai/docs/en/references/ndcg) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
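The `type` and `denominator` parameters above select the gain (`Base`: t_i, `Exp`: 2^{t_i} - 1) and the position discount (`LogPosition`: log2(pos + 1), `Position`: pos). A pure-Python sketch for a single group; the function names and argument layout are illustrative, and the tie-breaking rule for equal approx values is omitted for brevity:

```python
import math

def dcg(targets, approx, top=-1, type_="Base", denominator="LogPosition"):
    """DCG for one group: targets/approx are per-object labels and predictions."""
    order = sorted(range(len(targets)), key=lambda i: -approx[i])
    if top == -1 or top > len(order):
        top = len(order)
    score = 0.0
    for pos, i in enumerate(order[:top], start=1):
        gain = 2 ** targets[i] - 1 if type_ == "Exp" else targets[i]
        den = math.log2(pos + 1) if denominator == "LogPosition" else pos
        score += gain / den
    return score

def ndcg(targets, approx, **kw):
    ideal = dcg(targets, targets, **kw)  # IDCG: score of the best possible ordering
    return dcg(targets, approx, **kw) / ideal if ideal > 0 else 0.0
```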
### DCG
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
DCG(top)
See the [NDCG](https://catboost.ai/docs/en/references/ndcg) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### FilteredDCG
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
See the [FilteredDCG](https://catboost.ai/docs/en/references/filtereddcg) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
type
Metric calculation principles.
*Default*: Base.
*Possible values*: `Base`, `Exp`.
denominator
Metric denominator type.
*Default*: LogPosition.
*Possible values*: `LogPosition`, `Position`.
### QueryAverage
Represents the average label value for the objects with the top M approx values in each group.
See the [QueryAverage](https://catboost.ai/docs/en/references/queryaverage) section for more details.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: This parameter is obligatory (the default value is not defined).
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default:* true
### PrecisionAt
The calculation of this function consists of the following steps:
1. The objects are sorted in descending order of predicted relevancies (a_{i}).
2. The metric is calculated as follows:
PrecisionAt(top, border) = \dfrac{\sum\limits_{i=1}^{top} Relevant_{i}}{top}, where
- Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{in other cases} \end{cases}
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
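The two steps above can be sketched in pure Python for a single group; the function name and argument layout are illustrative, and tie-breaking for equal approx values is omitted:

```python
def precision_at(targets, approx, top=-1, border=0.0):
    """Fraction of the top-ranked objects whose label exceeds `border`."""
    order = sorted(range(len(targets)), key=lambda i: -approx[i])
    if top == -1 or top > len(order):
        top = len(order)
    relevant = sum(1 for i in order[:top] if targets[i] > border)
    return relevant / top
```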
### RecallAt
The calculation of this function consists of the following steps:
1. The objects are sorted in descending order of predicted relevancies (a_{i}).
2. The metric is calculated as follows:
RecallAt(top, border) = \dfrac{\sum\limits_{i=1}^{top} Relevant_{i}}{\sum\limits_{i=1}^{N} Relevant_{i}}, where
- Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{in other cases} \end{cases}
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
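Unlike PrecisionAt, the denominator here counts relevant objects in the whole group, not just in the top. A sketch for a single group; the function name and argument layout are illustrative:

```python
def recall_at(targets, approx, top=-1, border=0.0):
    """Fraction of all relevant objects that appear among the top-ranked ones."""
    order = sorted(range(len(targets)), key=lambda i: -approx[i])
    if top == -1 or top > len(order):
        top = len(order)
    in_top = sum(1 for i in order[:top] if targets[i] > border)
    total = sum(1 for t in targets if t > border)
    return in_top / total
```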
### MAP
1. The objects are sorted in descending order of predicted relevancies (a_{i}).
2. The metric is calculated as follows:
MAP(top, border) = \dfrac{1}{N_{groups}} \sum\limits_{j=1}^{N_{groups}} AveragePrecisionAt_{j}(top, border), where
- N_{groups} is the number of groups
- AveragePrecisionAt(top, border) = \dfrac{\sum\limits_{i=1}^{top} Relevant_{i} \cdot PrecisionAt_{i}}{\sum\limits_{i=1}^{top} Relevant_{i}}
The value is calculated individually for each *j*-th group.
- Relevant_{i} = \begin{cases} 1, & t_{i} > border \\ 0, & \text{in other cases} \end{cases}
- PrecisionAt_{i} = \dfrac{\sum\limits_{j=1}^{i} Relevant_{j}}{i}
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
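Putting the definitions above together: within each group, precision is accumulated only at the positions holding relevant objects, and the per-group averages are then averaged across groups. A sketch; the function names and the `(targets, approx)` group layout are illustrative:

```python
def average_precision_at(targets, approx, top=-1, border=0.0):
    """AveragePrecisionAt for one group."""
    order = sorted(range(len(targets)), key=lambda i: -approx[i])
    if top == -1 or top > len(order):
        top = len(order)
    hits, num = 0, 0.0
    for pos, i in enumerate(order[:top], start=1):
        if targets[i] > border:
            hits += 1
            num += hits / pos  # PrecisionAt_i, counted only at relevant positions
    return num / hits if hits else 0.0

def map_at(groups, top=-1, border=0.0):
    """groups: list of (targets, approx) pairs, one per group."""
    return sum(average_precision_at(t, a, top, border) for t, a in groups) / len(groups)
```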
### ERR
ERR = \dfrac{1}{|Q|} \sum\limits_{q=1}^{|Q|} ERR_{q}
ERR_{q} = \sum\limits_{i=1}^{top} \dfrac{1}{i} t_{q,i} \prod\limits_{j=1}^{i-1} (1 - t_{q,j})
Targets must be in the range [0, 1]: t_{q,i} \in [0, 1].
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
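In the ERR_q formula above, the product term is the probability that the user was not satisfied by any earlier position, so it can be maintained as a running factor. A sketch; the function names and the `(targets, approx)` group layout are illustrative:

```python
def err_group(targets, approx, top=-1):
    """ERR for one group; targets in [0, 1] act as relevance probabilities."""
    order = sorted(range(len(targets)), key=lambda i: -approx[i])
    if top == -1 or top > len(order):
        top = len(order)
    p_continue, score = 1.0, 0.0
    for pos, i in enumerate(order[:top], start=1):
        score += p_continue * targets[i] / pos
        p_continue *= 1.0 - targets[i]  # not satisfied yet, keeps reading
    return score

def err(groups, top=-1):
    return sum(err_group(t, a, top) for t, a in groups) / len(groups)
```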
### MRR
MRR = \dfrac{1}{|Q|} \sum\limits_{q=1}^{|Q|} \dfrac{1}{rank_{q}}, where rank_{q} refers to the rank position of the first relevant document for the *q*-th query.
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
top
The number of top samples in a group that are used to calculate the ranking metric. Top samples are either the samples with the largest approx values or the ones with the lowest target values if approx values are the same.
*Default*: –1 (all label values are used).
border
The label value border. If the value is strictly greater than this threshold, it is considered a positive class. Otherwise it is considered a negative class.
*Default*: 0
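The formula above averages, over queries, the reciprocal rank of the first object whose label exceeds `border`. A sketch; the function name and the `(targets, approx)` group layout are illustrative:

```python
def mrr(groups, border=0.0):
    """groups: list of (targets, approx) pairs, one per query."""
    total = 0.0
    for targets, approx in groups:
        order = sorted(range(len(targets)), key=lambda i: -approx[i])
        for pos, i in enumerate(order, start=1):
            if targets[i] > border:
                total += 1.0 / pos  # reciprocal rank of the first relevant object
                break
    return total / len(groups)
```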
### AUC
The calculation of this metric is disabled by default for the training dataset to speed up the training. Use the `hints=skip_train~false` parameter to enable the calculation.
The `type` parameter defines the metric calculation principles.
#### Classic type
\dfrac{\sum I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum w_{i} \cdot w_{j}}
The sum is calculated on all pairs of objects (i, j) such that:
- t_{i} = 0
- t_{j} = 1
- I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}
Refer to the [Wikipedia article](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) for details.
If the target type is not binary, then every object with target value t and weight w is replaced with two objects for the metric calculation:
- o_{1} with weight t \cdot w and target value 1
- o_{2} with weight (1 - t) \cdot w and target value 0.
Target values must be in the range \[0; 1\].
#### Ranking type
\dfrac{\sum I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum w_{i} \cdot w_{j}}
The sum is calculated on all pairs of objects (i, j) such that:
- t_{i} < t_{j}
- I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#usage-information).
**User-defined parameters**
type
The type of AUC. Defines the metrics calculation principles.
*Default*: `Classic`.
*Possible values*: `Classic`, `Ranking`.
*Examples*: `AUC:type=Classic`, `AUC:type=Ranking`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default*: `False` for Classic type, `True` for Ranking type.
*Examples*: `AUC:type=Ranking;use_weights=False`.
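The Ranking-type formula above can be sketched as a weighted pairwise count over pairs with t_i < t_j, crediting a pair fully when the higher-labeled object receives the higher score and half for a tie. The function name and argument layout are illustrative; the O(n^2) loop is for clarity, not efficiency:

```python
def ranking_auc(targets, approx, weights=None):
    """Ranking-type AUC over all pairs (i, j) with targets[i] < targets[j]."""
    n = len(targets)
    if weights is None:
        weights = [1.0] * n
    num, den = 0.0, 0.0
    for i in range(n):
        for j in range(n):
            if targets[i] < targets[j]:
                w = weights[i] * weights[j]
                if approx[j] > approx[i]:
                    num += w        # correctly ordered pair
                elif approx[j] == approx[i]:
                    num += 0.5 * w  # tie counts as half
                den += w
    return num / den
```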
### QueryAUC
#### Classic type
\dfrac{\sum\limits_{q} \sum\limits_{i, j \in q} I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum\limits_{q} \sum\limits_{i, j \in q} w_{i} \cdot w_{j}}
The sum is calculated on all pairs of objects (i, j) within each group q such that:
- t_{i} = 0
- t_{j} = 1
- I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}
Refer to the [Wikipedia article](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve) for details.
If the target type is not binary, then every object with target value t and weight w is replaced with two objects for the metric calculation:
- o_{1} with weight t \cdot w and target value 1
- o_{2} with weight (1 - t) \cdot w and target value 0.
Target values must be in the range \[0; 1\].
#### Ranking type
\dfrac{\sum\limits_{q} \sum\limits_{i, j \in q} I(a_{i}, a_{j}) \cdot w_{i} \cdot w_{j}}{\sum\limits_{q} \sum\limits_{i, j \in q} w_{i} \cdot w_{j}}
The sum is calculated on all pairs of objects (i, j) within each group q such that:
- t_{i} < t_{j}
- I(x, y) = \begin{cases} 0, & x < y \\ 0.5, & x = y \\ 1, & x > y \end{cases}
**Can't be used for optimization.** See [more](https://catboost.ai/docs/en/concepts/loss-functions-ranking#optimization).
**User-defined parameters**
type
The type of QueryAUC. Defines the metric calculation principles.
*Default*: `Ranking`.
*Possible values*: `Classic`, `Ranking`.
*Examples*: `QueryAUC:type=Classic`, `QueryAUC:type=Ranking`.
use\_weights
Use object/group weights to calculate metrics if the specified value is "true" and set all weights to "1" regardless of the input data if the specified value is "false".
*Default*: `False`.
*Examples*: `QueryAUC:type=Ranking;use_weights=False`.
## Used for optimization
| Name | Optimization | GPU Support |
|---|---|---|
| [PairLogit](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairLogit) | \+ | \+ |
| [PairLogitPairwise](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairLogitPairwise) | \+ | \+ |
| [PairAccuracy](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PairAccuracy) | \- | \- |
| [YetiRank](https://catboost.ai/docs/en/concepts/loss-functions-ranking#YetiRank) | \+ | \+ (but only Classic mode) |
| [YetiRankPairwise](https://catboost.ai/docs/en/concepts/loss-functions-ranking#YetiRankPairwise) | \+ | \+ (but only Classic mode) |
| [LambdaMart](https://catboost.ai/docs/en/concepts/loss-functions-ranking#LambdaMart) | \+ | \- |
| [StochasticFilter](https://catboost.ai/docs/en/concepts/loss-functions-ranking#StochasticFilter) | \+ | \- |
| [StochasticRank](https://catboost.ai/docs/en/concepts/loss-functions-ranking#StochasticRank) | \+ | \- |
| [QueryCrossEntropy](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryCrossEntropy) | \+ | \+ |
| [QueryRMSE](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryRMSE) | \+ | \+ |
| [QuerySoftMax](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QuerySoftMax) | \+ | \+ |
| [GroupQuantile](https://catboost.ai/docs/en/concepts/loss-functions-ranking#GroupQuantile) | \+ | \- |
| [PFound](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PFound) | \- | \- |
| [NDCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#ndcg) | \- | \- |
| [DCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#dcg) | \- | \- |
| [FilteredDCG](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PFilteredDCG) | \- | \- |
| [QueryAverage](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryAverage) | \- | \- |
| [PrecisionAt](https://catboost.ai/docs/en/concepts/loss-functions-ranking#PrecisionAtK) | \- | \- |
| [RecallAt](https://catboost.ai/docs/en/concepts/loss-functions-ranking#RecallAtK) | \- | \- |
| [MAP](https://catboost.ai/docs/en/concepts/loss-functions-ranking#mapk) | \- | \- |
| [ERR](https://catboost.ai/docs/en/concepts/loss-functions-ranking#err) | \- | \- |
| [MRR](https://catboost.ai/docs/en/concepts/loss-functions-ranking#mrr) | \- | \- |
| [AUC](https://catboost.ai/docs/en/concepts/loss-functions-ranking#AUC) | \- | \- |
| [QueryAUC](https://catboost.ai/docs/en/concepts/loss-functions-ranking#QueryAUC) | \- | \- |