ℹ️ Skipped - page is already crawled
| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | download_http_code = 200 | HTTP 200 |
| Age cutoff | PASS | download_stamp > now() - 6 MONTH | 1.2 months ago |
| History drop | PASS | isNull(history_drop_reason) | No drop reason |
| Spam/ban | PASS | fh_dont_index != 1 AND ml_spam_score = 0 | ml_spam_score=0 |
| Canonical | PASS | meta_canonical IS NULL OR = '' OR = src_unparsed | Not set |

| Property | Value |
|---|---|
| URL | https://contrib.scikit-learn.org/category_encoders/jamesstein.html |
| Last Crawled | 2026-03-14 13:02:31 (1 month ago) |
| First Indexed | 2021-03-18 23:26:37 (5 years ago) |
| HTTP Status Code | 200 |
| Meta Title | James-Stein Encoder — Category Encoders 2.8.1 documentation |
| Meta Description | null |
| Meta Canonical | null |
| Boilerpipe Text | James-Stein Encoder. class category_encoders.james_stein.JamesSteinEncoder(verbose=0, cols=None, drop_invariant=False, return_df=True, handle_unknown='value', handle_missing='value', model='independent', random_state=None, randomized=False, sigma=0.05). James-Stein estimator; supported targets: binomial and continuous. For feature value i it returns the weighted average JS_i = (1 - B) * mean(y_i) + B * mean(y) with shrinkage weight B = var(y_i) / (var(y_i) + var(y)); the variances are estimated via the pooled, independent (SE^2 = var(y)/count(y)), binary (log-odds ratio), or beta model. The full parameter, method, and reference documentation is duplicated verbatim in the Markdown field below. |
| Markdown |
# James-Stein Encoder
*class* category\_encoders.james\_stein.JamesSteinEncoder(*verbose\=0*, *cols\=None*, *drop\_invariant\=False*, *return\_df\=True*, *handle\_unknown\='value'*, *handle\_missing\='value'*, *model\='independent'*, *random\_state\=None*, *randomized\=False*, *sigma\=0\.05*)[\[source\]](https://contrib.scikit-learn.org/category_encoders/_modules/category_encoders/james_stein.html#JamesSteinEncoder)
James-Stein estimator.
Supported targets: binomial and continuous. For polynomial target support, see PolynomialWrapper.
For feature value i, the James-Stein estimator returns a weighted average of:
> 1. The mean target value for the observed feature value i.
> 2. The mean target value (regardless of the feature value).
This can be written as:
```
JS_i = (1 - B) * mean(y_i) + B * mean(y)
```
The question is: what should the weight B be? If we put too much weight on the conditional mean, we will overfit; if we put too much weight on the global mean, we will underfit. The canonical machine-learning solution is cross-validation. However, Charles Stein came up with a closed-form solution to the problem. The intuition: if the estimate of mean(y\_i) is unreliable (y\_i has high variance), we should put more weight on mean(y). Stein expressed this as:
```
B = var(y_i) / (var(y_i) + var(y))
```
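Taking the variances as given for a moment, the two formulas above can be evaluated directly. A toy sketch (all numbers are made up for illustration):

```python
# Worked toy example of James-Stein shrinkage; all numbers are assumed,
# and the variances are treated as known for illustration.
mean_y_i = 0.80   # conditional mean target for feature value i
mean_y = 0.50     # global mean target
var_y_i = 0.03    # variance of the estimate of mean(y_i)
var_y = 0.01      # variance of the estimate of mean(y)

B = var_y_i / (var_y_i + var_y)          # 0.03 / 0.04 = 0.75
JS_i = (1 - B) * mean_y_i + B * mean_y   # 0.25 * 0.80 + 0.75 * 0.50 = 0.575
```

Because the conditional mean is noisy (high var(y\_i)), B is large and the encoding is pulled most of the way toward the global mean.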
The only remaining issue is that we do not know var(y), let alone var(y\_i). Hence, we have to estimate the variances. But how can we reliably estimate the variances when we already struggle to estimate the means? There are multiple solutions:
> 1. If we have the same count of observations for each feature value i and all y_i are close to each other, we can pretend that all var(y_i) are identical. This is called a pooled model.
> 2. If the observation counts are not equal, it makes sense to replace the variances with squared standard errors, which penalize small observation counts:
>
> ```
> SE^2 = var(y)/count(y)
> ```
>
> This is called an independent model.
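One plausible reading of the independent model, sketched in pure Python on made-up data (the library's exact estimator may differ; the grouped data and the pooled-variance SE are assumptions):

```python
from statistics import mean, pvariance

# Assumed toy data: binary target grouped by category.
y_by_cat = {"a": [1, 1, 0, 1], "b": [0, 1], "c": [0]}
y_all = [y for ys in y_by_cat.values() for y in ys]

prior = mean(y_all)
var_y = pvariance(y_all)
se2_prior = var_y / len(y_all)   # squared standard error of the global mean

encoded = {}
for cat, ys in y_by_cat.items():
    se2_i = var_y / len(ys)      # SE^2 = var(y)/count(y): small counts -> large SE
    B = se2_i / (se2_i + se2_prior)
    encoded[cat] = (1 - B) * mean(ys) + B * prior
```

The single-observation category "c" gets the largest weight B, so its encoding is shrunk hardest toward the prior.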
The James-Stein estimator has one practical limitation, however: it was defined only for normal distributions. If you want to apply it to binary classification, which allows only the values {0, 1}, it is better to first convert the mean target value from the bounded interval \[0, 1\] into an unbounded interval by replacing mean(y) with the log-odds ratio:
```
log-odds_ratio_i = log(mean(y_i)/mean(y_not_i))
```
This is called the binary model. Estimating the parameters of this model is, however, tricky, and sometimes it fails fatally. In those situations it is better to use the beta model, which generally delivers slightly worse accuracy than the binary model but does not suffer from fatal failures.
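The conversion step of the binary model can be sketched directly from the formula above (the rates are assumed for illustration):

```python
from math import log

mean_y_i = 0.80      # assumed mean target where the feature equals i
mean_y_not_i = 0.45  # assumed mean target everywhere else
log_odds_ratio_i = log(mean_y_i / mean_y_not_i)  # unbounded, unlike the raw means
```

A ratio above 1 maps to a positive value and a ratio below 1 to a negative one, so the transformed quantity is no longer confined to \[0, 1\].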
Parameters:
**verbose: int**
integer indicating verbosity of the output. 0 for none.
**cols: list**
a list of columns to encode; if None, all string columns will be encoded.
**drop\_invariant: bool**
boolean for whether or not to drop encoded columns with 0 variance.
**return\_df: bool**
boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array).
**handle\_missing: str**
options are ‘return\_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns the prior probability.
**handle\_unknown: str**
options are ‘return\_nan’, ‘error’ and ‘value’, defaults to ‘value’, which returns the prior probability.
**model: str**
options are ‘pooled’, ‘beta’, ‘binary’ and ‘independent’, defaults to ‘independent’.
**randomized: bool**
whether to add normal (Gaussian) noise to the training data in order to decrease overfitting (test data are untouched).
**sigma: float**
standard deviation (spread or “width”) of the normal distribution.
Methods
| | |
|---|---|
| [`fit`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.fit "category_encoders.james_stein.JamesSteinEncoder.fit")(X\[, y\]) | Fits the encoder according to X and y. |
| [`fit_transform`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.fit_transform "category_encoders.james_stein.JamesSteinEncoder.fit_transform")(X\[, y\]) | Fit and transform using the target information. |
| [`get_feature_names`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.get_feature_names "category_encoders.james_stein.JamesSteinEncoder.get_feature_names")() | Deprecated method to get feature names. |
| [`get_feature_names_in`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.get_feature_names_in "category_encoders.james_stein.JamesSteinEncoder.get_feature_names_in")() | Get the names of all input columns present when fitting. |
| [`get_feature_names_out`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.get_feature_names_out "category_encoders.james_stein.JamesSteinEncoder.get_feature_names_out")(\[input\_features\]) | Get the names of all transformed / added columns. |
| [`get_metadata_routing`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.get_metadata_routing "category_encoders.james_stein.JamesSteinEncoder.get_metadata_routing")() | Get metadata routing of this object. |
| [`get_params`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.get_params "category_encoders.james_stein.JamesSteinEncoder.get_params")(\[deep\]) | Get parameters for this estimator. |
| [`set_output`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.set_output "category_encoders.james_stein.JamesSteinEncoder.set_output")(\*\[, transform\]) | Set output container. |
| [`set_params`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.set_params "category_encoders.james_stein.JamesSteinEncoder.set_params")(\*\*params) | Set the parameters of this estimator. |
| [`set_transform_request`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.set_transform_request "category_encoders.james_stein.JamesSteinEncoder.set_transform_request")(\*\[, override\_return\_df\]) | Configure whether metadata should be requested to be passed to the `transform` method. |
| [`transform`](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder.transform "category_encoders.james_stein.JamesSteinEncoder.transform")(X\[, y, override\_return\_df\]) | Perform the transformation to new categorical data. |
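The fit/transform contract summarized in the table above can be illustrated with a minimal from-scratch sketch. This is not the library's implementation: the class name, the pooled-variance estimate, and the toy data are all assumptions, and only the fallback-to-prior behaviour of handle\_unknown='value' is mirrored.

```python
from statistics import mean

class TinyJamesSteinSketch:
    """Illustrative sketch of the fit/transform contract; not the
    category_encoders implementation. Uses independent-model shrinkage
    with an assumed pooled variance."""

    def fit(self, xs, ys):
        self.prior = mean(ys)
        groups = {}
        for x, y in zip(xs, ys):
            groups.setdefault(x, []).append(y)
        n = len(ys)
        var_y = sum((y - self.prior) ** 2 for y in ys) / n
        self.mapping = {}
        for cat, g in groups.items():
            se2_i, se2 = var_y / len(g), var_y / n
            b = se2_i / (se2_i + se2)
            self.mapping[cat] = (1 - b) * mean(g) + b * self.prior
        return self

    def transform(self, xs):
        # Unknown categories fall back to the prior, mirroring
        # handle_unknown='value'.
        return [self.mapping.get(x, self.prior) for x in xs]

enc = TinyJamesSteinSketch().fit(["a", "a", "b"], [1, 0, 1])
codes = enc.transform(["a", "zzz"])  # "zzz" was unseen -> prior
```

Fitting learns a per-category mapping from the target, and transform only looks the mapping up, which is why the real encoder asks for y during training but not at test time.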
References
\[1\]
Parametric empirical Bayes inference: Theory and applications, equations 1.19 & 1.20,
from <https://www.jstor.org/stable/2287098>
\[2\]
Empirical Bayes for multiple sample sizes, from
<http://chris-said.io/2017/05/03/empirical-bayes-for-multiple-sample-sizes/>
\[3\]
Shrinkage Estimation of Log-odds Ratios for Comparing Mobility Tables, from
<https://journals.sagepub.com/doi/abs/10.1177/0081175015570097>
\[4\]
Stein’s paradox and group rationality, from
<http://www.philos.rug.nl/~romeyn/presentation/2017_romeijn_-_Paris_Stein.pdf>
\[5\]
Stein’s Paradox in Statistics, from
<http://statweb.stanford.edu/~ckirby/brad/other/Article1977.pdf>
fit(*X: ndarray \| DataFrame \| list \| generic \| csr\_matrix*, *y: list \| Series \| ndarray \| tuple \| DataFrame \| None \= None*, *\*\*kwargs*)
Fits the encoder according to X and y.
Parameters:
**X**array-like, shape = \[n\_samples, n\_features\]
Training vectors, where n\_samples is the number of samples and n\_features is the number of features.
**y**array-like, shape = \[n\_samples\]
Target values.
Returns:
**self**encoder
Returns self.
fit\_transform(*X: ndarray \| DataFrame \| list \| generic \| csr\_matrix*, *y: list \| Series \| ndarray \| tuple \| DataFrame \| None \= None*, *\*\*fit\_params*)
Fit and transform using the target information.
This also uses the target for transforming, not only for training.
get\_feature\_names() → ndarray
Deprecated method to get feature names. Use get\_feature\_names\_out instead.
get\_feature\_names\_in() → ndarray
Get the names of all input columns present when fitting.
These columns are necessary for the transform step.
get\_feature\_names\_out(*input\_features\=None*) → ndarray
Get the names of all transformed / added columns.
Note that in sklearn, get\_feature\_names\_out takes feature\_names\_in as an argument and derives the output feature names from the input, so a fit is usually not necessary; when it is, a NotFittedError is raised. Here we always require a fit and return the fitted output columns.
Returns:
feature\_names: np.ndarray
A numpy array with all feature names transformed or added. Note: features dropped because they are constant/invariant are not included.
get\_metadata\_routing()
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
Returns:
**routing**MetadataRequest
A `MetadataRequest` encapsulating routing information.
get\_params(*deep\=True*)
Get parameters for this estimator.
Parameters:
**deep**bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
**params**dict
Parameter names mapped to their values.
set\_output(*\**, *transform\=None*)
Set output container.
See sphx\_glr\_auto\_examples\_miscellaneous\_plot\_set\_output.py for an example on how to use the API.
Parameters:
**transform**{“default”, “pandas”, “polars”}, default=None
Configure output of transform and fit\_transform.
- “default”: Default output format of a transformer
- “pandas”: DataFrame output
- “polars”: Polars output
- None: Transform configuration is unchanged
Added in version 1.4: “polars” option was added.
Returns:
**self**estimator instance
Estimator instance.
set\_params(*\*\*params*)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as `Pipeline`). The latter have parameters of the form `<component>__<parameter>` so that it’s possible to update each component of a nested object.
Parameters:
**\*\*params**dict
Estimator parameters.
Returns:
**self**estimator instance
Estimator instance.
set\_transform\_request(*\**, *override\_return\_df: bool \| None \| str \= '\$UNCHANGED\$'*) → [JamesSteinEncoder](https://contrib.scikit-learn.org/category_encoders/jamesstein.html#category_encoders.james_stein.JamesSteinEncoder "category_encoders.james_stein.JamesSteinEncoder")
Configure whether metadata should be requested to be passed to the `transform` method.
Note that this method is only relevant when this estimator is used as a sub-estimator within a meta-estimator and metadata routing is enabled with `enable_metadata_routing=True` (see `sklearn.set_config()`). Please check the User Guide on how the routing mechanism works.
The options for each parameter are:
- `True`: metadata is requested, and passed to `transform` if provided. The request is ignored if metadata is not provided.
- `False`: metadata is not requested and the meta-estimator will not pass it to `transform`.
- `None`: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- `str`: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (`sklearn.utils.metadata_routing.UNCHANGED`) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Parameters:
**override\_return\_df**str, True, False, or None, default=sklearn.utils.metadata\_routing.UNCHANGED
Metadata routing for `override_return_df` parameter in `transform`.
Returns:
**self**object
The updated object.
transform(*X: ndarray \| DataFrame \| list \| generic \| csr\_matrix*, *y: list \| Series \| ndarray \| tuple \| DataFrame \| None \= None*, *override\_return\_df: bool \= False*)
Perform the transformation to new categorical data.
Some encoders behave differently depending on whether y is given. This is mainly due to regularisation to avoid overfitting. On training data, transform should be called with y; on test data, without.
Parameters:
**X**array-like, shape = \[n\_samples, n\_features\]
**y**array-like, shape = \[n\_samples\] or None
**override\_return\_df**bool
override self.return\_df to force returning a DataFrame
Returns:
**p**array or DataFrame, shape = \[n\_samples, n\_features\_out\]
Transformed values with encoding applied.
***
© Copyright 2024, Paul Westenthanner, Will McGinnis.
Built with [Sphinx](https://www.sphinx-doc.org/) using a [theme](https://github.com/readthedocs/sphinx_rtd_theme) provided by [Read the Docs](https://readthedocs.org/). |
| Readable Markdown | null |
| Shard | 148 (laksa) |
| Root Hash | 6052685795207125548 |
| Unparsed URL | org,scikit-learn!contrib,/category_encoders/jamesstein.html s443 |