šŸ•·ļø Crawler Inspector

URL Lookup

Direct Parameter Lookup

Raw Queries and Responses

1. Shard Calculation

Query:
Response:
Calculated Shard: 152 (from laksa161)
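The raw query is omitted above, so the exact sharding scheme is not shown in this report. As a rough illustration only, host-based sharding is commonly a stable hash modulo the shard count; the hash function, the modulus of 256, and the reading of `laksa161` as a shard host name are all assumptions here, and this sketch is not claimed to reproduce shard 152.

```python
import hashlib

NUM_SHARDS = 256  # assumed shard count; the real value is not shown in this report

def shard_for(host: str) -> int:
    """Map a host to a shard id with a stable hash (illustrative only)."""
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# e.g. shard_for("en.wikipedia.org") -> some id in [0, NUM_SHARDS);
# the report above shows 152 for this URL.
```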

2. Crawled Status Check

Query:
Response:

3. Robots.txt Check

Query:
Response:
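The robots.txt query itself is omitted above. As a reference point, a crawler-side check can be done with Python's standard library; the user agent below is a placeholder, not this crawler's real one.

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://en.wikipedia.org/robots.txt")
rp.read()  # fetches and parses the live robots.txt

# "ExampleBot" is a placeholder user agent for illustration.
allowed = rp.can_fetch(
    "ExampleBot",
    "https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator",
)
print("ROBOTS ALLOWED" if allowed else "ROBOTS DISALLOWED")
```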

4. Spam/Ban Check

Query:
Response:

5. Seen Status Check

ā„¹ļø Skipped - page is already crawled

šŸ“„ INDEXABLE
āœ… CRAWLED (1 month ago)
šŸ¤– ROBOTS ALLOWED

Page Info Filters

| Filter | Status | Condition | Details |
|---|---|---|---|
| HTTP status | PASS | `download_http_code = 200` | HTTP 200 |
| Age cutoff | PASS | `download_stamp > now() - 6 MONTH` | 1 month ago (distributed domain, exempt) |
| History drop | PASS | `isNull(history_drop_reason)` | No drop reason |
| Spam/ban | PASS | `fh_dont_index != 1 AND ml_spam_score = 0` | `ml_spam_score = 0` |
| Canonical | PASS | `meta_canonical IS NULL OR = '' OR = src_unparsed` | Not set |
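Taken together, the filters are a conjunction over the page record's columns. A hedged Python sketch of the same logic follows; the field names come from the table above, while the `row` dict, the `age_months` value, and the `distributed_domain` flag are our stand-ins for the timestamp comparison and the exemption noted in the table.

```python
def is_indexable(row: dict) -> bool:
    """Re-evaluate the page-info filters from the table above (illustrative)."""
    checks = [
        row["download_http_code"] == 200,                          # HTTP status
        row["age_months"] <= 6 or row["distributed_domain"],       # age cutoff, with exemption
        row["history_drop_reason"] is None,                        # history drop
        row["fh_dont_index"] != 1 and row["ml_spam_score"] == 0,   # spam/ban
        row["meta_canonical"] in (None, "", row["src_unparsed"]),  # canonical
    ]
    return all(checks)
```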

Page Details

| Property | Value |
|---|---|
| URL | https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator |
| Last Crawled | 2026-03-19 01:38:09 (1 month ago) |
| First Indexed | 2015-05-31 01:26:37 (10 years ago) |
| HTTP Status Code | 200 |
| Meta Title | James–Stein estimator - Wikipedia |
| Meta Description | null |
| Meta Canonical | null |
Boilerpipe Text
From Wikipedia, the free encyclopedia

The James–Stein estimator is an estimator of the mean $\boldsymbol{\theta} := (\theta_1, \theta_2, \dots, \theta_m)$ for a multivariate random variable $\mathbf{Y} := (Y_1, Y_2, \dots, Y_m)$. It arose sequentially in two main published papers. The earlier version of the estimator was developed in 1956,[1] when Charles Stein reached the relatively shocking conclusion that while the then-usual estimate of the mean, the sample mean, is admissible when $m \le 2$, it is inadmissible when $m \ge 3$. Stein proposed a possible improvement to the estimator that shrinks the sample mean $\boldsymbol{\theta}$ towards a more central mean vector $\boldsymbol{\nu}$ (which can be chosen a priori or, commonly, as the "average of averages" of the sample means, given that all samples share the same size). This observation is commonly referred to as Stein's example or paradox. In 1961, Willard James and Charles Stein simplified the original process.[2]

It can be shown that the James–Stein estimator dominates the "ordinary" least squares approach in the sense that the James–Stein estimator has a lower mean squared error than the "ordinary" least squares estimator for all $\boldsymbol{\theta}$. This is possible because the James–Stein estimator is biased, so that the Gauss–Markov theorem does not apply. Similar to Hodges' estimator, the James–Stein estimator is superefficient and non-regular at $\boldsymbol{\theta} = \mathbf{0}$.[3]

Setting

Let $\mathbf{Y} \sim N_m(\boldsymbol{\theta}, \sigma^2 I)$, where the vector $\boldsymbol{\theta}$ is the unknown mean of $\mathbf{Y}$, which is $m$-variate normally distributed with known covariance matrix $\sigma^2 I$. We are interested in obtaining an estimate, $\widehat{\boldsymbol{\theta}}$, of $\boldsymbol{\theta}$, based on a single observation, $\mathbf{y}$, of $\mathbf{Y}$. In real-world application, this is a common situation in which a set of parameters is sampled, and the samples are corrupted by independent Gaussian noise. Since this noise has mean zero, it may be reasonable to use the samples themselves as an estimate of the parameters. This approach is the least squares estimator, $\widehat{\boldsymbol{\theta}}_{LS} = \mathbf{Y}$.

Stein demonstrated that in terms of mean squared error $\operatorname{E}\left[\|\boldsymbol{\theta} - \widehat{\boldsymbol{\theta}}\|^2\right]$, the least squares estimator, $\widehat{\boldsymbol{\theta}}_{LS}$, is sub-optimal to shrinkage-based estimators, such as the James–Stein estimator, $\widehat{\boldsymbol{\theta}}_{JS}$.[1] The paradoxical result, that there is a (possibly) better and never any worse estimate of $\boldsymbol{\theta}$ in mean squared error as compared to the sample mean, became known as Stein's example.

Formulation

[Figure: MSE (R) of the least squares estimator (ML) vs. the James–Stein estimator (JS). The James–Stein estimator gives its best estimate when the norm of the actual parameter vector $\boldsymbol{\theta}$ is near zero.]

If $\sigma^2$ is known, the James–Stein estimator is given by

$$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y}\|^2}\right)\mathbf{Y}.$$

James and Stein showed that the above estimator dominates $\widehat{\boldsymbol{\theta}}_{LS}$ for any $m \ge 3$, meaning that the James–Stein estimator has a lower mean squared error (MSE) than the maximum likelihood estimator.[2][4] By definition, this makes the least squares estimator inadmissible when $m \ge 3$. Notice that if $(m-2)\sigma^2 < \|\mathbf{Y}\|^2$ then this estimator simply takes the natural estimator $\mathbf{Y}$ and shrinks it towards the origin $\mathbf{0}$.
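As a quick numerical check of this dominance claim, here is a Monte Carlo sketch in Python/NumPy (our own illustration, not part of the article): it compares the total MSE of $\widehat{\boldsymbol{\theta}}_{LS}$ and of the basic James–Stein estimator above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma2, trials = 10, 1.0, 100_000
theta = rng.normal(size=m)  # arbitrary true mean vector

# One m-variate observation per trial: Y ~ N_m(theta, sigma2 * I).
Y = theta + rng.normal(scale=np.sqrt(sigma2), size=(trials, m))

# Least squares: the observation itself.
mse_ls = np.mean(np.sum((Y - theta) ** 2, axis=1))

# Basic James-Stein: shrink each observation toward the origin.
mult = 1.0 - (m - 2) * sigma2 / np.sum(Y ** 2, axis=1)
js = mult[:, None] * Y
mse_js = np.mean(np.sum((js - theta) ** 2, axis=1))

print(f"total MSE  LS: {mse_ls:.3f}  JS: {mse_js:.3f}")  # JS comes out smaller
```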
In fact this is not the only direction of shrinkage that works. Let $\boldsymbol{\nu}$ be an arbitrary fixed vector of dimension $m$. Then there exists an estimator of the James–Stein type that shrinks toward $\boldsymbol{\nu}$, namely

$$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y} - \boldsymbol{\nu}\|^2}\right)(\mathbf{Y} - \boldsymbol{\nu}) + \boldsymbol{\nu}, \qquad m \ge 3.$$

The James–Stein estimator dominates the usual estimator for any $\boldsymbol{\nu}$. A natural question to ask is whether the improvement over the usual estimator is independent of the choice of $\boldsymbol{\nu}$. The answer is no. The improvement is small if $\|\boldsymbol{\theta} - \boldsymbol{\nu}\|$ is large. Thus to get a very great improvement, some knowledge of the location of $\boldsymbol{\theta}$ is necessary. Of course this is the quantity we are trying to estimate, so we do not have this knowledge a priori. But we may have some guess as to what the mean vector is. This can be considered a disadvantage of the estimator: the choice is not objective, as it may depend on the beliefs of the researcher. Nonetheless, James and Stein's result is that any finite guess $\boldsymbol{\nu}$ improves the expected MSE over the maximum-likelihood estimator, which is tantamount to using an infinite $\boldsymbol{\nu}$, surely a poor guess.

Interpretation

Seeing the James–Stein estimator as an empirical Bayes method gives some intuition to this result: one assumes that $\boldsymbol{\theta}$ itself is a random variable with prior distribution $\sim N(0, A)$, where $A$ is estimated from the data itself. Estimating $A$ only gives an advantage compared to the maximum-likelihood estimator when the dimension $m$ is large enough; hence it does not work for $m \le 2$. The James–Stein estimator is a member of a class of Bayesian estimators that dominate the maximum-likelihood estimator.[5]

A consequence of the above discussion is the following counterintuitive result: when three or more unrelated parameters are measured, their total MSE can be reduced by using a combined estimator such as the James–Stein estimator; whereas when each parameter is estimated separately, the least squares (LS) estimator is admissible. A quirky example would be estimating the speed of light, tea consumption in Taiwan, and hog weight in Montana, all together. The James–Stein estimator always improves upon the total MSE, i.e., the sum of the expected squared errors of each component. Therefore, the total MSE in measuring light speed, tea consumption, and hog weight would improve by using the James–Stein estimator. However, any particular component (such as the speed of light) would improve for some parameter values and deteriorate for others. Thus, although the James–Stein estimator dominates the LS estimator when three or more parameters are estimated, any single component does not dominate the respective component of the LS estimator. The conclusion from this hypothetical example is that measurements should be combined if one is interested in minimizing their total MSE. For example, in a telecommunication setting, it is reasonable to combine channel tap measurements in a channel estimation scenario, as the goal is to minimize the total channel estimation error. The James–Stein estimator has also found use in fundamental quantum theory, where it has been used to improve the theoretical bounds of the entropic uncertainty principle for more than three measurements.[6]

An intuitive derivation and interpretation is given by the Galtonian perspective.[7] Under this interpretation, we aim to predict the population means using the imperfectly measured sample means. The equation of the OLS estimator in a hypothetical regression of the population means on the sample means gives an estimator of the form of either the James–Stein estimator (when we force the OLS intercept to equal 0) or of the Efron–Morris estimator (when we allow the intercept to vary).

Positive-part James–Stein shrinkage operator

Despite the intuition that the James–Stein estimator shrinks the unbiased least-squares estimator $\mathbf{Y}$ toward $\boldsymbol{\nu}$, the estimator actually moves away from $\boldsymbol{\nu}$ for small values of $\|\mathbf{Y} - \boldsymbol{\nu}\|$, as the multiplier on $\mathbf{Y} - \boldsymbol{\nu}$ is then negative. This can be remedied by replacing this multiplier by zero when it is negative.
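A minimal NumPy sketch of this clamped-multiplier fix (ours, not from the article; the formal operator is defined next):

```python
import numpy as np

def james_stein(y, sigma2, nu=None, positive_part=True):
    """James-Stein shrinkage of a single observation y toward nu.

    y: observation of an m-variate normal with mean theta, covariance sigma2*I.
    The dominance result quoted in the text requires m >= 3.
    """
    y = np.asarray(y, dtype=float)
    m = y.size
    nu = np.zeros(m) if nu is None else np.asarray(nu, dtype=float)
    # Shrinkage multiplier 1 - (m-2)*sigma2 / ||y - nu||^2 ...
    mult = 1.0 - (m - 2) * sigma2 / np.sum((y - nu) ** 2)
    if positive_part:
        mult = max(mult, 0.0)  # ... clamped at zero in the positive-part variant
    return nu + mult * (y - nu)
```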
To this end, define the positive-part James–Stein shrinkage operator

$$S_\lambda(x) = x\left[1 - \left(\lambda/x\right)^2\right]_+,$$

where $x_+ = \max\{0, x\}$, and apply this operator component-wise to the (unbiased) least-squares estimator of $\boldsymbol{\theta} - \boldsymbol{\nu}$ (with known $\boldsymbol{\nu}$) for each $i = 1, \ldots, m$:

$$\widehat{\theta}_i^+ - \nu_i = S_{\lambda_i}(Y_i - \nu_i), \qquad \lambda_i := \sigma\sqrt{m-2}\,\frac{|Y_i - \nu_i|}{\|\mathbf{Y} - \boldsymbol{\nu}\|}.$$

The resulting estimator $\widehat{\boldsymbol{\theta}}^+$ of $\boldsymbol{\theta}$ is called the positive-part James–Stein estimator and can be written in vector notation as

$$\widehat{\boldsymbol{\theta}}^+ - \boldsymbol{\nu} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y} - \boldsymbol{\nu}\|^2}\right)_+ (\mathbf{Y} - \boldsymbol{\nu}).$$

This estimator has a smaller risk than the basic James–Stein estimator for $m \ge 4$. It follows that the basic James–Stein estimator is itself inadmissible.[8] It turns out, however, that the positive-part estimator is also inadmissible.[4] This follows from a more general result which requires admissible estimators to be smooth.

Positive-part James–Stein shrinkage and model selection

Recall the initial setup: $\mathbf{Y} \sim N(\boldsymbol{\theta}, \sigma^2 I)$, where the variance coefficient $\sigma^2$ is known and we wish to estimate the unknown (mean response) coefficient $\boldsymbol{\theta} = \mathbb{E}\,\mathbf{Y}$. In the more general setting of linear regression, the mean response is instead given by $\mathbb{E}\,\mathbf{Y} = \mathbf{X}\boldsymbol{\theta}$, where $\mathbf{X} = [\mathbf{v}_1, \ldots, \mathbf{v}_m]$ is a matrix with $m$ columns. As in the previous section, we can use the positive-part James–Stein shrinkage operator to obtain a shrinkage estimator of $\boldsymbol{\theta}$. In particular, any $\widehat{\boldsymbol{\theta}}$ that satisfies the James–Stein KKT conditions[9]

$$\widehat{\theta}_i = S_{\sigma/\|\mathbf{v}_i\|}\left(\widehat{\theta}_i + \frac{\mathbf{v}_i^\top(\mathbf{Y} - \mathbf{X}\widehat{\boldsymbol{\theta}})}{\|\mathbf{v}_i\|^2}\right), \qquad i = 1, \ldots, m,$$

is a (positive-part) James–Stein estimator of $\boldsymbol{\theta}$ with the useful property that it performs both shrinkage and model selection simultaneously. This is because, depending on the value of the known $\sigma^2$, there is a (possibly empty) set $\mathcal{S} \subseteq \{1, \ldots, m\}$ such that $\widehat{\theta}_i = 0$ for $i \in \mathcal{S}$. In other words, some (or all) of the $\theta_i$ could be estimated as exactly zero, which is equivalent to the selection of a suitable linear regression model.

Further extensions

The James–Stein estimator may seem at first sight to be a result of some peculiarity of the problem setting. In fact, the estimator exemplifies a very wide-ranging effect, namely that the "ordinary" or least squares estimator is often inadmissible for simultaneous estimation of several parameters. This effect has been called Stein's phenomenon and has been demonstrated for several different problem settings, some of which are briefly outlined below.

- James and Stein demonstrated that the estimator presented above can still be used when the variance $\sigma^2$ is unknown, by replacing it with the standard estimator of the variance, $\widehat{\sigma}^2 = \frac{1}{m}\sum(Y_i - \overline{Y})^2$. The dominance result still holds under the same condition, namely $m > 2$.[2]
- All the results above are for the case when only a single observation vector $\mathbf{y}$ is available. For the more general case when $n$ vectors are available, we consider the estimator $$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\frac{\sigma^2}{n}}{\|\overline{\mathbf{Y}}\|^2}\right)\overline{\mathbf{Y}},$$ where $\overline{\mathbf{Y}}$ is the $m$-length average of the $n$ observations, so that $\overline{\mathbf{Y}} \sim N_m\left(\boldsymbol{\theta}, \frac{\sigma^2}{n}I\right)$.
- The work of James and Stein has been extended to the case of a general measurement covariance matrix, i.e., where measurements may be statistically dependent and may have differing variances.[10] A similar dominating estimator can be constructed, with a suitably generalized dominance condition. This can be used to construct a linear regression technique which outperforms the standard application of the LS estimator.[10]
- Stein's result has been extended to a wide class of distributions and loss functions. However, this theory provides only an existence result, in that explicit dominating estimators were not actually exhibited.[11] It is quite difficult to obtain explicit estimators improving upon the usual estimator without specific restrictions on the underlying distributions.[4]

See also

- Admissible decision rule
- Hodges' estimator
- Shrinkage estimator
- Regular estimator
- KL divergence

References

1. Stein, C. (1956), "Inadmissibility of the usual estimator for the mean of a multivariate distribution", Proc. Third Berkeley Symp. Math. Statist. Prob., vol. 1, pp. 197–206, MR 0084922, Zbl 0073.35602.
2. James, W.; Stein, C. (1961), "Estimation with quadratic loss", Proc. Fourth Berkeley Symp. Math. Statist. Prob., vol. 1, pp. 361–379, MR 0133191.
3. Beran, R. (1995). "The Role of Hajek's Convolution Theorem in Statistical Theory".
4. Lehmann, E. L.; Casella, G. (1998), Theory of Point Estimation (2nd ed.), New York: Springer.
5. Efron, B.; Morris, C. (1973). "Stein's Estimation Rule and Its Competitors—An Empirical Bayes Approach". Journal of the American Statistical Association. 68 (341): 117–130. doi:10.2307/2284155. JSTOR 2284155.
6. Stander, M. (2017), Using Stein's estimator to correct the bound on the entropic uncertainty principle for more than two measurements, arXiv:1702.02440, Bibcode:2017arXiv170202440S.
7. Stigler, Stephen M. (1990). "The 1988 Neyman Memorial Lecture: A Galtonian Perspective on Shrinkage Estimators". Statistical Science. 5 (1). doi:10.1214/ss/1177012274. ISSN 0883-4237.
8. Anderson, T. W. (1984), An Introduction to Multivariate Statistical Analysis (2nd ed.), New York: John Wiley & Sons.
9. Botev, Zdravko I.; Kroese, Dirk P.; Taimre, Thomas (2025). Data Science and Machine Learning: Mathematical and Statistical Methods (2nd ed.). Boca Raton; London: CRC Press. pp. 277–279. ISBN 978-1-032-48868-4.
10. Bock, M. E. (1975), "Minimax estimators of the mean of a multivariate normal distribution", Annals of Statistics, 3 (1): 209–218, doi:10.1214/aos/1176343009, MR 0381064, Zbl 0314.62005.
11. Brown, L. D. (1966), "On the admissibility of invariant estimators of one or more location parameters", Annals of Mathematical Statistics, 37 (5): 1087–1136, doi:10.1214/aoms/1177699259, MR 0216647, Zbl 0156.39401.

Further reading

- Judge, George G.; Bock, M. E. (1978). The Statistical Implications of Pre-Test and Stein-Rule Estimators in Econometrics. New York: North Holland. pp. 229–257. ISBN 0-7204-0729-X.
Markdown
# James–Stein estimator
From Wikipedia, the free encyclopedia

*Rule for estimating the mean of a dataset*

The **James–Stein estimator** is an [estimator](https://en.wikipedia.org/wiki/Estimator) of the [mean](https://en.wikipedia.org/wiki/Mean) $\boldsymbol{\theta} := (\theta_1, \theta_2, \dots, \theta_m)$ for a multivariate [random variable](https://en.wikipedia.org/wiki/Random_variable) $\mathbf{Y} := (Y_1, Y_2, \dots, Y_m)$. It arose sequentially in two main published papers. The earlier version of the estimator was developed in 1956,[1] when [Charles Stein](https://en.wikipedia.org/wiki/Charles_Stein_\(statistician\)) reached the relatively shocking conclusion that while the then-usual estimate of the mean, the [sample mean](https://en.wikipedia.org/wiki/Sample_mean), is [admissible](https://en.wikipedia.org/wiki/Admissible_decision_rule) when $m \le 2$, it is inadmissible when $m \ge 3$.
Stein proposed a possible improvement to the estimator that [shrinks](https://en.wikipedia.org/wiki/Shrinkage_\(statistics\)) the sample mean $\boldsymbol{\theta}$ towards a more central mean vector $\boldsymbol{\nu}$ (which can be chosen [a priori](https://en.wikipedia.org/wiki/A_priori_and_a_posteriori) or, commonly, as the "average of averages" of the sample means, given that all samples share the same size). This observation is commonly referred to as [Stein's example or paradox](https://en.wikipedia.org/wiki/Stein%27s_example). In 1961, [Willard James](https://en.wikipedia.org/wiki/Willard_D._James) and Charles Stein simplified the original process.[2] It can be shown that the James–Stein estimator [dominates](https://en.wikipedia.org/wiki/Dominating_decision_rule) the "ordinary" [least squares](https://en.wikipedia.org/wiki/Least_squares) approach in the sense that the James–Stein estimator has a lower [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) than the "ordinary" least squares estimator for all $\boldsymbol{\theta}$. This is possible because the James–Stein estimator is [biased](https://en.wikipedia.org/wiki/Bias_of_an_estimator), so that the [Gauss–Markov theorem](https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem) does not apply.
Similar to [Hodges' estimator](https://en.wikipedia.org/wiki/Hodges%27_estimator), the James–Stein estimator is superefficient and [non-regular](https://en.wikipedia.org/wiki/Regular_estimator) at $\boldsymbol{\theta} = \mathbf{0}$.[3]

## Setting

Let $\mathbf{Y} \sim N_m(\boldsymbol{\theta}, \sigma^2 I)$, where the vector $\boldsymbol{\theta}$ is the unknown [mean](https://en.wikipedia.org/wiki/Expected_value) of $\mathbf{Y}$, which is [$m$-variate normally distributed](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) with known [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix) $\sigma^2 I$. We are interested in obtaining an estimate, $\widehat{\boldsymbol{\theta}}$, of $\boldsymbol{\theta}$, based on a single observation, $\mathbf{y}$, of $\mathbf{Y}$. In real-world application, this is a common situation in which a set of parameters is sampled, and the samples are corrupted by independent [Gaussian noise](https://en.wikipedia.org/wiki/Gaussian_noise). Since this noise has mean zero, it may be reasonable to use the samples themselves as an estimate of the parameters.
This approach is the [least squares](https://en.wikipedia.org/wiki/Least_squares) estimator, which is $\widehat{\boldsymbol{\theta}}_{LS} = \mathbf{Y}$. Stein demonstrated that in terms of [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) $\operatorname{E}\left[\|\boldsymbol{\theta} - \widehat{\boldsymbol{\theta}}\|^2\right]$, the least squares estimator, $\widehat{\boldsymbol{\theta}}_{LS}$, is sub-optimal to shrinkage-based estimators, such as the **James–Stein estimator**, $\widehat{\boldsymbol{\theta}}_{JS}$.[1] The paradoxical result, that there is a (possibly) better and never any worse estimate of $\boldsymbol{\theta}$ in mean squared error as compared to the sample mean, became known as [Stein's example](https://en.wikipedia.org/wiki/Stein%27s_example).

## Formulation

[![MSE of ML vs JS](https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/MSE_of_ML_vs_JS.png/500px-MSE_of_ML_vs_JS.png)](https://en.wikipedia.org/wiki/File:MSE_of_ML_vs_JS.png)

*MSE (R) of the least squares estimator (ML) vs. the James–Stein estimator (JS). The James–Stein estimator gives its best estimate when the norm of the actual parameter vector $\boldsymbol{\theta}$ is near zero.*

If $\sigma^2$ is known, the James–Stein estimator is given by

$$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y}\|^2}\right)\mathbf{Y}.$$

James and Stein showed that the above estimator [dominates](https://en.wikipedia.org/wiki/Dominating_decision_rule) $\widehat{\boldsymbol{\theta}}_{LS}$ for any $m \ge 3$, meaning that the James–Stein estimator has a lower [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) (MSE) than the [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood) estimator.[2][4] By definition, this makes the least squares estimator [inadmissible](https://en.wikipedia.org/wiki/Admissible_decision_rule) when $m \ge 3$. Notice that if $(m-2)\sigma^2 < \|\mathbf{Y}\|^2$ then this estimator simply takes the natural estimator $\mathbf{Y}$ and shrinks it towards the origin $\mathbf{0}$.

In fact this is not the only direction of [shrinkage](https://en.wikipedia.org/wiki/Shrinkage_\(statistics\)) that works. Let $\boldsymbol{\nu}$ be an arbitrary fixed vector of dimension $m$. Then there exists an estimator of the James–Stein type that shrinks toward $\boldsymbol{\nu}$, namely

$$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y} - \boldsymbol{\nu}\|^2}\right)(\mathbf{Y} - \boldsymbol{\nu}) + \boldsymbol{\nu}, \qquad m \ge 3.$$

The James–Stein estimator dominates the usual estimator for any $\boldsymbol{\nu}$. A natural question to ask is whether the improvement over the usual estimator is independent of the choice of $\boldsymbol{\nu}$. The answer is no. The improvement is small if $\|\boldsymbol{\theta} - \boldsymbol{\nu}\|$ is large. Thus to get a very great improvement, some knowledge of the location of $\boldsymbol{\theta}$ is necessary. Of course this is the quantity we are trying to estimate, so we don't have this knowledge [a priori](https://en.wikipedia.org/wiki/A_priori_and_a_posteriori). But we may have some guess as to what the mean vector is. This can be considered a disadvantage of the estimator: the choice is not objective, as it may depend on the beliefs of the researcher. Nonetheless, James and Stein's result is that *any* finite guess $\boldsymbol{\nu}$ improves the expected MSE over the maximum-likelihood estimator, which is tantamount to using an infinite $\boldsymbol{\nu}$, surely a poor guess.
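To make the dependence on $\boldsymbol{\nu}$ concrete, here is a small NumPy sketch (ours, not from the article): the true means sit near 5, so shrinking toward the fixed a-priori guess $\boldsymbol{\nu} = 5\cdot\mathbf{1}$ gains far more than shrinking toward the origin.

```python
import numpy as np

rng = np.random.default_rng(1)
m, sigma2, trials = 10, 1.0, 100_000
theta = 5.0 + 0.5 * rng.normal(size=m)  # true means clustered near 5

Y = theta + rng.normal(scale=np.sqrt(sigma2), size=(trials, m))

def js_toward(Y, sigma2, nu):
    """James-Stein estimate shrinking each row of Y toward the fixed vector nu."""
    diff = Y - nu
    mult = 1.0 - (m - 2) * sigma2 / np.sum(diff ** 2, axis=1, keepdims=True)
    return nu + mult * diff

for name, nu in [("nu = 0", np.zeros(m)), ("nu = 5*ones", 5.0 * np.ones(m))]:
    est = js_toward(Y, sigma2, nu)
    mse = np.mean(np.sum((est - theta) ** 2, axis=1))
    print(f"JS, {name:12s} total MSE: {mse:.3f}")  # LS baseline is m = 10.0
```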
## Interpretation

Seeing the James–Stein estimator as an [empirical Bayes method](https://en.wikipedia.org/wiki/Empirical_Bayes_method) gives some intuition to this result: one assumes that $\boldsymbol{\theta}$ itself is a random variable with [prior distribution](https://en.wikipedia.org/wiki/Prior_probability) $\sim N(0, A)$, where $A$ is estimated from the data itself. Estimating $A$ only gives an advantage compared to the [maximum-likelihood estimator](https://en.wikipedia.org/wiki/Maximum_likelihood) when the dimension $m$ is large enough; hence it does not work for $m \le 2$. The James–Stein estimator is a member of a class of Bayesian estimators that dominate the maximum-likelihood estimator.[5]
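A short sketch of this standard empirical-Bayes argument, in the article's notation (our own summary, not part of the article):

```latex
\begin{align*}
&\boldsymbol{\theta} \sim N(0, A I), \qquad
 \mathbf{Y} \mid \boldsymbol{\theta} \sim N(\boldsymbol{\theta}, \sigma^2 I)
 \;\Rightarrow\;
 \operatorname{E}[\boldsymbol{\theta} \mid \mathbf{Y}]
   = \Bigl(1 - \tfrac{\sigma^2}{A + \sigma^2}\Bigr)\mathbf{Y}. \\
&\text{Marginally } \mathbf{Y} \sim N\bigl(0, (A + \sigma^2) I\bigr),
 \text{ so } \|\mathbf{Y}\|^2/(A+\sigma^2) \sim \chi^2_m
 \text{ and, since } \operatorname{E}[1/\chi^2_m] = \tfrac{1}{m-2}, \\
&\qquad \operatorname{E}\!\left[\frac{(m-2)\,\sigma^2}{\|\mathbf{Y}\|^2}\right]
   = \frac{\sigma^2}{A+\sigma^2}.
\end{align*}
```

Replacing the unknown shrinkage factor $\sigma^2/(A+\sigma^2)$ by this unbiased estimate recovers exactly the James–Stein formula $\bigl(1 - (m-2)\sigma^2/\|\mathbf{Y}\|^2\bigr)\mathbf{Y}$.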
A consequence of the above discussion is the following counterintuitive result: when three or more unrelated parameters are measured, their total MSE can be reduced by using a combined estimator such as the James–Stein estimator; whereas when each parameter is estimated separately, the least squares (LS) estimator is [admissible](https://en.wikipedia.org/wiki/Admissible_decision_rule). A quirky example would be estimating the speed of light, tea consumption in Taiwan, and hog weight in Montana, all together. The James–Stein estimator always improves upon the *total* MSE, i.e., the sum of the expected squared errors of each component. Therefore, the total MSE in measuring light speed, tea consumption, and hog weight would improve by using the James–Stein estimator. However, any particular component (such as the speed of light) would improve for some parameter values and deteriorate for others. Thus, although the James–Stein estimator dominates the LS estimator when three or more parameters are estimated, any single component does not dominate the respective component of the LS estimator. The conclusion from this hypothetical example is that measurements should be combined if one is interested in minimizing their total MSE.

For example, in a [telecommunication](https://en.wikipedia.org/wiki/Telecommunication) setting, it is reasonable to combine [channel](https://en.wikipedia.org/wiki/Communication_channel) tap measurements in a [channel estimation](https://en.wikipedia.org/wiki/Channel_estimation) scenario, as the goal is to minimize the total channel estimation error. The James–Stein estimator has also found use in fundamental quantum theory, where it has been used to improve the theoretical bounds of the [entropic uncertainty principle](https://en.wikipedia.org/wiki/Entropic_uncertainty_principle) for more than three measurements.[6]

An intuitive derivation and interpretation is given by the [Galtonian](https://en.wikipedia.org/wiki/Francis_Galton) perspective.[7] Under this interpretation, we aim to predict the population means using the [imperfectly measured sample means](https://en.wikipedia.org/wiki/Measurement_error_model). The equation of the [OLS](https://en.wikipedia.org/wiki/Ordinary_least_squares) estimator in a hypothetical regression of the population means on the sample means gives an estimator of the form of either the James–Stein estimator (when we force the OLS intercept to equal 0) or of the Efron–Morris estimator (when we allow the intercept to vary).

## Positive-part James–Stein shrinkage operator

Despite the intuition that the James–Stein estimator shrinks the unbiased least-squares estimator $\mathbf{Y}$ *toward* $\boldsymbol{\nu}$, the estimator actually moves *away* from $\boldsymbol{\nu}$ for small values of $\|\mathbf{Y} - \boldsymbol{\nu}\|$, as the multiplier on $\mathbf{Y} - \boldsymbol{\nu}$ is then negative. This can be remedied by replacing this multiplier by zero when it is negative.
To this end, define the *positive-part James–Stein shrinkage operator*

$$S_\lambda(x) = x\left[1 - \left(\lambda/x\right)^2\right]_+,$$

where $x_+ = \max\{0, x\}$, and apply this operator component-wise to the (unbiased) least-squares estimator of $\boldsymbol{\theta} - \boldsymbol{\nu}$ (with known $\boldsymbol{\nu}$) for each $i = 1, \ldots, m$:

$$\widehat{\theta}_i^+ - \nu_i = S_{\lambda_i}(Y_i - \nu_i), \qquad \lambda_i := \sigma\sqrt{m-2}\,\frac{|Y_i - \nu_i|}{\|\mathbf{Y} - \boldsymbol{\nu}\|}.$$

The resulting estimator $\widehat{\boldsymbol{\theta}}^+$ of $\boldsymbol{\theta}$ is called the *positive-part James–Stein estimator* and can be written in vector notation as

$$\widehat{\boldsymbol{\theta}}^+ - \boldsymbol{\nu} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y} - \boldsymbol{\nu}\|^2}\right)_+ (\mathbf{Y} - \boldsymbol{\nu}).$$

This estimator has a smaller risk than the basic James–Stein estimator for $m \ge 4$.
It follows that the basic James–Stein estimator is itself [inadmissible](https://en.wikipedia.org/wiki/Admissible_decision_rule).[8] It turns out, however, that the positive-part estimator is also inadmissible.[4] This follows from a more general result which requires admissible estimators to be smooth.

## Positive-part James–Stein shrinkage and model selection

Recall the initial setup: $\mathbf{Y} \sim N(\boldsymbol{\theta}, \sigma^2 I)$, where the variance coefficient $\sigma^2$ is known and we wish to estimate the unknown (mean response) coefficient $\boldsymbol{\theta} = \mathbb{E}\,\mathbf{Y}$. In the more general setting of [linear regression](https://en.wikipedia.org/wiki/Linear_regression), the mean response is instead given by $\mathbb{E}\,\mathbf{Y} = \mathbf{X}\boldsymbol{\theta}$, where $\mathbf{X} = [\mathbf{v}_1, \ldots, \mathbf{v}_m]$ is a matrix with $m$ columns. As in the previous section, we can use the *positive-part James–Stein shrinkage operator* to obtain a [shrinkage estimator](https://en.wikipedia.org/wiki/Shrinkage_estimator) of $\boldsymbol{\theta}$.
In particular, any $\widehat{\boldsymbol{\theta}}$ that satisfies the *James–Stein [KKT conditions](https://en.wikipedia.org/wiki/KKT_conditions)*[9]

$$\widehat{\theta}_i = S_{\sigma/\|\mathbf{v}_i\|}\left(\widehat{\theta}_i + \frac{\mathbf{v}_i^\top(\mathbf{Y} - \mathbf{X}\widehat{\boldsymbol{\theta}})}{\|\mathbf{v}_i\|^2}\right), \qquad i = 1, \ldots, m,$$

is a (positive-part) James–Stein estimator of $\boldsymbol{\theta}$ with the useful property that it performs both shrinkage and [model selection](https://en.wikipedia.org/wiki/Model_selection) simultaneously. This is because, depending on the value of the known $\sigma^2$, there is a (possibly empty) set $\mathcal{S} \subseteq \{1, \ldots, m\}$ such that $\widehat{\theta}_i = 0$ for $i \in \mathcal{S}$. In other words, some (or all) of the $\theta_i$ could be estimated as exactly zero, which is equivalent to the selection of a suitable linear regression model.
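As an illustration (our own sketch, not from the cited text): when the columns of $\mathbf{X}$ are orthogonal, $\mathbf{v}_i^\top(\mathbf{Y} - \mathbf{X}\widehat{\boldsymbol{\theta}}) = \mathbf{v}_i^\top\mathbf{Y} - \|\mathbf{v}_i\|^2\widehat{\theta}_i$, so the fixed-point condition decouples into $\widehat{\theta}_i = S_{\sigma/\|\mathbf{v}_i\|}(\widehat{\theta}_i^{\,OLS})$, i.e. component-wise shrinkage of the OLS coefficients that zeroes the small ones.

```python
import numpy as np

def s_lambda(x, lam):
    """Positive-part shrinkage operator S_lambda(x) = x * [1 - (lam/x)^2]_+."""
    with np.errstate(divide="ignore", invalid="ignore"):
        mult = np.where(x != 0.0, np.maximum(0.0, 1.0 - (lam / x) ** 2), 0.0)
    return x * mult

def js_select_orthogonal(X, Y, sigma):
    """Solve the James-Stein KKT conditions assuming X has orthogonal columns.

    Under orthogonality the conditions decouple to
    theta_i = S_{sigma/||v_i||}(theta_i_OLS); coefficients whose OLS value
    satisfies |theta_i_OLS| <= sigma/||v_i|| come out exactly zero.
    """
    norms2 = np.sum(X ** 2, axis=0)   # ||v_i||^2 per column
    theta_ols = (X.T @ Y) / norms2    # per-column OLS coefficients
    return s_lambda(theta_ols, sigma / np.sqrt(norms2))

# Demo on an orthonormal design: half the true coefficients are zero.
rng = np.random.default_rng(3)
X = np.linalg.qr(rng.normal(size=(50, 8)))[0]
theta = np.concatenate([np.zeros(4), 3.0 * rng.normal(size=4)])
Y = X @ theta + rng.normal(size=50)
print(js_select_orthogonal(X, Y, sigma=1.0))
```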
## Further extensions

The James–Stein estimator may seem at first sight to be a result of some peculiarity of the problem setting. In fact, the estimator exemplifies a very wide-ranging effect; namely, the fact that the "ordinary" or least squares estimator is often [inadmissible](https://en.wikipedia.org/wiki/Admissible_decision_rule) for simultaneous estimation of several parameters. This effect has been called [Stein's phenomenon](https://en.wikipedia.org/wiki/Stein%27s_phenomenon), and has been demonstrated for several different problem settings, some of which are briefly outlined below.

- James and Stein demonstrated that the estimator presented above can still be used when the variance $\sigma^2$ is unknown, by replacing it with the standard estimator of the variance, $\widehat{\sigma}^2 = \frac{1}{m}\sum(Y_i - \overline{Y})^2$. The dominance result still holds under the same condition, namely $m > 2$.[2]
- All the results above are for the case when only a single observation vector $\mathbf{y}$ is available. For the more general case when $n$ vectors are available, we consider the estimator $$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\frac{\sigma^2}{n}}{\|\overline{\mathbf{Y}}\|^2}\right)\overline{\mathbf{Y}},$$ where $\overline{\mathbf{Y}}$ is the $m$-length average of the $n$ observations, so that $\overline{\mathbf{Y}} \sim N_m\left(\boldsymbol{\theta}, \frac{\sigma^2}{n}I\right)$ (see the sketch after this list).
- The work of James and Stein has been extended to the case of a general measurement covariance matrix, i.e., where measurements may be statistically dependent and may have differing variances.[10] A similar dominating estimator can be constructed, with a suitably generalized dominance condition. This can be used to construct a [linear regression](https://en.wikipedia.org/wiki/Linear_regression) technique which outperforms the standard application of the LS estimator.[10]
- Stein's result has been extended to a wide class of distributions and loss functions. However, this theory provides only an existence result, in that explicit dominating estimators were not actually exhibited.[11] It is quite difficult to obtain explicit estimators improving upon the usual estimator without specific restrictions on the underlying distributions.[4]
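A minimal sketch of the $n$-observation form above (ours; it simply applies the basic estimator to the averaged observation, whose variance is $\sigma^2/n$):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, sigma2 = 10, 25, 1.0
theta = rng.normal(size=m)

Y = theta + rng.normal(scale=np.sqrt(sigma2), size=(n, m))  # n observation vectors
Y_bar = Y.mean(axis=0)                                      # ~ N_m(theta, (sigma2/n) I)

# James-Stein applied to the averaged observation, with variance sigma2/n.
# (If sigma2 were unknown, the text notes the usual variance estimate can be used.)
mult = 1.0 - (m - 2) * (sigma2 / n) / np.sum(Y_bar ** 2)
theta_js = mult * Y_bar
```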
*Journal of the American Statistical Association*. **68** (341). American Statistical Association: 117–130\. [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.2307/2284155](https://doi.org/10.2307%2F2284155). [JSTOR](https://en.wikipedia.org/wiki/JSTOR_\(identifier\) "JSTOR (identifier)") [2284155](https://www.jstor.org/stable/2284155). 6. **[^](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_ref-stander-17_6-0)** Stander, M. (2017), *Using Stein's estimator to correct the bound on the entropic uncertainty principle for more than two measurements*, [arXiv](https://en.wikipedia.org/wiki/ArXiv_\(identifier\) "ArXiv (identifier)"):[1702\.02440](https://arxiv.org/abs/1702.02440), [Bibcode](https://en.wikipedia.org/wiki/Bibcode_\(identifier\) "Bibcode (identifier)"):[2017arXiv170202440S](https://ui.adsabs.harvard.edu/abs/2017arXiv170202440S) 7. **[^](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_ref-7)** Stigler, Stephen M. (1990-02-01). ["The 1988 Neyman Memorial Lecture: A Galtonian Perspective on Shrinkage Estimators"](https://doi.org/10.1214%2Fss%2F1177012274). *Statistical Science*. **5** (1). [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/ss/1177012274](https://doi.org/10.1214%2Fss%2F1177012274). [ISSN](https://en.wikipedia.org/wiki/ISSN_\(identifier\) "ISSN (identifier)") [0883-4237](https://search.worldcat.org/issn/0883-4237). 8. **[^](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_ref-Anderson-84_8-0)** Anderson, T. W. (1984), *An Introduction to Multivariate Statistical Analysis* (2nd ed.), New York: John Wiley & Sons 9. **[^](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_ref-:0_9-0)** Botev, Zdravko I.; Kroese, Dirk P.; Taimre, Thomas (2025). *Data Science and Machine Learning: Mathematical and Statistical Methods* (2nd ed.). Boca Raton ; London: CRC Press. pp. 277–279\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [978-1-032-48868-4](https://en.wikipedia.org/wiki/Special:BookSources/978-1-032-48868-4 "Special:BookSources/978-1-032-48868-4") . 10. ^ [***a***](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_ref-bock75_10-0) [***b***](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_ref-bock75_10-1) Bock, M. E. (1975), "Minimax estimators of the mean of a multivariate normal distribution", *[Annals of Statistics](https://en.wikipedia.org/wiki/Annals_of_Statistics "Annals of Statistics")*, **3** (1): 209–218, [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/aos/1176343009](https://doi.org/10.1214%2Faos%2F1176343009), [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0381064](https://mathscinet.ams.org/mathscinet-getitem?mr=0381064), [Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [0314\.62005](https://zbmath.org/?format=complete&q=an:0314.62005) 11. **[^](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_ref-brown66_11-0)** [Brown, L. D.](https://en.wikipedia.org/wiki/Lawrence_D._Brown "Lawrence D. 
Brown") (1966), "On the admissibility of invariant estimators of one or more location parameters", *Annals of Mathematical Statistics*, **37** (5): 1087–1136, [doi](https://en.wikipedia.org/wiki/Doi_\(identifier\) "Doi (identifier)"):[10\.1214/aoms/1177699259](https://doi.org/10.1214%2Faoms%2F1177699259), [MR](https://en.wikipedia.org/wiki/MR_\(identifier\) "MR (identifier)") [0216647](https://mathscinet.ams.org/mathscinet-getitem?mr=0216647), [Zbl](https://en.wikipedia.org/wiki/Zbl_\(identifier\) "Zbl (identifier)") [0156\.39401](https://zbmath.org/?format=complete&q=an:0156.39401) ## Further reading \[[edit](https://en.wikipedia.org/w/index.php?title=James%E2%80%93Stein_estimator&action=edit&section=9 "Edit section: Further reading")\] - Judge, George G.; Bock, M. E. (1978). *The Statistical Implications of Pre-Test and Stein-Rule Estimators in Econometrics*. New York: North Holland. pp. 229–257\. [ISBN](https://en.wikipedia.org/wiki/ISBN_\(identifier\) "ISBN (identifier)") [0-7204-0729-X](https://en.wikipedia.org/wiki/Special:BookSources/0-7204-0729-X "Special:BookSources/0-7204-0729-X") . ![](https://en.wikipedia.org/wiki/Special:CentralAutoLogin/start?useformat=desktop&type=1x1&usesul3=1) Retrieved from "<https://en.wikipedia.org/w/index.php?title=James–Stein_estimator&oldid=1342138970>" [Categories](https://en.wikipedia.org/wiki/Help:Category "Help:Category"): - [Estimator](https://en.wikipedia.org/wiki/Category:Estimator "Category:Estimator") - [Normal distribution](https://en.wikipedia.org/wiki/Category:Normal_distribution "Category:Normal distribution") Hidden categories: - [Articles with short description](https://en.wikipedia.org/wiki/Category:Articles_with_short_description "Category:Articles with short description") - [Short description is different from Wikidata](https://en.wikipedia.org/wiki/Category:Short_description_is_different_from_Wikidata "Category:Short description is different from Wikidata") - [Wikipedia articles that are too technical from November 2017](https://en.wikipedia.org/wiki/Category:Wikipedia_articles_that_are_too_technical_from_November_2017 "Category:Wikipedia articles that are too technical from November 2017") - [All articles that are too technical](https://en.wikipedia.org/wiki/Category:All_articles_that_are_too_technical "Category:All articles that are too technical") - This page was last edited on 7 March 2026, at 06:50 (UTC). - Text is available under the [Creative Commons Attribution-ShareAlike 4.0 License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_4.0_International_License "Wikipedia:Text of the Creative Commons Attribution-ShareAlike 4.0 International License"); additional terms may apply. By using this site, you agree to the [Terms of Use](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Terms_of_Use "foundation:Special:MyLanguage/Policy:Terms of Use") and [Privacy Policy](https://foundation.wikimedia.org/wiki/Special:MyLanguage/Policy:Privacy_policy "foundation:Special:MyLanguage/Policy:Privacy policy"). WikipediaĀ® is a registered trademark of the [Wikimedia Foundation, Inc.](https://wikimediafoundation.org/), a non-profit organization. 
Readable Markdown
From Wikipedia, the free encyclopedia

The **James–Stein estimator** is an [estimator](https://en.wikipedia.org/wiki/Estimator) of the [mean](https://en.wikipedia.org/wiki/Mean) $\boldsymbol{\theta} := (\theta_1, \theta_2, \dots, \theta_m)$ for a multivariate [random variable](https://en.wikipedia.org/wiki/Random_variable) $\mathbf{Y} := (Y_1, Y_2, \dots, Y_m)$. It arose in two main published papers. The earlier version of the estimator was developed in 1956,[\[1\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-stein-56-1) when [Charles Stein](https://en.wikipedia.org/wiki/Charles_Stein_\(statistician\)) reached the surprising conclusion that while the then-usual estimate of the mean, the [sample mean](https://en.wikipedia.org/wiki/Sample_mean), is [admissible](https://en.wikipedia.org/wiki/Admissible_decision_rule) when $m \leq 2$, it is [inadmissible](https://en.wikipedia.org/wiki/Admissible_decision_rule) when $m \geq 3$. Stein proposed a possible improvement to the estimator that [shrinks](https://en.wikipedia.org/wiki/Shrinkage_\(statistics\)) the sample mean towards a more central mean vector $\boldsymbol{\nu}$ (which can be chosen [a priori](https://en.wikipedia.org/wiki/A_priori_and_a_posteriori) or, commonly, as the "average of averages" of the sample means, given that all samples share the same size). This observation is commonly referred to as [Stein's example or paradox](https://en.wikipedia.org/wiki/Stein%27s_example). In 1961, [Willard James](https://en.wikipedia.org/wiki/Willard_D._James) and Charles Stein simplified the original process.[\[2\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-james%E2%80%93stein-61-2)

It can be shown that the James–Stein estimator [dominates](https://en.wikipedia.org/wiki/Dominating_decision_rule) the "ordinary" [least squares](https://en.wikipedia.org/wiki/Least_squares) approach, in the sense that the James–Stein estimator has a lower [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) than the "ordinary" least squares estimator for all $\boldsymbol{\theta}$.
This is possible because the James–Stein estimator is [biased](https://en.wikipedia.org/wiki/Bias_of_an_estimator), so that the [Gauss–Markov theorem](https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem) does not apply. Like [Hodges' estimator](https://en.wikipedia.org/wiki/Hodges%27_estimator), the James–Stein estimator is superefficient and [non-regular](https://en.wikipedia.org/wiki/Regular_estimator) at $\boldsymbol{\theta} = \mathbf{0}$.[\[3\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-3)

Let

$$\mathbf{Y} \sim N_m(\boldsymbol{\theta}, \sigma^2 I),$$

where the vector $\boldsymbol{\theta}$ is the unknown [mean](https://en.wikipedia.org/wiki/Expected_value) of $\mathbf{Y}$, which is [$m$-variate normally distributed](https://en.wikipedia.org/wiki/Multivariate_normal_distribution) with known [covariance matrix](https://en.wikipedia.org/wiki/Covariance_matrix) $\sigma^2 I$. We are interested in obtaining an estimate, $\widehat{\boldsymbol{\theta}}$, of $\boldsymbol{\theta}$, based on a single observation, $\mathbf{y}$, of $\mathbf{Y}$. In real-world applications, this is a common situation in which a set of parameters is sampled, and the samples are corrupted by independent [Gaussian noise](https://en.wikipedia.org/wiki/Gaussian_noise). Since this noise has mean zero, it may be reasonable to use the samples themselves as an estimate of the parameters. This approach is the [least squares](https://en.wikipedia.org/wiki/Least_squares) estimator, $\widehat{\boldsymbol{\theta}}_{LS} = \mathbf{Y}$.
Stein demonstrated that in terms of [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) $\operatorname{E}\left[\left\|\boldsymbol{\theta} - \widehat{\boldsymbol{\theta}}\right\|^2\right]$, the least squares estimator, $\widehat{\boldsymbol{\theta}}_{LS}$, is suboptimal compared to shrinkage-based estimators such as the **James–Stein estimator**, $\widehat{\boldsymbol{\theta}}_{JS}$.[\[1\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-stein-56-1) The paradoxical result, that there is a (possibly) better and never any worse estimate of $\boldsymbol{\theta}$ in mean squared error as compared to the sample mean, became known as [Stein's example](https://en.wikipedia.org/wiki/Stein%27s_example).

*Figure ([MSE_of_ML_vs_JS.png](https://en.wikipedia.org/wiki/File:MSE_of_ML_vs_JS.png)): MSE (R) of the least squares estimator (ML) vs. the James–Stein estimator (JS). The James–Stein estimator gives its best estimate when the norm of the actual parameter vector $\theta$ is near zero.*

If $\sigma^2$ is known, the James–Stein estimator is given by

$$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y}\|^2}\right)\mathbf{Y}.$$

James and Stein showed that the above estimator [dominates](https://en.wikipedia.org/wiki/Dominating_decision_rule) $\widehat{\boldsymbol{\theta}}_{LS}$ for any $m \geq 3$, meaning that the James–Stein estimator has a lower [mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error) (MSE) than the [maximum likelihood](https://en.wikipedia.org/wiki/Maximum_likelihood) estimator.[\[2\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-james%E2%80%93stein-61-2)[\[4\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-lehmann-casella-98-4) By definition, this makes the least squares estimator [inadmissible](https://en.wikipedia.org/wiki/Admissible_decision_rule) when $m \geq 3$.
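This dominance is easy to check numerically. Below is a minimal Monte Carlo sketch (assuming NumPy; the helper name `james_stein` is ours, not from the cited papers) that compares the total MSE of the two estimators:

```python
import numpy as np

def james_stein(y, sigma2):
    """Basic James-Stein estimate of theta from one observation y ~ N_m(theta, sigma2*I)."""
    m = y.size
    return (1.0 - (m - 2) * sigma2 / np.sum(y**2)) * y

# Monte Carlo comparison of total MSE for m >= 3.
rng = np.random.default_rng(0)
m, sigma2, trials = 10, 1.0, 50_000
theta = np.ones(m)                                    # arbitrary true mean vector
Y = rng.normal(theta, np.sqrt(sigma2), size=(trials, m))
JS = np.array([james_stein(y, sigma2) for y in Y])
print("total MSE, LS:", np.mean(np.sum((Y - theta) ** 2, axis=1)))   # ~ m * sigma2 = 10
print("total MSE, JS:", np.mean(np.sum((JS - theta) ** 2, axis=1)))  # strictly smaller
```

With $\|\boldsymbol{\theta}\|$ moderate relative to $\sigma$ the gap is substantial; as $\|\boldsymbol{\theta}\|$ grows, the two risks coincide.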
Notice that if $(m-2)\sigma^2 < \|\mathbf{Y}\|^2$, then this estimator simply takes the natural estimator $\mathbf{Y}$ and shrinks it towards the origin $\mathbf{0}$. In fact, this is not the only direction of [shrinkage](https://en.wikipedia.org/wiki/Shrinkage_\(statistics\)) that works. Let $\boldsymbol{\nu}$ be an arbitrary fixed vector of dimension $m$. Then there exists an estimator of the James–Stein type that shrinks toward $\boldsymbol{\nu}$, namely

$$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y} - \boldsymbol{\nu}\|^2}\right)(\mathbf{Y} - \boldsymbol{\nu}) + \boldsymbol{\nu}, \qquad m \geq 3.$$

The James–Stein estimator dominates the usual estimator for any $\boldsymbol{\nu}$. A natural question is whether the improvement over the usual estimator is independent of the choice of $\boldsymbol{\nu}$. The answer is no: the improvement is small if $\|\boldsymbol{\theta} - \boldsymbol{\nu}\|$ is large. Thus, to obtain a very large improvement, some knowledge of the location of $\boldsymbol{\theta}$ is necessary. Of course, this is the quantity we are trying to estimate, so we do not have this knowledge [a priori](https://en.wikipedia.org/wiki/A_priori_and_a_posteriori). But we may have some guess as to what the mean vector is. This can be considered a disadvantage of the estimator: the choice is not objective, as it may depend on the beliefs of the researcher. Nonetheless, James and Stein's result is that *any* finite guess $\boldsymbol{\nu}$ improves the expected MSE over the maximum-likelihood estimator, which is tantamount to using an infinite $\boldsymbol{\nu}$, surely a poor guess.
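The shrink-toward-$\boldsymbol{\nu}$ variant is a one-line change to the sketch above; `nu` here is any fixed guess chosen before seeing the data (again an illustrative helper, not the authors' code):

```python
def james_stein_towards(y, sigma2, nu):
    """James-Stein-type estimate that shrinks y toward a fixed vector nu (m >= 3)."""
    m = y.size
    resid = y - nu
    return nu + (1.0 - (m - 2) * sigma2 / np.sum(resid**2)) * resid
```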
Seeing the James–Stein estimator as an [empirical Bayes method](https://en.wikipedia.org/wiki/Empirical_Bayes_method) gives some intuition for this result: one assumes that $\boldsymbol{\theta}$ itself is a random variable with [prior distribution](https://en.wikipedia.org/wiki/Prior_probability) $\sim N(0, A)$, where $A$ is estimated from the data itself. Estimating $A$ only gives an advantage over the [maximum-likelihood estimator](https://en.wikipedia.org/wiki/Maximum_likelihood) when the dimension $m$ is large enough; hence it does not work for $m \leq 2$. The James–Stein estimator is a member of a class of Bayesian estimators that dominate the maximum-likelihood estimator.[\[5\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-5)

A consequence of the above discussion is the following counterintuitive result: when three or more unrelated parameters are measured, their total MSE can be reduced by using a combined estimator such as the James–Stein estimator, whereas when each parameter is estimated separately, the least squares (LS) estimator is [admissible](https://en.wikipedia.org/wiki/Admissible_decision_rule). A quirky example would be estimating the speed of light, tea consumption in Taiwan, and hog weight in Montana, all together. The James–Stein estimator always improves upon the *total* MSE, i.e., the sum of the expected squared errors of each component. Therefore, the total MSE in measuring light speed, tea consumption, and hog weight would improve by using the James–Stein estimator. However, any particular component (such as the speed of light) would improve for some parameter values and deteriorate for others. Thus, although the James–Stein estimator dominates the LS estimator when three or more parameters are estimated, no single component of it dominates the respective component of the LS estimator.

The conclusion from this hypothetical example is that measurements should be combined if one is interested in minimizing their total MSE. For example, in a [telecommunication](https://en.wikipedia.org/wiki/Telecommunication) setting, it is reasonable to combine [channel](https://en.wikipedia.org/wiki/Communication_channel) tap measurements in a [channel estimation](https://en.wikipedia.org/wiki/Channel_estimation) scenario, as the goal is to minimize the total channel estimation error.
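The total-versus-per-component distinction can also be seen numerically. The self-contained sketch below (our own illustration) places one component far from the shrinkage target: the total MSE still improves, while the MSE of that single component deteriorates:

```python
import numpy as np

rng = np.random.default_rng(1)
m, sigma2, trials = 10, 1.0, 50_000
theta = np.zeros(m)
theta[0] = 5.0                                        # one parameter far from the origin
Y = rng.normal(theta, np.sqrt(sigma2), size=(trials, m))
shrink = 1.0 - (m - 2) * sigma2 / np.sum(Y**2, axis=1, keepdims=True)
JS = shrink * Y                                       # basic James-Stein, vectorized
mse_ls = np.mean((Y - theta) ** 2, axis=0)            # per-component MSE, ~sigma2 each
mse_js = np.mean((JS - theta) ** 2, axis=0)
print("total MSE:   LS", mse_ls.sum(), " JS", mse_js.sum())   # JS total is smaller
print("component 0: LS", mse_ls[0], " JS", mse_js[0])         # component 0 alone is worse
```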
The James–Stein estimator has also found use in fundamental quantum theory, where the estimator has been used to improve the theoretical bounds of the [entropic uncertainty principle](https://en.wikipedia.org/wiki/Entropic_uncertainty_principle) for more than three measurements.[\[6\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-stander-17-6)

An intuitive derivation and interpretation is given by the [Galtonian](https://en.wikipedia.org/wiki/Francis_Galton) perspective.[\[7\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-7) Under this interpretation, we aim to predict the population means using the [imperfectly measured sample means](https://en.wikipedia.org/wiki/Measurement_error_model). The equation of the [OLS](https://en.wikipedia.org/wiki/Ordinary_least_squares) estimator in a hypothetical regression of the population means on the sample means gives an estimator of the form of either the James–Stein estimator (when we force the OLS intercept to equal 0) or of the Efron–Morris estimator (when we allow the intercept to vary).

## Positive-part James–Stein shrinkage operator

Despite the intuition that the James–Stein estimator shrinks the unbiased least-squares estimator $\mathbf{Y}$ *toward* $\boldsymbol{\nu}$, the estimator actually moves *away* from $\boldsymbol{\nu}$ for small values of $\|\mathbf{Y} - \boldsymbol{\nu}\|$, as the multiplier on $\mathbf{Y} - \boldsymbol{\nu}$ is then negative. This can be remedied by replacing this multiplier with zero when it is negative.
To this end, define the *positive-part James–Stein shrinkage operator*

$$S_\lambda(x) = x\left[1 - (\lambda/x)^2\right]_+,$$

where $x_+ = \max\{0, x\}$, and apply this operator component-wise to the (unbiased) least-squares estimator of $\boldsymbol{\theta} - \boldsymbol{\nu}$ (with known $\boldsymbol{\nu}$) for each $i = 1, \ldots, m$:

$$\widehat{\theta}_i^{+} - \nu_i = S_{\lambda_i}(Y_i - \nu_i), \quad \lambda_i := \sigma\sqrt{m-2}\,\frac{|Y_i - \nu_i|}{\|\mathbf{Y} - \boldsymbol{\nu}\|}.$$

The resulting estimator $\widehat{\boldsymbol{\theta}}^{+}$ of $\boldsymbol{\theta}$ is called the *positive-part James–Stein estimator* and can be written in vector notation as

$$\widehat{\boldsymbol{\theta}}^{+} - \boldsymbol{\nu} = \left(1 - \frac{(m-2)\sigma^2}{\|\mathbf{Y} - \boldsymbol{\nu}\|^2}\right)_+ (\mathbf{Y} - \boldsymbol{\nu}).$$

This estimator has a smaller risk than the basic James–Stein estimator for $m \geq 4$. It follows that the basic James–Stein estimator is itself [inadmissible](https://en.wikipedia.org/wiki/Admissible_decision_rule).[\[8\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-Anderson-84-8) It turns out, however, that the positive-part estimator is also inadmissible.[\[4\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-lehmann-casella-98-4) This follows from a more general result which requires admissible estimators to be smooth.
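In code, the positive-part correction is just a clamp on the shrinkage multiplier. A minimal sketch (helper names ours), with the scalar operator $S_\lambda$ included for reference:

```python
import numpy as np

def S(x, lam):
    """Positive-part shrinkage operator S_lambda(x) = x * [1 - (lam/x)^2]_+ ."""
    return 0.0 if x == 0 else x * max(0.0, 1.0 - (lam / x) ** 2)

def james_stein_positive_part(y, sigma2, nu):
    """Positive-part James-Stein estimate: never shrinks past the target nu.
    Equivalent to applying S component-wise with the lambda_i defined above."""
    m = y.size
    resid = y - nu
    mult = max(0.0, 1.0 - (m - 2) * sigma2 / np.sum(resid**2))
    return nu + mult * resid
```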
## Positive-part James–Stein shrinkage and model selection

Recall the initial setup: $\mathbf{Y} \sim N(\boldsymbol{\theta}, \sigma^2 I)$, where the variance coefficient $\sigma^2$ is known and we wish to estimate the unknown (mean response) coefficient $\boldsymbol{\theta} = \mathbb{E}\mathbf{Y}$. In the more general setting of [linear regression](https://en.wikipedia.org/wiki/Linear_regression), the mean response is instead given by

$$\mathbb{E}\mathbf{Y} = \mathbf{X}\boldsymbol{\theta},$$

where $\mathbf{X} = [\mathbf{v}_1, \ldots, \mathbf{v}_m]$ is a matrix with $m$ columns. As in the previous section, we can use the positive-part James–Stein shrinkage operator to obtain a [shrinkage estimator](https://en.wikipedia.org/wiki/Shrinkage_estimator) of $\boldsymbol{\theta}$. In particular, any $\widehat{\boldsymbol{\theta}}$ that satisfies the *James–Stein [KKT conditions](https://en.wikipedia.org/wiki/KKT_conditions)*[\[9\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-:0-9)

$$\hat{\theta}_i = S_{\sigma/\|\mathbf{v}_i\|}\left(\hat{\theta}_i + \frac{\mathbf{v}_i^\top(\mathbf{Y} - \mathbf{X}\widehat{\boldsymbol{\theta}})}{\|\mathbf{v}_i\|^2}\right), \quad i = 1, \ldots, m,$$

is a (positive-part) James–Stein estimator of $\boldsymbol{\theta}$ with the useful property that it performs both shrinkage and [model selection](https://en.wikipedia.org/wiki/Model_selection) simultaneously. This is because, depending on the value of the known $\sigma^2$, there is a (possibly empty) set $\mathcal{S} \subseteq \{1, \ldots, m\}$ such that $\hat{\theta}_i = 0$ for all $i \in \mathcal{S}$. In other words, some (or all) of the $\theta_i$ could be estimated as exactly zero, which is equivalent to the selection of a suitable linear regression model.
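A naive way to look for a point satisfying these conditions is a coordinate-wise fixed-point sweep. The sketch below is our own illustration, not the procedure of the cited book, and convergence of this simple scheme is an assumption rather than a guarantee:

```python
import numpy as np

def js_kkt_fixed_point(X, y, sigma, sweeps=200):
    """Naive Gauss-Seidel iteration of the James-Stein KKT conditions above."""
    n, m = X.shape
    norms2 = np.sum(X**2, axis=0)          # ||v_i||^2 for each column v_i
    theta = np.zeros(m)
    r = y - X @ theta                      # residual Y - X theta
    for _ in range(sweeps):
        for i in range(m):
            z = theta[i] + X[:, i] @ r / norms2[i]
            lam = sigma / np.sqrt(norms2[i])
            new = 0.0 if z == 0 else z * max(0.0, 1.0 - (lam / z) ** 2)
            r += X[:, i] * (theta[i] - new)   # keep the residual consistent
            theta[i] = new                    # exact zeros = dropped variables
    return theta
```

Coordinates whose update falls at or below the threshold $\sigma/\|\mathbf{v}_i\|$ are set exactly to zero, which is the model-selection behaviour described above.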
The James–Stein estimator may seem at first sight to be a result of some peculiarity of the problem setting. In fact, the estimator exemplifies a very wide-ranging effect, namely, that the "ordinary" or least squares estimator is often [inadmissible](https://en.wikipedia.org/wiki/Admissible_decision_rule) for simultaneous estimation of several parameters. This effect has been called [Stein's phenomenon](https://en.wikipedia.org/wiki/Stein%27s_phenomenon) and has been demonstrated for several different problem settings, some of which are briefly outlined below.

- James and Stein demonstrated that the estimator presented above can still be used when the variance $\sigma^2$ is unknown, by replacing it with the standard estimator of the variance, $\widehat{\sigma}^2 = \frac{1}{m}\sum(Y_i - \overline{Y})^2$. The dominance result still holds under the same condition, namely, $m > 2$.[\[2\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-james%E2%80%93stein-61-2)
- All the results above are for the case when only a single observation vector $\mathbf{y}$ is available. For the more general case when $n$ vectors are available, we consider the estimator
  $$\widehat{\boldsymbol{\theta}}_{JS} = \left(1 - \frac{(m-2)\,\sigma^2/n}{\|\overline{\mathbf{Y}}\|^2}\right)\overline{\mathbf{Y}},$$
  where $\overline{\mathbf{Y}}$ is the $m$-length average of the $n$ observations, so that $\overline{\mathbf{Y}} \sim N_m\left(\boldsymbol{\theta}, \frac{\sigma^2}{n} I\right)$ (both of these variants are illustrated in the sketch after this list).
- The work of James and Stein has been extended to the case of a general measurement covariance matrix, i.e., where measurements may be statistically dependent and may have differing variances.[\[10\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-bock75-10) A similar dominating estimator can be constructed, with a suitably generalized dominance condition. This can be used to construct a [linear regression](https://en.wikipedia.org/wiki/Linear_regression) technique which outperforms the standard application of the LS estimator.[\[10\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-bock75-10)
- Stein's result has been extended to a wide class of distributions and loss functions. However, this theory provides only an existence result, in that explicit dominating estimators were not actually exhibited.[\[11\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-brown66-11) It is quite difficult to obtain explicit estimators improving upon the usual estimator without specific restrictions on the underlying distributions.[\[4\]](https://en.wikipedia.org/wiki/James%E2%80%93Stein_estimator#cite_note-lehmann-casella-98-4)
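The first two extensions in the list above translate directly into code. A sketch in the article's notation (helper names ours; for the unknown-variance case we use the plug-in estimate $\widehat{\sigma}^2$ quoted above):

```python
import numpy as np

def sigma2_plugin(y):
    """Variance estimate quoted in the text: (1/m) * sum_i (y_i - mean(y))^2."""
    return np.mean((y - y.mean()) ** 2)

def james_stein_unknown_var(y):
    """James-Stein with sigma^2 replaced by the plug-in estimate (m > 2)."""
    m = y.size
    return (1.0 - (m - 2) * sigma2_plugin(y) / np.sum(y**2)) * y

def james_stein_n_obs(Y, sigma2):
    """n observation vectors (rows of Y): shrink the row average,
    whose covariance is (sigma2 / n) * I."""
    n, m = Y.shape
    ybar = Y.mean(axis=0)
    return (1.0 - (m - 2) * (sigma2 / n) / np.sum(ybar**2)) * ybar
```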
## See also

- [Admissible decision rule](https://en.wikipedia.org/wiki/Admissible_decision_rule)
- [Hodges' estimator](https://en.wikipedia.org/wiki/Hodges%27_estimator)
- [Shrinkage estimator](https://en.wikipedia.org/wiki/Shrinkage_estimator)
- [Regular estimator](https://en.wikipedia.org/wiki/Regular_estimator)
- [KL divergence](https://en.wikipedia.org/wiki/KL_divergence)

## References

1. [Stein, C.](https://en.wikipedia.org/wiki/Charles_Stein_\(statistician\)) (1956), "Inadmissibility of the usual estimator for the mean of a multivariate distribution", [*Proc. Third Berkeley Symp. Math. Statist. Prob.*](http://projecteuclid.org/euclid.bsmsp/1200501656), vol. 1, pp. 197–206, MR 0084922, Zbl 0073.35602
2. James, W.; [Stein, C.](https://en.wikipedia.org/wiki/Charles_Stein_\(statistician\)) (1961), "Estimation with quadratic loss", [*Proc. Fourth Berkeley Symp. Math. Statist. Prob.*](http://projecteuclid.org/euclid.bsmsp/1200512173), vol. 1, pp. 361–379, MR 0133191
3. Beran, R. (1995), "The Role of Hajek's Convolution Theorem in Statistical Theory"
4. Lehmann, E. L.; Casella, G. (1998), *Theory of Point Estimation* (2nd ed.), New York: Springer
5. Efron, B.; Morris, C. (1973), "Stein's Estimation Rule and Its Competitors—An Empirical Bayes Approach", *Journal of the American Statistical Association*, **68** (341): 117–130, [doi:10.2307/2284155](https://doi.org/10.2307%2F2284155), [JSTOR 2284155](https://www.jstor.org/stable/2284155)
6. Stander, M. (2017), *Using Stein's estimator to correct the bound on the entropic uncertainty principle for more than two measurements*, [arXiv:1702.02440](https://arxiv.org/abs/1702.02440), [Bibcode 2017arXiv170202440S](https://ui.adsabs.harvard.edu/abs/2017arXiv170202440S)
7. Stigler, Stephen M. (1990), ["The 1988 Neyman Memorial Lecture: A Galtonian Perspective on Shrinkage Estimators"](https://doi.org/10.1214%2Fss%2F1177012274), *Statistical Science*, **5** (1), doi:10.1214/ss/1177012274, ISSN 0883-4237
8. Anderson, T. W. (1984), *An Introduction to Multivariate Statistical Analysis* (2nd ed.), New York: John Wiley & Sons
9. Botev, Zdravko I.; Kroese, Dirk P.; Taimre, Thomas (2025), *Data Science and Machine Learning: Mathematical and Statistical Methods* (2nd ed.), Boca Raton; London: CRC Press, pp. 277–279, ISBN 978-1-032-48868-4
10. Bock, M. E. (1975), "Minimax estimators of the mean of a multivariate normal distribution", *[Annals of Statistics](https://en.wikipedia.org/wiki/Annals_of_Statistics)*, **3** (1): 209–218, [doi:10.1214/aos/1176343009](https://doi.org/10.1214%2Faos%2F1176343009), MR 0381064, Zbl 0314.62005
11. [Brown, L. D.](https://en.wikipedia.org/wiki/Lawrence_D._Brown) (1966), "On the admissibility of invariant estimators of one or more location parameters", *Annals of Mathematical Statistics*, **37** (5): 1087–1136, [doi:10.1214/aoms/1177699259](https://doi.org/10.1214%2Faoms%2F1177699259), MR 0216647, Zbl 0156.39401

## Further reading

- Judge, George G.; Bock, M. E. (1978), *The Statistical Implications of Pre-Test and Stein-Rule Estimators in Econometrics*, New York: North Holland, pp. 229–257, ISBN 0-7204-0729-X
Shard: 152 (laksa)
Root Hash: 17790707453426894952
Unparsed URL: org,wikipedia!en,/wiki/James%E2%80%93Stein_estimator s443