
Robust statistics: a selective overview and new directions

Full article available on Wiley Online Library (HTML and PDF).


Classical statistics relies largely on parametric models. Typically, assumptions are made on the structural and the stochastic parts of the model, and optimal procedures are derived under these assumptions. Standard examples are least squares estimators in linear models and their extensions, maximum‐likelihood estimators and the corresponding likelihood‐based tests, and generalized method of moments (GMM) techniques in econometrics. Robust statistics deals with deviations from the stochastic assumptions, and the dangers they pose for classical estimators and tests, and develops statistical procedures that remain reliable and reasonably efficient in the presence of such deviations. It can be viewed as a statistical theory of approximate parametric models, offering a reasonable compromise between the rigidity of a strict parametric approach and the potential difficulties of interpretation of a fully nonparametric analysis. Many classical procedures are well known for not being robust: they are optimal when the assumed model holds exactly, but biased and/or inefficient when small deviations from the model are present, so results obtained from standard classical procedures on real data can be misleading. In this paper we give a brief introduction to robust statistics by reviewing some basic general concepts and tools and by showing how they can be used in data analysis to provide a complementary analysis with additional useful information. We focus on robust statistical procedures based on M‐estimators and tests because they provide a unified statistical framework that complements the classical theory. Robust procedures are discussed for standard models, including linear models, generalized linear models, and multivariate analysis. Some recent developments in high‐dimensional statistics are also outlined.

WIREs Comput Stat 2015, 7:372–393. doi: 10.1002/wics.1363

This article is categorized under:
Statistical Learning and Exploratory Methods of the Data Sciences > Modeling Methods
Statistical and Graphical Methods of Data Analysis > Robust Methods
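Since the overview centers on M‐estimators, a minimal sketch may help fix ideas. The Python snippet below is an illustration, not the paper's code: it computes a Huber M‐estimate of location by iteratively reweighted averaging and contrasts it with the sample mean on a contaminated sample. The tuning constant c = 1.345 and the MAD‐based scale are conventional choices, assumed here for the sake of the example.

```python
import statistics

def huber_location(xs, c=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted averaging.

    Scale is held fixed at the normalized median absolute deviation (MAD).
    Observations with residuals beyond c * scale are downweighted.
    """
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs) / 0.6745
    if mad == 0:
        return med  # more than half the data are identical
    mu = med
    for _ in range(max_iter):
        # Huber weights: 1 inside the threshold, c*scale/|r| outside.
        w = [min(1.0, c * mad / abs(x - mu)) if x != mu else 1.0 for x in xs]
        new_mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(new_mu - mu) < tol:
            break
        mu = new_mu
    return mu

clean = [4.8, 5.1, 4.9, 5.2, 5.0, 4.7, 5.3, 5.1]
contaminated = clean + [50.0]  # one gross outlier
print(statistics.mean(contaminated))  # dragged toward the outlier
print(huber_location(contaminated))   # stays near the bulk of the data
```

A single gross outlier pulls the mean roughly halfway toward it, while the Huber estimate, which gives the outlier a weight close to zero, stays near 5.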
Figures:
- Nonparametric, parametric, and robust statistics.
- Size of model selected under three contamination scenarios.
- Predicted log(MSE) and L2‐loss for high‐dimensional Poisson regression under three contamination scenarios.
- Influence of outliers on classical and robust covariance estimates.
- Weights obtained from the robust analysis.
- Mapping of the multivariate Huber function.
- Huber's function.
- Bias of the LS estimator for the intercept (a) and the slope (b) in a regression model. The contaminated samples are obtained by replacing an ε‐percentage of the observations in the clean sample; the true bias can be obtained in closed form.
- Various approaches to robustness.
- Weekly exchange rate returns of the Swedish krona versus the US dollar (November 29, 1993 to November 17, 2003) from Datastream (a), and weights implied by the robust estimation of the AR(1)–ARCH(1) model with c = 4 (b).
- P‐value of the two‐sample t‐test and Wilcoxon test as the value of y(10) increases.
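Several of the captions above refer to Huber's function and to the weights produced by a robust analysis. As a hedged illustration (the tuning constant c = 1.345 below is a common default giving roughly 95% efficiency at the normal model, not necessarily the value used in the article's figures), Huber's psi function and the weight it implies can be written as:

```python
def huber_psi(r, c=1.345):
    """Huber's psi function: identity for small residuals, clipped at +/- c."""
    return max(-c, min(c, r))

def huber_weight(r, c=1.345):
    """Implied weight psi(r)/r: 1 near zero, shrinking toward 0 for outliers."""
    return 1.0 if r == 0 else huber_psi(r, c) / r

for r in (0.0, 1.0, 2.0, 10.0):
    print(r, huber_weight(r))
```

The weight function is what the "weights obtained from the robust analysis" panels display: well-fitting observations receive weight 1, while gross outliers are smoothly downweighted rather than deleted.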
