How to cite this WIREs title: WIREs Comp Stat

Variable selection in linear models



Variable selection in linear models is essential for improved inference and interpretation, and it has become even more critical for high-dimensional data. In this article, we provide a selective review of some classical methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC), Mallows's Cp, and the risk inflation criterion, as well as regularization methods, including the Lasso, bridge regression, the smoothly clipped absolute deviation (SCAD) penalty, the minimax concave penalty (MCP), the adaptive Lasso, the elastic net (ENet), and the group Lasso. We discuss how to select the penalty parameters and review screening procedures for ultrahigh-dimensional data. WIREs Comput Stat 2014, 6:1–9. doi: 10.1002/wics.1284

This article is categorized under:
Statistical Models > Linear Models
Statistical Learning and Exploratory Methods of the Data Sciences > Modeling Methods
Statistical Models > Model Selection
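The abstract only summarizes the methods, but the Lasso step it describes can be sketched concretely. Below is a minimal, illustrative numpy implementation (not code from the article): coordinate descent with soft-thresholding for the Lasso objective, with the penalty parameter λ chosen over a grid by a BIC-type criterion that uses the number of nonzero coefficients as the degrees of freedom. The data, function names, and grid are all synthetic assumptions made for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    """S(z, t) = sign(z) * max(|z| - t, 0): the Lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # (1/n) x_j' x_j for each column
    r = y.astype(float).copy()             # residual at b = 0
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]            # add back coordinate j's contribution
            b[j] = soft_threshold(X[:, j] @ r / n, lam) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

# Synthetic example: only the first three coefficients are nonzero.
rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# Select lambda on a grid by a BIC-type criterion: n*log(RSS/n) + log(n)*df,
# taking df as the number of nonzero coefficients.
best_bic, b_hat = np.inf, None
for lam in np.logspace(-2, 0, 20):
    b = lasso_cd(X, y, lam)
    bic = n * np.log(((y - X @ b) ** 2).mean()) + np.log(n) * np.count_nonzero(b)
    if bic < best_bic:
        best_bic, b_hat = bic, b
```

At the selected λ the three true signals are retained with coefficients close to their true values, while the spurious coefficients are shrunk to (or near) zero, which is the sparsity behavior the abstract refers to.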
Figure: Penalty functions of Lasso, SCAD, MCP, SELO, ENet, and L0L1. The tuning parameters are selected as follows: λ = 1.5 for Lasso; a = 3.7 and λ = 1.5 for SCAD; a = 2 and λ = 1.5 for MCP; λ1 = 1.5 and λ2 = 2 for SELO; λ1 = 1 and λ2 = 0.1 for ENet; and λ1 = 1.5 and λ2 = 2 for L0L1.
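As a sketch of three of the plotted penalties (SELO, ENet, and L0L1 are omitted here), the standard closed forms are: λ|t| for the Lasso, the piecewise-quadratic SCAD penalty of Fan and Li (2001), and the minimax concave penalty of Zhang (2010). This is an illustrative implementation, not code from the article:

```python
import numpy as np

def lasso_pen(t, lam):
    """Lasso penalty: lam * |t|."""
    return lam * np.abs(t)

def scad_pen(t, lam, a=3.7):
    """SCAD penalty (Fan and Li, 2001), piecewise in |t|."""
    t = np.abs(t)
    return np.where(
        t <= lam,
        lam * t,
        np.where(
            t <= a * lam,
            (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
            lam**2 * (a + 1) / 2,
        ),
    )

def mcp_pen(t, lam, a=2.0):
    """Minimax concave penalty (Zhang, 2010)."""
    t = np.abs(t)
    return np.where(t <= a * lam, lam * t - t**2 / (2 * a), a * lam**2 / 2)
```

With the caption's tuning parameters (λ = 1.5; a = 3.7 for SCAD and a = 2 for MCP), SCAD and MCP flatten out at the constants λ²(a + 1)/2 = 5.2875 and aλ²/2 = 2.25, respectively, while the Lasso penalty keeps growing linearly; this flattening is what reduces the bias of the concave penalties on large coefficients.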

