
An overview of reciprocal L1‐regularization for high dimensional regression data

High dimensional data play a key role in modern statistical analysis. A common objective in high dimensional data analysis is model selection, and the penalized likelihood method is one of the most popular approaches. Typical penalty functions are symmetric about 0, continuous, and nondecreasing in (0, ∞). In this review article, we focus on a special type of penalty function, the so-called reciprocal Lasso (rLasso) penalty. The rLasso penalty functions are decreasing in (0, ∞), discontinuous at 0, and converge to infinity as the coefficients approach zero. Although uncommon, this choice of penalty is intuitively appealing if one seeks a parsimonious model fit. In this article, we provide an overview of the motivation, theory, and computational challenges of the rLasso penalty, and we compare its theoretical properties and empirical performance with those of other popular penalty choices.
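
For concreteness, the rLasso criterion for the linear model is commonly written in the following form (a sketch consistent with the properties just described; the exact parameterization used in the article may differ):

\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \Big\{ \|y - X\beta\|_2^2 + \sum_{j=1}^{p} \frac{\lambda}{|\beta_j|} \, \mathbf{1}(\beta_j \neq 0) \Big\}

Unlike the Lasso penalty λ|β_j|, which vanishes as β_j → 0, the reciprocal penalty λ/|β_j| diverges near zero, so each coefficient is driven either to exactly zero or away from a neighborhood of zero; this is what yields parsimonious fits and what makes the objective discontinuous.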

Evolution paths of the best objective function values obtained in 20 runs of SAA (left plot) and simulated annealing (right plot), where each run was initialized with a randomly selected model. (A minimal sketch of such a stochastic search appears after the figure list.)
Comparison of shapes of different penalty functions.
Boxplot of the ratio of the selected tuning parameter value over the benchmark tuning parameter value. Note that outliers are not plotted, to keep the plot clean.
Average false selection, positive selection, and squared L2 estimation error for LASSO, SCAD, MCP, and rLasso as the signal strength increases, under CV(nv) cross-validation.
Part of the regularization paths of SCAD, LASSO, and rLasso for the same simulated data with n = 200, p = 30, and |t| = 8.
Boxplot of the ratio of the selected tuning parameter value over the benchmark tuning parameter value. Note that outliers are not plotted, to keep the plot clean.
Average false selection, positive selection, and squared L2 estimation error for LASSO, SCAD, MCP, and rLasso as the signal strength increases, under fivefold cross-validation.
Average false selection, positive selection, and squared L2 estimation error for LASSO, SCAD, MCP, and rLasso as the signal strength increases, under a fixed λ.
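
Because the rLasso objective is discontinuous at zero, gradient-based solvers do not apply directly, and stochastic search over candidate models, as in the SAA and simulated annealing runs shown in the first figure, is a natural fallback. The following is a minimal, hypothetical Python sketch of a simulated annealing search for the penalized least-squares criterion given above; the objective form, the single-flip proposal, and the geometric cooling schedule are illustrative assumptions, not the authors' SAA algorithm.

# Hypothetical sketch of a simulated annealing search for the rLasso
# objective. The objective form, flip proposal, and geometric cooling
# schedule are illustrative assumptions, not the authors' SAA algorithm.
import numpy as np

def rlasso_objective(y, X, subset, lam):
    """RSS of the least-squares fit on `subset`, plus the reciprocal
    penalty sum_j lam / |beta_j| over the selected coefficients."""
    if len(subset) == 0:
        return float(y @ y)
    Xs = X[:, subset]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return float(resid @ resid + np.sum(lam / np.abs(beta)))

def anneal(y, X, lam, n_iter=5000, t0=1.0, rho=0.999, seed=None):
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    current = []                      # start from the empty model
    f_cur = rlasso_objective(y, X, current, lam)
    best, f_best = list(current), f_cur
    temp = t0
    for _ in range(n_iter):
        # Propose flipping one randomly chosen variable in or out.
        j = int(rng.integers(p))
        proposal = sorted(set(current) ^ {j})
        f_prop = rlasso_objective(y, X, proposal, lam)
        # Metropolis acceptance at the current temperature.
        if f_prop < f_cur or rng.random() < np.exp((f_cur - f_prop) / temp):
            current, f_cur = proposal, f_prop
            if f_cur < f_best:
                best, f_best = list(current), f_cur
        temp *= rho                   # geometric cooling (an assumption)
    return best, f_best

Each run in the first figure corresponds to one such trajectory of the best objective value; restarting from different randomly selected models, as the authors do across their 20 runs, guards against the search stalling in a single local mode.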
