
Optimization cultures


Computational optimization methods can be broadly classified into two groups: classical methods, which require and exploit specific functional forms of the objective function and constraints, and heuristics. These latter methods impose few, if any, restrictions on models, at the price of being more computationally demanding. But given the growth of computing capacity over recent decades, heuristics are now practical tools for everyday use. Yet, instead of realizing the advantages of heuristics, users still cling to classical methods. We discuss the reasons for this nonacceptance of heuristics, and argue that the choice of numerical-optimization techniques is driven as much by the culture of the user (field of work and educational background) as by the quality of the method. In particular, we argue that many of the alleged shortcomings of heuristics could be overcome if researchers stopped treating optimization as an exact, mathematical discipline and instead considered it a practical, computational tool. WIREs Comput Stat 2014, 6:352–358. doi: 10.1002/wics.1312

This article is categorized under: Algorithms and Computational Methods > Stochastic Optimization

Depiction of the objective functions of two portfolio-optimization models. In both models, the goal is to minimize the risk of a portfolio of three assets. In the left panel, we equate risk with return variance. Thus, the function shows the variance of the portfolio for different weights of two assets; the third asset's weight is fixed through the budget constraint (i.e., we cannot invest more wealth than we have). This is the standard model in portfolio optimization, introduced by Harry Markowitz. To solve it, a classical optimization technique starts at some point specified by the user and then moves downhill ('minus the gradient') until the gradient becomes zero: the objective function is locally flat, and we have arrived at the minimum. That minimum is easily found because the function is smooth and has only one optimum. In fact, Markowitz chose this specification for risk because the function is so well behaved, not because he considered it the best financial specification. Already in the 1950s, Markowitz pondered using downside semi-variance as a measure of risk, but rejected it mainly because he could not find an algorithm to solve the resulting model.
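The gradient-based search described above can be sketched in a few lines. This is a minimal illustration, not the method of any particular paper: the covariance matrix is hypothetical (illustrative numbers only), and the problem is reduced to two free weights via the budget constraint, exactly as in the left panel.

```python
import numpy as np

# Hypothetical covariance matrix for three assets (illustrative numbers).
Sigma = np.array([[0.040, 0.006, 0.012],
                  [0.006, 0.090, 0.018],
                  [0.012, 0.018, 0.160]])

def variance(w12):
    """Portfolio variance as a function of the first two weights;
    the third weight follows from the budget constraint."""
    w = np.append(w12, 1.0 - w12.sum())
    return w @ Sigma @ w

def gradient(w12):
    """Gradient of the reduced two-variable objective (chain rule:
    a change in w1 or w2 also changes w3 = 1 - w1 - w2)."""
    w = np.append(w12, 1.0 - w12.sum())
    g = 2.0 * Sigma @ w          # gradient w.r.t. the full weight vector
    return g[:2] - g[2]

# Steepest descent: start at a user-specified point, move 'minus the
# gradient' until the gradient is (numerically) zero.
w12 = np.array([0.8, 0.1])
for _ in range(5000):
    w12 = w12 - 0.5 * gradient(w12)
```

Because the variance function is smooth and has a single optimum, this descent reliably reaches the minimum from any starting point; that robustness is precisely what the caption attributes to the Markowitz specification.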

In the right panel we use the same dataset, but now define risk as the Value-at-Risk (VaR), a quantile of the return distribution. The function is clearly not smooth and has many local minima. A classical method would not be appropriate for such a model, since it would stop at the first minimum it finds. Heuristics, on the other hand, have been successfully applied to such models; see, for instance, Gilli and Këllezi, or Dueck and Winker, the latter being the first application of optimization heuristics to portfolio-selection problems.
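A heuristic for such a non-smooth objective can be sketched as below, in the spirit of threshold accepting: a local search that also accepts slightly uphill moves, which lets it escape the many local minima a gradient method would get stuck in. Everything here is an illustrative assumption, not the setup of the papers cited: the returns are simulated, and the neighbourhood size and threshold sequence are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily returns for three assets (simulated here; a real
# application would use historical data).
R = rng.normal(0.0005, 0.01, size=(1000, 3))

def var_95(w):
    """Empirical 95% Value-at-Risk: the negated 5% quantile of portfolio returns."""
    return -np.quantile(R @ w, 0.05)

def neighbour(w, step=0.05):
    """Shift a small amount of weight from one asset to another; the
    weights still sum to one (short positions allowed for simplicity)."""
    i, j = rng.choice(3, size=2, replace=False)
    d = rng.uniform(0.0, step)
    w_new = w.copy()
    w_new[i] -= d
    w_new[j] += d
    return w_new

def threshold_accepting(w, thresholds=(0.002, 0.001, 0.0), steps=2000):
    """Accept a candidate if it is better, or worse by less than the
    current threshold; the threshold shrinks to zero over the run."""
    best, best_val = w.copy(), var_95(w)
    for tau in thresholds:
        for _ in range(steps):
            w_new = neighbour(w)
            if var_95(w_new) - var_95(w) < tau:
                w = w_new                      # accept the move
                if var_95(w) < best_val:       # keep the best solution seen
                    best, best_val = w.copy(), var_95(w)
    return best

w0 = np.array([1/3, 1/3, 1/3])
w_best = threshold_accepting(w0)
```

The key design choice is the acceptance rule: unlike pure descent, small impairments are tolerated early on, so the search is not trapped by the first local minimum it encounters.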
