We had our first taste of the problem with mean-variance optimization at a hedge fund some years back. We loaded the positions into an optimizer, pressed the button, and discovered that 25% of the portfolio should be in General Mills. You've probably experienced the same sort of thing: weird behavior, made all the more perplexing because the optimization is an impenetrable black box.

What is going on? If we can't trust the optimizer, we throw up our hands and revert to making adjustments by hand.

Computers Versus The World
The optimization actually is doing exactly what it is supposed to do, at least as far as the computer is concerned. The problem for us stems from the computer's insistence that the key data, the variance-covariance matrix, is the literal truth, and will remain so for all of the future. For General Mills, there were two instances where the stock moved opposite the direction of the market in a big way, leading to a strong value for diversification, provided we get a replay of the past. Not likely.

If the relationship among the equities is fully reflected through that matrix (which it isn't), if that relationship never changes (which it will), and if the optimized portfolio is held unswervingly forever (which it won't), the computer gets it right.

Of course, we know the variance-covariance matrix is only an estimate based on what has happened in recent history. We also know that the world will change, so even if we got it one hundred percent right, it would not work going forward. Also, we are not holding the portfolio long enough for the optimality to show through. We will be changing the portfolio based on changes in the client's objectives and the changes in portfolio value, so "set it and forget it" based on the current optimal portfolio is ignoring the ever-changing world.
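To see how literally the optimizer takes the history it is fed, here is a deliberately stylized sketch. The returns are made up, and the closed-form two-asset minimum-variance weight stands in for a full optimizer; two fluke periods in which a stock fell against a rallying market are enough to buy it a large allocation.

```python
import numpy as np

# Hypothetical monthly returns (%), not the fund's actual data. Asset C
# tracks the market closely -- except in the last two periods, where it
# fell hard while the market rallied. The sample covariance matrix takes
# those two flukes as literal truth and reads them as diversification.
market  = np.array([1.0, -1.0, 1.0, -1.0,   4.0,  4.0])
c_clean = np.array([2.0, -2.0, 0.0,  0.0,   4.0,  4.0])   # C without the flukes
c_fluke = np.array([2.0, -2.0, 0.0,  0.0, -10.0, -8.0])   # C as observed

def min_var_weight(x, y):
    """Weight on y in the two-asset global minimum-variance portfolio."""
    cov = np.cov(x, y)  # 2x2 sample covariance matrix
    return (cov[0, 0] - cov[0, 1]) / (cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])

print(round(min_var_weight(market, c_clean), 2))  # -> 0.0: no role for C
print(round(min_var_weight(market, c_fluke), 2))  # -> 0.3: two fluke months
                                                  #    buy C a 30% allocation
```

Two out of six observations are enough to flip C's sample correlation with the market from strongly positive to strongly negative, and the optimizer, trusting the matrix completely, rewards it accordingly.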

Portfolios Versus People
Our most recent run-in with mean-variance optimization was in trying out a software program, one designed specifically for financial advisors. True to form, it occasionally gave funny results, though not as bad as ours, because secondary logic filtered out the worst of those outcomes. But its results missed the question being asked. Advisors are not fishing for alpha; they are not trying to maximize return subject to a specified portfolio volatility. They are trying to create the best portfolio for their client. And that means designing a portfolio that meets the client's objectives, objectives that extend well beyond returns and beyond the one aspect of risk that mean-variance considers: volatility. The objectives are multifaceted and vary over the client's lifetime. So, an optimization that might resonate for a portfolio manager will fall flat for advising a client.

Take as an example the approach of Ashvin Chhabra, who looks at three types of objectives for individuals: having a baseline of financial security, maintaining lifestyle, and reaching for aspirational goals. Each of these demands a different portfolio, ranging from low risk for security to high risk for aspirations. These portfolios will vary over time in a somewhat predictable way and, importantly, require more than simply minimizing variance, subject to constraints, within each bucket. Someone in his late 20s who is single and has marketable job skills will need less in the security bucket than he will in his mid-30s, when he is married with three children in tow, and that will differ again from his empty-nested 60s with amassed wealth. How can a mean-variance optimization speak to this? It can't. And to make matters worse for the mean-variance approach, there are different objectives for each of the buckets.

Additionally, each bucket might require a different asset mix. Investors now have access to a broader set of asset classes, going beyond the traditional equity-fixed income split. They can now invest in private equity and hedge funds. While these alternative asset classes promise outsized returns, the risks embedded in them go beyond a simple volatility measure to things like liquidity risk.
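As a sketch of how the buckets might be represented, the structure below carries a hypothetical share of wealth and asset mix for each bucket (the shares and mixes are illustrative, not Chhabra's prescriptions) and blends them into a single headline allocation:

```python
# Illustrative only: hypothetical bucket sizes and asset mixes, not a
# recommendation. Each bucket holds its own mix, including alternatives
# in the aspirational bucket.
buckets = {
    #  name            (share of wealth, asset mix within the bucket)
    "security":     (0.30, {"cash": 0.40, "bonds": 0.60}),
    "lifestyle":    (0.50, {"bonds": 0.40, "equities": 0.60}),
    "aspirational": (0.20, {"equities": 0.50, "private_equity": 0.30,
                            "hedge_funds": 0.20}),
}

# Blend the buckets into one headline allocation for reporting.
blended = {}
for share, mix in buckets.values():
    for asset, weight in mix.items():
        blended[asset] = blended.get(asset, 0.0) + share * weight

for asset, weight in sorted(blended.items()):
    print(f"{asset}: {weight:.0%}")
```

The point of the structure is that each bucket's mix can be set against that bucket's objective and its own risks (including, say, the liquidity risk of the alternatives), rather than asking one optimization to serve all three at once.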

There is no "set it and forget it," because the path of a client's life is subject to twists and turns. We want to design the portfolio the way we might design a guided missile. When we are far away and see the target veering to the right, we don't keep going straight ahead, but neither do we steer directly toward the target's current location, because we know that location has a cloud of uncertainty around it; there is zigging and zagging to come. Given the sensitivity of standard mean-variance optimization to estimates of returns and correlations between assets, in times of uncertainty this approach might suggest large changes to a client's portfolio, leading to substantial costs with ultimately little value for the client's needs.
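A minimal sketch of this guided-missile idea, with hypothetical band and damping parameters: ignore small drifts entirely, and close only part of larger gaps, since the optimizer's target itself carries estimation uncertainty.

```python
def adjust_toward_target(current, target, band=0.05, damping=0.5):
    """Partial-adjustment rebalancing sketch (parameters are hypothetical).

    Drifts smaller than `band` are left alone; larger gaps are only
    partially closed, since the target will move again as estimates and
    the client's circumstances change."""
    new = {}
    for asset in current:
        gap = target[asset] - current[asset]
        if abs(gap) < band:
            new[asset] = current[asset]          # inside the no-trade band
        else:
            new[asset] = current[asset] + damping * gap  # close half the gap
    return new

current = {"equities": 0.70, "bonds": 0.30}
target  = {"equities": 0.55, "bonds": 0.45}      # noisy optimizer output
adjusted = adjust_toward_target(current, target)
print({k: round(v, 3) for k, v in adjusted.items()})
# equities moves roughly halfway toward the target, not all the way
```

Trading only part of the way toward a moving, uncertain target caps transaction costs when the estimates shift again, at the price of tracking the "optimal" portfolio less tightly.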

Rethinking Optimization
We must take a coarse view, both in how we construct the portfolio now and in how we adjust for the expected path ahead. A coarse view is one that takes a lesson from the cockroach. A cockroach has a very simple defense mechanism: it doesn't hear, see or smell. All it does is move in the opposite direction of wind hitting little hairs on its legs. It will never win the best-designed-insect-of-the-year award. But it has done well enough, well enough to survive as jungles turned into deserts and deserts turned into cities. It survives because it has coarse behavior. The markets and the client's objectives change as well, and the lesson of the cockroach rings true there.

So, how can a mechanistic optimization engine be rejiggered in the face of the need to accommodate estimation errors, uncertainty about the future, and the dynamics of both the market performance and the client's objectives?

One approach is to think of optimization in the context of risk factors rather than assets. For this we use the MSCI factor model. Risk factors are more stable than assets, so the correlation issues, while still there, are lessened. Factor influences thread through the assets, so the relationships between assets are tethered to something more than historical asset relationships. And most of the risk usually resides in a handful of factors as opposed to possibly hundreds of assets. The lower dimensionality makes the problem cleaner, with a reduced chance that some General Mills-like asset that happened to behave just so takes on an unreasonably dominant role.
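As a sketch of the structure (with made-up exposures and factor statistics, not MSCI's), a factor model builds the asset covariance matrix from a small factor block plus idiosyncratic terms, so only the low-dimensional factor covariance has to be estimated from history:

```python
import numpy as np

# Hypothetical two-factor structure over four assets; all numbers invented.
# Asset covariance = B @ F @ B.T + D, where only the 2x2 factor block F
# must be estimated, rather than every pairwise asset relationship.
B = np.array([   # factor exposures (rows: assets; cols: market, value)
    [1.0,  0.2],
    [0.9, -0.1],
    [1.1,  0.4],
    [0.3,  0.0],
])
F = np.array([   # factor covariance (annualized variances/covariance)
    [0.030, 0.002],
    [0.002, 0.010],
])
D = np.diag([0.02, 0.03, 0.04, 0.01])  # idiosyncratic variances

cov = B @ F @ B.T + D   # 4x4 asset covariance from the 2-factor structure

# Per-asset volatility implied by the model, driven mainly by the factors.
print(np.round(np.sqrt(np.diag(cov)), 3))
```

The same construction scales to hundreds of assets: the factor block stays small and relatively stable, while each new asset contributes only a row of exposures and an idiosyncratic term, leaving far less room for one asset's quirky history to dominate the result.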
