We had our first taste of the problem with mean-variance optimization at a hedge fund some years back. We loaded the positions into an optimizer, pressed the button, and discovered that 25% of the portfolio should be in General Mills. You've probably experienced the same sort of thing: weird behavior, made all the more perplexing because the optimization is an impenetrable black box.

What is going on? If we can’t trust it, we throw up our hands and revert to making adjustments by hand.

Computers Versus The World
The optimization actually is doing exactly what it is supposed to do, at least as far as the computer is concerned. The problem for us stems from the computer's insistence that the key input, the variance-covariance matrix, is the literal truth, now and for all of the future. For General Mills, there were two instances where the stock moved sharply against the market, which made it look like a powerful diversifier—if we get a replay of the past. Not likely.

If the relationships among the equities are fully reflected in that matrix (they aren't), if those relationships never change (they will), and if the optimized portfolio is held unswervingly forever (it won't), then the computer is getting it right.

Of course, we know the variance-covariance matrix is only an estimate based on recent history. We also know the world will change, so even if we got it one hundred percent right, it would not work going forward. Nor do we hold the portfolio long enough for the optimality to show through. We will be changing the portfolio as the client's objectives and the portfolio's value change, so "set it and forget it" based on the current optimal portfolio ignores an ever-changing world.
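To see how much leverage a single historical covariance entry has, here is a minimal sketch in Python with made-up numbers: three assets, estimated expected returns, and a covariance matrix in which a couple of contrarian moves have made the third asset look negatively correlated with the first. Flipping that one entry to a mildly positive value swings the whole allocation.

```python
import numpy as np

def mean_variance_weights(mu, cov):
    """Classic unconstrained mean-variance answer: w proportional to inv(cov) @ mu,
    rescaled to sum to one."""
    raw = np.linalg.solve(cov, mu)
    return raw / raw.sum()

# Hypothetical inputs; the third asset plays the General Mills role.
mu = np.array([0.06, 0.05, 0.03])        # estimated expected returns

cov_estimated = np.array([               # recent history: the third asset looks
    [0.040,  0.018, -0.012],             # negatively correlated with the first
    [0.018,  0.030,  0.002],
    [-0.012, 0.002,  0.025],
])

cov_alternative = cov_estimated.copy()   # same matrix, except that one covariance
cov_alternative[0, 2] = 0.004            # is now mildly positive
cov_alternative[2, 0] = 0.004

print(mean_variance_weights(mu, cov_estimated))    # the third asset gets nearly half the portfolio
print(mean_variance_weights(mu, cov_alternative))  # one changed entry, and the allocation shifts sharply
```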

Portfolios Versus People
Our most recent run-in with mean-variance optimization was in trying out a software program—one designed specifically for financial advisors. True to form, it occasionally gave funny results, though not as bad, thanks to secondary logic that filters out those sorts of outcomes. But its results missed the question being asked. Advisors are not fishing for alpha; they are not trying to maximize return subject to a specified portfolio volatility. They are trying to create the best portfolio for their client. And that means designing a portfolio that meets the client's objectives, objectives that extend well beyond return and a single aspect of risk—volatility. The objectives are multifaceted and vary over the client's lifetime. So, optimization that might resonate for a portfolio manager will fall flat when advising a client.

Take as an example the approach of Ashvin Chhabra, who looks at three types of objectives for individuals: having a baseline of financial security, maintaining lifestyle and reaching for aspirational goals. Each of these demands a different portfolio, ranging from low risk for security to high risk for aspirations. These will vary over time in a somewhat predictable way, and, importantly, require more than simply constrained variance minimization within each bucket. Someone in his late 20s who is single and has marketable job skills will need less in the security bucket than he will in his mid-30s, when he is married with three children in tow, and that will differ again when he hits his empty-nested 60s with amassed wealth. How can a mean-variance optimization speak to this? It can't. And to make matters worse for the mean-variance approach, there are different objectives for each of the buckets.

Additionally, each bucket might require a different asset mix. Investors now have access to a broader set of asset classes, going beyond the traditional equity-fixed income split. They can now invest in private equity and hedge funds. While these alternative asset classes promise outsized returns, the risks embedded in them go beyond a simple volatility measure to things like liquidity risk.

There is no "set it and forget it," because the path of a client's life is subject to twists and turns. We want to design the portfolio the way we might design a guided missile. When we are far away and see the target veering to the right, we don't keep going straight ahead, but neither do we steer toward its current location, because we know that location has a cloud of uncertainty around it; there is zigging and zagging to come. Given the sensitivity of standard mean-variance optimization to estimates of returns and correlations between assets, in times of uncertainty this approach might suggest large changes to a client's portfolio, leading to substantial costs with ultimately little value for the client's needs.

Rethinking Optimization
We must take a coarse view, both in how we construct the portfolio now and in how we adjust for the expected path ahead. A coarse view is one that takes a lesson from the cockroach. A cockroach has a very simple defense mechanism: it doesn't hear, see or smell. All it does is move in the opposite direction of wind hitting little hairs on its legs. It will never win the best-designed insect of the year award. But it has done well enough. Well enough to survive as jungles turned into deserts and deserts turned into cities. It survives because its behavior is coarse. The markets and the client's objectives change as well, and the lesson of the cockroach rings true here.

So, how can a mechanistic optimization engine be rejiggered to accommodate estimation errors, uncertainty about the future, and the dynamics of both market performance and the client's objectives?

One approach is to think of optimization in the context of risk factors rather than assets. For this we use the MSCI factor model. Risk factors are more stable than assets, so the correlation issues, while still there, are lessened. Factor influences thread through the assets, so the relationships between assets are tethered to something more than historical asset relationships. And most of the risk usually resides in a handful of factors as opposed to possibly hundreds of assets. The lower dimensionality makes the problem cleaner, with a reduced chance that some General Mills-like asset that happened to behave just so takes on an unreasonably dominant role.
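As a sketch of what working in factor space looks like, consider a small portfolio whose risk is summarized by three factor exposures rather than a full asset-by-asset covariance matrix. The loadings, factor covariances, and weights below are illustrative placeholders, not MSCI model values.

```python
import numpy as np

# Factor loadings for four holdings (rows) on three factors (columns):
# Equity, Rates, Credit. Purely illustrative numbers.
B = np.array([
    [1.00, 0.00, 0.05],   # US equity fund
    [0.95, 0.05, 0.10],   # international equity fund
    [0.05, 0.90, 0.30],   # bond fund
    [0.40, 0.10, 0.25],   # hedge fund
])

F = np.array([            # factor covariance matrix (annualized, illustrative)
    [0.0250, 0.0010, 0.0040],
    [0.0010, 0.0040, 0.0015],
    [0.0040, 0.0015, 0.0090],
])

D = np.diag([0.0010, 0.0015, 0.0005, 0.0060])   # asset-specific (idiosyncratic) variances

w = np.array([0.40, 0.20, 0.30, 0.10])          # portfolio weights

cov_assets = B @ F @ B.T + D                    # implied asset covariance matrix
port_var   = w @ cov_assets @ w                 # total portfolio variance

exposures      = B.T @ w                        # portfolio exposure to each factor
factor_contrib = exposures * (F @ exposures)    # each factor's contribution to variance

for name, contrib in zip(["Equity", "Rates", "Credit"], factor_contrib):
    print(f"{name:7s} share of variance: {contrib / port_var:5.1%}")
print(f"Idiosyncratic share: {(w @ D @ w) / port_var:5.1%}")
```

With these numbers the equity factor accounts for the bulk of the risk, which is the point: a handful of factor exposures tells you where the risk actually sits.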


As an example, consider the case of a client portfolio with the asset class allocations described in Table 1. When compared to the target portfolio, the client's current portfolio is overweight equities and underweight alternatives.

A naive rebalance might suggest a substantial reduction in the equity holdings and a substantial increase in the alternatives holdings. However, rebalancing to the underlying factor risk contributions shown in the figure allows for smaller changes. Because factors thread across assets, one can reach the target risk attribution through smaller moves. This points to another important advantage of rebalancing across risk factors: it reduces the need for large trades. Furthermore, if the risk exposures of the client portfolio are already aligned with those of the target, one might not need to rebalance at all, avoiding unnecessary trading costs.
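A stylized version of that calculation, with hypothetical weights and loadings (Table 1 and the actual factor exposures are not reproduced here), is to find the smallest set of trades that matches the target portfolio's factor exposures rather than trading all the way to the target asset weights. Matching exposures is a simpler stand-in for matching risk contributions, but it makes the point.

```python
import numpy as np

# Hypothetical sleeves with loadings on three factors (Equity, Rates, Credit).
# Note that the alternatives carry much of the same equity factor as the stock funds.
B = np.array([
    [1.00, 0.00, 0.05],   # US equity
    [0.95, 0.05, 0.10],   # international equity
    [0.05, 0.90, 0.30],   # core bonds
    [0.10, 0.40, 0.80],   # credit
    [0.55, 0.10, 0.20],   # hedge funds
    [0.90, 0.00, 0.10],   # private equity
])

w_current = np.array([0.40, 0.25, 0.20, 0.10, 0.03, 0.02])  # overweight equities, underweight alternatives
w_target  = np.array([0.30, 0.20, 0.20, 0.10, 0.10, 0.10])  # target allocation

# Find trades that (a) match the target's factor exposures and (b) sum to zero.
# Six unknown trades, four equations: lstsq returns the minimum-norm solution,
# i.e. the smallest trades that bring the factor profile to target.
A = np.vstack([B.T, np.ones((1, len(w_current)))])
b = np.concatenate([B.T @ (w_target - w_current), [0.0]])
trades, *_ = np.linalg.lstsq(A, b, rcond=None)

naive_turnover  = 0.5 * np.abs(w_target - w_current).sum()
factor_turnover = 0.5 * np.abs(trades).sum()
print(f"naive rebalance turnover:  {naive_turnover:.1%}")
print(f"factor-matched turnover:   {factor_turnover:.1%}")
```

Because the equity factor threads through the alternatives sleeves as well as the stock funds, the factor-matched trades come out well below the naive turnover in this example.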

Human Plus Machine—Guided Rebalancing
And then there is the human—the experience and common sense.

Think of three principal use cases for optimization. One is to build the target portfolio. A second is to keep the portfolio within an acceptable range of the target. And a third is to make adjustments as the market changes or as views of the market change. All of these are sensitive to the individual client's requirements and constraints.

We set the target without an optimization machine. If we keep on top of the portfolio, the straying from the target will be marginal. And if we move from asset to factor space, we can look at variations from the target to understand areas of material bias. Moving back to the mathematical world where the optimization tools and computers reside, adding human judgment creates what can be called, in statistical terms, a Bayesian approach. We create a starting point based on our experience and judgment, which in the Bayesian world is called the prior, and then push it on to the computer to make adjustments, resulting in what is called the posterior. Not that any of this is essential to know in a practical sense, but it is comforting to know that bringing in the human element is fair game from a purist’s standpoint.
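For readers who want to see the mechanics, here is a minimal, illustrative sketch of that prior-to-posterior update for a single expected return. The numbers are made up, and nothing here is tied to any particular vendor model; it simply shows how the advisor's judgment (the prior) and the historical estimate (the data) blend into a posterior.

```python
import numpy as np

prior_mean = 0.05        # advisor's judgment: a 5% long-run equity premium
prior_std  = 0.02        # how strongly the advisor holds that view

sample_mean = 0.11       # average return over a recent historical window
sample_std  = 0.16       # volatility of annual returns
n_years     = 10
sample_se   = sample_std / np.sqrt(n_years)   # standard error of the sample mean

# Standard normal-normal update: the posterior is a precision-weighted blend.
w_prior = 1 / prior_std**2
w_data  = 1 / sample_se**2
posterior_mean = (w_prior * prior_mean + w_data * sample_mean) / (w_prior + w_data)
posterior_std  = np.sqrt(1 / (w_prior + w_data))

# The noisy recent history nudges, but does not overwhelm, the advisor's judgment.
print(f"posterior expected return: {posterior_mean:.1%} (+/- {posterior_std:.1%})")
```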

Advisors and their clients already think of investments along these lines. The abstract view taken in traditional optimization, which treats the portfolio as a simple collection of assets, is inadequate when confronted with how client portfolios are actually set up: the portfolio is effectively a set of sub-portfolios, each with a particular mandate, possibly with underlying accounts that have differing objectives. Clients might hold legacy positions or place constraints on the buying or selling of certain assets. At a minimum, any portfolio construction or rebalancing exercise should be cognizant of these realities.

The key point is that mean-variance optimization is too blunt a tool, and one that is difficult to customize to the needs of a heterogeneous set of clients. The tools advisors use need to be malleable and flexible to account for this heterogeneity. Modern technology, computing power combined with advances in mathematical techniques, can help advisors move from brute-force, machine-driven optimization to what we call guided rebalancing: guided by the human sense of the baseline and acceptable variations, and guided by optimization methods that respect the realities of the market and the needs and objectives of individuals.

Rick Bookstaber is co-founder and head of risk at Fabric RQ. Dhruv Sharma is head of portfolio intelligence at Fabric RQ.