We were visiting a hedge fund some years back when we got our first taste of the problem with mean-variance optimization, the tool advisors use to balance risk and reward in client portfolios. We loaded a portfolio’s positions into an optimizer, pressed the button, and discovered that 25% of the portfolio should be in General Mills. You’ve probably experienced the same sort of weird behavior from your own optimization tools, which can act like impenetrable black boxes.

What is going on? If we can’t trust this tool, should we throw up our hands and revert to making adjustments by hand?

Computers Versus The World
As far as the computer is concerned, the optimization tool is doing exactly what it’s supposed to do. The problem for us stems from the computer’s insistence that the key input, the variance-covariance matrix that encapsulates the relationships among the assets, is the literal truth, and will remain so for all of the future. For General Mills, there were two instances where the stock moved sharply in the opposite direction of the market, giving the company strong diversification value, but only if the future replays the past. That’s not likely.

If the relationship among equities is fully reflected through that matrix (which it isn’t), if that relationship never changes (which it will), and if the optimized portfolio is held unswervingly forever (which it won’t be), the computer is getting it right.
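
To make this concrete, here is a minimal sketch, using entirely hypothetical simulated returns, of how an unconstrained mean-variance optimizer rewards an asset whose sample history happens to include a couple of contrarian moves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily returns for five assets over one year (simulated data).
returns = rng.normal(0.0004, 0.01, size=(250, 5))

# Inject two days where asset 0 moves sharply against the rest of the
# market, mimicking the two contrarian moves that made General Mills
# look like a powerful diversifier in the sample.
shocks = np.array([-0.04, 0.03])
returns[[50, 120], 1:] += shocks[:, None]  # the "market" falls, then rises
returns[[50, 120], 0] -= shocks            # asset 0 does the opposite

mu = returns.mean(axis=0)    # sample mean returns
sigma = np.cov(returns.T)    # sample variance-covariance matrix

# Classic unconstrained mean-variance solution: weights proportional to
# inv(sigma) @ mu, normalized to be fully invested. The optimizer treats
# sigma as literal truth, so asset 0's accidental hedging value tends to
# earn it an outsized weight.
raw = np.linalg.solve(sigma, mu)
weights = raw / raw.sum()
print(np.round(weights, 3))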

Of course we know the variance-covariance matrix is only an estimate based on what has happened in recent history. We also know that the world will change.

So even if we got it 100% right, it would not work going forward. Nor do we hold the portfolio long enough for the optimal performance to show through. We will be changing the portfolio according to the client’s objectives and changes in portfolio value, so if we “set it and forget it” with the current optimal portfolio, we’re ignoring an ever-changing world.

Portfolios Versus People
Our most recent run-in with mean-variance optimization was in trying out a software program designed specifically for financial advisors. True to form, it occasionally gave funny results, though these were not as extreme as our hedge fund episode because a secondary logic filter kept the output from getting too far out there. But its results still missed the question being asked. Advisors are not fishing for alpha; they are not trying to maximize return for a specified portfolio volatility. They are trying to create the best portfolio for their client. And that means designing a portfolio that meets the client’s objectives, something that goes beyond returns and volatility risk. Those objectives are multifaceted and vary over the client’s lifetime.

If we don’t look at the client’s objectives, we risk using an optimization program that resonates with a portfolio manager and falls flat for a client.

For one take on these objectives, Ashvin Chhabra, the president and CIO of Euclidean Capital, sorts client objectives into three buckets, each demanding its own pool of wealth: clients want a baseline of financial security, they want to maintain their lifestyle, and they have aspirations they want to fulfill. Each bucket calls for a different portfolio, ranging from low risk and high liquidity for financial security to high risk for the pursuit of aspirations.

The clients’ objectives will vary over time in a somewhat predictable way, and meeting them requires more from the advisor than simply minimizing variance bucket by bucket. A single client in his late 20s with marketable job skills needs less financial security than someone in his mid-30s who is married with three children in tow, and his needs differ again from an empty-nester in his 60s with amassed wealth. How can a mean-variance optimization speak to this? It can’t.
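
As a toy illustration only, here is one way the three buckets and their life-stage weights might be encoded. The bucket labels, risk levels, and weights below are our own illustrative assumptions, not figures from Chhabra’s framework:

```python
from dataclasses import dataclass

@dataclass
class Bucket:
    name: str
    risk: str        # qualitative risk level (illustrative)
    liquidity: str   # qualitative liquidity need (illustrative)

BUCKETS = [
    Bucket("financial security", risk="low", liquidity="high"),
    Bucket("lifestyle", risk="moderate", liquidity="moderate"),
    Bucket("aspirations", risk="high", liquidity="low"),
]

# Hypothetical share of wealth per bucket at different life stages,
# echoing the single 20-something vs. family vs. empty-nester example.
LIFE_STAGE_WEIGHTS = {
    "single, late 20s":       (0.20, 0.40, 0.40),
    "married, mid 30s, kids": (0.45, 0.40, 0.15),
    "empty-nester, 60s":      (0.35, 0.50, 0.15),
}

def allocation(stage: str) -> dict[str, float]:
    """Map a life stage to a bucket-level wealth allocation."""
    return {b.name: w for b, w in zip(BUCKETS, LIFE_STAGE_WEIGHTS[stage])}

print(allocation("married, mid 30s, kids"))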

Most importantly, advisors cannot set it and forget it, because the path of a client’s life is subject to twists and turns. We want to design the portfolio the way we might design a guided missile. When we are far away and see the target veering to the right, we don’t keep going straight ahead, but neither do we steer directly toward the target’s current location, because that location has a cloud of uncertainty around it; there is zigging and zagging to come. Standard mean-variance optimization is sensitive to its estimates of returns and of the correlations between assets, so in times of uncertainty it may tell us to make big changes to a client’s portfolio. Those changes carry substantial costs while adding little value to the client’s needs.
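
A minimal sketch of that guided-missile logic, with a step size and tolerance band that are illustrative choices of our own:

```python
import numpy as np

def partial_rebalance(weights, target_est, step=0.3, band=0.02):
    """Nudge weights toward a noisy target estimate.

    weights, target_est: weight vectors summing to 1.
    step: fraction of the gap to close each period (< 1, because the
          target estimate carries its own cloud of uncertainty).
    band: per-asset gap below which we don't trade at all.
    (In practice you would renormalize if the band zeroes some legs.)
    """
    gap = target_est - weights
    gap[np.abs(gap) < band] = 0.0      # ignore small, noisy deviations
    return weights + step * gap

w = np.array([0.50, 0.30, 0.20])
target = np.array([0.40, 0.35, 0.25])
print(partial_rebalance(w, target))    # a measured move, not a leap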

Rethinking Optimization
We must take a coarse view, both in how we construct the portfolio now and in how we adjust it along the expected path ahead. A coarse view is one that takes a lesson from the cockroach. This insect has a very simple defense mechanism: it doesn’t hear, or see, or smell. All it does is move in the opposite direction of wind hitting little hairs on its legs. It will never win a best-designed-insect-of-the-year award. But it has done well enough to survive even as jungles have turned into deserts and deserts into cities. It survives because its behavior is coarse.

The markets and clients’ objectives change as well. The lesson of the cockroach rings clear.

So, how can a mechanistic optimization engine be rejiggered to accommodate estimation errors, uncertainty about the future, and the dynamics of both the market performance and the client’s objectives?

One approach is to think of optimization in terms of risk factors rather than assets. By risk factors, we specifically mean a security’s exposure to certain sectors and industries, to countries, to market characteristics and to styles like value and growth.

Viewing a portfolio in factor space rather than in asset space has a number of advantages. Risk factors are more stable than assets, so the correlation issues, while still there, are lessened. The factor influences thread through the assets, so the relationships between them are tethered to something more than their historical behavior. And most of the risk can usually be found in a handful of factors as opposed to possibly hundreds of assets. When you can home in on these fewer dimensions, you make the problem cleaner, with a reduced chance that some company like General Mills, behaving just so, takes on an unreasonably dominant role in your portfolio.
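
Here is a minimal sketch of the factor view, with a hypothetical exposure matrix and factor covariance standing in for what a production risk model (such as the MSCI model mentioned below) would supply:

```python
import numpy as np

n_assets, n_factors = 100, 5                  # many assets, few factors

rng = np.random.default_rng(1)
B = rng.normal(size=(n_assets, n_factors))    # asset-by-factor exposures
F = np.diag([0.04, 0.02, 0.01, 0.01, 0.005])  # factor covariance (diagonal here)
spec_var = np.full(n_assets, 0.01)            # idiosyncratic variances

w = np.full(n_assets, 1.0 / n_assets)         # equal-weight portfolio

# Portfolio exposures live in 5 dimensions, not 100.
exposures = B.T @ w

# Standard factor-model decomposition of portfolio variance:
# factor contribution plus asset-specific contribution.
factor_var = exposures @ F @ exposures
specific_var = (w**2 * spec_var).sum()
print(exposures)
print(factor_var + specific_var)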

Consider the client’s portfolio and its asset allocations described in Table 1. When compared to the target portfolio, the client’s current portfolio is overweight equities and underweight alternatives. In a naive rebalancing, we might end up substantially reducing the equity holdings and substantially increasing the alternatives. However, if we look at the underlying factors and corresponding risk contributions in the figure, and then rebalance for those, we end up making smaller changes. Because factors thread across assets, one can reach the target risk attribution through these smaller moves.
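
To illustrate under stated assumptions (a random exposure matrix and target portfolio standing in for the Table 1 data), here is a sketch that finds the smallest trade vector, in the least-squares sense, that matches the target’s factor exposures while staying fully invested:

```python
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_factors = 20, 4

B = rng.normal(size=(n_assets, n_factors))   # asset-by-factor exposures
w = np.full(n_assets, 1.0 / n_assets)        # current weights
w_target = rng.dirichlet(np.ones(n_assets))  # a hypothetical target portfolio

b_target = B.T @ w_target                    # target factor exposures

# Equality constraints: match factor exposures, keep trades summing to 0.
A = np.vstack([B.T, np.ones(n_assets)])
c = np.append(b_target - B.T @ w, 0.0)

# Minimum-norm trade vector satisfying A @ dw = c; trading all the way
# to w_target is also feasible, so this move can never be larger in the
# least-squares sense, and is usually much smaller.
dw = A.T @ np.linalg.solve(A @ A.T, c)

print(np.abs(dw).sum())             # turnover from factor matching
print(np.abs(w_target - w).sum())   # turnover from naive asset matching
```

Because only a handful of factor constraints need to be met, the factor-matching trade is typically far smaller than trading all the way to the target weights.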

This suggests another important advantage in rebalancing across risk factors: It reduces the need for large trades. Furthermore, if the risk exposures are aligned between the target and the client portfolio, we might not even need to rebalance, and thus we’ve avoided unnecessary trading costs. Note that the original portfolio has no commodity exposure while the target does, and that is reflected in the rebalancing. (In this example, we are using Fabric’s platform, which employs MSCI’s Multi-Asset Class Factor Model Tier 4.)

Human Plus Machine
Finally, we bring in the human aspect of the rebalancing, including our experience and common sense.

There are three main aims of rebalancing: One is to build the target portfolio. A second is to keep the portfolio within an acceptable range of the target, as in the sketch below. And a third is to make adjustments as the market changes or as views of the market change. Whenever we rebalance, we have to stay sensitive to the client’s needs and to constraints that themselves vary over time.
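
A minimal sketch of that second aim, with an illustrative tolerance band:

```python
def needs_rebalance(weights, target, band=0.05):
    """True if any position strays more than `band` from its target."""
    return any(abs(w - t) > band for w, t in zip(weights, target))

print(needs_rebalance([0.55, 0.30, 0.15], [0.50, 0.30, 0.20]))  # False
print(needs_rebalance([0.60, 0.25, 0.15], [0.50, 0.30, 0.20]))  # True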

Let’s move back to the mathematical world, to optimization tools and computers, but now add human judgment, which creates what can be called, in statistical terms, a Bayesian approach. (Though it’s not essential to cast things in a Bayesian framework, it’s comforting to know that bringing in the human element is fair game from a statistical purist’s standpoint.) Here we create a starting point based on our experience and judgment, which in the Bayesian world is called the prior, and then pass it to the computer to make adjustments, resulting in what is called the posterior. One way to do this is to set the target without an optimization machine. If we keep on top of the portfolio, we’ll stray only marginally from the target. And if we rebalance for factors rather than assets, we can gain an intuitive understanding of material bias as we look at variations from the target.
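
A minimal sketch of the prior-to-posterior idea; the blend weight is an illustrative assumption rather than a full Bayesian treatment:

```python
import numpy as np

prior = np.array([0.40, 0.30, 0.20, 0.10])      # advisor's judgment-based target
optimized = np.array([0.10, 0.05, 0.60, 0.25])  # raw optimizer output

trust = 0.25   # how much weight the noisy optimizer view gets
posterior = (1 - trust) * prior + trust * optimized
print(posterior)   # still sums to 1, since both inputs do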

People in the past have taken an abstract view of portfolio optimization, seeing it as a mathematical operation on a collection of assets. But that’s inadequate when we look at how client portfolios are actually set up. An investment portfolio is effectively a set of sub-portfolios, each with a particular mandate, possibly with underlying accounts that have differing objectives. Clients might hold legacy positions or place constraints on the buying or selling of certain assets. Any portfolio construction or rebalancing exercise should at least be aware of these things.

Our takeaway: Mean-variance optimization is too blunt a tool, one that ignores the essentials of the client’s objectives, one that is difficult to customize to address the multi-dimensional needs of a heterogeneous set of clients. The tools that advisors use need to be flexible to account for all these variations.

Today’s computing power, combined with advances in mathematical techniques, can help advisors move from brute force, machine-driven optimization to what we call guided rebalancing. By that we mean it’s guided by the human sense of the baseline and acceptable variations, and by optimization methods that respect the realities of the market and the needs and objectives of individuals.

Rick Bookstaber is the co-founder of Fabric RQ, a technology firm for risk-aware portfolio design. Dhruv Sharma is the head of portfolio intelligence at Fabric RQ.