Monte Carlo simulation software has captured the attention of both financial advisors and the media in recent years. Software companies are touting its benefits, and the advisor community is locked in a debate as to the mathematical accuracy of the models being used by investors and advisors alike.
As many of you know, Monte Carlo simulation is a mathematical technique that uses probabilities of occurrences to generate a range of possible answers to problems. I have reviewed several versions of Monte Carlo software and have a more practical question, one that very few software packages seem to address and surprisingly few advisors challenge. My question is: "Are your boots in the water?"
Perhaps an analogy will help explain the question. Suppose that, in the later years of the 19th century, an electrician and a mathematician reviewing a new product studied the range of electrical amps used and the frequency of shocks experienced when the product's electrical cord was plugged into a particular socket. The electrician confirmed the wiring was properly installed, but on occasion he would be shocked when touching the plugged-in cord.
The mathematician charted the number of times the electrician touched the wire. He measured the number of shock occurrences for each level of amps used. He determined the distribution of shocks based on amps used and calculated the standard deviation around the mean before triumphantly declaring that his mathematical model determined, with a high degree of certainty, that the probability of being shocked was less than 5%, regardless of the amps used. Hence, the mathematician determined that the probability of success was 95%. Having never been shocked, the mathematician felt that the probability of the shock was low enough to encourage widespread use of the product.
The electrician, however, had been shocked a number of times during the experiments and was interested in finding out more about the environment that existed when he was shocked. One particularly rainy day, the electrician noticed he was getting shocked not 5% of the time, but every time he touched the wire. Looking down, he noticed his boots were standing in a thin film of water accumulating on his garage floor. He quickly pulled over a stepladder, removed his boots and dried off his feet. Cautiously touching the wire, he felt no shock. Relieved, he reviewed his calendar where he had always noted each day's weather and discovered a pattern that linked the weather to the shocks.
The electrician noted that he was shocked only on rainy days. He also noted that the more it rained, the more shocks he would receive and the more severe the shocks became. Scanning his environment, he noticed a steady drip of water accumulating from a small leak in the roof. After repairing the roof, the electrician stopped the source of water, eliminated the shocks and then supported the release of the new product. The electrician discovered that the mathematician included too much of the universe of possibilities by counting every time the wire was touched. When the count was confined only to the relevant data points, rainy days, the electrician discovered that shocks occurred 100% of the time when standing in water.
Studying The Environment
Investors, like the electrician, should be less concerned about the probability of success and more concerned about the consequence of failure. While clients may be dazzled by the graphs, charts and a 95% probability of success strangled from the bowels of the black box, the client should prefer to know, "Under what type of environment does the 5% failure occur?"
Since most simulators randomly generate thousands of "historical" return sequences or thousands of "simulated" return sequences derived from user input, they produce similar probabilities whenever the simulation is run with the same "client variables." Client variables considered in my analysis include a 5% initial draw down with an annual increase in the initial draw down of 3.1%. Draw down is used here to describe the annual percentage of a portfolio used by an investor. A 5% draw-down simulation run in March 2000 (when equity markets were at their peak) would result in similar probabilities of success as a 5% draw-down simulation run in July 2002 (when significant market value had already been shed). The results are similar rather than the same because both simulations consider all of the same market variables or user-input variables, reordered in a new set of 1,000 random sequences. If misunderstood, this distinction can be a dangerous omission of logic for the unwary.
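As a rough illustration of the mechanics (not a reproduction of any particular vendor's engine), the sketch below runs a draw-down simulation using the client variables above, a 5% initial draw down growing 3.1% a year, against randomly generated return sequences. The 8% mean return, 15% standard deviation, 30-year horizon and normal distribution are placeholder assumptions, not figures from this article.

```python
import numpy as np

def monte_carlo_drawdown(mean=0.08, stdev=0.15, years=30, n_paths=1000,
                         initial_draw=0.05, draw_growth=0.031, seed=0):
    """Each year, withdraw the (growing) dollar draw, then apply a randomly
    drawn annual return. Returns ending balances as a fraction of the
    starting portfolio; a zero balance means the money ran out."""
    rng = np.random.default_rng(seed)
    balances = np.ones(n_paths)             # portfolio = 1.0 at the start
    draw = initial_draw                     # first-year draw, % of initial value
    for _ in range(years):
        balances = np.maximum(balances - draw, 0.0)            # take the draw
        balances *= 1.0 + rng.normal(mean, stdev, n_paths)     # random annual return
        draw *= 1.0 + draw_growth                              # 3.1% annual increase
    return balances

ending = monte_carlo_drawdown()
print("probability of running out of money:", np.mean(ending <= 0.0))
```

Run with the same client variables, the distribution of ending balances, and the resulting "probability of success," barely changes no matter what the market looks like on the day you run it, which is precisely the point.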
The Monte Carlo simulations, in fact, dilute the importance of "the current market environment" by including all market environments. The thousand iterations include periods when markets reflect 5, 7, 10 or 14 P/E multiples even though the current market may be priced at 30 times earnings. Another approach, referred to as "experimental simulation," simulates "real world historical periods" that are similar to the current market environment. It resembles the approach of the electrician by first asking the question, "Under what type of market environment is my projection prepared?"
The experimental simulator would more likely compare a portfolio that started draw downs in March 2000 with periods when the markets were priced similarly or when the economic climate was similar. Alternatively, the experimental simulator would simply identify worst-case scenarios from prior periods applied to the client's portfolio and draw-down rate. The author acknowledges that more than P/E ratios or similar economic circumstances should be considered in any experimental simulation. But a process that allows the advisor and client to observe prior economic and investment circumstances under which failures occurred in the portfolio and draw-down rate seems infinitely more useful than quantifying the probability of failure from a wide universe of return sequences.
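A minimal sketch of that experimental approach might look like the following, assuming you have already built annual return and beginning-of-year P/E series from a long historical dataset (Robert Shiller's publicly available series is one such source); the five-point P/E tolerance is an arbitrary choice for illustration.

```python
import numpy as np

def experimental_simulation(annual_returns, annual_pe, current_pe,
                            pe_tolerance=5.0, years=10,
                            initial_draw=0.05, draw_growth=0.031):
    """Replay actual historical return sequences against the client's
    draw-down schedule, but only sequences that began when the market's
    P/E was within pe_tolerance of today's P/E."""
    outcomes = []
    for start in range(len(annual_returns) - years):
        if abs(annual_pe[start] - current_pe) > pe_tolerance:
            continue                                  # environment not comparable: skip
        balance, draw = 1.0, initial_draw
        for r in annual_returns[start:start + years]:
            balance = max(balance - draw, 0.0) * (1.0 + r)
            draw *= 1.0 + draw_growth
        outcomes.append(balance)                      # ending balance for this start date
    return np.array(outcomes)
```

The worst entries in `outcomes` are the worst-case scenarios described above: actual historical sequences that began in an environment resembling the client's.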
Nevertheless, the important issues to communicate to clients are the impact of the sequence of returns on their portfolios, and the fact that the sequence is outside of our control. While Monte Carlo does a good job of showing the wide variance of possible results and the probability of success over thousands of "different market environments," it does not address the consequences based on the "relevant market environment" that exists over an investor's lifetime. The 70-year-old widow wants to know how her portfolio could be affected, given the current market environment, during her one life over the next 10 or 20 years. Not only can experimental simulation provide that client with graphic examples based on similar periods, but the advisor also can quantify how any scenario can be improved by positive actions of the client.
Current Valuations Do Affect Future Performance
As a practitioner of experimental simulation, I ask the same question as the electrician: "Under what environments do the 5% failures occur?" I have yet to run thousands of simulations against real-world portfolios, but I have measured all trailing 10-year market returns for each month since 1871 and compared those returns with the beginning-of-period P/E ratios. Not surprisingly, the results suggest that linking simulations to current market conditions is imperative if reasonable conclusions are to be drawn.
Ten-year returns based on monthly returns since 1871 were compiled, resulting in 1,426 months of measurable data, with P/E ratios ranging from five to 30 times earnings. Only seven 10-year periods had beginning-of-period P/E ratios of more than 25 times earnings. The P/E ratios for each month were stratified within the ranges described in Exhibit 1, and their subsequent 10-year period returns were calculated. The results suggest that while P/E ratios are not always the best indicator of subsequent-period performance, there is a significant correlation between high P/E ratios and lower subsequent-period performance.
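For readers who want to reproduce this kind of stratification, the sketch below shows one way it could be computed, assuming monthly total-return and P/E series covering the period since 1871; the P/E bin edges are illustrative and are not necessarily the ranges used in Exhibit 1.

```python
import numpy as np

def ten_year_returns_by_pe(monthly_returns, monthly_pe,
                           pe_bins=(0, 10, 15, 20, 25, 100)):
    """For each starting month, compute the subsequent 10-year annualized
    return and group it by the P/E ratio at the beginning of the period."""
    horizon = 120                                     # 120 months = 10 years
    growth = 1.0 + np.asarray(monthly_returns)
    pe = np.asarray(monthly_pe)
    buckets = {}
    for i in range(len(growth) - horizon):
        annualized = np.prod(growth[i:i + horizon]) ** (12 / horizon) - 1.0
        key = int(np.digitize(pe[i], pe_bins))        # which P/E range this month falls in
        buckets.setdefault(key, []).append(annualized)
    return {k: (np.mean(v), len(v)) for k, v in buckets.items()}
```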
The S&P 500 has had a P/E ratio above 20 in every month since 1996. As a result, new data on subsequent 10-year performance will become available in larger quantities beginning in 2006. The 2000-2001 market cycle may have a large impact on many of those 10-year periods.
To test the impact of hypothetically lower return sequences in the initial 10-year period, I changed the market assumptions used in my 30-year simulation (Exhibit 2, Scenario 1) and ran a second Monte Carlo simulation split into two distinct periods: a 10-year simulation followed by a 20-year simulation on the remaining portfolio balance. I used the lower market returns shown in Scenario 2 for the initial 10-year simulation period.
The results of the initial 10-year simulation from Scenario 1 were compared with the lower returns, lower standard deviations and higher correlations of Scenario 2. Ten-year results after a 5% initial draw down with an annual increase of 3.1% produced the following probabilities for the two market scenarios, first using an 80% stock allocation and then a 60% stock allocation.
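A simplified sketch of that two-stage experiment is shown below. It ignores the correlation changes mentioned above and treats the portfolio as a single asset; the Scenario 1 and Scenario 2 return and volatility figures are placeholders, since Exhibit 2 is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_two_stage(n_paths=1000, initial_draw=0.05, draw_growth=0.031,
                       scenario2=(0.05, 0.12, 10),   # (mean, stdev, years): placeholder lower-return regime
                       scenario1=(0.08, 0.15, 20)):  # placeholder baseline regime for the final 20 years
    """Run the first 10 years under the lower-return regime, then finish the
    remaining 20 years on whatever balance is left. Returns the balances
    after year 10 and after year 30."""
    balances = np.ones(n_paths)
    draw = initial_draw
    snapshots = []
    for mean, stdev, years in (scenario2, scenario1):
        for _ in range(years):
            balances = np.maximum(balances - draw, 0.0)
            balances *= 1.0 + rng.normal(mean, stdev, n_paths)
            draw *= 1.0 + draw_growth
        snapshots.append(balances.copy())
    return snapshots

after_10, after_30 = simulate_two_stage()
print("P(balance below 80% of start after 10 years):", np.mean(after_10 < 0.8))
print("P(money runs out within 30 years):", np.mean(after_30 <= 0.0))
```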
Not surprisingly, lower returns in the initial 10-year period resulted in a much higher probability that the portfolio will be worth less after 10 years than it was at the starting point. The likelihood that a portfolio would be worth less than 80% of its initial value after 10 years rose from 10% or 20% to more than 35% when the initial return sequence was reduced to the Exhibit 2, Scenario 2 levels.
Advocates of Monte Carlo rightfully argue that no valid statistical inference could be drawn if the March 2000 simulation used return sequences that were limited to the few times the markets were priced at more than 30 times earnings. They further rightfully argue that simulations, to be credible, must have statistical validity. But to include highly unlikely return sequences in a simulation simply to achieve statistical validity is really worse than a coin flip. At least with a coin flip you can only be wrong half the time. As can be seen above, the return sequence in the initial 10-year period can make a large difference in short-term account values.
The Impact Of An Increasing Draw-Down Rate
Perhaps the best way to test the validity of Monte Carlo results is to prepare a 30-year simulation using an initial draw-down rate increased each year as in my example (5% initial draw down with a 3.1% annual increase). That simulation can be compared with a 20-year simulation with an increased initial draw-down rate. A 5% initial draw down increased by 3.1% each year for 10 years will become a 6.785% draw down 10 years later (5% compounded at 3.1% for 10 years) if the portfolio remained at the same value at the end of the 10-year period as it was at the beginning. As shown in Exhibit 3, the likelihood of that situation is quite high. If the portfolio value had declined to 80% of the initial value by the end of the initial 10-year period, the new draw-down rate would be about 8.48% (6.785% divided by 80%). If the portfolio stood at only 60% of the initial value after 10 years, the new draw-down rate would be about 11.31% (6.785% divided by 60%).
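The arithmetic behind those figures is easy to verify; a few lines make the escalation explicit (the three portfolio ratios are the ones discussed above):

```python
initial_draw, growth = 0.05, 0.031
draw_after_10 = initial_draw * (1 + growth) ** 10     # 6.785% of the original portfolio value
for portfolio_ratio in (1.0, 0.8, 0.6):               # year-10 value as a share of the start
    print(f"portfolio at {portfolio_ratio:.0%}: "
          f"new draw-down rate {draw_after_10 / portfolio_ratio:.2%}")
# -> 6.79%, 8.48%, 11.31%
```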
By applying the new draw-down rates to the remaining 20 years of our original 30-year simulation, I found much higher failure rates than the original simulation would suggest, as shown in Exhibits 4 and 5.
Solving The Simulation Dilemma
Most advisors agree that effective communication is key to a successful financial planning relationship. We cannot, for example, suggest that if a client lived 1,000 lives, her chance of running out of money would only be 5%. Nor is it particularly useful to suggest you can draw valid conclusions when running thousands of historical return sequences jumbled in as many different patterns against a client's portfolio and draw-down rate. I am trying to raise a more fundamental question, one that goes beyond how we communicate our findings and focuses more on how we rely on our tools, perhaps to the detriment of our good judgment.
Long-term investing is also a lesson in "reversion to the mean," whether or not an advisor attempts to sidestep the impact of the process by tactical portfolio changes. By relying on random number generation, Monte Carlo fails to consider the impact of this powerful investment gravity on the sequence of returns. Worse yet, if the software assumes equal probability of all random sequences, then it ignores the question, "Are your boots in the water?"
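To make the point concrete, here is a rough sketch, not drawn from the article, comparing the dispersion of 10-year outcomes when annual returns are sampled independently versus under a crude mean-reverting process; the 8% mean, 15% standard deviation and -0.3 reversion coefficient are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def ten_year_growth_iid(mean=0.08, stdev=0.15, n_paths=10000):
    """Independent draws: every ordering of returns is treated as equally likely."""
    r = rng.normal(mean, stdev, (n_paths, 10))
    return np.prod(1.0 + r, axis=1)

def ten_year_growth_mean_reverting(mean=0.08, stdev=0.15, phi=-0.3, n_paths=10000):
    """AR(1) with a negative coefficient: an above-average year makes a
    below-average year somewhat more likely, a crude stand-in for reversion
    to the mean."""
    growth = np.ones(n_paths)
    prev_dev = np.zeros(n_paths)
    for _ in range(10):
        dev = phi * prev_dev + rng.normal(0.0, stdev, n_paths)
        growth *= 1.0 + mean + dev
        prev_dev = dev
    return growth

print("10-year growth dispersion, independent draws:", round(ten_year_growth_iid().std(), 2))
print("10-year growth dispersion, mean-reverting:   ", round(ten_year_growth_mean_reverting().std(), 2))
```

An engine that draws returns independently spreads long-horizon outcomes far wider than a mean-reverting one would, which is one reason its failure probabilities can mislead.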
Understanding the limitations of Monte Carlo simulation allows the advisor to ask more relevant questions and ultimately expand his or her own understanding of the relationship between draw-down and rate-of-return sequences. It is the impact that this relationship can have on our clients' financial and emotional health that we need to understand more clearly. It is our understanding of this relationship that helps us develop strategies to address this impact on our clients' portfolios and their psyche. Monte Carlo clouds a deeper understanding of that relationship by boiling the impact of return sequences down to probabilities. Both advisors and clients will be better served by advisors willing to refine their understanding of these relationships by further experimental simulations of specific time periods, portfolio mixes and draw-down rates and then applying what-if analysis to the time periods in which failures occurred. In the meantime, advisors should use Monte Carlo to educate clients on the wide range of results that can occur and stay away from suggesting success probabilities.
So, Are Your Boots In The Water?
In my analogy, boots represent the "client variables" of draw down, equity allocations and the like, weather represents "market variables" and rainwater represents the "current market environment." When considering each time the wire was touched, the mathematician derived a 5% probability of shock or a 95% probability of success. When touches were counted only when rainwater covered the boots, the electrician derived a 100% probability of shock. If you begin draw down in a market priced at 30 times earnings, your boots may be in the water. Just as the 5% electrical shocks occurred 100% of the time when the environment included rain, so too could the 5% investor shock occur with a much higher probability than Monte Carlo simulations suggest.
Advisors would be wise to add one more question to their software due-diligence checklist, which typically includes questions on normal or log-normal distributions; cross, serial and cross-serial correlation; standard deviations; and arithmetic or geometric average returns. That question is, of course, "Does your software consider if your boots are in the water?"
James A. Shambo, CPA/PFS, is president of Lifetime Planning Concepts in Colorado Springs, Colo.