Few mainstream software models employed by financial advisors provided any clues in early 2008 about the devastation that awaited the financial markets. So it's not surprising that over the last several months, a great deal of criticism has been leveled at Monte Carlo simulations and their use as a financial planning tool. An article in the May 2, 2009 issue of The Wall Street Journal entitled "Odds on Imperfection: Monte Carlo Simulation" is fairly representative of the criticisms that have been leveled against MCS recently. It calls into question Monte Carlo's ability to quantify various risks, such as the risk of a portfolio being depleted within a certain period of time.

If the critics are right, and if Monte Carlo simulations are so imperfect as to be of little use, the implications for both consumers and professional financial planning software are extremely serious, since many of the most widely used financial planning tools include an MCS component. The Journal was quick to highlight this fact in the article when it stated:

"If one had asked a financial adviser 18 months ago for retirement-planning guidance, there is a good chance he would have run a 'Monte Carlo' simulation. This calculation method, as it is commonly used in financial planning, estimates the odds of reaching retirement financial goals."

So are the critics right? Is the Monte Carlo simulation faulty, or is it still a valid tool that advisors can use with confidence? The answer requires some explanation, but there's no need to get very technical. The intricacies of better models can be challenging, but the basics of MCS can be understood without an advanced degree in rocket science. Once we've addressed the validity of MCS as a planning tool, we'll look briefly at the question of what else can be done to improve on the advice we give to clients and then discuss the implications for the planning profession.

What Is A Monte Carlo Simulation?
According to Wikipedia: "Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used when simulating physical and mathematical systems. Because of their reliance on repeated computation and random or pseudo-random numbers, Monte Carlo methods are most suited to calculation by a computer. Monte Carlo methods tend to be used when it is unfeasible or impossible to compute an exact result with a deterministic algorithm."

The conceptual framework surrounding Monte Carlo simulations has been around for a long time. The term "Monte Carlo method" was coined in the 1940s by physicists working on nuclear weapon projects in the Los Alamos National Laboratory. Wikipedia states: "The name 'Monte Carlo' was popularized by physics researchers Stanislaw Ulam, Enrico Fermi, John von Neumann and Nicholas Metropolis, among others; the name is a reference to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money to gamble. The use of randomness and the repetitive nature of the process are analogous to the activities conducted at a casino."

In order to evaluate the value of Monte Carlo simulations as they are being used today for personal financial planning, there are at least three factors one must consider: the quality of the model being employed, the quality of the inputs and the way the results are conveyed to clients.

The Quality Of The Models
"To say that all Monte Carlo models are good or bad is not just inaccurate; it's crazy," says Thomas Idzorek, CFA, the chief investment officer and director of research and product development at Ibbotson Associates (a Morningstar company). "Not all Monte Carlo is equal."

Some Monte Carlo engines built for personal financial planning applications are quite robust and sophisticated; others are much less so. For example, some engines are tax aware, while others ignore tax consequences altogether. Since most planners lack the in-house expertise to evaluate the underlying mathematical model, they may have to rely on a third party to vouch for its validity (Morningstar/Ibbotson is one of a number of firms with extensive expertise in this area). At the very least, users should understand the factors the model attempts to account for when running simulations, as well as the general theoretical framework underlying the model. Any reputable company offering a Monte Carlo engine should be able to supply the advisor with such documentation.

The Quality Of The Inputs And Assumptions
Much of the criticism leveled at Monte Carlo simulations lately seems to target the inputs and assumptions built into many of the Monte Carlo tools used for personal financial planning. Much has been written about normal versus lognormal distributions, fat tails and other terms that can intimidate the uninitiated. It all boils down to whether a Monte Carlo model accurately depicts the likelihood and frequency of one really bad year (or two bad years in a row) showing up in a series of Monte Carlo iterations. Let's look at a few problems that can occur with inputs and assumptions. This is by no means a comprehensive list, but rather a sample to illustrate the point.

In a very simple model, let's imagine you want to test the likelihood of a portfolio lasting 30 years. The model needs to make some assumptions about the mean rate of return and the distribution of returns. It then picks 30 years' worth of returns; that set of returns amounts to one trial. Either there will be money remaining in the portfolio at the end of the 30-year simulation or there won't be. If money remains, the trial was a "success"; if the money ran out before the 30 years were up, it was a failure. If we run 1,000 trials and 900 of them are successful, we have a 90% success rate, or conversely, a 10% failure rate.
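The trial-and-count procedure described above can be sketched in a few lines of Python. The starting balance, withdrawal amount, return mean and standard deviation below are all illustrative assumptions, not figures from any real model, and the normal-distribution draw is exactly the simplification the article goes on to criticize:

```python
import random

def run_trials(n_trials=1000, years=30, start_balance=1_000_000,
               annual_withdrawal=50_000, mean_return=0.10, std_dev=0.18,
               seed=42):
    """Estimate a plan's success rate with a very simple Monte Carlo model.

    Each trial draws one return per year from a normal distribution (the
    assumption some simple engines make) and counts the trial as a success
    if the balance never runs out before the horizon is reached.
    """
    rng = random.Random(seed)  # fixed seed so results are repeatable
    successes = 0
    for _ in range(n_trials):
        balance = start_balance
        for _ in range(years):
            balance *= 1 + rng.gauss(mean_return, std_dev)  # one year's return
            balance -= annual_withdrawal                     # annual spending
            if balance <= 0:
                break  # the money ran out: this trial is a failure
        else:
            successes += 1  # money remained after all 30 years
    return successes / n_trials
```

Dividing successes by trials gives the success rate the article describes; rerunning with a higher standard deviation (say 0.25) or a larger withdrawal shows how sensitive that rate is to the inputs.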

Now if you assume a 10% average rate of return, a standard deviation of 18% and a normal bell-shaped distribution (as some models do), the failure rate is going to be low, because most of the returns the model picks will cluster around the mean; very few extreme values on the high side or the low side will appear. So one criticism leveled against many simple models is that, because they use a normal distribution, too few "bad" scenarios are generated.

The success rates generated are too high. In fact, Paul Kaplan of Morningstar recently studied the predictive power of a "standard" model using historical S&P 500 data. He found that "extreme events" (defined as a monthly return that falls three standard deviations below average) occur five to ten times more frequently than the standard model predicts. Kaplan concluded that a log-stable model does a much better job of representing the historical returns of the S&P 500 than a normally distributed model does.
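The arithmetic behind that "three standard deviations" benchmark is easy to check. Under a normal model, a monthly return at least three standard deviations below its mean should occur only about 0.135% of the time, i.e. roughly one month in 740, or about 1.6 months per century; a one-liner with Python's standard library confirms it:

```python
from statistics import NormalDist

# Probability, under a normal model, that a monthly return falls at
# least three standard deviations below its mean:
p = NormalDist().cdf(-3)      # about 0.00135, roughly 1 month in 740
per_century = p * 1200        # about 1.6 such months per 100 years
```

If markets actually produce such months five to ten times more often, as Kaplan found, the normal assumption is materially understating tail risk.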

If you want to keep the model very simple and unsophisticated, Idzorek pointed out that another way to introduce more "bad" scenarios into the mix is to raise the standard deviation in the normal model to 25 or higher. You'll get some extremely good results, but you'll get some bad ones too.
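The effect of widening the distribution is easy to quantify. Using the article's 10% mean, here is the chance of an annual return worse than -30% (a threshold chosen purely for illustration) at a standard deviation of 18% versus 25%:

```python
from statistics import NormalDist

# Chance of an annual return worse than -30% under two normal models,
# both with the article's 10% mean; the -30% threshold is illustrative.
p_18 = NormalDist(mu=0.10, sigma=0.18).cdf(-0.30)  # roughly 1.3%
p_25 = NormalDist(mu=0.10, sigma=0.25).cdf(-0.30)  # roughly 5.5%
```

Raising the standard deviation from 18 to 25 makes a -30% year about four times more likely, which is exactly how the wider bell curve feeds more "bad" scenarios into the trials.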

Clearly, as you introduce additional factors such as tax sensitivity into the models, the math can get complicated. But whether the model is sophisticated or not, it's fair to criticize some Monte Carlo models when their assumptions and inputs are poor.

Morningstar/Ibbotson and other firms that produce this type of software are reviewing and updating tools to provide what hopefully will be better assumptions going forward.

Some could also argue, however, that critics of current models place too much emphasis on the recent past. Bob Veres, co-author of the research paper "Making Retirement Income Last a Lifetime," which employed Monte Carlo analysis, told me that one of the criticisms leveled against his paper was that its capital market assumptions were too pessimistic. While everyone, including Veres, acknowledges that there is room for improvement in the inputs and assumptions, we should also be wary of the tendency to overweight recent results.

Of course, it is possible to use other tools besides Monte Carlo to help clients fully understand the implications of a severe event like the one we've recently experienced. Many popular financial planning software programs, including EISI's NaviPlan and MoneyGuidePro to name just two, allow users to generate "worst-case scenarios" or "stress-test" plans. The idea is to create a case showing how a client might fare if, for example, he were to retire just as the markets tanked for two years. The goal is not to predict such an outcome, but to inform the client that it is a possibility. This not only helps prepare the client mentally for such an eventuality, but also fosters a discussion about what the alternatives would be if such an outcome were to present itself.

Conveying Results To Clients
Most of the fault that financial professionals have found with Monte Carlo has not been with the technique itself but with the way it is being used (or misused). Dan Moisand, CFP, a former president of the FPA who practices in Melbourne, Fla., put it like this:
"MCS hasn't failed anybody, but apparently too many people counted on it to do something it can't do. Maybe it is because I live on the Space Coast and serve so many real-life rocket scientists, but my view is that, used properly, MCS is great for framing the retirement puzzle-but otherwise certainly isn't a crystal ball. In most cases, the problem isn't with the software or the technique.

"Some are better than others, certainly, but better software won't help if it isn't used correctly and its limitations understood. When you are launching something, anything, into space, the world's most sophisticated simulations are used, but all risk still cannot be modeled or eliminated. Good planning recognizes this and provides good framing for decision-making so we can adjust to circumstances in sensible ways. To make it even more challenging for planners, most people don't understand probabilities very well anyway."

Joe Taylor, who practices in Myrtle Beach, S.C., says:
"I believe Monte Carlo analysis is a helpful tool in financial planning, but too much credence is given to its output. Advisors must always remember the future is truly unknowable. I believe financial planning is a bit like navigating the open seas with a compass and a sextant; you have to take measurements and adjust your course often because the wind and tides will inevitably cause you to stray. Too many times, Monte Carlo analysis is thought of as navigating with a GPS system-you put in point 'A' and point 'B,' then just keep moving."

Bedda D'Angelo offers an important reminder that producing a plan is not enough: "The key here is monitoring. Many planners don't monitor once they have set the plan up. If you believe your first and only analysis is true, then you did not complete Step 6 of the financial planning process. ['Monitor the plan.'] In that case, clients may have been let down, but it was not by our tools. It was by our failure to monitor and misapplication of our tools."

Lessons Learned And Implications For The Future
The consensus among advisors, software developers and other experts is that there is nothing fundamentally wrong with Monte Carlo models. "They are not perfect," says Dr. Linda Strachan, "but they are far better than the straight-line and deterministic models that have been used in the past." Strachan believes that Monte Carlo simulations, used in conjunction with other planning tools and techniques, offer the best planning outcomes.

Most experts and advisors seem to believe that the fault lies with the way the results of Monte Carlo simulations are being presented. Strachan says you must make a clear distinction between the model and the plan. If you run a model and it succeeds 100% of the time, that does not necessarily mean the plan is guaranteed to succeed in the future. Advisors must be sure that clients understand the distinction.

As for the criticisms of the assumptions and inputs, there is a corollary to the discussion that critics usually fail to mention. If you argue that most current Monte Carlo models under-represent the probability of extremely bad outcomes, it follows that we need to understand the implications if we adjust the models so that bad outcomes are drawn more often.

It should be pretty obvious that if we have more failed iterations, the odds of success for the typical plan will decline. Advisors will then be faced with the task of delivering the bad news to clients and offering solutions. Those solutions will include saving more, spending less, working longer or some combination of the three. Another conclusion one might reach is that the current portfolio is more "risky" than previously thought. If that is the case, perhaps the client will want to change the investment mix to one with less volatility. If he does, odds are the portfolio will have a lower expected return. Again, he will need to save more, spend less, etc.

After a historic market decline, some advisors may find it difficult to send this message now, particularly if they voiced a high degree of confidence in the model they used two years ago.

Monte Carlo simulations used in a vacuum are not a total financial planning solution. They can add value, of course, when advisors use them to educate clients about the volatility of returns and the probability of disasters. But they need to be used alongside other tools to assess various situations, including worst-case scenarios. Furthermore, financial modeling is still evolving. Some models are better than others, but there is no consensus about what represents the best model, so advisors must make their own value judgments.

Even when advisors use a good model with good inputs, they must take great care when helping their clients interpret the output. It is imperative clients understand that no model can accurately predict the unknowable future. Finally, we must never lose sight of the fact that financial planning is an ongoing process. Running a model and making a set of recommendations is not the end of the process. Periodic reviews and revisions are the only way to keep clients on course.