In a very simple model, let's imagine you test the likelihood of a portfolio lasting 30 years. The model needs to make some assumptions about the mean rate of return and the distribution of the returns. The model will then pick 30 years' worth of returns. That set of returns amounts to one trial. Either there will be money remaining in the portfolio after the 30-year simulation or there won't be. If there is money remaining, the trial was a "success." If the money ran out before the 30 years were up, it was a failure. If we run 1,000 trials and 900 of them are successful, we have a 90% success rate, or conversely, a 10% failure rate.
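To make the mechanics concrete, here is a minimal sketch of that kind of trial loop in Python. The starting balance, annual withdrawal, trial count, and random seed are illustrative assumptions, not figures from the article; the return inputs anticipate the 10% mean and 18% standard deviation discussed below.

```python
import numpy as np

def run_trials(start_balance=1_000_000, annual_withdrawal=50_000,
               mean_return=0.10, std_dev=0.18, years=30,
               n_trials=1_000, seed=0):
    """Count the share of simulated 30-year retirements that end with money left."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_trials):
        balance = start_balance
        # One trial = one randomly drawn sequence of 30 annual returns.
        for annual_return in rng.normal(mean_return, std_dev, years):
            balance = balance * (1 + annual_return) - annual_withdrawal
            if balance <= 0:      # money ran out before 30 years: failure
                break
        else:
            successes += 1        # balance survived all 30 years: success
    return successes / n_trials

print(f"Success rate over 1,000 trials: {run_trials():.0%}")
```

If 900 of the 1,000 simulated paths keep a positive balance, the function reports a 90% success rate, exactly as described above.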

Now if you assume a 10% average rate of return, a standard deviation of 18%, and a normal bell-shaped distribution (as some models do), the failure rate is going to be low, because most of the numbers the model picks will be clustered around the mean. Very few extreme numbers, on either the high side or the low side, will appear. So one criticism leveled against many simple models is that they use a normal distribution; hence, too few "bad" scenarios are generated.

As a result, the success rates generated are too high. In fact, Paul Kaplan of Morningstar recently studied the predictive power of a "standard" model using historic S&P 500 data. He found that "extreme events" (defined as a monthly return that falls three standard deviations below average) occur five to ten times more frequently than the standard model predicts. Kaplan concluded that a log-stable model does a much better job of representing the historical returns of the S&P 500 than does a normally distributed model.
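To put that finding in perspective, a short calculation shows how rarely a purely normal model expects a month three standard deviations below average; Kaplan's point is that the historical record produces such months five to ten times more often than this baseline.

```python
from math import erfc, sqrt

# Probability that a normally distributed monthly return falls at least
# three standard deviations below its mean.
p_extreme = 0.5 * erfc(3 / sqrt(2))   # ~0.00135

print(f"P(return < mean - 3*sigma) = {p_extreme:.5f}")
print(f"Roughly one such month every {1 / p_extreme / 12:.0f} years under normality")
```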

If you want to keep the model very simple and unsophisticated, Idzorek pointed out that another way to introduce more "bad" scenarios into the mix is to raise the standard deviation on the normal model to 25% or higher. You'll get some extremely good results, but you'll get some bad ones too.
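Repeating the illustrative simulator from earlier (restated here so the snippet runs on its own) with only the standard deviation changed shows the effect; the portfolio size and withdrawal amount remain hypothetical assumptions.

```python
import numpy as np

def failure_rate(std_dev, mean_return=0.10, start=1_000_000, spend=50_000,
                 years=30, n_trials=10_000, seed=0):
    """Share of trials in which the portfolio is exhausted before 30 years."""
    rng = np.random.default_rng(seed)
    paths = rng.normal(mean_return, std_dev, (n_trials, years))
    failed = 0
    for path in paths:
        balance = start
        for r in path:
            balance = balance * (1 + r) - spend
            if balance <= 0:
                failed += 1
                break
    return failed / n_trials

# Widening the assumed return distribution produces more "bad" scenarios.
print(f"std dev 18%: {failure_rate(0.18):.1%} of trials fail")
print(f"std dev 25%: {failure_rate(0.25):.1%} of trials fail")
```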

Clearly, as you introduce additional factors such as tax sensitivity into the models, the math can get complicated. But whether the model is sophisticated or not, it's fair to criticize some Monte Carlo models when their assumptions and inputs are poor.

Morningstar/Ibbotson and other firms that produce this type of software are reviewing and updating their tools to provide what will, one hopes, be better assumptions going forward.

Some could also argue, however, that critics of current models are placing too much emphasis on the recent past. Bob Veres, co-author of the research paper "Making Retirement Income Last a Lifetime," which employed Monte Carlo analysis, told me that one of the criticisms leveled against his paper was that the capital market assumptions were too pessimistic. While everyone, including Veres, acknowledges that there is room for improvement in the inputs and assumptions, we should also remain aware of the tendency to overweight recent results.

Of course, it is possible to use other tools besides Monte Carlo to help clients fully understand the implications of a severe event like the one we've recently experienced. Many popular financial planning software programs, including EISI's NaviPlan and MoneyGuidePro to name just two, allow users to generate "worst-case scenarios" or "stress-test" plans. The idea is to create a case that shows how a client might fare, for example, if he were to retire just as the markets tanked for two years. The point is not to predict such an outcome, but rather to inform the client that such an outcome is a possibility. This not only helps prepare the client mentally for such an eventuality, but also opens a discussion about what the alternatives would be if that outcome were to present itself.
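A stress test of that kind doesn't require a vendor's software to illustrate. The following sketch (with hypothetical figures, not any particular program's method) compares a steady-return projection with one in which the markets fall 25% in each of the first two years of retirement.

```python
def project(returns, start=1_000_000, spend=50_000):
    """Project a retirement portfolio through a fixed sequence of annual returns."""
    balance = start
    for year, r in enumerate(returns, 1):
        balance = balance * (1 + r) - spend
        if balance <= 0:
            return f"money runs out in year {year}"
    return f"balance after {len(returns)} years: ${balance:,.0f}"

baseline   = [0.07] * 30                    # steady 7% returns every year
early_bear = [-0.25, -0.25] + [0.07] * 28   # markets tank for the first two years

print("Baseline:  ", project(baseline))
print("Early bear:", project(early_bear))
```

With these assumed figures, the steady path finishes the 30 years with money to spare, while the early-bear path runs dry around year 19; that contrast is precisely what gets a client talking about alternatives.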

Conveying Results To Clients
Most of the fault that financial professionals have found with Monte Carlo has not been with the technique itself but with the way it is being used (or misused). Dan Moisand, CFP, a former president of the FPA who practices in Melbourne, Fla., put it like this:
"MCS hasn't failed anybody, but apparently too many people counted on it to do something it can't do. Maybe it is because I live on the Space Coast and serve so many real-life rocket scientists, but my view is that, used properly, MCS is great for framing the retirement puzzle-but otherwise certainly isn't a crystal ball. In most cases, the problem isn't with the software or the technique.

"Some are better than others, certainly, but better software won't help if it isn't used correctly and its limitations understood. When you are launching something - anything - into space, the world's most sophisticated simulations are used, but all risk still cannot be modeled or eliminated. Good planning recognizes this and provides good framing for decision-making so we can adjust to circumstances in sensible ways. To make it even more challenging for planners, most people don't understand probabilities very well anyway."