My interest was piqued when I heard that Forrester Research, an independent research firm, was releasing an updated report on financial planning software. I became even more interested when people began contacting me to share their interpretation of the results.

Journalists, including yours truly, are fond of quoting research from firms such as Forrester because it lends an additional air of credibility to a point we are trying to make. We rarely, however, dig below the surface to ascertain whether the "research" we are quoting is valid, and I suspect the same can be said for other casual observers. Since I like to believe that I know a little bit about financial planning software, I decided to delve a little deeper to see if the summary information Forrester provides rings true. I was also curious to discover whether all of us could learn some lessons from the Forrester approach to software evaluation.

When I began my investigation, I was, if anything, a bit skeptical of Forrester's research in this area. I remembered reading their first research report on financial planning software back in 2002, and some of the findings struck me as strange. The clear favorite of the Forrester analyst at the time was a program called netDecide. In fact, the March 2002 report states: "NetDecide will emerge as a leader. No other vendor comes close to serving large firms, with their varied users and needs, as well as netDecide." In a September 2002 brief, Forrester preserved the software's ranking: "NetDecide retains its lead. The app's deep planning functionality and clean interface kept it atop the rankings, but its hold among comprehensive planning apps is tenuous." Those statements did not ring true to me at the time, and they certainly do not ring true today.

In all fairness, Forrester was much better when discussing macro trends. They correctly predicted a move to greater integration among apps. They also predicted that Web-based applications would eventually overtake desktop applications, a statement that was met with skepticism at the time, although it is widely accepted today.

The next Forrester update of financial planning software came in 2005. By that time, netDecide, the darling of the 2002 reports, had been sold to Informa Investment Solutions and renamed AdvisorDecide. Of the programs under review, AdvisorDecide was ranked dead last. The new top-ranked program was AdviceAmerica, an application that few, if any, independent financial planning firms were using at the time.

The 2007 report ranks eMoney and AdviceAmerica in almost a dead heat for the top spot, with SunGard, EISI NaviPlan and PIE Technologies (MoneyGuidePro) close behind. Of all the Forrester reports I've read on the subject, I'd say the 2007 report is the best yet, but I can't say I'm in 100% agreement with the results, so I decided to try to figure out why my perceptions differed from Forrester's.

Before delving into the details, there are a few things you should understand. One is that the Forrester information is available in various formats. You can go to the firm's Web site and order a summary report, but if you are a Forrester client you can use an interactive tool to apply your own weightings to the various criteria and arrive at your own custom ranking. I believe the custom rankings are a viable tool, provided the person using them knows what he or she is doing. However, as is often the case with financial planning software, a sophisticated tool in the wrong hands will produce poor results.

My larger concern, however, is that most people who hear about the Forrester reports never even read the summary. They either look at the single graph that shows the position of each firm next to its peers, or they hear from a vendor or a third party that so-and-so was ranked highly by Forrester. Very few of the people who hear about the rankings ever look at a detailed printout of the actual scoring system.

As I began looking over that data, it didn't take long to figure out why there were inconsistencies in the rankings over time, changes that could not be explained away simply by alterations to the programs themselves. Two primary factors appear to influence the rankings. One is that the methodology and criteria are altered over time. The other is that there has been a different lead analyst for each major report release.

Forrester's current report ranks the software products on 124 criteria. According to firm analyst Alyson Clarke, the criteria and the way they are applied are objective. I don't question Ms. Clarke's sincerity, but I take a slightly different view. The weightings in the published reports certainly are subjective, and clearly there is no one set of weightings that can be applied equally to all firms. For example, in the print report only 50% of a software product's rating deals specifically with the product itself. The other 50% is a "strategy" score, which is designed to give readers an idea of how Forrester thinks that vendor's product will fare in the future.

Generally, I found the rankings dealing with the current offering to be more objective, while the strategy rankings struck me as more subjective. Forrester's rankings on things like product direction, executive vision and product commitment are just opinions. Only time will tell how good those opinions are. I also question the financial scoring system, which weights gross revenues almost three times as heavily as profitability.

Few would argue that a product's future prospects are important. The question is: How knowable are they, and what weighting should you assign them? I'd argue that surprises in this area are not that rare, so I'd give them less weight. I'll grant you that the weighting should vary based on who the buyer is. For a very large enterprise, changing financial planning solutions will be time-consuming and costly, so the number should be somewhat higher. For a smaller firm, it is much easier and less costly to change course, so the weighting should be less. In no case, however, would I go as high as 50%.

When evaluating a firm's current product offering, the rating criteria are more objective, but the weightings are still subjective. Forrester assigns equal weight to the ability to import from, and export to, a firm's client database, but in my visits to financial planning firms I've found that the ability to import is more important than the ability to export.

The scoring system rewards platforms that can monitor how long advisors and their clients stay on the system. I'm not sure how that is measured or why it is relevant. If you measure how long a user stays on the system simply by counting the minutes they are online (with a Web-based application), what does that really tell you? You don't even know whether they are sitting at the computer. Even if they are, what does it mean? One person may perform a task in 15 minutes that another needs 30 to complete, so the raw numbers don't tell you anything.

Ease of use, an important factor in any purchasing decision, is subjective, in spite of what Forrester might have you believe. If a program's navigation bar has many links, I'd prefer to have a collapsible bar, whereby you can choose to expose or hide the sub-menus, as long as there is some visual cue for the user to recognize that the sub-menus exist. In the Forrester methodology, points are subtracted for not exposing sub-menus.

Forrester views the ability to grant a client access as an important feature; so do I, but I approach it differently than they do. They grant extra points for those programs that allow clients to enter their own assumptions and save their own "private" scenarios. I would not want my clients to have that sort of functionality available to them.

The product support rankings also strike me as skewed. The ratings reward quantity, not quality. Vendors are scored on their gross number of service reps, as opposed to a ratio of reps to users. Extra points are awarded for multilingual reps, again in gross numbers. So if one firm had reps who could speak English and Spanish, while another's spoke English and Swahili, presumably they would be awarded identical scores.

After I reviewed all 124 criteria in some detail, it became apparent to me that the Forrester rankings were designed to serve a specific demographic: the very large financial services enterprise. The perfect application in Forrester's view, at least according to its scoring methodology, is one capable of providing one-stop shopping to the largest banks, brokerage firms and insurance companies. There is nothing wrong with this, and it makes sense for Forrester to take this approach, since large firms are its primary clients; I would simply point out that not all readers will be well served by following the rankings blindly.

It is not my intention to single out Forrester for criticism; on the contrary, Alyson Clarke, the lead author of the study, impressed me as both knowledgeable and passionate about the topic. My intention is to point out some of the mistakes planners make when evaluating software, and to offer some ideas on how to improve your decision-making in the future.

One thing that Forrester did very well was to arrive at 124 criteria that are important to many firms. A good percentage of the criteria are objective. A good first step when evaluating software is to create your own list of criteria. If your firm is a small one, your list will not approach 124, nor should it. Next, weight the criteria according to their importance to you.
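To make the arithmetic of a weighted criteria list concrete, here is a minimal sketch of how such a worksheet works. The criteria names, weights and vendor scores below are entirely hypothetical, invented for illustration; they are not drawn from the Forrester report, and you would substitute your own.

```python
# Hypothetical weighted-scoring worksheet for evaluating software.
# All criteria, weights, and scores are made-up illustrations.

# Weights reflect the buyer's priorities and must sum to 1.0.
criteria_weights = {
    "ease_of_use": 0.30,
    "data_import": 0.25,
    "analytical_tools": 0.25,
    "vendor_viability": 0.10,
    "client_support": 0.10,
}

# Each vendor is scored 1-10 on each criterion by your own evaluation.
vendor_scores = {
    "Vendor A": {"ease_of_use": 8, "data_import": 6, "analytical_tools": 9,
                 "vendor_viability": 7, "client_support": 5},
    "Vendor B": {"ease_of_use": 6, "data_import": 9, "analytical_tools": 7,
                 "vendor_viability": 8, "client_support": 8},
}

def weighted_score(scores, weights):
    """Sum of (score * weight) across all criteria."""
    return sum(scores[c] * w for c, w in weights.items())

# Rank the vendors from highest to lowest weighted score.
for vendor, scores in sorted(vendor_scores.items(),
                             key=lambda kv: -weighted_score(kv[1], criteria_weights)):
    print(f"{vendor}: {weighted_score(scores, criteria_weights):.2f}")
```

Note that shifting the weights, say, doubling the importance of data import for a firm that frequently migrates client records, can reorder the final ranking even though the underlying scores never change. That is precisely why building your own worksheet beats borrowing someone else's weightings.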

While I can't provide a complete list here, some things that I believe will be important are the software's ease of use; the ability to integrate the software with other programs you use; the software's default assumptions and one's ability to alter them; its work flow capabilities; the financial viability of the vendor (based on your criteria); client support; and of course, the quality of the analytical tools themselves. Some firms may also place a premium on the ability to run multiple scenarios side by side, on Web access and on client/third-party access.

The next step is to actually try the software. For many readers, this is the most difficult step, so they look for shortcuts, such as a single graph from Forrester, or a single article in Financial Advisor or anywhere else that will provide them with all the answers. Stop looking; it doesn't exist! The best any of these tools can do is help you narrow your list of candidates. I've learned over time that my beliefs with regard to ease of use and analytical tools are not universal; the only way to arrive at the best product for you is to try it yourself.

Ms. Clarke and her team have done an excellent job of identifying financial planning software evaluation criteria, but their application of the criteria in the printed reports I've seen will only benefit large enterprises. To benefit fully from Forrester's research, you would need full access to the online tool, which allows you to weight the 124 criteria to meet your needs, perhaps ignoring altogether some of the measurements that are subjective or faulty. Others would be better served by creating their own criteria and weighting them appropriately for their circumstances.