Because relatively few financial advisors claim these levels of income from health insurance, Rousso wanted to see where his business "fit" in the total scheme of things. "I had participated in the Moss Adams studies for the FPA, but when I heard about the scorecard, even though there was a cost to do it, I jumped right in." It was Rousso's hope that even though there wouldn't be firms exactly like his, the narrower, regional study focus would give him more useful comparisons. "I hoped to find an improved business model that I might move towards," he says.

What he learned was a lesson not uncommon to advisors who start out in high-volume forms of business: "Although my revenue and expense ratios are better than average, I learned I need to prune customers." Rousso distinguishes between "clients" and "customers," the former being more desirable because they have a larger "footprint" within his business. "I probably have 600 to 700 clients, most of whom are 'customers.' Most of the clients who will move forward with me, though, are from the financial planning and investment side of my business."

Rousso says at first he would have pruned according to the quality of his relationship with each client, but the studies he's followed suggest revenue is a preferred criterion. "Someone might be a great client, but he's just doing a simple Blue Cross policy with me." Rousso's solution, having "graded" his clients as advisors often do, has been to have his two associates (one an insurance rep and paraplanner, the other a sales assistant with a Series 7) work more directly with the "B's" and "C's," those who he says are good clients but who don't generate a lot of revenue.
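For the quantitatively inclined, the grading exercise Rousso describes is easy to formalize. The sketch below tiers a book of business by annual revenue rather than relationship quality; the cutoffs and client names are invented for illustration, since the article doesn't give Rousso's actual thresholds.

```python
# A minimal sketch of revenue-based client grading, assuming
# hypothetical revenue cutoffs -- the article doesn't give Rousso's.
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    annual_revenue: float  # trailing-12-month revenue to the practice

def grade(client: Client) -> str:
    """Assign a letter grade by revenue, not relationship quality."""
    if client.annual_revenue >= 5_000:
        return "A"   # the principal works with these directly
    if client.annual_revenue >= 1_000:
        return "B"   # routed to the paraplanner/insurance rep
    return "C"       # routed to the Series 7 sales assistant

book = [Client("Smith", 12_000), Client("Jones", 2_500), Client("Lee", 300)]
for c in book:
    print(c.name, grade(c))
```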

Did Rousso find any surprises in his scorecard? "Only how much money is being made out there. There are some amazing numbers being generated by some small firms. Having been isolated in working with Equitable for all those years, I didn't realize the extent of it."

But some question the reliability of a study that focuses on regions. Mark Tibergien, a former principal with Moss Adams who was actively involved in the studies conducted for the FPA before McLagan's involvement, says, "When Moss Adams created benchmarking for the operating performance of advisory firms, the goal was to provide advisors with both guidelines and insights to manage their businesses better, not to keep score. The Moss Adams team does more than any other [research firm] in the market to scrub data carefully, even including personal calls to firms that submitted data with apparent anomalies."

"There is a validation process," says Keuls. "If data don't add up or make sense, we go back to the practice with questions. Of course, not everyone gets a call, because the data template has some validations built into it, so if the advisor gives illogical answers or something not internally consistent or above the normal range for that category, [the discrepancy will be caught]. We'd rather throw out data than use questionable data." Keuls adds that a study participant who submits questionable data may get a scorecard with other firms' data, but those other firms' scorecards would not include the participant's data.

When asked what he believes advisors need to track their progress, Tibergien says, "The key for advisors is to observe trends using five to seven key performance indicators in comparison with relevant benchmarks, including their own best year, and calculate the financial impact of negative variances so they can judge the magnitude of the problem. The more data points they have to compare and contrast, the more effective their management will become."
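Tibergien's prescription reduces to a simple calculation: for each indicator, compare the current value against a benchmark (or the firm's own best year) and translate negative variances into dollars. The sketch below illustrates the arithmetic; the KPI names, figures, and dollar conversions are invented for illustration.

```python
# A minimal sketch of Tibergien's KPI-variance idea: compare a handful
# of indicators against a benchmark (or the firm's own best year) and
# price out negative variances. All numbers are invented.

kpis = {
    #  name                (current, benchmark)
    "revenue_per_client":  (1_150.0, 1_400.0),
    "operating_margin":    (0.22, 0.28),
    "clients_per_staff":   (95.0, 110.0),
}

revenue = 800_000.0   # hypothetical annual revenue, to price margin variance
n_clients = 650       # hypothetical client count

for name, (current, benchmark) in kpis.items():
    variance = current - benchmark
    if variance >= 0:
        continue  # only negative variances need pricing out
    if name == "revenue_per_client":
        impact = variance * n_clients
    elif name == "operating_margin":
        impact = variance * revenue
    else:
        impact = None  # not directly priceable without more detail
    dollars = f"${impact:,.0f}" if impact is not None else "n/a"
    print(f"{name}: {variance:+.2f} vs benchmark, impact {dollars}")
```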

Rebecca Pomering, now heading up the Moss Adams division Tibergien once took responsibility for, adds, "Regional data is certainly valuable for advisors in evaluating compensation benchmarks. However, the sample size would have to be in the thousands to collect meaningful data on a regional basis. For instance, if you are in Kansas City and you want to know what other paraplanners in Kansas City are being paid, the study would have to have gathered compensation data for hundreds of paraplanners in Kansas City to give you meaningful data, and thousands of paraplanners if you then want to know how experience level, tenure, secondary roles, productivity level, etc. impact the compensation of paraplanners in that market." Those factors, believes Pomering, have a bigger impact on compensation than do regional differences.
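Pomering's sample-size point can be made concrete with the textbook 95 percent confidence interval for a mean. The sketch below uses invented compensation figures; it shows how wide the interval is at small n, and why cross-tabbing by experience, tenure, and role multiplies the sample required.

```python
# A minimal sketch of Pomering's sample-size point, using the standard
# 95% confidence interval for a mean: x_bar +/- 1.96 * s / sqrt(n).
# Compensation figures are invented for illustration.
import math

mean_pay = 55_000.0   # hypothetical mean paraplanner compensation
sd_pay = 12_000.0     # hypothetical standard deviation

for n in (20, 100, 1_000):
    moe = 1.96 * sd_pay / math.sqrt(n)   # margin of error at 95%
    print(f"n={n:>5}: mean {mean_pay:,.0f} +/- {moe:,.0f}")

# Slicing by experience level, tenure, secondary role, etc. splits each
# market cell further, and each slice needs its own n -- which is why
# 'thousands' are needed for factor-level regional comparisons.
```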

Michael Halvorsen, senior vice president of Asset Planning Services Ltd. of Harleysville, Pa., a firm that participated in the scorecard project, says his market, Philadelphia-Camden-Atlantic City-Wilmington, had somewhere between 16 and 25 respondents. Says Keuls, "We believe that is a statistically significant sample size ... enough so to get a good sample for a benchmark."

Explaining how they determined these markets, Keuls says, "We created groups of markets with similar demographics, markets that our past experience said would give us comparable practices. When you structure a market to get a meaningful sample, you don't want to combine practices in Chicago with more rural practices in Illinois, because the demographics affect not only revenues but costs, too." For example, says Keuls, salary costs and occupancy costs vary tremendously by market, and so does practice performance; you have to be smart about how you define each market. "Defining by states doesn't work well. However, where you have a large metropolitan market like New York, it's easy; New York might reach into Greenwich and other nearby markets because they're similar. Now, in the mid-Plains and Midwest, we combined many small rural locations across a large geographic area to form a market because it makes sense to compare rural markets within a state."
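Keuls's grouping logic can be caricatured in a few lines: bucket locations by cost profile rather than by state line, so that New York and Greenwich land together while Chicago separates from downstate Illinois. The locations, cost figures, and thresholds below are invented for illustration only.

```python
# A minimal sketch of the market-definition idea Keuls describes:
# group locations by cost characteristics rather than by state.
# Locations, figures, and thresholds are invented for illustration.

locations = {
    # name            (median salary cost, occupancy cost per sq ft)
    "Chicago":        (78_000, 34.0),
    "Rural Illinois": (52_000, 14.0),
    "Rural Iowa":     (50_000, 12.0),
    "New York":       (95_000, 55.0),
    "Greenwich":      (92_000, 50.0),
    "Kansas City":    (62_000, 22.0),
}

def market_for(salary: float, occupancy: float) -> str:
    """Bucket a location by cost profile, not by state line."""
    if salary >= 75_000 or occupancy >= 30:
        return "major metro"
    if salary <= 60_000 and occupancy <= 20:
        return "rural/mid-plains composite"
    return "mid-size metro"

for name, (salary, occupancy) in locations.items():
    print(f"{name}: {market_for(salary, occupancy)}")
```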