Haven't heard of the FPA's new "Financial Advisor Practice Management Scorecard" yet? According to the association's Web site (https://fpascorecard.mclagan.com/), "McLagan Partners and the Financial Planning Association (FPA) have teamed up to create a tool to help financial advisor practices improve their performance through relevant, local benchmarking." "Local" being the key word.

Robert J. Powell III, president of Unison Associates LLC, a Salem, Mass., consulting firm, has worked with Moss Adams LLP in Seattle and McLagan Partners Inc. of Stamford, Conn., to help produce FPA-sponsored statistical data for advisors to use in benchmarking their profitability and compensation structures. Says Powell, "About two years ago, concerned that the Moss Adams studies weren't addressing all FPA members' concerns, the FPA put out a request for proposal for the creation of an instrument that members would find more useful and actionable. After hearing from folks such as Ernst & Young and similar firms, we went with McLagan because they've been doing these studies for 40 years and were willing to work with the FPA to create a study [with more local relevance]."

The new scorecard was launched with 2006 accounting data, and the FPA received approximately 500 study responses, about the same as for earlier Moss Adams studies. What makes this remarkable, says Powell, is that the former studies were free; FPA members paid to participate in the McLagan study, since the end result is a "scorecard" showing each respondent's numbers in comparison to other firms in its region, a result apparently valuable enough to members to motivate them to pay $195, plus $30 for each individual financial advisor scorecard desired beyond the first one, which is free. (Results can be compared at both the firm level and the individual level.)

"The scorecard," explains Peter Keuls, head of the private client business at McLagan Partners, "is a three-page affair: one page for revenues, one for practice profitability and costs and then individual advisor performance pages."

The firm has prepared a sample scorecard. It starts with a page labeled "Practice Scorecard: Revenues, Assets & Clients," which indicates the relevant market (a city and state), the number of other firms represented (11, in the case of the sample) and the practice type (for example, a sole proprietorship). For variables such as revenues, assets under management and number of households, the scorecard shows the other firms' data in "low quartile," "median" and "high quartile" breakdowns, followed by a graphic indicating the advisor's own relative performance in the study.

Our sample respondent's 2006 revenues from asset management alone were $231,300, between the median and upper-quartile measures of $157,000 and $328,800. The sample firm's "practice rank" among the 12 firms (11 comparison firms plus itself) is five out of 12 for asset management revenues. Other measures on the same page include revenues per advisor, in which the sample firm scores two out of 12, and one-year percentage growth rate, in which it scores five out of 12.
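
For readers who want to see the arithmetic, here is a minimal sketch in Python of how a quartile breakdown and practice rank of this kind could be computed. It is purely illustrative, not McLagan's actual methodology; the peer revenues below are invented, and only the $231,300 figure and the rank of five out of 12 come from the sample scorecard.

```python
import statistics

# Hypothetical revenues for the 11 comparison firms (invented for
# illustration; only the $231,300 figure comes from the sample scorecard).
peer_revenues = [98_000, 112_000, 143_000, 150_000, 157_000, 180_000,
                 201_000, 260_000, 310_000, 328_800, 415_000]
our_revenue = 231_300

all_firms = peer_revenues + [our_revenue]  # 12 firms in the market

# statistics.quantiles(n=4) returns the three cut points: low quartile,
# median and high quartile.
low_q, median, high_q = statistics.quantiles(all_firms, n=4)

# "Practice rank": 1 = highest revenue among the 12 firms.
rank = sorted(all_firms, reverse=True).index(our_revenue) + 1

print(f"low quartile ${low_q:,.0f} / median ${median:,.0f} / high quartile ${high_q:,.0f}")
print(f"practice rank: {rank} out of {len(all_firms)}")
```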

Page 2, "Staffing, Expenses & Profitability," is similarly arranged to show the respondent his firm's employee roster by job type (e.g., "specialists," "licensed support staff" and "administrative staff") and how the firm compares with others using the same low and high quartile and median information. In the same format is an abbreviated version of the firm's income statement, showing how total revenues, expenses (with breakdowns) and profit compare with those numbers for the other 11 firms in the same market. Expenses-unlike revenues and profits-are not shown in a dollar amount, only as a percentage of revenues.

Page 3 is a sample "Financial Advisor Scorecard" for a "senior financial advisor." This portion of the scorecard compares employees in like job classifications on total revenues generated, total assets per employee and compensation (salary, bonuses, commissions, etc.) relative to the local market. For example, the sole proprietor on our sample scorecard earned a total salary of $153,000, which puts him right around the median of $150,000. The comparison, in this case, is against a total of 24 financial advisors in the same market.

The initial reactions to the scorecard by its participants have been positive, as exemplified by Leon Rousso of Leon Rousso & Associates in Ventura, Calif. Thirty years ago, Rousso was looking for the secret to rock 'n' roll stardom; now he's looking for secrets that might further propel his success as a financial advisor.

"At 29, with a pregnant wife, I quit trying to be a rock star," says Rousso. He went on to try his fortune in the health insurance field working for Equitable and, later, got his CFP and his Series 7 to move into what he perceived as more lucrative investment services. Eventually realizing Equitable wouldn't support his brand of financial planning, Rousso finally started his present firm four years ago. "My business now runs on 60% revenues from health insurance and employee benefits and 40% revenues from financial planning and investment management," he explains.

Because relatively few financial advisors claim these levels of income from health insurance, Rousso wanted to see where his business "fit" in the total scheme of things. "I had participated in the Moss Adams studies for the FPA, but when I heard about the scorecard, even though there was a cost to do it, I jumped right in." It was Rousso's hope that even though there wouldn't be firms exactly like his, the narrower, regional study focus would give him more useful comparisons. "I hoped to find an improved business model that I might move towards," he says.

What he learned was a lesson not uncommon to advisors who start out in high-volume forms of business: "Although my revenue and expense ratios are better than average, I learned I need to prune customers." Rousso distinguishes between "clients" and "customers," the former being more desirable because they have a larger "footprint" within his business. "I probably have 600 to 700 clients, most of whom are 'customers.' Most of the clients who will move forward with me, though, are from the financial planning and investment side of my business."

Rousso says at first he would have pruned according to the quality of his relationship with each client, but the studies he's followed suggest revenue is a preferred criterion. "Someone might be a great client, but he's just doing a simple Blue Cross policy with me." Rousso's solution, having "graded" his clients as advisors often do, has been to have his two associates (one an insurance rep and paraplanner, the other a sales assistant with a Series 7) work more directly with the "B's" and "C's," those who he says are good clients but who don't generate a lot of revenue.

Did Rousso find any surprises in his scorecard? "Only how much money is being made out there. There are some amazing numbers being generated by some small firms. Having been isolated in working with Equitable for all those years, I didn't realize the extent of it."

But some question the reliability of a study that focuses on regions. Mark Tibergien, a former principal with Moss Adams who was actively involved in the studies conducted for the FPA before McLagan's involvement, says, "When Moss Adams created benchmarking for the operating performance of advisory firms, the goal was to provide advisors with both guidelines and insights to manage their businesses better, not to keep score. The Moss Adams team does more than any other [research firm] in the market to scrub data carefully-even including personal calls to firms who submitted data with apparent anomalies."

"There is a validation process," says Keuls. "If data don't add up or make sense, we go back to the practice with questions. Of course, not everyone gets a call, because the data template has some validations built into it, so if the advisor gives illogical answers or something not internally consistent or above the normal range for that category, [the discrepancy will be caught]. We'd rather throw out data than use questionable data." Keuls adds that a study participant who submits questionable data may get a scorecard with other firms' data, but those other firms' scorecards would not include the participant's data.

When asked what he believes advisors need to track their progress, Tibergien says, "The key for advisors is to observe trends using five to seven key performance indicators in comparison with relevant benchmarks, including their own best year, and calculate the financial impact of negative variances so they can judge the magnitude of the problem. The more data points they have to compare and contrast, the more effective their management will become."

Rebecca Pomering, now heading up the Moss Adams division Tibergien once took responsibility for, adds, "Regional data is certainly valuable for advisors in evaluating compensation benchmarks. However, the sample size would have to be in the thousands to collect meaningful data on a regional basis. For instance, if you are in Kansas City and you want to know what other paraplanners in Kansas City are being paid, the study would have to have gathered compensation data for hundreds of paraplanners in Kansas City to give you meaningful data, and thousands of paraplanners if you then want to know how experience level, tenure, secondary roles, productivity level, etc. impact the compensation of paraplanners in that market." Those factors, believes Pomering, have a bigger impact on compensation than do regional differences.

Michael Halvorsen, senior vice president of Asset Planning Services Ltd. of Harleysville, Pa., a firm that participated in the scorecard project, says his market, Philadelphia-Camden-Atlantic City-Wilmington, had somewhere between 16 and 25 respondents. Says Keuls, "We believe that is a statistically significant sample size ... enough so to get a good sample for a benchmark."

Explaining how they determined these markets, Keuls says, "We created groups of markets with similar demographics-markets that our past experience said would give us comparable practices. When you structure a market to get a meaningful sample, you don't want to combine practices in Chicago with more rural practices in Illinois because the demographics affect not only revenues but costs, too." For example, says Keuls, salary costs and occupancy costs vary tremendously by market, and hence, so does practice performance, so you have to be smart about how you define each market. "Defining by states doesn't work well. However, where you have a large metropolitan market like New York, it's easy; New York might reach into Greenwich and other nearby markets because they're similar. Now, in the mid-Plains and Midwest, we combined many small rural locations across a large geographic area to form a market because it makes sense to compare rural markets within a state."

In other words, Keuls would argue that sample size is not the most critical factor in constructing a valid study as long as the markets are appropriately defined. But Pomering says, "The more likely method to derive regional data-and the methodology used by most research firms and compensation consulting firms-is to gather data nationally and adjust it to a given region using regional compensation adjustment factors such as those published by ERI [http://www.erieri.com/index.cfm?FuseAction=ERIGA.Main]. This allows you to gather a large, national sample, break it out by region where you have enough respondents in a given region to be statistically meaningful, and apply regional adjustment factors for regions where you don't have a large enough sample."
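
As an illustration of the approach Pomering describes, the following sketch applies regional adjustment factors to a nationally derived benchmark. Every number in it is invented for illustration; real factors would come from a published source such as ERI's geographic data.

```python
# National benchmark adjusted to a region, per Pomering's description.
NATIONAL_MEDIAN_PARAPLANNER_SALARY = 55_000  # hypothetical national figure

REGIONAL_FACTORS = {  # hypothetical cost-of-labor multipliers
    "Kansas City": 0.94,
    "San Francisco": 1.28,
    "New York": 1.22,
}

def regional_salary(national_salary: float, market: str) -> float:
    """Scale a nationally derived benchmark to a specific market."""
    return national_salary * REGIONAL_FACTORS[market]

for market in REGIONAL_FACTORS:
    adjusted = regional_salary(NATIONAL_MEDIAN_PARAPLANNER_SALARY, market)
    print(f"{market}: ${adjusted:,.0f}")
```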

Of course, market definition isn't the only variable that can trip up a study's validity. Anytime a study seeks to examine small-firm profitability, there are pitfalls in defining profits, too. For example, if a market is composed of 10 firms representing every possible business entity-C corporations, S corporations, LLCs, sole proprietorships, etc.-then the accounting systems that determine profitability may vary greatly. And what if one owner runs more personal benefits through her company than another? How does one normalize accounting profits for all of these entities in order to make comparisons valid?

"We handled this challenge by not asking for profit data," says Keuls. "Instead, we calculated the margin from carefully defined expense-line items. The process will never be a perfect science because of variations in how business owners treat expenses, but we can 'normalize' by defining line items carefully-for example, what gets included in travel and entertainment." Adds Keuls, one way the McLagan team normalized expenses was to go through the various categories and cross out outliers. "If someone's spending 10% of their revenue on entertainment, that's obviously pretty high, so we'll kick it out." Then, he says, the team would go back to the respondent to clarify the number in question.
Halvorsen, who heard about the study through Fidelity (one of the sponsoring firms), took part in it primarily because he was seeking financials with which to benchmark his firm against other local firms. "We've been discussing for the past four to five years how best to structure our business so we have a fair model where clients are paying for both financial planning and investment management, but we've had some frustration finding good comparisons." Halvorsen says his firm also participated in the FPA/Moss Adams studies, but those included firms that didn't fit his business model, such as brokers and one- or two-person planner shops.

His firm's other goal was to find compensation guidance. "We don't compensate back-office employees based on revenues or new clients they bring in. We give them a salary plus bonus, so we're constantly trying to benchmark ourselves against firms like us or individuals who have similar skill sets working in other industries." Did he get the answers he was looking for? "The scorecard has drawbacks; namely, specific geographic regions are limited in their number of participants. Being the first year, our region had between 16 and 25 different participants, but statistically we should have had 100 participants for accuracy."

Nevertheless, Halvorsen gleaned some valuable lessons from his firm's scorecard. "We learned we need to position ourselves so clients understand our value proposition: wealth management and investment advisory services. We think we charge less than most national firms but do more for our clients, and yet we found we were not as profitable as other firms. We want fair compensation for all services."

Given his skepticism about the scorecard's sample size, Halvorsen was pleasantly surprised to find that the results of the McLagan study did not significantly differ from those of previous Moss Adams studies on which he'd relied. "I used to work in HR, so I'm pretty well versed in this, and what I've seen is that this industry is woefully inadequate in putting together compensation packages for its employees."

What he'll do with the scorecard information is selectively add new hires. "We know that if we are to be competitive with clients, we have to get the right employees. The scorecard helped us realize we needed to make some compensation adjustments, both for new hires and existing staff. In some cases, we sweetened the compensation, but not across the board. For each position, based on the level of skills and required experience, are we now paying competitively? For some employees, we found we'd been overly generous and for others not generous enough, so we made adjustments to both compensation and overall benefits."

One downside to the FPA's new study format is the paucity of data available to non-participants. Because the Moss Adams studies were national in scope, they produced ample summary statistics for discussion by the media. Using scorecards, the FPA has created a more personal experience, which is good for the participant but yields less value to the industry looking on.

That said, McLagan did release some summary data to whet our appetites. For example, the study found that independent advisor practice profit margin before owners' draws ranges from under 20% to over 80%. It found that markets such as Southern California, Washington, D.C., and San Francisco have lower net effective payout rates, primarily because of the high overhead costs relative to productivity. Washington, D.C., and San Francisco, however, offer opportunities for growth that may make up for their higher cost.

In its press release announcing these and other findings, Keuls is quoted as saying, "These results demonstrate how important it is for financial advisors to benchmark their practice against relevant local peers and the limited value of national benchmarks." Precisely. What we have here is the clever marketing of a potentially valuable service. By taking the individualized scorecard approach, the FPA may ultimately realize greater income from this service than with the Moss Adams approach. At the same time, few advisors would quibble over whether regional or national data is better, as long as there's enough of it.

The FPA expects the McLagan scorecard to be an annual fixture. If you would like to register for the next opportunity to participate, you can do so at https://fpascorecard.mclagan.com/.

An independent financial advisor since 1981, David J. Drucker, MBA, CFP, has also been a familiar journalistic voice since 1993. Drucker's entire body of work can now be purchased at www.DavidDrucker.com in 14 compendiums, by topic.