In other words, Keuls would argue that sample size is not the most critical factor in constructing a valid study as long as the markets are appropriately defined. But Pomering says, "The more likely method to derive regional data, and the methodology used by most research firms and compensation consulting firms, is to gather data nationally and adjust it to a given region using regional compensation adjustment factors such as those published by ERI [http://www.erieri.com/index.cfm?FuseAction=ERIGA.Main]. This allows you to gather a large, national sample, break it out by region where you have enough respondents in a given region to be statistically meaningful, and apply regional adjustment factors for regions where you don't have a large enough sample."
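To make the mechanics of that approach concrete, here is a minimal sketch of how such an adjustment might work. The sample-size threshold, the adjustment factor, and the dollar figures are purely illustrative assumptions, not ERI's published factors or McLagan's actual methodology.

```python
# Illustrative sketch: use regional survey data where the sample is large
# enough, otherwise scale the national figure by a hypothetical regional
# adjustment factor. Threshold and factor are assumptions for illustration.

MIN_REGIONAL_SAMPLE = 30  # assumed cutoff for "statistically meaningful"

def estimate_regional_pay(national_median, regional_samples, adjustment_factor):
    """Return a regional pay estimate and note which method produced it."""
    if len(regional_samples) >= MIN_REGIONAL_SAMPLE:
        # Enough local respondents: report the regional median directly.
        regional_median = sorted(regional_samples)[len(regional_samples) // 2]
        return regional_median, "regional sample"
    # Too few respondents: adjust the national figure instead.
    return national_median * adjustment_factor, "national x adjustment factor"

# Example with made-up numbers: a $70,000 national median and a 1.12
# cost-of-labor factor for a high-cost metro with only 12 respondents.
estimate, method = estimate_regional_pay(70_000, [65_000] * 12, 1.12)
print(f"${estimate:,.0f} via {method}")
```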

Of course, market definition isn't the only variable that can trip up a study's validity. Anytime a study seeks to examine small-firm profitability, there are pitfalls in defining profits, too. For example, if a market is composed of 10 firms representing every possible business entity (C corporations, S corporations, LLCs, sole proprietorships, etc.), then the accounting systems that determine profitability may vary greatly. And what if one owner runs more personal benefits through her company than another? How does one normalize accounting profits for all of these entities in order to make comparisons valid?

"We handled this challenge by not asking for profit data," says Keuls. "Instead, we calculated the margin from carefully defined expense-line items. The process will never be a perfect science because of variations in how business owners treat expenses, but we can 'normalize' by defining line items carefully-for example, what gets included in travel and entertainment." Adds Keuls, one way the McLagan team normalized expenses was to go through the various categories and cross out outliers. "If someone's spending 10% of their revenue on entertainment, that's obviously pretty high, so we'll kick it out." Then, he says, the team would go back to the respondent to clarify the number in question.
Halvorsen, who heard about the study through Fidelity (one of the sponsoring firms), took part in it primarily because he was seeking financial benchmarks for comparing his firm with other local firms. "We've been discussing for the past four to five years how best to structure our business so we have a fair model where clients are paying for both financial planning and investment management, but we've had some frustration finding good comparisons." Halvorsen says his firm also participated in the FPA/Moss Adams studies, but those included firms that didn't fit his business model, such as brokers and one- or two-person planner shops.
His firm's other goal was to find compensation guidance. "We don't compensate back-office employees based on revenues or new clients they bring in. We give them a salary plus bonus, so we're constantly trying to benchmark ourselves against firms like us or individuals with similar skill sets working in other industries." Did he get the answers he was looking for? "The scorecard has drawbacks; namely, specific geographic regions are limited in their number of participants. Because this was the first year, our region had between 16 and 25 different participants, but statistically we should have had 100 participants for accuracy."
Nevertheless, Halvorsen gleaned some valuable lessons from his firm's scorecard. "We learned we need to position ourselves so clients understand our value proposition: wealth management and investment advisory services. We think we charge less than most national firms but do more for our clients, and yet we found we were not as profitable as other firms. We want fair compensation for all services."
Given his skepticism about the scorecard's sample size, Halvorsen was pleasantly surprised to find that the results of the McLagan study did not significantly differ from those of previous Moss Adams studies on which he'd relied. "I used to work in HR, so I'm pretty well versed in this, and what I've seen is that this industry is woefully inadequate in putting together compensation packages for its employees."
What he'll do with the scorecard information is selectively add new hires. "We know that if we are to be competitive with clients, we have to get the right employees. The scorecard helped us realize we needed to make some compensation adjustments, both for new hires and existing staff. In some cases, we sweetened the compensation, but not across the board. For each position, based on the level of skills and experience required, we are now paying competitively. For some employees, we found we'd been overly generous and for others not generous enough, so we made adjustments to both compensation and overall benefits."
One downside to the FPA's new study format is the paucity of data available to non-participants. Because the Moss Adams studies were national in scope, they produced ample summary statistics for discussion by the media. Using scorecards, the FPA has created a more personal experience, which is good for the participant but yields less value to the industry looking on.
That said, McLagan did release some summary data to whet our appetites. For example, the study found that independent advisor practice profit margin before owners' draws ranges from under 20% to over 80%. It found that markets such as Southern California, Washington, D.C., and San Francisco have lower net effective payout rates, primarily because of the high overhead costs relative to productivity. Washington, D.C., and San Francisco, however, offer opportunities for growth that may make up for their higher cost.
In its press release announcing these and other findings, Keuls is quoted as saying, "These results demonstrate how important it is for financial advisors to benchmark their practice against relevant local peers and the limited value of national benchmarks." Precisely. What we have here is the clever marketing of a potentially valuable service. By taking the individualized scorecard approach, the FPA may ultimately realize greater income from this service than it did with the Moss Adams approach. At the same time, few advisors would argue over whether regional or national data is better, as long as there's enough of it.
The FPA expects the McLagan scorecard to be an annual fixture. If you would like to register for the next opportunity to participate, you can do so at https://fpascorecard.mclagan.com/.  

An independent financial advisor since 1981, David J. Drucker, MBA, CFP, has also been a familiar journalistic voice since 1993. Drucker's entire body of work can now be purchased at www.DavidDrucker.com in 14 compendiums, by topic.
