After a scathing report in the Wall Street Journal found that Morningstar’s star ratings were not predictive of mutual fund performance, the company defended its methods and research in a series of online posts.

On Thursday, Jeffrey Ptak, Morningstar’s head of global manager research, argued that the star rating does have some moderate predictive power and that the Journal’s analysis was flawed and misleading. This followed a message on Wednesday from the company’s CEO, Kunal Kapoor, defending the Chicago-based investment research company’s independence and transparency, and countering the Journal’s research.

“We strongly disagree with the conclusions it reached about the efficacy of our ratings,” wrote Kapoor. “We have responded to the Journal to request corrections to numerous points that mischaracterize our business.”

Kicking off the controversy was “The Morningstar Mirage,” a piece published Wednesday morning after a year-long study of the company’s ratings by the newspaper. The research tested the performance of thousands of mutual funds rated by Morningstar since 2003.

Morningstar rates funds from one to five stars, with five stars being the best of the best. The Wall Street Journal’s report, however, found that five-star funds failed to sustain their performance. After achieving a five-star rating, only 12 percent of funds did well enough through the next five years to retain their status, while 10 percent of five-star funds fell to the one-star rating, the newspaper reported.

“Funds that earned high star ratings attracted the vast majority of investor dollars. Most of them failed to perform,” the Journal reported.


After 10 years, the average performance gap between one-star and five-star rated funds narrows dramatically, the newspaper reported. A fund that receives a one-star rating ascends to an average rating of 1.9 stars after 10 years, while a fund that receives a three-star rating descends to an average of 2.5 stars over the same period. A top-rated five-star fund, on the other hand, descends to an average rating of three stars a decade later, the story said.

Morningstar acknowledged that the star ratings were “backward-looking” measures with only “moderately predictive” value; the firm argued the Wall Street Journal’s research showed evidence of that value.

“We’ve encouraged users to consider combining the star rating with other data and measures to aid in fund selection,” wrote Ptak. “In this way, users could benefit from some of the star rating’s more distinctly valuable features—that is, the way it emphasizes longer time frame changes, accounts for risk and measures performance after fees and charges, considerations that don’t usually figure into ‘leaders and laggards’ tallies—while leveraging other forward-looking measures like the Morningstar Analyst Rating.”

Morningstar's analyst rating system, which awards funds a gold, silver, bronze, neutral or negative rating based on a qualitative review conducted by the firm’s analysts, was also criticized in the report.

The newspaper found that qualitative analyst ratings were somewhat predictive of future quantitative performance, but not dramatically: Funds awarded a gold medal, for example, ended up with an average rating of 3.4 stars after a five-year period. Silver-medal funds ended up with an average of 3.3 stars after the same time period, while bronze funds had an average rating of three stars.

Morningstar accused the Journal’s analysis of comparing apples to oranges.

“We had counseled the Journal against using the star rating as a measure of the analyst rating’s predictiveness for a simple reason: The star rating is based on funds’ trailing risk- and load-adjusted returns versus category peers,” wrote Ptak. “When analysts are assigning analyst ratings, they’re not taking loads into consideration, so there’s a mismatch of the two, a point we made to the Journal in urging them to reconsider the star rating in favor of a risk-adjusted measure like CAPM alpha.”


Ptak noted that the Wall Street Journal actually “corroborated” Morningstar’s assertions that its star ratings were predictive. The newspaper’s analysis found that four- and five-star mutual funds were less likely to be merged or liquidated than funds receiving a lower rating. Four- and five-star funds were also significantly more likely to retain a high rating over the next 10 years of performance.

For example, the Journal reported, 69 percent of one-star rated funds were merged or liquidated within 10 years of their rating, as opposed to 22 percent of five-star funds. Thirty-five percent of five-star funds retained a rating of four or five stars after 10 years, according to the Journal; only 5 percent of one-star funds had achieved a rating of four or five stars after 10 years.