Why do we rate funds? Rating investment managers is a mechanism that attempts to normalize an otherwise complex research process using a combination of quantitative and qualitative techniques. In our day-to-day lives we are constantly exposed to ratings; they are pervasive in restaurants, household goods, technology, customer service and more.
We are psychologically conditioned to assign ratings and rank most things in our lives. (For example, a rating system could even apply to "manbun sightings". Not familiar with the manbun? It is a current hairstyle trend that is just as disturbing as, if not more than, the mullet was back in the '80s and early '90s. "Manbun Quality" could be ranked quantitatively across height, girth and overall complexity to determine an overall score. Anecdotally, one spotted on the subway this morning was by far the highest scoring to date at a 9.5.) But we digress...
We researched investors' rating methodologies as part of a DiligenceVault feature addition, during which an intriguing question arose. On a traditional 5-star rating scale, if nothing is ever a 1 or a 5 (which, over the course of our interviews, many commented is how managers typically shake out), why would we want to use ratings for quantifying fund research? If everyone falls in the middle of the pack, how do you distinguish the good from the bad? The answer is that with so many moving parts, ratings come in handy when leveraged for peer analysis or historical trends.
While the choice of rating scale ranges from qualitative definitions such as high, medium and low, to letter ratings, to a numeric scale, the outcome of the ratings exercise is two-fold:
While ratings are generally based on forward-looking expectations, they are sometimes held accountable for short-term future performance. It is entirely possible for a 5-rated manager to underperform.
An anecdote from a recent product discussion: if a firm has an impeccable valuation process and trades in structured credit, should it get a perfect score for valuation? There could be periods of liquidity interruption. How does the rating change? Or should it?
Most firms have periodic qualitative and quantitative monitoring processes, which are key to investment success. This framework also feeds into watchlisting, a process where investors monitor managers more closely than usual. Watchlisting can be triggered for a variety of reasons, but the usual suspects include: poor performance, regulatory examinations, organizational changes such as employee turnover or a change in management, style drift within the portfolio, and your lead PM wearing a manbun to the annual investor meeting.
Most firms typically separate investment research from operational/risk research, each performed by a dedicated team. Further, the teams that evaluate long-only and alternative managers use a similar framework and evaluation thesis, with some differences as highlighted in the table below.
Examining investors' monitoring processes, one wonders exactly how meaningful a change in a manager's overall score from a 2.5 to a 2.3 actually is. Does it signal a need for action or further analysis? Perhaps a more useful exercise is analyzing what the sub-components of the scoring system are signaling when the overall score changes. Why not look across the portfolio and pinpoint the underlying exposures to various qualitative risks, much as firms traditionally do through holdings analysis?
One example of the type of evaluation that could be performed is pictured below in the form of rating distributions depicting a portfolio’s manager exposure across various qualitative categories, emphasizing the tails.
Another example: if investors could see that 34% of their managers have low scores in governance, or that 17% of the portfolio has lower-than-average scores in turnover, they could diversify not only across quantitative factors but qualitative ones as well. By aggregating scoring components, a heat map could warn investors of problem areas within their portfolio.
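To make the aggregation idea concrete, here is a minimal sketch of how scoring components might be rolled up across a portfolio to flag problem categories. The manager names, category names, scores and thresholds are all illustrative assumptions, not part of any real methodology.

```python
# Hypothetical sketch: aggregate qualitative sub-scores across a portfolio
# and flag categories where a large share of managers score poorly.
# Names, scores and thresholds below are illustrative assumptions.

portfolio_scores = {
    "Manager A": {"governance": 2, "turnover": 4, "valuation": 3},
    "Manager B": {"governance": 1, "turnover": 2, "valuation": 4},
    "Manager C": {"governance": 4, "turnover": 2, "valuation": 5},
}

LOW_SCORE = 2       # scores at or below this count as "low" (1-5 scale)
ALERT_SHARE = 0.30  # flag a category if more than 30% of managers score low

def category_alerts(scores, low=LOW_SCORE, share=ALERT_SHARE):
    """Return {category: fraction of low-scoring managers} for flagged categories."""
    categories = {c for manager in scores.values() for c in manager}
    alerts = {}
    for cat in sorted(categories):
        vals = [m[cat] for m in scores.values() if cat in m]
        frac_low = sum(v <= low for v in vals) / len(vals)
        if frac_low > share:
            alerts[cat] = round(frac_low, 2)
    return alerts

print(category_alerts(portfolio_scores))
# governance and turnover are flagged (2 of 3 managers score low in each);
# valuation is not.
```

Feeding the flagged fractions into a color scale by category and manager would produce the kind of heat map described above.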
In an increasingly transparent, information-sharing economy, what are the consequences of sharing ratings with managers? There seems to be growing demand from investment managers for some feedback on the increasing time and effort they are putting into the process.
However, most investors currently prefer to keep this information in-house. The reasons could include the risk of diluted feedback and/or the potential for legal and regulatory hiccups. Going forward, perhaps there is a way for the two parties to meet in the middle: share basic feedback so both feel they are getting value out of the process. After all, sharing is caring. Imagine the countless benefits to society if the general population shared their real opinion of manbuns; they would quickly become a thing of the past, gone the way of the mullet, a distant memory of hairstyles gone awry...
Imagine a world where investors and managers join together, share useful feedback, hold hands and sing kumbaya... without a manbun in sight. Agree or disagree? We would like to hear your views.
A special thanks to all who participated in our ratings discovery exercise! We appreciate your insights and feedback.