Do Unsolicited Ratings Contain a Strategic Rating Component? Evidence from S&P

2008 ◽  
Author(s):  
Patrick Behr ◽  
Christina E. Bannier ◽  
Andre Guettler
2014 ◽  
Vol 42 ◽  
pp. 326-338 ◽  
Author(s):  
Soku Byoun ◽  
Jon A. Fulkerson ◽  
Seung Hun Han ◽  
Yoon S. Shin

2019 ◽  
Vol 07 (02) ◽  
pp. 1950005
Author(s):  
ANNA GIBERT

This paper analyzes the extent to which selection explains the observed discrepancy between solicited and unsolicited ratings. I propose a model of selection with truth-telling rating agencies and borrowers who can veto the publication of a rating. The observed difference between the two categories of ratings across sectors is in line with the model's predictions: in the sovereign market there is positive selection of borrowers into unsolicited ratings, whereas other sectors have, on the contrary, lower unsolicited rating grades than solicited ones.


10.2196/13053 ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. e13053
Author(s):  
Roy Johannus Petrus Hendrikx ◽  
Hanneke Wil-Trees Drewes ◽  
Marieke Spreeuwenberg ◽  
Dirk Ruwaard ◽  
Caroline Baan

Background Regional population management (PM) health initiatives require insight into experienced quality of care at the regional level. Unsolicited online provider ratings have shown potential for this use. This study explored the addition of comments accompanying unsolicited online ratings to regional analyses. Objective The goal was to create additional insight for each PM initiative, as well as overall comparisons between these initiatives, by attempting to determine the reasoning and rationale behind a rating. Methods The Dutch Zorgkaart database provided the unsolicited ratings from 2008 to 2017 for the analyses. All ratings included both quantitative ratings and qualitative text comments. Nine PM regions were used to aggregate ratings geographically. Sentiment analyses were performed by categorizing ratings into negative, neutral, and positive ratings. Per category, as well as per PM initiative, word frequencies (ie, unigrams and bigrams) were explored. Machine learning (naïve Bayes and random forest models) was applied to identify the most important predictors for the overall sentiment of a rating and for identifying PM initiatives. Results A total of 449,263 unsolicited ratings were available in the Zorgkaart database: 303,930 positive ratings, 97,739 neutral ratings, and 47,592 negative ratings. Bigrams illustrated that feeling like not being "taken seriously" was the dominant bigram in negative ratings, while bigrams in positive ratings were mostly related to listening, explaining, and perceived knowledge. Comparing bigrams between PM initiatives showed substantial overlap, but several differences were identified. Machine learning was able to predict the sentiment of comments but was unable to distinguish between specific PM initiatives. Conclusions Adding information from text comments that accompany online ratings to regional evaluations provides insight for PM initiatives into the underlying reasons for ratings.
Text comments provide useful overarching information for health care policy makers, but because of substantial overlap they add little region-specific information. Specific outliers for some PM initiatives are nevertheless insightful.
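The pipeline this abstract describes (per-class bigram frequencies feeding a naïve Bayes sentiment classifier) can be sketched as follows. This is a minimal stdlib-only illustration on hypothetical toy comments, not the Zorgkaart corpus or the authors' actual code; all data and names here are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def bigrams(text):
    """Lowercase a comment and return its adjacent word pairs."""
    words = text.lower().split()
    return list(zip(words, words[1:]))

# Toy labelled ratings standing in for the real rating comments.
corpus = [
    ("not taken seriously at all", "negative"),
    ("complaints were not taken seriously", "negative"),
    ("listens well and explains clearly", "positive"),
    ("very knowledgeable and explains clearly", "positive"),
]

class BigramNaiveBayes:
    def __init__(self, docs):
        self.class_counts = Counter(label for _, label in docs)
        self.feature_counts = defaultdict(Counter)  # label -> bigram counts
        self.vocab = set()
        for text, label in docs:
            for bg in bigrams(text):
                self.feature_counts[label][bg] += 1
                self.vocab.add(bg)

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        best_label, best_score = None, -math.inf
        for label in self.class_counts:
            # Log prior plus Laplace-smoothed log likelihood of each bigram.
            score = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.feature_counts[label].values()) + len(self.vocab)
            for bg in bigrams(text):
                score += math.log((self.feature_counts[label][bg] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

model = BigramNaiveBayes(corpus)
print(model.predict("she explains clearly"))      # positive
print(model.predict("i was not taken seriously")) # negative
```

As in the study, shared bigrams such as "taken seriously" carry most of the signal for sentiment; the same overlap across regions is what would make distinguishing PM initiatives hard for such a model.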


2009 ◽  
Vol 14 (2) ◽  
pp. 263-294 ◽  
Author(s):  
Christina E. Bannier ◽  
Patrick Behr ◽  
Andre Güttler



2009 ◽  
Vol 12 (01) ◽  
pp. 103-123 ◽  
Author(s):  
Lisa M. Fairchild ◽  
Susan M. V. Flaherty ◽  
Yoon S. Shin

Previous studies show that the unsolicited ratings of S&P and Fitch are lower than the solicited ratings assigned by these two agencies. The unsolicited ratings of S&P and Fitch are based on publicly available information about a firm. However, no previous study has examined the unsolicited ratings of Moody's, because Moody's does not disclose whether its ratings are solicited or unsolicited. Using Moody's solicited and unsolicited ratings collected from a survey of Japanese firms, we find that unsolicited credit ratings are still lower than solicited ratings, even though firms with unsolicited ratings provide Moody's with some degree of inside information. We also compare the unsolicited ratings of S&P with those of Moody's and find that Moody's ratings are no different from those assigned by S&P, although S&P's unsolicited ratings are based on public information. Therefore, we conclude that, regardless of the rating agency, unsolicited ratings are lower than solicited ratings because firms with unsolicited ratings provide incomplete private information to rating agencies.


2008 ◽  
Vol 32 (4) ◽  
pp. 587-599 ◽  
Author(s):  
Patrick Behr ◽  
André Güttler
