Expectations and Performance of Chinese Credit Rating Agencies

Wei Sun

Thesis submitted in January 2015 as partial fulfilment of the requirements of the University of the West of Scotland for the award of Doctor of Philosophy

W. Sun 2015

ABSTRACT

The aim of this research is to develop, verify and measure the Chinese Credit Rating Agency Expectation Gap (CCRAEG). This has been achieved through the following objectives: (1) identifying relevant conceptual theories, attributes, and responsibilities of Credit Rating Agencies (CRAs) and Chinese Credit Rating Agencies (CCRAs), and then developing the CCRAEG model through a Systematic Literature Review (SLR) of the existing knowledge of the Audit Expectation Gap (AEG), CRAs and CCRAs; (2) confirming the structure, components and attributes of the CCRAEG through interviews and questionnaires; and, (3) evaluating the CCRAEG through a cross analysis of these results.

It was identified from the literature review that a Knowledge Gap existed between (1) what society believed and (2) what CCRAs and academia perceived with respect to the scope and meaning of CCRAs’ responsibilities. While global CRAs have faced widespread criticism (particularly after the subprime crisis), it was postulated that an Expectation Gap existed between (1) society’s expectations of the role of CCRAs and (2) society’s perception of CCRAs’ performance. Drawing on the AEG, this research identified an Expectation Gap within the CCRA industry. This Expectation Gap comprised the same components as the AEG (the Knowledge Gap, the Regulation Gap and the actual Performance Gap), as well as Perception Gaps amongst different interest groups. However, the CCRAEG model developed in this research adopted a ‘Reasonable Expectation Gap’ in place of the ‘Unreasonable Expectation Gap’ (a gap component in other AEG studies) to avoid the ambiguity of arbitrating on the complex role of CRAs, the level of competition, and issues related to the management of conflicts of interest.

In Study 1, the results from a traditional literature review and three SLRs of AEG, CRAs, and CCRAs were examined, and a conceptual theory of the CCRAEG was developed to explain and measure the Expectation Gap within the CCRA industry. This study confirmed the existence of (1) the gap components, (2) the Perception Gaps, and (3) nine attributes within each gap component. The CCRAEG model was verified in Study 2 through sixty-nine questionnaires and twenty interviews. Results were evaluated through a convergent parallel approach (by comparing and relating quantitative and qualitative results) with the data collected from Study 1 and Study 2, as well as Study 3 (which incorporated 689 questionnaires and a further three interviews).

It was found that the Knowledge Gap (in 4.8-13.2% of suggested duties) and the Regulation Gap (in 65-73.5% of suggested duties) were the two main contributors to this CCRAEG. Recommendations for narrowing this Expectation Gap include: (1) educating society about the role and the code of conduct of CCRAs; (2) having the Chinese government work with academia to improve the legal system; (3) strengthening the performance of CCRAs in relation to 9.6-34.9% of suggested duties, which would reduce the actual Performance Gap; (4) introducing topics for further academic research in relation to 7.2% of suggested duties, which would narrow the Reasonable Expectation Gap; and, (5) considering the expectations and perceptions of all interest groups to improve the overall performance of the CCRA industry.

TABLE OF CONTENTS

ABSTRACT ..... 2
TABLE OF CONTENTS ..... 4
LIST OF FIGURES ..... 8
LIST OF ABBREVIATIONS ..... 11
ACKNOWLEDGEMENTS ..... 13
AUTHOR’S DECLARATION ..... 14
CHAPTER 1 INTRODUCTION ..... 15
1.1 Introduction ..... 15
1.2 Rationale and Motivation of Study ..... 16
1.3 Aim and Objectives ..... 18
1.4 The Nature of Research ..... 20
1.5 Thesis Structures ..... 21
1.6 Summary ..... 23

CHAPTER 2 BACKGROUND ..... 24
2.1 Introduction ..... 24
2.2 CCRAs ..... 24
2.2.1 Terminologies ..... 24
2.2.2 Regulations and Policies ..... 28
2.2.3 Background of CCRAs and the industry ..... 40
2.2.4 Background of the Chinese Financial Market ..... 55
2.3 Summary ..... 57

CHAPTER 3 SYSTEMATIC LITERATURE REVIEW (SLR) ..... 59
3.1 Introduction ..... 59
3.2 Understanding and Application of AEG ..... 59
3.2.1 General information of the SLR for a literature review of AEG ..... 60
3.2.2 The origin of AEG ..... 61
3.2.3 The attributes of AEG ..... 64
3.2.4 The conceptual enquiry of definitions of AEG ..... 68
3.2.5 The alternative conceptual frameworks in AEG studies ..... 76
3.2.6 Application of Porter’s AEG ..... 80
3.2.7 The proposed structure of the CCRAEG definition ..... 83
3.3 Development of the third dimension from CRA related literature ..... 83
3.3.1 SLR approach for CRA related literature ..... 83
3.3.2 Previous empirical perception studies ..... 85
3.3.3 Attributes and topics in the previous empirical studies ..... 89
3.3.4 NVivo analysis of gap elements through CRA relevant literature ..... 106
3.3.5 The CRAEG and patterns of attributes and gap components ..... 131
3.3.6 Other relevant theories ..... 132
3.4 The Attributes of CCRAEG ..... 134
3.4.1 SLR Approach for CCRA related literature ..... 135
3.4.2 The CCRAEG and attributes ..... 136
3.4.3 SLR results on CCRA history ..... 144
3.5 Summary ..... 154

CHAPTER 4 METHODOLOGY ..... 155
4.1 Introduction ..... 155

4.2 Research Hypothesis ..... 155
4.3 Research Methodology ..... 158
4.3.1 Ontology, Epistemology and Axiology ..... 160
4.3.2 Philosophy ..... 163
4.3.3 Approach ..... 165
4.3.4 Methodological choices ..... 165
4.3.5 Strategies and Time horizon ..... 165
4.3.6 Research Process and Design ..... 166
4.3.7 Data Collection ..... 169
4.3.8 Data analysis ..... 178
4.4 Research Ethics ..... 188
4.5 Limitation of Methodology ..... 188
4.6 Summary ..... 189

CHAPTER 5 FINDINGS FROM INTERVIEWS ..... 190
5.1 Introduction ..... 190
5.2 Mismatched Expectations – Study 2 ..... 190
5.2.1 CCRAs Do Not Perform the Role of Gatekeepers ..... 190
5.2.2 Perceptions about CCRAs’ Performance ..... 192
5.2.3 Knowledge Gap ..... 195
5.2.4 Lack of Regulation, Over-Regulated and Complex Supervision System ..... 196
5.2.5 The CCRA Market is not a Two-Sided Market ..... 198
5.2.6 Other Mismatched Expectations ..... 201
5.3 Guanxi Web, Ethical Relationships and Conflicts of Interest – Study 3 ..... 203
5.4 Summary ..... 206

CHAPTER 6 FINDINGS FROM QUESTIONNAIRES ..... 209
6.1 Introduction ..... 209
6.2 Hypothesis Test Results ..... 209
6.2.1 Verification of gap components – the first dimension ..... 209
6.2.2 Confirmation of Perception Gaps – the second dimension ..... 215
6.2.3 Nine attributes – the third dimension ..... 218
6.2.4 The structure of CCRAEG ..... 223
6.3 Attribute a: Ancillary service ..... 224
6.4 Attribute b: Rating fee ..... 225
6.5 Attribute c: Communication / Transparency ..... 227
6.6 Attribute d: Accuracy / Quality ..... 229
6.7 Attribute e: Independence / Conflict of Interest ..... 231
6.8 Attribute f: DCO ..... 232
6.9 Attribute g: Staff Competence ..... 233
6.10 Attribute h: Record Keeping / Submitting ..... 233
6.11 Attribute i: Self-regulation ..... 234
6.12 Summary of the overall CCRAEG measurement from questionnaires ..... 235

CHAPTER 7 DISCUSSION ..... 237
7.1 Introduction ..... 237
7.2 Original Theoretical Contribution ..... 237
7.2.1 The refined theory of the CCRAEG ..... 237
7.2.2 Limitations of others ..... 241
7.2.3 Re-structured Knowledge Gap ..... 242
7.2.4 Highlighting the Reasonable Expectation Gap ..... 243
7.2.5 Removing the ambiguous boundary between reasonable and unreasonable expectations ..... 245
7.2.6 Validity of the definition of the CCRAEG ..... 246
7.3 Original Methodological Contribution ..... 250

7.3.1 The refined methodology of the CCRAEG investigation ..... 250
7.3.2 Limitations of other statistical methods ..... 252
7.3.3 Adopted SLR ..... 254
7.3.4 Validity of research design and significance level of statistics ..... 256
7.4 Original Substantive Contribution ..... 257
7.4.1 The CCRAEG according to CCRAs ..... 257
7.4.2 The CCRAEG according to CCRAs’ customers ..... 262
7.4.3 The CCRAEG according to investor/public ..... 264
7.4.4 The CCRAEG according to regulators ..... 267
7.5 Summary ..... 268

CHAPTER 8 CONCLUSION ..... 270
8.1 Introduction ..... 270
8.2 Review of Research Aim and Objectives ..... 270
8.3 Research Hypotheses and Results ..... 275
8.4 Original Contributions to the Canon of Knowledge ..... 275
8.5 Recommendations ..... 276
8.5.1 Clarify standards to the public to reduce the Actual Performance Gap ..... 276
8.5.2 Work with academia to reduce the Regulation Gap ..... 277
8.5.3 Enhancement of Education to reduce the Knowledge Gap ..... 279
8.5.4 Future studies and the Reasonable Expectation Gap ..... 279
8.6 Research Limitation ..... 281
8.7 Further Research Development ..... 281
8.8 Summary ..... 282

REFERENCES ..... 284
APPENDICES ..... 357
Appendix 1 Rating Report Requirement in Chinese Regulations ..... 357
Appendix 2 Requirement about Rating Quality in Chinese Regulations ..... 358
Appendix 3 Product Type in China ..... 359
Appendix 4 AEG Investigations Categories (Table 4.a-4.f) ..... 360
Appendix 5 AEG Investigations Summary List ..... 365
Appendix 6 Authors Who Quoted Each Definition ..... 382
Appendix 7 Analysis and Evaluation of 40 Definitions of AEG ..... 385
Appendix 8 CCRAEG Dimension Model and Units ..... 390
Appendix 9 Perception Differences on Communication/Transparency ..... 390
Appendix 10 Perception Differences on Communication/Transparency ..... 393
Appendix 11 Perception Differences on Accuracy/Quality ..... 395
Appendix 12 Perception Differences on Accuracy/Quality ..... 398
Appendix 13 Perception Differences on Independence/Avoidance of Conflicts of Interests ..... 401
Appendix 14 Perception Differences on Independence/Avoidance of Conflicts of Interests ..... 403
Appendix 15 Perception Differences on Staff Competence ..... 405
Appendix 16 Perception Differences on Staff Competence ..... 406
Appendix 17 Perception Differences on Record Keeping and Submitting ..... 407
Appendix 18 Perception Differences on Record Keeping and Submitting ..... 408
Appendix 19 Interview Protocol ..... 409
Appendix 20 Ethics Approval ..... 411
Appendix 21 Questionnaires for Study 2 (English) ..... 412
Appendix 22 Questionnaires for Second Survey (English) ..... 419
Appendix 23 Questionnaires for Second Survey (Chinese) ..... 429
Appendix 24 Collected Responsibilities/Comparisons/Element ..... 437
Appendix 25 Chinese Regulations, Policies and Guides ..... 448
Appendix 26 The Mann-Whitney U Test Results for Perception Gaps on Attributes a, b, f, i ..... 450
Appendix 27 The Kruskal-Wallis Test and The Wilcoxon Signed-Rank Test Results ..... 451

Appendix 28 The Perception Gaps Analysis Results ..... 453
Appendix 29 Algorithms (SPSS 22) ..... 455
Appendix 30 Historical Review Method ..... 458
Appendix 31 Percentage of Choice from Interest Group for Question (Study 3) ..... 459
Appendix 32 Comparisons of Service Gap Models ..... 462

LIST OF FIGURES

Figure 1.1: Porter's AEG Model ..... 18
Figure 2.1: Credit Rating Terminology ..... 25
Figure 2.2: The Scope of Credibility ..... 29
Figure 2.3: CCRA Recognition Lists ..... 30
Figure 2.4: Qualification of Dagong ..... 32
Figure 2.5: Recognised CRAs by Area ..... 33
Figure 2.6: Entity Can Be Rated ..... 33
Figure 2.7: Entity Should Be Rated ..... 34
Figure 2.8: Supervision Departments ..... 39
Figure 2.9: Supervision and Product Type ..... 40
Figure 2.10: History of American CRAs ..... 41
Figure 2.11: Development Level of Domestic CRAs ..... 43
Figure 2.12: CCRAs Background ..... 46
Figure 2.13: Product Scope ..... 48
Figure 2.14: Company Structure of CCX ..... 49
Figure 2.15: Methods of CCRAs and Global CRAs ..... 54
Figure 2.16: Financial Reform ..... 55
Figure 2.17: Financial Assets by Region ..... 56
Figure 3.1: The SLR Strategy of AEG ..... 60
Figure 3.2: The Scope of Literature for AEG ..... 61
Figure 3.3: References and Publications ..... 62
Figure 3.4: AEG Studies and Countries ..... 63
Figure 3.5: AEG Historical Development ..... 64
Figure 3.6: AEG Attributes Analysis (1) ..... 65
Figure 3.7: AEG Attribute Analysis (2) ..... 65
Figure 3.8: Categories Used for Conceptual Enquiry ..... 69
Figure 3.9: Definition and No. of Articles ..... 70
Figure 3.10: Perception Gaps in Porter’s Model ..... 71
Figure 3.11: Six Perception Comparisons – AEG ..... 71
Figure 3.12: The First AEG Model ..... 74
Figure 3.13: Conceptual Framework Used in AEG Studies ..... 77
Figure 3.14: Attribution of Auditors’ Responsibilities ..... 78
Figure 3.15: Three Types of Information and Internality ..... 79
Figure 3.16: Service Quality Gap ..... 80
Figure 3.17: Publications Related to Porter’s Model ..... 81
Figure 3.18: Research that Adopted Porter’s Model ..... 81
Figure 3.19: Application of Porter’s Model ..... 82
Figure 3.20: The SLR Strategy of CRA ..... 84
Figure 3.21: Perception about CRAs or CRAs’ Performance ..... 86
Figure 3.22: Attribute Analysis of CRAEG (1) ..... 89
Figure 3.23: Perception Gaps about Communication ..... 92
Figure 3.24: Perception Gaps about Accuracy (1) ..... 95
Figure 3.25: Perception Difference about Rating Quality ..... 95
Figure 3.26: Perception Gap about Accuracy / Quality ..... 97
Figure 3.27: Perception Gap about CRAs’ Understanding ..... 98
Figure 3.28: Perception Gap about Timeliness of CRAs ..... 99
Figure 3.29: Perception Gap about Ranking of Factor (1) ..... 105
Figure 3.30: Perception Gaps about Ranking of Factors (2) ..... 105
Figure 3.31: Word Cloud – Academics ..... 107
Figure 3.32: Word Frequency of Each Popular Word ..... 108
Figure 3.33: Word Frequency of Each Topic ..... 109
Figure 3.34: Attribute Analysis of CRAEG ..... 111
Figure 3.35: Cluster Map of Attributes ..... 113

Figure 3.36: Accountability and Standardising ..... 120
Figure 3.37: Roles and Functions of CRAs ..... 123
Figure 3.38: Regulation Proposals ..... 128
Figure 3.39: Attributes and Research Foci ..... 131
Figure 3.40: Conceptual Theories Adopted by Others ..... 133
Figure 3.41: Discrepancies in CCRA Related Literature ..... 135
Figure 3.42: The SLR Strategy for the History of CCRAs ..... 136
Figure 3.44: The Proposed CRAEG Conceptual Framework with Three Dimensions ..... 143
Figure 3.45: Differences in History Studies about CCRAs ..... 145
Figure 3.46: 1st Stage (1987-1992) ..... 147
Figure 3.47: 2nd Stage (1993-1996) ..... 148
Figure 3.48: 3rd Stage (1997-2002) ..... 149
Figure 3.49: 4th Stage (2002-2007) ..... 150
Figure 3.50: 5th Stage (2008-Present) ..... 152
Figure 3.51: Summary of History Development Stages ..... 153
Figure 4.1: Interconnection of Worldviews, Strategies of Inquiry and Research Methods ..... 159
Figure 4.2: Research Onion ..... 160
Figure 4.3: Researcher’s Position for Each Layer ..... 162
Figure 4.4: Research Process Design ..... 166
Figure 4.5: Research Methods / Approaches ..... 168
Figure 4.6: Sampling and Coding for Interviews ..... 170
Figure 4.7: Attributes and Items in Questionnaires ..... 172
Figure 4.8: Sample Groups ..... 174
Figure 4.9: Data Log of SLR ..... 177
Figure 4.10: Qualitative Analysis ..... 179
Figure 4.11: Scales Used in the Mann-Whitney U Tests (Perception Differences) ..... 181
Figure 4.12: Scales Used in the Wilcoxon Signed Rank Tests (Gap Components) ..... 181
Figure 4.13: The Mann-Whitney U Tests on Perception Differences ..... 182
Figure 4.14: The Wilcoxon Signed-Rank Tests on Knowledge Gap ..... 183
Figure 4.15: The Wilcoxon Signed-Rank Tests on Perception Differences ..... 184
Figure 4.16: The Kruskal-Wallis Tests on Attributes According to the SLR Results ..... 186
Figure 4.17: Subtraction for Differences in Medians ..... 187
Figure 5.1: The Standard Two-Sided Market of the CCRA Industry (Issuer-Pay Approach) ..... 199
Figure 5.2: Multi-Dimensions of CCRA Market (Issuer-Pay Approach) ..... 200
Figure 5.3: Associations with the Government and Associations ..... 204
Figure 5.4: Associations with Other CCRAs ..... 205
Figure 5.5: Attributes of Expectations (Initial Analysis) ..... 207
Figure 6.1: Results for Hypotheses H1-H4 (The 1st Dimension) ..... 210
Figure 6.2: The Median Analysis of Gap Components (CCRAs) ..... 211
Figure 6.3: The Median Analysis of Gap Components (CCRAs’ Customers) ..... 211
Figure 6.4: The Median Analysis of Gap Components (Investors / Public) ..... 212
Figure 6.5: The Median Analysis of Gap Components (Regulators) ..... 212
Figure 6.6: Expectation-Performance Gap Composition Analysis (CCRAs) ..... 213
Figure 6.7: Expectation-Performance Gap Composition Analysis (CCRAs’ Customers) ..... 213
Figure 6.8: Expectation-Performance Gap Composition Analysis (Investors / Public) ..... 214
Figure 6.9: Expectation-Performance Gap Composition Analysis (Regulators) ..... 215
Figure 6.10: The Wilcoxon Signed-Rank Test Results of Perception Gaps on All Duties ..... 216
Figure 6.11: The Mann-Whitney U Test Results of Perception Gaps on Individual Duties ..... 216
Figure 6.12: Distribution of Perception Gaps (The Mann-Whitney Test Results) ..... 217
Figure 6.13: The Two Dimensions of the CCRAEG ..... 218
Figure 6.14: The Two Dimensions of the CCRAEG (CCRAs) ..... 219
Figure 6.15: Two Dimensions of the CCRAEG (CCRAs’ Customers) ..... 220
Figure 6.16: Two Dimensions of the CCRAEG (Investors / Public) ..... 221
Figure 6.17: Two Dimensions of the CCRAEG (Regulators) ..... 222
Figure 6.18: Verified Structure of Expectation and Perception in the CCRAEG ..... 223

W. Sun 2015


Figure 6.19: C10 Gap Component Analysis ........................................ 225
Figure 6.20: Participants’ Responses on Performance of C10 ........................................ 225
Figure 6.21: C10 Perception Gap Analysis ........................................ 225
Figure 6.22: E11 Gap Component Analysis ........................................ 226
Figure 6.23: Participants’ Responses on Performance of E11 ........................................ 226
Figure 6.24: E11 Perception Gap Analysis ........................................ 227
Figure 6.25: The Gap Analysis of ‘Attribute c’ ........................................ 228
Figure 6.26: The Gap Analysis of ‘Attribute d’ ........................................ 229
Figure 6.27: The Gap Analysis of ‘Attribute e’ ........................................ 231
Figure 6.28: C22 Gap Component Analysis ........................................ 232
Figure 6.29: Perception Gaps on C22 ........................................ 232
Figure 6.30: Participants’ Responses on Performance of C22 ........................................ 233
Figure 6.31: The Gap Analysis of ‘Attribute g’ ........................................ 233
Figure 6.32: The Gap Analysis of ‘Attribute h’ ........................................ 234
Figure 6.33: E16 Gap Component Analysis ........................................ 234
Figure 6.34: Participants’ Responses on Performance of E16 ........................................ 235
Figure 6.35: Perception Gaps on E16 ........................................ 235
Figure 7.1: Knowledge Gap about Regulations ........................................ 246
Figure 7.2: Interactions among CCRAs and Interest Groups (Issuer-pay Approach) ........................................ 247
Figure 7.3: Attribute Lists from Studies 1 and 2 ........................................ 255


LIST OF ABBREVIATIONS

ABS: Asset-backed securities
ACRAA: Association of Credit Rating Agencies in Asia
AEG: Audit Expectation Gap / Audit Expectation-Performance Gap
AICPA: American Institute of Certified Public Accountants
AMAC: Asset Management Association of China
AMF: Autorité des Marchés Financiers
AUDITQUAL: Audit Service Quality
CBRC: China Banking Regulatory Commission
CCIS: China Credit Information Service Company
CCRAEG: Chinese Credit Rating Agency Expectation(-Performance) Gap
CCRAs: Chinese Credit Rating Agencies
CCRNFS: Commission of Credit Rating and National Financial Security
CCXAP: China Chengxin Asia Pacific
CCXC: China Chengxin Financial Consultancy
CCXE: China Chengxin Information Technology
CCXI: China Chengxin International Company
CCXM: China Chengxin Credit Management
CCXR: China Chengxin Security Rating Company
China Ratings: China Credit Rating Company
CICPA: The Chinese Institute of Certified Public Accountants
CIRC: The China Insurance Regulatory Commission
CNKI: China National Knowledge Infrastructure
CRA: Credit Rating Agency
CRAEG: Credit Rating Agency Expectation Gap
CSRC: China Securities Regulatory Commission
DCO: Designated Compliance Officer
ESMA: European Securities and Markets Authority
FSC: Financial Supervisory Commission in Taiwan
HKMA: Hong Kong Monetary Authority
ICAEW: Institute of Chartered Accountants in England and Wales
IOSCO: International Organization of Securities Commissions
MOF: Ministry of Finance of the PRC
NAFMII: National Association of Financial Market Institutional Investors
NPCC: The National People’s Congress Committee
NRSROs: Nationally Recognized Statistical Rating Organizations (in the USA)
PBC: The People’s Bank of China
SAC: Securities Association of China
SERVPERF: Service Performance
SERVQUAL: Service Quality
SETCC: State Economic and Trade Commission of the PRC
SFC: Securities and Futures Commission in Hong Kong
SLR: Systematic Literature Review
SMEs: Small and Medium-sized Enterprises
SOEs: State-Owned Enterprises
TRC: Taiwan Ratings Company
USA: United States of America
11315: 11315 Corporate Credit Checking System (Beijing Chengxin Online Checking Limited Company)


ACKNOWLEDGEMENTS

This research project owes much to many people. I wish to thank staff in the Centre for Academic Practice and Learning Development, the Printing Department, ICT Services, the Graduate School, the Estates and Building Department, and the Library, as well as the administration staff, the faculty finance manager and the Assistant Dean of Research and Education of the Business School of the University of the West of Scotland, for their assistance, encouragement and training in different aspects of the PhD programme. Moreover, I gratefully acknowledge the directors, managers and staff of the Chinese credit rating agencies, officers from supervision departments, and the other anonymous participants from banks, financial firms and the public, as well as the Institute of Chartered Accountants of Scotland, for their valuable time, comments, experience and support. Without them, this study could not have drawn on such detailed, in-depth information.

In addition, special thanks go to my friends and colleagues during my studies: Amanda Simpson, Christine Reilly, Dorn Carran, Grant Gray, Guido Scheer Schmidt, Duo Long, Heather Lambie, Iain McLellan, Jeanette Macloughin, John Sutherland, Juergen Seufert, Kaibo Xu, Malcolm Sutherland, Margaret Rose Train, Richard Jefferies, Samantha Drake, Torsten Howind, Urszula Roman, William Wilson, and many others. I should also thank my managers and colleagues at my workplaces: Allan Burns, Anne Clare Gillon, Brenda Hughes, Brian Booth, Bobby Mackies, Declan Bannon, Donna O’Neill, Eileen O’Neill, Ellen Farmer, Jane Russell, Linda Hunter, Linda Murray, Meg Dunn, Moira Divers, Sharon McGoldrick, and Thanos Kouroukils, as well as the lecturer in Statistics, Dr. Alan Terry, and the academic staff within the Business School: Prof. Sam McKinstry, Prof. John Struthers, Prof. Edward Borodzicz, and Prof. Michael Danson (the previous Dean of Research and Commercialisation). Their support will always be remembered, and can never be repaid.

Finally, and importantly, a great debt of gratitude is owed to my husband (Martin) and all my family members in both China and Scotland, especially my parents (Ping and Ximing), my parents-in-law (Anne and Jim), and my brother-in-law and sister-in-law (Kevin and Michelle), for their forbearance, encouragement and support throughout the course of the research. Thoughts and ideas were shared and discussed with my husband (who is also a researcher) from the perspectives of marketing and research methods on a daily basis. This project would not have been possible without my parents’ financial support for tuition fees, accommodation, software, databases, books, research equipment and travel expenses, as well as their networks and relationships within the Chinese financial market.


AUTHOR’S DECLARATION

I hereby declare that this thesis has not been submitted for another PhD or comparable academic award. This research is not a collaborative group project, and I am the sole author of this thesis. Except where explicit reference is made to the contribution of others, this dissertation is the result of my own work. The format and binding of this thesis accord with the requirements and regulations of the University of the West of Scotland.


CHAPTER 1 INTRODUCTION

Since the subprime crisis of 2008, the leading firms of the Chinese Credit Rating Agency (CCRA) industry have become more recognisable to the local market (Xie, 2014). For example, Dagong Global and China Chengxin entered the European credit rating market in 2010 and 2012 respectively, with permission from the European Securities and Markets Authority (Chen, 2012; Dong, 2012; Yao and Zhang, 2011). Weston (2012) suggested that “…the Chinese credit rating industry is currently dominated by Fitch, Moody’s and S&P…[these companies] …have a standalone combined market share of over 67% in the Chinese credit rating industry”. However, Kidney et al. (2015) indicated that China Chengxin International Credit Rating, China Lianhe Credit Rating and Dagong Global Credit Rating are the three main agencies, controlling 80% of the Chinese market, because foreign companies are allowed to hold no more than 49% of the shares in joint ventures between global Credit Rating Agencies (CRAs) and CCRAs. On this view, CCRAs dominate the local market.

Despite this growing experience, Lu and Wang (2014) highlight that a Knowledge Gap still exists in the public’s understanding of the role of CRAs. Wang and Long (2012) attribute this to the Chinese public’s limited understanding of the credit economy, which arose from the influence of the historical socialist economy and inefficient media outlets (Tang, 2010), as well as inconsistencies in the explanations of rating standards, methodologies and terminologies issued by each CCRA (Zhi, 2009). Moreover, the performance and development of CCRAs are influenced by political factors, the development of the bond market, historical issues, and the development of legislation. For example, Elliott and Yan (2013), as well as Kennedy (2008), suggested that the political influence of State-Owned Enterprises (SOEs) and the limited development of the Chinese bond market have had a negative impact on the development of CCRAs. Xu (2007) explained that national government regulations and policies concerning the bond market have also been restrictive in the historical development of CCRAs. (These issues are reviewed in detail in Chapter 2, and the historical development of CCRAs is analysed in Section 3.4.3 through a Systematic Literature Review (SLR).) Therefore, this research investigated the Knowledge Gap, the Regulation Gap, the actual Performance Gap, and the Reasonable Expectation Gap within the CCRA industry, according to the expectations and perceptions of different interest groups (definitions of these gaps are reviewed and developed in Chapter 3, and the definitions adopted for the investigation framework are stated in Chapter 7).


1.1 Introduction

This chapter begins with a clarification of the research rationale of this study, in accordance with previous literature on CRAs, CCRAs, and the Audit Expectation Gap or Audit Expectation-Performance Gap (AEG). This is followed by the aims and objectives of this research project, as well as a description of the research plan and research instruments. The chapter concludes with a structural outline of the remainder of this thesis and a summary of the issues that will be discussed in the next chapter. The originality of this research is demonstrated through the SLR results on the Credit Rating Agency Expectation Gap (CRAEG) and the Chinese Credit Rating Agency Expectation Gap (CCRAEG) in Sections 3.3 and 3.4, and the contribution of this thesis to the canon of knowledge is discussed in Chapter 7.

1.2 Rationale and Motivation of the Study

China has caught the attention of economists and politicians because of its increasing economic power (Hoium, 2011). Its Gross Domestic Product exceeded that of Japan, making China the world’s second-largest economy in 2010 and 2011 (Hong et al., 2011). China is expected to become the largest economy sometime between 2015 and 2030 (Maddison, 2007), even though the Subprime Crisis is still reverberating through the global market. China’s economy effectively survived the Asian financial crisis of 1997 (Sharma, 2003), and it appears to have been largely unaffected by the Subprime Crisis of 2008; indeed, the economic growth of China has accelerated (Jacques, 2013). However, this alone does not prove that China has the best financial strategy and system: its financial market is still not considered sufficiently advanced; its legislation concerning certain financial products is still in the process of development; and its rating methodologies might still be immature due to a lack of historical data. Nevertheless, the most recent global financial crisis brought China a ‘once in a century opportunity’, which might result in China being the biggest winner from the crisis (Huang, 2010a; Yao and Zhang, 2011).

Moreover, CCRAs have developed their own rating methodologies, and in 2010 one of them published an initial sovereign rating report, becoming “the world’s first non-Western CRA who provided a sovereign credit rating”, which was considered a contribution to the world’s knowledge (Ministry of Commerce of PRC, 2010, translated from Chinese). Nevertheless, most CRA research excludes or ignores the information released by CCRAs and the rating methodologies and models they have established. It should also be noted that Standard and Poor’s (S&P) was not the first to downgrade the US sovereign rating; three other agencies, namely Weiss, Egan-Jones and Dagong Global, had downgraded it before the big three (S&P, Fitch and Moody’s) (Baden, 2011). Dagong Global, which is a CCRA, caught the attention of international investors when it published its own sovereign rating system for the first time in 2010 (Ministry of Commerce of PRC, 2010). The usefulness of the information in CCRAs’ ratings, the regulation of CCRAs, and the accountability of CCRAs have been subjects of discussion in recent years. There is also a lack of knowledge and published literature about: (1) the performance of CCRAs; (2) how CCRAs have developed so far; and, (3) whether the issues of the global CRA industry also exist among CCRAs. In addition, certain requirements of the regulatory system have enabled some global CRAs to hold a monopoly position in the global credit rating market for over a century. CRAs were implicated in the Subprime Financial Crisis and were accused of dereliction of duty and slow reaction to events. S&P, one of the three largest global CRAs, downgraded the US sovereign credit rating to ‘AA+’, having previously issued an ‘AAA’ rating for many decades. Warren Buffett, who is considered in China to be one of the world’s most significant investors, stated firmly that S&P’s downgrade did not make any sense (Claman, 2011). Some researchers believed that ‘Rating the Raters’ was required (e.g. Smith and Walter, 2001; U.S. Congress, 2002; Baker and Mansi, 2002; Strier, 2008; Mathis et al., 2009). This was necessitated especially by the ‘Rating War’ (Folbre, 2011) among global CRAs, which have provided verification and certification services, acting as ‘gatekeepers’ in financial markets, for over a century (Coffee, 2002).

From the literature review it was found that: (1) there is a lack of research concerning expectations and perceptions of CRAs in the Chinese financial market; (2) most previous research has focused on rating methodologies or rating models; and, (3) there is a lack of research on the development of CCRAs, although a number of studies have been conducted on CRAs. Using Systematic Literature Review (SLR) results on the AEG, CRAs and CCRAs, this research investigates society’s expectations and perceptions of CCRAs. The ‘understanding’ and ‘perception’ of the role of CRAs in financial markets need to be considered and comprehended (Duff and Einig, 2007; Ferguson et al., 2007; Duff and Einig, 2009a; White, 2009; Lynch, 2008; Sinclair, 2010; Solomon and McCluskey, 2010; Financial Crisis Inquiry Commission, 2011; Helleiner, 2011; Priebe, 2012; Weston, 2012). The concept of the ‘Expectation Gap’ is ascertained before investigating debates such as the role of CRAs, the ‘rating war’, and the ‘rating game’. This research includes an examination of the AEG, research into which expanded and intensified around the world from 1974 onwards. The Chinese Credit Rating Agency Expectation Gap (CCRAEG), or Credit Rating Agency Expectation Gap (CRAEG), is also investigated. Furthermore, it is beneficial for CRAs to be aware of expectations from the market, as they have a responsibility to provide relevant information to investors and the public, who in turn need to develop a clear understanding of: (i) credit risk; (ii) the attributes and limitations of each CRA’s credit opinion; and, (iii) the related credit risk assessment framework, policies and practices (ACRAA, 2011). Additionally, it is important to understand how people working in financial markets perceive CRAs, and how the problem of ‘Knowledge Gaps’ in financial markets can be solved more efficiently (Kerwer, 2005).

1.3 Aim and Objectives

This research applied Porter’s Audit Expectation-Performance Gap (AEG) model from auditing to credit rating, to examine the observed performance of CRAs from a Chinese financial market perspective. Moreover, a possible solution for narrowing the knowledge gap could then be proposed. The main model used for this research, Porter’s AEG model, is displayed in Figure 1.1.

Figure 1.1: Porter's AEG Model

Source: Porter and Gowthorpe (2004, p.6)

Porter’s AEG model examines the problems of regulation, performance and unreasonable expectations with an issue-by-issue approach (Porter, 1990, 1993, 1996). The model has been applied in (i) empirical research in various countries, (ii) cross-sectional comparison studies between two countries, and (iii) longitudinal research in one country at different times. Porter’s model has been applied in Finland (Troberg and Viitanen, 1999); Malaysia (Lee et al., 2008); the USA (Cohen et al., 2010); China (He, 2010); Thailand (Lee et al., 2010); Nigeria (Oseni and Ehimi, 2012); and Iran (Saeidi, 2012). There are relatively few cross-sectional and cross-cultural studies: a comparison study of auditors’ responsibilities in the United Kingdom and New Zealand (Porter and Gowthorpe, 2004; Porter et al., 2012a; 2012b), and a comparison study between India and Iran (Mahadevaswamy and Salehi, 2009). The study by Porter and Gowthorpe (2004) also examined expectations of auditors’ responsibilities in New Zealand longitudinally, between 1989 and 1999. Porter’s model was also adopted in a study of corporate fraud in Dutch-speaking countries (Hanssink et al., 2009), and of the role of auditing education in Bangladesh (Siddiqui et al., 2009). This research includes the development and examination of a CCRAEG model, based on Porter’s AEG model, in an effort to explore and measure understanding and expectations of CCRAs within the Chinese financial market.

The main purpose of this research is to explain the relationships, components, attributes and structure within the CCRAEG. In light of existing suspicions of declining rating quality within the CRA and CCRA industry, this thesis investigates what market participants actually expect with regard to the role of CCRAs. Having explored the literature on the AEG, CRAs and CCRAs, the CCRAEG can be reduced through a better understanding of the problem. As such, the Research Aim of this research can be specified as follows:

To develop, verify and measure the CCRAEG amongst four identifiable stakeholder groups (CCRAs, CCRAs’ customers, regulators, and investors and the public) in line with their expectations and perceptions of the responsibilities and performance of CCRAs.

This research aim underpins a tentative theory of the CCRAEG, the potential practical upshot of which could be a more efficient CCRA market, achieved through reducing the Regulation Gap, the Knowledge Gap, and the Reasonable Expectation Gap. Pragmatism was adopted as the main methodology in this study, for the purpose of providing a solution to reduce this CCRAEG (see Chapter 4).

In order to facilitate the achievement of the research aim, research objectives should be indicated in detail as “…specific tasks or components researchers will undertake…” (Lankshear and Knobel, 2004, p.51), and should be detailed as “…research topics or issues the project plans to investigate” (Thomas and Hodges, 2010, p.39). Therefore, the Research Objectives of this project are specified below:

1. Critically review existing knowledge on the AEG through a Systematic Literature Review (SLR);
2. Identify and critically evaluate the role of CRAs via an SLR;
3. Review the existing responsibilities associated with CCRAs according to the results of the SLR;
4. Construct the structure, components and attributes of the CCRAEG model through the SLR, questionnaires and interviews;
5. Develop a method to measure the CCRAEG according to the empirical literature on the AEG and CRAEG;
6. Verify the validity of the tentative CCRAEG model via questionnaires; and,
7. Evaluate the CCRAEG according to the results of the SLR, questionnaires and interviews.

These objectives were formed in line with the research tasks (see Chapter 4). The first part of this study was the development of a CCRAEG model (through Study 1), using the results of literature reviews on expectations and perceptions of CRAs, CCRAs and the AEG. This CCRAEG model was modified and examined through the results of interviews and questionnaires in Study 2. Study 3 concentrated on measuring the CCRAEG, and considered how the law and policy governing CCRAs could be improved and what further research is required.

1.4 The Nature of the Research

This research seeks to develop a substantive theory to describe the role of CCRAs. It examines the expectation gap between society’s expectations and the observable performance of CCRAs. The literature-based research engendered a structural analysis of the expectation gap in credit ratings, and a provisional substantive theory to explain the role of CCRAs in the Chinese financial market, derived from Porter’s AEG model. Empirical research was then conducted to test the hypotheses concerning the components and structure of the CCRAEG. It should be noted that ‘substantive theory’, as used here, is a “set of propositions which furnish an explanation for an applied area of inquiry” (Grover and Glazier, 1986, p.233), constructed through the process of identifying the differences and similarities of contextualised instances and patterns (Adelman, 2009). As such, the substantive theory in this research describes the components and attributes which can be used to interpret mismatched expectations from four interest groups: (1) CCRAs’ staff; (2) CCRAs’ customers; (3) officers from supervision departments; and, (4) investors and the general public. This research assesses how, where and why there is a CCRAEG in the Chinese credit rating industry with reference to suggested responsibilities. The suggested CCRA responsibilities were mostly collected from existing regulations, company policies, interviews and related literature, in an effort to determine the Knowledge Gap (including misunderstanding), the Regulation Gap and the actual Performance Gap within the CCRAEG in China. The expectations and perceptions of CCRAs’ responsibilities in the Chinese financial market were explored through interviews conducted with subgroups of the four interest groups (including CCRAs’ managers, bank managers, financial managers, managers in investment companies, journalists, officers from regulatory bodies, professors, and academic researchers). In addition, this research seeks to establish a new framework to assess and examine the CCRAEG. This framework was created using a statistical procedure, informed by a comprehensive observation of different groups of participants in the Chinese financial market. The results of the questionnaires (nqr1 = 69) and interviews (nin1 = 20) in Study 2, concerning the substance and scope of CCRAs’ responsibilities, provided the foundation for the structure of this framework, which addresses each responsibility within the CCRAEG. Some new issues were discovered in Study 2 that had not been mentioned in previous research and reports about CCRAs found in the literature review. For example, in terms of CCRAs’ definitions and business scopes, there are differences between the perceptions of the Chinese government and those of CCRAs. Therefore, the composition of each component in the structure of the CCRAEG model was constructed with particular reference to the specific phenomena and background of the financial market in China and the information collected from Studies 1 and 2.
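As an aside, the between-group comparisons in this framework rest on the Mann-Whitney U test. The following is a minimal, dependency-free sketch of the U statistic for two hypothetical groups of 5-point Likert responses; it is purely illustrative and is not the software or procedure used in the thesis itself.

```python
# Illustrative sketch of the Mann-Whitney U statistic (not the thesis's actual
# analysis code). The sample data below are hypothetical Likert responses.

def mann_whitney_u(group_a, group_b):
    """Return the smaller U statistic for two independent samples (mean ranks for ties)."""
    combined = sorted(group_a + group_b)

    def mean_rank(value):
        # average the 1-based ranks of all tied occurrences of `value`
        first = combined.index(value) + 1
        last = len(combined) - combined[::-1].index(value)
        return (first + last) / 2

    r1 = sum(mean_rank(v) for v in group_a)      # rank sum of group A
    n1, n2 = len(group_a), len(group_b)
    u1 = r1 - n1 * (n1 + 1) / 2                  # U for group A
    return min(u1, n1 * n2 - u1)                 # report the smaller U

# e.g. two interest groups rating the same duty on a 5-point scale
print(mann_whitney_u([1, 2, 2], [4, 5, 5]))  # -> 0.0 (no overlap between groups)
```

A small U (relative to its null distribution) indicates that the two groups' ratings differ systematically, which is how Perception Gaps between interest groups are flagged in Chapter 6.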

1.5 Thesis Structure

By applying Porter’s AEG model, this research examines people’s understanding of CCRAs and the perceived performance of CCRAs from a Chinese financial market perspective. The content of each gap was recognised, explored and measured based on the gap components adapted from Porter’s study. These four components are:

(1) A misunderstanding of, and a lack of knowledge about, CRAs’ responsibilities from the Chinese financial market’s point of view (Knowledge Gap);
(2) Reasonable expectations of CRAs’ responsibilities that are reflected in the academic literature but not stated in the regulations (Reasonable Expectation Gap);
(3) Perceived loopholes in the laws and policies governing CRAs in China (Regulation Gap); and,
(4) ‘Due diligence’ required of Chinese CRAs by Chinese laws and policies that is not performed, as observed by society (Performance Gap).

(More detailed definitions are provided in Section 7.2.1.) For the purpose of structure, the remainder of this thesis is presented in seven chapters as follows:

i. CHAPTER 2 - BACKGROUND. This chapter reviews the background and performance of CCRAs, and examines the relevance and substance of the CCRAEG. In addition, issues relevant to the CCRAEG are identified for the development of the model through a historical review of CCRAs.

ii. CHAPTER 3 - SYSTEMATIC LITERATURE REVIEW. This chapter explains how the CCRAEG was developed, and examines and evaluates the relevant theories of the AEG, CRAs and CCRAs. This is undertaken through three SLRs addressing different research questions on the development and content of the AEG, the role of CRAs, and the role of CCRAs. Furthermore, comparisons are made amongst empirical CRA perception studies to establish the attributes used in the investigation.

iii. CHAPTER 4 - METHODOLOGY. Within this chapter, the research design, research process and data analysis methods are clarified with an evaluation of the methodological and theoretical frameworks. Moreover, the research approaches, instruments and strategies are justified. Furthermore, possible research-related ethical issues are addressed. Finally, the possible limitations of the methodology are explained.

iv. CHAPTER 5 - FINDINGS FROM INTERVIEWS. The expectations and perceptions of the four interest groups are described, and mismatched expectations are analysed. This chapter reveals that participants across the interest groups have distinct perceptions of CCRAs’ performance and responsibilities. Moreover, their Knowledge Gap concerning roles and terminologies is clarified. Furthermore, the complicated ethical relationships are analysed in terms of the Guanxi Web (the Chinese ethical relationship web).

v. CHAPTER 6 - FINDINGS FROM QUESTIONNAIRES. The first part of this chapter presents the results of the empirical research, including data from questionnaires and interviews. Mann-Whitney U test results show that Perception Gaps exist for certain duties. The existence of a Knowledge Gap, a Reasonable Expectation Gap, a Regulation Gap and a Performance Gap is not verified through Wilcoxon Signed-Rank tests; however, the median analysis results demonstrate the existence of differences between gap boundaries. Kruskal-Wallis test results show that the pattern of which suggested responsibilities are required in the SLR results and the regulations differs among the nine attributes. In the second part of this chapter, the quantitative empirical data are explained in more detail with reference to the nine attributes. The extent of the gaps is measured through: (i) the percentage of suggested responsibilities which are performed well; (ii) whether they appear in the existing regulations and literature; and, (iii) whether they are expected by the participants (with reference to the role of CCRAs).

vi. CHAPTER 7 - DISCUSSION. This chapter initially provides a conceptual theory to explain how the role of CCRAs in society has developed, and summarises it. The findings stemming from a cross-analysis (or convergent parallel analysis) are presented. This chapter also reviews the structure, composition and extent of each component in the CCRAEG in line with the three categories of contribution to knowledge, namely the:
a. Theoretical contribution;
b. Methodological contribution; and,
c. Substantive contribution.

vii. CHAPTER 8 - CONCLUSION. The final chapter provides a summary of the research results, and the implications of this research are discussed within a global financial market setting. Moreover, this chapter discusses advice and suggestions on how to narrow the gap. Further research is suggested, which may be conducted by employing the same approach and the conceptual theory of the CCRAEG. Furthermore, the limitations of the research are discussed. Finally, possible future research publications are outlined.
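As a further illustrative aside, the Wilcoxon Signed-Rank tests mentioned above compare paired ratings, for example a group's expectation of a duty against its perceived performance. The sketch below computes the W statistic for hypothetical 5-point responses; the thesis itself used standard statistical software rather than this code.

```python
# Illustrative sketch of the Wilcoxon signed-rank statistic W (not the
# thesis's actual analysis code). Data are hypothetical 5-point responses.

def wilcoxon_w(before, after):
    """Return the smaller signed-rank sum W for paired samples (zero differences dropped)."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    ranked = sorted(abs(d) for d in diffs)

    def mean_rank(v):
        # average the 1-based ranks of all tied occurrences of |difference| v
        first = ranked.index(v) + 1
        last = len(ranked) - ranked[::-1].index(v)
        return (first + last) / 2

    w_plus = sum(mean_rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(mean_rank(abs(d)) for d in diffs if d < 0)
    return min(w_plus, w_minus)

# expectations vs perceived performance of one duty, rated by five participants
print(wilcoxon_w([5, 4, 5, 4, 5], [3, 4, 2, 3, 4]))  # -> 0 (all shifts one-directional)
```

A W near zero means nearly all paired differences point the same way (here, performance consistently rated below expectation), which is the pattern a gap component test looks for.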

1.6 Summary In this chapter it has been noted that the role of CRAs is under question and that CRAs face widespread criticism. It has been postulated that society's perceptions and expectations of CCRAs should be examined and evaluated, given the current involvement of CCRAs in the global market. This chapter has outlined the fundamental research design employed and included an overview of this thesis. An SLR was conducted to ascertain the suitability and feasibility of Porter's AEG model and the CCRAEG model, as well as to identify all the possible CCRA responsibilities. The next chapter provides general background information on CCRAs, the existence of mismatched expectations, and relevant issues such as the complexity of terminologies and regulations, the historical financial environment, and CCRA market information.


CHAPTER 2 BACKGROUND 2.1 Introduction Within this chapter, the background of CCRAs is explored and analysed in order to ascertain issues that are relevant to the CCRAEG, and the major causes of this gap are discussed. A traditional literature review was conducted in order to identify and investigate these issues, in an effort to develop the model and support the further discussion and synthesis in Chapter 3 (which utilises an SLR to review the theoretical frameworks, related theories and analyses related to the CRAEG).

2.2 CCRAs In order to gain an understanding of the terminologies used in academia, government and industry, the original meanings and backgrounds of terms such as “Credit” and “Credibility” are explored from a Chinese perspective. Various interpretations of “Credit Rating” adopted by the Chinese government, by local authorities, and by academia and journalists are outlined and illustrated through a brief contextual insight into the different perceptions and uses of the term. Moreover, the background of some of the main CCRAs, along with some important regulations related to them, is provided and analysed in order to deliver a foundation for further analysis of issues related to the CRAEG. This background review is separated into four sections, namely: (1) the origin of terminologies; (2) regulations and policies; (3) background of CCRAs and the industry; and, (4) background of the Chinese financial market.

2.2.1 Terminologies The People’s Bank of China (PBC) has been using “Zhengxin” as the term for the overall credit rating system, although the direct translation of this term is only “Credit Checking”. There is some confusion within the industry about the definitions of credit checking, credit rating, and Zhengxin. There are two main problems with the meaning and application of these terminologies. First, there is a variety of terms. Second, there are Chinese terms which may take on different meanings when translated into English. The first part of this section explains these two problems; the second part then reviews them at a more in-depth level through the historical development of these terms.

2.2.1.1 Terms and definitions The first problem is that authors, journalists and authorities use different terms, eight of which are outlined in Figure 2.1. Consequently, ‘credit rating’ can be expressed through several similar terms, for example the first five terms (a), (b), (c), (d) and (e); these are completely different in meaning from terms (f) and (g), or from (h).

Figure 2.1: Credit Rating Terminology

Meaning | Pinyin* | Source Example
Credit Rating | (a) Xinyong Pingji | Government regulation: “Guiding opinion about credit rating management by PBC” (PBC, 2006); names of CCRAs: Dagong Global Credit Rating Company, China Cheng Xin Credit Ratings Company; journalists: Niu and Ji (2012); academic authors: Bai (2010b); Bao (2009); Deng (2009)
Credit Rating | (b) Xindai Pingji | SFC (2011)
Credit Rating | (c) Xinyong Pingdeng | Government policy: “Management policy for credit rating businesses” (Taiwan Authorities, 2002)
Credit Rating | (d)/(e) Zixin Pingji / Zixin Pinggu | Government regulations: CSRC (2003a); PBC (2008a); names of CCRAs: Shanghai Brilliance Credit Rating and Investors Company, Peng Yuan Credit Rating Company, Lianhe Rating Company; name of association: Association of Credit Rating Agencies in Asia (ACRAA); journalists: Yuan (2008a); academic authors: He and Sun (2005); Zhang (2007); Tang (2008)
Concept in Accounting (asset and qualification) | (f)/(g) Zizhi Pingji / Zizhi Pinggu | Academic authors: Wang (2004); Wang and Cao (2007); journalists: Hao (2011); accounting firms: BDO China, ShuLun Pan public accountant LLP; credit checking firms: Beijing credit rating company
Credit Checking or Investigation, or Credit Information and Reference Collection | (h) Zhengxin | Journalists: Zhu (2015); academic authors: Li (2010); government departments: PBC reference centre, PBC credit information system bureau

*Pinyin is the English alphabetic spelling and pronunciation of Mandarin

The second problem is the translation of these terms. There is a risk that, without a full understanding of the cultural background, the translation of the terms will be inaccurate. The overall classification is called “Zhengxin”, a common parlance which basically means “to verify the quality”, and this might be considered part of the linguistic, historical and cultural heritage of China. This phrase is used in the name of the supervision department of CCRAs. “Zhengxin” has been translated into English by the Chinese government as “Credit Information” or “Credit Reference” on the PBC website, although its previous meaning was credit checking. According to the dictionary, credit checking usually implied unsolicited ratings for borrowers, issuers or the object being rated, without rating payments (Xia, 1979). Therefore, “Zhengxin” may not be a pertinent term to apply to an assortment of credit rating and credit reference or information related issues, because its direct meaning is credit checking. Nevertheless, this term has been used by the Chinese government since 2011. Moreover, ‘credit rating’, ‘credit reference’, ‘credit checking’, and ‘credit information’ are not used in the same way in China as in other countries. There may be a separation or differentiation between terms such as ‘credit checking (or reference)’ and ‘credit rating’ because they are considered to be different types of business. For example, in the UK, Experian and Equifax are “credit reference agencies” (also called credit checking agencies), which provide credit scores of individuals to loan providers through an investor-pay approach (Atkinson, 2010); whereas S&P’s, Moody’s and Fitch are global ‘credit rating’ agencies providing ratings of countries, enterprises, companies or financial products through an issuer-pay approach (Marston, 2013).

2.2.1.2 Credit rating, credit checking and credit information In light of the ambiguity over the exact meaning of these terms, it is important to seek a comprehensive explanation of the origin and variety of meanings and phrases of ‘Credit’ and ‘Credibility’. According to the Counsellor of Real Estate (2013, p.19), “...the Chinese market remains a closed market these days...because of the significant language barrier…” with clients or partners from other countries. This language barrier also appears in the translation of management ideas in the context of corporate codes of ethics (Helin and Babri, 2014). Therefore, an in-depth comprehension of the ‘Understanding Gap’ or the ‘Knowledge Gap’ from all stakeholders is fundamental for understanding the performance of CCRAs.

‘Credit’ and ‘Credibility’ have dominated traditional Chinese philosophy since ancient times through the word ‘Xin’, which describes attitudes and emotions towards ancestors or gods in sacrificial ceremonies. In addition, the phrase ‘Xinyong’ has been found within two ancient manuscripts, The Book of Documents and The Book of Poetry. The former recorded historical facts of the period between 1046 BC and 771 BC, and was written by Confucius (551 BC-479 BC); the latter contained 300 poems from different authors and was chronicled somewhere between the 11th Century BC and 771 BC (Kong, 1984; Yan and Li, 2006; Ma, 2008a; Gao, 2010). In the wake of the early development of Confucianism in China, ‘Credibility’ became one of the crucial elements of ethical standards, with references made within The Book of Rites and The Book of Changes (written by Confucius and his students) to record the requirements, policies and the workings of King Wen-Wang of Zhou. Mengzi (372 BC - 289 BC), a fourth-generation Confucianist, elevated its importance by suggesting that it could have a great influence on how people related to one another (Chen, 1999; Sun, 2002; Fan, 2005; Long, 2007).

The modern meanings of ‘Credit’ and ‘Credibility’ remain the same, but from an economic perspective they were historically difficult for the Chinese to accept, partly as a result of the immature socialism prevalent in China’s more recent past. To begin with, according to the definition from a Chinese dictionary, ‘Xinyong’ (Credit), from an economic perspective, is described as a special transforming format of price or value. This term takes on distinct characteristics in different economic systems: in capitalism, credit is the format of the movement of financial capital of a loan or debt; in a socialist system, however, credit is centralised by the national bank, and it is a method of mobilising idle funds with advanced planning according to national economic planning activities (Xia, 1979, p.247; Zhu, 2002). The Ministry of Finance in China (1992) has suggested that, regarding credit in the banking industry, there were five differences between capitalism and socialism: (1) economic foundation; (2) production relationships; (3) approaches of capital movement; (4) directions of capital movement; and, (5) social and ethical responsibilities.
Therefore, it might be better to refer to the credit rating system as the ‘Xinyong’ system rather than the ‘Zhengxin’ system, with reference to their meanings in the dictionary.

Historically, credit checking is different from credit rating. The purpose and function of credit checking agencies and CRAs have always been separate, according to the description in the dictionary. The first Chinese credit checking agency was established by several big banks in Shanghai in 1932. Such agencies were set up to investigate issues related to production, such as the profit and loss statements of factories or firms, as well as to provide personal credit information and investigation reports to special subscribers. In addition, they could also undertake commissions to investigate specific issues by providing individualised reports to the consigners. Their main income streams included payments for these reports, and allowances provided by banks and investor capitalists (Xia, 1979).


As a result of the use of ‘Zhengxin’ across the credit rating, credit checking and credit information industries, there are various explanations of the meaning of credit or credibility. First, most authors within academia or journalism (and even among the general public) use the explanation and definition provided by the PBC (2010). For example, Wu (2013) declares that three Zhengxin systems will be established within the Zhengxin industry: financial Zhengxin, administrative management Zhengxin, and commercial Zhengxin.

Second, Chinese credit checking agencies, and most recent Chinese legislation related specifically to the credit checking industry, also use ‘Zhengxin’ to mean credit information. For example, 11315 (11315 Corporate Credit Checking System, or Beijing Cheng Xin Online Checking Limited Company), supported by the State Council (2013, No 631), uses ‘Zhengxin’ in the policy and on its website. However, CCIS (China Credit Information Service Company) and 11315 apply the term ‘Zhengxin’ differently, although both interpret it as credit information management. 11315 delivers credit information about companies to the public for free, whereas CCIS charges for services showing clients how to implement effective credit management systems and reduce business risk (Law-library, 2013; 11315, 2013; CCIS, 2013a).

Third, recent studies have indicated that the definitions and usage of these terms vary distinctly across subjects such as ethics, law and economics (Wu, 2001; Wu, 2002; Li, 2004; Guo, 2004; Jiang, 2005; Ou and Xiao, 2005; Wang, 2007; Hong, 2008; Ye, 2010). The scope of ‘credibility’ in the credit system is outlined in Figure 2.2, which systematically highlights the structure, scope and categories of credibility in relation to different types of credit products (Ou and Guo, 2007). It illustrates that credit in commodities circulation is different from credit within capital circulation, with only the credit in capital circulation being credit-checked by the Chinese government.

2.2.2 Regulations and Policies As with global CRAs, rating-dependent legislation has played an important role in the development of CCRAs (see Section 2.2.2.2). Such legislation has provided a platform for CRAs and CCRAs to obtain credit rating licences through recognition policies, and it has increased rating demand through regulatory rating requirements for issuers, which require assured ratings of certain types of products as a threshold. However, there are different principles and guidelines on CCRAs from various government departments, and even from professional bodies such as associations in China and Asian professional bodies. As such, they will be examined in relation to the following four areas: (1) CCRA recognition policies in China; (2) rating requirements concerning the types of rating objects; (3) rating requirements regarding rating quality; and, (4) CCRA supervision policies in Mainland China. It should be noted that the comparison between the Chinese regulations and the IOSCO code is examined as the Regulation Gap (explained in Chapter 4); the comparison results are reviewed in Appendix 24, Chapter 6, and Chapter 7. However, regulations from European countries are not reviewed in this research, because they do not appear to be closely relevant to the Chinese local financial industry.

Figure 2.2: The Scope of Credibility

Source: Translated from Ou and Guo (2007, p.5)

2.2.2.1 CCRA Recognition Policies in China The CCRA recognition lists reflect conflicts and dissimilarities between different regulatory departments over various periods. Possible reasons include the various mergers and acquisitions between CCRAs, their distinct business specialities, and the rapid development of financial products. According to the lists in Figure 2.3, the overall number of authorised CCRAs has changed since 1997, with different CCRAs appearing and disappearing. PBC recognised nine CCRAs in 1997, whereas the China Insurance Regulatory Commission (CIRC) and National Development and Reform Commission (NDRC) recognised five in 2003 (as did the China Securities Regulatory Commission (CSRC) in 2007 and PBC in 2011). It appeared that the reason for this disparity was mostly the mergers and acquisitions made after 1997. For example, Shenzhen Rating changed its name to Pengyuan after its acquisition, whereas Fujian Rating and Tianjin Zhongcheng adopted Lianhe as their name after they merged (Pengyuan, 2015; NAFMII, 2008). As for Yunnan Rating, Great Wall Rating, Shanghai Far East and Liaoning Rating, there appeared to be no clear explanation from any department about what had happened to these agencies. Moreover, there appears to be no clear description or relevant document explaining how these CCRAs were selected (this is noted as a limitation of Chinese regulations in Section 2.2.2.3).

Figure 2.3: CCRA Recognition Lists

Agencies listed: CCXR; Dagong Global; Shenzhen Rating; Yunnan Rating; Great Wall Rating; Shanghai Far East; Shanghai Brilliance; Liaoning Rating; Fujian Rating; Lianhe; Pengyuan; CCXI; TJZC; Golden Credit; S&P; Moody’s; Fitch; Fitch Taiwan; Fitch International; R&I; TBW; Ambest; TRC

Recognition columns: PBC (1997); CIRC & NDRC (2003); CSRC (2007); PBC (2011); HKMA (2000); SFC (2012); FSC (2013)

[The original figure is a matrix marking each recognised agency under the relevant department and year; the individual marks are not reproduced here. The first nine agencies listed, from CCXR to Fujian Rating, are those recognised by PBC in 1997.]

Data collected from PBC (1997, cited in Hangzhou Credit Rating Company, 2006); CIRC (2003, Baojianfa No 74); NDRC (2003, Fagaicaijin No 1179); CSRC (2007, Zhengjianjigouzi No 250); PBC (2011); HKMA (2000); SFC (2013, cited in Dong, 2012); and FSC (2013)

Note:
1. PBC - The People’s Bank of China; the central bank of the People’s Republic of China, with the power to control monetary policy and regulate financial institutions in mainland China.
2. CIRC - The China Insurance Regulatory Commission; authorised by the State Council (which has broad administrative and planning control over the Chinese economy) to conduct administrative supervision and regulation of the Chinese insurance market, ensuring that the insurance industry operates stably and in compliance with the law.
3. NDRC - National Development and Reform Commission; a macroeconomic management agency under the Chinese State Council.
4. CSRC - The China Securities Regulatory Commission; the main regulator of the securities industry in China.
5. HKMA - Hong Kong Monetary Authority; Hong Kong’s currency board and de facto central bank, which ensures the stability of the Hong Kong currency and banking system.
6. SFC - The Securities and Futures Commission in Hong Kong; an independent statutory body which regulates Hong Kong’s securities and futures markets.
7. FSC - The Financial Supervisory Commission in Taiwan; responsible for regulating the securities market (including the Taiwan Stock Exchange and the Taiwan Futures Exchange), banking, and the insurance sector.
8. CCXR - China Chengxin Security Rating Company (subsidiary of CCXI)
9. Dagong Global - Dagong Global Credit Rating Company
10. Shenzhen Rating - Shenzhen City Credit Rating Company
11. Yunnan Rating - Yunnan Credit Rating Affairs Institute
12. Great Wall Rating - Great Wall Credit Rating Company
13. Shanghai Far East - Shanghai Far East Credit Rating Company
14. Shanghai Brilliance - Shanghai Brilliance Credit Information Investment and Services Company
15. Liaoning Rating - Liaoning Province Credit Rating Company
16. Fujian Rating - Fujian Province Credit Rating Commission
17. Lianhe - China Lianhe Credit Rating Company
18. Pengyuan - Shenzhen Pengyuan Credit Rating Company
19. CCXI - China Chengxin International Rating Company
20. TJZC - Tianjin Zhongcheng (changed to Lianhe in 2009)
21. Golden Credit - Golden Credit Rating Company
22. R&I - Japan Rating and Investment Information
23. TBW - Thomson Bank Watch
24. TRC - Taiwan Rating Company
25. If a CRA is not recognised by a supervision department, that CRA’s rating results are not approved by that department for the relevant industry (these points are analysed and explained in the text).
26. Differences among these supervision departments are explained in the text.
27. Recognition criteria for each supervision department cannot be found.

Interestingly, a total of eight different CCRAs were recognised between 2003 and 2011 (excluding recognition by government departments in Taiwan and Hong Kong). Only three were consistently acknowledged (Dagong Global, Shanghai Brilliance and Pengyuan), with one recognised twice (CCXI) and the other four (CCXR, Lianhe, TJZC and Golden Credit) being completely unique to individual departments. These differences between department lists could be explained by two factors. Firstly, the lists were announced for three different industries, namely corporate bond rating (PBC, 1997, Yinfa No 547), the insurance industry (CIRC, 2003, Baojianfa No 74; NDRC, 2003, Fagaicaijin No 1179) and the securities market (CSRC, 2007, Zhengjianjigouzi No 250), which means that the qualification or certification of each domestic CRA is provided for different types of products by different departments. An example from Dagong Global is listed in Figure 2.4.


According to information released by the Hong Kong Monetary Authority (HKMA) in Hong Kong and the Financial Supervisory Commission (FSC) in Taiwan, the CRA recognition lists from Hong Kong and Taiwan outlined in Figure 2.3 differed from those of the government in mainland China. HKMA and FSC only recognise global CRAs rather than domestic CRAs, while the SFC (Securities and Futures Commission) has considered only three domestic CRAs. Recognition by HKMA and SFC was also accepted by ESMA (European Securities and Markets Authority), which encouraged more and more domestic CRAs from mainland China, such as CCXI and Pengyuan in 2012, to obtain certificates from the government in Hong Kong (Jie, 2012). In the updated information about currently authorised CRAs in Hong Kong, Pengyuan did not appear on the official websites, but it was cited in a news report by Dong (2012b).

Figure 2.4: Qualification of Dagong

Certifications and qualifications listed: Corporate Bonds; Inter-bank Bonds; Borrowing Enterprises; SMEs; Guarantee Entities; Investment Bonds Issued by Insurers; Construction of Asian Bond Market; Credit Rating Business Permit in Securities Market; Membership of the China Federation of Industrial Economics; Membership of the Association of Credit Rating Agencies in Asia

Authorities and documents cited: PBC, Yinfa No 547 (1997); NDRC, Fagaicaijin No 1179 (2003); PBC, No 22 [on PBC web (2004)]; PBC [on PBC web]; CIRC, Baojianfa No 74 (2003), abolished; Ministry of Finance [on the Ministry of Finance in China website]; CSRC, Zhengjianjigouzi No 250 (2007)

[The original table pairs each certification with its authority and its year or document name; the row-by-row pairings are not reproduced here.]

Adapted from Dagong Global (2012)

Secondly, the lists were collated at different time periods. As such, Shanghai Far East was removed from the lists, possibly due to problems with its bond rating of a short-term financial product named “Fuxi CP01” in 2006 (Wang et al., 2006). Unlike Shanghai Far East, Great Wall Rating appeared to be disregarded for no reason, and it questioned the announcement from the NDRC. The company's managers felt embarrassed and upset about the diluted reputation that resulted from the announcement (Han, 2004). Nevertheless, both Shanghai Far East and Great Wall Rating were still on the list produced by PBC (2011), which affirmed the suitability of 74 CCRAs in 33 cities (Figure 2.5), according to information released by PBC (2012). Various shades of blue shown in the map indicate the number of domestic CRAs in each province or city. The map illustrates that the recognised CRAs are scattered across different areas, but are concentrated in and near major cities and provinces such as Beijing, Shanghai and Guangdong. For a CCRA which does not appear in any of these recognition lists, its rating results are not approved for use by any of the supervision departments. This implies that such an agency’s rating results cannot be used by issuers to transact in the national inter-bank market and securities market. Moreover, as in most other countries, no recognition criteria appear to be available for any of these lists (this is noted as a limitation of Chinese regulations in Section 2.2.2.3).

Figure 2.5: Recognised CRAs by Area

Note: Number of CRAs in each province or city is from PBC (2012)

2.2.2.2 Rating Requirements Concerning the Types of Rating Objects Zhu et al. (2006) produced a table analysing some regulations and policies, which noted the distinct rating requirements from different government bodies for various rating objects. This list is revised and updated here as two separate lists, entitled ‘Entity Can Be Rated’ in Figure 2.6 and ‘Entity Should Be Rated’ in Figure 2.7.

Figure 2.6: Entity Can Be Rated

Entity Can Be Rated | Releasing Time | Department
Corporate bond | 1993 (Revised in 2011) | State Council
Borrower | 1996 | PBC
Convertible bond | 1997 | State Council
Corporate borrower | 2004 | PBC/CBRC
Short-term financial bonds from securities company | 2004 | CSRC/PBC/CBRC
Subprime periodical bond directionally distributed by insurance company | 2004 | CIRC
Asset-based securities directionally distributed to selected investors | 2005 | PBC/CBRC

Adapted from Zhu et al. (2006, p.16)


The rating objects of seven types of products are recorded in Figure 2.6. Issuers of these items are not obliged to provide rating reports and documents from authorised CCRAs to government departments concerning rating monitoring, project planning, and explanations of significant information. In Figure 2.7, the entities that should be rated are displayed with the year and the department that issued the regulations or policies. Issuers of these items do have the responsibilities stated above. There are three entities that can be excluded from credit rating:

(1) The subprime bond, which is issued through private placement. The credit rating can be waived in accordance with Rule 23 in Regulation Three from the PBC and China Banking Regulatory Commission (CBRC) in 2004 (Yinjianhuiling No 4);
(2) Policy banks are exempted with regard to Rule 10 in Regulation Two of PBC (2005, Renminyinhangling No 1);
(3) Asset-Based Securities (ABS) from directional distribution and issued to selected investors can be provided without a credit rating, in accordance with Rule 41 (PBC and CSRC, 2005, Gonggao No 7). However, they cannot be exempted according to another act from the CBRC (2005, Yinjianhuiling No 3). This may cause conflict or confusion about the rating requirements for ABS, given the requirements in regulations from both the PBC and the CBRC.

Figure 2.7: Entity Should Be Rated

Entity Should Be Rated | Year | Department
Listed company bond | 1990 | PBC in Shanghai
Listed company bond | 1991 | PBC in Shenzhen
Foreign currency bond issuer | 1998 | PBC
Guarantee companies | 2001 | Ministry of Finance
Bond from securities companies | 2003 | CSRC
Subprime bond issued by commercial banks | 2004 | PBC/CBRC
Short-term financing bond | 2005 | PBC
Financial bond in inter-bank bond market | 2005 | PBC
ABS and its trustee institution | 2005 | CBRC
ABS and its trustee institution in inter-bank bond market | 2005 | PBC/CBRC
ABS and its trustee institution | 2005 | PBC
Short-term fund investment financing bond in currency market | 2005 | PBC
Renminbi currency bond issued by international institution | 2005 | CSRC
Investments from insurance companies | 2005 | CIRC
Convertible corporate bond | 2006 | CSRC
Corporate bond | 2007 | CSRC

Adapted from Zhu et al. (2006, p.16)

The distinction between “entities can be rated” and “entities should be rated” is somewhat ambiguous, creating possible obstacles to the transparency of policies and regulations for public understanding. The related terms and legislative requirements are complicated and lacking in precision, which may inhibit the ability of the Chinese government to explain them to the public. Particular questions need to be answered: (1) Why are some products allowed to be issued without being rated by CRAs? (2) Why are private placement and policy banks exempt from being rated? (3) How exactly can issuers and investors follow the regulations and policies issued by different departments?

2.2.2.3 Rating Requirements Regarding Rating Quality The PBC and CSRC released more specific requirements concerning the supervision of investments between 2003 and 2008. The table in Appendix 1 is compiled from existing Chinese legislation (37 documents) and contains the relevant regulations and policies regarding the responsibilities of CRA supervision departments. Legislation especially relevant to the rating report, along with an assortment of legislation about rating quality, is presented in a separate table in Appendix 2.

Compared with earlier regulations and policies on rating requirements from different departments, the three documents of the ‘Specification for credit rating in the credit market and inter-bank market’ released by PBC (2006) were the most advanced and complete regulations. They include the relevant rating standards, requirements, procedures, report formats, and content and structure information (the CSRC also prescribed similar requirements and codes of conduct for recognised CRAs). Moreover, the PBC provided its code of conduct for CRAs in 2006, which probably helped CRAs to improve their standardised internal policies. The code requires CRAs to submit eleven company policies, consisting of the following: (1) rating business quality control; (2) rating report quality control; (3) investigation policy; (4) rating committee policy; (5) rating result releasing policy; (6) monitoring rating policy; (7) internal management policy; (8) firewall policy; (9) document management policy; (10) database management policy; and, (11) rating information management policy. A general guide to credit rating quality, with reference to the code of conduct from PBC (2006) for all CRAs, comprises five categories under the following headings: reality, consistency, independence, objectivity, and prudence.

Three issues were identified from the comparison and analysis (in Appendix 1) of regulations and policies regarding the requirements of rating reports among the various departments. First, requirements from different departments can be dissimilar, particularly in the specifications of the methods of providing rating results. Rating reports have to be submitted by a certain time under some regulations, while other regulations do not indicate a specific submission time. For instance, rating reports required by the PBC have to be submitted before 31st July every year if they relate to ABS or financial bonds in the inter-bank market; whereas there is no specific time indicated for guarantee companies to update their periodical ratings to investors, according to Rule 18 from MOF (2001, Caijin No 77). Second, there is a lack of clarity about certain information (such as the criteria and benchmarks for obtaining qualifications for CRAs to conduct business), even though some regulations or policies include general guides regarding rating quality. The third issue is that regulations and policies should be improved with respect to international standards, especially regarding conflicts of interest. The CSRC allows CRAs and their consumers to own 5% or less of each other’s shares; alternatively, they can also own financial products from each other worth ¥500,000 RMB (around £50,000 GBP) or less (CSRC, 2007, Zhengjianhuiling No 50). In contrast, with reference to IOSCO (2010), this could be considered inappropriate, as it may cause ‘conflicts of interest’ as a result of an ‘issuer-pay’ approach.

As for rating quality, in contrast to the regulations and policies from the CSRC, the PBC provides more guidance about specific issues:

(1) rating levels cannot be increased because of increased credit marks provided by commercial banks;
(2) CRAs should restart the rating job if the bond issuer issues bonds over a different period;
(3) rating results should be released at the same time if they differ due to time differences, even if they have been refused by issuers;
(4) CRAs should improve the ability and knowledge of their employees through regular training schemes; and,
(5) CRAs should improve the monitoring function in the secondary market, and should provide an analysis report of bond interest spread within five days after every quarter.
Furthermore, the PBC highlights that a rating methodology should integrate macro and micro methods, quantitative and qualitative methods, and dynamic and static methods. By contrast, the CSRC (2007, Zhengjianhuiling No 50) imposed fewer specific requirements. However, the CSRC advised all recognised CRAs to join the Securities Association of China (SAC) and comply with the self-regulation policies produced by the SAC. The CSRC will also penalise any member who disobeys any of the requirements of the CICPA (Chinese Institute of Certified Public Accountants).

Nevertheless, there is a lack of clarification in the existing regulations and policies about how to deal with issues relating to conflicts of interest and the payment approach (Nie, 2011a). Some regulations and policies provide advice and opinions (as guidance) rather than implementing and enforcing rules (as a code of conduct). Several issues need to be explained and interpreted more clearly within regulations and policies, including: (1) relevant information from government bodies regarding authorised CRAs and their recognition criteria (which are usually unclear, except in documents produced by the PBC and CSRC); (2) a better explanation of the differences between the authorised CRA lists of the various government departments; (3) clearer explanations of issues causing conflicts of interest; (4) better information about what kind of material from issuers counts as reliable information; (5) a declaration of the recognition process; and, (6) clearer, specific requirements for global CRAs (Nie, 2011b).

There is also a lack of consistency within existing regulations and policies regarding the exact criteria for rating quality required by the CSRC and the PBC:

1) Data retention policies are not consistent between the PBC and the CSRC: the PBC requires CRAs to keep permanent copies of rating materials and data (PBC, 2006, Yinfa No 95), whereas the CSRC advises CRAs to keep rating documents for at least 10 years, or for at least 5 years after the period of availability of a rating object (CSRC, 2003, Zhengjianhuiling No 15). On the other hand, Nie (2011) indicated that regulations and policies about CRAs should not focus on governmental control over the market, which can result in a lack of innovation and creativity. He believed overly extensive requirements were imposed by the Chinese government in relation to rating methodology, procedure design, the length of the investigation period, and rating team establishment.

2) Differences in requirements for CRA staff members: the CSRC (2007, Zhengjianhuiling No 49) requires CRAs to have at least three senior management personnel; at least three staff with qualifications from the CICPA; at least 10 staff with more than ten years of work experience in the rating business; and more than twenty staff with qualified certificates for conducting securities business. CRA employees who have shown a lack of integrity or who have invested in other CRAs should be fined between ¥10,000 and ¥30,000 RMB (around £1,000 and £3,000 GBP). Any foreigners employed at senior management level in a CRA should have at least three years of work experience in mainland China,


Hong Kong or Macao. However, the PBC (2006) announced regulations with different requirements: rating team members should have at least one year of experience; team leaders should have at least three years of experience and have completed more than five projects; and each rating team must include at least two specialists.

2.2.2.4 CCRA Supervision Policies in Mainland China

There are varied regulations and policies on CRAs from different departments, which make the credit rating system complicated. Moreover, requirements and recognition systems in Hong Kong are distinct from those on the Chinese mainland because of the political environment. Chen (2010) compared the CRA supervision systems of three government departments, the PBC, CSRC and NDRC (shown in Figure 2.8), and indicated the limitations of supervision by separate departments. Chen argued that the current supervision system in China was not effective, due to differences in the expected role of CRAs, the supervised area and the supervision levels among the three departments. However, this table does not list all the existing regulations and policies. It does not include all the relevant government departments or associations, such as the CBRC, CNRC, CIRC, the State Council, the MOF, the Hong Kong government, or the Taiwan government, or professional institutions such as NAFMII (National Association of Financial Market Institutional Investors) and the SAC.

Although the PBC is the main supervisory department, chosen by the State Council in 2011 to address the problem of multiple supervision, many government departments may still have significant influence on the development of CCRAs (Fu, 2011; Yao, 2008), and their roles in relation to rating requirements will be discussed in Section 2.2.3.2. It should be noted that both the Chinese government departments and the professional associations have influence within the industry. NAFMII has a crucial role in the supervision of CRAs, since it was endorsed by the PBC in 2008 as an independent supervision association. NAFMII also conducts credit rating business like the other CCRAs, but it has a supervision function over the other CCRAs and submits relevant evaluations and documents about them to the PBC (PBC general executive office, 2008). NAFMII released policies concerning self-regulation of the credit rating business for non-financial corporate bond-related financial products in 2013 (NAFMII, 2013, No 1), although the five CCRAs certified by the CSRC have been signatories to a self-regulation contract with the SAC since 2009, under which they submit documents to the SAC and satisfy its ten principles of self-regulation (SAC, 2009). In addition, the Asset Management Association of China (AMAC) was set up in 2012 for self-regulation among asset-related institutions and organisations, including fund rating agencies (AMAC, 2012). Therefore, it appears that different associations are in charge of


supervising different types of rating products, due to the different endorsements they have obtained from different supervision departments.

Figure 2.8: Supervision Departments

NDRC
Rating entity: corporate bonds issued through the supervision of the NDRC
Released policy and regulation: regarding the corporate bond rating process and quality only
Supervision implementation: none

PBC
Rating entity: bond market (financial bonds and non-financial corporate bond financing tools released in the inter-bank bond market); credit market (lending corporates, and insurance and guarantee institutions)
Released policy and regulation: Guiding opinion about CR management by PBC (2006); Specification for credit rating in the credit market and inter-bank market, parts 1/2/3 (2006); Announcement for enhancing management of rating in the inter-bank bond market by PBC (2008); etc.
Supervision implementation: (1) set up systems for rating resources and records back-up and preparation, rating report management, CR business statistics reporting, and default rate reporting in the inter-bank bond market; (2) form a CR quality evaluation policy and team, with the trader association and a local CR reviewing expert group as main members; (3) encourage CRAs to form payment-receiving and self-regulation methods; (4) carry out on-the-spot inspections

CSRC
Rating entity: bonds and bond financing tools issued through the supervision of the CSRC or listed in the stock exchange market
Released policy and regulation: Tentative method of bond management of securities companies (2003); Tentative management method of rating for the security market (2007)
Supervision implementation: set up a business statistics reporting system

Adapted from Chen (2010, p.59)

It should also be noted that NAFMII is the shareholder of the China Credit Rating Company (China Ratings), which is the only agency that conducts credit rating business in China through an investor-pay approach (China Ratings, 2012). Moreover, it is reported that the SAC will soon establish its own CRA (Wang, 2013), but the date is unknown.

Zhu (2012) demonstrated the role of each government department for various types of products (Figure 2.9), which provides a clear depiction of the multiple supervision of varied products. The state has designated the PBC as the main supervision department, with self-regulation from another three associations (SAC, NAFMII and


AMAC). However, the government did not clarify how PBC would work with the other departments and associations.

Figure 2.9: Supervision and Product Type

Government bond: PBC, CSRC, Ministry of Finance
Central bank bill: PBC
Financial bond
  Policy bank bond: PBC
  Commercial bank bond (general bond): PBC, CSRC
  Commercial bank bond (subprime bond): PBC, CSRC
  Non-bank financial institution bond: PBC
Company bond from security companies: PBC, CSRC
Short-term commercial paper from security companies: PBC, CSRC
Short-term commercial paper: PBC (SAC)
Medium-term notes: PBC (SAC)
Small and medium-sized enterprise collection notes: PBC (SAC)
Group company bond: NDRC, PBC, CSRC
Company bond: CSRC
Convertible bond, detachable convertible bond: CSRC
ABS: PBC, CSRC
Bond from international organizations: PBC, Ministry of Finance

Source: Zhu (2012, p.24)

In addition to the SAC, NAFMII and AMAC, there are further associations in different areas for various types of business. Four more are: (1) the Taiwan Rating Association, established in 2006 for CRAs in Taiwan; (2) the Shenzhen Credit Rating Association, set up in 2008 for CRAs in Shenzhen; (3) the Beijing Credit Guarantee Association, in existence since 2002 for insurance and guarantee businesses, which also includes relevant rating agencies; and, (4) ACRAA (Association of Credit Rating Agencies in Asia), organised in 2001, whose members across thirteen countries agree to follow the code of conduct from ACRAA (2010). The five CCRAs from China that have joined ACRAA as members are CCXI, Lianhe, Dagong Global, Shanghai Far East, and Shanghai Brilliance (ACRAA, 2011b).

2.2.3 Background of CCRAs and the Industry

Background information on some authorised CCRAs is explored and analysed in this section, including rating methodologies, rating processes, product types and business scope. In addition, this section examines the business strategy of Dagong Global, and the organisational structure of China Chengxin (CCX). This is an essential foundation that underpins an understanding of the strategy development and competitive


environment of the CCRA industry. Moreover, this section includes detailed information about some individual CCRAs, which is used for selecting the sampling strategy outlined in Section 4.3.7.1.

2.2.3.1 CCRAs' Background in Connection with CRAs in Other Countries

The majority of research concerning CCRA development has included a sketch of the historical development of global CRAs and their influence within the investment industry. It is important to examine the history of global CRAs in an effort to better understand the role of CCRAs (Sun, 2002). Figure 2.10 outlines the four distinct stages (from 1909 to 2007) of the historical development of American CRAs, which were linked with special features and crucial events in global or local financial markets (Zhang, 2007).

Figure 2.10: History of American CRAs

1909-1930s
Feature: first CRA; unsolicited ratings based on public information, paid for by the investor
Special event: increasing demand for ratings because of the development of the bond market and the railway industry

1930s-1970s
Feature: unsolicited ratings based on public information, paid for by the investor; rating results used for regulation and policy establishment
Special event: first world financial crisis (1930s)

1970s-1980s
Feature: issuer pays for ratings, which are based on public information and issuer internal information
Special event: second world financial crisis (1970s)

1980s-2007
Feature: globalization and monopoly
Special event: more and more new financial products with complicated information

Adapted from Asian Banker Association, 2002, cited by Zhang (2007, in Mao and Yan, 2007, p.4)

The development of American CRAs was driven by investors from 1909 until the 1970s, owing to the investor-pay approach. The industry changed in the 1970s and 1980s by adopting a more issuer-pay approach. Furthermore, as the investment sector upgraded and evolved, producing more financial products involving complicated information, the American CRA market became closely associated with globalization and monopoly between the 1980s and 2007 (Zhang, 2007). Rating-dependent regulations were alleged to be the trigger of the monopoly situation in America; however, there are compelling arguments that regulating CRAs, rather than deregulation, seems more viable after a financial crisis (Partnoy, 1999; Dittrich, 2007). Likewise, the CCRA industry displays similar features: (1) monopoly within the local market (Zhu et al., 2006), which may have occurred as a result of rating-dependent regulations; and, (2) cases of globalisation and cooperation between CRAs (see Section 2.2.4) have


also arisen (for instance, Dagong Global and two other CRAs from America and Russia joined together as the Universal Credit Ratings Group in a bid to challenge the global CRAs (Ricking, 2012)). Nevertheless, unlike the trends in the United States of America (USA), the CCRA market has always been driven primarily by central government, due to limited rating demand within Chinese financial markets (Yu, 2009a). Similarly, in both China and the USA the bond and securities markets initially developed without the participation of CRAs (Shi, 2010). The emergence of modern CRAs can be traced to the establishment of a commercial credit-reporting agency (or mercantile agency) by Lewis Tappan in 1841, which was formed to provide statements of creditworthiness concerning individuals to other merchants. Although the first bond was issued in 1609 by the Dutch East India Company, the subsequent bond markets developed without the need for independent rating agencies, probably because most initial bonds were issued to wealthy citizens or to nations, and as such investors were willing to make contributions simply based on honour. CRAs eventually emerged from mergers between credit reporting agencies and specialised financial presses, following the collapse of some investment banks caused by their inability to access enough information (Lauer, 2008).

By contrast, the establishment and origins of CRAs are completely different in the USA and China. The USA has by far the longest history of CRAs. The first one was founded in 1909 by John Moody, an analyst from a mercantile agency. His company published "Analyses of Railroad Investments", rating reports that used standard rating symbols to differentiate between ninety various types of bonds from 250 railway companies. As the American landscape became more urbanised in the 19th century, a huge international debt market grew from the financing of the railroads, and by the 1850s the majority of these projects were raising capital through private corporations (Sylla, 2001). Consolidation of the rating system occurred between 1939 and 1945, during the Second World War, when the Securities and Exchange Commission looked to standardise financial markets by requiring ratings for any products to be issued. Since the 1970s, rating demand has been driven by various factors: financial structural changes, disintermediation, securitisation, globalised standards, and rating-based regulation (IPSA, 2003; Gras, 2003, cited by Dittrich, 2007; Lucas et al., 2008; Nie, 2011a). By contrast, the establishment of Chinese CRAs was initially enforced by the government in 1987, then halted in 1989, before development continued in the second stage of economic reform (in 1992) and was later stimulated by government support after the Asian financial crisis in 1997.


The development of CRAs within Asian countries was investigated by Dagong Global (2009), which noted that there are forty-eight countries within five geographical Asian areas (Eastern Asia, North Eastern Asia, North Asia, Western Asia and Central Asia), but only eighteen of these countries have CRAs: China, Japan, Korea, Singapore, Malaysia, the Philippines, Indonesia, Bangladesh, Thailand, India, Pakistan, Bahrain, Turkey, Georgia, Armenia, Cyprus, Kazakhstan and Uzbekistan. Moreover, a comparative study of domestic CRAs from different countries was undertaken (Figure 2.11) from the perspective of the development level of the bond market and CRAs (Zhang, 2007).

Figure 2.11: Development Level of Domestic CRAs

Source: Asian Banker Association, 2002, cited by Zhang (2007, in Mao and Yan, 2007, p.6)

The figure highlights two dimensions: (1) the year of establishment; and, (2) the bond issuing amount. However, it fails to fully evaluate (and provide a comparison of) CRAs from different countries, due to issues associated with the data. Possible limitations include:

a) The figure does not indicate the comparison accurately, because the development level of CRAs is based on the year of establishment, which alone cannot be recommended as a measurement of development level;

b) The report identifies Hong Kong and Taiwan but does not mention mainland China, even though Taiwan and Hong Kong have completely different financial systems and supervision departments, and these two regions of China are separate from the mainland. This can be somewhat misleading, since the development of domestic CRAs may be better on the mainland than in Taiwan and Hong Kong. As indicated in Section 2.2.2.1, there were almost no domestic CRAs recognised by the Hong Kong government until 2012 (Jie, 2012; HKMA, 2012); and the Taiwan


government did not have domestic CRAs on its list (FSC, 2013). The first and biggest local CRA within Taiwan, Taiwan Ratings Company (TRC), had existed for only nine years before its acquisition by S&P in 2005 (TRC, 2012). This may indicate that the level of domestic CRA development in these two places is probably not comparable. In addition, the markets in Hong Kong and Taiwan appear to be dominated by global CRAs, with much less development opportunity for local CRAs, possibly as a result of the historical relationships between these regions and the Chinese government;

c) The development of CRAs and the bond market cannot be examined with reference to only one set of data, as other data should be included and analysed, such as performance results, default rates, the value of the bonds, and so forth.

2.2.3.2 General Background Comparison of CCRAs

There is also a paucity of complete and organised literature about individual CRAs within previous research. In addition, as regards the development and background of domestic CRAs in the global financial market or Asian countries, many previous studies did not include any local CRAs from mainland China (Estrella et al., 2000; Rhee and Luke, 2000; BIS, 2006). This section therefore provides a summary of the results from previous studies and data collected from CCRA and Chinese government websites.

2.2.3.2.1 Previous Research

Although Weston (2012) suggested that "the Chinese credit rating industry is currently dominated by Fitch, Moody's and S&P …[these companies]…have a standalone combined market share of over 67% in the Chinese credit rating industry", Kidney et al. (2014) indicated that CCXI, Lianhe and Dagong Global are the three main agencies, controlling 80% of the Chinese market, because foreign companies are allowed to hold no more than 49% of the shares in joint ventures between global CRAs and Chinese CRAs. Therefore, Chinese CRAs dominate the local market. Two main issues have been pointed out in previous research relating to CCRAs. In their reviews, some researchers have cited regulatory biases as one of the problems. Some leading CCRAs within the Chinese financial market strongly influence industry development, particularly in the current development environment (Zhu et al., 2006). These leaders were in fact appointed by government departments for various industries, and they are the authorised CCRAs on which CCRA-beneficiaries have to rely. The issuance of registered and tradable corporate bonds was restricted to a few large SOEs (State-Owned Enterprises) such as China Three Gorges Corporation, BaoSteel Group, China Mobile Limited, PetroChina Limited and State Grid Corporation of China.


limited only to SOEs. Therefore, “...almost all non-financial corporate bonds were rated ‘triple A’ by China’s domestic rating agencies” (Bottelier, 2008, in Fleisher et al., 2008, p.165).

On the other hand, Huang (2003) noted that the results from global CRAs and domestic CRAs are not comparable, since the financial market and economic environment in China have improved but have not yet been recognised by global CRAs. The rating methodology for any rating product should reflect the special Chinese economic background and political environment, and domestic CRAs should have more knowledge about the local market than global CRAs (Zheng, 2010). Han and Yan (2007), who are experts from CCXI, postulated two limitations of the credit rating methodology used by domestic CRAs, which might cause this incomparability (cited in Mao and Yan, 2007). First, the scale and proportion between quantitative and qualitative analysis is difficult to decide, and was not scientific enough, which can cause systematic errors. Second, there was a lack of network connection and integration among domestic CRAs, which was the root of the difficulties of information sharing amongst CCRAs. Each CCRA had different rating methods and processes for each industry or each type of product, making investors' decision processes more difficult.

2.2.3.2.2 Data from CCRAs' and Government Websites

There was no explanation of changes to the lists of authorised CCRAs on government or CCRA websites; in addition, the authorisation list from the CIRC (2003, Baojianfa No 74) had been abolished. Given the inconsistency of lists among the different departments over varied time periods, six CCRAs authorised by the PBC (2011) are selected and presented in Figure 2.12. Another CCRA, China Ratings, is also included in this table, since it is a new type of CRA, which adopts an investor-pay approach.

Background information about these seven CCRAs was mainly collected through their company websites and news reports. CCRAs appeared to put more effort into improving rating quality and accuracy (for example, rating methodology and process) than into information transparency (for example, communication and customer service) at this stage. (Multiple attributes and factors relating to rating quality and accuracy are reviewed and discussed in Sections 3.3.3.2 and 3.3.4.3; not all research results verified that transparency is part of rating quality according to the public's perceptions, and the interview results maintained that the relevant terms can be understood differently from various perspectives.) This is because many studies have been done about rating


methodologies and rating quality within Chinese academia, while many CCRA websites are still under construction. Moreover, there was not enough information about CCRAs' shareholders, and this information was not available to the general public (it was claimed as a business secret by some CCRAs during this investigation). Information about shareholders might not be available for three reasons: (1) policies regarding information transparency for communication, on both CRA websites and via customer services, need to be improved; (2) there was a lack of staff for website development and a lack of technology support within customer service departments; or, (3) most CCRAs adopted an issuer-pay approach, which can result in an issuer-centred business style instead of a public- or investor-centred one.

Figure 2.12: CCRAs Background

Data collected from CCRAs’ websites


In Figure 2.12, four out of seven CCRAs have had technical cooperation or affiliation with global CRAs. Compared with Fitch and Moody's, S&P arrived in China much later (entering mainland China in 2008), whereas Fitch and Moody's established their presence in 1999. In addition, global CRAs may have changed their strategy: they initially started signing contracts for cooperation or joint ventures with CCRAs in 1999, but by the end of 2003 some of these CRAs had terminated their contracts and established their own branches on the mainland. This appears to be due to the policy announced by the CSRC in 2003, which allowed global CRAs to set up business in China directly and as such was the trigger for discontinuing cooperation. In addition, this policy clarified the requirements for global CRAs: they must have over ten years of experience in the rating business, and they must have rated Chinese corporate bonds in other countries for more than seven years (Sun and Han, 2012; Dagong Global, 2013; Fitch, 2013). For example, Fitch Ratings (Fitch) had a joint venture with CCXR and another three companies (including CCXI), and in 1999 signed contracts for conducting business that did not pose any competition issues with CCXI. Fitch later withdrew its shares in 2003, and CCXI then sold 49% of its shares to Moody's, which already had cooperation experience with Dagong Global from five years earlier. After that, Fitch re-entered the domestic CRA market and commenced a joint venture with Lianhe in 2008.

2.2.3.3 The Scope and the Type of Products

Product scope has been described according to different rating objects, such as business, bank, consumer, securities, private or personal, sovereign and international finance (Zhu, 2002). Li et al. (2004) classified it according to the type of financial instrument, including: company or enterprise bonds, short-term bonds, convertible bonds, financial bonds, subprime bonds, hybrid capital bonds, structured finance products, funds, preferred stocks and others. However, compared with the first classification, the second classification does not reflect the distinct features of each rating product. A third kind of classification is provided in Figure 2.13 (Zhu et al., 2006), which is adopted by most CCRAs according to time period differences (long-term or short-term), solicited or unsolicited rating, rating object difference (obligor rating or facility rating), and also the different types of markets, for example, the credit market, inter-bank market and public market. There are seventeen different types of rating products.


Figure 2.13: Product Scope

Credit Rating Classification
Debt Rating (asset market and capital market): corporate bond; convertible corporate bond; financial institution bond; financial institution subprime bond; asset securitization bond; fixed income fund; credit and debt notes; financing bond factoring
Debtor Rating: sovereign rating; financial institution rating (commercial bank, security company, fund company, insurance company, assurance and guarantee company); industry and business rating

Source: Zhu (2006, p.44)

There was no clear presentation of general information about rating methods on some CCRA websites, and few CCRAs provide any information about rating methodology. Most of those researched only make information available on their websites about rating methods for certain industries or certain types of companies (such as local government ratings, sovereign ratings, industry ratings and guarantee company ratings). Concerning the scope and type of products, the following three issues were found:

1) There are different interpretations and explanations of the classification of rating products from each CCRA: according to the information on their websites, the terms used for each rating product can be completely different, and the way of sorting rating products also varies. For example, Dagong Global uses international and local categories, and then places different products under these two terms. CCXI uses its own classification, with different terms for each type of product and service; its four categories are fixed-income financial products, rating objectives, rating of structured financing, and derived rating business. Most CCRAs provide ratings for corporate bonds, financial institutions and products, and insurance and guarantee companies, as well as consulting services, but only Dagong Global provides a sovereign rating (both Dagong Global and Pengyuan provide training services for clients). However, only CCXI and Lianhe Credit Rating Company classified structured financial product rating as a product separate from financial institution rating, corporate rating, or non-financial institution rating. (The classification and presentation of rating products and services


from Dagong Global and CCX are provided in Appendix 3.)

2) Some CCRAs (such as CCXI and Pengyuan) also provide credit checking services, since credit checking is not distinguished from credit rating, as noted in Section 2.2: for example, CCX appears to dominate the local Zhengxin market. However, the function and business scope of CCX might be the most complicated, owing to the type and number of subsidiaries it has (Figure 2.14). China Chengxin Technology Information (CCXI) is part of CCX, and has provided rating services in the domestic Chinese market since 2004, with 20 branches in mainland China. Seven companies have similar types of business, which may cause conflicts of interest, as shown by the following three observations (CCX, 2013; Qingpu Chengxin, 2012): i. CCXI and China Chengxin Asia Pacific (CCXAP) are both CRAs, but CCXI conducts business within mainland China, whereas CCXAP operates in Hong Kong; ii. both China Chengxin Credit (CCXCREDIT) and China Chengxin Credit Management (CCXM) can provide information about markets and companies; and, iii. CCXI, China Chengxin Security Rating (CCXR), CCXAP, CCXCREDIT and China Chengxin Financial Consultancy (CCXC) all provide consultancy services.

Figure 2.14: Company Structure of CCX

Adapted from CCX (2012), CCXI (2012), CCXAP (2012), CCXR (2012), CCXE (2012), CCXC (2012), CCX CREDIT (2012), and CCXM (2012)

3) With such complicated organisational structures, some CCRAs do not provide customer services. Moreover, CCRAs provide different kinds of products, not just credit rating and credit rating-related services. Because of the varied organisational structures, the functions and services of each CRA can vary. For example, Shanghai Brilliance and many other CCRAs place the Human Resources department under the rating committee, whereas China


Ratings places its Salary and Human Resources department above the rating committee. Some CCRAs do not have a customer service department (for example, Golden Credit). Dagong Global has a customer management department and a customer service centre within its organisational structure, but the others seemingly do not (Dagong Global, 2012; Shanghai Brilliance, 2012; China Ratings, 2012).

2.2.3.4 Business Strategy and Business Model Adopted by Chinese CRAs

CCRAs have adopted special business strategies under the influence of government regulations and policies, the financial market environment, and global competitors. The case of Dagong Global is examined in this section, and several issues concerning business strategies and business models are raised as well.

2.2.3.4.1 Dagong Global

Dagong Global is reviewed in this section as an example of a world-renowned CCRA. It became well known by advocating a national-brand business strategy as well as by improving, updating and publishing its rating methodologies for the domestic industry. Hunt (2012) proposed that CCRAs challenge global CRAs, because Dagong Global joined with Egan-Jones (a small American investor-paid agency) and RusRating (a Russian investor- and issuer-paid agency) to form the "Universal Credit Ratings Group", which established its headquarters in Hong Kong in October 2012. This group is working with CCXAP, which is part of CCX and which obtained permission for a rating business based in Hong Kong in 2012 (CCXAP, 2012). Compared with Dagong Global, and like other domestic CRAs, CCX adopted another strategy: it joined with global CRAs (Fitch in 2004; Moody's in 2006) and formed CCXI as a subsidiary of CCX, as explained previously in Section 2.2.4.3 (CCX Asia Pacific, 2012). However, even though CCXI has set up headquarters in Hong Kong, it has taken only small steps towards establishing a global footprint and as such is still not as well known as Dagong Global (Li, 2012, cited in Hunt, 2012). Consequently, the type and scope of the businesses or products of CCRAs appear more complicated than those of the global CRAs, due to their previous history of cooperation and joint ventures. Four development strategies adopted by Dagong Global include the following:

1) Joint ventures or technical cooperation with global CRAs, as mentioned above. In order to compete with global CRAs, establishing authority and reputation within the global and local financial markets is essential, and most authorised domestic CRAs have joint ventures with global CRAs for technology cooperation. However, this has been questioned by many journalists and researchers, as this strategy may have given global CRAs more opportunities to penetrate into the


CCRA industry, and CCRAs could lose out in the end (Wu, 2010).

2) Establishing a national brand: Dagong was established in 1994 with the agreement and approval of the PBC and the NDRC. Its business strategy focused on launching a national brand within the local market and then looking towards internationalisation to enter the global market. It has pursued a national brand strategy and has refused offers from Moody's, S&P and Fitch to purchase its shares. It has thirty-four branches in China and two further branches in Hong Kong and New York (Dagong Global, 2012). Because of the financial crisis (noted as the opening of Stage 5 in CCRAs' development history in Section 3.4.2), global CRAs have entered into a so-called war over rating quality, and this has been replicated among domestic CRAs in China. Ye (2011) raised an objection to Dagong's business model by questioning its rating quality, citing a case where it assigned the national railway department's short-term bond a rating higher than the national sovereign rating. Ye accused Dagong of "…roaring without doing any job…drawing others' attention, or irrationally rejecting the other's rating models…" (translated from Chinese).

3) Being innovative and creative with knowledge of the domestic rating business: this domestic CRA set up a post-doctoral research institution and also the Dagong Credit Management School in partnership with Tianjin University of Finance and Economics. Dagong is also the credit risk management consultant for the Bank of Beijing, the Industrial Bank, the Shanxi Venture Capital Company, the Inner Mongolia government, and the China Great Wall Asset Management Corporation. Dagong Global uses slogans such as "established, national favorite CRA" to break into the global market (Pengyuan uses the same slogan), and it released its own sovereign rating methodology in 2010 (Dagong Global, 2012).
4) Internationalisation: according to the information provided by CCRAs on their websites, most of them present 'integrity' as a key component of their company culture, with less emphasis on business strategies. Only Dagong Global (2012) has separate strategies for the local and international markets, referred to as an "internationalized development strategy of a national brand". Their reasoning is this: as a CCRA, Dagong Global supports the decisions made by the government about social and economic development, and protects the authority and rights of China and the Chinese government in the international market; in the global financial market, it has a responsibility to contribute to sustainable world economic development with its knowledge of and experience in the local market. This might be one of the reasons why it has a better reputation worldwide than other CCRAs. Moreover, it was also the first CRA to downgrade USA government bonds, in 2010. Mark King, a global investment strategist at Investec Asset Management, said that "Dagong is well respected as an independent credit rating agency which takes a more conservative view than better-known American credit rating


agencies" (Cowie, 2010). Technology cooperation with global CRAs has also been a strategy used by most of the well-known CCRAs (Li and Tian, 2006). Many authorised CCRAs encourage global CRAs to invest as shareholders. However, under the requirements and regulations of the Chinese government, global CRAs or companies from other countries cannot hold more than 49% of the shares in Chinese companies (Wu and Zhao, 2005). This might also be why Dagong Global has become a "favorite" of the Chinese government: it has technology cooperation with S&P, but it did not give up any of its shares to global CRAs.

2.2.3.4.2 Research about Business Strategies of Domestic CRAs

Corporate culture and business strategy are very important for CCRAs' survival within the current CRA industry because of the monopoly power of global CRAs (Li, 2010). However, there is limited research or discussion about CCRAs' corporate culture and business strategy. Most CCRAs' company culture and business strategy are ethically orientated (Liu, 2009), emphasising integrity and loyalty rather than strategic decision-making for profit, income, achievement and contribution. Yao (2008) researched the business strategies of Pengyuan, using SWOT analysis (Armstrong, 1996) and Porter's Five Forces (1979) model to analyse its information and background, especially its future business strategies for the five-year period 2008-2013. However, that research does not provide enough evidence and data for a background analysis covering, for example, staff skills and knowledge, Pengyuan's reputation in the credit rating industry, the level of influence of its competitors, the level of impact of the Chinese government, or its market share in the Chinese financial market. Moreover, none of Yao's primary data was gathered through interviews or questionnaires; the findings depended mainly upon a case study with no reference to any data used for the analysis. Nevertheless, the research did provide meaningful suggestions about possible strategies for Pengyuan, highlighting six objectives for the improvement of strategy development (certification, coverage, reputation, technology, expert team and investor relationships).

Unfortunately, Pengyuan was removed from the PBC's CRA recognition list for the inter-bank bond market in 2011, although it is still authorised to provide rating services within the credit market in Hubei province (PBC, 2011).


2.2.3.4.3 Main Issues

There is limited literature about the marketing and business strategies of CCRAs. The focus has mainly been upon archives of development history, with a lack of analysis of the format and approaches of CCRA business models and strategies. Given the complicated history and background of each CCRA, as well as the 'information transparency' problem in China, information and data about the business strategy and organisational structure of CCRAs are limited. Nevertheless, some conclusions can be drawn.

First, both "investor-pay" and "issuer-pay" payment approaches exist in the Chinese financial market. China Ratings uses the former (as mentioned in Section 2.2.3.4), whereas the other domestic CRAs have adopted the latter. The majority of researchers appear to advocate the former, while issues of conflict of interest arise from the latter (Covitz and Harrison, 2003; Hill, 2004; Adelson, 2006; Hong, 2008; Li, 2008; Lucas et al., 2008; Hunt, 2009; Darcy, 2009; Brunner, 2010; Gu, 2010a; Chen et al., 2011; Chu, 2011), which will be discussed in more detail within Chapter 3. The interactive relationship between issuers and CRAs can be explained by the principal-agent model, which indicates that the different possible reactions of both the issuer and the rating agency can influence the accuracy of the rating result; the issuer-pay approach therefore affects rating accuracy (Yu and Han, 2010). However, Ying and Zhang (2006) insisted that the "issuer-pay" approach is more suitable for the financial environment in China, since there are a limited number of investors. Moreover, Lucas et al. (2008) indicated that some industry observers think investors have no more incentive than issuers or underwriters for accurate ratings. Unlike Ying and Zhang (2006) and Lucas et al.
(2008), Hou (2010) indicated that the "issuer-pay" approach can help CRAs obtain more profit and develop faster, and that it also benefits investors by giving them free information. "Government pay" is a third option, but it has not been adopted in China because it can allow the government to exert too much influence over CRAs by controlling the size of the payment (Persaud, 2008). Second, all issuers or underwriters pay the same rating fee, in line with requirements from the PBC. Gu (2010b) indicated that a third party from the credit rating association needs to produce policies and regulations for supervising CRAs' performance, and that the amount of payment should be decided according to the size of the institution or company. However, if rating charges depended on company size, a smaller company would be charged less, which might encourage CCRAs to seek out bigger issuers to increase their income and would make it more difficult for SMEs to survive.


Finally, there might be conflicts of interest between "national brand" strategies and rating adjustments for SOEs. Many investors rely on their own internal rating results from banks or institutions rather than the results from CCRAs, because they are concerned that CCRAs may be pressurised by local governments into giving overly generous ratings to SOEs. Investors from other Asian countries have also questioned the independence of CCRAs (Hunt, 2012). Han and Yan (2007) identified several limitations concerning the quality of services and products, including: (1) a lack of quantitative data supporting the explanation of rating results; (2) the need to improve the development of credit rating models; (3) the lack of data supporting the demonstration and development of rating methodologies; and, (4) a rating monitoring system in need of improvement.

Figure 2.15: Methods of CCRAs and Global CRAs Compared

Compare item | Description | CCRA | Global CRA
The objective of CR | Provide fair, good information to the public, and help investors understand the role of CRAs in terms of portfolio governance. | Y | Y
The main element for analysis | Macro-economic environment, industry background, operation system, competition, management, financial system, and so on. | Y | Y
Rating principle | Concern for the longer-term default rate. | Y | Y
Rating standard symbol | Combination of qualitative and quantitative analysis; consider the ratings by rating experts, and the professional level of the rating team. | Y | Y
Factor analysis methods | 5P, 5C, 5W, 4F, CAMPARI, LAPP, CAMEL and so on. | Y | Y
Integration analysis | — | Y | Y
Two-dimension analysis with different variables | This method takes into account two dimensions (time and space) and the three most important elements for Chinese markets (government industry policy, economic differences between local areas, qualitative analysis methods index). | Y | N
Exterior analysis importance | Government behaviours, policy changes. | Y | N
Working capital analysis importance and its liquidity sources | Market pricing or government pricing; relationship between local markets and global markets. | Y | N
Rating standard index for local industry | The index differs for each industry, and differs from the global index. | Y | N
Bad debt reserve rate compared with other banks | The bad debt reserve rate is decided by supervising committees, in contrast with foreign banks. | Y | N

Source: Translated from ABE (2010)


There have been several studies concerning rating methodology within CCRAs, and it should be noted that many researchers believe there is a strong connection and similarity between CCRAs and global CRAs. This appears to be due to CCRAs acquiring theories and skills from global CRAs through technology cooperation or global information sharing. Examples include: integrating quantitative and qualitative data; predicting future performance from historical data; and the identification and integration of rating factors (Pang, 2006). ABE (2010) demonstrated the similarities and differences in rating methods between CCRAs and global CRAs, as shown in Figure 2.15.

2.2.4 Background of the Chinese Financial Market

Understanding credit and credibility is relevant to the background of the Chinese financial market. It is crucial to be aware of the distinct characteristics and influencing factors within this market in order to develop a comprehensive understanding of the CCRA industry (Guo, 2007; Sun and Zhang, 2005).

Figure 2.16: Financial Reform

Source: Qu and Li (2012, p.35)

The background to the economic system within a socialist society is an important factor, as this influences the understanding of credit rating and credit economics. The socialist market economy system in China is not perfect. Its theoretical basis is to pursue a fundamental development strategy dominated by public ownership while also developing other diverse forms of ownership (Renmin University of China, 1984; Zheng and Gong, 2001). As such, there was no sound market system, and the government was not able to adapt to the changes required. In order to improve the function of government, three concerns had to be addressed: (1) the establishment of the legal system; (2) the supervision and management of the marketplace by government; and, (3) improved public service (Shi and Han,


2003; Chen, 2009c). The foundation for establishing a credit economy is relatively weak due to China's history of a planned economy. Qu and Li (2012) separated the development history of financial reform in China into three stages, and although the mono-banking system was abolished within the first stage (1978 to 1992, as outlined in Figure 2.16), several issues still appear to exist.

First, the current financial market is still not conducive enough to the profitability of CCRAs, as the Chinese financial system appears only to benefit big companies and rich people (Lin, 2012), and the scale and scope of the Chinese bond market are too small for CCRAs to survive in (Li et al., 2003; Zhu, 2006; Han and Yan, 2007; Liu, 2009a; Xiang, 2007). With the development of a more capitalist market system within China, there was increasing demand for materialistic consumption, although this was somewhat limited by low incomes (Wang, 2011). Because of the old economic system in China, there were limited approaches and methods for making money quickly, which caused even more problems for the credit economy; people kept savings in the bank for years instead of spending them. In 2008, after financial structure reform, bank deposits accounted for 58% of all financial assets in the Chinese financial market (McKinsey Global, 2008), the highest proportion in the world (see Figure 2.17).

Figure 2.17: Financial Assets by Region

Source: McKinsey Global (2008, cited in Roxburgh et al., 2009, p.27)

Second, SOEs might have too much economic power because of their special relationship with the government. To begin with, there was no system with clear property rights and clarified responsibilities for SOEs. SOE directors cannot manage the company on their own terms, accept sole responsibility for the company's profits


and losses, act as a juridical party, or take full civil liability. The Chinese government supports SOEs, postponing their debt repayment deadlines or abating their debts within its capability if any SOE has difficulty making repayments (Yi, 2012). Therefore, SOE directors do not need to worry about debt, as the state will be able to sustain SOE workers' incomes (Wu, 2008; Wang, 2010a; Wang, 2012a). Third, the level of bond market development varies across Chinese cities and provinces (similar to the development levels of CRAs, as previously indicated in Section 2.2.3.1). Beijing, Shanghai and Guangdong are the main contributors, with the majority of investors (5,057 in 2011, accounting for 44.49% of all investors). The bond markets in the northwest provinces have fewer investors because of the lower level of economic development in these areas (Chinabond, 2012). By contrast, Beijing was ranked the most economically influential city in the world according to the 2012 Cities of Opportunity report from PricewaterhouseCoopers and the Partnership for New York City, rising from number nine in 2011 to number one on the list of twenty-seven cities in 2012; Shanghai was ranked fifth in 2012. The most important reason for Beijing's influence was that seventy-nine of the top 500 global companies appear there (Luo, 2012).

Finally, the old government administration system was an obstacle to credit economy development, as the Chinese government had too many responsibilities because of high public expectations that it was the only authority capable of solving problems (Jia, 2004). Liu (2002a) implied that limitations within the Chinese bond market restricted the development of CCRAs: (1) the variety of financial products was insufficient; (2) issuing interest rates were not permitted to exceed the deposit rate by more than 40%; (3) many financial products were not required to be rated, according to the regulations; and, (4) the procedure for bond issuing was long and complicated, which reduced efficiency within the financial market. However, the Chinese government has a limited ability to address these difficulties because of its limited resources, unique history and financial environment. Moreover, its administration system is heavily bureaucratic due to the many departments involved, making it difficult to coordinate and integrate them into a more acceptable system (Yi, 2009).

2.3 Summary

Judging by the results of this chapter, it is evident that an examination of the CCRAEG is not as straightforward as it might first appear. It is a complex process due to the following issues, which were found within the traditional literature review of the history and background of CRAs:


(1) Uses of terminologies: the key to defining the understanding gap is to comprehend the many varied definitions of credit rating as well as its scope. There is ambiguity over (a) the meaning and application of terminologies (as summarised in Figures 2.1 and 2.2), and (b) the unclear differentiation between credit checking, credit rating and credit information from the government. It is evident that an understanding gap exists within governments, industries and academia, based upon the existing literature and legislation examined;

(2) Problematical legislation system: CCRA recognition lists differ between various departments (Figure 2.3). This is due to the complexity associated with: (a) variances of CCRA development within different cities and provinces in China (Figure 2.5); (b) the multitude of entities needing to be rated (Figure 2.7); (c) a possible overabundance of rating requirements (Figure 2.8, as well as Appendices 1 and 2); and, (d) multiple CCRA supervising departments (Figure 2.9). Analysis of these four issues reveals that the linchpin of reducing the regulation gap is (i) reducing duplicated legislation and functions from each department within the supervisory system, and (ii) tackling the lack of clarification of issues such as conflicts of interest, and the reasons for special requirements within the existing policies or for changes of legislation;

(3) Special background of each CCRA: the general background of several CCRAs was collected from the limited information available on CCRA websites or in reports (Figure 2.12). Additional issues underpin the CCRAEG because there is no global standard, with many different terms being used for each rating product (Appendix 3), and company structures with multiple business scopes (Figure 2.14);
(4) Special background of the Chinese financial market: there is a lack of understanding of the 'credit economy' within the development history of the Chinese financial market, and several issues still need to be highlighted: (a) the Chinese market is still not mature; (b) SOEs might still have too much power; (c) the level of the bond market varies significantly across different cities and provinces; and, (d) the government still holds too many responsibilities. Reasonable duties, suggestions and advice for the improvement of the CCRA industry must be cost-beneficial for CCRAs, and should also be compatible with the level of development of different provinces within the Chinese market.

In summary, this chapter has examined background information concerning CCRAs. It has provided a brief overview, along with a study of related regulations and policies, as well as information related to both the credit rating industry and Chinese financial markets, all of which are helpful in identifying and exemplifying the frameworks of the CRAEG and CCRAEG in the next chapter.


CHAPTER 3 SYSTEMATIC LITERATURE REVIEW (SLR)

3.1 Introduction

In this chapter, a tentative theory of the CCRAEG is developed through an SLR analysis of the AEG, CRAs and CCRAs (indicated as the research objective of Study 1 in Section 1.3). The SLR results on the AEG are analysed and discussed in order to provide contextual and conceptual content for Porter's model. A comparison of alternative AEG models is undertaken in Section 3.2.

Furthermore, the dimensions, components and structure of the CRAEG are developed in Section 3.3 in line with the SLR results of CRA-relevant literature in English and Chinese, and the CCRAEG model is proposed in Section 3.4. Finally, Section 3.5 provides a summary of how this conceptual framework is appropriate for the analysis and measurement of the CCRAEG. (The limitations of Porter's model will be discussed in Section 7.2.1 in line with the theoretical contribution of the CCRAEG.)

3.2 Understanding and Application of AEG

The purpose of this section is to explore how Porter's model was developed, where the concept originated, and why it coincides with research objective 1 in Section 1.3. This is essential for a CCRAEG study because previous AEG studies can have important implications for the development of the CCRAEG model and research design. Prior research has reviewed the background, concepts, components and structures of the AEG, and these studies investigated its existence in various countries.

Researchers have expressed varied opinions and preferences concerning modifications, comparisons and classifications in AEG studies. As such, an SLR was adopted to deliver a comprehensive review of all accessible relevant publications using an explicit method. The first part of this section examines the general information of an SLR that can be used for a literature review of the AEG (Section 3.2.1). The origins of the AEG are explored and delineated in Section 3.2.2, which contains a synthesis of information on the background and development of the AEG, wherein historical publications and references are displayed chronologically.

In addition, analysis of attributes of the Regulation Gap, the Expectation Gap and the Performance Gap will be presented in Section 3.2.3, which also provides some contextual information for understanding similarities and differences between the role of auditors and CRAs. The conceptual enquiry of definitions of AEG will be explained and analysed in Section 3.2.4, in an effort to isolate widely accepted definitions from


others, and to establish an explicit definition for the CCRAEG. Alternative frameworks in AEG studies will be analysed in Section 3.2.5. The application of Porter's AEG model for identifying possible issues within AEG investigation practices will be discussed in Section 3.2.6. Finally, the proposed structure of the CCRAEG theory is outlined in Section 3.2.7 according to the SLR results of AEG studies.

3.2.1 General information of the SLR for a literature review of AEG

Various databases were searched to review the suitability of the AEG. Search keywords were derived from terms found in the initial traditional literature review stage, for instance "Audit Expectation Gap", "Audit Expectations Gap" and "Audit Expectation-Performance Gap", as well as "审计期望差距" (AEG in Chinese). The general search process is presented with inclusion and exclusion criteria in Figure 3.1.

Figure 3.1: The SLR Strategy of AEG


Among seventeen databases (refer to Section 4.3.7.3), 377 publications (247 in English and 130 in Chinese) were identified as AEG-related studies in the initial screening process, of which only 192 (190 in English and 2 in Chinese) were accessible. The references in these articles (N=192) were reviewed for potential relevance, yielding a further nine publications. As such, 201 publications (199 in English and 2 in Chinese) were available in full text. Of these 201 articles, 133 publications (132 in English and 1 in Chinese) focused on AEG investigations, with the remaining 68 (67 in English and 1 in Chinese) being only AEG-related articles (see Chapter 4 for a full explanation of the SLR methodology).
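The screening arithmetic above can be sketched as a small bookkeeping function. This is a minimal illustration only: the function and stage names are the author's own shorthand, not part of the SLR protocol; the counts are those reported in this section.

```python
# Hedged sketch of the SLR screening arithmetic in Section 3.2.1.
# Stage names (found, accessible, snowballed, investigations) are illustrative.

def screen(found, accessible, snowballed, investigations):
    """Summarise each stage of the screening process as a dictionary."""
    full_text = accessible + snowballed        # 192 accessible + 9 from reference lists = 201
    related_only = full_text - investigations  # 201 in full text - 133 AEG investigations = 68
    return {
        "found": found,                  # initial AEG-related hits across databases
        "full_text": full_text,          # publications available in full text
        "investigations": investigations,
        "related_only": related_only,    # AEG-related but not AEG investigations
    }

summary = screen(found=377, accessible=192, snowballed=9, investigations=133)
print(summary["full_text"], summary["related_only"])  # 201 68
```

The point of the sketch is simply that the reported totals are internally consistent: 192 + 9 = 201 full-text publications, which split into 133 AEG investigations and 68 AEG-related articles.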

Moreover, the scope of literature used for each section of this literature review is listed in Figure 3.2. The precision of the searches (38.7%) and the percentage of inaccessibility (16.7%) show that the coverage and precision of this literature search are appropriate for the SLR of AEG (Section 4.3.7.3).

Figure 3.2: The Scope of Literature for AEG

Section | Questions or purpose | Scope of literature
3.2.2 | The origin of AEG | 199 + (2) publications; 225 + (4) references
3.2.3 | Attributes of AEG | 59
3.2.4 | Conceptual enquiry of definitions | 140
3.2.5 | Alternative conceptual theories in AEG studies | 85
3.2.6 | Application of Porter's model | N = 124 AEG investigations (123 in English + 1 in Chinese)

Note on the number of publications or references: N = 283 publications/references/citations in English, plus 6 publications/references/citations in Chinese = [199 publications + 85 references (or citations)] in English, plus [2 publications and 4 references (or citations)] in Chinese.

3.2.2 The origin of AEG

Exploring the origins of the AEG concept and examining relevant research are beneficial for an in-depth understanding of the concept. However, many researchers appear to recount the background or origins of the AEG differently, and the sources in their reviews also vary. For example, the references used for describing


the history or background of the AEG in three publications from 1997 are listed in Figure 3.3.

Gay et al. (1997) indicated that the findings on the AEG in the 1990s were dominated by three reports: the Treadway Commission of the US National Commission on Fraudulent Financial Reporting issued the first investigation of the AEG in 1987; the second investigation report came from the MacDonald Commission of the CICA (Canadian Institute of Chartered Accountants) in 1988; and the Cadbury Committee in the UK released the third in 1992. However, other authors held different opinions: Harris and Marxen (1997) did not mention the reports from the MacDonald Commission/CICA and the Cadbury Committee, whereas Innes et al. (1997) included some research on the development of the AEG in other countries. In Figure 3.3, references that are commonly cited are highlighted and linked with lines amongst these three publications. These researchers (Gay et al.; Harris and Marxen; Innes et al.) appear to have cited different sources of literature, with only three sources commonly used to explain the background of the AEG.

Figure 3.3: References and Publications

The 283 references collected from studies between 1969 and 2012 were compiled into a list. These references were used as sources of citations about the background or


history of the AEG. The aim of this part of the study was to present an extensive analysis, through the SLR, of the derivation of the AEG. By performing a statistical meta-analysis (Section 4.3.8.1) of these 283 references and their citations, it was found that AEG studies originated mainly in the UK, USA, Australia and Canada (Figure 3.4). However, further studies are not limited to developed countries: research has been conducted in 33 other countries (Appendices 4 and 5). Moreover, the number of AEG studies has increased through the decades. The historical development of the AEG is presented in Figure 3.5 in line with relevant discussions from these studies. As with any other SLR study, many potential publication biases cannot be eliminated; see Section 4.3.7.3 for more detail.

Figure 3.4: AEG Studies and Countries

Note: 1. The 283 publications are in English; two Chinese publications are excluded. 2. As with other literature reviews, publication biases may exist, since only literature in English was reviewed when exploring the AEG origin (Sections 4.3.7.3, 4.5, and 8.6).

The importance of empirically examining the differences in perceptions of auditors' responsibilities among different interest groups has been noted by many professional bodies and researchers. After 1978, more professional bodies noted the


existence of the AEG in the 1970s and 1980s (see Appendix 4.b). Although the content and structure of the AEG have been well examined and defined by academia and professional bodies, the gap still exists. Several comparison studies between countries have been conducted, and these have indicated few differences amongst countries, although variances in perceptions have been noted because of cultural differences and translation problems (Yoshimi, 1994; Garcia-Benau and Humphrey, 1992; 1993, cited in Yoshimi, 2002). Moreover, some professional bodies have taken action to reduce the gap. For example, in the USA, the AICPA issued nine new 'expectation gap' standards in 1988 (Statements on Auditing Standards Nos. 53-60) to minimise it (Guy and Sullivan, 1988). A revised Statement of Auditing Practice No. 16 (or AUS 210) was issued in 1993 in Australia (Gay et al., 1997). In the UK, the Auditing Practices Board (APB) published the 'Statement of Auditing Standards 600' document (Manson and Zaman, 1999). In addition, longitudinal studies by Porter et al. (2012a, 2012b) have revealed that the Reasonableness Gap (defined in Section 3.2.4.2) has narrowed within the UK and New Zealand (Appendix 4.c).

Figure 3.5: AEG Historical Development

Year | Event | Reference
1937 | Increasing amount of corporate fraud | McEnroe and Martens, 2001
1969 | The first AEG investigation | Lee, 1969, cited by Porter, 1990
1973 | The public holds higher expectations of auditors | Beck, 1973, cited by Porter, 1990
1974 | The first AEG definition is proposed by Liggio | De Martinis et al., 2000; Gay et al., 1997; Porter, 1990, 1991, 1993; Chowdhury and Innes, 1998; Best et al., 2001; Al-Qarni, 2004; Porter and Gowthorpe, 2004; Porter et al., 2009; Adeyemi and Uadiale, 2011
1974 | Opinion Research Corporation indicated the existence of an AEG | Harris and Marxen, 1997
1974-1978 | The AICPA (American Institute of Certified Public Accountants) was signposted by many authors as the 'first' professional body to investigate this issue. It appointed the Cohen Commission to examine whether this gap existed in 1974, and reported its existence after a full investigation in 1978. | Porter, 1990, 1991, 1993; Chowdhury and Jones, 1998; Porter and Gowthorpe, 2004; Porter et al., 2009; Fadzly and Ahmad, 2004; Sidani, 2007; Eldarragi, 2008; Onumah et al., 2009; Hassink et al., 2009; Velte and Freidanl, 2012

3.2.3 The attributes of AEG

"The cause of this gap is too complex" (Cohen Commission, 1978, p.xx). Humphrey et al. (1992) and Humphrey (1997) delivered a comprehensive summary of the causes of the AEG: (1) the probabilistic nature of auditing; (2) misunderstandings and unreasonable expectations from non-auditors; (3) hindsight in the evaluation of auditors' performance; (4) time lags in the technology and economic development

W. Sun 2015

65

process; (5) new expectations following corporate crises; and, (6) a self-interested profession in the self-regulatory system.

Figure 3.6: AEG Attributes Analysis (1)

A thematic synthesis and a statistical meta-analysis (Section 4.3.8.1) were applied to analyse the reasons, causes and factors of the AEG. The SLR results confirmed that multiple attributes and factors have to be considered due to the complex nature of the AEG, including the role of auditors, incidents influencing the AEG, environmental factors and psychological reasons. Summaries of the results are listed in Figures 3.6, 3.7 and 3.8.
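As an illustration of the tallying step behind such a thematic synthesis, the short fragment below counts how many studies cite each attribute. The study-to-attribute mapping shown here is hypothetical and purely illustrative; the counts in the thesis itself are derived from the SLR corpus, not from this sketch.

```python
from collections import Counter

# Hypothetical mapping of AEG studies to the attributes they discuss
# (illustrative only; not the actual SLR data).
study_attributes = {
    "He (2010)": ["conflicts of interest", "hindsight bias", "halo effect"],
    "Porter (1993)": ["misunderstanding from the public", "deficiency in standards"],
    "Humphrey (1997)": ["conflicts of interest", "misunderstanding from the public"],
}

# Thematic tally: how many studies mention each attribute.
tally = Counter(attr for attrs in study_attributes.values() for attr in attrs)
for attribute, count in tally.most_common():
    print(f"{attribute}: {count}")
```

A frequency table of this kind underlies the 'No.' column in Figure 3.7.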

Figure 3.7: AEG Attribute Analysis (2)

Item | Attribute | No. | References
1 | Conflicts of interest | 13 | Rizzo et al. (1970); Davidson (1975); Johnson (1988); Gloeck and De Jarger (1993); Chowdhury (1996); Hendrikson (1998); Koo and Sim (1999); Lu (2003); Lee et al. (2008; 2009b; 2010); Lee and Ali (2009); James and Izedonmi (2010)
2 | Failure to communicate sufficient information | 8 | Guy and Sullivan (1988); Power (1997); Almer and Brody (2002); Swift and Dando (2002); Higson (2009); Ebimobowei (2010); Cohen et al. (2010); James and Izedonmi (2010)
3 | Subjective nature of terms and concepts in standards | 2 | Humphrey (1997); Bogdanoviciutie (2011)
4 | Low visibility and uncertainty in auditing reports | 15 | Cohen Commission (1978); Schandl (1978); Miller et al. (1993); Kelly and Mohrweis (1989); Hatherly et al. (1991); Lee (1994); Chowdhury (1996); Humphrey (1997); Sun and Jiang (1997); Chowdhury and Innes (1998); Zhang (2003); Kirk (2006); Chong and Pflugrath (2008); Alwardat (2010); He (2010)
5 | Limitation of self-regulation | 16 | Byington (1991); Humphrey et al. (1992); Humphrey (1997); Chowdhury and Innes (1998); Gay (1998); Sikka et al. (1998); De Martinis et al. (2000); Haniffa and Hudaib (2007); Xu (2008); Noghondari and Foong (2009); Lee et al. (2009b); Ebimobowei (2010); Alwardat (2010); Cohen et al. (2010); He (2010); James and Izedonmi (2010)
6 | Misunderstanding from the public | 32 | Cohen Commission (1978); MacDonald Commission (1988); Gay (1998); Porter (1990; 1993); Humphrey et al. (1992); Humphrey (1997); Epstein and Geiger (1994); Chandler and Edwards (1996); Chowdhury and Innes (1998); Sikka et al. (1998); De Martinis et al. (2000); Boyd et al. (2001); Almer and Brody (2002); Adams and Evans (2004); Porter and Gowthorpe (2004); Noghondari and Foong (2009); Lee et al. (2007; 2008; 2009; 2010); Mahadevaswamy and Salehi (2009); Salehi (2008); Lee and Ali (2009); Abomobowei (2010); Alwardat (2010); Dennis (2010); He (2010); James and Izedonmi (2010); Adeyemi and Olowookere (2011); Norden and Svensson (2011); Enyi et al. (2012)
7 | Difficulties in performance evaluation | 14 | Shaked et al. (1982); McNair (1991); Humphrey (1992; 1997); Gay (1998); Shaikh and Talha (2003); Adams and Evans (2004); Noghondari and Foong (2009); Lee et al. (2008; 2009); Ebimobowei (2010); Cohen et al. (2010); He (2010); Norden and Svensson (2011)
8 | Deficiency in standards | 12 | Kinney (1993); Chowdhury (1996); De Martinis et al. (2000); Liu (2006); Haniffa and Hudaib (2007); Liu (2008); Lee et al. (2008; 2009; 2010); Ebimobowei (2010); James and Izedonmi (2010); Enyi et al. (2012)
9 | Time lags in establishment of legislation | 1 | He (2010)
10 | Low commitment/stewardship | 6 | Zhao (2007); Xu (2008); Higson (2009; 2010); He (2010); Ebimobowei (2010)
11 | Time lags in responses/early warning from auditors | 9 | Humphrey et al. (1992); Chowdhury (1996); Chowdhury and Innes (1998); Gay (1998); Shaik and Talha (2003); Abomobowei (2010); He (2010); Adeyemi and Olowookere (2011); Norden and Svensson (2011)
12 | Knowledge and experience can influence observers' perceptions | 1 | Amirhossen and Foong (2009)
13 | Hindsight bias (Fischhoff, 1975) | 4 | Anderson et al. (1993); Jennings et al. (1993); Lee and Ali (2009); Hatherly et al. (1991, cited by Best, 2001)
14 | Halo effect | 2 | Arrington et al. (1983; 1985)
15 | Actor-observer bias (Jones and Nisbett, 1972; Miller, 1975; Jones, 1979; Watson, 1982) | 2 | He (2010); Liu and He (2010)
16 | Group effect and mood linkage (Totterdell et al., 1998) | 2 | He (2010); Liu and He (2010)
17 | Primacy effect (Murdock, 1962) | 2 | He (2010); Liu and He (2010)
18 | Self-protection bias (Miller and Ross, 1975) | 2 | He (2010); Liu and He (2010)
19 | Self-serving bias (Zuckerman, 1979) | 2 | He (2010); Liu and He (2010)
20 | Self-enhancement bias (Zuckerman, 1979) | 2 | He (2010); Liu and He (2010)
21 | Fundamental attribution error (Jones and Harris, 1967) | 2 | He (2010); Liu and He (2010)
22 | Self-interest bias (Darke and Chaiken, 2005) | 2 | He (2010); Liu and He (2010)
23 | Lack of technical competence | 5 | Swift and Dando (2002); Lee and Ali (2008; 2009); Ebimobowei (2010); Bognoviciutie (2011)
24 | Increasing growth of responsibilities | 1 | Humphrey et al. (1992)
25 | Recruitment process/admission of membership | 5 | Lin (2004); Haniffa and Hudaib (2007); Lee et al. (2008; 2009); Ebimobowei (2010)
26 | Political and legal structure | 5 | Sikka et al. (1998); Lin and Chen (2004); Haniffa and Hudaib (2007); Ebimobowei (2010); He (2010)
27 | Dominant societal value | 2 | Haniffa and Hudaib (2007); Ebimobowei (2010)
28 | Company crises or the effect of released information | 12 | Humphrey et al. (1992); Humphrey (1997); Power (1994); Kinney and Nelson (1996); Gay (1998); Almer and Brody (2002); Porter and Gowthorpe (2004); Lee and Ali (2008); Noghondari and Foong (2009); Adeyemi and Olowookere (2010); Alwardat (2010); Bogdanoviciutie (2011)
29 | Management/staff competence | 4 | Cohen Commission (1978); Chowdhury (1996); Chowdhury and Innes (1998); Lin and Chen (2004)
30 | Low auditing payment | 2 | Lee et al. (2008); Ebimobowei (2010)
31 | The size of the auditing firm (smaller firms have worse performance) | 1 | He (2010)
32 | Political game between the public and the auditors | 1 | Gaa (1991)

Along with these 32 attributes, the complex role of auditors might be another reason behind the existence of the AEG, leading to misunderstanding of auditors' performance (He, 2010; Gay et al., 1997; Cohen Commission, 1974; Humphrey et al., 1992; Humphrey, 1997; Shaikh and Talha, 2003; Lee and Ali, 2010; Lee et al., 2009, 2010; Noghondari and Foong, 2009; Alwardat, 2010; James and Izedonmi, 2010; Norden and Svensson, 2011; Dewing and Russell, 2002). Just like CRAs (whose role will be discussed in Section 3.3), auditors also face conflicting interests. Significant events such as the Enron scandal have raised public expectations of auditors, who are now expected to perform multiple functions, for example, identifying operational risk, providing advice or suggestions for internal control, and reporting any illegalities (Boynton et al., 2005). Moreover, Gloeck et al. (1993) maintained that 'conflicts of interest' is one of the most important causes of the AEG, and they believed that the self-regulation system was not efficient enough for the development of good auditing (Humphrey et al., 2007). Auditors can provide organisations with advice and suggestions about management, but they might also advise companies on ways to make their reports deceptive and to confuse report users (Rizzo et al., 1970; Davidson, 1974; Koo and Sim, 1999). There is a lack of independence in auditing because auditors seek ways to increase their profits, rather than simply addressing the concerns of the public (Johnson, 1988; Hendrickson, 1998; Lu, 2003).

In addition, as indicated in the figures above, other possible reasons for and causes of the gap include environmental factors and psychological reasons (Saeidi, 2012), with some being components or elements within the AEG, such as deficiencies in regulation, misunderstandings among the public, and low commitment from auditors. In order to capture the possible issues revealed from the literature on the AEG, thirty-two attributes are interpreted in Figures 3.6 and 3.7. It should be noted that the size of the circles (in Figure 3.6) does not reflect the size of the gap, and no comparison of the sizes of expectation gaps in auditing is made in this research. Moreover, these diagrams are presented to show interconnected relationships amongst gaps and concepts, but they do not explain how the circles or elements contribute to each other. Finally, descriptions of each circle are displayed in the note within Figure 3.6, and the attributes are listed in Figure 3.7.

Consequently, according to the results listed in Figure 3.6, nine attributes appeared more often than the others in the literature: (1) conflicts of interest; (2) failure to communicate sufficient information; (3) low visibility and uncertainty in auditing reports; (4) limitation of self-regulation; (5) misunderstanding from the public; (6) difficulties in performance evaluation; (7) deficiency in standards; (8) time lags in responses and early warning from auditors; and, (9) company crises or the effect of released information.

3.2.4 The conceptual enquiry of definitions of AEG

Studies examining the AEG are extensive and have a long and distinguished history. This study explored the concepts of AEG defined in research since circa 1969; not every publication provided the same definition. From a study of the wider research (283 publications in English and 6 in Chinese, as indicated in Section 3.2.1), a mixed-method analysis of the SLR results found that 41 different definitions existed, with varying similarities and differences amongst them. Although the AEG is a "worldwide phenomenon" with an extensive literature (De Martinis et al., 2000, p.61), the understanding of the term is somewhat vague, but "…vagueness can be eliminated through an 'evaluative conceptual enquiry'…in order to make the expressions or concepts more useful" (Dennis, 2010, p.132).

As such, the purpose of this section is to provide a comprehensive analysis of the AEG definitions, and to develop an understanding of the term from previous studies through a conceptual enquiry (Section 4.3.7.4). The method used for examining the existing rules on the meaning of AEG is explained in Section 3.2.4.1; an analysis of the definitions is presented in Section 3.2.4.2; and, finally, other rules on defining AEG (constructed according to other researchers' comments) are outlined in Section 3.2.4.3.


3.2.4.1 Analysis method used for conceptual enquiry

He (2010) suggested that there are three categories that can be considered when analysing AEG definitions. The meanings of these three categories have been reframed and defined in Figure 3.8.

Figure 3.8: Categories Used for Conceptual Enquiry

Category | Meaning | Actual Application | Implication
Subject | Whose expectation or perception is measured | The sample frame, population and interest groups | The scope of the population can be particular to one group of participants or cover several interest groups; however, the number and type of participants should be consistent across the boundaries of each gap.
Object | The entity acted upon by the subject | The questions to be asked of the interest groups | AEG definitions require the inclusion of clear objects to inform the main type and wording of questions, in order to crystallise the content within the AEG.
Gap | The boundaries of each gap component | The kind of information that will be asked of the interest groups | A good specification of a gap should set boundaries at the same level for comparison. For example, expectation cannot be compared with performance, because an expectation is a psychological picture in people's minds rather than something real, whereas performance concerns actual conduct and operation. However, expectation can be compared with beliefs, understandings or views of actual performance, as these are subjective in nature.

3.2.4.2 Analysis of definitions

As previously indicated, forty definitions have been found within 201 publications (199 English-language publications and two Chinese-language publications). Figure 3.9 outlines the number of articles that have quoted each definition, demonstrating the popularity of each definition within previous studies. Porter's definition has proven to be the most popular (being cited seventy-three times), with Liggio's definition being quoted forty-two times, probably because it was the first AEG definition (more details in Appendix 6). Some definitions show similarities because they developed from each other. This section provides an analysis of the definitions based on the three categories defined in the table above. The thematic analysis results of these definitions are listed in Appendix 7 with full explanations of each one. For example, in Porter's definition (1990, p.12), "The gap which exists between society's expectations of auditors and auditors' performance", the Subject is "society"; the Object is "auditors and auditors' performance"; and the Gap is between "expectations" and "performance". The summary of these results is presented as follows:


(1) The Subject

Chiang and Northcott (2011) argued that the meaning of the AEG may vary depending upon whose perspective is used to measure auditors' performance. Therefore, in order to understand and crystallise the meaning of the theories, identifying the subject of each component of the gap is essential. Porter (1990) only indicated the subject of expectation, namely 'the public', in her first definition. However, Porter did state that auditors' performance should be measured as perceived by society, both in her research aim (Porter, 1990) and in her analysis of results (Porter, 1993). In addition, she updated her definition in 2004 to provide more clarification of the subject, defining 'society' in terms of the public's expectations and perceptions.

Figure 3.9: Definitions and No. of Articles

Note: The citations are in Appendix 6 and the definitions are in Appendix 7.

Porter's AEG study considered Perception Gaps, which are illustrated in Figure 3.10; multiple perception comparisons can be made among interest groups and gap boundaries. Like Porter's conception of Perception Gaps, Innes et al. (1997) illustrated the multiple dimensions of perceptions between users and auditors in line with what they should be doing and what they are doing (Figure 3.11). Gaps 1 and 3 represent perception differences between auditors and users, whereas Gaps 2 and 4 relate to the Performance Gap. However, there are three issues with Innes et al.'s model, concerning the existence and meaning of some of the perception gaps:

Figure 3.10: Perception Gaps in Porter's Model

Figure 3.11: Six Perception Comparisons - AEG

Source: Innes et al. (1997, p.704)

First, the fifth gap (between auditors' perceptions of what auditors should be doing and users' perceptions of what auditors are doing) and the sixth gap (between auditors' perceptions of what auditors are doing and users' perceptions of what auditors should be doing) do not appear to have significant meaning in AEG studies, because they compare one group's perceived performance with the other group's expectations. Users' perceptions of what auditors are doing are not comparable with auditors' perceptions of what they should do; likewise, users' perceptions of what auditors should do are not comparable with auditors' perceptions of what they are doing. Second, there is no evidence to ascertain the existence of the second, third, fourth, fifth, or sixth gaps, since Innes et al. only examined the first gap (the difference between auditors' perceptions of what auditors are doing and users' perceptions of what auditors are doing).

(2) The Object

In relation to the questions asked in the survey, Dennis (2010) criticised Porter's choice of wording, claiming that the words 'expectations' and 'expects' were misleading and that she should have used 'beliefs' or 'desire' instead, since what auditors can do and what they should do are different. Among the selected forty definitions there are many other terms with similar meanings, such as 'views', 'understanding', 'preferences', 'desires' and 'needs', and it is difficult to evaluate which term is the most appropriate if participants' understandings and preferences are not considered. Similarly seeking to clarify the specific level of people's expectations, Hatherly et al. (1992) distinguished 'what auditors should do ideally', 'what they should do according to standards', and 'what they can do' as three levels of questions to be asked in an AEG study. This definition was adopted by Monroe and Woodliff (1994b), Leung and Chau (1994) and Asare et al. (2013). Concerning the specific questions to be asked, Porter used the following wording in her research design: "….auditors should perform this duty?", "It is an existing duty of auditor", and "how well existing duties are performed" (Porter, 1990, p.391). These statements appear to be similar to the three levels outlined in the definition from Hatherly et al. (1992).

On the other hand, there is considerable debate among researchers about the topics to include at each level of question. Gray and Manson (2010) suggested that other characteristics of auditors should be added to Porter's model in order to provide a more detailed framework. Norden and Svensson (2011) suggested that 'independence' should be considered; Turner et al. (2010) pointed to the importance of 'communication'; and Godsell (1992) advocated the inclusion of audit reports. Humphrey et al. (1991) claimed that there are four elements in an AEG: (a) the role and responsibilities of auditors; (b) the quality of the audit function; (c) the structure and regulation of the profession; and, (d) the nature and meaning of audit report messages. However, Porter and Gowthorpe (2004) argued that their research objective was different from that of Humphrey et al., and that although their survey instruments differed, their "…general finding was the same", as both found that an AEG existed. Therefore, whether or not the object clarifies such elements may not have an impact on AEG results; and if too many elements are included, the definition may become too narrow to be applied in research in different cultural contexts. Independence, communication and audit report quality are included among auditors' responsibilities (specified by regulations), and simply adding these elements may not help to identify and analyse the nature, composition and extent of the AEG (which was Porter's objective). However, making reference to these standards might help the public to understand the questions, providing more information with which to collect comparable evidence (which was Humphrey et al.'s objective).

(3) The Gap

The gap presented by Porter and Gowthorpe (2004) is between 'Expectations' and 'Perceptions', and He (2010), William et al. (2004) and Gay and Simnett (2006) have similar expressions within their definitions. Similarly, another eight definitions (from the Cohen Commission, Jennings et al., the CICA, Gray and Manson, Koh and Woo, Leung et al., Lowe, and Zhao) have used this expression of a gap between expectations and performance, although they did not indicate how this performance should be measured. However, unlike these researchers, Porter produced definitions for each component to make the structure of the gap clearer. One was entitled the 'Reasonableness Gap', outlined as "…the gap between what society expects auditors to achieve, and what they can reasonably be expected to accomplish". The Performance Gap was indicated as being "…the gap between what society can reasonably expect auditors to accomplish and what society perceives they achieve". Moreover, according to Porter and Gowthorpe (2004, p.6), two sub-gaps were highlighted within the Performance Gap: the Deficient Standards Gap (defined as "…the gap between the responsibilities that can reasonably be expected of auditors, and auditors' existing responsibilities as defined by statute and case law, regulations and professional promulgations"), and the Deficient Performance Gap (which is "…the gap between the expected standard of performance of auditors' existing responsibilities and auditors' performance, as expected and perceived by society"). Porter's model (Figure 1.1) was compared with the one produced by the MacDonald Commission (Figure 3.12) in order to explore how Porter developed her model. Although both models have the same main gap components (both identified Unreasonable Expectations, the Standards Gap and the Performance Gap), the MacDonald Commission's model has two limitations: (a) it failed to identify Unreasonable Expectations within the Performance Gap, although expectations about performance can be unreasonable as well; and, (b) the Standards Gap should not be split into Unreasonable Expectations and Reasonable Expectations, because the Standards Gap is better not measured against expectations.

Figure 3.12: The First AEG Model

Source: MacDonald Commission / CICA (1988)

Consequently, with regard to a conceptual enquiry of these AEG definitions, the rules for framing an AEG definition are: (a) the sample group and population of the AEG should be clarified; (b) the wording about people's expectations, beliefs or perceptions needs to be specific; and, (c) the gap itself needs to be expanded to include gaps between different interest groups in their expectations, beliefs or perceptions, or the differences in expectations, beliefs and perceptions within one interest group. Moreover, it was found that Porter's definition of the AEG appeared to be clearer and more scientific than the other definitions, with four advantages. For example, as can be found in Appendix 7:

- "The gap which exists between society's expectations of auditors and auditors' performance" (Porter, 1990, p.12);
- Between "(i) society's expectations of auditors; and (ii) auditors' performance as perceived by society" (Porter and Gowthorpe, 2004, p.vi);
- "The expectation gap stems from differing expectation levels as to both quality and standard of the accounting profession's performance and what it is expected to accomplish" (Liggio, 1974, p.24).

First, Porter took both people's expectations and perceptions into consideration. Second, the questions to be asked were illustrated in her questionnaire design, even though they were not mentioned in the definition. Third, although later research papers suggested that Porter's model ignored certain elements of auditors' responsibilities, Porter was still able to confirm the expectation gap from the public. Finally, Porter's definition is the most widely adopted version in the literature.

In addition, as indicated by Porter (1990, 1991), her conceptual framework contained a broader meaning than the definitions from Liggio (1974) and the Cohen Commission (1978), and it also embraced the notion of unreasonable expectations. Porter and Gowthorpe (2004) asserted that there had been “no in-depth analysis of the nature, composition and extent of the gap” prior to Porter’s study in 1989. Porter therefore made a significant contribution to specifying the structure and components of AEG, as discussed above.

3.2.4.3 Other rules or comments

Some researchers have compared and analysed several definitions in varied groups or pairs in their literature reviews, and have found similarities and differences amongst these definitions according to: (1) the focus and main content of the definition; and, (2) whether the definition is fully inclusive.

(1) The focus and the main content of the definition

Saad and Lesage (2009) indicated that both Porter and Jennings et al. defined the gap as the "…difference between public expectations and actual service provided by audit profession", but Porter provided more detail about the components in the gap. However, Chiang and Northcott (2011) argued that Porter's definition concerned "…the audit objectives, rather than practices or outcomes", while they believed that Jennings et al.'s definition highlighted "…the actual audit outcomes…[rather than]…perceived outcomes or preferred objectives". Nevertheless, the result of the evaluation in Appendix 7 shows that the 'object' (as defined in Section 3.2.4.1) of Porter's definition is 'auditors' and 'auditors' performance', with Jennings et al.'s 'object' being similar (with the focus on 'what the profession provides').

Monroe and Woodliff (1994) pointed out that Porter's and the Cohen Commission's definitions differed from that of Hatherly et al. due to their varied focus. For instance, unlike Porter and the Cohen Commission (who asserted that there were three components in the gap), Hatherly et al. (1991, 1992) suggested that there are three levels in the gap. Similarly, the thematic synthesis of definitions from the SLR results in Appendix 7 showed the same difference between them: Hatherly et al.'s definition sought to measure the gap between auditors and users at three levels ('Expectations – Expectations'; 'Beliefs – Beliefs'; and, 'Perceptions – Perceptions'), whereas Porter illustrated the gap between 'Expectations – Perceived performance'.

Al-Qarni (2004) identified the definition from the Cohen Commission as being more understandable and general than that by Liggio, with Guy and Sullivan's definition being even broader because it "…expands the sphere of responsibility to include accountants as well as auditors of the financial statements". However, according to the result in Appendix 7, there is only a very small similarity amongst these three definitions, as they present three different 'gaps', namely: 'Expectations – Performance'; 'Expectation levels – Expectation levels'; and, 'Beliefs – Beliefs'.

(2) Fully inclusive

Chiang and Northcott (2011) used the term 'AEG II' (or 'unrecognised expectation gap') from Specht and Waldron (1992) and Specht and Sandlin (2004), and clarified it as an important distinction from the other research, although they considered it similar to Porter's Performance Gap. They indicated that AEG II is the comparison of auditors' duties with actual audit practices, rather than with society's perceptions. However, they treated auditors' perceptions of their own practice as 'the actual audit practice', which might not reflect real performance, because such data can also only reflect perceptions. As such, this AEG II is dissimilar to auditors' actual practices. Specht and Waldron (1992) indicated that this 'unrecognised expectation gap' exists between auditing standard setters and practising CPAs in line with their perceptions of the "stated objectives of the statement", which is the gap between "what [the] standards were intended to accomplish, and auditor perceptions of what the statement would accomplish" (Specht and Sandlin, 2004, p.26). According to their original expression, this 'new gap' may exist between regulators' and CPAs' perceptions of responsibilities, standards and auditors' performance. However, this had already been identified and examined as perceptual differences among varied interest groups in the studies by Porter, Anderson et al. (1998) and Lowe (1994). In addition, this was the reason that Porter (1990, 1991, 1993) and the Cohen Commission (1978) particularly noted the meaning of 'the public' or 'society' as the general population, which includes all non-auditors, in their definitions.

3.2.5 The alternative conceptual frameworks in AEG studies

Twenty studies adopted Porter's model, and the other 104 AEG investigations adopted seventeen theoretical frameworks, as indicated in Figure 3.13. Conceptual frameworks should be confirmed by observations or experiments, and must be supported by organised principles. However, only three of the seventeen frameworks were identified as alternative conceptual frameworks appropriate for this research (the rest will be reviewed in Section 4.4). These three conceptual frameworks, which are analysed and discussed in this section, are outlined in Figure 3.13.

Figure 3.13: Conceptual Frameworks Used in AEG Studies

Section | Conceptual Framework | Reference
3.2.5.1 | Role Theory (Davidson, 1975) | Porter (1990); Adeyemi and Uadiale (2011)
3.2.5.1 | Role Theory (Michael, 2001) | Ebimobowei and Kereotu (2011)
3.2.5.2 | Attribution Theory (Mitchell and Wood, 1980; Arrington et al., 1985) | Anderson et al. (1998)
3.2.5.3 | Service Quality Gap (PZB; ZBP) | Duff (2004); Turner et al. (2010); He (2010)
3.2.6 | Porter's AEG Model | Porter (1993); Manatunga (2003a); Porter and Gowthorpe (2004); Lee et al. (2007); Lee et al. (2008b); Majakoski (2008); Hassink et al. (2009); Lee et al. (2009a); Lee et al. (2009b); Porter et al. (2009); Siddiqui et al. (2009); Alwardat (2010); Cohen et al. (2010); He (2010); Lee et al. (2010); Adeyemi and Olowookere (2011); Oseni and Ehimi (2012); Porter et al. (2012a); Porter et al. (2012b); Saeidi (2012)

3.2.5.1 Role theory

Different role theories have been adopted in AEG studies. Role theory explains that auditors have multiple roles and responsibilities amongst stakeholders. Within the 104 investigations that did not use Porter's model, Adeyemi and Uadiale (2011) and Ebimobowei and Kereotu (2011) adopted role theory in their AEG studies. Adeyemi and Uadiale used the concept of the multiple roles of auditors with multiple expectations (from the role theory of Davidson, 1975). However, this had already been applied and adapted by Porter (1990), not only with results showing that differences can be found among varied interest groups, but also as a better-designed AEG theory and practice. Ebimobowei and Kereotu, meanwhile, applied the role theory of Michael (2001) only to ascertain that expectations are affected by the actual performance of auditors. As such, these models do not appear to be as well defined as Porter's model, since Porter's model provides more gap components (beyond expectations and performance), with many Perception Gaps among different interest groups, as discussed in Section 3.2.4.2.

3.2.5.2 Attribution theory

Attribution theory refers to the attribution of expectations and perceptions. Anderson et al. (1998) used attribution theory (from Mitchell and Wood, 1980) and the questionnaire design of Jennings et al. (1993) to establish: (1) two regression models (see Figure 3.14) for auditors' and judges' attributions of responsibility in a management fraud case, with two dummy variables (collusion and materiality); and, (2) another two regression models for a bankruptcy case, with six variables (collusion, materiality, evidence reliability, timing of unpredicted events, the interaction between attitude and reliability, and the interaction between attitude and timing). Data were collected from 105 audit managers and partners and ninety-seven practising judges, and differences were found between auditors' and judges' attributions in both the management fraud and bankruptcy cases.

Figure 3.14: Attribution of Auditors' Responsibilities

Management fraud:
  Auditor's ATTRIB = β0 + β1COLL + β2MAT + ε
  Judge's ATTRIB = β0 + β1COLL + β2MAT + β3ATT + β4ATT*COLL + β5ATT*MAT + ε

Bankruptcy:
  Auditor's ATTRIB = β0 + β1REL + β2TE + ε
  Judge's ATTRIB = β0 + β1REL + β2TE + β3ATT + β4ATT*REL + β5ATT*TE + ε

Where:
  ATTRIB: attribution of auditor's responsibility
  COLL: collusion (coded 0 when collusion was present and 1 when absent)
  MAT: materiality (coded 0 when low and 1 when high)
  REL: evidence reliability (coded 0 when reliability was low and 1 when high)
  TE: time elapsed between the audit opinion and the bankruptcy (coded 0 when long and 1 when short)
  ATT: aggregate attitude score (the higher the score, the more unfavourable the attitude)
  ATT*COLL, ATT*MAT, ATT*REL, ATT*TE: interactions between the attitude score and the respective variables

Adapted from Anderson et al. (1998)
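As a sketch of how such a dummy-variable model can be estimated, the script below simulates the judges' management-fraud regression and recovers its coefficients with ordinary least squares. The data, coefficient values and sample size are invented purely for illustration; Anderson et al. estimated their models from survey responses, not simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Dummy variables coded as in Figure 3.14
coll = rng.integers(0, 2, n)   # collusion: 0 = present, 1 = absent
mat = rng.integers(0, 2, n)    # materiality: 0 = low, 1 = high
att = rng.normal(10, 2, n)     # aggregate attitude score (assumed scale)

# Judge's model: ATTRIB = b0 + b1*COLL + b2*MAT + b3*ATT
#                + b4*(ATT*COLL) + b5*(ATT*MAT) + error
true_beta = np.array([2.0, -0.5, 1.0, 0.3, 0.2, -0.1])  # illustrative values
X = np.column_stack([np.ones(n), coll, mat, att, att * coll, att * mat])
attrib = X @ true_beta + rng.normal(0, 0.1, n)

# Ordinary least squares estimate of the six coefficients
beta_hat, *_ = np.linalg.lstsq(X, attrib, rcond=None)
print(np.round(beta_hat, 2))
```

With low noise, the estimated coefficients land close to the values used to generate the data, which is the sense in which such a regression separates the dummy, attitude, and interaction effects.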

Mitchell and Wood (1980) developed a theory for assessing supervisors’ expectations of subordinates, and identified the different factors that affect a supervisor’s response. The second theory used by Anderson et al. came from Arrington et al. (1983, 1985), who showed that CPAs and business owners used different judgement models when evaluating auditors’ performance. This was based on seven cases, with data collected from ninety-two CPAs and fifty-five small business owners. Arrington et al. adopted Kelley’s (1972) attribution model, which posited consensus, consistency and distinctiveness as the three main types of information used in judgement (Heider, 1985; Kelley, 1967), together with the equation [Internality = (Ability + Effort) – (Task Difficulty + Luck)] from Luginbuhl et al. (1975, cited in Arrington et al., 1983). They used these to analyse their respondents’ judgements, producing results that show how each type of information variable was used in the judgement (Figure 3.15). They revealed that actor-observer bias (Arkin et al., 1978) exists in auditing, because “actor and observer view the causes of the actors’ behaviour differently”. Moreover, they found that business owners showed “fundamental attribution error” in their judgement, because they searched for only one cue (in order to “make inferences about the causes of people’s behaviour”), and did not consider the consistency and distinctiveness of an auditor’s behaviour, or the consensus behind such behaviour (Johns and Saks, 2010).
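The internality equation can be stated as a one-line function. The sketch below uses hypothetical attribution scores; the scale and example values are assumptions for illustration, not Arrington et al.’s instrument.

```python
# Internality score from Luginbuhl et al. (1975), as applied by Arrington
# et al. (1983): internal causes (ability, effort) minus external causes
# (task difficulty, luck). Input scores here are hypothetical ratings.
def internality(ability, effort, task_difficulty, luck):
    """Positive result: behaviour attributed mainly to internal causes."""
    return (ability + effort) - (task_difficulty + luck)

# A rater who credits ability and effort over difficulty and luck
# produces a positive (internal) attribution:
print(internality(7, 6, 4, 3))   # → 6
```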

Models from Mitchell and Wood (1980), Arrington et al. (1983, 1985), and Anderson et al. (1998) were used to examine particular groups of the public: supervisors and subordinates, CPAs and small business owners, and auditors and judges (respectively). They may not be suitable for this research, because four distinct interest groups are investigated in the evaluation of the CCRAEG in this study. Nevertheless, the psychological theories adopted in them should be considered in the development of any expectation gap model. These psychological factors were identified from the prior research in Section 3.2.5.2, whereby Attributes 12 to 21 were listed as influencing factors of expectations within the AEG (from a psychological perspective).

Figure 3.15: Three Types of Information and Internality

Factor | High level | Low level | Meaning
Consensus | External | Internal | Information about whether the behaviour of the actor corresponds to the behaviour of other actors in similar circumstances.
Consistency | Internal | External | Information about whether a specific auditor has a “record” of success or failure.
Distinctiveness | External | Internal | Information about whether failure is common on this particular task but not on other tasks.

Note:
1. Consistency cue: “how consistently a person engages in some behaviour over time”.
2. Distinctiveness cue: this “reflects the extent to which a person engages in some behaviour across a variety of situations”.
3. Consensus cue: “how a person’s behaviour compares to that of others”.

Adapted from Arrington et al. (1983)

3.2.5.3 Service quality gap

The ‘Service Quality Gap’ is a well-developed theory within marketing. Gronroos (1984) appeared to be the first to define perceived service quality, based on the literature concerning issues including: expected service and perceived service; promises and performance; technical quality and functional quality; and image as a quality dimension (Figure 3.16). Gronroos collected empirical evidence from 219 questionnaire responses, and produced results showing that functional quality is more important than technical quality. The Service Quality Gap model developed by Zeithaml, Berry and Parasuraman (1988) identified four possible influencing factors: expectations, needs, experiences, and word of mouth. He (2010) and Duff (2004) adapted the Service Quality Model (Parasuraman, Zeithaml and Berry, 1985), and He (2010) also considered the expectation influencing factors identified by Zeithaml, Berry and Parasuraman (1993). Like He (2010), Turner et al. (2010) also adapted these models and constructed an Audit Service Quality model. The structure and components of AUDITQUAL and the AEG were not verified by empirical data in Duff (2004), He (2010) or Turner et al. (2010). As such, service quality gap models in auditing might not be suitable for this study, because conceptual theories have to be confirmed by observations or experiments. However, in order to develop a CCRAEG study, it may be beneficial to compare and contrast these models with Porter’s model to explore the structure, components and attributes of the service quality gap for the establishment of the CCRAEG (comparison results and the Gantt chart analysis process can be found in Appendix 37).

Figure 3.16: Service Quality Gap

Source from: Gronroos (1984)

3.2.6 Application of Porter’s AEG

There are similar AEG results in different countries, although there were differences in research methodologies (De Martinis et al., 2000). Within these 124 AEG studies, twenty articles adopted Porter’s (1990) model (Figure 3.17).

Those twenty articles investigated the AEG in eleven countries and reached the same conclusion, namely that the AEG exists, as shown in Figure 3.18. However, some scholars (e.g. Schelluch (1996); De Martinis et al. (2000); Eldarragi (2008); Laurentiu et al. (2009); Adeyemi and Uadiale (2011); Chiang and Northcott (2010)) use “AEG” as the acronym


for Audit Expectations Gap. Cohen et al. (2010) termed it “EG”. Duff (2004) and Beattie et al. (2003) referred to the Audit Expectation-Performance Gap as “AEP”.

Figure 3.17: Publications Related to Porter’s Model

Adopted Porter’s model (theory), 20: Porter (1993); Manatunga (2003a); Porter and Gowthorpe (2004); Lee et al. (2007); Lee et al. (2008b); Majakoski (2008); Hassink et al. (2009); Lee et al. (2009a); Lee et al. (2009b); Porter et al. (2009); Siddiqui et al. (2009); Alwardat (2010); Cohen et al. (2010); He, J. (2010); Lee et al. (2010); Oseni and Ehimi (2012); Adeyemi and Olowookere (2011); Porter et al. (2012a); Porter et al. (2012b); Saeidi (2012)
Adapted Porter’s model (theory), 8: Otaibi (2003); Duff (2004); Daud (2007); Chong and Pflugrath (2008); He (2010); Berson and Kmita (2011); Bogdanoviciutie (2011); De Almeida and De Almeida (unknown)
Criticism or limitation of Porter’s model, 7: Daud (2007); Porter et al. (2009); Turner et al. (2010); Dennis (2010); Chiang and Northcott (2011); Norden and Svensson (2011); Dewing and Russell (2012)
Used Porter’s practice (method), 4: Lin and Chen (2004); Sidani (2007); Lee et al. (2010); Muyler et al. (2012)

Figure 3.18: Research that Adopted Porter’s Model

Author/Year | Country | Result
Porter (1993) | New Zealand | Standard gap = 50% of the AEG; unreasonable expectation gap = 34%; performance gap = 16%.
Troberg and Viitanen (1999, cited in Manatunga, 2003a) | Finland | Three components of the AEG examined and explained.
Porter and Gowthorpe (2004); Porter et al. (2009; 2012a; 2012b) | UK + New Zealand | Although the components of the gap in the two countries are similar, the extent of the gap is prominently different.
Lee et al. (2007, 2008a, 2008b, 2009a, 2009b) | Malaysia | Reasonableness gap, standard gap and performance gap exist in Malaysia; possible solutions proposed for reducing the gap.
Siddiqui et al. (2009) | Bangladesh | Audit education can reduce the AEG significantly in Bangladesh.
Hassink et al. (2009) | Netherlands | Three components of the AEG examined and explained in the context of Dutch auditing.
Alwardat (2010) | UK | The AEG exists within the public sector in the UK.
Cohen et al. (2010) | USA | The AEG still exists in the USA and relates to managers’ personality traits.
Lee et al. (2010) | Thailand | The AEG exists in Thailand.
He, J. (2010) | China | Identified the main elements in each component of the AEG in China, with possible solutions.
Oseni and Ehimi (2012) | Nigeria | The nature of the AEG explained with empirical evidence, along with its influence on auditors’ credibility in Nigeria.
Adeyemi and Olowookere (2011) | Nigeria | There is no generally accepted description of the role of auditors.
Saeidi (2012) | Iran | The AEG exists in Iran.


An analysis of the application of Porter’s model by other researchers is shown in Figure 3.19. The majority of these authors used quantitative research instruments, such as questionnaires with a Likert scale. Different statistical methods have been used, for example the Chi-square, Mann-Whitney, Wilcoxon signed-rank, mean, correlation, t-test, and Kruskal-Wallis tests. The questions asked in the questionnaires appeared to vary amongst these investigations. As for sampling strategy, several studies did not clarify the sampling strategy being used, but provided a sample framework. Finally, this comparison suggests that inclusive questions in the questionnaire can make it easier to compare and repeat AEG studies.

Figure 3.19: Application of Porter’s Model
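Of the statistical tests just listed, the Mann-Whitney U test is typical for comparing two interest groups’ ordinal Likert responses. A minimal pure-Python sketch follows; the response data are hypothetical, and the published AEG studies used statistical packages rather than hand-rolled code.

```python
# Mann-Whitney U via direct pairwise comparison: count, over all pairs,
# how often a response in group_a exceeds one in group_b (ties count 0.5).
def mann_whitney_u(group_a, group_b):
    u = 0.0
    for x in group_a:
        for y in group_b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical 5-point Likert responses from two interest groups:
auditors = [5, 4, 5, 3, 4]
users = [2, 3, 2, 1, 3]
u = mann_whitney_u(auditors, users)
print(u)   # → 24.0
```

A U near n1 × n2 (here 25) or near 0 signals a large difference between the two groups’ response distributions, which is how such tests flag a perception gap between interest groups.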


3.2.7 The proposed structure of the CCRAEG definition

As a result of analysing the definition of the AEG using the categories of “subject”, “object” and “gap” (as explained in Section 3.2.4.2), a CCRAEG definition should be constructed in a similar way, in order to present the information for these three categories. First, a clear clarification of the subject allows the investigation of perception gaps amongst interest groups. In this research, the four interest groups are CCRAs, CCRAs’ customers, investors/the public, and regulators. This grouping was generated from the SLR results, which will be discussed in Sections 3.3.2 and 3.3.4.1; a more detailed description of the grouping will be provided in Section 4.3.7.2.1. Second, the object of the CCRAEG will be analysed in Sections 3.3 and 3.4 through the discussion of expectation- and perception-related attributes. Finally, as Porter suggested, the first dimension of the CCRAEG comprises the gap components between expectations and the perceived performance of CCRAs. However, in this research it may be difficult to identify unreasonable expectations by comparing the expectations of CRA beneficiaries from the financial market with those of CRA beneficiaries from the non-financial market. Moreover, this research may make a more significant contribution to knowledge if reasonable expectation, rather than unreasonable expectation, is identified.

Furthermore, with reference to the literature review results in Sections 2.2.1 and 3.4.2, a knowledge gap (or understanding gap) can be found in people’s understanding of credit rating and the credit economy, and it may contribute to the CCRAEG as a main component. As such, the Knowledge Gap, Reasonable Expectation Gap, Regulation Gap and Actual Performance Gap are the gap components of the first dimension of the CCRAEG, presenting the structure of the CCRA expectation-performance gap. This structure will be presented with hypotheses in Section 4.2, and a detailed definition of the CCRAEG will be explained in Section 7.2.1.

3.3 Development of the third dimension from CRA-related literature

An SLR was adopted with mixed-method analysis (Sections 4.3.7.3 and 4.3.8.1) to explore the attributes that should be considered for the development of the CRAEG; this coincides with ‘research objective 2’ in Section 1.3. Some general information about the literature review process is presented in Section 3.3.1. The SLR result of twenty-five empirical CRAEG studies is explained in Section 3.3.2, and an analysis of attributes is provided in Section 3.3.3. Finally, the conceptual framework of the three components of the CRAEG is presented in Section 3.3.4.

3.3.1 SLR approach for CRA-related literature

Because of the extensive amount of CRA-related literature, an SLR was used to review publications concerning expectations, perceptions, views, and beliefs about CRAs. Within fifteen databases (Section 4.3.7.3), fifty articles were accessible from the initial screening, and 424 results were obtained after checking their references and citations (Figure 3.20). Most of these studies were by academics, with thirty-four reports from regulators and five reports from professional bodies.

Figure 3.20: The SLR Strategy of CRA

These studies, which review the issues and problems of CRAs, appeared to focus on three types of dispute: (1) investigation or discussion of rating quality or accuracy from a technical and methodological perspective; (2) examination or dialogue regarding market participants’ understanding of CRAs or their performance; and, (3) debates about regulation. Studies about rating quality and accuracy feature in 40.3% of the 424 documents, so there appears to have been more research on the first topic than on the second or third (27.6% and 32.1%, respectively). Of the 117 articles that focused on people’s understanding of CRAs or examined their perception of CRA performance, only twenty-four documents were empirical investigations of CRAs, whereas the other ninety-two articles were merely discussions or review papers. The precision of the searches (68.6%) and the percentage of inaccessibility (2.8%) show that the coverage and precision of the literature search are appropriate for the SLR of the CRAEG (Section 4.3.7.3).

3.3.2 Previous empirical perception studies

With the exception of AMF (2007a, which reviewed results from their reports in 2005 and 2006), twenty-four articles are closely relevant. These include ten regulator reports (AMF, 2005, 2006, 2007b, 2008, 2009, 2010; BCBS, 2000; SEC, 2008, 2011a, 2012a), ten academic documents (Baker and Mansi, 2002; Blaurock, 2007; Cantor et al., 2007; Cantwell, 1998; Duff and Einig, 2009a, 2009b; Ellis, 1997; Einig, 2008; Mohd, 2011; Radzi, 2012) and four association reports (AFP, 2002, 2004; Duff and Einig, 2007; RAM, 2000).

Ellis (1997) appeared to be the first to conduct investigations of CRAs. Questionnaires were collected from 102 issuers and 205 investors. His results reflected their opinions on: (1) the desirable or necessary number of ratings; (2) unsolicited ratings; (3) rating accuracy; and, (4) CRA performance. He drew two main conclusions from the American market. First, the number of ratings might better be condensed to those “whose opinions are widely followed and trusted by investors”, because this “would be more efficient use of time and corporate resources”. Second, it was found that consistency and accuracy were the most important criteria among investors and issuers; therefore, smaller CRAs should try to establish a good reputation on these criteria (Ellis, 1997, p.17). Cantwell (1998, p.14) investigated and published extensive results on “the steps used by companies to manage their rating agency relationships” within the USA and another fifty countries. Questionnaires were mailed to over 1,800 rated companies. He reported that the survey results demonstrated that issuers felt they had good relationships with CRAs, which had a positive impact on their ratings, and that the time and effort issuers spent on rating activities appeared to have increased between 1996 and 1997.

This study was followed by RAM and the BCBS in 2000. The research task force of the BCBS (2000, p.1) formed a working group in 1999 and adopted a method of collecting publicly available information, with the aim of providing “the available and relevant information on credit ratings in a single document”. However, the BCBS concluded that there were no CCRAs in 2000. This was suggested to be a false statement earlier in Chapter 3.3, as there were CCRAs in 1987 and 1988. This belief might have arisen because of the limitations of their research method, and also because of the ‘Chinese Wall’ and ‘Language Barrier’ between China and the rest of the world at the time. RAM conducted surveys and interviews in eleven Asian countries/cities with institutional investors, regulators and eighteen CRAs (sixteen Asian CRAs and two global CRAs). These Asian countries/cities included Japan, Hong Kong, South Korea, Taiwan, Indonesia, Malaysia, the Philippines, Singapore, Thailand, India, and Pakistan. In their questionnaires and interviews, RAM asked questions about the usefulness, competency and independence of CRAs. They suggested that Asian CRAs should adopt a convergence strategic plan and form a regional alliance. With the exception of the BCBS report in 2000, which provided some basic information about CCRAs, there appeared to be no English-language reports on CCRAs from previous research [with the exception of Kennedy (2008)], although the report from RAM (2000, p.ii) suggested that it would be better to include China as part of their future studies, as “it is a potentially big market”.

These twenty-four studies appeared to reveal distinct results, although it is difficult to compare them with one another due to varying research methods, varied selections of sample groups, diverse location choices, and different research aims and questions. The analysis results are presented in Figure 3.21.

Figure 3.21: Perceptions about CRAs or CRAs’ Performance

Ref | Country | Type | Sampling | Main Result
Ellis (1997) | USA | Academic | Investors and Issuers | Reducing the number of ratings can be beneficial; smaller CRAs should try to build a stronger reputation.
Cantwell (1998) | 51 (global) | Academic | Rated Companies | Issuers devoted significant resources, time and effort to managing their CRA relationships and relevant activities.
Estrella et al. (2000) | 11 (BCBS) and 4 (non-BCBS) | Regulatory | N/A | Parts 1 and 2: factual information collected in a single reference source.
RAM (2000) | 11 (Asia) | Institutional | CRAs, Investors and Regulators | Convergence strategic plan for Asian CRAs, beginning with the formation of a regional caucus.
AFP (2002) | USA | Institutional | Professionals | Treasury and finance professionals were concerned about the quality and timeliness of credit ratings, and believed the SEC should take additional action to improve its oversight of rating agencies and foster greater competition.
Baker and Mansi (2002) | USA | Academic | Investors and Issuers | Issuers and investors have different perceptions of CRAs.
AFP (2004) | USA | Institutional | Investors and Professionals | Financial professionals continued to be concerned about the quality and timeliness of credit ratings and believed the SEC has a key role in promoting competition among credit rating agencies. Survey respondents believed the SEC should require documentation of internal controls and require rating agencies to document and implement policies and procedures.
AMF (2005) | France | Regulatory | Investors, Issuers and Professionals | Provided factual information on the CRA industry in France.
AMF (2006) | France | Regulatory | CRAs and Issuers | Rated companies sometimes cannot understand their ratings.
AMF (2007b) | France | Regulatory | CRAs and Issuers | Part 2: provided factual information about fund rating in France.
Duff and Einig (2007) | UK | Institutional | Investors, Issuers and Professionals | In the survey, reputation, trust and values were ranked as the most important characteristics, with transparency and timeliness also ranked highly. The highest-ranking items focused on integrity, ethical standards and credibility, as well as more competency-based items such as the accuracy of ratings and the qualifications of staff.
Blaurock (2007) | USA, Canada and 9 European | Academic | Academia | Some legal systems have more transparent recognition requirements than others. Regulators used an approach similar to the SEC’s, although it has a damaging effect on competition.
Cantor et al. (2007) | USA and Europe | Academic | Plan Sponsors and Investment Managers | Minimum credit quality guidelines are the dominant motivation, although maximum portfolio and security limits by rating class are also important. These findings apply to both fund managers and plan sponsors, emphasising ratings’ role in the principal-agent context.
AMF (2008) | France | Regulatory | CRAs and Issuers | Provided factual information on the CRA industry in France.
Einig (2008) | UK | Academic | Investors, Issuers and Professionals | The key for CRAs to developing an effective relationship with corporate issuers was to secure the issuers’ trust, maximise opportunities for cooperation, be transparent in their actions, and pay attention to high levels of customer service.
SEC (2008a) | USA | Regulatory | CRAs | A range of issues were identified to improve CRAs’ practices, policies and procedures with respect to rating structured finance securities.
AMF (2009) | France | Regulatory | CRAs and Issuers | Provided factual information on the CRA industry in France, especially on structured finance products.
Duff and Einig (2009a) | UK | Academic | Investors, Issuers and Professionals | Market-based mechanisms, such as reputation, independence and ethical standards and norms, were likely to exert a stronger influence on the quality of ratings than excessive regulation.
Duff and Einig (2009b) | UK | Academic | Investors, Issuers and Professionals | Rating quality includes competence and independence, as well as technical and relationship factors. CRAs were generally highly regarded by credit market participants.
AMF (2010) | France | Regulatory | CRAs and Issuers | Provided factual information on the CRA industry in France, especially on CRAs’ reaction to the crisis.
SEC (2011a) | USA | Regulatory | CRAs | The first annual examination of each of the ten NRSROs; many findings were reported and recommendations made.
Mohd (2011) | India | Academic | CRAs and Issuers | CRAs played a fair and pervasive role in India, and there is a need to let all corporates use credit rating because it is a marketing tool in the global context.
SEC (2012a) | USA | Regulatory | CRAs | The second annual examination of each of the ten NRSROs; many findings were reported and recommendations made.
Radzi (2012) | Malaysia | Academic | Investors, Issuers and Professionals | Market participants believed domestic CRAs are accurate.

First, when compared with the other studies, the BCBS appeared to have gathered different information due to its research method. Questionnaires, interviews and secondary data selection are the three kinds of research approach that were used. Only the BCBS (mainly) used secondary data, and it produced results on factual information rather than on participants’ perceptions. (More detail regarding the appropriateness of research methods will be discussed in Section 4.3.7.2.)

Moreover, these studies relied on different sample groups. With the exception of the BCBS (which did not use sampling strategies), the other twenty-three documents contain data from ten sample groups: (1) investors, issuers and other professionals (AMF, 2005; Duff and Einig, 2007, 2009a, 2009b; Einig, 2008; Radzi, 2012); (2) CRAs and issuers (AMF, 2006, 2007, 2008, 2009, 2010; Mohd, 2011); (3) CRAs (SEC, 2008, 2011, 2012); (4) investors and issuers (Baker and Mansi, 2002; Ellis, 1997); (5) rated companies (Cantwell, 1998); (6) academia (Blaurock, 2007); (7) CRAs, investors and regulators (RAM, 2000); (8) plan sponsors and investment managers (Cantor et al., 2007); (9) investors and professionals (AFP, 2004); and, (10) professionals (AFP, 2002). Therefore, the sample selections used in these studies are not completely inclusive, since they only selected certain interest groups. They tended to focus more on the opinions of investors, issuers and professionals than on those of regulators and CRAs. (More details concerning sample sizes and sample selections will be discussed in Section 4.3.7.1.)

In addition, the selection of regions was wide and varied. The majority of accessible studies were from the USA, France and the UK. Seven reports are USA-based (AFP, 2002, 2004; Baker and Mansi, 2002; Ellis, 1997; SEC, 2008, 2011a, 2012); six reports were from France (AMF, 2005, 2006, 2007, 2008, 2009, 2010); four documents were UK-based (Duff and Einig, 2007, 2009a, 2009b; Einig, 2008); one study was conducted in India (Mohd, 2011); one thesis on Malaysian CRAs was produced (Radzi, 2012); and the other five reports were international studies (BCBS, 2000; Blaurock, 2007; Cantor et al., 2007; Cantwell, 1998; RAM, 2000).

Furthermore, the conclusions and key findings reflected different research aims. Thirteen reports sought to reveal interest groups’ concerns about CRAs (AFP, 2002, 2004; AMF, 2005, 2006, 2007, 2008, 2009, 2010; Blaurock, 2007; SEC, 2008, 2011a, 2012a); nine documents analysed people’s perceptions (Cantor et al., 2007; Cantwell, 1998; Duff and Einig, 2007, 2009a, 2009b; Einig, 2008; Estrella et al., 2000; Mohd, 2011; Radzi, 2012; RAM, 2000); and two studies examined Perception Gaps between interest groups (Baker and Mansi, 2002; Ellis, 1997).


Moreover, different questions were asked of people in the interest groups, even though these researchers had similar research aims. Although the studies of people’s concerns about CRAs contain similar research questions, there is a wide range of issues that had to be identified and discussed. The results reflected these matters differently, especially in the studies of perception analysis and perception differences. For example:

- Ellis (1997) and Baker and Mansi (2002) asked questions about the number of ratings needed, rating accuracy and timeliness. Einig (2008) requested participants’ opinions on their definitions of rating quality, and identified fourteen micro-factors regarding issuer-commitment relationships. Consequently, although results on perception differences can be found in all three studies, the importance of the factors influencing perceptions appeared to differ among them. For example, Cantwell suggested that ‘reputation’ had been the most important influencing factor on perception differences, while Baker and Mansi implied that market participants believed ‘accuracy’ to be the most important.
- As for the studies focused on perception analysis, Cantor et al. (2007) and Cantwell (1998) produced completely different results, because their questions were different: one concerned the use of rating guidelines and the other rating activities. Similarities can be found in the research questions posed by Mohd (2011), Radzi (2012) and RAM (2000) on CRAs’ performance; even though their studies were in different countries, they all obtained positive results on accuracy or on the role of CRAs.

3.3.3 Attributes and topics in the previous empirical studies

This part of the discussion shows how these attributes were collected and analysed for the development of the CRAEG.
Fourteen attributes and six topics were collected for a further thematic analysis (details of the Nvivo analysis can be found in Sections 4.3.7.3 and 4.3.8.1) of the empirical studies in this section. These categories and topics stemmed from the word frequency results in Nvivo 10. They were reviewed and selected according to the results from each study. A summary of these results is presented in Figure 3.22.

Figure 3.22: Attribute Analysis of CRAEG (1)

No. | Attribute of Expectations | Academic | Institutional | Regulatory | No. of References
1 | Communication / Transparency | Cantwell (1998); Duff and Einig (2009a); Radzi (2012) | RAM (2000) | AMF (2005; 2006; 2008; 2009; 2010); SEC (2008; 2011; 2012) | 12
2 | Accuracy / Quality | Cantor et al. (2007); Cantwell (1998); Baker and Mansi (2002); Duff and Einig (2009a); Ellis (1997); Mohd (2011); Radzi (2012) | RAM (2000); AFP (2002; 2004) | AMF (2010); Estrella et al. (2000); SEC (2011; 2012) | 14
3 | CRAs’ Understanding | Cantwell (1998); Mohd (2011) | RAM (2000); AFP (2002) | | 4
4 | Timeliness | Baker and Mansi (2002); Ellis (1997); Radzi (2012) | AFP (2002; 2004) | | 5
5 | Favour of Interest / Conflicts of Interest | Duff and Einig (2009a, 2009b); Einig (2008) | AFP (2002); Duff and Einig (2007) | | 6
6 | Usefulness | | RAM (2000) | | 1
7 | Process and Procedure | Einig (2008); Duff and Einig (2009b) | Duff and Einig (2007) | AMF (2005; 2008; 2010); SEC (2008; 2011; 2012a) | 10
8 | Implementation of Policy | | | SEC (2011; 2012a) | 2
9 | Rating Fee | Cantwell (1998); Mohd (2011) | Duff and Einig (2007) | Estrella et al. (2000) | 4
10 | Competition / Entry Barrier / Oligopoly / Concentration | Blaurock (2007) | AFP (2004); Duff and Einig (2007) | SEC (2012a) | 5
11 | Rating Trigger / Regulatory License / Number and Purpose of Ratings | Blaurock (2007); Baker and Mansi (2002); Cantor et al. (2007); Ellis (1997); Duff and Einig (2009a) | AFP (2004) | AMF (2007) | 8
12 | Unsolicited and Solicited Rating | Ellis (1997); Cantwell (1998) | Duff and Einig (2007) | Estrella et al. (2000) | 4
13 | Rating Method | Duff and Einig (2009a; 2009b); Einig (2008); Radzi (2012) | Duff and Einig (2007) | Estrella et al. (2000); AMF (2009) | 8
14 | CRAs’ Role | Blaurock (2007); Mohd (2011) | | AMF (2009) | 3

Other topics:

No. | Topic | Academic | Institutional | Regulatory | No. of References
15 | Product complexity | | | SEC (2008) | 1
16 | Rating dissemination by issuers | Cantwell (1998) | | | 1
17 | Factual information | Blaurock (2007) | | Estrella et al. (2000) | 2
18 | Needed actions | | AFP (2002); AFP (2004); Duff and Einig (2007) | | 3
19 | Ranking of relevant characteristics | Cantor et al. (2007); Duff and Einig (2009a); Duff and Einig (2009b); Ellis (1997); Einig (2008) | Duff and Einig (2007) | | 6
20 | Performance indicator | Blaurock (2007); Mohd (2011) | | | 2

Note: Duff and Einig (2009a, 2009b) and Einig (2008) are different from Duff and Einig (2007), because the latter is a report published by a professional association and is therefore categorised as institutional, whereas the former (Duff and Einig, 2009a, 2009b; Einig, 2008) are journal papers and a thesis, which are classified as academic.

This section is part of the development of the CRAEG framework, and it explains how the attributes were generated, developed, and reviewed (the Nvivo analysis method will be reviewed in Section 4.3.8.1). These were classified and grouped through the identification of similar themes and meanings found within the previous studies. Academic studies and institutional reports presented results on twelve attributes, while regulatory reports investigated ten of the fourteen attributes. However, ‘usefulness’ was only mentioned in institutional reports, and ‘implementation of policies’ was only investigated in regulatory reports.

3.3.3.1 Attribute 1: Communication / Transparency

Cantwell (1998) appeared to be the first to indicate that issuers expected better communication on methodologies and processes. Investors, issuers and other professionals called for better transparency and communication from both global CRAs and domestic CRAs (Radzi, 2012), as illustrated in Figure 3.23. Issuers seemed to underestimate the importance of communication (because more than half of them treated their preparation for CRAs as a short-term process, except issuers with large borrowings), although most issuers had been “proactive” in attempting to communicate with CRAs ahead of any significant announcements (Cantwell, 1998, p.20).

Like issuers, investors suggested that rating rationales should be more detailed (RAM, 2000). AMF (2009) reported that investors demanded more precise information on two matters: (1) the specific assessment criteria; and, (2) the underlying data. From the regulators’ perspective, some CRAs, especially smaller CRAs, showed weaknesses in public disclosure. For example: (1) some CRAs failed to disclose certain rating methodologies; (2) some CRAs (their names were not revealed) did not disclose some rating methodologies properly; and, (3) certain rating designations for securities might not be characterised (SEC, 2011).

Issuers have sought better explanations in further areas, such as: (1) methodological changes (AMF, 2005), which might have been improved if CRAs had asked for feedback on methodological revisions (AMF, 2010); (2) publications on impact studies, which might also have been enhanced through a wider range of publications provided by CRAs (AMF, 2010); (3) individual ratings, ratios and weights (AMF, 2005), as well as the ratios for a satisfactory standard (AMF, 2010), since issuers did not always seem to understand where the rating opinions came from (Einig, 2008, p.176); (4) the underlying restatements of financial figures (AMF, 2005); (5) differences among methodologies (AMF, 2006); (6) CRAs’ defined uncertainties (AMF, 2006); and, (7) the influence of contractual relationships between clients and CRAs on ratings (AMF, 2006), for example, the type of contractual clause that could trigger early repayment, and details of the consequences resulting from failure to comply with a contractual clause (AMF, 2010).

Figure 3.23: Perception Gaps about Communication

Investors
• Specific assessment criteria
• The underlying data

Issuers
• Methodology changes
• Differences between methodologies
• Individual ratings/ratios/weights/satisfactory standards
• The underlying restatement of financial figures
• CRAs’ uncertainties
• Influence of contractual relationships
• Difference between CRAs’ internal rules and the other codes of conduct
• Opportunities to discuss rating changes and adjustments
• Information on the rating process in domestic CRAs is not as adequate as in global CRAs

Regulators
• Rating methodology
• Rating designation
• Rating dissemination and monitoring
• Public disclosure of the management of certain conflicts of interest
• Public disclosure on the rating process
• Public disclosure on internal procedures
• The number of analysts
• Additional information
• Differences between NRSRO ratings and non-NRSRO ratings
• Whether documents are fully translated from the original

Moreover, issuers had also sought better information on the rating process:

(1) They wanted CRAs to explain the differences between their internal rules and those of other companies, or the code of conduct from regulators (AMF, 2006); (2) They wanted more opportunities to discuss the rating process with CRAs at different stages of the process. For example, issuers wanted the opportunity to review CRAs’ write-ups before publication (Cantwell, 1998), and issuers believed that CRAs should consult with them before introducing changes or adjustments, especially when these would have a major impact on the market or sector (AMF, 2008).


(3) Issuers have complained that information about the rating process is still inadequate within local CRAs compared with mature CRAs, although there have been improvements (AMF, 2009). Both investors and issuers have asked for more detailed public information (AMF, 2009), and regulators have also provided more specific information on the relevant requirements and on CRAs’ implementation of these policies. The SEC (2008) reported that significant aspects of rating processes were not always disclosed by CRAs. In 2011, the SEC recommended that the information on internal procedures needed to be improved, and that some NRSROs should improve their disclosure in an effort to manage certain conflicts of interest, and show more transparency in their dissemination and monitoring of ratings (SEC, 2012a). Each of the smaller NRSROs (A.M. Best, DBRS, Egan-Jones, JCR, Kroll, Morningstar, R&I) was called upon to improve its public statements in several different areas via websites and press releases, including (SEC, 2011a):

(1) misleading information on the description of the committee process;

(2) misleading information on the number of analysts working on the rating;

(3) additional information and analysis posted on websites;

(4) differentiation between the NRSRO rating and ratings from nonNRSROs; and,

(5) clarification of whether documents were fully translated from the original documents which had been written in other languages.

Unlike regulators (which require CRAs to have better communication from the perspective of rating processes and procedures only), researchers interpreted and assessed CRAs’ communication in terms of different elements or attributes. For example, Radzi (2012) listed seven factors to be considered. Similarly, Duff and Einig (2007, p.123) believed that transparency included seven elements, while Einig (2008, p.313) and Duff and Einig (2009b, p.148) indicated eight items.

Moreover, when comparing the 2007 and 2009b lists by Duff and Einig, it was found that items (1) and (2) were not on the list published in 2007; instead, there was a different requisite: “The CRAs adjusts the ratings/outlook if targets are not met.”


Additionally, item (8) from Duff and Einig (2007; 2009b) and Einig (2008) was not included in Radzi’s list.

3.3.3.2 Attribute 2: Accuracy / Quality

Cantwell (1998) appeared to be the first to identify that rating accuracy was considered by issuers to be a criterion with multiple attributes (including competitors’ ratings, their rating by another agency, and publicly available criteria). Cantor (2007) found that fund managers and plan sponsors showed a preference for accuracy over stability, even where it is impossible to improve accuracy without reducing stability. Rating accuracy and rating quality should be closely related, according to their textual meaning. Technically, accuracy is relevant to methodologies, rating changes, inflation, default rates, rating shopping and stability in the quantitative research. However, Cantwell (1998), Duff and Einig (2007; 2009a), Einig (2008), Ellis (1997) and Radzi (2012) interpreted these differently. For example: (1) the use of terms: Einig (2008) and Duff and Einig (2007; 2009b) insisted on using the same fourteen proposed micro-attributes (including ten which were verified), but Duff and Einig (2009b) changed the term ‘responsiveness’ to ‘service quality’; (2) the number of attributes or elements: Cantwell (1998) examined perceptions about the service quality of CRAs through two factors (understanding of issuers’ concerns, and satisfaction with CRAs’ own research). However, Radzi believed that accuracy, timeliness, transparency and rating quality are different criteria to be used to capture market participants’ perceptions, and she endorsed another set of attributes for rating quality and accuracy, aggregated as thirty-two items. Moreover, Ellis provided an additional set of five attributes (Figure 3.24).

Discrepancies also exist in the views expressed by investors and issuers. For example, more issuers gave preference to CRAs’ understanding of issuers, whereas more investors attached greater importance to CRAs’ consistent research and timeliness (Ellis, 1997). Moreover, the Permanent Subcommittee on Investigations (2011, cited by the SEC, 2012a) commented on the matter of inaccurate ratings, and concluded that there were five responsible factors (Figure 3.24). Radzi recommended a list of attributes in accordance with Duff and Einig’s definition (Figure 3.25). This list focused more on technical qualities than on other psychological or expectation-influencing factors, such as Trust, Reputation, and 'Norm and Values', although 'competent' and 'consistency' were not included by Duff and Einig in their lists. It should be noted that 'consistency' is also a crucial attribute in some other investigations, such as those by Cantwell (1998) and Mohd (2011).


Figure 3.24: Perception Gaps about Accuracy (1)

Figure 3.25: Perception Difference about Rating Quality

In addition, Krahnen and Weber (2001) posited fourteen completely different requirements for evaluating the quality of internal bank rating systems: (1) comprehensiveness, for rating all past, current and future clients; (2) completeness, since ratings should be conducted on all current and past clients; (3) complexity, because of the many different rating systems; (4) well-defined probabilities of default; (5) a well-defined relationship (monotonicity) between ratings and expected default frequencies; (6) the degree of fineness in each rating system; (7) reliability of the rating system; (8) the probability of default should not be significantly different from the realised default frequency in back-testing; (9) ratings should contain efficient information; (10) system development should improve over time; (11) past and current rating data should be easily available, with good data management; (12) the rating process should be embedded within the organisation of the credit business to reduce the risk of misrepresentation; (13) the distribution of rating outcomes should be constantly monitored and assisted with random inspections; and, (14) external compliance with rating standards should be continuously monitored by disinterested bodies, or on a random basis.
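Requirement (8) is essentially a statistical back-test. As a minimal illustrative sketch (the portfolio figures below are hypothetical, and a real validation exercise would also need to handle default correlation and multi-year horizons), an exact binomial tail probability can be used to compare an estimated probability of default (PD) with the realised default frequency:

```python
from math import comb

def backtest_pvalue(n, defaults, pd_est):
    """One-sided binomial p-value: the probability of observing at least
    `defaults` defaults among n obligors if the true PD is pd_est."""
    return sum(comb(n, k) * pd_est**k * (1 - pd_est)**(n - k)
               for k in range(defaults, n + 1))

# Hypothetical rating grade: 1,000 obligors with an estimated PD of 2%.
print(backtest_pvalue(1000, 45, 0.02))  # 45 realised defaults: far above expectation
print(backtest_pvalue(1000, 22, 0.02))  # 22 realised defaults: consistent with the PD
```

A small p-value (for example, below 0.05) would indicate that the realised default frequency is significantly higher than the estimated PD, suggesting the grade's PD is understated in the sense of requirement (8).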

Cantwell (1998) concluded that Duff and Phelps, a smaller CRA, was perceived to have the highest quality of research, although it has been found in some other studies that bigger CRAs are more accurate than smaller CRAs (Ellis, 1997; Baker and Mansi, 2002). The overall satisfaction with CRAs’ research was rated highest in the utility sector and lowest in the industrial sector. Duff and Phelps also scored the highest on the quality of their analysis, but ratings by rated companies from different sectors and countries varied. Analysts’ quality as perceived by issuers was better in the utility sector than in the industrial sector, and US issuers gave higher marks than non-US issuers. However, Moody’s experienced the highest level of analyst turnover, according to rated companies. As for meeting schedules, a correlation was found between the meeting location and the level of rating provided by CRAs: “Higher-rated issuers generally tended to host the meetings at their offices, while lower rated issuers more frequently held their meetings at the rating agency offices” (Cantwell, 1998, p.1822). Rating fees are discussed under Attribute 9.

There are at least three factors or attributes affecting both the accuracy and quality of ratings: (i) potential conflicts of interest; (ii) the selection and application of certain methodologies or rating models; and, (iii) the objectivity of the research or the rating criteria. Discrepancies can also be found in people's understanding of accuracy because of other related terms such as usefulness, reliability or credibility. Results in this research area are complicated. Many investigations concerning accuracy or related terms appeared to examine these terms based upon the varied understanding of the general public, without detailed clarification from researchers, and this resulted in different conclusions. For instance: (1) There is a perception difference about the accuracy of CRA reports. Ellis (1997) reported that most investors and issuers perceived CRA ratings to reflect issuers’ creditworthiness most of the time. Baker and Mansi (2002) found that most investors and issuers believed that ratings accurately reflected a firm’s creditworthiness. However, some other professionals maintained that CRA ratings had been inaccurate (AFP, 2002; 2004). (2) There is a perception difference about the accuracy of unsolicited ratings. Ellis (1997) reported that many issuers felt unsolicited ratings were less accurate because such ratings are not sought and paid for in the traditional manner. However, the BCBS (2000, p.12) argued that unsolicited ratings can be treated as “a form of market discipline”, because they carry less conflict of interest than “the hired raters”, who can be "too" generous. Although different attributes can be assigned to accuracy (Figure 3.26, which summarises the results on perception differences among different interest groups), the recorded perceptions of investors and issuers on CRA accuracy appeared to be similar across various studies.

Figure 3.26: Perception Gap about Accuracy / Quality

Academia
• Different attributes or definitions

Investors
• CRAs are independent
• Malaysian CRAs are competent
• Accuracy of Malaysian CRAs is not too bad
• Accuracy of Asian CRAs is "average"
• Many investors think that ‘understanding issuers’ and ‘consistent research’ are the most important responsibilities of CRAs

Issuers
• Competitors’ rating
• The rating by another agency
• Publicly available data
• Unsolicited ratings are less accurate
• CRAs are independent
• Consistency is generally good
• Malaysian CRAs are competent
• Accuracy of Malaysian CRAs is not too bad
• Overall service quality of Duff and Phelps is better than other CRAs
• CRA research quality varies amongst different industries
• The quality of CRA analysis varies amongst different industries and countries
• Moody’s has the highest rate of analyst turnover
• Higher-rated issuers hold their meetings with CRA representatives in their own offices rather than in CRA offices
• Many issuers think that a CRA's understanding of the issuer's needs is the most important requirement, followed by accurate research, objective research, and consistent research

Professionals
• Inaccuracy is one of the causes of unreliable ratings
• Give more preference to accuracy than stability
• Malaysian CRAs are competent
• Malaysian CRAs are independent
• Accuracy of Malaysian CRAs is not too bad

Regulators
• Unsolicited ratings are more accurate
• Factors responsible for inaccuracy: conflict of interest, rating models, rating criteria, application of model, staffing
• A lack of consistency in methodology

CRAs
• Independence of CRAs has improved


For example:

(1) Investors and issuers believe bigger CRAs are more accurate than smaller CRAs. According to investors and issuers in the mid-1990s, Moody’s, S&P, and Duff and Phelps were more accurate than Fitch, Thomson BankWatch and IBCA (Ellis, 1997). In another (slightly more recent) survey, Moody’s and S&P were perceived to be more accurate than Duff and Phelps and Fitch (Baker and Mansi, 2002). (2) The accuracy or credibility of domestic CRAs has generally been ranked as "fair" or "average". RAM (2000) reported that most investors in Asia rated the accuracy of their domestic CRAs as "average", and that these CRAs might lose credibility because of issues related to (a) independence; (b) transparency; (c) accuracy; and, (d) quality of analysis. Similarly, Radzi (2012) suggested that the accuracy of Malaysian CRA research was acceptable. (3) Malaysian CRAs are also perceived to be independent by market participants (Radzi, 2012). AMF (2010) reported that CRAs have been focusing on increasing their independence, and that some CRAs are hiring specially assigned staff or teams to validate methodologies, monitor ratings, and improve the reliability of ratings.

3.3.3.3 Attribute 3: CRAs’ understanding

Many professionals believe that ratings by CRAs are more reflective of the industry in which issuers operate than of an individual company’s performance (AFP, 2002). However, many investors in Asia suggested that the methodologies of domestic CRAs did not capture an industry sector as a whole, while domestic CRAs have a better understanding of local companies because of better access to local information (RAM, 2000). Surprisingly, back in the 1990s, Thomson BankWatch and Duff and Phelps, who received lower rankings for accuracy than Moody’s and S&P, achieved much higher rankings for their understanding of issuers’ concerns (Cantwell, 1998).
In addition, most issuers with unsolicited ratings felt CRAs did understand their concerns (Cantwell, 1998), although many issuers felt that unsolicited ratings were less accurate (Ellis, 1997). Comparisons of the perceptions of issuers, investors and professionals are presented in Figure 3.27.

Figure 3.27: Perception Gap about CRAs’ Understanding

Issuers
• Thomson BankWatch and Duff and Phelps understand issuers’ concerns much better than Moody’s and S&P
• CRAs did understand the concerns of issuers who received unsolicited ratings

Investors
• Domestic CRAs have a better understanding of local companies than global CRAs
• Methodologies of domestic CRAs cannot capture an industry sector as a whole

Professionals
• Ratings are more reflective of the industry than of companies’ performance


3.3.3.4 Attribute 4: Timeliness

From 2002 to 2004, there was an increase in the percentage of professionals who strongly agreed that CRAs are timely. Even so, many still believed that CRAs were not timely (AFP, 2002; 2004). Compared with issuers, investors held more negative views (Baker and Mansi, 2002; Ellis, 1997). In addition, both investors and issuers felt it was not important for ratings to be updated to reflect marginal changes in the financial market. However, unlike issuers, investors preferred ratings to be updated to reflect all relevant information, even where sudden changes could occur within a year (Ellis, 1997). In other words, investors did not expect updates for small changes; however, if a change was not small and occurred within one year, they preferred to have all the relevant information. The perceptions of issuers, investors and professionals are presented in Figure 3.28.

Figure 3.28: Perception Gap about Timeliness of CRAs

Issuers
• They are timely
• Do not need to reflect small changes
• Do not need to update ratings if there are sudden changes (reversals) within a year

Investors
• Not (always) timely
• Do not need to reflect small changes
• Should be updated even if there will be a change (reversal) within a year

Professionals
• Many professionals think CRAs are not timely

Duff and Einig (2007) interpreted timeliness as comprising four items. Radzi (2012) interpreted timeliness in more detail, from eight perspectives, and reported that most investors, issuers and professionals believed their domestic CRAs in Malaysia were not timely. These eight perspectives are listed below:

• Responds quickly to changes in a firm’s credit condition
• Responds quickly to changes in economic conditions
• Ratings are completely up-to-date
• Does not make changes simply on economic cyclical considerations
• Upgrades and downgrades in a timely manner
• Provides early signals
• Provides early warnings
• Reviews regularly

3.3.3.5 Attribute 5: Favour of interest

Duff and Einig (2007; 2009b) acknowledged that CRAs' preference towards investors or issuers is one attribute in a hypothesised model for evaluating CRA rating quality. However, they rejected this attribute, along with three other attributes (expertise, service portfolio and timeliness), in an effort to achieve better internal consistency in their model. AFP (2002) reported that relatively few professionals believed CRAs were more favourable towards investors than towards issuers.


Moreover, among fund rating agencies, CRAs’ strategies continued to focus on widening the scope of products, with less emphasis on improving services or quality for investors and issuers (AMF, 2007b).

3.3.3.6 Attribute 6: Usefulness / predictive ability

Only RAM (2000) has considered the perceived usefulness of ratings, noting that most investors believe the usefulness of ratings is "good". Moreover, different understandings of the “usefulness” of ratings were provided in the literature (Section 3.3.4.3).

3.3.3.7 Attribute 7: Process / Procedure

It appears that regulators in different countries have diverse perceptions about CRA processes and procedures. In France, CRAs apply similar rating processes, although they use different rules of conduct and management procedures (AMF, 2005; 2008). However, the SEC (2008) reported that all the CRAs in the USA have various practices, policies and procedures. CRAs in France had developed a self-regulatory system by 2008 (AMF, 2008), but their standards may still have been inadequate compared with those of global CRAs. For example, not every French CRA had established the position of Designated Compliance Officer (DCO) by 2010 (AMF, 2010).

The SEC (2008; 2011; 2012) provided a detailed investigation of the relevant issues, and indicated shortcomings in eleven areas:

1) Lack of policies and procedures for the transparency of rating dissemination;
2) Lack of policies and procedures for the transparency of the monitoring of ratings;
3) Inconsistencies and weaknesses in ethical policies;
4) Weakness of the dual role of oversight committee members;
5) The need to ensure stability and clarity of the role, responsibilities and compensation policies of the DCO;
6) The requirement of Section 15E(j)(3) should be fully addressed in the complaint policies and procedures (DCOs have duties to establish procedures for the receipt, retention and treatment of complaints);
7) Policies and procedures for the management and disclosure of certain conflicts of interest need to be more specific (some NRSROs have shown weaknesses in their policies and procedures regarding employees' own securities);
8) The internal supervisory control structure needed to be improved;
9) The surveillance process needed to be improved;
10) All NRSROs showed weaknesses in record retention, rating actions and committee procedures; and,
11) Documentation of policies and procedures needed to be improved.


However, academics and associations have interpreted the term ‘Internal Process’ as concerning training, recruitment and staffing issues. Einig (2008, p.310), Duff and Einig (2007, p.120) and Duff and Einig (2009b, p.148) defined it as follows: (1) “CRA staff undertake regular professional development”; (2) “the CRA employs well-qualified and educated staff”; (3) “the CRA regularly evaluate the performance of its staff”; (4) “CRA staff have an appropriate workload”; and, (5) “the CRA deploys staff with adequate experience”.

3.3.3.8 Attribute 8: Implementation of policies

The SEC (2011; 2012) studied the implementation of policies, and reported that all examined CRAs in the USA, whether large or small, failed to follow rating procedures such as:

1) Committee reviews of rating actions;
2) Maintaining documentation with respect to each rating action;
3) Complaint policies and procedures;
4) Methodologies for certain ratings;
5) Meeting certain requirements in Section 15E(t) with regard to corporate governance, organisation, and management of conflicts of interest;
6) Actively exercising their oversight duties; and,
7) Re-examining past policies and procedures for record keeping and post-employment.

3.3.3.9 Attribute 9: Rating fee

Cantwell (1998) revealed that there was “a clear correlation” between rating levels and the size of the rating fees paid by issuers. Moreover, issuers' attitudes to rating fees changed over the period covered by that report. Previously, more issuers preferred fees “...based on a fixed annual fee plus service charge for each issue”, but after 1996, more issuers (50%) wanted fees on a “fixed annual basis only”. Furthermore, the BCBS (2000) reported that regional CRAs obtained their fees from subscribers paying for rating information, whereas large CRAs charged fees to rated entities in exchange for issuing a rating. However, according to the survey results of Mohd (2011), CRAs felt that the less developed domestic debt market was the cause of the development gap between local CRAs and international CRAs. Mohd also mentioned that many SMEs still cannot afford to pay for a rating.

3.3.3.10 Attribute 10: Competition / Entry barrier / Oligopoly / Concentration

The excessive profits of some CRAs have raised suspicions about a lack of competition (Duff and Einig, 2007), leading to problems in the CRA industry (Blaurock, 2007). Duff and Einig (2007) noted that various commentators confirmed the lack of competition by citing high entry barriers and excessive profitability; for example, rating payments increased by at least 11% from 2001 to 2004 (AFP, 2004). Commentators in the SEC (2012) studies criticised the new regulation, Section 15E(w), and noted that competition could be improved if new entrants were given a chance to build a reputation by producing a track record, to counter ‘stickiness’ (i.e. market participants choosing to approach more familiar and established CRAs).

3.3.3.11 Attribute 11: Rating trigger / number and purpose of ratings

‘Rating triggers’ was one of the concerns raised in a survey (with eighty-one questions) sent to the National Committees of the Academy in nine countries: Belgium, Canada, France, Greece, Italy, the Netherlands, Poland, Switzerland, and the USA (Blaurock, 2007). Although this was not specifically linked with CRA activities, the admissibility and transparency of the contractual clauses linked to rating triggers are believed to magnify the effect of rating changes.

Generally, there are commercial reasons, internal management support-based reasons and regulatory incentives for seeking a rating (AMF, 2007). 87% of issuers indicated that their credit providers required them to obtain or maintain a rating from at least one of the four NRSROs (AFP, 2004). Most issuers obtain two or three ratings, but most investors require only one rating (Ellis, 1997). Similarly, Baker and Mansi (2002) mentioned that issuers use multiple ratings, whereas investors use two ratings at most. This might be because issuers obtain ratings to gain access to the capital market, whilst investors require ratings to assist with their investment decisions. An additional rating is usually used when a smaller CRA specialises in a particular market or region. Three factors were indicated by issuers as issues to be considered when selecting a CRA: market concentration, historical ongoing commitment relationships, and personal experience (Duff and Einig, 2009a). However, ratings are used differently within various industries. For instance, in the fund rating industry, both plan sponsors and fund managers use ratings mostly for setting minimum credit quality guidelines for bond purchases. Fund managers use ratings largely because their clients require them to do so, and rather less because regulators force them to, and plan sponsors are also less concerned with client guidelines. Moreover, preferences vary across regions. European plan sponsors are much less likely to use below-investment-grade rating guidelines than their US counterparts. In addition, fund managers and plan sponsors in Europe are more likely to use ratings to determine portfolio objectives or investment strategies (Cantor et al., 2007).


3.3.3.12 Attribute 12: Unsolicited rating and solicited rating

Estrella et al. (2000) noted that ‘solicited ratings’ generate issues related to conflicts of interest, because a rating at the lower end of the scale gives the issuer little incentive to pay the rating fee. Although most issuers in the US felt that unsolicited ratings are as accurate as solicited ratings, many of them indicated that these ratings might be less accurate than ratings sought and paid for in the traditional manner (Ellis, 1997). According to Cantwell (1998), the majority of issuers in the US and in 50 foreign countries who sought solicited ratings felt that CRAs did understand their concerns. However, most issuers showed no ‘trust’ in unsolicited ratings, and CRAs believed that unsolicited ratings were being provided by smaller CRAs in an effort to increase their presence in the market (Duff and Einig, 2007). Moreover, CRAs “…may issue unsolicited ratings…” with a negative outlook “…to force issuers to pay for ratings that they did not request” (Bai, 2010a, p.264); good firms usually use solicited ratings, whereas low quality tends to be revealed through unsolicited ratings (Byoun and Shin, 2012). An example is the case of Jefferson County School District No. R-1 v. Moody’s Investors Service, Inc. On the other hand, the BCBS (2000) observed that the size of a CRA did not correlate with its proportion of unsolicited ratings, and Roy (2006) found that unsolicited ratings tend to be lower and more conservative than solicited ratings, because the former use public information only while the latter use both public and non-public information.

3.3.3.13 Attribute 13: Rating method

The rating models and schemes used by the largest companies tend to dominate the CRA market, and only some smaller CRAs use different or simplified methods. Only two smaller agencies were reported by Estrella et al. (2000) as using a calculative method (the derivation of an explicit probability of default), whereas other agencies’ rating assessments were based on the relative likelihood of default (Estrella et al., 2000). However, in a survey by the AMF in 2009, market participants felt that all of these methods are hard to apply with due diligence.

3.3.3.14 Attribute 14: CRAs’ role

According to the AMF (2009, p.4), “Because of their central role in the structured product market, CRAs were seen to be partly responsible for the excess and failings that culminated in the subprime crisis.” Nevertheless, Blaurock (2007) confirmed the significance of the CRAs’ existence, although it is decisively reinforced by the regulators. Moreover, Mohd (2011) added that CRAs perceive themselves as additional information providers to the market.


3.3.3.15 Another six attributes

Attribute 15: Product complexity

The SEC (2008) reported that some CRAs appeared to struggle with the rising complexity of residential mortgage-backed securities and collateralised debt obligations. This can be an important attribute because product complexity is relevant to rating quality and accuracy (Section 3.3.4.3).

Attributes 16 and 17: Industry information and Rating dissemination

In addition to discussing the market share, background and scope of products of each CRA in their studies, some other information on ‘rating dissemination by rated companies’ and ‘rating methods’ used by CRAs was revealed. For example, according to Cantwell (1998, p.20), most issuers “buried” their ratings in their annual reports, and only 19% of them presented ratings “prominently”. Some issuers include ratings as part of press releases or websites (in company reports, brochures, debt issues, and investor presentations).

Attribute 18: The job of the regulators / Enforcement

CRAs have been classified into three types of agencies: national agencies, regionally targeted agencies, and global agencies; this is especially evident in Sweden (Estrella et al., 2000). Each country has its own definition of ‘recognised’ CRAs, although Blaurock (2007) did not find special rules and policies for CRAs except in France and the USA.

In the USA, more than half of surveyed professionals believed that the SEC should identify all acceptable CRAs, and 90% of respondents felt that the SEC should impose greater supervision on CRAs (AFP, 2002). They believed that the SEC should (i) encourage competition; (ii) establish transparent communication on the criteria for recognising and reviewing CRAs; and, (iii) require CRAs to document and implement policies and procedures to prevent the disclosure of confidential information (AFP, 2004).

Attribute 19: Ranking of relevant characteristics / Perception difference

Cantor et al. (2007) showed that investment managers and plan sponsors preferred accuracy over stability. However, others, such as Duff and Einig (2007; 2009b), Einig (2008), Ellis (1997), and Radzi (2012), did not consider stability to be a rating quality attribute or relevant characteristic. Although it is difficult to compare results amongst these studies, because they assign different definitions or items to each factor, their results are examined and analysed below.


Figure 3.29: Perception Gap about Ranking of Factors (1)

Factor                 Issuers   Investors   Others   Biggest differences in mean
                                                      amongst each group
Reputation                1          1          1              11
Trust                     2          3          2               3
Values                    3          2          3               6
Timeliness                4          6          5              10
Transparency              5          4          4               8
Expertise                 6          7          6               9
Methodology               7          9          7               6
Issuer orientation        8         13         10               1
Co-operation              9          8         11               8
Investor orientation     10          5          8               2
Independence             11         11          9               5
Responsiveness           12         12         13               4
Internal Process         13         10         12               7
Service Portfolio        14         14         14               7

These data were re-sorted according to the results from Duff and Einig (2007).

Duff and Einig (2007) have shown significant perception differences, which are sorted and presented in Figure 3.29. Clearly, the largest perception differences among interest groups were seen for issuer orientation, investor orientation, trust, responsiveness and independence (more so than for the other factors). Ellis (1997) summarised results on issuers' and investors' perceptions of the importance of factors with regard to CRA rating accuracy. Instead of calculating the mean of the rankings, he summed the percentages of each group who chose certain factors as the single most important factor. Ellis suggested that investors and issuers have greater perception differences on the importance of understanding issuers, the consistency of CRA research and its timeliness than on other factors such as the accuracy of CRA research and the objectivity of their research (Figure 3.30).
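The group-comparison logic behind Figure 3.29 can be sketched in a few lines. The importance scores below are hypothetical stand-ins, used only to show how a Perception Gap between interest groups is computed and ranked; they are not Duff and Einig's (2007) actual data.

```python
# Hypothetical mean importance scores (lower = more important) given by two
# interest groups to three rating-quality factors; the numbers are illustrative.
issuer_means   = {"trust": 2.1, "timeliness": 3.4, "independence": 5.0}
investor_means = {"trust": 4.6, "timeliness": 3.1, "independence": 2.2}

# A Perception Gap for a factor is the difference between the groups' means;
# sorting by its absolute size shows where the groups disagree most.
gaps = {f: abs(issuer_means[f] - investor_means[f]) for f in issuer_means}
ranked = sorted(gaps, key=gaps.get, reverse=True)
```

With these illustrative numbers, the widest gap is on independence, mirroring the way Figure 3.29 sorts factors by the biggest difference in mean amongst the groups.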

Figure 3.30: Perception Gaps about Ranking of Factors (2)

Factors               Ranking of percentage   Ranking of percentage   Biggest differences in
                      of Issuers              of Investors            percentage between them
Accurate Research             2                       2                         5
Objective Research            3                       4                         4
Timely                        5                       3                         2
Understand Issuers            1                       1                         1
Consistent Research           4                       1                         2
Others                        6                       5                         3

These data were re-sorted according to the results from Ellis (1997)

Radzi (2012) studied the perceptions of CRA performance held by professional bodies, banks, issuers and investors. Their views on the competence of domestic CRAs were


“not significantly different” from one another (p.394), and “no statistically significant result emerged from the test” on differences in their perceptions of rating accuracy (p.382); similarities can also be found in their opinions about the transparency of CRAs. However, there were “significant differences in [their] perceptions” of the timeliness of CRA ratings (p.385) and of the independence of CRAs. These comparisons could not be generalised, as the chi-square results of each item (32 items for four attributes) were different.

Attribute 20: Performance indicators
Although perceptions of CRAs’ performance or rating quality have been examined through different sets of indicators, as discussed above, CRAs have opined that their performance should be judged according to the acceptability of their ratings, and with respect to the performance of the industry that they are rating (Mohd, 2011). Blaurock (2007) also mentioned that academics agreed that market acceptance should be applied as an indicator of CRA performance. Many proposals for the evaluation of CRAs’ performance have been suggested in the literature (Section 3.3.4.6).

3.3.4 NVivo analysis of gap elements through CRA-relevant literature
424 documents addressing perceptions or understanding of CRAs or relevant issues were accessed. Word frequency can show the authors’ interest in, or preferences for, different topics. Associations and relationships amongst the identified attributes (themes) of the CRAEG are explained with an analysis of each cluster of attributes. Cluster analysis is the rearranging and organising of themes into clusters of themes that share similar issues; this is a part of thematic analysis, according to Attride-Stirling (2001).
However, a full “cluster analysis” of the structure of attributes generated in NVivo would be too complicated to present in this thesis, as it visualises every relationship amongst all nodes highlighted in the coding process of every publication, with many trivial circles and lines crowded together in a single picture. As such, the cluster analysis in this section was conducted manually according to the main themes revealed in Section 3.3.3.

3.3.4.1 Word frequency analysis for perception differences
Although NVivo coding and analysis can be performed manually on both words and images, ‘Query’ can only run on MS Word or Adobe PDF documents that are not in an image format. Of the 424 files, twenty-one documents were image-based. These included those by: the SEC (2011); Blume et al. (1998); Cantor and Packer (1995; 1997); Coskun (2008); Elbannan (2008); Friedland (2009); Gunther (2002); Haque et al. (1996); Ho and Rao (2011); Kose et al. (2010); Johnson (1999); Livingston and


Zhou (2010); White (2007); Prysock (2006); Rubinfeld (1972); Vaaler and McNamara (2004); Tanthanongsakkun and Treepongkaruna (2008); Sagner (2003); Schawarcz (2002); and Smick and Posen (2008). Two documents were not available in digital copy: Cantwell (1998) and Cantor (2004). Subsequently, 401 documents were examined in NVivo using a word frequency search. The Word Cloud in NVivo 10 can present “up to 100 words in varying font size, where frequently occurring words are in larger fonts”. Of these, 364 documents (326 academic papers, thirty-three documents by regulators and five documents by associations) were examined for the word clouds. The word cloud of each group is presented in Figure 3.31. These figures show that the frequency patterns were completely different, although topics relevant to ‘information’ may have been amongst the favourite topics of all three groups.

Figure 3.31: Word Clouds - Academia, Associations and Regulators
[Three word-cloud images, one for each interest group.]

In order to highlight preferred discussion topics or word choices by academia, regulators and associations, irrelevant words needed to be filtered out. Test results in NVivo 10 show the number of occurrences of every word and its weighted frequency, i.e. “the frequency of the word relative to the total words counted”. For example, in the search of academic sources, ‘also’, ‘see’ and ‘one’ did not carry any important meaning in the discussion, but they were among the twenty-five most frequently used words, with weighted percentages of 0.26%, 0.23% and 0.22% respectively. Moreover, these three words showed higher weightings than ‘model’, ‘investment’ and ‘capital’ (each 0.20%), despite being irrelevant to the discussion as attributes or topics. Therefore, the words used for frequency comparison had to be selected manually, by checking the relevance of the words that appeared over 1000 times against the SLR results for the fourteen attributes and five topics in Section 3.3.3. Twenty-nine words were chosen; fourteen of them were among the 1000 most frequent words of all three interest groups. Fifteen relevant words did not appear in the most frequent word list of one or more interest groups:


(1) ‘implement’, ‘credibility’, ‘reliability’, ‘process’, ‘cooperation’, ‘acceptance’, ‘stability’, ‘timeliness’, ‘competent’, ‘communication’ and ‘objective’ were not in the list from academic sources;
(2) ‘accountability’, ‘credibility’, ‘reliability’, ‘efficiency’, ‘expertise’, ‘cooperation’, ‘acceptance’, ‘gatekeeper’, ‘communication’ and ‘reputation’ did not occur in the list of sources by regulators;
(3) ‘creditworthiness’, ‘appropriate’, ‘stability’, ‘competent’, ‘gatekeeper’ and ‘reputation’ were not on the list from sources by associations.

Figure 3.32: Word Frequency of Each Popular Word
[Bar chart comparing the weighted frequencies (%) of the shared words (Quality, Service, Process, Competitive, Accuracy, Performance, Conflict, Method, Creditworthiness, Appropriate, Staff, Monitor, Independent, Transparent and Consistent) across academic, regulator and association sources.]


When comparing the weighted percentages of the fourteen words amongst the three interest groups, the percentages of ‘conflict’ and ‘staff’ were much higher in regulators’ documents than in documents from associations and academics, with differences of at least 0.13% and 0.16% respectively. Regulators mentioned ‘performance’ more frequently than academics (by 0.1%). However, ‘quality’ and ‘accuracy’ were mentioned less often in academic and regulators’ documents than in associations’ reports (with differences of 0.21% and 0.13%). ‘Process’, ‘method’ and ‘service’ were also mentioned less often in academic documents than in regulators’ and associations’ reports (with differences of 0.53%, 0.48%, 0.18% and 0.11%).
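The ‘weighted frequency’ measure used throughout this comparison (occurrences of a word relative to the total words counted, expressed as a percentage) can be reproduced outside NVivo. A minimal Python sketch follows; the two tiny corpora are made-up stand-ins for the interest groups’ documents, used only to illustrate the calculation.

```python
import re
from collections import Counter

def weighted_frequency(text, focus_words):
    """Occurrences of each focus word relative to total words, as a percentage
    (the definition NVivo 10 reports for its word-frequency query)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return {w: 100.0 * counts[w] / total for w in focus_words}

# Illustrative stand-ins for the academic and regulator corpora.
academic  = "rating quality depends on the rating process and the rating method"
regulator = "conflict of interest and staff conduct shape rating performance"

words = ["quality", "conflict", "staff"]
academic_wf  = weighted_frequency(academic, words)
regulator_wf = weighted_frequency(regulator, words)

# Differences in weighted frequency between groups, as compared in the text.
differences = {w: regulator_wf[w] - academic_wf[w] for w in words}
```

In the full study the same comparison would also need the derivatives and synonyms of each word to be grouped together, as noted for Figures 3.32 and 3.33.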

Figure 3.33: Word Frequency of Each Topic
[Bar charts of weighted word frequencies (%), sorted for each interest group (Academic, Regulator, Association), covering topics such as quality, process, service, method, conflict, competitive, accurate, timeliness, independent, credibility, reputation, performance, implement, transparent, consistent, monitor and staff.]

It is clear that the frequencies of these words varied amongst documents produced by each interest group. Moreover, having sorted the order of the frequency of each word


from the highest to the lowest, it appears that these topics were given different priorities in the discussion by each interest group (Figure 3.33):
(1) In the academic and association groups, ‘process’, ‘service’ and ‘quality’ appeared much more often than the other topics, while regulators also used the words ‘conflict’ and ‘method’ regularly;
(2) Associations used the words ‘timeliness’, ‘independent’ and ‘credibility’ more frequently than the other two interest groups;
(3) Compared with the list from academic research, ‘accuracy’, ‘competitive’ and ‘reputation’ appeared less frequently in regulators’ reports;
(4) Regulators used ‘consistency’ less frequently than the other topics or attributes.
(It should be noted that the weighted frequency of a word in Figures 3.32 and 3.33 represents the total weighted frequency of that word and its derivatives and synonyms. For example, the derivatives and synonyms of ‘method’ include methods, methodology and methodologies.)

3.3.4.2 Associations of Attributes
539 nodes were created in the NVivo coding process of the 424 articles. Besides the twenty attributes already mentioned in Section 3.3.3, another forty-five categories were selected as topics for discussion analysis. The collection method for these forty-five attributes of the CRAEG is the same as the method used for the attributes of the AEG in Section 3.2.3 and for the twenty attributes from the CRA empirical research in Section 3.3.3. These themes were identified and selected through a word frequency list generated in NVivo. As such, some of them may have similar meanings or clear relevance, such as Credibility (Attribute 24), Accountability (Attribute 21), Usefulness (Attribute 6), and Quality or Accuracy (Attribute 2), because these terms have broad textual meanings. The thematic analysis method used in this part of the research is explained in Section 4.3.8.1; this method was used to find the association relationships amongst attributes. Figure 3.34 shows which, and how many, attributes are associated with each of the sixty-five attributes. For example, as discussed in Section 3.3.3, Communication / Transparency (Attribute 1) is closely relevant to Rating Quality or Accuracy (Attribute 2), as expected by investors, issuers and professionals. Moreover, the dissemination of ratings through rating reports and websites (Attribute 17) was also investigated, and is one part of CRAs’ communication with the public. Transparency was considered an attribute of rating quality, and its importance amongst another thirteen attributes (Attribute 19) was evaluated according to perceptions from the market participants


(Duff and Einig, 2007; Einig, 2008; Duff and Einig, 2009b). This attribute is also relevant to expectations about regulators (Attribute 18), since the AFP (2004) reported that the SEC should establish transparent communication on the criteria for recognising and reviewing CRAs, and require CRAs to document and implement policies and procedures to prevent disclosure of confidential information. Finally, one of the main roles of CRAs (Attribute 14) in the financial market is to reduce the information gap in order to enhance information transparency and communication (Attribute 1) amongst different groups of market participants (Dittrich, 2007). As such, Attribute 1 is closely relevant to Attributes 2, 14, 17, 18, 19 and 20.

Figure 3.34: Attribute Analysis of CRAEG
(each row gives the attribute number and name, its associated attributes, and, in brackets, the number of associated attributes)

Attribute 1: Communication / Transparency: 2, 14, 17, 18, 19, 20 (6)
Attribute 2: Accuracy / Quality: 1, 3, 4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 40, 42, 43, 44, 45, 51, 54, 63 (41)
Attribute 3: CRAs' Understanding: 2, 19, 22 (3)
Attribute 4: Timeliness: 2, 6, 10, 19, 34 (5)
Attribute 5: Favour of interest / Conflicts of interest: 2, 19 (2)
Attribute 6: Usefulness / Market acceptability / Predictive ability: 2, 4, 14, 20, 21, 24, 31 (7)
Attribute 7: Process and procedure: 2, 18, 19, 20 (4)
Attribute 8: Implementation of policy: 2, 20, 39 (3)
Attribute 9: Rating fee: 2, 10, 20, 48, 53 (5)
Attribute 10: Competition / Entry barrier / Oligopoly / Concentration: 2, 4, 9, 18, 20, 22, 26, 29, 34, 43, 44, 45, 51, 55 (14)
Attribute 11: Rating trigger / Regulatory licence / Number and purpose of ratings: 2 (1)
Attribute 12: Unsolicited and solicited rating: 1, 2 (2)
Attribute 13: Rating method: 2, 19, 20, 26 (4)
Attribute 14: CRAs' role: 1, 2, 6, 14, 38, 54 (6)
Attribute 15: Product complexity: 2, 29 (2)
Attribute 16: Rating dissemination by issuers: 17, 47, 55, 56 (4)
Attribute 17: Industrial information / Report content: 1, 16 (2)
Attribute 18: Needed actions: 2, 10, 20, 23, 29, 33, 34, 43, 44, 45, 46, 48, 53, 64 (14)
Attribute 19: Ranking of relevant characteristics / Perception gaps: 1, 2, 3, 4, 5, 7, 13, 23, 27, 34, 35, 45 (12)
Attribute 20: Performance indicator: 1, 2, 6, 7, 8, 9, 10, 13, 18, 19, 21, 22, 25, 30, 31, 48, 53, 54, 62, 63 (20)
Attribute 21: Accountability: 2, 6, 20, 24, 31 (5)
Attribute 22: Conflicts of interest / Independence: 3, 10, 20, 30, 36, 47, 49, 50, 51, 52, 53, 59 (12)
Attribute 23: Consistency: 2, 18 (2)
Attribute 24: Credibility: 2, 6, 21, 31 (4)
Attribute 25: Completeness and comprehensiveness of data: 2, 20 (2)
Attribute 26: Firm size: 2, 10, 13 (3)
Attribute 27: Service portfolio: 2 (1)
Attribute 28: Rating arbitrage: 2 (1)
Attribute 29: Rating shopping / Rating inflation / drift: 2, 10, 15, 18, 44 (5)
Attribute 30: Incentive: 2, 20, 22 (3)
Attribute 31: Reliability / Trust / Integrity: 2, 6, 21, 24 (4)
Attribute 32: Public scrutiny: 2, 34 (2)
Attribute 33: Recruitment: 2, 18 (2)
Attribute 34: Reputation: 2, 4, 10, 18, 19, 32 (6)
Attribute 35: Resources / Competence: 2 (1)
Attribute 36: Behavioural economics: 2, 53 (2)
Attribute 37: Responsibilities / Liabilities: 2, 18 (2)
Attribute 38: Understanding problem: 2, 14 (2)
Attribute 39: Language barrier: 8 (1)
Attribute 40: Rating game: 2 (1)
Attribute 41: Relationships / Two-sided market: 14 (1)
Attribute 42: Rating effect: 2 (1)
Attribute 43: Regulation effect: 2, 10, 18 (3)
Attribute 44: Bias: 2, 10, 18, 29 (4)
Attribute 45: Standard: 2, 10, 18 (3)
Attribute 46: Confidential information: 18, 64, 65 (3)
Attribute 47: Hard principle (CRARA 2006): 16, 18, 22, 53 (4)
Attribute 48: Feasibility: 1, 2, 9, 18, 20, 22, 53, 54, 55, 56, 57 (11)
Attribute 49: First amendment: 22, 53, 56 (3)
Attribute 50: Ancillary service: 18, 53, 64, 65 (4)
Attribute 51: Efficiency: 10, 22 (2)
Attribute 52: Soft principle (principle-based): 10, 18, 22 (3)
Attribute 53: Proposed model: 9, 18, 20, 22, 36, 47, 48, 49, 50, 58, 59, 60, 61, 62 (14)
Attribute 54: Moral hazard: 2, 20, 48 (3)
Attribute 55: IOSCO: 10, 16, 48 (3)
Attribute 56: The need for regulation / self-regulation: 16, 48, 49 (3)
Attribute 57: Macro and micro prudential: 48 (1)
Attribute 58: Fiduciaries' breach: 53 (1)
Attribute 59: Government involvement: 22, 53 (2)
Attribute 60: Disclose or disgorge: 53 (1)
Attribute 61: Handicap: 53 (1)
Attribute 62: Back-tested results: 2, 20, 53 (3)
Attribute 63: System development / Research agenda: 2, 20 (2)
Attribute 64: DCO: 18, 46, 50 (3)
Attribute 65: Tying / Tipping / Notching: 46, 50 (2)
Some of the main relationships amongst the topics have been manually drawn and are presented in Figure 3.35 as a cluster map, according to the NVivo analysis results for the sixty-five attributes stated in Figure 3.34.

Figure 3.35: Cluster Map of Attributes
[Manually drawn map linking the sixty-five attributes into clusters.]


This cluster map shows several clusters, such as those around Attributes 2, 10, 18, 19, 20, 50 and 53. These clusters were created from the coding of the attributes: attributes that had been coded similarly, or that were related, were clustered together. The map visually reflects the results of Figure 3.34, with lines demonstrating the associations, i.e. the relevance between keywords. Attribute 2 forms the main cluster, with the largest number of associated attributes (forty-one, as shown in Figure 3.34).
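The clustering logic can be illustrated programmatically: treating Figure 3.34 as an undirected graph and ranking nodes by degree surfaces the cluster centres. A minimal sketch using only a handful of rows from the table (the full sixty-five-row mapping would work the same way):

```python
# A few rows from Figure 3.34: attribute number -> its associated attributes.
associations = {
    3: {2, 19, 22},         # CRAs' Understanding
    4: {2, 6, 10, 19, 34},  # Timeliness
    5: {2, 19},             # Favour of interest / Conflicts of interest
    15: {2, 29},            # Product complexity
    23: {2, 18},            # Consistency
    35: {2},                # Resources / Competence
}

# Build an undirected adjacency map so that every association runs both ways.
graph = {}
for node, neighbours in associations.items():
    for other in neighbours:
        graph.setdefault(node, set()).add(other)
        graph.setdefault(other, set()).add(node)

# Cluster centres are the attributes with the most associations (highest degree).
centres = sorted(graph, key=lambda n: len(graph[n]), reverse=True)
```

Even on these six rows, Attribute 2 (Accuracy / Quality) emerges as the densest node, consistent with its forty-one associations in the full table.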

The clusters centred on Attributes 2, 10, 14 and 18 were selected as the review structure in order to discuss all sixty-five attributes. The clusters around Attributes 19, 20, 50 and 53 were not selected as part of the review structure, because the discussion of the clusters of Attributes 2 and 18 covers the relevant topics of these attributes. For example, the relevance of Attribute 19 to Attribute 2 was revealed in Section 3.3.3.15: different sets of ranking factors related to accuracy or quality were used in previous investigations of participants' perceptions of CRAs. In Section 3.3.3.15, the relevance of Performance Indicator (Attribute 20) to Ranking of Characteristics (Attribute 19) and Market Acceptability (Attribute 6) was also mentioned. Moreover, Attribute 20 is associated with the job of the regulators (Attribute 18); this is revealed in Section 3.3.4.6, along with other associated attributes such as Attributes 1, 2, 6, 7, 8, 9, 10, 13, 21, 22, 25, 30, 31, 48, 53, 54, 62 and 63, according to the SEC (2012a) and Krahnen and Weber (2001).

Scalet and Kelly (2012) concluded that criticism of CRAs can usually be grouped into four categories (conflicts of interest, lack of transparency, lack of competence, and lack of significant industry competition), based on the most common recent criticisms of CRAs' practices. However, through the SLR it was found that Attributes 2, 4, 10, 11, 12, 13 and 14 were discussed in more depth in the literature on empirical investigations of perceptions. The rest of the CRA-related literature shows that there are other factors relevant to accuracy, especially Attributes 6, 21, 24 and 33, which lack distinct definitions to separate them. Rousseau (2005) asserted that there is an accountability gap (Attribute 21), which makes CRA ratings less reliable (Attribute 31). Orhan and Alpay (2011) also explained the relationship between these terms as whether or not the reliability of ratings (Attribute 31) is dependent on the accuracy of ratings. Rousseau (2005, pp.42-43) interpreted this term as the gap "…which constitutes an imbalance between CRAs' power and the possibility to hold them responsible…". Although the concept of 'quality' (Attribute 2) is not defined, it appears to refer to the ratings, i.e. "...their ability to assess the creditworthiness of issuers or securities". By contrast, Kerwer (2002, cited in Bruner and Abdelal, 2005,


p.209) described the accountability gap as "the disjuncture between authority and responsibility". The RAM (2000) report examined the usefulness (Attribute 6) of CRAs according to the perceptions of the market. Estrella et al. (2000) described another similar term, 'credibility' (Attribute 24), as one of the criteria for the eligibility of CRAs. They believed it is underpinned by the existence of internal procedures to prevent misuse of confidential information (Attribute 46), and that it derives from five other criteria (objectivity, independence, transparency/international access, disclosure, and resources).

3.3.4.3 The cluster analysis of Attribute 2 (Accuracy / Quality)
"The story of the credit rating agencies is a story of colossal failure" (Waxman, 2008, in Morgenson, 2008). CRAs' ability to assign correct ratings has been questioned in many studies in which the usefulness of ratings was analysed. Nevertheless, usefulness (Attribute 6) is a complex term to investigate because of its many interpretations. For example, Lombaro (2009) analysed the usefulness of CRAs in the Australian context with reference to the role of CRAs as gatekeepers, according to Coffee's (2006) definition. However, Schroeter (2007, p.18) interpreted usefulness as the "usefulness of market information in general"; this informative function means that CRAs need to predict "raw default probability" (predicting failure) and "systematic risk" (undiversifiable risk), following the suggestion of Hilscher and Wilson (2013). As such, usefulness is closely related to the predictive power of ratings. Furthermore, usefulness is associated with other attributes as well.
From the perspective of how to facilitate and enhance usefulness, Frost (2007, p.474) emphasised that "ratings timeliness" and "information usefulness" are the two characteristics that facilitate the usefulness of credit ratings, while "stability" and "conservatism" (a greater level of verification) are the two characteristics that enhance it. According to Bellotts et al. (2011), "rating downgrades generally followed the market, rather than led it", because CRAs did not monitor their clients closely enough after making their initial ratings (Miglionico, 2012, p.92). Bozovic et al. (2011) implied that the effectiveness of ratings is similar to their predictive ability (Attribute 6), and indicated that Reinhart (2002), Metz et al. (2004), Piazolo (2006) and Nakamura and Roszbach (2010) had also analysed the issue of CRA failures and had shown that ratings have a satisfactory level of forecasting ability (Attribute 6).

Nevertheless, Loffler and Posh (2007) argued that CRAs are relatively good at long-term default predictions, according to examination results from Altman and Rijken (2006) and Loffler (2007). In addition, Purda (2011) claimed that CRAs tended to delay changing their ratings (Attribute 4) in an effort to avoid disputes over inaccuracies,


especially in ratings of structured finance products. Blinder (2009) questioned the ability of CRAs to understand the complexity of these products (Attribute 15). Using empirical research, Purda (2011) provided a summary of the reasons for rating inaccuracies. Three explanations were found: (1) temporary inaccuracies occur because the rating model (Attribute 13) is designed for achieving stability (Altman and Rijken, 2005, 2006; Loffler, 2004; Beaver et al., 2006; Johnson, 2004); (2) there is a lack of competition (Attribute 10) in the industry (McNamara and Vaaler, 2000; Becker and Milbourn, 2009), although Becker and Milbourn (2011) showed that higher competition induces inflationary ratings; and (3) conflicts of interest (Attribute 22) arise from the issuer-pay model (Kraft, 2010; Benmelech and Dlugosz, 2010; He et al., 2010).

Issuer-pay systems and conflicts of interest (Attribute 22) are the main problems leading to inaccurate ratings (Abrantes-Metz and Teodosieva, 2013). Einig (2008) and Duff and Einig (2009a; 2009b) rejected 'service portfolios' (Attribute 27), 'favour of interest' (Attribute 5), 'expertise' (Attribute 35) and 'timeliness' (Attribute 4) as attributes of rating quality. However, other researchers found that Attributes 4, 5 and 35 were theoretically relevant to the quality and accuracy of ratings, as discussed in Section 3.3.2. There are many other possible factors (even ignoring the attributes discussed earlier in this chapter). For example, Lynch (2009) proposed that inaccurate ratings were caused by: (1) poor 'due diligence' or a lack of research resources (Attribute 35); (2) a lack of analytical resources (Attribute 35); and (3) 'good faith' mistakes (ones made even with good intentions) (Attribute 38). Bar-Isaac and Shapiro (2011) analysed CRA incentives (Attribute 30) over a theoretical business cycle across different economic environments, according to the countercyclical nature of rating accuracy. They claimed that three factors during an economic boom can lead to lower quality ratings (Attribute 2) through the wrong incentive to 'milk reputation' (Attribute 34). These factors are: (a) a tighter labour market for analysts (Attribute 33); (b) larger revenue or higher income for CRAs (Attribute 9); and (c) lower average default probabilities for the securities (Attribute 42). CRAs initially release high quality ratings at a low cost, but raise their prices (Attribute 9) at a later stage, with the possible gains from cheating offsetting the effort of continuous high quality production (Shapiro, 1983; Dittrich, 2007). Doherty et al. (2009, cited in Hirth, 2011) endorsed the view that the market entry of new CRAs (Attribute 10) can improve rating quality and accuracy.
By contrast, the SEC (2012a) stated that low rating quality resulted from 'incumbents' (Attribute 43). The other relevant factors are explained as follows:
(1) 'Moral hazard' (Attribute 54) played an important role in the recent failure of some CRAs (Bozovic et al., 2011; Cantor, 2001; Khoi et al., 2013).


(2) Schwarcz (2002) commented that a lack of 'public scrutiny' (Attribute 32) does not appear to influence rating accuracy, because the CRA market is driven by 'reputation' (Attribute 34). Basu (2013) also asserted that rating quality is partly sustained by reputation concerns. However, Bongaerts (2012) argued that reputation alone does not guarantee good rating quality. Mathis et al. (2009) and Bar-Isaac and Shapiro (2010) noted that rating quality is also affected by regulation effects and the regulatory licence (Attribute 43). In addition, Bouvard and Levy (2013) examined the reputation of CRAs in line with the theory of multi-sided markets attracting different pools of customers on a platform consisting of two agents (media and operating system). They found that a CRA's reputation can damage its rating quality if the CRA is 'too transparent' with the wrong incentive (profit alone).
(3) Incentives to maintain good rating standards (Attribute 30) can be driven by reputation concerns (Attribute 34), according to Basu (2013). Herring and Kane (2009) suggested that this incentive can be strengthened by (or outsourced from) the public authorities and by greater transparency. CESR (2005, p.42) felt that "CRAs already faced significant incentives to main[tain] possible standards, particularly as they rely heavily on their good reputation with issuers and users of rating". However, Berwart et al. (2013) indicated that a lack of incentive resulted from the issuer-pay model in a regulatory-licensed environment (Ferri et al., 1999; Tian et al., 2012), together with CRAs' strategic decisions (Bar-Isaac and Shapiro, 2013). On the other hand, flawed incentives may persuade issuers to 'shop' around for the highest rating (Cantor, 2001).
(4) 'Herd behaviour' (Attribute 36) and rating inflation (Attribute 29) appear when multiple ratings exist (Camanho et al., 2012).
Rating 'shopping' is driven by the 'complexity' of products and the market (Spatt, 2009; Crumley, 2012), and it is a cause of conflicts of interest (Evans, 2011). Abrantes-Metz and Teodosieva (2013, p.3) claimed that "inaccuracy of ratings…is a problem of quality, which does not stem from a lack of competition". Jewell and Livingston (1999, cited in Freytag and Zenker, 2012) declared that "competition increases rating shopping and rating inflation" (Camanho et al., 2012). Studies by Bolton et al. (2008) and Becker and Milbourn (2011) confirmed that competition and 'rating shopping' threaten the accuracy of ratings. Moreover, such competition can lead to reduced information disclosure, according to Faure-Grimaud et al. (2006, cited in Shahzad, 2013). Josephson and Shapiro (2012) concluded that if a higher rating trigger (or requirement) is demanded by constrained investors, CRAs' profits will decrease and rating inflation will increase. In addition, evidence of rating inflation was found in the 'Big Three' global CRAs (Cantor and Packer, 1995, 1997; Walter, 2002, cited in Shahzad, 2013; SEC, 2009, 2012a). Many researchers have suggested models to reduce the issue of rating shopping (Skreta and Veldkamp, 2009; Sangiorgi et al., 2009; Bar-Isaac and Shapiro, 2010; Bolton et al., 2010; Opp et al., 2010; Fulghieri et al., 2010; cited by Bakalyar and Galil, 2011). The AMF (2009) advised regulators to curb rating shopping, and Cantor (2001) advised regulators to consider all the ratings that have been assigned.


(5) ‘Firm size’ (Attribute 26) can have an impact on the behaviour and accuracy of CRAs and issuers:
(a) Ratings from smaller CRAs (Attribute 26) contained information that was not found in the rating reports of the larger CRAs (Jewell and Livingston, 1999, 2000, cited in Baker and Mansi, 2000). Moreover, smaller CRAs tend to downgrade ratings more quickly than the Big Three CRAs (Barterls and Mauro, 2013).
(b) Larger issuers prefer to use multiple ratings (Duff and Einig, 2009a), whereas smaller issuers place greater value on the coverage and accuracy of SME business analysis (Lang et al., 2003, cited in Cox, 2010).
(c) The cost of internal monitoring is relatively cheaper in larger CRAs (Dittrich, 2007).
(d) From the 'rating game' (Attribute 40) perspective, the size of the issuer correlates with rating quality: a decrease in the size of the firm is linked with a decrease in the accuracy of the rating (Ozerturk, 2013).
(e) There is an entry barrier for smaller CRAs (AMF, 2010; Bolton et al., 2011; CESR, 2005; Dittrich, 2007; Einig, 2008).
(6) Biases (Attribute 44) were examined from five perspectives:
(a) Firm size. He et al. (2011) found that CRAs tended to rate larger issuers more favourably, and "rating biases…reinforce the creation of 'too big to fail'…" (Hau et al., 2013). Negative biases from CRAs on a specific asset class have also been found (Ismailescu and Kazemi, 2010, cited by Holden et al., 2013).
(b) Upgrades and downgrades. Rating upgrades tend to be more accurate and less biased (Hui and Lui, 2012).
(c) Smaller CRAs. Kondo (2012) contended that Japanese CRAs (which are smaller than the global CRAs) are more stringent, but Lee and Shimizu (2013) and Yamorie et al. (2006) illustrated that Japanese CRAs provide higher ratings than global CRAs. However, Cantor and Packer (1997) also declared that smaller CRAs assign higher ratings.
(d) Bias from the public.
Issuers and investors may have ‘heuristic-driven biases’ which can influence their judgement about rating accuracy (Mansi and Baker, 2002, p.1371). (e) Bias from regulators. Regulatory bodies were reported to show judgemental biases towards smaller CRAs (Attribute 26 / 44) in the decision-making process for the approval, registration or recognition of CRAs (China Daily, 2010). This arose from limited standards (Attribute 45) and 'the paradox of regulation' (Harper, 2011; Hill, 2005; Partnoy, 1999, 2001, 2006; Rousseau, 2005, 2012). Moreover, one commentator observed that political pressure could influence regulators’ decision-making processes on their recognition of CRAs (SEC, 2012a). (7) Rating drift is a topic closely related to rating inflation (Attribute 29). 'Drift' is the migration pattern of ratings observed in transition matrices after the initial issuance or assignment of ratings, and it has been examined in many studies (Altman and Kao, 1992a, 1992b; Lucas and Lonski, 1992; Carty and Fons, 1993; Carty, 1997). It is coupled with naive investment decisions and the wrong incentives caused by conflicts of interest (Pagano and Volpin, 2010; Holden et al., 2012). However, inflated ratings did not arise from mistakes in the rating methodologies (Spekkers, 2013). (8) 'Rating arbitrage' (Attribute 28) resulted from the regulatory licence, which makes market participants rely on ratings (Opp et al., 2011; Bongaerts,


2012, 2013). The excessive reliance of regulators on ratings increases the risk of potential 'cliff effects' (Kiff et al., 2012), and may reduce the ‘right’ incentive for CRAs to produce reliable data (Rousseau, 2012; ECON, 2011; Evans, 2012; Hassan and Kalhoefer, 2011; Hau et al., 2012, 2013; Helleiner, 2011; Holden et al., 2012; Kurlat and Veldkamp, 2011; Mulligan, 2009; Pagliari, 2012; Partnoy, 2001, 2010; Ryan, 2012; SEC, 2012a; Theis and Wolgast, 2012; Veron, 2011; Veron and Wolff, 2011; Weber and Darbellay, 2008; Zhang, 2010). (9) 'Liabilities' (Attribute 37) should be imposed in an effort to improve rating quality and protect investors (Zhang, 2012). Commentators in the USA suggested that CRAs should be made financially liable for their rating failures, and that their accountability could be improved by adding a gross negligence liability standard to Section 15E (‘Registration of NRSROs’), subsection (w) (SEC, 2012a). However, Partnoy (2006, p.96) suggested that “strict liability would have advantages over negligence liability, because it would give gatekeepers appropriate incentives to investigate issuers rather than prepare legal defences” (with reference to Coffee (2004a, 2004b) and Partnoy (2004)). Some commentators were concerned that some CRAs may “lose their interest in any improvement or innovation unless they deemed it necessary to retain the minimum standard necessary for designation…” (SEC, 2012a). Other voices insist that CRAs should be more exposed to civil liability instead of simply providing ‘opinions’ for investors and the public (Theis and Wolgast, 2012; Ryan, 2012; Tian, 2013; Tichy, 2011; Veron, 2011; Yeoh, 2012). However, it is difficult to determine when to punish CRAs, owing to the lack of criteria for bad ratings and insubstantial government oversight (Gudzowski, 2010). 
Moreover, Schroeter (2011) maintained that civil liability would not be effective, for three reasons: (a) whether ratings are accurate is impossible to prove, owing to the subjective nature of a predictive opinion, which is not a factual statement; (b) CRAs would be under pressure to release less favourable ratings than issuers deserve, in order to avoid unlimited damage claims from innumerable investors; and, (c) ‘actual malice’ (knowledge that the information was false, or that it was published with reckless disregard of whether it was false or not) is difficult to prove under the protection of freedom of speech afforded by the American constitution. The accountability gap (Attribute 21) exists not only in the area of CRAs’ performance, but also within government establishments and their implementation of regulations. Clearly, a stronger regulator might be better placed to ensure that CRAs are more accountable (Partnoy, 2010). However, Bruner et al. (2005, p.209) questioned the accountability of people in government, because policy makers may be ‘piggy-backing’ on the decisions of others. Moreover, Mollers (2009) noted that regulations of CRAs need to be rendered more precisely than they had been in the last decade. Kerwer (2005) suggested that regulatory accountability depends on how decisions on standards are made. Darcy (2009) pointed


out that there has been a lack of accountability in both regulations and private legal liability. These historical issues have shielded CRAs from liability in private litigation. From the perspective of stakeholder relationships, Cupta (2005, p.37) recommended “two external accountability relationships”: (a) “the principal-trustee relationship” - the principal (e.g. the SEC) authorises a trustee (a CRA), but does not control it; and, (b) “the stakeholder-agent relationship” - whereby the agents (CRAs) have an impact on the stakeholders (issuers) through their choices. Bellis (2012) suggested that an accountability gap has also existed in administrative laws and principles, procedures, and in review mechanisms for decision making by global bodies. Furthermore, Kerwer (2005) found there to be a relationship between regulatory accountability and the different modes of standard setting (Figure 3.36). Although he insisted that ‘soft law’ (voluntary best-practice rules, Attribute 52) is more efficient, a positive relationship between accountability and compulsory ‘hard law’ (Attribute 47) can also be found.

Figure 3.36: Accountability and Standardising

Private standardising - Actors: standard setters, standard users, enforcement agents. Addressees: best practice for many (to whom it may concern). Enforcement: legal, market exit, evaluation by information intermediaries. Accountability: low.

Committee standardising - Actors: transnational committee of regulators, private lobbying and consulting. Addressees: best practice for public administration. Enforcement: peer review, market pressure. Accountability: medium.

Network standardising - Actors: transnational committee of regulators supervising national self-regulation of public and private actors. Addressees: identity of regulator and regulatee; rolling best-practice rules for the regulators. Enforcement: peer review. Accountability: high.

Organisational standardising - Actors: decision making supervised by members. Addressees: standards for members. Enforcement: compulsory for members. Accountability: high, similar to directives.

Source: Kerwer (2005, p.626)

3.3.4.4 The cluster analysis of Attribute 10 (Competition / Entry barrier / Oligopoly / Concentration)

Lack of competition (Attribute 10) has been the subject of bitter complaints by participants in the market. The SEC (2008) observed that the concentration of high ratings is not consistent amongst the ‘Big Three’ across different rating classes. For example, compared to S&P, Fitch and Moody’s gave almost twice as many outstanding ratings to financial institutions. Fitch and S&P gave 4.5 and 5.5 times more outstanding


ratings (respectively) than Moody’s did to government securities. It was reported that several respondents criticised (1) the ‘oligopoly’ of the dominant CRAs, and (2) the high prices that investors and issuers have to pay for ratings or data services (Attribute 9). They demanded more monitoring (Attribute 18) of the competition affecting different types of products (SEC, 2012a), and a reduction in entry barriers (CESR, 2005). Fitch and Egan-Jones (2002, cited in Elkhoury, 2008) accused S&P and Moody’s of practising ‘notching’, whereby they automatically downgraded companies which they were not hired to rate (Attribute 29 / 22). Rousseau (2005) identified this practice as an anticompetitive strategy of the bigger CRAs. Many other researchers insisted that the oligopolistic structure of the CRA market reflects the lack of incentives (Attribute 30) to monitor and update ratings in a timely manner (Attribute 4) (Ekins et al., 2011; Doherty et al., 2012; Opp et al., 2011; Bolton et al., 2012; Berwart et al., 2013; Hill, 2005; Petit, 2011; Weston, 2012; Zhang, 2010).
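The degree of concentration complained about here is conventionally quantified with the Herfindahl-Hirschman Index (HHI), the sum of squared market shares. A minimal sketch, using purely illustrative (not actual) market shares:

```python
def herfindahl_index(shares_percent):
    """HHI: sum of squared market shares expressed in percent (0-100).
    Under the usual antitrust convention, above 2,500 counts as highly
    concentrated; a pure monopoly scores 10,000."""
    return sum(s ** 2 for s in shares_percent)

# Hypothetical shares: two dominant CRAs at 40% each, a third at 15%,
# and two small agencies sharing the remaining 5%.
big_three_market = [40, 40, 15, 3, 2]
print(herfindahl_index(big_three_market))   # 3438: highly concentrated
print(herfindahl_index([20] * 5))           # 2000: same five firms, equal shares
```

The comparison shows that concentration, not the number of agencies, drives the index: redistributing the same market across the same five firms roughly halves the score.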

However, Schwarcz (2002) cautioned that tougher regulations (Attribute 43) may increase the conservative bias (Attribute 44) against innovative financial structures, reduce competition amongst CRAs, and compel CRAs to relocate their agencies to other countries to escape these regulations. Regulation of competition may therefore be unnecessary and costly. The CESR (2006) decided not to use regulatory measures to increase or decrease entry barriers, because the effect of regulation on competition was unclear. The CESR (2005) explained that the oligopoly established itself naturally in the European market under light regulation. New CRAs may face a number of natural barriers to entry, but a number of small CRAs have been able to operate successfully in niches that are too small for the bigger CRAs. Moreover, the CESR asserted that there was no real evidence to show that improving competition would reduce rating prices, even though there is a close relationship between the regulatory licence and the oligopolistic nature of the CRA market (Partnoy, 1999; Cinquegrana, 2009). CRAs should “behave more like information intermediaries than providers of regulatory licenses” (Partnoy, 2010, p.14). Furthermore, Quaglia (2009) acknowledged the extra-territorial effect of regulations, especially as the bigger CRAs, which are American based, also have agencies in other countries. New barriers (Attribute 10) could appear if there are differences amongst IOSCO (Attribute 55), European and American laws. Many researchers have advocated the use of reputation-related and competition-raising mechanisms to solve issues in the industry (Portes, 2008; Goodhart and Persaud, 2008; Bannier and Tyrell, 2006; Dittrich, 2007; Hunt, 2009; Lombard, 2009; Mathis et al., 2009; Rousseau, 2009; Setty and Dodd, 2003). However, Goor (2013) and Becker and Milbourn (2009, 2011) rejected the notion of a positive effect of higher


levels of competition on rating quality. According to the Horner model (2002, cited by Becker and Milbourn, 2009, p.3), “the outside option, endogenously generated by competition” (where customers can approach other CRAs) can lead to rating shopping and rating inflation (Becker and Milbourn, 2011), although it may also increase the quality of ratings (through a reputation mechanism) as companies try to maximise their reputation (Klein and Leffler, 1983, cited in Becker and Milbourn, 2009). The reputational system works better when competition is not too severe, because competition can reduce future prices (Attribute 9) and increase the short-term gains from cheating. Therefore, the calls for more competition from the public and academia “deserve a caveat” (Becker and Milbourn, 2009, p.26). Consequently, competition can reduce market efficiency (Attribute 51), as it facilitates rating shopping (Bolton et al., 2012). Moreover, additional regulation could increase the cost of supervision and implementation, and thereby reduce the market’s efficiency (Schwarcz, 2002).

3.3.4.5 The cluster analysis of Attribute 14 (The role and function of CRAs)

“The role of rating agencies is more challenging and problematic in emerging economies than in developed countries” (Al-saka and Gwilym, 2010, p.80). Multiple roles and functions can be found within the literature (Figure 3.37). There is a lack of understanding from the public about the role and function of CRAs and how they rate companies (Bruner and Abdelal, 2005; Sinclair, 1999). 
CRAs are required to reduce information gaps (Attribute 1) (Amtenbrink and Heine, 2013; Bannier and Hirsch, 2010; Becker and Milbourn, 2011; Bongaerts et al., 2012; Brookfield and Ormrod, 2000; Bruner, 2008; Bruner and Abdelal, 2005; Bunjevac, 2009; Cantor and Packer, 1995; Cinquegrana, 2009; Clark and Newell, 2012; Cook et al., 2003; Cornaggia et al., 2012; Cox, 2010; Cupta, 2005; de Andrade, 2011; Densmore, 2013; Dittrich, 2007; Donaldson and Piacentino, 2012; Duff and Einig, 2009a; ECON, 2011; Elkhoury, 2008; Ellis et al., 2011; El-shagi, 2010; Frost, 2006; Gonis et al., 2012; Gropp and Richards, 2001; Haan and Amtenbrink, 2011; Hackworth, 2002; Harper, 2011; Hermansson and Wallertz, 2013; Horton, 2013; Karmel, 2012; Keller, 2006; Kerwer, 2005; Macey, 2012; Manns, 2009; Partnoy, 1999, 2009, 2010; Radzi, 2012; Rousseau, 2005; Seaborn, 2011; White, 2010; Xu and Weng, 2011). CRAs also act as financial intermediaries (Boot et al., 2006, cited in Mahlmann, 2008) with a certification function (Cook et al., 2003; Frost, 2007; Gonis et al., 2012; Herring and Kane, 2009; Levine, 2009; Miglionico, 2012; Ory and Raimbourg, 2011; Rousseau, 2005). Historically, they were considered to be a solution to enhance the accountability of borrowers (Kerwer, 2005). Nevertheless, the role of CRAs has shifted from that of information intermediaries to that of bodies with a regulatory licence (Attribute 43) (Partnoy, 1999, 2009; Crumley, 2012; Cox, 2010; Levine, 2009; Hosp, 2009; Prysock, 2006; Reinhart et al., 2002). However, their


existence has also relied on reputational capital (Cinquegrana, 2009; Hill, 2005), as they are reputational intermediaries (Ford, 2010) with a verification function (Patterson, 2011; Reinhart et al., 2002; Riddiough and Zhu, 2009; Setty and Dodd, 2003; Sufi, 2006; Sylla, 2001). Hill (2005) questioned how CRAs provide information, and what information should be provided (including information about CRAs themselves, rating fundamentals, signal information and specifications).

Figure 3.37: Roles and Functions of CRAs

1. Reduce ‘information gaps’ or resolve ‘moral hazard’ as financial or ‘information intermediaries’ - Boot et al. (2006); Cantor and Packer (1995); Scalet and Kelly (2012)
2. ‘Reputational intermediaries’ with ‘reputational capital’ - Cinquegrana (2009); Hill (2004)
3. ‘Gatekeepers’ with a ‘verification’ function - Coffee (2004); Sylla (2001); Patterson (2011)
4. Selling a ‘regulatory license’ with a ‘certification’ function - Crumley (2012); Reinhart et al. (2002)
5. A corporate governance device in the decision-making process - Kisgen (2009); Kisgen (2012)
6. A welfare-improving coordination device - Boot et al. (2006); Holden et al. (2012)
7. An economic role contributing to market efficiency - Dittrich (2007); Smith and Walter (2002)
8. Enhancing the accountability of borrowers - Kerwer (2005)
9. Helping issuers to decide how to structure transactions - Heggen (2011)
10. Offering monitoring services which influence issuers to avert possible downgrades - De Haan and Amtenbrink (2011)

Dittrich (2007) and Smith and Walter (2002) indicated that CRAs also enact an economic role by contributing to market efficiency. Boot et al. (2006), Goor (2013) and Holden et al. (2012) suggested that CRAs could act as a welfare-improving coordination device, and that CRAs should be purely informational, like an “information equalizer”, to resolve moral hazard (Attribute 54) among different participants in the market (Pagano and Jappelli, 1993; 2002, cited in Boot, 2006, p.85). Furthermore, CRAs also act as a corporate governance device for companies (Tang, 2009) in the decision-making process (Kisgen, 2009, 2012; Priebe, 2012), helping issuers to decide how to structure transactions (Heggen, 2011), and offering monitoring services which influence issuers into averting situations leading to possible downgrades (Haan and Amtenbrink, 2011).

In addition, although CRAs are described as gatekeepers (Coffee, 2004; Coffee and Sale, 2008; Coskun, 2009; Darcy, 2009; Hermansson and Wallertz, 2013; Hosp, 2009; Krebs, 2012; Legg and Harris, 2009; Lipszyc, 2011; Lombard, 2009; Manns, 2009; Smick and Posen, 2008; Strier, 2008; Yeoh, 2012), they are not like the other gatekeepers in the financial market, for four reasons: (1) they are hired by issuers and


permitted access to confidential information (Attribute 46). According to Evans (2012, pp.1129-1130), “a bond analyst employed by a Wall Street is not permitted access to confidential information. Yet, that same confidential information is available to a bond analyst employed by a credit rating agency…”; (2) “their role in monitoring issuers is not explicitly required by legislation” (Pinto, 2006, p.19): “Because credit rating agencies perform evaluative and analytical services on behalf of clients, much as other financial ‘gatekeepers’ do, the activities of credit rating agencies are fundamentally commercial [in] character and should be subject to the same standards of liability and oversight as apply to auditors, securities analysts, and investment bankers” (Evans, 2012, p.1118); (3) their independence is less likely to be compromised than that of accountants, because it is getting harder for CRAs to hide inaccurate ratings (Hill, 2005); and, (4) they have been exempt from civil liability as experts under Rule 436 of the Securities Act (in relation to Section 11) in the USA (Partnoy, 2010; Dechert, 2011).

Ponce (2010, 2012) explained the role of CRAs as providers of a double certification in a two-sided market. This economic perspective is based on the academic journal pricing model established by Jeon and Rochet (2009) to compare the average quality of papers in traditional reader-pay (not-for-profit) journals and open-access journals. The concept of a two-sided market originated from the software industry (Parker and Van Alstyne, 2000; 2005) and the credit card market (Rochet and Tirole, 2005) to elucidate two-sided network effects and interactions. Jeon and Rochet explained that a two-sided market exists amongst authors, readers and academic journals, in relation to the academic quality of papers, journal paper prices, the cost of reading papers, the cost of refereeing technology, the cost of publication, the number of readers, the number of authors, and social welfare. They concluded that an open-access policy could reduce the quality standard below a socially efficient level if the journal’s objective is to maximise its impact rather than to prioritise social welfare. Ponce was in favour of an investor-pay model for the CRA industry, and believed that the quality of ratings might fall below a socially efficient level in the event of a transition from an investor-pay to an issuer-pay system, according to the theoretical economic relationships in a two-sided market (Attribute 41). However, this proposition has not yet been supported by empirical data.

3.3.4.6 The cluster analysis of Attribute 18 (The job of regulators / enforcement)

Tian et al. (2012) suggested that the ‘hard law’ (Attribute 47) of the Credit Rating Agency Reform Act of 2006 (CRARA 2006) was not good enough, because information was only required to be disclosed (Attribute 60) to the SEC, but not to the public. 
However, principle-based or 'soft' regulation (Attribute 52) is usually broader, more fluid (Coffee and Sale, 2009), more flexible (BMA, 2004), and more cost-effective, because it


adapts to new regulatory demands faster (Weber and Baumann, unknown year). Coffee and Sale (2009) admitted that industry interest appeared to lie in a hybrid system, combining demands for bright-line rules that are easier to interpret and enforce with principles that are compatible with different business models, legal situations and market circumstances. Therefore, a combination of rules and principles is used in the American system. Buiter (2009, cited in Sy, 2009) proposed combining macro-prudential regulations (Attribute 57) (to safeguard financial stability) with micro-prudential considerations (which deal with issues of monopoly power, consumer protection and asymmetric information).

According to a performance evaluation by the SEC (2012a), the feasibility of regulations (Attribute 48) is affected by statutory factors, such as fee determination (Attribute 9), method of payment (Attribute 9), moral hazard (Attribute 54), operational feasibility and legal feasibility, as well as by factors from the framework given by the Government Accountability Office, for example independence (Attribute 22), accountability (Attribute 21), competition (Attribute 10), transparency (Attribute 1), feasibility (Attribute 48), market acceptance/choice (Attribute 6), and oversight (Attribute 18).

Coskun (2009) indicated that a supervision system should protect CRAs and CRA independence. However, as well as the conflict of interest emphasised in the IOSCO code (whereby CRA analysts hold securities of the enterprises they assess), three further conflicts of interest have arisen from ancillary services (Attribute 50): (1) tying (Attribute 67) - CRAs may oblige clients to purchase additional services, or actually force them to do so, because these companies are frightened of receiving a lower rating; (2) notching (Attribute 67) - CRAs downgrade ratings when their clients seek ratings from another agency; and, (3) tipping (Attribute 65) - whereby employees of CRAs abuse confidential information (Attribute 46) of CRAs or issuers (Coskun, 2009).

The failure of CRAs to implement regulations (Attribute 8) has been described in Section 3.3.2, particularly in the areas of process and procedure (Attribute 7), communication, and DCOs (Attribute 64). Moreover, the CESR (2010, p.9) reported that local CRAs also face a ‘language barrier’ (Attribute 39) in their applications and supporting documents: documents were rejected because they were not translated according to CESR guidance. This barrier could therefore not only cause misunderstanding of a local market by foreign CRAs (Section 2.2.1.2), but also act as an obstacle for local CRAs in the registration process supervised by international regulators. The CESR (2010) noted that only one out of seventeen CRAs had provided both


applications and supporting documents in English. Hill (2005) criticised the role of regulators because they did not scrutinise and impose penalties upon CRAs. Although the CESR (2005) claimed that CRAs have enough incentive to produce good-quality ratings, most market participants insisted that the implementation of regulations (Attribute 8) should be monitored (Duff and Einig, 2007). The CESR also expressed doubts about the public’s idea that there should be ‘a level playing field'. Partnoy (2006) argued that official recognition of CRAs by regulators should be discontinued if a market-based approach is more favourable; otherwise, there is a need for more stringent criteria in regulators’ designation processes concerning: (1) the frequency of rating updates; (2) disclosure of information by CRAs; and, (3) the transparency and ethical practices of CRAs (Setty and Dodd, 2003). Moreover, White (2007) noted that legislation by the SEC had two main limitations: it focused on ‘inputs’ rather than ‘outputs’ (which is not appropriate for the recognition process), and the requirement that CRAs must be ‘generally accepted in the financial market’ poses an entry barrier for new CRAs.

As mentioned earlier, concern has been expressed about existing CRA liabilities (Attribute 37). Schwarcz (2002) suggested that normative regulations can improve market efficiency (Attribute 51) in the economic context but that, according to historical data, reputational incentives alone might be sufficient. However, some commentators believe regulators should conduct their own analysis and evaluation of CRA performance to determine the accuracy of ratings; this requires highly skilled staff and comes at great cost. Nevertheless, one commentator suggested that this would deter CRAs from developing new methodologies (SEC, 2012a). This remains an unresolved issue amongst issuers, investors, the courts and CRAs. Hunt (2009) summarised three types of liability: (1) fraud liability - CRAs should currently be subject to securities fraud liability (namely in the USA); (2) negligence liability - CRAs are unlikely to be subject to this, because it attaches to the provision and communication of information rather than to the quality of the judgement provided; and, (3) gatekeeper liability - this is recognised by scholars as fitting the role of CRAs, although CRAs argued that they should not be subject to it, since they do not verify, audit, perform any due-diligence investigations, or assess the completeness of the information they receive (Moody’s, 2006). Zhang and Xing (2012) insisted that some CRAs have breached their ‘fiduciary duty of care’ (Attribute 58) and ’expert liability’. Zhang (2010) analysed a series of legal cases involving CRAs in the USA, such as LaSalle v. Duff and Phelps (1997), American Savings Bank FSB v. UBS Paine Webber Inc. (2003), Commercial Financial Services


Inc. v. Arthur Andersen LLP (2004), Abu Dhabi Commercial Bank v. Morgan Stanley (2009), and Abu Dhabi Commercial Bank v. Moody’s and S&P (2009). There was a regulatory obstacle (First Amendment protection, Attribute 49) that enabled CRAs to contest claims under Section 11 of securities law, and thus to avoid being convicted of expert liability (this dates back to the Jaillet v. Cashman court case in 1923). However, it is worth noting that the First Amendment protection (Attribute 49) was rejected in September 2009 in the case of Abu Dhabi Commercial Bank v. Moody’s and S&P; moreover, CRAs could face expert liability if they prepare reports “for use in connection with a registration statement” through their regulatory licence (under the scope of Section 7 of securities law). Nevertheless, on 20th July 2011, the US House Financial Services Committee approved the removal of the expert liability that the Dodd-Frank Act had introduced for CRAs (H.R. 1539 would repeal Section 939G of the Dodd-Frank Act; Section 939G had rescinded Rule 436(g) of the Securities Act of 1933, which excluded CRAs from expert liability), in response to the freeze in the asset-backed security market in 2010 (Dechert, 2011). Ideally, CRAs should be treated on an equal level with other ‘experts’.

The appropriateness and reliability of CRA performance indicators (Attribute 20) are also important when considering how to impose liabilities upon CRAs. Hunt (2009) indicated that the main performance measurement should be the ‘power curve’ of accuracy ratios of each CRA. In addition, the comparability of ratings across different rating instruments was suggested as another dimension of quality measurement. Kovacic (2009) proposed three further criteria: (1) there should be substantive results in the form of stimulated improvements in quality, reductions in costs, and increases in innovation within CRAs; (2) a CRA's rating process should include superior administrative techniques, effective quality-control mechanisms, transparency and accountability tools, as well as a commitment to continuous improvement; and, (3) a CRA's long-term capital investments should contain consistent self-assessment and a research agenda. According to the most comprehensive framework, proposed by Krahnen and Weber (2001), performance criteria should be determined by: (1) the comprehensiveness and completeness of data (Attribute 25); (2) the complexity of rating methodologies (Attribute 13); (3) a well-defined probability of default and monotonicity (Attribute 1); (4) the fineness and reliability of the rating system (Attributes 2 and 33); (5) back-tested results (Attribute 62); (6) informational efficiency (Attribute 51); (7) good system development (Attribute 63) and data management (Attribute 7); (8) embedded rating processes for incentive compatibility (Attributes 7 and 32); and, (9) rating outcomes meeting internal and external compliance (Attribute 8). Moreover, many proposals (Attribute 53) have been made for the improvement of regulations (Figure 3.38).
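Hunt's 'power curve' refers to the cumulative accuracy profile (CAP) commonly used to benchmark rating systems. The sketch below computes its summary statistic, the accuracy ratio, under the assumption that lower numeric ratings mean riskier obligors (the function name and conventions are illustrative, not Hunt's):

```python
def accuracy_ratio(ratings, defaulted):
    """Summary statistic of the CAP 'power curve' for one rating system.
    ratings: numeric scores, LOWER = riskier; defaulted: booleans.
    Returns 1.0 for perfect discrimination, about 0 for a random system,
    and negative values when ratings rank risk the wrong way round."""
    pairs = sorted(zip(ratings, defaulted), key=lambda p: p[0])  # riskiest first
    n, n_def = len(pairs), sum(d for _, d in pairs)
    if n_def in (0, n):
        raise ValueError("need both defaulters and survivors")
    # Area under the CAP curve (fraction of defaults caught vs fraction of
    # portfolio screened), via the trapezoidal rule, one obligor per step.
    area, caught = 0.0, 0
    for _, d in pairs:
        prev = caught / n_def
        caught += d
        area += (prev + caught / n_def) / (2 * n)
    area_random = 0.5
    area_perfect = 1 - n_def / (2 * n)
    return (area - area_random) / (area_perfect - area_random)
```

A rating system that places every eventual defaulter at the bottom of the scale scores 1.0; reversing that scale scores -1.0, and interleaving defaulters and survivors falls in between.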


Name

Content

Comment

‘Disclose or disgorge’ – Hunt proposal (Topic 64)

Figure 3.38: Regulation Proposals This was mentioned as a solution for novel financial products in Hunt’s (2009) proposal. Reputational mechanism cannot deter low quality ratings because investors with a higher tolerance of risk may want their investment portfolio to appear to be of a lower risk. Therefore, CRAs “…should be required to disgorge profits derived from issuing ratings on particular types of new products, or when the ratings fall below a specific level of quality, unless the agency discloses in advance that the ratings are of low quality…” (Hunt, 2009, p.1) Attribute 61

Comment about the Coffee and Hunt proposal (Darcy, 2009):

Handicap (Topic 65)

Coffee’s proposal

Establishing a maximum default rate for each rating grade or category. If the fiveyear default rate of a NRSRO exceeded this maximum rate, the NRSRO status of this agency will be suspended (Coffee, 2008).

Positive1. the suggestions seem relatively straightforward and inexpensive to administer 2. a liability system shifts the burden onto the CRAs to perform up to the required quality level 3. proposals seem consistent with the statutory limits on the SEC’s authority to regulate the substance of credit ratings 4. both contain relatively high visibility sanctions Negative1. both proposals may over-deter CRAs because they asked strict liabilities from CRAs 2. a small new entrant CRA may not be able to withstand a sanction as much as an established CRA Comparative critique (Darcy, 2009): 1. the true threat of NRSRO recognition may motivate a CRA in such a way that the possibility of profit disgorgement does not 2. In Coffee’s proposal, this 'lost profit' may not correlate with the gains the CRA made from inflating its ratings in the past 3. in Hunt’s proposal, ratings disclosed ahead of time may enable CRAs to avoid penalties (due to conflicts of interest) 4. Hunt's proposal seems to provide a better chance of deterring CRAs from issuing inflated ratings by imposing a optimal cost

It was proposed by Hosp (2009) to rank CRAs according to their scorings, so that investors can compare CRAs. This will also give CRAs more of an incentive to improve accuracy and to compete with one other. Attribute 62

W. Sun 2015

Name: Designation model
Content: "The issuer would be required to provide all interested NRSROs with the information necessary to rate the structured finance product and would pay the rating fees to a third-party administrator, which would manage the designation process…when the security was issued, the investors would designate which of the NRSROs that rated the security should receive fees, based on investors' perception of the research underlying the credit ratings…After the initial rating, the issuer would continue to pay maintenance rating fees to the third-party administrator…when the debt was repaid (or repurchased by the issuer), a final rating fee would be paid in conjunction with the retirement of the security. The credit rating would be free to the public…" (SEC, 2012a, p.66). To address the conflicts of interest in the payment system, NRSROs would be compensated by transaction fees over the life of the security, paid partly by the issuer or secondary-market seller and partly by investors in the primary or secondary markets, instead of receiving payment directly from issuers for credit rating services (SEC, 2012a).
Comment: All the comments received by the SEC (2012a) about this model were negative; for example, uncertainty over whether NRSROs would be compensated could in turn damage rating quality, competition and innovation in the industry. There was concern about the feasibility of the system because of the complexity of the payment arrangements and their administration, and additional conflicts of interest could arise if CRAs favoured particular investors or faced pressure to issue lower ratings for investors who want a high risk premium.

Name: Hill proposal
Content: Increase the number of NRSROs gradually, and eliminate the NRSRO designation in five years, or revisit the issue. Regulators would oversee the CRA industry through a case-by-case approach with a market-based solution, measured by credit spreads. CRAs would simply be required to be registered and to be subject to a public comment process. Greater accountability or liability should be expected from CRAs in legislation, because the American government alone does not have the ability to "readily destroy the oligopoly" (Hill, 2005, p.95).

Name: Regional CRA
Content: Unlike a caucus, a regional CRA (formed through contributions of expertise and resources from other CRAs) would need to be independent from its owners, who are DCRAs, and would develop its own regional rating standards and scales (RAM, 2000). The European Union has been called upon to establish a European CRA to compete with the 'big three' CRAs, increase competition and reduce concentration (Veron, 2011).
Comment: In Asia, due to a lack of co-operation and interaction amongst CRAs, it would be difficult to form a regional CRA in the short term (RAM, 2000).

Name: Standalone model
Content: Weber and Baumann (year unknown) proposed this as a standard-setting body and industry-related organisation.

Further comments received by the SEC (2012a) suggested that such models would not eliminate conflicts of interest, since NRSROs would seek to appease issuers wherever it is the issuers who select the NRSROs that provide ratings. Others questioned the feasibility of implementation because of the complexity of the payment system and its administration; the number of CRAs such a model could support was also questioned, as was whether CRAs would be adequately compensated.


Name: Issuer and investor-pay model
Content: All NRSROs would be placed in a continuous queue for receiving rating assignments, with at least two NRSROs assigned at a time. CRAs' performance would be measured by correlating their default and recovery rates, and by using a common, transparent and defensible methodology. Rating fees would be deposited in a fund by issuers who issue new debt and by investors who trade in the secondary market (SEC, 2012a). The GAO (the Government Accountability Office in the USA) proposed this model in 2012 as an alternative for reducing conflicts of interest.

Name: Investor-owned CRA (IOCRA)
Content: Institutional investors who are 'highly sophisticated institutional purchasers' (HSIPs) would create and operate an NRSRO. Issuers would have to obtain ratings both from the investor-owned NRSRO and from NRSROs of their own choice. Research and ratings from the IOCRA would be freely available to the public.
Comment: Additional conflicts of interest would arise from large institutional investors who are interested in rating inflation (Crumley, 2012). Comments received by the SEC (2012a) questioned the capability of an IOCRA to assemble extensive resources. An IOCRA could lower ratings for investors who seek a higher risk premium, maintain ratings at a certain level for investors who need them there, and provide high ratings for investors who want a 'cheap' asset. The issuer-pay conflict of interest would still exist, because HSIPs may have different roles in the market (SEC, 2012a).

Name: Investor-Owned RA (IORA)
Content: It would be formed by the largest fixed income investors (FIIs), whose buying power would be sufficient to require issuers to buy ratings from them (SEC, 2012a).

Name: User-pay model
Content: All users of ratings would be required to pay for them, with a third-party auditor ensuring that NRSROs receive payment from users (SEC, 2012a). The rating fee could be set according to the value of the debt, and CRAs could be paid by 'fees from proceeds', because investors need to pay for ongoing rating reviews (Ellis et al., 2011).
Comment: Comments received by the SEC (2012a) suggested that it would be impossible to capture who the users are in every case, and that additional conflicts of interest would occur because investors and users also want ratings at certain levels.

Name: Platform-pays model
Content: A self-regulated platform organised by both investors and issuers. It would act as an exchange, a clearing house or a central depository, "…completely in control of the rating process, and it would also provide recordkeeping services to the different parties in the securitization operation" (Mathis, 2009, p.669). Potential issuers would pay a pre-issue fee to the platform, which would then pay NRSROs for their ratings. Nwogugu (2008) proposed six compensation models in more detail, in relation to payment percentages and funding approaches from different interest bodies, including the investor pool, the agency pool (maintained by the SEC) and issuers.


Name: No CRA
Content: The government (the SEC) would publish standards and methodologies for banks, investors and institutions to use in their evaluations, and the SEC would rate industries and publish reports (Nwogugu, 2008). Alternatively, proposals from the Commission's consultation document (ECON, 2011) recommended that national central banks could be trusted to release ratings for regulatory purposes, and that the European Commission should provide sovereign ratings.
Comment: The European and national central banks may make poor judgements, according to the evidence from Greece. The European Commission should not be relied upon to provide ratings, because it is the "policemen" of the EU who oversee fiscal policies (ECON, 2011, p.79).

Name: Trademark license
Content: A compulsory trademark license could be imposed on CRAs to help new entrants "overcome reputational disadvantages" and "information disadvantages" by disclosing rating procedures and methodologies (Petit, 2011).
Comment: Compulsory disclosure may deter innovation in CRA methodologies, and disadvantages for new entrants would remain as long as regulatory licenses exist (Petit, 2011).

3.3.5 CRAEG and patterns of attributes and gap components

424 articles were sorted into three research foci (as demonstrated in Figure 3.21): (a) rating quality or accuracy, (b) understanding about CRAs or CRAs' performance, and (c) regulations. Although there are similar discussions among these sources, studies with varied research foci appeared to cover different topics. The analysis results for sixty-five attributes, in line with these three research foci, are demonstrated in Figure 3.39.

Figure 3.39: Attributes and Research Foci

This figure illustrates the attributes existing in the overlap of these issues. The presentation style of these association relationships is the same as that used for the AEG attributes analysis. For example, Rating fee (Attribute 9) was mentioned in the discussions of articles from research foci (b) CRAs' performance and (c) regulations. Rating fee (Attribute 9) is associated with the Regulation Gap because the SEC (2012a) noted that the feasibility of regulations is affected by factors such as fee determination and payment method. This attribute is also relevant to the Performance Gap, because Shapiro (1983) and Dittrich (2007) found that larger revenues or high income collected from rating fees could give CRAs the wrong incentive to milk their reputation, reducing rating quality. More detail can be found in the discussion of each cluster and attribute in Sections 3.3.4.3 to 3.3.4.6. (This is an SLR result of CRA studies, so the six attributes relevant only to CCRAs are not shown here; the full set of seventy-one attributes is illustrated in Section 3.4.2.)

Possible attributes for the Understanding Gap, the Regulation Gap and the Performance Gap are proposed in Figure 3.42, in accordance with Figures 3.33, 3.34 and 3.37, and with the structure of Porter's AEG model. Clearly, the structure of Porter's model is suitable for evaluating the CRA industry, since the relevant attributes collected from the literature can be interpreted in terms of its three components: (1) deficient performance, (2) deficient standards and (3) unreasonable expectations. Moreover, as shown in Figure 3.37, deficient performance includes attributes that might be caused by deficient standards and unreasonable expectations, and deficient standards also contain attributes that might be influenced by understanding from the market. Considering the overlaps amongst the Understanding Gap, the Regulation Gap and the Performance Gap, the relationships between understanding, regulation and performance that appear in Porter's AEG model can also be found in the SLR analysis results for CRAs; the applicability of Porter's model to the CRA industry is thus confirmed through the SLR results for the AEG and CRAs. Perception gaps (Attribute 19) were found in multiple studies (Section 3.3.3.15), and these could be linked to misunderstanding (Attribute 38) by the public about the role of CRAs (Section 3.3.4.3), especially when no reliable and feasible performance indicators (Attribute 20) have been found. The existence of a Regulation Gap (the deficiency in regulation) was observed by many scholars with reference to Attributes 45, 47-50 and 52-65 in Section 3.3.4.6. A Performance Gap (the deficiency in performance) was also emphasised on the basis of Attributes 5, 6, 8, 16, 19, 25, 39, 42 and 43 in Section 3.3.4.3. (Further detail about the suitability and applicability of Porter's model for this research is given in Sections 7.2.2 and 7.2.6.1.)
3.3.6 Other relevant theories

“Conceptual frameworks help researchers by: modelling relationships between theories; reducing theoretical data into statements or models; explicating theories that influence the research; providing theoretical bases to design, or interpret, research; creating theoretical links between extant research, current theories, research design, interpretations of findings and conceptual conclusions” (Leshem and Trafford, 2007, p.101). Moreover, conceptual frameworks “…come from...well-organised principles and propositions that have been confirmed by observations or experiments; models derived from theories, observations or sets of concepts; or, evidence-based best practices derived from outcome and effectiveness studies” (Bordage, 2009, p.313).

As indicated in Section 3.3.2, conceptual theories were not mentioned in the empirical investigations of CRAs, except for AUDITQUAL from Duff (2004), which was adopted by Einig (2008). However, the SLR results have shown that Einig's (2008) findings about the attributes of rating quality may not cover all the possible attributes, since seventy-one attributes and factors can be found (Sections 3.3 and 3.4). Therefore, in order to capture the expectations and perceptions of CCRAs, theories used in AEG-related studies should be considered. Alternative conceptual theories were analysed and discussed in Section 3.2.4. The other existing theories are not suitable as conceptual theories because they are not structured upon evidence-based best practice, are too broad, or lack organised principles (Figure 3.40). However, Porter's model contains limitations, which will be discussed in Section 7.2.1.

Figure 3.40: Conceptual Theories Adopted by Others

Theories | Author / Year | Not a conceptual theory
Accountability | Chowdhury (1996); AlQarni (2004); Eldarragi (2008) | Too broad / not evidence-based best practices
Role theory | Adeyemi and Uadiale (2011); Ebimobowei and Kereotu (2011) | This is a conceptual theory
Performance | Okardo (2009); Chong and Pflugrath (2008) | Too broad / not evidence-based best practices
Service quality gap | Duff (2004); He (2010); Turner et al. (2010) | This is a conceptual theory
Information gap | Salehi and Rostami (2010) | No well-organised principle / not evidence-based best practices
Perception gap | Salehi and Rostami (2010) | No well-organised principle / not evidence-based best practices
Communication | Otaibi (2003); Chong and Pflugrath (2008); Okardo (2009) | No evidence / not evidence-based best practices
Agency theory | Adeyemi and Olowwohere (2011) | No well-organised principle / not evidence-based best practices
Policeman theory | Adeyemi and Olowwohere (2011) | No well-organised principle / not evidence-based best practices
The inspired confidence | Adeyemi and Olowwohere (2011) | No well-organised principle / not evidence-based best practices
Attribution theory | Anderson et al. (1998) | This is a conceptual theory
Decision making theory (Libby, 1979) | Noghondari and Foong (2009) | Not evidence-based best practices
Legitimacy theory | Chiang and Northcott (2010; 2011) | Not evidence-based best practices
Professionalization | Lee (1994) | No well-organised principle / not evidence-based best practices
Institutional theory | Nagy (2000) | No well-organised principle / not evidence-based best practices
Culture | Leung and Chau (2001) | Too broad / No well-organised principle / not evidence-based best practices
Service quality gap (ZBP) | He (2010) | This is a conceptual theory
4 dimensions and 14 elements of auditing (Gray and Manson, 2000) | Daud (2007) | Not evidence-based best practices

Note: (1) 'Too broad' means the theory is not specific in terms of detailed components, attributes or relationships; (2) 'Lack of organised principles' means the theory does not indicate its assumptions or limitations, or how to use it; (3) 'Not evidence-based best practices' means the theory has not been tested with empirical data with a detailed specification of the investigation method and process.

3.4 The Attributes of CCRAEG

Understanding the CCRAs requires an SLR of the Chinese literature, due to the complex issues identified in Chapter 2. This is especially true given the limited availability of historical information about CCRAs outside China. The CCRA-related (English-language) literature also offers little methodological clarification. There are doubts about (1) the dependability of reports, (2) the legitimacy of sources, and (3) the certainty of authors' statements. For example, Weston (2012) made various assertions which appeared to be based on limited sources, which in turn had been translated from Chinese-language resources with unverifiable expertise of authorship and which, when examined in detail, raised some discrepancies (Figure 3.41).

As such, Section 3.4.1 outlines a methodologically underpinned investigation into CCRAs and the results obtained from it. The theoretical model of the CCRAEG is proposed in Section 3.4.2. Because of the importance and influence of the historical development of CCRAs, a systematic analysis of their history is presented in Section 3.4.3 to further understanding of the industry. It is hoped that this will shed light on the history of their development. An historical-method process and the SLR are applied in an effort to assess (1) the reliability of records, (2) the authenticity of sources, and (3) the veracity of authors' statements.


Figure 3.41: Discrepancies in CCRA Related Literature

Assertion by Weston: “… Chinese government is considering creating government-backed credit rating agencies …”
Textual/Interpretative limitation: Veracity of Author's Statement
Discrepancy: China Credit Rating Company was established in August 2010 (China Rating, 2013), with the National Association of Financial Market Institutional Investors (NAFMII), which is operated by the PBC (NAFMII, 2013), as its main shareholder.

Assertion by Weston: “The Chinese credit rating industry is currently dominated by Fitch, Moody’s and S&P …[these companies] …have a standalone combined market share of over 67% in the Chinese credit rating industry”
Textual/Interpretative limitation: Authenticity of Sources; Reliability of Records
Discrepancy: Global CRAs were not permitted to own more than 49% of CCRAs until 2012 (MOC, 2008; 2012). All ratings within China are required to be CCRA-authorised, according to the requirements on the websites of the PBC, CSRC, CIRC, CBRC and NDRC; the information provided in the newspaper might have been over-generalised without any statistical support (Pan, 2012).

Assertion by Weston: “… the Big Three CRAs have joint ventures with three of the four major remaining Chinese credit rating agencies: CCXI, LianHe Ratings and Shanghai Brilliance”
Textual/Interpretative limitation: Reliability of Records
Discrepancy: Global CRAs purchased shares in the subsidiary companies ‘Lianhe Zixin’ and ‘CCXI’ of Lianhe and CCX, but do not have control over them (Zhang, 2013). Shanghai Brilliance did not undertake any merger or acquisition with global CRAs, but only a technical co-operation with S&P in 2008 (Shanghai Brilliance, 2012).

Assertion by Weston: “… 2011 … [PBC] as the chief regulatory …, will now supervise CRAs and all ratings they provide in China”
Textual/Interpretative limitation: Veracity of Author's Statement
Discrepancy: There is no clear clarification of how the PBC would supervise CRAs on its own, only management methods within the inter-bank debt market (PBC, 2013).

3.4.1 SLR Approach for CCRA related literature

Search keywords were collected during the traditional literature review, and five terms were identified (Section 2.2.1). Applying the inclusion and exclusion criteria to the search resulted in an initial total of 285 records, of which only 240 (236 Chinese and four English) were accessible. Upon checking references within these documents, a further two books and various CCRA and government websites were added, plus three English and sixteen Chinese journal articles, three master's dissertations, one PhD thesis and one industry report in Chinese (which contained a discussion of the historical development of CCRAs; see Figure 3.42). 118 documents were found to be relevant to the regulations, performance and issues of CCRAs. The SLR results in the second part of this section show that there has been limited research on the CCRAEG, and so there is a literature gap in this area. 391 CCRA-related articles in Chinese were found (273 accessible), and the precision of the searches (41.3%) and the percentage of inaccessibility (16.1%) show that the coverage and precision of the literature search are appropriate for the SLR of the CCRAEG (Section 4.3.7.3).

Figure 3.42: The SLR Strategy for the History of CCRAs

3.4.2 The CCRAEG and attributes

Relevant literature on the regulations and performance of CRAs is limited, and most of the documents concerning CCRAs were published as news articles in various magazines, and so lack an academic underpinning. It is difficult to develop a CCRAEG model and to identify the differences in perceptions amongst different interest groups using such literature. Therefore, the proposed model is established according to the SLR results for CRAs, CCRAs and the regulation of both, in an effort to obtain a comprehensive understanding of the relevant factors, issues and topics, and in order to satisfy Research Objective 3 in Section 1.3.

According to the SLR results from Section 3.3 (and the theoretical discussion of these issues), Attributes 2, 10, 18, 20 and 22 are the main issues, and the remaining topics are all directly or indirectly relevant to them. In the CCRA industry there is an Understanding Gap in terms of these attributes. Moreover, discussions regarding Attribute 2 appeared to focus on five issues:

(1) Competence of staff (Attribute 3) (Bai, 2008; Cai and Zhou, 2008; Chen, 2009b; Chen and Deng, 2000; Guo, 2008; Guo and Yang, 2010; Liu, 2009b; Liu, 2001; Ni, 2008; Tang, 2008; Tang, 2010; Wang and Long, 2012; Wu, 2006; Yang, 2004; Zhang, 2012a; Zhu, 2004);
(2) Transparency (Attribute 1) (Bao, 2009; Chen, 2012; Cui, 2012; Li et al., 2008; Tang, 2011; Nie, 2011a; Wang, 2008; Wang et al., 2006; Zhang, 2008; Zhang and Zhang, 2010);
(3) Reputation (Attribute 34) (Liu and Zhang, 2012b);
(4) Research methodologies (Attribute 13) (Bai, 2010a; Bao, 2009; Chen, 2009; Chen and Deng, 2000; Chen and Jiang, 2009; Chen and Ma, 2008; Dong et al., 2005; Gao, 2012; Guo, 2008; Guo and Yang, 2010; Hou, 2012; Pang, 2006; PBC XingTai HeBei Province, 2008; Wang et al., 2006; Wu, 2006; Yang, 2004; Yang, 2012; Zhang et al., 2005; Zhu, 2004);
(5) Rating process and procedure (Attribute 7) (Guo and Yang, 2010; Li, 2010; Wang et al., 2006; Yang, 2004).

However, another six factors (Attribute 66: information platform; Attribute 67: historical development issues; Attribute 68: SOEs' influence; Attribute 69: co-operation between CCRAs; Attribute 70: limitations of the bond market; Attribute 71: double ratings) were also discovered, owing to special features of CCRAs. Attributes 66, 69 and 71 are proposals (Attribute 53) to enhance market efficiency and reduce the information gap. Attributes 67 and 68 are background factors which have an impact on, and contribute to, the deficiencies in public understanding, regulation and CCRAs' performance. Attribute 70 appeared to be a reason behind CCRAs' performance deficiency only.

In addition, discussion about Attribute 38 is more extensive in the CCRA-related literature (compared to the literature on global CRAs), and much of the CCRA-related literature focuses on Attributes 71 and 20. As for Attributes 20, 40 and 71, despite the complexity of the meanings and use of terminologies and regulations (as discussed in Sections 2.2.1 and 2.2.2), several other relevant problems were asserted in relation to Attributes 40, 71 and 20:

(1) The CCRA industry was not highly valued by companies, banks, regulators or the public (Cai and Zhou, 2008; Liu, 2007; Liu, 2009; Tang, 2008; Wang and Long, 2012; Wang et al., 2007; Zhang, 2008; Zhu, 2004). Some local governments misunderstand the role of CRAs and think that rating fees would be a burden for borrowing companies (Qu, 2008). Several banks presumed that external ratings would be just the same as internal ratings (Cheng, 2006; Jin and Cheng, 2005; Liu, 2007; Ma, 2008b; Xiao, 2004);
(2) The public does not fully understand the credit economy (Duan and Ma, 2009; Guo, 2008; Guo and Yang, 2010; Liu, 2002; Liu, 2007; Liu, 2009; Lu, 2012; Ma, 2010; PBC XingTai HeBei Province, 2008; Wang and Long, 2012; Xia and Lin, 2006; Yuan, 2008; Zhang, 2005; Zhu et al., 2006). Section 2.2 revealed some major possible reasons: complexity in terminologies, regulations, CCRAs' backgrounds, and the historical development of the Chinese financial market. There has been no efficient broadcasting method to educate the public about credit ratings, and the public still has problems in understanding the role of CRAs, even though CRAs have existed since 1909 in America (Tang, 2010). In addition, the standards and explanations of rating methodology, quality and process from each CCRA appeared to differ (Fang, un-paginated; Liu, 2001; Zhi, 2009; Zhu, 2004);
(3) There appears to be an over-reliance on global CRAs, since the 'big three' are the most popular CRAs in the global market, and the public is more familiar with their names than with those of local CRAs (Dagong, 2009; Huang, 2010);
(4) CRA ratings lack impact in the Chinese financial market because of the historical development issues of the Chinese economic system (Chen and Deng, 2000; Cui, 2012; Deng, 2010; Guo and Yang, 2010; Kennedy, 2008; Liu, 2002; Ni, 2008; PBC YueYang HuNan Province, 2006; Qu, 2008; Tang, 2011; Xu, 2007);
(5) An explicit definition of the role and function of CCRAs appears to be absent, since the term 'credit rating' can be translated in the same way as 'credit checking' in the regulations (see Section 2.2.1) (Dong et al., 2005; Li, 2004; Peng, 2009);
(6) Credit ratings were not widely applied or used in financial markets in China because of the limited rating demand in the Chinese market (Dong et al., 2005; Guo and Zheng, 2009; Li, 2004; Liu, 2007; Ni, 2008; Pang, 2006; PBC YueYang HuNan Province, 2006; PBC XingTai HeBei Province, 2008; Wu, 2006; Yang, 2004; Zhang, 2005; Zheng, 2007; Zhu, 2004);


(7) Most CCRAs do not have the right to participate fully in the global CRA market, owing to regulatory requirements from CESR and the SEC (Pan, 2010; Huang, 2010; Li, 2006; Li, 2009; Zhang, 2012). CCRAs that are independent from global CRAs are required in the Chinese market, as a national brand for helping and understanding Chinese companies with a 'national interest' (Li, 2006; Wang and Long, 2012);
(8) There needs to be an information platform (Attribute 66) for integrating data from each CCRA and sharing information amongst CCRAs (Fang, un-paginated; Guo, 2008; Liu, 2007; Pang, 2006);
(9) According to Guo (2008), Li et al. (2003), Liu (2009), Mao and Yan (2007), Wang and Long (2012), Xu (2007), and Yuan (2008), historical development issues (Attribute 67) are closely relevant to CCRAs' performance, and are a cause of the incompleteness and incomprehensibility of the accessible data (Attribute 25) (more details are provided in Section 3.4.3). Consequently, the government's involvement (Attribute 59) in the industry is crucial, as it can provide support for further development (Bao, 2009; Chen, 2009);
(10) CCRAs need to co-operate (Attribute 69) with one another for innovation and research purposes, and to share information in order to achieve better efficiency and reduce costs (Guo, 2008; Huang, 2011; Liu, 2001; PBC XingTai HeBei Province, 2008; Qu, 2008; Wang, 2007; Yang, 2004);
(11) Elliott and Yan (2013) and Kennedy (2008) suggested that the politically influential power of SOEs (Attribute 68) and the limitations of the Chinese bond market (Attribute 70) have a negative impact upon the development of CCRAs (see more details in Sections 2.2.3 and 2.2.4). Chen and Deng (2000), Dong et al. (2005), Li (2004), Liu (2004), and Xu (2007) also illustrated that the regulation of the bond market restricted the development of CCRAs;
(12) Double ratings (Attribute 71) are needed to increase rating demand in the Chinese market, where all rated entities should have at least two ratings from CCRAs in order to enter the Chinese bond and securities markets (Ni, 2008; Yuan, 2008; Zhang, 2012).

Competitive issues (Attribute 10), rating inflation or rating shopping (Attribute 29), and tipping, notching and tying (Attribute 64) appear to exist within the CCRA industry, according to journalists, participants and academics (Duan and Ma, 2009; Guan, 2009; Li, 2009; Li, 2010; Ni, 2008; Tang, 2008; Wang and Long, 2012; Wu, 2006; Wu, 2010; Xu, 2007; Yang, 2012; Ying and Zhang, 2006; Zhu, 2004). However, another phenomenon, severe competition within a limited market, was also revealed by many researchers and market participants as a special feature of the Chinese market (Dong et al., 2005; Guo, 2008; Li, 2004; Li, 2010; Wu, 2006; Yuan, 2008). This was caused by four issues: (1) the oligopoly of global CRAs (Li, 2010; Lu, 2012); (2) limited financial products (Chen, 2009; Cui, 2012; Duan and Ma, 2009; Guo and Yang, 2010; Liu, 2002; Liu, 2009; Wu, 2006; Xiao, 2007; Yang, 2004; Zhu, 2004); (3) local government protection (Tang, 2008; Xiao, 2004; Zhu, 2004); and, (4) limited rating demand (Tang, 2010; Xiao, 2004; Yang, 2012; Zhang, 2012; Zhi, 2009).

Comments on the limitations of current CCRA regulations include the following: (1) policies for recognition of CCRAs should be produced (Gan, 2009; Huang, 2011; Jiao, 2012; Li et al., 2003; Liang, 2011; Luo and Shang, 2009; Ni, 2008; PBC Tianjin, 2009; Peng, 2009; Zhang, 2011); (2) a general guide for rating methodologies should be established (Bai, 2010a; Bao, 2009; Chen, 2009; Chen and Jiang, 2009; Chen and Ma, 2008; Gao, 2012; Guo, 2008; Guo and Yang, 2010; Hou, 2012; Pang, 2006; PBC XingTai HeBei province, 2008; Wang et al., 2006; Wu, 2006; Yang, 2012); (3) qualification requirements for analysts in CRAs should be clarified (Peng, 2009); (4) supervision departments should take more responsibility in monitoring the internal policies of each CCRA (Duan and Ma, 2009; Gu, 2010a; Guo, 2008; Peng, 2009; Wang, 2008); and, (5) the liabilities of CRAs should be clarified in the regulations (Duan and Ma, 2009; Fang, un-paginated; Li, 2009; Xu and Weng, 2011).

Both the regulators and the public lack knowledge about the impact of ancillary services, competition, performance indicators and rating fees, and about the ways in which the issuer-pay approach encroaches on the independence of CRAs (Attribute 22 / 50). This is because the relevant literature in China appears to be limited, and there is no requirement regarding ancillary services in Chinese regulations. Moreover, the public is not fully aware of the function of CRAs, as discussed earlier. Concerning the SLR results of the AEG, CRAEG and CCRAEG, there are six factors which influence the expectations of the public:

(1) the complexity of the role of CRAs, CRA terminologies, regulations and supervision departments (Attribute 20), as discussed in Chapter 2;
(2) the influence of the historical development of CRAs on the public's understanding (Attribute 67), also as discussed in Chapter 2;
(3) bias, behavioural economics (Attribute 36) and other psychological factors (Attribute 44), as discussed in Sections 3.2 and 3.3;
(4) inconsistent data about the impact of competition, entry barriers, oligopoly and rating games (Attribute 10 / 40), as discussed in Section 3.2;


(5) time lags and limitations in establishing and broadcasting regulations, as discussed in Section 3.2; and,
(6) the development of technology (mentioned in Section 3.2).

In order to investigate the CCRAEG in line with the perceptions of all interest groups, the elements of the Regulation Gap and the Performance Gap should derive from the requirements of CRAs stated in legislation, and from an understanding of the perception gaps in the literature. As discussed in Section 3.3.4, the clusters around Attributes 2, 10, 18, 19, 20, 50 and 53 are the main clusters, and are associated with most of the other attributes. However, in order to investigate market participants' expectations and perceptions of CCRAs' performance, the items under examination should be associated with performance, responsibility and liability. CRA- and CCRA-related regulations were used to identify the majority of these items, as they provide detailed, specific descriptions; this was necessary because there are no clear performance indicators (Attribute 20), as confirmed in the literature, and because discussion about responsibilities and liabilities (Attribute 37) relates to Rating quality / Accuracy (Attribute 2) and Enforcement / the job of regulators (Attribute 18), which are attributes with different sets of characteristics, to disputable attributes with no clear findings, such as rating fee (Attribute 9), competition (Attribute 10) and bias (Attribute 44), and to many of the new regulation proposals. All relevant legislation about CCRAs' duties was also reviewed in order to collect the responsibilities of CCRAs. The IOSCO code was used as well, because it is the international code of conduct for all CRAs.
The six areas of liability published in the Chinese regulations from the PBC and CSRC (the content of rating reports; the standards for rating processes, procedures and methodologies; internal policies about confidential information; self-regulation; record-keeping; and the liability of submitting documents to supervision departments) were analysed in Section 2.2.2. The four codes of conduct released by IOSCO, which cover (1) the quality and integrity of the rating process, (2) CRA independence and the avoidance of conflicts of interest, (3) responsibilities to the public and issuers, and (4) disclosure of the code of conduct and communication with market participants, were also discussed in Section 2.2.2. Together, these should contain at least nine attributes: (1) communication and transparency (Attribute 1); (2) accuracy and quality (Attribute 2); (3) independence and avoidance of conflicts of interest (Attribute 22); (4) self-regulation (Attribute 56); (5) internal control and management of confidential information; (6) establishment of the policies and duties of the DCO (Attribute 64); (7) competence of CRA employees (Attribute 35); (8) record keeping (Attribute 7); and, (9) record submission processes (Attribute 16).


These attributes are suitable for the CCRAEG examination because they are topics in the regulations which CRAs and CCRAs have a responsibility or liability to perform. The selection of interest groups is justified in Section 4.3.7.1. These CCRA duties can be orientated towards three interest groups: the investing public, issuers and supervision departments; this was identified through the SLR analysis of perception differences amongst interest groups in Section 3.3.3.15. CCRAs' opinions should also be collected in order to reflect the overall perceptions and expectations of the market. As such, the four interest groups are CCRAs; CCRAs' customers; investors and the public; and regulators. However, views within the academic literature should also be valued, according to the SLR results in Section 3.3.4.1. Therefore, these views are reflected in the CCRAEG framework through the analysis of the Reasonable Expectation Gap and the Knowledge Gap, since the SLR results are used as one boundary for these two gaps.

Figure 3.43: The Proposed CCRAEG with Two Dimensions

Within the CCRAEG-relevant literature, it was found that CCRAs’ performance was affected by (1) incomplete or incomprehensible data provided by issuers (Attribute 25);


(2) shortages of highly skilled workers (Attribute 35); and, (3) a lack of rating standards (Attribute 70). As such, the model is proposed in Figure 3.43.

It should be noted that the selection of gap components was discussed in Section 3.2 and summarised in Section 3.2.7, and the selection of sample groups was justified according to the SLR results of CRA-relevant empirical studies in Section 3.3.1. These nine elements were selected for investigation because of the knowledge level of the public, the Regulation Gap, and the feasibility of the survey method; this is reviewed and discussed in Sections 7.2 and 7.3, where the development of the framework and the investigation method are considered alongside practical experience. “Attribution theory is a descriptive theory of the manner in which people arrive at judgments about the causes of an outcome and the responsibility for that outcome. The theory has been able to explain the Expectations Gap in a variety of fields including education, managerial accounting, clinical psychology, and sports psychology” (Arrington, 1983). Following the attribution model (Settle et al., 1971), the attribution model for the CCRAEG can be described as follows (Figure 3.44): the differences in the perception of influencing factors by each interest group can be separated into different dimensions. First, there are the perception differences amongst the four interest groups. Second, there are different perceptions of the four gap components. Third, each gap component contains nine examinable elements, as well as other elements and factors. As such, the attribution of the CCRAEG comprises at least 144 cells (4 × 4 × 9).
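The cell structure of this three-dimensional attribution model can be sketched programmatically. The labels below are illustrative placeholders drawn from the framework, not the exact wording used in the survey instruments:

```python
from itertools import product

# Illustrative labels only; the thesis instruments may word these differently.
groups = ["CCRAs", "issuers", "investors and the public", "regulators"]
gap_components = ["Knowledge Gap", "Reasonable Expectation Gap",
                  "Regulation Gap", "Performance Gap"]
attributes = [f"attribute {letter}" for letter in "abcdefghi"]  # nine elements

# Every (group, gap component, attribute) combination is one examinable cell.
cells = list(product(groups, gap_components, attributes))
print(len(cells))  # 4 interest groups x 4 gap components x 9 attributes = 144
```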

Figure 3.44: The Proposed CCRAEG Conceptual Framework with Three Dimensions


3.4.3. SLR results on CCRA history

By adopting an SLR and an historical method, this section explores the development and characteristics of CCRAs. It discusses terminology, conflicts amongst regulatory departments, and the complexities of business backgrounds: information gathered through interviews, legislation, previous studies, and CCRA and government websites. In general, the developmental history of CCRAs can be mapped in five distinct stages, each with key events and aspects unique to CCRA development.

3.4.3.1 Previous CCRA historical studies

Differences exist among previous historical studies of CCRAs. Figure 3.45 provides a summary of nine studies from the SLR, which described the historical development of CCRAs within distinctive stages, in contrast with the other fourteen sources that were examined. There are differing opinions on the exact periods of development at each stage. Every one of the first nine studies includes information on when CCRAs and policies were launched, and when changes were made to regulations or policies associated with the Chinese credit rating industry.

Only three sources indicated five stages of development (A, C and D); five sources indicated four stages (D, F, G and I); and, one (B) indicated only three stages. Apart from one publication (B), all the others were in agreement that the 1st stage commenced in 1987. Three (A, C and E) indicated that the 2nd stage commenced in 1989; two (F and I) indicated it began in 1992; three (B, G and H) indicated it began in 1993; and, one (D) provided no commencement year. Similar discrepancies occurred in relation to the commencement of the 3rd stage, with three (A, C and E) signifying this was in 1990; two (G and H) stating it was in 1999; one (F) stating it was 2000; one (B) suggested it was sometime at the end of the 1990s; and, the other (D) provided no clear indication. Two studies (A and C) indicated that the 4th Stage commenced in 1992; one (I) stated it began in 1997; two (G and H) claimed it started in 2003; one (F) claimed it started in 2005; one (I) claimed it started in 2006; and the other two publications (B and D) provided no information. Finally, of the three studies that recognised a 5th stage, two (A and C) indicated this as beginning in 1996 and one (I) indicated it began in 2003.

These stages have also been labelled differently, possibly due to the use of terminology with similar meanings. Given all these variations, tagging and separating periods of development by year (for example, the year in which a governmental regulation or policy was released or changed) might not be an effective way to describe the historical development of CCRAs. This may be even more so when


studying the complicated regulation system and the multiple CCRA supervision departments which publish regulations and policies separately at various times in response to diverse issues (Wang, 2011).

Figure 3.45: Differences in History Studies about CCRAs

Moreover, among the other studies that examined the historical development of CCRAs but did not identify developmental stages (Cheng, 2006; Gan, 2009; Guo, 2008; Jin, 2008; Kennedy, 2008; Poon and Chan, 2008; Li, 2004; Liang, 2011; Liu, 2001; Liu and Zhou, 2003; Liu, 2004; Liu, 2006; Weston, 2012; Yuan, 2004; Zheng, 2007), some appeared to draw no clear distinction between credit rating and credit checking within their research. For example, Liu (2006) stated that there


were three types of credit rating (corporate rating, sovereign rating and personal credit rating). However, CRAs do not provide services to individuals concerning their personal credit rating reports, as this is usually a different kind of credit checking business, and such reports can be provided to individuals for free (Frikken et al., 2005). In addition, Fang (2006) described the development of Chinese credit checking agencies as the development of CCRAs. In reality, credit checking agencies and CRAs are two different types of financial reporting firms. Although these studies appear to contain extensive information concerning the CCRA industry background and regulatory framework, such as Weston (2012), there are inconsistencies in the background information about each CCRA from the different sources. For example, Li (2004) stated that Jilin Credit Rating was the first CCRA in China, whereas Liu (2006), NAFMII (2008), Nie (2011), Shanghai Far East (2013) and Wang (2006) declared that Shanghai Far East was the first. Therefore, any historical study of CCRAs needs to be based on evidence that is fully inclusive, sufficient and reliable enough to explain the story, according to the different perspectives of academia, industry and the government.

3.4.3.2 Results

Stage 1 - Government-driven and Protection (1987-1992)

The first true developmental stage for CCRAs can be placed between 1987 and 1992 (Figure 3.46), with the first indicated CCRA, the ‘Jilin Province Credit Rating Company’, providing ratings within the banking system since 1987. This development continued when the State Department released information on how to manage the securities market (Li et al., 2009; Zhang, 2011; Nie, 2011a). However, somewhat confusingly, the ‘Shanghai Far East Credit Rating Company’ has also been indicated as the first external CRA in China (NAFMII, 2008). Irrespective of which was first, others quickly followed, and by 1992 some well-known CCRAs had appeared, including the ‘Shanghai Brilliance Credit Information Investment and Services Company’, the ‘China Cheng Xin Security Rating Company’, the ‘Shenzhen City Credit Rating Company’ and ‘Xiamen Finance’; these were followed by ‘Dagong Credit’ in 1994 (CCRAs’ websites, 2013).

On the other hand, due to protection from the Chinese government, global CRAs could not simply enter the Chinese marketplace; instead, they acquired CRAs and credit checking agencies in Taiwan and Hong Kong, as well as establishing their own branches. For example, S&P acquired Taiwan Ratings in 1992 and set up an operation centre in Hong Kong in the 1990s (Taiwan Ratings, 2013; S&P, 2013).


Figure 3.46: 1st Stage (1987-1992)

Stage 2 - The Rise of the Socialist Market Economy (1993-1996)

The second stage of financial reform in China took place between 1993 and 1996 (Figure 3.47), with the initiation of the socialist market economy, which also saw the PBC being appointed as the main supervision department in 1993. However, there were no specific requirements for qualification and certification in credit rating


businesses until 1997, when the PBC released its recognition lists of qualified CCRAs. This helped CCRAs to be accepted by most participants in the Chinese financial market (Okazaki, 2007; PBC, 1993, Yinhan No 408; Qu and Li, 2012).

Figure 3.47: 2nd Stage (1993-1996)

Stage 3 - The Government Trial Period (1997-2002)

Between 1997 and 2002 the third stage of financial reform occurred (Figure 3.48), and it was during this period that certification licences were issued by various governmental departments (Okazaki, 2007; Qu and Li, 2012). The recovery from the Asian financial


crisis prompted the government to accelerate the development of the financial system, with many CCRAs becoming more dominant in local markets. However, there was hardly any rating demand because (1) the development level of the bond market was quite low; (2) there was uncertainty over the exact role CCRAs should fulfil; and, (3) CCRAs were perceived to lack experience. In 1997, Shanghai became the first trial city for the PBC to manage the credit rating business, with a temporary policy revival taking place in 2001 (China-Japan-Korea Forum on credit rating, 2007; Lin and Zhang, 2010). Since 1999, global CRAs have entered into technical cooperation with some CCRAs, while CRAs and credit checking companies from Hong Kong and Taiwan have also been seeking opportunities to set up branches on the mainland (CCIS, 2013; CCRAs’ websites, 2013).

Figure 3.48: 3rd Stage (1997-2002)

Stage 4 - Global Opportunities and Confusion (2003-2007)


In the fourth stage, between 2003 and 2007 (Figure 3.49), CCRAs faced new opportunities following China’s accession to the WTO. This was heightened from 2002, with further regulations and policies released on rating formats, procedures and quality, as well as an increasing number of investors. Sixteen CCRAs signed the first self-regulation contract for conducting credit rating business within the Chinese financial market in 2002 (Guan, 2012; Chinabond, 2012; Liu, 2002).

Figure 3.49: 4th Stage (2002-2007)

Furthermore, in 2003, the SAC specified requirements for employee qualifications for working in CCRAs - this marked the beginning of the qualification and certification system concerning credit rating businesses. In addition, in 2006, the PBC released further regulations including three documents on management of credit rating business,


and this has helped to improve the standards and systems of the credit rating industry (Chen, 2009). After the NDRC released a new policy concerning the management methods of global investors, some global CRAs stopped cooperating with CCRAs; for example, Fitch and Moody’s ended their contracts with CCX and Dagong Global (respectively) in 2003 (CCXI, 2013; Dagong Global, 2013). However, in 2006 the strategy changed once again, with mergers and acquisitions between global CRAs and CCRAs, such as CCXI and Moody’s in 2006, and Lianhe and Fitch in 2007 (CCXI, 2013; Lianhe, 2013). Because the term ‘Zhengxin’ is used within the credit rating, credit checking and credit information industries, there are various explanations of the meaning of credit rating, with references to ethics, law and economics (Wu, 2001; Wu, 2002; Guo, 2004; Jiang, 2005; Ou and Xiao, 2005; Wang, 2007; Hong, 2008; Ye, 2010; Kennedy, 2010). As such, the scope of ‘Credibility’ within the credit system has been outlined in Figure 2.2, highlighting the structure, scope and categories of credibility in relation to different types of credit products.

Stage 5 - Global Recognition Opportunities and Debates (2008-present)

The fifth and current stage was initially shaped by the subprime crisis in 2008, which provided new opportunities for many CCRAs, particularly since they are now more experienced within the Chinese financial market (Figure 3.50). This is also a period in which the leaders of the local markets have become more recognisable to both local and global markets, particularly with the emergence of ‘Dagong Global’ as a global player in the GCRA market since 2010. This period also witnessed CCX obtaining credit rating validation from the government in Hong Kong, and CCX is now preparing to enter the European markets (Huang, 2010; Yao and Zhang, 2011; Dong, 2012). Moreover, Wu (2013, un-paginated, translated from Chinese) foresees a time in China where “financial, administrative and commercial systems will be established within the credit checking industry”, thus confirming that the meaning of credit checking might actually relate to anything relevant to credit information. For example, the 11315 Corporate Credit Checking System (alternatively known as Beijing Chengxin Online), which is supported by the State Council (2013, No 631), delivers credit information about companies to the public for free (11315, 2013). Alternatively, CCIS (China Credit Information Service Company) defined ‘Zhengxin’ as credit information management, but CCIS also provides information to its clients about how to implement effective credit management systems and reduce business risk (CCIS, 2013). Unlike global CRAs,


which can be considered gatekeepers in the financial market, CCRAs are more like participants without authority and power. CCRAs have to compete with each other to grasp their market share opportunities and, as such, may be forced to do whatever is required in order to “survive” in the market (Kennedy, 2008).

Figure 3.50: 5th Stage (2008-Present)

In conclusion, this section has explored the meaning of credit rating terminology in China as well as the legislative environment and financial background behind the development of CCRAs. According to Golder (2000, p.159), “Although historical research may end up confirming established theory, its more useful application may be


generating theory from new data.” As such, with new data collected from previous research, CCRA websites and government websites, this research has generated an in-depth discussion and overview of the historical development of CCRAs (Figure 3.51); it provides a more holistic insight and addresses some of the inconsistencies that were prevalent within previous investigations.

Figure 3.51: Summary of History Development Stages

Debates have arisen from the review of previous studies regarding inaccurate interpretations, unclear textual meanings and inconsistent uses of evidence. A


systematic literature review and the historical method have both proved to be valuable in helping to identify and discuss these issues, by examining them through different sources.

3.4.3.3 Events and issues within each historical stage

The five stages of the development of CCRAs within the Chinese financial market are summarised in Figure 3.51. This information is categorised into four themes: (1) financial background; (2) industry background; (3) regulations; and, (4) credit checking-related events.

3.5 Summary

A theoretical understanding of the CCRAEG model is proposed in this chapter according to the results from the SLRs (AEG, CRAEG and CCRAEG), in fulfilment of research objectives 1, 2 and 3 in Section 1.3. This model adopted components from Porter’s model (as conceptual theory) and was enhanced with the results of a conceptual enquiry into the definition of the AEG. The subjects and object of each gap are clearly demonstrated in the diagrams (Figures 3.43 and 3.44), with elements captured from the SLRs. Moreover, a historical review of the development of CCRAs was conducted in an effort to attain a more comprehensive understanding, given its influence on the understanding and perceived performance of the CCRA industry.


CHAPTER 4 METHODOLOGY

4.1 Introduction

This research project is an empirical study to gain knowledge about the role of CRAs in China by means of indirect observation and experience. It involves a “…systematic and methodical process of enquiry and investigation…”, which can be complicated (Collis and Hussey, 2009, p.3). Maxwell’s (2004) model shows that the research design process is complicated and that it is difficult to find a single best method, a difficulty reflected in the philosophy of pragmatism. In this chapter, the proposed hypotheses are demonstrated in Section 4.2; the approaches, methods, strategies and techniques are explained and discussed in Section 4.3; the methods used to avoid any possible research ethics issues are explained in Section 4.4; and, the limitations of the methodology are provided in Section 4.5.

4.2 Research Hypothesis

In accordance with the proposed CCRAEG model in Figures 3.43 and 3.44, which was built up through the literature research undertaken (especially in Sections 3.2.7, 3.4 and 3.3.5), the hypotheses were developed. The suitability and applicability of Porter’s model are discussed in Sections 3.3.5, 7.2.2 and 7.2.6.1. Through the literature, the existence of the Knowledge Gap, Regulation Gap and Performance Gap, with attributes showing complicated patterns of association, was revealed. A Reasonable Expectation Gap was established to reflect the views of academia. Nine attributes were selected to capture all possible examinable items; relevant discussion can be found in Sections 3.3.5 and 3.4.2. Four interest groups were identified according to the SLR results in Sections 3.3.3.15 and 3.3.4.1, and this was discussed in Sections 3.4.2 and 4.3.7.1. The objectives of Study 2 are to validate the model across three dimensions:

(1) the Expectation-Performance Gap, which contains the Knowledge Gap, Reasonable Expectation Gap, Regulation Gap and Performance Gap within the CCRA industry;
(2) the Perception Gaps, which define the differences in perceptions and expectations amongst interest groups (issuers, regulators, CRAs, as well as investors and the public);
(3) the nine attributes contributing to each gap (gap component), which encompass:
 attribute a: ancillary services;
 attribute b: rating fees;
 attribute c: communication / transparency / responsibility to the investing public and issuers;
 attribute d: accuracy / quality (integrity, monitoring, updating, timeliness, rating process, procedures, unsolicited or solicited ratings, usefulness, robust methodologies, accountability, reliability and credibility);
 attribute e: independence / avoidance of conflicts of interest / favour of interest;
 attribute f: the DCO;
 attribute g: the competence of CCRA employees;
 attribute h: record keeping and submitting;
 attribute i: self-regulation.

Note: These attributes were termed using their shorter names in the figures.

The hypotheses concerning the CCRAEG are stated below. These hypotheses include:

(1) Gap Components (four gap components), which are reported in Section 6.2.1. These include four main hypotheses:

Hypothesis One (H1): There is a significant difference between “society’s expectations of what CRAs can do” and “beliefs of what CRAs should do according to the SLR results”. The Null Hypothesis (H0’1): There is no significant difference between results from society and the SLR about whether or not CRAs should perform these duties.

Some duties have been identified by society as existing duties, but they are not suggested or proposed by academia, regulators, or the media. Discrepancies in the understanding of these duties, between those of society and the SLR results, contribute to the Knowledge Gap. It should be noted that ‘society’ includes all the interest groups, such as academics, bank managers, investors, issuers, journalists, regulators, and other relevant financial companies. The Wilcoxon Signed-Rank test result for the Knowledge Gap is detailed in Figure 6.1, with reference to the method of coding demonstrated in Figure 4.12 and the calculation process described in Figure 4.23. (Median analysis of the data used in the hypothesis tests is presented in Figures 6.2, 6.3, 6.4 and 6.5 with regard to the expectations and perceptions of the four interest groups.)

Hypothesis Two (H2): There is a significant difference between “beliefs of what CRAs should do, according to the SLR results” and “existing global and local standards”. The Null Hypothesis (H0’2): There is no significant difference between the results from the SLR and the IOSCO code about whether or not CRAs should perform specified duties.


Some duties have been suggested by the SLR results as being CRA responsibilities, but these duties may not be stated in the IOSCO code. Discrepancies in the understanding of CRAs’ existing duties, between the SLR results and the IOSCO code, contribute to the Reasonable Expectation Gap: there is a Reasonable Expectation Gap if duties identified from the SLR results are not stated in the existing IOSCO regulations. The IOSCO code was used because it is the regulation applied to all CRAs (Section 3.4.2). The SLR results collected on each duty, and the comparison of these results with the IOSCO code, are presented in Appendix 24. The Wilcoxon Signed-Rank test result for the Reasonable Expectation Gap is detailed in Figure 6.1, with reference to the method of coding demonstrated in Figure 4.12 and the calculation process described in Figure 4.23.

Hypothesis Three (H3): There is a significant difference between “existing global standards” and “existing national regulations”. The Null Hypothesis (H0’3): There is no significant difference between the IOSCO code and the Chinese regulations in terms of CRAs’ liabilities.

There are some duties outlined in the IOSCO code that do not exist in the Chinese regulations, and this contributes to the Regulation Gap. The differences between international codes on CRAs’ responsibilities and those in the Chinese regulations suggest a deficiency in the Chinese regulations. The comparison of the IOSCO code and the Chinese regulations is presented in Appendix 24. Analysis of the Chinese regulations can be found in Section 2.2.2, with a summary of the comparison of the Chinese regulations and the IOSCO code in Section 3.4.2. The Wilcoxon Signed-Rank test result for the Regulation Gap is detailed in Figure 6.1, with reference to the method of coding demonstrated in Figure 4.12 and the calculation process described in Figure 4.23. The IOSCO code was used because it is the only international code of conduct for CRAs.
Hypothesis Four (H4): There is a significant difference between “existing global and local standards” and “CRAs’ performance as perceived by society”. The Null Hypothesis (H0’4): There is no significant difference between “existing global and local standards” and “CRAs’ performance as perceived by society”.

There is an Actual Performance Gap if any existing duties stated in the Chinese regulations are performed poorly by the CCRAs. The Wilcoxon Signed-Rank test result for the Actual Performance Gap is detailed in Figure 6.1, with reference to the method


of coding demonstrated in Figure 4.12 and the calculation process described in Figure 4.23. (Median analysis of data used in the hypothesis tests will be presented in Figures 6.2, 6.3, 6.4, and 6.5 with regard to expectation and perception from four interest groups.)
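As a rough illustration of the Wilcoxon Signed-Rank procedure behind these gap tests, the sketch below computes the test statistic by hand for a small set of hypothetical paired 5-point ratings (expectation versus perceived performance). The data are invented purely for demonstration; the thesis reports the actual test output in Figure 6.1.

```python
def wilcoxon_w(x, y):
    """Wilcoxon Signed-Rank statistic: the smaller of the positive and
    negative rank sums. Zero differences are dropped; tied absolute
    differences receive their average rank, as in the standard procedure."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(abs(d) for d in diffs)

    def avg_rank(value):
        positions = [i + 1 for i, r in enumerate(ranked) if r == value]
        return sum(positions) / len(positions)

    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(avg_rank(abs(d)) for d in diffs if d < 0)
    return min(w_plus, w_minus)

# Hypothetical 5-point ratings for nine duties (invented for illustration).
expected  = [5, 4, 5, 4, 5, 3, 4, 5, 4]   # what respondents expect
perceived = [3, 5, 4, 2, 3, 3, 2, 4, 4]   # how they rate actual performance
print(wilcoxon_w(expected, perceived))    # -> 2.0
```

A small statistic relative to its null distribution indicates a significant expectation-performance difference; a real analysis would compare it against critical values or use statistical software.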

(2) Perception Gaps (six perception gaps for expectations, and six for perceptions of performance), which are reported in Section 6.2.2; and, (3) Attributes within the four gap components (nine attributes for each gap component), which are reported in Section 6.2.3.

For the second dimension of the proposed CCRAEG model, it is proposed that there is a significant difference in the expectations and perceptions of the CCRAEG among the four interest groups. As discussed in Section 3.3, perception differences should be measured at different levels (e.g. expectations or perceptions of performance). There are twelve (6 × 2) comparisons that can be made to test perception differences among interest groups on expectations and perceived performance. Hypothesis test results for these twelve perception gaps are reported in Section 6.2.2, with regard to the coding method described in Figure 4.20 and the calculation processes (of both the Mann-Whitney U test and the Wilcoxon Signed-Rank test) explained in Figures 4.22 and 4.24. The Mann-Whitney U test results on these Perception Gaps for each duty are shown in Appendices 9 through 18, with a summary of these results reported in Figures 6.11 and 6.12.
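Since the perception-gap comparisons involve independent interest groups, the Mann-Whitney U statistic applies. The following minimal pure-Python sketch uses invented ratings from two hypothetical groups; the thesis's actual results are in Appendices 9 through 18.

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic (the smaller of U_a and U_b), using
    average ranks for ties on the combined sample."""
    combined = sorted(group_a + group_b)

    def avg_rank(value):
        positions = [i + 1 for i, v in enumerate(combined) if v == value]
        return sum(positions) / len(positions)

    r_a = sum(avg_rank(v) for v in group_a)    # rank sum of group A
    n_a, n_b = len(group_a), len(group_b)
    u_a = r_a - n_a * (n_a + 1) / 2
    u_b = n_a * n_b - u_a
    return min(u_a, u_b)

# Hypothetical 5-point expectation ratings from two interest groups.
investors  = [4, 5, 5, 3, 4]
regulators = [2, 3, 4, 2, 3]
print(mann_whitney_u(investors, regulators))  # -> 3.0
```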

Hypothesis tests were also conducted for the third dimension on the distribution of results among the nine attributes, with reference to participants’ expectations, perceptions, the SLR results, the IOSCO code and the Chinese regulations. These can identify whether the results are associated with the proposed nine attributes. The statistical tests used for examining these hypotheses, and how the data were coded for them, are explained in Section 4.3.8.2.4; the Kruskal-Wallis test results confirming associations between the CCRAEG and the proposed nine attributes are reported in Section 6.2.3, with the analysis explained in Sections 6.2 through 6.11.
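The Kruskal-Wallis test extends this kind of rank comparison to more than two groups at once. The sketch below computes the H statistic (omitting the tie correction, for simplicity) for three small invented samples; it is a toy demonstration, not the thesis data.

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic across k independent groups, using
    average ranks for ties. The tie correction factor is omitted here
    for simplicity, so H is slightly conservative when ties are frequent."""
    combined = sorted(v for g in groups for v in g)
    n = len(combined)

    def avg_rank(value):
        positions = [i + 1 for i, v in enumerate(combined) if v == value]
        return sum(positions) / len(positions)

    # Sum of (rank-sum squared / group size) over the groups.
    term = sum(sum(avg_rank(v) for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * term - 3 * (n + 1)

# Three hypothetical samples (e.g. gap scores grouped by attribute category).
print(round(kruskal_wallis_h([1, 2, 3], [2, 3, 4], [4, 5, 6]), 3))  # -> 5.489
```

Under the null hypothesis of identical distributions, H is approximately chi-squared with k − 1 degrees of freedom; software such as SPSS applies the tie correction automatically.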

4.3 Research Methodology

Researchers and methodologists explain and distinguish methodological terms differently. For example, Crotty (1998, p.3) provided four levels for finding the right method: the first level is the “paradigm world view” (which includes beliefs about epistemology and ontology); the second level is the “theoretical lens” (which contains


feminist, racial and social science theories); the third level is the “methodological approach” (which is constructed by ethnography, experiment and mixed methods); and, the last level is the “methods of data collection” (such as interviews, checklists and instruments). Creswell (2009) also used similar terms, but provided a different explanation about the interconnections among them, as shown in the diagram in Figure 4.1.

Figure 4.1: Interconnection of Worldviews, Strategies of Inquiry and Research Methods

Source: Creswell (2009, p.5)

This diagram demonstrates the interconnection between worldviews, strategies of inquiry and research methods. Grix (2002) noted that Objectivism and Constructivism are two forms of Ontology, and that Positivism and Interpretivism are forms of Epistemology. However, Morrow and Brown (1994) stated that objectivism and subjectivism are used for both Ontology and Epistemology. Guba (1990) provided another set of terms: (1) Realism, Critical Realism, and Relativism for Ontology; (2) Dualism, Objectivism, Subjectivism and Modified Objectivism for Epistemology.

Saunders et al. (2009, 2011) interpreted methodological thinking through six layers in a ‘Research Onion’ (Figure 4.2). This model interprets the theoretical relationships between the different layers of research elements and stages, and it appears to be the most complete and up-to-date illustration documenting the logic relating to these terms. This research project adopts Saunders et al.’s explanation of research methodologies because of its well-defined structure. As such, the discussion of the methodologies used in this research project is derived from the Research Onion model, with justifications of why these methods were selected.


Figure 4.2: Research Onion

Adapted from Saunders et al. (2011, p.160)

The Onion model in Figure 4.2 demonstrates the relationship between supersets and subsets of methodology terms, including elements of philosophy, approaches, methodological choices, strategies, time horizons, techniques, and procedures. After first publishing the Research Onion diagram in 2009 (p.138), Saunders et al. added ‘methodological choices’ (the superset of strategies, time horizons, techniques and procedures) as a separate layer in their 2011 version (p.160). They also added ‘abduction’ as one of the elements in the set of approaches in the 2011 version.

4.3.1 Ontology, Epistemology and Axiology

Ontology is the starting point of research, and its position is the answer to the question, “What is the nature of the social and political reality that is to be investigated?” (Hay, 2002, p.63). The epistemology of research concerns the relationship between the researcher and the participants. Objectivism strongly supports the researcher’s intent to explain patterns and test hypotheses with a deductive approach (Grix, 2010).

The advantage of objectivism in relation to philosophy, as specified by Saunders et al. (2010), is that it satisfies the requirements of Positivism: the researcher is independent of the subject, and the findings will be free from the observer’s bias. Easterby-Smith et al. (2004), as well as Denzin and Lincoln (1994), suggest that


comparison and replication are allowed in positivist research, which is one advantage of using positivism for this research. However, the choice between Subjectivism and Objectivism cannot be determined precisely, even though the purpose of this research is to find answers without bias through empirical examination. This is because of the disadvantage of Positivism: observation alone cannot answer all the questions. Positivism places the emphasis on objectivity rather than subjectivity; on analysis rather than description; on measurement rather than interpretation; on structure rather than agency (Denscombe, 2010); and holds that “verified hypotheses” are “established as facts or laws” (Denzin and Lincoln, 2005, p.196). Subjectivism can bring more in-depth understanding and insights, instead of the “mirror with privileged knowledge” of Objectivism. However, Pragmatists believe that whether a researcher relies on the objective or the subjective point of view should depend on the stage of the research cycle (Bond, 1993; Moccia, 1988; Payle, 1995). Therefore, Pragmatism, with its problem-centred approach, appears more relevant to this research, adopting subjectivism and objectivism differently in each stage of the research according to the research questions.

According to Denzin and Lincoln (2005, p.193), pragmatists believe there is a causal relationship between Naive Realism (“‘real’ reality but apprehensible”) and Critical Realism (“‘real’ reality but only imperfectly and probabilistically apprehensible”), but that it is hard to identify this relationship (Teddlie and Tashakkori, 2009, p.93). Pragmatists and positivists both agree that the existence of real reality is independent of minds (Cherryholmes, 1992, p.14), but Pragmatism emphasises ideographic statements instead of the time-free and context-free generalisation held by positivism. Values are important when interpreting results, and this is a dimension held by Pragmatism in Axiology (Creswell, 2009; Easterby-Smith et al., 2012; Tashakkori and Teddlie, 2003). As a result, the population, sample frame and sampling strategies have been selected carefully to reflect values from each interest group, and to better show the perception differences among them. Moreover, questions about demographic information were included in the questionnaires in order to regulate and control bias in the population sample. In addition, the proposed CCRAEG model contains a Knowledge Gap as a gap component, which takes account of psychological bias and some other influencing factors. “Objective is the ideal goal, but values and other factors can produce some bias if not regulated or controlled for” (Fien, 2002, p.248). The discussion of the Knowledge Gap in the literature review of CRAs (Section 3.3) and CCRAs (Section 3.4) suggests that

W. Sun 2015

162

“what you say is not what you see”, instead of a direct or naive realism of “what you see is what you get” (Brown, 1994, p.43). As such, a hybrid view (on the position of subjectivism or objectivism) is adopted for the ontology, epistemology and axiology in this research, from a pragmatic stance (Figure 4.3).

Figure 4.3: Researcher's Position for Each Layer
(Definitions quoted from Saunders et al., 2011, pp.160-196 and 665-684)

Philosophy ("overarching term relating to the development of knowledge and the nature of that knowledge in relation to research"): Pragmatism (Section 4.3.2).

Ontology ("…the nature of reality or being"), Epistemology ("…the nature of knowledge and what constitutes acceptable knowledge in a field of study") and Axiology ("…judgement about the role of values"): both objectivism and subjectivism, with multiple views depending on the research questions. Study 1: Subjectivism; Study 2: Objectivism; Study 3: Multiple (Section 4.3.1).

Approach ("general term for inductive, deductive or abductive research approach"): Study 1: Inductive; Study 2: Deductive; Study 3: Abductive (Section 4.3.3).

Methodological choices ("general term for characteristics of research design"): mixed method complex (Section 4.3.4).

Strategies ("general plan of how the researcher will go about answering the research question(s)"): Study 1: mixed method (qualitative-based), Narrative Inquiry / Grounded Theory; Study 2: mixed method (quantitative-based), Survey; Study 3: mixed methods (convergent), Survey (Section 4.3.5).

Time horizon ("general term for cross-sectional or longitudinal choices"): cross-sectional study (Section 4.3.5).

Techniques / procedures (data collection and analysis): data collection via questionnaires, interviews and the SLR; sampling via convenience and heterogeneous sampling strategies; analysis with NVivo, the Mann-Whitney U test and the Wilcoxon signed-rank test (Sections 4.3.6 / 4.3.7).


The actual methods are explained in the sections stated in the figure above. The sampling and coding strategy used for the interviews is justified in Figure 4.15 within Section 4.3.7.2.1. The response rates and sample frames of both the interviews and the questionnaires are also given in Sections 4.3.7.2.1 and 4.3.7.2.2.

4.3.2 Philosophy

The term 'Pragmatism' appears to have been first applied by Charles Peirce in the 1870s in America as a 'theory of meaning', before being developed by William James, John Dewey and Ferdinand Schiller. The meaning of the term has varied among methodologists (Thayer, 1981). However, its commonly accepted contemporary meaning was stated by Saunders et al. (2011, p.140) as a "focus on practical applied research, integrating different perspectives to help interpret the data". There are five main characteristics of Pragmatism (Biesta and Burbules, 2003; Bryman, 2006; Howe, 1988; Johnson and Onwuegbuzie, 2004; Maxcy, 2003; Morgan, 2007; Tashakkori and Teddlie, 1998, 2003):

(1) Knowledge should be based on practical outcomes and on what works;
(2) Research should test what works through empirical enquiry;
(3) There is no single best scientific method;
(4) Knowledge is provisional and should be updated because of inevitable changes through development and evolution;
(5) Rejection of the distinct choices between facts and values, objectivism and subjectivism, rationalism and empiricism, and quantitative and qualitative methods.

Pragmatists "…argue that the most important determinant of the research philosophy adopted is the research questions...", and hold that "…it is possible to work within both Positivist and Interpretivist positions...". Pragmatism "…applies a practical approach, integrating different perspectives to help collect and interpret data" (Saunders et al., 2011, p.678).

Saunders et al. (2011) posited that Interpretivism has two intellectual traditions: Phenomenology (people trying to make sense of the world) and Symbolic Interactionism (a continual process of interpreting the world and interactions; this interpretation leads to an adjustment of people's understanding and actions, because reality is negotiable). Unlike Positivists, who focus on observable phenomena to provide credible data, with the researcher independent of the data, Interpretivists pursue subjective meanings and details of patterns, and seek a reality behind these


details, with the researcher bound up with the data. (The advantages and disadvantages of Positivism have been outlined in Section 4.3.1.) Nevertheless, Pragmatism offsets the disadvantages of Positivism and Interpretivism by adopting a 'middle ground'.

There are alternative inquiry paradigms, such as Constructivism, Realism, Post-Positivism, Post-Modernism, Critical Theory, etc. They were rejected because the results or data collection methods from these paradigms did not appear to fit the research aim of this study (i.e. to identify, analyse and measure expectations and perceptions). For example, the researcher has to be a "passionate participant" in Constructivism (Guba and Lincoln, 1994; Perry et al., 1999; Sobh and Perry, 2006), which allows them to reconstruct multiple voices as a facilitator (Denzin and Lincoln, 2005). However, in this study, the researcher would not be able to participate in credit rating activities to understand the role of CRAs. The belief stance in Realism (usually applied in fields such as the physical sciences and medical and nursing research) holds that "triangulation from many sources" must be made to show "all the possible causes" through different experiments (Sobh and Perry, 2006, p.1195). Nevertheless, experiments would not be possible in this study of perception differences and expectation gaps, where data has been gathered through surveys and interviews. It was not possible to join in each stage of the CCRA rating process or rating dissemination process and fully test hypotheses about all causes or relationships (of the seventy-four attributes or factors detailed in Sections 3.3 and 3.4). The influence of people's values is denied by post-positivists (Denzin and Lincoln, 2005), a stance that conflicts with the hypotheses proposed in this research about people's perception differences and the factors (bias and context) influencing expectations. As such, Post-Positivism was not considered.
Post-modernists believe the world can be objectively known only by removing tacit ideological biases (Gephart, 1999). Post-Modernism is not suitable for this study because it appears to be mostly adopted in aesthetic studies (Denzin and Lincoln, 2005) or in the humanities and related professions (Hicks, 2011) for studying heterogeneous phenomena (differences and continual changes of perspectives through communication). Critical Theory takes the opposite stance to Positivism. It argues that knowledge is structured by historical insights, and that reality is subjective and composed of dialogic and dialectical data (Denzin and Lincoln, 2005), underlined by self-reflection and self-criticism (Agger, 1991). However, because of the use of Subjectivism in Critical Theory, it would not be possible to verify the reliability of a CCRAEG conceptual model under this paradigm.


4.3.3 Approach

Induction is used to generate theories, and Deduction is used to verify them. Abduction begins with a "surprising fact", followed by "a plausible theory" and a process of working out how the phenomenon could have happened (Saunders et al., 2011, p.147). Peirce (1997, p.230) considered Abduction to be "the process of forming an exploratory hypothesis, and it is the only logical operation which introduces any ideas", so that researchers can learn about or understand phenomena. Van Maanen et al. (2007, p.1149) believed that some theories can help researchers observe better and discover more surprising facts. They also noted that surprises can occur at different stages of the research process because abduction is a continuous process, and that "analysis proceeds by the continuous interplay between concepts and data". As such, the three approaches are adopted in the three stages of this study respectively: Study 1 generates a conceptual model through an understanding of the problem from the literature (Induction); Study 2 verifies the model through empirical data analysis (Deduction); and Study 3 conducts a cross-analysis (or convergent parallel analysis) of data from Studies 1, 2 and 3 (Abduction; see Section 4.3.6).

4.3.4 Methodological choices

Saunders et al. (2011) indicated that, because of the wide range of methodological choices, researchers tend to follow mono-method, multi-method or mixed-method designs (the latter including 'mixed method simple' and 'mixed method complex'). According to the research design shown in Section 1.4, an exploratory sequential design is used for Study 1; an explanatory sequential design is adopted for Study 2; and a convergent parallel analysis with a mixed method is established in Study 3. Therefore, the overall methodological choice is a 'mixed method complex' design. Both qualitative and quantitative methods are implemented separately in the exploratory and explanatory sequential processes, but they converge in Study 3.
Due to the limitations of an exploratory sequential design, the instruments have to be validated with empirical data in Study 2. The mixed method in Study 3 is used to obtain a diverse and representative sample of quantitative and qualitative data (Creswell, 2003; Hesse-Biber, 2010). Mixed-method research has been adopted in many disciplines, and it was selected here because of the researcher's intent on "mixing philosophical (i.e. worldview) positions" (Creswell and Clark, 2011, p.2) from a pragmatist's perspective (Teddlie and Tashakkori, 2009).

4.3.5 Strategies and Time horizon

With reference to the phases of the three research designs indicated by Creswell and Clark (2011, pp.121, 124, 126) and the overall "sequential mixed method designs" from Bergman (2008, p.68), several strategies were selected. Narrative Inquiry is one of the


strategies used in Study 1, to "interpret" and "reconstruct" the CRA-related events, history and background, and to analyse linkages and relationships as complete stories rather than as individual data. Grounded Theory was also selected for Study 1, in order to construct a theory from an Interpretivist perspective through an inductive approach. The Survey method is more suitable for Study 2 and Study 3, as it answers questions of "what", "who", "where", "how much" and "how many" through the study of both quantitative and qualitative data. Clearly, this research is not a study of "changes" over an extended period of time; rather, it is cross-sectional research studying a particular phenomenon at "a particular time" (Saunders et al., 2011, p.190).

4.3.6 Research Process and Design

The research process was planned for the purposes of: (1) answering research questions concerning the role of CRAs in China as observed by society; (2) testing hypotheses; and, (3) defining the CCRAEG theory regarding the role of CCRAs in China. The design of the process also considers other important elements for research approaches and instrument selections, such as research questions, methodologies, conceptual frameworks, and some contextual factors. These contextual factors include time limitation, financial constraints, perceived problems and preliminary data. The process is presented diagrammatically in Figure 4.4.

Figure 4.4: Research Process Design


These three stages were as follows: Study 1 - an exploratory sequential design with qualitative-based methods; Study 2 - an explanatory sequential design using quantitative-based methods; and, Study 3 - the convergent parallel analysis completed using a mixed method.

This research process design is intended to reflect the research objectives: (i) to establish the structure and components of the CCRAEG in Study 1; (ii) to test the validity of the substantive theory in Study 2; and, (iii) to define a substantive theory to explain the role of CRAs in China in Study 3. In Studies 1 and 2, this research contains elements of Positivism as an explanatory sequential study, which can be described as the verification of hypotheses through experimental and manipulative studies, especially with the results from quantitative methods (Denzin and Lincoln, 1994). In this respect, it was important to build an understanding of the current CCRAEG in China in Study 1, the exploratory sequential study forming the first part of this research. After Study 1, the CCRAEG model was verified using qualitative and quantitative data. In Study 3, the results from Study 1, Study 2 and the SLR were combined with supplementary information and analysed using mixed-method analysis, which associates well with Pragmatism through the analysis of several issues with a problem-centred approach (Creswell, 2009).

The Mixed Method research approach used in Study 3 applies quantitative and qualitative data collection techniques, as well as quantitative and qualitative analysis procedures. Not only were quantitative and qualitative data collected and analysed in Study 1 and Study 2, but, in addition, techniques, software and methods were used to quantify the qualitative data and qualify the quantitative data. For example, both quantitative and qualitative analysis methods were used in the process of the SLR, with a comprehensive literature search for identifying the structure of the CCRAEG, using NVivo 10 to manage, code and analyse the literature both statistically and dynamically. Moreover, quantitative and qualitative results were cross-analysed as detailed in Chapter 7. (The SLR methods, questionnaire and interview design, analytical methods, statistical methods, and data collection techniques are reviewed in Section 4.3.7.)

The research methods and approaches of this project are presented in Figure 4.5, which shows the procedure and linkage of each step in this research. A traditional literature review (critical and conceptual) was conducted to provide general background and foundational knowledge for this research. This was followed by an SLR, a review of the literature with an evidence-based approach


which consisted of three basic elements: (1) to ascertain the suitability and applicability of Porter's AEG model; (2) to examine and evaluate the issues related to the CCRAEG; and, (3) to identify the existing CRA regulations. Suitability, feasibility and ethics are the three factors that have to be considered in the research design process, in order to ensure that the findings from the chosen research strategies, approaches and instruments can answer the research questions practically and ethically (Denscombe, 2010). The SLR and deduction methods (the testing of hypotheses) were used to enhance the suitability and feasibility of this research.

Figure 4.5: Research Methods / Approaches

Interviews and questionnaires were conducted among interest groups in society who are affected by the work of CRAs or who have an interest in credit rating reports or the CRA industry. Questionnaires (nqr1=69) were collected for contextual data and validation of the tentative theory regarding the CCRAEG in Study 2; results from semi-structured in-depth interviews (nin1=20) in Study 2 were used to inform the content design of the questionnaires in Study 3. Next, the second-round survey with questionnaires and interviews (in Study 3) was conducted so as to generate the overall picture of the financial market with statistical results. The second-round interviews were also used to verify and interpret some issues found in Study 2 that were not revealed in Study 1. The results of both quantitative and qualitative analysis of the questionnaires (nqr2=689) and interviews (nin2=3) from Study 3 were compared with the results from Study 1 and Study 2. As such, "cross analysis" (or convergent parallel analysis) was adopted for comparing and contrasting the findings from the different types of data from Studies 1, 2 and 3, after they had been analysed separately. Because


both quantitative and qualitative data from Studies 1, 2 and 3 carry equal value for understanding the CCRAEG, a cross-analysis was applied according to the purpose of integrating data, as indicated by Creswell (2013) and Onwuegbuzie and Teddlie (2003).

4.3.7 Data Collection

In line with the nature of Pragmatism, Narrative Inquiry and the Mixed Research method, and bearing in mind the disadvantages and advantages of each data collection method, six techniques were chosen: questionnaires, interviews, the SLR, the traditional literature review, conceptual inquiry, and the historical method (Appendix 30). Convenience sampling and heterogeneous sampling strategies were adopted in the survey.

4.3.7.1 Primary data collection

Both questionnaires and interviews were selected for primary data collection. It is advisable to select an instrument that has been used in other studies if other authors have shown it to be reliable and valid with respect to the method of grouping participants for a similar purpose (Morgan, 2006). As noted in Section 3.2.6, questionnaires and interviews have been widely adopted in AEG-related research, although most CRA-related research is literature review-based.

Previous researchers identified the following limitations in their AEG studies: (1) their research focused on only one group of people because of the large size of the whole market (Baker and Mansi, 2001); (2) purely quantitative research can be perceived as superficial in nature and has produced misleading information in some instances (Porter, 1996); (3) the response rate of questionnaires was too low in China; and, (4) confusing and heterogeneous views were expressed by participants within interest groups, such as conflicting views between auditing firms and auditors, and misunderstandings by members of the general public (Lin and Chen, 2004). Therefore, a combination of questionnaires and interviews was considered more effective for CCRAEG studies.

4.3.7.1.1 Interview Protocol

As previously indicated, this research focused on market participants' expectations of the role and function of CCRAs, and so in-depth semi-structured interviews were used to explore the expectations and perceptions of four stakeholder interest groups, namely CCRA staff, customers, investors/the public, and officers of Chinese regulatory bodies (Figure 4.6). The rationale for this explorative approach was that there was a lack of previous research in this area, especially within a Chinese context. Moreover,


the plan was to examine the perceptions of stakeholders from different perspectives, and so identify and investigate differences in beliefs and understanding, which is best suited for a qualitative approach (Denscombe, 2010).

Figure 4.6: Sampling and Coding for Interviews

CCRA Staff. Sample: four directors and managers from the four largest CCRAs; one CEO of a smaller CCRA. Codes: CS1, CS2, CS3, CS4, CS5. Size: 5.

Customers. Sample: four finance directors or managers from SOEs; two finance directors in private firms. Codes: CC1, CC2, CC3, CC4, CC5, CC6. Size: 6.

Investors / Public. Sample: IP1, IP2 and IP3 are staff and managers from banks, financial institutions and investment companies; IP4 is a Master's student in Finance; IP5 is a journalist; IP6 and IP7 are university professors. Codes: IP1-IP7. Size: 7.

Supervision Officers. Sample: SD1 is a retired officer from the NDRC; SD2 is an officer in the CSRC. Codes: SD1, SD2. Size: 2.

Note: The sampling strategy for interviews is justified in Section 4.3.7.2.

All interviews were conducted face-to-face at the interviewees’ work premises. They were conducted in Chinese (Mandarin) and lasted between 1 and 3 hours, with the average length being around 1 hour and 33 minutes. Before commencing the interviews, each participant was informed that the session would be recorded via a digital voice recorder, to which they all agreed. A research information and consent form was then given to, read by, and signed by every participant. The forms were in Chinese and included a short section which the participants read out loud, indicating their agreement to participate and be recorded.

Appendix 19 details the interview outline, which was designed in view of the complex nature of the regulatory framework within the CCRA industry. As previously discussed in the literature review, fourteen expectation attributes were identified. These, along with a further six which emerged from the IOSCO (2008) code and the Chinese regulations (PBC, 2006; CSRC, 2007; National Credit Standardisation Work Group, 2008), were examined and compared (Appendices 24 and 25). Of the total of ninety-three suggested responsibilities, the PBC (2006) appeared to offer the most comprehensive CRA-related Chinese regulations, listing twenty-eight items which are not clearly included in the other Chinese regulations (B3, B6, B12, B17, B19, B23, B25-32, C4, D9, E1-13, E15). The CSRC (2007) provided some similar information


but with different meanings; the IOSCO (2008) listed four items which are not in the Chinese regulations (B5, C12, C14 and D2); and the National Credit Standardisation Work Group (2008) stated six items that were not found in the other Chinese documents (B15, B16, B18, B20, B24, and D4).

4.3.7.1.2 Questionnaire Design

Questionnaires were used to examine the attitudes of the different groups (CRAs, CRA beneficiaries and non-beneficiaries) from the Chinese financial market. A back-translation procedure (Brislin, 1970), and a collaborative approach for checking the accuracy of translation with a Chinese-English professional translator (Douglas and Craig, 2007), were used for translation. Each of the responsibilities listed in the questionnaires was collected from the IOSCO code and the Chinese regulations, and adapted according to the interview and SLR results. Lists and comparisons between the Chinese regulations and the IOSCO codes are presented in Appendix 24. All accessible relevant Chinese (mainland) regulations, policies and guides released before 2010 were reviewed. Further regulations and policies on credit ratings have been released in more recent years, such as: (1) management policies for the credit checking industry (State Council, 2013, Guowuyuanling, No 631; PBC, 2013, RenminyinhangLing, No 1); (2) instructions on how to use ratings in administrative management (NDRC, PBC and State Commission Office for Public Sector Reform, 2013, Fagaicaijin No 930); and (3) management methods for the credit rating of micro-lending companies and financial guarantee companies (PBC, 2013, Yinbanfa No 43; No 45). With the exception of the five recent documents above, twenty-three legislative documents are relevant to rating needs within the Chinese financial market; eleven documents provide general principles of requirements within the CCRA industry; and four documents detail the supervision methods of the CCRA industry (Appendix 25). However, only three Chinese documents (PBC, 2006; CSRC, 2007; National Credit Standardisation Work Group, 2008) were quoted in the element list (Appendix 24), because these documents appeared to include most of the responsibilities of CCRAs and contained the most specific information describing the role of CCRAs.

Sixty-eight responsibilities were listed in the questionnaires for Study 2 (see Appendix 21), and an additional twenty-five responsibilities were added (for verification of gaps and elements) in Parts B and E of Study 3 (the 2nd-round survey), covering requirements of the rating method and process, and supervision by the government and associations (Appendices 22 and 23). The number of responsibilities or items within each attribute varied, depending on the duties identified from the regulations, the interviews and the SLR. Attributes c, d and e contain more responsibilities than attributes a, b, f,


and i (Figure 4.7). Responsibilities that are closely related or have similar meanings were grouped into one attribute. As such, nine attributes were developed to combine the duties revealed in legislation with the results reflected from the SLR and the interviews. Some responsibilities are relevant to multiple attributes, and these were included in the most relevant attribute. For example, duty C9 states that rating analysts and the rating business should be separated from any other business of a CRA (for example, operations and legal consultation), including consulting. This includes the responsibility that "A CRA should ensure that ancillary business operations which do not necessarily present conflicts of interest with the CRA's rating business have in place procedures and mechanisms designed to minimize the likelihood that conflicts of interest will arise" (IOSCO, 2008). However, the statement of C9 is more appropriate to the meaning of Attribute e (Independence / Avoidance of Conflicts of Interest) than to Attribute a (Ancillary Service). E5 (have legal responsibility for the credibility of ratings) and E11 (the same amount of rating fees for each rated entity) were added to the questionnaire items in Study 3 after the interviews. Moreover, more debatable items were also added to the questionnaire in Study 3 (Appendices 21 and 22). These items cover the attributes of: (g) staff competence; (h) record-keeping and submitting; and (i) self-regulation. These three attributes were collected from the Chinese regulations, but there is no clear specification of a standard for them in the literature.

Figure 4.7: Attributes and Items in Questionnaires

Ha (Ancillary Service): item C10; 1 item.
Hb (Rating Fee): item E11; 1 item.
Hc (Communication / Transparency): items A1-A14, B13, D1-9; 24 items.
Hd (Accuracy / Quality / Procedure / Methodologies): items B1-B4, B6-12, B19-32, C3, C6, C20; 28 items.
He (Independence / Avoidance of Conflicts of Interest): items B14, C1, C2, C4, C5, C7-9, C11-C19, C21; 18 items.
Hf (DCO): item C22; 1 item.
Hg (Staff Competence): items B15-18; 4 items.
Hh (Record Keeping and Submitting): items B5, E1-4; 5 items.
Hi (Self-Regulated): item E16; 1 item.
Others (supervision responsibility from the government and associations): items E5-10, E12-15; 10 items.
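The item counts in Figure 4.7 follow directly from expanding the listed ranges. As an illustrative consistency check only (the helper function and its parsing rules are the author's assumptions, not part of the research instruments), the counts can be reproduced as follows:

```python
def count_items(spec):
    """Count questionnaire items in a specification such as 'A1-A14; B13; D1-9'.

    Tokens are separated by semicolons; a token is either a single item
    (e.g. 'B13') or a range whose end may or may not repeat the letter
    prefix (e.g. 'B6-12' or 'C11-C19').
    """
    total = 0
    for token in spec.split(";"):
        token = token.strip()
        if "-" in token:
            start, end = token.split("-")
            first = int(start[1:])
            last = int(end[1:]) if end[0].isalpha() else int(end)
            total += last - first + 1
        else:
            total += 1
    return total

# Reproduce the 'No. of items' figures for Hc, Hd and He
print(count_items("A1-A14; B13; D1-9"))                        # → 24
print(count_items("B1-B4; B6-12; B19-32; C3; C6; C20"))        # → 28
print(count_items("B14; C1; C2; C4; C5; C7-9; C11-C19; C21"))  # → 18
```

The single-item attributes (Ha, Hb, Hf, Hi) each count 1 under the same rule.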

Questionnaire items relevant to CCRAs' duties were collected from the Chinese regulations, the IOSCO code, the interview results, and the SLR results. Appendix 24 shows which part of which regulation each item was drawn from. These items are based mainly on the regulations rather than the literature, because opinions from the SLR form the gap boundary between the Unreasonable Expectation Gap and the Knowledge Gap. The influence of the SLR and interview results is discussed in Section 7.3. Of these ninety-three items, thirty-eight are suggested in the Chinese regulations, and


fifty-nine are noted in the IOSCO (2008) codes. The differences between the Chinese regulations and the IOSCO codes indicate that these regulations have varied foci. For example, of the fourteen items listed in Part A of Study 3, those concerning the requirements of rating reports (communication and transparency), the specific requirements of the DCO, and many items on Ancillary Services and Independence / Avoidance of Conflicts of Interest were sourced only from the IOSCO. Eighteen items about rating processes and staff competence (B15-18, B19-32), as well as sixteen items (from Part E: supervision by the government and associations), were sourced from the Chinese regulations only. The questionnaire design in Study 3 attempted to embrace these differences in all the relevant Chinese regulations and the IOSCO (2008) code.

Age and gender were not considered, as these characteristics do not relate to the research aim and objectives. However, information on respondents' experiences and job roles, as well as their needs and interests in relation to the rating system, did need to be collected, because the cognitive process of each group with different jobs, needs and interests will vary. These issues could affect the results, and so this information was gathered in the first part of the questionnaire alongside the demographic questions. For the items in the second part of the questionnaires, answers were presented in a four-degree format ("well", "average", "poorly", "unable to judge") coded as 1, 0 and -1, or in a three-degree format ("yes", "no", "not sure") coded as 1, 0 and -1 (results using the alternative codings of "3", "2.5", "2", "1" and "3", "2", "1" are in Appendices 9, 11, 13, 15, 17, 26, 27 and 28). More detail is explained in Section 4.3.8.2.
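Coded answers of this kind feed directly into the non-parametric comparisons listed in Figure 4.3 (the Mann-Whitney U and Wilcoxon signed-rank tests). The sketch below is illustrative only: the response lists are invented, the scoring dictionary is an assumption based on the coding described above, and "unable to judge" answers are assumed to be excluded from scoring. It shows how the U statistic could compare two interest groups' answers on one questionnaire item:

```python
# Hypothetical coded responses for one item, for two interest groups.
CODE = {"well": 1, "average": 0, "poorly": -1}  # "unable to judge" excluded
staff = [CODE[a] for a in ["well", "well", "average", "well"]]
investors = [CODE[a] for a in ["average", "poorly", "average", "poorly"]]

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples (ties share mean ranks)."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean rank of the tied block
        i = j
    n1, n2 = len(x), len(y)
    u1 = sum(ranks[v] for v in x) - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

print(mann_whitney_u(staff, investors))  # → 1.0; a small U suggests differing perceptions
```

In practice the statistic would be referred to U tables or statistical software for a significance level; only the rank computation is shown here.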

4.3.7.2 Sampling strategies

The sample population for this research includes CRAs, finance-sector beneficiaries and non-finance-sector beneficiaries from the CCRA markets. These include: (1) CCRAs; (2) CCRA customers (financial managers); (3) investors and the public (e.g. bank managers and internal banking/corporate staff, experts, research students); and, (4) non-beneficiaries from non-financial markets (officers from supervision departments). A convenience sampling strategy (mixed with snowball sampling, volunteer sampling, and purposive sampling) was used in Studies 2 and 3, which helped to save time and expenditure. Interviews (nin1 = 20) and questionnaires (nqr1 = 69) were completed in Study 2 (Figure 4.8). The sample frame was based on the CRA list available on the Chinese government website and the CCRAs' websites, and the sample size was decided according to previous similar types of research and the number and size of CRAs in China. Participants for the interviews and questionnaires


included in the sample were those easiest to access within each interest group. Participants also helped to invite other relevant CCRA officers, financial managers, bankers, regulatory officers, students and professors to become involved in the research. For example, the researcher visited a CCRA which had agreed to participate; the CCRA manager or director then asked their officers to complete the questionnaires, after which the researcher gathered the completed questionnaires when they were ready for collection.

Figure 4.8: Sample Groups

CCRA staff: Study 2 questionnaires 13, interviews 5; Study 3 questionnaires 102, interviews 3.
CCRAs' customers: Study 2 questionnaires 23, interviews 6; Study 3 questionnaires 220, interviews 0.
Investors and the public: Study 2 questionnaires 30, interviews 7; Study 3 questionnaires 340, interviews 0.
Officers from supervision departments: Study 2 questionnaires 3, interviews 2; Study 3 questionnaires 27, interviews 0.
Total: Study 2 questionnaires 69, interviews 20; Study 3 questionnaires 689, interviews 3.

The other interest groups, such as CCRAs' customers, regulators, investors and the public, were recruited from acquaintances, especially the regulators, who are a hidden population and difficult to access. Participants helped the researcher to develop the sample frame and network for this research. Purposive sampling was also selected because the sample sizes for each interest group needed to be controlled for statistical purposes. In order to gain access to as many participants as possible from each interest group, the method of delivering questionnaires was chosen according to participants' preferences. For example, most CCRA staff expected questionnaires to be delivered to the office and collected by the researcher afterwards. Researchers, professors, students and journalists preferred the questionnaires to be sent via email. Finance directors and managers chose the option of having mailed questionnaires.

To follow up the questionnaires and increase the sample size, questionnaires were also sent via email to potential participants from organisations not otherwise involved in this research, utilising a volunteer sampling strategy. Their email addresses were collected from company websites. This technique may not be representative of the definable population; however, Malhotra (2007) argued that whilst convenience sampling is not good for descriptive research, it can be used in exploratory research to test hypotheses, form focus groups and pre-test questionnaires. The selection and arrangement of the interest groups is in accordance with previous empirical research.

W. Sun 2015

175

Perception differences have been confirmed among investors, issuers and other professionals (Radzi, 2012); between CRAs, investors and regulators (RAM, 2000); amongst CRAs and issuers (AMF, 2006; 2007; 2008; 2009; 2010; Mohd, 2011); as well as between investors and issuers (Baker and Mansi, 2002; Ellis, 1997).

In the second-round survey, the samples for interviews (nin2 = 3) and questionnaires (nq2 = 1000; nqr2 = 689) were selected from subgroups, with stratified levels of the four interest groups (a heterogeneous sampling strategy; Saunders et al., 2011). It should be noted that the sample frames (or contact details) were constructed from the CCRAs' customer lists (available on CCRAs' websites), the author lists of several well-known magazines, and the staff lists of CCRAs, regulatory bodies, and the relevant associations and institutions, all obtained from the internet. The sample sizes for both interviews and questionnaires were selected with respect to company size, industry classification and geographic location.

The response rate was low compared to the response rates in previous investigations of the AEG. 1,000 copies of the questionnaire were printed and delivered, and more than 500 emails were sent in an effort to achieve better coverage in the verification process. The response rate for emailed questionnaires was only 2%, whereas the response rate for questionnaires in other formats was 67.9%, because these were delivered to participants recruited through snowball and purposive sampling via acquaintances. Several other techniques were used to improve the response rate, including: prior notification (about the researcher and the study); incentives (e.g. a prize draw); adjusted questionnaire design and administration; and 'follow-up' and 'call-back' actions (Malhotra, 2007). Moreover, a personalised covering letter was attached whenever a questionnaire was sent by email or mail.

4.3.7.3 SLR and traditional literature review

In addition to the traditional literature review on the background of CCRAs, an SLR was adopted for narrative inquiry, to show a wider spectrum of literature on the AEG, CRAs and CCRAs. "Traditional literature reviews typically present research findings relating to a topic of interest…without including those studies or why certain studies are described and discussed while others are not…If the process of identifying and including studies is not explicit, it is not possible to assess the appropriateness of such decisions or whether they were applied in a consistent and rigorous manner" (Gough et al., 2012, p.5). The process design for the SLR differed in each section of Chapter 3 according to the questions posed in each part of the literature review (Figure 4.9).


An SLR is a rigorous, transparent and replicable way of gathering evidence-based information within research, and can be defined as being "…a systematic, explicit, comprehensive and reproducible method for identifying, evaluating, and synthesizing the existing body of completed and recorded work produced by researchers, scholars, and practitioners" (Popay et al., 2006; Petticrew and Roberts, 2008). Although firmly entrenched within medical science, the SLR has been adopted in the Social Science and Business-based research fields, where the perceived benefits include helping to eliminate researcher bias, closing the gap between research and practice, utilising multiple studies, and improving decision-making processes (Brimrose et al., 2005; Becheikh et al., 2006; Hemsley-Brown and Oplatka, 2006; Lettieri et al., 2009; Thorpe et al., 2006; Popay et al., 2006; Petticrew and Roberts, 2008; Pittaway et al., 2004; Macpherson and Holt, 2007; Stead et al., 2007).

Bettany-Slatikov (2012) postulated that the main advantage of this approach is that it produces statements derived from the literature with a hierarchy of evidence, which assists in obtaining findings with more substance from a Narrative Inquiry perspective. The SLR process involved using various keywords in sixteen English-language databases and one Chinese-language database to identify and seek out historical studies of the AEG and CCRAs, as well as theoretical studies of CRAs. Search keywords were decided upon based on the terms found in the initial traditional literature review stage. Unlike other English-language research, this study included Chinese-language literature, in order to locate all potentially relevant studies in both the English-language databases and the Chinese-language database. This decision was taken so as to reduce possible "publication bias", which could "creep in and reduce external generalizability" (Gough et al., 2012, p.111). Moreover, a mixed-method review was adopted in the SLR to test hypotheses. This method utilises the strengths of both quantitative and qualitative research, using both statistical meta-analysis and thematic synthesis to explain why and how interventions might work (Harden and Thomas, 2005, 2010; Thomas et al., 2004); more detail is provided in Section 4.3.8.1. Nevertheless, this kind of 'exhaustive' search strategy cannot eliminate all publication bias. Researchers tend to examine problems that confirm their personal opinions, and so are more likely to report positive findings in the reviewing process (Greenwald, 1975). These publication biases mean that positive results are: (1) "more likely to be published"; (2) "more likely to be published rapidly"; (3) "more likely to be published in English"; (4) "more likely to be published more than once"; and (5) "more likely to be cited by others" (Gough et al., 2012).
As such, the SLR strategy for each review question was designed to locate the sample of studies most likely to answer


the question reliably. Moreover, as in primary research, the precision between the theoretical populations and the actual samples is presented for each SLR, for a clear description of results according to the formula established by Gough et al. (2012). According to Gough et al.'s adaptation of Ree (2008, cited in Gough et al., 2012, p.124), precision shows the "relevance of the research strategy to identify records of interest", and is calculated in this study as "retrieved relevant records / all records retrieved from the search".
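The precision calculation can be expressed as a one-line sketch (the example figures are those reported for the CRA search in Figure 4.9; the function name is this sketch's own):

```python
# A minimal sketch of the precision formula described above.
def search_precision(relevant_retrieved, all_retrieved):
    """Precision of a search strategy (Gough et al., 2012):
    retrieved relevant records / all records retrieved from the search."""
    return relevant_retrieved / all_retrieved

# e.g. the CRA search: 424 relevant records out of 599 + 19 retrieved
print(round(search_precision(424, 599 + 19), 3))  # → 0.686
```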

Figure 4.9: Data Log of SLR

AEG
  Databases (17): Google Scholar; Emerald; Index to Thesis; ISI Web of Knowledge; Mintel; Sage; Science Direct; Wiley; Cambridge Online; Ingenta Connect; Oxford Journal; Zetoc; Taylor and Francis; Ethos; Springer Link; Swets; and CNKI (China National Knowledge Information System)
  Search string: 'Audit Expectation Gap', 'Audit Expectations Gap' and 'Audit Expectation-Performance Gap'; and '审计期望差距' ('audit expectation gap' in Chinese)
  Review question: How did the AEG develop? (1. the origin; 2. reasons, causes and factors; 3. the meaning; 4. alternative conceptual theories in AEG studies; 5. application of Porter's model)
  Deadline of search date: 20/06/2013
  Timeframe of search: before 2013
  No. of retrieved records: 132 + (2 in Chinese)
  Precision of search: 132 / (247 + 94) ≈ 38.7% in English
  Percentage of unavailability: 57 / (247 + 94) ≈ 16.7% in English

CRA
  Databases (15): Google Scholar; Emerald; Index to Thesis; ISI Web of Knowledge; Mintel; Sage; Science Direct; Wiley; Cambridge Online; Ingenta Connect; Oxford Journal; Zetoc; Taylor and Francis; Ethos; Springer Link
  Search string: 'Credit Rating', 'Rating Agencies' and 'View' or 'Perception' or 'Belief' or 'Expectation'
  Review question: Attributes and components that should be considered in the CRAEG
  Deadline of search date: 11/11/2013
  Timeframe of search: before 2014
  No. of retrieved records: 424 (English only)
  Precision of search: 424 / (599 + 19) ≈ 68.6% in English
  Percentage of unavailability: 19 / (599 + 19) ≈ 2.8% in English

CCRAEG
  Databases (16): Google Scholar; Emerald; Index to Thesis; ISI Web of Knowledge; Mintel; Sage; Science Direct; Wiley; Cambridge Online; Ingenta Connect; Oxford Journal; Zetoc; Taylor and Francis; Ethos; Springer Link; and CNKI (China National Knowledge Information System)
  Search string: 'Chinese credit rating' or 'China credit rating' (信用评级, 资信评级, 信用评等, 信用评估, 资信评估 in CNKI)
  Review question: Attributes and components that should be considered in the CCRAEG
  Deadline of search date: 05/03/2013
  Timeframe of search: before 2013
  No. of retrieved records: 118 (both English- and Chinese-based publications)
  Precision of search: 118 / 286 ≈ 41.3% in Chinese
  Percentage of unavailability: 46 / 286 ≈ 16.1% in Chinese

Note: Precision of searches = retrieved relevant records / all records retrieved from the search; this is explained in the last paragraph of this section.

4.3.7.4 Conceptual inquiry

The nature of conceptual enquiry formed from Wittgenstein's (1953) philosophy of language is to examine "the existing rules for the meaning of an expression"; it holds that language is naturalistic, and develops to fit human needs and direct people's interests. It differs from prescriptive conceptual enquiry and evaluative conceptual enquiry, with the former being used to propose new rules, and the latter evaluating the rules by "reference to the reasons for conforming to the practice of using the expression" (Dennis, 2008, p.260). Unlike Dennis (2010), who adopted descriptive conceptual enquiry to examine a limited body of literature on the AEG, this study looks at how the AEG has been interpreted across all identifiable and accessible literature, to present a more comprehensive explanation of the term.

4.3.8 Data analysis

Data analysis methods are discussed in two sections, for qualitative data and quantitative data, respectively. NVivo 10 was used for the qualitative and quantitative analysis of the literature and interview data. Wilcoxon Signed-Rank tests and the Mann-Whitney U test (or Mann-Whitney-Wilcoxon rank-sum test) were used for examining the reliability of the quantitative data through SPSS 22. It should be noted that the sampling strategy for interviews, the interview protocol design and the coding method were discussed in Sections 4.3.7.1 and 4.3.7.2.1, together with Figure 4.15, which illustrates how participants were chosen and where the sample frame came from.

4.3.8.1 NVivo

Each interview was transcribed and analysed in Chinese through the use of NVivo 10, for identifying, organising, interpreting, exploring and integrating data with themes, categories, topics, patterns and relationships (Lewins and Silver, 2007). Thematic analysis was adopted to interpret and systematise the data within a process of searching, reviewing, defining, and then naming themes and sub-themes (Braun and Clarke, 2006). A pragmatic view was taken when studying the perceptions of participants, in


order to understand, analyse, and combine their opinions with the empirical results and background data collected from the literature in NVivo (Aronson, 1994; Burnard et al., 2008). All quotations used were back-translated (Brislin, 1970) by a professional Chinese-English translator, through a collaborative approach for checking the accuracy of translation (Douglas and Craig, 2007). The translation was conducted in three steps: (1) English items were translated into Chinese, or Chinese items into English, by the researcher; (2) everything was translated back by the professional translator; and (3) translation differences between the researcher and the professional translator were checked, to confirm the English and Chinese versions of the questionnaires.

There are four types of qualitative data in Computer Assisted Qualitative Data Analysis (CAQDAS): background information, primary data, secondary data, and relevant supporting information (Lewins and Silver, 2007, p.17). The tasks of qualitative analysis are listed in Figure 4.10. CAQDAS software packages provide a flexible approach to combining inductive, deductive, theoretical and question-based coding approaches, because codes can be generated at any point in the process and code schemas can be complicated. However, there is no single best CAQDAS software package (Lewins and Silver, 2007, 2014).

Figure 4.10: Qualitative Analysis

Adapted from Lewins and Silver (2007, p.13)

NVivo 10 was used in this research because it is the software provided by the university. Comparing comments about different CAQDAS software packages from Lewins and Silver (2007), NVivo focuses on systematically handling codes, locating memos, and dispersing annotations and links with modelling tools. Its main disadvantages include (1) difficulty in organising data; and (2) the requirement that search phrases match exactly, without variants. However, the first disadvantage seems to be


minimised in NVivo 10, as literature can be imported from the full content of PDFs, and metadata can be imported from bibliographic software such as Zotero, EndNote, RefWorks, and Mendeley. Moreover, NVivo provides an environment similar to Outlook (which is also the email system used in the university) for navigating its main functions and windows by dragging and dropping, and this has made the coding process easier (Silver and Lewins, 2014).

As indicated in Section 4.3.7.3, a mixed methods synthesis was adopted in the SLR, and it has “a parallel interest” with the mixed research method (Gough et al., 2012, p.202). This method can be used to develop hypotheses through thematic synthesis, and then test them in the mixed methods synthesis by combining results from statistical meta-analysis and thematic synthesis. Meta-analysis is generally used for combining the numerical results of studies, whereas thematic synthesis prioritises interpretation and analysis over description through using thematic codes, themes or labels (Gough et al., 2012).

4.3.8.2 Statistics and reliability

The statistical methods were chosen according to the type of question being answered, the type of variables from the questionnaires, and the scale of measurement (Zikmund, 1997a). The Mann-Whitney U test and the Wilcoxon Signed-Rank test, which are nonparametric statistical tests that do not assume a normal distribution, were used to verify significant differences in perceptions, expectations, views or beliefs, since the samples were not randomly selected. Medians were used to describe the central tendency of the ordinal data (Jamieson, 2004). Percentages and Standard Deviation (Std) were also used to measure the frequencies of rejected null hypotheses (of no significant differences), such as the number of perception gaps existing amongst the interest groups.
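For illustration, the median of a set of ordinal codes can be computed directly (the responses below are invented for illustration, not drawn from the study's data):

```python
from statistics import median

# Hypothetical responses to one duty, coded Yes=1 / Not sure=0 / No=-1
responses = [1, 1, 0, 1, -1, 1, 0, 1]
print(median(responses))  # → 1.0 (an even-sized sample averages the middle pair)
```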

4.3.8.2.1 Coding of Questionnaires

The coding of the questionnaires was conducted differently according to the purposes of the hypothesis tests: (1) In the Mann-Whitney U tests for perception gaps (significant differences in perceptions and expectations between interest groups), "Yes", "No", and "Not sure" were coded as 1, -1, and 0; "Well", "Adequately/Average", and "Poorly" were coded as 1, 0, and -1 ("Unable to judge" was excluded from the test). (2) In the Wilcoxon Signed-Rank tests for gap components (significant differences between the boundaries of each gap component) within each interest group, both "Not sure" and "Unable to judge" were placed in the middle category, as shown in Figure 4.11. The medians of participants' choices within each interest group were used for the gap component tests


within each interest group. These choices were coded as ordinal numbers; for example, "Yes", "No", and "Not sure" were coded as 1, -1, and 0; "Well", "Adequately/Average", "Poorly", and "Unable to judge" were coded as 1, 1, -1, and 0.

Although previous research confirmed that the middle response category has on occasion been misused as a dumping ground, such misuse did not adversely affect either the reliability or the validity of research; nonetheless, this category has to be treated carefully (Kulas et al., 2008). Therefore, another set of coding was used to see whether any differences appeared in the hypothesis-test results when the "Not sure" items were placed at the end of the scale, as in Figures 4.11 and 4.12.

Figure 4.11: Scales Used in the Mann-Whitney U Tests (Perception Differences)

"Not sure" in the middle of the scale:
  Choice / Coding              | Meaning
  Yes / 1                      | Positive expectation / perception
  Not sure / 0                 | Unsure expectation / perception
  No / -1                      | Negative expectation / perception
  Well / 1                     | Performed well
  Adequately or Average / 0    | Average performance
  Poorly / -1                  | Poor performance
  Unable to judge (excluded)   | Have no knowledge to answer the question

"Not sure" at the end of the scale:
  Choice / Coding              | Meaning
  Yes / 3                      | Have correct knowledge
  No / 2                       | Have knowledge but not right
  Not sure / 1                 | Have no knowledge to answer the question
  (There should be no middle ground for ethical decisions.)
  Well / 3                     | Performed well
  Adequately or Average / 2.5  | Average performance
  Poorly / 2                   | Poor performance
  Unable to judge (excluded)   | Have no knowledge to answer the question

Figure 4.12: Scales Used in the Wilcoxon Signed Rank Tests (Gap Components)

"Not sure" in the middle of the scale:
  Choice / Coding                       | Meaning
  Yes / 1                               | Positive expectation / perception
  Not sure / 0                          | Unsure expectation / perception
  No / -1                               | Negative expectation / perception
  Well / 1; Adequately or Average / 1   | Positive perception
  Unable to judge / 0                   | Unsure perception
  Poorly / -1                           | Negative perception

"Not sure" at the end of the scale:
  Choice / Coding              | Meaning
  Yes / 3                      | Have correct knowledge
  No / 2                       | Have knowledge but not right
  Not sure (excluded)          | Have no knowledge to answer the question
  (There should be no middle ground for ethical decisions.)
  Well / 3                     | Performed well
  Adequately or Average / 2.5  | Average performance
  Poorly / 2                   | Poor performance
  Unable to judge (excluded)   | Have no knowledge to answer the question


According to Farrell and Farrell (1998), there should be no neutral response to the question of whether or not a certain responsibility should be performed; as such, "Not sure" was not placed in the middle of the scale between yes and no. This coding method can be explained as follows: (1) In the Mann-Whitney U tests for perception gaps (significant differences in perceptions and expectations between interest groups), "Yes", "No", and "Not sure" were coded as 3, 2, and 1; "Well", "Adequately/Average", and "Poorly" were coded as 3, 2.5, and 2 ("Unable to judge" was excluded from the test). (2) In the Wilcoxon Signed-Rank tests for gap components (significant differences between the boundaries of each gap component) within each interest group, the medians of participants' choices were used. These choices were coded as ordinal numbers; for example, "Yes" and "No" were coded as 3 and 2 ("Not sure" was excluded); "Well", "Adequately/Average", and "Poorly" were coded as 3, 2.5, and 2 ("Unable to judge" was excluded).
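The two coding schemes can be sketched as plain mappings (the helper `code_responses` and the dictionary names are this sketch's own; the codes mirror those listed above and in Figures 4.11 and 4.12):

```python
# Scheme A: "Not sure" in the middle of the scale
EXPECTATION_MIDDLE = {"Yes": 1, "Not sure": 0, "No": -1}
PERFORMANCE_MIDDLE = {"Well": 1, "Adequately/Average": 0, "Poorly": -1}   # "Unable to judge" excluded

# Scheme B: "Not sure" at the end of the scale
EXPECTATION_END = {"Yes": 3, "No": 2, "Not sure": 1}
PERFORMANCE_END = {"Well": 3, "Adequately/Average": 2.5, "Poorly": 2}     # "Unable to judge" excluded

def code_responses(raw_choices, scheme):
    """Map raw questionnaire choices to ordinal codes, dropping any
    choice excluded from the scheme (e.g. "Unable to judge")."""
    return [scheme[c] for c in raw_choices if c in scheme]

print(code_responses(["Well", "Poorly", "Unable to judge"], PERFORMANCE_END))  # → [3, 2]
```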

As such, the data analysis will be presented in Chapters 6 and 7 adopting coding similar to that used by Porter. Results from using the alternative coding are also provided for reference in Appendices 9, 11, 13, 15, 17, 26, 27, and 28.

4.3.8.2.2 Mann-Whitney U Test

A Mann-Whitney U test was conducted for perception differences among the four interest groups (six comparisons) in their expectations of what should be performed, and of how well CCRAs performed each duty (ninety-three duties in the questionnaire). The test was run 1,116 (6 × 93 + 6 × 93) times using the coding of "Yes", "No", and "Not sure" as 1, -1, and 0, and "Well", "Adequately", and "Poorly" as 1, 0, and -1. It was conducted an additional 1,116 times using the alternative coding scheme of 3, 2, and 1 for "Yes", "No", and "Not sure", and 3, 2.5, and 2 for "Well", "Adequately", and "Poorly".

Example 1: Test perception differences on expectations between agencies' customers and regulators for Duty 1 (in the questionnaire), using the coding of Yes (1), No (-1), and Not sure (0).

Figure 4.13: The Mann-Whitney U Tests on Perception Differences

  Participant No. | CCRAs' Customers (n=220) | Regulators (n=27)
  1               | 1                        | 1
  2               | 1                        | 1
  3               | 1                        | 1
  4               | 1                        | 1
  …               | …                        | …
  27              | …                        | 0
  …               | …                        |
  220             | 0                        |


Note:
1. The data in the second and third columns are the answers provided by each participant within the interest groups of CCRAs' customers and regulators. The Mann-Whitney U test was conducted on these two columns (2nd and 3rd) of data for the perception differences between these two interest groups.
2. In SPSS, these two columns of data were entered as variable 1, and variable 2 was the group number separating the two interest groups (1 and 2 in this case).
3. Adjusted α = desired α / m (the number of hypotheses) = 0.05 / [1116 × 2 + (12 × 2 + 4) + 12 × 2 + (16 + 6)] = 0.05 / 2306 ≈ 0.00002. There are no significant differences if Z < 4.1075 (ρ > 0.00002, after Bonferroni Correction). The Bonferroni Correction was suggested by the university's statistics expert.
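The mechanics of a single comparison can be sketched in a few lines of Python (a hand-rolled rank-sum U rather than SPSS, with invented responses; the function name is this sketch's own):

```python
def mann_whitney_u(x, y):
    """U statistic for group x via rank sums (ties receive average ranks)."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                   # extend over the tie group
        avg_rank = (i + j) / 2 + 1   # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[:len(x)])         # rank sum of the first group
    return r1 - len(x) * (len(x) + 1) / 2

# Hypothetical Duty 1 responses coded Yes=1 / Not sure=0 / No=-1
customers = [1, 1, 1, 1, 0, -1]   # CCRAs' customers
regulators = [1, 1, 0, -1]        # regulators
print(mann_whitney_u(customers, regulators))  # → 14.0

# Bonferroni-adjusted significance level for the 2,306 hypothesis tests
alpha = 0.05 / 2306
print(f"{alpha:.5f}")  # → 0.00002
```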

4.3.8.2.3 Wilcoxon Signed-Rank Test

The Wilcoxon Signed-Rank tests were used for testing the existence of a gap between expectations and perceived performance, and of the four gap components: the Knowledge Gap (the difference between expectations and the SLR results on what CRAs should do); the Reasonable Expectation Gap (the difference between the SLR results and the IOSCO code on what CRAs should do); the Regulation Gap (the difference between the IOSCO code and the Chinese regulations); and the actual Performance Gap (the difference between what CCRAs should do according to the Chinese regulations and their performance of these duties as perceived by participants). This test was conducted 12 (3 × 4) times for the Knowledge Gap, the actual Performance Gap and the expectation-performance gap within each interest group (four groups) using the coding of 1, 0, and -1, as well as another 12 times using the alternative coding shown in Figures 4.11 and 4.12, and then an additional four times for testing the Reasonable Expectation Gap and the Regulation Gap using the two coding methods.

Example 2: Test the Knowledge Gap according to expectations from the CCRA staff, using the coding of 1, 0, and -1.

Figure 4.14: The Wilcoxon Signed-Rank Tests on Knowledge Gap

  Duty Item | Duty according to expectations from CCRA staff | SLR results
  A1        | 1                                              | 1
  A2        | 1                                              | 1
  A3        | 1                                              | 1
  A4        | 1                                              | 1
  A5        | 1                                              | 1
  A6        | 1                                              | 1
  …         | …                                              | …
  E16       | 1                                              | 1


Note:
1. The second column is the median of the data collected from 102 CCRA staff for each duty; the third column is the data collected from the SLR on whether CRAs and CCRAs should perform this duty. The Wilcoxon test was conducted on these two columns (2nd and 3rd) of data.
2. In SPSS, one column was entered as variable 1, and the other as variable 2.
3. When using the coding of "1, 0, and -1", duties E1, E2, E5, E11-E13, and E15 were excluded from the comparison with respect to the expectations of what CCRAs should do, because there are no clear results on whether CRAs or CCRAs should perform these items according to the SLR (with the exception of E11, which is not required according to the SLR).
4. Duties E5-E10 and E12-E15 were excluded from the perceived-performance-related comparisons because these duties are CCRAs' responsibilities to the supervision departments, and only CCRA staff and officers from the supervision departments could possibly get access to the relevant information.
5. Adjusted α = desired α / m (the number of hypotheses) = 0.05 / [1116 × 2 + (12 × 2 + 4) + 12 × 2 + (16 + 6)] = 0.05 / 2306 ≈ 0.00002. There are no significant differences if Z < 4.1075 (ρ > 0.00002, after Bonferroni Correction). The Bonferroni Correction was suggested by the university's statistics expert.
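The signed-rank statistic behind these tests can be sketched as follows (a hand-rolled W+ with invented duty-by-duty medians, not the study's data; the function name is this sketch's own):

```python
def wilcoxon_w_plus(a, b):
    """Sum of ranks of positive differences (W+): pair the two boundary
    columns, drop zero differences, rank |d| with average ranks for ties."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1                   # extend over the tie group
        avg_rank = (i + j) / 2 + 1   # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return sum(r for r, d in zip(ranks, diffs) if d > 0)

# Hypothetical duty-by-duty medians: CCRA staff expectations vs SLR results
staff_medians = [1, 1, 0, 1, -1, 1]
slr_results   = [1, 1, 1, 1,  1, 0]
print(wilcoxon_w_plus(staff_medians, slr_results))  # → 1.5
```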

Moreover, the Wilcoxon Signed-Rank tests were also used for perception gap tests among the interest groups, according to the medians of all participants from one group on each duty. Six comparisons could be made among the four groups, and these were tested on both participants' expectations and their perceptions of performance. As such, this kind of perception difference test was conducted 12 (6 × 2) times using the coding of 1, 0, and -1, and an additional 12 times using the coding of 3, 2, and 1.

Example 3: Wilcoxon Signed-Rank test for the perception gap in participants' expectations between CCRA staff and CCRAs' customers on all 93 duties, using the coding of 1, 0, and -1.

Figure 4.15: The Wilcoxon Signed-Rank Tests on Perception Differences

  Duty Item | Median from CCRA staff | Median from CCRAs' customers
  A1        | 1                      | 1
  A2        | 1                      | 1
  A3        | 1                      | 1
  A4        | 1                      | 1
  A5        | 1                      | 1
  A6        | 1                      | 1
  …         | …                      | …
  E16       | 1                      | 1

Note:


1. The second column contains the medians of the answers provided by 102 CCRA staff, and the third column the medians of the answers provided by 220 CCRAs' customers.
2. In SPSS, the second column was entered as variable 1 and the third column as variable 2.
3. When using the coding of "1, 0, -1", duties E1, E2, E5, E11-E13 and E15 were excluded because the SLR results did not clearly identify whether these should be performed, and E12 was excluded because it was not required according to the literature.
4. Duties E5-E10 and E12-E15 were excluded from the perceived-performance-related comparisons because these duties are CCRAs' responsibilities to the supervision departments, and only CCRA staff and officers from the supervision departments could possibly get access to the relevant information.
5. Adjusted α = desired α / m (the number of hypotheses) = 0.05 / [1116 × 2 + (12 × 2 + 4) + 12 × 2 + (16 + 6)] = 0.05 / 2306 ≈ 0.00002. There are no significant differences if Z < 4.1075 (ρ > 0.00002, after Bonferroni Correction). The Bonferroni Correction was suggested by the university's statistics expert.

4.3.8.2.4 Kruskal-Wallis Test

A Kruskal-Wallis test was used for testing the third dimension, which includes nine attributes, to show whether the distribution of results differs amongst these attributes. This test was conducted within each interest group, according to the medians, on: (1) whether participants' expectations of the identified nine attributes were significantly different; and (2) whether participants' perceptions of CCRAs' performance on these nine attributes were significantly different. This test on the perceptions and expectations within each interest group was conducted 16 (4 × 2 + 4 × 2) times, because two coding systems were used. It was then used for testing: (3) whether the SLR results on the eighty-one responsibilities are significantly different amongst these nine attributes; (4) whether the existence of the eighty-one responsibilities in the IOSCO code is significantly different amongst these attributes; and (5) whether the existence of the eighty-one responsibilities in the Chinese regulations is significantly different amongst these attributes. As such, this test was also conducted on the SLR results, the IOSCO code and the regulations, another 6 (3 × 2) times, because two coding systems were used.

Example 4: Kruskal-Wallis test for the significant differences amongst the nine attributes according to the SLR results, using the coding of 1, 0, and -1.

W. Sun 2015

186

Figure 4.16: The Kruskal-Wallis Tests on Attributes According to the SLR Results

  Duty | SLR Results | Attribute No.
  A1   | 1           | 3
  A2   | 1           | 3
  A3   | 1           | 3
  A4   | 1           | 3
  …    | …           | …
  E16  | 1           | 3

Note:
1. When using this test on the SLR results and on the existence of these duties in the IOSCO code and Chinese regulations, duties E1, E2, E5-E10 and E12-E15 were excluded because (1) the SLR results did not clearly identify duties E1, E2, E5, E11-E13 and E15; and (2) duties E5-E10 and E12-E15 are difficult for the public, investors and CCRAs' customers to evaluate.
2. When using this test on participants' perceptions and expectations, the second column contains the medians from all participants within each interest group for each duty.
3. In SPSS, the Kruskal-Wallis test was conducted on the data in the second and third columns, where the second column is variable 1 and the third column is variable 2, with grouping variables from 1 to 9 because there are nine attributes.
4. Adjusted α = desired α / m (the number of hypotheses) = 0.05 / [1116 × 2 + (12 × 2 + 4) + 12 × 2 + (16 + 6)] = 0.05 / 2306 ≈ 0.00002. There are no significant differences if H < 35.691 (ρ > 0.00002, after Bonferroni Correction). The Bonferroni Correction was suggested by the university's statistics expert.
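The rank-based H statistic underlying this test can be sketched as follows (a simplified version without tie correction, using invented groupings rather than the study's data; the function name is this sketch's own):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic (simplified sketch: no tie correction)."""
    pooled = sorted(v for g in groups for v in g)
    # average 1-based rank for each distinct value (handles ties)
    rank_of = {v: sum(i + 1 for i, w in enumerate(pooled) if w == v)
                  / pooled.count(v)
               for v in set(pooled)}
    n = len(pooled)
    sum_sq = sum(sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups)
    return 12 / (n * (n + 1)) * sum_sq - 3 * (n + 1)

# Hypothetical SLR results (coded 1/0/-1) grouped by three of the nine attributes
attribute_groups = [[1, 1, 1, 1], [1, 1, 0, 1], [0, -1, 0, 1]]
print(round(kruskal_h(attribute_groups), 3))  # → 3.471
```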

4.3.8.2.5 Subtractions and Medians

Subtraction was used to calculate the number of duties with differences in medians between gap boundaries, using the coding of "Yes" (3) and "No" (2), or "Well" (3), "Adequately / Average" (2.5), and "Poorly" (2), with the exception of "Not sure" (1) and "Unable to judge" (1) in the expectations and perceived performance. The existence of differences in medians between boundaries can reveal the existence of a gap with respect to these duties. However, subtraction was not used for the alternative coding of "Yes" (1), "Not sure" (0), and "No" (-1), or "Well" (1), "Adequately / Average" (0), and "Poorly" (-1) with "Unable to judge" excluded, or for Porter's coding of "Well" (3), "Adequately" (2), "Poorly" (1), and "Unable to judge" (0), because these codings do not allow a comparison to be made between boundaries.

Example 6: Subtraction calculation on medians for differences between boundaries within the interest group of CCRA staff.


Figure 4.17: Subtraction for Differences in Medians

  Duty | Medians of Expectation | SLR Results | IOSCO | Chinese Regulation | Medians of Perceived Performance | Expectation-Performance Gap | Knowledge Gap | Reasonable Expectation Gap | Regulation Gap | Actual Performance Gap
  A1   | 3 | 3 | 3 | 2 | 3   | 0   | 0 | 0 | 1 | -1
  A2   | 3 | 3 | 3 | 2 | 2.5 | 0.5 | 0 | 0 | 1 | -0.5
  A3   | 3 | 3 | 3 | 2 | 3   | 0   | 0 | 0 | 1 | -1
  A4   | 3 | 3 | 3 | 2 | 2.5 | 0.5 | 0 | 0 | 1 | -0.5
  …    | … | … | … | … | …   | …   | … | … | … | …
  E16  | 3 | 3 | 2 | 1 |     |     | 0 | 1 | 1 |

Note:
1. Column 7 = column 2 - column 6 (expectation-performance gap); column 8 = column 2 - column 3 (Knowledge Gap); column 9 = column 3 - column 4 (Reasonable Expectation Gap); column 10 = column 4 - column 5 (Regulation Gap); column 11 = column 5 - column 6 (actual Performance Gap).
2. Duties E5-E10 and E12-E15 were excluded from the perceived-performance-related comparisons because these duties are CCRAs' responsibilities to the supervision departments, and only CCRA staff and officers from the supervision departments could possibly get access to the relevant information.
3. Medians were then calculated on the subtractions of medians within each attribute, for comparison amongst the attributes. For example, the median of the expectation-performance gap for attribute i (record keeping / submitting) within the interest group of CCRAs = Median [expectation-performance gaps on duties B5 and E1-E4 (according to the medians from the interest group of CCRAs)]; that is, the median of the five values obtained by subtracting the median of perceived performance from the median of expectation for each of the duties B5, E1, E2, E3 and E4. The gap components and the expectation-performance gap were then analysed according to these results in the 'radar chart' for each attribute within each interest group.
4. It should be noted that the size of the differences cannot be measured, since the data collected are ordinal; however, the number of duties showing a difference in medians can be calculated.
5. Expectation-performance gap
   = Knowledge Gap + Reasonable Expectation Gap + Regulation Gap + Actual Performance Gap
   = (Expectations of suggested duties* - SLR results on suggested duties) + (SLR results on suggested duties - duties in the IOSCO code) + (duties in the IOSCO code - duties in Chinese regulations) + (duties in Chinese regulations - perceived performance*)
   = Expectations of suggested duties* - perceived performance of suggested duties*

   * Expectations and perceptions as perceived by the four interest groups: (1) CCRAs; (2) CCRAs' customers; (3) investors and the public; and (4) regulators.
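The telescoping identity above can be checked with a short sketch (hypothetical medians for a single duty, not the study's actual values):

```python
# Hypothetical medians for one duty, using the coding Yes=3 / No=2
# (expectations, SLR, IOSCO, regulations) and Well=3 / Adequately=2.5 /
# Poorly=2 (perceived performance).
expectation, slr, iosco, cn_regulation, performance = 3, 3, 3, 2, 2.5

knowledge_gap      = expectation - slr            # 0
reasonable_exp_gap = slr - iosco                  # 0
regulation_gap     = iosco - cn_regulation        # 1
actual_perf_gap    = cn_regulation - performance  # -0.5

# The four components telescope to the overall expectation-performance gap
ep_gap = expectation - performance
print(knowledge_gap + reasonable_exp_gap + regulation_gap + actual_perf_gap == ep_gap)  # → True
```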

4.4 Research Ethics

It was considered whether the researcher had "covered all the concerns that could arise in such field work". The application for ethics approval (UWS REAG1) was reviewed and accepted by Professor Michael Danson, confirming there was no need for further approval from the Ethics Committee. This project and its research methods were assessed by the university and judged to be ethically sound (see Appendix 20).

Possible ethical issues in this research could arise from interactions between the researcher and the participants. Rowley (2004) posited that the ways in which information and data are gathered, used and communicated should be made transparent. Polonsky (1998) insisted that researchers should try to reduce personal bias. As such, this research followed the university guidelines on ethical approval, which aim to ensure the integrity and quality of research, transparency of information to participants, proper management of confidential information and conflicts of interest, and the avoidance of any coercion of, or harm to, participants (ESRC, 2013).

Although this research does not address any issues that could be upsetting to participants, consent forms and relevant information were provided, so that participants understood the procedure. Questionnaires were anonymised and coded with numbers. In addition, all digital and hard copies of data were securely held by the researcher on personal hard drives in a lockable filing cabinet.

4.5 Limitations of Methodology

The selected methods, approaches and methodologies have been explained and justified in Figure 4.3, with the advantages and disadvantages of alternative choices documented in Section 4.3. Pragmatism, which takes a middle ground between subjectivism and objectivism, was selected with a mixed-methods approach to offset the disadvantages of other methodologies and approaches. However, this involves a complicated research design (Figures 4.4 and 4.5) that allows qualitative and quantitative data to be merged, connected, and embedded (Palinkas et al., 2011). The disadvantages of Pragmatism are the complexity of the research design; the time consumed in designing and implementing the process; the additional resources required; and the absence of a clear solution for resolving discrepancies between the interpretations of quantitative and qualitative data (ACET, 2013; FoodRisc Resource Centre, 2015). Other limitations of the selected strategies, techniques and methods were explained in Section 4.3: for example, the large number of questionnaire questions (Sections 4.3.7.1.2 and 4.3.7.2), the translation of interviews and questionnaires (Section 4.3.8.1), the limited access to literature and the five publication biases (Section 4.3.7.3), and the number of hypotheses (Section 4.3.8.2). These challenges were addressed with suitable techniques and strategies, as detailed in the respective sections: snowball, purposive and convenience sampling with follow-ups, call-backs and a prize draw; back-translation with a collaborative approach; reporting the precision rate of searches and the percentage of unavailable publications; and the Bonferroni Correction. Additional limitations in relation to the research results will be explained in Section 8.6.
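The Bonferroni Correction mentioned above, used to control the family-wise error rate when testing many hypotheses, can be sketched as follows. This is an illustrative Python sketch only; the p-values are hypothetical and are not results from this study.

```python
# Illustrative sketch of the Bonferroni Correction for a family of
# hypothesis tests. The p-values below are hypothetical examples.
alpha = 0.05
p_values = [0.003, 0.020, 0.041, 0.260]  # hypothetical raw p-values
m = len(p_values)

# Each individual test is judged against alpha / m, so that the
# probability of at least one false positive stays at most alpha.
adjusted_alpha = alpha / m
significant = [p < adjusted_alpha for p in p_values]

print(adjusted_alpha)  # 0.0125
print(significant)     # [True, False, False, False]
```

Under the unadjusted threshold of 0.05, three of the four hypothetical tests would appear significant; after the correction, only one does, which is why the correction is applied when a large number of hypotheses are tested together.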

4.6 Summary

Research methodologies, methods, approaches, choices, and data collection techniques were chosen in line with the contextual factors that should be considered in the research design. This research takes a pragmatic position, acknowledging limitations in people’s perceptions and employing mixed quantitative and qualitative methods with a problem-centred approach. The nature of the SLR and Porter’s model also reflects this approach, since they were examined issue by issue, or attribute by attribute. The philosophy of Mixed Methods research is that there is no single best scientific method that provides indisputable results; accordingly, the factors, methods, strategies, and techniques selected for this research design are interrelated. In line with research objective 5 in Section 1.3, the method of examining the CCRAEG has been presented here and will be discussed in Section 7.3.


CHAPTER 5 FINDINGS FROM INTERVIEWS

5.1 Introduction

This chapter, the first of two that describe and analyse the results from the survey (interviews and questionnaires), explores the findings from the interviews. It assesses market participants’ understanding of CCRAs and the CCRA industry. The interviews conducted in Study 2 were designed to discover issues influencing the CCRAEG that were not revealed by, or were neglected in, the SLR. The questionnaire questions were modified according to the responses from interviewees about their expectations of the role and responsibilities of CCRAs in relation to the Chinese regulations, which comprise six parts: (1) quality / integrity; (2) monitoring / updating; (3) the rating process; (4) independence / avoidance of conflicts of interest; (5) responsibilities to the investing public and issuers (transparency and timeliness of rating disclosure); and (6) the treatment of confidential information. Comments on the fourteen attributes (Appendix 19) were also collected.

The interviews in Study 3 provided more in-depth information from market participants about their opinions on ancillary services, conflicts of interest, and the relationships among investors, issuers, CCRAs and the government. (How the questionnaire design in Study 3 was modified by the interview results from Study 2 and the SLR results from Study 1 is discussed, with examples, in Section 4.3.7.2.2.)

5.2 Mismatched Expectations – Study 2

The codes used to distinguish the different groups and types of interviewees were detailed and explained in Chapter 4. There were clear Perception Gaps and Knowledge Gaps about the regulations, role, and functions of CCRAs. An analysis of all topics raised by the interviewees is presented in this section. Finally, the identified elements of the gap components were modified according to the comments from interviewees.

5.2.1 CCRAs Do Not Perform the Role of Gatekeepers

The SEC (2010) suggested that: “Credit rating agencies, including nationally recognized statistical rating organisations, play a critical ‘gatekeeper’ role in the debt market that is functionally similar to that of securities analysts, who evaluate the quality of securities in the equity market, and auditors, who review the financial statements of firms. Such role justifies a similar level of public oversight and accountability.”


Most interviewees agreed that CCRAs are one of the main financial intermediaries that reduce information gaps between issuers and investors. Nevertheless, according to some interviewees, CCRAs are only "expected" to act like gatekeepers by scrutinising creditworthiness and reliability; these interviewees believed that CCRAs cannot actually perform the role of gatekeepers. The CCRA managers and directors believed that ‘gatekeeper’ is merely a term used in western academia. This theory actually appeared to be an overarching theory that relates to many responsibilities of accounting professionals, according to the literature (see Section 3.3.4.5). CS3 suggested that even global CRAs do not consider themselves gatekeepers but “…information providers…” CS5 elucidated that CCRAs would not be able to act like gatekeepers in the same manner as global CRAs, due to the limited rating demand within the Chinese financial market. He insisted that CCRAs were trying to adapt to changes in the financial and political environment, while retaining as many of their customers as possible to sustain their business. CS5 used the analogy that: “In fact, CCRAs are players in a game of sport, whereas global rating agencies are the referees. CCRAs have to compete with one another to grasp their market share opportunities, and as such, may be forced to do whatever is required to ‘survive’ in the market.” Echoing the interview results in Kennedy’s (2008) report, most CCRA managers in this study complained that they have been working hard to survive through historical changes in the financial system. CS5 said that: “We did not make any profits a few years ago...and any revenue was always spent on further investigation work or development. Currently, we have more clients, and our revenue is increasing, but our profit is very tiny.
If a CRA does not have lots of money, to the extent of being able to say ‘No’ to their clients, how can this CRA behave like a gatekeeper?” Furthermore, participants from the other interest groups (SD1, CC3, IP2, and IP5) agreed that CCRAs have not performed the role of gatekeepers, and noted that CCRAs may not have fully satisfied their own expectations, due to the complex and unique financial history and background of CCRAs in the Chinese economic market. These participants, quite understandably, believed that the Chinese government has overall control of banks, companies, and the market, which in turn may influence opinions from CCRAs. One interviewee admitted that, “…the government has the overall control of the Chinese financial market, and most famous and big organisations are SOEs with support from the government” (SD1). Because of this overarching government power, the objectivity of CCRA ratings could be influenced by the government’s favour and opinions. CCRA customers held similar opinions, with one stating that, “Whether CCRAs can live or die would depend on the government decisions, ..., of course, these agencies have to listen to the government and make sure the government is happy about them” (CC3). Moreover, interviewees who were investors and members of the public questioned the credibility and reliability of ratings. IP2 asked, “How can these agencies do their job properly on a rating product from SOEs or the government, when SOEs and the government have more power than them?” Another interviewee expressed concern about the power of SOEs: “It might be meaningless to rate SOEs while they are ‘too big to fall’ anyway, and most CRA customers are SOEs” (IP5). In addition, some CCRA managers expressed concerns about the relationship between certain CCRAs and the government, for example: “The vision and mission statements from CCRAs indicate the main vested interest of a CRA. They will be more influenced by the Chinese government if they share the same interest with the government. The government tended to show more interest towards the agencies who established a mission to protect the security of the Chinese financial market, with a slogan to build a nationalised agency for the Chinese economy...” (CS5) CS1 suggested that, “Some CRAs may have this kind of conflict of interest because they are too localised, and are controlled by the local government. These kinds of conflicts of interest would be moderated if CRAs were driven by the fear of losing any market share.” Moreover, CS2 suggested that this kind of power struggle could exist in most CRAs: “This kind of conflict of interest could happen in any global CRA, and there is empirical research that proves global CRAs also have biases or conflicts of interest.
Like the Chinese government, the SEC has already tried to gain more control over CRAs within their market by releasing more regulations. However, in China, CCRAs could reduce this kind of conflict of interest by entering the global market, instead of only holding onto the Chinese market share.”

5.2.2 Perceptions about CCRAs’ Performance

Most participants found it too difficult to judge CCRAs’ performance in relation to certain attributes, due to a lack of clear performance indicators, and they suggested there should be micro-attributes associated with each attribute, which would make the meaning of each attribute more specific. However, CS5 indicated that,


“There are varied dimensions, policies and requirements within each attribute, so it is difficult for me to comment on the performance of each attribute. My opinion is based only on my understanding of the requirements from the PBC and the government.” A majority of participants reported that they believed the overall performance of CCRAs was acceptable. However, for certain attributes, the perceptions of performance by interviewees who were investors, members of the public and CCRAs’ customers were mostly negative. Perhaps unsurprisingly, all the interviewees who were CCRA staff made positive comments. All interviewees, with the obvious exception of the CCRA staff, admitted that they did not pay attention to CCRAs’ performance, but did observe their failures through information released by the media. For example, IP5 believed that CCRAs should be market predictors, and suggested that CRAs are useless if they cannot predict approximately accurate default rates, although he was fully aware that global CRAs have stated that it is not their job to predict, and have denied any responsibility for the subprime crisis. However, CS1 claimed: “CCRAs have performed very well, considering the fact that we do not have as long a history as the global CRAs. The stability of ratings from CCRAs has been better than that of global CRAs, and the accuracy of sovereign ratings of certain countries has been proved to be more reliable than the ratings provided by global CRAs.
Moreover, CCRAs have greater knowledge of local companies and better access to them than global CRAs, who in turn might have biased opinions and language barriers; these global CRAs also do not have sufficient knowledge about the local government and industry.” Nevertheless, participants from the investors / public interest group pointed out disadvantages of qualitative-based rating methodologies: “For example, the ratings from CCRAs about the bonds of the national railway could be lacking in credibility. Moreover, these rating methodologies usually combine qualitative and quantitative research. It could be very easy to adjust the qualitative data to make the overall rating higher. However, the rating methodologies used for internal ratings within banks are more quantitative-based and rest on sufficient historical data. As such, internal ratings within banks should be more reliable.” (IP3) Many more performance issues were mentioned by the investors / public group. For example, IP6 alleged that CRAs should not exist anymore if they are paid by issuers. She argued that,


“CRAs are useless in the current financial market. There is no need to have a rating from CRAs or CCRAs. The subprime crisis in 2008 is a great example to prove this point. They caused great economic damage in the global market and ruined the balance of the system.” Similarly, IP2 claimed that, “CCRAs were useless because they lacked credibility...and only did jobs for issuers, instead of providing useful information to the market.” IP3 believed that credit ratings given by CCRAs would be similar to ratings suggested by banks. Moreover, IP1 and IP4, who are bank managers, posited that internal ratings from banks would be more reliable than the ratings from CCRAs.

Although CCRA failure was highlighted, only one interviewee did so, noting that, “The performance of CCRAs has not been very good. Shanghai Far East can be an example, as it made a very serious mistake with a bond rating, the ‘Fuxi CP01’ …in 2006” (CC3). This was probably because the ‘Fuxi CP01’ error is a well-known case within the CCRA industry, but is less well known by the public.

Responsibilities relating to rating accuracy / quality / integrity appeared to be the most important, according to the interviewees. These responsibilities associate with attribute (a) in the interview protocol (Appendix 19), attribute (d) in the proposed model of the CCRAEG (Figure 3.50), and attribute 2 in the SLR (Section 3.3.3.2). Section 7.3.3 will discuss the differences among these attribute lists and how they were established. Section 7.4 will provide a cross-analysis of the results collected from the interviews, SLRs, and questionnaires, with reference to the perceptions and expectations of each interest group. This attribute was extensively discussed by interviewees, despite the fact that participants found the concept difficult to define, with IP3 commenting that, “The concept of rating quality is complex and contains multiple dimensions. Investors would expect CCRAs to be able to predict the probability of default, and to form an accurate picture of the companies’ financial or industry background. Their rating methodologies have to be robust; their ratings need to be reliable. Their rating process should be operated to the highest standards with qualified staff. Rating teams should be composed of experts, who are competent and highly experienced. There are too many elements needing to be considered, and the concept of rating quality can be understood differently if a specific context or definition of the term is not given. For example, the quality of CRA staff, the quality of communication, the quality of timeliness, the quality of the content of the rating report, etc.” CCRA managers appeared to be aware of some of the expectations from the market, with CS5 admitting that “Ratings need to be useful, and it is not what a professor or researcher from the university can do, because they only know theories, which are not practical.” He believed that ‘usefulness’ is the most crucial characteristic, according to feedback and comments from his clients. This attribute was mentioned by many participants from the interest groups of investors and the public, with IP6 describing it as “a central reason” for investors, since it reduces the information gap.

Furthermore, three other attributes were also mentioned in line with their expectations of CCRAs’ performance, namely ‘stability’, ‘innovation’, and ‘historical data and experiences’. CS2 acknowledged a disadvantage of CCRAs, declaring, “The quality of ratings by CCRAs may be unreliable due to lack of historical data...one of the advantages of global CRAs.” CC5 stressed that CCRAs should be “innovative on service portfolio and rating products, with flexible rating approaches that should contribute to better performance.” Alternatively, investors and the public appeared to be more interested in the stability of ratings, with one insisting that, “CCRAs should provide a rating with long-term predictions” (IP4).

5.2.3 Knowledge Gap

The majority of interviewees from the interest groups of customers, investors and the public indicated that they were not aware of, or familiar with, the regulations concerning CCRAs. In addition, they also indicated a failure among CCRAs to provide sufficient information for investors to judge the quality of their ratings. As such, a Knowledge Gap was found to exist in relation to their understanding of the regulations, role, and function of CCRAs. 20% of interviewees (IP1, IP2, IP3, CC6) were uncertain about the role and function of CCRAs. Moreover, the definitions and concepts of credit checking and credit rating were indicated as obscure and confusing, because the government seemed to give a different definition of the scope of the credit checking business. In particular, three interviewees (who were investors or members of the public) indicated that they were unsure about the nature and definition of credit ratings. IP6 described a credit rating as “a rating about ‘credit’ with lots of financial figures from financial statements, reports and companies’ financial performance”. IP8 admitted that, “The differences between credit checking report and credit rating report are not clear.
The term credit rating appears to be similar to credit checking.” IP7 stated, “This subject was not explained very well in the courses about economics or finance while I was doing my undergraduate degree in finance, or even my Master’s degree in Finance. The courses were more about personal financial management. Credit checking might be covered in the courses, but not in much detail. I believe CRAs serve as information providers between borrowers or issuers and investors.” The government uses ‘Zhengxin’ to describe any business relevant to the credit information industry. SD1 indicated that, “Definitions of terms are provided in the regulations, which can be found on the PBC website. These definitions are provided from a regulatory perspective, and researchers in academia, especially western researchers from other countries, might use different terms. We prefer to use ‘Zhengxin’, because it is a term that can be understood by the public easily. ‘Zhengxin’ has a very long history in China, since it was stated in ‘Zuozhuan’ around 770BC…it means the collecting of information from the public by the government or authorities.” One comment made by a CCRA manager about the multiple meanings of terms within the industry was unexpected. The manager indicated that the government itself was also confused about the concepts: “The credit checking business had a great impact on the CCRAs in the past…even the government is not very sure about the difference between ‘credit checking’ and ‘credit rating’” (CS4). As such, the term ‘Zhengxin’, which originally means ‘credit checking’, could confuse the public and lead them to believe that credit rating, credit investigation, credit registers and other credit information businesses are all forms of credit checking.

5.2.4 Lack of Regulation, Over-Regulation and a Complex Supervision System

Due to a perceived lack of regulation, some interviewees proposed that the legal system should punish CCRAs when they make mistakes, and so propel them into taking greater responsibility for the accuracy and credibility of ratings (IP1, IP4, IP6, and CC4). They believed that CCRAs should be more exposed to civil liability, instead of simply being criticised over "a matter of opinions", and that liabilities should be imposed in an effort to improve rating quality and investor protection.
IP6 asserted that “There is a lack of regulation concerning the supervision and management of CRA industry, which resulted in failure and chaos happening many times within the financial market.”

Most interviewees (with the exception of CS4, CC3, IP2, and IP3) maintained that the payment approach should be investor-pay instead of issuer-pay. CS4 suggested it would be problematic to use the investor-pay approach for four reasons. First, the rating fee charged would have to be differentiated with reference to different types of information and subscribers. Second, demand for ratings by investors could be very low if they had to pay for rating reports. Third, it would not be cost-efficient for CRAs to try to control the copyright of their information and prevent their subscribers from sharing or re-selling it. Finally, the investor-pay approach would not satisfy the aim of reducing the cost of establishing an efficient financial market under asymmetric information.

In addition, the recorded expectations and perceptions of CCRAs and regulators on rating fees and rating approaches were found to differ. In terms of rating fees, although two CCRA officers (CC4 and CC5) complained that a one-size-fits-all rating fee system would not be scientific, since the effort and time CCRAs spend on a rating project depend upon the size of the company, two officers from the supervision departments (SD1 and SD2) argued that a fixed rating fee (e.g. 250,000 RMB) can help reduce ‘tipping and tying’ in the industry and provide an efficient platform for fair competition. Nevertheless, two directors and managers from CCRAs claimed that the rating fee is far too low, given the amount of work required for the continuous monitoring routine after the initial rating. Moreover, CC3 noted that, “how to charge fees for each rating report would be too complicated to determine...because the types of financial products are too many and too complex.”

In terms of rating approaches, participants perceived different disadvantages of the investor-pay approach, although they suggested it would be better than the issuer-pay approach. They predicted that most users or readers of rating reports would rather collect and analyse information themselves from different sources. As such, transferring from an issuer-pay to an investor-pay system would make little difference, especially as the financial market now seems much more transparent than in the past, partly because of the use of various technologies and software (CC3, IP2, and IP3). However, most other participants from the interest groups of investors / public and CCRAs’ customers insisted that the government should change the payment approach to an investor-pay basis.

Differing opinions were expressed by CCRA staff with regard to these two issues. CC4 and CC5 claimed that rating fees are over-regulated in the Chinese legislative system. They contended that there is a lack of flexibility in rating payments, which in turn limits the scope of research and data collection in the rating process when it comes to rating entities involved in large-scale investment activity. Conversely, IP3 suggested that rating payments should come from both investors and issuers, so as to reduce the conflicts of interest and offset the financial problem caused by the small number of subscribers.


In addition, one of the CCRA managers, CS5, postulated that certain requirements on rating reports were impractical. One example is the PBC’s requirement that all rating reports follow the same format and structure: “…even the title and sub-title have to be the same as those listed in the regulation. Innovation on rating methodologies and service portfolios is restricted by the report-format requirements from the supervision departments.” Finally, the complexity of the supervision system was highlighted by SD1, who claimed that a system overseen by multiple supervision departments was too confusing and would prove difficult to change within a short period if required. He stated that, “…although the PBC is the main supervision department, different requirements were released in respect of diverse types of financial products from various departments, and these were only suitable for the economic and technological environment of the past.” Moreover, CC2 declared that, “The requirements from the supervision departments are too complicated. Any issuers or companies usually would not have time to try to find out what the code of conduct of CCRAs is, or even assess the quality of each CCRA... they would just follow instructions from the PBC, and order ratings from one of the CCRAs on their [the PBC] list.”

5.2.5 The CCRA Market is not a Two-Sided Market

The concept of a two-sided market is a well-developed theory within the video game industry (Parker and Van Alstyne, 2005). Such a market usually involves interactions with two distinct types of customers. Although two-way relationships with two types of customers have been examined from an economic perspective in other industries, there is limited research on this within the credit rating industry, according to Ponce (2012). CCRAs do not appear to operate in a standard two-sided market, according to the descriptions provided by Filistrucchi et al. (2014).
This is due to the unique characteristics of the CCRA industry. Filistrucchi et al. (2014, p.296) defined the two-sided market following Evans (2003), stating that, “A two-sided market is a market in which a firm acts as a platform: it sells two different products to two groups of consumers, while recognizing that the demand from one group...depends on the demand from the other group and, possibly, vice versa.” Considering that CCRAs actually provide both rating services and credit checking services, this market does fit the definition (Figure 5.1).


Figure 5.1: The Standard Two-Sided Market of the CCRA Industry (Issuer-Pay Approach)

However, according to the guidance in the IOSCO code on conflicts of interest and ancillary services, CRAs in the global industry should not draw revenue from all platforms. “In two-sided networks, cost and revenue are both to the left and the right, because the platform has a distinct group of users on each side” (Eisenmann et al., 2006, p.2). Investors and the general public should be able to access rating reports for free on any CRA’s website. As such, in the global CRA market, investors and the public are not CRAs’ customers, because they incur no cost and CRAs receive no revenue from them.

If revenue appears on any of the platforms in the CCRA market (issuer-pay approach), it may only be due to the structure of other relationships:

(1) the involvement of the government and associations in the regulation system, through enforcement of the implementation of policies concerning CCRAs. This was reviewed in Sections 2.2.2, 2.2.3.1, and 3.4.2; low rating demand and historical issues have restricted the development of CCRAs, and CCRAs’ managers indicated that they are trying to survive;

(2) the economic background that has allowed the government to form close relationships with SOEs (Quanli Guanxi Web). This was explained in Sections 2.2.3.2 and 3.4.2; and,

(3) the unidentified relationship between credit rating and credit checking. This was analysed in Sections 2.2.1 and 2.2.3.3.

Therefore, the Chinese CRA industry is not two-sided, because it possibly has three or more platforms, and it appears to be much more complex than the global CRA industry in respect of these other relationships (Figure 5.2). The pattern of all these relationships (or Guanxi web) within the CCRA industry may deviate significantly from the pattern in the CRA sector in the USA, with overlaps among stakeholders. The government has a special relationship with SOEs, which differs from that in western countries (this will be reviewed in Section 5.3 with reference to comments from interviewees).


Figure 5.2: Multi-Dimensions of CCRA Market (Issuer-Pay Approach)

The government, associations, investors and the public can all be CCRA customers, and there are overlaps amongst these groups. As in the western regulatory system for CRAs under the issuer-pay approach, there are two one-way relationships: (1) the Chinese government validates CCRAs through its recognition lists for each agency, and supervises their implementation of policies; and, (2) investors and the public can check rating results through CCRA websites and published rating reports. The only two-way relationship is that between CCRAs and their clients (issuers).

Moreover, CCRAs with an investor-pay approach also exist in China, with stakeholders from the government or associations (Section 2.2.2.4); however, they sell ratings only to investors and the public. CC3 explained that, “…relationships among CRAs and other departments or organisations can be more problematic than in the western countries, because the power of local government and associations cannot be ignored.” Likewise, IP8 contended that, “This is definitely not a two-sided market anymore, when investors do not need to pay for the rating results. There is only one market between CRAs and CRAs’ customers, and the government gives them [CRAs] power through the government recognition list...the public can find rating results from their websites if they want to.” Therefore, the CCRA industry is not a standard two-sided market, due to the overlapping of different interest groups (associated with the Guanxi Web) and the multiple payment approaches. Moreover, a two-sided structure will only arise within the CRA industry in other countries if CRAs provide both rating services and credit checking or ancillary services from which they can receive revenue. Ideally, according to the IOSCO (2008) code and the Chinese regulations, ratings should be provided for free to investors and the public.


5.2.6 Other Mismatched Expectations

Most interviewees declared that the history and background of CCRAs were the most important factors influencing CCRA performance (CS1, CS2, CS3, CS4, CS5, CC1, CC5, IP3, IP4, IP5, SD1, and SD2). Expectations of the role of CCRAs varied among the different interest groups. First, considering the need for a regulatory licence, SD2 asserted that, “the Chinese government might have been too domineering, especially over money and people.” IP7 acknowledged that, “The Chinese government has dominated the accounting and auditing process among different industries for thousands of years, and it still controls them too tightly.” However, CS5 argued, “We would have no business, or would even be bankrupt, if there were no requirement for issuers to have rating results.” Others disagreed; for example, IP7 contended that “an efficient market should be built on a platform with information freedom, not enforcement.”

Nevertheless, what the Chinese have learned from the subprime crisis might already affect the government's decision-making principles. IP3 suggested, “The choice between a self-regulatory or government-driven economy will be a debate that keeps going on in any country. I guess we learned the lesson from the subprime financial crisis in 2008...self-regulation would cause more problems for the financial market if there were no supervision of the CRAs at all. Everything has to be supervised and controlled, especially in China, which has complicated historical issues and cultural backgrounds that are completely different from western countries. The best solution would not be a completely self-regulatory or a completely government-driven economy; the best solution would be these two options combined. The question should be how to combine them, how to make self-regulation work, and what should be supervised.”

Second, opinions on the ‘stability’ of rating quality differed, with IP2 insisting that, “I believe stability is one of the most important attributes for the quality of ratings.” Conversely, CS3 argued that: “Ratings can be either stable or unstable. If a rating is stable, short-term changes will not be reflected, only the long-term default rate. If it is unstable, it will capture any changes in the economic or political environment at a certain time. It is impossible to make sure ratings are updated with any changes in the market and stable at the same time. I suggest that accuracy is the most important element; therefore, CRAs should prioritise accuracy rather than stability, since preferences from issuers and investors are always different.”


Third, participants from CCRAs held different opinions from people in the other interest groups on the requirements relating to competition, entry barriers, oligopoly and concentration. For example, CC3 proposed that, “Competition within the CCRA industry is not sufficient,” and IP3 suggested that, “The government should take action to increase the level of competition within this industry to make the market work more efficiently.” However, CS3 claimed that, “Competition in the CCRA industry is much higher than in the American CRA industry, because there are more than five CCRAs holding the biggest market share, while there are only three in America.” CS5 indicated that, “CCRAs are trying to survive; therefore, I believe the competition level might be too high.”

Finally, expectations concerning explanations of rating methodologies differed between CCRA staff and participants from the other interest groups. Participants who were investors and members of the public expected CCRAs to provide more information about their rating methodologies, with IP8 complaining that, “CCRAs would not tell the public or investors any information, but give a symbol, such as A, AA, or B, because investors and the public are not CCRA customers.” CCRA customers agreed with this general view; however, from their perspective, the explanation of the differences between methodologies was too generic. According to CC4, “…although there is sometimes an indication of the differences between methodologies among industries, these lack sufficient detail to be considered useful, for example, the influence of any of these differences on the result.” However, one CCRA CEO held a completely different opinion about the transparency of rating methodologies. CS5 argued, “Most information about methodologies and processes is confidential, and should not be available to the general public. This is a weakness in the Chinese regulatory system. There is a lack of regulation on data protection and copyrights. We have many concerns about what can be published and what might be better to keep to ourselves. We will tell you if you are our clients, but we will not publish it for people who are not involved in our rating business. We would like to save the ‘hassle’ [copyright and data protection issues] and publish less to protect our confidentiality.”


5.3 Guanxi Web, Ethical Relationships and Conflicts of Interest – Study 3

It has been shown that Guanxi, the 'relationship web' in Chinese, plays an important role in business ethics (Huang et al., 2014). The Guanxi web is complex, with multiple dimensions, and is closely bound up with business relationships. From an ethical relationship perspective, it is important to understand the maze of rent-seeking in the Quanli Guanxi web (which reflects the authoritarian state’s organisational hierarchy) and favour-seeking in the Qinyou Guanxi web (the type of interpersonal relationships among common people) when entering into business in mainland China (Su and Littlefield, 2001). According to a study of seventy Chinese professional accountants in Hong Kong, ethical judgement was restricted by the personal Guanxi web. It was also noted that this impacted upon the management of conflicts of interest, and was deemed to be as important as moral philosophy and training. This was due to three factors being present within the ethical relationship mechanism: (1) the Qinyou Guanxi web, (2) the obligation to return favours, and (3) Mianzi (which signifies honour or status in a hierarchical society) (Au, 2014).

The problem of conflicts of interest appears to be more complicated in China, due to the different dimensions of relationships, the culture of favour-seeking (Guanxi), and the combination of credit checking and credit rating in some CCRAs. First, all three CCRA managers or directors (in Study 3) expressed concern about the special relationships CCRAs have with local governments, especially since local governments have asked some CCRAs to produce credit checking reports for certain industries: “One CCRA has been asked for a favour by the local government to produce an investigation report, and the local government provided them with authority like a regulatory license in the local industry.” The ‘Quanli Guanxi web’ among local governments, associations, CCRAs, and issuers could cause more conflicts of interest, and reduce the quality of the rating reports that CCRAs provide for other clients (Figure 5.3). This Guanxi web exists in all accounting and auditing relevant industries, due to historical influences on the development of China's economic and financial system. One CCRA manager reported, “The accounting and auditing of all companies and individuals were controlled by the government. Currently, it is still not truly independent, because the government has a responsibility to oversee problems and issues within the industry and banking system. Most accounting, auditing and rating firms are still influenced by the government, even after the sorting and clearing period in the 1980s and 1990s.”


Figure 5.3: Associations with the Government and Associations

In line with researchers such as Liu (2009), CCRA managers and directors also perceived that CCRAs have to rely on the government to obtain customers, funding and support. A director of a smaller CCRA admitted: “CCRAs would be the same as the accounting firms in China, and they would not be able to be completely independent from the government. The government would hope all SOEs have the highest ratings, because the Chinese economy is determined by the performance of SOEs. As such, some CCRAs will want to give higher ratings to SOEs to make sure they have good relationships with the government, and make them happy. Moreover, they would wish to have more SOEs... as their customers.”

However, the manager of a bigger CCRA indicated, “The government gives the job to CCRAs to investigate and provide ratings, and they would hope that CCRAs provide good quality ratings of any type of rated entity. The government would like to know about any possible problems first, so they can solve these problems as soon as possible before they get too big. Therefore, I believe all CCRAs have strong incentives to provide good quality ratings, even if SOEs who have relationships with the government are our clients.”

Second, all three CCRA managers agreed that CCRAs might face more disadvantages in terms of conflicts of interest than global CRAs. Although they felt that CCRAs performed well in terms of independence, the conflicts of interest caused by relationships among CCRAs and subscribers, issuers or investors could be a possible issue. This was indicated as arising through ‘tipping’, ‘notching’ and ‘tying’ by a sub-group of a CRA company (where A, B, and C are sub-groups of the same firm).


One manager asserted that if CCRAs provide both a rating business and a credit checking business (Figure 5.4), then tipping, notching and tying could happen, according to the theoretical understanding of these possible relationships. He acknowledged that, “Any CRA could disclose confidential information of one client to the others [known as tipping]; they could ask their clients to purchase additional services with threats of lowering the rating [known as tying]; they could also give a lower rating than the other CRAs or CCRAs for the sake of ‘good quality’ ratings [known as notching], to see whose rating is the lowest. The lower, the better, because it could appear more strict; however, there are internal policies and governmental regulations about information management, how to deal with confidential information, how to avoid conflicts of interest, and many other requirements. I do not think these kinds of things could happen under the existing requirements. Additionally, CCRAs are afraid of losing their reputation, and we would not be stupid enough to do things like this; we have special relationships with our clients but we will still fulfil our responsibilities according to the law.”

Figure 5.4: Associations with Other CCRAs

On the other hand, CCRAs have to establish relationships with their clients. One CCRA manager indicated, “Clients and CCRAs are somewhat tied together, like in any other industry...all companies try their best to attain and retain relationships with customers through the marketing department. Nevertheless, in our company, rating committees and rating teams are separate from the marketing department, which reduces the possible problems of conflicts of interest.” Furthermore, another CCRA manager believed that credit rating and credit checking can be two types of business that exist within credit information companies, because these businesses are similar. He suggested that, “CCRAs are information providers. Credit-reporting companies are also information providers. The nature of the business between credit rating and credit reporting is the same. Our enterprise is very big... and contains several subsidiaries…; however, they are separate companies that provide ratings or credit investigation services, with completely different analysis teams, management teams, and directors or managers. We have very strict internal policies about data management and confidential information. Every step and every loop is scrutinised by our management team. I cannot see any problem at the moment.” Moreover, the director of a smaller CCRA declared that, “The credit rating business originated from the credit checking business in China, and most CCRAs either provide a credit checking business, or have partnerships and shareholder relationships with credit checking firms. This is embedded in history, and they cannot be separated.”

5.4 Summary

According to the opinions of the interviewees, CCRAs were not viewed as gatekeepers in the local financial market, and the CCRA market was not considered a two-sided market (because of the multiple dimensions of relationships embedded within the credit checking and credit rating businesses). From this special pattern, a set of expectation attributes was revealed by participants from four interest groups (CCRAs’ staff, CCRAs’ customers, investors / the public, and officers in supervision departments). Clearly, expectations from participants varied and focused on multiple attributes.

A total of twenty-three attributes, which are listed in Figure 5.5, were generated from this research at the initial stage, although seventy-one attributes were found by the final stage of the SLR analysis. Fourteen of these initial attributes were collected from empirical documents about people's perceptions of CRAs (Section 3.3.2); six were highlighted in the international and Chinese legislation (Section 2.2.2); and the additional three resulted from the interview findings regarding participants’ expectations about the ideal role of CCRAs and their expectations of CCRAs’ performance. Further implications of these results can be summarised as follows.

First, these three sets of attributes (1-14; 15-20; 21-23) reflect differences in perception between western society and Chinese society about the financial market, as well as requirements from regulations that were not reviewed in the literature. This is due to the variety of sources from which these attributes were collected: attributes 1-14 were generated from the western-society-based CRA literature; attributes 15-20 were revealed according to the regulations; and attributes 21-23 were collected from expectations and perceptions within the CCRA industry. These differences are indicated in Section 7.3.3. The three new attributes identified in the interviews are: (1) experiences / historical data, (2) innovation, and (3) stability. They seem to reflect the impact of global CRAs on the perceptions of Chinese participants. This is due to the advantages enjoyed by global CRAs compared with CCRAs: they possess more extensive historical data, as well as more in-depth experience within the financial market. For these reasons, global CRAs are perceived as being more reliable.

Figure 5.5: Attributes of Expectations (Initial Analysis)

From empirical data:
1. Accuracy / Quality / Integrity
2. Monitoring / Updating / Timeliness
3. Rating process / Procedure
4. Independence / Avoidance of conflicts of interest / Favour of interest
5. Responsibilities to investing public and issuers / Transparency of information / Communication
6. Competition / Entry barrier / Oligopoly / Concentration
7. Unsolicited and solicited ratings
8. Implementing of policies
9. Purpose of rating / Rating license / Rating trigger / Multiple or double rating policies
10. CRAs’ understanding / Staff competence
11. Usefulness
12. Rating fee
13. Robust methodologies
14. CRAs’ role / Accountability / Reliability / Credibility

From regulations:
15. Treatment of confidential information / Internal policies
16. Consistency in methodologies and process or procedure
17. The structure, content, and format of rating report
18. DCO (Designated Compliance Officer)
19. Ancillary service
20. Staff competence / Recruitment procedure and requirement

From interviews:
21. Experiences / Historical data
22. Innovation
23. Stability

Note: Attributes were used for the thematic analysis.

Second, according to feedback from interviewees, there should be micro-attributes associated with each of these attributes, providing more detailed requirements in terms of performance indicators. Because the meanings of these attributes are very broad, micro-attributes can provide a more detailed framework for evaluation. As such, most of the suggested duties in the questionnaires were derived from regulations, since the expectations about CCRAs expressed in the interviews were too complex and varied.

Third, the contribution of the interview results to the questionnaire design was the establishment of items E5 and E11 for identifying unreasonable expectations from the public. The interview results also informed part of the SLR design, since interviewees indicated that historical issues are crucial to the development and perception of CCRAs. Consequently, a historical review of CCRAs was conducted and discussed in Section 3.4.3.

Fourth, conflicting opinions were expressed by interviewees about the transparency of rating methodologies, the stability of ratings, and the need for self-regulation in the CCRA sector (Section 5.2.6). They discussed the special pattern of ethical relationships, with the features of a non-standard two-sided market and an indication that CCRAs did not perform as gatekeepers. This prompted the researcher to review further literature through the SLRs.

Finally, these interview results confirmed that Perception Gaps exist among the four interest groups about CCRA performance, regulations and their role within the Chinese financial market.


CHAPTER 6 FINDINGS FROM QUESTIONNAIRES

6.1 Introduction

This is the second of two chapters in which the survey (interview and questionnaire) findings are described and analysed. This chapter details the measurement results of each CCRAEG gap for each attribute, using a three-dimensional model. The hypothesis test results are reported in the first part of this chapter. This is followed by an analysis of each attribute in line with the three dimensions of the CCRAEG model. (Qualitative and quantitative data from the SLR were used to develop the CCRAEG model in Sections 3.2 and 3.3.)

6.2 Hypothesis Test Results

As explained in Section 4.2, there are four main hypotheses for gap components, as well as hypotheses for Perception Gaps and attributes. These hypotheses were proposed (across three dimensions) about the structure and content of the CCRAEG model. In Section 4.3.8.2, suitable statistical tests were identified for verifying these hypotheses. This chapter reports the test results used to verify the CCRAEG model. The tested CCRAEG model is illustrated in Section 6.2.4, which satisfies research objective 6 as stated in Section 1.3.

6.2.1 Verification of gap components – the first dimension

A median analysis indicated that all proposed gap components of the CCRAEG exist, although no significant differences were found between certain boundaries (with the exception of the Reasonable Expectation Gap, and the Performance Gap according to the perceptions of CCRAs and CCRAs’ customers), as shown by the Wilcoxon Signed-Rank test results at a significance level of 0.00002 (Figure 6.1).

Median analysis results are demonstrated for each interest group in Figures 6.2, 6.3, 6.4, and 6.5, with reference to the analysis method explained in Section 4.3.8.2.5. These results confirmed that differences exist in the medians between: (1) expectations of what CCRAs should do and the perceived performance of these duties (referred to as the Expectation-Performance Gap); (2) expectations of what CCRAs should do and SLR results on what CRAs and CCRAs should do (referred to as the Knowledge Gap); (3) SLR results on what CRAs and CCRAs should do and the duties required by the IOSCO code (referred to as the Reasonable Expectation Gap); (4) whether or not these duties are required by the IOSCO code and whether or not they are required by the Chinese regulations (referred to as the Regulation Gap); and (5) duties required by the Chinese regulations and the perceived performance of these duties (referred to as the Actual Performance Gap). Medians were generated excluding all responses of “not sure” and “unable to judge” from participants.
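These boundary comparisons rest on the Wilcoxon Signed-Rank test applied to paired per-duty scores. A minimal sketch of one such comparison is given below, using SciPy; the Likert-style scores and variable names are hypothetical illustrations rather than the study's data, and the 0.00002 threshold mirrors the significance level used in this section:

```python
from scipy.stats import wilcoxon

# Hypothetical per-duty scores (5-point Likert scale) for one interest
# group: expectations about each duty vs. its perceived performance.
# The values below are invented for illustration only.
expectation = [5, 4, 5, 4, 5, 4, 5, 5, 4, 5, 4, 5]
performance = [3, 3, 3, 3, 4, 3, 4, 4, 3, 4, 3, 3]

# The Wilcoxon Signed-Rank test compares the paired samples without
# assuming normality, which suits ordinal questionnaire data.
stat, p = wilcoxon(expectation, performance)

# A gap is reported only when p falls below the conservative
# significance threshold of 0.00002 used for these hypothesis tests.
gap_exists = p < 0.00002
print(stat, p, gap_exists)
```

In this invented sample every expectation score is at least as high as the matching performance score, so the test yields a small p-value, though not necessarily small enough to clear the conservative 0.00002 threshold.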

Of the eighty-three duties whose performance was perceivable (by the majority of participants), 47.0% showed no difference between the medians of expectations and of perceived performance according to CCRAs, 0.0% according to CCRAs’ customers, 10.8% according to investors/the public, and 19.3% according to regulators. 32.5% of these eighty-three suggested duties were reasonable expectations according to the SLR results, although they were not in the IOSCO code. 65.1% were not in the Chinese regulations but were stated in the IOSCO code. CCRAs perceived an Actual Performance Gap (relative to what is required by the Chinese regulations) for 9.6% of these duties, CCRAs’ customers for 34.9%, investors/the public for 32.5%, and regulators for 24.1%.

Figure 6.1: Results for Hypotheses H1-H4 (The 1st Dimension)

[Table: Wilcoxon Signed-Rank Z values, significance levels, and results (gap exists / no significant gap) for each gap component (H1: Knowledge Gap; H2: Reasonable Expectation Gap; H3: Regulation Gap; H4: the Actual Performance Gap; and the Expectation-Performance Gap), by interest group (CCRAs, CCRAs’ customers, investors / public, and regulators). Reported Z values ranged from -0.041 to -6.815; four of the comparisons showed a gap at the 0.00002 level.]


Figure 6.2: The Median Analysis of Gap Components (CCRAs)

Knowledge gap:
  Too high expectation: 2 duties (2.4%)
  Too low expectation: 4 duties (4.8%)
  Expectation = SLR results of what should be CRAs' or CCRAs' duties: 77 duties (92.8%)
Reasonable expectation gap:
  Reasonable expectation: 27 duties (32.5%)
  Unreasonable expectation: 6 duties (7.2%)
  SLR results of what should be CRAs' or CCRAs' duties = duties required in the IOSCO code: 60 duties (72.3%)
Regulation gap:
  Lack of regulation: 54 duties (65.1%)
  Possibly over-regulated: 32 duties (38.6%)
  Duties in the IOSCO code = duties in the Chinese regulations: 7 duties (8.4%)
Performance gap:
  Actual performance gap: 8 duties (9.6%)
  Duties required in the Chinese regulations < perceived performance: 52 duties (62.7%)
  Duties required in the Chinese regulations = perceived performance: 23 duties (27.7%)
  Expectation > perceived performance (expectation-performance gap): 40 duties (48.2%)
  Expectation < perceived performance: 4 duties (4.8%)
  Expectation = perceived performance: 39 duties (47.0%)

Figure 6.3: The Median Analysis of Gap Components (CCRAs’ Customers)

Knowledge gap:
  Too high expectation: 3 duties (3.6%)
  Too low expectation: 2 duties (2.4%)
  Expectation = SLR results of what should be CRAs' or CCRAs' duties: 78 duties (94.0%)
Reasonable expectation gap:
  Reasonable expectation: 27 duties (32.5%)
  Unreasonable expectation: 6 duties (7.2%)
  SLR results of what should be CRAs' or CCRAs' duties = duties required in the IOSCO code: 60 duties (72.3%)
Regulation gap:
  Lack of regulation: 54 duties (65.1%)
  Possibly over-regulated: 32 duties (38.6%)
  Duties in the IOSCO code = duties in the Chinese regulations: 7 duties (8.4%)
Performance gap:
  Actual performance gap: 29 duties (34.9%)
  Duties required in the Chinese regulations < perceived performance: 54 duties (65.1%)
  Duties required in the Chinese regulations = perceived performance: 0 duties (0.0%)
  Expectation > perceived performance (expectation-performance gap): 81 duties (97.6%)
  Expectation < perceived performance: 2 duties (2.4%)
  Expectation = perceived performance: 0 duties (0.0%)


Figure 6.4: The Median Analysis of Gap Components (Investors / Public)

Knowledge gap:
  Too high expectation: 3 duties (3.6%)
  Too low expectation: 8 duties (9.6%)
  Expectation = SLR results of what should be CRAs' or CCRAs' duties: 72 duties (86.7%)
Reasonable expectation gap:
  Reasonable expectation: 27 duties (32.5%)
  Unreasonable expectation: 6 duties (7.2%)
  SLR results of what should be CRAs' or CCRAs' duties = duties required in the IOSCO code: 60 duties (72.3%)
Regulation gap:
  Lack of regulation: 54 duties (65.1%)
  Possibly over-regulated: 32 duties (38.6%)
  Duties in the IOSCO code = duties in the Chinese regulations: 7 duties (8.4%)
Performance gap:
  Actual performance gap: 27 duties (32.5%)
  Duties required in the Chinese regulations < perceived performance: 52 duties (62.7%)
  Duties required in the Chinese regulations = perceived performance: 4 duties (4.8%)
  Expectation > perceived performance (expectation-performance gap): 66 duties (79.5%)
  Expectation < perceived performance: 8 duties (9.6%)
  Expectation = perceived performance: 9 duties (10.8%)

Figure 6.5: The Median Analysis of Gap Components (Regulators)

Knowledge gap:
  Too high expectation: 3 duties (3.6%)
  Too low expectation: 1 duty (1.2%)
  Expectation = SLR results of what should be CRAs' or CCRAs' duties: 79 duties (95.2%)
Reasonable expectation gap:
  Reasonable expectation: 27 duties (32.5%)
  Unreasonable expectation: 6 duties (7.2%)
  SLR results of what should be CRAs' or CCRAs' duties = duties required in the IOSCO code: 60 duties (72.3%)
Regulation gap:
  Lack of regulation: 54 duties (65.1%)
  Possibly over-regulated: 32 duties (38.6%)
  Duties in the IOSCO code = duties in the Chinese regulations: 7 duties (8.4%)
Performance gap:
  Actual performance gap: 20 duties (24.1%)
  Duties required in the Chinese regulations < perceived performance: 43 duties (51.8%)
  Duties required in the Chinese regulations = perceived performance: 20 duties (24.1%)
  Expectation > perceived performance (expectation-performance gap): 67 duties (80.7%)
  Expectation < perceived performance: 0 duties (0.0%)
  Expectation = perceived performance: 16 duties (19.3%)


Moreover, with reference to the further gap composition analysis in Figures 6.6, 6.7, 6.8, and 6.9 of the expectations and perceptions from all interest groups: (1) the biggest contributing component is the Regulation Gap, with between 63.2% and 80.0% of the duties presenting medians of expectations greater than the medians of perceived performance of the relevant duties; (2) duty E1 (according to CCRAs and investors/the public), or both duties E1 and E2 (according to CCRAs’ customers and regulators), appeared with a Knowledge Gap through participants’ expectations being too high; (3) duties B3, B6, B12, C4, and D4 appeared with actual performance deficiencies with reference to the perceptions of CCRAs, CCRAs’ customers, and investors/the public, but only duties B3, C4, and D4 in the expectations and perceptions of regulators. Duty E11 appeared with performance deficiencies only according to the expectations and perceptions of CCRAs’ customers and investors/the public.

Figure 6.6: Expectation-Performance Gap Composition Analysis (CCRAs)

Duties presenting an expectation > performance gap (40 duties): A2, A4, A6, A7, A8, A9, A12, A13, B3, B4, B6, B7, B8, B9, B10, B11, B12, B13, B17, C1, C4, C5, C7, C8, C9, C10, C11, C12, C13, C14, C16, C17, C18, D1, D2, D4, D7, D8, E1, E16
Contributed by the Knowledge Gap (1 duty, 2.5%): E1
Contributed by the Reasonable Expectation Gap (2 duties, 5.0%): B17, E16
Contributed by the Regulation Gap (32 duties, 80.0%): A2, A4, A6, A7, A8, A9, A12, A13, B4, B7, B8, B9, B10, B11, B13, C1, C5, C7, C8, C9, C10, C11, C12, C13, C14, C16, C17, C18, D1, D2, D7, D8
Duties with actual performance deficiencies (5 duties, 12.5%): B3, B6, B12, C4, D4

Figure 6.7: Expectation-Performance Gap Composition Analysis (CCRAs’ Customers)

Duties presenting an expectation > performance gap (76 duties): A2, A4, A5, A6, A8, A9, A11, A12, A13, A14, B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B13, B14, B15, B16, B17, B18, B19, B20, B22, B23, B24, B25, B26, B27, B28, B29, B30, B31, B32, C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12, C13, C15, C16, C17, C18, C19, C20, C21, C22, D1, D2, D3, D4, D5, D6, D7, D8, E1, E2, E3, E4, E11, E16
Contributed by the Knowledge Gap (2 duties, 2.6%): E1, E2
Contributed by the Reasonable Expectation Gap (20 duties, 26.3%): B15, B16, B17, B18, B19, B20, B22, B23, B24, B25, B26, B27, B28, B29, B30, B31, B32, E3, E4, E16
Contributed by the Regulation Gap (48 duties, 63.2%): A2, A4, A5, A6, A8, A9, A11, A12, A13, A14, B1, B2, B4, B5, B7, B8, B9, B10, B11, B13, B14, C1, C2, C3, C5, C6, C7, C8, C9, C10, C11, C12, C13, C15, C16, C17, C18, C19, C20, C21, C22, D1, D2, D3, D5, D6, D7, D8
Duties with actual performance deficiencies (6 duties, 7.9%): B3, B6, B12, C4, D4, E11

Figure 6.8: Expectation-Performance Gap Composition Analysis (Investors / Public)

Duties presenting an expectation > performance gap (66 duties): A1, A2, A3, A4, A5, A6, A8, A11, A12, A13, A14, B1, B2, B3, B4, B5, B6, B7, B8, B9, B10, B11, B12, B14, B15, B19, B20, B23, B24, B25, B26, B28, B29, B30, B31, C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12, C13, C14, C15, C16, C17, C18, C19, C20, C21, C22, D1, D4, D6, D7, E1, E3, E4, E11, E16
Contributed by the Knowledge Gap (1 duty, 1.5%): E1
Contributed by the Reasonable Expectation Gap (14 duties, 21.2%): B15, B19, B20, B23, B24, B25, B26, B28, B29, B30, B31, E3, E4, E16
Contributed by the Regulation Gap (45 duties, 68.2%): A1, A2, A3, A4, A5, A6, A8, A11, A12, A13, A14, B1, B2, B4, B5, B7, B8, B9, B10, B11, B14, C1, C2, C3, C5, C6, C7, C8, C9, C10, C11, C12, C13, C14, C15, C16, C17, C18, C19, C20, C21, C22, D1, D6, D7
Duties with actual performance deficiencies (6 duties, 9.1%): B3, B6, B12, C4, D4, E11


Figure 6.9: Expectation-Performance Gap Composition Analysis (Regulators)

Duties presenting an expectation > performance gap (67 duties): A2, A3, A4, A5, A6, A8, A9, A10, A14, B1, B2, B3, B4, B5, B7, B8, B9, B10, B11, B14, B16, B20, B21, B22, B23, B24, B25, B26, B27, B30, B31, B32, C1, C2, C3, C4, C5, C6, C7, C8, C9, C10, C11, C12, C13, C14, C15, C16, C17, C18, C19, C20, C21, C22, D1, D2, D3, D4, D5, D6, D7, D8, E1, E2, E3, E4, E16
Contributed by the Knowledge Gap (2 duties, 3.0%): E1, E2
Contributed by the Reasonable Expectation Gap (15 duties, 22.4%): B16, B20, B21, B22, B23, B24, B25, B26, B27, B30, B31, B32, E3, E4, E16
Contributed by the Regulation Gap (47 duties, 70.1%): A2, A3, A4, A5, A6, A8, A9, A10, A14, B1, B2, B4, B5, B7, B8, B9, B10, B11, B14, C1, C2, C3, C5, C6, C7, C8, C9, C10, C11, C12, C13, C14, C15, C16, C17, C18, C19, C20, C21, C22, D1, D2, D3, D5, D6, D7, D8
Duties with actual performance deficiencies (3 duties, 4.5%): B3, C4, D4

6.2.2 Confirmation of Perception Gaps – the second dimension

Tests of perception gaps on the medians of all duties showed no significant differences among most interest groups (Figure 6.10). However, Mann-Whitney U test results indicated that there were significant differences between some groups on certain duties (Appendices 9-18). Figure 6.11 summarises these results with reference to the nine attributes. On attributes c, f, g, h, and i, more perception gaps amongst interest groups were present in participants’ perceived performance than in their expectations, at a significance level of 0.00002. For attribute b, all six perception gaps among the four interest groups existed in participants’ expectations about responsibilities concerning the rating fee. Figure 6.12 provides further detail on the individual comparison groups. Considering the sub-total of each comparison group, the comparisons between CCRAs and investors/the public, and between CCRAs’ customers and investors/the public, identified more perception gaps in both expectations and perceived performance than the other four comparison groups. With reference to the average percentage of existing perception gaps within each attribute, perceptions of the performance of staff-competence-related duties showed a higher average percentage across comparison groups than the other attributes.
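Unlike the paired boundary tests, these group-to-group comparisons use the Mann-Whitney U test for two independent samples. A minimal sketch follows, again with hypothetical Likert data and invented group names purely for illustration:

```python
from scipy.stats import mannwhitneyu

# Hypothetical Likert responses (1-5) on a single duty from two interest
# groups; sample sizes and values are invented for illustration only.
ccra_staff = [4, 5, 4, 4, 5, 4, 5, 4]
regulators = [2, 3, 2, 3, 2, 3, 3, 2]

# Mann-Whitney U compares two independent (unpaired) samples without
# assuming normality, which suits ordinal questionnaire responses.
u_stat, p = mannwhitneyu(ccra_staff, regulators, alternative='two-sided')

# A Perception Gap between the two groups is reported only when p falls
# below the 0.00002 significance threshold used in this chapter.
perception_gap = p < 0.00002
print(u_stat, p, perception_gap)
```

Even for these completely separated samples, the small group sizes mean the p-value does not necessarily fall below the conservative 0.00002 threshold, which is why relatively few of the per-duty comparisons register as gaps.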


Figure 6.10: The Wilcoxon Signed-Rank Test Results of Perception Gaps on All Duties

Perceived performance:
  CCRA-customer: Z = -1.404, sig. 0.160, no significant gap
  CCRA-investor/public: Z = -1.666, sig. 0.096, no significant gap
  CCRA-regulator: Z = -2.377, sig. 0.017, no significant gap
  Customer-investor/public: Z = -3.05, sig. 0.002, no significant gap
  Customer-regulator: Z = -1.035, sig. 0.301, no significant gap
  Regulator-investor/public: Z = -4.065, sig. < 0.0001, no significant gap

Expectations:
  CCRA-customer: Z = -2.967, sig. 0.003, no significant gap
  CCRA-investor/public: Z = -3.442, sig. 0.001, no significant gap
  CCRA-regulator: Z = -4.735, sig. < 0.00002, gap exists
  Customer-investor/public: Z = -1.334, sig. 0.182, no significant gap
  Customer-regulator: Z = -3.176, sig. 0.001, no significant gap
  Regulator-investor/public: Z = -2.547, sig. 0.011, no significant gap

Figure 6.11: The Mann-Whitney U Test Results of Perception Gaps on Individual Duties

| Attribute | No. of duties | Item | No. of comparisons | No. of existing gaps | Percentage |
| a. Ancillary service | 1 | Expectation | 6 | 1 | 16.7% |
| | | Performance | 6 | 1 | 16.7% |
| b. Rating fee | 1 | Expectation | 6 | 6 | 100.0% |
| | | Performance | 6 | 2 | 33.3% |
| c. Communication / Transparency | 24 | Expectation | 144 | 40 | 27.8% |
| | | Performance | 144 | 47 | 32.6% |
| d. Accuracy / Quality | 28 | Expectation | 168 | 63 | 37.5% |
| | | Performance | 168 | 63 | 37.5% |
| e. Independence / Conflict of interest | 18 | Expectation | 108 | 28 | 25.9% |
| | | Performance | 108 | 25 | 23.1% |
| f. Designated Compliance Officer | 1 | Expectation | 6 | 0 | 0.0% |
| | | Performance | 6 | 3 | 50.0% |
| g. Staff competence | 4 | Expectation | 24 | 9 | 37.5% |
| | | Performance | 24 | 15 | 62.5% |
| h. Record keeping / Submitting | 5 | Expectation | 30 | 10 | 33.3% |
| | | Performance | 30 | 12 | 40.0% |
| i. Self-regulation | 1 | Expectation | 6 | 0 | 0.0% |
| | | Performance | 6 | 2 | 33.3% |

Figure 6.12: Distribution of Perception Gaps (The Mann-Whitney Test Results)

| Attribute | Item | CCRA-Customer | CCRA-Inv./public | CCRA-Regulator | Customer-Inv./public | Customer-Regulator | Inv./public-Regulator | Average | Std. dev. |
| a. Ancillary service | Expectation | 1 (100.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 16.7% | 40.8% |
| | Performance | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 1 (100.0%) | 0 (0.0%) | 0 (0.0%) | 16.7% | 40.8% |
| b. Rating fee | Expectation | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 100.0% | 0.0% |
| | Performance | 1 (100.0%) | 1 (100.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 33.3% | 51.6% |
| c. Communication / Transparency | Expectation | 9 (37.5%) | 15 (62.5%) | 4 (16.7%) | 11 (45.8%) | 0 (0.0%) | 1 (4.2%) | 27.8% | 24.8% |
| | Performance | 7 (29.2%) | 8 (33.3%) | 6 (25.0%) | 11 (45.8%) | 7 (29.2%) | 8 (33.3%) | 32.6% | 7.2% |
| d. Accuracy / Quality | Expectation | 11 (39.3%) | 15 (53.6%) | 5 (17.9%) | 16 (57.1%) | 2 (7.1%) | 14 (50.0%) | 37.5% | 20.5% |
| | Performance | 14 (50.0%) | 20 (71.4%) | 7 (25.0%) | 10 (35.7%) | 6 (21.4%) | 6 (21.4%) | 37.5% | 19.9% |
| e. Independence / Conflict of interest | Expectation | 3 (16.7%) | 6 (33.3%) | 2 (11.1%) | 10 (55.6%) | 3 (16.7%) | 4 (22.2%) | 25.9% | 16.4% |
| | Performance | 6 (33.3%) | 8 (44.4%) | 1 (5.6%) | 7 (38.9%) | 2 (11.1%) | 1 (5.6%) | 23.1% | 17.7% |
| f. Designated compliance officer | Expectation | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0.0% | 0.0% |
| | Performance | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 1 (100.0%) | 100.0% | 0.0% |
| g. Staff competence | Expectation | 0 (0.0%) | 3 (75.0%) | 0 (0.0%) | 3 (75.0%) | 0 (0.0%) | 3 (75.0%) | 37.5% | 41.1% |
| | Performance | 2 (50.0%) | 4 (100.0%) | 1 (25.0%) | 3 (75.0%) | 2 (50.0%) | 3 (75.0%) | 62.5% | 26.2% |
| h. Record keeping / Submitting | Expectation | 1 (20.0%) | 2 (40.0%) | 0 (0.0%) | 4 (80.0%) | 0 (0.0%) | 3 (60.0%) | 33.3% | 32.7% |
| | Performance | 4 (80.0%) | 3 (60.0%) | 1 (20.0%) | 3 (60.0%) | 0 (0.0%) | 1 (20.0%) | 40.0% | 31.0% |
| i. Self-regulation | Expectation | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0.0% | 0.0% |
| | Performance | 0 (0.0%) | 1 (100.0%) | 0 (0.0%) | 1 (100.0%) | 0 (0.0%) | 0 (0.0%) | 33.3% | 51.6% |
| Sub-total | Expectation | 26 (28.0%) | 42 (45.2%) | 12 (12.9%) | 45 (48.4%) | 6 (6.5%) | 26 (28.0%) | 28.1% | 16.7% |
| | Performance | 35 (37.6%) | 46 (49.5%) | 17 (18.3%) | 37 (39.8%) | 18 (19.4%) | 20 (21.5%) | 31.0% | 13.0% |

6.2.3 Nine attributes – the third dimension

With reference to the Kruskal-Wallis test results on whether the data were distributed differently across the nine attributes, significant differences could only be found in the SLR results and the IOSCO code (Figure 6.13). These tests were conducted on the medians of all responses within each interest group, and on the results collected from the SLR and the regulations, at a significance level of 0.00002; more detail is provided in Section 4.3.8.2.4. However, differences in the medians of these attributes can be found across all gap boundaries. Since Perception Gaps among interest groups exist, these differences are illustrated with reference to each interest group (Figures 6.14, 6.15, 6.16 and 6.17). These figures show that the pattern differs among attributes (discussed further in Section 7.4). For example:

(1) A Knowledge Gap with too high an expectation on Attribute b (Rating fee) is larger than on the other attributes, according to the expectations and perceptions of CCRAs' customers, investors / public and regulators (with the exception of CCRAs); a Knowledge Gap with too low an expectation on Attribute g (Staff competence) is higher than on the other attributes, according to the expectations and perceptions of investors / public;

(2) A Reasonable Expectation Gap is present on Attribute d (Accuracy / Quality), Attribute e (Independence / Conflict of interest) and Attribute i (Self-regulation), but not on the other attributes;

(3) Over-regulated items appear on Attribute d (Accuracy / Quality), Attribute g (Staff competence) and Attribute h (Record keeping / Submitting), whereas Attribute a (Ancillary service), Attribute c (Communication / Transparency), Attribute e (Independence / Conflict of interest) and Attribute f (DCO) contain more items with a lack of regulation;

(4) With regard to the perceptions of CCRAs and regulators, only Attribute i (Self-regulation) has an Actual Performance Gap. This gap also appears on Attribute c (Communication / Transparency), Attribute d (Accuracy / Quality), Attribute e (Independence / Conflict of interest), Attribute h (Record keeping / Submitting) and Attribute g (Staff competence) according to the other two interest groups.

Figure 6.13: The Two Dimensions of the CCRAEG

| Item | | H | Significance level |
| Expectations | CCRAs | 5.270 | 0.728 |
| | CCRAs' customers | 2.513 | 0.961 |
| | Investors/public | 25.344 | 0.001 |
| | Regulators | 1.226 | 0.996 |
| Perceived performance | CCRAs | 15.402 | 0.052 |
| | CCRAs' customers | 19.516 | 0.012 |
| | Investors/public | 6.422 | 0.600 |
| | Regulators | 5.753 | 0.675 |
| | SLR results | 47.970 | < 0.00002 |
| | IOSCO code | 44.509 | < 0.00002 |
| | Chinese regulations | 37.428 | < 0.0001 |
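The H values in Figure 6.13 are Kruskal-Wallis statistics. As a minimal, self-contained sketch of how such a statistic is computed (pure Python, tie correction omitted, with invented data rather than the thesis's responses):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H across k groups, using mid-ranks for ties
    (tie correction omitted); compare H to chi-square with k - 1 df."""
    pooled = sorted((v, g, i) for g, grp in enumerate(groups)
                    for i, v in enumerate(grp))
    rank = {}
    i = 0
    while i < len(pooled):                      # mid-ranks for tied values
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            rank[pooled[k][1], pooled[k][2]] = (i + j + 1) / 2
        i = j
    n = len(pooled)
    s = sum(sum(rank[g, i] for i in range(len(grp))) ** 2 / len(grp)
            for g, grp in enumerate(groups))
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

# Hypothetical duty scores grouped by attribute (three attributes for brevity;
# the thesis groups its 83 duties by the nine attributes)
by_attribute = [[4, 5, 4, 4], [2, 3, 2, 3, 2], [4, 4, 5, 3]]
print(f"H = {kruskal_wallis_h(by_attribute):.3f} (df = {len(by_attribute) - 1})")
```

In practice a library routine such as scipy.stats.kruskal (which does apply the tie correction) would be used, and H would be referred to a chi-square distribution at the 0.00002 level used in this chapter.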

Figure 6.14: The Two Dimensions of the CCRAEG (CCRAs)

Figure 6.15: The Two Dimensions of the CCRAEG (CCRAs' Customers)

Figure 6.16: The Two Dimensions of the CCRAEG (Investors / Public)

Figure 6.17: The Two Dimensions of the CCRAEG (Regulators)

6.2.4 The structure of the CCRAEG

The structure of the CCRAEG model proposed in Figure 3.44 proved suitable for measuring the CCRAEG. The nine attributes within each of the boundaries of the four gap components (Knowledge Gap, Reasonable Expectation Gap, Regulation Gap and Actual Performance Gap), together with the Perception Gaps, have been examined through statistical tests. Many hypotheses were not verified, but the gaps exist owing to differences in medians. However, the extent of these gaps varies among duties. These results are analysed and discussed in Sections 6.3 through 6.11.

For some duties, Perception Gaps on expectations or on perceived performance (twelve possible gaps in total, six for each) were found in all comparisons between stakeholder groups, whereas for other duties no Perception Gaps were found (Figure 6.11). Because of this variation in whether Perception Gaps exist for each duty associated with each attribute, the seventy-two cells of the CCRAEG model were investigated as detailed in Figure 6.18, rather than as in Figure 3.45.

Figure 6.18: Verified Structure of Expectation and Perception in the CCRAEG

Note: 1. The numbers shown in the diagram label each cell. 2. For example, the perception of CCRAs' performance on ancillary service according to the CCRAs themselves is 'Cell 1', and their expectations of what CCRAs should do are labelled 'Cell 2'.
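The seventy-two cells follow from the model's dimensions: 9 attributes x 4 interest groups x 2 boundaries. The sketch below enumerates them; the first two cells follow the note's example, while the rest of the numbering is an assumed extrapolation of that pattern, not taken from Figure 6.18 itself.

```python
# 9 attributes x 4 interest groups x 2 boundaries = 72 cells.
attributes = ["a. Ancillary service", "b. Rating fee",
              "c. Communication / Transparency", "d. Accuracy / Quality",
              "e. Independence / Conflict of interest",
              "f. Designated compliance officer", "g. Staff competence",
              "h. Record keeping / Submitting", "i. Self-regulation"]
groups = ["CCRAs", "CCRAs' customers", "Investors / public", "Regulators"]
boundaries = ["perceived performance", "expectation"]

cells = {}
number = 1
for attribute in attributes:
    for group in groups:
        for boundary in boundaries:
            cells[number] = (attribute, group, boundary)
            number += 1

print(len(cells))   # 72
print(cells[1])     # ('a. Ancillary service', 'CCRAs', 'perceived performance')
print(cells[2])     # ('a. Ancillary service', 'CCRAs', 'expectation')
```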

This is because the two boundaries of the CCRAEG were used instead of the four gap components. These changes were made for the following reasons:

(1) Results from the interviews and questionnaires suggested that assessing participants' understanding of the regulations may not be important in the attribution model, because it should be incorporated as part of the Knowledge Gap;

(2) Participants' perceptions of reasonable expectations should sit within their expectations of what CCRAs should do;

(3) As such, participants found it difficult to separate their perceptions in relation to: (a) the Knowledge Gap in the market; (b) reasonably expected responsibilities that are not confirmed in the literature; (c) deficiencies in regulations; and (d) deficiencies in performance.

Consequently, instead of the four gap components, the two boundaries of the CCRAEG (perceptions of CCRAs' performance; expectations of what CCRAs should do) were used in the attribution model as the framework for investigating participants' expectations and perceptions.

6.3 Attribute a: Ancillary service

Only duty C10 was found relevant to attribute a (Ancillary service) with reference to the SLR results and the regulations. This duty states: "CRAs must provide definitions of what is or is not 'ancillary business', and state reasons why". Figure 6.19 reports the median analysis of this duty according to the method explained in Section 4.3.8.2.5. Neither a Knowledge Gap (a difference between participants' expectations and the SLR results) nor a Reasonable Expectation Gap (a difference between the SLR results and the IOSCO code) was found. However, performance was perceived as poor by 50% of participants in the regulator group (excluding those who were unable to answer the question), although fewer than 50% of participants in the other groups believed so (Figure 6.20). Nevertheless, only the comparison between CCRAs' customers and investors/public (Figure 6.21) was found to contain a significant difference in the perceived performance of C10. This might be because only two useful responses were obtained from the regulator group (the rest indicated they were unable to judge), so the regulator-related comparisons cannot be accepted through the Mann-Whitney U test at the significance level of 0.00002. Moreover, Siegel and Castellan (1988) suggested that, to avoid incorrect ρ values in Mann-Whitney U tests, the smaller group should contain more than four observations when the larger group contains more than ten. As such, only non-regulator perceptions could be compared for Perception Gaps on the perceived performance of this duty. However, no Actual Performance Gap can be found, because the perceived poor performance may be caused by a Regulation Gap (a lack of relevant requirements on duty C10 in the Chinese regulations).
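The attribution reasoning just applied to C10 — a perceived shortfall only counts as an Actual Performance Gap when no Regulation Gap can explain it — can be sketched as follows. The dictionary values are taken from Figure 6.19; the function and key names are illustrative, not the thesis's own notation.

```python
# Gap-component readings for duty C10 (values from Figure 6.19)
c10 = {
    "knowledge_gap": 0,               # expectations match the SLR results
    "reasonable_expectation_gap": 0,  # SLR results match the IOSCO code
    "regulation_gap": 1,              # in the IOSCO code, absent from Chinese rules
    "performance_gap": -0.5,          # performance perceived as falling short
}

def classify_performance(gaps):
    """A negative performance gap counts as an Actual Performance Gap only
    when no Regulation Gap could explain the shortfall."""
    if gaps["performance_gap"] >= 0:
        return "no performance shortfall"
    if gaps["regulation_gap"] != 0:
        return "shortfall attributable to a Regulation Gap"
    return "Actual Performance Gap"

print(classify_performance(c10))  # shortfall attributable to a Regulation Gap
```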

Figure 6.19: C10 Gap Component Analysis

| Gap component | CCRAs | CCRAs' customers | Investors/public | Regulators |
| Expectation-performance gap | 0.5 | 0.5 | 0.75 | 0.5 |
| Knowledge gap | 0 | 0 | 0 | 0 |
| Performance gap | -0.5 | -0.5 | -0.5 | -0.25 |
| Reasonable expectation gap | 0 | | | |
| Regulation gap | 1 | | | |

Figure 6.20: Participants' Responses on the Performance of C10

[Four pie charts showing the proportions of 'poorly', 'adequate / average' and 'well' responses within each interest group. Excluding those unable to judge, 50.0% of regulators said performance was poor, compared with 32.6%, 28.3% and 12.6% in the other three groups.]

Figure 6.21: C10 Perception Gap Analysis

| Item | CCRA-customer | CCRA-investor/public | CCRA-regulator | Customer-investor/public | Customer-regulator | Investor/public-regulator |
| Expectations | Z = -4.439, ρ < 0.00002 | Z = -3.859, ρ = 0.001 | Z = -0.879, ρ = 0.379 | Z = -1.474, ρ = 0.141 | Z = -0.643, ρ = 0.520 | Z = -2.079, ρ = 0.038 |
| Perceived performance | Z = -1.748, ρ = 0.080 | Z = -2.086, ρ = 0.037 | Z = -0.349, ρ = 0.727 | Z = -5.336, ρ < 0.00002 | Z = -2.525, ρ = 0.012 | Z = -1.412, ρ = 0.158 |

6.4 Attribute b: Rating fee

Only duty E11 was found relevant to attribute b (Rating fee) with reference to the interview results (Section 5.2.3). This duty states: CRAs should charge the same rating fee for all rated entities. According to all interviewees, this is a requirement from the PBC for all CCRAs. However, it exists neither in the international regulations nor in the Chinese regulations (no Regulation Gap). The SLR results suggested multiple solutions for the rating fee and payment approach, but none of them has been proved feasible. As such, a Knowledge Gap can be found in three interest groups (Figure 6.22), because they hold too high an expectation relative to the SLR results, which suggest that charging the same rating fee for all companies, or for all kinds of products, is not reasonable. The Actual Performance Gap should not exist, since this duty is not in the Chinese regulations.

Figure 6.22: E11 Gap Component Analysis

| Gap component | CCRAs | CCRAs' customers | Investors/public | Regulators |
| Expectation-performance gap | -1 | 0.5 | 1 | 0 |
| Knowledge gap | 0 | 1 | 1 | 1 |
| Performance gap | -1 | -0.5 | 0 | -1 |
| Reasonable expectation gap | 0 | | | |
| Regulation gap | 0 | | | |

Nevertheless, 62.3% of participants (excluding those who chose "unable to judge") from the investors/public interest group believed the CCRAs' performance on this duty was poor (Figure 6.23), while only 3.1% of the CCRAs themselves thought their performance was poor. The Mann-Whitney U tests indicate a significant difference in perceptions of the CCRAs' performance between the CCRAs and both their customers and the investors/public, as well as between the investors/public and the regulators.

Figure 6.23: Participants' Responses on the Performance of E11

[Four pie charts showing the proportions of 'poorly', 'adequate / average' and 'well' responses within each interest group. Excluding those unable to judge, 62.3% of investors/public and 26.0% of CCRAs' customers said performance was poor, compared with 3.1% of the CCRAs and 3.8% of the regulators.]

Consequently, the CCRAs' perception of their performance on this duty is significantly more positive than the perceptions of their customers and the investors/public, and the regulators' perception of the CCRAs' performance is significantly better than that of the investors/public. Interestingly, all six Perception Gaps among the four interest groups exist on what CCRAs should do with reference to this duty (Figure 6.24).


Figure 6.24: E11 Perception Gap Analysis

| Item | CCRA-customer | CCRA-investor/public | CCRA-regulator | Customer-investor/public | Customer-regulator | Investor/public-regulator |
| Expectations | Z = -17.067, ρ < 0.00002 | Z = -11.766, ρ < 0.00002 | Z = -9.022, ρ < 0.00002 | Z = -17.308, ρ < 0.00002 | Z = -4.964, ρ < 0.00002 | Z = -6.658, ρ < 0.00002 |
| Perceived performance | Z = -7.158, ρ < 0.00002 | Z = -8.257, ρ < 0.00002 | Z = -2.146, ρ < 0.05 | Z = -4.078, ρ < 0.0001 | Z = -2.173, ρ < 0.05 | Z = -4.157, ρ < 0.00002 |

6.5 Attribute c: Communication / Transparency

With the exception of Duties D4 and D9, a Regulation Gap exists on the twenty-four duties relevant to attribute c. These two items state that:

CCRAs should disclose the rating payments received from their clients on the company website, which should also carry information about rating results and the performance of ratings, including verifiable, quantifiable, historical information about the performance of its rating opinions. This information should be organised and structured to assist the investing public to understand and make performance comparisons between CRAs.

Duty D4 is referenced in the National Credit Standardisation Work Group (2008, section 7.18) and Duty D9 in the PBC (2006, part 2, sections 3.5 and 4.2). The remaining twenty-two duties are required in the IOSCO code but not in the Chinese regulations. More detail can be found in Figure 6.25.

No Reasonable Expectation Gap was found on any of the twenty-four duties. However, a Knowledge Gap can be found on certain duties, and this varies among interest groups. For example, too low an expectation was found on Duty B13 according to the investors/public and the regulators; CCRAs' customers hold too low an expectation in relation to Duty D9, and CCRAs hold too low expectations on Duties D5 and D6. The medians from the CCRAs indicate that they do not think:

CRAs or CCRAs should release information about the proportion of non-rating fees, and that they do not need to disclose the general nature of compensation arrangements if they received compensation for services unrelated to rating, or if this compensation is 10% or more of the annual revenue from a single rating entity.

The Actual Performance Gap does not exist on attribute c's relevant duties because of a Regulation Gap in the Chinese regulations. No perceived performance shortfall was found on Duties D4 and D9. However, a perceived performance shortfall existed on Duties A4, A10 and D3 according to the regulators, on D5 according to the CCRAs, and on B13 according to both the CCRAs and the regulators.

Figure 6.25: The Gap Analysis of 'Attribute c'

[Table: expectation-performance, knowledge, reasonable expectation, regulation and performance gap values, by interest group, for Duties B1-B4, B6-B12, B19-B32, C3, C6 and C20.]


6.6 Attribute d: Accuracy / Quality

A Regulation Gap was found to exist in most duties relevant to attribute d. Only three items (Duties B3, B6 and B12) did not exhibit a Regulation Gap. A lack of regulation was found on eleven items (Duties B1, B2, B4, B7, B8, B9, B10, B11, C3, C6 and C20), which are more relevant to the requirements on rating methods and processes.

Figure 6.26: The Gap Analysis of 'Attribute d'

A further fourteen items (Duties B19, B20, B21, B22, B23, B24, B25, B26, B27, B28, B29, B30, B31 and B32) exist in the Chinese regulations but are not found in the IOSCO code, and these are more relevant to the requirements on rating reports. The Chinese regulations appear to have many more specific requirements on rating reports and procedures than the IOSCO code, including the exact number of days or months for the submission or release of reports.

A Reasonable Expectation Gap was also found on these fourteen additional duties. Although the underlying requirements can be found in the SLR results, the specific length of time for processing or submitting a rating was not indicated in the literature. As such, these duties were reasonably expected according to the SLR results, but there is a lack of research and evidence to show whether the specified lengths of time or specific numbers are reasonable and feasible. For example, Duty B28 states that rating results are only valid if 2/3 or more of the rating committee members agree with the result. The literature suggests that ratings can be valid if a majority of members agree (Capital Intelligence, 2011; Dagong Global, 2013; European Rating Agency, 2015), but there is no further research on exactly what share of members should count as a majority to make the rating decision more accurate.

A Knowledge Gap with too low an expectation can be found within some interest groups on certain duties. For example, according to the medians of responses:

(1) Both investors / public and regulators believed that Duty B21 is not what they expect of CCRAs. This item allows the rated entity to ask the CRA for a re-evaluation if it disagrees with the initial rating results. However, this is considered a reasonable request according to the literature (Cantwell, 1998);

(2) Both CCRAs and investors / public do not expect Duty B22 to exist. This duty states that CRAs should provide feedback on re-evaluation results as soon as possible if the rated entity disagrees with the initial rating results. This is also considered a reasonable request according to the literature (Cantwell, 1998);

(3) CCRAs do not expect Duty B24 to be in the list, which requires all the advanced rating experts to get involved in the rating if a re-evaluation is required. According to the SLR results, a decision can be influenced significantly by just one expert advisor (Amtenbrink and Heine, 2013). Moreover, experts may be most likely to mislead investors in unstable financial environments (Vaaler and McNamara, 2004). As such, it is reasonable to ask all experts to be involved in the re-evaluation;

(4) Investors / public hold too low an expectation in relation to Duties B27 and B32. These two duties require CRAs to review rating reports and working papers through three evaluations: by the managers of the rating team, the manager of the department, and the rating director. In addition, non-scheduled rating monitoring should be performed from the first day of releasing the rating reports. These requirements were recommended in the research by Li (2011) and Sun and Tang (2009).

An Actual Performance Gap appears to exist on Duties B21 and B22, but only with regard to re-evaluation and feedback on re-evaluation, according to the medians of perceptions from the regulators. Moreover, the regulators also perceived poor performance on Duties B6, B7, B10, B11 and C6; however, this may be due to a lack of regulation in Chinese legislation.

6.7 Attribute e: Independence / Conflict of Interest

A Regulation Gap, in the form of a lack of regulation, was found to exist on all eighteen duties relevant to attribute e, with the exception of duty C4 (CRAs should have internal procedures and mechanisms to identify, eliminate or manage, and disclose, any actual or potential conflict of interest; these should be published on the CRAs' websites).

Figure 6.27: The Gap Analysis of 'Attribute e'

[Table: expectation-performance, knowledge, reasonable expectation, regulation and performance gap values, by interest group, for Duties B14, C1, C2, C4, C5, C7-C9, C11-C19 and C21.]


A Reasonable Expectation Gap, Knowledge Gap and Actual Performance Gap were found not to exist on any of these eighteen duties. However, the regulators believed the performance on Duties C9, C18 and C21 was poor, and performance on D16 was poor according to the perceptions of the investors / public. The performance shortfall on these four duties may be caused by a lack of regulation in Chinese legislation (as such, no Actual Performance Gap appears on these duties).

6.8 Attribute f: DCO

A Regulation Gap was found to exist on duty C22 because it is stated in the IOSCO code but not in the Chinese regulations (Figure 6.28). This duty requires CRAs to clearly designate a DCO responsible for the CRA's and its employees' compliance with the code of conduct and the relevant regulations and laws. No Reasonable Expectation Gap was found, since the duty was discussed in Bai (2010), Coskun (2008), Hunt (2009), Katz (2009) and Utzig (2010) as part of the existing regulations in America. In addition, neither a Knowledge Gap nor an Actual Performance Gap was found.

Figure 6.28: C22 Gap Component Analysis

| Gap component | CCRAs | CCRAs' customers | Investors/public | Regulators |
| Expectation-performance gap | 0 | 0.5 | 0.5 | 0.5 |
| Knowledge gap | 0 | 0 | 0 | 0 |
| Performance gap | -1 | -0.5 | -0.5 | -0.5 |
| Reasonable expectation gap | 0 | | | |
| Regulation gap | 1 | | | |

Figure 6.29: Perception Gaps on C22

| Item | CCRA-customer | CCRA-investor/public | CCRA-regulator |
| Expectations | Z = -0.645, ρ = 0.519 | Z = -0.201, ρ = 0.841 | Z = -0.678, ρ = 0.498 |
| Perceived performance | Z = -5.370, ρ < 0.00002 | Z = -5.757, ρ < 0.00002 | Z = -5.860, ρ