Journal of Risk Model Validation


Smoothing algorithms by constrained maximum likelihood: methodologies and implementations for Comprehensive Capital Analysis and Review stress testing and International Financial Reporting Standard 9 expected credit loss estimation

Bill Huajian Yang

Smoothing algorithms for monotonic rating-level PD and rating migration probability are proposed. The approaches can be characterized as follows:

  1. These approaches are based on constrained maximum likelihood, with the risk scale for the estimates determined entirely by the likelihood itself, leading to fairer and more robust credit loss estimation.
  2. Default correlation is accounted for by using asset correlation under the Merton model framework.
  3. The quality of the smoothed estimates is assessed by the likelihood ratio test and by the impact on credit loss of the change in risk scale.
  4. These approaches generally outperform interpolation methods and regression models, and are easy to implement using, for example, SAS PROC NLMIXED.

In the process of loan pricing, stress testing, capital allocation, modeling of probability of default (PD) term structure and International Financial Reporting Standard 9 expected credit loss estimation, it is widely expected that higher risk grades carry higher default risks, and that an entity is more likely to migrate to a closer nondefault rating than to a more distant nondefault rating. In practice, sample estimates for the rating-level default rate or rating migration probability do not always respect this monotonicity rule, and hence the need for smoothing approaches arises. Regression and interpolation techniques are widely used for this purpose. A common issue with these, however, is that the risk scale for the estimates is not fully justified, leading to a possible bias in credit loss estimates. In this paper, we propose smoothing algorithms for rating-level PD and rating migration probability. The smoothed estimates obtained by these approaches are optimal in the sense of constrained maximum likelihood, with a fair risk scale determined by constrained maximum likelihood, leading to more robust credit loss estimation. The proposed algorithms can be easily implemented by a modeler using, for example, the SAS procedure PROC NLMIXED. The approaches proposed in this paper will provide an effective and useful smoothing tool for practitioners in the field of risk modeling.

1 Introduction

Given a risk-rated portfolio with $k$ ratings $\{R_i : 1 \le i \le k\}$, we assume that rating $R_1$ is the best quality rating and $R_k$ is the worst, ie, the default rating. It is widely expected that higher risk ratings carry higher default risk, and that an entity is more likely to be downgraded or upgraded to a closer nondefault rating than to a more distant nondefault rating. The following constraints are therefore required:

  $0 \le p_1 \le p_2 \le \cdots \le p_{k-1} \le 1$,   (1.1)
  $p_{i,i+1} \ge p_{i,i+2} \ge \cdots \ge p_{i,k-1}$,   (1.2)
  $p_{i,1} \le p_{i,2} \le \cdots \le p_{i,i-1}$,   (1.3)

where $p_i$, $1 \le i \le k-1$, denotes the probability of default (PD) for rating $R_i$, and $p_{ij}$, $1 \le i, j \le k-1$, is the migration probability from a nondefault initial rating $R_i$ to a nondefault rating $R_j$.

Estimates that satisfy the above monotonicity constraints are called smoothed estimates. Smoothed estimates are widely expected for rating-level PD and rating migration probability in the process of loan pricing, capital allocation, Comprehensive Capital Analysis and Review (CCAR) stress testing (Board of Governors of the Federal Reserve System 2016), modeling of PD term structure and International Financial Reporting Standard 9 expected credit loss (ECL) estimation (Ankarath et al 2010).

In practice, sample estimates for rating-level PD and rating migration probability do not always respect these monotonicity rules. This calls for smoothing approaches. Regression and interpolation methods have been widely used for this purpose. A common issue with these approaches is that the risk scale for the estimates is not fully justified, leading to possibly biased credit loss estimates.

In this paper, we propose smoothing algorithms based on constrained maximum likelihood (CML). These CML-smoothed estimates are optimal in the sense of constrained maximum likelihood, with a fair risk scale determined by constrained maximum likelihood, leading to a fair and more justified loss estimation. As shown by the empirical examples for rating-level PD in Section 2.3, the CML approach is more robust than the logistic and log-linear models, with quality being measured based on the resulting likelihood ratio, the predicted portfolio level PD and the impacted ECL.

This paper is organized as follows. In Section 2, we propose smoothing algorithms for smoothed rating-level PD, for the cases with and without default correlation. A smoothing algorithm for multinomial probability is proposed in Section 3. Empirical examples are given accordingly in Sections 2 and 3, and in Section 2 we benchmark the CML approach for rating-level PD with a logistic model proposed by Tasche (2013) and a log-linear model proposed by van der Burgt (2008). Section 4 concludes.

2 Smoothing rating-level probability of default

2.1 The proposed smoothing algorithm for rating-level PD assuming no default correlation

Cross-section or within-section default correlation may arise due to some commonly shared risk factors. In this case, we assume that the sample is observed at a point in time and that, given the commonly shared risk factors, defaults occur independently.

Let $d_i$ and $(n_i - d_i)$ be the observed default and nondefault frequencies, respectively, for a nondefault risk rating $R_i$. Let $p_i$ denote the PD for an entity with a nondefault initial rating $R_i$. With no default correlation, we can assume that the default frequency follows a binomial distribution. Then the sample loglikelihood is given by

  $\mathrm{LL} = \sum_{i=1}^{k-1} \bigl[(n_i - d_i)\log(1 - p_i) + d_i \log(p_i)\bigr]$   (2.1)

up to a summand given by the logarithms of the related binomial coefficients, which are independent of $\{p_i\}$. By taking the derivative of (2.1) with respect to $p_i$ and setting it to zero, we have

  $-\dfrac{n_i - d_i}{1 - p_i} + \dfrac{d_i}{p_i} = 0$,
  $d_i(1 - p_i) = (n_i - d_i)\,p_i \;\Longrightarrow\; p_i = \dfrac{d_i}{n_i}$.

Therefore, the unconstrained maximum likelihood estimate for $p_i$ is just the sample default rate $d_i/n_i$.

We propose the following smoothing algorithm for the case when no default correlation is assumed.

Algorithm 2.1 (Smoothing rating-level PD assuming no default correlation).
  (a) Parameterize the PD for a nondefault rating $R_i$ by

      $p_i = \exp(b_1 + b_2 + \cdots + b_{k-i})$,   (2.2)

    where

      $b_{k-1} \le -\varepsilon_1,\ b_{k-2} \le -\varepsilon_2,\ \ldots,\ b_2 \le -\varepsilon_{k-2},\ b_1 \le 0$   (2.3)

    for given constants $\varepsilon_i \ge 0$, $1 \le i \le k-2$.

  (b) Maximize, under constraint (2.3), the loglikelihood (2.1) for the parameters $\{b_1, b_2, \ldots, b_{k-1}\}$. Derive the smoothed estimates using (2.2).

By (2.2) and (2.3), we have

  $p_{k-1} = \exp(b_1) \le \exp(0) = 1$,
  $\dfrac{p_i}{p_{i-1}} = \exp(-b_{k-i+1}) \ge \exp(\varepsilon_{i-1}) \ge 1 \;\Longrightarrow\; 0 \le p_1 \le p_2 \le \cdots \le p_{k-1} \le 1$.

Thus, monotonicity (1.1) is satisfied. When $\varepsilon_1 = \varepsilon_2 = \cdots = \varepsilon_{k-2} = \varepsilon \ge 0$, let $\rho = \exp(\varepsilon)$. Then $\rho$ is the greatest lower bound for the ratios $\{p_i/p_{i-1}\}$ of the smoothed estimates $\{p_i\}$.

2.2 The proposed smoothing algorithms for rating-level PD assuming default correlation

Default correlation can be modeled by the asymptotic single risk factor (ASRF) model using asset correlation. Under the ASRF model framework, the risk for an entity is governed by a latent random variable $z$, called the firm's normalized asset value, which splits into the following two parts (Miu and Ozdemir 2009):

  $z = s\sqrt{\rho} + \varepsilon\sqrt{1 - \rho}, \quad 0 < \rho < 1, \quad s \sim N(0,1), \quad \varepsilon \sim N(0,1)$,   (2.4)

where $s$ denotes the common systematic risk and $\varepsilon$ is the idiosyncratic risk independent of $s$. The quantity $\rho$ is called the asset correlation. It is assumed that there exist threshold values (ie, the default points) $\{b_i\}$ such that an entity with an initial risk rating $R_i$ will default when $z$ falls below the threshold value $b_i$. The long-run PD for rating $R_i$ is then given by $p_i = \Phi(b_i)$, where $\Phi$ denotes the standard normal cumulative distribution function (CDF).

Let $p_i(s)$ denote the PD for an entity with an initial risk rating $R_i$ given the systematic risk $s$. It is shown in Yang (2017) that

  $p_i(s) = \Phi\bigl(b_i\sqrt{1 + r^2} - rs\bigr)$,   (2.5)

where

  $r = \sqrt{\dfrac{\rho}{1 - \rho}}$.
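For the reader's convenience, (2.5) can be verified directly from (2.4): conditional on $s$, an entity defaults when $z \le b_i$, so

  $p_i(s) = P\bigl(s\sqrt{\rho} + \varepsilon\sqrt{1-\rho} \le b_i \bigm| s\bigr) = \Phi\Bigl(\dfrac{b_i - \sqrt{\rho}\,s}{\sqrt{1-\rho}}\Bigr) = \Phi\bigl(b_i\sqrt{1+r^2} - rs\bigr)$,

using $1/\sqrt{1-\rho} = \sqrt{1+r^2}$ and $\sqrt{\rho}/\sqrt{1-\rho} = r$.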

Let $n_i(t)$ and $d_i(t)$ denote, respectively, the number of entities and the number of defaults at time $t$ for $t = t_1, t_2, \ldots, t_q$. Given the latent factor $s$, we propose the following smoothing algorithm for correlated rating-level long-run PDs by using (2.5).

Algorithm 2.2 (Smoothing correlated rating-level long-run PDs given the latent systematic risk factor).

  (a) Parameterize $p_i(s)$ for a nondefault rating $R_i$ by (2.5) with

      $b_i = c_1 + c_2 + \cdots + c_{k-i}$,   (2.6)

    where, for a given constant $\varepsilon \ge 0$, the following constraints are satisfied:

      $c_{k-1} \le -\varepsilon,\ c_{k-2} \le -\varepsilon,\ \ldots,\ c_2 \le -\varepsilon,\ c_1 \le 0$.   (2.7)

  (b) Estimate the parameters $\{c_1, c_2, \ldots, c_{k-1}\}$ by maximizing, under constraint (2.7), the following loglikelihood:

      $\mathrm{LL} = \sum_{h=1}^{q}\sum_{i=1}^{k-1}\bigl[(n_i(t_h) - d_i(t_h))\log(1 - p_i(s)) + d_i(t_h)\log(p_i(s))\bigr]$,   (2.8)

    where $s$ denotes the latent systematic risk factor for period $t_h$. Set $p_i = \Phi(b_i)$. Then monotonicity (1.1) for $\{p_i\}$, ie, the rating-level long-run PDs, follows from constraints (2.6) and (2.7).

Optimization with a random effect can be implemented by using, for example, SAS PROC NLMIXED (SAS Institute 2009).
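For modelers without access to NLMIXED, the marginal likelihood can be sketched directly: the period-level latent factor $s \sim N(0,1)$ is integrated out numerically, which is essentially what NLMIXED's quadrature does. Below is a minimal Python sketch, assuming hypothetical $q \times (k-1)$ count arrays d and n and using Gauss–Hermite quadrature; constraint (2.7) enters through the parameter bounds.

```python
# Sketch of the marginal loglikelihood for Algorithm 2.2: the period-level
# latent factor s ~ N(0,1) is integrated out by Gauss-Hermite quadrature.
# d[h, i] and n[h, i] are hypothetical default/record counts for quarter h
# and nondefault rating i + 1.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import norm

nodes, wts = np.polynomial.hermite_e.hermegauss(40)  # probabilists' Hermite
wts = wts / np.sqrt(2.0 * np.pi)                     # weights for the N(0,1) density

def neg_loglik(theta, d, n):
    c, rho = theta[:-1], theta[-1]
    b = np.cumsum(c)[::-1]                  # (2.6): b_i = c_1 + ... + c_{k-i}
    r = np.sqrt(rho / (1.0 - rho))
    # (2.5) evaluated at each quadrature node: shape (nodes, ratings)
    p = norm.cdf(b[None, :] * np.sqrt(1.0 + r * r) - r * nodes[:, None])
    ll = 0.0
    for h in range(d.shape[0]):             # one latent draw per quarter
        cond = ((n[h] - d[h]) * np.log(1.0 - p) + d[h] * np.log(p)).sum(axis=1)
        ll += logsumexp(cond, b=wts)        # log E_s[conditional likelihood]
    return -ll

# (2.7) with eps = 0, plus a hypothetical range for the asset correlation:
# bounds = [(None, 0.0)] * (k - 1) + [(1e-4, 0.5)]
# res = minimize(neg_loglik, theta0, args=(d, n), method="SLSQP", bounds=bounds)
```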

When some key risk factors $x = (x_1, x_2, \ldots, x_m)$, common to all ratings, are observed, we assume the following decomposition for the systematic risk factor $s$:

  $s = -\lambda\,\mathrm{ci}(x) - e\sqrt{1 - \lambda^2}, \quad e \sim N(0,1), \quad 0 < \lambda < 1$,

where the common index $\mathrm{ci}(x) = [a_1x_1 + a_2x_2 + \cdots + a_mx_m - u]/v$ is a linear combination of the variables $x_1, x_2, \ldots, x_m$, with $u$ and $v$ being the mean and standard deviation of $a_1x_1 + a_2x_2 + \cdots + a_mx_m$.

Let $p_i(x)$ denote the PD given a scenario $x$. Assume that $\mathrm{ci}(x)$ is standard normal and independent of $e$. Then we have (Yang 2017, Theorem 2.2)

  $p_i(x) = \Phi\bigl[b_i\sqrt{1 + \tilde r^2} + \tilde r\,\mathrm{ci}(x)\bigr]$   (2.9)

for some $\tilde r$.

Let $\mathrm{ci}(x(t))$ denote the value of $\mathrm{ci}(x)$ at time $t$ for $t = t_1, t_2, \ldots, t_q$. Given $\mathrm{ci}(x)$, we propose the following smoothing algorithm for correlated rating-level long-run PDs and rating-level point-in-time PDs by using (2.9).

Algorithm 2.3 (Smoothing correlated rating-level PDs given the common index $\mathrm{ci}(x)$).

  (a) Parameterize $p_i(x(t))$ for a nondefault rating $R_i$ by (2.9) with

      $b_i = c_1 + c_2 + \cdots + c_{k-i}$,   (2.10)

    where, for a given constant $\varepsilon \ge 0$, the following constraints are satisfied:

      $c_{k-1} \le -\varepsilon,\ c_{k-2} \le -\varepsilon,\ \ldots,\ c_2 \le -\varepsilon,\ c_1 \le 0$.   (2.11)

  (b) Estimate the parameters $\{c_1, c_2, \ldots, c_{k-1}\}$ by maximizing, under constraint (2.11), the following loglikelihood:

      $\mathrm{LL} = \sum_{h=1}^{q}\sum_{i=1}^{k-1}\bigl[(n_i(t_h) - d_i(t_h))\log(1 - p_i(x(t_h))) + d_i(t_h)\log(p_i(x(t_h)))\bigr]$.   (2.12)

    Set $p_i = \Phi(b_i)$. Then monotonicity (1.1) for $\{p_i\}$, ie, the rating-level long-run PDs, and for $\{p_i(x(t_h))\}$ at each time $t_h$, follows from constraints (2.10) and (2.11).
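Because the common index is observed, the loglikelihood (2.12) contains no latent variable and no quadrature is needed. A minimal sketch follows, with hypothetical inputs d and n ($q \times (k-1)$ count arrays) and ci (the common-index series); the bounds are the same as in the previous sketch, with one extra unconstrained parameter $\tilde r$.

```python
# Sketch of the loglikelihood (2.12) for Algorithm 2.3: with an observed
# common index the fit is a deterministic constrained optimization.
import numpy as np
from scipy.stats import norm

def neg_loglik(theta, d, n, ci):
    # theta = (c_1, ..., c_{k-1}, r_tilde); d and n are q x (k-1) count
    # arrays and ci is the length-q common-index series (hypothetical inputs)
    c, rt = theta[:-1], theta[-1]
    b = np.cumsum(c)[::-1]                                   # (2.10)
    ci = np.asarray(ci, dtype=float)
    p = norm.cdf(b[None, :] * np.sqrt(1.0 + rt * rt) + rt * ci[:, None])  # (2.9)
    return -np.sum((n - d) * np.log(1.0 - p) + d * np.log(p))
```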

2.3 Empirical examples: smoothing of rating-level PDs

Example 1: smoothing rating-level long-run PDs assuming no default correlation

Table 1 shows the record count and default rate (DF rate) for a synthetically created sample with six nondefault risk ratings.

Algorithm 2.1 will be benchmarked against the following methods.

Table 1: Sample count by rating.

                 Risk rating                                              Portfolio
                 1        2         3         4         5        6        level
  DF             1        11        22        124       62       170      391
  Count          5 529    11 566    29 765    52 875    4 846    4 318    108 899
  DF rate (%)    0.0173   0.0993    0.0739    0.2352    1.2833   3.9442   0.3594
LGL1:

with this approach, the PD for rating $R_i$ is estimated by $p_i = \exp(a + bx)$, where $x$ denotes the index of rating $R_i$, ie, $x = i$ for rating $R_i$. Parameters $a$ and $b$ are estimated by a linear regression of the form below, using the logarithm of the sample default rate $r_i$ for each rating:

  $\log(r_i) = a + bx + e, \quad e \sim N(0, \sigma^2)$.

A common issue with this approach is the unjustified uniform risk scale $b$ (in log space) for all ratings. In addition, this approach generally causes the portfolio-level PD to be underestimated, owing to the convexity of the exponential function (the second derivative of $\exp(\cdot)$ is positive):

  $E(y \mid x) = E(\exp(a + bx + e) \mid x) = \exp\bigl(a + bx + \tfrac{1}{2}\sigma^2\bigr) > \exp(a + bx)$,

so the fitted value $\exp(a + bx)$ understates the conditional mean of the default rate (a quick simulation illustrating this bias is given after the list of benchmark methods).
LGL2:

like method LGL1, rating-level PD is estimated by $p_i = \exp(a + bx)$. However, parameters $a$ and $b$ are estimated by maximizing the loglikelihood given in (2.1). With this approach, the bias for portfolio PD can generally be avoided, though the issue with the unjustified uniform risk scale remains.

EXP-CDF:

this method was proposed by van der Burgt (2008). With this approach, the rating-level PD is estimated by $p_i = \exp(a + bx)$, where $x$ denotes, for rating $R_i$, the adjusted sample cumulative distribution,

  $x(i) = \dfrac{n_1 + n_2 + \cdots + n_{i-1} + \tfrac{1}{2}n_i}{n_1 + n_2 + \cdots + n_{k-1}}$.   (2.13)

Instead of estimating parameters via a cap ratio (van der Burgt 2008), we estimate the parameters by maximizing the loglikelihood given in (2.1).

LGST-INVCDF:

this method was proposed by Tasche (2013). With this approach, the rating-level PD is estimated by using $p_i = 1/(1 + \exp(a + b\,\Phi^{-1}(x)))$, where $x$ is as in (2.13). Parameters are estimated by maximizing the loglikelihood given in (2.1).
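As noted under LGL1 above, exponentiating a log-space fit understates the conditional mean. A quick simulation, with a hypothetical residual volatility, illustrates the size of this retransformation bias.

```python
# Quick check: with e ~ N(0, sigma^2), E[exp(e)] = exp(sigma^2 / 2) > 1,
# so exponentiating the log-space fit exp(a + b*x) understates E(y | x).
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.8                                     # hypothetical residual volatility
e = rng.normal(0.0, sigma, size=1_000_000)
print(np.exp(e).mean(), np.exp(sigma**2 / 2))   # both approximately 1.377
```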

Estimation quality is measured by the following.

p-value:

this is the p-value calculated from the likelihood ratio chi-squared test, with degrees of freedom equal to the number of restrictions, comparing the constrained (smoothed) fit with the unconstrained fit. A higher p-value indicates that the smoothed estimates deviate less significantly from the unconstrained maximum likelihood fit, ie, a better model.

ECL ratio:

this is the ratio of expected credit loss based on the smoothed rating-level PDs to that based on the realized rating-level PDs, given the exposure at default and loss given default parameters for each rating. A significantly lower ECL ratio value indicates a possible underestimation of the credit loss.

PD ratio:

the ratio of the portfolio-level PD aggregated from the smoothed rating-level PDs to the portfolio-level PD aggregated from the realized rating-level PDs. A value significantly lower than 100% for the PD ratio indicates a possible underestimation of the PD at portfolio level.
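As a concrete illustration of the first metric, the p-value can be computed by comparing the constrained fit with the unconstrained fit (the sample default rates). A minimal sketch, assuming strictly positive default counts, follows.

```python
# Sketch of the likelihood ratio test behind the p-value metric: twice the
# loglikelihood gap between the unconstrained MLE (the sample default rates)
# and the smoothed fit, referred to chi-squared with df = number of restrictions.
import numpy as np
from scipy.stats import chi2

def loglik(p, d, n):
    # (2.1): binomial loglikelihood given rating-level PDs p
    return np.sum((n - d) * np.log(1.0 - p) + d * np.log(p))

def lr_pvalue(p_smoothed, d, n, n_restrictions):
    lr = 2.0 * (loglik(d / n, d, n) - loglik(p_smoothed, d, n))
    return chi2.sf(lr, df=n_restrictions)
```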

Table 2 shows the results for Algorithm 2.1 (labeled "CML") when $\varepsilon_1 = \varepsilon_2 = \cdots = \varepsilon_{k-2} = 0$, along with the benchmarks; the smoothed rating-level PDs are listed in columns P1–P6.

Table 2: Smoothed results by Algorithm 2.1 and benchmarks. [All values are given in percent; the ECL and PD ratios are at portfolio level.]

  Method         P1       P2       P3       P4       P5       P6       p-value   ECL ratio   PD ratio
  CML            0.0173   0.0810   0.0810   0.2352   1.2833   3.9442   95.92     99.91       100.00
  LGL1           0.0165   0.0416   0.1053   0.2663   0.6732   1.7022    0.00     46.09        72.57
  LGL2           0.0032   0.1468   0.2901   0.4333   0.5763   0.7191    0.00     27.58       100.07
  EXP-CDF        0.0061   0.0086   0.0294   0.3431   1.9081   2.5057    0.00     72.92       100.21
  LGST-INVCDF    0.0104   0.0188   0.0585   0.2795   1.5457   3.4388    0.00     90.46       100.00
Table 3: Strictly monotonic smoothed rating-level PDs. [All values are given in percent; the ECL and PD ratios are at portfolio level.]

  ε      P1       P2       P3       P4       P5       P6       p-value   ECL ratio   PD ratio
  0.0    0.0173   0.0810   0.0810   0.2352   1.2833   3.9442   95.92     99.91       100.00
  0.1    0.0173   0.0753   0.0832   0.2352   1.2833   3.9442   89.06     99.88       100.00
  0.5    0.0173   0.0552   0.0910   0.2352   1.2833   3.9442   36.63     99.79       100.00
  1.0    0.0120   0.0327   0.0890   0.2419   1.2833   3.9442    2.54     99.63       100.00

These results show that Algorithm 2.1 significantly outperforms the benchmarks on p-value, impacted ECL and aggregated portfolio-level PD. The first log-linear model (LGL1) underestimates the portfolio-level PD significantly, and all the log-linear models (LGL1, LGL2 and EXP-CDF) underestimate the ECL significantly.

Table 3 illustrates the strictly monotonic smoothed rating-level PDs given by Algorithm 2.1 when $\varepsilon_1 = \varepsilon_2 = \cdots = \varepsilon_{k-2} = \varepsilon > 0$. While the p-value deteriorates quickly as $\varepsilon$ increases from 0 to 1, the impacted ECL changes very little.

Example 2: smoothing rating-level long-run PDs in the presence of default correlation

Table 4: Long-run default rate by rating calculated from the sample. [All values are given in percent.]

                          Risk rating                                            Portfolio
                          1        2        3        4        5        6        level
  Long-run AVG PD         0.0215   0.1027   0.0764   0.2731   1.1986   3.8563   0.3818
  Overall distribution    5.07     10.61    27.47    48.32    4.52     4.01     100.00

The synthetically created sample contains the quarterly default count by rating for a portfolio with six nondefault ratings between 2005 Q1 and 2014 Q4. The point-in-time default rate (rating level and portfolio level) is calculated for each quarter and then averaged over the forty-four quarters of the sample window to obtain the estimate of the long-run average realized PD (labeled "AVG PD"). The sample distribution by rating (labeled "overall distribution") is calculated by pooling all forty-four quarters. Table 4 displays the sample statistics (note the heavy size concentration at rating R4).

Table 5: Smoothed correlated long-run rating-level PDs. [All values except AIC are given in percent; AVG PD is the long-run average predicted portfolio-level PD.]

  ε                 P1       P2       P3       P4       P5       P6       AIC      AVG PD   PD ratio
  0.0 (no correl)   0.0179   0.0836   0.0836   0.2371   1.3076   4.0372   694.02   0.3710    97.17
  0.0 (w correl)    0.0183   0.0828   0.0828   0.2545   1.1951   3.9340   594.62   0.3843   100.66
  0.1 (w correl)    0.0183   0.0483   0.0966   0.2541   1.1942   3.9318   600.79   0.3842   100.64
  0.2 (w correl)    0.0035   0.0176   0.0754   0.2775   1.1859   3.9237   617.96   0.3842   100.64
  0.3 (w correl)    0.0010   0.0086   0.0560   0.2905   1.1961   3.9342   637.25   0.3845   100.71

Table 5 shows the smoothed correlated rating-level long-run PD for all six nondefault ratings obtained by using Algorithm 2.2.

Estimation quality is measured by the following.

AIC:

the Akaike information criterion. A lower AIC indicates a better model.

PD ratio:

the ratio of the long-run average predicted portfolio-level PD (labeled “AVG PD”) to the long-run average realized portfolio level PD. A value significantly less than 100% for this ratio indicates a possible underestimation for the PD at portfolio level.

The first row in Table 5 shows results for the case when no default correlation is assumed (labeled “no correl”) and ε is chosen to be 0, while the second row shows those for the case when default correlation is assumed (labeled “w correl”) and ε=0.

The results in the first row show that the estimated long-run portfolio-level PD for the case assuming no default correlation is lower than that for the case when default correlation is assumed (second row), which suggests that the long-run rating-level PDs may be underestimated when default correlation is ignored. The higher AIC value in the first row implies that the assumption of no default correlation may not be appropriate.

Note that, when applying Algorithm 2.2 to the sample used in example 1, assuming no default correlation, we obtained exactly the same estimates as in example 1.

3 Smoothing algorithms for multinomial probability

3.1 Unconstrained maximum likelihood estimates for multinomial probability

For $n$ independent trials, where each trial results in exactly one of $h$ fixed outcomes, the probability of observing frequencies $\{n_i\}$, with frequency $n_i$ for the $i$th ordinal outcome, is

  $\dfrac{n!}{n_1!\,n_2!\,\cdots\,n_h!}\,x_1^{n_1} x_2^{n_2} \cdots x_h^{n_h}$,   (3.1)

where $x_i > 0$ is the probability of observing the $i$th ordinal outcome in a single trial, and

  $n = n_1 + n_2 + \cdots + n_h, \qquad x_1 + x_2 + \cdots + x_h = 1$.

The loglikelihood is

  $\mathrm{LL} = n_1\log x_1 + n_2\log x_2 + \cdots + n_h\log x_h$   (3.2)

up to a constant given by the logarithm of a multinomial coefficient independent of the parameters $\{x_1, x_2, \ldots, x_h\}$. By using the relation $x_h = 1 - x_1 - x_2 - \cdots - x_{h-1}$ and setting to zero the derivative of (3.2) with respect to $x_i$, $1 \le i \le h-1$, we have

  $\dfrac{n_i}{x_i} - \dfrac{n_h}{1 - x_1 - x_2 - \cdots - x_{h-1}} = 0 \;\Longrightarrow\; \dfrac{n_i}{x_i} = \dfrac{n_h}{x_h}$.

Since this holds for each $i$, the vector $(x_1, x_2, \ldots, x_h)$ is proportional to $(n_1, n_2, \ldots, n_h)$. Thus, the maximum likelihood estimate for $x_i$ is the sample estimate

  $x_i = \dfrac{n_i}{n_1 + n_2 + \cdots + n_h} = \dfrac{n_i}{n}$.   (3.3)

3.2 The proposed smoothing algorithm for multinomial probability

We next propose a smoothing algorithm for multinomial probability under the following constraint:

  $0 \le x_1 \le x_2 \le \cdots \le x_h \le 1$.   (3.4)
Algorithm 3.1 (Smoothing multinomial probability).
  (a) Parameterize the multinomial probability by

      $x_i = \dfrac{\exp(b_1 + b_2 + \cdots + b_{h+1-i})}{\exp(b_1) + \exp(b_1 + b_2) + \cdots + \exp(b_1 + b_2 + \cdots + b_h)}$.   (3.5)

  (b) Maximize (3.2), with $x_i$ given by (3.5), for the parameters $b_1, b_2, \ldots, b_h$ subject to

      $b_h \le -\varepsilon_1,\ b_{h-1} \le -\varepsilon_2,\ \ldots,\ b_2 \le -\varepsilon_{h-1},\ b_1 \le 0$   (3.6)

    for $\varepsilon_i \ge 0$, $1 \le i \le h-1$. Derive the CML-smoothed estimates by using (3.5). Then the monotonicity (3.4) of the estimates follows from (3.5) and (3.6).

In the case when $\varepsilon_1 = \varepsilon_2 = \cdots = \varepsilon_{h-1} = \varepsilon \ge 0$, let $\rho = \exp(\varepsilon)$. Then $\rho$ is the greatest lower bound for the ratios $\{x_i/x_{i-1}\}$.
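Below is a minimal Python sketch of Algorithm 3.1, again assuming scipy's SLSQP solver, with a common $\varepsilon$. Consistent with the observation in Section 3.3, the constrained fit pools adjacent outcomes that violate monotonicity.

```python
# Sketch of Algorithm 3.1: CML smoothing of multinomial probabilities under
# the monotonicity constraint (3.4), with a common epsilon.
import numpy as np
from scipy.optimize import minimize

def smooth_multinomial(counts, eps=0.0):
    counts = np.asarray(counts, dtype=float)
    h = len(counts)

    def x_from_b(b):
        w = np.exp(np.cumsum(b))          # exp(b_1 + ... + b_j), j = 1..h
        return w[::-1] / w.sum()          # (3.5): x_i uses j = h + 1 - i

    def neg_loglik(b):
        return -np.sum(counts * np.log(x_from_b(b)))   # (3.2), sign flipped

    # (3.6): b_1 <= 0 and b_j <= -eps for j = 2, ..., h
    bounds = [(None, 0.0)] + [(None, -eps)] * (h - 1)
    res = minimize(neg_loglik, np.full(h, -0.5), method="SLSQP", bounds=bounds)
    return x_from_b(res.x)

# e.g. smooth_multinomial([5, 3, 10]) -> approximately [0.2222, 0.2222, 0.5556]:
# the nonmonotonic first two outcomes are pooled to their simple average.
```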

3.3 An empirical example: smoothing transition probability matrix

Table 6: Long-run transition probability matrixes before and after smoothing.

(a) Transition probability before smoothing

  p1        p2        p3        p4        p5        p6        p7
  0.97162   0.01835   0.00312   0.00554   0.00104   0.00017   0.00017
  0.00621   0.94528   0.03071   0.01284   0.00215   0.00257   0.00025
  0.00071   0.01028   0.93803   0.04089   0.00659   0.00277   0.00074
  0.00024   0.00069   0.01260   0.96726   0.01261   0.00543   0.00118
  0.00039   0.00118   0.00790   0.07996   0.82725   0.07048   0.01283
  0.00022   0.00133   0.00266   0.04498   0.01197   0.89940   0.03944

(b) Transition probability after smoothing

  p1        p2        p3        p4        p5        p6        p7
  0.97162   0.01835   0.00433   0.00433   0.00104   0.00017   0.00017
  0.00621   0.94528   0.03071   0.01284   0.00236   0.00236   0.00025
  0.00071   0.01028   0.93803   0.04089   0.00659   0.00277   0.00074
  0.00024   0.00069   0.01260   0.96726   0.01261   0.00543   0.00118
  0.00039   0.00118   0.00790   0.07996   0.82725   0.07048   0.01283
  0.00022   0.00133   0.00266   0.02847   0.02847   0.89940   0.03944

Rating migration matrix models (Miu and Ozdemir 2009; Yang and Du 2016) are widely used for International Financial Reporting Standard 9 ECL estimation and CCAR stress testing. Given a nondefault risk rating $R_i$, let $n_{ij}$ be the observed long-run transition frequency from $R_i$ to $R_j$ at the end of the horizon, and let $n_i = n_{i1} + n_{i2} + \cdots + n_{ik}$. Let $p_{ij}$ be the long-run transition probability from $R_i$ to $R_j$. By (3.3), the maximum likelihood estimate for $p_{ij}$, given the observed long-run transition frequencies $\{n_{ij}\}$ for a fixed $i$, is

  $p_{ij} = \dfrac{n_{ij}}{n_i}$.   (3.7)

It is widely expected that higher risk grades carry greater default risk, and that an entity is more likely to be downgraded or upgraded to a closer nondefault rating than a more distant nondefault rating. The following constraints are thus required:

  $p_{i,i+1} \ge p_{i,i+2} \ge \cdots \ge p_{i,k-1}$,   (3.8)
  $p_{i,1} \le p_{i,2} \le \cdots \le p_{i,i-1}$,   (3.9)
  $p_{1k} \le p_{2k} \le \cdots \le p_{k-1,k}$.   (3.10)

The constraint (3.10) is for rating-level PD, which was discussed in Section 2.

Smoothing the long-run migration matrix involves the following steps.

  (a) Rescale the migration probabilities $\{p_{i1}, p_{i2}, \ldots, p_{i,i-1}\}$ in (3.9) so that they sum to 1. Then find the CML-smoothed estimates by using Algorithm 3.1, and rescale these CML estimates back so that $\{p_{i1}, p_{i2}, \ldots, p_{i,i-1}\}$ sums to the same value as before smoothing. Do the same for (3.8). (A sketch of this step is given after this list.)

  (b) Find the CML-smoothed estimates by using Algorithm 2.1 for the rating-level default rates. Keep these CML default rate estimates unchanged and rescale, for each nondefault rating $R_i$, the nondefault migration probabilities $\{p_{i1}, p_{i2}, \ldots, p_{i,k-1}\}$ so that the entire row $\{p_{i1}, p_{i2}, \ldots, p_{ik}\}$ sums to 1.
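As referenced in step (a), here is a minimal sketch of the segment smoothing for one migration row, reusing the hypothetical smooth_multinomial helper from the Section 3.2 sketch; the rescaled segment probabilities act as fractional multinomial weights, which leaves the CML pooling unchanged.

```python
# Sketch of step (a): smooth the upgrade and downgrade segments of one
# migration row, preserving each segment's total. Reuses smooth_multinomial
# from the Section 3.2 sketch; row has k entries and r is the 1-based
# initial rating (hypothetical helper; step (b) handles the default column).
import numpy as np

def smooth_migration_row(row, r):
    row = np.asarray(row, dtype=float).copy()
    k = len(row)
    up = row[: r - 1]            # p_{r,1}..p_{r,r-1}, nondecreasing by (3.9)
    if len(up) > 1:
        row[: r - 1] = smooth_multinomial(up / up.sum()) * up.sum()
    down = row[r : k - 1]        # p_{r,r+1}..p_{r,k-1}, nonincreasing by (3.8)
    if len(down) > 1:
        # reverse so that Algorithm 3.1's nondecreasing constraint applies
        row[r : k - 1] = smooth_multinomial(down[::-1] / down.sum())[::-1] * down.sum()
    return row
```

Applied to the first row of Table 6(a) (with r = 1), this reproduces the pooled values 0.00433 shown in Table 6(b).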

Table 6 shows the empirical results of using Algorithms 2.1 and 3.1 to smooth the long-run migration matrix, where for Algorithm 3.1 all $\varepsilon_i$ are set to zero.

The sample used here is created synthetically. It consists of the historical quarterly rating transition frequency for a commercial portfolio from 2005 Q1 to 2015 Q4. There are seven risk ratings, with R1 being the best quality rating and R7 being the default rating.

Part (a) shows sample estimates for the long-run transition probabilities before smoothing, while part (b) shows the CML-smoothed estimates. There are three rows (the first, second and sixth) where the sample estimates violate (3.8) or (3.9) (but (3.10) is satisfied). The rating-level sample default rates (the column labeled "p7") do not require smoothing.

As shown in the table, the CML-smoothed estimates are the simple average of the relevant nonmonotonic sample estimates. (For the structure of CML-smoothed estimates for multinomial probabilities, we show theoretically in a separate paper that the CML-smoothed estimate for an ordinal class is either the sample estimate or the simple average of the sample estimates for some consecutive ordinal classes including the named class.)

4 Conclusions

Regression and interpolation approaches are widely used for smoothing rating transition probability and rating-level probability of default. A common issue with these methods is that the risk scale for the estimates does not have a strong mathematical basis, leading to possible bias in credit loss estimation. In this paper, we propose smoothing algorithms based on constrained maximum likelihood for rating-level PD and for rating migration probability. These smoothed estimates are optimal in the sense of constrained maximum likelihood, with a fair risk scale determined by constrained maximum likelihood, leading to a fair and more justified credit loss estimation. These algorithms can be implemented by a modeler using, for example, the SAS procedure PROC NLMIXED.

Declaration of interest

The author reports no conflicts of interest. The author alone is responsible for the content and writing of the paper. The views expressed in this paper are not necessarily those of the Royal Bank of Canada or any of its affiliates.

Acknowledgements

The author thanks both referees for suggesting extended discussion to cover both the case when default correlation is assumed and the likelihood ratio test for the constrained maximum likelihood estimates. Special thanks to Carlos Lopez for his consistent input, insights and support for this research. Thanks also go to Clovis Sukam and Biao Wu for their critical reading of this manuscript, and Zunwei Du, Wallace Law, Glenn Fei, Kaijie Cui, Jacky Bai and Guangzhi Zhao for many valuable conversations.

References

  • Ankarath, N., Ghosh, T. P., Mehta, K. J., and Alkafaji, Y. A. (2010). Understanding IFRS Fundamentals. Wiley.
  • Board of Governors of the Federal Reserve System (2016). Comprehensive Capital Analysis and Review 2016: summary instructions. Report, January, Board of Governors of the Federal Reserve System, Washington, DC.
  • Miu, P., and Ozdemir, B. (2009). Stress testing probability of default and rating migration rate with respect to Basel II requirements. The Journal of Risk Model Validation 3(4), 3–38 (https://doi.org/10.21314/JRMV.2009.048).
  • SAS Institute (2009). SAS 9.2 user’s guide: the NLMIXED procedure. SAS Institute Inc., Cary, NC.
  • Tasche, D. (2013). The art of probability-of-default curve calibration. The Journal of Credit Risk 9(4), 63–103 (https://doi.org/10.21314/JCR.2013.169).
  • van der Burgt, M. J. (2008). Calibrating low-default portfolios, using the cumulative accuracy profile. The Journal of Risk Model Validation 1(4), 17–33 (https://doi.org/10.21314/JRMV.2008.016).
  • Yang, B. H. (2017). Point-in-time probability of default term structure models for multiperiod scenario loss projection. The Journal of Risk Model Validation 11(1), 73–94 (https://doi.org/10.21314/JRMV.2017.164).
  • Yang, B. H., and Du, Z. (2016). Rating-transition-probability models and Comprehensive Capital Analysis and Review stress testing. The Journal of Risk Model Validation 10(3), 1–19 (https://doi.org/10.21314/JRMV.2016.155).
