Journal of Risk Model Validation

In this issue of The Journal of Risk Model Validation, we have the usual four papers, with Basel II/III clearly manifest in the first three. The final paper is more concerned with equity-trading backtesting issues, although its potential for application is much broader. As always, we welcome both mainstream and less conventional applications.

Our first paper, “Quantification of model risk in stress testing and scenario analysis” by Jimmy Skoglund, is concerned with understanding and quantifying the model risk inherent in loss-projection models. Model risk should be a significant concern for both banks and regulators. It is an interesting concept, as it pushes the basic assumptions of statistics a step further: instead of positing a single true model, we allow for the existence of multiple candidate models. In such a framework, a Bayesian approach can be natural and convenient. This paper addresses these issues through the use of entropy and worst-case analysis, with the author referring to this procedure as the relative entropy approach.
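To fix ideas, the sketch below illustrates the exponential-tilting mechanics that underlie relative entropy (worst-case) bounds of this kind: one searches for the measure closest to the baseline, in the Kullback–Leibler sense, that maximizes expected loss within a given entropy budget. It is a toy illustration on simulated losses with an assumed budget, not Skoglund's own procedure.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
# Hypothetical baseline loss projections from a stress-testing model.
losses = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def tilted_stats(theta, losses):
    """Exponentially tilt the baseline distribution by exp(theta * loss)."""
    w = np.exp(theta * (losses - losses.max()))   # stabilized weights
    w /= w.sum()
    worst_mean = np.sum(w * losses)
    # Relative entropy (KL divergence) of the tilted measure vs the baseline.
    kl = np.sum(w * np.log(w * len(losses)))
    return worst_mean, kl

eta = 0.05   # illustrative relative-entropy budget (an assumption)
theta = brentq(lambda t: tilted_stats(t, losses)[1] - eta, 1e-6, 5.0)
worst_case, _ = tilted_stats(theta, losses)
print(f"baseline expected loss {losses.mean():.3f}, "
      f"worst case within KL budget {worst_case:.3f}")
```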

In the issue’s second paper, “The utility of Basel III rules on excessive violations of internal risk models”, Wayne Tarrant highlights how the Basel III international regulatory framework for banks calls for the use of historical empirical distributions as a backup to conventional modeling. If the historical distribution is the fallback for an internal model, it is important to know whether the empirical distribution gives a better view of future reality. The paper examines the efficacy of risk measures on energy markets and across several different stock market indexes. Tarrant considers several different window lengths and confidence levels for the selected historical risk measures, counting the number of violations that Basel III uses to assess compliance. He then makes some recommendations concerning the Basel III framework. On the basis of this analysis, the paper argues that the Bank for International Settlements should adopt stricter guidelines than those presently in place.
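For readers less familiar with the backtesting machinery involved, the following sketch counts exceptions of a rolling historical-simulation value-at-risk in the spirit of the Basel traffic-light test. The synthetic return series, window length and confidence level are illustrative assumptions, not the paper's data or choices.

```python
import numpy as np

def var_violations(returns, window=250, level=0.99):
    """Count days on which the loss exceeds a rolling historical-simulation VaR."""
    violations = 0
    for t in range(window, len(returns)):
        hist = returns[t - window:t]
        var = -np.quantile(hist, 1.0 - level)   # positive number = VaR loss
        if -returns[t] > var:
            violations += 1
    return violations

# Illustrative synthetic data, not the paper's energy or equity index series.
rng = np.random.default_rng(1)
rets = rng.standard_t(df=5, size=500) * 0.01
n = var_violations(rets)
# Basel traffic-light zones for a 99% VaR over 250 backtesting days:
# green 0-4 violations, yellow 5-9, red 10 or more.
print(n, "violations over", len(rets) - 250, "backtesting days")
```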

Xin Zhang and Tony Tung’s “On the mathematical modeling of point-in-time and through-the-cycle probability of default estimation/validation”, the third paper in this issue, notes that since Basel II, the second of the Basel Accords, was first published in June 2004, banks around the world have been engaged in a continuous effort to develop methodologies to estimate the key parameters: probability of default (PD), loss given default and exposure at default. The authors focus on PD estimation and validation, and provide mathematical modeling for both point-in-time and through-the-cycle PD estimation.
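One common way to connect the two notions is the one-factor Gaussian (Vasicek-type) adjustment sketched below, which maps a through-the-cycle PD to a point-in-time PD given the state of the credit cycle. This is a textbook illustration with assumed parameter values, not necessarily the authors' formulation.

```python
import numpy as np
from scipy.stats import norm

def pit_pd(ttc_pd, z, rho=0.12):
    """Map a through-the-cycle PD to a point-in-time PD given the credit-cycle
    factor z (z < 0 = downturn) under a one-factor Gaussian model; the asset
    correlation rho = 0.12 is an illustrative assumption."""
    return norm.cdf((norm.ppf(ttc_pd) - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))

ttc = 0.02                      # illustrative 2% through-the-cycle PD
for z in (-2.0, 0.0, 2.0):      # downturn, neutral and benign points of the cycle
    print(f"z = {z:+.1f}  PIT PD = {pit_pd(ttc, z):.4f}")
```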

Our final paper is by Cathy W. S. Chen, Tsai-Yu Lin and T. Y. Huang. “Incorporating volatility in tolerance intervals for pair-trading strategy and backtesting” argues that return volatility plays a crucial role in securities trading. This paper incorporates volatility forecasting via the exponentially weighted moving average model into traditional tolerance limits for pair-trading strategies: the authors call this the semi-parametric version of the tolerance interval. This paper differs from our usual fare in that it is concerned not with loan books or credit risk but with conventional equity markets. The authors claim that their proposed method helps uncover arbitrage opportunities in the daily return spreads of fifteen US equities. The study compares the backtesting performance of the proposed pair-trading strategy with that of individual stocks and of the traditional tolerance interval strategy over three semiannual periods from June 2016 to December 2017. The results show that the proposed trading strategy is the most profitable of the three, outperforming both investment in the individual stocks and the traditional tolerance interval strategy.
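The sketch below shows the basic mechanics of widening and narrowing a trading band with exponentially weighted moving average volatility. The synthetic spread series, smoothing parameter, band width and entry rule are illustrative assumptions, not the authors' semi-parametric tolerance interval.

```python
import numpy as np

def ewma_vol(spread, lam=0.94):
    """One-step-ahead EWMA volatility forecasts for a return spread."""
    var = np.empty_like(spread)
    var[0] = spread[:20].var()           # seed with an early sample variance
    for t in range(1, len(spread)):
        var[t] = lam * var[t - 1] + (1.0 - lam) * spread[t - 1] ** 2
    return np.sqrt(var)

# Illustrative synthetic daily return spread between two hypothetical stocks.
rng = np.random.default_rng(2)
spread = 0.002 * rng.standard_normal(500)

vol = ewma_vol(spread)
k = 2.0                                   # illustrative band width
upper = spread.mean() + k * vol
lower = spread.mean() - k * vol

# Entry signals only: short the spread when it breaches the upper band,
# long when it breaches the lower band (exit rules are omitted in this sketch).
signals = np.where(spread > upper, -1, np.where(spread < lower, 1, 0))
print("number of entry signals:", np.count_nonzero(signals))
```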


Steve Satchell
Trinity College, University of Cambridge