Journal of Operational Risk

Welcome to the second issue of Volume 13 of The Journal of Operational Risk.

The most anticipated update in operational risk regulation has finally arrived: in December 2017, the Basel Committee on Banking Supervision (BCBS) published the final version of its standardized measurement approach (SMA) methodology, which will replace the approaches set out in Basel II (ie, the simpler standardized approaches and the advanced measurement approach (AMA), which permitted the use of internal models) from January 1, 2022. The SMA is part of a new set of rules under Basel III that also significantly changed the face of credit risk measurement and placed many other requirements on financial institutions. Basel III provides discretionary transitional arrangements until 2027 for cases in which the SMA changes a firm’s current risk-weighted assets (RWAs) by 25% or more: that is, if the SMA reduces a firm’s RWAs by more than 25%, a floor will be imposed. In most cases, this should not affect the revised operational risk framework’s implementation date of 2022, as the SMA is a standardized approach.

I am sure that all of our subscribers and readers have already comprehensively and carefully digested the new rules, so this editor’s letter may not be the most appropriate place to dwell on them. I understand that most practitioners are currently performing regulatory impact analyses; depending on where a firm is based, the impact can be positive or negative, and more comprehensive analysis will certainly come with time. What I can say is that, independent of the Basel III rules, in order to manage and mitigate your risks, you need to be able to measure them. The operational risk industry needs to keep that in mind. While the purpose of the now-defunct AMA was to measure a firm’s regulatory capital level in order to protect the firm against operational risks, we can and should still use models to estimate operational risk economic capital, precisely because, without them, the task of managing and mitigating these risks would be incredibly difficult. These internal models are now unshackled from regulatory requirements and can be optimized for managing the daily risks to which financial institutions are exposed. In addition, operational risk models can and should be used for stress tests and the Comprehensive Capital Analysis and Review (CCAR). From my conversations with practitioners, most firms are having their operational risk analytics teams focus on exactly this.

One positive note regarding the Basel III rules is that the requirement for firms to provide greater disclosure of their losses is likely to bring operational risk up to the standard of credit risk in terms of data requirements and analysis. The regulatory focus under Basel III will be on the comprehensiveness of banks’ data collection and reporting.

However, if we look closely, we notice that the key regulatory tasks for operational risk can be performed by finance or controller functions. The calculation of operational risk capital can be performed using a spreadsheet, and the key data with which we will fill that spreadsheet, ie, the operational risk losses, can be taken straight from a firm’s balance sheet. Therefore, the focus of the operational risk manager should be diverted from these menial tasks toward actual loss avoidance and management. This reinforces the point I made earlier that, given the complexity of financial institutions, without models, managing and mitigating operational risk would be impossible.

Operational risk managers and quantitative analysts will now have more time to focus on the emerging operational risks that are almost constantly generating headlines. Despite much progress having been made over recent years, significant operational risks remain poorly understood, and they deserve more attention and better measurement frameworks. A notable example is cybersecurity: an area in which banks struggle to strike the right cost–benefit trade-off between investments in improved controls and the risk exposures those controls mitigate.

Given these changes in the regulatory landscape and their impact on the industry, it is fair to say that The Journal of Operational Risk, which is the most important publication in the industry and features most of the top-level thinking in operational risk, also needs to adapt. We are always talking to practitioners and academics in order to understand their needs and concerns. Over the next few months, we will step up these conversations and listen to suggestions on how we can change what we do to be more useful to our subscribers and the industry as a whole.

From now on, we will be expecting more papers on cyber and IT risks: not only on their quantification but also on better ways to manage them. We would also like to publish more papers on important subjects that fall under the broad umbrella of enterprise risk management, such as establishing risk policies and procedures, implementing firm-wide controls, risk aggregation and revamping risk organizations. As I said before, we still anticipate receiving analytical papers on operational risk measurement, but we expect their focus to shift toward stress testing and the actual management of these risks.

These are certainly exciting times! The Journal of Operational Risk, as the leading publication in this area, aims to be at the forefront of these discussions, and we welcome any paper that will shed light on them.

 

PAPERS

In this issue, we have four technical papers. It is interesting to note that these papers are already the result of research focused on operational risk beyond the AMA.

 

RESEARCH PAPERS

In our first paper, “Operational risk measurement beyond the loss distribution approach: an exposure-based methodology”, Michael Einemann, Joerg Fritscher and Michael Kalkbrener present an alternative operational risk quantification technique called the exposure-based operational risk (EBOR) model. EBOR aims to replace historical severity curves with measures of current exposures, and to base event frequencies on actual exposures rather than on historical loss counts. The authors introduce a general mathematical framework for exposure-based modeling that is applicable to a large number of operational risk types. As a numerical example, an EBOR model for litigation risk is presented. In addition, the authors discuss the integration of EBOR and loss distribution approach models into hybrid frameworks facilitating the migration of operational risk subtypes from a classical to an exposure-based treatment. The implementation of EBOR models is a challenging task, since new types of data and a higher degree of expert involvement are required. In return, EBOR models provide a transparent quantitative framework for combining forward-looking expert assessments, point-in-time data (eg, current portfolios) and historical loss experience. Individual loss events can be modeled in a granular way, which facilitates the reflection of loss-generating mechanisms and provides more reliable signals for risk management.
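To give a concrete (and deliberately simplified) picture of what an exposure-based calculation can look like for litigation risk, consider the following stylized aggregate loss, where the notation is illustrative and mine rather than the paper’s:

\[
L \;=\; \sum_{i=1}^{N} B_i \, E_i \, G_i, \qquad B_i \sim \mathrm{Bernoulli}(p_i),
\]

where \(N\) is the number of pending lawsuits (the current exposure), \(p_i\) is an expert-assessed probability of losing suit \(i\), \(E_i\) is the amount claimed and \(G_i \in (0,1]\) is the fraction of the claim paid if the suit is lost. Frequency and severity are thus driven by the current portfolio of suits and by expert judgment rather than by historical loss counts; the paper develops a far more general framework along these lines.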

“Distortion risk measures for nonnegative multivariate risks” is this issue’s second paper. In it, Montserrat Guillen, José María Sarabia, Jaume Belles-Sampera and Faustino Prieto apply distortion functions to bivariate survival functions for nonnegative random variables. This leads to a natural extension of univariate distortion risk measures to the multivariate setting. For certain families of multivariate distributions, the resulting risk measure is straightforward to compute, and the authors show that an exact analytical expression can be obtained in some cases, which makes the life of the operational risk analyst much easier. As an illustration, the authors provide a case study that considers two distributions: the bivariate Pareto and the bivariate exponential. In this case study, the authors consider two loss events with a single risk value and monitor the two events over four different periods. They conclude that the dual power transform gives more weight to the observations of extreme losses, but that the distortion parameter can modulate this influence in all cases.
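As background, the univariate object that the paper extends is standard: for a nonnegative risk \(X\) with survival function \(S_X(x) = \mathbb{P}(X > x)\), a distortion function \(g\), ie, an increasing map \(g : [0,1] \to [0,1]\) with \(g(0) = 0\) and \(g(1) = 1\), defines the risk measure

\[
\rho_g(X) \;=\; \int_0^{\infty} g\bigl(S_X(x)\bigr)\,\mathrm{d}x.
\]

The dual power transform referred to in the authors’ conclusion is \(g(u) = 1 - (1-u)^{r}\) with \(r \geq 1\); larger values of \(r\) shift weight toward the tail, which is why it emphasizes extreme losses. The paper’s contribution is to apply such distortions to bivariate survival functions; the precise multivariate construction is developed there.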

In “An operational risk capital model based on the loss distribution approach”, our third paper, Ruben D. Cohen constructs an economic capital model for operational risk based on the observation that operational losses can, under a certain dimensional transformation, converge into a single, universal distribution. Derivation of the model is then accomplished by directly applying the loss distribution approach to the transformed data, yielding an expression for risk capital that can be calibrated. The expression, however, is applicable only to nonconduct losses, because it incorporates empirical behaviors that are specific to them. For loss data that falls under the conduct category, this approach may not be applicable, so one may have to resort to a different type of modeling technique.
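For readers who want to situate the paper, a minimal sketch of the classical loss distribution approach it builds on is given below: simulate an annual loss count from a frequency distribution, draw that many severities, sum them, and read capital off a high quantile of the aggregate distribution. The Poisson/lognormal choice and all parameter values here are illustrative assumptions, not the paper’s calibration or its dimensional transformation.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative (assumed) parameters; not taken from the paper.
lam = 25.0              # expected number of loss events per year (Poisson)
mu, sigma = 10.0, 2.0   # lognormal severity parameters on the log scale

n_years = 100_000       # number of simulated years

# Number of loss events in each simulated year.
counts = rng.poisson(lam, size=n_years)

# Draw all severities at once, then sum them within each year.
severities = rng.lognormal(mean=mu, sigma=sigma, size=counts.sum())
year_ids = np.repeat(np.arange(n_years), counts)
annual_loss = np.bincount(year_ids, weights=severities, minlength=n_years)

# Capital as the 99.9% quantile of the annual aggregate loss
# distribution: the confidence level used under the former AMA.
print(f"99.9% quantile of annual aggregate loss: "
      f"{np.quantile(annual_loss, 0.999):,.0f}")
```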

Henryk Gzyl presents a simple probabilistic model for aggregating very large losses to a loss database in our fourth paper, “Modeling very large losses”. Most operational risk databases are composed of small to moderate losses that, in aggregate, amount to high values. Very large losses, by contrast, occur very rarely and may sometimes be discarded because they fit poorly with the overall distribution formed by these smaller loss events. Despite this, large losses cannot be discarded, for regulatory (and economic) reasons. In this paper, the author develops a simple modeling procedure that allows us to include very large losses in a loss distribution fitted with smaller-sized losses, with little impact on the overall results.
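One simple way to frame the problem (a stylized setup, not necessarily the author’s exact construction) is a two-component mixture,

\[
f(x) \;=\; (1-\varepsilon)\, f_{\mathrm{body}}(x) \;+\; \varepsilon\, f_{\mathrm{tail}}(x), \qquad 0 < \varepsilon \ll 1,
\]

in which \(f_{\mathrm{body}}\) is fitted to the bulk of small and moderate losses while the very large losses enter through \(f_{\mathrm{tail}}\) with a small weight \(\varepsilon\), leaving the body fit essentially unchanged yet keeping the extreme observations in the model.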

 

Marcelo Cruz
