No silver bullet for AI explainability
No single approach to interpreting a neural network’s outputs is perfect, so it’s better to use them all
As artificial intelligence becomes more powerful, explaining the outputs of these models also becomes more challenging.
Deep learning techniques – and neural networks in particular – are playing an increasingly important role within financial institutions, where they are used to automate everything from options hedging to credit card lending. The outputs of these models are the result of interactions between the hidden layers of the network, which are often difficult to trace, let alone explain.
Efficiently interpreting these model outputs is not only necessary for financial institutions to build reliable and transparent models, but also to satisfy increasing regulatory scrutiny. “Regulators are looking into how automated decisions are made, and whether they have some biases that hadn’t been discovered before,” says Ksenia Ponomareva, global head of analytics at Riskcare, and one of the authors of Interpretability of neural networks: a credit card default model example.
Ponomareva and Simone Caenazzo, a senior quant analyst at Riskcare, studied some popular approaches to explaining the outputs of neural networks.
They conclude that none of the techniques considered in their study is consistently superior to the others; rather, each has its own particular strengths. And because the techniques provide different insights, combining several of them may be more informative than relying on any one.
Three explainability techniques – relevance analysis, sensitivity analysis and neural activity analysis – are considered in the paper.
The first of these measures the relevance of each input variable used in a neural network. By aggregating the individual measures of relevance, it is possible to assess their marginal contribution to the output.
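As an illustration only (the paper's own method is not reproduced here), one simple way to measure input relevance is gradient-times-input on a toy ReLU network; the network weights below are hypothetical stand-ins for a trained model:

```python
import numpy as np

# Hypothetical toy network: 3 inputs -> 4 ReLU hidden units -> scalar output
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))

def forward(x):
    h = np.maximum(W1 @ x, 0.0)      # hidden activations
    return (W2 @ h)[0], h

def relevance(x):
    """Gradient-times-input relevance of each input feature."""
    y, h = forward(x)
    grad = (W2 * (h > 0)) @ W1       # gradient of output w.r.t. inputs
    return grad.flatten() * x        # elementwise input contributions

x = np.array([1.0, -0.5, 2.0])
r = relevance(x)
```

Because this bias-free ReLU network is piecewise linear, the per-input relevances sum exactly to the output, which is the "marginal contribution" property the aggregation relies on.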
Sensitivity analysis measures how changes to input variables affect the output. This can help researchers identify which input variables influence the output the most and how changing the relevant input variables can affect the output.
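A minimal sketch of this idea, treating the model as a black box and perturbing one input at a time (the quadratic `model` below is a hypothetical stand-in for a trained network):

```python
import numpy as np

def sensitivity(model, x, eps=1e-4):
    """Central-difference sensitivity of a scalar model output
    to each input variable."""
    s = np.zeros_like(x)
    for i in range(len(x)):
        up, dn = x.copy(), x.copy()
        up[i] += eps
        dn[i] -= eps
        s[i] = (model(up) - model(dn)) / (2 * eps)
    return s

# Toy model standing in for a trained network
model = lambda x: x[0] ** 2 + 3.0 * x[1]
x = np.array([2.0, 1.0])
print(sensitivity(model, x))   # ≈ [4.0, 3.0]
```

Ranking the entries of the result by magnitude identifies the inputs the output responds to most strongly near the point `x`.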
Neural activity analysis is used to catalogue the paths in the neural network that are activated most frequently. This can highlight potential biases or inefficiencies in the data or in the network itself by detecting paths or nodes that are either activated very often or not at all.
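A rough sketch of the counting step, under the simplifying assumption of a single ReLU hidden layer with hypothetical weights: units that never fire across the dataset are dead, and units that always fire may indicate a bias or wasted capacity:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(6, 4))   # hypothetical layer: 4 inputs -> 6 hidden units

def activation_counts(X):
    """Count how often each hidden ReLU unit fires across a batch."""
    H = np.maximum(X @ W1.T, 0.0)   # hidden activations, shape (n, 6)
    return (H > 0).sum(axis=0)      # per-unit firing counts

X = rng.normal(size=(1000, 4))      # stand-in for applicant features
counts = activation_counts(X)
freq = counts / len(X)              # firing frequency per unit, in [0, 1]
```

Extending the count from single units to sequences of co-activated units across layers gives the frequently activated "paths" the technique catalogues.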
Ponomareva and Caenazzo tested the approaches using a standard neural network and a credit card dataset popular with researchers in finance. This is a widely tested application of AI and is useful for assessing the information that each approach to interpretability is able to provide.
Each of the three techniques provided information that, when collated, presented a broader and clearer picture of how the output was obtained: relevance analysis showed that gender, education and marital status are significant factors in a default probability model; sensitivity analysis revealed the output is particularly sensitive to late payments; and neural activity analysis offered some insight into whether candidates were being clustered in a consistent way, by observing how they activate particular neurons.
“We found the neural network was sensitive to how late customers made payments,” says Ponomareva, “whereas other models were more punitive towards certain age or gender groups, or marital status.”
Copyright Infopro Digital Limited. All rights reserved.