Quants are using language models to map what causes what

GPT-4 does a surprisingly good job of separating causation from correlation

Divorce rates in Maine and the consumption of margarine; ice cream sales and drowning incidents: it’s easy to find examples of spurious statistical links.

Of course, those handling data should know well that causation and correlation are different things. Nobody would expect ice cream to cause drowning. In the arena of investing, though, true cause and effect can be harder to establish. And one group of quants thinks a lack of rigour in this area is a problem for the industry.

Models should be treated as unscientific unless they’re preceded by a detailed causal analysis, they argue. Otherwise, mis-specified models are likely to find their way into production and to crowd out truer representations of how markets really work.

In pursuit of a purer understanding of the cogs and wheels of markets, quants have started to test different approaches. It isn’t altogether easy. 

The objective here is to create a so-called DAG (directed acyclic graph), essentially a data map of causal relationships, drawn as a network of arrows pointing from one variable to another. Documenting causality in such a way goes further than formulating broad hypotheses about a given strategy earning a positive return, the causal camp argues.
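For a concrete picture, a small DAG can be encoded in a few lines of Python with the networkx library. The variables echo the macro examples discussed later in this piece, but the edges here are hypothetical illustrations, not relationships asserted by any of the research:

```python
import networkx as nx

# A toy causal graph over macro variables (edges are illustrative, not claims)
dag = nx.DiGraph()
dag.add_edges_from([
    ("oil_demand", "oil_price"),
    ("oil_price", "us_inflation"),
    ("food_prices", "us_inflation"),
    ("us_inflation", "dollar_strength"),
    ("dollar_strength", "gold"),
])

# A DAG must contain no directed cycles: a cause cannot be its own ancestor
assert nx.is_directed_acyclic_graph(dag)

# A topological order lists variables so every cause precedes its effects
print(list(nx.topological_sort(dag)))
```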

The in-vogue way to create DAGs, then, is to use causal discovery algorithms. These algorithms aim to infer from raw observational data what’s driving what.
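As a minimal sketch of what such an algorithm does, assuming the open-source causal-learn package, the classic PC algorithm can be run on simulated data with a known causal chain:

```python
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

# Simulate data with a known causal chain: x0 -> x1 -> x2
rng = np.random.default_rng(0)
n = 5_000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
data = np.column_stack([x0, x1, x2])

# The PC algorithm builds a graph from conditional-independence tests alone.
# Note: even here it can only identify the structure up to a class of
# equally plausible graphs (a chain has no v-structure to orient the edges).
cg = pc(data, alpha=0.05)
print(cg.G)
```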


That sounds promising. Quants have access to ever more data about the world, and growing computing power should help, too. But the algorithms must search a space of candidate causal structures that explodes as variables are added. In finance, with its mountains of data, the process can be time-consuming and sometimes impossible from observational data alone.
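How quickly that space explodes is a known combinatorial fact: the number of possible DAGs over n variables grows super-exponentially, as Robinson’s recurrence for counting labelled DAGs makes concrete. A quick sketch:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    """Number of labelled DAGs on n nodes (Robinson's recurrence)."""
    if n == 0:
        return 1
    return sum(
        (-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
        for k in range(1, n + 1)
    )

for n in (3, 5, 10, 20):
    print(n, num_dags(n))
# 3 -> 25; 5 -> 29,281; 10 -> roughly 4.2e18; 20 is astronomically larger
```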

Another route, then, is simply to rely on human expertise. Quants following this approach have run into problems, too, though. It can take multiple experts to draw a relatively simple causal graph. And in markets that move rapidly, such an exercise can prove obsolete before it is even complete. 

So quants have come up with a third idea – and an obvious one in today’s world: to apply large language models to the task.

A 2021 project used a large language model built by the firm Causal Link to generate causal graphs from expert opinions, which the system gathers from 50,000 news articles a day.

The researchers constructed example graphs linking macro variables such as dollar strength, food prices, gold, oil demand, US inflation and so on. 

The build time of causal graphs with this “wisdom of the crowds” approach can be reduced to a “matter of seconds”, the researchers stated. 
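Causal Link’s aggregation mechanics aren’t spelled out here, but a minimal sketch of a “wisdom of the crowds” step, assuming simple vote-counting over edges proposed by different sources, might look like this:

```python
from collections import Counter

# Each article yields a set of directed claims; these examples are illustrative
article_graphs = [
    {("oil_price", "us_inflation"), ("dollar_strength", "gold")},
    {("oil_price", "us_inflation")},
    {("gold", "dollar_strength")},  # a dissenting view on direction
]

# Count how many independent sources assert each directed edge
votes = Counter(edge for graph in article_graphs for edge in graph)

# Keep an edge only if enough sources agree on it
MIN_VOTES = 2
consensus = [edge for edge, n in votes.items() if n >= MIN_VOTES]
print(consensus)  # [('oil_price', 'us_inflation')]
```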

In another project, conducted late last year, quants employed GPT-4 to organise 153 factors into clusters and map out causal charts within those clusters.

The groupings generated by GPT-4 predicted monthly returns just as well as conventional correlation-based versions, the researchers found, and were less correlated and easier to interpret.

Two-thirds of the relationships proposed by GPT-4 aligned with statistical causality tests, a notably high rate.
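The research doesn’t specify which tests were applied. One common choice, offered here purely as an assumed illustration, is the Granger causality test from the statsmodels library, which checks whether past values of one series improve forecasts of another:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulate a series y that lags behind x, so x should "Granger-cause" y
rng = np.random.default_rng(1)
x = rng.normal(size=1_000)
y = 0.7 * np.roll(x, 1) + 0.3 * rng.normal(size=1_000)

# Convention: the test asks whether column 2 Granger-causes column 1
data = np.column_stack([y, x])
res = grangercausalitytests(data, maxlag=2, verbose=False)

f_stat, p_value, *_ = res[1][0]["ssr_ftest"]
print(f"lag 1: F={f_stat:.1f}, p={p_value:.2g}")  # small p rejects "no causality"
```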

The trick to using large language models in this way, says Alik Sokolov, who worked on the project, is to interrogate the model correctly, sometimes with chains of prompts.
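The project’s actual prompts aren’t disclosed. Purely as a hypothetical illustration of such a prompt chain, using the official OpenAI Python client, the two steps described above, clustering factors and then mapping edges within clusters, might be wired together like this:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt to the model and return its text reply."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: group factors into economically coherent clusters
factors = ["book_to_price", "momentum_12m", "oil_beta", "dollar_beta"]  # illustrative
clusters = ask(
    "Group these equity factors into clusters of related economic drivers, "
    f"one cluster per line: {', '.join(factors)}"
)

# Step 2: within each cluster, ask for directed causal edges with rationale
edges = ask(
    "For the following factor clusters, list plausible causal relationships "
    f"as 'A -> B', one per line, with a one-line economic rationale:\n{clusters}"
)
print(edges)
```

Keeping the clustering and edge-proposal steps separate keeps each prompt small and leaves the intermediate output open to inspection before the next step runs.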

Sokolov is managing director of machine learning at RiskLab at the University of Toronto and co-founder and CEO of Sibli, which builds AI tools for investors. 

Sokolov reckons firms could in future set up a strategy search loop using models in this way. “Potentially you come up with 10 candidate strategies that are much more likely to be sound from first principles,” he says. “Is it going to be better at this stage than the best humans? I don’t know. But is it going to be useful when applied at scale? Potentially.”

Crowd-sourced causal graphs could also help build conviction in a strategy, he says. “A six-month research cycle could become a three-month research cycle.”

The process is not foolproof, of course. Human experts are needed to review the output. But in a world where establishing causality becomes a starting point for quant models – and some quants believe that will happen – large language models may have a role to play.
