The bank quant who wants to stop gen AI hallucinating

Wells Fargo model risk chief thinks he has found a way to validate large language models

Ever since generative artificial intelligence reached mainstream consciousness, the idea of bot hallucinations has sparked a mix of concern and amusement. As long as the practical uses for generative AI were limited, the concern was mostly theoretical. However, fears are mounting as robo-generated falsehoods have led to a slew of lawsuits for offences ranging from defamation to negligence. 

In one instance, Air Canada was ordered last month to compensate a customer after its AI chatbot offered inaccurate information about the airline's bereavement fare policy.
