Ashby Monk, Executive & Research Director at Stanford Long-term Investing

ChatGPT in Knowledge Management

Ashby Monk Assesses Application

Is ChatGPT the ultimate tool for knowledge management in investment organisations? We asked academic Ashby Monk about the relevance of these AI systems.

When OpenAI released the fourth iteration of ChatGPT in March 2023, it came with several use cases that illustrated real-life applications of the technology.

One of those use cases was Morgan Stanley Wealth Management.

The company harbours a wealth of information about investment strategies, market research and analyst insights, but it can be cumbersome for its financial advisers to access the right data, as most of it is buried in PDF form deep within its systems.

So in 2022, Morgan Stanley started to explore whether ChatGPT could help in finding the right answers and – as Jeff McMillan, Head of Analytics, Data & Innovation at Morgan Stanley, stated – “unlock the cumulative knowledge of Morgan Stanley Wealth Management”.

Essentially, what Morgan Stanley is trying to do is put in place a systematic form of knowledge management, an issue Ashby Monk, Executive & Research Director at Stanford Long-term Investing, has raised numerous times as being essential to the proper functioning of investment organisations.

Speaking to [i3] Insights from his home in Los Gatos, California, Monk is excited about the marriage between artificial intelligence and knowledge management.

When asked specifically whether Morgan Stanley’s application of ChatGPT is what he had envisioned in his 2015 paper on knowledge management – co-written with Eduard van Gelderen, Chief Investment Officer at Canadian pension fund PSP Investments – he says we are almost there.

“We’re super close now, yes,” he says.

He gives an example of how ChatGPT can be used to find experts within an organisation quickly, based on certain search terms.

“If I’m running a massive organisation and I want to say: ‘Who is the best person to talk to about autonomous vehicles?’, then ideally that knowledge platform is going to unearth the person who sent the most emails with [the term] autonomous vehicles written in it, the person whose documents most often say autonomous vehicles, the person whose calendar has the most meetings with autonomous vehicles somewhere in the headings,” he says.

“It would track all of that metadata across the organisation and reveal that person.”
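
In rough code terms, the expert lookup Monk describes is a matter of counting term hits across metadata sources and ranking the people behind them. Below is a minimal sketch; the in-memory lists of (person, text) pairs are stand-ins for real mail, document and calendar indexes, not any particular product's API.

```python
from collections import Counter

def find_experts(term, emails, documents, meetings):
    """Rank people by how often a term appears in metadata they produced."""
    scores = Counter()
    term = term.lower()
    # Each source is a list of (person, text) pairs drawn from the
    # organisation's systems; here, toy in-memory stand-ins.
    for source in (emails, documents, meetings):
        for person, text in source:
            scores[person] += text.lower().count(term)
    return scores.most_common()

emails = [("Alice", "Notes on autonomous vehicles and lidar"),
          ("Bob", "Quarterly budget review")]
documents = [("Alice", "Autonomous vehicles: market sizing")]
meetings = [("Carol", "Autonomous vehicles working group")]

print(find_experts("autonomous vehicles", emails, documents, meetings))
# [('Alice', 2), ('Carol', 1), ('Bob', 0)]
```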

But when it comes to specific investment questions, investors still need to do the hard work of building a trove of clean data to which they can apply these new tools.

“While this type of knowledge extraction is valuable to extract qualitative insights, the big problem facing most investors is still simply to collect quantitative data and put it in context for decision-makers,” Monk says.

“So most of the tech spend in the financial services industry today is about getting clean quantitative portfolio data, what I refer to as an investor’s ‘portfolio positioning systems’. What do I own? What geographies? What risks? Which funds?

“Most investors, especially as they get heavy into alternatives, don’t have great data on what they own. ChatGPT and other GPTs might be useful at extracting some of this info from PDFs or emails, but if the data doesn’t exist inside the organisation, then there’s not much any generative AI tool can do.

“So that one basis point of AUM that all these big super funds and pension funds around the world seem to be spending on their technology is still going to go into collecting data, getting it clean and putting it in context.

“Then information becomes reliable and it’s on that information that a thing like ChatGPT would be useful, because then you can trust that the information is right.”
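
In data terms, the ‘portfolio positioning system’ Monk describes boils down to a clean holdings table that can answer those questions directly. A minimal sketch, with hypothetical, simplified holdings records standing in for data an investor would aggregate from custodians, fund administrators and documents:

```python
from collections import defaultdict

# Hypothetical, simplified holdings records for illustration only.
holdings = [
    {"asset": "Fund A", "type": "private equity", "geography": "US", "value": 120.0},
    {"asset": "Fund B", "type": "infrastructure", "geography": "EU", "value": 80.0},
    {"asset": "Gov bond", "type": "fixed income", "geography": "US", "value": 200.0},
]

def exposure_by(key):
    """Answer 'what do I own?' along one dimension (geography, type, ...)."""
    totals = defaultdict(float)
    for holding in holdings:
        totals[holding[key]] += holding["value"]
    return dict(totals)

print(exposure_by("geography"))  # {'US': 320.0, 'EU': 80.0}
print(exposure_by("type"))
```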

The Problem with ChatGPT

The problem around the accuracy of the information a program like ChatGPT provides is where Monk becomes less enthusiastic about the current state of artificial intelligence.

He argues that it still produces too many errors to be useful in a professional setting, while the program’s eloquence makes it all the more likely that users will fail to recognise false answers.

In its current iteration, ChatGPT is still more of a toy, he says.

“We should not trust it yet. It is a little bit dangerous because it is so elegant and confident. The lies it is telling you feel believable,” he says.

“For example, I asked it to write me a biography and it said I got a doctorate from Darden [School of Business] at the University of Virginia … I’ve never been to this university.

“It also said I wrote a book with a title that was something like Innovations and Fund Management – The Pathway to New Investment Models.

“It sounds like something I could have written, but I didn’t. So that elegance and confidence gives you a false sense that you’re really being given correct information.”

For these programs to become useful, it will be critical to set strict governance rules around the information they can access and base their answers on, and then to program these rules into the system.

Monk believes it shouldn’t be too hard to ensure the answers the system throws up are verifiable facts.

“You could tell one of these large language models that you want to draw answers only from this data set and that it is not allowed to make stuff up,” he says.

“Then I think it meets what our vision of knowledge management was, but I don’t think we get to skip ahead to large language models without clean data being put into the correct contexts. Until then, it’s a very fun toy.”
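
As a sketch of what Monk’s governance rule could look like in practice, the constraint can be approximated at the prompt level: the model sees only vetted excerpts and is told to refuse when they do not contain the answer. The `complete` callable below is a hypothetical stand-in for whatever chat-completion client an organisation uses, and prompt instructions alone do not guarantee grounding; in production this would sit on top of retrieval over a curated, clean data set, which is exactly Monk’s point.

```python
def grounded_answer(question, snippets, complete):
    """Ask a language model to answer only from vetted excerpts.

    `complete` is a hypothetical callable (an OpenAI client, a local
    model, ...) that takes a prompt string and returns the model's text.
    """
    context = "\n\n".join(f"[{i}] {s}" for i, s in enumerate(snippets, 1))
    prompt = (
        "Answer using ONLY the numbered excerpts below and cite the "
        "numbers you relied on. If the excerpts do not contain the "
        "answer, reply exactly: 'Not found in the approved data set.'\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return complete(prompt)

# Usage with any client, e.g.:
# grounded_answer("What is our AV exposure?", vetted_snippets, my_llm_call)
```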

Monk on New Forms of NLP

ChatGPT has attracted the most attention out of a wave of new and highly sophisticated natural language processing tools, but Monk says there are many more systems in the making, some with very specific applications.

He believes that as these systems get more refined, they will ultimately change the way investors analyse data and implement their investments.

For example, he points out that around 94 per cent of the information produced by publicly listed companies is text, while only 6 per cent is numerical. Yet, at the moment, much of this text-based information simply gets ignored.

He is working with a number of start-ups in this space and one of them, called Deception and Truth Analysis (D.A.T.A), is looking for signs of deception in text.

“They have a suite of algorithms and around 30 different indicators where you can assess textual data for its deceptiveness. Apparently, there are markers in text when somebody’s trying to deceive you,” he says.

“So you’re going to start to see things like a deceptiveness score. Is this management team trying to deceive us?”
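
D.A.T.A.’s actual indicators are proprietary, but the general shape of such a score can be illustrated with a toy marker counter. The marker lists below are assumptions for illustration only, not the company’s real indicators:

```python
import re

# Illustrative markers only; the roughly 30 indicators in D.A.T.A.'s
# suite are proprietary and not reproduced here.
HEDGES = ["approximately", "believe", "generally", "substantially", "could"]
DISTANCING = ["the company", "it is expected", "management considers"]

def deceptiveness_score(text):
    """Toy score: hedging/distancing phrase hits per 100 words."""
    words = len(re.findall(r"\w+", text)) or 1
    hits = sum(text.lower().count(marker) for marker in HEDGES + DISTANCING)
    return 100.0 * hits / words

# Rank report sections from most to least flagged (placeholder text).
sections = {
    "interest rate risk": "The Company believes its exposure is generally hedged.",
    "liquidity": "Cash and equivalents are set out in Note 5.",
}
for name in sorted(sections, key=lambda k: deceptiveness_score(sections[k]),
                   reverse=True):
    print(name, round(deceptiveness_score(sections[name]), 1))
```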

He says the model has thrown up some interesting results. For example, the company ran it over the annual report of the collapsed US lender Silicon Valley Bank to see if there were any early indications of its demise.

“You can look at Silicon Valley Bank’s annual report and you will see that the most deceptive part of the annual report was their discussion of interest rate risk,” Monk says.

“So these language-based models are really going to unlock new insights for investors.”

__________

[i3] Insights is the official educational bulletin of the Investment Innovation Institute [i3]. It covers major trends and innovations in institutional investing, providing independent and thought-provoking content about pension funds, insurance companies and sovereign wealth funds across the globe.