The potential value of AI in improving productivity looks promising, but focusing on making investment experts more productive misses the most exciting part of the revolution: the potential to augment investment expertise and re-engineer decision-making processes, Dragonfly’s Sue Brake writes
In institutional investing, the “widget” we manufacture is the investment decision. The implementation of that decision – like distributing a product – requires focus and scale, but the real value is driven by the quality of the decisions made long before the capital is deployed.
We have built an entire industry of tooling to provide quality assurance for the human decision. We sourced better raw materials in the form of timely market indicators and installed precise gauges to measure portfolio risks and performance. We also built presses – in the form of quantitative tools – to apply our investment beliefs consistently in selected instances of automation. Finally, we codified organisational jigs – in the form of structured processes, checklists, and committees – to guide and constrain the decision maker.
All of this has been designed to manage our wonderfully human, but often quality-destroying, proclivities. Among the most impactful of these is our susceptibility to bias, including confirmation bias (seeking data that confirms our views), herding (finding safety in the crowd), and authority bias (reluctance to challenge the boss). But they also include other shortcomings, such as our inability to deal parsimoniously with complexity, for which we have used simplification workarounds like investment beliefs, asset class labels and assumptions, and scenarios.
These proclivities are not academic; they are a material drag on returns. Global asset owners, on average, believe strong investment governance lowers risk and lifts returns, and peer groups estimate the return gap between moderate and strong investment governance at 30-50 basis points per annum over the long term. This is the price of our ‘wonderfully human’ nature left unchecked: for a $100 billion fund, that’s $300 million to $500 million per annum.
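As a quick check of that arithmetic (taking the peer-group estimate at face value and assuming a static $100 billion asset base):

\[
\$100\,\text{billion} \times 0.30\% = \$300\,\text{million}
\quad\text{and}\quad
\$100\,\text{billion} \times 0.50\% = \$500\,\text{million per annum.}
\]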
With the recent arrival of generative AI, we have a rare opportunity to reimagine how we ensure quality human decisions. While AI has its own proclivities, its strengths make the future of investing likely to be both agentic (relying on specific AI agents) and synergistic (relying on human + AI collaboration). Those who succeed will make the most of AI’s comparative strengths, of what is special about human intelligence, and of the magic the two create together.
Much of the current discussion on AI focuses on improving the productivity of our current investment processes – that is, our decision tooling and distribution. The potential value unlocked through improved productivity looks promising, but focusing on making investment experts more productive misses the most exciting part of the revolution. The true transformation lies in AI’s potential to augment our expertise and re-engineer our processes – to make us better decision-makers. Improving decision quality adds significantly more value than lowering costs alone.
Four Ways AI Can Augment Expertise
There are four ways AI can augment investment expertise.
Firstly, AI introduces a different, non-human form of rigour. Humans are social animals. In my experience, even the most brilliant investment committee can suffer from groupthink. We want to please the CIO or the Chair, which can influence what we think and, more often, what we say. An AI is not driven by a social agenda. It doesn’t care about its bonus or its place in the hierarchy. A well-designed AI can apply a rigorous framework relentlessly, act as a persistent devil’s advocate, and challenge assumptions without social fear (see Dragonfly CEO Anthea Roberts’ article: ‘What Happens When Dissent Becomes Digital’). This isn’t to say the AI is ‘neutral’ – it carries the biases of its training – but its biases are different from our social ones, and that difference is a powerful tool.
Secondly, AI offers a cognitive partnership that nurtures innovation. An expert’s nascent, “crazy” idea may die before they can convince a colleague it’s worth exploring. An LLM, as a cognitive partner, will “go there” with you instantly. This non-judgmental partnership accelerates cognitive exploration, allowing us to rapidly prototype and test new ideas.
Thirdly, AI facilitates expert collaboration and breaks down silos. Investing is a team sport, but our star athletes – quants, economists, ESG experts, asset class experts – often work in silos, speaking different professional languages. AI can act as a universal translator and neutral facilitator, bypassing office politics. A recent study by researchers from Harvard and elsewhere found that when R&D and commercial experts worked together, they typically stayed in their own lanes. But when collaborating with AI, they produced significantly more balanced and integrated proposals. The AI acted as a bridge, dissolving the functional silos.
This works, in part, because the AI isn’t seen as having human ill-intent. Research on interactions with conversational AI has shown it can moderate even deeply held conspiracy beliefs. Why? The AI provides engagement without perceived social judgment (referred to as cognitive empathy). This lowers the “social cost” of being wrong, allowing people to engage with ideas rather than becoming defensive. This is exactly the dynamic we need in our investment teams.
Fourthly, AI can manage cognitive complexity far beyond human capacity. Our human artisans simply cannot hold the world’s interconnected complexity – geopolitics, climate transition, technological change – in their heads. AI can. In an Australian government pilot, John Blackburn of the AI CoLab worked with Dragonfly’s AI platform to assess sovereign risk and resilience. By integrating reports from 250 experts, the human-AI team created a complex systems map that revealed critical connections human experts alone had missed. This isn’t replacing human thought; it’s providing a canvas large enough to map our complex reality.
The Barriers to Adoption
If the opportunity is this significant, why is adoption so slow? The answer lies in our core responsibility: risk management. This new AI teammate introduces new and not yet fully understood risks – algorithmic bias, hallucinations, data security, and the potential for ‘black box’ opacity. Further, the speed and eloquence of AI can lead to overreliance and misplaced confidence in its objectivity.
However, these risks are not alien. As Investment Committees and Boards, we already spend an enormous amount of time building governance frameworks to manage the risks of our human teammates. New AI risks may seem scary, but we already have the scaffolding to manage them. The task is to expand our governance to include well-designed and thoughtfully employed AI, and to continually stress-test both human and AI teammates for bias, hallucinations, and their specific failure modes. Adoption will also involve the thoughtful acceptance of residual risk.
The real barriers, then, may not be the risks themselves, but a failure of leadership and culture to appreciate the upside enough to face into the risks. There are four reasons why this might be:
- A lack of strategic imperative. The introduction of AI agents may be seen as an IT upgrade, not a fundamental shift in how value is created.
- A failure to embrace the unknown. People at all levels of an organisation often prefer the old, inefficient factory they know over the transformational, high-performance factory they don’t, creating blinkers against the upside potential.
- A lack of AI literacy. Senior leaders must understand artificial intelligence as well as they do human intelligence. Only then can they grasp the synergy and the risks.
- Not thinking in terms of workflows. Current (and foreseeable) AI technology must be built around defined processes or workflows. To see the upside of an AI teammate, then, we must first see, and value, the decision-making process itself. For many investment organisations, this is not intuitive.
AI is not coming for the expert investor’s job. It is coming for the investor who refuses to collaborate. The successful investor of the future is not the one with the best instincts; it’s the one with the best team, and the best-governed team. That is how the highest-quality decisions, and the next generation of returns, will be manufactured.
Sue Brake is the ex-CIO of the Future Fund and currently sits on the Board of NZ Super and the Investment Committee of Aware Super. She is also the Chair of the Board of Dragonfly Thinking, an Australian start-up building AI agents to assist humans in complex decision-making.
__________________
In the spirit of cognitive partnership, the author used an LLM and Dragonfly agents (specifically the Cognitive Bias and Devil’s Advocate agents) to help brainstorm, challenge assumptions, and refine the arguments in this article. The opinions expressed in this article are her own.
_________
[i3] Insights is the official educational bulletin of the Investment Innovation Institute [i3]. It covers major trends and innovations in institutional investing, providing independent and thought-provoking content about pension funds, insurance companies and sovereign wealth funds across the globe.

