Concerns were widely reported on Friday around the development of ‘Mythos’, the latest model in the ‘Claude’ family of AI large language models developed by the AI company Anthropic. In testing, the model proved so effective at finding software bugs, system back doors and security holes that the company decided it was too dangerous to release. It has instead released an updated version of a previous ‘Opus’ model, believed to be less capable of these harmful activities. Anthropic has, however, made the Mythos model available to certain big tech companies and one bank under ‘Project Glasswing’, so that they can better understand how to establish protections.
The US Treasury Secretary, Scott Bessent, is known to have summoned systemically important US banks to Washington last week to discuss the new AI threat. It was also discussed among bankers and finance ministers attending the International Monetary Fund’s (IMF’s) spring meetings in Washington, D.C., with figures such as the Chief Executive of Barclays, the Governor of the Bank of England and the Canadian Finance Minister all making public statements on these concerns.
The ability to hack into computer systems and break them, or to steal money or information, is clearly a concern for all industries. As we found out in the financial crisis of 2008, banks are systemically important to everything that happens in the modern economy: without them, things fall apart. In particular, huge swathes of the world rely on international payment systems such as SWIFT, which of course makes these systems a huge target for terrorists, rogue states and cyber criminals. Whilst Anthropic may have done the right thing by not releasing the software at the present time, other developers of AI may not be so scrupulous.
On the other hand, at least some of this could be exaggerated hype, possibly generated by the companies themselves, who are keen to create publicity around their systems and make them sound powerful. Similar fears were raised in the past by OpenAI, which delayed the full rollout of ‘GPT-2’ in 2019. It should also be noted that AI can equally be used by cyber security experts to strengthen defences.
Could AI destroy the banking system? Clearly, as with technologies that have gone before it, it represents a new threat. There will always be an arms race between what the criminals can do and the wits of everyone trying to stop them. There has been more than one example of IT outages and system problems at a number of UK banks to date: AI adds a new and serious risk. What is particularly worrying is the possibility that problems at a small number of institutions lead to a widespread loss of confidence in the system – a ‘contagion’ that risks a full-scale financial crisis.
So treasury managers should be wary of this new risk. As always, investments should be suitably diversified so that any systems or payments problems at one counterparty do not have an outsized impact on the overall portfolio. Managers should review their business continuity plans and their arrangements for the event that they, or their banking providers, experience a cyber attack. There has never been a better time to make sure you are keeping your own house in order: think changing passwords regularly, using strong passwords, updating software promptly, enabling two-factor authentication, providing appropriate training for staff, and so on. Institutions with weaker systems, and therefore more vulnerabilities to exploit, will be at higher risk.
The development of highly capable models like Mythos highlights how quickly the technological landscape is shifting: regulators, governments, financial institutions and all of us will need to keep up!
21/04/2026