Wall Street Banks rush to test Anthropic’s Mythos AI after urgent US warning

New Delhi: Major Wall Street banks are quietly testing a powerful new artificial intelligence model developed by Anthropic as US regulators push financial institutions to strengthen their cyber defences. Bloomberg reports that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley have already begun testing the AI system, called Mythos, internally.

The move follows a top-level meeting in Washington convened by US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell. Officials urged banks not to dismiss the technology but to use it to uncover potential weaknesses in their own systems. Although no specific threat has been announced, the message was clear: AI-driven cyber threats are expanding rapidly and must be addressed now.

Government push signals urgency on cybersecurity

The authorities stressed that banks should actively deploy AI-based tools such as Mythos to test and strengthen their security. The gathering, held on April 7, included top executives who were already in Washington for discussions organised by the Financial Services Forum.

According to sources cited by Bloomberg, regulators are increasingly worried about a new generation of cyberattacks powered by sophisticated AI tools. Systemically important banks were central to the discussions, since a failure at any of them could ripple through the global financial system. The push is part of a wider effort by US authorities to stay ahead of emerging digital threats.

What makes Mythos different?

Anthropic's Mythos model stands out for its ability to both identify and exploit vulnerabilities during testing. The company says the system has automatically found vulnerabilities across multiple web browsers and chained them together into sophisticated attack patterns.

In one example provided by Anthropic's security team, the AI replicated how a rogue website could access sensitive data on another site, such as a banking site. This kind of multi-step exploit, sometimes referred to as a vulnerability chain, has traditionally been challenging even for experienced human attackers.

The approach is akin to that used in the Stuxnet cyberattack, in which a series of system vulnerabilities were chained together to compromise highly secure infrastructure.

Limited access under ‘Project Glasswing’

Mythos has not been widely released. Instead, Anthropic is granting access to a limited number of companies under an initiative known as ‘Project Glasswing’. JPMorgan Chase, Amazon, and Apple are reportedly among the early participants.

The aim is to safeguard vulnerable systems before AI-based attack tools become widespread. According to statements quoted by Bloomberg, US officials have also advocated delaying a full public release until the risks are better understood.

The push aligns with regulators' growing focus on operational risks such as cyberattacks. Banks are already required to hold capital buffers against such risks, but many institutions say these are harder to quantify than traditional financial risks.

Meanwhile, Anthropic's relationship with the US government is not without friction. The Pentagon recently designated the company a potential supply-chain risk, a designation Anthropic is challenging in court.

Nevertheless, officials such as Kevin Hassett, Director of the National Economic Council, have emphasised the urgency. In a recent interview, he said that securing financial systems is a priority because AI capabilities are developing quickly.