Artificial intelligence is making it easier for fraudsters to carry out more sophisticated attacks on financial firms, the Treasury Department said in a report Wednesday. 

Recent advancements in AI mean criminals can more realistically mimic voice or video to impersonate customers at financial institutions and access accounts, the agency wrote. They also allow bad actors to craft increasingly sophisticated email phishing attacks with better formatting and fewer typos, according to Treasury. 

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector,” Under Secretary for Domestic Finance Nellie Liang said in a statement accompanying the report, which was mandated under a presidential executive order last year.

The agency is the latest to sound warnings about AI, which presents risks as well as opportunities. Key financial regulators, including the Federal Reserve, the Securities and Exchange Commission and the Consumer Financial Protection Bureau, have raised concerns about everything from discrimination to potential systemic risk. 

The Biden administration will work with financial firms to use emerging technologies while also “safeguarding against threats to operational resiliency and financial stability,” Liang said.

As part of the report, the Treasury Department conducted 42 interviews with individuals from the financial-services and information-technology sectors, data providers, and anti-fraud and anti-money-laundering firms. One concern was potential “regulatory fragmentation” as federal and state agencies set ground rules for AI. 

Treasury said it will work with the industry-led Financial Services Sector Coordinating Council, and the Financial and Banking Information Infrastructure Committee—tasked with improving collaboration among financial regulators—to ensure regulatory efforts are in sync.

Gaps Between Firms
The report noted that smaller financial firms have fewer IT resources and less in-house expertise than larger companies to develop AI systems, and often have to rely on third parties. They also have less internal data with which to train AI models to prevent fraud. 

To address the gap, the American Bankers Association is designing a pilot program to facilitate industry information-sharing on fraud and other illicit activities. The U.S. government may also be able to help by providing access to historical fraud reports to help train AI models, Treasury said. 

Treasury also laid out a number of other steps that the government and industry should consider, including developing a common language around AI and using standardized descriptions for certain vendor-provided AI systems to identify what data was used to train the model and where it came from. 

This article was provided by Bloomberg News.