Although you may have known someone for decades and can easily recognize their voice, don’t be so sure you know who is on the other end of the line. Why? With the help of AI software, cybercriminals can now clone voices so accurately that neither family members nor voice-recognition software can tell that it is not the actual person speaking. In fact, cloned voices have even been used to stage fake kidnappings.

This recent nefarious innovation is not just creepy; it also creates a variety of financial and regulatory risks for wealth managers.

More specifically, many wealth management clients are easy pickings for cybercriminals. As detailed in a white paper that we published last month, most people use unsophisticated and/or recycled passwords that are easy to hack. Few enable the necessary privacy and security settings on their online accounts, apps, devices, browsers and search engines. Even fewer use the technology needed to shield their online communications.

Clients’ reckless personal online behavior is problematic for wealth managers because it gives criminals avenues by which to breach a firm’s cyber defenses. For example, cybercriminals regularly hack into clients’ personal email accounts. Some have used them to send the wealth manager a request to transfer funds to a third party.

To protect against this threat, a widely used industry protocol is for the wealth manager to call the client and confirm the transaction. Unfortunately, cell phones can be easily “spoofed” or hijacked. Technology that enables a criminal to copy a phone’s SIM card just by walking by its owner has been around for several years.

At the same time, cybercriminals are now going into unprotected social media accounts, taking audio from posted videos and using AI software to create extremely accurate voice clones. Consequently, when the wealth manager calls to confirm a transaction, the call can be intercepted on the copied cell phone and the transaction confirmed using the cloned voice.

Certainly, some firms will also email an encrypted code to the client’s personal email account and ask them to read back the number. However, if that account has already been compromised, the cybercriminals can simply open the email and provide the code.

For a wealth manager, an event like this is the end of the world. It has wired a large amount of the client’s money out of their account without any legitimate authorization, and that money is gone.

Equally problematic, the firm is now at significant risk of a regulatory enforcement action. Under rules proposed last year, wealth managers would be required to have written policies and procedures that “are reasonably designed to address cybersecurity risks.” By any measure, such an event would be viewed as a “process failure” that the wealth manager would be obligated to self-report to the SEC.

However, the risks to wealth managers from AI-based voice cloning go far beyond fraudulent client account transactions. For example, cybercriminals can also clone the voices of firm employees by calling after hours and harvesting the voice mail greetings of individual advisors, who are easily identifiable from the wealth manager’s website.

These cloned voices can allow cybercriminals to pose as members of the firm in calls with other firm members or with clients. They can be used to call the office to ask someone to email confidential information to them or to direct a transaction from a client’s account.

They also can be used to exploit stolen devices. For example, the FBI has reported that criminals are now stealing both devices and their access passcodes at bars and restaurants. Unless the owner is careful, a criminal can memorize the passcode as it is entered and later distract the owner and steal the device.

Until the device is shut off, criminals can use it to access large amounts of company information. Armed with client names and phone numbers and the cloned voices of advisors, they can call clients and ask for personal information that can be used to steal their identities.

All of this points to two things. First, wealth managers must have a better way for advisors and clients to confirm with whom they are communicating. One inexpensive option would be to provide clients with a private email account: a paid service with its own two-factor authentication tied to the device being used and no links to any other online accounts. It should also be anonymous, using neither the client’s nor the firm’s name nor any other identifying information, making it much harder for criminals to figure out who it belongs to, much less hack it. A confirmation code could be sent to this account to verify identities each time an advisor and client speak.
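The confirmation-code step described above can be sketched in a few lines. This is a minimal illustration, not a production system; the function names, the six-digit format and the five-minute expiry are all assumptions, and a real deployment would also need secure delivery to the private email account, logging and rate limiting.

```python
import secrets
import time

# Hypothetical sketch of a one-time confirmation code for advisor-client calls.
CODE_TTL_SECONDS = 300  # assumed expiry: five minutes

def issue_code(pending: dict, session_id: str) -> str:
    """Generate a short one-time code and record it against the call session."""
    code = f"{secrets.randbelow(10**6):06d}"  # cryptographically random 6-digit code
    pending[session_id] = (code, time.time() + CODE_TTL_SECONDS)
    return code

def verify_code(pending: dict, session_id: str, spoken_code: str) -> bool:
    """Check the code read back on the call; single use and time-limited."""
    entry = pending.pop(session_id, None)  # pop so each code works only once
    if entry is None:
        return False
    code, expires_at = entry
    if time.time() > expires_at:
        return False
    # Constant-time comparison avoids leaking digits via response timing.
    return secrets.compare_digest(code, spoken_code)

# Usage: the firm emails the issued code to the client's private account,
# then the client reads it back during the call.
pending: dict = {}
code = issue_code(pending, "advisor-call-0001")
print(verify_code(pending, "advisor-call-0001", code))  # accepted once
print(verify_code(pending, "advisor-call-0001", code))  # reuse rejected
```

Because the code travels over the private email channel while the voice travels over the phone line, an attacker would need to compromise both channels at once, which is the point of the layered design.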

To be sure, like everything else online, private emails at some point can and will be breached. However, using one significantly complicates the task for cybercriminals, providing the firm with another layer of cyber protection.

Far more importantly, wealth managers must accept that they can no longer ignore how their clients behave online. The regulators have already decided that it is the wealth manager’s, not the client’s, responsibility to ensure that bad cyber events do not happen. Moreover, any organization’s cyber protections are only as strong as its weakest link, and so long as clients continue to operate online with poor cyber hygiene, communications between advisors and clients will remain one of the weakest links in every firm’s defenses.

Today it may be exploited using cloned voices to steal money. Tomorrow it could involve a client email with malware that allows criminals to hold the firm’s systems for ransom. And while this is going on, the wealth manager is going to have to justify to regulators why it should not be subject to an enforcement action.  

Mark Hurley is CEO of Digital Privacy and Protection (DPP). Carmine Cicalese, COL, U.S. Army (Ret.), is Senior Advisor and Partner at DPP.