Earlier this year, generative artificial intelligence burst into the popular consciousness as the public gained access to new chatbots, writing tools and image generators, such as ChatGPT, built on sophisticated technology that most people had never encountered before.

The wealth management industry immediately began to ask about the impacts of this emerging technology. When would it come into their businesses? And how would it change the work of advisors and their support staff? Some wealth technology companies moved quickly to implement generative AI applications in their platforms.

“People are hungry for innovation and better ways of doing things, and for productivity,” says Kelly Waltrich, founder and CEO of Intention.ly, a marketing engine for financial, wealth and technology firms. “I’m someone who is fully embracing it and trying to figure out all the ways it is going to make me smarter.”

Yet as the hype around generative AI finally settles, technologists are asking deeper questions about its potential uses, its benefits and its shortcomings—and in many cases they’re finding that more caution and care is needed in choosing where and how to apply the new technology.

While Waltrich doesn’t believe the financial services industry needs to slow its embrace of AI applications, she does think it needs to better understand the technology before those applications are put to use.

“We need to be transparent about how things are working behind the scenes, and we need to be aware of what advisors are implementing,” she says. “Firms have a number of people available to work behind the scenes to do due diligence on [technology] vendors. ... They need to know what’s happening on the other side of their screens.”

What Is It?
Many financial advisors have been using some form of artificial intelligence for years in their technology stacks, perhaps without even knowing it. Many client relationship management and planning platforms have functions that allow advisors to choose their next best actions or give clients behavioral nudges. These functions are usually built on some form of intelligent data gathering and sorting capability. From the data, the software intuits what the best moves are for an advisor or client. Over time, the software gets better at surfacing the information its user is seeking, just as a person learns to do a job better with repetition.
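To make that idea concrete, the sketch below shows the kind of “next best action” scoring such a platform might run behind the scenes: each candidate action is scored against simple client-data signals, and the weights drift as advisors accept or reject suggestions. The action names, signals and numbers are illustrative assumptions, not any vendor’s actual implementation.

```python
# Hypothetical "next best action" ranker. Actions, signals and weights are
# illustrative assumptions, not any real platform's logic.
ACTIONS = {
    "schedule_annual_review": {"months_since_review": 0.6, "portfolio_drift": 0.2},
    "rebalance_portfolio":    {"portfolio_drift": 0.7, "excess_cash": 0.1},
    "suggest_529_plan":       {"has_young_children": 0.8},
}

def rank_actions(client_signals, weights=ACTIONS):
    """Score each candidate action against the client's data signals."""
    scores = {
        action: sum(w * client_signals.get(signal, 0.0)
                    for signal, w in signal_weights.items())
        for action, signal_weights in weights.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

def record_feedback(action, accepted, weights=ACTIONS, rate=0.05):
    """Crude 'learning': reinforce or dampen an action when advisors react to it."""
    factor = 1 + rate if accepted else 1 - rate
    for signal in weights[action]:
        weights[action][signal] *= factor

# A client who is overdue for a review and whose allocations have drifted.
signals = {"months_since_review": 0.9, "portfolio_drift": 0.5, "has_young_children": 0.0}
print(rank_actions(signals))  # highest-scoring suggestion first
record_feedback("schedule_annual_review", accepted=True)
```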

Generative AI, on the other hand, offers something different: It can understand the natural language people write and speak with, and then it can generate responses in similar language (or with images, audio or video) that make sense in the context of the requests being made. Like previous iterations of AI, it also learns as it goes.
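In programming terms, tools like ChatGPT expose that capability through an API that takes a plain-language prompt and returns generated text. The snippet below is a minimal sketch assuming the OpenAI Python client; the model name, system message and prompt are placeholder assumptions for illustration, not anything the firms quoted here actually use.

```python
# Minimal sketch of calling a generative model, assuming the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a plain-English assistant for a financial advisor."},
        {"role": "user", "content": "Explain dollar-cost averaging in two sentences."},
    ],
)

print(response.choices[0].message.content)
```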

This technology isn’t really new. The first responsive chatbot, ELIZA, was created by MIT researchers more than 50 years ago. And it’s been roughly a decade since a chatbot was first said to have passed mathematician and computer scientist Alan Turing’s test of how convincingly “human” an interactive program can be: That bot persuaded more than 30% of its judges that they were speaking not with a program, but with a real live person.

“Today there are many more that feel truly human to the user, with all the pros and cons that entails,” says Philipp Hecker, CEO and co-founder of Bento Engine, an advice engagement platform for financial advisors. “The pace of change and progress is amazing in terms of the underlying technology.”

Hecker says the only things that really hold AI applications back from sweeping across the wealth management space are humans’ willingness to engage with the technology and the regulatory framework that advisors operate within.

Are We Ready?
More recently, technology companies have experimented with turning AI chatbots loose on social networks like Twitter (now X). In one infamous case, Microsoft deployed a chatbot named Tay on Twitter in 2016. Users quickly trained Tay to make outrageous, nonsensical and inflammatory statements, and Microsoft shut it down within a day.

After ChatGPT’s upgrade brought generative AI to a broader audience this year, stories proliferated about students and technical writers who tried to use the bot to produce deeply cited academic research, only to find that the software convincingly fabricated the sources it cited and, in the end, produced useless copy.

Even more recently, Microsoft incorporated the technology behind ChatGPT into its Bing search engine so users could ask the software to generate text and images. The company found that, despite the guardrails it had put in place, users were able to “teach” the AI to produce outrageous, racist and defamatory statements and images. One infamous image the bot produced showed Mickey Mouse brandishing a handgun as he piloted an airplane into the Twin Towers of New York’s World Trade Center. The incidents led Microsoft to “lobotomize” Bing’s AI, adding still more guardrails to prevent misuse.

Today, the world is contending with AI’s ability to create “deepfakes,” believable video and audio replicas of people’s faces, bodies and voices generated entirely by computer.

Given AI’s tendency to make mistakes, wealth managers using it could end up violating compliance rules when working with clients, according to Lincoln Ross, CEO of CircleBlack, a cloud-based software platform for advisors. Eventually, he says, the technology might help advisors’ support teams by collecting data, but it isn’t ready yet.

The concern isn’t just that the financial industry may misapply artificial intelligence, but that the industry itself—as well as the capital markets—could be manipulated by it.

Not Really
“I would say that, for better or for worse, wealth management has been more conservative when it comes to technology, and that we need to continue to be a bit more conservative,” says Molly Weiss, chief product officer at Envestnet. “At the end of the day, trust is an essential part of delivering financial advice, and I think that as an industry you can argue that we haven’t been as careful as we should be.”

For that reason, she says financial advisors are likely to use AI right now mainly to make their operations more efficient, rather than using it in any way that “touches” an actual investor. “Trust in the relationship and trust in financial advice is too precious to risk,” she says.

Many technologists were surprised to see any interest in AI at all from wealth managers, who are often hesitant to adopt new technology, largely because of regulators. With federal and state watchdogs looking over their shoulders, advisors may take a long time to embrace newer artificial intelligence for building client-facing tools. While “weak” forms of AI, like robo-advisor algorithms, will continue to be used in client-facing settings, Hecker says the industry is only in the “second or third inning” of finding ways to use generative AI that comply with regulations.

Where It Can Work Now
Still, that doesn’t mean wealth management has to sit on the sidelines while other consumer-facing industries benefit from artificial intelligence. There are already wealth technology firms finding ways to help advisors implement technology behind the scenes.

“AI is best used as a way to augment non-client-facing aspects of wealth management,” says Ritik Malhotra, CEO at Savvy Wealth, an RIA firm with a digital platform. “Two quick examples would be helping the advisor brainstorm how to take further actions and helping to automate some of the functions of the back office. Still, none of these applications yet include executing something entirely autonomously.”

Intention.ly’s Advisor Brand Builder platform uses AI to help build out an advisor’s branded marketing materials, including social media, logos, business cards and web pages.

Waltrich says about 75% to 80% of the platform’s content production is handled by AI, with professional editors and writers then stepping in to refine the work and make it ready to publish.

Hecker believes AI can best be used to help advisors and clients prioritize where to spend their time, both when it comes to money moves like saving, spending and investing, and when it comes to practice management. AI may also help increase the loading ratios—the number of clients per advisor—at some firms.

“We as an industry struggle at the supply side of human advice,” Hecker says. “We all can and must serve more clients; the only way to do that is by using more, better and smarter technology. That’s the force multiplier and leverage point.”

Never Fear
Some advisors look at this technology the way they once looked at robo-advisors, questioning whether a more intuitive and behaviorally responsive AI could displace them. But technologists, in general, believe that will never be the case.

“There is no world where AI can replace a human financial advisor,” says Malhotra. What he sees instead is a future where AI could help in interactions that are purely transactional, where human interaction isn’t necessary.

While generative AI, like the robo-advisors before it, may help advisors refine their value proposition (perhaps shifting their focus toward managing human behavior and relationships rather than managing actual money), the technology is still incapable of filling an advisor’s shoes, Hecker says.

“Net-net, I view it as a much-welcome and much-needed force-multiplier that makes us human advisors even more human, even more often,” Hecker says. “The general trend will continue of pushing humans up higher in the value chain. The next phase of evolution could entail the human advisor focusing even more on behavioral dynamics and the convergence of health, wealth and family.”