Could artificial intelligence make your robo-advisor racist?

While the fiduciary debate around robo-advisors has typically revolved around their ability to provide advice and whether they should be used to create ‘nudges’ to influence the behavior of investors, there is reason to worry that an algorithm fed with biased data will start making biased recommendations, said Bradley Berman, an attorney in the corporate and securities practice at global law firm Mayer Brown.

“To use machine learning and AI, they’re feeding these algorithms with historical data,” said Berman. “The reasoning is that historical data on people could reflect discrimination that has occurred in the past, so the algorithm could become biased even if it was designed to be completely neutral.”

Berman cited an SEC Investor Advisory Committee meeting on “Ethical AI and RoboAdvisor Fiduciary Responsibilities” earlier this month that veered into the possibility that biased historical data can influence AI and machine-learning platforms.

Not A New Problem
To some extent, algorithms are built to discriminate, said Berman, especially by age. It makes sense that an asset allocation proposed to a 65-year-old would look different from one recommended for a 25-year-old.

But machine-learning models might also use information like location and ZIP code to make decisions.

“These might reflect housing patterns that have been affected by discrimination,” said Berman. “Redlining, urban renewal, things like that pushed certain types of people into certain neighborhoods and we’re still dealing with the aftereffects of that. So there’s a concern that backward-looking historical data could include the legacy of these biases.”

While algorithms themselves are neutral, the quality of their output depends on the quality of the data fed into them, said Berman, a dynamic often summed up as GIGO: garbage in, garbage out. The committee is concerned that machine learning could make algorithms more biased over time as they process more biased data.

Some early experiments with social AI illustrate the potential problem. On March 23, 2016, Microsoft launched Tay, a self-learning chatbot, on Twitter. The chatbot was designed to emulate the ebullient chat style of a teenage girl while learning more about language and human interaction on the platform over time. Within a few hours, Twitter users had taught Tay to tweet highly offensive statements like “Bush did 9/11 and Hitler would have done a better job.”

After 16 hours and over 95,000 tweets, many of them offensive, Microsoft decided to shut Tay down.

“The committee was most focused on whether there are unintentional, bad results based on some biases that might be built into the data,” said Berman. “Morningstar’s panelist discussed how, over the years, they’ve gotten better and better at refining the algorithm, using a committee that reviews it for fairness. So these companies recognize that there is a potential danger here.”

No Regulation – Yet
The meeting was preceded by remarks from SEC Chair Gary Gensler, who, in addition to the bias issue, voiced concern about artificial intelligence being used to generate behavioral nudges and discussed the potential conflict of interest in robo-advisors operated by asset managers trying to optimize their own revenue.

"Today, platforms have an insatiable appetite for data," said Gensler. "The underlying data used in the analytic models could reflect historical biases, or may be proxies for protected characteristics, like race and gender."

Yet Berman said there was no discussion in the meeting itself of behavioral nudges or an actual fiduciary or best-interest duty, and little discussion of regulation.

Regulation is unlikely unless evidence emerges that a robo-advisor’s recommendations for one group of people are far worse than its recommendations for another group, and the algorithm turns out to be the cause, Berman said.

“I think robo-advice is here to stay and it’s only going to grow. The big question will be whether they come out with regulations,” said Berman. “Then we have to ask whether those regulations will harm or help.”