Could artificial intelligence make your robo-advisor racist?

While the fiduciary debate around robo-advisors has typically revolved around their ability to provide advice and whether they should be used to create ‘nudges’ that influence investor behavior, there is reason to worry that an algorithm fed biased data will start making biased recommendations, said Bradley Berman, an attorney in the corporate and securities practice at global law firm Mayer Brown.

“To use machine learning and AI, they’re feeding these algorithms with historical data,” said Berman. “The reasoning is that historical data on people could reflect discrimination that has occurred in the past, so the algorithm could become biased even if it was designed to be completely neutral.”

Berman cited an SEC Investor Advisory Committee meeting on “Ethical AI and RoboAdvisor Fiduciary Responsibilities” earlier this month that veered into the possibility that biased historical data can influence AI and machine-learning platforms.

Not A New Problem
To some extent, algorithms are built to discriminate, said Berman, especially by age. It makes sense that an asset allocation proposed to a 65-year-old would look different from one recommended for a 25-year-old.
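
To make that point concrete, here is a minimal, hypothetical sketch of the kind of age-based rule such a platform might apply. The “110 minus age” heuristic and the function name are illustrative assumptions, not any robo-advisor’s actual formula.

```python
# Hypothetical illustration: a simple age-based allocation rule.
# The "110 minus age" heuristic is an assumption for this sketch,
# not the formula used by any particular robo-advisor.

def proposed_equity_allocation(age: int) -> float:
    """Return a suggested equity percentage based solely on age."""
    return max(0.0, min(100.0, 110.0 - age))

print(proposed_equity_allocation(25))  # 85.0 (heavier equity tilt)
print(proposed_equity_allocation(65))  # 45.0 (more conservative mix)
```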

But machine-learning models might also use information like location and ZIP code to make decisions.

“These might reflect housing patterns that have been affected by discrimination,” said Berman. “Redlining, urban renewal, things like that pushed certain types of people into certain neighborhoods and we’re still dealing with the aftereffects of that. So there’s a concern that backward-looking historical data could include the legacy of these biases.”

While algorithms themselves are neutral, the quality of their output depends on the quality of the data fed into them, said Berman, a concept often abbreviated to GIGO (garbage in, garbage out). The committee is concerned that machine learning could make algorithms more biased over time as they process more biased data.
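
As a rough sketch of that concern, consider the following hypothetical example (fabricated data, not any vendor’s model): a standard, “neutral” classifier trained on historical approval decisions that tracked ZIP code will reproduce that pattern in its own recommendations.

```python
# Hypothetical sketch of "garbage in, garbage out": the algorithm below is a
# standard, neutral classifier, but its training labels encode a historical
# pattern tied to ZIP code, so its predictions reproduce that pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated historical records: a ZIP-code group and an income score.
# zip_group 1 stands in for neighborhoods shaped by past redlining.
zip_group = rng.integers(0, 2, size=1000)
income = rng.normal(50, 10, size=1000)

# Biased historical labels: approvals historically depended on zip_group,
# not only on the applicant's financial profile.
approved = ((income > 45) & (zip_group == 0)).astype(int)

X = np.column_stack([zip_group, income])
model = LogisticRegression().fit(X, approved)

# Two identical financial profiles, different ZIP-code groups:
print(model.predict_proba([[0, 55], [1, 55]])[:, 1])
```

In this toy setup, the two profiles receive sharply different approval probabilities solely because of the ZIP-code proxy the model absorbed from its biased training labels, which is exactly the kind of backward-looking legacy Berman describes.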

Some early experiments with social AI illustrate the potential problem. On March 23, 2016, Microsoft launched Tay, a self-learning chatbot, on Twitter. The chatbot was designed to emulate the ebullient chat style of a teenage girl while learning more about language and human interaction on the platform over time. Within a few hours, Twitter users had taught Tay to tweet highly offensive statements like “Bush did 9/11 and Hitler would have done a better job.”

After 16 hours and over 95,000 tweets, many of them offensive, Microsoft decided to shut Tay down.
