And Norway’s $1.4 trillion sovereign wealth fund has told companies and their boards to get serious about the “severe and uncharted” risks posed by AI.

When OpenAI’s ChatGPT was launched last November, it quickly became the fastest-growing internet application in history, reaching 13 million daily users by January, according to estimates from analysts at UBS Group AG. Against that backdrop, tech giants developing or backing similar technology have seen their share prices soar this year.

But the absence of regulations or any meaningful historical data on how AI assets might perform over time is cause for concern, according to Crystal Geng, an ESG analyst at BNP Paribas Asset Management in Hong Kong.

“We don’t have tools or methodology to quantify the risk,” she said. One way in which BNP tries to estimate the potential social fallout of AI is to ask portfolio companies how many job cuts may occur because of the emergence of technologies like ChatGPT. “I haven’t seen one company that can give me a useful number,” Geng said.

Jonas Kron, chief advocacy officer at Boston-based Trillium Asset Management, which helped push Apple and Meta’s Facebook to include privacy in their board charters, has been pressing tech companies to do a better job of explaining their AI work. Earlier this year, Trillium filed a shareholder resolution with Google parent Alphabet asking it to provide more details about its AI algorithms.

Kron said AI represents a governance risk for investors and noted that even insiders, including OpenAI’s Altman, have urged lawmakers to impose regulations.

The worry is that, left unfettered, AI can reinforce discrimination in areas such as health care. And aside from its potential to amplify racial and gender biases, there are concerns that the technology could enable the misuse of personal data.

Meanwhile, the number of AI incidents and controversies has increased by a factor of 26 since 2012, according to a database that tracks misuse of the technology.

Investors in Microsoft, Apple and Alphabet’s Google have filed resolutions demanding greater transparency over AI algorithms. The AFL-CIO Equity Index Fund, which oversees $12 billion in union pensions, has asked companies including Netflix Inc. and Walt Disney Co. to report on whether they have adopted guidelines to protect workers, customers and the public from AI harms.

Points of concern include discrimination or bias against employees, disinformation during political elections and mass layoffs resulting from automation, said Carin Zelenko, director of capital strategies at the AFL-CIO in Washington. She added that worries about AI among Hollywood actors and writers played a role in their high-profile strikes this year.