When it comes to finance, says Nobel laureate Robert C. Merton, there are two essential components of trust: People need to believe that their counterparty is acting honorably and that it is competent. He likes to say that he trusts his children to act in his best interest, but not to operate on his knee. And he would trust a doctor to operate on his knee, but not if the doctor had a side hustle selling body parts.
The problem is that, in an economy as big as America’s, it’s impossible for any one person to know everyone they transact with. So they have to rely on other methods of inferring trust: transparency, verification and regulation.
Regulation can engender trust by acting as a means of verification and by requiring transparency. This is largely done by government institutions, but with the right incentives, self-regulation can also work. Financial markets offer a case study of how both of these approaches can restore trust when it collapses, or enable business to keep functioning even when trust is low.
In financial markets, billions of transactions occur each day between people all over the world, for obligations far into the future. Trust is critical for all participants, even someone whose level of engagement is just investing in an index fund. After 1929, regulation helped the markets win back trust. Even if the chief financial officer of a public company is a liar, investors can trust the Securities and Exchange Commission to ensure that what he presents is accurate. Or if their adviser is a fiduciary, regulation requires that she act in their best interest.
But regulation is not a cure-all. In fact, if poorly constructed, it can lead to worse outcomes. Take that fiduciary standard, which explicitly commands advisers to act in their clients’ best interest. It seems this should be enough to engender trust—but what does “best interest” mean? Under current regulations, it is poorly defined and can result in advisers simply minimizing their regulatory risk and doing what everyone else in the financial industry does.
For example, some investors need long-duration bonds because they plan to buy an annuity in the future. This means their portfolios will experience more volatility, as long-duration bonds tend to fluctuate more in price. Advisers are often judged by the year-to-year volatility of their accounts, rather than how well they hedge a long-term goal (which is impossible to measure years in advance). When “best interest” is poorly defined, however, there is an incentive to follow the crowd—after all, how can an adviser get in trouble for just doing what all other advisers do?—and ignore a client’s long-term goals: A stable income is more important than a stable portfolio.
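To put rough numbers on that trade-off, here is a minimal sketch; the durations and rate moves are hypothetical, chosen only to illustrate the arithmetic. A long-duration bond swings more in price, but its swings roughly track the changing cost of the future annuity, so the client's income goal stays hedged even when the portfolio looks volatile.

```python
# A minimal sketch with hypothetical numbers: why a long-duration bond can be the
# better hedge for a future annuity purchase even though its price is more volatile.

def pct_price_change(duration, rate_shift):
    # Standard first-order duration approximation: price change is roughly
    # -duration * rate shift.
    return -duration * rate_shift

ANNUITY_DURATION = 20   # the future income stream behaves like a ~20-year bond (assumed)
LONG_BOND = 18          # duration of a long-term bond portfolio (assumed)
SHORT_BOND = 2          # duration of a short-term bond portfolio (assumed)

for shift in (-0.01, 0.01):  # rates fall or rise by one percentage point
    print(f"Rates move {shift:+.0%}:")
    print(f"  cost of the future annuity: {pct_price_change(ANNUITY_DURATION, shift):+.0%}")
    print(f"  long-duration bonds:        {pct_price_change(LONG_BOND, shift):+.0%}  (volatile, but tracks the liability)")
    print(f"  short-duration bonds:       {pct_price_change(SHORT_BOND, shift):+.0%}  (stable price, but a poor hedge)")
```

When rates fall, the annuity gets more expensive; only the long-duration holding rises along with it. That is the sense in which a stable income can matter more than a stable portfolio.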
Better regulation is a matter of aligning incentives so investors can trust counterparties. In the case of financial advisers, many are paid a fixed fee instead of a commission—and thus have a professional incentive, rather than merely an imposed requirement, to act in their client’s best interest. With fees, there is no pressure to follow what other advisers do. An adviser’s only goal is to act in their client’s actual best interest, even if it means a portfolio that looks different from everyone else’s. The fee structure creates trust and does not require a fuzzy and often arbitrary interpretation from a regulator.
Finance has also improved trust by striving for more transparency in its products. Index investing is more transparent than investing in active funds, because investors know which assets will be bought and in what quantity. The rise of index investing since the financial crisis can be explained in part by the fact that index funds are more trusted.
And yet transparency is also not a panacea, because too much information or information that is too complex—and many worthwhile financial products are complex—can make things more opaque. Who has the time or expertise to read hundreds of pages of an investment prospectus? The theory seems to be that, by putting lots of information out there, counterparties cannot be accused of hiding it. But what they’re actually doing is burying it.
An alternative is verification, which can come from testing an investment to see if it works under certain market conditions or how it performs compared with a benchmark. That can be determined by someone who has the time and expertise to vet it, such as an adviser, or by an institution such as Morningstar.
But verification has limitations too. Sometimes it just isn’t possible; if an investor’s timeframe is 40 years, testing a model on five years of data is not going to help very much. And there can also be issues of trust—if an adviser is making a commission on the product in question, for example.
Even with these limitations in mind, the lessons from finance can help restore trust in institutions of all kinds. Transparency, verification and regulation can act as stand-ins for trust.
Rethinking the approach to regulation—whether of the environment, the workplace or the housing market—will require abandoning the maxim that more is better. Micromanagement of details can undermine trust.
In finance, for example, regulation that deems certain assets “safe,” and then requires institutions to own specified quantities of them, tends not to work. Because more safety usually means less profitability, banks end up gaming the system or finding opaque ways to avoid the rules, further reducing transparency and trust. Or the regulators get it wrong, defining some asset as safe—say, Greek bonds—that turns out not to be. That also erodes trust.
Regulation needs to be simple and consistent, and to promote competition and good behavior. The way to do this is with better incentives rather than top-down rules. Think of those fee-only financial advisers, compared to the fiduciary standard.
The harder challenge is improving transparency and verification in a world overrun with inaccurate or misleading data. It is tempting to regulate data—what can be collected and what the public can see, as Europe aims to do, or how it is presented, as the U.S. is trying to do. But this strategy undermines trust in both institutions and the data people do see.
The best path is to let data flow while encouraging institutions to find ways to help people navigate it. Think of the equivalent of an unweighted five-star review. There is a reason Amazon is more trusted than other big tech firms. The simplicity of its reviews, and the lack of any pretense to unbiased curation, create trust and enable people to navigate large amounts of data in ways they find helpful. This approach increases transparency and offers trustworthy verification.
True, some data is inaccurate or even malicious, just as some people leave fake bad reviews. But the law of large numbers (or the wisdom of crowds) holds that more data mean more accuracy. Rather than allowing institutions to edit out data they don’t like, the government and big tech should make and follow simple rules to present and process it.
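As a rough illustration of that point, here is a small simulation with made-up numbers: each review is noisy and a small share are fake one-star ratings, yet the average of many reviews still lands close to the product's true quality, while a handful of reviews can be badly skewed.

```python
# A rough simulation with made-up numbers: noisy and occasionally fake reviews
# still average out close to the true quality once there are enough of them.
import random

random.seed(0)
TRUE_QUALITY = 4.2   # assumed "true" star rating of the product
FAKE_SHARE = 0.05    # assume 5% of reviews are malicious one-star ratings

def one_review():
    if random.random() < FAKE_SHARE:
        return 1.0  # fake bad review
    # Honest but noisy review, clamped to the 1-5 star scale.
    return min(5.0, max(1.0, random.gauss(TRUE_QUALITY, 1.0)))

for n in (5, 50, 5000):
    average = sum(one_review() for _ in range(n)) / n
    print(f"{n:>5} reviews -> average rating {average:.2f} (true quality {TRUE_QUALITY})")
```

A single fake rating can drag down an average built on five reviews; with thousands, the average settles near the true quality (the small bias from the fakes stays small), which is the crowd-wisdom case for processing data openly rather than editing out what institutions don't like.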
And there is a final lesson from the financial industry, less obvious but more important: Trust goes both ways. One reason so many people engage in markets is that, despite the many interventions, regulators ultimately trust investors to make sense of the data and act in their own best interest. Compare that to trust in public health, which has cratered since the pandemic—largely because too many officials never trusted the public enough to be honest about what they knew and what they didn’t.
Rather than telling the public what to think, institutions should show the public how they arrived at their decisions—along with the data they used to get there. If more institutions put more trust in the public, and the government made regulations based on incentives rather than by decree, then maybe they could restore the trust they’ve lost.
Allison Schrager is a Bloomberg Opinion columnist covering economics. A senior fellow at the Manhattan Institute, she is author of An Economist Walks Into a Brothel: And Other Unexpected Places to Understand Risk.
This article was provided by Bloomberg News.