In presentations to fellow professionals, we often make self-deprecating comments similar to this: “And then later we’ll tell you exactly when the Fed will raise interest rates. We’ll just grab our crystal ball and take a look.” That simple joke, variations of which are repeated over and over among financial professionals, draws on a fairly deep-seated anxiety about the predictability of financial markets. It may also understate the value of systematically studying financial markets, which are complex systems shaped by behavioral issues and a multitude of variables. As I consider this conundrum in early spring, two illustrations come to mind that may help sort out this ambivalence about prediction: sports and the weather.

Sports, Data and Randomness
As March came to a close, a certain madness faded from the college basketball scene. As I do each year, I had selected the winners of each game in the big tournament, round by round, based on my rather limited game watching, a review of rosters, a cursory look at team schedules and a bit of gut feel. Millions of other people do the same, with predictions based on greater and lesser degrees of knowledge. Despite the sometimes considerable investment of time, money and effort, there is no reputable evidence of any person ever having predicted every winner.

So, should people stop trying to predict winners more accurately? “If you can never get the bracket right,” the logic goes, “then why waste all the time and effort in trying?” Water cooler banter suggests the least informed might actually have a better chance of accurately predicting the results, which leads to erroneous logic: if no one can be right, everyone’s predictions must be equally valid. And if the chance of being accurate is low, any expression of confidence in our predictive capability smacks of unwarranted hubris. So we downplay our chances, and we joke about our predictions.

The tournament is worth considering from a predictive standpoint. Certainly there is randomness in any effort involving 18- to 22-year-olds thrust into the national limelight to throw an inflated orange ball through a 10-foot-high hoop. Yet there has been an entire season of games from which to glean data about the relative strengths and weaknesses of teams and the behavioral tendencies of each player. The practice of statistical analytics in basketball is at such a high level that the Massachusetts Institute of Technology’s Sloan School hosts a Sports Analytics Conference. Using similar analytics, Nate Silver and his friends over at the blog FiveThirtyEight developed a model for each round using a “composite of power rankings, pre-season rankings, the team’s placement on the NCAA’s S-curve, player injuries and geography.” That’s impressive predictive technology. And yet, teams given a 76 percent and a 91 percent chance of winning lost in the first round, and teams given a 72 percent and an 88 percent chance went down in the second. Even with sophisticated modeling and robust statistical input, randomness remains, and the key issue is the sheer number of variables.
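In fact, a little arithmetic shows why such upsets are expected rather than scandalous. The sketch below uses the four probabilities quoted above and assumes, purely for simplicity, that the games are independent:

```python
# A minimal sketch: even when each favorite is individually likely to
# win, the chance that ALL of them win shrinks multiplicatively.
# Treating the four games as independent is a simplifying assumption.
favorite_win_probs = [0.76, 0.91, 0.72, 0.88]

all_win = 1.0
for p in favorite_win_probs:
    all_win *= p

print(f"Chance all four favorites win: {all_win:.0%}")   # ~44%
print(f"Chance of at least one upset:  {1 - all_win:.0%}")  # ~56%
```

Under that assumption, the chance that all four favorites survive is only about 44 percent, so at least one upset among those games is more likely than not. A model can give a team a 91 percent chance, watch that team lose, and still be a good model.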

Weather, Likely Outcomes and Planning
Each year, as March rains fall in the Great Lakes region and spring takes hold, I begin to think about when I can plant my backyard garden. But my planning pales in comparison to the financial importance of planting decisions for agricultural concerns across the country. Agriculture is one industry among many that are highly dependent on accurately predicting the weather to maximize the economic utility of their resources. The rise of computer modeling has increased the overall accuracy of weather forecasts dramatically over the last few decades, from about 65 percent in 1985 to almost 90 percent by 2009. Yet we still have that nagging feeling that the weather forecast on our evening news won’t be accurate. The weather app on my phone often differs from the one on my wife’s phone. Any claim of predictive supremacy here seems silly.

An interesting review of the current state of weather prediction appeared in a recent issue of the Bulletin of the American Meteorological Society. It explores the many models explaining the “Earth system,” which includes weather, and their varying success in accurately projecting future weather activity. In academic-speak, here is how the authors explain the critical issue: “[The Earth system] is the canonical example of a complex system, in which its dynamics, resulting from interacting multiscale and nonlinear processes, cannot be predicted from understanding any of its isolated components.”
In brief, there are many variables, but the key is to understand how they interact in order to project the future state of the weather. The authors go on to differentiate between the process of “projecting the weather,” which is a set of plausible outcomes, and “predicting the weather,” which is a likely outcome. That is where it gets interesting.

The predictive power of our weather models, not their projections, is what makes them useful to a farmer in Iowa. Knowing with great certainty that it will rain approximately one inch less this year than last may be mildly helpful. Knowing with a fairly high level of certainty that the next week will bring constant rainstorms followed by a week of cold weather is far more relevant. As the article details, most computer modeling makes detailed projections based on specific parameters and processes, with little compatibility between models, leading to scientific exploration of plausible outcomes. An alternative is to build inter-compatible models that predict with clearly defined probability. This is the key to useful weather modeling: specific predictions based on multiple models, each with an expression of probability.
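To make the multi-model idea concrete, here is a toy sketch. The rainfall figures are invented for illustration; the point is only how a spread of model outputs becomes a probability statement rather than a single number:

```python
# A toy illustration of multi-model probabilistic forecasting: several
# models each forecast next week's rainfall (in inches), and the
# ensemble turns their spread into a probability. Numbers are invented.
forecasts = [1.8, 2.4, 2.1, 3.0, 2.6]  # hypothetical per-model forecasts

mean = sum(forecasts) / len(forecasts)
threshold = 2.0  # inches; the event the farmer cares about
prob_heavy_rain = sum(f > threshold for f in forecasts) / len(forecasts)

print(f"Ensemble mean forecast: {mean:.1f} inches")
print(f"P(rainfall > {threshold} in): {prob_heavy_rain:.0%}")
```

The farmer gets not just “about 2.4 inches” but “an 80 percent chance of more than two inches,” which is the form of answer that actually supports a planting decision.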

When faithful Eeyore, from A.A. Milne’s classic Winnie-the-Pooh series, intones woefully, “Looks like rain,” Pooh should ask, “How likely is it to rain? How confident are you? And based on what data?” That would be far more useful to Pooh’s quest to find honey. But imagine the difficulty of assessing a weather forecast that includes a probability for each temperature prediction. The key issue may not be improving our models, but our desire as an audience for simplified answers.

The Markets and Crystal Balls
So what might the unpredictability of sports and the spring weather teach us about how we discuss financial market predictions? Financial markets also are “complex systems” that include behavioral issues, like we see in sports, and an incredible number of variables, like we see in weather systems. The complexity inherently makes outcomes unpredictable, hence our joke about the elusive crystal ball.

As we work to further develop our understanding of behaviors and relative values in the global financial markets, we must remain clear on our objective. We are not attempting to reach 100 percent accuracy; rather, we are seeking to improve our accuracy with each prediction. This goal properly places the value on the experience, research and technical capabilities of our analytical staff. As we drill down from a global economic view all the way to specific investment vehicles, we look at the numbers (quantitative analysis) and then supplement that with how people are behaving (qualitative analysis).
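How do we know our accuracy is actually improving? One common yardstick for probabilistic calls is the Brier score, sketched below with hypothetical numbers purely for illustration; it is not our house methodology, just a standard way to keep score:

```python
# A minimal sketch of scoring probabilistic predictions with the Brier
# score: the mean squared gap between the stated probability and what
# actually happened (1 = event occurred, 0 = it did not). Lower is
# better; the goal is a falling score over time, not perfection.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical numbers: last year's calls versus this year's.
last_year = brier_score([0.9, 0.6, 0.8], [1, 0, 0])
this_year = brier_score([0.8, 0.4, 0.7], [1, 0, 0])
print(f"Last year: {last_year:.3f}, this year: {this_year:.3f}")
```

A falling score over successive years is evidence that experience, research and technical capability are paying off, even though no single call is ever guaranteed.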
