R. David Moon

Saturday, August 21, 2010

Strategy Development: Identifying external dependencies and assessing rankings

Most external dependencies are self-evident to senior management in any capably run business. Amazingly, very few of these dependencies have been formally evaluated as to their relative impact on revenues, costs and earnings. Our first step in the analytical process is to identify and rank the external dependencies. We’ll have the opportunity to change the prioritization of these dependencies later (and over time), as long as we make certain we’ve at least captured the primary dependencies here.
While literally endless layers of influencing dependencies could be identified, we especially want to determine a top ten to fifteen. The principles of Pareto analysis tell us that once these influencers are ranked properly, the relative impact of each factor is substantially diminished the further we go down the list. As we’ll discuss later, another part of our practical focus on strategy is to realize that while there may indeed be a 121st most important external dependency and a 168th most important external dependency as well, we need to focus in order to get meaningful results. In this case, that means simply narrowing down to the top 10-15 factors. Later in the process, we will concentrate corrective (meaning, adaptive) effort on the topmost of those 10-15, get them properly addressed, and then move on down the list.
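To make the Pareto idea concrete, here is a minimal sketch in Python of how a first-pass ranking might be assembled; the dependency names and dollar figures are entirely hypothetical, and in practice the impact estimates would come from the financial models discussed in the next step.

# A minimal sketch (hypothetical names and figures) of ranking external
# dependencies by rough estimated annual earnings impact and keeping the top N.

dependencies = [
    ("Fuel / energy prices", 120.0),           # rough annual earnings impact, $M
    ("Consumer inflation", 85.0),
    ("Transportation costs", 60.0),
    ("Interest rates / cost of capital", 55.0),
    ("Foreign exchange rates", 30.0),
    ("Raw material prices", 25.0),
    ("Regulatory change", 10.0),
    ("Labor market tightness", 8.0),
]

TOP_N = 5  # in practice, the top 10-15 as discussed above

ranked = sorted(dependencies, key=lambda d: d[1], reverse=True)
total = sum(impact for _, impact in ranked)

cumulative = 0.0
for rank, (name, impact) in enumerate(ranked[:TOP_N], start=1):
    cumulative += impact
    print(f"{rank:2d}. {name:<35s} ${impact:6.1f}M  "
          f"(cumulative share: {cumulative / total:5.1%})")

Run on these illustrative figures, the top five dependencies account for nearly 90% of the total estimated impact, which is exactly the Pareto concentration the process relies on.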

Modeling primary financial impacts

Our next step is to create models of each of the primary factors. While several firms, including ours, have proprietary means of assisting clients in developing these financial models, the essential element is to be able to estimate cause and effect. We need to assemble an understanding of the business, based on the pre-existing financial models currently in use and past history, along with forecasting input from key managers of each area in question.
In what direction, and to what extent, do we expect that a 10% increase in consumer inflation will affect revenues? To what extent would we expect profit margins to be impacted due to a 23% increase over time in transportation costs?
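As one way to picture the kind of cause-and-effect estimate these questions call for, here is a minimal sketch with purely illustrative sensitivity coefficients; a real engagement would derive these relationships from the firm’s own financial models rather than from single linear coefficients.

# A minimal sketch (hypothetical coefficients) of expressing cause and effect
# as simple linear sensitivities: the estimated % change in an internal result
# per 1% change in an external variable.

sensitivities = {
    ("consumer_inflation", "revenue"): -0.4,            # illustrative only
    ("transportation_costs", "profit_margin"): -0.15,   # illustrative only
}

def estimated_effect(external_var: str, internal_result: str,
                     external_change_pct: float) -> float:
    """Estimated % change in the internal result for a given % change
    in the external variable, under a simple linear model."""
    return sensitivities[(external_var, internal_result)] * external_change_pct

# "To what extent would a 10% increase in consumer inflation affect revenues?"
print(estimated_effect("consumer_inflation", "revenue", 10.0))           # -4.0 (%)

# "What would a 23% increase in transportation costs do to profit margins?"
print(estimated_effect("transportation_costs", "profit_margin", 23.0))   # -3.45 (%)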

Notice that we are no longer asking functional managers about the likelihood of a given external condition. What we are quantifying in this step is the relationship between the external variable and the internal results. Already in this step, we’ve taken the process beyond the context of asking individual managers to forecast the future, which was never ultimately the process we were after in the first place. Instead, we’re asking that the manager understand and be able to quantify the relationship between things affecting input costs and the ultimate financial results from their area in the context of overall corporate performance. In our experience, this is a legitimate point of knowledge for most managers that we should expect them to be able to address – if not, we might ask if they are really in command of the basics necessary to manage their area of responsibility.

While it should be acknowledged that there are several very sophisticated organizations that have done a substantial portion of this type of analysis for themselves (public utilities, insurance companies, among others), the reality is that most business enterprises, even those with very large external dependencies, have not. Not too many years ago I was meeting to review an acquisition with two Senior VPs of a top-3 US airline. After wrapping up and on the way to lunch, I asked them about their fuel hedging program. They responded that they really did not engage in fuel hedging (they still don’t to this day), and that “fuel prices are nearly unpredictable”.
As we know, one key competitive advantage on the part of Southwest in particular has been their fuel hedging operation, particularly as volatility in global oil markets spiked in early 2008. With billions at stake across the airline industry in annual fuel costs, we can start to see that even in industries where it might have been assumed that external dependencies had been well identified and management processes put in place years ago to address them, the reality is, just as in this example, that there are gaps everywhere. Setting aside the fact that we may or may not have the sophistication to put some of the “risk-mitigation” strategies in place, the practical reality is that there are many options for addressing the situation, once we have first identified its impact on the business.
While some of this behavior may fall into the category of “corporate denial”, it has some similarity to the individual experiencing pain who avoids visiting the doctor and having tests done for fear of what they may learn. We need to honor shareholders and other stakeholders who depend on the business for results. We’ve all heard the old adage that “failing to plan is planning to fail”. Yet in the final analysis, if we have not identified the relationships between at minimum the top dozen or so external input factors and our business results at some quantifiable level, then it’s effectively as if we’ve said that their impact is zero.
This is true because, in the absence of the analysis, there is simply no actionable information. Therefore, unless we know that zero is the actual impact resulting from the relationship (meaning, no relationship), then we know we are operating on a false premise. And if we are to be honest with ourselves, even the assertion that the accurate relationship is zero would itself have to be based on analysis in order to support that conclusion.
Up until the moment we are in possession of a credible analysis of the dependencies, we are implying that there is no relationship between the top external factors and our ability to produce predictable business results across the enterprise. This highlights the urgency of completing this seemingly academic exercise, and at the same time serves to explain why and perhaps how so many companies in so many industries have been brought up short and suddenly found themselves in literally unrecoverable trouble.

Assess probabilities

The next step is to assess the relative probabilities of individual variables reaching forecasted levels. Of course, we believe the forecast represents the most likely scenario or it would not be the forecast. Yet, as a practical matter, we need to attach a probability since some forecasts are simply “stronger” than others. For instance, there is a greater probability of the fed funds rate, and therefore the cost of capital, being at a level consistent with what Treasury futures would predict six months out, than of fuel prices being at nearly any level that we might predict a year from now. This is also why we look to establish ranges as described earlier.

As with other data, we need to test our estimated probabilities against external, independent sources. Thankfully, we have not only very competent and objective analysts available in most of these variables, but in many of them we also have markets. The predictive capability of markets has been demonstrated time and again to be highly accurate, although certainly not infallible. But market indicators, in league with analyst-developed probabilities, can develop a much clearer picture, one which gives us at least our starting place.
Along with other parts of the process, keep in mind that the mechanisms used to establish probabilities can themselves be adjusted over time. As we see unfavorable and perhaps repeated surprise factors develop around certain probability forecasts, we can re-evaluate our sources and the evaluation methods we use internally to develop our probabilities. Where did the surprise originate, and how is our data gathered in such a way as not to capture the thing that created the surprise? The ability to track our projected probabilities against the actual outcomes will give us an ever greater ability to refine our methods and gain greater accuracy over time.
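One simple way to make that tracking concrete is sketched below, with hypothetical forecasts and outcomes. The Brier score used here is not part of the process described above, but it is one standard yardstick for probability accuracy: the mean squared difference between the forecast probability and the actual outcome, where lower is better.

# A minimal sketch (hypothetical data) of tracking projected probabilities
# against actual outcomes so the forecasting process can be refined over time.

# Each record: (forecast probability that the condition materializes, outcome 1 or 0)
history = [
    (0.70, 1),
    (0.60, 0),
    (0.20, 0),
    (0.90, 1),
    (0.30, 1),  # a "surprise": low forecast probability, yet the condition materialized
]

# Brier score: mean squared difference between forecast and outcome (lower is better)
brier = sum((p - outcome) ** 2 for p, outcome in history) / len(history)

# Flag forecasts that missed badly, so their sources and methods can be re-examined
surprises = [(p, o) for p, o in history if abs(p - o) > 0.6]

print(f"Brier score: {brier:.3f}")               # 0.198 on this illustrative data
print(f"Forecasts to investigate: {surprises}")  # [(0.3, 1)]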

Rank each exposure: Probability × Impact = Exposure

Our next step is to perform a simple two-dimensional ranking of exposure. For our purposes here, we are defining exposure as:

P × I = E
Where
P = Probability of the forecast condition materializing, stated as a percentage
I = Impact in earnings terms, stated as the variance from current earnings (EBITDA) if the condition materializes, expressed as a fraction of current EBITDA
And
E = Exposure, a score ranging from roughly 1 to 100

For example, a condition with a 60% probability and an impact equal to 25% of current EBITDA yields E = 60 × 0.25 = 15.

From this calculation we prepare a graphic analysis, plotting each of the conditions in relative terms. This gives us a much greater understanding of the conceptual exposure to external dependencies as well as a truly quantitative picture. Much as the pilot of a large commercial aircraft has instruments measuring wind direction, barometric pressure, temperature and humidity, we now have instruments that measure the effects of external conditions on our business and allow us to start to understand how they affect the results we can produce.

The resulting chart will resemble this format:

            %   |         *     *
  Probability   |   *        *       *
                |      *         *
                |___*_____*________________
                           $ Impact
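For illustration, here is a minimal sketch of computing the exposure score and producing a probability-versus-impact scatter in roughly the format sketched above. The conditions, probabilities and impact figures are entirely hypothetical, and the plot assumes the matplotlib library is available.

# A minimal sketch (hypothetical conditions and figures) of computing
# E = P x I and plotting probability against impact.
import matplotlib.pyplot as plt

# Each condition: (name, probability in %, impact as a fraction of current EBITDA)
conditions = [
    ("Fuel +50%", 35.0, 0.30),
    ("Consumer inflation +10%", 60.0, 0.12),
    ("Transportation costs +23%", 45.0, 0.08),
    ("Fed funds +200bp", 70.0, 0.05),
    ("FX swing", 25.0, 0.10),
]

# Exposure score for each condition (roughly 0-100)
for name, p, i in conditions:
    print(f"{name:<28s}  P={p:5.1f}%  I={i:4.2f} of EBITDA  E={p * i:5.1f}")

# Probability-versus-impact scatter, as in the chart format above
impacts = [i for _, _, i in conditions]
probs = [p for _, p, _ in conditions]
plt.scatter(impacts, probs)
for name, p, i in conditions:
    plt.annotate(name, (i, p), fontsize=8)
plt.xlabel("Impact (share of current EBITDA)")
plt.ylabel("Probability (%)")
plt.title("External dependency exposure")
plt.show()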

At this point there may be certain revelations in terms of how we look at the business. There also may be a tendency to call the results into question. While the process used to arrive at the analysis can, and should, stand some refinement over time as pointed out earlier, it is important here to follow the process to completion, particularly in the first pass, and then go back to make further refinements. Recall that we’re out to formulate a strategy that we can own, and a practical strategy is of greatest value when we do not “let the perfect be the enemy of the good”.
The next step is the identification of options and selection of specific strategies. Now that we have identified the relative effects on the business, we need to determine the strategic options available to us, along with the cost and time required to implement each, and select from among them the steps most practically suited to the business and its capital constraints.

Sunday, July 25, 2010


Financial Convulsion “Syndrome”

Professional traders have often observed during volatile market conditions that an entire session can see wild fluctuations both up and down, only to arrive back at substantially the same place. Just as this can happen with any security in the course of a given day, so too can the same effect occur over longer periods – a week, a month, or a year. As the world has moved further into the twenty-first century, a series of episodes of volatility have appeared in markets of all types, in all parts of the world, and across a diverse range of asset classes, that have shown in many cases not just volatility, but truly convulsive swings of some 200-400% above and below a historical range.



Already in the first decade of the twenty-first century, an extraordinary set of economic variables have gone through, and in some cases have continued to exhibit, an almost sine-wave shaped pattern. From interest rates to copper and rice, a remarkable number of things – financial instruments, commodities, equities, and even foreign exchange rates – have gone through this astonishing and unpredicted roller-coaster ride.

Many professionals – including purchasing managers, corporate treasurers, retail merchandise buyers, bankers, traders and others – have seen swings in basic costs within a two-year period that, in the twentieth century, one could have gone an entire career without witnessing anything approaching. What does it mean? Will it continue? Is this an emerging pattern of greater volatility, or is each of these areas simply going through a one-time “volatility event”, like some sort of financial tsunami?

These questions frame the state of uncertainty, ambiguity and bewilderment that results from this “sine wave effect”. And yet, mathematically, after all the volatility, in most of these cases we arrive at or near the same place. If something has rapidly increased and then decreased three-fold, yet returned to nearly the starting point, what should be the problem? What prevents us from simply ignoring that it ever made the trip in the first place? There are several reasons.

First, business and markets are, whether rational or not, heavily influenced by emotion. Much of our economic doctrine and governmental policy in the latter decades of the twentieth century had been based on the supposed rationality of markets. And yet, evidence suggests that markets respond not only to rationality but also to emotion. The emotions experienced during the sine wave effect’s journey can become gut-wrenching moments of fear, anxiety, dread and self-doubt. During one such period in 2008, Tessie Lim captured these emotions in the Straits Times:

“There are days when fear strangles me so I can hardly breathe. My chest tightens as if dread itself turns the knot, my stomach spasms and my hands become clammy. All I can think of is that I’m not good enough. What if I fall short of my standards, lose control, and default to my core . . . The effort to go forward seems too heavy a burden. I feel dizzy as I see my life teeter on the brink”

What can cause this type of anxiety? How do we, as participants in markets far and wide, alter our behavior, our propensity to buy, sell, borrow or save, based on the sometimes dramatic swings in our collective and personal emotional experience? Despite the legions of analytical tools applied to modern markets, do we have the ability to discern the effects of emotion from the effects of rationality? Can we spot the effects of emotion when they emerge, and begin to hold sway in previously more rational and orderly markets?

Jared Diamond, in his masterful treatment of the interplay between climate, culture and human history, “Guns, Germs, and Steel”, describes the first arrivals of Spanish ships in the New World as an event that indigenous people hardly recognized. The sighting of a large ship offshore was nearly akin to the arrival of a flying saucer. It’s not that people did not physically see the ships; it’s that they had no frame of reference with which to process such a thing. Almost without exception, the locals did not marshal their defense, flee, or attempt to scout the true nature of the Spanish ships. The ships were simply so far outside their own set of known phenomena that they literally could not be processed intellectually. Surely, we are more advanced in our own time. Surely, we have a grasp of nearly all the possible phenomena, patterns and possibilities affecting business. Do we have the ability, in our time, to recognize completely unanticipated patterns and events while they are still emerging? If we were over-confident in our ability to see hugely divergent, uncharacteristic events as they develop, would we be able to identify our own over-confidence? How would we know?

While the twentieth century in particular saw great strides in the analytical, mathematical and econometric understanding of markets, very few predicted any of these financial convulsions. We have risk management teams, underwriters, securities analysts, think tanks, government economic agencies, independent auditors, and professional investment advisors. And yet with all this talent, very few of these massive oscillations were predicted. It’s as if we had weather forecasting that gave us very high accuracy as to tomorrow’s high temperature, but proved wholly unable to warn us of a once-in-a-lifetime hurricane. Is it simply that these events are “too big”? Are our early-warning systems somehow targeted unintentionally toward those events we know are possible? When the “impossible” happens, is the event “off the charts” literally, because we’ve constructed charts within a range that we know to include what we assume to be the possible outcomes? If these dynamics are at work in our predictive models, then we may indeed have a situation in which the biggest, most potentially damaging events affecting business and our economy are the very events that we are least prepared to see coming.

Generally, business exists in an environment where there is an assumed range of predictability, whether explicit or not, for most of the major variables affecting the enterprise. When a contract for raw materials is signed, management is making a statement about the expected range of prices for that particular input cost. We are accustomed to prices of consumer goods, raw materials and capital purchases like homes and vehicles all operating in a certain range that allows us all to make commitments and decisions with confidence.

Saturday, June 19, 2010

The Lost Understanding of Risk


You may recall a time when we understood risk (or, at least we believed we did). It was not that long ago, really, but now it's clear just how difficult it is for us to recapture the comfort we once derived from what may have been itself a misplaced confidence in our own sophistication around risk.

THEN:

“Risk Management” managed risk

Until recently, it was believed that common risk management practices, proven in past decades, were sufficient to manage risk. Naturally, a deep water oil exploration and drilling business has far different types of risk than a resort hotel operator. One does not necessarily have “greater risk” than the other, just different risk. Understanding that these practices vary greatly by industry, and that many smaller organizations rely on fairly basic forms of insurance as perhaps their sole risk mitigation technique, there are basic methods that have come to stand for sound risk management in most medium to large sized enterprises. These would have included:
- Risk assessment: examination of both points of exposure, and their respective probabilities, to produce at minimum, a rough ranking of the larger risks
- Risk prevention and mitigation programs to reduce risks outright. An example may be a physical modification, such as upgrading all company vehicles with brighter brake lights, or a safety training program for all company employees, etc.
- Risk insurance, usually structured to provide levels of compensation that are materially meaningful relative to the magnitude of the possible loss and its expected probability.
The reality in most enterprises, large and small, is that there are many more risks than can be adequately addressed. By the time the organization has acquired insurance sufficient to cover every conceivable event – from hurricanes, foreign uprisings and slippery floors in the employee lounge to insect infestations, officers’ liability and unknown propane leaks – a large share of earnings may be flowing to insurance as opposed to shareholders, employees and bondholders. And still there would undoubtedly be risks left unaddressed. So in practice, risks are sequenced from the obvious and most potentially damaging down to lesser and lesser risks. As one works down the list, there is a point at which nearly every company simply determines it will leave the remaining risks in a category generally seen as “self-insured” – that is, the company accepts the fact that if these (perhaps) rare and unusual occurrences actually happened, it would be prepared to pay the consequences out of current cash reserves, rather than pay ongoing premiums, the cost of complete mitigation, or some combination of both.
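To illustrate how that sequencing down the list might be roughed out, here is a minimal sketch with hypothetical risks, probabilities, loss amounts and premiums. Comparing expected annual loss against the premium is a deliberate simplification that ignores risk aversion, tail severity and balance-sheet capacity, but it shows the basic shape of the insure-versus-self-insure decision described above.

# A minimal sketch (hypothetical figures) of ranking risks by expected annual
# loss and flagging those where retaining (self-insuring) the risk may be
# preferable to paying an ongoing premium.

# Each risk: (name, annual probability, loss if it occurs in $K, annual premium in $K)
risks = [
    ("Vehicle fleet accidents", 0.30, 2_000, 180),
    ("Hurricane damage to main plant", 0.02, 50_000, 600),
    ("Slip-and-fall in employee lounge", 0.10, 150, 40),
    ("Propane leak at warehouse", 0.01, 1_000, 35),
]

for name, prob, loss, premium in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    expected_loss = prob * loss
    decision = "insure / mitigate" if expected_loss > premium else "candidate to self-insure"
    print(f"{name:<35s} expected loss ${expected_loss:8,.0f}K  "
          f"premium ${premium:5,.0f}K  -> {decision}")

In practice the cut-off also reflects the firm’s cash reserves and appetite for retained risk, not just the arithmetic.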

Most businesses have high-priority risk areas that are obvious. A railway has multitudes of risks associated with moving thousand-ton trains across country at high speeds in all types of weather conditions. Likewise airlines, electrical utilities, food products companies, pharmaceutical companies and a host of other industries where the top risks are seen as either risks inherent in company operations (vehicles, dangerous chemical processing, etc.) or product/service liability related risks (food products, cosmetics, medical clinics, etc.).

However, it turns out that for most companies, the risks in both product liability and in the very nature of company operations themselves, are usually the best understood risks of all. While the conventional risk mitigation and risk insurance practices are necessary, they are only as good as our perception of the likelihood of the types of risks we expect.

What about the risks we do not expect? These include the unusual weather event, civil unrest, and economic upheaval. In the 21st century, even piracy – a risk virtually eradicated before the 20th century – made a comeback as a legitimate and very real risk to shipping operations, the energy industry and even leisure cruises in certain parts of the world.

Do we have adequate ability to assess the likelihood of these more infrequent “spontaneous” risks? Is there any relationship between our knowledge of a given risk and our willingness to prepare for it? And is there any relationship between our having experienced a risk – within the collective experience represented in our individual enterprise – and the likelihood of that risk actually materializing? These are two entirely different questions. But in the 21st century, the answers to these questions, taken together, have resulted in a breathtaking sweeping away of what we believed we knew about risk management.

NOW:

Only small and medium risks turn out to have been managed by traditional “Risk Management”

One of the most striking consequences of the failure of our older notions around risk management to prove adequate in recent crises is the realization that if a risk event proves large enough, it simply sweeps away our ability to deal with it entirely. While an entire book could be written on the subject of the risk management lessons coming out of AIG alone, it is valuable to focus on the AIG case as one clear and recent illustration of older, and now certainly obsolete, notions being swept away.

AIG is also important to understand since in it we have perhaps the world’s largest risk manager suddenly unable to manage its own risks. While the complexity of AIG’s operations was enormous and beyond our examination here, the important fact is AIG’s inability to properly assess the likelihood of risks associated with the risk-management policies it was issuing to its customers. We now understand AIG did not adequately understand risk – even for its own part, let alone on behalf of its clients, those paying fees to AIG primarily for the very purpose of managing risk. While there are endless small and large issues that contributed to the AIG case, the central fact remains that the largest risk-management enterprise in the world was unable to manage its own risks sufficiently to secure its own basic survival.

If the AIG story were an outlier, some one-off unusual circumstance, the lessons to be learned might be different. Having myself been a Partner at Arthur Andersen in the early stages of the Enron collapse, I can well appreciate the differences. Enron was a single large enterprise in the energy trading industry, which it had virtually pioneered itself. The practices which brought it down were criminal, as the legal system was ultimately able to determine.

In the case of AIG, however, it appears that policies were issued in an ostensibly legitimate fashion, against risks that (at the time) were believed to have been correctly assessed. After these policies were in place, a sequence of events developed – in this case mortgage defaults – in a pattern, frequency and volume that entirely overwhelmed previous expectations. So much so that even the outright liquidation of the entire enterprise could not have satisfied the claims outstanding.

Was it fraud? Not likely in my view. Similarly convulsive events swept through other large insurers, and both insurer and insured across the globe were caught nearly entirely unprepared. If it were fraud, our lessons learned would be largely about amending the regulatory system to fix newly identified holes, as in the post-Enron measures like Sarbanes-Oxley. In one sense, this would prove easier to adapt to than the real lessons from AIG.

Instead, we are left to attempt to reconcile the existential threat to AIG that emerged from the inability to effectively manage that which it was set up most to manage: risk.

“New” risks:
- Integration of the global financial system, resulting in a domino effect for many core institutions of capitalism
- Risks presented by debt at large, beyond traditional notions of debt which looked primarily at risks associated with an individual borrower and singular loan
- Risks associated with large-scale devaluations of currencies, bond market disruptions and radical, unanticipated shifts in central bank policies
- Risks of institutional failure, including the probability that insurance providers could fail, negating older, conventional notions about the ability to simply underwrite risk based on a contractual obligation from a third party
- Risk of “national default”, as in Iceland, Lithuania and potentially larger economies
- Risks arising from sharp, sometimes record-breaking swings in input costs, most notably commodity pricing, energy cost, raw materials costs, and supplier prices or even mere supplier viability due to supply chain risk associated with input costs
- Risks of certain markets attaining gridlock, such as the commercial real estate market in many countries during the global recession, wherein so little buying and selling went on for several years, that notions of fair market values were nearly indefinable
- Risks of major, unexpected shifts in regulation, government policy toward certain industries, and new or radically revised tax programs
With the onset of these new dynamics, risk management is having to remake itself, in order to continue to be actual risk management – that is, an effort that effectively addresses and manages the risks of today’s environment and forward. Not just the small and medium risks, but also the most massive risks, particularly those that serve to threaten the very survival of the firm itself. Risk management practices, to be considered such, now must step up to the sudden and dramatic rise of risks from external economic events, not the least of which is the myriad set of risks from the systemic upheavals in the global financial system, including the risks of collapse of the traditional providers of insurance and other risk management tools. It would seem this larger view of the risk management function – one that encompasses the management of risks inherent in the very techniques of risk management itself – is perhaps just the starting point for a fully capable approach to risk management – one that is sufficient to the realities of our current age.