R. David Moon

Saturday, January 21, 2012

The False and Fading World of Nostalgia

We know that the half-life of career paths, business models, products and processes is shrinking rapidly. If you long for the days of yore when permanence and a semblance of stability were attainable, you're probably not alone. Not only do we as individuals find comfort in nostalgia, but we are also surrounded by institutions and organizations that are architected for a past that will surely never return.

In his recent Fast Company article "Generation Flux", Robert Safian describes this trend: "Nostalgia is a natural human emotion, a survival mechanism that pushes people to avoid risk by applying what we've learned and relying on what's worked before. It's also about as useful as an appendix right now." (Fast Company issue 162, February 2012 - p. 67).

It seems to me that we have allowed ourselves an extra helping of nostalgia in recent times. Unlike 100 years ago in 1912, when a world preparing for war was forced to embrace a blinding set of new technologies from the automobile and electricity to telecommunications and mass production, we live in an era of apparent reluctance, if not outright resistance, to change.

Companies hold on to historically record-high amounts of cash on their balance sheets, unable to identify investments, innovations or expansions of their own core business that would yield more than even the infinitesimal 2.5% average return on cash assets. Families and individuals relocated in 2011 at a rate less than that of the late 1940s.

Ultimately, just like life among the Pennsylvania Dutch, we will arrive in a place from which we can no longer realistically return to the mainstream. It would be too much of an adjustment, too costly, and too difficult. Falling far enough behind inevitably puts us where we no longer have the skills to adjust even at a fundamental level.

At this point, preservation has become a priority for many: preservation of familiarity, preservation of the status quo, preservation of nostalgia. We find ourselves playing defense, watching erosion and fraying show up at the edges of the private world we seek so much to maintain.

It’s one of the most prevalent tendencies of human nature. This desire to preserve the comforts of the present until they become the past is so ingrained that it permeates every corner of society. It happens with older family members – who often become stuck in their familiar living conditions, even after they are no longer workable. It happens to manufacturers – who may prolong the switch to new products or processes, trying to milk out just a bit more profitability and useful life from old plants, old machinery, old methods. It happened to Kodak, stuck in a pre-digital world while it stood still and let others gain control of the market it once owned across the globe. From the US Postal Service and American Airlines to Research in Motion and Borders Books, the attempts to preserve, to somehow extend the older business model in the face of aggressive change, to hold on to the comforts of the familiar, are fraught with great danger and often result in catastrophe.

What seemed so appealing for so long, and started to appear as a position of relative safety, slowly becomes a trap. Like quicksand, the realization that it has become harder and harder to break free arrives nearly at the point where it may already be too late. Time and again, leaders marvel after the fact that it had seemed unthinkable at the time that “conditions could change so fast” or that “anyone could have imagined this”.

So we come eventually to recognize that the further behind the rest of the world we fall, the more expensive it becomes to maintain that position. Like the Amish, we begin to need things – parts, expertise, supplies – that only we want or need, and that cost us more and more over time, if they are available at all. In the end, our attempts at preserving this false nostalgia exhaust us. Many never make it back to the mainstream again, and see their careers, their businesses, their lives – wind down into an anachronistic world filled with rationalizations and self-justifications necessary to sanely preserve the illusion that it’s all OK. It happens to individuals, it happens to families, businesses, entire industries – and it even happens to entire societies.

As in any destructive habit of human nature, we know we must first admit that we have a problem before we can address the problem. We must then act to eradicate the problem, knowing that our tendencies may cause us to gravitate back to this harmful pattern. We must be mindful of the signs of our craving for nostalgia, the seeking of comfort from preservation of the present long past its usefulness.

In rejecting an unnatural fear of change, progress and the uncertainty the future brings, we gain by embracing the inevitability of change. Only by acknowledging the relentless pace of 21st Century life do we enable ourselves to be full participants in it, and to advance. But to engage this journey, we need the necessary preparedness: the core capabilities that equip us to operate amid, and properly interact with, the many opportunities and ongoing adaptations the current world will continue to require.

Saturday, August 21, 2010

Strategy Development: Identifying external dependencies and assessing rankings

Most external dependencies are self-evident to senior management in any capably run business. Amazingly, very few of these dependencies have been formally evaluated as to their relative impact on revenues, costs and earnings. Our first step in the analytical process is to identify and rank the external dependencies. We’ll have the opportunity to change the prioritization of these dependencies later (and over time), as long as we make certain we’ve at least captured the primary dependencies here.
While literally endless layers of influencing dependencies could be identified, we especially want to determine a top ten to fifteen. The principles of Pareto analysis tell us that once these influencers are ranked properly, the relative impact of each factor diminishes substantially the further we go down the list. As we’ll discuss later, another part of our practical focus on strategy is to realize that while there may indeed be a 121st most important external dependency and a 168th most important external dependency as well, we need to focus in order to get meaningful results. In this case, that means simply narrowing down to a top 10-15 factors. Later in the process, we will concentrate corrective (meaning, adaptive) effort on the topmost of those 10-15, get them properly addressed, and then move on down the list.
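
As a minimal sketch of this identification-and-ranking step in Python (the dependency names and impact figures are hypothetical, purely for illustration; in practice they come from the formal evaluation described above):

# Hypothetical external dependencies and their estimated annual
# earnings impacts ($M).
dependencies = {
    "Fuel costs": 420.0,
    "Consumer inflation": 310.0,
    "Fed funds rate": 180.0,
    "Transportation costs": 150.0,
    "FX rates": 90.0,
    "Raw material pricing": 75.0,
    "Regulatory changes": 40.0,
}

# Rank by estimated impact, largest first.
ranked = sorted(dependencies.items(), key=lambda kv: kv[1], reverse=True)

# Pareto view: cumulative share of total estimated impact.
total = sum(impact for _, impact in ranked)
cumulative = 0.0
for rank, (name, impact) in enumerate(ranked, start=1):
    cumulative += impact
    print(f"{rank:2d}. {name:<22s} {impact:7.1f}  cumulative {cumulative / total:5.1%}")

# Narrow the working list to the top 10-15 factors.
working_list = ranked[:15]

The Pareto pattern shows up in the cumulative column: the first handful of factors typically accounts for the great majority of total impact, which is why a top 10-15 is enough to work with.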

Modeling primary financial impacts

Our next step is to create models of each of the primary factors. While several firms, including ours, have proprietary means of assisting clients in developing these financial models, the essential element is to be able to estimate cause and effect. We need to assemble an understanding of the business based on the financial models currently in use, past history, and forecasting input from key managers of each area in question.
In what direction, and to what extent, do we expect that a 10% increase in consumer inflation will affect revenues? To what extent would we expect profit margins to be impacted due to a 23% increase over time in transportation costs?
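
As a minimal sketch of such a cause-and-effect estimate, assuming a simple linear sensitivity model (the coefficients below are hypothetical; in practice they would be derived from past history and managers' forecasting input):

# Hypothetical linear sensitivities: % change in the internal result
# per 1% change in the external variable.
SENSITIVITIES = {
    ("consumer_inflation", "revenue"): -0.40,
    ("transportation_costs", "profit_margin"): -0.15,
}

def estimated_impact(external_change_pct, sensitivity):
    """Estimated % change in the internal result for a given % change
    in the external variable, under a simple linear model."""
    return sensitivity * external_change_pct

# The two questions above, answered under these assumed sensitivities:
print(estimated_impact(10.0, SENSITIVITIES[("consumer_inflation", "revenue")]))         # -4.0 (% revenue)
print(estimated_impact(23.0, SENSITIVITIES[("transportation_costs", "profit_margin")])) # -3.45 (% margin)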

Notice that we are no longer asking functional managers about the likelihood of a given external condition. What we are quantifying in this step is the relationship between the external variable and the internal results. Already in this step, we’ve taken the process beyond asking individual managers to forecast the future, which was never the process we were after in the first place. Instead, we’re asking that the manager understand, and be able to quantify, the relationship between the things affecting input costs and the ultimate financial results from their area in the context of overall corporate performance. In our experience, this is a legitimate point of knowledge that we should expect most managers to be able to address – if not, we might ask whether they are really in command of the basics necessary to manage their area of responsibility.

While it should be acknowledged that several very sophisticated organizations have done a substantial portion of this type of analysis for themselves (public utilities and insurance companies, among others), the reality is that most business enterprises, even those with very large external dependencies, have not. Not too many years ago I was meeting to review an acquisition with two Senior VPs of a top-3 US airline. After wrapping up and on the way to lunch, I asked them about their fuel hedging program. They responded that they really did not engage in fuel hedging (they still don’t to this day), and that “fuel prices are nearly unpredictable”.
As we know, one key competitive advantage for Southwest in particular has been its fuel hedging operation, especially as volatility in global oil markets spiked in early 2008. With billions at stake across the airline industry in annual fuel costs, we can start to see that even in industries where it might have been assumed that external dependencies had been well identified, and management processes put in place years ago to address them, the reality is that, just as in this example, there are gaps everywhere. Setting aside the fact that we may or may not have the sophistication to put some of the “risk-mitigation” strategies in place, the practical reality is that there are many options for addressing the situation, once we have first identified its impact on the business.
While some of this behavior may fall into the category of “corporate denial”, it has some similarity to the individual experiencing pain who does not want to visit the doctor and have tests done for fear of what they may learn. We need to honor shareholders and other stakeholders who depend on the business for results. We’ve all heard the old adage that “failing to plan is planning to fail”. Yet in the final analysis, if we have not identified the relationships between at minimum the top dozen or so external input factors and our business results at some quantifiable level, then it’s effectively as if we’ve said that their impact is zero.
This is true because, absent the analysis, there is no actionable information. Therefore, unless we know that zero is the actual impact resulting from the relationship (meaning, no relationship), we know we are operating on a false premise. And if we are to be honest with ourselves, even the assertion that the accurate relationship is zero would itself have to be based on analysis in order to support that conclusion.
Up until the moment we are in possession of a credible analysis of the dependencies, we are implying that there is no relationship between the top external factors and our ability to produce predictable business results across the enterprise. This highlights the urgency of completing this seemingly academic exercise, and at the same time serves to explain why and perhaps how so many companies in so many industries have been brought up short and suddenly found themselves in literally unrecoverable trouble.

Assess probabilities

The next step is to assess the relative probabilities of individual variables reaching forecasted levels. Of course, we believe the forecast represents the most likely scenario or it would not be the forecast. Yet, as a practical matter, we need to attach a probability, since some forecasts are simply “stronger” than others. For instance, the probability that the fed funds rate, and therefore the cost of capital, will be at a level consistent with what treasury futures predict six months out is far greater than the probability that fuel prices will be at nearly any level we might predict a year from now. This is also why we look to establish ranges as described earlier.

As with other data, we need to test our estimated probabilities against external, independent sources. Thankfully, we have not only very competent and objective analysts available for most of these variables, but for many of them we also have markets. The predictive capability of markets has been demonstrated time and again to be highly accurate, although certainly not infallible. Market indicators, in league with analyst-developed probabilities, can develop a much clearer picture, one which gives us at least our starting place.
Along with other parts of the process, keep in mind that the mechanisms used to establish probabilities can themselves be adjusted over time. As we see unfavorable and perhaps repeated surprise factors develop around certain probability forecasts, we can re-evaluate our sources and the evaluation methods we use internally to develop our probabilities. Where did the surprise originate, and how was our data gathered in such a way that it failed to capture the thing that created the surprise? The ability to track our projected probabilities over time against actual outcomes will give us an ever greater ability to refine our methods and gain accuracy over time.
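
One simple, standard mechanism for tracking projected probabilities against actual outcomes is a Brier score. The text does not prescribe this particular measure; it is offered here as a minimal sketch, with a hypothetical forecasting track record:

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0-1) and
    actual outcomes (1 if the condition materialized, else 0).
    Lower is better; constant 50/50 guessing scores 0.25."""
    pairs = list(zip(forecasts, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Hypothetical track record of our stated probabilities vs. outcomes.
forecast_probs = [0.80, 0.60, 0.90, 0.30, 0.70]
actual_outcomes = [1, 1, 0, 0, 1]  # the 0.90 forecast was the "surprise"

print(f"Brier score: {brier_score(forecast_probs, actual_outcomes):.3f}")

Re-scoring over time, and asking where the worst-scored forecasts came from, is one concrete way to run the source re-evaluation described above.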

Rank each exposure: Probability × Impact = Exposure

Our next step is to perform a simple two-dimensional ranking of exposure. For our purposes here, we are defining exposure as:

E = P × I
Where
P = Probability of a forecast condition, stated as a percentage
I = Impact in earnings terms, as the variance from current earnings (EBITDA) if the condition materializes, scaled to a 0-1 index
And
E = Exposure, ranging from 1 to 100

From this calculation we prepare a graphic analysis, plotting each of the conditions in relative terms. This allows us both a much greater conceptual understanding of our exposure to external dependencies and a truly quantitative picture. Much like the pilot of a large commercial aircraft reading wind direction, barometric pressure, temperature and humidity, we now have instruments that measure the effects of external conditions on our business and let us begin to understand how they affect the results we can produce.

The resulting chart will resemble this format:

            %  |        *    *
  Probability  |   *       *     *
               |      *         *
               |___*____*________________
                        $ Impact
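
As a minimal sketch of this step (the conditions, probabilities and impact indices are hypothetical, and matplotlib is assumed for the plotting), the exposure calculation and chart might look like:

import matplotlib.pyplot as plt

# (condition, P = probability in %, I = impact as a 0-1 index of
# EBITDA variance); all values hypothetical.
conditions = [
    ("Fuel +50%", 35.0, 0.80),
    ("Inflation +10%", 55.0, 0.40),
    ("Fed funds +200bp", 70.0, 0.25),
    ("FX swing 15%", 40.0, 0.30),
    ("Freight +23%", 60.0, 0.35),
]

# Exposure E = P x I, landing on a 0-100 scale.
for name, p, i in conditions:
    print(f"{name:<18s} E = {p * i:5.1f}")

# Plot each condition in probability/impact space, as in the sketch above.
fig, ax = plt.subplots()
for name, p, i in conditions:
    ax.scatter(i, p)
    ax.annotate(name, (i, p))
ax.set_xlabel("Impact (share of EBITDA at risk)")
ax.set_ylabel("Probability (%)")
plt.show()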

At this point there may be certain revelations in terms of how we look at the business. There also may be a tendency to call the results into question. While our process to arrive at the analysis can usually, and should, stand some refinement over time as pointed out earlier, it is important here to follow the process to completion, particularly in the first pass, then go back to make further refinements. Recall that we’re out to formulate a strategy that we can own, and a practical strategy is of greatest value when we do not “let the perfect be the enemy of the good”.
The next step is the identification of options and selection of specific strategies. Now that we have identified the relative effects on the business, we need to determine the optional strategies available to us, their cost and time required to implement each potential strategy, and select from among them the steps most practically suited to the business and its capital constraints.

Sunday, July 25, 2010


Financial Convulsion “Syndrome”

Professional traders have often observed during volatile market conditions that an entire session can see wild fluctuations both up and down, only to arrive back at substantially the same place. Just as this can happen with any security in the course of a given day, so too can the same effect occur over longer periods – a week, a month, or a year. As the world has moved further into the twenty-first century, a series of episodes of volatility have appeared in markets of all types, in all parts of the world, and across a diverse range of asset classes, showing in many cases not just volatility, but truly convulsive swings of some 200-400% above and below a historical range.



Already in the first decade of the twenty-first century, an extraordinary set of economic variables has gone through, and in some cases has continued to exhibit, an almost sine-wave-shaped pattern. From interest rates to copper and rice, a remarkable number of things – financial instruments, commodities, equities, and even foreign exchange rates – have gone through this astonishing and unpredicted roller-coaster ride.

Many professionals, including purchasing managers, corporate treasurers, retail merchandise buyers, bankers, traders and others, have seen swings in basic costs within a two-year period that, in the twentieth century, a professional could have spent an entire career without witnessing. What does it mean? Will it continue? Is this an emerging pattern of greater volatility, or are each of these areas simply going through a one-time “volatility event”, like some sort of financial tsunami?

These questions frame the state of uncertainty, ambiguity and bewilderment that results from this “sine wave effect”. And yet, mathematically, after all the volatility, in most of these cases we arrive at or near the same place. If something has rapidly increased and then decreased three-fold, yet returned to nearly the starting point, what should be the problem? What prevents us from simply ignoring that it ever made the trip in the first place? There are several reasons.
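
(Before those reasons, it helps to make the arithmetic of the round trip concrete. A minimal numeric sketch, using a hypothetical price path rather than actual market data:

# Hypothetical price path: up roughly three-fold, then back to the start.
prices = [100, 150, 220, 300, 240, 160, 100]

net_change = (prices[-1] - prices[0]) / prices[0]
print(f"Net change over the period: {net_change:+.1%}")  # 0.0%

# Yet the period-to-period volatility tells the real story.
returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
mean_r = sum(returns) / len(returns)
stdev = (sum((r - mean_r) ** 2 for r in returns) / len(returns)) ** 0.5
print(f"Period-to-period return volatility: {stdev:.1%}")

A net change of zero, alongside violent interim returns: that is the sine wave effect in miniature.)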

First, business and markets are, whether rational or not, heavily influenced by emotion. Much of our economic doctrine and governmental policy in the latter decades of the twentieth century was based on the supposed rationality of markets. And yet, evidence suggests that markets respond not only to rationality but also to emotion. The emotions experienced during the sine wave effect’s journey can become gut-wrenching moments of fear, anxiety, dread and self-doubt. During one such period in 2008, Tessie Lim captured these emotions in the Straits Times:

“There are days when fear strangles me so I can hardly breathe. My chest tightens as if dread itself turns the knot, my stomach spasms and my hands become clammy. All I can think of is that I’m not good enough. What if I fall short of my standards, lose control, and default to my core . . . The effort to go forward seems too heavy a burden. I feel dizzy as I see my life teeter on the brink”

What can cause this type of anxiety? How do we, as participants in markets far and wide, alter our behavior, our propensity to buy, sell, borrow or save, based on the sometimes dramatic swings in our collective and personal emotional experience? Despite the legions of analytical tools applied to modern markets, do we have the ability to discern the effects of emotion from the effects of rationality? Can we spot the effects of emotion when they emerge, and begin to hold sway in previously more rational and orderly markets?

Jared Diamond, in his masterful treatment of the interplay between climate, culture and human history, “Guns, Germs, and Steel”, describes the first arrivals of Spanish ships in the New World as an event that indigenous people hardly recognized. The sighting of a large ship offshore was nearly akin to the arrival of a flying saucer. It’s not that people did not physically see the ships; it’s that they had no frame of reference with which to process such a thing. Almost without exception, the locals did not marshal their defenses, flee, or attempt to scout the true nature of the Spanish ships. The ships were simply so far outside the local set of known phenomena that they literally could not be processed intellectually. Surely, we are more advanced in our own time. Surely, we have a grasp of nearly all the possible phenomena, patterns and possibilities affecting business. Do we have the ability, in our time, to recognize completely unanticipated patterns and events while they are still emerging? If we were over-confident in our ability to see hugely divergent, uncharacteristic events as they develop, would we be able to identify our own over-confidence? How would we know?

While the twentieth century in particular saw great strides in the analytical, mathematical and econometric understanding of markets, very few predicted any of these financial convulsions. We have risk management teams, underwriters, securities analysts, think tanks, government economic agencies, independent auditors, and professional investment advisors. And yet with all this talent, very few of these massive oscillations were predicted. It’s as if we had weather forecasting that gave us very high accuracy as to tomorrow’s high temperature, but proved wholly unable to warn us of a once-in-a-lifetime hurricane. Is it simply that these events are “too big”? Are our early-warning systems somehow targeted unintentionally toward those events we know are possible? When the “impossible” happens, is the event “off the charts” literally, because we’ve constructed charts within a range that we know to include what we assume to be the possible outcomes? If these dynamics are at work in our predictive models, then we may indeed have a situation in which the biggest, most potentially damaging events affecting business and our economy are the very events that we are least prepared to see coming.

Generally, business exists in an environment where there is an assumed range of predictability, whether explicit or not, for most of the major variables affecting the enterprise. When a contract for raw materials is signed, management is making a statement about the expected range of prices for that particular input cost. We are accustomed to prices of consumer goods, raw materials and capital purchases like homes and vehicles all operating in a certain range that allows us all to make commitments and decisions with confidence.

Saturday, June 19, 2010

The Lost Understanding of Risk


You may recall a time when we understood risk (or, at least we believed we did). It was not that long ago, really, but now it's clear just how difficult it is for us to recapture the comfort we once derived from what may have been itself a misplaced confidence in our own sophistication around risk.

THEN:

“Risk Management” managed risk

Until recently, it was believed that common risk management practices, proven in past decades, were sufficient to manage risk. Naturally, a deep water oil exploration and drilling business has far different types of risk than a resort hotel operator. One does not necessarily have “greater risk” than the other, just different risk. Understanding that these practices vary greatly by industry, and that many smaller organizations rely on fairly basic forms of insurance as perhaps their sole risk mitigation technique, there are basic methods that had come to stand for sound risk management in most medium to large sized enterprises. These would have included:
- Risk assessment: examination of both points of exposure, and their respective probabilities, to produce at minimum, a rough ranking of the larger risks
- Risk prevention and mitigation programs to reduce risks outright. An example may be a physical modification that all company vehicles be upgraded with brighter brake lights, or a safety training program for all company employees, etc.
- Risk insurance, usually sized to provide materially meaningful compensation relative to the magnitude of the possible risk and its expected probability.
The reality in most enterprises large and small is that there are many more risks than can be adequately addressed. By the time the organization had acquired insurance sufficient to cover every conceivable event from hurricanes, foreign uprisings and slippery floors in the employee lounge to insect infestations, officer’s liability and unknown propane leaks, a large share of earnings would be flowing to insurance as opposed to shareholders, employees and bondholders. And still there would undoubtedly be risks left unaddressed. So in practice there is a sequence from the obvious and most potentially damaging risks down to lesser and lesser risks. As one works down the list, there is a point at which nearly every company simply determines it will leave the remaining risks in a category generally seen as “self-insured” – that is, the company accepts the fact that if these (perhaps) rare and unusual occurrences actually happened, it would be prepared to pay the consequences out of current cash reserves, rather than pay ongoing premiums, the cost of complete mitigation, or some combination of both.
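
A minimal sketch of that sequencing, with entirely hypothetical risks and thresholds: rank by expected loss (probability times magnitude), address the top of the list, and treat whatever falls below a chosen cutoff as self-insured.

# Hypothetical risks: (name, annual probability, loss magnitude in $M).
risks = [
    ("Hurricane damage", 0.02, 50.0),
    ("Vehicle accidents", 0.30, 2.0),
    ("Officer's liability", 0.01, 20.0),
    ("Slippery floors", 0.40, 0.1),
    ("Propane leak", 0.005, 8.0),
]

# Rank by expected annual loss = probability x magnitude.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

SELF_INSURE_BELOW = 0.1  # $M of expected loss; a judgment call per company
for name, p, loss in ranked:
    expected = p * loss
    action = "insure/mitigate" if expected >= SELF_INSURE_BELOW else "self-insure"
    print(f"{name:<20s} expected loss = {expected:5.2f}  -> {action}")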

Most businesses have high-priority risk areas that are obvious. A railway has multitudes of risks associated with moving thousand-ton trains across the country at high speed in all types of weather conditions. Likewise airlines, electrical utilities, food products companies, pharmaceutical companies and a host of other industries where the top risks are seen as either risks inherent in company operations (vehicles, dangerous chemical processing, etc.) or product/service liability risks (food products, cosmetics, medical clinics, etc.).

However, it turns out that for most companies, the risks in both product liability and in the very nature of company operations themselves, are usually the best understood risks of all. While the conventional risk mitigation and risk insurance practices are necessary, they are only as good as our perception of the likelihood of the types of risks we expect.

What about the risks we do not expect? These include the unusual weather event, civil unrest, and economic upheaval. In the 21st century, even piracy – a risk virtually eradicated before the 20th century – made a comeback as a legitimate and very real risk to shipping operations, the energy industry and even leisure cruises in certain parts of the world.

Do we have adequate ability to assess the likelihood of these more infrequent, “spontaneous” risks? Is there any relationship between our knowledge of a given risk and our willingness to prepare for it? And is there a relationship between our having experienced a risk, within the collective experience represented in our individual enterprise, and the likelihood of that risk actually materializing? These are entirely different questions. But in the 21st century, the answers to them, taken together, have resulted in a breathtaking sweeping away of what we believed we knew about risk management.

NOW:

Only small and medium risks turn out to have been managed by traditional “Risk Management”

One of the most striking consequences of the failure of our older notions of risk management to prove adequate in recent crises is the realization that if a risk event proves large enough, it simply sweeps away our ability to deal with it entirely. While an entire book could be written on the risk management lessons coming out of AIG alone, it is valuable to focus on the AIG case as one clear and recent illustration of older, and now certainly obsolete, notions being swept away.

AIG is also important to understand because here was perhaps the world’s largest risk manager, suddenly unable to manage its own risks. While the complexity of AIG’s operations was enormous and beyond our examination here, the important fact is AIG’s inability to properly assess the likelihood of the risks associated with the risk-management policies it was issuing to its customers. We now understand AIG did not adequately understand risk – even for its own part, let alone on behalf of its clients, those paying fees to AIG primarily for the very purpose of managing risk. While there are endless small and large issues that contributed to the AIG case, the central fact remains that the largest risk-management enterprise in the world was unable to manage its own risks sufficiently to secure its own basic survival.

If the AIG story were an outlier, some one-off unusual circumstance, the lessons to be learned might be different. Having myself been a Partner at Arthur Andersen in the early stages of the Enron collapse, I can well appreciate the differences. Enron was a single large enterprise in the energy trading industry, an industry it had virtually pioneered itself. The practices that brought it down were criminal, as the legal system was ultimately able to determine.

In the case of AIG, however, it appears that policies were issued in an ostensibly legitimate fashion, against risks that (at the time) were believed to have been correctly assessed. After these policies were in place, a sequence of events developed, in this case mortgage defaults, in a pattern, frequency and volume that utterly overwhelmed previous expectations. So much so that even the abject liquidation of the entire enterprise could not have satisfied the claims outstanding.

Was it fraud? Not likely in my view. Similarly convulsive events swept through other large insurers, and both insurer and insured across the globe were caught nearly entirely unprepared. If it were fraud, our lessons learned would be largely about amending the regulatory system to fix newly identified holes, as in the post-Enron measures like Sarbanes-Oxley. In one sense, this would prove easier to adapt to than the real lessons from AIG.

Instead, we are left to attempt to reconcile the existential threat to AIG that emerged from the inability to effectively manage that which it was set up most to manage: risk.

“New” risks:
- Integration of the global financial system, resulting in a domino effect for many core institutions of capitalism
- Risks presented by debt at large, beyond traditional notions of debt which looked primarily at risks associated with an individual borrower and singular loan
- Risks associated with large-scale devaluations of currencies, bond market disruptions and radical, unanticipated shifts in central bank policies
- Risks of institutional failure, including the probability that insurance providers could fail, negating older, conventional notions about the ability to simply underwrite risk based on a contractual obligation from a third party
- Risk of “national default”, as in Iceland, Lithuania and potentially larger economies
- Risks arising from sharp, sometimes record-breaking swings in input costs, most notably commodity pricing, energy cost, raw materials costs, and supplier prices or even mere supplier viability due to supply chain risk associated with input costs
- Risks of certain markets attaining gridlock, such as the commercial real estate market in many countries during the global recession, wherein so little buying and selling went on for several years that notions of fair market value were nearly indefinable
- Risks of major, unexpected shifts in regulation, government policy toward certain industries, and new or radically revised tax programs

With the onset of these new dynamics, risk management is having to remake itself in order to remain actual risk management – that is, an effort that effectively addresses and manages the risks of today’s environment and forward. Not just the small and medium risks, but also the most massive risks, particularly those that threaten the very survival of the firm itself. Risk management practices, to be considered such, must now step up to the sudden and dramatic rise of risks from external economic events, not least the myriad risks flowing from systemic upheavals in the global financial system, including the risk of collapse among the traditional providers of insurance and other risk management tools. This larger view of the risk management function – one that encompasses the management of risks inherent in the very techniques of risk management itself – is perhaps just the starting point for a fully capable approach to risk management, one sufficient to the realities of our current age.

Monday, June 15, 2009

Reengineering - a necessary artifact from the Twentieth century - but what is the meaning of Reengineering in our modern world of management?

Since the 1993 debut of Michael Hammer and James Champy’s book Reengineering the Corporation, the reengineering movement has gained currency in a broad set of industries from manufacturing and aerospace to health care and financial services. Reengineering provided an entire ethos for organizing a corporation around the kinds of processes that produce value, like distribution and production. In this way the reengineering movement served as a catalyst to break down the barriers between previously compartmentalized departmental organizations.

Reengineering thus overturned much of the post-World War II thinking around industrial organization. Interestingly enough, it is now widely accepted that the corporate organizations of the 40s, 50s and beyond served American business well due to their focus on production. In an economic cycle dominated by household formation and the rapid rise of consumerism, the business enterprise that could out-produce its competition in sheer numbers, while capturing additional economies of scale along the way, could emerge the winner. Many of these organizational models were offshoots of the need for massive volume production of aircraft, tanks, ammunition, ships and vehicles for the war effort of the early ’40s. A departmentalized structure with tightly defined functional responsibilities, where ambiguity was reduced to a minimum, facilitated volume production.

So from a reengineering perspective, it is at one level remarkable that the post-war form of organization served American business so well for so long. It may also be argued that the reengineered enterprise advocated by Hammer and Champy was the early emergence of a set of structural alternatives, and a fundamental way of analyzing a corporation, that put a premium on flexibility. Philosophically, the proponents of reengineering operated on a belief that flexibility itself had value, beyond the issue of in what direction the organization might have to flex.

This flexibility for its own sake allowed a sort of “contingency capability” for business to respond to as yet unforeseen circumstances. The value of this capability is, of course, inherently in proportion to our beliefs about just how dynamic an environment we are likely to face. For many, even those who adopted reengineering wholeheartedly, this basic principle of flexibility was a subtlety that went unappreciated.

One of the downsides of reengineering was that it became associated with substantial layoffs through “downsizing”, which in many cases became a direct byproduct of reengineering. While this was not the intent of the reengineering movement, it may in retrospect be a case where reengineering became a pretext for layoffs and workforce reductions that were likely coming under any circumstances.

Thursday, January 1, 2009

Strategy Valuation

Valuation of Strategies – formal and informal

We know that strategies can be valued based on options analysis, just as options trading markets analyze and determine value. In this way of valuing strategies, having the option of executing or implementing a given strategy has value itself, whether or not the company chooses to act on that specific strategy. Note that to produce this “option value”, the strategy needs to be realistic and something the company could actually put into place (executing the option), with a defined set of steps and costs.

An example to highlight this idea of option value is a simple case of expanding warehouse facilities. Periodically, Acme Widgets surveys available warehouse space that meets its criteria for rail access, loading dock facilities, square footage, environmental conditions and accessibility to the company’s other facilities. While it has not negotiated for any of the potential facilities, it has sufficiently qualified the options (and revisited them on a recurring basis), such that at any time, within roughly 30 days, Acme could substantially expand its warehouse facilities in response to certain triggering conditions. Therefore, the existence of the option itself has value, and in this example, at very minimal cost.
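
A minimal sketch of that option value, in the spirit of a one-period real-options calculation (every figure below is hypothetical):

# One-period real-option sketch for the Acme warehouse example.
# All figures hypothetical, in $K.
p_trigger = 0.25              # probability the triggering condition occurs
payoff_if_exercised = 400.0   # incremental earnings if we expand in time
exercise_cost = 120.0         # cost of executing the expansion
carry_cost = 5.0              # cost of keeping the option alive (surveys, re-qualification)

# As with a financial option, we exercise only when the payoff exceeds
# the exercise cost, so the payoff term is floored at zero.
option_value = p_trigger * max(payoff_if_exercised - exercise_cost, 0.0) - carry_cost
print(f"Option value: {option_value:.1f}")  # 65.0

So long as the option value stays positive, the recurring survey work pays for itself even in the years the option is never exercised.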

A next step on strategy valuation is validating the accountabilities for realizing and claiming the associated value. Quite often, specific functional areas within the business will clamor for individual strategies:
- Marketing wants a new, top-tier ad campaign
- Operations or manufacturing wants a major retooling, or updated plant & equipment
- Sales is advocating a new compensation structure and incentive program
The examples are endless, and vary over time. In most of these cases, the “advocacy” of the particular strategy comes without a corresponding commitment from the very group advocating it as to the value creation to be expected, in terms of increased revenues, direct contribution to earnings, and/or reduction in existing expense.

A practical step to solve for this, one rarely taken in more theoretical strategy development, is to make certain to “subscribe” value. This requires a formal commitment on the part of the functional areas advocating a particular strategy that the department or function in question will indeed be able to capture the incremental value required, if provided with the specific capabilities in question. If value commitments are far less than the incremental value required to make the given strategy a success, then either additional supporting value will have to be sought from elsewhere, or the strategy will have to be reworked or set aside for now.
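
A minimal sketch of the subscription test (the department names and figures are hypothetical): sum the formal value commitments and compare them to the incremental value the strategy requires.

# Hypothetical value commitments ($M) from the areas advocating a
# proposed strategy, versus the incremental value it must create.
required_value = 12.0
commitments = {
    "Marketing": 4.0,
    "Sales": 3.5,
    "Operations": 2.0,
}

subscribed = sum(commitments.values())
if subscribed >= required_value:
    print("Strategy fully subscribed; proceed to implementation planning.")
else:
    shortfall = required_value - subscribed
    print(f"Shortfall of {shortfall:.1f}: seek additional supporting value, "
          "rework the strategy, or set it aside for now.")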

As a related opportunity for better understanding value in specific strategies, we have found that gathering valuation input from related, or even seemingly unrelated, functional areas in the organization provides a practical qualification of our value projections. The essential question is: “If you had this capability, what do you think could be done with it?” If the answers contain a whole host of virtuous outcomes but nothing that has an impact on actual expenses, expansion of revenues, or advancement of earnings in some fashion, then we have to question the strategy, even if analytical evaluation shows otherwise.

When we seek out practical strategies, part of what we need to accomplish is that the strategies can be implemented, owned, and made part of the daily operating fabric of the enterprise. In order for this to happen, we simply must have the people who will accomplish the outcome of the strategy prepared to realize a sufficiently compelling value.

Finally, we benefit by having an independent review of strategy proposals. This serves to validate or improve the valuation of each strategy. This valuation should be conducted by an uninvolved party – in most cases the corporate finance function. Taken together with value commitments from the functional areas affected and the value projections from those proposing the strategy, this gives senior management three distinct data points to evaluate each specific major strategy.

Wednesday, November 26, 2008

A New Stage of Progression for Technology?

With the new administration in Washington, it may be useful to look at what has engendered new stages of technological innovation and adoption. The US economy has always tended to thrive in periods associated with the rise of certain key technologies: the Railroad, the Automobile, the Telephone, the Computer, the Internet. Where are we now, and where do we go to identify and develop the next innovations and advances that will support growth from here forward?

In the case of the internal combustion engine and the automobile, the basic automobile appeared late in the 19th century, with the factory-produced version of what we would still recognize as an automobile – with pneumatic tires, transmission, and conventional steering - appearing in 1902. Vast improvements have been made since then, yet even after endless layers of enhancements, the basic architecture and technology of a modern-day Corvette, Lexus or Lincoln is based on the original design worked out by people like Ransom Olds, Karl Benz and Henry Ford in their day.
The same may be said for modern civilian aircraft. Jet engines, avionics and airframe design are the product of relentless improvement techniques over the last 60 years, but the basic nature of a passenger jet aircraft derives from 1954. One might well chart the path of the cell phone, the personal computer, and other technologies in the same manner.
Understanding these natural cycles of fundamental innovation, followed by decades of enhancement, improvement and fine-tuning, it may be that our development of business models, techniques and operating methods has followed a similar path. In each of these instances, then:
- Improvements are achieved from a wide variety of sources
- Once improvements are gained by one party, the others identify them and appropriate them rapidly
- Each round of improvements produces its own advances, yet the incremental advance gained from each successive round of improvements is significantly less than from the improvements in the early years. (In the 1920s, cars went to enclosed passenger compartments, electric starters and modern suspensions. By the 1990s, we went from rubber motor mounts to fluid motor mounts. Consider also the advance from MS-DOS in the mid-80s to Microsoft Windows, as opposed to the 2007 “advance” from Windows XP to Windows Vista.)
If we apply this concept to our business practices and methods, it may look something like this:

Innovation ==> Evolution ==> Improvement ==> Enhancement ==> Entropy ==> Innovation ==>

Looked at in this fashion, it may be that most of the basic features of industrial organization that we recognize today – the classic organization chart, the role of a Board of Directors, mass production, the assembly line, and the basics of wage and salary administration – showed up in the “evolution” phase.
Following the model, much of the internal arts showed up fairly late in the game, primarily in the “enhancement” phase. I would also suggest that in some ways, and in certain industries (airlines?), we may already be well into the “entropy” phase. Indeed, in sectors of our economy facing chronic entropy, it is entirely likely that several of the internal arts are being applied not in a fashion that truly results in enhancement, but merely to stave off or slow the acceleration of entropy.
In this way, we may be at the close of an era that has seen the wide-ranging enhancement of our business techniques and operational methods, many of them internal, yet perhaps we are overdue for a re-evaluation of the type that would bring us to a new foundation, a new starting point in certain ways.
To test this likelihood, we might well examine the alternative: that we could simply preserve our same basic business models and go on improving them as we have been, perhaps for decades more. To evaluate this concept properly is truly the subject of a different book, and several books already written attempt to get at some of these issues. For our purposes here, we’re best served to recognize that there are such things as moments of fundamental change. The migration from ancient monarchies to modern democracies, the American Revolution, the end of World War II, the rise of information technology, the creation of open markets for securities, the arrival of global telecommunications – these were all fundamental changes. In each instance, there is a residual faction that attempts to hang on and even preserve the fading structures. But at some stage, it becomes apparent and even obvious that a fundamental shift has taken place.