Working Paper
July 25, 2002
Glyn A. Holton
Contingency Analysis
glyn@contingencyanalysis.com
http://www.contingencyanalysis.com
Purpose:
This working paper is being distributed to solicit comments, recollections and
anecdotes from regulators and market participants who worked with VaR or related
risk measures prior to 1993. Please forward any comments directly to the author.
Topics of particular interest are:
· early implementations of VaR or VaR-like measures in trading environments
during the 1970’s or 1980’s;
· the extent to which industry practice (existing risk measures used in trading
environments) influenced the SEC’s Uniform Net Capital Rule and the SFA’s
1992 capital rule;
· early use (especially during the 1980’s) of names such as “value-at-risk”,
“capital-at-risk” and “dollars-at-risk”—which name arose first?
· papers published prior to 1993 that mention or describe VaR measures.
During the 1990’s, Value-at-Risk (VaR) was widely adopted for measuring
market risk in trading portfolios. Its origins can be traced back as far as
1922 to capital requirements the New York Stock Exchange imposed on
member firms. VaR also has roots in portfolio theory and a crude VaR
measure published in 1945. This paper traces this history to 1998, when
banks started using proprietary VaR measures to calculate regulatory
capital requirements.
We define VaR as a category of probabilistic measures of market risk. Consider a
portfolio with fixed holdings. Its current market value is known. Its market value at some
future time—say one day or one month in the future—is a random variable. As a random
variable, we may ascribe it a probability distribution. A VaR metric is a function of:
1. that distribution and
2. the portfolio’s current market value.
With this definition, variance of return, standard deviation of P&L and .95-quantile of
loss are all VaR metrics. We define a VaR measure as any procedure that, given a VaR
metric, assigns values for that metric to portfolios.
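To make the definition concrete, here is a minimal sketch, with invented numbers that are not drawn from any source discussed in this paper, computing two VaR metrics for a portfolio whose future market value is assumed normally distributed:

```python
import statistics

# Assumed inputs (illustrative only): a portfolio whose market value one
# day ahead is modeled as normal.
current_value = 100.0   # known current market value
mean_future = 100.0     # assumed mean of the future-value distribution
stdev_future = 2.0      # assumed standard deviation of future value

# Metric 1: standard deviation of P&L (P&L = future value - current value).
stdev_pnl = stdev_future

# Metric 2: the .95-quantile of loss. For a normal distribution this is
# -(mean P&L) plus roughly 1.645 standard deviations of P&L.
mean_pnl = mean_future - current_value
var_95 = -mean_pnl + statistics.NormalDist().inv_cdf(0.95) * stdev_pnl
```

Both numbers are functions of the assumed distribution and the portfolio’s current value, which is what makes each a VaR metric under the definition above.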
Early VaR measures developed along two parallel lines. One was portfolio theory, and
the other was capital adequacy computations. Bernstein (1992) and Markowitz (1999)
have documented the history of VaR measures in the context of portfolio theory. This
paper reviews that material only briefly. It focuses primarily upon the development of
VaR measures in the context of capital adequacy computations.
The Leavens VaR Measure
The origins of portfolio theory can be traced to non-mathematical discussions of portfolio
construction. Authors such as Hardy (1923) and Hicks (1935) discussed intuitively the
merits of diversification. Leavens (1945) offered a quantitative example, which may be
the first VaR measure ever published.
Leavens considered a portfolio of ten bonds over some horizon. Each bond would either
mature at the end of the horizon for USD 1,000 or default and be worthless. Events of
default were assumed independent. Measured in USD 1,000’s, the portfolio’s value at the
end of the horizon had a binomial distribution.
Writing for a non-technical audience, Leavens did not explicitly identify a VaR metric,
but he mentioned repeatedly the “spread between probable losses and gains.” He seems to
have had the standard deviation of portfolio market value in mind. Based upon this
metric, his portfolio had a VaR of USD 948.69.
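Leavens’ computation is easy to reproduce. He did not state a default probability in the passage above, but USD 948.69 is consistent with a 10% chance of default per bond, so the reconstruction below (my assumption, not Leavens’ stated figure) uses that value:

```python
from math import sqrt

n_bonds = 10
face = 1000.0     # USD payoff if a bond matures
p_default = 0.10  # assumed; consistent with the USD 948.69 figure quoted above

# With independent defaults, the number of maturing bonds is binomial, so
# the standard deviation of portfolio value -- Leavens' "spread" -- is:
stdev_value = face * sqrt(n_bonds * p_default * (1 - p_default))
# stdev_value is approximately USD 948.68, matching the quoted figure up
# to rounding.
```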
The Markowitz and Roy VaR Measures
Markowitz (1952) and, three months later, Roy (1952) independently published VaR
measures that were surprisingly similar. Each was working to develop a means of
selecting portfolios that would, in some sense, optimize reward for a given level of risk.
For this purpose, each proposed VaR measures that incorporated covariances between
risk factors in order to reflect hedging and diversification effects. While the two measures
were mathematically similar, they support different VaR metrics. Markowitz used a
variance of simple return metric. Roy’s metric was an upper bound on the probability
of the portfolio’s gross return being less than some specified “catastrophic return.”
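The two metrics can be sketched side by side with assumed numbers that appear in neither paper. Markowitz’s metric is the quadratic form w′Σw; Roy’s upper bound follows from Chebyshev’s inequality, written here for simple rather than gross returns, which merely shifts the catastrophic level by one:

```python
# Assumed two-asset example (illustrative numbers only).
weights = [0.6, 0.4]
means = [0.08, 0.12]             # expected simple returns
cov = [[0.04, 0.01],
       [0.01, 0.09]]             # covariance matrix of returns

# Markowitz's metric: variance of simple return, w' * Cov * w.
var_p = sum(weights[i] * cov[i][j] * weights[j]
            for i in range(2) for j in range(2))

# Roy's metric: a Chebyshev upper bound on the probability that the
# return falls below an assumed "catastrophic" level d.
mean_p = sum(w * m for w, m in zip(weights, means))
d = -0.10
roy_bound = var_p / (mean_p - d) ** 2
```

The bound is loose, as Chebyshev bounds are, but it requires no distributional assumption beyond a mean and a variance.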
Both Markowitz and Roy skirted the issue of how probabilistic assumptions might be
specified. Roy’s VaR measure required a mean vector and covariance matrix for risk
factors. He observed that these must be “estimated from information about the past”.
Markowitz’s VaR measure required only a covariance matrix for risk factors. He
proposed that this be constructed using procedures that would be called “Bayesian”
today:
These procedures, I believe, should combine statistical techniques and the judgment
of practical men.
In a (1959) book, Markowitz elaborated, dedicating an entire chapter to the construction
of subjective or “personal” probabilities, as developed by Savage (1954).
Early Innovations
Markowitz and Roy intended their VaR measures for practical portfolio optimization
work. Markowitz’s (1959) book is a “how-to” guide to his optimization scheme, boldly
describing for a non-technical audience computations that would remain infeasible until
processing power became more available during the 1970’s. Markowitz was aware of this
problem and proposed a more tractable VaR measure that employed a diagonal
covariance matrix. William Sharpe described this VaR measure in his Ph.D. thesis and a
(1963) paper. The measure is different from, but helped motivate Sharpe’s (1964) Capital
Asset Pricing Model (CAPM).
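The tractability gain from a diagonal model is easy to illustrate. Under a single-index assumption (a sketch with invented numbers, in the spirit of Sharpe (1963) rather than a transcription of it), portfolio variance needs only each asset’s index sensitivity and residual variance, never a full covariance matrix:

```python
# Assumed inputs for three assets (illustrative only).
betas = [0.8, 1.1, 1.3]         # sensitivities to the single index
resid_var = [0.02, 0.03, 0.05]  # residual (asset-specific) variances
var_index = 0.04                # variance of the index
weights = [0.5, 0.3, 0.2]

# Var = (sum_i w_i*beta_i)^2 * var_index + sum_i w_i^2 * resid_var_i,
# because residuals are assumed independent of each other and of the index.
beta_p = sum(w * b for w, b in zip(weights, betas))
var_p = beta_p ** 2 * var_index + sum(w * w * s
                                      for w, s in zip(weights, resid_var))
```

For n assets this needs only 2n + 1 parameters instead of the n(n + 1)/2 entries of a full covariance matrix.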
Because of the limited availability of processing power, VaR measures from this period
were largely theoretical, and were published primarily in the context of the emerging
portfolio theory. This encompassed the work of Tobin (1958), Treynor (1961), Sharpe
(1964), Lintner (1965) and Mossin (1966). The VaR measures they employed were best
suited for equity portfolios. There were few alternative asset categories, and applying
VaR to these would have raised a number of modeling issues. Real estate cannot be
marked to market with any frequency, making VaR impractical. Applying VaR to either
debt instruments or futures contracts entails modeling term structures. Also, debt
instruments raise issues of credit spreads. Futures that were traded at the time were
primarily for agricultural products, which raise seasonality issues. Schrock (1971) and
Dusak (1972) described simple VaR measures for futures portfolios, but neither
addressed term structure or seasonality issues.
Lietaer (1971) described a practical VaR measure for foreign exchange risk. He wrote
during the waning days of fixed exchange rates when risk manifested itself as currency
devaluations. Since World War II, most currencies had devalued at some point; many had
done so several times. Governments were secretive about planned devaluations, so
corporations maintained ongoing hedges. Lietaer (1971) proposed a sophisticated
procedure for optimizing such hedges. It incorporated a VaR measure with a variance of
market value VaR metric. It assumed devaluations occurred randomly, with the
conditional magnitude of a devaluation being normally distributed. Computations were
simplified using a modification of Sharpe’s (1963) model. Lietaer’s work may be the first
instance of the Monte Carlo method being employed in a VaR measure.
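A devaluation model of this shape has a simple closed form for a variance of market value metric. The sketch below uses invented parameters; it illustrates the structure described above, not Lietaer’s actual calibration:

```python
from math import sqrt

p = 0.05                # assumed probability of a devaluation over the horizon
mu, sigma = 0.15, 0.03  # assumed conditional mean and std dev of its size
exposure = 1_000_000.0  # unhedged position, in USD

# Loss = exposure * B * X, with B ~ Bernoulli(p) and X ~ Normal(mu, sigma):
#   E[B*X]   = p*mu
#   Var[B*X] = p*(sigma^2 + mu^2) - (p*mu)^2
mean_loss = exposure * p * mu
var_loss = exposure ** 2 * (p * (sigma ** 2 + mu ** 2) - (p * mu) ** 2)
stdev_loss = sqrt(var_loss)
```

Note how most of the variance comes from the uncertainty of whether a devaluation occurs at all, rather than from the spread of its conditional size.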
Twenty Years of Change
The 1970s and 1980s wrought sweeping changes for markets and technology. For VaR,
these had the combined effect of:
· expanding the universe of assets to which VaR might be applied;
· changing how organizations took risk; and
· providing the means to apply VaR in these new contexts.
When the Bretton Woods agreement collapsed in 1971, exchange rates were allowed to
float, and an active foreign exchange forward market soon emerged. Today, this is the
largest forward market in the world.
OPEC’s dominance of world oil supplies led to two oil crises, which sent crude prices
skyrocketing from USD 2 to USD 35 during the 1970s. Oil markets, which had been the
province of a handful of global oil companies, were rapidly liberalized to counter the
state pricing of OPEC.
Shortages of natural gas led the US Government to pass the 1978 Natural Gas Policy
Act (NGPA). This started an eleven-year process of deregulation that transformed
pipeline companies from distributors into transporters and opened the door for
competition among independent marketing and distribution companies. Later, European
natural gas markets and world electricity markets would experience similar liberalization.
Floating exchange rates, rampant inflation and monetarist experiments by the US Federal
Reserve caused USD interest rates to gyrate. At the same time, archaic regulations
capping the interest rates banks could pay depositors were incompatible with the high
interest rates investors demanded, so the market for USD deposits migrated overseas.
The US Federal Government embarked on a period of
staggering budget deficits that lasted through the end of the century. This propelled a
huge market for US Treasury securities. Disintermediation roiled the banking industry as
borrowers sought financing directly from securities markets. New markets for US and
Euro medium-term notes (MTNs) grew rapidly. Investment bank Drexel Burnham
popularized the use of high-yield bonds in corporate restructurings. The mortgage
pass-through market grew dramatically and spawned markets for collateralized mortgage
obligations (CMOs), strips and related instruments. First Boston underwrote the first
asset-backed security (ABS) in 1985, launching a vibrant market for securitized loans,
leases and revolving debt.
The Chicago Mercantile Exchange (CME), which had long traded agricultural futures,
introduced financial futures contracts. First came currency futures in 1972 and then US
Treasury bill futures in 1975. Over time, futures contracts on bonds, deposits, indexes
and currencies came to trade on exchanges around the world.
Currency and interest rate swaps were introduced in the early 1980s, starting with a
currency swap arranged by Salomon Brothers in 1981 between the World Bank and IBM.
Chase Manhattan Bank introduced the first commodity swap in 1986, and Bankers Trust
introduced the first equity swap in 1989.
In 1973, Black and Scholes published their groundbreaking option-pricing model. That
same year, the first registered options exchange, the Chicago Board Options Exchange
(CBOE), opened for business.
Starting in the early 1980s, a market for over-the-counter (OTC) options gradually
formed. Dealers experimented with new and “exotic” structures, including swaptions,
caps, floors, Asian options, barrier options and lookback options. Initially, underliers
were financial assets such as equities or currencies, but derivatives were soon introduced
on oil and other commodities. By the close of the decade, volumes were mounting.
Perhaps the greatest consequence of the financial innovations of the 1970s and 1980s was
the proliferation of leverage. Prior to 1970, avenues for compounding risk were limited.
With the proliferation of new instruments, opportunities for leverage abounded. Not only
new instruments, but new forms of transactions also offered leverage. Commodity
leasing, securities lending, repos and short sales are leveraged transactions. All of these
either did not exist or had limited use prior to 1970.
Within organizations, leveraging decisions became decentralized. Portfolio managers,
traders, product managers and even salespeople acquired the tools of leverage.
Transactions were implemented with a phone call. A single derivatives trader might
leverage or deleverage his employer a hundred times a day.
As leverage proliferated, trading organizations sought new ways to manage risk taking. In
turn, this motivated a need for new measures of risk. The traditional risk metrics of
financial accounting were ineffective, especially when applied to derivatives. Exposure
metrics such as duration, convexity, delta, gamma, and vega were widely adopted, but
were primarily of tactical value. Trading organizations started to resemble a Tower of
Babel, with each trading desk adopting risk metrics suitable for its own transactions.
Even when two desks adopted similar metrics, there was no means of measuring their
aggregate risks—you can’t aggregate a crude oil delta with a JPY delta. Organizations
increasingly needed a single risk metric that could be applied consistently across asset
categories.
By 1990, a single processor could easily perform the most complex analyses proposed by
Markowitz (1959). The age of the mainframe was waning. Personal computers were
ascendant. Financial firms were embracing technology and applying it to an expanding
range of tasks.
Another important development was the rapid growth of a financial data industry.
Reuters, Telerate, Bloomberg and more specialized firms started compiling databases of
historical prices. These would provide the raw data needed to specify probabilistic
assumptions used by VaR measures.
As the 1970s turned to the 1980s, markets were becoming more volatile. Firms were
becoming more leveraged, and the need for financial risk measures, such as VaR, was
growing. The resources to implement VaR were becoming available, but VaR remained
primarily a theoretical tool of portfolio theory. Firms needed some way to measure
market risk across disparate asset categories, but did not recognize how VaR might fill
this need.
Origins of Regulatory Capital Requirements
Prior to 1933, US securities markets were largely self-regulated. As early as 1922, the
New York Stock Exchange (NYSE) imposed its own capital requirements on member
firms.1 Firms were required to hold capital equal to 10% of assets comprising proprietary
positions and customer receivables.
By 1929, the NYSE capital requirement had developed into a requirement that firms hold
capital equal to:
· 5% of customer debits;
· a minimum 10% on proprietary holdings in government or municipal bonds;
· 30% on proprietary holdings in other liquid securities; and
· 100% on proprietary holdings in all other securities.
This anticipated today’s capital requirements for securities firms. As we shall see, it
evolved into the VaR measures that firms use today.
During October 1929, the US stock market crashed.2 The crisis spilled into the banking
system: many banks had invested in securities or financed their clients’ stock market
investments. Fearing that banks would be unable to repay money in their accounts,
depositors staged a “run” on banks. Thousands of US banks failed.
1 See Dale (1996), pp. 60-61.
2 As measured by the Dow Jones Industrial average.
The Roaring ‘20s were over, and the Great Depression had begun. During this period, the
US Congress passed legislation designed to prevent abuses of the securities markets and
to restore investors’ confidence.
The 1933 Banking Act combined a bill sponsored by Representative Steagall to
establish federal deposit insurance with a bill sponsored by Senator Glass to segregate
the banking and securities industries. It distinguished between:
· commercial banking, which is the business of taking deposits and making loans, and
· investment banking, which is the business of underwriting and dealing in securities.
All banks were required to select one of the two roles and divest businesses relating to the
other. Chase National Bank and the National City Bank both dissolved their securities
businesses. Lehman Brothers dissolved its depository business. The First National Bank
of Boston split off its securities business to form First Boston. JP Morgan elected to be a
commercial bank, but a number of managers departed to form the investment bank
Morgan Stanley.
The 1933 Securities Act focused on primary markets, ensuring disclosure of pertinent
information relating to publicly offered securities. The 1934 Securities Exchange Act
focused on secondary markets, ensuring that parties who trade securities—exchanges,
brokers and dealers—act in the best interests of investors. Certain securities—including
US Treasury and municipal debt—were largely exempt from either act’s provisions.
The Securities Exchange Act established the Securities and Exchange Commission (SEC)
as the primary regulator of US securities markets. In this role, the SEC gained regulatory
authority over securities firms,4 which include investment banks as well as non-banks
that broker and/or deal non-exempt securities.5 The 1938 Maloney Act clarified this role,
providing for self-regulating organizations (SRO’s) to exercise direct oversight of
securities firms under the supervision of the SEC. SRO’s came to include the National
Association of Securities Dealers (NASD) as well as national and regional exchanges.
The original Securities Exchange Act imposed a modest capital requirement on securities
firms. It required firms to not incur aggregate indebtedness in excess of 2,000% of their
net capital. This requirement limited credit available for stock market speculation, but its
primary purpose was to ensure that securities firms had sufficient liquidity to meet
obligations to clients. For this reason, the act excluded non-liquid fixed assets and
exchange memberships from a firm’s net capital.
4 This authority originally applied only to firms that were members of securities exchanges or who
transacted business through an exchange member. In 1938, Congress amended the Securities Exchange
Act, extending the SEC’s authority over all securities firms transacting in non-exempt
securities.
5 Separate banking regulators oversaw commercial banks. Banks were chartered either by the Federal
Government or by states. Federally-chartered banks were primarily regulated by the Office of the
Comptroller of the Currency (OCC). State-chartered banks were primarily regulated by respective state
regulatory agencies. In addition, most banks were required to be members of the Federal Deposit Insurance
Corporation (FDIC) and most were members of the Federal Reserve System (the Fed).
In 1938, the Securities Exchange Act was modified to allow the SEC to impose its own
capital requirements on securities firms, so the SEC started to develop a Net Capital Rule.
In 1944, the SEC exempted from this capital rule any firm whose SRO imposed more
comprehensive capital requirements. Capital requirements the NYSE imposed on
member firms were deemed to meet this criterion.
In 1944, the SEC modified its Net Capital Rule to subtract from net capital 10% of the
market value of most proprietary securities positions held by a firm. This haircut
afforded a margin of safety against market losses that might arise during the time it
would take to liquidate such positions. In 1965, the haircut for equity securities was
increased to 30%.
The Paperwork Crisis
Between 1967 and 1970, the NYSE experienced a dramatic increase in trading volumes.
Securities firms were caught unprepared, lacking the technology and staff to handle the
increased workload. Back offices were thrown into confusion trying to process trades and
maintain client records. Errors multiplied, causing losses. For a while, this “paperwork
crisis” was so severe that the NYSE reduced its trading hours and even closed one day a
week. In 1969, the stock market fell just as firms were investing heavily in back office
technology and staff. Trading volumes dropped, and the combined effects of high
expenses, decreasing revenues and losses on securities inventories proved too much for
many firms. Twelve firms failed, and another 70 were forced to merge with other firms.
The NYSE trust fund, which had been established in 1964 to compensate clients of failed
member firms, was exhausted.
In the aftermath of the paperwork crisis, Congress founded the Securities Investor
Protection Corporation (SIPC) to insure client accounts at securities firms. It also
amended the Securities Exchange Act to require the SEC to implement regulations to
safeguard client accounts and establish minimum financial responsibility requirements
for securities firms.
As a backdrop to these actions, it came to light that the NYSE had failed to enforce its
own capital requirements against certain member firms at the height of the paperwork
crisis. With its trust fund failing, it is understandable that the NYSE didn’t want to push
more firms into liquidation. This inaction would mark the end of SROs setting capital
requirements for US securities firms.
The SEC’s Uniform Net Capital Rule
In 1975, the SEC updated its capital requirements, implementing a Uniform Net Capital
Rule (UNCR) that would apply to all securities firms trading non-exempt securities. As
with earlier capital requirements, the capital rule’s primary purpose was to ensure that
firms had sufficient liquid assets to meet client obligations. Firms were required to detail
their capital calculations in a quarterly Financial and Operational Combined Uniform
Single (FOCUS) report.
As with the SEC’s earlier capital requirement, haircuts were applied to proprietary
securities positions as a safeguard against market losses that might arise during the time it
would take to liquidate such positions. However, the system of haircuts was completely
redesigned. Financial assets were divided into 12 categories such as government debt,
corporate debt, convertible securities, preferred stock, etc. Some of these were further
broken down into subcategories primarily according to maturity. To reflect hedging
effects, long and short positions were netted within subcategories, but only limited
netting was permitted within or across categories. An additional haircut was applied to
any concentrated position in a single asset.
Haircut percentages ranged from 0% for short-term treasuries to, in some cases, 30% for
equities. Even higher haircuts applied to illiquid securities. The percentages were
apparently based upon the haircuts banks were applying to securities held as collateral.6
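The mechanics of such a haircut scheme can be sketched as follows. The categories, subcategories and percentages here are invented for illustration; they are not the SEC’s actual schedule:

```python
from collections import defaultdict

# (category, maturity subcategory, signed market value in USD); shorts negative.
positions = [
    ("corporate_debt", "0-1y", 500_000.0),
    ("corporate_debt", "0-1y", -300_000.0),   # nets against the long above
    ("corporate_debt", "5-10y", 200_000.0),   # no netting across subcategories
    ("equity", "all", 400_000.0),
]
haircut_pct = {"corporate_debt": 0.07, "equity": 0.30}  # assumed rates

net = defaultdict(float)
for cat, sub, mv in positions:
    net[(cat, sub)] += mv   # long and short net only within a subcategory

haircut = sum(abs(mv) * haircut_pct[cat] for (cat, sub), mv in net.items())
```

Because netting stops at the subcategory boundary, the long 5–10 year position attracts a full haircut even though a short elsewhere in the same category might partially hedge it.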
In 1980, extraordinary volatility in interest rates prompted the SEC to update the haircut
percentages to reflect the increased risk. This time, the SEC based percentages on a
statistical analysis of historical security returns. The goal was to establish haircuts
sufficient to cover, with 95% confidence, the losses that might be incurred during the
time it would take to liquidate a troubled securities firm—a period the SEC assumed to
be 30 days.7 Although it was presented in the archaic terminology of “haircuts”, the
SEC’s new system was a rudimentary VaR measure. In effect, the SEC was requiring
securities firms to calculate one-month 95% VaR and hold extra capital equal to the
indicated value.
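In modern terms, setting a haircut this way amounts to historical estimation of a quantile of loss. A minimal sketch, with invented return data rather than the SEC’s actual analysis:

```python
import math

# Assumed monthly returns for one asset category (illustrative only).
monthly_returns = [0.021, -0.034, 0.008, -0.051, 0.017, -0.012,
                   0.043, -0.027, 0.005, -0.060, 0.031, -0.019,
                   0.012, -0.008, 0.026, -0.041, 0.009, -0.015,
                   0.038, -0.023]
position_value = 100_000.0

# Losses in USD, sorted ascending; gains appear as negative losses.
losses = sorted(-r * position_value for r in monthly_returns)

# Empirical 95% quantile of loss: the observation at rank ceil(0.95*n),
# one simple convention among several.
n = len(losses)
haircut = losses[math.ceil(0.95 * n) - 1]
```

A haircut set at this quantile would cover the monthly loss on the position in roughly 19 months out of 20, which is the sense in which the SEC’s 1980 percentages were a 95% confidence standard.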
US securities firms became accustomed to preparing FOCUS reports, and started using
them for internal risk assessments. Soon they were modifying the SEC’s VaR measure
for internal use. Because they were used for internal purposes, there is limited
information on the measures customized by specific firms. One interesting document is a
letter from Stephen C. Francis (1985) of Fischer, Francis, Trees & Watts to the Federal
Reserve Bank of New York. It describes a VaR measure similar to the
SEC’s but employing more asset categories, including 27 categories for cash market
Treasuries alone. He notes:
We find no difficulty utilizing on an essentially manual basis the larger number of
categories, and indeed believe it necessary to capturing accurately our gross and
net risk exposures.
Over time, securities firms found a variety of uses for these proprietary VaR measures.
An obvious use was to provide a measure of a firm’s overall market risk on an ongoing
basis. Related applications were to calculate internal capital requirements and to support
market risk limits.
6 See Dale (1996), p. 78.
7 See Securities and Exchange Commission (1980) and Dale (1996), pp. 78, 80.
Garbade’s VaR Measures
During the 1980’s, Kenneth Garbade worked in the Bankers Trust Cross Markets
Research Group, developing sophisticated modeling techniques for fixed income markets.
As part of the firm’s marketing efforts, he prepared various research reports for
distribution to institutional clients. Two of these, Garbade (1986, 1987), described
sophisticated VaR measures for assessing internal capital requirements.8 Garbade (1986)
noted:
In view of the importance of risk assessment and capital adequacy to regulatory
agencies and market participants, it is not surprising that many analysts have tried to
devise procedures for computing risk and/or capital adequacy which are (a)
comprehensive and (b) simple to implement. Without exception, however, those who
make the effort quickly discover that the twin goals of breadth and simplicity are
seemingly impossible to attain simultaneously. As a result, risk and capital adequacy
formulas are either complex or of limited applicability, and are sometimes both.
Garbade’s (1986) VaR measures modeled each bond based upon its price sensitivity to
changes in yield—its “value of a basis point.” Portfolio market values were assumed
normally distributed. Given a covariance matrix for yields at various maturities, the
standard deviation of portfolio value was determined. With this characterization, VaR
metrics—including standard deviation of loss and .99-quantile of loss—were calculated.
Principal component analysis was used to reduce the dimensionality of the problem.
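In outline, the variance step of such a measure reduces to a quadratic form in the bonds’ basis-point values. The sketch below uses invented numbers and assumes a zero mean for P&L; it illustrates the structure described, not Garbade’s actual parameters:

```python
import statistics
from math import sqrt

# Assumed inputs: two yields, each bond position summarized by its
# value of a basis point (DV01, USD change per 1bp rise in the yield).
dv01 = [-450.0, -120.0]
yield_cov_bp = [[25.0, 15.0],    # covariance matrix of yield changes
                [15.0, 36.0]]    # over the horizon, in bp^2

# Standard deviation of portfolio value: sqrt(d' * Cov * d).
var_value = sum(dv01[i] * yield_cov_bp[i][j] * dv01[j]
                for i in range(2) for j in range(2))
stdev_value = sqrt(var_value)

# With portfolio value assumed normal and mean P&L of zero,
# the .99-quantile of loss follows directly:
q99_loss = statistics.NormalDist().inv_cdf(0.99) * stdev_value
```

With hundreds of yields instead of two, the covariance matrix becomes unwieldy, which is where principal component analysis earns its keep.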
Garbade (1987) extended this work. He introduced a bucketing scheme that allowed him
to remap a large portfolio of bonds as a smaller portfolio of representative bonds. He
introduced a technique to disaggregate a portfolio’s risk and allocate it among multiple
profit centers.
Garbade’s papers attracted little attention. They were circulated only to institutional
clients of Bankers Trust. For prospective buy-side users of VaR, they were years ahead of
their time. For Garbade, they were just an application of the theoretical research he was
performing with other members of the Cross Markets Research Group. He did not include
them in his (1996) edited collection of the papers he wrote while with the group.
The 1988 Basle Accord
On June 26, 1974, German regulators forced the troubled Bank Herstatt into liquidation.
That day, a number of banks had released payment of DEM to Herstatt in Frankfurt in
exchange for USD that was to be delivered in New York. Because of time-zone
differences, Herstatt ceased operations between the times of the respective payments. The
counterparty banks did not receive their USD payments.
8 I am indebted to Craig Dibble, formerly of Bankers Trust, for bringing Garbade’s 1986 paper to my
attention.
Responding to the cross-jurisdictional implications of the Herstatt debacle, the G-10
countries9 formed a standing committee under the auspices of the Bank for International
Settlements (BIS).10 Called the Basle Committee on Banking Supervision, the committee
comprises representatives from central banks and regulatory authorities. Over time, the
focus of the committee has evolved, embracing initiatives designed to:
· define roles of regulators in cross-jurisdictional situations;
· ensure that international banks or bank holding companies do not escape
comprehensive supervision by some “home” regulatory authority;
· promote uniform capital requirements so banks from different countries may compete
with one another on a “level playing field.”
While the Basle Committee’s recommendations lack force of law, G-10 countries are
implicitly bound to implement them as national laws.
In 1988, the Basle Committee published a set of minimal capital requirements for banks.
These were adopted by the G-10 countries, and have come to be known as the 1988
Accord. We have already discussed the SEC’s UNCR. The 1988 Basle Accord differed
from this in two fundamental respects:
· It was international, whereas the UNCR applied only to US firms;
· It applied to banks whereas the UNCR applied to securities firms.
Historically, minimum capital requirements have served fundamentally different
purposes for banks and securities firms.
Banks were primarily exposed to credit risk. They held illiquid portfolios of loans
supported by deposits. Loans could be liquidated rapidly only at “fire sale” prices. This
placed banks at risk of “runs.” If depositors feared a bank might fail, they would
withdraw their deposits. Forced to liquidate its loan portfolio, the bank would succumb to
staggering losses on those sales.
Deposit insurance and lender-of-last-resort provisions eliminated the risk of bank runs,
but they introduced a new problem. Depositors no longer had an incentive to consider a
bank’s financial viability before depositing funds. Without such marketplace discipline,
regulators were forced to intervene. One solution was to impose minimum capital
requirements on banks. Because of the high cost of liquidating a bank, such requirements
were generally based upon the value of a bank as a going concern.
9 The G-10 is actually eleven countries: Belgium, Canada, France, Germany, Italy, Japan, the
Netherlands, Sweden, Switzerland, the United Kingdom and the United States.
10 The BIS is an international organization which fosters international monetary and financial cooperation
and serves as a bank for central banks. It was originally formed by the Hague Agreements of 20 January
1930, which had a primary purpose of facilitating reparation payments imposed on Germany following
World War I. Today, the BIS is a focal point for research and cooperation in international banking regulation.
The primary purpose of capital requirements for securities firms was to protect clients
who might have funds or securities on deposit with a firm. Securities firms were
primarily exposed to market risk. They held liquid portfolios of marketable securities
supported by secured financing such as repos. A troubled firm’s portfolio could be
unwound quickly at market prices. For this reason, capital requirements were based upon
the liquidation value of a firm.
In a nutshell, banks entailed systemic risk. Securities firms did not. Regulators would
strive to keep a troubled bank operating. They would gladly unwind a troubled securities
firm. Banks needed long-term capital in the form of equity or long-term subordinated
debt. Securities firms could operate with more transient capital, including short-term
subordinated debt. The 1988 Basle Accord focused on a bank’s viability as a going
concern. It set minimum requirements for long-term capital based upon a formulaic
assessment of a bank’s credit risks. It did not specifically address market risk. The SEC’s
UNCR focused on a securities firm’s liquid capital, with haircuts for market risk.
Because banks and securities firms are so different, it is appropriate to apply separate
minimum capital requirements to each. This was feasible in the United States and Japan,
which both maintained a statutory separation of banking and securities activities.
The United Kingdom and Europe
The United Kingdom had both banking and securities
industries, but distinguished between them as a matter of custom. The Bank of England
supervised banks. Securities markets were traditionally self-regulating, but the sweeping
1986 Financial Services Act—informally called the “Big Bang”—changed this. It
established the Securities and Investment Board (SIB) to regulate securities markets. The
SIB delegated much of its authority to SROs, granting responsibility for wholesale
securities markets primarily to the Securities and Futures Authority (SFA). If a British
firm engaged in both banking and securities activities, both the Bank of England and the
SFA would provide oversight, with one playing the role of “lead regulator.”
In 1992, the SFA adopted financial rules for securities firms, which included capital
requirements for credit and market risks. These specified a crude VaR measure for
determining market risk capital requirements for equity, fixed income, foreign exchange
and commodities positions.
By the 1990’s, concepts from portfolio theory were widely used by institutional equity
investors. This was more true of London than of
other financial centers,11 and this emphasis appears to have influenced the SFA in
designing its VaR measure. While crude from a theorist’s standpoint, the measure
incorporated concepts from portfolio theory, including the CAPM distinction between
systematic and specific risk. The measure did not employ covariances, but summing risks
11 See Scott-Quinn (1994).
under square root signs and applying various scaling factors seems to have accomplished
an analogous purpose. Because of its pedigree, the SFA’s VaR measure came to be called
the “portfolio approach” to calculating capital requirements. As fate would have it, the
SFA’s initiative would soon be overtaken by events within the European Union.
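The paper does not reproduce the SFA’s formulas, but aggregation of the general kind described, sums of risk numbers combined under a square root, can be sketched as follows (invented numbers and structure, shown only to convey why such a rule mimics a covariance treatment):

```python
from math import sqrt

# Assumed per-position risk numbers for two asset classes (illustrative).
equity_risks = [1200.0, 800.0, 500.0]
fx_risks = [600.0, 300.0]

# Within a class, risks add (as if perfectly correlated); across classes
# they combine under a square root (as if uncorrelated).
total = sqrt(sum(equity_risks) ** 2 + sum(fx_risks) ** 2)
```

The root-sum-square step dampens the total relative to straight addition (here roughly 2657 rather than 3400), much as a zero-correlation covariance calculation would.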
Germany made no distinction between the banking and securities industries. Under
German law, securities firms were banks, and a single regulatory authority oversaw
banks. Other European countries fell somewhere between the British and German
regimes. Accordingly, two models of regulation coexisted in Europe:
· the Continental, or German, model of universal banking, and
· the Anglo-Saxon, or British, model of generally separate banking and securities
activities.
The European Union (EU) had a goal of implementing a common market by 1993. As the
nations of
financial regulation came into conflict. New EU laws needed either to choose between or
somehow blend the two approaches.
The issue was settled by the 1989 Second Banking Coordination Directive and the 1993
Investment Services Directive. These granted European nations broad latitude in
establishing their own legal and regulatory framework for financial services. Financial
firms were granted a “single passport” to operate throughout the EU subject to the
regulations of their home country. A bank domiciled in an EU country that permitted
universal banking could conduct universal banking in another EU country that prohibited
it. With the single passport, the directives effectively opened all of the EU to universal
banking. Britain, however, wished to maintain a separate regulatory framework for its
non-bank securities firms.
Since the securities operations of universal banks would compete directly with non-bank
securities firms, some harmonization of capital requirements was needed for the two. The
solution implemented with the 1993 Capital Adequacy Directive (CAD) was to regulate
functions instead of institutions.
The CAD established uniform capital standards applicable to both universal banks’
securities operations and non-bank securities firms. A universal bank would identify a
portion of its balance sheet as comprising a “trading book”. Capital for the trading book
would be held in accordance with the CAD while capital for the remainder of the bank’s
balance sheet would be held in accordance with the 1988 Basle Accord, as implemented
in the EU according to the 1989 Own Funds Directive,12 but local regulators had
discretion to apply more liberal rules for capital supporting the trading book.
12 The CAD and 1988 Basle Accord only set minimum requirements. National authorities were free to set
higher requirements.
A bank’s “trading book” would include equities and fixed income securities held for
dealing or proprietary trading. It would also include equity and fixed income OTC
derivatives, repos, certain forms of securities lending and exposures due to unsettled
transactions. Foreign exchange exposures were not included in the trading book, but were
addressed organization-wide under a separate provision of the CAD.
A minimum capital requirement for the market risk of a trading book was based upon a
crude VaR measure intended to loosely reflect a 10-day 95% VaR metric.13 This entailed
separate “general risk” and “specific risk” computations, with the results summed. The
measure has come to be known as the “building-block” approach.
General risk represented risk from broad market moves. Positions were divided into
categories, one for equities and 13 for various maturities of fixed income instruments.
Market values14 were multiplied by category-specific risk weights—8% for equities and
maturity-specific percentages for fixed income instruments. Weighted positions were
netted within categories, and limited netting was permitted across fixed income
categories. Results were summed.
Specific risk represented risk associated with individual instruments. Positions were
divided into four categories, one for equities and three covering central government,
“qualifying” and “other” fixed income instruments. Risk weights were:
· 2% for equities,
· 0% for central government instruments,
· 0.25%, 1% or 1.6% for qualifying instruments, depending upon maturity, and
· 8% for other instruments.
Results were summed without netting, either within or across categories.
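The building-block arithmetic can be sketched in a few lines. This is a simplified illustration with hypothetical positions; only the 8% general and 2% specific equity risk weights come from the rules described above, and the CAD’s maturity bands and partial cross-category netting are omitted.

```python
# A simplified sketch of the CAD "building-block" computation. The sample
# positions are hypothetical; the 8% general and 2% specific equity risk
# weights are those given in the text. The real CAD rules add maturity
# bands, limited cross-category netting and more categories.

def general_risk(positions_by_category, weights):
    # General risk: net long/short positions within each category, weight
    # the net, then sum absolute results (no diversification benefit).
    return sum(abs(sum(p)) * weights[c] for c, p in positions_by_category.items())

def specific_risk(positions_by_category, weights):
    # Specific risk: no netting at all; weight each position's absolute value.
    return sum(sum(abs(x) for x in p) * weights[c]
               for c, p in positions_by_category.items())

positions = {"equity": [100.0, -40.0]}          # long 100, short 40 (market values)
gen = general_risk(positions, {"equity": 0.08})
spec = specific_risk(positions, {"equity": 0.02})
capital = gen + spec                            # the two results are summed
print(round(gen, 2), round(spec, 2), round(capital, 2))
```

Note how the long and short positions offset in the general risk charge but not in the specific risk charge.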
By netting positions in its general risk calculation, the CAD recognized hedging effects to
a greater extent than the SEC’s UNCR. Like the UNCR, it recognized no diversification
benefits. In this regard, both the CAD and the UNCR were less sophisticated than the
SFA’s portfolio approach.
Weakening of Glass-Steagall
Across the Atlantic, the United States continued to restrict universal banking. The history
of the Glass-Steagall Act is one of incremental weakening of its separation between the
banking and securities industries. Some of this stemmed
13 See Dale, p. 42.
14 Derivatives were included in both the general and specific risk calculations based upon their
delta-equivalent values.
from regulatory actions. Much of it stemmed from market developments not anticipated
by the act.
The original Glass-Steagall Act permitted banks to deal in exempt securities. Banks were
also permitted to engage in limited brokerage activities as a convenience to clients who
used the bank’s other services. Over time, that authorization was expanded.
Glass-Steagall did not prevent commercial banks from engaging in securities activities
overseas. By the mid 1980s, several US commercial banks, including JP Morgan, had
thriving overseas securities operations. During the late 1980s, banks
were also permitted to engage in limited domestic activities in non-exempt securities
through so called “Section 20” subsidiaries.15
Currencies were not securities under the Glass-Steagall Act, but when exchange rates
were allowed to float in the early 1970s, they entailed similar market risk. In 1933,
futures markets were small and transacted primarily in agricultural products, so they were
excluded from the act’s definition of securities. Also, the Glass-Steagall Act did not
anticipate the emergence of active OTC derivatives markets, so most derivatives did not
fall under its definition of securities. By 1993, US commercial banks were taking
significant market risks, actively trading foreign exchange, financial futures and OTC
derivatives.
The Basle-IOSCO Initiative
With banks increasingly taking market risk, in the early 1990s, the Basle Committee
decided to update its 1988 accord to include bank capital requirements for market risk.
This would have implications for non-bank securities firms.
As indicated earlier, capital requirements for banks and securities firms served different
purposes. Bank capital requirements had existed to address systemic risks of banking.
Securities capital requirements had originally existed to protect clients who left funds or
securities on deposit with a securities firm. Regulations requiring segregation of investor
assets as well as account insurance had largely addressed this risk. Increasingly, capital
requirements for securities firms were being justified on two new grounds:
1. Although securities firms did not pose the same systemic risks as banks, it was argued
that bank securities operations and non-bank securities firms should face the same
capital requirements. Such “harmonization” would create a competitive “level playing
field” between the two. This was the philosophy underlying Europe’s CAD.
15 Section 20 of the Glass-Steagall Act forbade banks that are members of the Federal Reserve System from
affiliating with any company engaged principally in underwriting or distributing non-exempt securities. In
April 1987, the Fed interpreted this provision as permitting member banks to affiliate with companies
engaged in limited securities activities. This interpretation was upheld by US courts, and commercial banks
started forming “Section 20” affiliates.
2. Some securities firms were active in the OTC derivatives markets. Unlike traditional
securities, many OTC derivatives were illiquid and posed significant credit risk for
one or both counterparties. This was compounded by their high leverage, which could
inflict staggering market losses on unwary firms. Fears were mounting that the failure
of one derivatives dealer could cause credit losses at other dealers. For the first time,
non-bank securities firms were posing systemic risks.
Any capital requirements the Basle Committee adopted for banks’ market risk would be
incorporated into future updates of Europe’s CAD and thereby apply to European
securities firms. If the same framework were extended to non-bank securities firms
outside Europe, capital requirements would be harmonized globally. In 1991, the Basle
Committee entered discussions with the International Organization of Securities
Commissions (IOSCO)16 to jointly develop such a framework.
The two organizations formed a technical committee, and work commenced in January
1992. At that time, European regulators were completing work on the CAD, and many
wanted the Basle-IOSCO initiative to adopt a similar building-block VaR measure. US
regulators were hesitant to abandon the VaR measure of the UNCR, which has come to
be called the “comprehensive” approach. The SFA’s portfolio approach was a third
alternative.17
Of the three VaR measures, the portfolio approach was theoretically most sophisticated,
followed by the building-block approach and finally the comprehensive approach. The
technical committee soon rejected the portfolio approach as too complicated. Led by
European regulators, the committee gravitated towards the building-block measure, but
Richard Breeden, chairman of the SEC, also chaired the technical committee. Ultimately,
he balked at discarding the SEC’s comprehensive approach. An analysis by the SEC
indicated that the building-block measure might reduce capital requirements for US
securities firms by 70% or more. Permitting such a reduction, simply to harmonize
banking and securities regulations, seemed imprudent. The Basle-IOSCO initiative had
failed. In the end, regulatory capital regimes for banks and non-bank securities firms
would remain distinct.
By 1993, a fair number of financial firms were employing proprietary VaR measures to
assess market risk, allocate capital or monitor market risk limits. The measures took
various forms. The most common approach generally followed Markowitz (1952, 1959).
A portfolio’s value would be modeled as a linear polynomial of certain risk factors. A
16 IOSCO was founded in 1974 to promote the development of Latin American securities markets. In 1983,
its focus was expanded to encompass securities markets around the world.
17 See Shirreff (1992) for a discussion of the competing issues faced by the technical committee.
18 See Dimson and Marsh (1995) for a comparison of the three regulatory VaR measures.
covariance matrix would be constructed for the risk factors, and from this, the standard
deviation of portfolio value would be calculated. If portfolio value were assumed normal,
a quantile of loss could be calculated.
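The steps above can be sketched as follows. The two risk factors, the exposures and the covariance values are all hypothetical illustrations.

```python
import math

# Sketch of a linear ("variance-covariance") VaR measure of the kind
# described above. Exposures and covariances are made-up illustrations.

exposures = [1_000_000.0, -500_000.0]   # portfolio value as a linear
                                        # polynomial of two risk factors

cov = [[0.0001, 0.00002],               # covariance matrix of the factors'
       [0.00002, 0.0004]]               # one-day returns

# Standard deviation of portfolio P&L: sqrt(w' C w)
variance = sum(exposures[i] * cov[i][j] * exposures[j]
               for i in range(2) for j in range(2))
std_dev = math.sqrt(variance)

# If portfolio value is assumed normal, a quantile of loss follows directly:
var_95 = 1.645 * std_dev                # one-day 95% VaR
print(round(std_dev), round(var_95))
```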
Thomas Wilson was working as a project manager for McKinsey & Co. He published
(1993) a sophisticated VaR measure, noting:19
… This article aims to develop a method of incorporating stochastic covariance
matrices into risk capital calculations using simple assumptions. In the most
straightforward case, the adjustment to standard risk capital calculations is as simple
as replacing the usual normal distribution with the standard t-distribution. The
t-distribution has “fatter tails” than the normal distribution, reflecting the fact that the
covariance matrix is also a random variable about which the risk manager has only
limited prior information.
Wilson’s paper appears to be the first to address heteroskedasticity in the practical VaR
measures used on trading floors. It is also the first detailed description of a VaR measure
for use in a trading environment since Garbade’s (1986) paper. The author’s casual
assumption that readers are familiar with the use of VaR measures on trading floors is
indicative of how widespread such use had already become.
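Wilson’s substitution can be illustrated by comparing quantiles. The standard deviation and the choice of 5 degrees of freedom below are hypothetical; the quantiles are standard one-tailed table values.

```python
# Illustration of Wilson's substitution: the same standard deviation and
# confidence level give a larger VaR under a t-distribution than under a
# normal. The std deviation (USD) and the 5 degrees of freedom are
# hypothetical; the quantiles are standard one-tailed table values.

sigma = 1_000_000.0       # hypothetical standard deviation of daily P&L

z_99 = 2.326              # 99% quantile of the standard normal
t_99 = 3.365              # 99% quantile of the t-distribution, 5 d.o.f.

var_normal = z_99 * sigma
var_t = t_99 * sigma      # fatter tails -> larger VaR at the same level

print(round(var_t / var_normal, 2))
```

At this confidence level the fatter-tailed distribution raises VaR by roughly 45%, reflecting the extra uncertainty about the covariance matrix.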
Without acknowledging his doing so, Wilson took a philosophical stand of some practical
importance. He suggested that the covariance matrix for risk factors
actually exists, but that a user may have limited knowledge as to its values. This objective
interpretation of the underlying probabilities runs counter to Markowitz’s (1952, 1959)
subjective approach, which suggests that the covariance matrix does not actually exist,
but is constructed by the user to reflect his own perceptions.
G-30 Report
In 1990, risk management was novel. Many financial firms lacked an independent risk
management function. This concept was practically unheard of in non-financial firms. As
unease about derivatives and leverage spread, this started to change.
The term “risk management” was not new. It had long been used to describe techniques
for addressing property and casualty contingencies. Doherty (2000) traces such usage to
the 1960s and 1970s when organizations were exploring alternatives to insurance,
including:
· risk reduction through safety, quality control and hazard education, and
· alternative risk financing, including self-insurance and captive insurance.
19
European countries agreed to intervene in markets to maintain exchange rates between their respective
currencies within certain trading “bands.”
Such techniques, together with traditional insurance, were collectively referred to as risk
management.
More recently, derivative dealers were promoting “risk management” as the use of
derivatives to hedge or customize market-risk exposures. For this reason, derivative
instruments were sometimes called “risk management products.”
The new “risk management” that evolved during the 1990’s is different from either of the
earlier forms. It tends to view derivatives as a problem as much as a solution. It focuses
on reporting, oversight and segregation of duties within organizations. Such concepts
have always been important. In the early 1990’s they took on a new urgency.
On January 30, 1992, Gerald Corrigan addressed the New York State Bankers
Association over lunch during their mid-Winter meeting. As chairman of the Basle
Committee, Corrigan had launched the ill-fated Basle-IOSCO initiative. Now he was
speaking in his other capacity as president of the New York Federal Reserve. His
comments would set the tone for the new risk management:20
… the interest rate swap market now totals several trillion dollars. Given the sheer
size of the market, I have to ask myself how it is possible that so many holders of
fixed or variable rate obligations want to shift those obligations from one form to
the other. Since I have a great deal of difficulty in answering that question, I then
have to ask myself whether some of the specific purposes for which swaps are
now being used may be quite at odds with an appropriately conservative view of
the purpose of a swap, thereby introducing new elements of risk or distortion into
the marketplace—including possible distortions to the balance sheets and income
statements of financial and nonfinancial institutions alike.
I hope this sounds like a warning, because it is. Off-balance sheet activities have a
role, but they must be managed and controlled carefully, and they must be
understood by top management as well as by traders and rocket scientists.
That summer, Paul Volcker, chairman of the Group of 30,21 approached Dennis
Weatherstone, chairman of JP Morgan, and asked him to lead a study of derivatives
industry practices. Weatherstone formed an international steering committee and a
working group of senior managers from derivatives dealers, end users and related legal,
accounting and academic disciplines. They produced a 68-page report, which the Group
of 30 published in July 1993. Entitled Derivatives: Practices and Principles, it has come
to be known as the G-30 Report. It describes then-current derivatives use by dealers and
20 This incident is documented in Shirreff (1992). See Corrigan (1992) for a full text of the speech.
21 Founded in 1978, the Group of 30 is a non-profit organization of senior executives, regulators and
academics. Through meetings and publications, it seeks to deepen understanding of international economic
and financial issues.
end-users. The heart of the study was a set of 20 recommendations to help dealers and
end-users manage their derivatives activities. Topics included:
· the role of boards and senior management,
· the implementation of independent risk management functions,
· the various risks that derivatives transactions entail.
With regard to the market risk faced by derivatives dealers, the report recommended that
portfolios be marked-to-market daily, and that risk be assessed with both VaR and stress
testing. It recommended that end-users of derivatives adopt similar practices as
appropriate for their own needs.
While the G-30 Report focused on derivatives, most of its recommendations were
applicable to the risks associated with other traded instruments. For this reason, the report
largely came to define the new risk management of the 1990’s. The report is also
interesting, as it may be the first published document to use the name “value-at-risk.”
Organizational Mishaps
By the 1990’s, the dangerous effects of derivatives and leverage were taking a toll on
corporations. In February 1993, Japan’s Showa Shell Sekiyu reported a
USD 1,050MM loss from speculating on exchange rates. In December of that same year,
MG Refining and Marketing, a US subsidiary of Germany’s Metallgesellschaft AG,
reported a loss of USD 1,300MM from failed hedging of long-dated oil supply
commitments.
The popular media noted these staggering losses, and soon focused attention on other
organizational mishaps. In 1994, there was a litany of losses. China’s CITIC
conglomerate and Chile’s Codelco lost USD 40MM and USD 207MM, respectively,
trading metals on the London Metals Exchange (LME). US companies Gibson Greetings,
Mead, Proctor & Gamble and Air Products and Chemicals all reported losses from
differential swaps transacted with Bankers Trust. Japan’s Kashima Oil lost
USD 1,500MM speculating on exchange rates. California’s Orange County announced
losses from repos and other transactions that would total USD 1,700MM. These are just a
few of the losses publicized during 1994.
The litany continued into 1995. A notable example is Daiwa Bank, which revealed that a
US-based bond trader had secretly accumulated losses of USD 1,100MM over a 10-year
period. What grabbed the world’s attention, though, was the dramatic failure of Britain’s
Barings PLC in February 1995. Nick Leeson, a young trader based at its Singapore
office, lost USD 1,400MM from unauthorized Nikkei futures and options positions.
Barings had been founded in 1762. It had helped finance Britain’s wars against
Napoleon. It had helped finance the Louisiana Purchase. Now it was sold to the Dutch
bank ING for one British pound.
RiskMetrics
During the late 1980’s, JP Morgan developed a firm-wide VaR system.22 This modeled
several hundred risk factors. A covariance matrix was updated quarterly from historical
data. Each day, trading units would report by e-mail their positions’ deltas with respect to
each of the risk factors. These were aggregated to express the combined portfolio’s value
as a linear polynomial of the risk factors. From this, the standard deviation of portfolio
value was calculated. Various VaR metrics were employed. One of these was one-day
95% USD VaR, which was calculated using an assumption that the portfolio’s value was
normally distributed.
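The daily aggregation described above amounts to summing each unit’s delta vector before applying the covariance matrix. The trading units, delta vectors and covariances below are hypothetical.

```python
import math

# Sketch of firm-wide aggregation: each trading unit reports deltas with
# respect to shared risk factors, the deltas are summed, and one standard
# deviation is computed for the combined portfolio. All values hypothetical.

unit_deltas = {
    "fixed_income": [2_000_000.0, 0.0],
    "fx":           [-500_000.0, 1_000_000.0],
}

# Aggregate across units, factor by factor
firm = [sum(d[i] for d in unit_deltas.values()) for i in range(2)]

cov = [[0.0001, 0.0],     # hypothetical one-day covariance matrix
       [0.0,    0.0004]]

variance = sum(firm[i] * cov[i][j] * firm[j]
               for i in range(2) for j in range(2))
one_day_95_var = 1.645 * math.sqrt(variance)   # normal assumption, as in the text
print(round(one_day_95_var))
```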
With this VaR measure, JP Morgan replaced a cumbersome system of notional market
risk limits with a simple system of VaR limits. Starting in 1990, VaR numbers were
combined with P&L’s in a report for each day’s 4:15 PM Treasury meeting in New York.
Those reports, with comments from the Treasury group, were forwarded to Chairman
Weatherstone.
One of the architects of the new VaR measure was Till Guldimann. His career with JP
Morgan had positioned him to help develop and then promote the VaR measure within
the firm. During the mid 1980’s, he was responsible for the firm’s asset/liability analysis.
Working with other professionals, he developed concepts that would be used in the VaR
measure. Later as chairman of the firm’s market risk committee, he promoted the VaR
measure internally. As fate would have it, Guldimann’s next position placed him in a role
to promote the VaR measure outside the firm.
In 1990 Guldimann took responsibility for Global Research, overseeing research
activities to support marketing to institutional clients. In that capacity he managed an
annual research conference for clients. In 1993, risk management was the conference
theme. Guldimann gave the keynote address and arranged for a demonstration of JP
Morgan’s VaR system. The demonstration generated considerable interest. Clients asked
if they might purchase or lease the system. Since JP Morgan was not a software vendor,
they were disinclined to comply. Guldimann proposed an alternative. The firm would
provide clients with the means to implement their own systems. JP Morgan would
publish a methodology, distribute the necessary covariance matrix and encourage
software vendors to develop compatible software.
Guldimann formed a small team to develop something for the next year’s research
conference. The service they developed was called RiskMetrics. It comprised a detailed
technical document as well as a covariance matrix for several hundred key factors, which
was updated daily. Both were distributed without charge over the Internet. The service
was rolled out with considerable fanfare in October 1994. A public relations firm placed
ads and articles in the financial press. Representatives of JP Morgan went on a multi-city
tour to promote the service. Software vendors, who had received advance notice, started
promoting compatible software. Launched at a time of global concern about derivatives
and leverage, RiskMetrics could not have been better timed.
22 See Guldimann (2000).
RiskMetrics was not a technical breakthrough. While the RiskMetrics Technical
Document contained original ideas, for the most part, it described practices that were
already widely used. Its linear VaR measure was arguably less sophisticated than those of
Garbade (1986) or Wilson (1993). The important contribution of RiskMetrics was that it
publicized VaR to a wide audience.
Regulatory Approval of Proprietary VaR Measures
In April 1993, following the failure of its joint initiative with IOSCO, the Basle
Committee released a package of proposed amendments to the 1988 accord. This included
a document proposing minimum capital requirements for banks’ market risk. The
proposal generally conformed to Europe’s CAD: banks would identify a trading book and
hold capital for trading book market risks and organization-wide foreign exchange
exposures. Capital charges for the trading book would be based upon a building-block
VaR measure loosely consistent with a 10-day 95% VaR metric.13 Like the CAD
measure, this partially recognized hedging effects but ignored diversification effects.
The committee received numerous comments on the proposal. Commentators perceived
the building-block VaR measure as a step backwards. Many banks were already using
proprietary VaR measures.23 Most of these modeled diversification effects, and some
recognized portfolio non-linearities. Commentators wondered if, by embracing a crude
VaR measure, regulators might stifle innovation in risk measurement technology.
In April 1995, the committee released a revised proposal. This made a number of
changes, including the extension of market risk capital requirements to cover
organization-wide commodities exposures. An important provision allowed banks to use
either a regulatory building-block VaR measure or their own proprietary VaR measure
for computing capital requirements. Use of a proprietary measure required the approval of
regulators. A bank would have to have an independent risk management function and
satisfy regulators that it was following acceptable risk management practices. Regulators
would also need to be satisfied that the proprietary VaR measure was sound. Proprietary
measures would need to support a 10-day 99% VaR metric and be able to address the
non-linear exposures of options. Diversification effects could be recognized within broad
asset categories—fixed income, equity, foreign exchange and commodities—but not
across asset categories. Market risk capital requirements were set equal to the greater of:
· the previous day’s VaR, or
· the average VaR over the previous sixty days, multiplied by 3.
23 A 1993 survey conducted for the Group of 30 (1994) by Price Waterhouse found that, among 80
responding derivatives dealers, 30% were using VaR to support market risk limits. Another 10% planned to
do so.
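The capital computation just described can be sketched as follows. The VaR series is hypothetical; the 60-business-day averaging window and multiplier of 3 follow the market risk rules as adopted in 1996.

```python
# Sketch of the market risk capital rule: the greater of the previous day's
# VaR and three times the trailing average of daily VaR figures. The VaR
# series is hypothetical; the 60-business-day window and multiplier of 3
# follow the rules as adopted in 1996.

def market_risk_capital(daily_vars, multiplier=3.0, window=60):
    recent = daily_vars[-window:]
    average = sum(recent) / len(recent)
    return max(daily_vars[-1], multiplier * average)

# Sixty hypothetical daily 10-day 99% VaR figures (USD millions):
history = [10.0] * 59 + [12.0]
print(round(market_risk_capital(history), 1))   # the 3x-average term dominates
```

Because of the multiplier, the averaging term almost always binds; the previous day’s VaR matters only after a very large spike.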
The alternative building-block measure—which was now called the “standardized”
measure—was changed modestly from the 1993 proposal. Risk weightings remained
unchanged, so it may reasonably be interpreted as still reflecting a 10-day 95% VaR
metric. Extra capital charges were added in an attempt to recognize non-linear exposures.
The Basle Committee’s new proposal was incorporated into an amendment to the 1988
accord, which was adopted in 1996. It went into effect in 1998.
The Name “Value-at-Risk”
Origins of the name “value-at-risk” are murky. Several similar names were used during
the 1990’s, including: “dollars-at-risk” (DaR), “capital-at-risk” (CaR), “income-at-risk”
(IaR), “earnings-at-risk” (EaR) and “value-at-risk” (VaR). It seemed that users liked the
“-at-risk” moniker, but were uncomfortable labeling exactly what was “at risk”. The
“dollars” label of DaR was too provincial for use in many countries. The “capital” label
of CaR seemed too application-specific. Some applications of VaR—such as VaR
limits—were unrelated to capital. The “income” and “earnings” labels of IaR and EaR
had accounting connotations unrelated to market risk. Software vendor Wall Street
Systems went so far as to call its software “money-at-risk”. It is perhaps the vagueness of
the label “value” that made “value-at-risk” attractive. Also, its use in the RiskMetrics
Technical Document added to its appeal. By 1996, other names were falling out of use.
Guldimann (2000) suggests that the name “value-at-risk” originated within JP Morgan
prior to 1985:
… we learned that “fully hedged” in a bank with fully matched funding can have
two meanings. We could either invest the Bank’s net equity in long bonds and
generate stable interest earnings, or we could invest it in Fed funds and keep the
market value constant. We decided to focus on value and assume a target
duration investors assigned to the bank’s equity. Thus value-at-risk was born.
It seems likely that the “DaR” and “CaR” names also arose during the 1980’s, since use
of both was common by the early 1990’s. “DaR” appears24 in the financial literature as
early as June 1991—two years prior to the first known appearance of “VaR” in the July
1993 G-30 Report. “CaR” appears as early as September 1992.25
The VaR Debate
Following the release of RiskMetrics and the widespread adoption of VaR measures,
there was somewhat of a backlash against VaR. This has come to be called the “VaR
debate”. Criticisms followed three themes:
1. that different VaR implementations produced inconsistent results;
24 Mark (1991).
25
2. that, as a measure of risk, VaR is conceptually flawed;
3. that widespread use of VaR entails systemic risks.
Critics in the first camp include Beder (1995) and Marshall and Siegel (1997). Beder
performed an analysis using a variety of VaR measures, obtaining sixteen different VaR
measurements for each of three portfolios. The sixteen measurements for each portfolio
tended to be inconsistent, leading Beder to describe VaR as “seductive but dangerous.” In
retrospect, this indictment seems harsh. Beder’s analysis employed different VaR
metrics,26 different covariance matrices and historical VaR measures with very low
sample sizes. It comes as no surprise that she obtained disparate VaR measurements!
Despite its shortcomings, Beder’s paper is historically important as an early critique of
VaR. It was cited frequently in the ensuing VaR debate.
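The dispersion Beder found is less surprising once the metrics are compared directly. As a simplifying illustration (not a reconstruction of her study), scaling a hypothetical one-day standard deviation to her four metrics under normal, independent daily returns already spans a wide range.

```python
import math

# How much the choice of VaR metric alone matters: scale a hypothetical
# one-day standard deviation to the four metrics Beder employed, assuming
# normal, independent daily returns. An illustration of metric sensitivity,
# not a reconstruction of Beder's analysis.

sigma_1day = 100_000.0                    # hypothetical one-day P&L std dev
quantiles = {0.95: 1.645, 0.99: 2.326}    # one-tailed normal quantiles

results = {}
for days in (1, 10):                      # "two weeks" ~ 10 trading days
    for conf, z in quantiles.items():
        # square-root-of-time scaling of the one-day standard deviation
        results[(days, conf)] = z * sigma_1day * math.sqrt(days)

for (days, conf), var in sorted(results.items()):
    print(f"{days:>2}-day {conf:.0%} VaR: {var:,.0f}")
```

Here the largest measurement is roughly 4.5 times the smallest before any differences in models or data enter at all.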
Marshall and Siegel (1997) approached eleven software vendors that had all implemented
the RiskMetrics linear VaR measure. They provided each with several portfolios and a
covariance matrix and asked them to calculate the portfolios’ one-day 95% VaR. In this
way, each vendor would be calculating VaR for the same portfolios using the same
covariance matrix based upon the same VaR measure and the same VaR metric. The
vendors should have obtained identical results, but they did not. Marshall and Siegel’s
results are summarized in Table 1:

Portfolio                      Standard deviation of vendors’ VaR measurements
Foreign exchange forwards      1%
Money market deposits          9%
Forward rate agreements        10%
Bonds                          17%
Interest rate swaps            21%

Table 1: Dispersion of the vendors’ VaR measurements for each portfolio. Standard deviations are calculated as:
standard deviation of VaR measurements divided by median of VaR measurements. Most vendors provided results for
only certain portfolios, so standard deviations are each based on six to eight VaR measurements. A single outlier
skewed the standard deviation for the money market portfolio.
The implementation issues that Marshall and Siegel highlighted were—and still are—an
important concern. However, such issues arise with any quantitative software, and can be
addressed with suitable validation and verification procedures.
Of more concern were criticisms suggesting that VaR measures were conceptually
flawed. One such critic was Taleb (1997):
The condensation of complex factors naturally does not just affect the accuracy of
the measure. Critics of VaR (including the author) argue that simplification could
result in such distortions as to nullify the value of the measurement. Furthermore,
it can lead to charlatanism: Lulling an innocent investor or business manager into
26 She employed one-day 95%, one-day 99%, two-week 95% and two-week 99% VaR metrics, applying
each in one quarter of her VaR measurements. This was the largest contributor to the dispersion in her
results.
a false sense of security could be a serious breach of faith. Operators are dealing
with unstable parameters, unlike those of the physical sciences, and risk
measurement should not just be understood to be a vague and imprecise estimate.
This approach can easily lead to distortions. The most nefarious effect of the VaR
is that it has allowed people who have never had any exposure to market risks to
express their opinion on the matter.
Some criticism of VaR seems to have stemmed from traders resistant to independent
oversight of their risk taking activities. Taleb’s closing remark seems to play to that
audience. As founder and head of risk management software vendor Algorithmics,
Dembo (1998) speaks more to risk managers:
Value-at-risk, per se, is a good idea. But the way it’s measured today, VaR is bad
news because the calculation errors can be enormous. Often, the number that is
computed is almost meaningless. In other words, the number has a large standard
error …
I also find a real problem with the idea that one can forecast a correlation matrix.
If you try and forecast the correlation matrix, you’ve got a point estimate in the
future. The errors that we’ve seen, resulting from correlation effects, dominate the
errors in market movements at the time. So the correlation methodology for VaR
is inherently flawed.
Such concerns have a practical tone, but underlying them are philosophical issues first
identified by Markowitz (1952, 1959). If probabilities are subjective, it makes no sense to
speak of the “accuracy” of a VaR measure or of a “forecast” of a correlation matrix.
From a subjective perspective, a VaR measurement or a correlation matrix is merely an
objective representation of a user’s subjective perceptions.
The third line of criticism suggests that, if many market participants use VaR to allocate
capital or maintain market risk limits, they will have a tendency to simultaneously
liquidate positions during periods of market turmoil. Bob Litzenberger of Goldman Sachs
comments:
Consider a situation when volatilities rise and there are some trading losses.
VaR’s would be higher and tolerances for risk would likely be lower. For an
individual firm, it would appear reasonable to reduce trading positions; however,
if everybody were to act similarly, it would put pressure on their common trading
positions.27
This risk is similar to that of portfolio insurance, which contributed to the stock market
crash of 1987, but there are differences. Stock positions tend mostly to be long because
short selling comprises only a small fraction of equity transactions. Portfolio insurance
programs in 1987 were designed to protect against a falling market, so they responded to
the crash in lockstep. In other markets, positions may be long or short. In fixed income
27 Quoted in
markets, there are lenders and borrowers. In commodities markets, there are buyers and
sellers. In foreign exchange markets, every forward position is long one currency but
short another. If VaR measures compel speculators in these markets to reduce positions,
this will affect both long and short positions, so liquidations will tend to offset.
Conclusion
VaR has its origins in portfolio theory and capital requirements. The latter can be traced
to NYSE capital requirements of the early 20th century. During the 1950’s, portfolio
theorists developed basic mathematics for VaR measures. During the 1970’s, US
regulators prompted securities firms to develop procedures for aggregating data to
support capital calculations reported in their FOCUS reports.
By the 1980’s, a need for institutions to develop more sophisticated VaR measures had
arisen. Markets were becoming more volatile, and sources of market risk were
proliferating. By that time, the resources necessary to calculate VaR were also becoming
available. Processing power was inexpensive, and data vendors were starting to make
large quantities of historical price data available. Financial institutions implemented
sophisticated proprietary VaR measures during the 1980’s, but these remained practical
tools known primarily to professionals within those institutions.
During the early 1990’s, concerns about the proliferation of derivative instruments and
publicized losses spurred the field of financial risk management. JP Morgan publicized
VaR to professionals at financial institutions and corporations with its RiskMetrics
service. Ultimately, the value of proprietary VaR measures was recognized by the Basel
Committee, which authorized their use by banks for performing regulatory capital
calculations. An ensuing “VaR debate” raised issues related to the subjectivity of risk,
which Markowitz had first identified in 1952. Time will tell if widespread use of VaR
contributes to the risks VaR is intended to measure.
References
Bernstein, Peter L. (1992). Capital Ideas: The Improbable Origins of Modern Wall
Street, New York: Free Press.
Chew, Lillian (1993a). Exploding the myth, Risk, 6 (7), 10-11.
Chew, Lillian (1993b). Made to measure, Risk, 6 (9), 78-79.
Corrigan, Gerald (1992). Remarks before the 64th annual mid-Winter meeting of the New
York State Bankers Association, Federal Reserve Bank of New York.
Dembo, Ron, (1998). excerpt from roundtable discussion: The limits of VAR,
Derivatives Strategy, 3 (4), 14-22.
Dimson, Elroy and Paul Marsh (1995). Capital requirements for securities firms, Journal
of Finance, 50 (3), 821-851.
Francis, Stephen C. (1985). Correspondence appearing in: United States House of
Representatives (1985). Capital Adequacy Guidelines for Government Securities Dealers
Proposed by the Federal Reserve Bank of New York, hearing before the Subcommittee
on Domestic Monetary Policy of the Committee on Banking, Finance and Urban Affairs.
Garbade, Kenneth D. (1986). Assessing risk and capital adequacy for Treasury securities,
Topics in Money and Securities Markets, 22.
Garbade, Kenneth D. (1987). Assessing and allocating interest rate risk for a multi-
sector bond portfolio consolidated over multiple profit centers, Topics in Money and
Securities Markets, 30.
Garbade, Kenneth D. (1996). Fixed Income Analytics, Cambridge: MIT Press.
Hardy, Charles O., (1923). Risk and Risk-Bearing, Chicago: University of Chicago Press.
Hicks, J. R., (1935). A suggestion for simplifying the theory of money, Economica, 11
(5), 1-19.
Leavens, Dickson H., (1945). Diversification of investments, Trusts and Estates, 80 (5),
469-473.
Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in
stock portfolios and capital budgets, Review of Economics and Statistics, 47, 13-37.
Markowitz, Harry, M., (1952). Portfolio Selection, Journal of Finance, 7 (1), 77-91.
Markowitz, Harry, M., (1959). Portfolio Selection: Efficient Diversification of
Investments, New York: John Wiley & Sons.
Markowitz, Harry, M., (1999). The early history of portfolio theory: 1600-1960,
Financial Analysts Journal, 55 (4), 5-16.
Mossin, Jan (1966). Equilibrium in a capital asset market, Econometrica, 34, 768-783.
Roy, Arthur D., (1952). Safety first and the holding of assets, Econometrica, 20 (3), 431-
449.
Savage, Leonard J., (1954). The Foundations of Statistics, New York: John Wiley &
Sons.
Scott-Quinn, Brian (1994). EC securities markets regulation, International Financial
Market Regulation, Benn Steil (editor).
Sharpe, William F. (1963). A simplified model for portfolio analysis, Management
Science, 9, 277-293.
Sharpe, William F. (1964). Capital asset prices: A theory of market equilibrium under
conditions of risk, Journal of Finance, 19 (3), 425-442.
Shirreff, David (1992). Swap and think, Risk, 5 (3), 29-35.
Taleb, Nassim (1997). Dynamic Hedging, New York: John Wiley & Sons.
Tobin, James (1958). Liquidity preference as behavior towards risk, The Review of
Economic Studies, 25, 65-86.
Treynor, Jack (1961). Towards a theory of market value of risky assets, unpublished
manuscript.
Wilson, Thomas (1992). Raroc remodeled, Risk, 5 (8), 112-119.
Wilson, Thomas (1993). Infinite wisdom, Risk, 6 (6), 37-45.