# Extreme Value Theory as a Risk Management Tool

One example involves a Swiss portfolio damaged in a hail storm over a specific time period; for details, see Schmock (1997). Further interesting new products are the multiline, multiyear, high-layer (infrequent event) products, credit lines, and the catastrophe risk exchange (CATEX). For a brief review of some of these instruments, see Punter (1997). Excellent overviews stressing the financial engineering of such products are Doherty (1997) and Tilley (1997). Alternative risk transfer and securitization have become major areas of applied research in both the banking and insurance industries. Actuaries are actively taking part in some of the new product development and therefore have to consider the methodological issues underlying these and similar products. Also, similar methods have recently been introduced into the world of finance through the estimation of value at risk (VaR) and the so-called shortfall; see Bassi, Embrechts, and Kafetzaki (1998) and Embrechts, Samorodnitsky, and Resnick (1998). "Value At Risk for End-Users" (1997) contains a recent summary of some of the more applied issues.

More generally, extremes matter eminently within the world of finance. It is no coincidence that Alan Greenspan, chairman of the U.S. Federal Reserve, remarked at a research conference on risk measurement and systemic risk (Washington, D.C., November 1995) that "Work that characterizes the statistical distribution of extreme events would be useful, as well." For the general observer, extremes in the realm of finance manifest themselves most clearly through stock market crashes or industry losses. In Figure 1 we have plotted the events leading up to and including the 1987 crash for equity data (S&P). Extreme value theory (EVT) yields methods for quantifying such events and their consequences in a statistically optimal way. (See McNeil 1998 for an interesting discussion of the 1987 crash example.)

*Figure 1: The 1987 crash (S&P equity data).*
For a general equity book, for instance, a risk manager will be interested in estimating the resulting down-side risk, which typically can be reformulated in terms of a quantile for a profit-and-loss function. EVT is also playing an increasingly important role in credit risk management. The interested reader may browse J.P. Morgan's web site (http://www.jpmorgan.com) for information on CreditMetrics. It is no coincidence that big investment banks are looking at actuarial methods for the sizing of reserves to guard against future credit losses. Swiss Bank Corporation, for instance, introduced actuarial credit risk accounting (ACRA) for credit risk management; see Figure 2. In their risk measurement framework, they use the following definitions:

*Figure 2: Actuarial credit risk accounting (ACRA).*

- **Expected loss:** the losses that must be assumed to arise on a continuing basis as a consequence of undertaking particular business
- **Unexpected loss:** the unusual, though predictable, losses that the bank should be able to absorb in the normal course of its business
- **Stress loss:** the possible, although improbable, extreme scenarios that the bank must be able to survive.

EVT offers an important set of techniques for quantifying the boundaries between these different loss classes. Moreover, EVT provides a scientific language for translating management guidelines on these boundaries into actual numbers. Finally, EVT helps in the modeling of default probabilities and the estimation of diversification factors in the management of bond portfolios. Many more examples can be added. It is our aim in this paper to review some of the basic tools from EVT relevant for industry-wide integrated risk management. Some examples toward the end of this paper will give the reader a better idea of the kind of answers EVT provides.
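The quantile reformulation of down-side risk mentioned above can be sketched with an empirical estimate. This is a minimal illustration on simulated data; the normal profit-and-loss model and the 99% confidence level are assumptions chosen for the example, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical profit-and-loss sample for an equity book; the standard
# normal model is purely illustrative.
pnl = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Down-side risk as a quantile: the 99% VaR is the 99th percentile
# of the loss distribution L = -PnL.
losses = -pnl
var_99 = float(np.quantile(losses, 0.99))
print(f"empirical 99% VaR: {var_99:.3f}")
```

For a standard normal P&L the true 99% loss quantile is about 2.33, so the empirical estimate should land close to that. EVT enters precisely when one must estimate such quantiles far in the tail, where few or no observations are available.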
Most of the material covered here (and indeed much more) is found in Embrechts, Klüppelberg, and Mikosch (1997), which also contains an extensive list of further references. For reference to a specific result in this book, we will occasionally identify it as "EKM."

## 2. The Basic Theory

The statistical analysis of extremes is key to many of the risk management problems related to insurance, reinsurance, and finance. In order to review some of the basic ideas underlying EVT, we discuss the most important results under the simplifying iid assumption: losses will be assumed to be independent and identically distributed. Most of the results can be extended to much more general models. In Section 4.2 a first indication of such a generalization will be given. Throughout this paper, losses will always be denoted as positive; consequently we concentrate in our discussion below on one-sided distribution functions (df's) for positive random variables (rv's).

Given basic loss data

$$X_1, X_2, \ldots, X_n \quad \text{iid with df } F, \tag{1}$$

we are interested in the random variables

$$X_{n,n} = \min(X_1, \ldots, X_n), \qquad X_{1,n} = \max(X_1, \ldots, X_n). \tag{2}$$

Or, indeed, using the full set of so-called order statistics

$$X_{n,n} \le X_{n-1,n} \le \cdots \le X_{1,n}, \tag{3}$$

we may be interested in

$$\sum_{r=1}^{k} h_r(X_{r,n}) \tag{4}$$

for certain functions $h_r$, $r = 1, \ldots, k$, and $k = k(n)$. An important example corresponds to $h_r \equiv 1/k$, $r = 1, \ldots, k$; that is, we average the $k$ largest losses $X_{1,n}, \ldots, X_{k,n}$. Another important example would be to take $k = n$ and $h_r(x) = (x - u)^+$, where $y^+ = \max(0, y)$, for a given level $u > 0$. In this case we sum all excesses over $u$ of losses larger than $u$. Typically we would normalize this sum by the number of such exceedances, yielding the so-called empirical mean excess function; see Section 4.1. Most of the standard reinsurance treaties are of (or close to) the form (4). The last example given corresponds to an excess-of-loss (XL) treaty with priority $u$.
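The functionals of the form (4) discussed above can be made concrete in a short sketch. The toy loss vector and the priority level are invented for illustration; the three functions correspond, respectively, to averaging the $k$ largest losses, the XL treaty payout, and the empirical mean excess function:

```python
import numpy as np

def avg_k_largest(x, k):
    # h_r = 1/k in (4): average of the k largest losses
    return np.sort(x)[-k:].mean()

def xl_treaty_payout(x, u):
    # h_r(x) = (x - u)^+ with k = n: sum of all excesses over priority u
    return np.maximum(x - u, 0.0).sum()

def empirical_mean_excess(x, u):
    # sum of excesses over u, normalized by the number of exceedances
    exc = x[x > u] - u
    return exc.mean() if exc.size else 0.0

losses = np.array([1.0, 3.0, 7.0, 12.0, 20.0])
print(avg_k_largest(losses, 2))           # (12 + 20) / 2 = 16.0
print(xl_treaty_payout(losses, 10.0))     # (12 - 10) + (20 - 10) = 12.0
print(empirical_mean_excess(losses, 10.0))  # 12.0 / 2 exceedances = 6.0
```

The mean excess function computed here is the empirical counterpart of the quantity studied in Section 4.1.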
In "classical" probability theory and statistics most of the results relevant for insurance and finance are based on sums

$$S_n = \sum_{r=1}^{n} X_r.$$

Indeed, the laws of large numbers, the central limit theorem (in its various degrees of complexity), refinements like Berry–Esséen, Edgeworth, and saddle-point, and normal-power approximations all start from $S_n$ theory. Therefore, we find in our toolkit for sums such items as the normal distributions $N(\mu, \sigma^2)$; the $\alpha$-stable distributions, $0 < \alpha < 2$; Brownian motion; and $\alpha$-stable processes, $0 < \alpha < 2$. We are confident of our toolkit for sums when it comes to modeling, pricing, and setting reserves of random phenomena based on averages. Likewise we are confident of statistical techniques based on these tools when applied to estimating distribution tails "not too far" from the mean. Consider, however, the following easy exercise.

**Exercise.** It is stated that, within a given portfolio, claims follow an exponential df with mean 10 (thousand dollars, say). We have now observed 100 such claims with largest loss 50. Do we still believe in this model? What if the largest loss would have been 100?

**Solution.** The basic assumption yields that $X_1, \ldots, X_{100}$ are iid with df

$$P(X_1 \le x) = 1 - e^{-x/10}, \quad x \ge 0.$$

Therefore, for $M_n = \max(X_1, \ldots, X_n)$,

$$P(M_{100} > x) = 1 - (P(X_1 \le x))^{100} = 1 - (1 - e^{-x/10})^{100}.$$

From this, we immediately obtain

$$P(M_{100} \ge 50) = 0.4914, \qquad P(M_{100} \ge 100) = 0.00453.$$
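The two probabilities in the solution are easy to reproduce numerically from the closed-form expression for $P(M_{100} > x)$:

```python
import math

def p_max_exceeds(x, n=100, mean=10.0):
    # P(M_n > x) = 1 - (1 - e^{-x/mean})^n for n iid exponential claims
    return 1.0 - (1.0 - math.exp(-x / mean)) ** n

print(f"P(M_100 >= 50)  = {p_max_exceeds(50):.4f}")   # 0.4914
print(f"P(M_100 >= 100) = {p_max_exceeds(100):.5f}")  # 0.00453
```

A largest loss of 50 is therefore entirely unremarkable under the exponential model, whereas a largest loss of 100 would be strong evidence against it.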