Quantifying credit portfolio losses under multi-factor models

ABSTRACT In this work, we investigate the challenging problem of estimating credit risk measures of portfolios with exposure concentration under the multi-factor Gaussian and multi-factor t-copula models. It is well known that Monte Carlo (MC) methods are highly demanding from a computational point of view in these situations. We present efficient and robust numerical techniques, based on Haar wavelet theory, for recovering the cumulative distribution function of the loss variable from its characteristic function. To the best of our knowledge, this is the first time that multi-factor t-copula models are treated outside the MC framework. The analysis of the approximation error and the results obtained in the numerical experiments section show a reliable and useful machinery for credit risk capital measurement purposes, in line with Pillar II of the Basel Accords.


Introduction
Financial institutions need to evaluate and manage the risk arising from their business activities. Credit risk is the risk of losses stemming from an obligor's failure to honour its contractual agreements, and it is usually the main source of risk in a commercial bank. Banks are subject to regulatory capital requirements under the Basel Accords and are forced to set aside a cushion to absorb potential future losses. The capital needed in order to remain solvent at a certain confidence level is called economic capital. The basic regulatory risk measure in credit risk is Value-at-Risk (VaR): a quantile of the loss distribution computed over a one-year time horizon at a high confidence level α ∈ (0, 1), where the regulatory level for credit risk is α = 0.999. Although it is still the regulatory measure, VaR has two main drawbacks that may impede a robust credit risk measurement. The first is that VaR is not sub-additive and thus contradicts the idea of diversification. The second is that VaR gives no indication of the severity of losses beyond the computed quantile. This is the reason why the Expected Shortfall (ES) might be used in place of VaR for internal risk capital assessment (i.e. for economic capital calculation).
The Vasicek model forms the basis of the Basel II approach. It is a Gaussian one-factor model where default events are driven by a latent common factor that is assumed to follow a Gaussian distribution; it is also called the Asymptotic Single Risk Factor (ASRF) model. Under this model, loss only occurs when an obligor defaults in a fixed time horizon. If we assume certain homogeneity conditions, this one-factor model leads to a simple analytical asymptotic approximation for the loss distribution and the VaR value. This approximation works well for a large number of small exposures but can underestimate risks in the presence of exposure concentrations (see [14]). Concentration risks in credit portfolios arise from an unequal distribution of loans to single borrowers (exposure or name concentration) or to different industry or regional sectors (sector or country concentration). While regulatory capital is estimated by means of the ASRF model under Pillar I, economic capital takes concentration risks into account and is calculated under Pillar II. Monte Carlo (MC) simulation, either with one-factor or multi-factor models (the latter to account for sector concentration or to model complicated correlation structures), is a standard method for measuring the risk of a credit portfolio. However, this method is time-consuming when the size of the portfolio increases, and computations can become unworkable in many situations, particularly since financial companies have to re-balance their credit portfolios frequently. On top of that, when using MC methods the variance is always an issue when estimating risk measures at high confidence levels. For these reasons, numerical methods are appealing in this field.
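As a point of reference for the MC baseline discussed above, the one-factor model recalled in the next section can be simulated in a few lines. The sketch below is purely illustrative (the portfolio parameters, sample size and function names are our own choices, not from the paper), and it also shows why plain MC is expensive: a reliable 99.9% quantile needs a very large number of scenarios.

```python
import numpy as np
from statistics import NormalDist

def mc_var_one_factor(E, P, rho, alpha=0.999, n_sims=200_000, seed=42):
    """Plain Monte Carlo estimate of VaR_alpha for a one-factor Gaussian
    copula portfolio: W_n = sqrt(rho_n)*Y + sqrt(1 - rho_n)*Z_n, and
    obligor n defaults when W_n falls below Phi^{-1}(P_n)."""
    E, P, rho = map(np.asarray, (E, P, rho))
    c = np.array([NormalDist().inv_cdf(p) for p in P])   # default thresholds
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((n_sims, 1))                 # systematic factor
    Z = rng.standard_normal((n_sims, len(E)))            # idiosyncratic noise
    W = np.sqrt(rho) * Y + np.sqrt(1.0 - rho) * Z        # latent firm values
    losses = (W < c) @ E                                 # portfolio loss per scenario
    return float(np.quantile(losses, alpha))

# Small concentrated test portfolio (hypothetical numbers)
E = np.array([50.0, 50.0] + [1.0] * 48)
var999 = mc_var_one_factor(E, np.full(50, 0.0021), np.full(50, 0.15))
```

With exposure concentration as above, the quantile is driven by the few large names, which is exactly the regime where variance makes MC estimates at α = 0.999 costly.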
Different techniques can be found in the literature for estimating the risk with multi-factor Gaussian copula models: MC methods in [10]; Hermite approximations in [17], where the main application is large loan or mortgage portfolios (3000 loans); a hierarchical factor model in [6], where closed-form solutions are derived under the assumption that the number of sectors in the portfolio is large; and an extension of the granularity adjustment technique to a new dimension in [18]. However, as pointed out in [11], some works suggest that default events driven by t-distributed random variables provide a better empirical fit to the observed data. This is the so-called t-copula model, where the latent variables driving default events are expressed as the ratio of a normal and a scaled chi-square random variable. The bivariate version of this last type of model is tackled with simulation in [3,19] and a more involved multi-factor version in [11].
In the present work, we develop numerical techniques that contribute to the efficient measurement of VaR and ES values for small or large portfolios in the presence of exposure concentration under high-dimensional models. It is worth remarking that small and/or concentrated portfolios are particularly challenging cases, since asymptotic methods usually work well for large and diversified portfolios. We model the dependence among obligors by means of multi-factor Gaussian copula and multi-factor t-copula models. To the best of our knowledge, this is the first time that multi-factor t-copula models are considered outside the MC framework. We estimate the risk measures in a procedure composed of two main parts. The first part is the numerical computation of the characteristic function associated with the portfolio loss variable. We tackle this part with different techniques depending on the underlying model. For the (bivariate) t-copula model we perform a double integration with Gauss-Hermite and generalized Gauss-Laguerre quadrature, while the multi-factor Gaussian model is treated with the quadratic transform approximation (QTA) method put forward in [9], where the authors calculate the price of a collateralized debt obligation. We derive the characteristic function for the most challenging model, that is, the multi-factor t-copula, by conditioning on the chi-square random variable of the model and applying the QTA method at each discretization point of the resulting one-dimensional integral. This last model is by far the most involved in terms of computing effort. Once the characteristic function of the loss variable has been obtained, the second part of the procedure consists of a Fourier inversion to recover its cumulative distribution function (CDF). For this purpose, we use a method based on Haar wavelets developed in [12] for the one-factor Gaussian copula model.
Moreover, we have improved the efficiency of this method by computing the coefficients of the expansion by means of a fast Fourier transform (FFT) algorithm, and we have shown that this method outperforms, in terms of efficiency and robustness, the well-known numerical Laplace transform inversion (NLTI) method [1,2] used in risk management in [8,9]. The numerical experiments carried out in this work show the high accuracy and speed of the method. Another point of importance is the robustness of the wavelet approach: we show how the scale of approximation (that is, the number of terms used to approximate the CDF) is related to the absolute error of the method. All these features make the proposed methodology an efficient and reliable machinery to be used in practice.
The outline of this paper is as follows. We start with the formulation of the credit portfolio losses problem in Section 1.1 and we give a brief overview on Gaussian and t-copula models for dependence among obligors in Section 1.2. We derive the characteristic functions for all the models in Section 2. The methodology for the efficient evaluation of characteristic functions is explained in Section 3. In Section 4 we study in detail two inversion methods. Section 5 is devoted to the numerical examples and Section 6 concludes.

Credit portfolio losses
To represent the uncertainty about future events, we specify a probability space (Ω, F, P) with sample space Ω, σ-algebra F, probability measure P and a filtration (F_t)_{t≥0} satisfying the usual conditions. We fix a time horizon T > 0, where T usually equals one year.
Consider a credit portfolio consisting of N obligors. Any obligor n can be characterized by three parameters: the exposure at default E_n, the loss given default, which without loss of generality we assume to be 100%, and the probability of default P_n, assuming that each of them can be estimated from empirical default data. The exposure at default of an obligor denotes the portion of the exposure of the obligor that is lost in case of default. The portfolio loss L is defined as,
L = Σ_{n=1}^{N} L_n, (1)
with L_n being the individual credit loss given by L_n = E_n · 1_{D_n}, where 1_{D_n} is the indicator of the event D_n that obligor n defaults during the risk measurement horizon. We build upon the work in [12], where the one-factor Gaussian copula model was considered as the model framework, and we tackle the more involved problem concerning multi-factor models. The one-factor model belongs to the class of structural models and it is a one-period default model, that is, loss only occurs when an obligor defaults in a fixed time horizon. Based on Merton's firm-value model, to describe the obligor's default and its correlation structure, we assign to each obligor a random variable called firm-value. The firm-value (or, more precisely, the asset value log-return) W_n of obligor n at time T is represented by a common, standard normally distributed factor component Y (the state of the world or business cycle, usually called systematic factor) and an idiosyncratic noise component Z_n,
W_n = √ρ_n Y + √(1 − ρ_n) Z_n, (2)
where Y and Z_n, ∀n ≤ N, are i.i.d. standard normally distributed and ρ_1, ..., ρ_N ∈ (0, 1) are correlation parameters calibrated to market data. In case ρ_n = ρ for all n, the parameter ρ is called the common asset correlation. Using the factor structure (2), obligors become independent conditional on Y.
In Merton's model, obligor n defaults when its firm-value falls below the threshold level c_n, defined by c_n := Φ^{-1}(P_n), where Φ^{-1}(x) denotes the inverse of the standard normal CDF. We can therefore define D_n := {W_n < c_n}, and the probability of default of obligor n conditional on a realization Y = y is given by,
p_n(y) := P(W_n < c_n | Y = y) = Φ((c_n − √ρ_n y) / √(1 − ρ_n)).
Consequently, the conditional probability of default depends on the systematic factor, reflecting the fact that the business cycle affects the possibility of an obligor's default.
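For concreteness, the conditional default probability above can be evaluated with a few lines of standard-library code; this is only an illustrative sketch (the function name and test values are ours, not from the paper):

```python
import numpy as np
from statistics import NormalDist

def p_cond(P_n, rho_n, y):
    """p_n(y) = Phi((Phi^{-1}(P_n) - sqrt(rho_n)*y) / sqrt(1 - rho_n)):
    default probability of obligor n given the systematic factor Y = y."""
    nd = NormalDist()
    c_n = nd.inv_cdf(P_n)                      # default threshold
    return nd.cdf((c_n - np.sqrt(rho_n) * y) / np.sqrt(1.0 - rho_n))

# In a downturn (very negative y) the conditional PD rises sharply:
p_bad = p_cond(0.01, 0.2, -3.0)   # stressed scenario
p_avg = p_cond(0.01, 0.2, 0.0)    # neutral scenario
```

Averaging p_n(Y) over the standard normal law of Y recovers the unconditional P_n, which is a convenient sanity check on any implementation.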
In Section 2, we move a step forward by considering, on the one hand, multi-factor models to account for sector concentration and, on the other hand, different copula models like the Gaussian and the t-copula, the latter being capable of introducing tail dependence in credit portfolios. For the t-copula model we assume that the latent random variables W_1, ..., W_N are generated by,
W_n = √(ν/V) (√ρ_n Y + √(1 − ρ_n) Z_n), (3)
where Z_1, ..., Z_N, Y ∼ N(0, 1), V ∼ χ²(ν) and Z_1, ..., Z_N, Y and V are mutually independent. Again, ρ_1, ..., ρ_N ∈ (0, 1) are correlation parameters calibrated to market data. The systematic risk factor Y may be interpreted as an underlying risk driver or economic factor, with each realization describing a scenario of the economy. The random variables Z_1, ..., Z_N represent idiosyncratic, or obligor-specific, risk. Scaling the model in Equation (2) by the factor √(ν/V) transforms standard Gaussian random variables into t-distributed random variables with ν degrees of freedom. As a result, we compute the risk measures by means of both the multi-factor Gaussian copula model and the multi-factor t-copula model. Copulas are simply the joint distribution functions of random vectors with standard uniform marginal distributions. Their value in statistics is that they provide a way of understanding how marginal distributions of single risks are coupled together to form joint distributions of groups of risks, that is, they provide a way of understanding the idea of statistical dependence. For the sake of completeness, we give in Section 1.2 a review of the dependence structure given by the Gaussian and t-copula models. For further details on dependence and copulas see [13,19].

Gaussian and t-copula models
We start this section by focusing our attention on the case in which default dependence is modelled as a multivariate Gaussian process. With the same notation as in Section 1.1, the unconditional probability of default P_n of obligor n becomes,
P(D_n) = P(W_n < Φ^{-1}(P_n)),
and the joint default probability is given by,
P(D_1, ..., D_N) = Φ_R(Φ^{-1}(P_1), ..., Φ^{-1}(P_N)).
Let (u_1, ..., u_N) = (P_1, ..., P_N) be a vector in [0, 1]^N, and choose a dependence structure described by a correlation matrix R. Then, the unique Gaussian copula C_R associated with (W_1, ..., W_N) is,
C_R(u_1, ..., u_N) = Φ_R(Φ^{-1}(u_1), ..., Φ^{-1}(u_N)),
where Φ_R is the multivariate standard Gaussian distribution function with correlation matrix R.
In the same way we can extract a copula from the multivariate normal distribution, we can extract an implicit copula from any other distribution with continuous marginal distribution functions. For example, the (N-dimensional) t-copula takes the form,
C_{ν,R}(u_1, ..., u_N) = t_{ν,R}(t_ν^{-1}(u_1), ..., t_ν^{-1}(u_N)),
for any (u_1, ..., u_N) ∈ [0, 1]^N, where t_{ν,R} is the multivariate t distribution function with correlation matrix R, and t_ν^{-1} denotes the inverse of the distribution function of a standard univariate t distribution with ν degrees of freedom.

Multi-factor firm-value models and their characteristic functions
As mentioned in Section 1, one of the two steps involved in the estimation of risk measures is the efficient computation of the characteristic function associated to the loss random variable L in Equation (1). We consider in Section 2.1 the multi-factor Gaussian copula model, which is the extension from one factor to several factors of model (2) treated in [12,14]. We consider the t-copula and multi-factor t-copula models in Sections 2.2 and 2.3, respectively. We tackle these two last models separately for methodological reasons that will be explained later on.

Multi-factor Gaussian copula
Multi-factor models aim at modelling complicated correlation structures. The d-factor Gaussian copula model assumes that the correlation among the W_n is introduced by a vector Y = [Y_1, Y_2, ..., Y_d]^T of independent standard normal random variables, representing the systematic risk factors,
W_n = a_n^T Y + b_n Z_n.
Note that we represent vectors by bold symbols throughout the paper. Here, a_n = [a_{n1}, a_{n2}, ..., a_{nd}]^T is a d × 1 vector of real constants satisfying a_n^T a_n < 1, which represents the vector of factor loading coefficients of the common factors, and the Z_n are N(0, 1) random variables representing the idiosyncratic risks, independent of each other and independent of Y. The constant b_n, being the factor loading of the idiosyncratic risk factor, is chosen so that W_n has unit variance, that is, b_n = √(1 − (a_{n1}² + a_{n2}² + ... + a_{nd}²)), which ensures that the W_n are N(0, 1). The incentive for considering the multi-factor version of the Gaussian copula model becomes clear when one rewrites it in matrix form, W = AY + diag(b_1, ..., b_N) Z, with A = [a_1, ..., a_N]^T. While each Z_n represents the idiosyncratic factor affecting only obligor n, each Y_j, j = 1, ..., d, may affect all (or a certain group of) obligors. Although the factors Y_j are sometimes given economic interpretations (as industry or regional risk factors, for example), their key role is that they allow us to model a complicated correlation structure in a non-homogeneous portfolio. More detailed information is available in [9].
Here, the probability of default of obligor n conditional on the realization y ∈ R^d of the systematic risk factor Y is defined as,
p_n(y) := P(W_n < c_n | Y = y) = Φ((c_n − a_n^T y) / b_n),
where c_n = Φ^{-1}(P_n) and Φ^{-1}(·) denotes, as usual, the inverse of the standard normal CDF. Given a realization of the systematic risk factor Y, defaults are independent. Then,
E[e^{iωL} | Y = y] = Π_{n=1}^{N} (1 + p_n(y)(e^{iωE_n} − 1)). (7)
By the law of iterated expectations, the characteristic function ψ_L of L is therefore given by,
ψ_L(ω) = E[e^{iωL}] = ∫_{R^d} Π_{n=1}^{N} (1 + p_n(y)(e^{iωE_n} − 1)) f_Y(y) dy, (8)
where f_Y is the d-variate standard Gaussian density.
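To make the structure of the characteristic function concrete, the following sketch evaluates it by brute-force tensor-product Gauss-Hermite quadrature over the d factors. This is only feasible for very small d, which is exactly why the QTA method of Section 3 is needed for realistic dimensions; all names and parameter choices below are illustrative, not from the paper.

```python
import itertools
import numpy as np
from statistics import NormalDist

def psi_conditional(omega, y, E, c, A, b):
    """E[e^{i omega L} | Y = y] = prod_n (1 + p_n(y)(e^{i omega E_n} - 1))."""
    nd = NormalDist()
    val = 1.0 + 0.0j
    for n in range(len(E)):
        p_n = nd.cdf((c[n] - A[n] @ y) / b[n])          # conditional PD
        val *= 1.0 + p_n * (np.exp(1j * omega * E[n]) - 1.0)
    return val

def psi_gauss_hermite(omega, E, c, A, b, n_h=12):
    """psi_L(omega) = integral over R^d of the conditional expectation times
    the d-variate standard Gaussian density, via a tensor-product
    Gauss-Hermite rule (substituting y = sqrt(2) x in each coordinate)."""
    d = A.shape[1]
    x, w = np.polynomial.hermite.hermgauss(n_h)
    total = 0.0 + 0.0j
    for idx in itertools.product(range(n_h), repeat=d):
        y = np.sqrt(2.0) * x[list(idx)]
        wt = np.prod(w[list(idx)])
        total += wt * psi_conditional(omega, y, E, c, A, b)
    return total / np.pi ** (d / 2.0)
```

The cost grows like n_h^d, so this direct approach quickly becomes unaffordable as d increases, in line with the discussion in Section 3.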

t-copula with ν degrees of freedom and t-distributed margins
As pointed out in [19], while Gaussian copulas do not exhibit tail dependence, that is, extreme defaults do not tend to occur simultaneously, t-copula models admit tail dependence, with fewer degrees of freedom producing stronger dependence. We assume that the latent random variables W_1, ..., W_N are generated by the model specified in Equation (3). The probability of default of obligor n conditional on Y = y and V = v is then given by,
p_n(y, v) := P(W_n < c_n | Y = y, V = v) = Φ((c_n √(v/ν) − √ρ_n y) / √(1 − ρ_n)).
As pointed out in [20] and because of the conditional independence, the characteristic function of the loss L now reads,
ψ_L(ω) = ∫_0^∞ ∫_{−∞}^{+∞} Π_{n=1}^{N} ϑ_n(ω, y, v) f_Y(y) f_V(v) dy dv, (9)
where the threshold level is defined by c_n = t_ν^{-1}(P_n) and,
ϑ_n(ω, y, v) := 1 + p_n(y, v)(e^{iωE_n} − 1), (10)
f_Y is the standard normal probability density function (PDF) of Y,
f_Y(y) = e^{−y²/2} / √(2π), (11)
and f_V is the chi-square PDF with ν degrees of freedom, that is,
f_V(v) = v^{ν/2−1} e^{−v/2} / (2^{ν/2} Γ(ν/2)). (12)
Note that we have replaced the joint density of Y and V in Equation (9) by the product of the marginal densities because they are independent random variables.

Multi-factor t-copula model
In this section, we consider a multi-factor model with dependence among obligors driven by a t-copula. We assume that the latent random variables W_1, ..., W_N are generated by,
W_n = √(ν/V) (a_n^T Y + b_n Z_n),
where Y, Z_n, a_n and b_n are defined and follow the same hypotheses as in Section 2.1, with V ∼ χ²(ν) as in Section 2.2. Here the probability of default of obligor n conditional on Y = y and V = v is given by,
p_n(y, v) := P(W_n < c_n | Y = y, V = v) = Φ((c_n √(v/ν) − a_n^T y) / b_n),
with c_n = t_ν^{-1}(P_n), the conditional expectation reads,
E[e^{iωL} | Y = y, V = v] = Π_{n=1}^{N} ϑ_n(ω, y, v),
and thus,
ψ_L(ω) = ∫_0^∞ ∫_{R^d} Π_{n=1}^{N} ϑ_n(ω, y, v) f_Y(y) f_V(v) dy dv, (15)
where ϑ_n(ω, y, v) := 1 + p_n(y, v)(e^{iωE_n} − 1), f_Y is the d-variate standard Gaussian density and f_V is the chi-square PDF with ν degrees of freedom.

Efficient computation of characteristic functions
The evaluation of the characteristic function (9) in Section 2.2 at a certain point ω, which is a particular case of (15) for d = 1, involves the computation of a double integral that we solve efficiently in Section 3.2.1 by means of numerical quadrature. However, looking at the expressions of the characteristic functions (8) and (15), corresponding to the multi-factor Gaussian and t-copula models, we realize that a direct attempt at solving the d- and (d + 1)-dimensional integrals, respectively, at fixed points ω is not affordable with numerical integration. For these challenging tasks, we rely on the QTA method put forward in [9] for computing the Laplace transform of the portfolio loss within the multi-factor Gaussian copula model. We first introduce the QTA method in Section 3.1 and use it to calculate the characteristic function (8). We show in Section 3.3 how we can benefit from the QTA approach for the challenging case of the multi-factor t-copula of Section 2.3 by conditioning on the V factor.

Multi-factor Gaussian copula
We start this section by enunciating the proposition which forms the basis of the QTA method.

Proposition 3.1: Let Y be a d-dimensional standard normal random vector. For any scalar c ∈ C, vector g ∈ C^d and symmetric matrix H ∈ C^{d×d} for which R(H) is negative-semidefinite,
E[exp(c + g^T Y + Y^T H Y)] = exp(c + (1/2) g^T (I_d − 2H)^{-1} g) / √(det(I_d − 2H)), (16)
where R(z) denotes the real part of z.
First of all, let us define the standardized systematic components S_n := a_n^T Y / √(a_n^T a_n), which are standard normal random variables, and the mapping s → g_n(ω, s) for n = 1, ..., N as,
g_n(ω, s) := 1 + p_n(s)(e^{iωE_n} − 1), (18)
where p_n(s) := Φ((c_n − √(a_n^T a_n) s) / b_n). Thus, one can rewrite the conditional expectation (7) as follows,
E[e^{iωL} | Y] = Π_{n=1}^{N} g_n(ω, S_n) = exp(Σ_{n=1}^{N} ln g_n(ω, S_n)). (19)
If we approximate ln g_n(ω, S_n) by a quadratic function of S_n,
ln g_n(ω, S_n) ≈ α_n(ω) + β_n(ω) S_n + γ_n(ω) S_n²,
where the scalars α_n(ω), β_n(ω) and γ_n(ω) are complex-valued, then one can use (18), (19) and Proposition 3.1 to obtain a closed-form approximation for ψ_L in Equation (8),
ψ_L(ω) ≈ E[exp(c(ω) + g(ω)^T Y + Y^T H(ω) Y)] = exp(c(ω) + (1/2) g(ω)^T (I_d − 2H(ω))^{-1} g(ω)) / √(det(I_d − 2H(ω))).
The last equality follows from the fact that the S_n are linear in Y. The scalar c, vector g, and matrix H are given explicitly by,
c(ω) = Σ_{n=1}^{N} α_n(ω), g(ω) = Σ_{n=1}^{N} β_n(ω) a_n / √(a_n^T a_n), and H(ω) = Σ_{n=1}^{N} γ_n(ω) a_n a_n^T / (a_n^T a_n).
In order to obtain the coefficients (α n (w), β n (w), γ n (w)) we make use of the weighted least-squares method. We will go into the details in the numerical examples section.
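A minimal sketch of such a weighted least-squares fit is shown below. The node placement, node count and Gaussian weighting are our own choices (the paper defers these details to its numerical examples section); the idea is simply to weight the fit by the N(0,1) law of the standardized component S_n.

```python
import numpy as np
from statistics import NormalDist

def quad_fit_ln_g(omega, E_n, c_n, s_load, b_n, n_nodes=41):
    """Fit ln g_n(omega, s) ~ alpha + beta*s + gamma*s^2 in the weighted
    least-squares sense, weighting the nodes by the N(0,1) density since
    S_n is standard normal.  s_load plays the role of sqrt(a_n^T a_n)
    (illustrative parameter names)."""
    nd = NormalDist()
    s = np.linspace(-5.0, 5.0, n_nodes)
    p = np.array([nd.cdf((c_n - s_load * v) / b_n) for v in s])
    lng = np.log(1.0 + p * (np.exp(1j * omega * E_n) - 1.0))
    w = np.exp(-0.25 * s**2)                     # sqrt of the N(0,1) weight
    X = np.vander(s, 3, increasing=True)         # columns [1, s, s^2]
    coef, *_ = np.linalg.lstsq(X * w[:, None], lng * w, rcond=None)
    alpha, beta, gamma = coef
    return alpha, beta, gamma
```

The complex logarithm is smooth here because |p_n(s)(e^{iωE_n} − 1)| < 1 for the exposures (as fractions of the total) used later, so a low-degree fit is adequate near the bulk of the Gaussian weight.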

t-copula with ν degrees of freedom and t-distributed margins
If we replace ϑ_n, f_Y and f_V in expression (9) by their corresponding expressions (10), (11) and (12), we obtain the iterated integral,
ψ_L(ω) = (1 / (√(2π) 2^{ν/2} Γ(ν/2))) ∫_0^∞ v^{ν/2−1} e^{−v/2} ∫_{−∞}^{+∞} Π_{n=1}^{N} ϑ_n(ω, y, v) e^{−y²/2} dy dv. (22)
We propose in Section 3.2.1 a numerical integration method to evaluate ψ_L(ω) in Equation (22) at a fixed point ω. We show in Section 3.2.2 how the one-dimensional version of the QTA method can be applied by conditioning on the V factor. We will compare the efficiency of both methods in the numerical experiments section and show that, in this case, numerical integration is superior to the QTA method.

Numerical integration
We solve the inner integral in Equation (22) by Gauss-Hermite quadrature and the outer integral by generalized Gauss-Laguerre quadrature. For the sake of completeness we briefly review both quadrature methods. In numerical analysis, Gauss-Hermite quadrature is a particular type of Gaussian quadrature for approximating the value of integrals of the form ∫_{−∞}^{+∞} e^{−x²} f(x) dx. In this case,
∫_{−∞}^{+∞} e^{−x²} f(x) dx ≈ Σ_{i=1}^{n_h} w_i f(x_i), (23)
where n_h is the number of sample points used. The x_i, i = 1, 2, ..., n_h, are the roots of the Hermite polynomial of degree n_h, H_{n_h}(x), and the associated weights w_i are given by,
w_i = 2^{n_h−1} n_h! √π / (n_h² [H_{n_h−1}(x_i)]²).
Generalized Gauss-Laguerre quadrature is an extension of the Gauss-Laguerre quadrature method for approximating the value of integrals over R^+ with integrands of the form x^α e^{−x} f(x) for some real number α > −1. Precisely,
∫_0^∞ x^α e^{−x} f(x) dx ≈ Σ_{i=1}^{n_l} w_i f(x_i). (24)
This allows us to efficiently evaluate such integrals for polynomial or smooth f(x) even when α is not an integer. In this case, x_i is the ith root of the generalized Laguerre polynomial of degree n_l, L_{n_l}^{(α)}(x), and the weight w_i is given by,
w_i = Γ(n_l + α + 1) x_i / (n_l! (n_l + 1)² [L_{n_l+1}^{(α)}(x_i)]²).
Finally, if we perform the change of variables y = √2 x in the inner integral of Equation (22) and v = 2x in the outer one, and apply the quadrature rules (23) and (24) with α = ν/2 − 1, then,
ψ_L(ω) ≈ (1 / (√π Γ(ν/2))) Σ_{i=1}^{n_l} Σ_{j=1}^{n_h} w_i^l w_j^h Π_{n=1}^{N} ϑ_n(ω, √2 x_j^h, 2 x_i^l), (25)
where w_j^h, x_j^h are the weights and sample points in Gauss-Hermite quadrature and w_i^l, x_i^l are the weights and sample points in the generalized Gauss-Laguerre quadrature. We observe that the computational complexity of evaluating the numerical formula (25) for a fixed value ω is O(n_l · n_h).
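The double-quadrature formula (25) can be sketched directly in code. Since SciPy's generalized Gauss-Laguerre routine may not be at hand, the nodes below are generated with the Golub-Welsch procedure (eigenvalues of the Jacobi matrix of the three-term recurrence); the thresholds c_n = t_ν^{-1}(P_n) are taken as given inputs, and all names and test values are illustrative:

```python
import numpy as np
from math import gamma, pi, sqrt
from statistics import NormalDist

def gen_laguerre_nodes(n, alpha):
    """Nodes/weights for int_0^inf x^alpha e^{-x} f(x) dx via Golub-Welsch:
    eigen-decompose the symmetric Jacobi matrix of the Laguerre recurrence."""
    k = np.arange(n)
    diag = 2.0 * k + alpha + 1.0                 # recurrence coefficients a_k
    off = np.sqrt(k[1:] * (k[1:] + alpha))       # sqrt(b_k)
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    x, V = np.linalg.eigh(J)
    w = gamma(alpha + 1.0) * V[0, :] ** 2        # mu_0 * squared first components
    return x, w

def psi_tcopula(omega, E, c, rho, nu, n_h=32, n_l=32):
    """Formula (25): inner Gauss-Hermite in y, outer generalized
    Gauss-Laguerre in v with alpha = nu/2 - 1."""
    nd = NormalDist()
    xh, wh = np.polynomial.hermite.hermgauss(n_h)
    xl, wl = gen_laguerre_nodes(n_l, nu / 2.0 - 1.0)
    jump = np.exp(1j * omega * np.asarray(E)) - 1.0
    total = 0.0 + 0.0j
    for xi, wi in zip(xl, wl):
        v = 2.0 * xi                              # change of variables v = 2x
        for yj, wj in zip(xh, wh):
            y = sqrt(2.0) * yj                    # change of variables y = sqrt(2) x
            prod = 1.0 + 0.0j
            for n in range(len(E)):
                p = nd.cdf((c[n] * np.sqrt(v / nu) - np.sqrt(rho[n]) * y)
                           / np.sqrt(1.0 - rho[n]))
                prod *= 1.0 + p * jump[n]
            total += wi * wj * prod
    return total / (sqrt(pi) * gamma(nu / 2.0))
```

A basic check is that ψ_L(0) = 1, which the quadrature reproduces exactly up to rounding because the quadrature weights integrate the two densities exactly.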

One-dimensional QTA method
Let us start by enunciating the one-dimensional version of formula (16) in Proposition 3.1.

Corollary 3.1:
Let Z be a standard normal random variable. For any scalars c, g, h ∈ C, for which R(h) ≤ 0,
E[exp(c + gZ + hZ²)] = exp(c + g² / (2(1 − 2h))) / √(1 − 2h).
Proof: Follows immediately from Proposition 3.1 by considering d = 1.
The key idea in this section is conditioning on the factor V in Equation (3) and applying Corollary 3.1 for each fixed value v of V. Looking at expression (9), we observe that we can write,
ψ_L(ω) = ∫_0^∞ ψ_{L(v)}(ω) f_V(v) dv, (27)
where L(v) emphasizes the fact that V takes a fixed value v and,
ψ_{L(v)}(ω) = E[Π_{n=1}^{N} g_n(ω, Y, v)] = E[exp(Σ_{n=1}^{N} ln g_n(ω, Y, v))],
with g_n(ω, y, v) := 1 + p_n(y, v)(e^{iωE_n} − 1). We can compute E[exp(Σ_{n=1}^{N} ln g_n(ω, Y, v))] using a similar approximation as in Section 3.1,
ln g_n(ω, Y, v) ≈ α_n(ω, v) + β_n(ω, v) Y + γ_n(ω, v) Y², (28)
and applying Corollary 3.1,
ψ_{L(v)}(ω) ≈ exp(c(ω, v) + g(ω, v)² / (2(1 − 2h(ω, v)))) / √(1 − 2h(ω, v)), (29)
for R(h(ω, v)) ≤ 0 and,
c(ω, v) = Σ_{n=1}^{N} α_n(ω, v), g(ω, v) = Σ_{n=1}^{N} β_n(ω, v), h(ω, v) = Σ_{n=1}^{N} γ_n(ω, v).
Hence, from Equations (27) and (29) we end up with,
ψ_L(ω) ≈ ∫_0^∞ exp(c(ω, v) + g(ω, v)² / (2(1 − 2h(ω, v)))) / √(1 − 2h(ω, v)) f_V(v) dv. (31)
We solve the integral in Equation (31) by means of generalized Gauss-Laguerre quadrature. It is worth remarking the dependence on v of the coefficients c(ω, v), g(ω, v) and h(ω, v). As a consequence, this method involves N quadratic approximations of the type in Equation (28) for each discretization point considered in the quadrature rule.

Multi-factor t-copula model
In a similar manner as in Sections 3.1 and 3.2.2, we define,
X_n := a_n^T Y / √(a_n^T a_n), g_n(ω, x, v) := 1 + p_n(x, v)(e^{iωE_n} − 1).
With this notation we can rewrite,
ψ_L(ω) = ∫_0^∞ E[Π_{n=1}^{N} g_n(ω, X_n, v)] f_V(v) dv.
For each n, we approximate ln g_n(ω, X_n, v) by a quadratic function in X_n,
ln g_n(ω, X_n, v) ≈ α_n(ω, v) + β_n(ω, v) X_n + γ_n(ω, v) X_n²,
and then,
E[Π_{n=1}^{N} g_n(ω, X_n, v)] ≈ E[exp(c(ω, v) + g(ω, v)^T Y + Y^T H(ω, v) Y)],
where,
c(ω, v) = Σ_{n=1}^{N} α_n(ω, v), g(ω, v) = Σ_{n=1}^{N} β_n(ω, v) a_n / √(a_n^T a_n), and H(ω, v) = Σ_{n=1}^{N} γ_n(ω, v) a_n a_n^T / (a_n^T a_n).
If we follow a similar procedure as in Section 3.2.2 and apply Proposition 3.1, then the characteristic function in Equation (15) can be approximated as,
ψ_L(ω) ≈ ∫_0^∞ exp(c(ω, v) + (1/2) g(ω, v)^T (I_d − 2H(ω, v))^{-1} g(ω, v)) / √(det(I_d − 2H(ω, v))) f_V(v) dv, (32)
where f_V is the PDF of a chi-square distribution with ν degrees of freedom. We want to underline that in order to apply Proposition 3.1 we need R(H) to be negative-semidefinite. In the numerical examples section, we will explain how to proceed when this condition is not satisfied. We compute the integral in expression (32) by generalized Gauss-Laguerre quadrature.

Inversion methods
Central to the methodology presented in this work is an efficient method to carry out the inversion of characteristic functions. We have seen in Section 3 that the characteristic functions of the loss L are known (numerically) for the multi-factor Gaussian and t-copula models. We devote the present section to the numerical method selected to perform the inversion step. Let f_L and F_L be the PDF and CDF of L, respectively. Without loss of generality we assume that the sum of the exposures E_n is one, that is, Σ_{n=1}^{N} E_n = 1. Then,
F_L(x) = F_c(x), x ∈ [0, 1], (33)
for a certain F_c defined in [0, 1]. Recall that ψ_L is the Fourier transform of the density f_L: with the convention f̂(ω) := ∫ e^{−iωx} f(x) dx we have f̂_L(ω) = ψ_L(−ω), where f_L is the derivative of the distribution function F_L in the context of generalized functions. If we integrate by parts,
f̂_L(ω) = e^{−iω} + iω ∫_0^1 e^{−iωx} F_c(x) dx,
and then the Fourier transform of F_c is given by,
F̂_c(ω) = (ψ_L(−ω) − e^{−iω}) / (iω).
We aim at recovering F_c from its Fourier transform F̂_c. For this purpose, we use the method initially developed in [12] for Laplace transform inversion and further extended in [15] for Fourier transform inversion, where the numerical errors are studied in detail as well. It is based on Haar wavelets (for a deep insight into wavelets we refer the reader to [4]) and called WA. The method is presented in Section 4.1, where this time we compute the coefficients with the FFT to speed up the overall inversion process. We briefly recall in Section 4.2 how to calculate the VaR (as in [12]) and the ES (as in [14]). In Section 4.3 we present the popular method [1,2] for NLTI, since it is used in our main reference paper [9] and it was also used in [8] for credit portfolio losses. We make a comparison against the WA method in terms of accuracy and speed. The choice of the Haar basis seems natural since the CDF of L is a piecewise constant function. In [16] the authors show how the WA method compares to the state-of-the-art COS method [5] for Fourier inversion in the framework of credit portfolio losses.
It is shown that COS produces oscillations in the tail leading to non-reliable risk measures.

Haar wavelet approach
The WA method is based on a wavelet expansion of the CDF of L using Haar wavelets as scaling functions. For the sake of completeness, we give a brief overview of the theory of approximation by wavelets.
Consider the space L²(R) = {f : ∫_{−∞}^{+∞} |f(x)|² dx < ∞}. For simplicity we can view this set as the set of functions f(x) which rapidly get small in magnitude as x goes to plus or minus infinity. A general structure for wavelets in L²(R) is called a Multi-resolution Analysis (MRA). We start with a family of closed nested subspaces,
... ⊂ V_{−2} ⊂ V_{−1} ⊂ V_0 ⊂ V_1 ⊂ V_2 ⊂ ...,
in L²(R), where ∩_{j∈Z} V_j = {0}, the closure of ∪_{j∈Z} V_j is L²(R), and f(x) ∈ V_j ⇔ f(2x) ∈ V_{j+1}. If these conditions are met, then there exists a function φ ∈ V_0 such that {φ_{j,k}}_{k∈Z} is an orthonormal basis of V_j, where,
φ_{j,k}(x) := 2^{j/2} φ(2^j x − k).
The function φ, called the father function, generates an orthonormal basis for each subspace V_j. For any f ∈ L²(R) a projection map of L²(R) onto V_m is defined by means of,
P_m f(x) = Σ_{k∈Z} c_{m,k} φ_{m,k}(x), (37)
where c_{m,k} = ∫_{−∞}^{+∞} f(x) φ_{m,k}(x) dx are the scaling coefficients. The right-hand side of Equation (37) gives a sum in terms of the scaling functions φ_{m,k}. Considering higher values of m (i.e. when more terms are used), the truncated series representation of the function f improves.
To develop our work we consider Haar wavelets (see [4]). Using these wavelets, V_j is the set of L²(R) functions which are constant on each interval of the form [k/2^j, (k + 1)/2^j) for all integers k. In this case the father function is given by,
φ(x) = 1 for x ∈ [0, 1), and φ(x) = 0 otherwise.
As opposed to Fourier series, a key fact about using wavelets is that they can be moved (choosing the k value), stretched or compressed (choosing the j value) to accurately represent the local properties of a function. Moreover, φ_{j,k} is non-zero only inside the interval [k/2^j, (k + 1)/2^j). In what follows we make use of this fact to compute the VaR without the need of knowing the whole distribution of the loss function.
In our case, we approximate F_c in Equation (33) by a finite combination of Haar scaling functions at a fixed scale of approximation m, with convergence in the L²(R)-norm,
F_c(x) ≈ F_c^m(x) := Σ_{k=0}^{2^m−1} c_{m,k} φ_{m,k}(x). (38)
Observe that we cover the domain of definition of F_c, which is [0, 1], by the union of the non-overlapping supports of the φ_{m,k}. The next step is the approximation of F̂_c by F̂_c^m. Defining P_m(z) := Σ_{k=0}^{2^m−1} c_{m,k} z^k and Q_m(z) := 2^{m/2} F̂_c(2^m i ln(z)) / φ̂(i ln(z)), we have P_m(z) ≈ Q_m(z), z ≠ 0. Since P_m is a polynomial, it is analytic inside a disk of the complex plane {z ∈ C : |z| < ρ} for ρ > 0. Thus, by means of Cauchy's integral formula we obtain the following expression for the coefficients,
c_{m,k} = (1/(2πi)) ∮_γ P_m(z) / z^{k+1} dz, (39)
where γ denotes a circle of radius r, r > 0, about the origin. If we apply the change of variables z = r e^{2πiu} and replace P_m by Q_m in Equation (39), we end up with the expression,
c_{m,k} ≈ (1/r^k) ∫_0^1 Q_m(r e^{2πiu}) e^{−2πiku} du. (40)
Observe that Q_m(z) has a pole at z = 1, since φ̂(i ln(z)) = (z − 1)/ln(z), and this is the reason why we select r ≠ 1.
If we apply the trapezoidal rule with step 1/2^m to solve the integral in Equation (40), we can write,
c_{m,k} ≈ (1/(r^k J)) Σ_{j=0}^{J−1} Q_m(r e^{2πij/J}) e^{−2πijk/J}, (41)
where J = 2^m. We note here that we have used the fact that the integrand for j = 0 and j = J takes the same value, and we have added them together. If we look at expression (41), the FFT algorithm can be applied straightforwardly. The optimal value of r to balance the discretization and round-off errors when solving (40) numerically is set to r = 0.9995 (see [15] for details).
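The Cauchy-integral step combined with the trapezoidal rule is exactly a scaled inverse discrete Fourier transform, so all 2^m coefficients come out of one FFT call. The toy sketch below recovers the power-series coefficients of an analytic function from samples on a circle of radius r, mirroring Equation (41) (function and variable names are ours):

```python
import numpy as np

def coeffs_via_fft(Q, J, r=0.9995):
    """c_k ~ (1/(r^k J)) * sum_j Q(r e^{2 pi i j / J}) e^{-2 pi i j k / J}:
    sample Q on a circle of radius r and apply one FFT."""
    z = r * np.exp(2j * np.pi * np.arange(J) / J)
    # np.fft.fft uses the e^{-2 pi i j k / J} sign convention, so it matches
    # the trapezoidal sum directly; divide by J and undo the radius scaling.
    return np.fft.fft(Q(z)) / J / r ** np.arange(J)

# Sanity check on a polynomial whose coefficients we know:
c = coeffs_via_fft(lambda z: 1.0 + 2.0 * z + 3.0 * z**2, J=8)
```

For a polynomial of degree below J the recovery is exact up to rounding, which is a convenient way to validate an implementation before feeding it the (numerically computed) Q_m.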

Computation of credit risk measures
Let α ∈ (0, 1) be a given confidence level (usually the α of interest is close to 1). The α-quantile of the loss distribution of L in this context is the VaR value,
VaR_α(L) := inf{l ∈ R : F_L(l) ≥ α}. (42)
This is the measure chosen in the Basel II Accord for the computation of capital requirements, meaning that a bank that manages its risks according to Basel II must reserve capital in an amount of VaR_α(L) to cover potential extreme losses. According to [12], the coefficients c_{m,k} satisfy,
0 ≤ 2^{m/2} c_{m,k} ≤ 1, and 2^{m/2} c_{m,k} ≤ 2^{m/2} c_{m,k+1},
since F_c is non-decreasing. Then, looking at (38), we conclude that,
F_c^m(x) = 2^{m/2} c_{m,k}, x ∈ [k/2^m, (k + 1)/2^m).
Thus, we can simply start searching for the VaR value by means of the following iterative (bisection-type) procedure. First we compute F_c^m at the midpoint of [0, 1] and compare it with α; depending on the result, we move to the midpoint of the left or the right half, and so on. This algorithm finishes after (at most) m steps, storing the k̄ value such that F_c^m(k̄/2^m) is the closest value to α in our scale-m approximation that satisfies F_c^m(k̄/2^m) ≥ α. Subsequently, we approximate the VaR value by the midpoint of the interval [k̄/2^m, (k̄ + 1)/2^m], that is,
VaR_α(L) ≈ l_α^m := (2k̄ + 1)/2^{m+1}. (43)
By definition, the ES at confidence level α is given by,
ES_α(L) := (1/(1 − α)) ∫_{l_α}^{1} x dF_L(x), (44)
where l_α := VaR_α(L). From [14], if we integrate by parts the integral in Equation (44) and use the expansion in Equation (38), we obtain the following formula for computing the ES,
ES_α(L) ≈ (1/(1 − α)) (1 − α l_α^m − ∫_{l_α^m}^{1} F_c^m(x) dx), (45)
where l_α is replaced by the VaR value l_α^m calculated at scale m in Equation (43), and the remaining integral of the piecewise constant F_c^m reduces to a finite sum of the coefficients c_{m,k} for k ≥ k̄.
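The VaR/ES extraction from the Haar coefficients can be summarized in a short routine. The sketch below assumes the coefficients are already available, uses the piecewise-constant form of F_c^m together with the integration-by-parts formula for the ES, and tests itself on a toy distribution of our own choosing (array names are also ours):

```python
import numpy as np

def var_es_from_haar(cmk, m, alpha):
    """VaR and ES at level alpha from Haar scaling coefficients c_{m,k},
    using F_c^m(x) = 2^{m/2} c_{m,k} on [k/2^m, (k+1)/2^m)."""
    F = 2.0 ** (m / 2.0) * np.asarray(cmk)      # step values of the CDF
    kbar = int(np.searchsorted(F, alpha))       # first k with F >= alpha
    h = 1.0 / 2 ** m                            # cell width
    var = (2 * kbar + 1) / 2 ** (m + 1)         # midpoint of the interval
    # ES ~ (1 - alpha*var - int_var^1 F(x) dx) / (1 - alpha), with the
    # integral of the piecewise-constant CDF computed cell by cell
    tail = F[kbar] * ((kbar + 1) * h - var) + F[kbar + 1:].sum() * h
    es = (1.0 - alpha * var - tail) / (1.0 - alpha)
    return var, es

# Toy check: if L were uniform on [0, 1], then F(x) = x and the exact
# coefficients are the cell averages c_{m,k} = 2^{-m/2} (k + 1/2)/2^m
m = 10
cmk = 2.0 ** (-m / 2.0) * (np.arange(2 ** m) + 0.5) / 2 ** m
var, es = var_es_from_haar(cmk, m, 0.9)
```

For the uniform toy case the exact values are VaR_0.9 = 0.9 and ES_0.9 = 0.95, and the routine reproduces them to within the 1/2^{m+1} resolution of the scale.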

Remark 4.1:
The VaR value can be obtained by calculating (at most) m coefficients c_{m,k} following the algorithm explained in Section 4.2. Taking into account that the computation of each coefficient in Equation (41) involves M = 2^m terms, the overall complexity for obtaining the VaR is (at most) 2^m · m operations. If we apply the FFT algorithm, as explained in Section 4.1, we get all the coefficients c_{m,k} at once (ranging from k = 0 to k = 2^m − 1) with 2^m · log₂(2^m) = 2^m · m operations, and we therefore need the same computational effort. The computational saving when using the FFT comes into play when we wish to compute the ES as well, since in that case we need to compute many more extra coefficients (for k̄ + 1 ≤ k ≤ 2^m − 1).

Laplace transform inversion
A popular method for Laplace transform inversion was presented in [1,2] and used in the context of credit portfolio losses in [8,9]. We provide a brief description of the method and compare it with the WA method. Let f be a density function and f̂ its Laplace transform, given by,
f̂(s) := ∫_0^∞ e^{−st} f(t) dt. (46)
According to the Bromwich inversion theorem (see for instance Appendix B of [8]),
f(t) = (1/(2πi)) ∫_{b−i∞}^{b+i∞} e^{st} f̂(s) ds, (47)
where b is a real number to the right of the singularities of f̂. By performing numerical integration, applying a change of variable and using the technique of Euler summation (as in [2]) to accelerate the convergence of the approximation (see Appendix B of [8]), the integral in Equation (47) is approximated by the partial sum,
s_n(t) := (e^{A/(2l)}/(2lt)) Σ_{k=0}^{n} (−1)^k b_k(t), (48)
where, for l = 1,
b_0(t) := f̂(A/(2t)) and b_k(t) := 2 R(f̂((A + 2kπi)/(2t))), k ≥ 1, (52)
and Euler summation replaces s_n(t) by a binomial average of the successive partial sums s_n(t), s_{n+1}(t), ... in order to accelerate convergence. The parameters l, m̃ and A are chosen to control the round-off and discretization errors of the approximation. Usually l is chosen equal to 1, so we consider l = 1. The parameter m̃ controls the round-off error in an inversely proportional way because the functions b_k(t) in Equation (52) are computed with a round-off error of 10^{−m̃}. In our examples we take m̃ = 8 to achieve a round-off error of no more than 10^{−8}. The parameter A controls the discretization error of the approximation and, according to [8], the following relation provides efficient control of the parameters,
A = m̃ ln(10) + ln(2lt).
The remaining parameter n is the size of the partial sums. We compare the WA approximation presented in Section 4.1 with the NLTI method. For this purpose, we plot the tail probabilities computed with both methods and perform an MC simulation with 10^7 scenarios for the systematic factor to obtain our benchmark distribution. The model considered is the simple one-factor Gaussian copula model (2), since this comparison concerns only the inversion step. The test portfolio has N = 50 obligors and shows severe exposure concentration: E_1 = E_2 = 50, E_n = 1, n = 3, ..., N. We consider P_n = 0.21%, ρ_n = 0.15, n = 1, ..., N. The corresponding characteristic function is calculated as in [12], that is, following expression (8) for the one-dimensional case and solving the integral by means of Gauss-Hermite quadrature with 20 nodes. We plot the results in Figure 1. The left-hand plot of Figure 1 shows the tail probabilities obtained with the WA, NLTI and MC methods. The number of terms employed in NLTI is n = 350, while WA has been run at scale of approximation m = 10. We made this choice of n in order to compare the accuracy of both methods at the same CPU time cost.² We observe that while WA is capable of accurately approximating the benchmark solution, the NLTI method shows an oscillatory behaviour further in the tail, which may have an important negative impact on the computation of the ES. The right-hand plot of Figure 1 zooms in on the approximation carried out by the NLTI method when more terms are considered in the expansion. In particular, we represent the approximation with n = 1000 and observe that heavy oscillations remain in the tail. In addition to these oscillations, we underline that there is no prescription for selecting a suitable value of n, and adjusting it may become a matter of trial and error.
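For reference, the classical EULER variant of the Abate-Whitt algorithm (with l = 1) can be sketched as follows. This is a minimal illustration of the NLTI machinery, not the authors' implementation; the defaults A = 18.4, n = 15 and the Euler averaging depth 11 are the usual textbook choices from [1,2].

```python
import numpy as np
from math import comb, exp, pi

def euler_inversion(F, t, A=18.4, n=15, m_eu=11):
    """Invert the Laplace transform F at t > 0 (Abate-Whitt EULER, l = 1)."""
    def partial_sum(N):
        # trapezoidal discretization of the Bromwich integral on Re(s) = A/(2t)
        s = 0.5 * F(A / (2 * t)).real
        s += sum((-1) ** k * F((A + 2j * pi * k) / (2 * t)).real
                 for k in range(1, N + 1))
        return exp(A / 2) / t * s
    # Euler (binomial) summation accelerates the alternating series
    return sum(comb(m_eu, k) * 2.0 ** (-m_eu) * partial_sum(n + k)
               for k in range(m_eu + 1))

# sanity check on a known transform pair: F(s) = 1/(s + 1)  <->  f(t) = e^{-t}
val = euler_inversion(lambda s: 1.0 / (s + 1.0), t=1.0)
assert abs(val - np.exp(-1.0)) < 1e-6
```

Note how the accuracy hinges on the truncation size n, which is precisely the parameter whose selection is a matter of trial and error for the loss distributions considered here.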
Regarding the selection of the scale parameter m = 10 for the WA method, it is worth remarking that, if the coefficients c_{m,k} in Equation (39) were computed exactly, the true VaR value VaR_α(L) would lie within the interval [k/2^m, (k + 1)/2^m] and would differ by at most 1/2^{m+1} from the VaR value l_α^m = (2k + 1)/2^{m+1} computed with the algorithm explained in Section 4.2, since |VaR_α(L) − l_α^m| ≤ 1/2^{m+1}. For m = 10 we have 1/2^{m+1} ≈ 5 · 10^−4, and we therefore expect (at most) three digits of accuracy in terms of absolute error with respect to the true solution. We take as a reference in this case an MC simulation with 10^7 scenarios. The standard error of the MC quantile estimator is √(α(1 − α))/(f_L(VaR_α(L)) · √(10^7)) (see [7] for details), so the error is roughly proportional to 1/√(10^7) ≈ 3 · 10^−4, scaled by the factor √(α(1 − α))/f_L(VaR_α(L)) involving the true density f_L evaluated at the true quantile.
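The asymptotic standard error formula for the MC quantile estimator can be checked numerically. The sketch below is illustrative only: it uses a standard normal "loss" instead of the portfolio loss L, and a smaller sample size than 10^7 so that it runs quickly.

```python
import numpy as np

alpha, n_sc, reps = 0.999, 10**5, 200
rng = np.random.default_rng(1)

# exact quantile and density of the standard normal "loss" at level alpha
q = 3.090232306167813                      # Phi^{-1}(0.999)
f_q = np.exp(-q**2 / 2) / np.sqrt(2 * np.pi)

# empirical spread of the MC quantile estimator over many repetitions
est = np.array([np.quantile(rng.standard_normal(n_sc), alpha)
                for _ in range(reps)])
empirical_se = est.std()

# asymptotic standard error: sqrt(alpha*(1-alpha)) / (f(q) * sqrt(n))
theoretical_se = np.sqrt(alpha * (1 - alpha)) / (f_q * np.sqrt(n_sc))
assert abs(empirical_se - theoretical_se) / theoretical_se < 0.5
```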

Numerical examples
In this section, we show the performance of the WA method for a wide variety of portfolios when the firm value is driven by the three different models presented in Section 2. The test portfolios are described in Table 1. The matrix A = (a_ij), for i = 1, ..., N, j = 1, ..., d, contains the factor loadings associated with the models (4) and (13), which have been generated by simulation following a uniform distribution U(a, b). As pointed out in [9], the approximation of Section 3.1 works well for moderate correlation among obligors, that is, when ‖A‖ is small, where ‖A‖ := max_j Σ_k |a_jk|. The rationale is that the accuracy of the approximation depends on the goodness of the approximation (19). If a_n^T a_n is small, then it can be seen from Equation (17) that ln g_n(w, S_n) will be almost linear in S_n and therefore (19) will fit better. In the extreme case where a_n^T a_n = 0, ln g_n(w, S_n) becomes constant in S_n and the approximation (19) becomes exact (a complete error analysis is presented in Section 3.2 of [9]). A remedy for the case of strong correlation is presented in [9]. The correlation parameter ρ in Equation (3) is assumed to be constant for all the obligors in the portfolio. The dimension d = 1 refers to the model in Equation (3). Observe that all test portfolios considered show name concentration, and C is computed such that Σ_{n=1}^N E_n = 1. In all experiments of this section, we run the MC simulation with 10^6 scenarios for the systematic risk factors to estimate the VaR and ES risk measures at the regulatory 99.9% confidence level and report those results as a reference. In this regard, we underline that, for the reasons mentioned in the previous section, this is not a robust reference to compare with. A robust benchmark would be the average of a large sample of VaR and ES estimates with (for instance) 10^6 scenarios each, rather than a single estimate.
We dismiss this last strategy because the computation time is in general unaffordable for the models presented in this work applied to portfolios P2 and P4. The wavelet-based approximation F^c_m in Equation (38) converges to the true distribution function F^c in the L²-norm as m increases, since F^c belongs to L²([0, 1]). We therefore assume that the true solution is given by the WA method at scale m = 10 and use it as our benchmark.
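The L² convergence in m can be illustrated with a toy CDF on [0, 1]: projecting onto the Haar scaling functions at scale m amounts to taking cell averages over the dyadic intervals [k/2^m, (k + 1)/2^m), and the L² error shrinks as m grows. This is a sketch under these simplifying assumptions, not the WA algorithm itself, which obtains the coefficients from the characteristic function.

```python
import numpy as np

# Haar scaling-function projection of a CDF F on [0,1] at scale m:
# on each interval [k/2^m, (k+1)/2^m) the approximation is the cell average.
def haar_projection(F, m, n_quad=64):
    M = 2**m
    edges = np.linspace(0.0, 1.0, M + 1)
    t = (np.arange(n_quad) + 0.5) / n_quad   # midpoint quadrature in each cell
    cells = edges[:-1, None] + t[None, :] / M
    return F(cells).mean(axis=1)

F = lambda x: np.clip(x, 0, 1) ** 2          # toy CDF on [0,1]
x = np.linspace(0, 1, 10**5, endpoint=False)

def l2_error(m):
    c = haar_projection(F, m)
    approx = c[np.minimum((x * 2**m).astype(int), 2**m - 1)]
    return np.sqrt(np.mean((F(x) - approx) ** 2))

assert l2_error(8) < l2_error(4) < l2_error(2)   # error shrinks as m grows
```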
The QTA approximation presented in Section 3.1 relies on the quadratic approximation of the function g in Equation (17) as well as on Proposition 3.1. This proposition can be applied when ℜ(H) is negative-semidefinite and, as pointed out in [9], this is accomplished when γ_n(w) in Equation (21) satisfies ℜ(γ_n(w)) ≤ 0 for all n = 1, ..., N. We carry out the computation of α_n(w), β_n(w), γ_n(w) by means of weighted least squares and perform a linear approximation when ℜ(γ_n(w)) is strictly positive. We look for α_n(w), β_n(w) and γ_n(w) that solve the minimization problem

min Σ_{λ∈Λ} ω(λ) |ln g_n(w, λ) − α_n(w) − β_n(w)λ − γ_n(w)λ²|².

The summands represent the approximation errors at the grid points λ ∈ Λ, where ω(λ) represents the penalty weight for the errors. We assume that Λ and ω are the same for all n and that Σ_{λ∈Λ} ω(λ) = 1. The set of grid points Λ and the weight ω should be chosen to reflect the fact that the variable S_n in Equation (19) follows a standard Gaussian distribution. For our numerical examples, we choose ω(λ) exponentially decreasing in |λ|²; specifically, we choose ω to be the PDF of a standard Gaussian normalized over Λ. While the grid points in [9] are chosen evenly between −3 and 3, we select the range [−7, 7], since our numerical experiments are more accurate within this interval. The advantage of using the least-squares method to determine α_n(w), β_n(w) and γ_n(w) is that the optimization problem has a unique closed-form solution.
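A minimal sketch of the weighted least-squares fit follows, with a hypothetical stand-in for ln g_n (the true g_n depends on the model of Section 3.1). The closed-form solution solves the normal equations (XᵀWX)θ = XᵀWy.

```python
import numpy as np

lam = np.linspace(-7, 7, 41)                 # grid of points Λ
w = np.exp(-lam**2 / 2)
w /= w.sum()                                 # Gaussian weights normalized over Λ

# illustrative stand-in for ln g_n(w, λ); NOT the model's actual function
ln_g = np.log(1 + 0.3 * np.exp(-0.5 * lam))

# weighted least squares for a + b*λ + c*λ²: solve (X^T W X) θ = X^T W y
X = np.vander(lam, 3, increasing=True)       # columns: 1, λ, λ²
W = np.diag(w)
theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ln_g)
a, b, c = theta                              # play the roles of α_n, β_n, γ_n
```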

Multi-factor Gaussian copula
In this section we select portfolios P3 and P4 for the experiments. Figure 2 shows the tail probabilities compared to the MC simulation. We call the numerical method employed WA-QTA, where WA stands for the inversion methodology based on wavelets and QTA refers to the quadratic transform approximation for obtaining the characteristic function, as explained in Section 3.1.
We present the VaR and ES values in Table 2 and the corresponding CPU times in Table 3. The risk measures can be obtained with less than 1% relative error at scale m = 9 in about one second for P3 and in about seven seconds for P4, where the dimensions are 8 and 6, respectively. We can obtain the risk measures in half of those CPU times at scale m = 8 while keeping accurate results for the VaR, although the accuracy worsens for the ES.

t-copula with ν degrees of freedom and t-distributed margins
In this section we select portfolios P1 and P2 for the experiments. We use the name WA-HLXX for the numerical method presented in Section 3.2.1, where H stands for Hermite, L stands for Laguerre and XX specifies the number of nodes considered for the generalized Gauss-Laguerre quadrature (n_l). The number of nodes for Gauss-Hermite quadrature is fixed to n_h = 20 for all experiments, as in [12,14] for the one-factor Gaussian copula model. As in the previous section, WA-QTA refers to the numerical method presented in Section 3.2.2. Figure 3 shows the tail probabilities of portfolio P1 compared to the MC simulation. We observe that the accuracy at high loss levels depends on the parameter ν: a small ν requires a large n_l, and vice versa. We present in Table 4 results on portfolios P1 and P2 for ν = 5 and n_l = 50, and the corresponding CPU times are shown in Table 5. It is worth remarking that, as mentioned at the end of Section 4.3, the theoretical absolute error estimate when computing the VaR value l_α^m at scale m is 1/2^{m+1}. We observe that this theoretical estimate is in good agreement with the order of the empirical error at scale m = 9 when we compare with the benchmark at scale m = 10, since the observed differences are of the order of 1/2^{m+1} ≈ 10^−3.
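The role of the generalized Gauss-Laguerre quadrature can be sketched as follows: with weight x^{ν/2−1} e^{−x}, the rule integrates expectations over the χ²(ν) mixing variable of the t-copula exactly for polynomial integrands. This is an illustrative check only; the actual integrand comes from the characteristic function of the model.

```python
import numpy as np
from scipy.special import roots_genlaguerre, gamma

# E[h(X)] for X ~ chi-square(nu) via generalized Gauss-Laguerre quadrature:
# substituting x -> 2x turns the chi-square density into the Laguerre weight
# x^(nu/2-1) e^{-x}, so E[h(X)] = (1/Gamma(nu/2)) * sum_i w_i h(2 x_i).
def chi2_expectation(h, nu, n_nodes):
    x, w = roots_genlaguerre(n_nodes, nu / 2 - 1)
    return (w @ h(2 * x)) / gamma(nu / 2)

nu = 5
# sanity checks: E[X] = nu and E[X^2] = nu*(nu+2) for X ~ chi-square(nu)
assert abs(chi2_expectation(lambda x: x, nu, 25) - nu) < 1e-8
assert abs(chi2_expectation(lambda x: x**2, nu, 25) - nu * (nu + 2)) < 1e-6
```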

Multi-factor t-copula model
We consider in this last section the most challenging case among the three models presented in this work. We perform our experiments with portfolio P5 of dimension 5, assume that ν = 7 and consider 25 nodes of integration for the generalized Gauss-Laguerre quadrature in Equation (32). We plot the tail probabilities in Figure 4, where WAm-QTALj stands for the WA-QTA method at scale of approximation m using j nodes of generalized Gauss-Laguerre quadrature. As usual, we plot MC as a reference, and we observe highly accurate VaR values at scales m = 8, 9 with respect to the benchmark scale m = 10, both at the 99.9% and 99.99% levels. These results are confirmed by the VaR values in Table 6. The ES values are very accurate at the regulatory level 99.9%, and less accurate (but still competitive) values are obtained further in the tail at level 99.99%. It is worth mentioning that this high confidence level is particularly demanding when computed by MC simulation. The CPU times are presented in Table 7. We highlight the impressive CPU times (measured in seconds), in particular at scales m = 8, 9. Note: the relative errors at scales m = 8, 9 with respect to the WA-QTA method at scale m = 10 are shown in parentheses.

Conclusions
In this work we have investigated two highly efficient numerical methods to obtain the VaR and ES values for portfolios with exposure concentration under multi-factor Gaussian and t-copula models. It is well known that MC methods are highly demanding from the computational point of view when dealing with large and high-dimensional models for the estimation of VaR and ES values at high confidence levels. The two methods are called WA-HL and WA-QTA and are composed of two main parts. The first part is the numerical computation of the characteristic function associated with the portfolio loss variable. We tackle this part with different techniques depending on the underlying model. For the (bivariate) t-copula model we perform a double integration with Gauss-Hermite and generalized Gauss-Laguerre quadrature and call this method WA-HL, while the multi-factor Gaussian model is treated with the QTA method put forward in [9], and the resulting method is called WA-QTA. The characteristic function for the most challenging model, that is, the multi-factor t-copula, is derived by conditioning on the chi-square random variable of the model and computed by applying the QTA method at each discretization point of the resulting one-dimensional integral. This last model is by far the most involved in terms of computing effort. Once the characteristic function of the loss variable has been obtained, the second part of the procedure comes into play. This second step is the Fourier inversion method called WA, which is based on Haar wavelets and was developed in [12,14] to recover the CDF of the loss variable. We have improved the efficiency of the WA method by computing the coefficients of the expansion by means of an FFT algorithm, and we have shown that this method outperforms the NLTI inversion method in terms of efficiency and robustness.
The overall CPU time of the numerical methods employed is impressive taking into account the size and dimension of the models as well as the confidence levels considered. This may be the first time that the multi-factor t-copula model has been considered outside the MC framework. This research opens the door to calculating the risk contributions to the VaR and ES risk measures under the same model assumptions, and we will consider this problem in our future work.

Notes
1. Note that we use the Fourier transform of the density function as the definition of the characteristic function.
2. The programs are coded in MATLAB and run on a laptop with a 2 GHz Intel Core i5 processor and 8 GB of 1867 MHz LPDDR3 memory.

Disclosure statement
No potential conflict of interest was reported by the authors.