

How banks can aggregate individual risk exposures

Risk aggregation can be done by first calculating individual risk exposures and then combining those individual risks to form a measure of total risk.

This seeks to provide all the core risk measurement inputs to a centralised risk management function so that it can perform the overall economic capital calculation.

Banks usually aggregate risk along two dimensions: by the type of risk, such as market, credit and operational risk, and by the structure of the organisation, such as the business lines or the legal entities in which the bank operates. This form of calculation involves the statistical concept of correlation: a particular value for the correlation between the risks is selected, and standard statistical methods are then invoked to produce the aggregate risk measure.

In theory, correlations can be measured by observing the long-run relationship between two data series, but in practice there is a limited amount of relevant data available for measuring correlations across risk types. Moreover, the qualitative nature of some risk types, such as operational risk, makes them difficult to quantify for risk management decision-making. The common risk aggregation methodologies include:

Simple summation

This simple approach involves adding the individual risk components. It is typically perceived as a conservative approach, since it ignores potential diversification benefits and so produces an upper bound on the true economic capital figure. Technically, it is equivalent to assuming that all inter-risk correlations are equal to one and that each risk component receives equal weight in the summation.

Fixed diversification percentage

This approach is essentially the same as simple summation, with the one difference that the sum is assumed to deliver a fixed level of diversification benefit, set at some pre-specified percentage of overall risk.
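To make the mechanics concrete, the following is a minimal Python sketch of the two approaches just described. The stand-alone capital figures and the 20 per cent diversification percentage are purely illustrative assumptions, not figures from any actual bank.

```python
# A minimal sketch of simple summation and the fixed diversification
# percentage approach. All figures are hypothetical.

economic_capital = {"credit": 500.0, "market": 300.0, "operational": 200.0}

# Simple summation: add the stand-alone figures; this implicitly assumes
# all inter-risk correlations equal one (no diversification benefit).
total_simple = sum(economic_capital.values())

# Fixed diversification percentage: apply a pre-specified haircut
# (here an assumed 20%) to the simple sum.
diversification = 0.20
total_fixed = total_simple * (1.0 - diversification)

print(f"Simple summation:      {total_simple:,.1f}")
print(f"Fixed diversification: {total_fixed:,.1f}")
```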

Variance-Covariance matrix

This approach allows for a richer pattern of interactions across risk types, although the interactions are still assumed to be linear and fixed over time. The overall diversification benefit depends on the size of the pairwise correlations between risks.
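The calculation itself is compact, as the sketch below shows; the stand-alone capital figures and the pairwise correlations are assumed for illustration. Note that setting every correlation to one recovers the simple sum.

```python
import numpy as np

# A minimal sketch of variance-covariance aggregation with
# hypothetical stand-alone capital figures and correlations.
ec = np.array([500.0, 300.0, 200.0])   # credit, market, operational

corr = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])

# Aggregate capital = sqrt(ec' R ec); with all correlations equal
# to one this collapses to the simple sum of 1,000.
total = np.sqrt(ec @ corr @ ec)
print(f"Aggregated capital: {total:,.1f}")
```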

Copulas

This is a much more flexible approach to combining individual risks than the use of a covariance matrix. A copula is a function that combines marginal probability distributions into a joint probability distribution. The choice of functional form for the copula has a material effect on the shape of the joint distribution and can allow for rich interactions between risks.
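The following is an illustrative sketch of one simple choice, the Gaussian copula, combining two assumed marginal loss distributions. The marginals, the correlation and every parameter are hypothetical; other copula families (for example the Student-t) would produce different tail behaviour.

```python
import numpy as np
from scipy import stats

# A minimal Gaussian copula sketch: simulate correlated normals,
# map them to uniforms, then push the uniforms through assumed
# marginal loss distributions.
rng = np.random.default_rng(42)
n = 100_000
corr = np.array([[1.0, 0.4],
                 [0.4, 1.0]])

# Correlated standard normals via Cholesky factorisation.
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(corr).T
u = stats.norm.cdf(z)                  # uniforms sharing the copula

# Hypothetical marginals: normal market losses, lognormal credit losses.
market_loss = stats.norm.ppf(u[:, 0], loc=0.0, scale=10.0)
credit_loss = stats.lognorm.ppf(u[:, 1], s=0.8, scale=20.0)

total_loss = market_loss + credit_loss
var_99 = np.percentile(total_loss, 99)   # 99% loss quantile
print(f"99% loss quantile of combined risks: {var_99:,.2f}")
```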

Full modelling of common risks across portfolio

This represents the theoretically pure approach. Common underlying risks are identified and their interactions modelled. Simulation of the risks (or scenario analysis) provides the basis for calculating the distribution of outcomes and the economic capital measure. This method produces an overall risk measure in a single step, since it accounts for all risk interdependencies and effects across the entire bank.
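A minimal simulation sketch of the full-modelling idea follows, in which a single hypothetical common factor (say, the state of the economy) drives both credit and market losses, and economic capital is read off the simulated bank-wide loss distribution. All parameter values are illustrative assumptions.

```python
import numpy as np

# A minimal sketch of full modelling: one common factor drives both
# loss types; economic capital is unexpected loss at the 99.9% level.
rng = np.random.default_rng(7)
n = 100_000

economy = rng.standard_normal(n)                       # common factor
credit = 50 - 15 * economy + rng.standard_normal(n) * 5
market = 30 - 10 * economy + rng.standard_normal(n) * 8

total_loss = credit + market                           # bank-wide loss
expected = total_loss.mean()
var_999 = np.percentile(total_loss, 99.9)

print(f"Economic capital: {var_999 - expected:,.2f}")
```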

The table below summarises a comparison of the risk aggregation methodologies discussed above, based on their advantages and disadvantages.

Bank risk measurement

The standard deviation and Value at Risk (VaR) risk measures

Empirically, a wide range of risk measures is used in risk management applications. These include the standard deviation, Value at Risk (VaR), expected shortfall (ES) and spectral and distortion risk measures. Studies show that all of these measures have strengths and weaknesses; no single measure captures every element of risk, so there is no ideal risk measure, and further research is needed to arrive at one. Nonetheless, for ease of estimation and analysis, this article dwells on the measures most commonly used by banks for risk management: the standard deviation and Value at Risk (VaR).

The Value at Risk (VaR) and standard deviation risk measures

One of the most widely used approaches to bank risk measurement and management is VaR, which measures the economic capital discussed in the previous paragraphs. VaR can be applied to any portfolio of assets and liabilities whose market values are available on a periodic basis and whose price volatilities (standard deviations) can be estimated.

Empirical studies reveal that practitioners, regulators and academics have welcomed VaR, and it has come to be recognised as a core component of current best practice in risk measurement.

Michael Minnich (vice president of Capital Market Risk Advisors, Inc.) defined VaR as “the maximum loss a portfolio is expected to incur over a specified time period, with a specified probability.”

Also, Wilmott (1999) defines VaR as “an estimate, with a given degree of confidence, of how much one can lose from one’s portfolio over a given period.”

The definitions above suggest that three main inputs are required for estimating VaR: a common measurement unit (usually a currency such as ¢, $, € or £); a confidence level (the tail probability is typically set between 1% and 5%, corresponding to confidence levels of 99% to 95%); and the holding period of the analysis, which can be of any length (commonly one day, one week or two weeks).
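A minimal sketch of a parametric (normal) VaR built from those three inputs follows. The portfolio value and daily volatility are hypothetical, and the square-root-of-time scaling of the holding period assumes independent daily returns.

```python
from scipy import stats

# A minimal parametric VaR sketch; all input values are hypothetical.
portfolio_value = 10_000_000     # the currency amount (e.g. cedis)
daily_volatility = 0.012         # 1.2% daily standard deviation
confidence = 0.99                # i.e. a 1% tail probability
holding_days = 10                # two-week (ten trading day) horizon

z = stats.norm.ppf(confidence)   # ~2.33 for 99% confidence
var = portfolio_value * daily_volatility * z * holding_days ** 0.5
print(f"{holding_days}-day {confidence:.0%} VaR: {var:,.0f}")
```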

The illustration below demonstrates value at risk as defined.

Exhibit 3  (INSERT TABLE)

Empirical studies have shown that VaR assigns a probability to a cedi (or any other currency) amount of loss occurring, unlike scenario analysis or stress testing, which shows the loss that would occur under a given scenario. As illustrated above, the probability of two per cent and its corresponding loss of ¢0.648 million are not associated with any specific event, but encompass any event that could cause such a loss.

The VaR estimate should not be taken as the maximum loss that can occur; the actual loss could be larger. It is only a loss threshold at the chosen probability, say two per cent in the example above. In practice, a one per cent probability level with a two-week (ten trading day) holding period is considered the standardised requirement for VaR estimation. In addition, the VaR measure is multiplied by three to allow for the fact that the estimate may be subject to error and that unrealistic assumptions may affect it.
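Expressed as a short sketch, with the one-day VaR figure assumed purely for illustration:

```python
# A minimal sketch of the regulatory scaling described above:
# a one-day 99% VaR is scaled to ten days by square-root-of-time,
# then multiplied by three. The starting figure is hypothetical.
one_day_var = 88_000
ten_day_var = one_day_var * 10 ** 0.5
capital_charge = 3 * ten_day_var
print(f"Market risk capital charge: {capital_charge:,.0f}")
```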

According to Capital Market Risk Advisors, Inc.’s A Primer on Value at Risk, there are three main categories of VaR methodologies, whose primary distinction is the type of calculation performed: variance-covariance, Monte Carlo simulation and historical simulation. Any of them can be used, provided the underlying assumptions and the available data are respected, so as not to distort the estimate or its interpretation.
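As an illustration of the historical simulation category, the sketch below takes the loss quantile directly from a return history, with no distributional assumption. The returns here are simulated stand-in data; in practice the inputs would be observed market returns.

```python
import numpy as np

# A minimal historical-simulation VaR sketch. Heavy-tailed stand-in
# returns are generated only because no real history is available here.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=500) * 0.01   # 500 daily returns

portfolio_value = 10_000_000
pnl = portfolio_value * returns                   # daily profit and loss

# 99% one-day historical VaR: the 1st percentile of P&L, as a loss.
var_99 = -np.percentile(pnl, 1)
print(f"One-day 99% historical VaR: {var_99:,.0f}")
```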

The exhibit below summarises the strengths and weaknesses of the three VaR methodologies.

Exhibit 4 (INSERT TABLE)

The main concern of risk measurement and management is to minimise possible losses from an investment portfolio. The variance, which is commonly used as a measure of risk, does not by itself achieve this goal because it is a measure of dispersion around a central point. VaR is therefore intuitively appealing in that it collapses the total risk of a portfolio into a single estimate. Moreover, VaR is not confined to market risk: models also exist to measure credit VaR and other risk factors. The standard deviation, which measures the volatility of returns, is best estimated on a daily basis for analysis purposes; it also underpins the assumption of the normal distribution.
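A minimal sketch of estimating the daily standard deviation from a short, hypothetical price series and annualising it by the usual square-root-of-time convention:

```python
import numpy as np

# A minimal volatility-estimation sketch; the prices are stand-in data.
prices = np.array([100.0, 101.2, 100.5, 102.1, 101.7, 103.0])

log_returns = np.diff(np.log(prices))
daily_sd = log_returns.std(ddof=1)       # sample standard deviation
annual_sd = daily_sd * np.sqrt(252)      # ~252 trading days per year

print(f"Daily volatility:  {daily_sd:.4f}")
print(f"Annual volatility: {annual_sd:.4f}")
```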

The table below summarises the strengths and weaknesses of the VaR and standard deviation risk measures discussed above, taking into account each measure’s intuitiveness, stability, computational difficulty, understandability and coherence.

Exhibit 5  (INSERT TABLE)

 Summary and conclusion

This article has focused, briefly but in some detail, on risk measurement and management, with emphasis on VaR and the standard deviation as risk measures. It has considered the types of bank risks, the strengths and weaknesses of the three VaR methodologies and their importance to risk management and decision-making, with particular attention to economic capital as the core goal of risk measurement and management. Finally, it has compared the VaR and standard deviation risk measures on each measure’s intuitiveness, stability, computational difficulty, understandability and coherence.

From the discussion so far, it is intuitive to infer that the many risks, especially credit risk, that could threaten a bank’s economic capital can, even when adequately measured, only be managed through sound practices: establishing an appropriate credit risk environment, operating under a sound credit-granting process, maintaining appropriate credit administration, measurement and monitoring processes, and ensuring adequate controls over credit risk.

As spelt out in the Basel accords, especially Basel III, banks need economic capital in order to stay solvent should any unforeseen crisis hit the industry. This article therefore suggests that the efforts banks have been making to develop more systematic and integrated firm-wide approaches to risk measurement and management should continue to be robustly encouraged by the regulatory and supervisory authorities, with both sides working hand in hand towards a win-win approach that benefits all players in the industry.

Furthermore, the supervisory authorities should allow time for continued research to arrive at a more general, simple-to-use risk measurement (and management) methodology for economic capital, in order to avoid the discrepancies that emerge when many different methodologies are used for one basic objective. GB

REFERENCES:

- Basel Committee on Banking Supervision. (March 2009). Range of practices and issues in economic capital frameworks.

- Basel Committee on Banking Supervision, The Joint Forum. (August 2003). Trends in risk integration and aggregation.

- Saunders, A. and Cornett, M. M. (2014). Financial Institutions Management: A Risk Management Approach (8th ed.). McGraw-Hill International Edition.
