EMU: towards greater integration

Oct 23 2015

The European Commission is implementing the steps needed to strengthen the Economic and Monetary Union (EMU) through the application of the 'Five Presidents' Report'.

Five Presidents' Report
Press release
FAQ
Communication on the steps needed to complete the EMU
Decision establishing an independent, advisory European Fiscal Board
Recommendation (European Council) for the creation of national competitiveness boards in the euro area
Communication on the steps needed for a more consistent representation of the euro area in international forums
Proposal for the progressive establishment of a unified representation of the euro area in the IMF

BCBS 239: PERDAR on the Road to compliance
di Luigi Mastrangelo e Mattia Monti

Oct 23 2015

1.   Regulatory context

In January 2013, the Basel Committee on Banking Supervision (BCBS) issued 11 principles for effective risk data aggregation and risk reporting (hereafter also called BCBS #239) and outlined the path to compliance for G-SIBs and D-SIBs[1]. The Regulators drew up the following deadlines for achieving full compliance with BCBS #239: January 2016 for G-SIBs, 2019 for D-SIBs[2].

This principles-based document is intended to address what Supervisors consider a major weakness that banks carried into the crisis: the inability to understand quickly and accurately their overall exposure and the other risk measures that drive their key risk decisions. The principles-based approach used by the BCBS aims to leave financial institutions free to interpret the standards and design a tailored approach to comply with BCBS #239.

These principles-based rules can be a great opportunity for banks to turn the requirements into sources of added value. Banks should therefore assess this new regulatory framework with a strategic eye, using it to drive and steer their activity rather than reducing compliance to a mere box-ticking exercise.

2.   Challenges for the Banking Sector

The scenario in which BCBS #239 has to be applied is quite complicated, because banks have complex, structured organizations and have set up a risk monitoring view that has always been considered (even by Regulators themselves) as "silos-based" rather than cross-risk, as outlined by the new ECB rules (i.e. the Comprehensive Assessment). BCBS #239 aims at breaking down the old "silos" view, in which each risk type is monitored separately without a common aggregated view of the counterparty. BCBS #239 is crucial to avoid, in the future, risk monitoring processes unable to control exposures in a timely manner, with impacts on the going concern of the financial institution. The new regulatory view is forcing the banking sector to find a value-added solution for each of the following challenges:

  1. Data Governance: BCBS #239 aims to improve the governance of the data elaborated by IT systems in order to ensure the best data quality. First of all, the standard highlights that a clear process for validating the data is crucial to increase the control over them and their quality. The second point outlined in the BIS document deals with the definition of each measure, which is crucial for a correct data aggregation process, through a data taxonomy. The standard also remarks on the need for clear documentation through which banks can control and steer the running process with proper exception or escalation actions, if needed.
    The monitoring and the creation of these processes should be performed at a group-wide level, with an "orchestra leader" steering all the activities.
  2. Infrastructure & Data Quality: The financial crisis showed that in several banks the data capture and aggregation processes are unwieldy and relatively unsophisticated, requiring data cleansing and manual reconciliation before aggregated management reports can be produced. In addition, the different risk types require data with varying degrees of granularity, reducing the consistency and quality of the data. These activities also shift the validators' attention towards creating manual patches to square the data, instead of assessing the figures and steering the business. Therefore, banks need the ability to generate aggregated risk data across all critical risk types in all situations, i.e. both in the normal run and for specific "ad-hoc" regulatory requests. BCBS #239 embeds the request to improve data automation in order to improve data accuracy and timeliness without losing a certain degree of flexibility. Nowadays, the new comprehensive risk view is stressing the silo IT systems, which produce aggregated data with manual corrections at group level. This approach sometimes affects the local validation process, because the corrections cannot properly display or recreate the local figures validated at facility/product level by the local validator, as experienced during the Asset Quality Review exercise[3] (a minimal illustration of the automated reconciliation such processes call for is sketched after this list).
  3. Reporting: Banks face more requirements today when it comes to meeting reporting demands. Both the national supervisory authorities and the European Central Bank are asking for more information, aimed at increasing transparency and clear accountability. Top management, in turn, is looking for more information to cope with these requirements and to use the additional information sets for new strategic plans. This scenario is increasing the pressure on the Finance & Risk Departments as well as on the IT infrastructure.
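To make the second challenge concrete, the sketch below shows the kind of automated, exception-based reconciliation that replaces manual patching: aggregated figures from two silo systems are joined and material breaks are escalated rather than silently corrected. It is a minimal illustration in Python; all names, figures and the materiality threshold are hypothetical.

```python
import pandas as pd

# Hypothetical extracts from two silo systems covering the same
# counterparties; figures produced independently by each system.
risk = pd.DataFrame({"counterparty": ["A", "B", "C"],
                     "exposure": [100.0, 250.0, 75.0]})
finance = pd.DataFrame({"counterparty": ["A", "B", "C"],
                        "exposure": [102.0, 250.0, 75.0]})

# Join the two views and flag breaks above a materiality threshold,
# producing an exception list for escalation instead of a manual patch.
merged = risk.merge(finance, on="counterparty", suffixes=("_risk", "_fin"))
merged["break"] = (merged["exposure_risk"] - merged["exposure_fin"]).abs()
exceptions = merged[merged["break"] > 1.0]
print(exceptions)  # counterparty A shows a 2.0 break -> escalate
```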

If banks identify the correct business mix to win these three main challenges, they will be able to manage both business activities and capital charges in a more precise and timely way:

Image A

Image B

Therefore, given the wide range of opportunities banks can envisage, they should conceive a proper implementation path for the 11 principles of BCBS #239. A clear action plan, defined before starting any activity, is crucial for directing effort and financial resources.

3.   G-SIBs' approach to compliance

As noted in the latest progress report from the BCBS and in Deloitte's most recent assessment, G-SIBs are finalizing their BCBS #239 plans along three main lines, which sometimes reveal a view that is neither forward-looking nor value-creation oriented:

  • Fix or build: Several G-SIBs are focused on simply filling in the gaps in order to comply with the regulatory requirements and avoid regulatory fines from January 2016 in case of non-compliance.
  • Tactical approach versus a target solution: The wide range of activities required by BCBS #239 and the short timeframe for applying them are forcing G-SIBs to identify alternative paths to compliance. Indeed, a broad strategic transformation of data and technology achieving full IT and governance compliance is difficult to finalize in a few years. Therefore, each bank is tailoring its own tactical solution to achieve the best results. The main drivers G-SIBs are using in Europe are the following:
    • Risk type: the risk typologies (e.g. credit risk) and operations that account for large exposures in the bank's portfolio
    • Reports: the relevant reports delivered to the Regulator (e.g. RWA) or to top management
    • Audience: the final users of the report (e.g. Regulator, top management, business analysts)
    • Measures & attributes: the most critical cross-risk measures/attributes generated by the bank (e.g. Exposure at Default)
    • Business unit: the relevant business units in the bank's business mix
    • Legal entity: the relevant legal entities in the bank's business mix

Based on these drivers, the definition of a strategy leverages the following mix of approaches:

    • Subset of risk reports, data and measures: top management identifies the relevant risk reporting processes that have to cope with BCBS #239 requirements (e.g. RWA).
    • All the risk reports, data and measures for the relevant areas of the bank, such as a business unit or a legal entity: this approach defines the perimeter by considering the relevant business areas of the bank, identified by business volume and risk taken.
  • Compliance versus business model modification: G-SIBs seem focused on compliance alone, without taking the opportunity to review their business model so as to capture the business opportunities offered by the regulation.

Across all the approaches mentioned above, G-SIBs have considered three relevant points, aligned with their ongoing activities, in order to extract the highest level of synergy. Their action plans are described in the next subparagraphs.

3.1  Data Governance

BCBS #239 aims to define a new approach to managing data governance within financial institutions. Indeed, two main actions are required: the definition of a clear monitoring process with data ownership, and a data dictionary ensuring a level playing field for each measure/attribute.

Regarding the first point, several banks have set up a new unit for monitoring and developing the new BCBS #239 paradigm: the Chief Data Officer (hereafter also CDO). This new unit will play a relevant role in the following areas[4]:

  • Voice of the data: providing stewardship, championing and implementing data management strategies and data quality management standards.
  • Measure and manage data risk: developing the capability to measure and predict data risk and to influence the enterprise risk appetite at executive tables.
  • Influence corporate strategy: enabling better analytics for decision making and helping refine corporate strategy with the insights gained from effective data analysis.
  • Improve the top line: increasing revenues, customer approval ratings, and market goodwill through the effective governance and use of data.
  • Improve the bottom line: lowering the cost of quality and the cost of compliance, and improving productivity through the availability of timely, correct data.

The introduction of the Chief Data Officer will be the cornerstone of the entire BCBS #239 programme, thanks to her/his ability to monitor the data aggregation process in all its aspects, from policies to aggregation-key logic. The CDO will lead the bank towards a long-term solution coping with both the optimization of the available resources and the BCBS #239 principles.

3.2  Infrastructure & Data Quality

Secondly, each bank has to carefully evaluate and improve its IT infrastructure. Before BCBS #239, each bank considered each risk on a stand-alone basis, without an aggregated view. This has led to IT infrastructures with one system per specific risk, tailored to the business unit using it. Such an approach can produce several misalignments in terms of taxonomy and aggregation keys, preventing actual communication among the different silos. Therefore, a deep review of the IT architecture is needed in terms of:

  • Authoritative source: identifying the source whose data are recognized as correct by the business owner
  • Granularity: identifying the level of detail the information available in the system must have for the business needs
  • Aggregation process: defining the rules for aggregating the inputs received for each risk type

In addition to the points listed above, banks should also consider the data quality tools to be applied in the IT infrastructure in order to provide the most complete and accurate data to business owners.
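As a minimal illustration of what such data quality tools automate, the sketch below scores three common quality dimensions (completeness, uniqueness, validity) on a hypothetical extract; the column names and rules are assumptions made for the example, not prescriptions of BCBS #239.

```python
import pandas as pd

# Hypothetical risk-data extract; column names are illustrative only.
df = pd.DataFrame({
    "deal_id":  ["D1", "D2", "D2", "D4"],
    "exposure": [1_000.0, None, 500.0, -20.0],
    "currency": ["EUR", "EUR", "eur", "USD"],
})

# Three of the dimensions a data quality tool would score before the
# data reach the business owner.
checks = {
    "completeness": df["exposure"].notna().mean(),
    "uniqueness":   1 - df["deal_id"].duplicated().mean(),
    "validity":     (df["currency"].str.isupper()
                     & (df["exposure"].fillna(0) >= 0)).mean(),
}
for dimension, score in checks.items():
    print(f"{dimension}: {score:.0%}")
```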

3.3  Reporting

The third point that banks should carefully consider for full compliance with BCBS #239 is the design of the reporting process. The new reporting will have a well-defined distribution process for the reports considered useful and clear for risk monitoring purposes. Therefore, each bank should define the rules for distributing each report as well as its design. Banks should take particular care over the design phase in order to develop a new reporting culture in top management. Top management should start to rely on standard reports that are automatically generated, and use specific drill-down functionalities only for "ad-hoc" analyses. This new approach will reduce the time and effort spent on the business side creating presentations or templates for specific reports, so that users' effort can be reallocated from data crunching to data analysis.

Nowadays, G-SIBs are close to finalizing their BCBS #239 plans, and the experience gained and challenges faced by them should be a good starting point for D-SIBs to immediately start a consistent and reliable action plan.

4.   How the banking sector should cope with BCBS #239 and gain an economic advantage

The best approach relies on two cornerstones that are mandatory to cope with BCBS #239 and to achieve the best results with the lowest effort.

Before starting any activity, top managers have to evaluate the current status of the bank's aggregation capability through a detailed assessment. The goal of the assessment is to understand the bank's capability in terms of:

  • Governance and responsibility for monitoring cross-risk aggregation processes, identifying the right owner of each data item.
  • Clear definitions of measures/attributes, understanding how widely the definition of each risk measure/attribute is shared.
  • IT architecture to automatically aggregate cross-risk data, considering the technology already available in the bank.
  • Accuracy and clarity of the cross-risk reporting.

The final outcome of this assessment is to highlight the areas in which top management should invest or start remediation actions for the best compliance, avoiding any waste of money and time. During the assessment, managers have to consider all the ongoing initiatives to identify which of them can contribute to BCBS #239 compliance. This additional task reduces the budget to be allocated, optimizing the bank's assets. By leveraging ongoing projects, top management can not only anticipate BCBS #239 compliance but also start sharing the new governance approach with middle management, creating a common understanding of the view.

Once the assessment is finalized, an action plan is needed to fill the BCBS #239 gaps not covered by initiatives already under way. In this phase, top managers must look beyond simple BCBS #239 compliance and consider the opportunities for creating value in the long term. In this respect, a multi-year plan has to be set up (if needed, extending beyond the regulatory deadline) in order to have a clear definition of the final target and of the strategic benefits to achieve. Each plan has to address:

  • Target operating model, to carry out the monitoring and assessment of the activities in the coming years
  • Scope and target capabilities, to conceive a value-oriented view of the scope, in terms of measures and reports and of the level at which the regulation is applied (e.g. group, legal entity or division). At the same level of granularity, defining individual target aspirations that are again oriented towards generating value will better exploit the potential of this regulation
  • Quality and controls for all the measures affected by BCBS #239: defining a set of quality dimensions that allow measurement and control, improving quality performance. It is also valuable to define the evidence that will be collected to prove compliance to the regulator. The earlier these goalposts are established and regularly measured, the greater the accuracy and speed of the plan
  • Implementation of the IT solution providing the required risk data aggregation level, considering the whole panel of technologies available to improve the aggregation processes

Considering these drivers and the time needed for the target solution, top management has to define milestones both for minimal regulatory compliance and for the finish line delivering the best economic advantage. A clearly defined roadmap can clarify the path for each of these points, maximising the results at the lowest level of expense and with the highest business benefit.

Bibliography

[1] Principles for effective risk data aggregation and risk reporting, Basel Committee on Banking Supervision, Publication #239, BIS, January 2013

[2] G-SIB: Global Systemically Important Bank, i.e. a bank whose default would have global implications; D-SIB: Domestic Systemically Important Bank, i.e. a bank whose default would affect its home country. Ref – http://www.bis.org/publ/bcbs224.pdf

[3] Progress in adopting the principles for effective risk data aggregation and risk reporting, Basel Committee on Banking Supervision, BIS, January 2015

[4] Deloitte White paper, Deloitte Consulting, 2013.

Risk Appetite Framework: considerations on Operational Risk measures – Part II
di Francesco Sacchi

Oct 23 2015

In the previous article [RAF15] we stressed the necessity of developing specific Operational Risk (OpRisk) measures for the Risk Appetite Framework (RAF) in the financial services industry.

Although we recommended considering a performance measure (based on a ratio of the losses over the revenues) rather than a pure risk one (based only on the losses), in this article we will focus our efforts on the estimation of the losses, considering that the revenues are usually provided by the CFO with already well developed and established models.

In particular, we suggest developing two different models: one to cover the Operational Losses (OpLosses) from the Event Types (ETs) 4 and 7, "clients, products & business practices" and "execution, delivery & process management" [BCBS04], related to the Compliance and Organizational Risks; the other to cover the OpLosses from the ET6, "business disruption and system failures" [BCBS04], related to the Information and Communication Technology Risk (ICT Risk).

The outcomes of both models are useful to measure the OpRisk of the financial institution under examination, but the necessity of distinguishing their results arises from the nature of these OpLosses. As a matter of fact, the main effects of the ICT Risk (the indirect ones) are not usually registered in the datasets, so those models cannot be based mainly on the registered losses. On the other hand, the Compliance and Organizational Risks (at least where they intersect the OpRisk, which is the perimeter we consider here) are well described by the data collected in the standard OpRisk datasets (the OpLosses from the ETs 4 and 7 respectively).

In this article we present a model of the OpLosses from the ETs 4 and 7 and, at the end, we briefly explain how to use the OpLosses' forecasts to obtain an Operational RAF measure that also considers the related revenues and some Key Risk Indicators (KRIs). Note that, once a model also considering the indirect losses is developed, this procedure could be straightforwardly extended to the second typology of models, obviously choosing proper revenues and KRIs.

1.      The model

The idea is to model the cumulative OpLosses from the ETs 4 and 7 of a specific division of a financial institution over a fixed period of time, ranging from one month to one year.

First of all, we consider the specific characteristics of the OpLosses time series, where there is usually a standard quantity of losses per day with a few huge peaks – usually clusters of out-of-range amounts in specific periods of the year, e.g. near the closure of the quarters, especially at the end of the year and at the end of the first semester.

We divide the cumulative OpLosses (OpL) into two components, the Body (B) and the Jump (J), which respectively represent the contribution of the most common, small losses and that of the extraordinarily large ones. Therefore, it holds

OpL(t) = B(t) + J(t)

with

ΔOpL(t) = ΔB(t) + ΔJ(t)

We assume that the change in a quantity (ΔX for a given quantity X) is the one happening over one day or more (Δt ≥ 1 day).

Note that, if we consider long periods of time, the cumulative OpLosses can be approximated by continuous time stochastic processes. Therefore, regarding the Body component B, we choose to represent it as a scaled Wiener process with drift of the following type:

dB(t) = μ(t) dt + σ dW(t)

where µ(t) is the deterministic instantaneous mean of the Body component OpLosses at instant t and σ (assumed constant through time) is the deterministic instantaneous standard deviation of the Body component OpLosses. Finally W is a Wiener process.

Regarding the Jump component, we have to represent not only the severity of the OpLosses, but also the frequency, particularly important for our purposes. So we choose to represent the Jump component J as an inhomogeneous compound Poisson process with lognormal distribution of the following type:

J(t) = Σ_{i=1}^{N(t)} e^{Y_i}

where the frequency of the jumps is described by the stochastic process N (independent from the Wiener process W), an inhomogeneous Poisson process with intensity λ(t), a given deterministic function of time.

This intensity function (λ(t)), necessary to represent the seasonality of the OpLosses, is defined by the scaled maximum of three periodic functions:

λ(t) = θ · max{s_1(t), s_2(t), s_3(t)}

where

  • θ is a positive constant representing the maximum expected number of jumps per unit of time;
  • s(t) is a periodic function representing the normalized jump intensity shape;
  • the parameters of s(t) are the following: the period of the function s(t) is represented by the positive value k, i.e. the jump occurrence exhibits peaking levels at multiples of k years (usually k = 1 annual, k = 1/2 biannual, k = 1/4 quarterly); the first peak of the function s(t) is at time τ in [0, k]; finally, the positive exponent d allows us to adjust the dispersion of the jumps around the peaking times, creating a wider shape the lower its value is; moreover, its effect is stronger (i.e. the shape wider) the longer the period is.

Note that the form of the function s(t) is the one used in the model of Geman and Roncoroni (see [GR06]) for representing the seasonality of electricity prices. We choose this typology of functions because it also fits the intensity of the OpLosses' jumps, which usually appear in short (in time) clusters at the end of the accounting periods.
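As a concrete illustration, the sketch below implements the intensity in Python, assuming the explicit [GR06] shape s(t) = [2 / (1 + |sin(π(t − τ)/k)|) − 1]^d; the values of θ, τ and d are purely illustrative and not calibrated to any loss dataset.

```python
import numpy as np

def s(t, k, tau, d):
    """Normalized periodic jump-intensity shape in the [GR06] form:
    peaks of height 1 at t = tau + m*k; d controls the dispersion."""
    return (2.0 / (1.0 + np.abs(np.sin(np.pi * (t - tau) / k))) - 1.0) ** d

def intensity(t, theta=10.0,
              params=((1.0, 0.95, 8), (0.5, 0.45, 8), (0.25, 0.2, 8))):
    """lambda(t) = theta * max of annual, biannual and quarterly shapes.
    theta and the (k, tau, d) triples are illustrative assumptions."""
    return theta * np.max([s(t, k, tau, d) for k, tau, d in params], axis=0)

t = np.linspace(0.0, 1.0, 366)   # one year on a daily grid (t in years)
lam = intensity(t)               # peaks cluster near accounting-period ends
```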

On the other hand, the severity of the Jump component J is described by the random variables Y_i, which are independent and identically distributed (iid) with normal distribution, independent from the Wiener process W and the counting process N (so that the jump sizes e^{Y_i} are lognormal).

Summarising, we assume the OpLosses happen continuously (as a random walk) through time, with sudden independent jumps lognormally distributed (as an inhomogeneous compound Poisson process with lognormal distribution). Thus the distribution of the losses at the end of any finite interval of time is the sum of a normal with known mean and constant variance and (possibly) of lognormals.
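A minimal Monte Carlo sketch of the resulting process is given below, reusing the intensity() function from the previous snippet. The drift μ(t) is taken constant for simplicity (the model allows a time-varying mean) and every parameter value is an illustrative assumption, not a calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_oplosses(T=1.0, dt=1.0 / 365, mu=50.0, sigma=15.0,
                      mu_j=2.0, sigma_j=1.0, n_paths=10_000):
    """Simulate cumulative OpL(T) = B(T) + J(T): a Wiener body with
    drift plus inhomogeneous compound Poisson jumps, lognormal severity."""
    steps = int(round(T / dt))
    grid = np.arange(steps) * dt
    # Body: B(T) = mu*T + sigma*W(T) (constant drift for simplicity).
    body = mu * T + sigma * np.sqrt(T) * rng.standard_normal(n_paths)
    # Jumps: day-by-day Poisson counts driven by the seasonal intensity.
    lam = intensity(grid)                  # from the previous sketch
    counts = rng.poisson(lam * dt, size=(n_paths, steps)).sum(axis=1)
    jumps = np.array([rng.lognormal(mu_j, sigma_j, n).sum() for n in counts])
    return body + jumps

losses = simulate_oplosses()   # empirical distribution of OpL over one year
```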

2.      The RAF measure

The proposed model (or the equivalent one for the ICT Risk), properly estimated, could be directly used as a RAF measure (even if we do not suggest that). In fact, we could obtain an empirical distribution of the losses via a Monte Carlo simulation procedure. Once the simulations from the current distribution have been performed, we obtain the forecasts (for example quarterly through the year) by selecting the quantile we are interested in to represent the Profile of the bank, calling it "Position". For example we could choose 50%, i.e. we use a VaR(50%). Moreover, at the beginning of the year, we could conventionally choose the VaR(40%) and the VaR(70%) (called "Target" and "Trigger") to fix, for the year under examination, the Appetite and the Tolerance values of the division respectively.
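Continuing the previous sketch, the Position, Target and Trigger are then just empirical quantiles of the simulated losses, with the conventional percentages mentioned above:

```python
import numpy as np

# losses: simulated OpL(T) from the previous sketch.
position = np.percentile(losses, 50)   # Profile:   VaR(50%)
target   = np.percentile(losses, 40)   # Appetite:  VaR(40%)
trigger  = np.percentile(losses, 70)   # Tolerance: VaR(70%)
print(f"Target {target:.1f} <= Position {position:.1f} <= Trigger {trigger:.1f}")
```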

Note that the choice of the quantiles should depend on the history and the strategic choices of the company. For example, if we are satisfied with the recent results and there are no relevant changes in the business, we can fix the Position at 50%. On the other hand, if we want to reduce the risk because in the last few years the business has suffered from risks that we have since eliminated, we would choose a lower threshold such as 40%. On the contrary, if we have just added a product riskier than the others we offer (because of its related earnings), we could accept a 60% threshold.

In any case, we think the Appetite should usually be represented by a quantile lower than the Position one and the Trigger by a higher one: regarding the former, the top management usually challenges the business management to perform better; regarding the latter, the top management sets a warning value whose breach triggers actions to reduce the risk to an acceptable level.

However, this framework would perform better using a performance measure rather than a pure risk one. As a matter of fact, we think it is better to judge the losses' results (the "consequences" of the risks taken by the financial institution) by evaluating them in comparison with the revenues (the "sources" of the risks), rather than as a pure stand-alone measure. Note that this reasoning is especially apt for the OpRisk, where the losses are strictly related to the volume of the business and hence to the earnings.

So we suggest considering a ratio of the losses over the revenues, obtaining new values of the Position, the Target and the Trigger (we keep the names used previously, but the revenues are now embedded in the measure). Furthermore, we propose a couple of adjustments to the value of the losses used in the Profile of the measure (i.e. the Position).

The first correction allows us to insert a view on the actual business based on the current year's estimated revenues, which are easier to predict than the losses (because they are less volatile) and, as said before, have a direct influence on the losses, especially in the OpRisk. Therefore, we introduce what we call the "Business Adjustment": to modify the results of the losses we could use a function of the growth rate of the revenues.
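Purely as an illustration (the text prescribes only "a function of the growth rate of the revenues", not a specific form), a simple multiplicative Business Adjustment could look like the following:

```python
def business_adjustment(forecast_losses, revenue_growth):
    """Scale the forecast losses by a function of expected revenue growth.
    The linear form is an illustrative assumption, not the article's
    prescription: OpLosses tend to grow with business volume."""
    return forecast_losses * (1.0 + revenue_growth)

print(business_adjustment(forecast_losses=120.0, revenue_growth=0.05))  # 126.0
```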

The second correction, called the "Indicator Adjustment", inserts a forward-looking perspective focusing, among other things, on the emerging risks that the bank faces and will face, weighting this effect by the strategic choices of the Board/CEO. It focuses on the Compliance and the Organizational Risks (or on the ICT Risk), seen as the main drivers, respectively, of the future OpLosses of the ET4 and ET7 (or of the ET6).

The main idea behind the Indicator Adjustment is to insert the "tomorrow" into the forecasts and into the actual losses used in the measure. In that case, we would choose a proper set of KRIs with two drivers in mind: they should give a forward-looking view (with an accent on the emerging risks) and they should not be too many, focusing only on the highest-impact risks. Therefore, to insert the second correction we advise choosing a proper capped corrective function whose outcome depends on the performance of the chosen KRIs.
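Again as an illustration only (the text requires a capped corrective function of the KRIs, without fixing its form), a weighted and capped adjustment might look like this:

```python
def indicator_adjustment(losses, kri_scores, weights, cap=0.10):
    """Capped corrective function of KRI performance (illustrative form).
    kri_scores in [-1, 1]: positive = deteriorating forward-looking risk.
    The total correction is capped at +/- cap, as suggested in the text."""
    raw = sum(w * x for w, x in zip(weights, kri_scores))
    correction = max(-cap, min(cap, raw))
    return losses * (1.0 + correction)

# Two KRIs: one deteriorating sharply, one stable; the correction hits the cap.
print(indicator_adjustment(126.0, kri_scores=[0.8, 0.0], weights=[0.2, 0.1]))
```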

Note that while the Business Adjustment would only apply to the Forecasts of the OpLosses, the Indicator Adjustment would apply to both the Forecasts and the Actual OpLosses.

Indeed, in the former case we would be "refining" the Forecasts, whereas in the latter the "refinement" of the Forecasts is only partial: in reality, the main goal of the Indicator Adjustment is to highlight the effects of the risks that will materialise in the future.

The reason is that the Indicator Adjustment could allow the measure to consider not only the effects that will be seen in the results of the year under examination, but also the ones that will be seen in the following few years. As a matter of fact, it is important to highlight that most of the OpLosses have a time gap, measurable in years, between the date of the Event that originates the Economic Manifestations and their booking dates.

Finally, note that the insertion of the Indicator Adjustment in the measure could also be seen as a way for the Board/CEO to discount (or increase) the effective OpLosses by a percentage (linked to the cap of the function) related to the quantity of incentives (or disincentives) adopted by the top management to reduce the future risks that the bank will face because of the choices of the current year.

Bibliography

[BCBS04] Basel Committee on Banking Supervision, "International convergence of capital measurement and capital standards" (Basel II), Bank for International Settlements, June 2004. A revised framework, comprehensive version, June 2006.

[BankIt13] Banca d’Italia, “Nuove disposizioni di vigilanza prudenziale per le banche”, Circolare n. 263, December 2006. 15th Review, July 2013, partial version.

[CS14] Credit Suisse AG, “Litigation – more risk, less return”, Ideas engine series – Equity research Europe multinational banks, June 2014.

[FSB10] Financial Stability Board, “Intensity and effectiveness of SIFI supervision”, November 2010.

[FSB13] Financial Stability Board, “Principles for an effective risk appetite framework”, November 2013.

[GR06] H. Geman and A. Roncoroni, “Understanding the fine structure of electricity prices”, Journal of Business, Vol. 79, No. 3, May 2006.

[RAF15] F. Sacchi, “Risk Appetite Framework: considerations on Operational Risk measures – Part I”, FinRiskAlert.com.

[SSG09] Senior Supervisors Group, “Risk management lessons from the global banking crisis of 2008”, October 2009.

[SSG10] Senior Supervisors Group, "Observations on developments in risk appetite frameworks and IT infrastructure", December 2010.