The Financial Stability Board has published the annual update of the lists of systemically important banks and insurers.
Press release – banks
Press release – insurers
The ninth meeting of the FSB European regional group was held. Topics discussed included regional macroeconomic and financial market developments and low interest rates.
EIOPA has amended the methodology for calculating the risk-free interest rate term structure.
Press release
Technical documentation
Risk-free interest rate term structure
ESMA has published a statement aimed at improving the quality of disclosures in financial statements.
The EBA has launched a consultation on the collection of confidential information under the BRRD; the objectives are to ensure symmetric information and convergence of practices.
The deadline for responses is 27 January 2016.
The EBA has released instructions and templates for taking part in the Quantitative Impact Study (QIS) on the definition of default. Institutions participating in the study must complete the documents by 10 December.
The EBA has updated the list of instruments qualifying as CET1 capital, revising the document after the latest amendments of December 2014.
The European Commission is taking the steps needed to strengthen the Economic and Monetary Union (EMU), implementing the Five Presidents' Report.
Five Presidents' Report
Press release
FAQ
Communication on the steps towards completing the EMU
Decision establishing an independent, advisory European Fiscal Board
Recommendation (European Council) on the creation of a national competitiveness board in the euro area
Communication on the steps towards a more consistent representation of the euro area in international fora
Proposal for the progressive establishment of a unified representation of the euro area at the IMF
In January 2013, the Basel Committee on Banking Supervision (BCBS) issued 11 principles for effective risk data aggregation and risk reporting (hereafter also called BCBS #239) and outlined the path to compliance for G-SIBs and D-SIBs[1]. The Regulators drew up the following deadlines for achieving full compliance with BCBS #239: January 2016 for G-SIBs and 2019 for D-SIBs[2].
This principles-based document is intended to address what Supervisors consider a major weakness that banks carried into the crisis: the inability to understand quickly and accurately their overall exposure and the other risk measures that drive their key risk decisions. The principles-based approach adopted by the BCBS aims to leave financial institutions free to interpret the standards and to design a tailored approach for complying with BCBS #239.
These principles-based rules can be a great opportunity for banks to turn regulatory requirements into sources of added value. Banks should therefore assess this new regulatory framework with a strategic eye, using it to drive and steer their activity rather than reducing compliance to a mere box-ticking exercise.
The scenario in which BCBS #239 has to be applied is quite complicated: banks have complex and structured organizations, and they have set up a risk monitoring view that has always been considered (even by Regulators themselves) as "silos-based" rather than cross-risk, as required by the new ECB rules (i.e. the Comprehensive Assessment). BCBS #239 aims at breaking down the old "silos" view, in which each risk type is monitored separately without a common aggregated view of the counterparty. BCBS #239 is crucial to avoid, in the future, risk monitoring processes that cannot control exposures in a timely manner and thus endanger the going concern of the financial institution. The new regulatory view is forcing the banking sector to find a value-added solution for each of the following challenges:
If banks identify the correct business mix to win these three main challenges, they will be able to manage both business activities and capital charges more precisely:
Therefore, given this wide range of opportunities, banks should conceive a proper implementation path for the 11 principles of BCBS #239. A clear action plan, defined before starting any activity, is crucial for directing effort and financial resources.
As mentioned in the latest BCBS progress report[3] and in the latest Deloitte assessment, G-SIBs are finalizing their BCBS #239 plans following three main drivers, which do not always reflect a forward-looking, value-creation-oriented view:
Based on these drivers, the definition of a strategy leverages the following mix of approaches:
For all the approaches mentioned above, G-SIBs have considered three relevant points, aligned with their ongoing activities, in order to extract the highest level of synergy. Their action plans are described in the following subparagraphs.
BCBS #239 aims to define a new approach to managing data governance within the financial institution. Two main actions are required: the definition of a clear monitoring process and of data ownership, and a data dictionary ensuring a common level playing field for each measure/attribute.
Regarding the first point, several banks have set up a new unit to monitor and develop the new BCBS #239 paradigm: the Chief Data Officer unit (hereafter also CDO). This new unit will play a relevant role in the following areas[4]:
The introduction of the Chief Data Officer will be the cornerstone of the entire BCBS #239 programme, thanks to his or her ability to monitor the data aggregation process in all its aspects, from policy to aggregation-key logic. The CDO will lead the bank towards a long-term solution that copes with both the optimization of the available resources and the BCBS #239 principles.
Secondly, each bank has to carefully evaluate and improve its IT infrastructure. Before BCBS #239, each bank considered every risk on a stand-alone basis, without an aggregated view. This approach has led to an IT infrastructure with one system for each specific risk, tailored to the business unit using it, which can produce several misalignments in terms of taxonomy and aggregation keys and prevent actual communication among the different silos. Therefore, a deep review of the IT architecture is needed in terms of:
In addition to the points listed above, banks should also consider the data quality tools to be applied in the IT infrastructure in order to provide the most complete and accurate data to business owners.
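Purely as an illustration of what such tooling could check (the dataset fields, thresholds and the use of pandas below are our own assumptions, not part of the principles), a minimal sketch of automated data-quality controls on a risk data extract might look as follows:

```python
# Minimal illustrative data-quality checks on a risk data extract.
# Column names and the checks themselves are hypothetical examples;
# real controls would follow the bank's data dictionary and ownership rules.
import pandas as pd

def run_data_quality_checks(df: pd.DataFrame) -> dict:
    """Return simple completeness/validity/uniqueness indicators (as shares of rows)."""
    return {
        # Completeness: share of missing values in critical attributes
        "missing_counterparty_id": df["counterparty_id"].isna().mean(),
        "missing_exposure_amount": df["exposure_amount"].isna().mean(),
        # Validity: exposures should be non-negative, currency codes ISO-like
        "negative_exposure_amount": (df["exposure_amount"] < 0).mean(),
        "invalid_currency_code": (~df["currency"].str.fullmatch(r"[A-Z]{3}")).mean(),
        # Uniqueness: duplicated exposure identifiers
        "duplicated_exposure_id": df["exposure_id"].duplicated().mean(),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "exposure_id": [1, 2, 2, 4],
        "counterparty_id": ["A", None, "B", "C"],
        "exposure_amount": [100.0, 250.0, -5.0, 80.0],
        "currency": ["EUR", "USD", "eur", "GBP"],
    })
    for check, share in run_data_quality_checks(sample).items():
        print(f"{check}: {share:.0%}")
```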
The third point that banks should carefully consider for full compliance with BCBS #239 is the design of the reporting process. The new reporting will have a well-defined distribution process for the reports considered useful and clear for risk monitoring purposes. Therefore, each bank should define both the rules for distributing each report and its design. Banks should take particular care in the design phase in order to develop a new reporting culture in top management: top management should start to rely on standard reports that are automatically generated and use drill-down functionalities only for specific "ad-hoc" analyses. This new approach reduces the time and effort spent on the business side to create presentations or templates for specific reports across units, effort that can now be reallocated from data crunching to data analysis.
Nowadays, G-SIBs are close to finalizing their BCBS #239 plans, and the experience gained and the challenges they faced should be a good starting point for D-SIBs to immediately start a consistent and reliable action plan.
The best approach relies on two cornerstones that are mandatory to cope with BCBS #239 and to achieve the best results with the lowest effort.
Before starting any activity, top managers have to evaluate the bank's current data aggregation capability through a detailed assessment. The goal of the assessment is to understand the capability to define:
The final outcome of this assessment is to highlight the areas in which top management will invest or start remediation actions to reach the best compliance, avoiding any waste of money and time. During the assessment, managers have to consider all the ongoing initiatives and identify which of them can contribute to BCBS #239 compliance: this reduces the budget to be allocated and optimizes the bank's assets. By leveraging ongoing projects, top management not only anticipates BCBS #239 compliance but also starts sharing the new governance approach with middle management, building a common understanding of the target view.
Once the assessment is finalized, an action plan is needed to fill the BCBS #239 gaps not covered by initiatives already under way. In this phase, top managers should not aim at mere BCBS #239 compliance but should consider the opportunities to create value in the long term. In this respect, a multi-year plan has to be set up (extending, if needed, beyond the regulatory deadline) in order to have a clear definition of the final target and of the strategic benefits to achieve. Each plan has to address:
Considering these drivers and the time needed to reach the target solution, top management has to define milestones both for minimal regulatory compliance and for the finish line that delivers the best economic advantage. A clearly defined roadmap clarifies the path for each of these points, maximising the results with the lowest level of expense and the highest business benefit.
[1] Principles for effective risk data aggregation and risk reporting, Basel Committee on Banking Supervision, Publication #239, BIS, January 2013
[2] G-SIB: Global Systemically Important Bank, i.e. a bank whose default has global implications; D-SIB: Domestic Systemically Important Bank, i.e. a bank whose default affects its home country. Ref – http://www.bis.org/publ/bcbs224.pdf
[3] Progress in adopting the principles for effective risk data aggregation and risk reporting, Basel Committee on Banking Supervision, BIS, January 2015
[4] Deloitte White paper, Deloitte Consulting, 2013.
In the previous article [RAF15] we stressed the necessity of developing specific measures of Operational Risk (OpRisk) for the Risk Appetite Framework (RAF) in the financial services industry.
Although we recommended considering a performance measure (based on a ratio of the losses over the revenues) rather than a pure risk one (based only on the losses), in this article we will focus our efforts on the estimation of the losses, considering that the revenues are usually provided by the CFO with already well developed and established models.
In particular, we suggest developing two different models: one covering the Operational Losses (OpLosses) from Event Types (ETs) 4 and 7, "clients, products & business practices" and "execution, delivery & process management" [BCBS04], related to the Compliance and Organizational Risks; the other covering the OpLosses from ET6, "business disruption and system failures" [BCBS04], related to the Information and Communication Technology Risk (ICT Risk).
The outcomes of both models are useful to measure the OpRisk of the financial institution under examination, but the need to distinguish their results arises from the nature of these OpLosses. As a matter of fact, the main effects of the ICT Risk (the indirect ones) are not usually registered in the datasets, so the corresponding model cannot rely mainly on the registered losses. On the other hand, the Compliance and Organizational Risks (at least where they intersect the OpRisk, which is the perimeter we consider here) are well described by the data collected in the standard OpRisk datasets (considering the OpLosses from the ETs 4 and 7 respectively).
In this article we present a model of the OpLosses from the ETs 4 and 7 and, at the end, we briefly explain how to use the OpLosses’ forecasts to obtain an Operational RAF measure that considers also the related revenues and some Key Risk Indicators (KRIs). Note that, once a model considering also the indirect losses is developed, this procedure could be straightforwardly extended to the second typology of models, obviously choosing proper revenues and KRIs.
1. The model
The idea is to model the cumulative OpLosses from the ETs 4 and 7 of a specific division of a financial institution over a fixed period of time, ranging from one month to one year.
First of all, we consider the specific characteristics of the OpLosses time series, where there is usually a standard quantity of losses per day with a few huge peaks: typically clusters of out-of-range amounts in specific periods of the year, e.g. near the closure of the quarters, especially at the end of the year and at the end of the first semester.
We divide the cumulative OpLosses (OpL) into two components, the Body (B) and the Jump (J), that respectively represent the contribution of the most common, small losses and the contribution of the extraordinary (in size) ones. Therefore, it holds

OpL(t) = B(t) + J(t),

with the increments over any interval satisfying ΔOpL = ΔB + ΔJ.
We assume that the change in a quantity (ΔX for a given quantity X) is the one happening over one day (Δt >= 1).
Note that, if we consider long periods of time, the cumulative OpLosses can be approximated by continuous time stochastic processes. Therefore, regarding the Body component B, we choose to represent it as a scaled Wiener process with drift of the following type:

dB(t) = µ(t) dt + σ dW(t),

where µ(t) is the deterministic instantaneous mean of the Body component OpLosses at instant t and σ (assumed constant through time) is the deterministic instantaneous standard deviation of the Body component OpLosses. Finally, W is a Wiener process.
Regarding the Jump component, we have to represent not only the severity of the OpLosses, but also their frequency, which is particularly important for our purposes. So we choose to represent the Jump component J as an inhomogeneous compound Poisson process with lognormal distribution of the following type:

J(t) = Σ_{i=1}^{N(t)} exp(X_i),

where the frequency of the jumps is described by the stochastic process N (independent from the Wiener process W), an inhomogeneous Poisson process with intensity λ(t), a given deterministic function of time.
This intensity function λ(t), necessary to represent the seasonality of the OpLosses, is defined by the scaled maximum of three periodic functions:

λ(t) = θ · max{ f1(t), f2(t), f3(t) },

where θ > 0 is the scaling parameter and f1, f2 and f3 are periodic functions of time.
Note that the form of the function is the one used in the model of Geman and Roncoroni (see [GR06]) for representing the seasonality of the electricity prices. We choose this typology of functions because it fits also the intensity of the OpLosses’ jumps, that usually appear in short (referring to the time) clusters at the end of accounting periods.
On the other hand, the severity of the Jump component J is described by the random variables X_i introduced above, which are independent and identically distributed (iid) with normal distribution and independent from the Wiener process W and the counting process N.
Summarising, we assume the OpLosses happen continuously (as a random walk) through time, with sudden independent jumps that are lognormally distributed (i.e. as an inhomogeneous compound Poisson process with lognormal distribution). Thus the distribution of the losses at the end of any finite interval of time is the sum of a normal with known mean and constant variance and (possibly) of one or more lognormals.
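As a minimal sketch of how the process just described could be simulated (the daily discretisation, all parameter values and the shape of the intensity function below are illustrative assumptions, not calibrated figures), one could proceed as follows:

```python
# Illustrative Monte Carlo simulation of the cumulative OpLosses:
# OpL(T) = B(T) + J(T), with B a Wiener process with drift (discretised daily)
# and J an inhomogeneous compound Poisson process with lognormal jump sizes.
# All parameters and the shape of the intensity are hypothetical, not calibrated.
import numpy as np

rng = np.random.default_rng(seed=42)
DAYS = 250        # business days in the year
N_SIMS = 10_000   # number of Monte Carlo scenarios

mu = 10.0         # daily drift of the Body component (assumed constant for simplicity)
sigma = 4.0       # daily standard deviation of the Body component
m, s = 5.0, 0.8   # parameters of the normal variables X_i (jump sizes are exp(X_i))

def intensity(day: int) -> float:
    """Hypothetical jump intensity lambda(t): low baseline with peaks near quarter ends."""
    base = 0.02
    near_quarter_end = (day % 63) > 58   # last few days of each quarter
    near_year_end = day > 240            # last few days of the year
    return base + 0.3 * near_quarter_end + 0.6 * near_year_end

lam = np.array([intensity(d) for d in range(DAYS)])

# Body component at year end: normal with mean mu*DAYS and variance sigma^2*DAYS
body = rng.normal(loc=mu * DAYS, scale=sigma * np.sqrt(DAYS), size=N_SIMS)

# Jump component: Poisson-distributed number of jumps per day, lognormal severities
n_jumps = rng.poisson(lam, size=(N_SIMS, DAYS)).sum(axis=1)
jump = np.array([rng.lognormal(mean=m, sigma=s, size=k).sum() for k in n_jumps])

op_losses = body + jump   # simulated cumulative OpLosses at year end
print(f"Mean simulated annual OpLoss: {op_losses.mean():,.0f}")
```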
2. The RAF measure
The proposed model (or an equivalent one for the ICT Risk), suitably evaluated, could be directly considered as a RAF measure (even if we do not suggest that). In fact, we could obtain an empirical distribution of the losses via a Monte Carlo simulation procedure. Once the simulations from the current distribution have been performed, we obtain the forecasts (for example quarterly through the year) by selecting the quantile we are interested in to represent the Profile of the bank, calling it "Position". For example, we could choose 50%, i.e. use a VaR(50%). Moreover, at the beginning of the year, we could conventionally choose the VaR(40%) and the VaR(70%) (called "Target" and "Trigger") to fix, for the year under examination, the Appetite and the Tolerance values of the division respectively.
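Continuing the simulation sketch above, the Position, Target and Trigger would then simply be the chosen quantiles of the simulated distribution (here the illustrative 50%, 40% and 70% levels mentioned in the text):

```python
# Position / Target / Trigger as quantiles of the simulated loss distribution.
# Continues the simulation sketch above (requires `np` and `op_losses`).
position = np.quantile(op_losses, 0.50)   # Profile of the bank, VaR(50%)
target = np.quantile(op_losses, 0.40)     # Appetite, VaR(40%)
trigger = np.quantile(op_losses, 0.70)    # Tolerance, VaR(70%)
print(f"Target (Appetite):   {target:,.0f}")
print(f"Position (Profile):  {position:,.0f}")
print(f"Trigger (Tolerance): {trigger:,.0f}")
```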
Note that the choice of the quantiles should depend on the history and the strategic choices of the company. For example, if we are satisfied with the recent results and there are no relevant changes in the business, we can fix the Position at 50%. If instead we want to reduce risk, because in recent years the business suffered from risks that we have since eliminated, we would choose a lower threshold such as 40%. On the contrary, if we have just added a product riskier than the others we offer (because of its related earnings), we would accept a 60% threshold.
In any case, we think the Appetite should usually be represented by a quantile lower than the Position one, and the Trigger by a higher one: in the former case, top management usually challenges business management to perform better; in the latter, top management sets a warning value whose breach triggers actions to bring the risk back to an acceptable level.
However, this framework could perform better using a performance measure rather than a pure risk one. As a matter of fact, we think it is better to judge the losses (the "consequences" of the risks taken by the financial institution) by evaluating them in comparison to the revenues (the "sources" of the risks), rather than as a pure stand-alone measure. Note that this reasoning is especially valid for the OpRisk, where the losses are strictly related to the volume of the business and so to the earnings.
So we suggest considering a ratio of the losses over the revenues, obtaining new values of the Position, the Target and the Trigger (we keep the names previously used, but the revenues are now also part of the measure). Furthermore, we propose a couple of adjustments to the value of the losses used in the Profile of the measure (i.e. the Position).
The first correction allows us to insert a view on the actual business based on the current year's estimated revenues, which are easier to predict than the losses (because they are less volatile) and, as said before, have a direct influence on the losses, especially in the OpRisk. Therefore, we introduce what we call the "Business Adjustment": to modify the results of the losses we could use a function of the growth rate of the revenues.
The second correction, called the "Indicator Adjustment", would insert a forward-looking perspective focusing, among other things, on the emerging risks that the bank faces and will face, weighting this effect according to the strategic choices of the Board/CEO. It would focus on the Compliance and the Organizational Risks (or on the ICT Risk), seen as the main drivers, respectively, of the future OpLosses of ET4 and ET7 (or of ET6).
The main idea behind the Indicator Adjustment is to insert the "tomorrow" in the forecasts and in the actual losses used in the measure. In that case, we would choose a proper set of KRIs with two drivers in mind: they should give a forward-looking view (with an accent on the emerging risks) and they should not be too many, focusing only on the highest-impact risks. Therefore, to insert the second correction we advise choosing a proper corrective capped function whose outcomes depend on the performance of the chosen KRIs.
Note that while the Business Adjustment would only apply to the Forecasts of the OpLosses, the Indicator Adjustment would apply to both the Forecasts and the Actual OpLosses.
Indeed, in the former case we would be "refining" the Forecasts, whereas in the latter the refinement of the Forecasts is only part of the purpose: the main goal of the Indicator Adjustment is to highlight the effects of the risks that will materialise in the future.
The reason is that the Indicator Adjustment could allow the measure to consider not only the effects that will be seen in the results of the year under examination, but also the ones that will be seen in the following few years. As a matter of fact, it is important to highlight that most of the OpLosses have a time gap, measurable in years, between the date of the Event that originates the Economic Manifestations and their booking dates.
Finally, note that the insertion of the Indicator Adjustment in the measure could also be seen as a way for the Board/CEO to discount (or increase) the effective OpLosses by a percentage (linked to the cap of the function) that reflects the incentives (or disincentives) adopted by top management to reduce the future risks the bank will face because of the choices of the current year.
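As a purely illustrative sketch of how the two adjustments could enter the ratio-based measure (the functional forms, the cap value, the KRI scoring and the way forecast and actual losses are combined below are our own assumptions, since these choices are intentionally left to each institution):

```python
# Hypothetical sketch of the losses-over-revenues measure with the two adjustments.
# The functional forms, the 10% cap, the KRI scoring and the combination of
# forecast and actual losses are illustrative assumptions only.

def business_adjustment(loss_forecast: float, revenue_growth: float) -> float:
    """Business Adjustment: scale the loss forecast with the revenue growth rate."""
    return loss_forecast * (1.0 + revenue_growth)

def indicator_adjustment(losses: float, kri_score: float, cap: float = 0.10) -> float:
    """Indicator Adjustment: capped corrective function of the KRI performance.
    kri_score in [-1, 1]: negative = improving risk profile, positive = worsening."""
    factor = 1.0 + max(-cap, min(cap, cap * kri_score))
    return losses * factor

def raf_position(loss_forecast: float, actual_losses: float, revenues: float,
                 revenue_growth: float, kri_score: float) -> float:
    """Losses-over-revenues Position: the Business Adjustment applies to the
    Forecasts only, the Indicator Adjustment to both Forecasts and Actuals."""
    adj_forecast = indicator_adjustment(
        business_adjustment(loss_forecast, revenue_growth), kri_score)
    adj_actual = indicator_adjustment(actual_losses, kri_score)
    return (adj_actual + adj_forecast) / revenues

# Example: losses booked so far plus the forecast for the rest of the year,
# against estimated yearly revenues, with +5% revenue growth and worsening KRIs.
print(f"Position: {raf_position(8.0, 4.0, 500.0, 0.05, 0.3):.2%}")
```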
Bibliography
[BCBS04] Basel Committee on Banking Supervision, "International convergence of capital measurement and capital standards" (Basel II), Bank for International Settlements, June 2004. A revised framework, comprehensive version, June 2006.
[BankIt13] Banca d’Italia, “Nuove disposizioni di vigilanza prudenziale per le banche”, Circolare n. 263, December 2006. 15th Review, July 2013, partial version.
[CS14] Credit Suisse AG, “Litigation – more risk, less return”, Ideas engine series – Equity research Europe multinational banks, June 2014.
[FSB10] Financial Stability Board, “Intensity and effectiveness of SIFI supervision”, November 2010.
[FSB13] Financial Stability Board, “Principles for an effective risk appetite framework”, November 2013.
[GR06] H. Geman and A. Roncoroni, “Understanding the fine structure of electricity prices”, Journal of Business, Vol. 79, No. 3, May 2006.
[RAF15] F. Sacchi, “Risk Appetite Framework: considerations on Operational Risk measures – Part I”, FinRiskAlert.com.
[SSG09] Senior Supervisors Group, “Risk management lessons from the global banking crisis of 2008”, October 2009.
[SSG10] Senior Supervisors Group, "Observations on developments in risk appetite frameworks and IT infrastructure", December 2010.