Course on Suptech and Regtech
August 24-28, 2020.
Given the success of the first edition of this Course, held in February in Jamaica, and the interest raised among regional central banks in topics related to Suptech and Regtech, it was decided to launch the second edition of the Course in a virtual format. Built on network science, machine learning and artificial intelligence, Regtech and Suptech methodologies boost central banks' capacity to monitor and supervise the well-functioning of the financial system. The Course is intended to give insight into the main foundations of Suptech and Regtech from a theoretical perspective while at the same time applying the theory in practical exercises using relevant data.
Introduction to Regtech and Suptech
The Course started with a broad introduction to Regtech & Suptech. First, the main related terms were defined:
Fintech: Technology that helps facilitate retail financial services in a new way.
Insurtech: Technology helping insurers drive efficiency & innovation in the way they serve customers.
Regtech: Technology helping banks & FMIs comply with regulatory requirements.
Suptech: Technology that helps authorities in their mission to monitor, oversee and supervise financial markets.
Techfin: Technology giants entering the financial services markets (Google, Amazon, Apple, Alibaba, Tencent, etc).
After this explanation of the terminology, the most important Suptech drivers were addressed: the increase in complexity (global interconnectedness, digitalization of financial institutions, new market entrants and an increase in the velocity of transactions), higher expectations (inclusion of Artificial Intelligence/Machine Learning (AI/ML) tools and increasingly digitized systems) and the increase in data collection after the 2008 crisis. Suptech challenges were also addressed, such as collecting high-quality data that permits access across organizations, developing capabilities and skills related to data science and technology, and reducing the gap between research and production.
Then a review was made of the role that Suptech plays in technology adoption by supervisors, which covers data collection and data analytics along with all their branches, and the current stage of adoption for selected supervisors. The expected benefits of this adoption include:
Larger share of financial sector under supervision
Improved consumer outcomes (better protection, increased confidence in market)
Improved conduct of providers
Better value for limited government resources
Finally, it was mentioned that the areas deserving major focus are Data Analytics, Artificial Intelligence, Robotic Process Automation and Distributed Ledger Technology. As a final observation, we saw that Regtech is spread across a variety of functional areas and that, in many financial institutions, its solutions are a mixture of in-house and external applications.
Introduction to Network Science, simulation and Artificial Intelligence & Machine Learning in central bank applications
In this session the main concepts and applications of Network Science and Graph Analytics were introduced; these are powerful tools already being used in applications by Google (Knowledge Graph), Facebook (Social Graph), Amazon (Product Graph) and PayPal (Payment Graph), among others.
Network theory makes it possible to model and measure connections, the possible reasons for these connections, and how complex systems evolve over time; examples of such complex systems are financial markets, payment systems, friendship networks and almost every socio-economic system.
The approaches to analyze networks are:
Top-down analysis, where some typical use cases are: Systemic risk analysis, system monitoring, design and stress testing, clustering/classification, early warning and anomaly detection.
Bottom-up analysis, where some typical use cases are: Criminal investigation, terrorist networks, money laundering, KYC and KYCC, fundamental investment analysis and supply chain analysis.
Features of data, where some typical use cases are: AI/ML, fraud algorithms, recommendation engines and algorithmic trading.
Continuing with the session, the main network concepts and metrics were presented. The concepts reviewed were networks and their components: nodes and links (which can be directed or undirected, and weighted or unweighted), illustrated with examples. The metrics reviewed were: centrality, which measures the importance of the nodes (or links) that form a network and, depending on the process modeled, can be classified into trajectory-driven or transmission-driven measures; and community detection, which is used to simplify, categorize and label network nodes into meaningful groups or communities.
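These metrics can be sketched in a few lines of Python using networkx. The toy exposure network below (banks A-E and the amounts between them) is invented for illustration and is not course data.

```python
# Minimal sketch: centrality and community detection on a toy network.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical directed, weighted network: A sends 10 to B, etc.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 10), ("B", "C", 5), ("C", "A", 7),
    ("C", "D", 3), ("D", "E", 2), ("E", "C", 4),
])

# Trajectory-driven measure: betweenness counts shortest paths through a node.
betweenness = nx.betweenness_centrality(G, weight="weight")

# Transmission-driven measure: PageRank-style importance from flow diffusion.
pagerank = nx.pagerank(G, weight="weight")

# Community detection: group nodes into meaningful communities
# (modularity maximization on the undirected projection).
communities = greedy_modularity_communities(G.to_undirected())

print(max(betweenness, key=betweenness.get))  # most "between" node
print(len(communities))
```

The distinction in the session maps directly to the two calls: betweenness is trajectory-driven (it follows paths), while PageRank is transmission-driven (it follows diffusion of flow).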
Finally, different network visualization methods were summarized; the choice of one over another depends on the task being carried out.
Exposure Networks and Stress Testing
The next part of the Course was devoted to showing, from an exposure-networks perspective, the importance of studying financial networks by reviewing different use cases. In particular, it was shown how to measure aspects such as banks' interconnectedness and systemic risk, bring transparency to specific markets (such as the derivatives market) through the use of data, monitor important changes in the financial system over time, identify concentration risk, and develop visualization tools that show the dynamics of the network. Finally, a hands-on exercise was carried out using data from the BIS Consolidated Statistics on exposures among BIS reporting and non-reporting countries, to observe the behavior of some key Eurozone players before, during and after the 2008 crisis.
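A minimal sketch of the systemic-risk logic behind such exposure networks is a default cascade: a lender fails when its losses on defaulted borrowers exceed its capital. The country codes, exposure amounts and capital buffers below are invented for illustration, not BIS figures.

```python
# Illustrative default cascade on a toy cross-border exposure network.
# exposures[lender][borrower] = amount lent (assumed numbers).
exposures = {
    "DE": {"GR": 40, "ES": 30, "IT": 20},
    "FR": {"GR": 50, "IT": 40},
    "GR": {},
    "ES": {"IT": 10},
    "IT": {"ES": 15},
}
capital = {"DE": 35, "FR": 45, "GR": 10, "ES": 25, "IT": 30}

def cascade(initial_defaults, exposures, capital):
    """Propagate defaults until no lender's losses exceed its capital."""
    defaulted = set(initial_defaults)
    changed = True
    while changed:
        changed = False
        for lender, book in exposures.items():
            if lender in defaulted:
                continue
            loss = sum(amount for borrower, amount in book.items()
                       if borrower in defaulted)
            if loss > capital[lender]:
                defaulted.add(lender)
                changed = True
    return defaulted

print(sorted(cascade({"GR"}, exposures, capital)))  # ['DE', 'FR', 'GR']
```

With these assumed numbers, a single default spreads to both of its large creditors, which is exactly the kind of interconnectedness effect the network view makes visible.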
Then, a way to study interconnectedness in the CCP network was shown. It was presented how CCPs connect with one another: how a given CCP links to others, how important those links are, and how the failure of an important link could generate a shock that propagates through the network. Then it was shown, through an example, what interconnectedness within a CCP (between the CCP and its settlement and clearing members) looks like, and how stress-testing exercises reveal issues such as the concentration of settlement flows in a few participants.
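The concentration of settlement flows mentioned above can be quantified with a simple Herfindahl-style index over members' flow shares. The member names and flow figures below are illustrative only.

```python
# Sketch: concentration of settlement flows across clearing members.
flows = {"M1": 500, "M2": 300, "M3": 100, "M4": 50, "M5": 50}

total = sum(flows.values())
shares = {m: v / total for m, v in flows.items()}
hhi = sum(s ** 2 for s in shares.values())          # Herfindahl index
top2 = sum(sorted(shares.values(), reverse=True)[:2])

print(round(hhi, 3), round(top2, 2))  # 0.355 0.8
```

Here 80% of the flow sits with just two members, the kind of concentration a stress test would flag as a vulnerability.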
The first part of this session started with a comparison between transaction-based networks (payments, trades, exposures, flows, etc.) and similarity-based networks (correlations, partial correlations, Granger causality, transfer entropy, etc.). Due to the increase in market interconnectivity, there is a need to understand correlation structures at a much larger scale; in this regard, networks help to develop intuition and to understand stress tests. Network layouts can be used to separate patterns from noise: for example, one can start with a force-directed layout to identify clusters, then compute the minimum spanning tree on a distance function derived from the correlations (equivalently, the maximum spanning tree on the correlations themselves) to obtain the backbone correlation structure. Networks are often large and complex, and we want to filter out noise; filtering methods such as these shed light on the correlation structure of a given network.
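The filtering pipeline can be sketched as follows, using the standard distance transform d = sqrt(2(1 - ρ)) so that highly correlated assets sit close together; the minimum spanning tree on these distances then keeps only the backbone. The returns below are simulated, not market data.

```python
# Sketch: correlation matrix -> distance matrix -> minimum spanning tree.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 6))       # 250 days, 6 hypothetical assets
returns[:, 1] += 0.8 * returns[:, 0]      # make assets 0 and 1 co-move

corr = np.corrcoef(returns.T)
# High correlation -> short distance.
dist = np.sqrt(2.0 * (1.0 - corr))

G = nx.Graph()
n = corr.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=dist[i, j])

mst = nx.minimum_spanning_tree(G, weight="weight")
# n assets are linked by exactly n - 1 backbone edges,
# and the strongly correlated pair (0, 1) survives the filter.
print(mst.number_of_edges())  # 5
```

The MST on distances is the maximum spanning tree on correlations, so the two formulations in the session describe the same backbone.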
Early Warning Signals
This session studied the US Housing Bubble. Through a correlation-networks approach, an early warning monitoring system was developed: using a radial tree layout and the level of correlations of real estate prices, it was observed how the crisis evolved and how some interesting clusters formed during the process.
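A simple numerical version of such an early warning signal is the average pairwise correlation over a rolling window, which tends to rise as markets start moving in lockstep before a crisis. The data below is simulated with a common factor whose influence grows over time; it is not housing data.

```python
# Sketch: rising average correlation as a crude early warning indicator.
import numpy as np

rng = np.random.default_rng(1)
T, n = 400, 8
common = rng.normal(size=T)
# Loadings on the common factor grow over time -> rising correlations.
loadings = np.linspace(0.1, 1.5, T)
returns = loadings[:, None] * common[:, None] + rng.normal(size=(T, n))

def avg_corr(window):
    """Mean of the off-diagonal correlations in a data window."""
    c = np.corrcoef(window.T)
    return c[np.triu_indices_from(c, k=1)].mean()

window = 100
signal = [avg_corr(returns[t - window:t]) for t in range(window, T)]
# The indicator drifts upward as interconnectedness builds.
print(round(signal[0], 2), round(signal[-1], 2))
```

Monitoring such a series, alongside the radial tree visualization described above, is one way to watch a bubble build.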
Stress Testing Correlation Networks
This session was focused on a hands-on exercise aimed at understanding and attributing the impact of changes/shocks in portfolio drivers through visual network-based methods that make it possible to:
Understand large-scale correlation structures
Develop correlation scenarios based on historical structures
Create new correlation structures
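One way to sketch such a correlation stress scenario is to blend the historical correlation matrix toward a hypothetical crisis matrix in which everything is highly correlated, and watch portfolio volatility rise. All numbers below (volatilities, weights, the 0.9 crisis correlation) are assumptions for illustration.

```python
# Sketch: "what if" correlation scenarios and their effect on portfolio risk.
import numpy as np

hist_corr = np.array([
    [1.0, 0.2, 0.1],
    [0.2, 1.0, 0.3],
    [0.1, 0.3, 1.0],
])
vols = np.array([0.10, 0.15, 0.20])      # assumed annualized volatilities
weights = np.array([0.4, 0.3, 0.3])

def portfolio_vol(corr, vols, weights):
    """Portfolio volatility from a correlation matrix and asset vols."""
    cov = corr * np.outer(vols, vols)
    return float(np.sqrt(weights @ cov @ weights))

stress = np.full_like(hist_corr, 0.9)    # crisis: everything co-moves
np.fill_diagonal(stress, 1.0)

for alpha in (0.0, 0.5, 1.0):            # 0 = historical, 1 = full stress
    scenario = (1 - alpha) * hist_corr + alpha * stress
    print(alpha, round(portfolio_vol(scenario, vols, weights), 4))
```

Interpolating between the two matrices creates a family of new correlation structures, in the spirit of the scenario-design steps listed above.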
Design and stress testing of FMIs
This session was devoted to showing how network simulations can be used for purposes such as the design of FMIs, stress testing and scenario analysis, model validation and monitoring. The concept of simulation and the basis of agent-based modeling were reviewed. Simulation is a methodology for understanding the behavior of complex systems (which are generally large, with many interacting elements and non-linearities). It was also mentioned that in agent-based modeling each agent has a set of rules defining its behavior, which may have a material impact on the results depending on how it is specified; the choices to make for the agents are the design of their rules, whether the agents are homogeneous or heterogeneous, and whether they are static or learning agents. The session continued with a review of concrete use cases of FMI design.
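A minimal agent-based sketch of a payment system illustrates the idea: homogeneous agents with one simple rule (pay if liquidity allows, otherwise queue). The bank names, opening balances and payment sizes are assumptions, and a real model would be far richer.

```python
# Toy agent-based simulation of an RTGS-style payment system.
import random

random.seed(42)
BANKS = ["A", "B", "C", "D"]
balance = {b: 100.0 for b in BANKS}      # assumed opening liquidity
queue = []                                # unsettled payments

def submit(sender, receiver, amount):
    """Agent rule: settle immediately if funded, otherwise queue."""
    if balance[sender] >= amount:
        balance[sender] -= amount
        balance[receiver] += amount
        return True
    queue.append((sender, receiver, amount))
    return False

settled = 0
for _ in range(500):                      # random payment instructions
    s, r = random.sample(BANKS, 2)
    settled += submit(s, r, random.uniform(1, 40))

# One retry pass once incoming liquidity has arrived.
for s, r, amount in list(queue):
    if balance[s] >= amount:
        balance[s] -= amount
        balance[r] += amount
        queue.remove((s, r, amount))
        settled += 1

print(settled, len(queue))
```

Changing the single behavioral rule (e.g. letting agents delay payments strategically, or learn) is exactly the design choice the session emphasized, and it can materially change system-level outcomes.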
Then, a review was made of a framework to evaluate the trade-off between liquidity and delay in RTGS systems. The session highlighted important points that motivate methodologies for improving this trade-off: the large amount of liquidity that RTGS systems consume, the increasing demand for faster payments, and the complexity of the system. Different Liquidity Saving Mechanisms (LSMs) were then discussed: bypass, bilateral offset and queue optimization. To end the session, concrete use cases were presented, such as the design of an interbank payment system, liquidity optimization for an RTGS, an FX and retail remittance system (Ripple), and the determination of liquidity reduction for a specific FMI.
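The bilateral-offset mechanism can be sketched in a few lines: opposing queued payments between the same pair of banks are netted, so only the difference needs liquidity. The payment figures below are illustrative.

```python
# Sketch of a bilateral-offset liquidity-saving mechanism.
from collections import defaultdict

queued = [("A", "B", 70), ("B", "A", 50), ("A", "C", 30), ("C", "A", 30)]

def bilateral_offset(payments):
    """Net opposing flows per bank pair; return only residual flows."""
    net = defaultdict(float)
    for sender, receiver, amount in payments:
        net[(sender, receiver)] += amount
        net[(receiver, sender)] -= amount
    return [(s, r, a) for (s, r), a in net.items() if a > 0]

gross = sum(a for _, _, a in queued)
residual = bilateral_offset(queued)
net_need = sum(a for _, _, a in residual)
print(gross, residual, net_need)  # liquidity need drops from 180 to 20
```

Netting 180 of gross payments down to a residual of 20 is the liquidity-vs-delay trade-off in miniature: queuing payments long enough to offset them saves liquidity at the cost of settlement delay.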
FMIs monitoring and identification of important banks
The session began by showing how, through network analysis, one can monitor the banking system, detect abnormal behavior, develop stress tests and identify banks' liquidity problems. Then a review was made of how methodologies such as core-periphery node classification, the Payment System Liquidity Simulator (PSLI) and SinkRank can be useful for liquidity analysis.
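The core idea behind SinkRank can be sketched with standard absorbing-Markov-chain algebra: treat one bank as a "sink" in the chain of payment flows and measure how quickly funds at the other banks get absorbed there; the faster the absorption, the more systemically important the bank under this metric. The transition matrix below is invented, not estimated from payment data, and this is a simplified reading of the measure rather than a reference implementation.

```python
# Hedged sketch of a SinkRank-style measure on a toy payment-flow matrix.
import numpy as np

# Row-stochastic transition matrix of payment flows between 4 banks (assumed).
P = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.4, 0.2],
    [0.3, 0.3, 0.0, 0.4],
    [0.2, 0.4, 0.4, 0.0],
])

def sinkrank(P, sink):
    """Average expected number of steps for funds at the other banks to be
    absorbed at `sink` (fundamental matrix N = (I - Q)^-1, times ones)."""
    others = [i for i in range(len(P)) if i != sink]
    Q = P[np.ix_(others, others)]        # transitions among non-sink banks
    N = np.linalg.inv(np.eye(len(others)) - Q)
    return float((N @ np.ones(len(others))).mean())

ranks = {i: sinkrank(P, i) for i in range(4)}
# Smallest value = funds drain into that bank fastest = most important.
print(min(ranks, key=ranks.get))
```

This is the kind of bank-importance metric that, combined with simulators like PSLI, supports the liquidity analysis described above.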
Then a use case was reviewed using data from the SWIFT network. This part of the session aimed to identify anomalies, first from a volume perspective through time series and Hierarchical Gaussian Process Regression (a machine learning method), and then from a networks perspective, where network metrics and community-detection (clustering) algorithms were implemented to detect anomalies in the structure of the network.
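The volume-based side of this can be illustrated with a much simpler stand-in for Gaussian Process Regression: a rolling z-score on daily message volumes, flagging days that deviate strongly from recent history. The volumes below are simulated, with one anomaly injected.

```python
# Simplified volume-anomaly sketch: rolling z-score stands in for the
# Hierarchical Gaussian Process Regression used in the session.
import numpy as np

rng = np.random.default_rng(7)
volumes = rng.normal(1000, 50, size=200)   # simulated daily volumes
volumes[150] = 1600                        # injected anomaly

window = 30
anomalies = []
for t in range(window, len(volumes)):
    hist = volumes[t - window:t]
    z = (volumes[t] - hist.mean()) / hist.std()
    if abs(z) > 4:                         # threshold is a tuning choice
        anomalies.append(t)

print(anomalies)  # should flag day 150
```

A GP-based model plays the same role but with a proper predictive distribution, so the "4 standard deviations" threshold becomes a principled probability statement.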
Fighting Cyber/Fraud/AML/CFT risks - Anomaly detection and investigation in FMIs
The last session of the Course aimed at presenting tools that can be implemented to detect and investigate anomalies, such as money laundering or cyber-attacks, inside FMIs. The session began by presenting cases where cyber-attacks and money laundering caused important losses to central banks. After this motivation, the problem of anomaly detection was defined as a classification problem in which the anomalies may be related to operational errors, fraud, money laundering, terrorism financing, etc. A summary was then made of the two machine learning approaches to the problem: supervised and unsupervised. Next, a supervised approach was reviewed whose performance improved when using network features related to centrality (SinkRank, PageRank, betweenness, closeness, etc.), network distance (shortest path, random walk, etc.) and communities/clusters. The next part of the session was devoted to a test case of anomaly detection in an RTGS system using neural networks: with a simple set of features composed of the weighted distance of the network connections, the model correctly predicted 85% of the links; after adding the number of connections between the sender and the receiver and the influence of the sender over the receiver, link prediction improved to 94%. Finally, it was mentioned that the approach based on network features outperforms standard supervised and unsupervised methods.
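The supervised setup can be sketched end to end on synthetic data. Here a logistic regression stands in for the neural network used in the session, and the three feature columns loosely mimic the feature set described above (weighted distance, number of common connections, sender-over-receiver influence); both the features and the labeling rule are fabricated for illustration.

```python
# Sketch: supervised anomaly classification from network-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Synthetic per-link features (stand-ins for the network features above).
X = rng.normal(size=(n, 3))
# Synthetic ground truth: anomalies at an unusual feature combination.
y = ((X[:, 0] > 1.2) & (X[:, 1] < -0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))
```

On real payment data the accuracy figures quoted above (85% rising to 94%) come precisely from enriching such a feature matrix with additional network features.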
Monday, August 24
Dr. Serafín Martínez Jaramillo, Advisor, CEMLA
Analytics Technology at Central Banks - the State of the Art
Session 1. Introduction to Regtech and Suptech and overview of the training course. Examples of how Regtech & Suptech can support regulatory processes and supervisory cycles during systemic disruptions
Session 2. Overview of Suptech Applications for Risk Exposure Diagnostics, Macroprudential Supervision, Market Correlation Detection, Payment Analytics, Financial Market Infrastructures Design and Oversight
Session 3. Basics of network analytics, simulation and Artificial Intelligence & Machine Learning in central bank applications.
Session 4. Hands-on exercise: real-life central bank analytics applications by FNA.
Tuesday, August 25
Exposure Networks & Stress Testing
Session 5. Practical applications of network analytics in measuring, mapping and modelling of financial exposures. Analysis of transactions data at granular level and identification of systemically important financial institutions.
Session 6. Hands-on exercises on the Bank for International Settlements' bank country risk exposure and the interconnectedness of Financial Market Infrastructures.
Session 7. Understanding complex instruments that may hide substantial risks - as highlighted by the Financial Crisis 2007-08.
Session 8. Hands-on exercises: use of interactive dashboards in relation to Case Studies of e.g. CBMX index & CCP Interconnectedness.
Wednesday, August 26
Correlation Maps - a Systemic View of Financial Markets
Session 9. Presentation of advanced correlation maps visualizing interconnectivity of markets with a focus on Value at Risk analytics and outlier detection.
Session 10. Hands-on exercise on cross asset correlation networks incl. case studies on Brexit, the US presidential election and Covid-19 crisis.
Session 11. Development of early warning signals through monitoring and visualization of interconnected market dynamics. Statistical identification of hidden patterns in complex data.
Session 12. Hands-on exercise on correlation dashboards incl. use of Case Studies of e.g. US Housing Bubble and Crisis and European Debt Crisis.
Stress Testing Correlation Networks
Session 13. Focus on financial markets as a complex system with numerous measurable interdependencies. Production of 'what if' scenarios to predict movements of markets under stress.
Session 14. Hands-on exercise: use of interactive dashboards in relation to Case Studies of e.g. Brexit Referendum and US Presidential Election.
Thursday, August 27
Using Network Simulations to Design FMI
Session 15. The disrupted landscape of FMIs and Central Bank Digital Currencies (CBDCs).
Session 16. Hands-on exercise: modelling FMIs as complex systems with Agent Based Modelling.
Session 17. Measuring liquidity efficiency and Developing new Liquidity-Saving Mechanisms.
Session 18. Hands-on exercise: carrying out simulations with synthetic transactions data.
Monitoring and Stress Testing FMIs
Session 19. Developing stress scenarios (participant failure, operational issues, etc.) and complying with PFMIs. Using Agent Based Simulations to evaluate different stress scenarios.
Session 20. Hands-on exercise: Developing rewiring models, running stress simulations and developing interactive dashboards.
Friday, August 28
Diagnostic analytics: Detection and Investigation of Cyber & Financial Crime
Session 21. Introduction to Anti-Money Laundering/Combating the Financing of Terrorism and cyber-crime.
Session 22. Improving fraud detection and Anti-Money Laundering with network theory. "Following the money" in manual investigation of financial crime. Applications of related party networks and analytics.
Session 23. Hands-on exercise: use of interactive dashboards in relation to Anti-Money Laundering and Combating the Financing of Terrorism Case Studies.
Course Conclusion: Delegate actions points and next steps.
Founder and CEO, Financial Network Analytics (FNA)
Kimmo Soramäki is the Founder and CEO of Financial Network Analytics (FNA) and the founding Editor-in-Chief of the Journal of Network Theory in Finance.
Kimmo started his career as an economist at the Bank of Finland where in 1997, he developed the first simulation model for interbank payment systems. In 2004, while at the research department of the Federal Reserve Bank of New York, he was among the first to apply methods from network theory to improve our understanding of financial interconnectedness. During the financial crisis of 2007-2008, Kimmo advised several central banks, including the Bank of England and European Central Bank, in modeling interconnections and systemic risk. This work led him to found FNA in 2013 to solve important issues around financial risk and for exploring the complex financial networks that play a continually larger role in the world around us.
Kimmo holds a Doctor of Science in Operations Research and a Master of Science in Economics (Finance), both from Aalto University in Helsinki.
Chief Scientist at Financial Network Analytics (FNA)
Samantha Cook is the Chief Scientist at FNA, where her primary responsibilities are overseeing data analysis and communicating the capabilities of FNA software to the scientific community. Since finishing her PhD she has worked in both academia and industry, and is most content when she can combine teaching with working on real-world, practical problems. She has published in statistics as well as psychology, public health, finance, and economics, and is especially interested in statistical computation and graphics. She also works as a Spanish-to-English translator, specializing in academic and scientific texts.
Managing Director – Advanced Analytics
Risk management executive with 20 years of experience in financial markets. Subject matter expertise across various aspects of risk management (including market and counterparty credit risk), derivatives valuation, ML/AI, neural networks, and Bayesian modeling. Understands CCP business strategies, products/services, as well as the general governance structure, relevant regulations, clearing and settlement practices. Process oriented and resourceful team player with strong relationship management, writing and analytical skills.